[ { "msg_contents": "Hi, all\n\nRecently, I got a problem that the startup process of standby is stuck and stays in a waiting state. The backtrace of the startup process shows that it is waiting for a backend process which conflicts with recovery processing to exit; the guc parameters max_standby_streaming_delay and max_standby_archive_delay are both set as 30 seconds, but it doesn't work. The backend process stays alive, and the backtrace of this backend process shows that it is waiting for the socket to be writeable in secure_write(). After further reading the code, I found that ProcessClientWriteInterrupt() in secure_write() only processes interrupts when ProcDiePending is true, and otherwise does nothing. However, snapshot conflicts with recovery will only set QueryCancelPending as true, so the response to the signal will be delayed indefinitely if the corresponding client is stuck, thus blocking the recovery process.\n\nI want to know why the interrupt is only handled when ProcDiePending is true, I think query which is supposed to be canceled also should respond to the signal.\n\nI also want to share a patch with you: I added a guc parameter max_standby_client_write_delay; if a query is supposed to be canceled, and the time delayed by a client exceeds max_standby_client_write_delay, then ProcDiePending is set as true to avoid being delayed indefinitely. What do you think of it? Hope to get your reply.\n\nThanks & Best Regards", "msg_date": "Mon, 23 Aug 2021 16:15:02 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxsIGdldCBzdHVjayBvbiBzZWN1?=\n =?UTF-8?B?cmVfd3JpdGUgZnVuY3Rpb24=?=" }, { "msg_contents": "On Mon, Aug 23, 2021 at 4:15 AM 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com> wrote:\n> I want to know why the interrupt is only handled when ProcDiePending is true, I think query which is supposed to be canceled also should respond to the 

signal.\n\nWell, if we're halfway through sending a message to the client and we\nabort the write, we have no way of re-establishing protocol sync,\nright? It's OK to die without sending any more data to the client,\nsince then the connection is closed anyway and the loss of sync\ndoesn't matter, but continuing the session doesn't seem workable.\n\nYour proposed patch actually seems to dodge this problem and I think\nperhaps we could consider something along those lines. It would be\ninteresting to hear what Andres thinks about this idea, since I think\nhe was the last one to rewrite that code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 10:13:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On 2021-Aug-23, Robert Haas wrote:\n\n> On Mon, Aug 23, 2021 at 4:15 AM 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com> wrote:\n> > I want to know why the interrupt is only handled when ProcDiePending\n> > is true, I think query which is supposed to be canceled also should\n> > respond to the signal.\n\nYeah, I agree.\n\n> Well, if we're halfway through sending a message to the client and we\n> abort the write, we have no way of re-establishing protocol sync,\n> right? It's OK to die without sending any more data to the client,\n> since then the connection is closed anyway and the loss of sync\n> doesn't matter, but continuing the session doesn't seem workable.\n> \n> Your proposed patch actually seems to dodge this problem and I think\n> perhaps we could consider something along those lines.\n\nDo we actually need new GUCs, though? I think we should never let an\nunresponsive client dictate what the server does, because that opens the\ndoor for uncooperative or malicious clients to wreak serious havoc. 
I\nthink the implementation should wait until time now+X to cancel the\nquery, but if by time now+2X (or whatever we deem reasonable -- maybe\nnow+1.1X) we're still waiting, then it's okay to just close the\nconnection. This suggests a completely different implementation, though.\n\nI wonder if it's possible to write a test for this. We would have to\nsend a query and then hang the client somehow. I recently added a TAP\ntest that uses SIGSTOP to a walsender ... can we use SIGSTOP with a\nbackground psql that's running SELECT pg_sleep() perhaps?\n(Or maybe it's sufficient to start background psql and not pump() it?)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n", "msg_date": "Mon, 23 Aug 2021 10:45:19 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On Mon, Aug 23, 2021 at 10:45 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Do we actually need new GUCs, though? I think we should never let an\n> unresponsive client dictate what the server does, because that opens the\n> door for uncooperative or malicious clients to wreak serious havoc. I\n> think the implementation should wait until time now+X to cancel the\n> query, but if by time now+2X (or whatever we deem reasonable -- maybe\n> now+1.1X) we're still waiting, then it's okay to just close the\n> connection. This suggests a completely different implementation, though.\n\nI don't quite understand what you mean by waiting until time now+X to\ncancel the query. 
We don't know a priori when query cancels are going\nto happen, but when they do happen we want to respond to them as\nquickly as we can. It seems reasonable to say that if we can't handle\nthem within X amount of time we can resort to emergency measures, but\nthat's basically what the patch does, except it uses a GUC instead of\nhardcoding X.\n\n> I wonder if it's possible to write a test for this. We would have to\n> send a query and then hang the client somehow. I recently added a TAP\n> test that uses SIGSTOP to a walsender ... can we use SIGSTOP with a\n> background psql that's running SELECT pg_sleep() perhaps?\n> (Or maybe it's sufficient to start background psql and not pump() it?)\n\nStarting a background process and not pumping it sounds promising,\nbecause it seems like it would be more likely to be portable. I think\nwe'd want to be careful not to assume very much about how large the\noutput buffer is, because I'm guessing that varies by platform and\nthat it might in some cases be fairly large. Perhaps we could use\npg_stat_activity to wait until we block in a ClientWrite state,\nalthough I wonder if we might find out that we sometimes block on a\ndifferent wait state than what we expect to see.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 11:09:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On 2021-Aug-23, Robert Haas wrote:\n\n> On Mon, Aug 23, 2021 at 10:45 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Do we actually need new GUCs, though? I think we should never let an\n> > unresponsive client dictate what the server does, because that opens the\n> > door for uncooperative or malicious clients to wreak serious havoc. 
I\n> > think the implementation should wait until time now+X to cancel the\n> > query, but if by time now+2X (or whatever we deem reasonable -- maybe\n> > now+1.1X) we're still waiting, then it's okay to just close the\n> > connection. This suggests a completely different implementation, though.\n> \n> I don't quite understand what you mean by waiting until time now+X to\n> cancel the query. We don't know a priori when query cancels are going\n> to happen, but when they do happen we want to respond to them as\n> quickly as we can. It seems reasonable to say that if we can't handle\n> them within X amount of time we can resort to emergency measures, but\n> that's basically what the patch does, except it uses a GUC instead of\n> hardcoding X.\n\nAren't we talking about query cancellations that occur in response to a\nstandby delay limit? Those aren't in response to user action. What I\nmean is that if the standby delay limit is exceeded, then we send a\nquery cancel; we expect the standby to cancel its query at that time and\nthen the primary can move on. But if the standby doesn't react, then we\ncan have it terminate its connection. I'm looking at the problem from\nthe primary's point of view rather than the standby's point of view.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 23 Aug 2021 11:26:53 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On Mon, Aug 23, 2021 at 11:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Aren't we talking about query cancellations that occur in response to a\n> standby delay limit? Those aren't in response to user action.\n\nOh, you're right. 
But I guess a similar problem could also occur in\nresponse to pg_terminate_backend(), no?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Aug 2021 14:45:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "\nOn 2021/08/24 0:26, Alvaro Herrera wrote:\n> Aren't we talking about query cancellations that occur in response to a\n> standby delay limit? Those aren't in response to user action. What I\n> mean is that if the standby delay limit is exceeded, then we send a\n> query cancel; we expect the standby to cancel its query at that time and\n> then the primary can move on. But if the standby doesn't react, then we\n> can have it terminate its connection.\n\n+1\n\n\nOn 2021/08/24 3:45, Robert Haas wrote:\n> On Mon, Aug 23, 2021 at 11:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Aren't we talking about query cancellations that occur in response to a\n>> standby delay limit? Those aren't in response to user action.\n> \n> Oh, you're right. 
But I guess a similar problem could also occur in\n> response to pg_terminate_backend(), no?\n\nThere seems no problem in that case because pg_terminate_backend() causes\na backend to set ProcDiePending to true in die() signal handler and\nProcessClientWriteInterrupt() called by secure_write() handles ProcDiePending.\nNo?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 24 Aug 2021 14:14:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "Yes, pg_terminate_backend() can terminate the connection successfully in this case because ProcDiePending is set as true and ProcessClientWriteInterrupt() can handle it.\n\nQueries that exceed the standby delay limit can be terminated in this way, but what about other queries that should be canceled but get stuck on secure_write()? After all, there is currently no way to list all possible situations and then terminate these queries one by one.\n\n\n------------------------------------------------------------------\nFrom: Fujii Masao <masao.fujii@oss.nttdata.com>\nSent: Tuesday, August 24, 2021 13:15\nTo: Robert Haas <robertmhaas@gmail.com>; Alvaro Herrera <alvherre@alvh.no-ip.org>\nCc: 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>; Andres Freund <andres@anarazel.de>\nSubject: Re: Queries that should be canceled will get stuck on secure_write function\n\n\nOn 2021/08/24 0:26, Alvaro Herrera wrote:\n> Aren't we talking about query cancellations that occur in response to a\n> standby delay limit? Those aren't in response to user action. What I\n> mean is that if the standby delay limit is exceeded, then we send a\n> query cancel; we expect the standby to cancel its query at that time and\n> then the primary can move on. 

But if the standby doesn't react, then we\n> can have it terminate its connection.\n\n+1\n\n\nOn 2021/08/24 3:45, Robert Haas wrote:\n> On Mon, Aug 23, 2021 at 11:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Aren't we talking about query cancellations that occur in response to a\n>> standby delay limit? Those aren't in response to user action.\n> \n> Oh, you're right. But I guess a similar problem could also occur in\n> response to pg_terminate_backend(), no?\n\nThere seems no problem in that case because pg_terminate_backend() causes\na backend to set ProcDiePending to true in die() signal handler and\nProcessClientWriteInterrupt() called by secure_write() handles ProcDiePending.\nNo?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 24 Aug 2021 15:25:08 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxsIGdldCBzdHVj?=\n =?UTF-8?B?ayBvbiBzZWN1cmVfd3JpdGUgZnVuY3Rpb24=?=" }, { "msg_contents": "On Tue, Aug 24, 2021 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Oh, you're right. But I guess a similar problem could also occur in\n> > response to pg_terminate_backend(), no?\n>\n> There seems no problem in that case because pg_terminate_backend() causes\n> a backend to set ProcDiePending to true in die() signal handler and\n> ProcessClientWriteInterrupt() called by secure_write() handles ProcDiePending.\n> No?\n\nHmm, maybe you're right. 

What about pg_cancel_backend()?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Aug 2021 13:30:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "\n\nOn 2021/08/25 2:30, Robert Haas wrote:\n> Hmm, maybe you're right. What about pg_cancel_backend()?\n\nI was thinking that it's valid even if secure_write() doesn't react to\npg_cancel_backend() because it's basically called outside transaction block,\ni.e., there is no longer transaction to cancel in that case. But there can be\nsome cases where secure_write() is called inside transaction block,\nfor example, when the query generates NOTICE message. In these cases,\nsecure_write() might need to react to the cancel request.\n\nBTW, when an error happens, I found that a backend calls EmitErrorReport()\nto report an error to a client, and then calls AbortCurrentTransaction()\nto abort the transaction. If secure_write() called by EmitErrorReport()\ngets stuck, a backend gets stuck inside transaction block and the locks\nkeep being held unnecessarily. Isn't this problematic? Can we change\nthe order of them?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 25 Aug 2021 10:58:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On Tue, Aug 24, 2021 at 9:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I was thinking that it's valid even if secure_write() doesn't react to\n> pg_cancel_backend() because it's basically called outside transaction block,\n> i.e., there is no longer transaction to cancel in that case. 
But there can be\n> some cases where secure_write() is called inside transaction block,\n> for example, when the query generates NOTICE message. In these cases,\n> secure_write() might need to react to the cancel request.\n\nYeah. I think we could also be sending tuple data.\n\n> BTW, when an error happens, I found that a backend calls EmitErrorReport()\n> to report an error to a client, and then calls AbortCurrentTransaction()\n> to abort the transaction. If secure_write() called by EmitErrorReport()\n> gets stuck, a backend gets stuck inside transaction block and the locks\n> keep being held unnecessarily. Isn't this problematic? Can we change\n> the order of them?\n\nI think there might be problems with that, like perhaps the ErrorData\nobject can have pointers into the memory contexts that we'd be\ndestroying in AbortCurrentTransaction().\n\nMore generally, I think it's hopeless to try to improve the situation\nvery much by changing the place where secure_write() happens. It\nhappens in so many different places, and it is clearly not going to be\npossible to make it so that in none of those places do we hold any\nimportant server resources. Therefore I think the only solution is to\nfix secure_write() itself ... 
and the trick is what to do there given\nthat we have to be very careful not to do anything that might try to\nwrite another message while we are already in the middle of writing a\nmessage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Aug 2021 08:27:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "Hi,\n\nOn 2021-08-23 10:13:03 -0400, Robert Haas wrote:\n> On Mon, Aug 23, 2021 at 4:15 AM 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com> wrote:\n> > I want to know why the interrupt is only handled when ProcDiePending is true, I think query which is supposed to be canceled also should respond to the signal.\n> \n> Well, if we're halfway through sending a message to the client and we\n> abort the write, we have no way of re-establishing protocol sync,\n> right? It's OK to die without sending any more data to the client,\n> since then the connection is closed anyway and the loss of sync\n> doesn't matter, but continuing the session doesn't seem workable.\n\nRight.\n\n\n> Your proposed patch actually seems to dodge this problem and I think\n> perhaps we could consider something along those lines. It would be\n> interesting to hear what Andres thinks about this idea, since I think\n> he was the last one to rewrite that code.\n\nI think it's a reasonable idea. I have some quibbles with the implementation\n(new code should be in ProcessClientWriteInterrupt(), not secure_write()), and\nI suspect we should escalate more quickly to killing the connection, but those\nseem like details.\n\nI think that if we want to tackle this, we need to do the same for\nsecure_read() as well. 
secure_read() will process interrupts normally while\nDoingCommandRead, but if we're already in a command, we'd just as well be\nprevented from processing a !ProcDiePending interrupt.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:15:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "Hi,\n\nOn 2021-08-27 08:27:38 -0400, Robert Haas wrote:\n> On Tue, Aug 24, 2021 at 9:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > to report an error to a client, and then calls AbortCurrentTransaction()\n> > to abort the transaction. If secure_write() called by EmitErrorReport()\n> > gets stuck, a backend gets stuck inside transaction block and the locks\n> > keep being held unnecessarily. Isn't this problematic? Can we change\n> > the order of them?\n> ...\n> More generally, I think it's hopeless to try to improve the situation\n> very much by changing the place where secure_write() happens. It\n> happens in so many different places, and it is clearly not going to be\n> possible to make it so that in none of those places do we hold any\n> important server resources. Therefore I think the only solution is to\n> fix secure_write() itself ... and the trick is what to do there given\n> that we have to be very careful not to do anything that might try to\n> write another message while we are already in the middle of writing a\n> message.\n\nI wonder if we could improve the situation somewhat by using the non-blocking\npqcomm functions in a few select places. E.g. 
if elog.c's\nsend_message_to_frontend() sent its message via a new pq_endmessage_noblock()\n(which'd use the existing pq_putmessage_noblock()) and used\npq_flush_if_writable() instead of pq_flush(), we'd a) not block sending to the\nclient before AbortCurrentTransaction(), b) be able to queue further error\nmessages safely.\n\nI think this'd not violate the goal of putting pq_flush() into\nsend_message_to_frontend():\n\t/*\n\t * This flush is normally not necessary, since postgres.c will flush out\n\t * waiting data when control returns to the main loop. But it seems best\n\t * to leave it here, so that the client has some clue what happened if the\n\t * backend dies before getting back to the main loop ... error/notice\n\t * messages should not be a performance-critical path anyway, so an extra\n\t * flush won't hurt much ...\n\t */\n\tpq_flush();\n\nbecause the only situations where we'd not send the data out immediately would\nbe when the socket buffer is already full. In which case the client wouldn't\nget the error immediately anyway?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:24:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "On Fri, Aug 27, 2021 at 3:24 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if we could improve the situation somewhat by using the non-blocking\n> pqcomm functions in a few select places. E.g. 

if elog.c's\n> send_message_to_frontend() sent its message via a new pq_endmessage_noblock()\n> (which'd use the existing pq_putmessage_noblock()) and used\n> pq_flush_if_writable() instead of pq_flush(), we'd a) not block sending to the\n> client before AbortCurrentTransaction(), b) able to queue further error\n> messages safely.\n\npq_flush_if_writable() could succeed in sending only part of the data,\nso I don't see how this works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Aug 2021 16:07:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 27, 2021, at 13:07, Robert Haas wrote:\n> On Fri, Aug 27, 2021 at 3:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > I wonder if we could improve the situation somewhat by using the non-blocking\n> > pqcomm functions in a few select places. E.g. 
if elog.c's\n> > send_message_to_frontend() sent its message via a new pq_endmessage_noblock()\n> > (which'd use the existing pq_putmessage_noblock()) and used\n> > pq_flush_if_writable() instead of pq_flush(), we'd a) not block sending to the\n> > client before AbortCurrentTransaction(), b) able to queue further error\n> > messages safely.\n> \n> pq_flush_if_writable() could succeed in sending only part of the data,\n> so I don't see how this works.\n\nAll the data is buffered though, so I don't see what problem that causes?\n\nAndres\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:10:14 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Queries_that_should_be_canceled_will_get_stuck_on_secure=5F?=\n =?UTF-8?Q?write_function?=" }, { "msg_contents": "I added a test to reproduce the problem; you can see the attachment for the specific content.\nDuring the last sleep time of the test, use pstack to get the stack of the backend process, which is as follows:\n\n#0 0x00007f6ebdd744d3 in __epoll_wait_nocancel () from /lib64/libc.so.6\n#1 0x00000000007738d2 in WaitEventSetWait ()\n#2 0x0000000000675aae in secure_write ()\n#3 0x000000000067bfab in internal_flush ()\n#4 0x000000000067c13a in internal_putbytes ()\n#5 0x000000000067c217 in socket_putmessage ()\n#6 0x0000000000497f36 in printtup ()\n#7 0x00000000006301e0 in standard_ExecutorRun ()\n#8 0x00000000007985fb in PortalRunSelect ()\n#9 0x0000000000799968 in PortalRun ()\n#10 0x0000000000795866 in exec_simple_query ()\n#11 0x0000000000796cff in PostgresMain ()\n#12 0x0000000000488339 in ServerLoop ()\n#13 0x0000000000717bbc in PostmasterMain ()\n#14 0x0000000000488f26 in main ()", "msg_date": "Mon, 06 Sep 2021 16:03:43 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxsIGdldCBzdHVj?=\n 

=?UTF-8?B?ayBvbiBzZWN1cmVfd3JpdGUgZnVuY3Rpb24=?=" }, { "msg_contents": "I changed the implementation for this problem: \na) if the cancel query interrupt is from the db for some reason, such as a recovery conflict, then handle it immediately, and the cancel request is treated as a terminate request;\nb) if the cancel query interrupt is from the client, then ignore it as in the original way\n\nIn the attached patch, I also added a tap test to generate a recovery conflict on a standby while the backend process is stuck on client write, and check whether it can handle the cancel query request due to the recovery conflict.\n\nWhat do you think of it? Hope to get your reply.\n\nThanks & Best Regards", "msg_date": "Thu, 09 Sep 2021 17:38:06 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxsIGdldCBzdHVj?=\n =?UTF-8?B?ayBvbiBzZWN1cmVfd3JpdGUgZnVuY3Rpb24=?=" }, { "msg_contents": "On Fri, Aug 27, 2021 at 4:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On Fri, Aug 27, 2021, at 13:07, Robert Haas wrote:\n> > On Fri, Aug 27, 2021 at 3:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I wonder if we could improve the situation somewhat by using the non-blocking\n> > > pqcomm functions in a few select places. E.g. 

if elog.c's\n> > > send_message_to_frontend() sent its message via a new pq_endmessage_noblock()\n> > > (which'd use the existing pq_putmessage_noblock()) and used\n> > > pq_flush_if_writable() instead of pq_flush(), we'd a) not block sending to the\n> > > client before AbortCurrentTransaction(), b) able to queue further error\n> > > messages safely.\n> >\n> > pq_flush_if_writable() could succeed in sending only part of the data,\n> > so I don't see how this works.\n>\n> All the data is buffered though, so I don't see that problem that causes?\n\nOK, I guess I'm confused here.\n\nIf we're always buffering the data, then I suppose there's no risk of\ninjecting a protocol message into the middle of some other protocol\nmessage, assuming that we don't have a non-local transfer of control\nhalfway through putting a message in the buffer. But there's still the\nrisk of getting out of step with the client. Suppose the client does\nSELECT 1/0 and we send an ErrorResponse complaining about the division\nby zero. But as we're trying to send that response, we block. Later, a\nquery cancel occurs. We can't queue another ErrorResponse because the\nclient will interpret that as the response to the next query, since\nthe division by zero error is the response to the current one. I don't\nthink that changing pq_flush() to pq_flush_if_writable() in elog.c or\nanywhere else can fix that problem.\n\nBut that doesn't mean that it isn't a good idea. Any place where we're\ndoing a pq_flush() and know that another one will happen soon\nafterward, before we wait for data from the client, can be changed to\npq_flush_if_writable() without harm, and it's beneficial to do so,\nbecause like you say, it avoids blocking in places that users may find\ninconvenient - e.g. while holding locks, as Fujii-san said. 
The\ncomment here claims that \"postgres.c will flush out waiting data when\ncontrol returns to the main loop\" but the only pq_flush() call that's\ndirectly present in postgres.c is in response to receiving a Flush\nmessage, so I suppose this is actually talking about the pq_flush()\ninside ReadyForQuery. It's not 100% clear to me that we do that in all\nrelevant cases, though. Suppose we hit an error while processing some\nprotocol message that does not set send_ready_for_query = true, like\nfor example Describe ('D'). I think in that case the flush in elog.c\nis the only one. Perhaps we ought to change postgres.c so that if we\ndon't enter the block guarded by \"if (send_ready_for_query)\" we\ninstead pq_flush().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Sep 2021 10:39:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Queries that should be canceled will get stuck on secure_write\n function" }, { "msg_contents": "Hi all, I want to know your opinion on this patch, or in what way do you think we should solve this problem?\n------------------------------------------------------------------\nFrom: 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com>\nSent: Thursday, September 9, 2021 17:38\nTo: Robert Haas <robertmhaas@gmail.com>; Andres Freund <andres@anarazel.de>; alvherre <alvherre@alvh.no-ip.org>; masao.fujii <masao.fujii@oss.nttdata.com>\nCc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Queries that should be canceled will get stuck on secure_write function\n\n\nI changed the implementation about this problem: \na) if the cancel query interrupt is from db for some reason, such as recovery conflict, then handle it immediately, and cancel request is treated as terminate request;\nb) if the cancel query interrupt is from client, then ignore as original way\n\nIn the attached patch, I also add a tap test to generate a recovery conflict on a standby while the backend process is stuck on 

client write, check whether it can handle the cancel query request due to recovery conflict.\n\nwhat do you think of it, hope to get your reply\n\nThanks & Best Regards", "msg_date": "Wed, 22 Sep 2021 00:14:49 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxsIGdldCBzdHVj?=\n =?UTF-8?B?ayBvbiBzZWN1cmVfd3JpdGUgZnVuY3Rpb24=?=" }, { "msg_contents": "\n\nOn 2021/09/22 1:14, 蔡梦娟(玊于) wrote:\n> Hi all, I want to know your opinion on this patch, or in what way do you think we should solve this problem?\n\nI agree that something like the patch (i.e., introduction of promotion\nfrom cancel request to terminate one) is necessary for the fix. One concern\non the patch is that the cancel request can be promoted to the terminate one\neven when secure_write() doesn't actually get stuck. Is this acceptable?\nMaybe I'm tempted to set up the duration until the promotion happens....\nOr we should introduce the dedicated timer for communication on the socket?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Sep 2021 12:52:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkje+8mlF1ZXJpZXMgdGhhdCBzaG91bGQgYmUgY2FuY2Vs?=\n =?UTF-8?Q?ed_will_get_stuck_on_secure=5fwrite_function?=" }, { "msg_contents": "Yes, it is more appropriate to set a duration time to determine whether secure_write() is stuck, but it is difficult to define how long the duration time is.\nin my first patch, I add a GUC to allow the user to set the time, or it can be hardcoded if a time deemed reasonable is provided?\n\n\n\n------------------------------------------------------------------I agree that something like the patch (i.e., introduction of promotion\nfrom cancel request to 
terminate one) is necessary for the fix. One concern\non the patch is that the cancel request can be promoted to the terminate one\neven when secure_write() doesn't actually get stuck. Is this acceptable?\nMaybe I'm tempted to set up the duration until the promotion happens....\nOr we should introduce the dedicated timer for communication on the socket?", "msg_date": "Fri, 24 Sep 2021 11:59:42 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77yaUXVlcmllcyB0aGF0IHNob3VsZCBiZSBjYW5jZWxlZCB3aWxs?=\n =?UTF-8?B?IGdldCBzdHVjayBvbiBzZWN1cmVfd3JpdGUgZnVuY3Rpb24=?=" } ]
[ { "msg_contents": "While looking at the regexp code, I started to get uncomfortable\nabout the implications of commit 0c3405cf1: it means that not\nonly the cdissect() phase, but also the preceding DFA-check phase\n(longest()/shortest()) rely on saved subexpression match positions\nto be valid for the match we're currently considering. It seemed\nto me that the cdissect() recursion wasn't being careful to reset\nthe match pointers for an abandoned submatch before moving on to\nthe next attempt, meaning that dfa_backref() could conceivably get\napplied using a stale match pointer.\n\nUpon poking into it, I failed to find any bug of that exact ilk,\nbut what I did find was a pre-existing bug of not resetting an\nabandoned match pointer at all. That allows these fun things:\n\nregression=# select 'abcdef' ~ '^(.)\\1|\\1.';\n ?column? \n----------\n t\n(1 row)\n\nregression=# select 'abadef' ~ '^((.)\\2|..)\\2';\n ?column? \n----------\n t\n(1 row)\n\nIn both of these examples, the (.) capture is in an alternation\nbranch that subsequently fails; therefore, the later backref\nshould never be able to match. But it does, because we forgot\nto clear the capture's match data on the way out.\n\nIt turns out that this can be fixed using fewer, not more, zaptreesubs\ncalls, if we are careful to define the recursion rules precisely.\nSee attached.\n\nThis bug is ancient. I verified it as far back as PG 7.4, and\nit can also be reproduced in Tcl, so it's likely aboriginal to\nSpencer's library. It's not that surprising that no one has\nreported it, because regexps that have this sort of possibly-invalid\nbackref are most likely incorrect. 
In most use-cases the existing\ncode will fail to match, as expected, so users would probably notice\nthat and fix their regexps without observing that there are cases\nwhere the match incorrectly succeeds.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 23 Aug 2021 11:43:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Regexp: observable bug from careless usage of zaptreesubs" } ]
[ { "msg_contents": "Hi\r\n\r\nThe customer reports a very slow query. I have a reproducer script. The\r\ndataset is not too big\r\n\r\npostgres=# \\dt+\r\n List of relations\r\n┌────────┬───────┬───────┬───────┬─────────────┬────────────┬─────────────┐\r\n│ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n╞════════╪═══════╪═══════╪═══════╪═════════════╪════════════╪═════════════╡\r\n│ public │ f_dep │ table │ pavel │ permanent │ 8192 bytes │ │\r\n│ public │ f_emp │ table │ pavel │ permanent │ 1001 MB │ │\r\n│ public │ f_fin │ table │ pavel │ permanent │ 432 kB │ │\r\n│ public │ qt │ table │ pavel │ permanent │ 1976 kB │ │\r\n│ public │ qtd │ table │ pavel │ permanent │ 87 MB │ │\r\n└────────┴───────┴───────┴───────┴─────────────┴────────────┴─────────────┘\r\n(5 rows)\r\n\r\nand the query is not too complex\r\n\r\nSELECT\r\n sub.a_121,\r\n count(*)\r\nFROM (\r\n SELECT\r\n f_fin.dt_business_day_id AS a_1056,\r\n f_dep.description_id AS a_121,\r\n f_emp.employee_id_id AS a_1327\r\n FROM f_emp\r\n INNER JOIN f_dep ON\r\n ( f_emp.department_id_id = f_dep.id )\r\n INNER JOIN f_fin ON\r\n ( f_emp.business_day_date_id = f_fin.id )\r\n GROUP BY 1, 2, 3\r\n ) AS sub\r\nINNER JOIN qt ON\r\n ( sub.a_1056 = qt.tt_1056_1056_b )\r\nLEFT OUTER JOIN qtd AS qt_2 ON\r\n ( ( qt.tt_1056_1056_b = qt_2.a_1056 )\r\n AND ( sub.a_121 = qt_2.a_121 )\r\n AND ( sub.a_1327 = qt_2.a_1327 ) )\r\nLEFT OUTER JOIN qtd AS qt_3 ON\r\n ( ( qt.tt_1056_1056_a = qt_3.a_1056 )\r\n AND ( sub.a_121 = qt_3.a_121 )\r\n AND ( sub.a_1327 = qt_3.a_1327 ) )\r\nGROUP BY 1;\r\n\r\nBy default I get a good plan, and the performance is ok\r\nhttps://explain.depesz.com/s/Mr2H (about 16 sec). Unfortunately, when I\r\nincrease work_mem, I get good plan with good performance\r\nhttps://explain.depesz.com/s/u4Ff\r\n\r\nBut this depends on index only scan. 
In the production environment, the\r\nindex only scan is not always available, and I see another plan (I can get\r\nthis plan with disabled index only scan).\r\n\r\nAlthough the cost is almost the same, the query is about 15x slower\r\nhttps://explain.depesz.com/s/L6zP\r\n\r\n│ HashAggregate (cost=1556129.74..1556131.74 rows=200 width=12) (actual\r\ntime=269948.878..269948.897 rows=64 loops=1)\r\n │\r\n│ Group Key: f_dep.description_id\r\n\r\n │\r\n│ Batches: 1 Memory Usage: 40kB\r\n\r\n │\r\n│ Buffers: shared hit=5612 read=145051\r\n\r\n │\r\n│ -> Merge Left Join (cost=1267976.96..1534602.18 rows=4305512 width=4)\r\n(actual time=13699.847..268785.500 rows=4291151 loops=1)\r\n │\r\n│ Merge Cond: ((f_emp.employee_id_id = qt_3.a_1327) AND\r\n(f_dep.description_id = qt_3.a_121))\r\n │\r\n│ Join Filter: (qt.tt_1056_1056_a = qt_3.a_1056)\r\n\r\n │\r\n│ Rows Removed by Join Filter: 1203659495\r\n\r\n │\r\n│ Buffers: shared hit=5612 read=145051\r\n\r\n │\r\n\r\n .....\r\n\r\n │\r\n│ -> Sort (cost=209977.63..214349.77 rows=1748859 width=12) (actual\r\ntime=979.522..81842.913 rows=1205261892 loops=1)\r\n │\r\n│ Sort Key: qt_3.a_1327, qt_3.a_121\r\n\r\n │\r\n│ Sort Method: quicksort Memory: 144793kB\r\n\r\n │\r\n│ Buffers: shared hit=2432 read=8718\r\n\r\n │\r\n│ -> Seq Scan on qtd qt_3 (cost=0.00..28638.59 rows=1748859\r\nwidth=12) (actual time=0.031..284.437 rows=1748859 loops=1)\r\n│ Buffers: shared hit=2432 read=8718\r\n\r\nThe sort of qtd table is very fast\r\n\r\npostgres=# explain analyze select * from qtd order by a_1327, a_121;\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Sort (cost=209977.63..214349.77 rows=1748859 width=27) (actual\r\ntime=863.923..1111.213 rows=1748859 loops=1) │\r\n│ Sort Key: a_1327, a_121\r\n │\r\n│ Sort Method: 
quicksort Memory: 199444kB\r\n │\r\n│ -> Seq Scan on qtd (cost=0.00..28638.59 rows=1748859 width=27)\r\n(actual time=0.035..169.385 rows=1748859 loops=1) │\r\n│ Planning Time: 0.473 ms\r\n │\r\n│ Execution Time: 1226.305 ms\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(6 rows)\r\n\r\nbut here it returns 700x lines more and it is 70 x slower. Probably it is\r\nbecause something does rescan. But why? With index only scan, I don't see\r\nany indices of rescan.\r\n\r\nIs it an executor or optimizer bug? Or is it a bug? I tested this behaviour\r\non Postgres 13 and on the fresh master branch.\r\n\r\nRegards\r\n\r\nPavel\r\n", "msg_date": "Mon, 23 Aug 2021 18:44:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "pretty slow merge-join due rescan?" }, { "msg_contents": "po 23. 8. 2021 v 18:44 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n> Hi\r\n>\r\n> The customer reports a very slow query. I have a reproducer script. 
The\r\n> dataset is not too big\r\n>\r\n> postgres=# \\dt+\r\n> List of relations\r\n> ┌────────┬───────┬───────┬───────┬─────────────┬────────────┬─────────────┐\r\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n> ╞════════╪═══════╪═══════╪═══════╪═════════════╪════════════╪═════════════╡\r\n> │ public │ f_dep │ table │ pavel │ permanent │ 8192 bytes │ │\r\n> │ public │ f_emp │ table │ pavel │ permanent │ 1001 MB │ │\r\n> │ public │ f_fin │ table │ pavel │ permanent │ 432 kB │ │\r\n> │ public │ qt │ table │ pavel │ permanent │ 1976 kB │ │\r\n> │ public │ qtd │ table │ pavel │ permanent │ 87 MB │ │\r\n> └────────┴───────┴───────┴───────┴─────────────┴────────────┴─────────────┘\r\n> (5 rows)\r\n>\r\n> and the query is not too complex\r\n>\r\n> SELECT\r\n> sub.a_121,\r\n> count(*)\r\n> FROM (\r\n> SELECT\r\n> f_fin.dt_business_day_id AS a_1056,\r\n> f_dep.description_id AS a_121,\r\n> f_emp.employee_id_id AS a_1327\r\n> FROM f_emp\r\n> INNER JOIN f_dep ON\r\n> ( f_emp.department_id_id = f_dep.id )\r\n> INNER JOIN f_fin ON\r\n> ( f_emp.business_day_date_id = f_fin.id )\r\n> GROUP BY 1, 2, 3\r\n> ) AS sub\r\n> INNER JOIN qt ON\r\n> ( sub.a_1056 = qt.tt_1056_1056_b )\r\n> LEFT OUTER JOIN qtd AS qt_2 ON\r\n> ( ( qt.tt_1056_1056_b = qt_2.a_1056 )\r\n> AND ( sub.a_121 = qt_2.a_121 )\r\n> AND ( sub.a_1327 = qt_2.a_1327 ) )\r\n> LEFT OUTER JOIN qtd AS qt_3 ON\r\n> ( ( qt.tt_1056_1056_a = qt_3.a_1056 )\r\n> AND ( sub.a_121 = qt_3.a_121 )\r\n> AND ( sub.a_1327 = qt_3.a_1327 ) )\r\n> GROUP BY 1;\r\n>\r\n> By default I get a good plan, and the performance is ok\r\n> https://explain.depesz.com/s/Mr2H (about 16 sec). Unfortunately, when I\r\n> increase work_mem, I get good plan with good performance\r\n> https://explain.depesz.com/s/u4Ff\r\n>\r\n> But this depends on index only scan. 
In the production environment, the\r\n> index only scan is not always available, and I see another plan (I can get\r\n> this plan with disabled index only scan).\r\n>\r\n> Although the cost is almost the same, the query is about 15x slower\r\n> https://explain.depesz.com/s/L6zP\r\n>\r\n> │ HashAggregate (cost=1556129.74..1556131.74 rows=200 width=12) (actual\r\n> time=269948.878..269948.897 rows=64 loops=1)\r\n> │\r\n> │ Group Key: f_dep.description_id\r\n>\r\n> │\r\n> │ Batches: 1 Memory Usage: 40kB\r\n>\r\n> │\r\n> │ Buffers: shared hit=5612 read=145051\r\n>\r\n> │\r\n> │ -> Merge Left Join (cost=1267976.96..1534602.18 rows=4305512\r\n> width=4) (actual time=13699.847..268785.500 rows=4291151 loops=1)\r\n> │\r\n> │ Merge Cond: ((f_emp.employee_id_id = qt_3.a_1327) AND\r\n> (f_dep.description_id = qt_3.a_121))\r\n> │\r\n> │ Join Filter: (qt.tt_1056_1056_a = qt_3.a_1056)\r\n>\r\n> │\r\n> │ Rows Removed by Join Filter: 1203659495\r\n>\r\n> │\r\n> │ Buffers: shared hit=5612 read=145051\r\n>\r\n> │\r\n>\r\n> .....\r\n>\r\n> │\r\n> │ -> Sort (cost=209977.63..214349.77 rows=1748859 width=12) (actual\r\n> time=979.522..81842.913 rows=1205261892 loops=1)\r\n> │\r\n> │ Sort Key: qt_3.a_1327, qt_3.a_121\r\n>\r\n> │\r\n> │ Sort Method: quicksort Memory: 144793kB\r\n>\r\n> │\r\n> │ Buffers: shared hit=2432 read=8718\r\n>\r\n> │\r\n> │ -> Seq Scan on qtd qt_3 (cost=0.00..28638.59\r\n> rows=1748859 width=12) (actual time=0.031..284.437 rows=1748859 loops=1)\r\n> │ Buffers: shared hit=2432 read=8718\r\n>\r\n> The sort of qtd table is very fast\r\n>\r\n> postgres=# explain analyze select * from qtd order by a_1327, a_121;\r\n>\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │ QUERY PLAN\r\n> │\r\n>\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> │ Sort (cost=209977.63..214349.77 rows=1748859 width=27) (actual\r\n> 
time=863.923..1111.213 rows=1748859 loops=1) │\r\n> │ Sort Key: a_1327, a_121\r\n> │\r\n> │ Sort Method: quicksort Memory: 199444kB\r\n> │\r\n> │ -> Seq Scan on qtd (cost=0.00..28638.59 rows=1748859 width=27)\r\n> (actual time=0.035..169.385 rows=1748859 loops=1) │\r\n> │ Planning Time: 0.473 ms\r\n> │\r\n> │ Execution Time: 1226.305 ms\r\n> │\r\n>\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (6 rows)\r\n>\r\n> but here it returns 700x lines more and it is 70 x slower. Probably it is\r\n> because something does rescan. But why? With index only scan, I don't see\r\n> any indices of rescan.\r\n>\r\n> Is it an executor or optimizer bug? Or is it a bug? I tested this\r\n> behaviour on Postgres 13 and on the fresh master branch.\r\n>\r\n\r\n When I increase cpu_operator_cost, then I got workable plan although I use\r\nhigh work mem\r\n\r\nhttps://explain.depesz.com/s/jl4v\r\n\r\nThe strange thing of this issue is possible unhappy behaviour although the\r\nestimation is very well\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n> Regards\r\n>\r\n> Pavel\r\n>\r\n>\r\n", "msg_date": "Mon, 23 Aug 2021 20:11:51 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pretty slow merge-join due rescan?" } ]
[ { "msg_contents": "Included 蔡梦娟 and Jakub Wartak because they've expressed interest on\nthis topic -- notably [2] (\"Bug on update timing of walrcv->flushedUpto\nvariable\").\n\nAs mentioned in the course of thread [1], we're missing a fix for\nstreaming replication to avoid sending records that the primary hasn't\nfully flushed yet. This patch is a first attempt at fixing that problem\nby retreating the LSN reported as FlushPtr whenever a segment is\nregistered, based on the understanding that if no registration exists\nthen the LogwrtResult.Flush pointer can be taken at face value; but if a\nregistration exists, then we have to stream only till the start LSN of\nthat registered entry.\n\nThis patch is probably incomplete. First, I'm not sure that logical\nreplication is affected by this problem. I think it isn't, because\nlogical replication will halt until the record can be read completely --\nmaybe I'm wrong and there is a way for things to go wrong with logical\nreplication as well. But also, I need to look at the other uses of\nGetFlushRecPtr() and see if those need to change to the new function too\nor they can remain what they are now.\n\nI'd also like to have tests. That seems moderately hard, but if we had\nWAL-molasses that could be used in walreceiver, it could be done. 
(It\nsounds easier to write tests with a molasses-archive_command.)\n\n\n[1] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com\n[2] https://postgr.es/m/3f9c466d-d143-472c-a961-66406172af96.mengjuan.cmj@alibaba-inc.com\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/", "msg_date": "Mon, 23 Aug 2021 18:52:17 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "prevent immature WAL streaming" }, { "msg_contents": "At Mon, 23 Aug 2021 18:52:17 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Included 蔡梦娟 and Jakub Wartak because they've expressed interest on\n> this topic -- notably [2] (\"Bug on update timing of walrcv->flushedUpto\n> variable\").\n> \n> As mentioned in the course of thread [1], we're missing a fix for\n> streaming replication to avoid sending records that the primary hasn't\n> fully flushed yet. This patch is a first attempt at fixing that problem\n> by retreating the LSN reported as FlushPtr whenever a segment is\n> registered, based on the understanding that if no registration exists\n> then the LogwrtResult.Flush pointer can be taken at face value; but if a\n> registration exists, then we have to stream only till the start LSN of\n> that registered entry.\n> \n> This patch is probably incomplete. First, I'm not sure that logical\n> replication is affected by this problem. I think it isn't, because\n> logical replication will halt until the record can be read completely --\n> maybe I'm wrong and there is a way for things to go wrong with logical\n> replication as well. But also, I need to look at the other uses of\n> GetFlushRecPtr() and see if those need to change to the new function too\n> or they can remain what they are now.\n> \n> I'd also like to have tests. That seems moderately hard, but if we had\n> WAL-molasses that could be used in walreceiver, it could be done. 
(It\n> sounds easier to write tests with a molasses-archive_command.)\n> \n> \n> [1] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com\n> [2] https://postgr.es/m/3f9c466d-d143-472c-a961-66406172af96.mengjuan.cmj@alibaba-inc.com\n\n(I'm not sure what \"WAL-molasses\" above expresses, same as \"sugar\"?)\n\nFor our information, this issue is related to the commit 0668719801\nwhich makes XLogPageRead restart reading a (continued or\nsegments-spanning) record with switching sources. In that thread, I\nmodifed the code to cause a server crash under the desired situation.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Aug 2021 12:03:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 8/23/21, 3:53 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> As mentioned in the course of thread [1], we're missing a fix for\r\n> streaming replication to avoid sending records that the primary hasn't\r\n> fully flushed yet. 
This patch is a first attempt at fixing that problem\r\n> by retreating the LSN reported as FlushPtr whenever a segment is\r\n> registered, based on the understanding that if no registration exists\r\n> then the LogwrtResult.Flush pointer can be taken at face value; but if a\r\n> registration exists, then we have to stream only till the start LSN of\r\n> that registered entry.\r\n\r\nI wonder if we need to move the call to RegisterSegmentBoundary() to\r\nsomewhere before WALInsertLockRelease() for this to work as intended.\r\nRight now, boundary registration could take place after the flush\r\npointer has been advanced, which means GetSafeFlushRecPtr() could\r\nstill return an unsafe position.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 24 Aug 2021 18:28:07 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Aug-24, Bossart, Nathan wrote:\n\n> I wonder if we need to move the call to RegisterSegmentBoundary() to\n> somewhere before WALInsertLockRelease() for this to work as intended.\n> Right now, boundary registration could take place after the flush\n> pointer has been advanced, which means GetSafeFlushRecPtr() could\n> still return an unsafe position.\n\nYeah, you're right -- that's a definite risk. I didn't try to reproduce\na problem with that, but it seems pretty obvious that it can happen.\n\nI didn't have a lot of luck with a reliable reproducer script. I was\nable to reproduce the problem starting with Ryo Matsumura's script and\nattaching a replica; most of the time the replica would recover by\nrestarting from a streaming position earlier than where the problem\noccurred; but a few times it would just get stuck with a WAL segment\ncontaining a bogus record. 
Then, after the patch, the problem no longer\noccurs.\n\nI attach the patch with the change you suggested.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru", "msg_date": "Tue, 24 Aug 2021 19:01:27 -0400", "msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 8/24/21, 4:03 PM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-24, Bossart, Nathan wrote:\r\n>\r\n>> I wonder if we need to move the call to RegisterSegmentBoundary() to\r\n>> somewhere before WALInsertLockRelease() for this to work as intended.\r\n>> Right now, boundary registration could take place after the flush\r\n>> pointer has been advanced, which means GetSafeFlushRecPtr() could\r\n>> still return an unsafe position.\r\n>\r\n> Yeah, you're right -- that's a definite risk. I didn't try to reproduce\r\n> a problem with that, but it seems pretty obvious that it can happen.\r\n>\r\n> I didn't have a lot of luck with a reliable reproducer script. I was\r\n> able to reproduce the problem starting with Ryo Matsumura's script and\r\n> attaching a replica; most of the time the replica would recover by\r\n> restarting from a streaming position earlier than where the problem\r\n> occurred; but a few times it would just get stuck with a WAL segment\r\n> containing a bogus record. Then, after the patch, the problem no longer\r\n> occurs.\r\n\r\nIf moving RegisterSegmentBoundary() is sufficient to prevent the flush\r\npointer from advancing before we register the boundary, I bet we could\r\nalso remove the WAL writer nudge.\r\n\r\nAnother interesting thing I see is that the boundary stored in\r\nearliestSegBoundary is not necessarily the earliest one. 
It's just\r\nthe first one that has been registered. I did this for simplicity for\r\nthe .ready file fix, but I can see it causing problems here. I think\r\nwe can move earliestSegBoundary backwards as long as it is greater\r\nthan lastNotifiedSeg + 1. However, it's still not necessarily the\r\nearliest one if we copied latestSegBoundary to earliestSegBoundary in\r\nNotifySegmentsReadyForArchive(). To handle this, we could track\r\nseveral boundaries in an array, but then we'd have to hold the safe\r\nLSN back whenever the array overflowed and we started forgetting\r\nboundaries.\r\n\r\nPerhaps there's a simpler solution. I'll keep thinking...\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 24 Aug 2021 23:52:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi Álvaro, -hackers, \n\n> I attach the patch with the change you suggested.\n\nI gave the v02 patch a shot on top of REL_12_STABLE (already including 5065aeafb0b7593c04d3bc5bc2a86037f32143fc). Previously (yesterday), without the v02 patch I was getting standby corruption always via simulation by having separate /pg_xlog dedicated fs, and archive_mode=on, wal_keep_segments=120, archive_command set to rsync to different dir on same fs, wal_init_zero at default(true). \n\nToday (with v02) I've got corruption in only the initial 2 runs out of ~ >30 tries on standby. Probably the 2 failures were somehow my fault (?) or some rare condition (and in 1 of those 2 cases simply restarting standby did help). To be honest I've tried to force this error, but with v02 I simply cannot force this error anymore, so that's good! :)\n\n> I didn't have a lot of luck with a reliable reproducer script. 
I was able to\n> reproduce the problem starting with Ryo Matsumura's script and attaching\n> a replica; most of the time the replica would recover by restarting from a\n> streaming position earlier than where the problem occurred; but a few\n> times it would just get stuck with a WAL segment containing a bogus\n> record. \n\nIn order to get a reliable reproducer and proper fault injection, instead of playing with really filling up the fs, apparently one could substitute the fd with an fd of /dev/full using e.g. dup2() so that every write is going to throw this error too:\n\nroot@hive:~# ./t & # simple while(1) { fprintf() flush () } testcase\nroot@hive:~# ls -l /proc/27296/fd/3\nlrwx------ 1 root root 64 Aug 25 06:22 /proc/27296/fd/3 -> /tmp/testwrite\nroot@hive:~# gdb -q -p 27296\n-- 1089 is bitmask O_WRONLY|..\n(gdb) p dup2(open(\"/dev/full\", 1089, 0777), 3)\n$1 = 3\n(gdb) c\nContinuing.\n==>\nfflush/write(): : No space left on device\n\nSo I've also tried to be malicious while writing to the DB and inject ENOSPC near places like:\n \na) XLogWrite()->XLogFileInit() near line 3322 // assuming: if (wal_init_zero) is true, one gets classic \"PANIC: could not write to file \"pg_wal/xlogtemp.90670\": No space left on device\"\nb) XLogWrite() near line 2547 just after pg_pwrite // one can get \"PANIC: could not write to log file 000000010000003B000000A8 at offset 0, length 15466496: No space left on device\" (that would be possible with wal_init_zero=false?)\nc) XLogWrite() near line 2592 // just before issue_xlog_fsync to get \"PANIC: could not fdatasync file \"000000010000004300000004\": Invalid argument\" that would pretty much mean the same as above but with the last possible offset near the end of WAL? 
\n\nThis was done with gdb voodoo:\nhandle SIGUSR1 noprint nostop\nbreak xlog.c:<LINE> // https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/transam/xlog.c#L3311\nc\nprint fd or openLogFile -- to verify it is 3\np dup2(open(\"/dev/full\", 1089, 0777), 3) -- during most of walwriter runtime it has current log as fd=3\n\nAfter restarting master and inspecting standby - in all of those above 3 cases - the standby didn't inhibit the \"invalid contrecord length\" at least here, while without this v02 patch the corruption is notorious. So if it passes the worst-case code review assumptions I would be wondering if it shouldn't even be committed as it stands right now.\n\n-J.\n\n\n", "msg_date": "Wed, 25 Aug 2021 11:59:45 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Aug-24, Bossart, Nathan wrote:\n\n> If moving RegisterSegmentBoundary() is sufficient to prevent the flush\n> pointer from advancing before we register the boundary, I bet we could\n> also remove the WAL writer nudge.\n\nCan you elaborate on this? I'm not sure I see the connection.\n\n> Another interesting thing I see is that the boundary stored in\n> earliestSegBoundary is not necessarily the earliest one. It's just\n> the first one that has been registered. I did this for simplicity for\n> the .ready file fix, but I can see it causing problems here.\n\nHmm, is there really a problem here? Surely the flush point cannot go\npast whatever has been written. If somebody is writing an earlier\nsection of WAL, then we cannot move the flush pointer to a later\nposition. 
So it doesn't matter if the earliest point we have registered\nis the true earliest -- we only care for it to be the earliest that is\npast the flush point.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 25 Aug 2021 08:32:31 -0400", "msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Mon, Aug 23, 2021 at 11:04 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 23 Aug 2021 18:52:17 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > I'd also like to have tests. That seems moderately hard, but if we had\n> > WAL-molasses that could be used in walreceiver, it could be done. (It\n> > sounds easier to write tests with a molasses-archive_command.)\n> >\n> >\n> > [1] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com\n> > [2] https://postgr.es/m/3f9c466d-d143-472c-a961-66406172af96.mengjuan.cmj@alibaba-inc.com\n>\n> (I'm not sure what \"WAL-molasses\" above expresses, same as \"sugar\"?)\n\nI think, but am not 100% sure, that \"molasses\" here is being used to\nrefer to fault injection.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Aug 2021 09:56:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 8/25/21, 5:33 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Aug-24, Bossart, Nathan wrote:\r\n>\r\n>> If moving RegisterSegmentBoundary() is sufficient to prevent the flush\r\n>> pointer from advancing before we register the boundary, I bet we could\r\n>> also remove the WAL writer nudge.\r\n>\r\n> Can you elaborate on this? 
I'm not sure I see the connection.\r\n\r\nThe reason we are moving RegisterSegmentBoundary() to before\r\nWALInsertLockRelease() is because we believe it will prevent boundary\r\nregistration from taking place after the flush pointer has already\r\nadvanced past the boundary in question. We had added the WAL writer\r\nnudge to make sure we called NotifySegmentsReadyForArchive() whenever\r\nthat happened.\r\n\r\nIf moving boundary registration to before we release the lock(s) is\r\nenough to prevent the race condition with the flush pointer, then ISTM\r\nwe no longer have to worry about nudging the WAL writer.\r\n\r\n>> Another interesting thing I see is that the boundary stored in\r\n>> earliestSegBoundary is not necessarily the earliest one. It's just\r\n>> the first one that has been registered. I did this for simplicity for\r\n>> the .ready file fix, but I can see it causing problems here.\r\n>\r\n> Hmm, is there really a problem here? Surely the flush point cannot go\r\n> past whatever has been written. If somebody is writing an earlier\r\n> section of WAL, then we cannot move the flush pointer to a later\r\n> position. So it doesn't matter if the earliest point we have registered\r\n> is the true earliest -- we only care for it to be the earliest that is\r\n> past the flush point.\r\n\r\nLet's say we have the following situation (F = flush, E = earliest\r\nregistered boundary, and L = latest registered boundary), and let's\r\nassume that each segment has a cross-segment record that ends in the\r\nnext segment.\r\n\r\n F E L\r\n |-----|-----|-----|-----|-----|-----|-----|-----|\r\n 1 2 3 4 5 6 7 8\r\n\r\nThen, we write out WAL to disk and create .ready files as needed. 
If\r\nwe didn't flush beyond the latest registered boundary, the latest\r\nregistered boundary now becomes the earliest boundary.\r\n\r\n F E\r\n |-----|-----|-----|-----|-----|-----|-----|-----|\r\n 1 2 3 4 5 6 7 8\r\n\r\nAt this point, the earliest segment boundary past the flush point is\r\nbefore the \"earliest\" boundary we are tracking.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 25 Aug 2021 18:18:59 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Aug-25, Jakub Wartak wrote:\n\n> In order to get reliable reproducer and get proper the fault injection\n> instead of playing with really filling up fs, apparently one could\n> substitute fd with fd of /dev/full using e.g. dup2() so that every\n> write is going to throw this error too:\n\nOh, this is a neat trick that I didn't know about. Thanks.\n\n> After restarting master and inspecting standby - in all of those above\n> 3 cases - the standby didn't inhibit the \"invalid contrecord length\"\n> at least here, while without corruption this v02 patch it is\n> notorious. So if it passes the worst-case code review assumptions I\n> would be wondering if it shouldn't even be committed as it stands\n> right now.\n\nWell, Nathan is right that this patch isn't really closing the hole\nbecause we aren't tracking segment boundaries that aren't \"earliest\" nor\n\"latest\" at the moment of registration. The simplistic approach seems\nokay for the archive case, but not for the replication case.\n\nI also noticed today (facepalm) that the patch doesn't work unless you\nhave archiving enabled, because the registration code is only invoked\nwhen XLogArchivingActive(). Doh. This part is easy to solve. 
The\nother doesn't look easy ...\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 25 Aug 2021 19:29:54 -0400", "msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "BTW while going about testing this, I noticed that we forbid\npg_walfile_name() while in recovery. That restriction was added by\ncommit 370f770c15a4 because ThisTimeLineID was not set correctly during\nrecovery. That was supposed to be fixed by commit 1148e22a82ed, so I\nthought that it should be possible to remove the restriction. However,\nI did that per the attached patch, but was quickly disappointed because\nThisTimeLineID seems to remain zero in a standby for reasons that I\ndidn't investigate.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)", "msg_date": "Wed, 25 Aug 2021 20:20:04 -0400", "msg_from": "\"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Wed, 25 Aug 2021 18:18:59 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/25/21, 5:33 AM, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote:\n> > On 2021-Aug-24, Bossart, Nathan wrote:\n> >> Another interesting thing I see is that the boundary stored in\n> >> earliestSegBoundary is not necessarily the earliest one. It's just\n> >> the first one that has been registered. I did this for simplicity for\n> >> the .ready file fix, but I can see it causing problems here.\n> >\n> > Hmm, is there really a problem here? Surely the flush point cannot go\n> > past whatever has been written. 
If somebody is writing an earlier\n> > section of WAL, then we cannot move the flush pointer to a later\n> > position. So it doesn't matter if the earliest point we have registered\n> > is the true earliest -- we only care for it to be the earliest that is\n> > past the flush point.\n> \n> Let's say we have the following situation (F = flush, E = earliest\n> registered boundary, and L = latest registered boundary), and let's\n> assume that each segment has a cross-segment record that ends in the\n> next segment.\n> \n>       F           E                 L\n> |-----|-----|-----|-----|-----|-----|-----|-----|\n> 1     2     3     4     5     6     7     8\n> \n> Then, we write out WAL to disk and create .ready files as needed. If\n> we didn't flush beyond the latest registered boundary, the latest\n> registered boundary now becomes the earliest boundary.\n> \n>       F                             E\n> |-----|-----|-----|-----|-----|-----|-----|-----|\n> 1     2     3     4     5     6     7     8\n> \n> At this point, the earliest segment boundary past the flush point is\n> before the \"earliest\" boundary we are tracking.\n\nWe know we have some cross-segment records in the region [E L] so we\ncannot add a .ready file if flush is in the region. I haven't looked at\nthe latest patch (or I may misunderstand the discussion here) but I\nthink we shouldn't move E before F exceeds previous (or in the first\npicture above) L. Things are done that way in my ancient proposal in\n[1].\n\nhttps://www.postgresql.org/message-id/attachment/117052/v4-0001-Avoid-archiving-a-WAL-segment-that-continues-to-t.patch\n+ if (LogwrtResult.Write < firstSegContRecStart ||\n+ lastSegContRecEnd <= LogwrtResult.Write)\n+ {\n <notify the last segment>\n\nBy doing so, at the time of the second picture, the pointers are set as:\n\n                    E     F           L\n |-----|-----|-----|-----|-----|-----|-----|-----|\n 1     2     3     4     5     6     7     8\n\nThen the pointers are cleared at the time F reaches L,\n\n                                      F\n |-----|-----|-----|-----|-----|-----|-----|-----|\n 1     2     3     4     5     6     7     8\n\nDoesn't this work correctly? 
As I think I mentioned in the thread, I\ndon't think we have so many (more than several, specifically)\nsegments in a region [E L].\n\n[1] https://www.postgresql.org/message-id/20201216.110120.887433782054853494.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 26 Aug 2021 09:40:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Wed, 25 Aug 2021 20:20:04 -0400, \"alvherre@alvh.no-ip.org\" <alvherre@alvh.no-ip.org> wrote in \n> BTW while going about testing this, I noticed that we forbid\n> pg_walfile_name() while in recovery. That restriction was added by\n> commit 370f770c15a4 because ThisTimeLineID was not set correctly during\n> recovery. That was supposed to be fixed by commit 1148e22a82ed, so I\n> thought that it should be possible to remove the restriction. However,\n> I did that per the attached patch, but was quickly disappointed because\n> ThisTimeLineID seems to remain zero in a standby for reasons that I\n> didn't investigate.\n\nOn an intermediate node of a cascading replication set, the timeline id on\nwalsender and walreceiver can differ and ordinary backends cannot\ndecide which to use.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 26 Aug 2021 10:20:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "(this is off-topic here)\n\nAt Wed, 25 Aug 2021 09:56:56 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Aug 23, 2021 at 11:04 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Mon, 23 Aug 2021 18:52:17 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > I'd also like to have tests. 
That seems moderately hard, but if we had\n> > > WAL-molasses that could be used in walreceiver, it could be done. (It\n> > > sounds easier to write tests with a molasses-archive_command.)\n> > >\n> > >\n> > > [1] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com\n> > > [2] https://postgr.es/m/3f9c466d-d143-472c-a961-66406172af96.mengjuan.cmj@alibaba-inc.com\n> >\n> > (I'm not sure what \"WAL-molasses\" above expresses, same as \"sugar\"?)\n> \n> I think, but am not 100% sure, that \"molasses\" here is being used to\n> refer to fault injection.\n\nOh. That makes sense, thanks.\n\nI sometimes inject artificial faults (a server crash, in this case) to\ncreate specific on-disk states but I cannot imagine that that kind of\nmachinery can be statically placed in the source tree..\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:32:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 8/25/21, 5:40 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> At Wed, 25 Aug 2021 18:18:59 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\r\n>> Let's say we have the following situation (F = flush, E = earliest\r\n>> registered boundary, and L = latest registered boundary), and let's\r\n>> assume that each segment has a cross-segment record that ends in the\r\n>> next segment.\r\n>>\r\n>> F E L\r\n>> |-----|-----|-----|-----|-----|-----|-----|-----|\r\n>> 1 2 3 4 5 6 7 8\r\n>>\r\n>> Then, we write out WAL to disk and create .ready files as needed. 
If\r\n>> we didn't flush beyond the latest registered boundary, the latest\r\n>> registered boundary now becomes the earliest boundary.\r\n>>\r\n>> F E\r\n>> |-----|-----|-----|-----|-----|-----|-----|-----|\r\n>> 1 2 3 4 5 6 7 8\r\n>>\r\n>> At this point, the earliest segment boundary past the flush point is\r\n>> before the \"earliest\" boundary we are tracking.\r\n>\r\n> We know we have some cross-segment records in the regin [E L] so we\r\n> cannot add a .ready file if flush is in the region. I haven't looked\r\n> the latest patch (or I may misunderstand the discussion here) but I\r\n> think we shouldn't move E before F exceeds previous (or in the first\r\n> picture above) L. Things are done that way in my ancient proposal in\r\n> [1].\r\n\r\nThe strategy in place ensures that we track a boundary that doesn't\r\nchange until the flush position passes it as well as the latest\r\nregistered boundary. I think it is important that any segment\r\nboundary tracking mechanism does at least those two things. 
I don't\r\nsee how we could do that if we didn't update E until F passed both E\r\nand L.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 26 Aug 2021 03:24:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Thu, 26 Aug 2021 03:24:48 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 8/25/21, 5:40 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > At Wed, 25 Aug 2021 18:18:59 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\n> >> Let's say we have the following situation (F = flush, E = earliest\n> >> registered boundary, and L = latest registered boundary), and let's\n> >> assume that each segment has a cross-segment record that ends in the\n> >> next segment.\n> >>\n> >> F E L\n> >> |-----|-----|-----|-----|-----|-----|-----|-----|\n> >> 1 2 3 4 5 6 7 8\n> >>\n> >> Then, we write out WAL to disk and create .ready files as needed. If\n> >> we didn't flush beyond the latest registered boundary, the latest\n> >> registered boundary now becomes the earliest boundary.\n> >>\n> >> F E\n> >> |-----|-----|-----|-----|-----|-----|-----|-----|\n> >> 1 2 3 4 5 6 7 8\n> >>\n> >> At this point, the earliest segment boundary past the flush point is\n> >> before the \"earliest\" boundary we are tracking.\n> >\n> > We know we have some cross-segment records in the regin [E L] so we\n> > cannot add a .ready file if flush is in the region. I haven't looked\n> > the latest patch (or I may misunderstand the discussion here) but I\n> > think we shouldn't move E before F exceeds previous (or in the first\n> > picture above) L. Things are done that way in my ancient proposal in\n> > [1].\n> \n> The strategy in place ensures that we track a boundary that doesn't\n> change until the flush position passes it as well as the latest\n> registered boundary. 
I think it is important that any segment\n> boundary tracking mechanism does at least those two things. I don't\n> see how we could do that if we didn't update E until F passed both E\n> and L.\n\n(Sorry, but I didn't get you clearly. So the discussion below might be\npointless.)\n\nThe ancient patch did:\n\nIf a flush didn't reach E, we can archive finished segments.\n\nIf a flush ends between E and L, we shouldn't archive finished segments\nat all. L can be moved further while in this state, while E sits at the\nsame location.\n\nOnce a flush passes L, we can archive all finished segments and can\nerase both E and L.\n\nA drawback of this strategy is that the region [E L] can contain gaps\n(that is, segment boundaries that are not bonded by a continuation\nrecord) and archiving can be excessively retarded. Perhaps if flush\ngoes behind the write head by more than two segments, the probability of\ncreating the gaps would be higher.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 26 Aug 2021 17:48:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "So I'm again distracted by something else, so here's what will have to\npass for v3 for the time being.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/", "msg_date": "Thu, 26 Aug 2021 10:50:52 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-08-23 18:52:17 -0400, Alvaro Herrera wrote:\n> Included 蔡梦娟 and Jakub Wartak because they've expressed interest on\n> this topic -- notably [2] (\"Bug on update timing of walrcv->flushedUpto\n> variable\").\n>\n> As mentioned in the course of thread [1], we're missing a fix for\n> streaming replication to avoid sending records that the primary hasn't\n> fully 
flushed yet. This patch is a first attempt at fixing that problem\n> by retreating the LSN reported as FlushPtr whenever a segment is\n> registered, based on the understanding that if no registration exists\n> then the LogwrtResult.Flush pointer can be taken at face value; but if a\n> registration exists, then we have to stream only till the start LSN of\n> that registered entry.\n\nI'm doubtful that the approach of adding awareness of record boundaries\nis a good path to go down:\n\n- It adds nontrivial work to hot code paths to handle an edge case,\n rather than making rare code paths more expensive.\n\n- There are very similar issues with promotions of replicas (consider\n what happens if we need to promote with the end of local WAL spanning\n a segment boundary, and what happens to cascading replicas). We have\n some logic to try to deal with that, but it's pretty grotty and I\n think incomplete.\n\n- It seems to make some future optimizations harder - we should work\n towards replicating data sooner, rather than the opposite. Right now\n that's a major bottleneck around syncrep.\n\n- Once XLogFlush() for some LSN returned we can write that LSN to\n disk. The LSN doesn't necessarily have to correspond to a specific\n on-disk location (it could e.g. be the return value from\n GetFlushRecPtr()). But \"rewinding\" to before the last record makes that\n problematic.\n\n- I suspect that schemes with heuristic knowledge of segment boundary\n spanning records have deadlock or at least latency spike issues. What\n if synchronous commit needs to flush up to a certain record boundary,\n but streaming rep doesn't replicate it out because there's segment\n spanning records both before and after?\n\n\n\nI think a better approach might be to handle this on the WAL layout\nlevel. 
What if we never overwrite partial records but instead just\nskipped over them during decoding?\n\nOf course there's some difficulties with that - the checksum and the\nlength from the record header aren't going to be meaningful.\n\nBut we could deal with that using a special flag in the\nXLogPageHeaderData.xlp_info of the following page. If that flag is set,\nxlp_rem_len could contain the checksum of the partial record.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Aug 2021 21:29:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Aug-30, Andres Freund wrote:\n\n> I'm doubtful that the approach of adding awareness of record boundaries\n> is a good path to go down:\n\nHonestly, I do not like it one bit and if I can avoid relying on them\nwhile making the whole thing work correctly, I am happy. Clearly it\nwasn't a problem for the ancient recovery-only WAL design, but as soon\nas we added replication on top the whole issue of continuation records\nbecame a bug.\n\nI do think that the code should be first correct and second performant,\nthough.\n \n> - There are very similar issues with promotions of replicas (consider\n> what happens if we need to promote with the end of local WAL spanning\n> a segment boundary, and what happens to cascading replicas). We have\n> some logic to try to deal with that, but it's pretty grotty and I\n> think incomplete.\n\nOuch, I hadn't thought of cascading replicas.\n\n> - It seems to make some future optimizations harder - we should work\n> towards replicating data sooner, rather than the opposite. Right now\n> that's a major bottleneck around syncrep.\n\nAbsolutely.\n\n> I think a better approach might be to handle this on the WAL layout\n> level. 
What if we never overwrite partial records but instead just\n> skipped over them during decoding?\n\nMaybe this is a workable approach, let's work it out fully.\n\nLet me see if I understand what you mean:\n* We would remove the logic to inhibit archiving and streaming-\n replicating the tail end of a split WAL record; that logic deals with\n bytes only, so doesn't have to be aware of record boundaries.\n* On WAL replay, we ignore records that are split across a segment\n boundary and whose checksum does not match.\n* On WAL write ... ?\n\nHow do we detect after recovery that a record that was being written,\nand potentially was sent to the archive, needs to be \"skipped\"?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 31 Aug 2021 09:56:30 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-08-31 09:56:30 -0400, Alvaro Herrera wrote:\n> On 2021-Aug-30, Andres Freund wrote:\n> > I think a better approach might be to handle this on the WAL layout\n> > level. What if we never overwrite partial records but instead just\n> > skipped over them during decoding?\n>\n> Maybe this is a workable approach, let's work it out fully.\n>\n> Let me see if I understand what you mean:\n> * We would remove the logic to inhibit archiving and streaming-\n> replicating the tail end of a split WAL record; that logic deals with\n> bytes only, so doesn't have to be aware of record boundaries.\n> * On WAL replay, we ignore records that are split across a segment\n> boundary and whose checksum does not match.\n> * On WAL write ... ?\n\nI was thinking that on a normal WAL write we'd do nothing. 
Instead we would\nhave dedicated code at the end of recovery that, if the WAL ends in a partial\nrecord, changes the page following the \"valid\" portion of the WAL to indicate\nthat an incomplete record is to be skipped.\n\nOf course, we need to be careful to not weaken WAL validity checking too\nmuch. How about the following:\n\nIf we're \"aborting\" a continued record, we set XLP_FIRST_IS_ABORTED_PARTIAL on\nthe page at which we do so (i.e. the page after the valid end of the WAL).\n\nOn a page with XLP_FIRST_IS_ABORTED_PARTIAL we expect a special type of record\nto start just after the page header. That record contains sufficient\ninformation for us to verify the validity of the partial record (since its\nchecksum and length aren't valid, and may not even be all readable if the\nrecord header itself was split). I think it would make sense to include the\nLSN of the aborted record, and a checksum of the partial data.\n\n\n> How do we detect after recovery that a record that was being written,\n> and potentially was sent to the archive, needs to be \"skipped\"?\n\nI think we can just read the WAL and see if it ends with a partial\nrecord. It'd add a bit of complication to the error checking in xlogreader,\nbecause we'd likely want to treat verification from page headers a bit\ndifferent from verification due to record data. But that seems doable.\n\nDoes this make sense?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:53:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "\n\nOn 2021/09/01 0:53, Andres Freund wrote:\n> I was thinking that on a normal WAL write we'd do nothing. 
Instead we would\nhave dedicated code at the end of recovery that, if the WAL ends in a partial\nrecord, changes the page following the \"valid\" portion of the WAL to indicate\nthat an incomplete record is to be skipped.\n\nAgreed!\n\n\n\n> Of course, we need to be careful to not weaken WAL validity checking too\n> much. How about the following:\n> \n> If we're \"aborting\" a continued record, we set XLP_FIRST_IS_ABORTED_PARTIAL on\n> the page at which we do so (i.e. the page after the valid end of the WAL).\n\nWhen do you expect that XLP_FIRST_IS_ABORTED_PARTIAL is set? It's set\nwhen recovery finds a partially-flushed segment-spanning record?\nBut maybe we cannot do that (i.e., cannot overwrite the page) because\nthe page that the flag is set in might have already been archived. No?\n\n\n> I think we can just read the WAL and see if it ends with a partial\n> record. It'd add a bit of complication to the error checking in xlogreader,\n> because we'd likely want to treat verification from page headers a bit\n> different from verification due to record data. But that seems doable.\n\nYes.\n\n\n> Does this make sense?\n\nYes, I think!\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Sep 2021 11:34:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-09-01 11:34:34 +0900, Fujii Masao wrote:\n> On 2021/09/01 0:53, Andres Freund wrote:\n> > Of course, we need to be careful to not weaken WAL validity checking too\n> > much. How about the following:\n> > \n> > If we're \"aborting\" a continued record, we set XLP_FIRST_IS_ABORTED_PARTIAL on\n> > the page at which we do so (i.e. the page after the valid end of the WAL).\n> \n> When do you expect that XLP_FIRST_IS_ABORTED_PARTIAL is set? 
It's set\n> when recovery finds a a partially-flushed segment-spanning record?\n> But maybe we cannot do that (i.e., cannot overwrite the page) because\n> the page that the flag is set in might have already been archived. No?\n\nI was imagining that XLP_FIRST_IS_ABORTED_PARTIAL would be set in the \"tail\nend\" of a partial record. I.e. if there's a partial record starting in the\nsuccessfully archived segment A, but the end of the record, in B, has not been\nwritten to disk before a crash, we'd set XLP_FIRST_IS_ABORTED_PARTIAL at the\nend of the valid data in B. Which could not have been archived yet, or we'd\nnot have a partial record. So we should never need to set the flag on an\nalready archived page.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Aug 2021 20:15:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Tue, 31 Aug 2021 20:15:24 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2021-09-01 11:34:34 +0900, Fujii Masao wrote:\n> > On 2021/09/01 0:53, Andres Freund wrote:\n> > > Of course, we need to be careful to not weaken WAL validity checking too\n> > > much. How about the following:\n> > > \n> > > If we're \"aborting\" a continued record, we set XLP_FIRST_IS_ABORTED_PARTIAL on\n> > > the page at which we do so (i.e. the page after the valid end of the WAL).\n> > \n> > When do you expect that XLP_FIRST_IS_ABORTED_PARTIAL is set? It's set\n> > when recovery finds a a partially-flushed segment-spanning record?\n> > But maybe we cannot do that (i.e., cannot overwrite the page) because\n> > the page that the flag is set in might have already been archived. No?\n> \n> I was imagining that XLP_FIRST_IS_ABORTED_PARTIAL would be set in the \"tail\n> end\" of a partial record. I.e. 
if there's a partial record starting in the\n> successfully archived segment A, but the end of the record, in B, has not been\n> written to disk before a crash, we'd set XLP_FIRST_IS_ABORTED_PARTIAL at the\n> end of the valid data in B. Which could not have been archived yet, or we'd\n> not have a partial record. So we should never need to set the flag on an\n> already archived page.\n\nI agree that that makes sense.\n\nIs that that, crash recovery remembers if the last record was an\nimmature record that looks like continue to the next segment, and if\nso, set the flag when inserting the first record, which would be\nCHECKPOINT_SHUTDOWN? (and reader deals with it)\n\nI'll try to show how it looks like.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 01 Sep 2021 13:15:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "\n\nOn 2021/09/01 12:15, Andres Freund wrote:\n> Hi,\n> \n> On 2021-09-01 11:34:34 +0900, Fujii Masao wrote:\n>> On 2021/09/01 0:53, Andres Freund wrote:\n>>> Of course, we need to be careful to not weaken WAL validity checking too\n>>> much. How about the following:\n>>>\n>>> If we're \"aborting\" a continued record, we set XLP_FIRST_IS_ABORTED_PARTIAL on\n>>> the page at which we do so (i.e. the page after the valid end of the WAL).\n>>\n>> When do you expect that XLP_FIRST_IS_ABORTED_PARTIAL is set? It's set\n>> when recovery finds a a partially-flushed segment-spanning record?\n>> But maybe we cannot do that (i.e., cannot overwrite the page) because\n>> the page that the flag is set in might have already been archived. No?\n> \n> I was imagining that XLP_FIRST_IS_ABORTED_PARTIAL would be set in the \"tail\n> end\" of a partial record. I.e. 
if there's a partial record starting in the\n> successfully archived segment A, but the end of the record, in B, has not been\n> written to disk before a crash, we'd set XLP_FIRST_IS_ABORTED_PARTIAL at the\n> end of the valid data in B. Which could not have been archived yet, or we'd\n> not have a partial record. So we should never need to set the flag on an\n> already archived page.\n\nThanks for clarifying that! Unless I misunderstand that, when recovery ends\nat a partially-flushed segment-spanning record, it sets\nXLP_FIRST_IS_ABORTED_PARTIAL in the next segment before starting writing\nnew WAL data there. Therefore XLP_FIRST_IS_CONTRECORD or\nXLP_FIRST_IS_ABORTED_PARTIAL must be set in the next segment following\na partially-flushed segment-spanning record. If none of them is set,\nthe validation code in recovery should report an error.\n\nYes, this design looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Sep 2021 15:01:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-09-01 15:01:43 +0900, Fujii Masao wrote:\n> Thanks for clarifying that! Unless I misunderstand that, when recovery ends\n> at a partially-flushed segment-spanning record, it sets\n> XLP_FIRST_IS_ABORTED_PARTIAL in the next segment before starting writing\n> new WAL data there. Therefore XLP_FIRST_IS_CONTRECORD or\n> XLP_FIRST_IS_ABORTED_PARTIAL must be set in the next segment following\n> a partially-flushed segment-spanning record. If none of them is set,\n> the validation code in recovery should report an error.\n\nRight. 
With the small addition that I think we shouldn't just do this for\nsegment spanning records, but for all records spanning pages.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Sep 2021 10:00:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 9/1/21, 10:06 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-09-01 15:01:43 +0900, Fujii Masao wrote:\r\n>> Thanks for clarifying that! Unless I misunderstand that, when recovery ends\r\n>> at a partially-flushed segment-spanning record, it sets\r\n>> XLP_FIRST_IS_ABORTED_PARTIAL in the next segment before starting writing\r\n>> new WAL data there. Therefore XLP_FIRST_IS_CONTRECORD or\r\n>> XLP_FIRST_IS_ABORTED_PARTIAL must be set in the next segment following\r\n>> a partially-flushed segment-spanning record. If none of them is set,\r\n>> the validation code in recovery should report an error.\r\n>\r\n> Right. With the small addition that I think we shouldn't just do this for\r\n> segment spanning records, but for all records spanning pages.\r\n\r\nThis approach seems promising. I like that it avoids adding extra\r\nwork in the hot path for writing WAL. I'm assuming it won't be back-\r\npatchable, though.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Sep 2021 00:04:17 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Wed, 1 Sep 2021 15:01:43 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/01 12:15, Andres Freund wrote:\n> > Hi,\n> > On 2021-09-01 11:34:34 +0900, Fujii Masao wrote:\n> >> On 2021/09/01 0:53, Andres Freund wrote:\n> >>> Of course, we need to be careful to not weaken WAL validity checking\n> >>> too\n> >>> much. 
How about the following:\n> >>>\n> >>> If we're \"aborting\" a continued record, we set\n> >>> XLP_FIRST_IS_ABORTED_PARTIAL on\n> >>> the page at which we do so (i.e. the page after the valid end of the\n> >>> WAL).\n> >>\n> >> When do you expect that XLP_FIRST_IS_ABORTED_PARTIAL is set? It's set\n> >> when recovery finds a a partially-flushed segment-spanning record?\n> >> But maybe we cannot do that (i.e., cannot overwrite the page) because\n> >> the page that the flag is set in might have already been archived. No?\n> > I was imagining that XLP_FIRST_IS_ABORTED_PARTIAL would be set in the\n> > \"tail\n> > end\" of a partial record. I.e. if there's a partial record starting in\n> > the\n> > successfully archived segment A, but the end of the record, in B, has\n> > not been\n> > written to disk before a crash, we'd set XLP_FIRST_IS_ABORTED_PARTIAL\n> > at the\n> > end of the valid data in B. Which could not have been archived yet, or\n> > we'd\n> > not have a partial record. So we should never need to set the flag on\n> > an\n> > already archived page.\n> \n> Thanks for clarifying that! Unless I misunderstand that, when recovery\n> ends\n> at a partially-flushed segment-spanning record, it sets\n> XLP_FIRST_IS_ABORTED_PARTIAL in the next segment before starting\n> writing\n> new WAL data there. Therefore XLP_FIRST_IS_CONTRECORD or\n> XLP_FIRST_IS_ABORTED_PARTIAL must be set in the next segment following\n> a partially-flushed segment-spanning record. 
If none of them is set,\n> the validation code in recovery should report an error.\n> \n> Yes, this design looks good to me.\n\nSo, this is a crude PoC of that.\n\nAt the end of recovery:\n\n- When XLogReadRecord misses a page where the next part of the current\n  continuation record should be seen, xlogreader->ContRecAbortPtr is\n  set to the beginning of the missing page.\n\n- When StartupXLOG receives a valid ContRecAbortPtr, the value is used\n  as the next WAL insertion location, and the same value is set in\n  XLogCtl->contAbortedRecPtr.\n\n- When XLogCtl->contAbortedRecPtr is set, AdvanceXLInsertBuffer()\n  (called under XLogInsertRecord()) sets the XLP_FIRST_IS_ABORTED_PARTIAL\n  flag on the page.\n\nDuring recovery:\n- When XLogReadRecord meets an XLP_FIRST_IS_ABORT_PARTIAL page, it\n  rereads a record from there.\n\nIn this PoC,\n\n1. This patch is written on the current master, but it doesn't\n   interfere with the seg-boundary-memorize patch since it removes the\n   calls to RegisterSegmentBoundary.\n\n2. Since xlogreader cannot emit a log message immediately, we don't\n   have a means to leave a log message to inform that recovery met an\n   aborted partial continuation record. (In this PoC, it is done by\n   fprintf:p)\n\n3. Maybe we need pg_waldump to show partial continuation records,\n   but I'm not sure how to realize that.\n\n4. By the way, is this (method) applicable at this stage?\n\n\nThe attached first is the PoC including the debug-crash aid. The second\nis the repro script. 
It fails to reproduce the situation once per\nseveral trials.\n\nThe following log messages are shown by a run of the script.\n\n> ...\n> TRAP: FailedAssertion(\"c++ < 1\", File: \"xlog.c\", Line: 2675, PID: 254644)\n> ...\n> LOG: database system is shut down\n> ...\n> \n> LOG: redo starts at 0/2000028\n> LOG: redo done at 0/6FFFFA8 system usage:...\n> LOG: #### Recovery finished: ContRecAbort: 0/7000000 (EndRecPtr: 0/6FFFFE8)\n\nThe record from 6FFFFE8 is missing the trailing part after 7000000.\n\n> LOG: #### EndOfLog=0/7000000\n> LOG: #### set XLP_FIRST_IS_ABORT_PARTIAL@0/7000000\n\nSo, WAL insertion starts from 7000000 and the flag is set on the first page.\n\n> LOG: database system is ready to accept connections\n> ...\n> LOG: database system is shut down\n> ...\n> #########################\n> ...\n> LOG: redo starts at 0/2000028\n> LOG: consistent recovery state reached at 0/2000100\n> ...\n> LOG: restored log file \"000000010000000000000007\" from archive\n> #### aborted partial continuation record found at 0/6FFFFE8, continue from 0/7000000\n\nThe record from 6FFFFE8 is immature, so it is skipped and reading continues\nfrom 7000000.\n\n> LOG: last completed transaction was at log time 2021-09-01 20:40:21.775295+09\n> LOG: #### Recovery finished: ContRecAbort: 0/0 (EndRecPtr: 0/8000000)\n> LOG: restored log file \"000000010000000000000007\" from archive\n> LOG: selected new timeline ID: 2\n> LOG: archive recovery complete\n> LOG: #### EndOfLog=0/8000000\n\nRecovery ends.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 24165ab03e..b0f18e4e5e 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -111,6 +111,7 @@ int\t\t\tCommitSiblings = 5; /* # concurrent xacts needed to sleep */\n int\t\t\twal_retrieve_retry_interval = 5000;\n int\t\t\tmax_slot_wal_keep_size_mb = -1;\n bool\t\ttrack_wal_io_timing = 
false;\n+bool\t\tcontrec_aborted = false;\n \n #ifdef WAL_DEBUG\n bool\t\tXLOG_DEBUG = false;\n@@ -586,6 +587,7 @@ typedef struct XLogCtlData\n \tXLogRecPtr\treplicationSlotMinLSN;\t/* oldest LSN needed by any slot */\n \n \tXLogSegNo\tlastRemovedSegNo;\t/* latest removed/recycled XLOG segment */\n+\tXLogRecPtr\tcontAbortedRecPtr;\n \n \t/* Fake LSN counter, for unlogged relations. Protected by ulsn_lck. */\n \tXLogRecPtr\tunloggedLSN;\n@@ -735,6 +737,10 @@ typedef struct XLogCtlData\n \tXLogSegNo\tlatestSegBoundary;\n \tXLogRecPtr\tlatestSegBoundaryEndPtr;\n \n+\t/* BEGIN: FOR DEBUGGING-CRASH USE*/\n+\tbool\t\tcrossseg;\n+\t/* END: DEBUGGING-CRASH USE*/\n+\n \tslock_t\t\tsegtrack_lck;\t/* locks shared variables shown above */\n } XLogCtlData;\n \n@@ -860,6 +866,7 @@ static XLogSource XLogReceiptSource = XLOG_FROM_ANY;\n /* State information for XLOG reading */\n static XLogRecPtr ReadRecPtr;\t/* start of last record read */\n static XLogRecPtr EndRecPtr;\t/* end+1 of last record read */\n+static XLogRecPtr ContRecAbortPtr;\t/* end+1 of last aborted contrec */\n \n /*\n * Local copies of equivalent fields in the control file. When running\n@@ -1178,16 +1185,10 @@ XLogInsertRecord(XLogRecData *rdata,\n \t\tXLByteToSeg(StartPos, StartSeg, wal_segment_size);\n \t\tXLByteToSeg(EndPos, EndSeg, wal_segment_size);\n \n-\t\t/*\n-\t\t * Register our crossing the segment boundary if that occurred.\n-\t\t *\n-\t\t * Note that we did not use XLByteToPrevSeg() for determining the\n-\t\t * ending segment. 
This is so that a record that fits perfectly into\n-\t\t * the end of the segment causes the latter to get marked ready for\n-\t\t * archival immediately.\n-\t\t */\n-\t\tif (StartSeg != EndSeg && XLogArchivingActive())\n-\t\t\tRegisterSegmentBoundary(EndSeg, EndPos);\n+\t\t/* BEGIN: FOR DEBUGGING-CRASH USE */\n+\t\tif (StartSeg != EndSeg)\n+\t\t\tXLogCtl->crossseg = true;\n+\t\t/* END: FOR DEBUGGING-CRASH USE */\n \n \t\t/*\n \t\t * Advance LogwrtRqst.Write so that it includes new block(s).\n@@ -2292,6 +2293,27 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, bool opportunistic)\n \t\tif (!Insert->forcePageWrites)\n \t\t\tNewPage->xlp_info |= XLP_BKP_REMOVABLE;\n \n+\t\t/*\n+\t\t * If the last page ended with an aborted partial continuation record,\n+\t\t * mark it to tell that the partial record is omittable. Since this\n+\t\t * happens only at the end of crash recovery, no race condition here.\n+\t\t */\n+\t\tif (XLogCtl->contAbortedRecPtr >= NewPageBeginPtr)\n+\t\t{\n+\t\t\tif (XLogCtl->contAbortedRecPtr == NewPageBeginPtr)\n+\t\t\t{\n+\t\t\t\tNewPage->xlp_info |= XLP_FIRST_IS_ABORT_PARTIAL;\n+\t\t\t\telog(LOG, \"#### set XLP_FIRST_IS_ABORT_PARTIAL@%X/%X\",\n+\t\t\t\t\t LSN_FORMAT_ARGS(NewPageBeginPtr));\n+\t\t\t}\n+\t\t\telse\n+\t\t\t\telog(LOG, \"### inconsistent abort location %X/%X, expected %X/%X\",\n+\t\t\t\t\t LSN_FORMAT_ARGS(XLogCtl->contAbortedRecPtr),\n+\t\t\t\t\t LSN_FORMAT_ARGS(NewPageBeginPtr));\n+\t\t\t\t\t \n+\t\t\tXLogCtl->contAbortedRecPtr = InvalidXLogRecPtr;\n+\t\t}\n+\t\t\t\n \t\t/*\n \t\t * If first page of an XLOG segment file, make it a long header.\n \t\t */\n@@ -2644,6 +2666,17 @@ XLogWrite(XLogwrtRqst WriteRqst, bool flexible)\n \t\t\t{\n \t\t\t\tissue_xlog_fsync(openLogFile, openLogSegNo);\n \n+\t\t\t\t/* BEGIN: FOR DEBUGGING-CRASH USE */\n+\t\t\t\tif (XLogCtl->crossseg)\n+\t\t\t\t{\n+\t\t\t\t\tstatic int c = 0;\n+\t\t\t\t\tstruct stat b;\n+\n+\t\t\t\t\tif (stat(\"/tmp/hoge\", &b) == 0)\n+\t\t\t\t\t\tAssert (c++ < 1);\n+\t\t\t\t}\n+\t\t\t\t/* END: 
FOR DEBUGGING-CRASH USE */\n+\n \t\t\t\t/* signal that we need to wakeup walsenders later */\n \t\t\t\tWalSndWakeupRequest();\n \n@@ -4568,6 +4601,7 @@ ReadRecord(XLogReaderState *xlogreader, int emode,\n \t\trecord = XLogReadRecord(xlogreader, &errormsg);\n \t\tReadRecPtr = xlogreader->ReadRecPtr;\n \t\tEndRecPtr = xlogreader->EndRecPtr;\n+\t\tContRecAbortPtr = xlogreader->ContRecAbortPtr;\n \t\tif (record == NULL)\n \t\t{\n \t\t\tif (readFile >= 0)\n@@ -7873,12 +7907,26 @@ StartupXLOG(void)\n \tStandbyMode = false;\n \n \t/*\n-\t * Re-fetch the last valid or last applied record, so we can identify the\n-\t * exact endpoint of what we consider the valid portion of WAL.\n+\t * The last record may be an immature continuation record at the end of a\n+\t * page. We continue writing from ContRecAbortPtr instead of EndRecPtr in\n+\t * that case.\n \t */\n-\tXLogBeginRead(xlogreader, LastRec);\n-\trecord = ReadRecord(xlogreader, PANIC, false);\n-\tEndOfLog = EndRecPtr;\n+\telog(LOG, \"#### Recovery finished: ContRecAbort: %X/%X (EndRecPtr: %X/%X)\", LSN_FORMAT_ARGS(ContRecAbortPtr), LSN_FORMAT_ARGS(EndRecPtr));\n+\tif (XLogRecPtrIsInvalid(ContRecAbortPtr))\n+\t{\n+\t\t/*\n+\t\t * Re-fetch the last valid or last applied record, so we can identify\n+\t\t * the exact endpoint of what we consider the valid portion of WAL.\n+\t\t */\n+\t\tXLogBeginRead(xlogreader, LastRec);\n+\t\trecord = ReadRecord(xlogreader, PANIC, false);\n+\t\tEndOfLog = EndRecPtr;\n+\t}\n+\telse\n+\t{\n+\t\tEndOfLog = ContRecAbortPtr;\n+\t\tXLogCtl->contAbortedRecPtr = ContRecAbortPtr;\n+\t}\n \n \t/*\n \t * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n@@ -8013,7 +8061,8 @@ StartupXLOG(void)\n \tInsert = &XLogCtl->Insert;\n \tInsert->PrevBytePos = XLogRecPtrToBytePos(LastRec);\n \tInsert->CurrBytePos = XLogRecPtrToBytePos(EndOfLog);\n-\n+\telog(LOG, \"#### EndOfLog=%X/%X\", LSN_FORMAT_ARGS(EndOfLog));\n+\t\n \t/*\n \t * Tricky point here: readBuf contains the *last* block that the 
LastRec\n \t * record spans, not the one it starts in. The last block is indeed the\ndiff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\nindex 5cf74e181a..404db7ce4d 100644\n--- a/src/backend/access/transam/xlogreader.c\n+++ b/src/backend/access/transam/xlogreader.c\n@@ -294,6 +294,7 @@ XLogReadRecord(XLogReaderState *state, char **errormsg)\n \n \tResetDecoder(state);\n \n+\tstate->ContRecAbortPtr = InvalidXLogRecPtr;\n \tRecPtr = state->EndRecPtr;\n \n \tif (state->ReadRecPtr != InvalidXLogRecPtr)\n@@ -319,6 +320,7 @@ XLogReadRecord(XLogReaderState *state, char **errormsg)\n \t\trandAccess = true;\n \t}\n \n+retry:\n \tstate->currRecPtr = RecPtr;\n \n \ttargetPagePtr = RecPtr - (RecPtr % XLOG_BLCKSZ);\n@@ -444,12 +446,27 @@ XLogReadRecord(XLogReaderState *state, char **errormsg)\n \t\t\t\t\t\t\t\t\t\t XLOG_BLCKSZ));\n \n \t\t\tif (readOff < 0)\n-\t\t\t\tgoto err;\n+\t\t\t\tgoto err_partial_contrec;\n \n \t\t\tAssert(SizeOfXLogShortPHD <= readOff);\n \n \t\t\t/* Check that the continuation on next page looks valid */\n \t\t\tpageHeader = (XLogPageHeader) state->readBuf;\n+\t\t\tif (pageHeader->xlp_info & XLP_FIRST_IS_ABORT_PARTIAL)\n+\t\t\t{\n+\t\t\t\tif (pageHeader->xlp_info & XLP_FIRST_IS_CONTRECORD)\n+\t\t\t\t{\n+\t\t\t\t\treport_invalid_record(state,\n+\t\t\t\t\t\t\t\t\t\t \"both XLP_FIRST_IS_CONTRECORD and XLP_FIRST_IS_ABORT_PARTIAL are set at %X/%X\",\n+\t\t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(RecPtr));\n+\t\t\t\t\tgoto err;\n+\t\t\t\t}\n+\t\t\t\t\t\n+\t\t\t\tfprintf(stderr, \"#### aborted partial continuation record found at %X/%X, continue from %X/%X\\n\", LSN_FORMAT_ARGS(RecPtr), LSN_FORMAT_ARGS(targetPagePtr));\n+\t\t\t\tResetDecoder(state);\n+\t\t\t\tRecPtr = targetPagePtr;\n+\t\t\t\tgoto retry;\n+\t\t\t}\t\t\t\t\n \t\t\tif (!(pageHeader->xlp_info & XLP_FIRST_IS_CONTRECORD))\n \t\t\t{\n \t\t\t\treport_invalid_record(state,\n@@ -550,6 +567,10 @@ XLogReadRecord(XLogReaderState *state, char **errormsg)\n \telse\n 
\t\treturn NULL;\n \n+err_partial_contrec:\n+\tstate->ContRecAbortPtr = targetPagePtr;\n+\t\tfprintf(stderr, \"contrec aborted@%X/%X\\n\", LSN_FORMAT_ARGS(state->ContRecAbortPtr));\n+\t\n err:\n \n \t/*\ndiff --git a/src/include/access/xlog_internal.h b/src/include/access/xlog_internal.h\nindex 3b5eceff65..6390812a5a 100644\n--- a/src/include/access/xlog_internal.h\n+++ b/src/include/access/xlog_internal.h\n@@ -76,8 +76,10 @@ typedef XLogLongPageHeaderData *XLogLongPageHeader;\n #define XLP_LONG_HEADER\t\t\t\t0x0002\n /* This flag indicates backup blocks starting in this page are optional */\n #define XLP_BKP_REMOVABLE\t\t\t0x0004\n+/* This flag indicates the first record in this page breaks a contrecord */\n+#define XLP_FIRST_IS_ABORT_PARTIAL\t0x0008\n /* All defined flag bits in xlp_info (used for validity checking of header) */\n-#define XLP_ALL_FLAGS\t\t\t\t0x0007\n+#define XLP_ALL_FLAGS\t\t\t\t0x000F\n \n #define XLogPageHeaderSize(hdr)\t\t\\\n \t(((hdr)->xlp_info & XLP_LONG_HEADER) ? 
SizeOfXLogLongPHD : SizeOfXLogShortPHD)\ndiff --git a/src/include/access/xlogreader.h b/src/include/access/xlogreader.h\nindex 21d200d3df..00a03a628c 100644\n--- a/src/include/access/xlogreader.h\n+++ b/src/include/access/xlogreader.h\n@@ -175,6 +175,8 @@ struct XLogReaderState\n \tXLogRecPtr\tReadRecPtr;\t\t/* start of last record read */\n \tXLogRecPtr\tEndRecPtr;\t\t/* end+1 of last record read */\n \n+\tXLogRecPtr\tContRecAbortPtr; /* end+1 of aborted partial contrecord if\n+\t\t\t\t\t\t\t\t * any */\n \n \t/* ----------------------------------------\n \t * Decoded representation of current record\ndiff --git a/src/include/catalog/pg_control.h b/src/include/catalog/pg_control.h\nindex e3f48158ce..26fc123cdb 100644\n--- a/src/include/catalog/pg_control.h\n+++ b/src/include/catalog/pg_control.h\n@@ -76,6 +76,7 @@ typedef struct CheckPoint\n #define XLOG_END_OF_RECOVERY\t\t\t0x90\n #define XLOG_FPI_FOR_HINT\t\t\t\t0xA0\n #define XLOG_FPI\t\t\t\t\t\t0xB0\n+#define XLOG_ABORT_CONTRECORD\t\t\t0xC0\n \n \n /*\n\nPWD=`pwd`\nDATA=data\nBKUP=bkup\nARCH=$PWD/arch\nrm -rf arch\nmkdir arch\nrm -rf $DATA\ninitdb -D $DATA\necho \"restart_after_crash = off\" >> $DATA/postgresql.conf\necho \"archive_mode=on\" >> $DATA/postgresql.conf\necho \"archive_command='cp %p ${ARCH}/%f'\" >> $DATA/postgresql.conf\necho \"restart_after_crash = off\" >> $DATA/postgresql.conf\nrm /tmp/hoge\npg_ctl -D $DATA start\nrm -rf $BKUP\npg_basebackup -D $BKUP -h /tmp\necho \"archive_mode=off\" >> $BKUP/postgresql.conf\necho \"restore_command='cp ${ARCH}/%f %p'\" >> $BKUP/postgresql.conf\ntouch bkup/recovery.signal\n\npsql -c 'create table t(a int); insert into t (select a from generate_series(0, 600000) a)'\ntouch /tmp/hoge\npsql -c 'insert into t (select a from generate_series(0, 600000) a)'\nrm /tmp/hoge\nsleep 5\npg_ctl -D $DATA -w start\npsql -c 'checkpoint'\npg_ctl -D $DATA -w stop\necho \"#########################\"\npg_ctl -D $BKUP -w start\nsleep 10\npg_ctl -D $BKUP -w stop", "msg_date": "Thu, 
02 Sep 2021 09:24:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-02, Kyotaro Horiguchi wrote:\n\n> So, this is a crude PoC of that.\n\nI had ended up with something very similar, except I was trying to cram\nthe flag via the checkpoint record instead of hacking\nAdvanceXLInsertBuffer(). I removed that stuff and merged both, here's\nthe result.\n\n> 1. This patch is written on the current master, but it doesn't\n> interfare with the seg-boundary-memorize patch since it removes the\n> calls to RegisterSegmentBoundary.\n\nI rebased on top of the revert patch.\n\n> 2. Since xlogreader cannot emit a log-message immediately, we don't\n> have a means to leave a log message to inform recovery met an\n> aborted partial continuation record. (In this PoC, it is done by\n> fprintf:p)\n\nShrug. We can just use an #ifndef FRONTEND / elog(LOG). (I didn't keep\nthis part, sorry.)\n\n> 3. Myebe we need to pg_waldump to show partial continuation records,\n> but I'm not sure how to realize that.\n\nAh yes, we'll need to fix that.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)", "msg_date": "Thu, 2 Sep 2021 18:43:33 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Thu, 2 Sep 2021 18:43:33 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Sep-02, Kyotaro Horiguchi wrote:\n> \n> > So, this is a crude PoC of that.\n> \n> I had ended up with something very similar, except I was trying to cram\n> the flag via the checkpoint record instead of hacking\n> AdvanceXLInsertBuffer(). I removed that stuff and merged both, here's\n> the result.\n> \n> > 1. 
This patch is written on the current master, but it doesn't\n> > interfare with the seg-boundary-memorize patch since it removes the\n> > calls to RegisterSegmentBoundary.\n> \n> I rebased on top of the revert patch.\n\nThanks!\n\n> > 2. Since xlogreader cannot emit a log-message immediately, we don't\n> > have a means to leave a log message to inform recovery met an\n> > aborted partial continuation record. (In this PoC, it is done by\n> > fprintf:p)\n> \n> Shrug. We can just use an #ifndef FRONTEND / elog(LOG). (I didn't keep\n> this part, sorry.)\n\nNo problem, it was merely a development-time message for behavior\nobservation.\n\n> > 3. Myebe we need to pg_waldump to show partial continuation records,\n> > but I'm not sure how to realize that.\n> \n> Ah yes, we'll need to fix that.\n\nI just believe 0001 does the right thing.\n\n0002:\n> +\tXLogRecPtr\tabortedContrecordPtr; /* LSN of incomplete record at end of\n> +\t\t\t\t\t\t\t\t\t * WAL */\n\nThe name sounds like the start LSN. Doesn't contrecordAbort(ed)Ptr work?\n\n> \t\t\tif (!(pageHeader->xlp_info & XLP_FIRST_IS_CONTRECORD))\n> \t\t\t{\n> \t\t\t\treport_invalid_record(state,\n> \t\t\t\t\t\t\t\t\t \"there is no contrecord flag at %X/%X\",\n> \t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(RecPtr));\n> -\t\t\t\tgoto err;\n> +\t\t\t\tgoto aborted_contrecord;\n\nThis loses the exclusion check between XLP_FIRST_IS_CONTRECORD and\n_IS_ABORTED_PARTIAL. Is it okay? (I don't object to removing the check.)\n\nI didn't think of this as an aborted contrecord, but on second\nthought, when we see a record broken in any style, we stop recovery at\nthat point. I agree with the change and all the similar changes.\n\n+\t\t\t\t\t/* XXX should we goto aborted_contrecord here? */\n\nI think it should be aborted_contrecord.\n\nWhen that happens, the loaded bytes actually looked like the first\nfragment of a continuation record to xlogreader, even if the cause\nwas a broken total_len. 
So if we abort the record there, the next\ntime xlogreader will meet XLP_FIRST_IS_ABORTED_PARTIAL at the same\npage, and correctly finds a new record there.\n\nOn the other hand if we just errored-out there, we will step-back to\nthe beginning of the broken record in the previous page or segment\nthen start writing a new record there but that is exactly what we want\nto avoid now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Sep 2021 16:09:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Kyotaro Horiguchi wrote:\n\n> At Thu, 2 Sep 2021 18:43:33 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n\n> 0002:\n> > +\tXLogRecPtr\tabortedContrecordPtr; /* LSN of incomplete record at end of\n> > +\t\t\t\t\t\t\t\t\t * WAL */\n> \n> The name sounds like the start LSN. doesn't contrecordAbort(ed)Ptr work?\n\nI went over various iterations of the name of this, and still not\nentirely happy. I think we need to convey the ideas that\n\n* This is the endptr+1 of the known-good part of the record, that is,\n the beginning of the next part of the record. I think \"endPtr\"\n summarizes this well; we use this name elsewhere.\n\n* At some point before recovery, this was the last WAL record that\n existed\n\n* there is an invalid contrecord, or we were looking for a contrecord\n and found invalid data\n\n* this record is incomplete\n\nSo maybe\n1. incompleteRecEndPtr\n2. finalInvalidRecEndPtr\n3. 
brokenContrecordEndPtr\n\n> > \t\t\tif (!(pageHeader->xlp_info & XLP_FIRST_IS_CONTRECORD))\n> > \t\t\t{\n> > \t\t\t\treport_invalid_record(state,\n> > \t\t\t\t\t\t\t\t\t \"there is no contrecord flag at %X/%X\",\n> > \t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(RecPtr));\n> > -\t\t\t\tgoto err;\n> > +\t\t\t\tgoto aborted_contrecord;\n> \n> This loses the exclusion check between XLP_FIRST_IS_CONTRECORD and\n> _IS_ABROTED_PARTIAL. Is it okay? (I don't object to remove the check.).\n\nYeah, I was unsure about that. I think it's good to have it as a\ncross-check, though it should never occur. I'll put it back.\n\nAnother related point is whether it's a good idea to have the ereport()\nabout the bit appearing in a not-start-of-page address being a PANIC.\nIf we degrade to WARNING then it'll be lost in the noise, but I'm not\nsure what else can we do. (If it's a PANIC, then you end up with an\nunusable database).\n\n> I didin't thought this as an aborted contrecord. but on second\n> thought, when we see a record broken in any style, we stop recovery at\n> the point. I agree to the change and all the silmiar changes.\n> \n> +\t\t\t\t\t/* XXX should we goto aborted_contrecord here? */\n> \n> I think it should be aborted_contrecord.\n> \n> When that happens, the loaded bytes actually looked like the first\n> fragment of a continuation record to xlogreader, even if the cause\n> were a broken total_len. 
So if we abort the record there, the next\n> time xlogreader will meet XLP_FIRST_IS_ABORTED_PARTIAL at the same\n> page, and correctly find a new record there.\n> \n> On the other hand, if we just errored out there, we will step back to\n> the beginning of the broken record in the previous page or segment,\n> then start writing a new record there, but that is exactly what we want\n> to avoid now.\n\nHmm, yeah, makes sense.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 09:59:29 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Oh, but of course we can't modify XLogReaderState in backbranches to add\nthe new struct member abortedContrecordPtr (or whatever we end up naming\nthat.)\n\nI think I'm going to fix this, in backbranches only, by having\nxlogreader.c have a global variable, which is going to be used by\nReadRecord instead of accessing the struct member.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 12:55:23 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Kyotaro Horiguchi wrote:\n\n> At Thu, 2 Sep 2021 18:43:33 -0400, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n\n> The name sounds like the start LSN. Doesn't contrecordAbort(ed)Ptr work?\n> \n> > \t\t\tif (!(pageHeader->xlp_info & XLP_FIRST_IS_CONTRECORD))\n> > \t\t\t{\n> > \t\t\t\treport_invalid_record(state,\n> > \t\t\t\t\t\t\t\t\t \"there is no contrecord flag at %X/%X\",\n> > \t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(RecPtr));\n> > +\t\t\t\tgoto aborted_contrecord;\n> \n> This loses the exclusion check between XLP_FIRST_IS_CONTRECORD and\n> _IS_ABORTED_PARTIAL. Is it okay? 
(I don't object to removing the check.)\n\nOn second thought, I'm not sure that we should make xlogreader report an\ninvalid record here. If we do, how is the user going to recover?\nRecovery will stop there and lose whatever was written afterwards.\nMaybe you could claim that if both bits are set then WAL is corrupted,\nso it's okay to stop recovery. But if WAL is really corrupted, then the\nCRC check will fail. All in all, I think I'd rather ignore the flag if\nwe see it set.\n\nAt most, we could have an\n\n#ifndef FRONTEND\n\tereport(WARNING, \"found unexpected flag xyz\");\n#endif\n\nor something like that. However, xlogreader does not currently have\nanything like that, so I'm not completely sure.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:35:32 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-09-03 12:55:23 -0400, Alvaro Herrera wrote:\n> Oh, but of course we can't modify XLogReaderState in backbranches to add\n> the new struct member abortedContrecordPtr (or whatever we end up naming\n> that.)\n\nWhy is that? Afaict it's always allocated via XLogReaderAllocate(), so adding\na new field at the end should be fine?\n\nThat said, I'm worried that this stuff is too complicated to get right in the\nbackbranches. 
I suspect letting it stew in master for a while before\nbackpatching would be a good move.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Sep 2021 10:46:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Andres Freund wrote:\n\n> Hi,\n> \n> On 2021-09-03 12:55:23 -0400, Alvaro Herrera wrote:\n> > Oh, but of course we can't modify XLogReaderState in backbranches to add\n> > the new struct member abortedContrecordPtr (or whatever we end up naming\n> > that.)\n> \n> Why is that? Afaict it's always allocated via XLogReaderAllocate(), so adding\n> a new field at the end should be fine?\n\nHmm, true, that works.\n\n> That said, I'm worried that this stuff is too complicated to get right in the\n> backbranches. I suspect letting it stew in master for a while before\n> backpatching would be a good move.\n\nSure, we can put it in master now and backpatch before the November\nminors if everything goes well.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:56:16 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "I thought that the way to have debug output for this new WAL code is to\nuse WAL_DEBUG; that way it won't bother anyone and we can remove them\nlater if necessary.\n\nAlso, I realized that the way to handle any error in the path that\nassembles a record from contrecords is to set a flag that we can test\nafter the standard \"err:\" label; no need to create a new label.\n\nI also wrote a lot more comments to try and explain what is going on and\nwhy.\n\nI'm still unsure about the two-flags reporting in xlogreader, so I put\nthat in a separate commit. 
Opinions on that one?\n\nThe last commit is something I noticed in pg_rewind ...\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)", "msg_date": "Fri, 3 Sep 2021 20:01:50 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Alvaro Herrera wrote:\n\n> I thought that the way to have debug output for this new WAL code is to\n> use WAL_DEBUG; that way it won't bother anyone and we can remove them\n> later if necessary.\n> \n> Also, I realized that the way to handle any error in the path that\n> assembles a record from contrecords is to set a flag that we can test\n> after the standard \"err:\" label; no need to create a new label.\n> \n> I also wrote a lot more comments to try and explain what is going on and\n> why.\n> \n> I'm still unsure about the two-flags reporting in xlogreader, so I put\n> that in a separate commit. Opinions on that one?\n> \n> The last commit is something I noticed in pg_rewind ...\n\nOh, the pg_rewind tests died. I fat-fingered the Assert conversion.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 20:03:45 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-09-03 20:01:50 -0400, Alvaro Herrera wrote:\n> From 6abc5026f92b99d704bff527d1306eb8588635e9 Mon Sep 17 00:00:00 2001\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Tue, 31 Aug 2021 20:55:10 -0400\n> Subject: [PATCH v3 1/5] Revert \"Avoid creating archive status \".ready\" files\n> too early\"\n\n> This reverts commit 515e3d84a0b58b58eb30194209d2bc47ed349f5b.\n\nI'd prefer to see this pushed soon. 
I've a bunch of patches to xlog.c that\nconflict with the prior changes, and rebasing back and forth isn't that much\nfun...\n\n\n\n> From f767cdddb3120f1f6c079c8eb00eaff38ea98c79 Mon Sep 17 00:00:00 2001\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Thu, 2 Sep 2021 17:21:46 -0400\n> Subject: [PATCH v3 2/5] Implement FIRST_IS_ABORTED_CONTRECORD\n>\n> ---\n> src/backend/access/transam/xlog.c | 53 +++++++++++++++++++++++--\n> src/backend/access/transam/xlogreader.c | 39 +++++++++++++++++-\n> src/include/access/xlog_internal.h | 14 ++++++-\n> src/include/access/xlogreader.h | 3 ++\n> 4 files changed, 103 insertions(+), 6 deletions(-)\n>\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index e51a7a749d..411f1618df 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -586,6 +586,8 @@ typedef struct XLogCtlData\n> \tXLogRecPtr\treplicationSlotMinLSN;\t/* oldest LSN needed by any slot */\n>\n> \tXLogSegNo\tlastRemovedSegNo;\t/* latest removed/recycled XLOG segment */\n> +\tXLogRecPtr\tabortedContrecordPtr; /* LSN of incomplete record at end of\n> +\t\t\t\t\t\t\t\t\t * WAL */\n>\n> \t/* Fake LSN counter, for unlogged relations. Protected by ulsn_lck. */\n> \tXLogRecPtr\tunloggedLSN;\n> @@ -848,6 +850,7 @@ static XLogSource XLogReceiptSource = XLOG_FROM_ANY;\n> /* State information for XLOG reading */\n> static XLogRecPtr ReadRecPtr;\t/* start of last record read */\n> static XLogRecPtr EndRecPtr;\t/* end+1 of last record read */\n> +static XLogRecPtr abortedContrecordPtr;\t/* end+1 of incomplete record */\n>\n> /*\n> * Local copies of equivalent fields in the control file. 
When running\n> @@ -2246,6 +2249,30 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, bool opportunistic)\n> \t\tif (!Insert->forcePageWrites)\n> \t\t\tNewPage->xlp_info |= XLP_BKP_REMOVABLE;\n>\n> +\t\t/*\n> +\t\t * If the last page ended with an aborted partial continuation record,\n> +\t\t * mark the new page to indicate that the partial record can be\n> +\t\t * omitted.\n> +\t\t *\n> +\t\t * This happens only once at the end of recovery, so there's no race\n> +\t\t * condition here.\n> +\t\t */\n> +\t\tif (XLogCtl->abortedContrecordPtr >= NewPageBeginPtr)\n> +\t\t{\n\nCan we move this case out of AdvanceXLInsertBuffer()? As the comment says,\nthis only happens at the end of recovery, so putting it into\nAdvanceXLInsertBuffer() doesn't really seem necessary?\n\n\n> +#ifdef WAL_DEBUG\n> +\t\t\tif (XLogCtl->abortedContrecordPtr != NewPageBeginPtr)\n> +\t\t\t\telog(PANIC, \"inconsistent aborted contrecord location %X/%X, expected %X/%X\",\n> +\t\t\t\t\t LSN_FORMAT_ARGS(XLogCtl->abortedContrecordPtr),\n> +\t\t\t\t\t LSN_FORMAT_ARGS(NewPageBeginPtr));\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg_internal(\"setting XLP_FIRST_IS_ABORTED_PARTIAL flag at %X/%X\",\n> +\t\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(NewPageBeginPtr))));\n> +#endif\n> +\t\t\tNewPage->xlp_info |= XLP_FIRST_IS_ABORTED_PARTIAL;\n> +\n> +\t\t\tXLogCtl->abortedContrecordPtr = InvalidXLogRecPtr;\n> +\t\t}\n\n> \t\t/*\n> \t\t * If first page of an XLOG segment file, make it a long header.\n> \t\t */\n> @@ -4392,6 +4419,7 @@ ReadRecord(XLogReaderState *xlogreader, int emode,\n> \t\trecord = XLogReadRecord(xlogreader, &errormsg);\n> \t\tReadRecPtr = xlogreader->ReadRecPtr;\n> \t\tEndRecPtr = xlogreader->EndRecPtr;\n> +\t\tabortedContrecordPtr = xlogreader->abortedContrecordPtr;\n> \t\tif (record == NULL)\n> \t\t{\n> \t\t\tif (readFile >= 0)\n> @@ -7691,10 +7719,29 @@ StartupXLOG(void)\n> \t/*\n> \t * Re-fetch the last valid or last applied record, so we can identify the\n> \t * exact endpoint of what we consider the 
valid portion of WAL.\n> +\t *\n> +\t * When recovery ended in an incomplete record, continue writing from the\n> +\t * point where it went missing. This leaves behind an initial part of\n> +\t * broken record, which rescues downstream which have already received\n> +\t * that first part.\n> \t */\n> -\tXLogBeginRead(xlogreader, LastRec);\n> -\trecord = ReadRecord(xlogreader, PANIC, false);\n> -\tEndOfLog = EndRecPtr;\n> +\tif (XLogRecPtrIsInvalid(abortedContrecordPtr))\n> +\t{\n> +\t\tXLogBeginRead(xlogreader, LastRec);\n> +\t\trecord = ReadRecord(xlogreader, PANIC, false);\n> +\t\tEndOfLog = EndRecPtr;\n> +\t}\n> +\telse\n> +\t{\n> +#ifdef WAL_DEBUG\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg_internal(\"recovery overwriting broken contrecord at %X/%X (EndRecPtr: %X/%X)\",\n> +\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(abortedContrecordPtr),\n> +\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(EndRecPtr))));\n> +#endif\n\n\"broken\" sounds a bit off. But then, it's just WAL_DEBUG. Which made me\nrealize, isn't this missing a\nif (XLOG_DEBUG)?\n\n\n\n> @@ -442,14 +448,28 @@ XLogReadRecord(XLogReaderState *state, char **errormsg)\n> \t\t\treadOff = ReadPageInternal(state, targetPagePtr,\n> \t\t\t\t\t\t\t\t\t Min(total_len - gotlen + SizeOfXLogShortPHD,\n> \t\t\t\t\t\t\t\t\t\t XLOG_BLCKSZ));\n> -\n> \t\t\tif (readOff < 0)\n> \t\t\t\tgoto err;\n>\n> \t\t\tAssert(SizeOfXLogShortPHD <= readOff);\n>\n> -\t\t\t/* Check that the continuation on next page looks valid */\n> \t\t\tpageHeader = (XLogPageHeader) state->readBuf;\n> +\n> +\t\t\t/*\n> +\t\t\t * If we were expecting a continuation record and got an \"aborted\n> +\t\t\t * partial\" flag, that means the continuation record was lost.\n> +\t\t\t * Ignore the record we were reading, since we now know it's broken\n> +\t\t\t * and lost forever, and restart the read by assuming the address\n> +\t\t\t * to read is the location where we found this flag.\n> +\t\t\t */\n> +\t\t\tif (pageHeader->xlp_info & XLP_FIRST_IS_ABORTED_PARTIAL)\n> +\t\t\t{\n> 
+\t\t\t\tResetDecoder(state);\n> +\t\t\t\tRecPtr = targetPagePtr;\n> +\t\t\t\tgoto restart;\n> +\t\t\t}\n\nI think we need to add more validation to this path. What I was proposing\nearlier is that we add a new special type of record at the start of an\nXLP_FIRST_IS_ABORTED_PARTIAL page, which contains a) lsn of the record we're\naborting, b) checksum of the data up to this point.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Sep 2021 17:14:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Andres Freund wrote:\n\n> Hi,\n> \n> On 2021-09-03 20:01:50 -0400, Alvaro Herrera wrote:\n> > From 6abc5026f92b99d704bff527d1306eb8588635e9 Mon Sep 17 00:00:00 2001\n> > From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > Date: Tue, 31 Aug 2021 20:55:10 -0400\n> > Subject: [PATCH v3 1/5] Revert \"Avoid creating archive status \".ready\" files\n> > too early\"\n> \n> > This reverts commit 515e3d84a0b58b58eb30194209d2bc47ed349f5b.\n> \n> I'd prefer to see this pushed soon. I've a bunch of patches to xlog.c that\n> conflict with the prior changes, and rebasing back and forth isn't that much\n> fun...\n\nDone.\n\n> > +\t\t\t/*\n> > +\t\t\t * If we were expecting a continuation record and got an \"aborted\n> > +\t\t\t * partial\" flag, that means the continuation record was lost.\n> > +\t\t\t * Ignore the record we were reading, since we now know it's broken\n> > +\t\t\t * and lost forever, and restart the read by assuming the address\n> > +\t\t\t * to read is the location where we found this flag.\n> > +\t\t\t */\n> > +\t\t\tif (pageHeader->xlp_info & XLP_FIRST_IS_ABORTED_PARTIAL)\n> > +\t\t\t{\n> > +\t\t\t\tResetDecoder(state);\n> > +\t\t\t\tRecPtr = targetPagePtr;\n> > +\t\t\t\tgoto restart;\n> > +\t\t\t}\n> \n> I think we need to add more validation to this path. 
What I was proposing\n> earlier is that we add a new special type of record at the start of an\n> XLP_FIRST_IS_ABORTED_PARTIAL page, which contains a) lsn of the record we're\n> aborting, b) checksum of the data up to this point.\n\nHmm, a new record type? Yeah, we can do that, and sounds like it would\nmake things simpler too -- I wasn't too happy about adding clutter to\nAdvanceXLInsertBuffer either, but the alternative I had in mind was that\nwe'd pass the flag to the checkpointer, which seemed quite annoying API\nwise. But if we add a new record, seems we can write it directly in\nStartupXLOG and avoid most nastiness. I'm not too sure about the\nchecksum up to this point, though, but I'll spend some time with the\nidea to see how bad it is.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n", "msg_date": "Sat, 4 Sep 2021 12:22:05 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Andres Freund wrote:\n\n> > +#ifdef WAL_DEBUG\n> > +\t\tereport(LOG,\n> > +\t\t\t\t(errmsg_internal(\"recovery overwriting broken contrecord at %X/%X (EndRecPtr: %X/%X)\",\n> > +\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(abortedContrecordPtr),\n> > +\t\t\t\t\t\t\t\t LSN_FORMAT_ARGS(EndRecPtr))));\n> > +#endif\n> \n> \"broken\" sounds a bit off. But then, it's just WAL_DEBUG. Which made me\n> realize, isn't this missing a\n> if (XLOG_DEBUG)?\n\nAttached are the same patches as last night, except I added a test for\nXLOG_DEBUG where pertinent. (The elog(PANIC) is not made conditional on\nthat, since it's a cross-check rather than informative.) 
Also fixed the\nsilly pg_rewind mistake I made.\n\nI'll work on the new xlog record early next week.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)", "msg_date": "Sat, 4 Sep 2021 13:26:24 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 9/4/21, 10:26 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> Attached are the same patches as last night, except I added a test for\r\n> XLOG_DEBUG where pertinent. (The elog(PANIC) is not made conditional on\r\n> that, since it's a cross-check rather than informative.) Also fixed the\r\n> silly pg_rewind mistake I made.\r\n>\r\n> I'll work on the new xlog record early next week.\r\n\r\nAre these patches in a good state for some preliminary testing? I'd\r\nlike to try them out, but I'll hold off if they're not quite ready\r\nyet.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 18:41:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Tue, 7 Sep 2021 18:41:57 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/4/21, 10:26 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n> > Attached are the same patches as last night, except I added a test for\n> > XLOG_DEBUG where pertinent. (The elog(PANIC) is not made conditional on\n> > that, since it's a cross-check rather than informative.) Also fixed the\n> > silly pg_rewind mistake I made.\n> >\n> > I'll work on the new xlog record early next week.\n> \n> Are these patches in a good state for some preliminary testing? I'd\n> like to try them out, but I'll hold off if they're not quite ready\n> yet.\n\nThanks! 
As I understand it, the new record adds the ability to\ncross-check between a torn-off contrecord and the new record inserted\nafter the torn-off record. I didn't test the version myself, but\nthe previous version implemented the essential machinery and that\nwon't change fundamentally with the new record.\n\nSo I think the current patch deserves a test to see whether the algorithm\nactually works against the problem.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 16:03:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-08, Kyotaro Horiguchi wrote:\n\n> Thanks! As I understand it, the new record adds the ability to\n> cross-check between a torn-off contrecord and the new record inserted\n> after the torn-off record. I didn't test the version myself, but\n> the previous version implemented the essential machinery and that\n> won't change fundamentally with the new record.\n> \n> So I think the current patch deserves a test to see whether the algorithm\n> actually works against the problem.\n\nHere's a version with the new record type. It passes check-world, and\nit seems to work correctly to prevent overwrite of the tail end of a\nsegment containing a broken record. This is very much WIP still;\ncomments are missing and I haven't tried to implement any sort of\nverification that the record being aborted is the right one.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? 
You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n", "msg_date": "Tue, 14 Sep 2021 14:55:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-14, Alvaro Herrera wrote:\n\n> On 2021-Sep-08, Kyotaro Horiguchi wrote:\n> \n> > Thanks! As I understand it, the new record adds the ability to\n> > cross-check between a torn-off contrecord and the new record inserted\n> > after the torn-off record. I didn't test the version myself, but\n> > the previous version implemented the essential machinery and that\n> > won't change fundamentally with the new record.\n> > \n> > So I think the current patch deserves a test to see whether the algorithm\n> > actually works against the problem.\n> \n> Here's a version with the new record type. It passes check-world, and\n> it seems to work correctly to prevent overwrite of the tail end of a\n> segment containing a broken record. This is very much WIP still;\n> comments are missing and I haven't tried to implement any sort of\n> verification that the record being aborted is the right one.\n\nHere's the attachment I forgot earlier.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)", "msg_date": "Tue, 14 Sep 2021 22:32:04 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Tue, 14 Sep 2021 22:32:04 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Sep-14, Alvaro Herrera wrote:\n> \n> > On 2021-Sep-08, Kyotaro Horiguchi wrote:\n> > \n> > > Thanks! 
As I understand it, the new record adds the ability to\n> > > cross-check between a torn-off contrecord and the new record inserted\n> > > after the torn-off record. I didn't test the version myself, but\n> > > the previous version implemented the essential machinery and that\n> > > won't change fundamentally with the new record.\n> > > \n> > > So I think the current patch deserves a test to see whether the algorithm\n> > > actually works against the problem.\n> > \n> > Here's a version with the new record type. It passes check-world, and\n> > it seems to work correctly to prevent overwrite of the tail end of a\n> > segment containing a broken record. This is very much WIP still;\n> > comments are missing and I haven't tried to implement any sort of\n> > verification that the record being aborted is the right one.\n> \n> Here's the attachment I forgot earlier.\n\n(I missed the chance to complain about that:p)\n\nThanks for the patch!\n\n-\t\tCopyXLogRecordToWAL(rechdr->xl_tot_len, isLogSwitch, rdata,\n-\t\t\t\t\t\t\tStartPos, EndPos);\n+\t\tCopyXLogRecordToWAL(rechdr->xl_tot_len, isLogSwitch,\n+\t\t\t\t\t\t\tflags & XLOG_SET_ABORTED_PARTIAL,\n+\t\t\t\t\t\t\trdata, StartPos, EndPos);\n\nThe new xlog flag XLOG_SET_ABORTED_PARTIAL is used only by\nRM_XLOG_ID/XLOG_OVERWRITE_CONTRECORD records, so the flag value is the\nequivalent of the record type. We might instead want a new flag\nXLOG_SPECIAL_TREATED_RECORD or something to quickly distinguish\nrecords that need special treatment, like XLOG_SWITCH.\n\n if (flags & XLOG_SPECIAL_TREATED_RECORD)\n {\n \tif (rechdr->xl_rmid == RM_XLOG_ID)\n\t{\n if (info ==\tXLOG_SWITCH)\n\t isLogSwitch = true;\n if (info == XLOG_OVERWRITE_CONTRECORD)\n\t isOverwrite = true;\n }\n }\n ..\n CopyXLogRecordToWAL(.., isLogSwitch, isOverwrite, rdata, StartPos, EndPos);\n\n\n+\t\t\t/* XXX can only happen once in the loop. Verify? 
*/\n+\t\t\tif (set_aborted_partial)\n+\t\t\t\tpagehdr->xlp_info |= XLP_FIRST_IS_ABORTED_PARTIAL;\n+\n\nI'm not sure about the reason for the change from the previous patch\n(I might be missing something): this sets the flag on the *next* page\nof the page where the record starts. So in the first place we\nshouldn't set the flag there. The page header flags of the first page\nare set by AdvanceXLInsertBuffer. If we want to set the flag in the\nfunction, we need to find the page header for the beginning of the\nrecord and make sure that the record is placed at the beginning of the\npage. (That is the reason I did it in AdvanceXLInsertBuffer.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Sep 2021 12:00:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-15, Kyotaro Horiguchi wrote:\n\n> +\t\tCopyXLogRecordToWAL(rechdr->xl_tot_len, isLogSwitch,\n> +\t\t\t\t\t\t\tflags & XLOG_SET_ABORTED_PARTIAL,\n> +\t\t\t\t\t\t\trdata, StartPos, EndPos);\n> \n> The new xlog flag XLOG_SET_ABORTED_PARTIAL is used only by\n> RM_XLOG_ID/XLOG_OVERWRITE_CONTRECORD records, so the flag value is the\n> equivalent of the record type.\n\nIn the new version I removed all this; it was wrong.\n\n> +\t\t\tif (set_aborted_partial)\n> +\t\t\t\tpagehdr->xlp_info |= XLP_FIRST_IS_ABORTED_PARTIAL;\n> +\n> \n> I'm not sure about the reason for the change from the previous patch\n> (I might be missing something): this sets the flag on the *next* page\n> of the page where the record starts. So in the first place we\n> shouldn't set the flag there.\n\nYou're right, this code is wrong. And in fact I had already noticed it\nyesterday, but much to my embarrassment I forgot to fix it. 
Here is a\nfixed version, where I moved the flag set back to AdvanceXLInsertBuffer.\nI think doing it anywhere else is going to be very painful. AFAICT we\ndo the right thing now, but amusingly we don't have any tooling to\nverify that the XLP flag is set in the page where we want it.\n\nWith this patch we now have two recptrs: the LSN of the broken record,\nand the LSN of the missing contrecord. The latter is where to start\nwriting WAL after recovery is done, and the former is currently unused\nbut we could use it to double-check that we're aborting (forgetting) the\ncorrect record. I didn't try to implement that, but IIUC it is\nxlogreader that would have to do that.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Si quieres ser creativo, aprende el arte de perder el tiempo\"", "msg_date": "Wed, 15 Sep 2021 21:29:54 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "OK, this version is much more palatable, because here we verify that the\nOVERWRITE_CONTRECORD we replay matches the record that was lost. 
Also,\nI wrote a test script that creates such a broken record (by the simple\nexpedient of deleting the WAL file containing the second half while the\nserver is down); we then create a standby and we can observe that it\nreplays the sequence correctly.\n\nIf you have some time to try your reproducers with this new proposed\nfix, I would appreciate it.\n\n\nAdded Matsumura-san to CC, because he was interested in this topic too\nper the earlier thread.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Fri, 17 Sep 2021 13:37:05 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 9/17/21, 9:37 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> OK, this version is much more palatable, because here we verify that the\r\n> OVERWRITE_CONTRECORD we replay matches the record that was lost. Also,\r\n> I wrote a test script that creates such a broken record (by the simple\r\n> expedient of deleting the WAL file containing the second half while the\r\n> server is down); we then create a standby and we can observe that it\r\n> replays the sequence correctly.\r\n>\r\n> If you have some time to try your reproducers with this new proposed\r\n> fix, I would appreciate it.\r\n\r\nI haven't had a chance to look at the patch yet, but it appears to fix\r\nthings with my original reproduction steps for the archive_status\r\nstuff [0].\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\r\n\r\n", "msg_date": "Fri, 17 Sep 2021 17:07:30 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-17, Alvaro Herrera wrote:\n\n> Added Matsumura-san to CC, because he was interested in this topic too\n> per the earlier thread.\n\nI failed to do this, so hopefully this serves as a 
ping.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 17 Sep 2021 14:33:21 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-17, Bossart, Nathan wrote:\n\n> On 9/17/21, 9:37 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\n\n> > If you have some time to try your reproducers with this new proposed\n> > fix, I would appreciate it.\n> \n> I haven't had a chance to look at the patch yet, but it appears to fix\n> things with my original reproduction steps for the archive_status\n> stuff [0].\n\nThank you, this is good to hear.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Fri, 17 Sep 2021 14:34:20 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 9/17/21, 10:35 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Sep-17, Bossart, Nathan wrote:\r\n>\r\n>> On 9/17/21, 9:37 AM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n>\r\n>> > If you have some time to try your reproducers with this new proposed\r\n>> > fix, I would appreciate it.\r\n>>\r\n>> I haven't had a chance to look at the patch yet, but it appears to fix\r\n>> things with my original reproduction steps for the archive_status\r\n>> stuff [0].\r\n>\r\n> Thank you, this is good to hear.\r\n\r\nI gave the patch a read-through. I'm wondering if the\r\nXLOG_OVERWRITE_CONTRECORD records are actually necessary. IIUC we\r\nwill set XLP_FIRST_IS_ABORTED_PARTIAL on the next page, and\r\nxlp_pageaddr on that page will already be validated in\r\nXLogReaderValidatePageHeader(). 
Does adding this new record also help\r\nensure the page header is filled in and flushed to disk?\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 17 Sep 2021 18:50:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-17, Bossart, Nathan wrote:\n\n> I gave the patch a read-through. I'm wondering if the\n> XLOG_OVERWRITE_CONTRECORD records are actually necessary. IIUC we\n> will set XLP_FIRST_IS_ABORTED_PARTIAL on the next page, and\n> xlp_pageaddr on that page will already be validated in\n> XLogReaderValidatePageHeader(). Does adding this new record also help\n> ensure the page header is filled in and flushed to disk?\n\nThat was the first implementation, a few versions of the patch ago. An\nadded benefit of a separate WAL record is that you can carry additional\ndata for validation, such as -- as suggested by Andres -- the CRC of the\npartial data contained in the message that we're skipping. I didn't\nimplement that, but it should be trivial to add it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n", "msg_date": "Fri, 17 Sep 2021 17:31:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 9/17/21, 1:32 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Sep-17, Bossart, Nathan wrote:\r\n>\r\n>> I gave the patch a read-through. I'm wondering if the\r\n>> XLOG_OVERWRITE_CONTRECORD records are actually necessary. IIUC we\r\n>> will set XLP_FIRST_IS_ABORTED_PARTIAL on the next page, and\r\n>> xlp_pageaddr on that page will already be validated in\r\n>> XLogReaderValidatePageHeader(). 
Does adding this new record also help\r\n>> ensure the page header is filled in and flushed to disk?\r\n>\r\n> That was the first implementation, a few versions of the patch ago. An\r\n> added benefit of a separate WAL record is that you can carry additional\r\n> data for validation, such as -- as suggested by Andres -- the CRC of the\r\n> partial data contained in the message that we're skipping. I didn't\r\n> implement that, but it should be trivial to add it.\r\n\r\nI see. IMO feels a bit counterintuitive to validate a partial record\r\nthat you are ignoring anyway, but I suppose it's still valuable to\r\nknow when the WAL is badly broken. It's not expensive, and it doesn't\r\nadd a ton of complexity, either.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 17 Sep 2021 21:15:27 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-17, Bossart, Nathan wrote:\n\n> > That was the first implementation, a few versions of the patch ago. An\n> > added benefit of a separate WAL record is that you can carry additional\n> > data for validation, such as -- as suggested by Andres -- the CRC of the\n> > partial data contained in the message that we're skipping. I didn't\n> > implement that, but it should be trivial to add it.\n> \n> I see. IMO feels a bit counterintuitive to validate a partial record\n> that you are ignoring anyway, but I suppose it's still valuable to\n> know when the WAL is badly broken. It's not expensive, and it doesn't\n> add a ton of complexity, either.\n\nYeah, we don't have any WAL record history validation other than the\nverifying the LSN of the previous record; I suppose in this particular\ncase you could argue that we shouldn't bother with any validation\neither. But it seems safer to do it. 
It doesn't hurt anything anyway.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 17 Sep 2021 18:22:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "I made one final pass over the whole thing to be sure it's all commented\nas thoroughly as possible, and changed the names of things to make them\nall consistent. So here's the last version which I intend to push to\nall branches soon. (The only difference in back-branches is that\nthe xlogreader struct needs to be adjusted to have the new fields at the\nbottom.)\n\nOne thing to note is that this is an un-downgradeable minor; and of\ncourse people should upgrade standbys before primaries.\n\nOn 2021-Sep-17, Alvaro Herrera wrote:\n\n> On 2021-Sep-17, Bossart, Nathan wrote:\n\n> > I see. IMO feels a bit counterintuitive to validate a partial record\n> > that you are ignoring anyway, but I suppose it's still valuable to\n> > know when the WAL is badly broken. It's not expensive, and it doesn't\n> > add a ton of complexity, either.\n> \n> Yeah, we don't have any WAL record history validation other than the\n> verifying the LSN of the previous record; I suppose in this particular\n> case you could argue that we shouldn't bother with any validation\n> either. But it seems safer to do it. It doesn't hurt anything anyway.\n\nBTW we do validate the CRC for all records, but we don't have any means\nto validate the CRC of a partial record; so in theory if we don't add\nCRC validation of the ignored contents, there would be no validation at\nall for that record. 
I don't think we care, but it's true that there\nwould be a big blob in WAL that we don't really know anything about.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)", "msg_date": "Tue, 21 Sep 2021 18:19:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "This took some time because backpatching the tests was more work than I\nanticipated -- function name changes, operators that don't exist,\ndefinition of the WAL segment size in pg_settings. I had to remove the\nsecond test in branches 13 and earlier due to lack of LSN+bytes\noperator. Fortunately, the first test (which is not as clean) still\nworks all the way back.\n\nHowever, I notice now that the pg_rewind tests reproducibly fail in\nbranch 14 for reasons I haven't yet understood. It's strange that no\nother branch fails, even when run quite a few times.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n", "msg_date": "Thu, 23 Sep 2021 13:39:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-23, Alvaro Herrera wrote:\n\n> However, I notice now that the pg_rewind tests reproducibly fail in\n> branch 14 for reasons I haven't yet understood. 
It's strange that no\n> other branch fails, even when run quite a few times.\n\nTurns out that this is a real bug (setting EndOfLog seems insufficient).\nI'm looking into it.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n", "msg_date": "Thu, 23 Sep 2021 23:32:57 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi Alvaro,\n\nI just started reading this thread, but maybe you can confirm or\nrefute my understanding of what was done.\n\nIn the first email you write\n\n> As mentioned in the course of thread [1], we're missing a fix for\nstreaming replication to avoid sending records that the primary hasn't\nfully flushed yet. This patch is a first attempt at fixing that problem\nby retreating the LSN reported as FlushPtr whenever a segment is\nregistered, based on the understanding that if no registration exists\nthen the LogwrtResult.Flush pointer can be taken at face value; but if a\nregistration exists, then we have to stream only till the start LSN of\nthat registered entry.\n\nSo did we end up holding back the wal_sender to not send anything that\nis not confirmed as flushed on master?\n\nAre there measurements on how much this slows down replication\ncompared to allowing sending the moment it is written in buffers but\nnot necessarily flushed locally?\n\nDid we investigate the possibility of sending as fast as possible and\ncontrolling the flush synchronisation by sending separate flush\npointers *both* ways?\n\nAnd maybe there was even an alternative considered where we are\nlooking at a more general Durability, for example 2-out-of-3 where\nprimary is one of the 3 and not necessarily the most durable one?\n\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\n\n\n\nOn Fri, Sep 
24, 2021 at 4:33 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Sep-23, Alvaro Herrera wrote:\n>\n> > However, I notice now that the pg_rewind tests reproducibly fail in\n> > branch 14 for reasons I haven't yet understood. It's strange that no\n> > other branch fails, even when run quite a few times.\n>\n> Turns out that this is a real bug (setting EndOfLog seems insufficient).\n> I'm looking into it.\n>\n> --\n> Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n> \"No necesitamos banderas\n> No reconocemos fronteras\" (Jorge González)\n>\n>\n\n\n", "msg_date": "Fri, 24 Sep 2021 19:38:45 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-23, Alvaro Herrera wrote:\n\n> On 2021-Sep-23, Alvaro Herrera wrote:\n> \n> > However, I notice now that the pg_rewind tests reproducibly fail in\n> > branch 14 for reasons I haven't yet understood. It's strange that no\n> > other branch fails, even when run quite a few times.\n> \n> Turns out that this is a real bug (setting EndOfLog seems insufficient).\n> I'm looking into it.\n\nI had misdiagnosed it; the real problem is that this was taking action\nin standby mode and breaking things after promotion. 
I took quite a\ndetour ...\n\nHere's the set for all branches, which I think are really final, in case\nsomebody wants to play and reproduce their respective problem scenarios.\nNathan already confirmed that his reproducer no longer shows a problem,\nand this version shouldn't affect that.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)", "msg_date": "Fri, 24 Sep 2021 21:48:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-24, Alvaro Herrera wrote:\n\n> Here's the set for all branches, which I think are really final, in case\n> somebody wants to play and reproduce their respective problem scenarios.\n\nI forgot to mention that I'll wait until 14.0 is tagged before getting\nanything pushed.\n\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n", "msg_date": "Sat, 25 Sep 2021 08:51:36 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-24, Alvaro Herrera wrote:\n\n> Here's the set for all branches, which I think are really final, in case\n> somebody wants to play and reproduce their respective problem scenarios.\n> Nathan already confirmed that his reproducer no longer shows a problem,\n> and this version shouldn't affect that.\n\nPushed. 
Watching for buildfarm fireworks now.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 29 Sep 2021 11:43:49 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-24, Hannu Krosing wrote:\n\nHi Hannu\n\n> In the first email you write\n> \n> > As mentioned in the course of thread [1], we're missing a fix for\n> streaming replication to avoid sending records that the primary hasn't\n> fully flushed yet. This patch is a first attempt at fixing that problem\n> by retreating the LSN reported as FlushPtr whenever a segment is\n> registered, based on the understanding that if no registration exists\n> then the LogwrtResult.Flush pointer can be taken at face value; but if a\n> registration exists, then we have to stream only till the start LSN of\n> that registered entry.\n> \n> So did we end up holding back the wal_sender to not send anything that\n> is not confirmed as flushed on master\n\nNo. We eventually realized that that approach was a dead end, so I\nabandoned the whole thing and attacked the problem differently. So your\nother questions don't apply. I tried to make the commit message explain\nboth the problem and the solution in as much detail as possible; please\nhave a look at that and let me know if something is unclear.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n", "msg_date": "Wed, 29 Sep 2021 11:48:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Pushed. 
Watching for buildfarm fireworks now.\n\njacana didn't like it:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-09-29%2018%3A25%3A53\n\nThe relevant info seems to be\n\n# Running: pg_basebackup -D /home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/backup/backup -h 127.0.0.1 -p 59502 --checkpoint fast --no-sync\npg_basebackup: error: connection to server at \"127.0.0.1\", port 59502 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\nBail out! system pg_basebackup failed\n\nwhich looks like a pretty straightforward bogus-connection-configuration\nproblem, except why wouldn't other BF members show it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Sep 2021 16:33:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "\nOn 9/29/21 4:33 PM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Pushed. Watching for buildfarm fireworks now.\n> jacana didn't like it:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-09-29%2018%3A25%3A53\n>\n> The relevant info seems to be\n>\n> # Running: pg_basebackup -D /home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/backup/backup -h 127.0.0.1 -p 59502 --checkpoint fast --no-sync\n> pg_basebackup: error: connection to server at \"127.0.0.1\", port 59502 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n> Bail out! 
system pg_basebackup failed\n>\n> which looks like a pretty straightforward bogus-connection-configuration\n> problem, except why wouldn't other BF members show it?\n>\n> \t\t\t\n\n\nThis:\n\n # Second test: a standby that receives WAL via archive/restore commands.\n $node = PostgresNode->new('primary2');\n $node->init(\n     has_archiving => 1,\n     extra         => ['--wal-segsize=1']);\n\n\ndoesn't have \"allows_streaming => 1\".\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Sep 2021 17:04:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-29, Andrew Dunstan wrote:\n\n> > The relevant info seems to be\n> >\n> > # Running: pg_basebackup -D /home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/backup/backup -h 127.0.0.1 -p 59502 --checkpoint fast --no-sync\n> > pg_basebackup: error: connection to server at \"127.0.0.1\", port 59502 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n> > Bail out! system pg_basebackup failed\n> >\n> > which looks like a pretty straightforward bogus-connection-configuration\n> > problem, except why wouldn't other BF members show it?\n> \n> This:\n> \n> # Second test: a standby that receives WAL via archive/restore commands.\n> $node = PostgresNode->new('primary2');\n> $node->init(\n>     has_archiving => 1,\n>     extra         => ['--wal-segsize=1']);\n> \n> doesn't have \"allows_streaming => 1\".\n\nHmm, but I omitted allows_streaming on purpose -- I only wanted\narchiving, not streaming. 
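(For reference, what the failing run was missing is a pg_hba.conf entry permitting a replication connection. Judging from jacana's error message, the entry that set_replication_conf would have provided looks roughly like the following — the user name comes from that error message and is environment-specific, and the METHOD is shown as trust only for illustration; the method set_replication_conf actually uses on Windows may differ, e.g. sspi:)

```
# TYPE  DATABASE     USER      ADDRESS       METHOD
host    replication  pgrunner  127.0.0.1/32  trust
```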
I understand that your point is that\nset_replication_conf is not called unless allows_streaming is set.\n\nSo, do we take the stance that we have no right to expect pg_basebackup\nto work if we didn't pass allow_streaming => 1? If so, the fix is to\nadd it. But my preferred fix would be to call set_replication_conf if\neither allows_streaming or has_archiving are given.\n\n\nAnother easy fix would be to call $primary2->set_replication_conf in the\ntest file, but then you'd complain that that's supposed to be an\ninternal method :-)\n\n(This reminds me that I had to add something that seemed like it should\nhave been unnecessary: wal_level=replica should become set if I request\narchiving, right? Otherwise the WAL archive is useless. I also had to\nadd max_wal_senders=2 so that pg_basebackup would work, but I'm on the\nfence about setting that automatically if has_archiving is given.)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n", "msg_date": "Wed, 29 Sep 2021 18:27:54 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/29/21 4:33 PM, Tom Lane wrote:\n>> which looks like a pretty straightforward bogus-connection-configuration\n>> problem, except why wouldn't other BF members show it?\n\n> This:\n> ...\n> doesn't have \"allows_streaming => 1\".\n\nOh, and that only breaks things on Windows, cf set_replication_conf.\n\n... 
although I wonder why the fact that sub init otherwise sets\nwal_level = minimal doesn't cause a problem for this test.\nMaybe the test script is cowboy-ishly overriding that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Sep 2021 17:29:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-29, Tom Lane wrote:\n\n> ... although I wonder why the fact that sub init otherwise sets\n> wal_level = minimal doesn't cause a problem for this test.\n> Maybe the test script is cowboy-ishly overriding that?\n\nIt is. (I claim that it should be set otherwise when has_archiving=>1).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n", "msg_date": "Wed, 29 Sep 2021 18:38:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "\nOn 9/29/21 5:27 PM, Alvaro Herrera wrote:\n> On 2021-Sep-29, Andrew Dunstan wrote:\n>\n>>> The relevant info seems to be\n>>>\n>>> # Running: pg_basebackup -D /home/pgrunner/bf/root/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/backup/backup -h 127.0.0.1 -p 59502 --checkpoint fast --no-sync\n>>> pg_basebackup: error: connection to server at \"127.0.0.1\", port 59502 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n>>> Bail out! 
system pg_basebackup failed\n>>>\n>>> which looks like a pretty straightforward bogus-connection-configuration\n>>> problem, except why wouldn't other BF members show it?\n>> This:\n>>\n>> # Second test: a standby that receives WAL via archive/restore commands.\n>> $node = PostgresNode->new('primary2');\n>> $node->init(\n>>     has_archiving => 1,\n>>     extra         => ['--wal-segsize=1']);\n>>\n>> doesn't have \"allows_streaming => 1\".\n> Hmm, but I omitted allows_streaming on purpose -- I only wanted\n> archiving, not streaming. I understand that your point is that\n> set_replication_conf is not called unless allows_streaming is set.\n>\n> So, do we take the stance that we have no right to expect pg_basebackup\n> to work if we didn't pass allow_streaming => 1? If so, the fix is to\n> add it. But my preferred fix would be to call set_replication_conf if\n> either allows_streaming or has_archiving are given.\n>\n>\n> Another easy fix would be to call $primary2->set_replication_conf in the\n> test file, but then you'd complain that that's supposed to be an\n> internal method :-)\n\n\nIt claims that but it's also used here:\n\n\nsrc/bin/pg_basebackup/t/010_pg_basebackup.pl\n\n\n(Also, good perl style would start purely internal method names with an\nunderscore.)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Sep 2021 17:40:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/29/21 5:27 PM, Alvaro Herrera wrote:\n>> So, do we take the stance that we have no right to expect pg_basebackup\n>> to work if we didn't pass allow_streaming => 1? If so, the fix is to\n>> add it. 
But my preferred fix would be to call set_replication_conf if\n>> either allows_streaming or has_archiving are given.\n>> \n>> Another easy fix would be to call $primary2->set_replication_conf in the\n>> test file, but then you'd complain that that's supposed to be an\n>> internal method :-)\n\n> It claims that but it's also used here:\n> src/bin/pg_basebackup/t/010_pg_basebackup.pl\n\nGiven that precedent, it seems like calling set_replication_conf\nis a good quick-fix for getting the buildfarm green again.\nBut +1 for then refactoring this to get rid of these hacks (both\nwith respect to pg_hba.conf and the postgresql.conf parameters)\nin both of these tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Sep 2021 18:48:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-09-29 17:04:48 -0400, Andrew Dunstan wrote:\n> On 9/29/21 4:33 PM, Tom Lane wrote:\n> # Second test: a standby that receives WAL via archive/restore commands.\n> $node = PostgresNode->new('primary2');\n> $node->init(\n>     has_archiving => 1,\n>     extra         => ['--wal-segsize=1']);\n> \n> \n> doesn't have \"allows_streaming => 1\".\n\nFWIW, with that fixed I see the test hanging (e.g. [1]):\n\ncan't unlink c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/archives/000000010000000000000008.fail: Permission denied at t/026_overwrite_contrecord.pl line 189.\n### Stopping node \"primary2\" using mode immediate\n# Running: pg_ctl -D c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/pgdata -m immediate stop\nwaiting for server to shut down........................................................................................................................... failed\npg_ctl: server does not shut down\nBail out! 
command \"pg_ctl -D c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/pgdata -m immediate stop\" exited with value 1\nWarning: unable to close filehandle GEN6 properly: Bad file descriptor during global destruction.\n\n\nThe hang seems to be fixed by uncommenting the $h->finish(). Was there a\nreason you commented that out? The test still fails, but at least it doesn't\nhang anymore.\n\nGreetings,\n\nAndres Freund\n\n[1] https://api.cirrus-ci.com/v1/artifact/task/6204050896060416/tap/src/test/recovery/tmp_check/log/regress_log_026_overwrite_contrecord\n\n\n", "msg_date": "Thu, 30 Sep 2021 00:51:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hello\n\nOn 2021-Sep-30, Andres Freund wrote:\n\n> FWIW, with that fixed I see the test hanging (e.g. [1]):\n> \n> can't unlink c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/archives/000000010000000000000008.fail: Permission denied at t/026_overwrite_contrecord.pl line 189.\n> ### Stopping node \"primary2\" using mode immediate\n> # Running: pg_ctl -D c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/pgdata -m immediate stop\n> waiting for server to shut down........................................................................................................................... failed\n> pg_ctl: server does not shut down\n> Bail out! command \"pg_ctl -D c:/cirrus/src/test/recovery/tmp_check/t_026_overwrite_contrecord_primary2_data/pgdata -m immediate stop\" exited with value 1\n> Warning: unable to close filehandle GEN6 properly: Bad file descriptor during global destruction.\n> \n> The hang seems to be fixed by uncommenting the $h->finish(). Was there a\n> reason you commented that out?\n\nHmm, no -- I was experimenting to see what effect it had, and because I\nsaw none, I left it like that. 
Let me fix both things and see what\nhappens next.\n\n> The test still fails, but at least it doesn't hang anymore.\n\nI'll try and get it fixed, but ultimately we can always \"fix\" this test\nby removing it. It is only in 14 and master; the earlier branches only\nhave the first part of this test file. (That's because I ran out of\npatience trying to port it to versions lacking LSN operators and such.)\nI think the second half doesn't provide all that much additional\ncoverage anyway.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n", "msg_date": "Thu, 30 Sep 2021 09:02:30 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "\nOn 9/29/21 5:29 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 9/29/21 4:33 PM, Tom Lane wrote:\n>>> which looks like a pretty straightforward bogus-connection-configuration\n>>> problem, except why wouldn't other BF members show it?\n>> This:\n>> ...\n>> doesn't have \"allows_streaming => 1\".\n> Oh, and that only breaks things on Windows, cf set_replication_conf.\n>\n> ... although I wonder why the fact that sub init otherwise sets\n> wal_level = minimal doesn't cause a problem for this test.\n> Maybe the test script is cowboy-ishly overriding that?\n>\n> \t\t\t\n\n\n\nRegardless of this problem, I think we should simply call\nset_replication_conf unconditionally in init().  
Replication connections\nare now allowed by default on Unix, this would just bring Windows nodes\ninto line with that.\n\n\nThe function does have this:\n\n $self->host eq $test_pghost\n    or die \"set_replication_conf only works with the default host\";\n\nI'm not sure when that wouldn't be true.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 30 Sep 2021 10:18:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Regardless of this problem, I think we should simply call\n> set_replication_conf unconditionally in init().  Replication connections\n> are now allowed by default on Unix, this would just bring Windows nodes\n> into line with that.\n\nYeah, I was thinking along the same lines yesterday. The fact that\npre-commit testing failed to note the problem is exactly because we\nhave this random difference between what works by default on Unix\nand what works by default on Windows. Let's close that gap before\nit bites us again.\n\nThere's still the issue of these tests overriding postgresql.conf\nentries made by init(), but maybe we can live with that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Sep 2021 10:28:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-30, Tom Lane wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Regardless of this problem, I think we should simply call\n> > set_replication_conf unconditionally in init().  Replication connections\n> > are now allowed by default on Unix, this would just bring Windows nodes\n> > into line with that.\n> \n> Yeah, I was thinking along the same lines yesterday. 
The fact that\n> pre-commit testing failed to note the problem is exactly because we\n> have this random difference between what works by default on Unix\n> and what works by default on Windows. Let's close that gap before\n> it bites us again.\n\n+1\n\n> There's still the issue of these tests overriding postgresql.conf\n> entries made by init(), but maybe we can live with that?\n\nI vote to at least have has_archiving=>1 set wal_level=replica, and\npotentially max_wal_senders=2 too (allows pg_basebackup).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 30 Sep 2021 11:32:57 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-30, Tom Lane wrote:\n>> There's still the issue of these tests overriding postgresql.conf\n>> entries made by init(), but maybe we can live with that?\n\n> I vote to at least have has_archiving=>1 set wal_level=replica, and\n> potentially max_wal_senders=2 too (allows pg_basebackup).\n\nI think this requires a bit more investigation. I looked quickly\nthrough the pre-existing tests that set has_archiving=>1, and every\nsingle one of them also sets allows_streaming=>1, which no doubt\nexplains why this issue hasn't come up before. So now I'm wondering\nif any of those other tests is setting allows_streaming only because\nof this configuration issue.\n\nMore to the point, since we've not previously used the combination\nof has_archiving without allows_streaming, I wonder exactly how\nwe want to define it to work. 
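(For concreteness, the suggestions upthread amount to a has_archiving-only node carrying settings along these lines before pg_basebackup can connect — a sketch of the intended outcome, not what init() currently emits:)

```
wal_level = replica    # instead of the minimal that init() sets by default
max_wal_senders = 2    # enough for pg_basebackup to connect
archive_mode = on      # what has_archiving already configures
```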
I'm not really convinced that it\nshould be defined as \"allows basebackup even though replication\nis supposed to be off\".\n\nPerhaps a compromise could be to invent a third option\n\"allows_basebackup\", so that init() actually knows what's going on?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Sep 2021 12:12:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Just when you thought it was safe to go back in the water:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-09-29%2022%3A05%3A44\n\nwhich is complaining that the (misspelled, BTW) log message\n'sucessfully skipped missing contrecord at' doesn't show up.\n\nThis machine is old, slow, and 32-bit bigendian. I first thought\nthe problem might be \"didn't wait long enough\", but it seems like\nwaiting for replay ought to be sufficient. What I'm now guessing\nis that the test case is making unwarranted assumptions about how\nmuch WAL will be generated, such that no page-crossing contrecord\nactually appears.\n\nAlso, digging around, I see hornet showed the same problem:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2021-09-29%2018%3A19%3A55\n\nhornet is 64-bit bigendian ... so maybe this actually reduces to\nan endianness question?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Sep 2021 14:53:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-30, Tom Lane wrote:\n\n> Just when you thought it was safe to go back in the water:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2021-09-29%2022%3A05%3A44\n> \n> which is complaining that the (misspelled, BTW)\n\nAh, the case of the missing juxtaposed consonants. 
Les Luthiers have\nsomething to say on that matter.\nhttps://www.youtube.com/watch?v=ptorPqV7D5s\n\n> log message 'sucessfully skipped missing contrecord at' doesn't show\n> up.\n\nHmm. Well, as I said, maybe this part of the test isn't worth much\nanyway. Rather than spending time trying to figure out why this isn't\ntriggering the WAL overwriting, I compared the coverage report for\nrunning only the first test to the coverage report of running only the\nsecond test. It turns out that there's no relevant coverage increase in\nthe second test. So I propose just removing that part.\n\n(The reason I added that test in the first place was to try to reproduce\nthe problem without having to physically unlink a WAL file from the\nprimary's pg_wal subdir. But maybe it's just make-work.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Thu, 30 Sep 2021 17:04:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hmm. Well, as I said, maybe this part of the test isn't worth much\n> anyway. Rather than spending time trying to figure out why this isn't\n> triggering the WAL overwriting, I compared the coverage report for\n> running only the first test to the coverage report of running only the\n> second test. It turns out that there's no relevant coverage increase in\n> the second test. So I propose just removing that part.\n\nSeems reasonable. 
We don't need to spend buildfarm cycles forevermore\non a test that's not adding useful coverage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Sep 2021 16:26:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 29, 2021 at 8:14 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Sep-24, Alvaro Herrera wrote:\n>\n> > Here's the set for all branches, which I think are really final, in case\n> > somebody wants to play and reproduce their respective problem scenarios.\n> > Nathan already confirmed that his reproducer no longer shows a problem,\n> > and this version shouldn't affect that.\n>\n> Pushed. Watching for buildfarm fireworks now.\n>\n\nWhile reading this commit (ff9f111bce24), wondered can't we skip\nmissingContrecPtr global variable declaration and calculate that from\nabortedRecPtr value whenever it needed. IIUC, missingContrecPtr is the\nnext page to the page that abortedRecPtr contain and that can be\ncalculated as \"abortedRecPtr + (XLOG_BLCKSZ - (abortedRecPtr %\nXLOG_BLCKSZ))\", thoughts? Please correct me if I'm missing something,\nthanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 7 Oct 2021 16:41:37 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-07, Amul Sul wrote:\n\n> While reading this commit (ff9f111bce24), wondered can't we skip\n> missingContrecPtr global variable declaration and calculate that from\n> abortedRecPtr value whenever it needed. IIUC, missingContrecPtr is the\n> next page to the page that abortedRecPtr contain and that can be\n> calculated as \"abortedRecPtr + (XLOG_BLCKSZ - (abortedRecPtr %\n> XLOG_BLCKSZ))\", thoughts? Please correct me if I'm missing something,\n> thanks.\n\nI don't think that works -- what if the missing record is not on the\nnext page but on some future one? 
Imagine an enormously large record\nthat starts on segment 1, covers all of segment 2 and ends in segment 3.\nWe could have flushed segment 2 already, so with your idea we would skip\nahead only to that position, but really we need to skip all the way to\nthe first page of segment 3.\n\nThis is easier to imagine if you set wal segment size to 1 MB, but it is\npossible with the default segment size too, since commit records can be\narbitrarily large, and \"logical message\" records as well.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 7 Oct 2021 10:11:20 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Thu, 7 Oct 2021 at 6:41 PM, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Oct-07, Amul Sul wrote:\n>\n> > While reading this commit (ff9f111bce24), wondered can't we skip\n> > missingContrecPtr global variable declaration and calculate that from\n> > abortedRecPtr value whenever it needed. IIUC, missingContrecPtr is the\n> > next page to the page that abortedRecPtr contain and that can be\n> > calculated as \"abortedRecPtr + (XLOG_BLCKSZ - (abortedRecPtr %\n> > XLOG_BLCKSZ))\", thoughts? Please correct me if I'm missing something,\n> > thanks.\n>\n> I don't think that works -- what if the missing record is not on the\n> next page but on some future one? 
Imagine an enormously large record\n> that starts on segment 1, covers all of segment 2 and ends in segment 3.\n> We could have flushed segment 2 already, so with your idea we would skip\n> ahead only to that position, but really we need to skip all the way to\n> the first page of segment 3.\n>\n> This is easier to imagine if you set wal segment size to 1 MB, but it is\n> possible with the default segment size too, since commit records can be\n> arbitrarily large, and \"logical message\" records as well.\n\n\nMake sense, thanks for the explanation.\n\nRegards,\nAmul\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Oct 2021 18:52:39 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-07, Amul Sul wrote:\n\n> Make sense, thanks for the explanation.\n\nYou're welcome. Also, I forgot: thank you for taking the time to review\nthe code. Much appreciated.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 7 Oct 2021 10:27:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 7, 2021 at 6:57 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-07, Amul Sul wrote:\n>\n> > Make sense, thanks for the explanation.\n>\n> You're welcome. Also, I forgot: thank you for taking the time to review\n> the code. Much appreciated.\n\n:)\n\n>\n>\n\nI have one more question, regarding the need for other global\nvariables i.e. abortedRecPtr. (Sorry for coming back after so long.)\n\nInstead of abortedRecPtr point, isn't enough to write\noverwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? 
I think both\nare pointing to the same location then can't we use\nlastReplayedEndRecPtr instead of abortedRecPtr to write\noverwrite-contrecord and remove need of extra global variable, like\nattached?\n\nYou might wonder why I am so concerned about the global variable. The\nreason is that I am working on another thread[1] where we are trying\nto club all the WAL write operations that happen at the end of\nStartupXLOG into a separate function. In the future, we might want to\nallow executing this function from other processes (e.g.\nCheckpointer). For that, we need to remove the dependency of those WAL\nwrite operations having on the global variables which are mostly valid\nin the startup process. The easiest way to do that is simply copy all\nthe global variables into shared memory but that will not be an\noptimised solution, the goal is to try to see if we could leverage the existing\ninformation available in shared memory. I would be grateful if you\ncould share your thoughts on the same, thanks.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/CAAJ_b97KZzdJsffwRK7w0XU5HnXkcgKgTR69t8cOZztsyXjkQw@mail.gmail.com", "msg_date": "Wed, 13 Oct 2021 11:30:18 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-25, Alvaro Herrera wrote:\n>> On 2021-Sep-24, Alvaro Herrera wrote:\n>> \n>> > Here's the set for all branches, which I think are really final, in\n>> > case somebody wants to play and reproduce their respective problem\n>> scenarios.\n>> \n>> I forgot to mention that I'll wait until 14.0 is tagged before getting anything\n>> pushed.\n\nHi Alvaro, sorry for being late to the party, but to add some reassurance that the committed v2 fix really solves the initial production problem, I've done a limited test on it (just like with the v1-patch idea earlier, using wal_keep_segments, wal_init_zero=on, archive_mode=on and archive_command='/bin/true')\n\n- On 
12.8, I was able like last time to manually reproduce it on 3 out of 3 tries and I've got: 2x \"invalid contrecord length\", 1x \"there is no contrecord flag\" on standby.\n\n- On the soon-to-become-12.9 REL_12_STABLE (with commit 1df0a914d58f2bdb03c11dfcd2cb9cd01c286d59 ) on 4 out of 4 tries, I've got beautiful insight into what happened:\nLOG: started streaming WAL from primary at 1/EC000000 on timeline 1\nLOG: sucessfully skipped missing contrecord at 1/EBFFFFF8, overwritten at 2021-10-13 11:22:37.48305+00\nCONTEXT: WAL redo at 1/EC000028 for XLOG/OVERWRITE_CONTRECORD: lsn 1/EBFFFFF8; time 2021-10-13 11:22:37.48305+00\n...and slave was able to carry on automatically. In the 4th test, the cascade was tested too (m -> s1 -> s11) and both {s1,s11} did behave properly and log the above message. Also additional check proved that after simulating ENOSPC crash on master the data contents were identical everywhere (m1=s1=s11). \n\nThank you Alvaro and also to everybody else who participated in solving this challenging and really edge-case nasty bug.\n\n-J.\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:53:37 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "RE: prevent immature WAL streaming" }, { "msg_contents": "On Wed, Oct 13, 2021 at 2:01 AM Amul Sul <sulamul@gmail.com> wrote:\n> Instead of abortedRecPtr point, isn't enough to write\n> overwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? I think both\n> are pointing to the same location then can't we use\n> lastReplayedEndRecPtr instead of abortedRecPtr to write\n> overwrite-contrecord and remove need of extra global variable, like\n> attached?\n\nI think you mean missingContrecPtr, not abortedRecPtr. 
If I understand\ncorrectly, abortedRecPtr is going to be the location in some WAL\nsegment which we replayed where a long record began, but\nmissingContrecPtr seems like it would have to point to the beginning\nof the first segment we were unable to find to continue replay; and\nthus it ought to be the same as lastReplayedEndRecPtr. But the\ncommitted code doesn't seem to check that these are the same or verify\nthe relationship between them in any way, so I'm worried there is some\nother case here. The comments in XLogReadRecord also suggest this:\n\n * We get here when a record that spans multiple pages needs to be\n * assembled, but something went wrong -- perhaps a contrecord piece\n * was lost. If caller is WAL replay, it will know where the aborted\n\nSaying that \"perhaps\" a contrecord piece was lost seems to imply that\nother explanations are possible as well, but I'm not sure what.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 13:14:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-13, Amul Sul wrote:\n\n> I have one more question, regarding the need for other global\n> variables i.e. abortedRecPtr. (Sorry for coming back after so long.)\n> \n> Instead of abortedRecPtr point, isn't enough to write\n> overwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? I think both\n> are pointing to the same location then can't we use\n> lastReplayedEndRecPtr instead of abortedRecPtr to write\n> overwrite-contrecord and remove need of extra global variable, like\n> attached?\n\nI'm a bit fuzzy on the difference \"the end+1\" and \"the start of the next\nrecord\". Are they always the same? 
We do have XLogRecPtrToBytePos()\nand XLogBytePosToEndRecPtr() to convert unadorned XLogRecPtr values to\n\"usable byte positions\", which suggests to me that the proposed patch\nmay fail if end+1 is a page or segment boundary.\n\nThe other difference is that abortedRecPtr is set if we fail to read a\nrecord, but XLogCtl->lastReplayedEndRecPtr is set even if we read the\nrecord successfully. So you'd need a bool flag that the overwrite\ncontrecord record needs to be written. Your patch is using the fact\nthat missingContrecPtr is non-invalid as such a flag ... I can't see\nanything wrong with that. So maybe your patch is okay in this aspect.\n\n> You might wonder why I am so concerned about the global variable. The\n> reason is that I am working on another thread[1] where we are trying\n> to club all the WAL write operations that happen at the end of\n> StartupXLOG into a separate function. In the future, we might want to\n> allow executing this function from other processes (e.g.\n> Checkpointer). For that, we need to remove the dependency of those WAL\n> write operations having on the global variables which are mostly valid\n> in the startup process.\n\nSeems a fine goal.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 13 Oct 2021 14:27:58 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-13, Robert Haas wrote:\n\n> On Wed, Oct 13, 2021 at 2:01 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Instead of abortedRecPtr point, isn't enough to write\n> > overwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? I think both\n> > are pointing to the same location then can't we use\n> > lastReplayedEndRecPtr instead of abortedRecPtr to write\n> > overwrite-contrecord and remove need of extra global variable, like\n> > attached?\n> \n> I think you mean missingContrecPtr, not abortedRecPtr. 
If I understand\n> correctly, abortedRecPtr is going to be the location in some WAL\n> segment which we replayed where a long record began, but\n> missingContrecPtr seems like it would have to point to the beginning\n> of the first segment we were unable to find to continue replay; and\n> thus it ought to be the same as lastReplayedEndRecPtr.\n\nSo abortedRecPtr and missingContrecPtr both point to the same long\nrecord: the former is the start of the record, and the latter is some\nintermediate position where we failed to find the contrecord.\nlastReplayedEndRecPtr is the end+1 of the record *prior* to the long\nrecord.\n\n> But the committed code doesn't seem to check that these are the same\n> or verify the relationship between them in any way, so I'm worried\n> there is some other case here.\n\nYeah, the only reason they are the same is that xlogreader sets both to\nInvalid when reading a record, and then sets both when a read fails.\n\n> The comments in XLogReadRecord also suggest this:\n> \n> * We get here when a record that spans multiple pages needs to be\n> * assembled, but something went wrong -- perhaps a contrecord piece\n> * was lost. If caller is WAL replay, it will know where the aborted\n> \n> Saying that \"perhaps\" a contrecord piece was lost seems to imply that\n> other explanations are possible as well, but I'm not sure what.\n\nOther explanations are possible. Imagine cosmic rays alter one byte in\nthe last contrecord. 
WAL replay will stop there, and the contrecord\nwill have been found all right, but CRC check would have failed to pass,\nso we would set xlogreader->missingContrecPtr to the final contrecord of\nthat record (I didn't actually verify this particular scenario.)\n\nIn fact, anything that causes xlogreader.c:XLogReadRecord to return NULL\nafter setting \"assembled=true\" would set both abortedRecPtr and\nmissingContrecPtr -- except DecodeXLogRecord failure, which perhaps\nshould be handled in the same way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n", "msg_date": "Wed, 13 Oct 2021 14:39:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nIt seems that 026_overwrite_contrecord.pl test often fails under valgrind. I\nfirst thought that related failures on skink were due to me migrating the\nanimal to a new host (and then running out of space due to a mistake in ccache\nconfig). But it happened again after I fixed those, and I just reproduced the\nissue locally.\n\nIt's a bit odd that we didn't start to see these failures immediately, but\nonly in the last few days. I'd blame skink being migrated to a new home,\nexcept that I can see the issue locally.\n\nFWIW, the way skink runs all postgres instances through valgrind is by\nreplacing the postgres binary. 
Here's my local version of that:\n\nandres@awork3:~/build/postgres/dev-assert/vpath$ cat tmp_install/home/andres/build/postgres/dev-assert/install/bin/postgres\n#!/bin/bash\n\nexec /usr/bin/valgrind \\\n --quiet \\\n --error-exitcode=128 \\\n --suppressions=/home/andres/src/postgresql/src/tools/valgrind.supp \\\n --trace-children=yes --track-origins=yes --read-var-info=no \\\n --leak-check=no \\\n --run-libc-freeres=no \\\n --vgdb=no \\\n --error-markers=VALGRINDERROR-BEGIN,VALGRINDERROR-END \\\n /home/andres/build/postgres/dev-assert/vpath/tmp_install/home/andres/build/postgres/dev-assert/install/bin/postgres.orig \\\n \"$@\"\n\nmake -C src/test/recovery/ check PROVE_FLAGS='-v' PROVE_TESTS='t/026_overwrite_contrecord.pl' NO_TEMP_INSTALL=1\n...\n\nnot ok 1 - 000000010000000000000002 differs from 000000010000000000000002\n\n# Failed test '000000010000000000000002 differs from 000000010000000000000002'\n\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:03:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 11:03:39 -0700, Andres Freund wrote:\n> It seems that 026_overwrite_contrecord.pl test often fails under valgrind. I\n> first thought that related failures on skink were due to me migrating the\n> animal to a new host (and then running out of space due to a mistake in ccache\n> config). But it happened again after I fixed those, and I just reproduced the\n> issue locally.\n> \n> It's a bit odd that we didn't start to see these failures immediately, but\n> only in the last few days. I'd blame skink being migrated to a new home,\n> except that I can see the issue locally.\n> \n> FWIW, the way skink runs all postgres instances through valgrind is by\n> replacing the postgres binary. 
Here's my local version of that:\n> \n> andres@awork3:~/build/postgres/dev-assert/vpath$ cat tmp_install/home/andres/build/postgres/dev-assert/install/bin/postgres\n> #!/bin/bash\n> \n> exec /usr/bin/valgrind \\\n> --quiet \\\n> --error-exitcode=128 \\\n> --suppressions=/home/andres/src/postgresql/src/tools/valgrind.supp \\\n> --trace-children=yes --track-origins=yes --read-var-info=no \\\n> --leak-check=no \\\n> --run-libc-freeres=no \\\n> --vgdb=no \\\n> --error-markers=VALGRINDERROR-BEGIN,VALGRINDERROR-END \\\n> /home/andres/build/postgres/dev-assert/vpath/tmp_install/home/andres/build/postgres/dev-assert/install/bin/postgres.orig \\\n> \"$@\"\n> \n> make -C src/test/recovery/ check PROVE_FLAGS='-v' PROVE_TESTS='t/026_overwrite_contrecord.pl' NO_TEMP_INSTALL=1\n> ...\n> \n> not ok 1 - 000000010000000000000002 differs from 000000010000000000000002\n> \n> # Failed test '000000010000000000000002 differs from 000000010000000000000002'\n\nI added LSNs to the error message:\nnot ok 1 - 000000010000000000000002 (0/2002350) differs from 000000010000000000000002 (0/2099600)\n\nIt appears that the problem is that inbetween the determination of\nrows_walsize the insert LSN moved to the next segment separately from the\ninsertion, presumably due to autovacuum/analayze or such.\n<retries, with log_autovacuum_min_duration_statement=0, log_min_duration_statement=0>\n\n\n2021-10-13 11:23:25.659 PDT [1491455] 026_overwrite_contrecord.pl LOG: statement: insert into filler select * from generate_series(1, 1000)\n2021-10-13 11:23:26.467 PDT [1491455] 026_overwrite_contrecord.pl LOG: duration: 861.112 ms\n2021-10-13 11:23:27.055 PDT [1491458] 026_overwrite_contrecord.pl LOG: statement: select pg_current_wal_insert_lsn() - '0/0'\n2021-10-13 11:23:27.357 PDT [1491458] 026_overwrite_contrecord.pl LOG: duration: 347.888 ms\n2021-10-13 11:23:27.980 PDT [1491461] 026_overwrite_contrecord.pl LOG: statement: WITH setting AS (\n\t SELECT setting::int AS wal_segsize\n\t FROM pg_settings WHERE 
name = 'wal_segment_size'\n\t)\n\tINSERT INTO filler\n\tSELECT g FROM setting,\n\t generate_series(1, 1000 * (wal_segsize - ((pg_current_wal_insert_lsn() - '0/0') % wal_segsize)) / 64232) g\n2021-10-13 11:24:25.173 PDT [1491550] LOG: automatic analyze of table \"postgres.public.filler\"\n\tavg read rate: 3.185 MB/s, avg write rate: 0.039 MB/s\n\tbuffer usage: 96 hits, 731 misses, 9 dirtied\n\tsystem usage: CPU: user: 1.79 s, system: 0.00 s, elapsed: 1.81 s\n2021-10-13 11:24:26.255 PDT [1491461] 026_overwrite_contrecord.pl LOG: duration: 58360.811 ms\n2021-10-13 11:24:26.857 PDT [1491557] 026_overwrite_contrecord.pl LOG: statement: SELECT pg_current_wal_insert_lsn()\n2021-10-13 11:24:27.120 PDT [1491557] 026_overwrite_contrecord.pl LOG: duration: 300.562 ms\n\n\nHm. I guess we can disable autovac. But that's not a great solution, there\nmight be WAL files due to catalog access etc too.\n\nSeems like it might be worth doing the \"filling\" of the segment with a loop in\na DO block instead, where the end condition is to be within some distance to\nthe end of the segment? With plenty headroom?\n\n\nAnother thing: filling a segment by inserting lots of very tiny rows is pretty\nexpensive. Can't we use something a bit wider? 
Perhaps even emit_message?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:31:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-13, Andres Freund wrote:\n\n> I added LSNs to the error message:\n> not ok 1 - 000000010000000000000002 (0/2002350) differs from 000000010000000000000002 (0/2099600)\n> \n> It appears that the problem is that inbetween the determination of\n> rows_walsize the insert LSN moved to the next segment separately from the\n> insertion, presumably due to autovacuum/analayze or such.\n> <retries, with log_autovacuum_min_duration_statement=0, log_min_duration_statement=0>\n\nOh, of course.\n\n> Hm. I guess we can disable autovac. But that's not a great solution, there\n> might be WAL files due to catalog access etc too.\n\nWell, we don't expect anything else to happen -- the cluster is\notherwise idle. I think we should do it regardless of any other\nchanges, just to keep things steadier.\n\n> Seems like it might be worth doing the \"filling\" of the segment with a loop in\n> a DO block instead, where the end condition is to be within some distance to\n> the end of the segment? With plenty headroom?\n\nEh, good idea ... didn't think of that, but it should keep things more\nstable under strange conditions.\n\n> Another thing: filling a segment by inserting lots of very tiny rows is pretty\n> expensive. Can't we use something a bit wider? Perhaps even emit_message?\n\nI think I realized partway through writing the test that I could use\nemit_message instead of using a batched row insert ... 
so, yeah, we\ncan use it here also.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n", "msg_date": "Wed, 13 Oct 2021 15:52:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 15:52:46 -0300, Alvaro Herrera wrote:\n> > Hm. I guess we can disable autovac. But that's not a great solution, there\n> > might be WAL files due to catalog access etc too.\n> \n> Well, we don't expect anything else to happen -- the cluster is\n> otherwise idle. I think we should do it regardless of any other\n> changes, just to keep things steadier.\n\nIDK, it seems good to have a bit of variance as well. But I don't have a\nstrong opinion on it.\n\n\n> > Another thing: filling a segment by inserting lots of very tiny rows is pretty\n> > expensive. Can't we use something a bit wider? Perhaps even emit_message?\n\nFWIW, the count of inserted rows is something like 171985 ;)\n\n\n> I think I realized partway through writing the test that I could use\n> emit_message instead of using a batched row insert ... so, yeah, we\n> can use it here also.\n\nCool. Even if we want to use inserts, lets at least make the rows wide...\n\nI think it'd be good to have a bit of variance in record width. So perhaps\nadding a bit of random() in to influence record width would be a good idea?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:13:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-13, Andres Freund wrote:\n\n> > > Another thing: filling a segment by inserting lots of very tiny rows is pretty\n> > > expensive. Can't we use something a bit wider? 
Perhaps even emit_message?\n> \n> FWIW, the count of inserted rows is something like 171985 ;)\n\nThis does ~1600 iterations to fill one segment, 10 rows per iteration,\nrow size is variable; exits when two BLCKSZ remain to complete the WAL\nsegment:\n\ncreate table filler (a int, b text);\ndo $$\ndeclare\n wal_segsize int := setting::int from pg_settings where name = 'wal_segment_size';\n remain int;\n iters int := 0;\nbegin\n loop\n insert into filler\n select g, repeat(md5(g::text), (random() * 60 + 1)::int)\n from generate_series(1, 10) g;\n\n remain := wal_segsize - (pg_current_wal_insert_lsn() - '0/0') % wal_segsize;\n raise notice '(%) remain: %', iters, remain;\n if remain < 2 * setting::int from pg_settings where name = 'block_size' then\n exit;\n end if;\n iters := iters + 1;\n end loop;\nend\n$$ ;\n\n(Of course, I'm not proposing that the 'raise notice' be there in the\ncommitted form.)\n\nIf I enlarge the 'repeat' count, it gets worse (more iterations\nrequired) because a lot of the rows become toasted and thus subject to\ncompression. If I do 20 rows per iteration rather than 10, the risk is\nthat we'll do too many near the end of the segment and we'll have to\ncontinue running until completing the next one.\n\nSo, this seems good enough.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. 
This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n", "msg_date": "Wed, 13 Oct 2021 16:57:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 12:13:45 -0700, Andres Freund wrote:\n> On 2021-10-13 15:52:46 -0300, Alvaro Herrera wrote:\n> > I think I realized partway through writing the test that I could use\n> > emit_message instead of using a batched row insert ... so, yeah, we\n> > can use it here also.\n>\n> Cool. Even if we want to use inserts, lets at least make the rows wide...\n>\n> I think it'd be good to have a bit of variance in record width. So perhaps\n> adding a bit of random() in to influence record width would be a good idea?\n\nSomething very roughly like the attached. Perhaps that's going a bit overboard\nthough. But it seems like it might be something we could use in a few tests?", "msg_date": "Wed, 13 Oct 2021 13:36:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "This works nicely with the TAP test:\n\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)", "msg_date": "Wed, 13 Oct 2021 17:42:37 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi\n\nOn 2021-Oct-13, Andres Freund wrote:\n\n> Something very roughly like the attached. Perhaps that's going a bit overboard\n> though. But it seems like it might be something we could use in a few tests?\n\nHah, our emails crossed. 
If you want to turn this into a patch to the\n026 test file, please go ahead. Failing that, I'd just push the patch I\njust sent in my other reply.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n", "msg_date": "Wed, 13 Oct 2021 17:46:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hi,\n\nOn 2021-10-13 17:46:53 -0300, Alvaro Herrera wrote:\n> On 2021-Oct-13, Andres Freund wrote:\n> \n> > Something very roughly like the attached. Perhaps that's going a bit overboard\n> > though. But it seems like it might be something we could use in a few tests?\n> \n> Hah, our emails crossed. If you want to turn this into a patch to the\n> 026 test file, please go ahead. Failing that, I'd just push the patch I\n> just sent in my other reply.\n\nYea, let's go for your patch then. I've verified that at least locally it\npasses under valgrind.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Oct 2021 14:15:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Oct-13, Andres Freund wrote:\n\n> Hi,\n> \n> On 2021-10-13 17:46:53 -0300, Alvaro Herrera wrote:\n> > On 2021-Oct-13, Andres Freund wrote:\n> > \n> > > Something very roughly like the attached. Perhaps that's going a bit overboard\n> > > though. But it seems like it might be something we could use in a few tests?\n> > \n> > Hah, our emails crossed. If you want to turn this into a patch to the\n> > 026 test file, please go ahead. Failing that, I'd just push the patch I\n> > just sent in my other reply.\n> \n> Yea, let's go for your patch then. 
I've verified that at least locally it\n> passes under valgrind.\n\nAh great, thanks. Pushed then.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Wed, 13 Oct 2021 19:09:28 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Wed, Oct 13, 2021 at 10:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-13, Amul Sul wrote:\n>\n> > I have one more question, regarding the need for other global\n> > variables i.e. abortedRecPtr. (Sorry for coming back after so long.)\n> >\n> > Instead of abortedRecPtr point, isn't enough to write\n> > overwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? I think both\n> > are pointing to the same location then can't we use\n> > lastReplayedEndRecPtr instead of abortedRecPtr to write\n> > overwrite-contrecord and remove need of extra global variable, like\n> > attached?\n>\n> I'm a bit fuzzy on the difference \"the end+1\" and \"the start of the next\n> record\". Are they always the same? We do have XLogRecPtrToBytePos()\n> and XLogBytePosToEndRecPtr() to convert unadorned XLogRecPtr values to\n> \"usable byte positions\", which suggests to me that the proposed patch\n> may fail if end+1 is a page or segment boundary.\n>\n\nYes, you are correct, that could be a possible failure.\n\nHow about calculating that from the lastReplayedEndRecPtr by\nconverting it first to \"usable byte positions\" and then recalculating\nthe record pointer from that, like attached?\n\n> The other difference is that abortedRecPtr is set if we fail to read a\n> record, but XLogCtl->lastReplayedEndRecPtr is set even if we read the\n> record successfully. So you'd have need a bool flag that the overwrite\n> contrecord record needs to be written. 
Your patch is using the fact\n> that missingContrecPtr is non-invalid as such a flag ... I can't see\n> anything wrong with that. So maybe your patch is okay in this aspect.\n>\n> > You might wonder why I am so concerned about the global variable. The\n> > reason is that I am working on another thread[1] where we are trying\n> > to club all the WAL write operations that happen at the end of\n> > StartupXLOG into a separate function. In the future, we might want to\n> > allow executing this function from other processes (e.g.\n> > Checkpointer). For that, we need to remove the dependency of those WAL\n> > write operations having on the global variables which are mostly valid\n> > in the startup process.\n>\n> Seems a fine goal.\n\nThanks for looking at the patch.\n\nRegards,\nAmul", "msg_date": "Thu, 14 Oct 2021 18:14:20 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Thu, Oct 14, 2021 at 6:14 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, Oct 13, 2021 at 10:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Oct-13, Amul Sul wrote:\n> >\n> > > I have one more question, regarding the need for other global\n> > > variables i.e. abortedRecPtr. (Sorry for coming back after so long.)\n> > >\n> > > Instead of abortedRecPtr point, isn't enough to write\n> > > overwrite-contrecord at XLogCtl->lastReplayedEndRecPtr? I think both\n> > > are pointing to the same location then can't we use\n> > > lastReplayedEndRecPtr instead of abortedRecPtr to write\n> > > overwrite-contrecord and remove need of extra global variable, like\n> > > attached?\n> >\n> > I'm a bit fuzzy on the difference \"the end+1\" and \"the start of the next\n> > record\". Are they always the same? 
We do have XLogRecPtrToBytePos()\n> > and XLogBytePosToEndRecPtr() to convert unadorned XLogRecPtr values to\n> > \"usable byte positions\", which suggests to me that the proposed patch\n> > may fail if end+1 is a page or segment boundary.\n> >\n>\n> Yes, you are correct, that could be a possible failure.\n>\n> How about calculating that from the lastReplayedEndRecPtr by\n> converting it first to \"usable byte positions\" and then recalculating\n> the record pointer from that, like attached?\n>\n\nAny thoughts about the patch posted previously?\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 22 Oct 2021 18:43:52 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Fri, 22 Oct 2021 18:43:52 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> Any thoughts about the patch posted previously?\n\nHonestly, xlogreader looks fine with the current shape. The reason is\nthat it seems cleaner as an interface boundary since the caller of\nxlogreader doesn't need to know about the details of xlogreader. The\ncurrent code nicely hides the end+1 confusion.\n\nEven if we want to get rid of global variables in xlog.c, I don't\nunderstand why we remove only abortedRecPtr. That change makes things\nmore complex as a whole by letting xlog.c be more conscious of\nxlogreader's internals. I'm not sure I like that aspect of the patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:32:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Mon, Oct 25, 2021 at 7:02 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 22 Oct 2021 18:43:52 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > Any thoughts about the patch posted previously?\n>\n> Honestly, xlogreader looks fine with the current shape. 
The reason is\n> that it seems cleaner as an interface boundary since the caller of\n> xlogreader doesn't need to know about the details of xlogreader. The\n> current code nicely hides the end+1 confusion.\n>\n> Even if we want to get rid of global variables in xlog.c, I don't\n> understand why we remove only abortedRecPtr. That change makes things\n> more complex as a whole by letting xlog.c be more conscious of\n> xlogreader's internals. I'm not sure I like that aspect of the patch.\n>\n\nBecause we have other ways to get abortedRecPtr without having a\nglobal variable, but we don't have such a way for missingContrecPtr,\nAFAICU.\n\nI agree using global variables makes things a bit easier, but those\nare inefficient when you want to share those with other processes --\nthat would add extra burden to shared memory.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 25 Oct 2021 10:34:27 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "At Mon, 25 Oct 2021 10:34:27 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> On Mon, Oct 25, 2021 at 7:02 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Fri, 22 Oct 2021 18:43:52 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > > Any thoughts about the patch posted previously?\n> >\n> > Honestly, xlogreader looks fine with the current shape. The reason is\n> > that it seems cleaner as an interface boundary since the caller of\n> > xlogreader doesn't need to know about the details of xlogreader. The\n> > current code nicely hides the end+1 confusion.\n> >\n> > Even if we want to get rid of global variables in xlog.c, I don't\n> > understand why we remove only abortedRecPtr. That change makes things\n> > more complex as a whole by letting xlog.c be more conscious of\n> > xlogreader's internals. 
I'm not sure I like that aspect of the patch.\n> >\n> \n> Because we have other ways to get abortedRecPtr without having a\n> global variable, but we don't have such a way for missingContrecPtr,\n> AFAICU.\n\nThat depends on the reason why you want to get rid of the glboal\nvariables. Since we restart WAL reading before reading the two\nvariables so we can not rely on the xlogreader's corresponding\nmembers. So we need another set of variables to preserve the values\nbeyond the restart.\n\n> I agree using global variables makes things a bit easier, but those\n> are inefficient when you want to share those with other processes --\n> that would add extra burden to shared memory.\n\nWe could simply add a new member in XLogCtlData. Or we can create\nanother struct for ReadRecord's (not XLogReader's) state then allocate\nshared memory to it. I don't think it is the right solution to infer\nit from another variable using knowledge of xlogreader's internals.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 26 Oct 2021 10:50:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-13, Andres Freund wrote:\n>> Yea, let's go for your patch then. I've verified that at least locally it\n>> passes under valgrind.\n\n> Ah great, thanks. Pushed then.\n\nSeems like this hasn't fixed the problem: skink still fails on\nthis test occasionally.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-10-22%2013%3A52%3A00\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-11-02%2000%3A28%3A30\n\nBoth of these look like\n\n# Failed test '000000010000000000000002 differs from 000000010000000000000002'\n# at t/026_overwrite_contrecord.pl line 61.\n# Looks like you failed 1 test of 3.\nt/026_overwrite_contrecord.pl ........ 
\nDubious, test returned 1 (wstat 256, 0x100)\nFailed 1/3 subtests \n\nwhich looks like the same thing we were seeing before.\n010e52337 seems to have just decreased the probability of failure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Nov 2021 15:38:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Hello Alvaro,\n14.10.2021 01:09, Alvaro Herrera wrote:\n>> Yea, let's go for your patch then. I've verified that at least locally it\n>> passes under valgrind.\n> Ah great, thanks. Pushed then.\n>\nWhile translating messages I've noticed that the version of the patch\nported to REL9_6_STABLE..REL_13_STABLE contains a typo \"sucessfully\".\nPlease consider committing the fix.\n\nBest regards,\nAlexander", "msg_date": "Mon, 8 Nov 2021 07:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-08, Alexander Lakhin wrote:\n\n> Hello Alvaro,\n> 14.10.2021 01:09, Alvaro Herrera wrote:\n> >> Yea, let's go for your patch then. I've verified that at least locally it\n> >> passes under valgrind.\n> > Ah great, thanks. Pushed then.\n> >\n> While translating messages I've noticed that the version of the patch\n> ported to REL9_6_STABLE..REL_13_STABLE contains a typo \"sucessfully\".\n> Please consider committing the fix.\n\nThanks, pushed. I also modified the .po files that contained the typo\nso as not to waste the translators' efforts.\n\nI blamed the wrong commit in the 9.6 commit message. 
Sigh.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n", "msg_date": "Mon, 8 Nov 2021 09:21:13 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "I wrote:\n> Seems like this hasn't fixed the problem: skink still fails on\n> this test occasionally.\n> # Failed test '000000010000000000000002 differs from 000000010000000000000002'\n> # at t/026_overwrite_contrecord.pl line 61.\n\nThis is still happening off and on, which makes it look like a\ntiming-sensitive problem. Confirming that, I can make it fail\nevery time by adding a long sleep just ahead of where\n026_overwrite_contrecord.pl captures $initfile. On reflection\nI think the problem is obvious: if autovacuum does anything\nconcurrently with the test's startup portion, it will cause the\ncarefully-established WAL insertion point to move into the\nnext segment. I propose to add \"autovacuum = off\" to the\ntest's postmaster configuration.\n\nAlso, I think we want\n\n-ok($initfile != $endfile, \"$initfile differs from $endfile\");\n+ok($initfile ne $endfile, \"$initfile differs from $endfile\");\n\nThe existing coding works as long as all characters of these\nWAL segment names happen to be decimal digits, but ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Nov 2021 18:08:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-09, Tom Lane wrote:\n\n> This is still happening off and on, which makes it look like a\n> timing-sensitive problem. Confirming that, I can make it fail\n> every time by adding a long sleep just ahead of where\n> 026_overwrite_contrecord.pl captures $initfile. 
On reflection\n> I think the problem is obvious: if autovacuum does anything\n> concurrently with the test's startup portion, it will cause the\n> carefully-established WAL insertion point to move into the\n> next segment. I propose to add \"autovacuum = off\" to the\n> test's postmaster configuration.\n\nOoh, of course.\n\n> Also, I think we want\n> \n> -ok($initfile != $endfile, \"$initfile differs from $endfile\");\n> +ok($initfile ne $endfile, \"$initfile differs from $endfile\");\n> \n> The existing coding works as long as all characters of these\n> WAL segment names happen to be decimal digits, but ...\n\nArgh!\n\nThanks for taking care of these issues.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)\n\n\n", "msg_date": "Wed, 10 Nov 2021 09:09:03 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Also, I think we want\n>\n> -ok($initfile != $endfile, \"$initfile differs from $endfile\");\n> +ok($initfile ne $endfile, \"$initfile differs from $endfile\");\n>\n> The existing coding works as long as all characters of these\n> WAL segment names happen to be decimal digits, but ...\n\nEven better style (IMO) would be:\n\n isnt($initfile, $endfile, \"WAL file name has changed\");\n\nOr some other more descriptive message of _why_ it should have changed.\n\n- ilmari\n\n\n", "msg_date": "Wed, 10 Nov 2021 13:31:06 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "We're *still* not out of the woods with 026_overwrite_contrecord.pl,\nas we are continuing to see occasional \"mismatching overwritten LSN\"\nfailures, further down in the test where it 
tries to start up the\nstandby:\n\n sysname | branch | snapshot | stage | l \n------------+---------------+---------------------+---------------+------------------------------------------------------------------------------------------------------------\n spurfowl | REL_13_STABLE | 2021-10-18 03:56:26 | recoveryCheck | 2021-10-18 00:08:09.324 EDT [2455:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n sidewinder | HEAD | 2021-10-19 04:32:36 | recoveryCheck | 2021-10-19 06:46:23.168 CEST [26393:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n francolin | REL9_6_STABLE | 2021-10-26 01:41:39 | recoveryCheck | 2021-10-26 01:48:05.646 UTC [3417202][][1/0:0] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n petalura | HEAD | 2021-11-05 00:20:03 | recoveryCheck | 2021-11-05 02:58:12.146 CET [61848fb3.28d157:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n lapwing | REL_11_STABLE | 2021-11-05 17:24:49 | recoveryCheck | 2021-11-05 17:39:29.741 UTC [9831:6] FATAL: mismatching overwritten LSN 0/1FFE014 -> 0/1FFE000\n morepork | HEAD | 2021-11-10 02:51:12 | recoveryCheck | 2021-11-10 04:03:33.576 CET [73561:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n petalura | HEAD | 2021-11-16 15:20:03 | recoveryCheck | 2021-11-16 18:16:47.875 CET [6193e77f.35b87f:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n morepork | HEAD | 2021-11-17 03:45:36 | recoveryCheck | 2021-11-17 04:57:04.359 CET [32089:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n spurfowl | REL_10_STABLE | 2021-11-22 22:21:03 | recoveryCheck | 2021-11-22 17:29:35.520 EST [16011:6] FATAL: mismatching overwritten LSN 0/1FFE018 -> 0/1FFE000\n(9 rows)\n\nLooking at adjacent successful runs, it seems that the exact point\nwhere the \"missing contrecord\" starts varies substantially, even after\nour previous fix to disable autovacuum in this test. 
How could that be?\n\nIt's probably for the best though, because I think this is exposing\nan actual bug that we would not have seen if the start point were\ncompletely consistent. I have not dug into the code, but it looks to\nme like if the \"consistent recovery state\" is reached exactly at a\npage boundary (0/1FFE000 in all these cases), then the standby expects\nthat to be what the OVERWRITE_CONTRECORD record will point at. But\nactually it points to the first WAL record on that page, resulting\nin a bogus failure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 14:04:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-23, Tom Lane wrote:\n\n> We're *still* not out of the woods with 026_overwrite_contrecord.pl,\n> as we are continuing to see occasional \"mismatching overwritten LSN\"\n> failures, further down in the test where it tries to start up the\n> standby:\n\nAugh.\n\n> Looking at adjacent successful runs, it seems that the exact point\n> where the \"missing contrecord\" starts varies substantially, even after\n> our previous fix to disable autovacuum in this test. How could that be?\n\nWell, there is intentionally some variability. Maybe not as much as one\nwould wish, but I expect that that should explain why that point is not\nalways the same.\n\n> It's probably for the best though, because I think this is exposing\n> an actual bug that we would not have seen if the start point were\n> completely consistent. I have not dug into the code, but it looks to\n> me like if the \"consistent recovery state\" is reached exactly at a\n> page boundary (0/1FFE000 in all these cases), then the standby expects\n> that to be what the OVERWRITE_CONTRECORD record will point at. 
But\n> actually it points to the first WAL record on that page, resulting\n> in a bogus failure.\n\nSo what is happening is that we set state->overwrittenRecPtr to the LSN\nof page start, ignoring the page header. Is that the LSN of the first\nrecord in a page? I'll see if I can reproduce the problem.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n", "msg_date": "Tue, 23 Nov 2021 17:40:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Wed, Nov 24, 2021 at 2:10 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Nov-23, Tom Lane wrote:\n>\n> > We're *still* not out of the woods with 026_overwrite_contrecord.pl,\n> > as we are continuing to see occasional \"mismatching overwritten LSN\"\n> > failures, further down in the test where it tries to start up the\n> > standby:\n>\n> Augh.\n>\n> > Looking at adjacent successful runs, it seems that the exact point\n> > where the \"missing contrecord\" starts varies substantially, even after\n> > our previous fix to disable autovacuum in this test. How could that be?\n>\n> Well, there is intentionally some variability. Maybe not as much as one\n> would wish, but I expect that that should explain why that point is not\n> always the same.\n>\n> > It's probably for the best though, because I think this is exposing\n> > an actual bug that we would not have seen if the start point were\n> > completely consistent. I have not dug into the code, but it looks to\n> > me like if the \"consistent recovery state\" is reached exactly at a\n> > page boundary (0/1FFE000 in all these cases), then the standby expects\n> > that to be what the OVERWRITE_CONTRECORD record will point at. 
But\n> > actually it points to the first WAL record on that page, resulting\n> > in a bogus failure.\n>\n> So what is happening is that we set state->overwrittenRecPtr to the LSN\n> of page start, ignoring the page header. Is that the LSN of the first\n> record in a page? I'll see if I can reproduce the problem.\n>\n\nIn XLogReadRecord(), both the variables being compared have\ninconsistency in the assignment -- one gets assigned from\nstate->currRecPtr where other is from RecPtr.\n\n.....\nstate->overwrittenRecPtr = state->currRecPtr;\n.....\nstate->abortedRecPtr = RecPtr;\n.....\n\nBefore the place where assembled flag sets, there is a bunch of code\nthat adjusts RecPtr. I think instead of RecPtr, the latter assignment\nshould use state->currRecPtr as well.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 25 Nov 2021 11:38:42 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-25, Amul Sul wrote:\n\n> In XLogReadRecord(), both the variables being compared have\n> inconsistency in the assignment -- one gets assigned from\n> state->currRecPtr where other is from RecPtr.\n> \n> .....\n> state->overwrittenRecPtr = state->currRecPtr;\n> .....\n> state->abortedRecPtr = RecPtr;\n> .....\n> \n> Before the place where assembled flag sets, there is a bunch of code\n> that adjusts RecPtr. I think instead of RecPtr, the latter assignment\n> should use state->currRecPtr as well.\n\nYou're exactly right. I managed to reproduce the problem shown by\nbuildfarm members, and indeed this fixes it. And it makes sense: the\nadjustment you refer to, is precisely to skip the page header when the\nLSN is the start of the page, which is exactly the problem we're seeing\nin the buildfarm ... except that on lapwing branch REL_11_STABLE, we're\nseeing the LSN is off by 0x14 instead of 0x18. 
That seems very strange.\nI think the reason for this is that lapwing has MAXALIGN 4, so\nMAXALIGN(sizeof(XLogPageHeaderData)) is 20, not 24 as is the case in the\nother failing members.\n\n... checks buildfarm ...\n\nYeah, all the others in Tom's list are x86-64.\n\nI'm pushing the fix in a minute.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Qué importan los años? Lo que realmente importa es comprobar que\na fin de cuentas la mejor edad de la vida es estar vivo\" (Mafalda)\n\n\n", "msg_date": "Thu, 25 Nov 2021 15:30:20 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Oh, but also I think I should push a mitigation in case a production\nsystem hits this problem: maybe reduce the message from FATAL to WARNING\nif the registered LSN is at a page boundary.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n", "msg_date": "Thu, 25 Nov 2021 15:32:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Oh, but also I think I should push a mitigation in case a production\n> system hits this problem: maybe reduce the message from FATAL to WARNING\n> if the registered LSN is at a page boundary.\n\nUh, why? 
The fix should remove the problem, and if it doesn't, we're\nstill looking at inconsistent WAL aren't we?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 13:52:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-25, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Oh, but also I think I should push a mitigation in case a production\n> > system hits this problem: maybe reduce the message from FATAL to WARNING\n> > if the registered LSN is at a page boundary.\n> \n> Uh, why? The fix should remove the problem, and if it doesn't, we're\n> still looking at inconsistent WAL aren't we?\n\nThe problem is that the bug occurs while writing the WAL record. Fixed\nservers won't produce such records, but if you run an unpatched server\nand it happens to write one, without a mitigation you cannot get away\nfrom FATAL during replay.\n\nSince this bug exists in released minors, we should allow people to\nupgrade to a newer version if they hit it.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n", "msg_date": "Thu, 25 Nov 2021 16:15:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Nov-25, Tom Lane wrote:\n>> Uh, why? The fix should remove the problem, and if it doesn't, we're\n>> still looking at inconsistent WAL aren't we?\n\n> The problem is that the bug occurs while writing the WAL record. Fixed\n> servers won't produce such records, but if you run an unpatched server\n> and it happens to write one, without a mitigation you cannot get away\n> from FATAL during replay.\n\nReally? 
AFAICS the WAL record contains the correct value, or at least\nwe should define that one as being correct, for precisely this reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 14:18:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-25, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> \n> > The problem is that the bug occurs while writing the WAL record. Fixed\n> > servers won't produce such records, but if you run an unpatched server\n> > and it happens to write one, without a mitigation you cannot get away\n> > from FATAL during replay.\n> \n> Really? AFAICS the WAL record contains the correct value, or at least\n> we should define that one as being correct, for precisely this reason.\n\nI don't know what is the correct value for a record that comes exactly\nafter the page header. But here's a patch that fixes the problem; and\nif a standby replays WAL written by an unpatched primary, it will be\nable to read past instead of dying of FATAL.\n\nI originally wrote this to have a WARNING in VerifyOverwriteContrecord\n(in the cases that are new), with the idea that it'd prompt people to\nupgrade, but that's probably a waste of time.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)", "msg_date": "Thu, 25 Nov 2021 16:58:28 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Nov-25, Tom Lane wrote:\n>> Really? AFAICS the WAL record contains the correct value, or at least\n>> we should define that one as being correct, for precisely this reason.\n\n> I don't know what is the correct value for a record that comes exactly\n> after the page header. 
But here's a patch that fixes the problem; and\n> if a standby replays WAL written by an unpatched primary, it will be\n> able to read past instead of dying of FATAL.\n\nMeh ... but given the simplicity of the write-side fix, maybe changing\nit is appropriate.\n\nHowever, this seems too forgiving:\n\n+ if (xlrec->overwritten_lsn != state->overwrittenRecPtr &&\n+ xlrec->overwritten_lsn - SizeOfXLogShortPHD != state->overwrittenRecPtr &&\n+ xlrec->overwritten_lsn - SizeOfXLogLongPHD != state->overwrittenRecPtr)\n\nThe latter two cases should only be accepted if overwrittenRecPtr is\nexactly at a page boundary.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 15:12:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "I wrote:\n> However, this seems too forgiving:\n\n... also, I don't know if you intended this already, but the\nVerifyOverwriteContrecord change should only be applied in\nback branches. There's no need for it in HEAD.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 15:15:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On Fri, Nov 26, 2021 at 1:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Nov-25, Tom Lane wrote:\n> >> Really? AFAICS the WAL record contains the correct value, or at least\n> >> we should define that one as being correct, for precisely this reason.\n>\n> > I don't know what is the correct value for a record that comes exactly\n> > after the page header. But here's a patch that fixes the problem; and\n> > if a standby replays WAL written by an unpatched primary, it will be\n> > able to read past instead of dying of FATAL.\n>\n> Meh ... 
but given the simplicity of the write-side fix, maybe changing\n> it is appropriate.\n>\n> However, this seems too forgiving:\n>\n> + if (xlrec->overwritten_lsn != state->overwrittenRecPtr &&\n> + xlrec->overwritten_lsn - SizeOfXLogShortPHD != state->overwrittenRecPtr &&\n> + xlrec->overwritten_lsn - SizeOfXLogLongPHD != state->overwrittenRecPtr)\n>\n\nUnless I am missing something, I am not sure why need this adjustment\nif we are going to use state->currRecPtr value which doesn't seem to\nbe changing at all. AFAICU, state->currRecPtr will be unchanged value\nwhether going to set overwrittenRecPtr or abortedRecPtr. Do primary\nand standby see state->currRecPtr differently, I guess not, never?\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 26 Nov 2021 09:54:30 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Nov-26, Amul Sul wrote:\n\n> On Fri, Nov 26, 2021 at 1:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Meh ... but given the simplicity of the write-side fix, maybe changing\n> > it is appropriate.\n\nActually, fixing the other side is equally simple, and it is also more\ncorrect. What changed my mind is that upon completing a successful read\nof a record, what we set as state->ReadRecPtr is the local variable\nRecPtr -- so that is what becomes the true LSN of the record. Using\nstate->currRecPtr is inconsistent with that definition.\n\n> Unless I am missing something, I am not sure why need this adjustment\n> if we are going to use state->currRecPtr value which doesn't seem to\n> be changing at all. AFAICU, state->currRecPtr will be unchanged value\n> whether going to set overwrittenRecPtr or abortedRecPtr. Do primary\n> and standby see state->currRecPtr differently, I guess not, never?\n\nYou're right for the wrong reason. We don't need the adjustment in the\nverify routine. 
The reason we don't is that we're not going to use\nstate->currRecPtr anymore, but rather RecPtr in both places. You're\nthinking that primary and standby would never \"see state->currRecPtr\ndifferently\", but that's only if they are both running the same code.\nIf you had a primary running 14.1 and a standby running 14.2, with the\npreviously proposed fix (using state->currRecPtr), you would be in\ntrouble. With this fix (using RecPtr) it works fine.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 26 Nov 2021 10:49:50 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" }, { "msg_contents": "On 2021-Sep-03, Alvaro Herrera wrote:\n\n> The last commit is something I noticed in pg_rewind ...\n\nI had missed this one; it's pushed now.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)\n\n\n", "msg_date": "Wed, 23 Mar 2022 19:38:33 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: prevent immature WAL streaming" } ]
[ { "msg_contents": "Hi,\n\nEXPLAIN command doesn't show testexpr. Sometimes it is not easy to \nunderstand a query plan. That I mean:\n\nCREATE TABLE a (x integer, y integer);\nEXPLAIN (COSTS OFF, VERBOSE) SELECT x, y FROM a upper\n WHERE y IN (SELECT y FROM a WHERE upper.y = x);\nEXPLAIN (COSTS OFF, VERBOSE) SELECT x, y FROM a upper\n WHERE x+y IN (SELECT y FROM a WHERE upper.y = x);\n\nThese two explains have the same representation:\nSeq Scan on public.a upper\n Output: upper.x, upper.y\n Filter: (SubPlan 1)\n SubPlan 1\n -> Seq Scan on public.a\n Output: a.y\n Filter: (upper.y = a.x)\n\nIt is a bit annoying when you don't have original query or don't trust \ncompetence of a user who sent you this explain.\nIn attachment - patch which solves this problem. I'm not completely sure \nthat this option really needed and patch presents a proof of concept only.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 24 Aug 2021 12:21:48 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Representation of SubPlan testexpr in EXPLAIN" } ]
[ { "msg_contents": "Hello hackers,\n\nHere is a summary of what was implemented over the summer in PL/Julia:\n\n1. Added support for more datatypes as input and output:\nNULL, boolean, numeric types, composite types, arrays of base types can now\nbe passed as input arguments to PL/Julia functions. Users can also\nreturn the above, or sets of the above from PL/Julia UDFs.\n2. Added trigger support - users can write trigger functions in PL/Julia\n3. Added event trigger support\n4. Added support for the DO command\n5. Added functions for database access from PL/Julia:\nspi_exec(query, limit) and spi_exec(query) for SQL-statement execution,\nspi_fetchrow(cursor) and spi_cursor_close(cursor) to return rows and to\nclose the cursor respectively,\nspi_prepare(query, argtypes) to prepare and save an execution plan and\nspi_exec_prepared(plan, args, limit) to execute a previously prepared plan.\n\nA brief presentation of the above\nhttps://docs.google.com/presentation/d/1cTnsUWiH6o0YH6MlZoPLofna3eNT3P3r9HSL9Dyte5U/edit?usp=sharing\nDocumentation with use examples\nhttps://gitlab.com/konskov/pljulia/-/blob/main/README.md\n\nCurrently the extension works for version 13 and Julia versions >= 1.6\n(Thanks to Imre Samu for testing!)\n\nI hope you find it interesting.\n\nRegards,\nKonstantina", "msg_date": "Tue, 24 Aug 2021 11:25:53 +0300", "msg_from": "Konstantina Skovola <konskov@gmail.com>", "msg_from_op": true, "msg_subject": "[GSoC 2021 project summary] PL/Julia" },
{ "msg_contents": "Hi Konstantina,\n\nVery cool! I was actually looking at doing this as we also have PL/R.\n\nDave Cramer\n\n\n\nOn Tue, 24 Aug 2021 at 04:26, Konstantina Skovola <konskov@gmail.com> wrote:\n\n> Hello hackers,\n>\n> Here is a summary of what was implemented over the summer in PL/Julia:\n>\n> 1. Added support for more datatypes as input and output:\n> NULL, boolean, numeric types, composite types, arrays of base types can\n> now be passed as input arguments to PL/Julia functions. Users can also\n> return the above, or sets of the above from PL/Julia UDFs.\n> 2. Added trigger support - users can write trigger functions in PL/Julia\n> 3. Added event trigger support\n> 4. Added support for the DO command\n> 5. 
Added functions for database access from PL/Julia:\n> spi_exec(query, limit) and spi_exec(query) for SQL-statement execution,\n> spi_fetchrow(cursor) and spi_cursor_close(cursor) to return rows and to\n> close the cursor respectively,\n> spi_prepare(query, argtypes) to prepare and save an execution plan and\n> spi_exec_prepared(plan, args, limit) to execute a previously prepared plan.\n>\n> A brief presentation of the above\n>\n> https://docs.google.com/presentation/d/1cTnsUWiH6o0YH6MlZoPLofna3eNT3P3r9HSL9Dyte5U/edit?usp=sharing\n> Documentation with use examples\n> https://gitlab.com/konskov/pljulia/-/blob/main/README.md\n>\n> Currently the extension works for version 13 and Julia versions >= 1.6\n> (Thanks to Imre Samu for testing!)\n>\n> I hope you find it interesting.\n>\n> Regards,\n> Konstantina\n>\n", "msg_date": "Tue, 24 Aug 2021 07:31:49 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: [GSoC 2021 project summary] PL/Julia" },
{ "msg_contents": "On Tue, Aug 24, 2021 at 5:26 AM Konstantina Skovola <konskov@gmail.com>\nwrote:\n>\n> Hello hackers,\n>\n> Here is a summary of what was implemented over the summer in PL/Julia:\n>\n> 1. Added support for more datatypes as input and output:\n> NULL, boolean, numeric types, composite types, arrays of base types can\nnow be passed as input arguments to PL/Julia functions. Users can also\nreturn the above, or sets of the above from PL/Julia UDFs.\n> 2. Added trigger support - users can write trigger functions in PL/Julia\n> 3. Added event trigger support\n> 4. Added support for the DO command\n> 5. 
Added functions for database access from PL/Julia:\n> spi_exec(query, limit) and spi_exec(query) for SQL-statement execution,\n> spi_fetchrow(cursor) and spi_cursor_close(cursor) to return rows and to\nclose the cursor respectively,\n> spi_prepare(query, argtypes) to prepare and save an execution plan and\n> spi_exec_prepared(plan, args, limit) to execute a previously prepared\nplan.\n>\n> A brief presentation of the above\n>\nhttps://docs.google.com/presentation/d/1cTnsUWiH6o0YH6MlZoPLofna3eNT3P3r9HSL9Dyte5U/edit?usp=sharing\n> Documentation with use examples\n> https://gitlab.com/konskov/pljulia/-/blob/main/README.md\n>\n> Currently the extension works for version 13 and Julia versions >= 1.6\n(Thanks to Imre Samu for testing!)\n>\n\nAwesome Konstantina, it was a pleasure working with you in this project.\n\nLooking forward to your next contributions to PostgreSQL.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Tue, 24 Aug 2021 08:59:55 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [GSoC 2021 project summary] PL/Julia" } ]
[ { "msg_contents": "Hello,\n\nI have implemented per query network stat collection for FDW. It is done \nin a similar way to how buffer and WAL stats are collected and it can be \nseen with a new NETWORK option for explain command:\n\nexplain (analyze, network) insert into itrtest values (2, 'blah');\n\n                                           QUERY PLAN\n-----------------------------------------------------------------------------------------------\n  Insert on itrtest  (cost=0.00..0.01 rows=0 width=0) (actual \ntime=0.544..0.544 rows=0 loops=1)\n    Network: FDW bytes sent=197 received=72, wait_time=0.689\n    ->  Result  (cost=0.00..0.01 rows=1 width=36) (actual \ntime=0.003..0.003 rows=1 loops=1)\n  Planning Time: 0.025 ms\n  Execution Time: 0.701 ms\n(5 rows)\n\nI am yet to add corresponding columns to pg_stat_statements, write tests \nand documentation, but before I go ahead with that, I would like to know \nwhat the community thinks about the patch.\n\nRegards,\n\nIlya Gladyshev", "msg_date": "Tue, 24 Aug 2021 12:12:38 +0300", "msg_from": "Ilya Gladyshev <i.gladyshev@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Per query FDW network stat collection" }, { "msg_contents": "On Tue, Aug 24, 2021 at 5:12 PM Ilya Gladyshev\n<i.gladyshev@postgrespro.ru> wrote:\n>\n> I have implemented per query network stat collection for FDW. It is done\n> in a similar way to how buffer and WAL stats are collected and it can be\n> seen with a new NETWORK option for explain command:\n>\n> explain (analyze, network) insert into itrtest values (2, 'blah');\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Insert on itrtest (cost=0.00..0.01 rows=0 width=0) (actual\n> time=0.544..0.544 rows=0 loops=1)\n> Network: FDW bytes sent=197 received=72, wait_time=0.689\n> [...]\n\nIt sound like a really useful metric to have.\n\nHowever I'm not sure that having a new \"network\" option is the best\nway for that. 
It seems confusing as IIUC it won't be catching all\nnetwork activity (like fe/be activity, or network disk...) but only\nFDW activity. I think it would be better to have that information\nretrieved when using the verbose option rather than a new one.\nSimilarly, I'm afraid that INSTRUMENT_NETWORK could be misleading,\nalthough I don't have any better proposal right now.\n\n\n", "msg_date": "Tue, 24 Aug 2021 17:19:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Per query FDW network stat collection" }, { "msg_contents": "\nOn 24.08.2021 12:19, Julien Rouhaud wrote:\n> However I'm not sure that having a new \"network\" option is the best\n> way for that. It seems confusing as IIUC it won't be catching all\n> network activity (like fe/be activity, or network disk...) but only\n> FDW activity. I think it would be better to have that information\n> retrieved when using the verbose option rather than a new one.\n> Similarly, I'm afraid that INSTRUMENT_NETWORK could be misleading,\n> although I don't have any better proposal right now.\n\nI am also doubtful about this naming. Initially, I wanted to add fe/be \nactivity as one of the metrics, but then decided to restrict myself to \nFDW for now. However, I decided to leave \"network\" as it is, because to \nme it makes sense to have all the network-related metrics under a single \nexplain option (and a single instrumentation flag perhaps), in case more \nare added later. The struct fields used for collection internally tell \nexplicitly that they are meant to be used only for FDW stats and the \nexplain output also mentions that the displayed stats are for FDW \nnetwork activity.\n\n\n\n", "msg_date": "Tue, 24 Aug 2021 12:57:09 +0300", "msg_from": "Ilya Gladyshev <i.gladyshev@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Per query FDW network stat collection" } ]
[ { "msg_contents": "\nDuring the discussion on OpenSSL 3.0.0 support in pgcrypto [0], I \nstarted to wonder whether the \"internal\" code variants in pgcrypto (the \nones that implement the ciphers themselves instead of using OpenSSL) are \nmore trouble than they are worth. As discussed there, keeping this adds \nsome amount of complexity in the code that could otherwise easily be \ndone away with.\n\nHistorically, this made some sense. OpenSSL support and pgcrypto came \ninto PostgreSQL at around the same time. So it was probably reasonable \nfor pgcrypto not to rely exclusively on OpenSSL being available. But \ntoday, building PostgreSQL for production without some kind of SSL \nsupport seems rare, and then nevertheless requiring cryptographic \nhashing and encryption support from pgcrypto seems unreasonable.\n\nSo I'm tempted to suggest that we remove the built-in, non-OpenSSL \ncipher and hash implementations in pgcrypto (basically INT_SRCS in \npgcrypto/Makefile), and then also pursue the simplifications in the \nOpenSSL code paths described in [0].\n\nThoughts?\n\n(Some thoughts from those pursuing NSS support would also be useful.)\n\n\n[0]: \nhttps://www.postgresql.org/message-id/b1a62889-bb45-e5e0-d138-7a370a0a334f@enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Aug 2021 11:13:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "remove internal support in pgcrypto?" }, { "msg_contents": "On Tue, Aug 24, 2021 at 11:13:44AM +0200, Peter Eisentraut wrote:\n> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher\n> and hash implementations in pgcrypto (basically INT_SRCS in\n> pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL\n> code paths described in [0].\n\n+1 to remove the internal parts. 
And the options of the non-OpenSSL\ncode paths are more limited than the OpenSSL ones, with md5, sha1 and\nsha2.\n\nNSS has support for most of the hash implementations pgcrypto makes\nuse of, as far as I recall.\n--\nMichael", "msg_date": "Tue, 24 Aug 2021 19:28:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" }, { "msg_contents": "> On 24 Aug 2021, at 11:13, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n\n+1\n\n> Thoughts?\n\nWith src/common/cryptohash_*.c and contrib/pgcrypto we have two abstractions\nfor hashing ciphers, should we perhaps retire hashing from pgcrypto altogether\nand pull across what we feel is useful to core (AES and 3DES and..)? There is\nalready significant overlap, and allowing core to only support certain ciphers\nwhen compiled with OpenSSL isn’t any different from doing it in pgcrypto\nreally.\n\n> (Some thoughts from those pursuing NSS support would also be useful.)\n\nBlowfish and CAST5 are not available in NSS. I've used the internal Blowfish\nimplementation as a fallback in the NSS patch and left CAST5 as not supported.\nThis proposal would mean that Blowfish too wasn’t supported in NSS builds, but\nI personally don’t see that as a dealbreaker.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 24 Aug 2021 14:38:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" 
}, { "msg_contents": "On 24.08.21 11:13, Peter Eisentraut wrote:\n> So I'm tempted to suggest that we remove the built-in, non-OpenSSL \n> cipher and hash implementations in pgcrypto (basically INT_SRCS in \n> pgcrypto/Makefile), and then also pursue the simplifications in the \n> OpenSSL code paths described in [0].\n\nHere is a patch for this.", "msg_date": "Sat, 30 Oct 2021 14:11:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: remove internal support in pgcrypto?" }, { "msg_contents": "\nOn 8/24/21 08:38, Daniel Gustafsson wrote:\n>> On 24 Aug 2021, at 11:13, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n> +1\n>\n>> Thoughts?\n> With src/common/cryptohash_*.c and contrib/pgcrypto we have two abstractions\n> for hashing ciphers, should we perhaps retire hashing from pgcrypto altogether\n> and pull across what we feel is useful to core (AES and 3DES and..)? There is\n> already significant overlap, and allowing core to only support certain ciphers\n> when compiled with OpenSSL isn’t any different from doing it in pgcrypto\n> really.\n>\n>> (Some thoughts from those pursuing NSS support would also be useful.)\n> Blowfish and CAST5 are not available in NSS. I've used the internal Blowfish\n> implementation as a fallback in the NSS patch and left CAST5 as not supported.\n> This proposal would mean that Blowfish too wasn’t supported in NSS builds, but\n> I personally don’t see that as a dealbreaker.\n>\n\nMaybe it would be worth creating a non-core extension for things like\nthis that we are ripping out? 
I have no idea how many people might be\nusing them.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Nov 2021 09:48:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" }, { "msg_contents": "> On 30 Oct 2021, at 14:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 24.08.21 11:13, Peter Eisentraut wrote:\n>> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n> \n> Here is a patch for this.\n\n+1 on this patch, it does what it says on the tin and lays good foundations for\nfurther work on modernizing pgcrypto. If anything, maybe the hard OpenSSL\nrequirement should be advertised earlier in the documentation?\n\nShould we consider bumping the version number of the module? While it's true\nthat everyone will have to recompile anyways, and there are no changes in\nexposed functionality, it might be an easier sell for those using it without OpenSSL\nif the version number indicates a change.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 2 Nov 2021 11:06:46 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" 
}, { "msg_contents": "> On 30 Oct 2021, at 14:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 24.08.21 11:13, Peter Eisentraut wrote:\n>> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n> \n> Here is a patch for this.\n\nThis patch doesn't work on Windows, which I think is because it pulls in\npgcrypto even in builds without OpenSSL. Poking at that led me to realize that\nwe can simplify even more with this. The conditional source includes can go\naway and be replaced with a simple OBJS clause, and with that the special hacks\nin Mkvcbuild.pm to overcome that.\n\nAttached is a diff on top of your patch to do the above. I haven't tested it\non Windows yet, but if you think it's in the right direction we'll take it for\na spin in a CI with/without OpenSSL.\n\nNow, *if* we merge the NSS patch this does introduce special cases again which\nthis rips out. I prefer to try and fix them in that patch to keep avoiding the\nneed for them rather than keep them on speculation for a patch which hasn't\nbeen decided on.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 3 Nov 2021 11:16:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" 
}, { "msg_contents": "On 03.11.21 11:16, Daniel Gustafsson wrote:\n>> On 30 Oct 2021, at 14:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 24.08.21 11:13, Peter Eisentraut wrote:\n>>> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n>>\n>> Here is a patch for this.\n> \n> This patch doesn't work on Windows, which I think is because it pulls in\n> pgcrypto even in builds without OpenSSL. Poking at that led me to realize that\n> we can simplify even more with this. The conditional source includes can go\n> away and be replaced with a simple OBJS clause, and with that the special hacks\n> in Mkvcbuild.pm to overcome that.\n> \n> Attached is a diff on top of your patch to do the above. I haven't tested it\n> on Windows yet, but if you think it's in the right direction we'll take it for\n> a spin in a CI with/without OpenSSL.\n\nHere is a consolidated patch. I have tested it locally, so it should be \nokay on Windows.\n\n> Now, *if* we merge the NSS patch this does introduce special cases again which\n> this rips out. I prefer to try and fix them in that patch to keep avoiding the\n> need for them rather than keep them on speculation for a patch which hasn't\n> been decided on.\n\nOkay, I wasn't sure about the preferred way forward here. I'm content \nwith the approach you have chosen.", "msg_date": "Wed, 3 Nov 2021 16:06:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: remove internal support in pgcrypto?" 
}, { "msg_contents": "> On 3 Nov 2021, at 16:06, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 03.11.21 11:16, Daniel Gustafsson wrote:\n>>> On 30 Oct 2021, at 14:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>> \n>>> On 24.08.21 11:13, Peter Eisentraut wrote:\n>>>> So I'm tempted to suggest that we remove the built-in, non-OpenSSL cipher and hash implementations in pgcrypto (basically INT_SRCS in pgcrypto/Makefile), and then also pursue the simplifications in the OpenSSL code paths described in [0].\n>>> \n>>> Here is a patch for this.\n>> This patch doesn't work on Windows, which I think is because it pulls in\n>> pgcrypto even in builds without OpenSSL. Poking at that led me to realize that\n>> we can simplify even more with this. The conditional source includes can go\n>> away and be replaced with a simple OBJS clause, and with that the special hacks\n>> in Mkvcbuild.pm to overcome that.\n>> Attached is a diff on top of your patch to do the above. I haven't tested it\n>> on Windows yet, but if you think it's in the right direction we'll take it for\n>> a spin in a CI with/without OpenSSL.\n> \n> Here is a consolidated patch. I have tested it locally, so it should be okay on Windows.\n\nI don't think this bit is correct, as OSSL_TESTS have been removed from the Makefile:\n\n-\t\t\t $config->{openssl}\n-\t\t\t ? GetTests(\"OSSL_TESTS\", $m)\n-\t\t\t : GetTests(\"INT_TESTS\", $m);\n+\t\t\t GetTests(\"OSSL_TESTS\", $m);\n\nI think we need something like the (untested) diff below:\n\ndiff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl\nindex e3a323b8bf..fc2406b2be 100644\n--- a/src/tools/msvc/vcregress.pl\n+++ b/src/tools/msvc/vcregress.pl\n@@ -729,13 +729,10 @@ sub fetchTests\n # pgcrypto is special since the tests depend on the\n # configuration of the build\n\n- my $cftests =\n- GetTests(\"OSSL_TESTS\", $m);\n my $pgptests =\n $config->{zlib}\n ? 
GetTests(\"ZLIB_TST\", $m)\n : GetTests(\"ZLIB_OFF_TST\", $m);\n- $t =~ s/\\$\\(CF_TESTS\\)/$cftests/;\n $t =~ s/\\$\\(CF_PGP_TESTS\\)/$pgptests/;\n }\n }\n\n>> Now, *if* we merge the NSS patch this does introduce special cases again which\n>> this rips out. I prefer to try and fix them in that patch to keep avoiding the\n>> need for them rather than keep them on speculation for a patch which hasn't\n>> been decided on.\n> \n> Okay, I wasn't sure about the preferred way forward here. I'm content with the approach you have chosen.\n\nI'm honestly not sure either; but as the NSS patch author, if I break it I get\nto keep both pieces =)\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 21:10:25 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" }, { "msg_contents": "On 03.11.21 21:10, Daniel Gustafsson wrote:\n>> Here is a consolidated patch. I have tested it locally, so it should be okay on Windows.\n> \n> I don't think this bit is correct, as OSSL_TESTS have been removed from the Makefile:\n> \n> -\t\t\t $config->{openssl}\n> -\t\t\t ? GetTests(\"OSSL_TESTS\", $m)\n> -\t\t\t : GetTests(\"INT_TESTS\", $m);\n> +\t\t\t GetTests(\"OSSL_TESTS\", $m);\n> \n> I think we need something like the (untested) diff below:\n\nCommitted with that. Thanks.\n\n\n", "msg_date": "Fri, 5 Nov 2021 15:04:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: remove internal support in pgcrypto?" }, { "msg_contents": "> On 5 Nov 2021, at 15:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 03.11.21 21:10, Daniel Gustafsson wrote:\n>>> Here is a consolidated patch. I have tested it locally, so it should be okay on Windows.\n>> I don't think this bit is correct, as OSSL_TESTS have been removed from the Makefile:\n>> -\t\t\t $config->{openssl}\n>> -\t\t\t ? 
GetTests(\"OSSL_TESTS\", $m)\n>> -\t\t\t : GetTests(\"INT_TESTS\", $m);\n>> +\t\t\t GetTests(\"OSSL_TESTS\", $m);\n>> I think we need something like the (untested) diff below:\n> \n> Committed with that. Thanks.\n\nGreat! I guess I have some rebasing ahead of me then.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 5 Nov 2021 15:12:00 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove internal support in pgcrypto?" } ]
[ { "msg_contents": "Hi,\n\ntab completion for \"create unlogged\" gives this:\n\npostgres=# create unlogged \nMATERIALIZED VIEW TABLE \n\nGiven that a materialized view cannot be unlogged:\n\npostgres=# create unlogged materialized view mv1 as select 1;\nERROR: materialized views cannot be unlogged\n\nShould this really show up there?\n\nRegards\nDaniel\n\n", "msg_date": "Tue, 24 Aug 2021 11:32:14 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "Tab completion for \"create unlogged\" a bit too lax?" }, { "msg_contents": "On Tue, Aug 24, 2021 at 11:32:14AM +0000, Daniel Westermann (DWE) wrote:\n> tab completion for \"create unlogged\" gives this:\n> \n> postgres=# create unlogged \n> MATERIALIZED VIEW TABLE \n> \n> Given that a materialized view cannot be unlogged:\n> \n> postgres=# create unlogged materialized view mv1 as select 1;\n> ERROR: materialized views cannot be unlogged\n> \n> Should this really show up there?\n\nIt seems to be deliberate:\n\ncommit 3223b25ff737c2bf4a642c0deb7be2b30bfecc6e\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon May 6 11:57:05 2013 -0400\n\n Disallow unlogged materialized views.\n...\n I left the grammar and tab-completion support for CREATE UNLOGGED\n MATERIALIZED VIEW in place, since it's harmless and allows delivering a\n more specific error message about the unsupported feature.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 24 Aug 2021 06:52:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Tab completion for \"create unlogged\" a bit too lax?" 
}, { "msg_contents": ">On Tue, Aug 24, 2021 at 11:32:14AM +0000, Daniel Westermann (DWE) wrote:\n>> tab completion for \"create unlogged\" gives this:\n>> \n>> postgres=# create unlogged \n>> MATERIALIZED VIEW  TABLE   \n>> \n>> Given that a materialized view cannot be unlogged:\n>> \n>> postgres=# create unlogged materialized view mv1 as select 1;\n>> ERROR:  materialized views cannot be unlogged\n>> \n>> Should this really show up there?\n\n>It seems to be deliberate:\n\n>commit 3223b25ff737c2bf4a642c0deb7be2b30bfecc6e\n>Author: Tom Lane <tgl@sss.pgh.pa.us>\n>Date:   Mon May 6 11:57:05 2013 -0400\n\n>    Disallow unlogged materialized views.\n>...\n>    I left the grammar and tab-completion support for CREATE UNLOGGED\n>    MATERIALIZED VIEW in place, since it's harmless and allows delivering a\n>    more specific error message about the unsupported feature.\n\nHm, I think tab completion should only give choices for operations that are supposed to work. Anyway, thanks for pointing me to the commit, that makes it more clear why it is that way.\n\nRegards\nDaniel\n\n", "msg_date": "Tue, 24 Aug 2021 12:04:28 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "Re: Tab completion for \"create unlogged\" a bit too lax?" }, { "msg_contents": "On Tue, Aug 24, 2021 at 12:04:28PM +0000, Daniel Westermann (DWE) wrote:\n> Hm, I think tab completion should only give choices for operations\n> that are supposed to work. Anyway, thanks for pointing me to the\n> commit, that makes it more clear why it is that way.\n\nFWIW, my position on that is that there is no point to recommend\ngrammars that will return errors, and many code paths of\ntab-complete.c list only their options available ignoring ones that\nfail. 
And we are talking about a one-line change as all the code\npaths of CREATE UNLOGGED only have code to handle the case of TABLE.\n--\nMichael", "msg_date": "Wed, 25 Aug 2021 09:21:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tab completion for \"create unlogged\" a bit too lax?" } ]
[ { "msg_contents": "Avoid using ambiguous word \"positive\" in error message.\n\nThere are two identical error messages about valid value of modulus for\nhash partition, in PostgreSQL source code. Commit 0e1275fb07 improved\nonly one of them so that ambiguous word \"positive\" was avoided there,\nand forgot to improve the other. This commit improves the other.\nWhich would reduce translator burden.\n\nBack-patch to v11 where the error message exists.\n\nAuthor: Kyotaro Horiguchi\nReviewed-by: Fujii Masao\nDiscussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/170aec63cd7139b453c52ad52bbeb83993faa31d\n\nModified Files\n--------------\nsrc/backend/parser/parse_utilcmd.c | 2 +-\nsrc/test/regress/expected/alter_table.out | 2 +-\nsrc/test/regress/expected/create_table.out | 2 +-\n3 files changed, 3 insertions(+), 3 deletions(-)", "msg_date": "Wed, 25 Aug 2021 02:48:58 +0000", "msg_from": "Fujii Masao <fujii@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Avoid using ambiguous word \"positive\" in error message." }, { "msg_contents": "On Tue, Aug 24, 2021 at 10:49 PM Fujii Masao <fujii@postgresql.org> wrote:\n> Avoid using ambiguous word \"positive\" in error message.\n\nThe new style seems good, but I don't really agree that \"positive\" and\n\"non-negative\" are ambiguous. \"positive\" means >0 and \"non-negative\"\nmeans >= 0, because 0 is neither positive nor negative.\n\nThis is just nitpicking, though. I think the change is an improvement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:03:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid using ambiguous word \"positive\" in error message." 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The new style seems good, but I don't really agree that \"positive\" and\n> \"non-negative\" are ambiguous. \"positive\" means >0 and \"non-negative\"\n> means >= 0, because 0 is neither positive nor negative.\n\nWell, the point is precisely that not everyone makes that distinction.\nI agree that everyone will read \"non-negative\" as \">= 0\"; but there's\na fair percentage of the population that uses \"positive\" the same way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:16:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid using ambiguous word \"positive\" in error message." }, { "msg_contents": "On Mon, Aug 30, 2021 at 10:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > The new style seems good, but I don't really agree that \"positive\" and\n> > \"non-negative\" are ambiguous. \"positive\" means >0 and \"non-negative\"\n> > means >= 0, because 0 is neither positive nor negative.\n>\n> Well, the point is precisely that not everyone makes that distinction.\n> I agree that everyone will read \"non-negative\" as \">= 0\"; but there's\n> a fair percentage of the population that uses \"positive\" the same way.\n\nThe mathematician in me recoils.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:19:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid using ambiguous word \"positive\" in error message." }, { "msg_contents": "\nOn 8/30/21 10:19 AM, Robert Haas wrote:\n> On Mon, Aug 30, 2021 at 10:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> The new style seems good, but I don't really agree that \"positive\" and\n>>> \"non-negative\" are ambiguous. 
\"positive\" means >0 and \"non-negative\"\n>>> means >= 0, because 0 is neither positive nor negative.\n>> Well, the point is precisely that not everyone makes that distinction.\n>> I agree that everyone will read \"non-negative\" as \">= 0\"; but there's\n>> a fair percentage of the population that uses \"positive\" the same way.\n> The mathematician in me recoils.\n>\n\nYep, me too. IIRC Ada comes with a predefined subtype named \"Positive\"\nwhich has the range 1..Integer'Max. It also has \"Natural\" which includes\nzero.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 09:16:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Avoid using ambiguous word \"positive\" in error message." } ]
[ { "msg_contents": "While trying to refactor the node support in various ways, the Value \nnode is always annoying.\n\nThe Value node struct is a weird construct. It is its own node type,\nbut most of the time, it actually has a node type of Integer, Float,\nString, or BitString. As a consequence, the struct name and the node\ntype don't match most of the time, and so it has to be treated\nspecially a lot. There doesn't seem to be any value in the special\nconstruct. There is very little code that wants to accept all Value\nvariants but nothing else (and even if it did, this doesn't provide\nany convenient way to check it), and most code wants either just one\nparticular node type (usually String), or it accepts a broader set of\nnode types besides just Value.\n\nThis change removes the Value struct and node type and replaces them\nby separate Integer, Float, String, and BitString node types that are\nproper node types and structs of their own and behave mostly like\nnormal node types.\n\nAlso, this removes the T_Null node tag, which was previously also a\npossible variant of Value but wasn't actually used outside of the\nValue contained in A_Const. Replace that by an isnull field in\nA_Const.", "msg_date": "Wed, 25 Aug 2021 15:19:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove Value node struct" }, { "msg_contents": "On Wed, Aug 25, 2021 at 9:20 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> This change removes the Value struct and node type and replaces them\n> by separate Integer, Float, String, and BitString node types that are\n> proper node types and structs of their own and behave mostly like\n> normal node types.\n\n+1. I noticed this years ago and never thought of doing anything about\nit. 
I'm glad you did think of it...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Aug 2021 09:49:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> While trying to refactor the node support in various ways, the Value\n> node is always annoying.\n[…]\n> This change removes the Value struct and node type and replaces them\n> by separate Integer, Float, String, and BitString node types that are\n> proper node types and structs of their own and behave mostly like\n> normal node types.\n\nThis looks like a nice cleanup overall, independent of any future\nrefactoring.\n\n> Also, this removes the T_Null node tag, which was previously also a\n> possible variant of Value but wasn't actually used outside of the\n> Value contained in A_Const. Replace that by an isnull field in\n> A_Const.\n\nHowever, the patch adds:\n\n> +typedef struct Null\n> +{\n> +\tNodeTag\t\ttype;\n> +\tchar\t *val;\n> +} Null;\n\nwhich doesn't seem to be used anywhere. Is that a leftover from an\nintermediate development stage?\n\n- ilmari\n\n\n", "msg_date": "Wed, 25 Aug 2021 15:00:13 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "On Wed, Aug 25, 2021 at 9:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 25, 2021 at 9:20 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > This change removes the Value struct and node type and replaces them\n> > by separate Integer, Float, String, and BitString node types that are\n> > proper node types and structs of their own and behave mostly like\n> > normal node types.\n>\n> +1. I noticed this years ago and never thought of doing anything about\n> it. 
I'm glad you did think of it...\n\n+1, it also bothered me in the past.\n\n\n", "msg_date": "Sat, 28 Aug 2021 11:41:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "Agree to the motive and +1 for the concept.\n\nAt Wed, 25 Aug 2021 15:00:13 +0100, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote in \n> However, the patch adds:\n> \n> > +typedef struct Null\n> > +{\n> > +\tNodeTag\t\ttype;\n> > +\tchar\t *val;\n> > +} Null;\n> \n> which doesn't seem to be used anywhere. Is that a leftover from an\n> intermediate development stage?\n\n+1 Looks like so, it can be simply removed.\n\n0001 looks fine to me.\n\n0002:\n there's an \"integer Value node\" in gram.y: 7776.\n\n-\t\t\tn = makeFloatConst(v->val.str, location);\n+\t\t\tn = (Node *) makeFloatConst(castNode(Float, v)->val, location);\n\nmakeFloatConst is Node* so the cast doesn't seem needed. The same can\nbe said for Int and String Consts. This looks like a confusion with\nmakeInteger and friends.\n\n+\telse if (IsA(obj, Integer))\n+\t\t_outInteger(str, (Integer *) obj);\n+\telse if (IsA(obj, Float))\n+\t\t_outFloat(str, (Float *) obj);\n\nI felt that the type names are a bit confusing as they might be too\ngeneric, or too close with the corresponding binary types.\n\n\n-\tNode\t *arg;\t\t\t/* a (Value *) or a (TypeName *) */\n+\tNode\t *arg;\n\nMmm. It's a bit pity that we lose the generic name for the value nodes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 30 Aug 2021 11:13:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "On 30.08.21 04:13, Kyotaro Horiguchi wrote:\n>> However, the patch adds:\n>>\n>>> +typedef struct Null\n>>> +{\n>>> +\tNodeTag\t\ttype;\n>>> +\tchar\t *val;\n>>> +} Null;\n>>\n>> which doesn't seem to be used anywhere. 
Is that a leftoverf from an\n>> intermediate development stage?\n> \n> +1 Looks like so, it can be simply removed.\n\nfixed\n\n> 0002:\n> there's an \"integer Value node\" in gram.y: 7776.\n\nfixed\n\n> -\t\t\tn = makeFloatConst(v->val.str, location);\n> +\t\t\tn = (Node *) makeFloatConst(castNode(Float, v)->val, location);\n> \n> makeFloatConst is Node* so the cast doesn't seem needed. The same can\n> be said for Int and String Consts. This looks like a confustion with\n> makeInteger and friends.\n\nfixed\n\n> +\telse if (IsA(obj, Integer))\n> +\t\t_outInteger(str, (Integer *) obj);\n> +\telse if (IsA(obj, Float))\n> +\t\t_outFloat(str, (Float *) obj);\n> \n> I felt that the type enames are a bit confusing as they might be too\n> generic, or too close with the corresponding binary types.\n> \n> \n> -\tNode\t *arg;\t\t\t/* a (Value *) or a (TypeName *) */\n> +\tNode\t *arg;\n> \n> Mmm. It's a bit pity that we lose the generic name for the value nodes.\n\nNot sure what you mean here.", "msg_date": "Tue, 7 Sep 2021 11:22:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "At Tue, 7 Sep 2021 11:22:24 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 30.08.21 04:13, Kyotaro Horiguchi wrote:\n> > +\telse if (IsA(obj, Integer))\n> > +\t\t_outInteger(str, (Integer *) obj);\n> > +\telse if (IsA(obj, Float))\n> > +\t\t_outFloat(str, (Float *) obj);\n> > I felt that the type enames are a bit confusing as they might be too\n> > generic, or too close with the corresponding binary types.\n> > -\tNode\t *arg;\t\t\t/* a (Value *) or a (TypeName *) */\n> > +\tNode\t *arg;\n> > Mmm. It's a bit pity that we lose the generic name for the value\n> > nodes.\n> \n> Not sure what you mean here.\n\nThe member arg loses the information on what kind of nodes are to be\nstored there. 
Concretely it just removes the comment \"a (Value *) or a\n(TypeName *)\". If the (Value *) were expanded in a straight way, the\ncomment would be \"a (Integer *), (Float *), (String *), (BitString *),\nor (TypeName *)\". I supposed that the member loses the comment because\nit become too long.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 11:04:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove Value node struct" }, { "msg_contents": "On 08.09.21 04:04, Kyotaro Horiguchi wrote:\n> At Tue, 7 Sep 2021 11:22:24 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in\n>> On 30.08.21 04:13, Kyotaro Horiguchi wrote:\n>>> +\telse if (IsA(obj, Integer))\n>>> +\t\t_outInteger(str, (Integer *) obj);\n>>> +\telse if (IsA(obj, Float))\n>>> +\t\t_outFloat(str, (Float *) obj);\n>>> I felt that the type enames are a bit confusing as they might be too\n>>> generic, or too close with the corresponding binary types.\n>>> -\tNode\t *arg;\t\t\t/* a (Value *) or a (TypeName *) */\n>>> +\tNode\t *arg;\n>>> Mmm. It's a bit pity that we lose the generic name for the value\n>>> nodes.\n>>\n>> Not sure what you mean here.\n> \n> The member arg loses the information on what kind of nodes are to be\n> stored there. Concretely it just removes the comment \"a (Value *) or a\n> (TypeName *)\". If the (Value *) were expanded in a straight way, the\n> comment would be \"a (Integer *), (Float *), (String *), (BitString *),\n> or (TypeName *)\". I supposed that the member loses the comment because\n> it become too long.\n\nOk, I added the comment back in in a modified form.\n\nThe patches have been committed now. Thanks.\n\n\n", "msg_date": "Thu, 9 Sep 2021 09:23:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Remove Value node struct" } ]
[ { "msg_contents": "Hello all,\n\nI am going to refactor Greenplum backtraces for error messages and want to make it more compatible with PostgreSQL code. Backtraces in PostgreSQL were introduced by 71a8a4f6e36547bb060dbcc961ea9b57420f7190 commit (original discussion https://www.postgresql.org/message-id/CAMsr+YGL+yfWE=JvbUbnpWtrRZNey7hJ07+zT4bYJdVp4Szdrg@mail.gmail.com ) and rely on backtrace() and backtrace_symbols() functions. They are used inside errfinish() that is wrapped by ereport() macros. ereport() is invoked inside bgworker_die() and FloatExceptionHandler() signal handlers. I am confused with this fact - both backtrace functions are async-unsafe: backtrace_symbols() - always, backtrace() - only for the first call due to dlopen. I wonder why PostgreSQL uses async-unsafe functions in signal handlers?\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia\n\n\n\n", "msg_date": "Wed, 25 Aug 2021 17:22:08 +0300", "msg_from": "Denis Smirnov <sd@arenadata.io>", "msg_from_op": true, "msg_subject": "Async-unsafe functions in signal handlers" }, { "msg_contents": "\n\n> 25 авг. 2021 г., в 19:22, Denis Smirnov <sd@arenadata.io> написал(а):\n> \n> I am going to refactor Greenplum backtraces for error messages and want to make it more compatible with PostgreSQL code. Backtraces in PostgreSQL were introduced by 71a8a4f6e36547bb060dbcc961ea9b57420f7190 commit (original discussion https://www.postgresql.org/message-id/CAMsr+YGL+yfWE=JvbUbnpWtrRZNey7hJ07+zT4bYJdVp4Szdrg@mail.gmail.com ) and rely on backtrace() and backtrace_symbols() functions. They are used inside errfinish() that is wrapped by ereport() macros. ereport() is invoked inside bgworker_die() and FloatExceptionHandler() signal handlers. I am confused with this fact - both backtrace functions are async-unsafe: backtrace_symbols() - always, backtrace() - only for the first call due to dlopen. 
I wonder why does PostgreSQL use async-unsafe functions in signal handlers?\n\nIn my view GUC backtrace_functions is expected to be used for debug purposes. Not for enabling on production server for bgworker_die() or FloatExceptionHandler().\nIs there any way to call backtrace_symbols() without touching backtrace_functions?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 26 Aug 2021 10:52:44 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "As far as I understand, the main problem with backtrace_symbols() is the internal malloc() call. Backend can lock forever if malloc() was interrupted by a signal and then was evaluated again in a signal handler.\n\nAt the moment Greenplum uses \"addr2line -s -e\" (on Linux) and \"atos -o\" (on macOS) for each stack address instead of backtrace_symbols(). Both of these utils don’t use malloc() under the hood, although there is no guarantee that this implementation never changes in the future. It seems to be a safer approach, but looks like a dirty hack.\n\n> 26 авг. 2021 г., в 08:52, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> \n> \n>> 25 авг. 2021 г., в 19:22, Denis Smirnov <sd@arenadata.io> написал(а):\n>> \n>> I am going to refactor Greenplum backtraces for error messages and want to make it more compatible with PostgreSQL code. Backtraces in PostgreSQL were introduced by 71a8a4f6e36547bb060dbcc961ea9b57420f7190 commit (original discussion https://www.postgresql.org/message-id/CAMsr+YGL+yfWE=JvbUbnpWtrRZNey7hJ07+zT4bYJdVp4Szdrg@mail.gmail.com ) and rely on backtrace() and backtrace_symbols() functions. They are used inside errfinish() that is wrapped by ereport() macros. ereport() is invoked inside bgworker_die() and FloatExceptionHandler() signal handlers. 
I am confused with this fact - both backtrace functions are async-unsafe: backtrace_symbols() - always, backtrace() - only for the first call due to dlopen. I wonder why does PostgreSQL use async-unsafe functions in signal handlers?\n> \n> In my view GUC backtrace_functions is expected to be used for debug purposes. Not for enabling on production server for bgworker_die() or FloatExceptionHandler().\n> Are there any way to call backtrace_symbols() without touching backtrace_functions?\n> \n> Best regards, Andrey Borodin.\n> \n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:21:04 +0300", "msg_from": "Denis Smirnov <sd@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "On Wed, Aug 25, 2021 at 10:22 AM Denis Smirnov <sd@arenadata.io> wrote:\n> I am going to refactor Greenplum backtraces for error messages and want to make it more compatible with PostgreSQL code. Backtraces in PostgreSQL were introduced by 71a8a4f6e36547bb060dbcc961ea9b57420f7190 commit (original discussion https://www.postgresql.org/message-id/CAMsr+YGL+yfWE=JvbUbnpWtrRZNey7hJ07+zT4bYJdVp4Szdrg@mail.gmail.com ) and rely on backtrace() and backtrace_symbols() functions. They are used inside errfinish() that is wrapped by ereport() macros. ereport() is invoked inside bgworker_die() and FloatExceptionHandler() signal handlers. I am confused with this fact - both backtrace functions are async-unsafe: backtrace_symbols() - always, backtrace() - only for the first call due to dlopen. I wonder why does PostgreSQL use async-unsafe functions in signal handlers?\n\nThat is a great question. I think bgworker_die() is extremely\ndangerous and ought to be removed. 
I can't see how that can ever be\nsafe.\n\nFloatExceptionHandler() is a bit different because in theory the\nthings that could trigger SIGFPE are relatively limited, and in theory\nwe know that those are safe places to ereport(). But I'm not very\nconvinced that this is really true. Among other things, kill -FPE\ncould be executed any time, but even aside from that, I doubt we have\nexhaustive knowledge of everything in the code that could trigger a\nfloating point exception.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:01:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That is a great question. I think bgworker_die() is extremely\n> dangerous and ought to be removed. I can't see how that can ever be\n> safe.\n\nAgreed, it looks pretty dangerous from here. The equivalent (and\nfar better battle-tested) signal handlers in postgres.c are a lot\nmore circumspect --- they will call stuff that's unsafe per POSIX,\nbut not from just any interrupt point.\n\n(BTW, I think it's pretty silly to imagine that adding backtrace()\ncalls inside ereport is making things any more dangerous. ereport\nhas pretty much always carried a likelihood of calling malloc(),\nfor example.)\n\n> FloatExceptionHandler() is a bit different because in theory the\n> things that could trigger SIGFPE are relatively limited, and in theory\n> we know that those are safe places to ereport(). But I'm not very\n> convinced that this is really true. 
Among other things, kill -FPE\n> could be executed any time, but even aside from that, I doubt we have\n> exhaustive knowledge of everything in the code that could trigger a\n> floating point exception.\n\nOn the one hand, that's theoretically true, and on the other hand,\nthat code's been like that since the last century and I'm unaware\nof any actual problems. There are not many places in the backend\nthat do arithmetic that's likely to trigger SIGFPE. Besides which,\nwhat's the alternative? I suppose we could SIG_IGN SIGFPE, but\nthat is almost certainly going to create real problems (i.e. missed\nerror cases) while removing only hypothetical ones.\n\n(The \"DBA randomly issues 'kill -FPE'\" scenario can be dismissed,\nI think --- surely that is in the category of \"when you break it,\nyou get to keep both pieces\".)\n\nThe larger subtext here is that just because it's undefined per\nPOSIX doesn't necessarily mean it's unsafe in our use-pattern.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:39:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "\n> 26 авг. 2021 г., в 23:39, Tom Lane <tgl@sss.pgh.pa.us> написал(а):\n> \n> (BTW, I think it's pretty silly to imagine that adding backtrace()\n> calls inside ereport is making things any more dangerous. ereport\n> has pretty much always carried a likelihood of calling malloc(),\n> for example.)\n\nI have taken a look through the signal handlers and found out that many of them use malloc() via ereport() and elog(). 
Here is the list:\n\nSIGUSR1\n- procsignal_sigusr1_handler(): autoprewarm, autovacuum, bgworker, bgwriter, checkpointer, pgarch, startup, walwriter, walreciever, walsender\n- sigusr1_handler(): postmaster\n\nSIGFPE:\n- FloatExceptionHandler(): autovacuum, bgworker, postgres, plperl\n\nSIGHUP:\n- SIGHUP_handler(): postmaster\n\nSIGCHLD:\n- reaper(): postmaster\n\nSIGQUIT:\n- quickdie(): postgres\n\nSIGTERM:\n- bgworker_die(): bgworker\n\nSIGALRM:\n- handle_sig_alarm(): autovacuum, bgworker, postmaster, startup, walsender, postgres\n\nI suspect there are lots of potential ways to lock on malloc() inside any of this handlers. An interesting question is why there are still no evidence of such locks?\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 23:51:27 +0300", "msg_from": "Denis Smirnov <sd@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "Hi,\n\nOn 2021-08-27 23:51:27 +0300, Denis Smirnov wrote:\n> > 26 авг. 2021 г., в 23:39, Tom Lane <tgl@sss.pgh.pa.us> написал(а):\n> > \n> > (BTW, I think it's pretty silly to imagine that adding backtrace()\n> > calls inside ereport is making things any more dangerous. ereport\n> > has pretty much always carried a likelihood of calling malloc(),\n> > for example.)\n> \n> I have taken a look through the signal handlers and found out that many of them use malloc() via ereport() and elog(). Here is the list:\n> \n> SIGUSR1\n> - procsignal_sigusr1_handler(): autoprewarm, autovacuum, bgworker, bgwriter, checkpointer, pgarch, startup, walwriter, walreciever, walsender\n\nThere shouldn't be meaningful uses of elog/ereport() inside\nprocsignal_sigusr1_handler(). 
The exception I found was an elog(FATAL) for\nunreachable code.\n\n\n> - sigusr1_handler(): postmaster\n> SIGHUP:\n> - SIGHUP_handler(): postmaster\n> SIGCHLD:\n> - reaper(): postmaster\n\nI think these run in a very controlled set of circumstances because most of\npostmaster runs with signals masked.\n\n\n> SIGFPE:\n> - FloatExceptionHandler(): autovacuum, bgworker, postgres, plperl\n\nYep, although as discussed this might not be a \"real\" problem because it\nshould only run during an instruction triggering an FPE.\n\n\n> SIGQUIT:\n> - quickdie(): postgres\n\nYes, this is an issue. I've previously argued for handling this via write()\nand _exit(), instead of the full ereport() machinery. However, we have a\nbandaid that deals with possible hangs, by SIGKILLing when processes don't\nshut down (at that point things have already gone quite south, so that's not\nan issue).\n\n\n> SIGTERM:\n> - bgworker_die(): bgworker\n\nBad.\n\n\n> SIGALRM:\n> - handle_sig_alarm(): autovacuum, bgworker, postmaster, startup, walsender, postgres\n\nI don't think there are reachable elogs in there. I'm not concerned about e.g.\n\t\telog(FATAL, \"timeout index %d out of range 0..%d\", index,\n\t\t\t num_active_timeouts - 1);\nbecause that's not something that should ever be reachable in a production\nscenario. If it is, there's bigger problems.\n\n\nPerhaps we ought to have a version of Assert() that's enabled in production\nbuilds as well, and that outputs the error messages via write() and then\n_exit()s?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Aug 2021 14:05:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "\n> 28 авг. 
2021 г., в 07:05, Andres Freund <andres@anarazel.de> написал(а):\n> \n> However, we have a\n> bandaid that deals with possible hangs, by SIGKILLing when processes don't\n> shut down (at that point things have already gone quite south, so that's not\n> an issue).\n\nThanks for the explanation. I can see that child process SIGKILL machinery was introduced by 82233ce7ea42d6ba519aaec63008aff49da6c7af commit to fix a malloc() deadlock in quickdie() signal handler. As a result, all child processes that take too long to die are killed in the ServerLoop() with SIGKILL. But bgworker_die() is a problem as we initialize bgworkers right before ServerLoop(). So we can face malloc() deadlock on postmaster startup (before ServerLoop() started). Maybe we should simply use write() and exit() instead of ereport() for bgworker_die()?\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 10:45:57 +1000", "msg_from": "Denis Smirnov <sd@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Async-unsafe functions in signal handlers" }, { "msg_contents": "Honestly, I don’t know what to do with bgworker_die(). At the moment it produces ereport(FATAL) with async-unsafe proc_exit_prepare() and exit() under the hood. I can see three solutions:\n\n1. Leave the code as is. Then SIGTERM can produce deadlocks in bgworker's signal handler. The locked process can be terminated with an immediate shutdown <https://github.com/postgres/postgres/commit/82233ce7ea42d6ba519aaec63008aff49da6c7af> of the cluster. Maybe it is ok as we don’t expect to send SIGTERM to bgworker too often.\n\n2. Use async-safe _exit() in a signal handler instead of proc_exit_prepare() and exit(). In this case we’ll have to go through cluster recovery as the bgworker doesn't properly clean its shared memory. This solution is even worse than an immediate shutdown as we recover for every SIGTERM that has been sent to a bgworker.\n\n3. Set a signal flag inside the handler (something like miscadmin.h XXX_INTERRUPTS() macros). So it becomes an extension developer's responsibility to properly handle this flag in the bgworker’s code. This approach breaks backward compatibility.\n\nMaybe I've missed a good solution, do you see any?\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia", "msg_date": "Tue, 31 Aug 2021 00:26:09 +1000", "msg_from": "Denis Smirnov <sd@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Async-unsafe functions in signal handlers" } ]
[ { "msg_contents": "Hello\n\nWhile executing the regression tests for MobilityDB I load a predefined\ndatabase on which I run the tests and then compare the results obtained\nwith those expected. All the tests are driven by the following bash file\nhttps://github.com/MobilityDB/MobilityDB/blob/develop/test/scripts/test.sh\n\nHowever, I continuously receive at a random step in the process the\nfollowing error in the log file\n\n2021-08-25 16:48:13.608 CEST [22375] LOG: received fast shutdown request\n2021-08-25 16:48:13.622 CEST [22375] LOG: aborting any active transactions\n2021-08-25 16:48:13.622 CEST [22375] LOG: background worker \"logical\nreplication launcher\" (PID 22382) exited with exit code 1\n2021-08-25 16:48:13.623 CEST [22377] LOG: shutting down\n2021-08-25 16:48:13.971 CEST [22375] LOG: database system is shut down\n\nand sometimes I need to relaunch *numerous* times the whole build process\nin CMake\nhttps://github.com/MobilityDB/MobilityDB/blob/develop/CMakeLists.txt\nto finalize the tests\n\n/* While on MobilityDB/build directory */\nrm -rf *\ncmake ..\nmake\nmake test\n\nAny idea where I can begin looking at the problem ?\n\nThanks for your help\n\nEsteban", "msg_date": "Wed, 25 Aug 2021 17:01:32 +0200", "msg_from": "Esteban Zimanyi <esteban.zimanyi@ulb.be>", "msg_from_op": true, "msg_subject": "Regression tests for MobilityDB: Continous shutdowns at a random step" }, { "msg_contents": "Esteban Zimanyi <esteban.zimanyi@ulb.be> writes:\n> However, I continuously receive at a random step in the process the\n> following error in the log file\n\n> 2021-08-25 16:48:13.608 CEST [22375] LOG: received fast shutdown request\n\nThis indicates that something sent the postmaster SIGINT.\nYou need to look around for something in your test environment\nthat would do that. Possibly you need to decouple the test\nprocesses from your terminal session?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Aug 2021 11:34:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests for MobilityDB: Continous shutdowns at a random\n step" } ]
[ { "msg_contents": "log_autovacuum output looks like this (as of Postgres 14):\n\nLOG: automatic vacuum of table \"regression.public.bmsql_order_line\":\nindex scans: 1\npages: 0 removed, 8810377 remain, 0 skipped due to pins, 3044924 frozen\ntuples: 16819838 removed, 576364686 remain, 2207444 are dead but not\nyet removable, oldest xmin: 88197949\nbuffer usage: 174505 hits, 7630386 misses, 5628271 dirtied\nindex scan needed: 1959301 pages from table (22.24% of total) had\n11745226 dead item identifiers removed\nindex \"bmsql_order_line_pkey\": pages: 2380261 in total, 0 newly\ndeleted, 0 currently deleted, 0 reusable\navg read rate: 65.621 MB/s, avg write rate: 48.403 MB/s\nI/O timings: read: 65813.666 ms, write: 11310.689 ms\nsystem usage: CPU: user: 72.55 s, system: 52.07 s, elapsed: 908.42 s\nWAL usage: 7387358 records, 4051205 full page images, 28472185998 bytes\n\nI think that this output is slightly misleading. I'm concerned about\nthe specific order of the lines here: the \"buffer usage\" line comes\nafter the information that applies specifically to the heap structure,\nbut before the information about indexes. This is the case despite the\nfact that its output applies to all buffers (not just those for the\nheap structure).\n\nIt would be a lot clearer if the \"buffer usage\" line was simply moved\ndown. I think that it should appear after the lines that are specific\nto the table's indexes -- just before the \"avg read rate\" line. 
That\nway we'd group the buffer usage output with all of the other I/O\nrelated output that summarizes the VACUUM operation as a whole.\n\nI propose changing the ordering along those lines, and backpatching\nthe change to Postgres 14.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Aug 2021 10:34:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 10:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> It would be a lot clearer if the \"buffer usage\" line was simply moved\n> down. I think that it should appear after the lines that are specific\n> to the table's indexes -- just before the \"avg read rate\" line. That\n> way we'd group the buffer usage output with all of the other I/O\n> related output that summarizes the VACUUM operation as a whole.\n>\n\nThe last two lines are also \"*** usage\" -- shouldn't the buffer numbers be\nnext to them?", "msg_date": "Wed, 25 Aug 2021 11:42:22 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> log_autovacuum output looks like this (as of Postgres 14):\n> \n> LOG: automatic vacuum of table \"regression.public.bmsql_order_line\":\n> index scans: 1\n> pages: 0 removed, 8810377 remain, 0 skipped due to pins, 3044924 frozen\n> tuples: 16819838 removed, 576364686 remain, 2207444 are dead but not\n> yet removable, oldest xmin: 88197949\n> buffer usage: 174505 hits, 7630386 misses, 5628271 dirtied\n> index scan needed: 1959301 pages from table (22.24% of total) had\n> 11745226 dead item identifiers removed\n> index \"bmsql_order_line_pkey\": pages: 2380261 in total, 0 newly\n> deleted, 0 currently deleted, 0 reusable\n> avg read rate: 65.621 MB/s, avg write rate: 48.403 MB/s\n> I/O timings: read: 65813.666 ms, write: 11310.689 ms\n> system usage: CPU: user: 72.55 s, system: 52.07 s, elapsed: 908.42 s\n> WAL usage: 7387358 records, 4051205 full page images, 28472185998 bytes\n> \n> I think that this output is slightly misleading. I'm concerned about\n> the specific order of the lines here: the \"buffer usage\" line comes\n> after the information that applies specifically to the heap structure,\n> but before the information about indexes. This is the case despite the\n> fact that its output applies to all buffers (not just those for the\n> heap structure).\n> \n> It would be a lot clearer if the \"buffer usage\" line was simply moved\n> down. 
I think that it should appear after the lines that are specific\n> to the table's indexes -- just before the \"avg read rate\" line. That\n> way we'd group the buffer usage output with all of the other I/O\n> related output that summarizes the VACUUM operation as a whole.\n> \n> I propose changing the ordering along those lines, and backpatching\n> the change to Postgres 14.\n\nI don't have any particular issue with moving them.\n\nThanks,\n\nStephen", "msg_date": "Wed, 25 Aug 2021 16:33:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 11:42 AM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n> The last two lines are also \"*** usage\" -- shouldn't the buffer numbers be next to them?\n\nI agree that that would be better still -- but all the \"usage\" stuff\ntogether in one block.\n\nAnd that leads me to another observation: The track_io_timing stuff\n(also new to Postgres 14) might also need to be reordered. And maybe\neven the WAL usage stuff, which was added in Postgres 13.\n\nThat way the overall structure starts with details of the physical\ndata structures (the table and its indexes), then goes into buffers\n\n1. Heap pages\n2. Heap tuples\n3. Index stuff\n4. I/O timings (only when track_io_timing is on)\n5. avg read rate (always)\n6. buffer usage\n7. WAL usage.\n8. system usage\n\nThis would mean that I'd be flipping the order of 7 and 8 relative to\nPostgres 13 -- meaning there'd be one difference between Postgres 14\nand some existing stable release. But I think that putting WAL usage\nlast of all (after system usage) makes little sense -- commit\nb7ce6de93b shouldn't have done it that way. 
I always expect to see the\ngetrusage() stuff last.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Aug 2021 13:41:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 1:33 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't have any particular issue with moving them.\n\nWhat do you think of the plan I just outlined to Nikolay?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Aug 2021 13:41:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On 2021-Aug-25, Peter Geoghegan wrote:\n\n> That way the overall structure starts with details of the physical\n> data structures (the table and its indexes), then goes into buffers\n> \n> 1. Heap pages\n> 2. Heap tuples\n> 3. Index stuff\n> 4. I/O timings (only when track_io_timing is on)\n> 5. avg read rate (always)\n> 6. buffer usage\n> 7. WAL usage.\n> 8. system usage\n> \n> This would mean that I'd be flipping the order of 7 and 8 relative to\n> Postgres 13 -- meaning there'd be one difference between Postgres 14\n> and some existing stable release. But I think that putting WAL usage\n> last of all (after system usage) makes little sense -- commit\n> b7ce6de93b shouldn't have done it that way. 
I always expect to see the\n> getrusage() stuff last.\n\nYou mean:\n\nLOG: automatic vacuum of table \"regression.public.bmsql_order_line\": index scans: 1\npages: 0 removed, 8810377 remain, 0 skipped due to pins, 3044924 frozen\ntuples: 16819838 removed, 576364686 remain, 2207444 are dead but not yet removable, oldest xmin: 88197949\nindex scan needed: 1959301 pages from table (22.24% of total) had 11745226 dead item identifiers removed\nindex \"bmsql_order_line_pkey\": pages: 2380261 in total, 0 newly deleted, 0 currently deleted, 0 reusable\nI/O timings: read: 65813.666 ms, write: 11310.689 ms\navg read rate: 65.621 MB/s, avg write rate: 48.403 MB/s\nbuffer usage: 174505 hits, 7630386 misses, 5628271 dirtied\nWAL usage: 7387358 records, 4051205 full page images, 28472185998 bytes\nsystem usage: CPU: user: 72.55 s, system: 52.07 s, elapsed: 908.42 s\n\nI like it better than the current layout, so +1.\n\n\nI think the \"index scan needed\" line (introduced very late in the 14\ncycle, commit 5100010ee4d5 dated April 7 2021) is a bit odd. It is\ntelling us stuff about the table -- how many pages had TIDs removed, am\nI reading that right? -- and it is also telling us whether indexes were\nscanned. But the fact that it starts with \"index scan needed\" suggests\nthat it's talking about indexes. I think we should reword this line. I\ndon't have any great ideas; what do you think of this?\n\ndead items: 1959301 pages from table (22.24% of total) had 11745226 dead item identifiers removed; index scan {needed, not needed, bypassed, bypassed by failsafe}\n\nI have to say that I am a bit bothered about the coding pattern used to\nbuild this sentence from two parts. I'm not sure it'll work okay in\nlanguages that build sentences in different ways. 
Maybe we should split\nthis in two lines, one to give the numbers and the other to talk about\nthe decision taken about indexes.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Wed, 25 Aug 2021 17:06:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Wed, Aug 25, 2021 at 11:42 AM Nikolay Samokhvalov\n> <samokhvalov@gmail.com> wrote:\n> > The last two lines are also \"*** usage\" -- shouldn't the buffer numbers be next to them?\n> \n> I agree that that would be better still -- but all the \"usage\" stuff\n> together in one block.\n> \n> And that leads me to another observation: The track_io_timing stuff\n> (also new to Postgres 14) might also need to be reordered. And maybe\n> even the WAL usage stuff, which was added in Postgres 13.\n> \n> That way the overall structure starts with details of the physical\n> data structures (the table and its indexes), then goes into buffers\n> \n> 1. Heap pages\n> 2. Heap tuples\n> 3. Index stuff\n> 4. I/O timings (only when track_io_timing is on)\n> 5. avg read rate (always)\n> 6. buffer usage\n> 7. WAL usage.\n> 8. system usage\n> \n> This would mean that I'd be flipping the order of 7 and 8 relative to\n> Postgres 13 -- meaning there'd be one difference between Postgres 14\n> and some existing stable release. But I think that putting WAL usage\n> last of all (after system usage) makes little sense -- commit\n> b7ce6de93b shouldn't have done it that way. 
I always expect to see the\n> getrusage() stuff last.\n\nI generally like the idea though I'm not sure about changing things in\nv13 as there's likely code out there that's already parsing that data\nand it might suddenly break if this was changed.\n\nGiven that such code would need to be adjusted for v14 anyway, I don't\nreally see changing it in v14 as as being an issue (nor do I feel that\nit's even a big concern at this point in the release cycle, though\nperhaps others feel differently).\n\nThanks,\n\nStephen", "msg_date": "Wed, 25 Aug 2021 17:07:22 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 2:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> You mean:\n>\n> LOG: automatic vacuum of table \"regression.public.bmsql_order_line\": index scans: 1\n> pages: 0 removed, 8810377 remain, 0 skipped due to pins, 3044924 frozen\n> tuples: 16819838 removed, 576364686 remain, 2207444 are dead but not yet removable, oldest xmin: 88197949\n> index scan needed: 1959301 pages from table (22.24% of total) had 11745226 dead item identifiers removed\n> index \"bmsql_order_line_pkey\": pages: 2380261 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n> I/O timings: read: 65813.666 ms, write: 11310.689 ms\n> avg read rate: 65.621 MB/s, avg write rate: 48.403 MB/s\n> buffer usage: 174505 hits, 7630386 misses, 5628271 dirtied\n> WAL usage: 7387358 records, 4051205 full page images, 28472185998 bytes\n> system usage: CPU: user: 72.55 s, system: 52.07 s, elapsed: 908.42 s\n\nYes, exactly.\n\n> I like it better than the current layout, so +1.\n\n This seems like a release housekeeping task to me. 
I'll come up with\na patch targeting 14 and master in a few days.\n\n> I think the \"index scan needed\" line (introduced very late in the 14\n> cycle, commit 5100010ee4d5 dated April 7 2021) is a bit odd.\n\nBut that's largely a reflection of what's going on here.\n\n> It is\n> telling us stuff about the table -- how many pages had TIDs removed, am\n> I reading that right? -- and it is also telling us whether indexes were\n> scanned. But the fact that it starts with \"index scan needed\" suggests\n> that it's talking about indexes.\n\nThe question of whether or not we do an index scan (i.e. index\nvacuuming) depends entirely on the number of LP_DEAD items that heap\npruning left behind in the table structure. Actually, sometimes it's\n~100% opportunistic pruning that happens to run outside of VACUUM (in\nwhich case VACUUM merely notices and collects TIDs to delete from\nindexes) -- it depends entirely on the workload. This isn't a new\nthing added in commit 5100010ee4d5, really. That commit merely made\nthe index-bypass behavior not only occur when we had precisely 0 items\nto delete from indexes -- now it can be skipped when the percentage of\nheap pages with one or more LP_DEAD items is < 2%. So yes: this \"pages\nfrom table\" output *is* primarily concerned with what happened with\nindexes, even though the main piece of information says something\nabout the heap/table structure.\n\nNote that in general a table could easily have many many more \"tuples:\nN removed\" than \"N dead item identifiers removed\" in its\nlog_autovacuum output -- this is very common (any table that mostly or\nonly gets HOT updates and no deletes will look like that). The\nopposite situation is also possible, and almost as common with tables\nthat only get non-HOT updates. 
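Peter's explanation can be condensed into arithmetic. The sketch below is a hedged simplification of the rule he describes (commit 5100010ee4d5: index vacuuming may be bypassed when fewer than 2% of heap pages carry LP_DEAD items), not the actual vacuumlazy.c logic, which has additional conditions such as the failsafe; the function name and constant here are illustrative only:

```python
BYPASS_THRESHOLD_PAGES = 0.02  # the "< 2% of heap pages" rule described above

def index_vacuum_bypassed(pages_with_dead_items: int, rel_pages: int) -> bool:
    # Before Postgres 14, bypassing happened only with exactly zero dead
    # items; commit 5100010ee4d5 relaxed that to this percentage test.
    if rel_pages == 0:
        return True
    return pages_with_dead_items / rel_pages < BYPASS_THRESHOLD_PAGES

# The bmsql_order_line VACUUM quoted earlier: 22.24% of pages had LP_DEAD
# items, so the index scan could not be skipped.
assert not index_vacuum_bypassed(1959301, 8810377)
# A table with dead items on only 1% of its pages would skip index vacuuming.
assert index_vacuum_bypassed(100, 10000)
```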
The BenchmarkSQL TPC-C implementation\nhas tables in both categories -- it does tend to be a stable thing for\na table, in general.\n\nHere is the second largest BenchmarkSQL table (this is just a random\nVACUUM operation from logs used by a recent benchmark of mine):\n\nautomatic aggressive vacuum of table \"regression.public.bmsql_oorder\":\nindex scans: 1\npages: 0 removed, 943785 remain, 6 skipped due to pins, 205851 skipped frozen\ntuples: 63649 removed, 105630136 remain, 2785 are dead but not yet\nremovable, oldest xmin: 186094041\nbuffer usage: 2660543 hits, 1766591 misses, 1375104 dirtied\nindex scan needed: 219092 pages from table (23.21% of total) had\n14946563 dead item identifiers removed\nindex \"bmsql_oorder_pkey\": pages: 615866 in total, 0 newly deleted, 0\ncurrently deleted, 0 reusable\nindex \"bmsql_oorder_idx1\": pages: 797957 in total, 131608 newly\ndeleted, 131608 currently deleted, 131608 reusable\navg read rate: 33.933 MB/s, avg write rate: 26.413 MB/s\nI/O timings: read: 105551.978 ms, write: 16538.690 ms\nsystem usage: CPU: user: 79.71 s, system: 49.74 s, elapsed: 406.73 s\nWAL usage: 1934993 records, 1044051 full page images, 7076820876 bytes\n\nOn Postgres 13 you'd only see \"tuples: 63649 removed\" here. You'd\nnever see anything like \"14946563 dead item identifiers removed\", even\nthough that's obviously hugely important (more important than \"tuples\nremoved\", even). A user could be forgiven for thinking that HOT must\nhurt performance! So yes, I agree. This *is* a bit odd.\n\n(Another problem here is that \"205851 skipped frozen\" only counts\nthose heap pages that were specifically skipped frozen, even for a\nnon-aggressive VACUUM.)\n\n> I think we should reword this line. 
I\n> don't have any great ideas; what do you think of this?\n>\n> dead items: 1959301 pages from table (22.24% of total) had 11745226 dead item identifiers removed; index scan {needed, not needed, bypassed, bypassed by failsafe}\n>\n> I have to say that I am a bit bothered about the coding pattern used to\n> build this sentence from two parts. I'm not sure it'll work okay in\n> languages that build sentences in different ways. Maybe we should split\n> this in two lines, one to give the numbers and the other to talk about\n> the decision taken about indexes.\n\nI'm happy to work with you to make the message more translatable. But\nit's not easy. I personally believe that this kind of information is\ngenerally only valuable in some specific context. Usually the rate of\nchange over time is a big part of what is truly interesting.\nRecognizable correlations with good or bad performance (perhaps\ndetermined at some much higher level of the user's stack) are also\nimportant.\n\nFor example, here is what BenchmarkSQL shows for the first few VACUUMs\nfor its new order table, which is supposed to have a more or less\nfixed size (but actually doesn't right now):\n\nindex scan needed: 7810 pages from table (15.28% of total) had 452785\ndead item identifiers removed\n...\nindex scan needed: 8482 pages from table (16.60% of total) had 456030\ndead item identifiers removed\n...\nindex scan needed: 8811 pages from table (17.24% of total) had 454976\ndead item identifiers removed\n\nThese 3 VACUUMs are all within an hour of each other -- the percentage\nhere slowly climbs over many hours. Because of heap fragmentation,\nthis percentage never stops growing -- though it will take maybe 12+\nhours for it to saturate at ~99.5%. Obviously it's hard to explain\nthis stuff in a way that will clearly generalize to many different\nsituations. At the same time I believe that many DBAs will find these\ndetails very useful. 
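The climbing percentage Peter points to can be read mechanically out of those lines. A small illustrative parser (plain regex work, nothing PostgreSQL-specific; the wrapped log lines are joined back together here):

```python
import re

line_re = re.compile(
    r"index scan needed: (\d+) pages from table \(([\d.]+)% of total\) "
    r"had (\d+) dead item identifiers removed"
)

# The three "index scan needed" lines quoted above, unwrapped.
samples = [
    "index scan needed: 7810 pages from table (15.28% of total) had 452785 dead item identifiers removed",
    "index scan needed: 8482 pages from table (16.60% of total) had 456030 dead item identifiers removed",
    "index scan needed: 8811 pages from table (17.24% of total) had 454976 dead item identifiers removed",
]
pcts = [float(line_re.match(s).group(2)) for s in samples]

# The fraction of pages with dead items only climbs, even though the count
# of removed item identifiers stays roughly flat.
assert pcts == sorted(pcts) and pcts[0] == 15.28
```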
Even when they have a flawed understanding of\nwhat each item truly means. They're mostly looking at patterns,\ntrends.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Aug 2021 17:02:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On 2021-Aug-25, Peter Geoghegan wrote:\n\n> On Wed, Aug 25, 2021 at 2:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I like it better than the current layout, so +1.\n> \n> This seems like a release housekeeping task to me. I'll come up with\n> a patch targeting 14 and master in a few days.\n\nAgreed, thanks.\n\n> The question of whether or not we do an index scan (i.e. index\n> vacuuming) depends entirely on the number of LP_DEAD items that heap\n> pruning left behind in the table structure. [...]\n\nOoh, this was illuminating -- thanks for explaining. TBH I would have\nbeen very confused if asked to explain what that log line meant; and now\nthat I know what it means, I am even more convinced that we need to work\nharder at it :-)\n\nI'll see if I can come up with something ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n", "msg_date": "Wed, 25 Aug 2021 20:23:04 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > The question of whether or not we do an index scan (i.e. index\n> > vacuuming) depends entirely on the number of LP_DEAD items that heap\n> > pruning left behind in the table structure. [...]\n>\n> Ooh, this was illuminating -- thanks for explaining. 
TBH I would have\n> been very confused if asked to explain what that log line meant; and now\n> that I know what it means, I am even more convinced that we need to work\n> harder at it :-)\n\nThe way that VACUUM and ANALYZE do dead tuple accounting is very\nconfusing. In fact, it's so confusing that even autovacuum can get\nconfused! I think that we need to treat LP_DEAD items and pruned\ntuples even more differently than we do in Postgres 14, probably in a\nnumber of different areas (not just VACUUM).\n\nI've found that if I set autovacuum_vacuum_scale_factor and\nautovacuum_analyze_scale_factor to 0.02 with a HOT-heavy workload\n(almost stock pgbench), then autovacuum workers are launched almost\nconstantly. If I then increase autovacuum_vacuum_scale_factor to 0.05,\nbut make no other changes, then the system decides that it should\nactually never launch an autovacuum worker, even once (except for\nanti-wraparound purposes) [1]. This behavior is completely absurd, of\ncourse. To me this scenario illustrates an important general point:\nVACUUM has the wrong idea. At least when it comes to certain specific\ndetails. Details that have plenty of real world relevance.\n\nVACUUM currently fails to understand anything about the rate of change\n-- which, as I've said, is often the most important thing in the real\nworld. That's what my absurd scenario seems to show. That's how I view\na lot of these things.\n\n> I'll see if I can come up with something ...\n\nThanks.\n\nThe message itself probably does need some work. But documentation\nseems at least as important. It's slightly daunting, honestly, because\nwe don't even document HOT itself (unless you count passing references\nthat don't even explain the basic idea). I did try to get people\ninterested in this stuff at one point not too long ago [2]. That\nthread went an entirely different direction to the one I'd planned on,\nthough, so I became discouraged. 
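The cliff Peter describes falls straight out of the documented autovacuum trigger formula (vacuum threshold = autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, compared against n_dead_tup). The numbers below are invented for illustration, not taken from his benchmark; the point is only that with HOT pruning keeping the dead-tuple count low, a modest scale-factor bump can put the trigger permanently out of reach:

```python
def autovacuum_triggered(n_dead_tup: int, reltuples: int,
                         scale_factor: float, base_threshold: int = 50) -> bool:
    # Trigger rule from the PostgreSQL docs ("Automatic Vacuuming"):
    # vacuum when n_dead_tup exceeds
    #   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
    return n_dead_tup > base_threshold + scale_factor * reltuples

# Hypothetical steady state: HOT pruning holds dead tuples near 250k on a
# 10M-row table. At 0.02 the trigger fires constantly; at 0.05, never.
assert autovacuum_triggered(250_000, 10_000_000, scale_factor=0.02)
assert not autovacuum_triggered(250_000, 10_000_000, scale_factor=0.05)
```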
I should pick it up again now,\nthough.\n\n[1] https://postgr.es/m/CAH2-Wz=sJm3tm+FpXbyBhEhX5tbz1trQrhG6eOhYk4-+5uL=ww@mail.gmail.com\n[2] https://postgr.es/m/CAH2-WzkjU+NiBskZunBDpz6trSe+aQvuPAe+xgM8ZvoB4wQwhA@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Aug 2021 19:59:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 08:23:04PM -0400, Alvaro Herrera wrote:\n> On 2021-Aug-25, Peter Geoghegan wrote:\n>> This seems like a release housekeeping task to me. I'll come up with\n>> a patch targeting 14 and master in a few days.\n> \n> Agreed, thanks.\n\nSorry for the late reply here. Indeed, I can see your point to move\nthe buffer usage a bit down, grouped with the other information\nrelated to I/O. Moving down this information gives the attached. If\nyou wish to do that yourself, that's fine by me, of course.\n\nSaying this, an ANALYZE-only command does amvacuumcleanup() for all\nthe indexes and the stats exist. I am not saying that we should do\nthat for 14 as that's too late, but we could consider adding the index\ninformation also in this case in 15~?\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 09:54:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:02 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I like it better than the current layout, so +1.\n>\n> This seems like a release housekeeping task to me. 
I'll come up with\n> a patch targeting 14 and master in a few days.\n\nHere is a patch that outputs log_autovacuum's lines in this order:\n\nLOG: automatic vacuum of table \"regression.public.foo\": index scans: 1\n pages: 9600 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 2169423 removed, 0 remain, 0 are dead but not yet\nremovable, oldest xmin: 731\n index \"foo_pkey\": pages: 5951 in total, 5947 newly deleted,\n5947 currently deleted, 0 reusable\n I/O timings: read: 75.394 ms, write: 76.980 ms\n avg read rate: 103.349 MB/s, avg write rate: 73.317 MB/s\n buffer usage: 47603 hits, 32427 misses, 23004 dirtied\n WAL usage: 46607 records, 1 full page images, 15841331 bytes\n system usage: CPU: user: 1.18 s, system: 0.23 s, elapsed: 2.45 s\n\nI'll commit this in a day or two, backpatching to 14. Barring any objections.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Thu, 26 Aug 2021 22:28:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 2:07 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I generally like the idea though I'm not sure about changing things in\n> v13 as there's likely code out there that's already parsing that data\n> and it might suddenly break if this was changed.\n\nAgreed -- I won't backpatch anything to v13.\n\n> Given that such code would need to be adjusted for v14 anyway, I don't\n> really see changing it in v14 as as being an issue (nor do I feel that\n> it's even a big concern at this point in the release cycle, though\n> perhaps others feel differently).\n\nBTW, I noticed one thing about the track_io_time stuff. Sometimes it\nlooks like this:\n\n I/O timings:\n\ni.e., it doesn't show anything at all after the colon. 
This happens\nbecause the instrumentation indicates that no time was spent on either\nread I/O or write I/O.\n\nI now wonder if we should just unconditionally report both things\n(both \"read:\" and \"write:\"), without regard for whether or not they're\nnon-zero. (We'd do the same thing with ANALYZE's equivalent code too,\nif we actually did this -- same issue there.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Aug 2021 10:52:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Wed, Aug 25, 2021 at 2:07 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I generally like the idea though I'm not sure about changing things in\n> > v13 as there's likely code out there that's already parsing that data\n> > and it might suddenly break if this was changed.\n> \n> Agreed -- I won't backpatch anything to v13.\n\nOk.\n\n> > Given that such code would need to be adjusted for v14 anyway, I don't\n> > really see changing it in v14 as as being an issue (nor do I feel that\n> > it's even a big concern at this point in the release cycle, though\n> > perhaps others feel differently).\n> \n> BTW, I noticed one thing about the track_io_time stuff. Sometimes it\n> looks like this:\n> \n> I/O timings:\n> \n> i.e., it doesn't show anything at all after the colon. This happens\n> because the instrumentation indicates that no time was spent on either\n> read I/O or write I/O.\n\nHrmpf. That's an interesting point.\n\n> I now wonder if we should just unconditionally report both things\n> (both \"read:\" and \"write:\"), without regard for whether or not they're\n> non-zero. 
(We'd do the same thing with ANALYZE's equivalent code too,\n> if we actually did this -- same issue there.)\n\nSo, it was done that way to match how we report I/O Timings from explain\nanalyze, around src/backend/commands/explain.c:3574 (which I note is now\nslightly different from what VACUUM/ANALYZE do due to f4f4a64...). The\nintent was to be consistent in all of these places and I generally still\nfeel that's a worthwhile thing to strive for.\n\nI don't have any particular problem with just always reporting it. Sure\nlooks odd to have the line there w/o anything after it. Perhaps we\nshould also address that in the explain analyze case though, and make\nthe same changes there that were done in f4f4a64? Reporting zeros seems\nvaluable to me in that it shows that we did actually track the io timing\nand there simply wasn't any time spent doing that- if we didn't include\nthe line at all then it wouldn't be clear if there wasn't any time spent\nin i/o or if track io timing wasn't enabled.\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Aug 2021 14:35:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Thu, Aug 26, 2021 at 10:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'll commit this in a day or two, backpatching to 14. Barring any objections.\n\nActually, we also need to make the corresponding lines for ANALYZE\nfollow the same convention -- those really should be consistent. As in\nthe attached revision.\n\nI haven't tried to address the issue with \"I/O timings:\" that I just\nbrought to Stephen's attention. 
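The empty "I/O timings:" line being set aside here is easy to visualize outside of C. A hypothetical sketch (the function name is invented for illustration; the real code is the C in vacuumlazy.c and analyze.c) contrasting the conditional style, which can leave nothing after the colon, with unconditional reporting:

```python
def format_io_timings(read_ms: float, write_ms: float,
                      unconditional: bool = True) -> str:
    # Conditional style (old behavior): emit each part only when non-zero,
    # so the line can come out empty. Unconditional style (the proposed
    # fix): always report both read and write, even when they are zero.
    parts = []
    if unconditional or read_ms > 0:
        parts.append(f"read: {read_ms:.3f} ms")
    if unconditional or write_ms > 0:
        parts.append(f"write: {write_ms:.3f} ms")
    return ("I/O timings: " + ", ".join(parts)).rstrip()

# Old conditional style with zero timings: nothing after the colon.
assert format_io_timings(0.0, 0.0, unconditional=False) == "I/O timings:"
# Unconditional style: zeroes are still shown, so the line is never empty.
assert format_io_timings(0.0, 0.0) == "I/O timings: read: 0.000 ms, write: 0.000 ms"
```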
We can handle that question\nseparately.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 27 Aug 2021 11:57:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Fri, Aug 27, 2021 at 11:35 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > BTW, I noticed one thing about the track_io_time stuff. Sometimes it\n> > looks like this:\n> >\n> > I/O timings:\n> >\n> > i.e., it doesn't show anything at all after the colon.\n\n> Reporting zeros seems\n> valuable to me in that it shows that we did actually track the io timing\n> and there simply wasn't any time spent doing that- if we didn't include\n> the line at all then it wouldn't be clear if there wasn't any time spent\n> in i/o or if track io timing wasn't enabled.\n\nThe principle that we don't show things that are all-zeroes is unique\nto text-format EXPLAIN output -- any other EXPLAIN format doesn't\ntreat all-zeroes as a special case. And so the most consistent and\ncorrect thing seems to be this: show both all-zero \"read:\" and\n\"write:\" (both in vacuumlazy.c and in analyze.c), without making any\nother changes (i.e., no changes to EXPLAIN output are needed).\n\nYou seem to be almost sold on that plan anyway. But this text format\nEXPLAIN rule seems like it decides the question for us.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:17:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Fri, Aug 27, 2021 at 11:35 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > BTW, I noticed one thing about the track_io_time stuff. 
Sometimes it\n> > > looks like this:\n> > >\n> > > I/O timings:\n> > >\n> > > i.e., it doesn't show anything at all after the colon.\n> \n> > Reporting zeros seems\n> > valuable to me in that it shows that we did actually track the io timing\n> > and there simply wasn't any time spent doing that- if we didn't include\n> > the line at all then it wouldn't be clear if there wasn't any time spent\n> > in i/o or if track io timing wasn't enabled.\n> \n> The principle that we don't show things that are all-zeroes is unique\n> to text-format EXPLAIN output -- any other EXPLAIN format doesn't\n> treat all-zeroes as a special case. And so the most consistent and\n> correct thing seems to be this: show both all-zero \"read:\" and\n> \"write:\" (both in vacuumlazy.c and in analyze.c), without making any\n> other changes (i.e., no changes to EXPLAIN output are needed).\n\nI suppose.\n\n> You seem to be almost sold on that plan anyway. But this text format\n> EXPLAIN rule seems like it decides the question for us.\n\nI don't particularly care for that explain rule, ultimately, but it's\nbeen around longer than I have and so I guess it wins. I'm fine with\nalways showing the read/write for VACUUM and ANALYZE.\n\nIncluding 'ms' and lower-casing 'Timings' to 'timings' still strikes me\nas something that should be consistent for all of these, but that's\nindependent of this and I'm not going to stress over it, particularly\nsince that's pre-existing.\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Aug 2021 15:30:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Fri, Aug 27, 2021 at 12:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't particularly care for that explain rule, ultimately, but it's\n> been around longer than I have and so I guess it wins. 
I'm fine with\n> always showing the read/write for VACUUM and ANALYZE.\n>\n> Including 'ms' and lower-casing 'Timings' to 'timings' still strikes me\n> as something that should be consistent for all of these, but that's\n> independent of this and I'm not going to stress over it, particularly\n> since that's pre-existing.\n\nOkay. Plan is now to push these two patches together, later on. The\nsecond patch concerns this separate track_io_timing issue. It's pretty\nstraightforward.\n\n(No change to the first patch in the series, relative to the v2 from earlier.)\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 27 Aug 2021 12:43:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Fri, Aug 27, 2021 at 12:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't particularly care for that explain rule, ultimately, but it's\n> > been around longer than I have and so I guess it wins. I'm fine with\n> > always showing the read/write for VACUUM and ANALYZE.\n> >\n> > Including 'ms' and lower-casing 'Timings' to 'timings' still strikes me\n> > as something that should be consistent for all of these, but that's\n> > independent of this and I'm not going to stress over it, particularly\n> > since that's pre-existing.\n> \n> Okay. Plan is now to push these two patches together, later on. The\n> second patch concerns this separate track_io_timing issue. It's pretty\n> straightforward.\n> \n> (No change to the first patch in the series, relative to the v2 from earlier.)\n\nLooks alright to me.\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Aug 2021 15:55:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Fri, Aug 27, 2021 at 12:55 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Okay. 
Plan is now to push these two patches together, later on. The\n> > second patch concerns this separate track_io_timing issue. It's pretty\n> > straightforward.\n> >\n> > (No change to the first patch in the series, relative to the v2 from earlier.)\n>\n> Looks alright to me.\n\nPushed both patches -- thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:35:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Ooh, this was illuminating -- thanks for explaining. TBH I would have\n> been very confused if asked to explain what that log line meant; and now\n> that I know what it means, I am even more convinced that we need to work\n> harder at it :-)\n>\n> I'll see if I can come up with something ...\n\nBTW, I wonder if you need to reconsider\nPROGRESS_VACUUM_NUM_DEAD_TUPLES in light of all this. It actually\ncounts LP_DEAD items, which aren't really dead tuples. As my example\nshows, the distinction between \"tuples removed\" (as this log output\nrefers to them) and LP_DEAD items removed from heap pages can be very\nimportant.\n\nOne way of handling this might be to call LP_DEAD items \"items removed\nfrom indexes\" -- \"tuples removed\" can be treated as \"items removed\nfrom table\". Or something along those lines, at least. This is how I\nphrase it in certain vacuumlazy.c source code comments already. It's\nnot 100% accurate, but in a way it's a lot closer to the truth. 
And it\nallows you to sidestep the issue with PROGRESS_VACUUM_NUM_DEAD_TUPLES\nby only slightly redefining what that means to users -- it can be\nrecast as information about index tuples specifically (there may not\nactually be any matching index tuples, especially in Postgres 14, but\nthat isn't worth getting in to in user docs IMV).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 31 Aug 2021 13:41:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: log_autovacuum in Postgres 14 -- ordering issue" } ]
[ { "msg_contents": "Hello.\n\nI'm facing a difficulty on cloning a repository via ssh.\n\nI understand that git repository can be accessed via http, git and ssh\nprotocol, and ssh access uses the ssh public key registered in\ncommunity account profile. I registered one in ecdsa-sha2-nistp256\nthat I believe the server accepts. I waited for more than 1 hour\nsince key registration until the operation.\n\nIf I ran the following command, it would fail.\n\n===\n$ git clone ssh://git@git.postgresql.org/postgresql.git postgresql\nCloning into 'postgresql'...\ngit@git.postgresql.org: Permission denied (publickey).\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n===\n\nIn detail, it is failing as follows.\n\nGIT_SSH_COMMAND=\"ssh -vvv\" git clone ssh://git@git.postgresql.org/postgresql.git\n...\ndebug1: Offering public key: horiguti@cent8 ECDSA SHA256:z...QM agent\ndebug3: send packet: type 50\ndebug2: we sent a publickey packet, wait for reply\ndebug3: receive packet: type 51\n\nSo the server just refuses the key with SSH_MSG_USERAUTH_FAILURE. The\nkey in the debug1 line looks like the correct one.\n\nAny comments on the operation above, or on how to diagnose that\nfurther are welcome. In other words, please help me!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:34:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "cannot access to postgres-git via ssh" }, { "msg_contents": "On Thu, Aug 26, 2021 at 9:34 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> I'm facing a difficulty on cloning a repository via ssh.\n>\n> I understand that git repository can be accessed via http, git and ssh\n> protocol, and ssh access uses the ssh public key registered in\n> community account profile. 
I registered one in ecdsa-sha2-nistp256\n> that I believe the server accepts. I waited for more than 1 hour\n> since key registration until the operation.\n\nHi!\n\nssh based access only works for repositories where you have explicit\npermissions, it does not support anonymous access -- that has to be\nover https (recommended) or git.\n\nAnd specifically, the postgresql.git repo mirror only allows anonymous access.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 26 Aug 2021 11:33:08 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: cannot access to postgres-git via ssh" }, { "msg_contents": "At Thu, 26 Aug 2021 16:34:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n\nHmm. I found www@postgresql.org a more appropriate place to ask this\nquestion.\n\nPlease ignore this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 27 Aug 2021 11:16:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: cannot access to postgres-git via ssh" } ]
[ { "msg_contents": "Attached is the plain-text list of acknowledgments for the PG14 release \nnotes, current through today. Please check for problems such as wrong \nsorting, duplicate names in different variants, or names in the wrong \norder etc. (Note that the current standard is given name followed by \nsurname, independent of cultural origin.)", "msg_date": "Thu, 26 Aug 2021 10:41:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "list of acknowledgments for PG14" }, { "msg_contents": "> On 26 Aug 2021, at 10:41, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> Attached is the plain-text list of acknowledgments for the PG14 release notes, current through today. Please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Note that the current standard is given name followed by surname, independent of cultural origin.)\n\nI would have expected “Ö” (Önder Kalacı) to sort after “Z” but that might only\nbe true for my locale?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:48:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: list of acknowledgments for PG14" }, { "msg_contents": "On Thu, Aug 26, 2021 at 5:41 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Attached is the plain-text list of acknowledgments for the PG14 release\n> notes, current through today. Please check for problems such as wrong\n> sorting, duplicate names in different variants, or names in the wrong\n> order etc. (Note that the current standard is given name followed by\n> surname, independent of cultural origin.)\n\nThanks as usual! 
I think these are Japanese names and in the wrong order:\n\nKatsuragi Yuta\nKobayashi Hisanori\nKondo Yuta\nMatsumura Ryo\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 26 Aug 2021 18:20:45 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: list of acknowledgments for PG14" }, { "msg_contents": "On 26.08.21 10:48, Daniel Gustafsson wrote:\n>> On 26 Aug 2021, at 10:41, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> Attached is the plain-text list of acknowledgments for the PG14 release notes, current through today. Please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Note that the current standard is given name followed by surname, independent of cultural origin.)\n> \n> I would have expected “Ö” (Önder Kalacı) to sort after “Z” but that might only\n> be true for my locale?\n\nThe sort order is COLLATE \"en-x-icu\".\n\n\n", "msg_date": "Thu, 26 Aug 2021 13:42:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: list of acknowledgments for PG14" }, { "msg_contents": "On 26.08.21 11:20, Etsuro Fujita wrote:\n> On Thu, Aug 26, 2021 at 5:41 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Attached is the plain-text list of acknowledgments for the PG14 release\n>> notes, current through today. Please check for problems such as wrong\n>> sorting, duplicate names in different variants, or names in the wrong\n>> order etc. (Note that the current standard is given name followed by\n>> surname, independent of cultural origin.)\n> \n> Thanks as usual! I think these are Japanese names and in the wrong order:\n> \n> Katsuragi Yuta\n> Kobayashi Hisanori\n> Kondo Yuta\n> Matsumura Ryo\n\nCommitted with those corrections. 
Thanks.\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 09:02:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: list of acknowledgments for PG14" } ]
[ { "msg_contents": "Hi,\nI have following case: local pg_dump (v15) connecting to remote\nPostgreSQL (v12).\n\nI'm trying to get just schema (pg_dump -s). It's taking very long, which\nis kinda OK given that there is long distance and latency, but I got\ncurious and checked queries that the pg_dump was running (select * from\npg_stat_activity where application_name = 'pg_dump').\n\nAnd I noticed that many of these queries repeat many times.\n\nThe ones that I noticed were:\nSELECT pg_catalog.format_type('2281'::pg_catalog.oid, NULL)\naround the time that\nSELECT\n proretset,\n prosrc,\n probin,\n provolatile,\n proisstrict,\n prosecdef,\n lanname,\n proconfig,\n procost,\n prorows,\n pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n pg_catalog.pg_get_function_result(p.oid) AS funcresult,\n proleakproof,\n array_to_string(protrftypes, ' ') AS protrftypes,\n proparallel,\n prokind,\n prosupport,\n NULL AS prosqlbody\nFROM\n pg_catalog.pg_proc p,\n pg_catalog.pg_language l\nWHERE\n p.oid = '60188'::pg_catalog.oid\n AND l.oid = p.prolang\n\nwas called too.\n\nIt seems that for every function, pg_dump is getting it's data, and then\nruns format_type on each parameter/output type? I'm mostly guessing\nhere, as I didn't read the code.\n\nWouldn't it be possible to get all type formats at once, and cache them\nin pg_dump? Or at the very least reuse already received information?\n\nUnfortunately it seems I can't run pg_dump closer to the db server, and\nthe latency of queries is killing me.\n\nIt's been 15 minutes, and pg_dump (called: pg_dump -v -s -f schema.dump,\nwith env variables configuring db connection) hasn't written even single\nbyte to schema.dump)\n\ndepesz\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:44:30 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> It seems that for every function, pg_dump is getting it's data, and then\n> runs format_type on each parameter/output type? I'm mostly guessing\n> here, as I didn't read the code.\n> Wouldn't it be possible to get all type formats at once, and cache them\n> in pg_dump? Or at the very least reuse already received information?\n\nSend a patch ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:02:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, Aug 26, 2021 at 10:02:07AM -0400, Tom Lane wrote:\n> hubert depesz lubaczewski <depesz@depesz.com> writes:\n> > It seems that for every function, pg_dump is getting it's data, and then\n> > runs format_type on each parameter/output type? I'm mostly guessing\n> > here, as I didn't read the code.\n> > Wouldn't it be possible to get all type formats at once, and cache them\n> > in pg_dump? Or at the very least reuse already received information?\n> Send a patch ...\n\nYeah, that's not going to work, my C skills are next-to-none :(\n\nI guess I'll have to wait till someone else will assume it's a problem,\nsomeone with skills to do something about it.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:08:27 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> On Thu, Aug 26, 2021 at 10:02:07AM -0400, Tom Lane wrote:\n>> hubert depesz lubaczewski <depesz@depesz.com> writes:\n>>> Wouldn't it be possible to get all type formats at once, and cache them\n>>> in pg_dump? 
Or at the very least reuse already received information?\n\n>> Send a patch ...\n\n> Yeah, that's not going to work, my C skills are next-to-none :(\n> I guess I'll have to wait till someone else will assume it's a problem,\n> someone with skills to do something about it.\n\nWell, you could move it forward by doing the legwork to identify which\nqueries are worth merging. Is it really sane to do a global \"select\nformat_type() from pg_type\" query and save all the results on the client\nside? I wonder whether there are cases where that'd be a net loss.\nYou could do the experimentation to figure that out without necessarily\nhaving the C skills to make pg_dump actually do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Aug 2021 10:20:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, Aug 26, 2021 at 10:20:29AM -0400, Tom Lane wrote:\n> hubert depesz lubaczewski <depesz@depesz.com> writes:\n> > On Thu, Aug 26, 2021 at 10:02:07AM -0400, Tom Lane wrote:\n> >> hubert depesz lubaczewski <depesz@depesz.com> writes:\n> >>> Wouldn't it be possible to get all type formats at once, and cache them\n> >>> in pg_dump? Or at the very least reuse already received information?\n> \n> >> Send a patch ...\n> \n> > Yeah, that's not going to work, my C skills are next-to-none :(\n> > I guess I'll have to wait till someone else will assume it's a problem,\n> > someone with skills to do something about it.\n> \n> Well, you could move it forward by doing the legwork to identify which\n> queries are worth merging. Is it really sane to do a global \"select\n\nSure. On it. Will report back when I'll have more info.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:29:53 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "On 8/26/21 1:44 AM, hubert depesz lubaczewski wrote:\n> Hi,\n> I have following case: local pg_dump (v15) connecting to remote\n> PostgreSQL (v12).\n\nSo you are using a dev version of pg_dump or is that a typo?\n\n> \n> It's been 15 minutes, and pg_dump (called: pg_dump -v -s -f schema.dump,\n> with env variables configuring db connection) hasn't written even single\n> byte to schema.dump)\n\nWhat happens if you run without the -v?\n\n> \n> depesz\n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Thu, 26 Aug 2021 07:34:26 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, Aug 26, 2021 at 07:34:26AM -0700, Adrian Klaver wrote:\n> On 8/26/21 1:44 AM, hubert depesz lubaczewski wrote:\n> > Hi,\n> > I have following case: local pg_dump (v15) connecting to remote\n> > PostgreSQL (v12).\n> So you are using a dev version of pg_dump or is that a typo?\n\nYes. I'm running pg_dump from my computer to (very) remote db server.\n\n> > It's been 15 minutes, and pg_dump (called: pg_dump -v -s -f schema.dump,\n> > with env variables configuring db connection) hasn't written even single\n> > byte to schema.dump)\n> What happens if you run without the -v?\n\nWell, I guess it works, but with no output I can't judge how fast.\nDefinitely doesn't seem to be going any faster.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:35:43 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "On 8/26/21 7:35 AM, hubert depesz lubaczewski wrote:\n> On Thu, Aug 26, 2021 at 07:34:26AM -0700, Adrian Klaver wrote:\n>> On 8/26/21 1:44 AM, hubert depesz lubaczewski wrote:\n>>> Hi,\n>>> I have following case: local pg_dump (v15) connecting to remote\n>>> PostgreSQL (v12).\n>> So you are using a dev version of pg_dump or is that a typo?\n> \n> Yes. I'm running pg_dump from my computer to (very) remote db server.\n\nSSHing and dumping on the remote is out as a short term solution?\n\n> Well, I guess it works, but with no output I can't judge how fast.\n> Definitely doesn't seem to be going any faster.\n\nUnknown slow, that didn't help.\n\n> \n> Best regards,\n> \n> depesz\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Thu, 26 Aug 2021 07:46:46 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, Aug 26, 2021 at 07:46:46AM -0700, Adrian Klaver wrote:\n> On 8/26/21 7:35 AM, hubert depesz lubaczewski wrote:\n> > On Thu, Aug 26, 2021 at 07:34:26AM -0700, Adrian Klaver wrote:\n> > > On 8/26/21 1:44 AM, hubert depesz lubaczewski wrote:\n> > > > Hi,\n> > > > I have following case: local pg_dump (v15) connecting to remote\n> > > > PostgreSQL (v12).\n> > > So you are using a dev version of pg_dump or is that a typo?\n> > \n> > Yes. 
I'm running pg_dump from my computer to (very) remote db server.\n> SSHing and dumping on the remote is out as a short term solution?\n\nAs I mentioned in the original post - I can't run pg_dump closer to the server.\nSSH is not available, at least for me.\n\nAnyway - I got the dump, so I am good for now, but I think that this\ncould be improved, so I'll work on getting some stats on queries.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:48:26 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, Aug 26, 2021 at 10:20:29AM -0400, Tom Lane wrote:\n> Well, you could move it forward by doing the legwork to identify which\n> queries are worth merging. Is it really sane to do a global \"select\n\nSo, I got some info.\n\nFirst, some stats. The DB contains:\n\n- 14 extensions\n- 1 aggregate\n- 107 functions\n- 5 schemas\n- 5 sequences\n- 188 logged tables\n- 1 unlogged table\n- 206 \"normal\" indexes\n- 30 unique indexes\n- 15 materialized views\n- 16 triggers\n- 87 types\n- 26 views\n\npg_dump -s of it is ~ 670kB.\n\nInterestingly, while dumping (pg_dump -s -v), we can see progress going on, and then, after:\n\n====\n...\npg_dump: reading publications\npg_dump: reading publication membership\npg_dump: reading subscriptions\npg_dump: reading dependency data\npg_dump: saving encoding = UTF8\npg_dump: saving standard_conforming_strings = on\npg_dump: saving search_path = \n====\n\nIt stops (progress visible in console). 
And then, in pg logs I see queries like:\n\n#v+\nSELECT\n proretset,\n prosrc,\n probin,\n provolatile,\n proisstrict,\n prosecdef,\n lanname,\n proconfig,\n procost,\n prorows,\n pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n pg_catalog.pg_get_function_result(p.oid) AS funcresult,\n proleakproof,\n array_to_string(protrftypes, ' ') AS protrftypes,\n proparallel,\n prokind,\n prosupport,\n NULL AS prosqlbody\n FROM pg_catalog.pg_proc p, pg_catalog.pg_language l\n WHERE p.oid = '43875'::pg_catalog.oid AND l.oid = p.prolang\n#v-\n\nNow for query stats.\n\nTo dump it all, pg_dump needed 9173 queries (logged by\nlog_min_duration_statement = 0 for this user).\n\nI extracted all queries to separate files, and made stats. In total there were\nonly 4257 unique queries.\n\nThen I checked for repeated queries. Top 10 most repeated offenders were:\n\n615 times : SELECT pg_catalog.format_type('25'::pg_catalog.oid, NULL)\n599 times : SELECT pg_catalog.format_type('23'::pg_catalog.oid, NULL)\n579 times : SELECT pg_catalog.format_type('2281'::pg_catalog.oid, NULL)\n578 times : SELECT pg_catalog.format_type('41946'::pg_catalog.oid, NULL)\n523 times : SELECT pg_catalog.format_type('701'::pg_catalog.oid, NULL)\n459 times : SELECT pg_catalog.format_type('42923'::pg_catalog.oid, NULL)\n258 times : SELECT pg_catalog.format_type('16'::pg_catalog.oid, NULL)\n176 times : SELECT pg_catalog.format_type('19'::pg_catalog.oid, NULL)\n110 times : SELECT pg_catalog.format_type('21'::pg_catalog.oid, NULL)\n106 times : SELECT pg_catalog.format_type('42604'::pg_catalog.oid, NULL)\n\nIn total, there were 5000 queries:\nSELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)\n\nBut there were only 83 separate oids that were scanned.\n\nThe only other repeated command was:\nSELECT pg_catalog.set_config('search_path', '', false);\nand it was called only twice.\n\nBased on my reading of queries in order it seems to 
follow the pattern of:\n\nOne call for:\n\nSELECT proretset, prosrc, probin, provolatile, proisstrict, prosecdef, lanname, proconfig, procost, prorows, pg_catalog.pg_get_function_arguments(p.oid) AS funcargs, pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs, pg_catalog.pg_get_function_result(p.oid) AS funcresult, proleakproof, array_to_string(protrftypes, ' ') AS protrftypes, proparallel, prokind, prosupport, NULL AS prosqlbody FROM pg_catalog.pg_proc p, pg_catalog.pg_language l WHERE p.oid = 'SOME_NUMBER'::pg_catalog.oid AND l.oid = p.prolang \n\nand then one or more:\n\nSELECT pg_catalog.format_type('SOME_NUMBER'::pg_catalog.oid, NULL)\n\nIn one case, after proc query, there were 94 consecutive\npg_catalog.format_type queries.\n\nI hope it helps.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 18:06:44 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:\n> Now for query stats.\n> \n> To dump it all, pg_dump needed 9173 queries (logged by\n> log_min_duration_statement = 0 for this user).\n> \n> I extracted all queries to separate files, and made stats. In total there were\n> only 4257 unique queries.\n> \n> Then I checked for repeated queries. 
Top 10 most repeated offenders were:\n> \n> 615 times : SELECT pg_catalog.format_type('25'::pg_catalog.oid, NULL)\n> 599 times : SELECT pg_catalog.format_type('23'::pg_catalog.oid, NULL)\n> 579 times : SELECT pg_catalog.format_type('2281'::pg_catalog.oid, NULL)\n> 578 times : SELECT pg_catalog.format_type('41946'::pg_catalog.oid, NULL)\n> 523 times : SELECT pg_catalog.format_type('701'::pg_catalog.oid, NULL)\n> 459 times : SELECT pg_catalog.format_type('42923'::pg_catalog.oid, NULL)\n> 258 times : SELECT pg_catalog.format_type('16'::pg_catalog.oid, NULL)\n> 176 times : SELECT pg_catalog.format_type('19'::pg_catalog.oid, NULL)\n> 110 times : SELECT pg_catalog.format_type('21'::pg_catalog.oid, NULL)\n> 106 times : SELECT pg_catalog.format_type('42604'::pg_catalog.oid, NULL)\n> \n> In total, there were 5000 queries:\n> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)\n> \n> But there were only 83 separate oids that were scanned.\n\nThat is a strong argument for using a hash table to cache the types.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 09:33:51 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:\n>> In total, there were 5000 queries:\n>> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)\n>> But there were only 83 separate oids that were scanned.\n\n> That is a strong argument for using a hash table to cache the types.\n\nThose queries are coming from getFormattedTypeName(), which is used\nfor function arguments and the like. 
I'm not quite sure why Hubert\nis seeing 5000 such calls in a database with only ~100 functions;\nsurely they don't all have an average of 50 arguments?\n\nI experimented with the attached, very quick-n-dirty patch to collect\nformat_type results during the initial scan of pg_type, instead. On the\nregression database in HEAD, it reduces the number of queries pg_dump\nissues from 3260 to 2905; but I'm having a hard time detecting any net\nperformance change.\n\n(This is not meant for commit as-is; notably, I didn't bother to fix\ngetTypes' code paths for pre-9.6 servers. It should be fine for\nperformance testing though.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 27 Aug 2021 17:23:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On 8/27/21 2:23 PM, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n>> On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:\n>>> In total, there were 5000 queries:\n>>> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)\n>>> But there were only 83 separate oids that were scanned.\n> \n>> That is a strong argument for using a hash table to cache the types.\n> \n> Those queries are coming from getFormattedTypeName(), which is used\n> for function arguments and the like. 
I'm not quite sure why Hubert\n> is seeing 5000 such calls in a database with only ~100 functions;\n> surely they don't all have an average of 50 arguments?\n\nCould be.\n\nFrom the stats post:\n\n\"Based on my reading of queries in order it seems to follow the pattern of:\n\nOne call for:\n\nSELECT proretset, prosrc, probin, provolatile, proisstrict, \nprosecdef, lanname, proconfig, procost, prorows, \npg_catalog.pg_get_function_arguments(p.oid) AS funcargs, \npg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs, \npg_catalog.pg_get_function_result(p.oid) AS funcresult, proleakproof, array_to_string(protrftypes, \n' ') AS protrftypes, proparallel, prokind, prosupport, NULL AS \nprosqlbody FROM pg_catalog.pg_proc p, pg_catalog.pg_language l WHERE \np.oid = 'SOME_NUMBER'::pg_catalog.oid AND l.oid = p.prolang\n\nand then one or more:\n\nSELECT pg_catalog.format_type('SOME_NUMBER'::pg_catalog.oid, NULL)\n\n\nIn one case, after proc query, there were 94 consecutive\npg_catalog.format_type queries.\n\"\n\n\n> \n> I experimented with the attached, very quick-n-dirty patch to collect\n> format_type results during the initial scan of pg_type, instead. On the\n> regression database in HEAD, it reduces the number of queries pg_dump\n> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> performance change.\n> \n> (This is not meant for commit as-is; notably, I didn't bother to fix\n> getTypes' code paths for pre-9.6 servers. It should be fine for\n> performance testing though.)\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Fri, 27 Aug 2021 14:53:01 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Thu, 2021-08-26 at 18:06 +0200, hubert depesz lubaczewski wrote:\n> >> In total, there were 5000 queries:\n> >> SELECT pg_catalog.format_type('[0-9]+'::pg_catalog.oid, NULL)\n> >> But there were only 83 separate oids that were scanned.\n> \n> > That is a strong argument for using a hash table to cache the types.\n> \n> Those queries are coming from getFormattedTypeName(), which is used\n> for function arguments and the like. I'm not quite sure why Hubert\n> is seeing 5000 such calls in a database with only ~100 functions;\n> surely they don't all have an average of 50 arguments?\n> \n> I experimented with the attached, very quick-n-dirty patch to collect\n> format_type results during the initial scan of pg_type, instead. On the\n> regression database in HEAD, it reduces the number of queries pg_dump\n> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> performance change.\n\nSeems like the issue here is mainly just the latency of each query being\nrather high compared to most use-cases, so local testing where there's\nbasically zero latency wouldn't see any change in timing, but throw a\ntrans-atlantic or worse amount of latency between the system running\npg_dump and the PG server and you'd see notable wall-clock savings in\ntime.\n\nOnly took a quick look but generally +1 on reducing the number of\nqueries that pg_dump is doing and the changes suggested looked good to\nme.\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Aug 2021 17:58:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I experimented with the attached, very quick-n-dirty patch to collect\n>> format_type results during the initial scan of pg_type, instead. On the\n>> regression database in HEAD, it reduces the number of queries pg_dump\n>> issues from 3260 to 2905; but I'm having a hard time detecting any net\n>> performance change.\n\n> Seems like the issue here is mainly just the latency of each query being\n> rather high compared to most use-cases, so local testing where there's\n> basically zero latency wouldn't see any change in timing, but throw a\n> trans-atlantic or worse amount of latency between the system running\n> pg_dump and the PG server and you'd see notable wall-clock savings in\n> time.\n\nYeah. What I was more concerned about was the potential downside\nof running format_type() for each pg_type row, even though we might\nuse only a few of those results. The fact that I'm *not* seeing\na performance hit with a local server is encouraging from that\nstandpoint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Aug 2021 18:25:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> I experimented with the attached, very quick-n-dirty patch to collect\n> >> format_type results during the initial scan of pg_type, instead. 
On the\n> >> regression database in HEAD, it reduces the number of queries pg_dump\n> >> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> >> performance change.\n> \n> > Seems like the issue here is mainly just the latency of each query being\n> > rather high compared to most use-cases, so local testing where there's\n> > basically zero latency wouldn't see any change in timing, but throw a\n> > trans-atlantic or worse amount of latency between the system running\n> > pg_dump and the PG server and you'd see notable wall-clock savings in\n> > time.\n> \n> Yeah. What I was more concerned about was the potential downside\n> of running format_type() for each pg_type row, even though we might\n> use only a few of those results. The fact that I'm *not* seeing\n> a performance hit with a local server is encouraging from that\n> standpoint.\n\nAh, yes, agreed.\n\nThanks!\n\nStephen", "msg_date": "Fri, 27 Aug 2021 18:27:20 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Adrian Klaver <adrian.klaver@aklaver.com> writes:\n> On 8/27/21 2:23 PM, Tom Lane wrote:\n>> Those queries are coming from getFormattedTypeName(), which is used\n>> for function arguments and the like. I'm not quite sure why Hubert\n>> is seeing 5000 such calls in a database with only ~100 functions;\n>> surely they don't all have an average of 50 arguments?\n\n> Could be.\n\nMaybe. I'm disturbed by the discrepancy between my result (about\n10% of pg_dump's queries are these) and Hubert's (over 50% are).\nI'd like to know the reason for that before we push forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Aug 2021 18:51:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "On Fri, Aug 27, 2021 at 05:23:23PM -0400, Tom Lane wrote:\n> I experimented with the attached, very quick-n-dirty patch to collect\n> format_type results during the initial scan of pg_type, instead. On the\n> regression database in HEAD, it reduces the number of queries pg_dump\n> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> performance change.\n> (This is not meant for commit as-is; notably, I didn't bother to fix\n> getTypes' code paths for pre-9.6 servers. It should be fine for\n> performance testing though.)\n\nHi,\nthanks a lot for this. Will test and report back, most likely on Monday,\nthough.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 08:38:24 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Fri, Aug 27, 2021 at 05:23:23PM -0400, Tom Lane wrote:\n> Those queries are coming from getFormattedTypeName(), which is used\n> for function arguments and the like. I'm not quite sure why Hubert\n> is seeing 5000 such calls in a database with only ~100 functions;\n> surely they don't all have an average of 50 arguments?\n\nOh. missed that part.\nSo I checked. In the mean time I got -Fc dump, so:\n\n#v+\n=$ pg_restore -l schema.dump | \\\n grep -P '^\\d*; \\d+ \\d+ FUNCTION ' |\n sed 's/^[^(]*(//; s/)[^)]*$//' |\n awk -F, '{print NF}' |\n sort -n |\n uniq -c\n23 0\n52 1\n21 2\n 8 3\n 1 4\n 2 5\n#v-\n\n23 functions with 0 arguments, 52 with 1, and the max is 5 arguments - two\nfunctions have these.\n\nNot sure if it matters but there is a lot of enums. 83 of them. And they have\nup to 250 elements (2 such types).\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 08:46:50 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> On Fri, Aug 27, 2021 at 05:23:23PM -0400, Tom Lane wrote:\n>> Those queries are coming from getFormattedTypeName(), which is used\n>> for function arguments and the like. I'm not quite sure why Hubert\n>> is seeing 5000 such calls in a database with only ~100 functions;\n>> surely they don't all have an average of 50 arguments?\n\n> 23 functions with 0 arguments, 52 with 1, and the max is 5 arguments - two\n> functions have these.\n> Not sure if it matters but there is a lot of enums. 83 of them. And they have\n> up to 250 elements (2 such types).\n\nHmm, no, I don't see any getFormattedTypeName calls in dumpEnumType.\n\nThere are two of 'em in dumpCast though. Does this DB by chance\nhave a ton of user-defined casts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Aug 2021 10:28:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Here is a second patch, quite independent of the first one, that\ngets rid of some other repetitive queries. On the regression database,\nthe number of queries needed to do \"pg_dump -s regression\" drops from\n3260 to 2589, and on my machine it takes 1.8 sec instead of 2.1 sec.\n\nWhat's attacked here is a fairly silly decision in getPolicies()\nto query pg_policy once per table, when we could do so just once.\nIt might have been okay if we skipped the per-table query for\ntables that lack policies, but it's not clear to me that we can\nknow that without looking into pg_policy. In any case I doubt\nthis is ever going to be less efficient than the original coding.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 28 Aug 2021 18:26:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "You guys are brilliant!\n\nRegards,\n\nGus\n\nOn Sat, Aug 28, 2021 at 6:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here is a second patch, quite independent of the first one, that\n> gets rid of some other repetitive queries. On the regression database,\n> the number of queries needed to do \"pg_dump -s regression\" drops from\n> 3260 to 2589, and on my machine it takes 1.8 sec instead of 2.1 sec.\n>\n> What's attacked here is a fairly silly decision in getPolicies()\n> to query pg_policy once per table, when we could do so just once.\n> It might have been okay if we skipped the per-table query for\n> tables that lack policies, but it's not clear to me that we can\n> know that without looking into pg_policy. In any case I doubt\n> this is ever going to be less efficient than the original coding.\n>\n> regards, tom lane\n>\n>\n\nYou guys are brilliant!Regards,GusOn Sat, Aug 28, 2021 at 6:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Here is a second patch, quite independent of the first one, that\ngets rid of some other repetitive queries.  On the regression database,\nthe number of queries needed to do \"pg_dump -s regression\" drops from\n3260 to 2589, and on my machine it takes 1.8 sec instead of 2.1 sec.\n\nWhat's attacked here is a fairly silly decision in getPolicies()\nto query pg_policy once per table, when we could do so just once.\nIt might have been okay if we skipped the per-table query for\ntables that lack policies, but it's not clear to me that we can\nknow that without looking into pg_policy.  In any case I doubt\nthis is ever going to be less efficient than the original coding.\n\n                        regards, tom lane", "msg_date": "Sun, 29 Aug 2021 07:35:21 -0400", "msg_from": "Gus Spier <gus.spier@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "On 2021-Aug-28, Tom Lane wrote:\n\n> Here is a second patch, quite independent of the first one, that\n> gets rid of some other repetitive queries.\n\nAnother pointlessly repetitive query is in getTriggers, which we run\nonce per table to be dumped containing triggers. We could reduce that\nby running it in bulk for many relations at a time. I suppose it's\nnormally not hurtful, but as we grow the number of partitions we allow\nit's going to become a problem.\n\nNo patch from me for now — if someone wantw to volunteer one, it looks\nsimple enough ...\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 29 Aug 2021 09:13:15 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Another pointlessly repetitive query is in getTriggers, which we run\n> once per table to be dumped containing triggers. We could reduce that\n> by running it in bulk for many relations at a time. I suppose it's\n> normally not hurtful, but as we grow the number of partitions we allow\n> it's going to become a problem.\n\nPerhaps. In the regression database, only ~10% of the tables have\ntriggers, so it's likely not going to yield any measurable gain there.\nBut databases that make heavier use of foreign keys might see a win.\n\nAnother thing I've wondered about before is whether it could make sense\nto read pg_attribute once rather than once per table. There might be\na fair amount of wasted work if the dump is selective, and in big DBs\nthe sheer size of that result could be a problem. 
But those reads are\ndefinitely way up there on the number-of-queries scale.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Aug 2021 09:51:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Another pointlessly repetitive query is in getTriggers, which we run\n> > once per table to be dumped containing triggers. We could reduce that\n> > by running it in bulk for many relations at a time. I suppose it's\n> > normally not hurtful, but as we grow the number of partitions we allow\n> > it's going to become a problem.\n> \n> Perhaps. In the regression database, only ~10% of the tables have\n> triggers, so it's likely not going to yield any measurable gain there.\n> But databases that make heavier use of foreign keys might see a win.\n\nIt sure seems like in just about all cases fewer queries is going to be\nbetter.\n\n> Another thing I've wondered about before is whether it could make sense\n> to read pg_attribute once rather than once per table. There might be\n> a fair amount of wasted work if the dump is selective, and in big DBs\n> the sheer size of that result could be a problem. But those reads are\n> definitely way up there on the number-of-queries scale.\n\nYeah, I've thought about this before too. Would sure be nice if there\nwas a way that we could query the catalog selectively based on the\noptions the user has passed in but do so in as few queries as possible.\n\nThanks,\n\nStephen", "msg_date": "Sun, 29 Aug 2021 15:47:11 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" 
}, { "msg_contents": "On Fri, Aug 27, 2021 at 05:23:23PM -0400, Tom Lane wrote:\n> I experimented with the attached, very quick-n-dirty patch to collect\n> format_type results during the initial scan of pg_type, instead. On the\n> regression database in HEAD, it reduces the number of queries pg_dump\n> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> performance change.\n\nHi,\nSo, I applied it to brand new HEAD from git, Result:\n\n From total of 9173 queries it went down to 4178.\nOriginally 5000 type queries, now 19!\nThis is actually strange given that previously it was asking querying\nabout 83 separate type oids. But, as far as I was able to check with\n\"pg_restore -l\" (from -Fc dump), results are the same.\n\nDump time down from 17m 22s to 8m 12s.\n\nThen, I applied the patch from\nhttps://www.postgresql.org/message-id/1082810.1630189581%40sss.pgh.pa.us\n\nwithout removing first one, as you said they are quite independent.\n\nWith both patches applied I got 3884 queries total, and dump from\noriginal db in 7m 35s.\n\nSo this clearly helps. A LOT.\n\nBut since we're looking at it, and with both patches applied, I looked\nat the next most common query. Which is:\n\n#v+\nSELECT\n proretset,\n prosrc,\n probin,\n provolatile,\n proisstrict,\n prosecdef,\n lanname,\n proconfig,\n procost,\n prorows,\n pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n pg_catalog.pg_get_function_result(p.oid) AS funcresult,\n proleakproof,\n array_to_string(protrftypes, ' ') AS protrftypes,\n proparallel,\n prokind,\n prosupport,\n NULL AS prosqlbody\n FROM pg_catalog.pg_proc p, pg_catalog.pg_language l\n WHERE p.oid = '25491'::pg_catalog.oid AND l.oid = p.prolang\n#v-\n\n From the 3884 in the current pg_dump (with both patches applied) - these\nqueries were called 1804 times. 
All of these calls were with different oids,\nso it's possible that there is nothing to be done about it, but figured I'll\nlet you know.\n\nThe thing is - even though it was called 1804 times, dump contains data only\nabout 107 functions (pg_restore -l schema.dump | grep -c FUNCTION), so it kinda\nseems that 94% of these calls are not needed.\n\nAnyway, even if we can't get any help for function queries, improvement of over\n50% is great.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 09:44:43 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> On Fri, Aug 27, 2021 at 05:23:23PM -0400, Tom Lane wrote:\n>> I experimented with the attached, very quick-n-dirty patch to collect\n>> format_type results during the initial scan of pg_type, instead.\n\n> So, I applied it to brand new HEAD from git, Result:\n> From total of 9173 queries it went down to 4178.\n> Originally 5000 type queries, now 19!\n> This is actually strange given that previously it was asking querying\n> about 83 separate type oids. But, as far as I was able to check with\n> \"pg_restore -l\" (from -Fc dump), results are the same.\n\nHm. So we're still no wiser than before about how such a small (in\nterms of number of objects) database could have produced so many\ngetFormattedTypeName calls. Plus, this result raises a new question:\nwith the patch, I think you shouldn't have seen *any* queries of that\nform. Where are the 19 survivors coming from?\n\nI don't suppose you could send me a schema-only dump of that\ndatabase, off-list? I'm now quite curious.\n\n> But since we're looking at it, and with both patches applied, I looked\n> at the next most common query. 
Which is:\n> [ collection of details about a function ]\n\n> The thing is - even though it was called 1804 times, dump contains data only\n> about 107 functions (pg_restore -l schema.dump | grep -c FUNCTION), so it kinda\n> seems that 94% of these calls is not needed.\n\nHm. It's not doing that for *every* row in pg_proc, at least.\nI speculate that it is collecting and then not printing the info\nabout functions that are in extensions --- can you check on\nhow many there are of those?\n\n(Actually, if you've got a whole lot of objects inside extensions,\nmaybe that explains the 5000 calls?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:11:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Mon, Aug 30, 2021 at 10:11:22AM -0400, Tom Lane wrote:\n> I don't suppose you could send me a schema-only dump of that\n> database, off-list? I'm now quite curious.\n\nAsked the owners for their permission.\n\n> > The thing is - even though it was called 1804 times, dump contains data only\n> > about 107 functions (pg_restore -l schema.dump | grep -c FUNCTION), so it kinda\n> > seems that 94% of these calls is not needed.\n\n> Hm. 
It's not doing that for *every* row in pg_proc, at least.\n> I speculate that it is collecting and then not printing the info\n> about functions that are in extensions --- can you check on\n> how many there are of those?\n> (Actually, if you've got a whole lot of objects inside extensions,\n> maybe that explains the 5000 calls?)\n\nWell, not sure if that's a lot, but:\nthere are 15 extensions, including plpgsql.\n\nSELECT\n count(*)\nFROM\n pg_catalog.pg_depend\nWHERE\n refclassid = 'pg_catalog.pg_extension'::pg_catalog.regclass\n AND deptype = 'e';\n\n\nreturn 2110 objects:\n\nSELECT\n classid::regclass,\n count(*)\nFROM\n pg_catalog.pg_depend\nWHERE\n refclassid = 'pg_catalog.pg_extension'::pg_catalog.regclass\n AND deptype = 'e'\nGROUP BY\n 1\nORDER BY\n 1;\n\n classid │ count \n─────────────────────────┼───────\n pg_type │ 31\n pg_proc │ 1729\n pg_class │ 61\n pg_foreign_data_wrapper │ 1\n pg_cast │ 30\n pg_language │ 1\n pg_opclass │ 73\n pg_operator │ 111\n pg_opfamily │ 73\n(9 rows)\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 16:45:51 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> On Mon, Aug 30, 2021 at 10:11:22AM -0400, Tom Lane wrote:\n>> I speculate that it is collecting and then not printing the info\n>> about functions that are in extensions --- can you check on\n>> how many there are of those?\n\n> classid │ count \n> ─────────────────────────┼───────\n> pg_type │ 31\n> pg_proc │ 1729\n> pg_class │ 61\n> pg_foreign_data_wrapper │ 1\n> pg_cast │ 30\n> pg_language │ 1\n> pg_opclass │ 73\n> pg_operator │ 111\n> pg_opfamily │ 73\n> (9 rows)\n\nAh-hah. Those 1729 extension-owned functions account nicely\nfor the extra probes into pg_proc, and I bet they are causing\nthe unexplained getFormattedTypeName calls too. 
So the\n*real* problem here seems to be that we're doing too much\nwork on objects that are not going to be dumped because they\nare extension members. I'll take a look at that later if\nnobody beats me to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:58:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "I wrote:\n> Ah-hah. Those 1729 extension-owned functions account nicely\n> for the extra probes into pg_proc, and I bet they are causing\n> the unexplained getFormattedTypeName calls too. So the\n> *real* problem here seems to be that we're doing too much\n> work on objects that are not going to be dumped because they\n> are extension members. I'll take a look at that later if\n> nobody beats me to it.\n\nI took a quick look at this, and it seems to be mostly the fault\nof the DUMP_COMPONENT refactorization that was done awhile ago.\nWe create DumpableObjects for all these objects, because we need\nthose to track dependencies. When we arrive at dumpFunc() for\nan extension-owned object, it has the DUMP_COMPONENT_SECLABEL\nand DUMP_COMPONENT_POLICY flag bits set, whether or not the\nfunction actually has any such properties. This causes dumpFunc\nto run through its data collection query, even though nothing\nat all is going to get output.\n\nI see that the reason those flags become set is that\ncheckExtensionMembership does this for an extension member:\n\n dobj->dump = ext->dobj.dump_contains & (DUMP_COMPONENT_ACL |\n DUMP_COMPONENT_SECLABEL |\n DUMP_COMPONENT_POLICY);\n\nThere is logic elsewhere that causes the DUMP_COMPONENT_ACL flag\nto get cleared if there's no interesting ACL for the object, but\nI see no such logic for SECLABEL or POLICY. 
That omission is costing\nus an awful lot of wasted queries in any database with a lot of\nextension-owned objects.\n\nI'm quite allergic to the way that the ACL logic is implemented anyhow,\nas there seem to be N copies of essentially identical logic, not to\nmention all the inefficient left joins and subqueries that were added\nto the fundamental data-gathering queries --- which are only supposed\nto find out which objects we want to dump, not expensively collect\nscads of detail about every object in the catalogs. I think this is\nless in need of a tweak than \"burn it to the ground and start over\".\nI wonder if we can't get to a place where there's only one query that\nactually looks into pg_init_privs, more like the way we do it for\ndescriptions and seclabels (not that the seclabel code is perfect,\nas we've found here).\n\nAnyway, it doesn't look like there's much hope of improving this\naspect without a significant rewrite. One band-aidy idea is that\nwe could check --no-security-labels earlier and not allow that\nflag bit to become set in the first place, but that only helps if\nthe user gives that flag, which few would. (I'm also wondering\nmore than a little bit why we're allowing DUMP_COMPONENT_POLICY\nto become set on objects that aren't tables.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 12:42:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> On Mon, Aug 30, 2021 at 10:11:22AM -0400, Tom Lane wrote:\n>> I don't suppose you could send me a schema-only dump of that\n>> database, off-list? 
I'm now quite curious.\n\n> Asked the owners for their permission.\n\nBTW, I think you can skip that part now --- it seems like the extensions\nsufficiently explain the extra queries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 13:33:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "[ redirecting to -hackers ]\n\nI wrote:\n> I experimented with the attached, very quick-n-dirty patch to collect\n> format_type results during the initial scan of pg_type, instead. On the\n> regression database in HEAD, it reduces the number of queries pg_dump\n> issues from 3260 to 2905; but I'm having a hard time detecting any net\n> performance change.\n\nI decided that that patch wasn't too safe, because it applies\nformat_type() to pg_type rows that we have no reason to trust the\nlongevity of. I think it could fall over if some concurrent process\nwere busy dropping a temp table, for example.\n\nSo here's a version that just does plain caching of the results\nof retail getFormattedTypeName() calls. This visibly adds no\nqueries that were not done before, so it should be safe enough.\nAnd there can't be any cases that it makes slower, either.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 30 Aug 2021 20:11:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Mon, Aug 30, 2021 at 08:11:00PM -0400, Tom Lane wrote:\n> [ redirecting to -hackers ]\n> \n> I wrote:\n> > I experimented with the attached, very quick-n-dirty patch to collect\n> > format_type results during the initial scan of pg_type, instead. 
On the\n> > regression database in HEAD, it reduces the number of queries pg_dump\n> > issues from 3260 to 2905; but I'm having a hard time detecting any net\n> > performance change.\n> \n> I decided that that patch wasn't too safe, because it applies\n> format_type() to pg_type rows that we have no reason to trust the\n> longevity of. I think it could fall over if some concurrent process\n> were busy dropping a temp table, for example.\n> \n> So here's a version that just does plain caching of the results\n> of retail getFormattedTypeName() calls. This visibly adds no\n> queries that were not done before, so it should be safe enough.\n> And there can't be any cases that it makes slower, either.\n\nHi,\ntested it in my case, and it reduced query count to 4261.\n\nWhich is great.\n\nBut, I also looked closer into the pg_proc queries and extensions.\nAnd - most functions come from relatively standard extensions:\n- postgis 1246 functions\n- btree_gist 179 functions\n- btree_gin 87 functions\n- hstore 58 functions\n\nMy point in here is that potential optimizations regarding queries for\npg_proc might speed up dumps for more people - as they might use things\nlike postgis, but never realized that it can be much faster.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:07:27 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.com> writes:\n> My point in here is that potential optimizations regarding queries for\n> pg_proc might speed up dumps for more people - as they might use things\n> like postgis, but never realized that it can be much faster.\n\nAgreed, but as I said upthread, fixing that looks like it will be\nrather invasive. 
Meanwhile, I went ahead and pushed the two\nsimple improvements discussed so far.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Aug 2021 15:06:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Tue, Aug 31, 2021 at 03:06:25PM -0400, Tom Lane wrote:\n> Agreed, but as I said upthread, fixing that looks like it will be\n> rather invasive. Meanwhile, I went ahead and pushed the two\n> simple improvements discussed so far.\n\nGreat. Thank you very much.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 07:45:27 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "I wrote:\n> Anyway, it doesn't look like there's much hope of improving this\n> aspect without a significant rewrite.\n\nJust to close out this thread: I've now posted such a rewrite at\n\nhttps://www.postgresql.org/message-id/2273648.1634764485%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Oct 2021 17:46:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" }, { "msg_contents": "On Wed, Oct 20, 2021 at 05:46:01PM -0400, Tom Lane wrote:\n> I wrote:\n> > Anyway, it doesn't look like there's much hope of improving this\n> > aspect without a significant rewrite.\n> \n> Just to close out this thread: I've now posted such a rewrite at\n> https://www.postgresql.org/message-id/2273648.1634764485%40sss.pgh.pa.us\n\nThat looks amazing. Thanks a lot.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Thu, 21 Oct 2021 12:52:53 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of repeated queries from pg_dump?" } ]
[ { "msg_contents": "When using pg_basebackup with WAL streaming (-X stream), we have observed on a\nnumber of times in production that the streaming child exited prematurely (to\nno fault of the code it seems, most likely due to network middleboxes), which\ncause the backup to fail but only after it has run to completion. On long\nrunning backups this can consume a lot of time before it’s noticed.\n\nBy trapping the failure of the streaming process we can instead exit early to\nallow the user to fix and/or restart the process.\n\nThe attached adds a SIGCHLD handler for Unix, and catch the returnvalue from\nthe Windows thread, in order to break out early from the main loop. It still\nneeds a test, and proper testing on Windows, but early feedback on the approach\nwould be appreciated.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 26 Aug 2021 11:25:06 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Thu, Aug 26, 2021 at 2:55 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> When using pg_basebackup with WAL streaming (-X stream), we have observed on a\n> number of times in production that the streaming child exited prematurely (to\n> no fault of the code it seems, most likely due to network middleboxes), which\n> cause the backup to fail but only after it has run to completion. On long\n> running backups this can consume a lot of time before it’s noticed.\n\nHm.\n\n> By trapping the failure of the streaming process we can instead exit early to\n> allow the user to fix and/or restart the process.\n>\n> The attached adds a SIGCHLD handler for Unix, and catch the returnvalue from\n> the Windows thread, in order to break out early from the main loop. 
It still\n> needs a test, and proper testing on Windows, but early feedback on the approach\n> would be appreciated.\n\nHere are some comments on the patch:\n1) Do we need volatile keyword here to read the value of the variables\nalways from the memory?\n+static volatile sig_atomic_t bgchild_exited = false;\n\n2) Do we need #ifndef WIN32 ... #endif around sigchld_handler function\ndefinition?\n\n3) I'm not sure if the new value of bgchild_exited being set in the\nchild thread will reflect in the main process on Windows? But\ntheoretically, I can understand that the memory will be shared between\nthe main process thread and child thread.\n#ifdef WIN32\n/*\n* In order to signal the main thread of an ungraceful exit we\n* set the flag used on Unix to signal SIGCHLD.\n*/\nbgchild_exited = true;\n#endif\n\n4) How about \"set the same flag that we use on Unix to signal\nSIGCHLD.\" instead of \"* set the flag used on Unix to signal\nSIGCHLD.\"?\n\n5) How about \"background WAL receiver terminated unexpectedly\" instead\nof \"log streamer child terminated unexpectedly\"? This will be in sync\nwith the existing message \"starting background WAL receiver\". \"log\nstreamer\" is the word used internally in the code, user doesn't know\nit with that name.\n\n6) How about giving the exit code (like postmaster's reaper function\ndoes) instead of just a message saying unexpected termination? It will\nbe useful to know for what reason the process exited. 
For Windows, we\ncan use GetExitCodeThread (I'm referring to the code around waitpid in\npg_basebackup) and for Unix we can use waitpid.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 30 Aug 2021 16:01:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 30 Aug 2021, at 12:31, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Here are some comments on the patch:\n> 1) Do we need volatile keyword here to read the value of the variables\n> always from the memory?\n> +static volatile sig_atomic_t bgchild_exited = false;\n\nYes, fixed.\n\n> 2) Do we need #ifndef WIN32 ... #endif around sigchld_handler function\n> definition?\n\nAh yes, good point. Fixed.\n\n> 3) I'm not sure if the new value of bgchild_exited being set in the\n> child thread will reflect in the main process on Windows? But\n> theoretically, I can understand that the memory will be shared between\n> the main process thread and child thread.\n\nThe child does not have its own copy of bgchild_exited.\n\n> 4) How about \"set the same flag that we use on Unix to signal\n> SIGCHLD.\" instead of \"* set the flag used on Unix to signal\n> SIGCHLD.\"?\n\nFixed.\n\n> 5) How about \"background WAL receiver terminated unexpectedly\" instead\n> of \"log streamer child terminated unexpectedly\"? This will be in sync\n> with the existing message \"starting background WAL receiver\". \"log\n> streamer\" is the word used internally in the code, user doesn't know\n> it with that name.\n\nGood point, that’s better.\n\n> 6) How about giving the exit code (like postmaster's reaper function\n> does) instead of just a message saying unexpected termination? It will\n> be useful to know for what reason the process exited. 
For Windows, we\n> can use GetExitCodeThread (I'm referring to the code around waitpid in\n> pg_basebackup) and for Unix we can use waitpid.\n\nThe rest of the program is doing exit(1) regardless of the failure of the\nchild/thread, so it seems more consistent to keep doing that for this class of\nerror as well.\n\nA v2 with the above fixes is attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 1 Sep 2021 10:26:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Wed, Sep 1, 2021 at 1:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> A v2 with the above fixes is attached.\n\nThanks for the updated patch. Here are some comments:\n\n1) Do we need to set bgchild = -1 before the exit(1); in the code\nbelow so that we don't kill(bgchild, SIGTERM); unnecessarily in\nkill_bgchild_atexit?\n+ if (bgchild_exited)\n+ {\n+ pg_log_error(\"background WAL receiver terminated unexpectedly\");\n+ exit(1);\n+ }\n+\n\n2) Missing \",\" after \"On Windows, we use a .....\"\n+ * that time. On Windows we use a background thread which can communicate\n\n3) How about \"/* Flag to indicate whether or not child process exited\n*/\" instead of +/* State of child process */?\n\n4) Instead of just exiting from the main pg_basebackup process when\nthe child WAL receiver dies, can't we think of restarting the child\nprocess, probably with the WAL streaming position where it left off or\nstream from the beginning? This way, the work that the main\npg_basebackup has done so far doesn't get wasted. I'm not sure if this\naffects the pg_basebackup functionality. We can restart the child\nprocess for 1 or 2 times, if it still dies, we can kill the main\npg_baasebackup process too. 
Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Sep 2021 15:58:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 1 Sep 2021, at 12:28, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Wed, Sep 1, 2021 at 1:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> A v2 with the above fixes is attached.\n> \n> Thanks for the updated patch. Here are some comments:\n> \n> 1) Do we need to set bgchild = -1 before the exit(1); in the code\n> below so that we don't kill(bgchild, SIGTERM); unnecessarily in\n> kill_bgchild_atexit?\n\nGood point. We can also inspect bgchild_exited in kill_bgchild_atexit.\n\n> 2) Missing \",\" after \"On Windows, we use a .....\"\n> + * that time. On Windows we use a background thread which can communicate\n> \n> 3) How about \"/* Flag to indicate whether or not child process exited\n> */\" instead of +/* State of child process */?\n\nFixed.\n\n> 4) Instead of just exiting from the main pg_basebackup process when\n> the child WAL receiver dies, can't we think of restarting the child\n> process, probably with the WAL streaming position where it left off or\n> stream from the beginning? This way, the work that the main\n> pg_basebackup has done so far doesn't get wasted. I'm not sure if this\n> affects the pg_basebackup functionality. We can restart the child\n> process for 1 or 2 times, if it still dies, we can kill the main\n> pg_baasebackup process too. Thoughts?\n\nI was toying with the idea, but I ended up not pursuing it. This error is well\ninto the “really shouldn’t happen, but can” territory and it’s quite likely\nthat some level of manual intervention is required to make it successfully\nrestart. 
I’m not convinced that adding complicated logic to restart (and even\nmore complicated tests to simulate and test it) will be worthwhile.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 3 Sep 2021 11:53:01 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Fri, Sep 3, 2021 at 3:23 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > 4) Instead of just exiting from the main pg_basebackup process when\n> > the child WAL receiver dies, can't we think of restarting the child\n> > process, probably with the WAL streaming position where it left off or\n> > stream from the beginning? This way, the work that the main\n> > pg_basebackup has done so far doesn't get wasted. I'm not sure if this\n> > affects the pg_basebackup functionality. We can restart the child\n> > process for 1 or 2 times, if it still dies, we can kill the main\n> > pg_baasebackup process too. Thoughts?\n>\n> I was toying with the idea, but I ended up not pursuing it. This error is well\n> into the “really shouldn’t happen, but can” territory and it’s quite likely\n> that some level of manual intervention is required to make it successfully\n> restart. I’m not convinced that adding complicated logic to restart (and even\n> more complicated tests to simulate and test it) will be worthwhile.\n\n I withdraw my suggestion because I now feel that it's better not to\nmake it complex and let the user decide if at all the child process\nexits abnormally.\n\nI think we might still miss abnormal child thread exits on Windows\nbecause we set bgchild_exited = true only if ReceiveXlogStream or\nwalmethod->finish() returns false. I'm not sure the parent thread on\nWindows can detect a child's abnormal exit. 
Since there is no signal\nmechanism on Windows, what the patch does is better to detect child\nexit on two important functions failures.\n\nOverall, the v3 patch looks good to me.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Sep 2021 20:33:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Fri, Sep 3, 2021 at 11:53 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 1 Sep 2021, at 12:28, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Sep 1, 2021 at 1:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> A v2 with the above fixes is attached.\n> >\n> > Thanks for the updated patch. Here are some comments:\n> >\n> > 1) Do we need to set bgchild = -1 before the exit(1); in the code\n> > below so that we don't kill(bgchild, SIGTERM); unnecessarily in\n> > kill_bgchild_atexit?\n>\n> Good point. We can also inspect bgchild_exited in kill_bgchild_atexit.\n>\n> > 2) Missing \",\" after \"On Windows, we use a .....\"\n> > + * that time. On Windows we use a background thread which can communicate\n> >\n> > 3) How about \"/* Flag to indicate whether or not child process exited\n> > */\" instead of +/* State of child process */?\n>\n> Fixed.\n>\n> > 4) Instead of just exiting from the main pg_basebackup process when\n> > the child WAL receiver dies, can't we think of restarting the child\n> > process, probably with the WAL streaming position where it left off or\n> > stream from the beginning? This way, the work that the main\n> > pg_basebackup has done so far doesn't get wasted. I'm not sure if this\n> > affects the pg_basebackup functionality. We can restart the child\n> > process for 1 or 2 times, if it still dies, we can kill the main\n> > pg_baasebackup process too. Thoughts?\n>\n> I was toying with the idea, but I ended up not pursuing it. 
This error is well\n> into the “really shouldn’t happen, but can” territory and it’s quite likely\n> that some level of manual intervention is required to make it successfully\n> restart. I’m not convinced that adding complicated logic to restart (and even\n> more complicated tests to simulate and test it) will be worthwhile.\n>\n\nI think the restart scenario while nice, definitely means moving the\ngoalposts quite far. Let's get this detection in first at least, and\nthen we can always consider that a separate patch in the future.\n\nMight be worth noting in one of the comments the difference in\nbehaviour if the backend process/thread *crashes* -- that is, on Unix\nit will be detected via the signal handler and on Windows the whole\nprocess including the main thread will die, so there is nothing to\ndetect.\n\nOther places in the code just refers to the background process as \"the\nbackground process\". The term \"WAL receiver\" is new from this patch.\nWhile I agree it's in many ways a better term, I think (1) we should\ntry to be consistent, and (2) maybe use a different term than \"WAL\nreceiver\" specifically because we have a backend component with that\nname.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 28 Sep 2021 15:48:50 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 28 Sep 2021, at 15:48, Magnus Hagander <magnus@hagander.net> wrote:\n\n> Might be worth noting in one of the comments the difference in\n> behaviour if the backend process/thread *crashes* -- that is, on Unix\n> it will be detected via the signal handler and on Windows the whole\n> process including the main thread will die, so there is nothing to\n> detect.\n\nGood point, done.\n\n> Other places in the code just refers to the background process as \"the\n> background 
process\". The term \"WAL receiver\" is new from this patch.\n> While I agree it's in many ways a better term, I think (1) we should\n> try to be consistent, and (2) maybe use a different term than \"WAL\n> receiver\" specifically because we have a backend component with that\n> name.\n\nLooking at the user-facing messaging we have before this patch, there is a bit\nof variability:\n\nOn UNIX:\n\n pg_log_error(\"could not create pipe for background process: %m\");\n pg_log_error(\"could not create background process: %m\");\n pg_log_info(\"could not send command to background pipe: %m\");\n pg_log_error(\"could not wait for child process: %m\");\n\nOn Windows:\n\n pg_log_error(\"could not create background thread: %m\");\n pg_log_error(\"could not get child thread exit status: %m\");\n pg_log_error(\"could not wait for child thread: %m\");\n pg_log_error(\"child thread exited with error %u\", ..);\n\nOn Both:\n\n pg_log_info(\"starting background WAL receiver\");\n pg_log_info(\"waiting for background process to finish streaming ...\");\n\nSo there is one mention of a background WAL receiver already in there, but it's\npretty inconsistent as to what we call it. 
For now I've changed the messaging\nin this patch to say \"background process\", leaving making this all consistent\nfor a follow-up patch.\n\nThe attached fixes the above, as well as the typo mentioned off-list and is\nrebased on top of todays HEAD.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 29 Sep 2021 13:18:40 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Wed, Sep 29, 2021 at 8:18 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Sep 2021, at 15:48, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > Might be worth noting in one of the comments the difference in\n> > behaviour if the backend process/thread *crashes* -- that is, on Unix\n> > it will be detected via the signal handler and on Windows the whole\n> > process including the main thread will die, so there is nothing to\n> > detect.\n>\n> Good point, done.\n>\n> > Other places in the code just refers to the background process as \"the\n> > background process\". 
The term \"WAL receiver\" is new from this patch.\n> > While I agree it's in many ways a better term, I think (1) we should\n> > try to be consistent, and (2) maybe use a different term than \"WAL\n> > receiver\" specifically because we have a backend component with that\n> > name.\n>\n> Looking at the user-facing messaging we have before this patch, there is a bit\n> of variability:\n>\n> On UNIX:\n>\n> pg_log_error(\"could not create pipe for background process: %m\");\n> pg_log_error(\"could not create background process: %m\");\n> pg_log_info(\"could not send command to background pipe: %m\");\n> pg_log_error(\"could not wait for child process: %m\");\n>\n> On Windows:\n>\n> pg_log_error(\"could not create background thread: %m\");\n> pg_log_error(\"could not get child thread exit status: %m\");\n> pg_log_error(\"could not wait for child thread: %m\");\n> pg_log_error(\"child thread exited with error %u\", ..);\n>\n> On Both:\n>\n> pg_log_info(\"starting background WAL receiver\");\n> pg_log_info(\"waiting for background process to finish streaming ...\");\n>\n> So there is one mention of a background WAL receiver already in there, but it's\n> pretty inconsistent as to what we call it. 
For now I've changed the messaging\n> in this patch to say \"background process\", leaving making this all consistent\n> for a follow-up patch.\n>\n> The attached fixes the above, as well as the typo mentioned off-list and is\n> rebased on top of todays HEAD.\n\nThank you for working on this issue.\n\nThe patch looks good to me but there is one minor comment:\n\n--- a/src/bin/pg_basebackup/pg_basebackup.c\n+++ b/src/bin/pg_basebackup/pg_basebackup.c\n@@ -174,6 +174,8 @@ static int bgpipe[2] = {-1, -1};\n /* Handle to child process */\n static pid_t bgchild = -1;\n static bool in_log_streamer = false;\n+/* Flag to indicate if child process exited unexpectedly */\n+static volatile sig_atomic_t bgchild_exited = false;\n\nIt's better to have a new line before the new comment.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 26 Oct 2021 20:25:06 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Wed, Sep 29, 2021 at 4:48 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > Other places in the code just refers to the background process as \"the\n> > background process\". 
The term \"WAL receiver\" is new from this patch.\n> > While I agree it's in many ways a better term, I think (1) we should\n> > try to be consistent, and (2) maybe use a different term than \"WAL\n> > receiver\" specifically because we have a backend component with that\n> > name.\n>\n> Looking at the user-facing messaging we have before this patch, there is a bit\n> of variability:\n>\n> On UNIX:\n>\n> pg_log_error(\"could not create pipe for background process: %m\");\n> pg_log_error(\"could not create background process: %m\");\n> pg_log_info(\"could not send command to background pipe: %m\");\n> pg_log_error(\"could not wait for child process: %m\");\n>\n> On Windows:\n>\n> pg_log_error(\"could not create background thread: %m\");\n> pg_log_error(\"could not get child thread exit status: %m\");\n> pg_log_error(\"could not wait for child thread: %m\");\n> pg_log_error(\"child thread exited with error %u\", ..);\n>\n> On Both:\n>\n> pg_log_info(\"starting background WAL receiver\");\n> pg_log_info(\"waiting for background process to finish streaming ...\");\n>\n> So there is one mention of a background WAL receiver already in there, but it's\n> pretty inconsistent as to what we call it. For now I've changed the messaging\n> in this patch to say \"background process\", leaving making this all consistent\n> for a follow-up patch.\n>\n> The attached fixes the above, as well as the typo mentioned off-list and is\n> rebased on top of todays HEAD.\n\nThe documentation [1] of pg_basebackup specifies it as a \"second\nreplication connection\". Also, I see that the pg_receivewal.c using\nthe following message:\n if (db_name)\n {\n pg_log_error(\"replication connection using slot \\\"%s\\\" is\nunexpectedly database specific\",\n replication_slot);\n exit(1);\n\nWe can use something like \"stream replication connection\" or\n\"background replication connection\" or \"background process/thread for\nreplication\". 
Otherwise just \"background process\" on Unix and\n\"background thread\" on Windows look fine to me. If others are okay, we\ncan remove the \"WAL receiver\" and use it consistently.\n\n[1]\ns\nstream\n\nStream write-ahead log data while the backup is being taken. This\nmethod will open a second connection to the server and start streaming\nthe write-ahead log in parallel while running the backup. Therefore,\nit will require two replication connections not just one. As long as\nthe client can keep up with the write-ahead log data, using this\nmethod requires no extra write-ahead logs to be saved on the source\nserver.\n\nWhen tar format is used, the write-ahead log files will be written to\na separate file named pg_wal.tar (if the server is a version earlier\nthan 10, the file will be named pg_xlog.tar).\n\nThis value is the default.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 10 Nov 2021 16:07:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Wed, Sep 29, 2021 at 01:18:40PM +0200, Daniel Gustafsson wrote:\n> So there is one mention of a background WAL receiver already in there, but it's\n> pretty inconsistent as to what we call it. For now I've changed the messaging\n> in this patch to say \"background process\", leaving making this all consistent\n> for a follow-up patch.\n> \n> The attached fixes the above, as well as the typo mentioned off-list and is\n> rebased on top of todays HEAD.\n\nI have been looking a bit at this patch, and did some tests on Windows\nto find out that this is able to catch the failure of the thread\nstreaming the WAL segments in pg_basebackup, avoiding a completion of\nthe base backup, while HEAD waits until the backup finishes. 
Testing\nthis scenario is actually simple by issuing pg_terminate_backend() on\nthe WAL sender that streams the WAL with START_REPLICATION, while\nthrottling the base backup.\n\nCould you add a test to automate this scenario? As far as I can see,\nsomething like the following should be stable even for Windows:\n1) Run a pg_basebackup in the background with IPC::Run, using\n--max-rate with a minimal value to slow down the base backup, for slow\nmachines. 013_crash_restart.pl does that as one example with $killme.\n2) Find out the WAL sender doing START_REPLICATION in the backend, and\nissue pg_terminate_backend() on it.\n3) Use a variant of pump_until() on the pg_basebackup process and\ncheck after one or more failure patterns. We should refactor this\npart, actually. If this new test uses the same logic, that would make\nthree tests doing that with 022_crash_temp_files.pl and\n013_crash_restart.pl. The CI should be fine to provide any feedback\nwith the test in place, though I am fine to test things also in my\nbox.\n--\nMichael", "msg_date": "Wed, 16 Feb 2022 16:27:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 16 Feb 2022, at 08:27, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Sep 29, 2021 at 01:18:40PM +0200, Daniel Gustafsson wrote:\n>> So there is one mention of a background WAL receiver already in there, but it's\n>> pretty inconsistent as to what we call it. 
For now I've changed the messaging\n>> in this patch to say \"background process\", leaving making this all consistent\n>> for a follow-up patch.\n>> \n>> The attached fixes the above, as well as the typo mentioned off-list and is\n>> rebased on top of todays HEAD.\n> \n> I have been looking a bit at this patch, and did some tests on Windows\n> to find out that this is able to catch the failure of the thread\n> streaming the WAL segments in pg_basebackup, avoiding a completion of\n> the base backup, while HEAD waits until the backup finishes. Testing\n> this scenario is actually simple by issuing pg_terminate_backend() on\n> the WAL sender that streams the WAL with START_REPLICATION, while\n> throttling the base backup.\n\nGreat, thanks!\n\n> Could you add a test to automate this scenario? As far as I can see,\n> something like the following should be stable even for Windows:\n> 1) Run a pg_basebackup in the background with IPC::Run, using\n> --max-rate with a minimal value to slow down the base backup, for slow\n> machines. 013_crash_restart.pl does that as one example with $killme.\n> 2) Find out the WAL sender doing START_REPLICATION in the backend, and\n> issue pg_terminate_backend() on it.\n> 3) Use a variant of pump_until() on the pg_basebackup process and\n> check after one or more failure patterns. We should refactor this\n> part, actually. If this new test uses the same logic, that would make\n> three tests doing that with 022_crash_temp_files.pl and\n> 013_crash_restart.pl. The CI should be fine to provide any feedback\n> with the test in place, though I am fine to test things also in my\n> box.\n\nThis is good idea, I was going in a different direction earlier with a test but\nthis is cleaner. 
The attached 0001 refactors pump_until; 0002 fixes a trivial\nspelling error found while hacking; and 0003 is the previous patch complete\nwith a test that passes on Cirrus CI.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 18 Feb 2022 22:00:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Fri, Feb 18, 2022 at 10:00:43PM +0100, Daniel Gustafsson wrote:\n> This is good idea, I was going in a different direction earlier with a test but\n> this is cleaner. The attached 0001 refactors pump_until; 0002 fixes a trivial\n> spelling error found while hacking; and 0003 is the previous patch complete\n> with a test that passes on Cirrus CI.\n\nThis looks rather sane to me, and I can confirm that this passes\nthe CI and a manual run of MSVC tests with my own box.\n\n+is($node->poll_query_until('postgres',\n+ \"SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE \" .\n+ \"application_name = '010_pg_basebackup.pl' AND wait_event =\n'WalSenderMain' \" .\n+ \"AND backend_type = 'walsender'\"), \"1\", \"Walsender killed\");\nIf you do that, don't you have a risk to kill the WAL sender doing the\nBASE_BACKUP? That could falsify the test. 
It seems to me that it\nwould be safer to add a check on query ~ 'START_REPLICATION' or\nsomething like that.\n\n- diag(\"aborting wait: program timed out\");\n- diag(\"stream contents: >>\", $$stream, \"<<\");\n- diag(\"pattern searched for: \", $untl);\nKeeping some of this information around would be useful for\ndebugging in the refactored routine.\n\n+my $sigchld_bb = IPC::Run::start(\n+ [\n+ @pg_basebackup_defs, '-X', 'stream', '-D', \"$tempdir/sigchld\",\n+ '-r', '32', '-d', $node->connstr('postgres')\n+ ],\nI would recommend the use of long options here as a matter to\nself-document what this does, and add a comment explaining why\n--max-rate is preferable, mainly for fast machines.\n--\nMichael", "msg_date": "Mon, 21 Feb 2022 11:03:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 21 Feb 2022, at 03:03, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Feb 18, 2022 at 10:00:43PM +0100, Daniel Gustafsson wrote:\n>> This is good idea, I was going in a different direction earlier with a test but\n>> this is cleaner. The attached 0001 refactors pump_until; 0002 fixes a trivial\n>> spelling error found while hacking; and 0003 is the previous patch complete\n>> with a test that passes on Cirrus CI.\n> \n> This looks rather sane to me, and I can confirm that this passes\n> the CI and a manual run of MSVC tests with my own box.\n\nGreat, thanks!\n\n> +is($node->poll_query_until('postgres',\n> + \"SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE \" .\n> + \"application_name = '010_pg_basebackup.pl' AND wait_event =\n> 'WalSenderMain' \" .\n> + \"AND backend_type = 'walsender'\"), \"1\", \"Walsender killed\");\n> If you do that, don't you have a risk to kill the WAL sender doing the\n> BASE_BACKUP? That could falsify the test. 
It seems to me that it\n> would be safer to add a check on query ~ 'START_REPLICATION' or\n> something like that.\n\nI don't think there's a risk, but I've added the check on query as well since\nit also makes it more readable.\n\n> - diag(\"aborting wait: program timed out\");\n> - diag(\"stream contents: >>\", $$stream, \"<<\");\n> - diag(\"pattern searched for: \", $untl);\n> Keeping some of this information around would be useful for\n> debugging in the refactored routine.\n\nMaybe, but we don't really have diag output anywhere in the modules or the\ntests so I didn't see much of a precedent for keeping it. Inspectig the repo I\nthink we can remove two more in pg_rewind, which I just started a thread for.\n\n> +my $sigchld_bb = IPC::Run::start(\n> + [\n> + @pg_basebackup_defs, '-X', 'stream', '-D', \"$tempdir/sigchld\",\n> + '-r', '32', '-d', $node->connstr('postgres')\n> + ],\n> \tI would recommend the use of long options here as a matter to\n> self-document what this does, and add a comment explaining why\n> --max-rate is preferable, mainly for fast machines.\n\nFair enough, done.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 21 Feb 2022 15:11:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "On Mon, Feb 21, 2022 at 03:11:30PM +0100, Daniel Gustafsson wrote:\n>On 21 Feb 2022, at 03:03, Michael Paquier <michael@paquier.xyz> wrote:\n>> +is($node->poll_query_until('postgres',\n>> + \"SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE \" .\n>> + \"application_name = '010_pg_basebackup.pl' AND wait_event =\n>> 'WalSenderMain' \" .\n>> + \"AND backend_type = 'walsender'\"), \"1\", \"Walsender killed\");\n>> If you do that, don't you have a risk to kill the WAL sender doing the\n>> BASE_BACKUP? That could falsify the test. 
It seems to me that it\n>> would be safer to add a check on query ~ 'START_REPLICATION' or\n>> something like that.\n> \n> I don't think there's a risk, but I've added the check on query as well since\n> it also makes it more readable.\n\nOkay, thanks.\n\n>> - diag(\"aborting wait: program timed out\");\n>> - diag(\"stream contents: >>\", $$stream, \"<<\");\n>> - diag(\"pattern searched for: \", $untl);\n>> Keeping some of this information around would be useful for\n>> debugging in the refactored routine.\n> \n> Maybe, but we don't really have diag output anywhere in the modules or the\n> tests so I didn't see much of a precedent for keeping it. Inspectig the repo I\n> think we can remove two more in pg_rewind, which I just started a thread for.\n\nHmm. If you think this is better this way, I won't fight hard on this\npoint, either.\n\nThe patch set looks fine overall.\n--\nMichael", "msg_date": "Tue, 22 Feb 2022 10:13:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" }, { "msg_contents": "> On 22 Feb 2022, at 02:13, Michael Paquier <michael@paquier.xyz> wrote:\n\n> The patch set looks fine overall.\n\nThanks for reviewing and testing, I pushed this today after taking another\nround at it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 23 Feb 2022 20:58:29 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Trap errors from streaming child in pg_basebackup to exit early" } ]
[ { "msg_contents": "Hi.\n\nIn the code in src/backend/replication/logical/origin.c,\nthe error code \"ERRCODE_CONFIGURATION_LIMIT_EXCEEDED\" is given\nwhen a checksum check results in an error,\nbut \"ERRCODE_ DATA_CORRUPTED\" seems to be more appropriate.\n\n====================\ndiff --git a/src/backend/replication/logical/origin.c\nb/src/backend/replication/logical/origin.c\nindex 2c191de..65dcd03 100644\n--- a/src/backend/replication/logical/origin.c\n+++ b/src/backend/replication/logical/origin.c\n@@ -796,7 +796,7 @@ StartupReplicationOrigin(void)\n FIN_CRC32C(crc);\n if (file_crc != crc)\n ereport(PANIC,\n- (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n errmsg(\"replication slot checkpoint\nhas wrong checksum %u, expected %u\",\n crc, file_crc)));\n====================\nThought?\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Thu, 26 Aug 2021 18:47:59 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": true, "msg_subject": "Error code for checksum failure in origin.c" }, { "msg_contents": "On Thu, Aug 26, 2021 at 3:18 PM Kasahara Tatsuhito\n<kasahara.tatsuhito@gmail.com> wrote:\n>\n> Hi.\n>\n> In the code in src/backend/replication/logical/origin.c,\n> the error code \"ERRCODE_CONFIGURATION_LIMIT_EXCEEDED\" is given\n> when a checksum check results in an error,\n> but \"ERRCODE_ DATA_CORRUPTED\" seems to be more appropriate.\n>\n\n+1. Your observation looks correct to me and the error code suggested\nby you seems appropriate. 
We use ERRCODE_DATA_CORRUPTED in\nReadTwoPhaseFile() for similar error.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Aug 2021 15:33:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "> On 26 Aug 2021, at 12:03, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Thu, Aug 26, 2021 at 3:18 PM Kasahara Tatsuhito\n> <kasahara.tatsuhito@gmail.com> wrote:\n>> \n>> Hi.\n>> \n>> In the code in src/backend/replication/logical/origin.c,\n>> the error code \"ERRCODE_CONFIGURATION_LIMIT_EXCEEDED\" is given\n>> when a checksum check results in an error,\n>> but \"ERRCODE_ DATA_CORRUPTED\" seems to be more appropriate.\n>> \n> \n> +1. Your observation looks correct to me and the error code suggested\n> by you seems appropriate. We use ERRCODE_DATA_CORRUPTED in\n> ReadTwoPhaseFile() for similar error.\n\nAgreed, +1 for changing this.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 12:41:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "On Thu, Aug 26, 2021 at 4:11 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Aug 2021, at 12:03, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 3:18 PM Kasahara Tatsuhito\n> > <kasahara.tatsuhito@gmail.com> wrote:\n> >>\n> >> Hi.\n> >>\n> >> In the code in src/backend/replication/logical/origin.c,\n> >> the error code \"ERRCODE_CONFIGURATION_LIMIT_EXCEEDED\" is given\n> >> when a checksum check results in an error,\n> >> but \"ERRCODE_ DATA_CORRUPTED\" seems to be more appropriate.\n> >>\n> >\n> > +1. Your observation looks correct to me and the error code suggested\n> > by you seems appropriate. 
We use ERRCODE_DATA_CORRUPTED in\n> > ReadTwoPhaseFile() for similar error.\n>\n> Agreed, +1 for changing this.\n>\n\nI think we need to backpatch this till 9.6 as this is introduced by\ncommit 5aa2350426. Any objections to that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Aug 2021 10:02:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "On Fri, Aug 27, 2021 at 10:02:26AM +0530, Amit Kapila wrote:\n> I think we need to backpatch this till 9.6 as this is introduced by\n> commit 5aa2350426. Any objections to that?\n\nNone.\n--\nMichael", "msg_date": "Fri, 27 Aug 2021 14:00:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "On Fri, Aug 27, 2021 at 1:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 4:11 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 26 Aug 2021, at 12:03, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 26, 2021 at 3:18 PM Kasahara Tatsuhito\n> > > <kasahara.tatsuhito@gmail.com> wrote:\n> > >>\n> > >> Hi.\n> > >>\n> > >> In the code in src/backend/replication/logical/origin.c,\n> > >> the error code \"ERRCODE_CONFIGURATION_LIMIT_EXCEEDED\" is given\n> > >> when a checksum check results in an error,\n> > >> but \"ERRCODE_ DATA_CORRUPTED\" seems to be more appropriate.\n> > >>\n> > >\n> > > +1. Your observation looks correct to me and the error code suggested\n> > > by you seems appropriate. 
We use ERRCODE_DATA_CORRUPTED in\n> > > ReadTwoPhaseFile() for similar error.\n> >\n> > Agreed, +1 for changing this.\n> >\n>\n> I think we need to backpatch this till 9.6 as this is introduced by\n> commit 5aa2350426.\n+1\n\nBest regards,\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 27 Aug 2021 14:25:23 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "> On 27 Aug 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> I think we need to backpatch this till 9.6 as this is introduced by\n> commit 5aa2350426. Any objections to that?\n\nNo, that seems appropriate.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 09:17:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "On Fri, Aug 27, 2021 at 12:47 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Aug 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I think we need to backpatch this till 9.6 as this is introduced by\n> > commit 5aa2350426. Any objections to that?\n>\n> No, that seems appropriate.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Aug 2021 14:05:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error code for checksum failure in origin.c" }, { "msg_contents": "On Mon, Aug 30, 2021 at 5:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 27, 2021 at 12:47 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 27 Aug 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > I think we need to backpatch this till 9.6 as this is introduced by\n> > > commit 5aa2350426. 
Any objections to that?\n> >\n> > No, that seems appropriate.\n> >\n>\n> Pushed.\nThanks !\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Mon, 30 Aug 2021 19:16:09 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error code for checksum failure in origin.c" } ]
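The ERRCODE_* constants discussed in this thread are five-character SQLSTATE codes packed into a single int by the MAKE_SQLSTATE macro in src/include/utils/elog.h. Here is a minimal sketch of that packing, written in Python for illustration — the five-character codes and constant names follow PostgreSQL's errcodes list, but the helper functions themselves are not PostgreSQL APIs:

```python
# Six-bits-per-character packing, mirroring PostgreSQL's MAKE_SQLSTATE macro
# (src/include/utils/elog.h). Helper names are illustrative only.

def make_sqlstate(code: str) -> int:
    """Pack a 5-character SQLSTATE such as 'XX001' into one int."""
    assert len(code) == 5
    value = 0
    for i, ch in enumerate(code):
        value += ((ord(ch) - ord('0')) & 0x3F) << (6 * i)
    return value

def unpack_sqlstate(value: int) -> str:
    """Inverse of make_sqlstate: recover the 5-character code."""
    return ''.join(chr(((value >> (6 * i)) & 0x3F) + ord('0')) for i in range(5))

# The two codes discussed in the thread, per errcodes.txt:
ERRCODE_DATA_CORRUPTED = make_sqlstate('XX001')                # class XX: internal error
ERRCODE_CONFIGURATION_LIMIT_EXCEEDED = make_sqlstate('53400')  # class 53: insufficient resources

print(unpack_sqlstate(ERRCODE_DATA_CORRUPTED))  # → XX001
```

Only the first two characters (the class) drive error categorization, so moving the checksum failure from class 53 (insufficient resources) to class XX (internal error / data corruption) changes how clients and monitoring tools triage it.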
[ { "msg_contents": "\nIs there a reason why contrib/amcheck/verify_heapam.c does not want to \nrun on sequences? If I take out the checks, it appears to work. Is \nthis an oversight? Or if there is a reason, maybe it could be stated in \na comment, at least.\n\n\n", "msg_date": "Thu, 26 Aug 2021 12:03:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "verify_heapam for sequences?" }, { "msg_contents": "> On Aug 26, 2021, at 3:03 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> \n> Is there a reason why contrib/amcheck/verify_heapam.c does not want to run on sequences? If I take out the checks, it appears to work. Is this an oversight? Or if there is a reason, maybe it could be stated in a comment, at least.\n\nTesting the corruption checking logic on all platforms is a bit arduous, because the data layout on disk changes with alignment difference, endianness, etc. The work I did with Tom's help finally got good test coverage across the entire buildfarm, but that test (contrib/amcheck/t/001_verify_heapam.pl) doesn't work for sequences even on my one platform (mac laptop).\n\nI have added a modicum of test coverage for sequences in the attached WIP patch, which is enough to detect sequence corruption on my laptop. It would have to be tested across the buildfarm after being extended to cover more cases. As it stands now, it uses blunt force to corrupt the relation, and only verifies that verify_heapam() returns some corruption, not that it reports the right corruption.\n\nI understand that sequences are really just heap tables, and since we already test corrupted heap tables, we could assume that we already have sufficient coverage. 
I'm not entirely comfortable with that, though, because future patch authors who modify how tables or sequences work are not necessarily going to think carefully about whether their modifications invalidate that assumption.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Aug 2021 12:02:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: verify_heapam for sequences?" }, { "msg_contents": "On 26.08.21 21:02, Mark Dilger wrote:\n> I understand that sequences are really just heap tables, and since we already test corrupted heap tables, we could assume that we already have sufficient coverage. I'm not entirely comfortable with that, though, because future patch authors who modify how tables or sequences work are not necessarily going to think carefully about whether their modifications invalidate that assumption.\n\nWell, if we enabled verify_heapam to check sequences, and then someone \nwere to change the sequence storage, a test that currently reports no \ncorruption would probably report corruption then?\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:22:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: verify_heapam for sequences?" }, { "msg_contents": "> On Aug 30, 2021, at 1:22 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 26.08.21 21:02, Mark Dilger wrote:\n>> I understand that sequences are really just heap tables, and since we already test corrupted heap tables, we could assume that we already have sufficient coverage. 
I'm not entirely comfortable with that, though, because future patch authors who modify how tables or sequences work are not necessarily going to think carefully about whether their modifications invalidate that assumption.\n> \n> Well, if we enabled verify_heapam to check sequences, and then someone were to change the sequence storage, a test that currently reports no corruption would probably report corruption then?\n\nIt might. More to the point, any corruption test we create now will be geared towards corrupting the page in a way that verify_heapam will detect, which will be detected whether or not the implementation of sequences changes. That kind of testing won't really do anything. \n\nPerhaps the best we can do is to create a sequence, testing both before and after exercising it a bit. We can't properly guess which exercises (nextval, setval, etc.) will cause corruption testing to fail for some unknown future implementation change, so we just try all the obvious stuff.\n\nThe attached patch changes both contrib/amcheck/ and src/bin/pg_amcheck/ to allow checking sequences. In both cases, the changes required are fairly minor, though they both entail some documentation changes.\n\nIt seems fairly straightforward that if a user calls verify_heapam() on a sequence, then the new behavior is what they want. It is not quite so clear for pg_amcheck.\n\nIn pg_amcheck, the command-line arguments allow discriminating between tables and indexes with materialized views quietly treated as tables (which, of course, they are.) In v14, sequences were not treated as tables, nor checked at all. In this new patch, sequences are quietly treated the same way as tables. 
By \"quietly\", I mean there are no command-line switches to specifically filter them in or out separately from filtering ordinary tables.\n\nThis is a user-facing behavioral change, and the user might not be imagining sequences specifically when specifying a table name pattern that matches both tables and sequences. Do you see any problem with that? It was already true that materialized views matching a table name pattern would be checked, so this new behavior is not entirely out of line with the old behavior.\n\nThe new behavior is documented, and since I'm updating the docs, I made the behavior with respect to materialized views more explicit.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 30 Aug 2021 12:00:41 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: verify_heapam for sequences?" }, { "msg_contents": "On 30.08.21 21:00, Mark Dilger wrote:\n> The attached patch changes both contrib/amcheck/ and src/bin/pg_amcheck/ to allow checking sequences. In both cases, the changes required are fairly minor, though they both entail some documentation changes.\n> \n> It seems fairly straightforward that if a user calls verify_heapam() on a sequence, then the new behavior is what they want. It is not quite so clear for pg_amcheck.\n> \n> In pg_amcheck, the command-line arguments allow discriminating between tables and indexes with materialized views quietly treated as tables (which, of course, they are.) In v14, sequences were not treated as tables, nor checked at all. In this new patch, sequences are quietly treated the same way as tables. 
By \"quietly\", I mean there are no command-line switches to specifically filter them in or out separately from filtering ordinary tables.\n> \n> This is a user-facing behavioral change, and the user might not be imagining sequences specifically when specifying a table name pattern that matches both tables and sequences. Do you see any problem with that? It was already true that materialized views matching a table name pattern would be checked, so this new behavior is not entirely out of line with the old behavior.\n> \n> The new behavior is documented, and since I'm updating the docs, I made the behavior with respect to materialized views more explicit.\n\ncommitted\n\n\n", "msg_date": "Tue, 28 Sep 2021 16:05:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: verify_heapam for sequences?" } ]
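The behavioral change described in this thread — sequences being admitted to the same checks as tables, materialized views, and TOAST tables — amounts to widening a relkind filter. A minimal sketch of that filtering decision follows; the single-letter relkind values are the ones PostgreSQL stores in pg_class.relkind, but the helper itself is illustrative and is not the actual amcheck code:

```python
# pg_class.relkind letters, per PostgreSQL's pg_class.h.
RELKIND_RELATION = 'r'
RELKIND_SEQUENCE = 'S'
RELKIND_MATVIEW = 'm'
RELKIND_TOASTVALUE = 't'

def heapam_checkable(relkind: str, allow_sequences: bool) -> bool:
    """Sketch: does verify_heapam-style checking apply to this relkind?"""
    checkable = {RELKIND_RELATION, RELKIND_MATVIEW, RELKIND_TOASTVALUE}
    if allow_sequences:
        checkable.add(RELKIND_SEQUENCE)
    return relkind in checkable

# Before the change sequences were rejected; afterwards they are treated
# like ordinary heap tables:
print(heapam_checkable(RELKIND_SEQUENCE, allow_sequences=False))  # → False
print(heapam_checkable(RELKIND_SEQUENCE, allow_sequences=True))   # → True
```

This also illustrates the pg_amcheck concern raised above: once sequences pass the filter, any table-name pattern that happens to match a sequence silently pulls it into the set of checked relations.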
[ { "msg_contents": "Hi,\n\nWhile porting postgresql-odbc v13 to AIX, we have found that (at least) 2 symbols are missing in libpq.a provided by the port of PostgreSQL v13.1 to AIX 7.1 by the BullFreeware project :\n\npg_char_to_encoding\npg_encoding_to_char\n\nLooking at details, it appears that these symbols are present in version 12.8 .\nThey were still missing in 13.4 .\nSomething has changed between v12 and v13.\n\nLooking at more details, the way libpq.a is built on AIX is different from the way libpq.so is built on Linux.\nOn Linux, the file \"exports.txt\" is used for building the list of symbols to be exported.\nOn AIX, the tool mkldexport.sh is used for dynamically generating the symbols to be exported.\nAnd it appears that 5 symbols (including the 2 above) are missing on AIX. Don't know why.\n\nA solution is to merge the two list of symbols to be exported in one list.\nThis is done by the patch attached here.\nThis patch does:\n - add a new 11-lines script ./src/backend/port/aix/mergeldexport.sh which makes the merge only if the file exports.txt does exist.\n - add the use of this script in: ./src/Makefile.shlib to be used for AIX only\n - add the definition of variable MERGELDEXPORT in: ./src/makefiles/Makefile.aix\nVery simple.\n\nI suggest to apply the change for v14 .\n\nRegards/Cordialement,\n\nTony Reix\n\ntony.reix@atos.net\n\nATOS / Bull SAS\nATOS Expert\nIBM-Bull Cooperation Project: Architect & OpenSource Technical Leader\n1, rue de Provence - 38432 ECHIROLLES - FRANCE\nwww.atos.net<https://mail.ad.bull.net/owa/redir.aspx?C=PvphmPvCZkGrAgHVnWGsdMcDKgzl_dEIsM6rX0g4u4v8V81YffzBGkWrtQeAXNovd3ttkJL8JIc.&URL=http%3a%2f%2fwww.atos.net%2f>", "msg_date": "Thu, 26 Aug 2021 12:49:01 +0000", "msg_from": "\"REIX, Tony\" <tony.reix@atos.net>", "msg_from_op": true, "msg_subject": "AIX: Symbols are missing in libpq.a" }, { "msg_contents": "On Thu, Aug 26, 2021 at 12:49:01PM +0000, REIX, Tony wrote:\n> While porting postgresql-odbc v13 to AIX, we have found 
that (at least) 2 symbols are missing in libpq.a provided by the port of PostgreSQL v13.1 to AIX 7.1 by the BullFreeware project :\n> \n> pg_char_to_encoding\n> pg_encoding_to_char\n> \n> Looking at details, it appears that these symbols are present in version 12.8 .\n> They were still missing in 13.4 .\n> Something has changed between v12 and v13.\n> \n> Looking at more details, the way libpq.a is built on AIX is different from the way libpq.so is built on Linux.\n> On Linux, the file \"exports.txt\" is used for building the list of symbols to be exported.\n> On AIX, the tool mkldexport.sh is used for dynamically generating the symbols to be exported.\n> And it appears that 5 symbols (including the 2 above) are missing on AIX. Don't know why.\n\nWould you study why it changed? If $(MKLDEXPORT) is no longer able to find\nall symbols, then we're likely to have problems in more libraries than libpq,\nincluding libraries that don't use a $(SHLIB_EXPORTS) file.\n\n\n", "msg_date": "Sun, 29 Aug 2021 08:46:41 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "Hi Noah,\n\nIt appears that the Makefile in src/interfaces/libpq has been modified between v12 and v13, removing encnames.o (and wchar.o) from the object files being used for creating the static libpq.a file which is used for creating the libpq.so and libpq.a AIX shared-library files.\nAnd, since pg_encoding_to_char() is defined in encnames.o , it has disappeared from the libpq.exp file .\n\n# diff postgresql-12.8/32bit/src/interfaces/libpq/Makefile postgresql-13.1/32bit/src/interfaces/libpq/Makefile | grep encnames\n< OBJS += encnames.o wchar.o\n< encnames.c wchar.c: % : $(backend_src)/utils/mb/%\n< rm -f encnames.c wchar.c\n\nRemember how the final libpq.a is built on AIX:\n\n rm -f libpq.a\n /usr/bin/ar crs libpq.a fe-auth-scram.o ...\n touch libpq.a\n ../../../src/backend/port/aix/mkldexport.sh libpq.a 
libpq.so.5 >libpq.exp\n /opt/freeware/bin/gcc -maix64 -O3 ..... -o libpq.so.5 libpq.a -Wl,-bE:libpq.exp .......\n rm -f libpq.a\n /usr/bin/ar crs libpq.a libpq.so.5\n\n 12.8 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\n\n 13.1 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o fe-secure-common.o fe-secure-openssl.o\n\n\nMaybe you can discover why these changes were made in v13 for src/interfaces/libpq/Makefile .\nAnd mkldexport.sh , unchanged between v12 and v13, works perfectly.\n\n\nRegards/Cordialement,\n\nTony Reix\n\ntony.reix@atos.net\n\nATOS / Bull SAS\nATOS Expert\nIBM-Bull Cooperation Project: Architect & OpenSource Technical Leader\n1, rue de Provence - 38432 ECHIROLLES - FRANCE\nwww.atos.net<https://mail.ad.bull.net/owa/redir.aspx?C=PvphmPvCZkGrAgHVnWGsdMcDKgzl_dEIsM6rX0g4u4v8V81YffzBGkWrtQeAXNovd3ttkJL8JIc.&URL=http%3a%2f%2fwww.atos.net%2f>\n________________________________\nDe : Noah Misch <noah@leadboat.com>\nEnvoyé : dimanche 29 août 2021 17:46\nÀ : REIX, Tony <tony.reix@atos.net>\nCc : pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nObjet : Re: AIX: Symbols are missing in libpq.a\n\nCaution! External email. 
Do not open attachments or click links, unless this email comes from a known sender and you know the content is safe.\n\nOn Thu, Aug 26, 2021 at 12:49:01PM +0000, REIX, Tony wrote:\n> While porting postgresql-odbc v13 to AIX, we have found that (at least) 2 symbols are missing in libpq.a provided by the port of PostgreSQL v13.1 to AIX 7.1 by the BullFreeware project :\n>\n> pg_char_to_encoding\n> pg_encoding_to_char\n>\n> Looking at details, it appears that these symbols are present in version 12.8 .\n> They were still missing in 13.4 .\n> Something has changed between v12 and v13.\n>\n> Looking at more details, the way libpq.a is built on AIX is different from the way libpq.so is built on Linux.\n> On Linux, the file \"exports.txt\" is used for building the list of symbols to be exported.\n> On AIX, the tool mkldexport.sh is used for dynamically generating the symbols to be exported.\n> And it appears that 5 symbols (including the 2 above) are missing on AIX. Don't know why.\n\nWould you study why it changed? 
If $(MKLDEXPORT) is no longer able to find\nall symbols, then we're likely to have problems in more libraries than libpq,\nincluding libraries that don't use a $(SHLIB_EXPORTS) file.", "msg_date": "Mon, 30 Aug 2021 14:33:32 +0000", "msg_from": "\"REIX, Tony\" <tony.reix@atos.net>", "msg_from_op": true, "msg_subject": "RE: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "On Mon, Aug 30, 2021 at 02:33:32PM +0000, REIX, Tony wrote:\n> It appears that the Makefile in src/interfaces/libpq has been modified between v12 and v13, removing encnames.o (and wchar.o) from the object files being used for creating the static libpq.a file which is used for creating the libpq.so and libpq.a AIX shared-library files.\n> And, since pg_encoding_to_char() is defined in encnames.o , it has disappeared from the libpq.exp file .\n> \n> # diff postgresql-12.8/32bit/src/interfaces/libpq/Makefile postgresql-13.1/32bit/src/interfaces/libpq/Makefile | grep encnames\n> < OBJS += encnames.o wchar.o\n> < encnames.c wchar.c: % : $(backend_src)/utils/mb/%\n> < rm -f encnames.c wchar.c\n> \n> Remember how the final libpq.a is built on AIX:\n> \n> rm -f libpq.a\n> /usr/bin/ar crs libpq.a fe-auth-scram.o ...\n> touch libpq.a\n> ../../../src/backend/port/aix/mkldexport.sh libpq.a libpq.so.5 >libpq.exp\n> /opt/freeware/bin/gcc -maix64 -O3 ..... 
-o libpq.so.5 libpq.a -Wl,-bE:libpq.exp .......\n> rm -f libpq.a\n> /usr/bin/ar crs libpq.a libpq.so.5\n> \n> 12.8 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n> pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\n> \n> 13.1 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n> pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o fe-secure-common.o fe-secure-openssl.o\n> \n> \n> Maybe you can discover why these changes were made in v13 for src/interfaces/libpq/Makefile .\n> And mkldexport.sh , unchanged between v12 and v13, works perfectly.\n\nThanks; that makes sense. Those files moved to libpgcommon_shlib.a. The\n$(MKLDEXPORT) call sees symbols in the .o files, but it doesn't see symbols\npulled in via libpg*.a. Let's fix this by having Makefile.shlib skip\n$(MKLDEXPORT) when a $(SHLIB_EXPORTS) file is available and have it instead\ncreate lib$(NAME).exp from $(SHLIB_EXPORTS). That's more correct than\nmerging, and it's simpler. Would you like to try that?\n\n\n", "msg_date": "Mon, 30 Aug 2021 07:52:56 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "Hi Noah,\n\nYes, trying to use the create lib$(NAME).exp from $(SHLIB_EXPORTS) when it exists was my first idea, too.\nHowever, I do not master (or I forgot) this kind of \"if....\" in a Makefile and I was unable to find a solution by reading Makefile manuals or by searching for a similar example. So, I did it in an easier (to me!) and quicker way: merge with a new command line in the Makefile rule.\nNow that we have a clear understanding of what is happenning, I may have a deeper look at a clean Makefile solution. 
However, if you know how to manage this, I would really appreciate some example. I'm asking my colleague too if he can help me here.\n\nThx\n\nTony\n\n________________________________\nDe : Noah Misch <noah@leadboat.com>\nEnvoyé : lundi 30 août 2021 16:52\nÀ : REIX, Tony <tony.reix@atos.net>\nCc : pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nObjet : Re: AIX: Symbols are missing in libpq.a\n\nCaution! External email. Do not open attachments or click links, unless this email comes from a known sender and you know the content is safe.\n\nOn Mon, Aug 30, 2021 at 02:33:32PM +0000, REIX, Tony wrote:\n> It appears that the Makefile in src/interfaces/libpq has been modified between v12 and v13, removing encnames.o (and wchar.o) from the object files being used for creating the static libpq.a file which is used for creating the libpq.so and libpq.a AIX shared-library files.\n> And, since pg_encoding_to_char() is defined in encnames.o , it has disappeared from the libpq.exp file .\n>\n> # diff postgresql-12.8/32bit/src/interfaces/libpq/Makefile postgresql-13.1/32bit/src/interfaces/libpq/Makefile | grep encnames\n> < OBJS += encnames.o wchar.o\n> < encnames.c wchar.c: % : $(backend_src)/utils/mb/%\n> < rm -f encnames.c wchar.c\n>\n> Remember how the final libpq.a is built on AIX:\n>\n> rm -f libpq.a\n> /usr/bin/ar crs libpq.a fe-auth-scram.o ...\n> touch libpq.a\n> ../../../src/backend/port/aix/mkldexport.sh libpq.a libpq.so.5 >libpq.exp\n> /opt/freeware/bin/gcc -maix64 -O3 ..... 
-o libpq.so.5 libpq.a -Wl,-bE:libpq.exp .......\n>  rm -f libpq.a\n>  /usr/bin/ar crs libpq.a libpq.so.5\n> \n>  12.8 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n>                                 pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o encnames.o wchar.o fe-secure-openssl.o fe-secure-common.o\n> \n>  13.1 : /usr/bin/ar crs libpq.a fe-auth.o fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol2.o fe-protocol3.o\n>                                 pqexpbuffer.o fe-secure.o legacy-pqsignal.o libpq-events.o                    fe-secure-common.o fe-secure-openssl.o\n> \n> \n> Maybe you can discover why these changes were made in v13 for src/interfaces/libpq/Makefile .\n> And  mkldexport.sh  , unchanged between v12 and v13, works perfectly.\n\nThanks; that makes sense.  Those files moved to libpgcommon_shlib.a.  The\n$(MKLDEXPORT) call sees symbols in the .o files, but it doesn't see symbols\npulled in via libpg*.a.  Let's fix this by having Makefile.shlib skip\n$(MKLDEXPORT) when a $(SHLIB_EXPORTS) file is available and have it instead\ncreate lib$(NAME).exp from $(SHLIB_EXPORTS).  That's more correct than\nmerging, and it's simpler.  Would you like to try that?", "msg_date": "Mon, 30 Aug 2021 15:35:23 +0000", "msg_from": "\"REIX, Tony\" <tony.reix@atos.net>", "msg_from_op": true, "msg_subject": "RE: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "On Mon, Aug 30, 2021 at 03:35:23PM +0000, REIX, Tony wrote:\n> Yes, trying to use the create lib$(NAME).exp from $(SHLIB_EXPORTS) when it exists was my first idea, too.\n> However, I do not master (or I forgot) this kind of \"if....\" in a Makefile and I was unable to find a solution by reading Makefile manuals or by searching for a similar example. So, I did it in an easier (to me!) 
and quicker way: merge with a new command line in the Makefile rule.\n> Now that we have a clear understanding of what is happenning, I may have a deeper look at a clean Makefile solution. However, if you know how to manage this, I would really appreciate some example. I'm asking my colleague too if he can help me here.\n\nHere's an example from elsewhere in Makefile.shlib:\n\n# If SHLIB_EXPORTS is set, the rules below will build a .def file from that.\n# Else we just use --export-all-symbols.\nifeq (,$(SHLIB_EXPORTS))\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n\t$(CC) $(CFLAGS) -shared -static-libgcc -o $@ $(OBJS) $(LDFLAGS) $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols -Wl,--out-implib=$(stlib)\nelse\nDLL_DEFFILE = lib$(NAME)dll.def\n\n$(shlib): $(OBJS) $(DLL_DEFFILE) | $(SHLIB_PREREQS)\n\t$(CC) $(CFLAGS) -shared -static-libgcc -o $@ $(OBJS) $(DLL_DEFFILE) $(LDFLAGS) $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--out-implib=$(stlib)\n\nUC_NAME = $(shell echo $(NAME) | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')\n\n$(DLL_DEFFILE): $(SHLIB_EXPORTS)\n\techo 'LIBRARY LIB$(UC_NAME).dll' >$@\n\techo 'EXPORTS' >>$@\n\tsed -e '/^#/d' -e 's/^\\(.*[ \t]\\)\\([0-9][0-9]*\\)/ \\1@ \\2/' $< >>$@\nendif\n\n\n", "msg_date": "Mon, 30 Aug 2021 20:33:11 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "Thanks for your help!\nThat wasn't so difficult, once I've refreshed my memory.\nHere is a new patch, using the export.txt whenever it does exist.\nI have tested it with v13.4 : it's OK.\nPatch for 14beta3 should be the same since there was no change for src/Makefile.shlib between v13 and v14.\n\nRegards/Cordialement,\n\nTony Reix\n\ntony.reix@atos.net\n\nATOS / Bull SAS\nATOS Expert\nIBM-Bull Cooperation Project: Architect & OpenSource Technical Leader\n1, rue de Provence - 38432 ECHIROLLES - 
FRANCE\nwww.atos.net<https://mail.ad.bull.net/owa/redir.aspx?C=PvphmPvCZkGrAgHVnWGsdMcDKgzl_dEIsM6rX0g4u4v8V81YffzBGkWrtQeAXNovd3ttkJL8JIc.&URL=http%3a%2f%2fwww.atos.net%2f>\n________________________________\nDe : Noah Misch <noah@leadboat.com>\nEnvoyé : mardi 31 août 2021 05:33\nÀ : REIX, Tony <tony.reix@atos.net>\nCc : pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; CHIGOT, CLEMENT <clement.chigot@atos.net>\nObjet : Re: AIX: Symbols are missing in libpq.a\n\nCaution! External email. Do not open attachments or click links, unless this email comes from a known sender and you know the content is safe.\n\nOn Mon, Aug 30, 2021 at 03:35:23PM +0000, REIX, Tony wrote:\n> Yes, trying to use the create lib$(NAME).exp from $(SHLIB_EXPORTS) when it exists was my first idea, too.\n> However, I do not master (or I forgot) this kind of \"if....\" in a Makefile and I was unable to find a solution by reading Makefile manuals or by searching for a similar example. So, I did it in an easier (to me!) and quicker way: merge with a new command line in the Makefile rule.\n> Now that we have a clear understanding of what is happenning, I may have a deeper look at a clean Makefile solution. However, if you know how to manage this, I would really appreciate some example. 
I'm asking my colleague too if he can help me here.\n\nHere's an example from elsewhere in Makefile.shlib:\n\n# If SHLIB_EXPORTS is set, the rules below will build a .def file from that.\n# Else we just use --export-all-symbols.\nifeq (,$(SHLIB_EXPORTS))\n$(shlib): $(OBJS) | $(SHLIB_PREREQS)\n $(CC) $(CFLAGS) -shared -static-libgcc -o $@ $(OBJS) $(LDFLAGS) $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--export-all-symbols -Wl,--out-implib=$(stlib)\nelse\nDLL_DEFFILE = lib$(NAME)dll.def\n\n$(shlib): $(OBJS) $(DLL_DEFFILE) | $(SHLIB_PREREQS)\n $(CC) $(CFLAGS) -shared -static-libgcc -o $@ $(OBJS) $(DLL_DEFFILE) $(LDFLAGS) $(LDFLAGS_SL) $(SHLIB_LINK) $(LIBS) -Wl,--out-implib=$(stlib)\n\nUC_NAME = $(shell echo $(NAME) | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')\n\n$(DLL_DEFFILE): $(SHLIB_EXPORTS)\n echo 'LIBRARY LIB$(UC_NAME).dll' >$@\n echo 'EXPORTS' >>$@\n sed -e '/^#/d' -e 's/^\\(.*[ ]\\)\\([0-9][0-9]*\\)/ \\1@ \\2/' $< >>$@\nendif", "msg_date": "Wed, 1 Sep 2021 08:59:57 +0000", "msg_from": "\"REIX, Tony\" <tony.reix@atos.net>", "msg_from_op": true, "msg_subject": "RE: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "A new patch, using exports.txt file instead of building the list of symbols to be exported and merging the 2 files, has been provided. This seems much better.", "msg_date": "Wed, 01 Sep 2021 10:29:32 +0000", "msg_from": "Tony Reix <tony.reix@atos.net>", "msg_from_op": false, "msg_subject": "Re: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "On Wed, Sep 01, 2021 at 08:59:57AM +0000, REIX, Tony wrote:\n> Here is a new patch, using the export.txt whenever it does exist.\n> I have tested it with v13.4 : it's OK.\n> Patch for 14beta3 should be the same since there was no change for src/Makefile.shlib between v13 and v14.\n\nThanks. This looks good. I'm attaching what I intend to push, which just\nadds a log message and some cosmetic changes compared to your version. 
Here\nare the missing symbols restored by the patch:\n\npg_encoding_to_char\npg_utf_mblen\npg_char_to_encoding\npg_valid_server_encoding\npg_valid_server_encoding_id\n\nI was ambivalent about whether to back-patch to v13 or to stop at v14, but I\ndecided that v13 should have this change. We should expect sad users when\nlibpq lacks a documented symbol. Complaints about loss of undocumented\nsymbols (e.g. pqParseInput3) are unlikely, and we're even less likely to have\nusers opposing reintroduction of long-documented symbols. An alternative\nwould be to have v13 merge the symbol lists, like your original proposal, so\nwe're not removing even undocumented symbols. I doubt applications have\naccrued dependencies on libpq-internal symbols in the year since v13 appeared,\nparticularly since those symbols are inaccessible on Linux. Our AIX export\nlists never included libpgport or libpgcommon symbols.", "msg_date": "Thu, 2 Sep 2021 19:58:08 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIX: Symbols are missing in libpq.a" }, { "msg_contents": "That seems good for me.\nThx !\n\nRegards/Cordialement,\n\nTony Reix\n\ntony.reix@atos.net\n\nATOS / Bull SAS\nATOS Expert\nIBM-Bull Cooperation Project: Architect & OpenSource Technical Leader\n1, rue de Provence - 38432 ECHIROLLES - FRANCE\nwww.atos.net<https://mail.ad.bull.net/owa/redir.aspx?C=PvphmPvCZkGrAgHVnWGsdMcDKgzl_dEIsM6rX0g4u4v8V81YffzBGkWrtQeAXNovd3ttkJL8JIc.&URL=http%3a%2f%2fwww.atos.net%2f>\n________________________________\nDe : Noah Misch <noah@leadboat.com>\nEnvoyé : vendredi 3 septembre 2021 04:58\nÀ : REIX, Tony <tony.reix@atos.net>\nCc : pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; CHIGOT, CLEMENT <clement.chigot@atos.net>\nObjet : Re: AIX: Symbols are missing in libpq.a\n\nCaution! External email. 
Do not open attachments or click links, unless this email comes from a known sender and you know the content is safe.\n\nOn Wed, Sep 01, 2021 at 08:59:57AM +0000, REIX, Tony wrote:\n> Here is a new patch, using the export.txt whenever it does exist.\n> I have tested it with v13.4 : it's OK.\n> Patch for 14beta3 should be the same since there was no change for src/Makefile.shlib between v13 and v14.\n\nThanks. This looks good. I'm attaching what I intend to push, which just\nadds a log message and some cosmetic changes compared to your version. Here\nare the missing symbols restored by the patch:\n\npg_encoding_to_char\npg_utf_mblen\npg_char_to_encoding\npg_valid_server_encoding\npg_valid_server_encoding_id\n\nI was ambivalent about whether to back-patch to v13 or to stop at v14, but I\ndecided that v13 should have this change. We should expect sad users when\nlibpq lacks a documented symbol. Complaints about loss of undocumented\nsymbols (e.g. pqParseInput3) are unlikely, and we're even less likely to have\nusers opposing reintroduction of long-documented symbols. An alternative\nwould be to have v13 merge the symbol lists, like your original proposal, so\nwe're not removing even undocumented symbols. I doubt applications have\naccrued dependencies on libpq-internal symbols in the year since v13 appeared,\nparticularly since those symbols are inaccessible on Linux. 
Our AIX export\nlists never included libpgport or libpgcommon symbols.", "msg_date": "Mon, 6 Sep 2021 07:31:45 +0000", "msg_from": "\"REIX, Tony\" <tony.reix@atos.net>", "msg_from_op": true, "msg_subject": "RE: AIX: Symbols are missing in libpq.a" } ]
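The Makefile machinery discussed above generates a linker export file by running a small `sed` program over `exports.txt`. As a rough illustration only, here is a hypothetical Python rendering of what that quoted `sed` rule computes (the function name and all inputs are invented for this sketch; nothing like it exists in the PostgreSQL tree):

```python
import re

def exports_to_def(export_lines, libname):
    """Build the body of a linker .def file from exports.txt-style lines,
    roughly mimicking the sed rule quoted in the thread: drop '#' comment
    lines, and turn a trailing ordinal number into an '@ <ordinal>'
    annotation."""
    out = ["LIBRARY LIB%s.dll" % libname.upper(), "EXPORTS"]
    for line in export_lines:
        if line.startswith("#"):      # analogue of  -e '/^#/d'
            continue
        m = re.match(r"(.*[ \t])([0-9]+)$", line)
        if m:                         # 'name 41'  ->  ' name @ 41'
            out.append(" %s@ %s" % (m.group(1), m.group(2)))
        else:
            out.append(line)
    return out
```

The `@ <ordinal>` form matches the `.def` syntax the quoted rule emits; under this sketch's assumptions, `exports_to_def(["pg_utf_mblen 42"], "pq")` yields a `LIBRARY LIBPQ.dll` header followed by the annotated symbol line.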
[ { "msg_contents": "I just noticed that the new heapam amcheck verification code can take\na very long time to respond to cancellations from pg_amcheck -- I saw\nthat it took over 2 minutes on a large database on my workstation.\n\nIt looks like we neglect to call CHECK_FOR_INTERRUPTS() anywhere\ninside verify_heapam.c. Is there any reason for this? Can't we just\nput a CHECK_FOR_INTERRUPTS() at the top of the outermost loop, inside\nverify_heapam()?\n\nNot sure if pg_amcheck itself is a factor here too -- didn't get that far.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 26 Aug 2021 14:38:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "amcheck/verify_heapam doesn't check for interrupts" }, { "msg_contents": "\n\n> On Aug 26, 2021, at 2:38 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> It looks like we neglect to call CHECK_FOR_INTERRUPTS() anywhere\n> inside verify_heapam.c. Is there any reason for this?\n\nNot any good one that I can see.\n\n> Can't we just\n> put a CHECK_FOR_INTERRUPTS() at the top of the outermost loop, inside\n> verify_heapam()?\n\nI expect we could.\n\n> Not sure if pg_amcheck itself is a factor here too -- didn't get that far.\n\nThat runs an event loop in the client over multiple checks (heap and/or btree) running in backends, just as reindexdb and vacuumdb do over parallel reindexes and vacuums running in backends. It should be just as safe to ctrl-c out of pg_amcheck as out of those two. 
They all three use fe_utils/cancel.h's setup_cancel_handler(), so I would expect modifying verify_heapam would be enough.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:24:08 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: amcheck/verify_heapam doesn't check for interrupts" }, { "msg_contents": "On Thu, Aug 26, 2021 at 4:24 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Aug 26, 2021, at 2:38 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> > It looks like we neglect to call CHECK_FOR_INTERRUPTS() anywhere\n> > inside verify_heapam.c. Is there any reason for this?\n>\n> Not any good one that I can see.\n\nSeems that way. Want to post a patch?\n\n> > Not sure if pg_amcheck itself is a factor here too -- didn't get that far.\n>\n> That runs an event loop in the client over multiple checks (heap and/or btree) running in backends, just as reindexdb and vacuumdb do over parallel reindexes and vacuums running in backends. It should be just as safe to ctrl-c out of pg_amcheck as out of those two. They all three use fe_utils/cancel.h's setup_cancel_handler(), so I would expect modifying verify_heapam would be enough.\n\nRight. I checked that out myself, after sending my email from earlier.\nWe don't have any problems when pg_amcheck happens to be verifying a\nB-Tree index -- verify_nbtree.c already has CHECK_FOR_INTERRUPTS() at\na few key points.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:39:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: amcheck/verify_heapam doesn't check for interrupts" }, { "msg_contents": "\n\n> On Aug 26, 2021, at 4:39 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> Not any good one that I can see.\n> \n> Seems that way. Want to post a patch?\n\nSure. 
I just posted another unrelated patch for amcheck this morning, so it seems a good day for it :)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 26 Aug 2021 16:41:07 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: amcheck/verify_heapam doesn't check for interrupts" }, { "msg_contents": "> On Aug 26, 2021, at 4:41 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> Seems that way. Want to post a patch?\n> \n> Sure.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Aug 2021 17:25:05 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: amcheck/verify_heapam doesn't check for interrupts" }, { "msg_contents": "Patch committed.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 26 Aug 2021 18:42:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: amcheck/verify_heapam doesn't check for interrupts" } ]
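The committed fix amounts to polling for interrupts once per outer-loop iteration, which is the general pattern for keeping a long-running scan cancellable. A loose Python toy model of that pattern (`scan_relation`, `check_block`, and `interrupt_pending` are invented stand-ins for illustration, not amcheck APIs):

```python
class QueryCanceled(Exception):
    """Stand-in for the error raised when a cancel interrupt is serviced."""

def scan_relation(nblocks, check_block, interrupt_pending):
    """Check every block, polling for a pending interrupt once per
    outer-loop iteration (the analogue of CHECK_FOR_INTERRUPTS() at the
    top of the loop), so a cancellation is honored promptly rather than
    only after the whole scan finishes."""
    corrupt_blocks = []
    for blkno in range(nblocks):
        if interrupt_pending():
            raise QueryCanceled("canceling statement due to user request")
        if not check_block(blkno):
            corrupt_blocks.append(blkno)
    return corrupt_blocks
```

The point of putting the check at the top of the outermost loop, as the thread settles on, is that the added cost per iteration is negligible while the worst-case response time to a cancel request drops from "the whole relation" to "one block".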
[ { "msg_contents": "Hi everyone,\n\nDo we already have a ${subject}? Otherwise I could offer myself.\nIf anyone agrees, this would be my first time as CFM and I would\nappreciate some help.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Thu, 26 Aug 2021 18:16:08 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "CFM for september commitfest" }, { "msg_contents": "> On 27 Aug 2021, at 01:16, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:\n\n> Do we already have a ${subject}? Otherwise I could offer myself.\n\nAFAICT from searching the archive there have been no other volunteers, and the\nCF starts quite soon so if you’re still up for it then thanks for picking up\nthe task!\n\n> If anyone agrees, this would be my first time as CFM and I would\n> appreciate some help.\n\nI would be happy to lend a hand, feel free to poke me off-list if you’d like to\nbounce ideas and thoughts etc.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 21:34:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: CFM for september commitfest" } ]
[ { "msg_contents": "Hi Hackers,\n\nThere is a known bug in the query rewriter: if a query that has a modifying\nCTE is re-written, the hasModifyingCTE flag is not getting set in the\nre-written query. This bug can result in the query being allowed to execute\nin parallel-mode, which results in an error.\n\nFor more details from a previous discussion about this, and a test case\nthat illustrates the issue, refer to:\nhttps://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com\n\n\nAs a proposal to fix this problem, I've attached a patch which:\n\n1) Copies the associated hasModifyingCTE and hasRecursive flags when the\nrewriter combines CTE lists (using Tom Lane's initial patch code seen in\n[1]). This flag copying is missing from the current Postgres code.\n2) Adds an error case to specifically disallow the case of applying an\nINSERT...SELECT rule action to a command with a data-modifying CTE. This is\nbecause in this case, the rewritten query will actually end up having a\ndata-modifying CTE that is not at the top level (as it is associated with\nthe SELECT subquery part), which is not actually allowed by Postgres if\nthat query is entered normally (as it's the parser that contains the\nerror-check to ensure that the modifying CTE is at the top level, so this\ncase avoids detection in the rewriter).\n3) Modifies the existing test case in with.sql that tests the merging of an\nouter CTE with a CTE in a rule action (as currently that rule action is\nusing INSERT...SELECT).\n\n\nFor the record, a workaround for this issue (at least addressing how\nhasModifyingCTE is meant to exclude the query from parallel execution) has\nbeen suggested in the past, but was not well received. 
It is the following\naddition to the max_parallel_hazard_walker() function:\n\n+ /*\n+ * ModifyingCTE expressions are treated as parallel-unsafe.\n+ *\n+ * XXX Normally, if the Query has a modifying CTE, the hasModifyingCTE\n+ * flag is set in the Query tree, and the query will be regarded as\n+ * parallel-unsafe. However, in some cases, a re-written query with a\n+ * modifying CTE does not have that flag set, due to a bug in the query\n+ * rewriter. The following else-if is a workaround for this bug, to detect\n+ * a modifying CTE in the query and regard it as parallel-unsafe. This\n+ * comment, and the else-if block immediately below, may be removed once\n+ * the bug in the query rewriter is fixed.\n+ */\n+ else if (IsA(node, CommonTableExpr))\n+ {\n+ CommonTableExpr *cte = (CommonTableExpr *) node;\n+ Query *ctequery = castNode(Query, cte->ctequery);\n+\n+ if (ctequery->commandType != CMD_SELECT)\n+ {\n+ context->max_hazard = PROPARALLEL_UNSAFE;\n+ return true;\n+ }\n+ }\n+\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Fri, 27 Aug 2021 14:55:34 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Fix erroneous parallel execution when modifying CTE is present in\n rewritten query" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> [ v1-0001-Propagate-CTE-property-flags-when-copying-a-CTE-list.patch ]\n\nPushed with a couple of adjustments:\n\n* I rewrote the comment, mostly so as to include an explanation of how\nthe error could be removed, in case anyone ever wants to go to the\ntrouble.\n\n* The existing test case can be fixed up without fundamentally changing\nwhat it's testing, by replacing INSERT...SELECT with INSERT...VALUES.\n(That should likely also be our first suggestion to any complainers.)\n\nThanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Sep 2021 12:11:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false,
"msg_subject": "Re: Fix erroneous parallel execution when modifying CTE is present in\n rewritten query" } ]
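The flag propagation behind the committed fix can be modeled in a few lines. This is only a toy sketch with plain dicts standing in for the parser's Query nodes (none of these Python names are the actual C structures or functions):

```python
def merge_cte_lists(outer_query, rewritten_query):
    """Prepend the outer query's CTEs to the rewritten rule action's CTE
    list and carry the per-query property flags along with them; without
    the flag copying, a data-modifying CTE could go unnoticed and the
    query could wrongly be considered for parallel execution."""
    rewritten_query["cteList"] = outer_query["cteList"] + rewritten_query["cteList"]
    rewritten_query["hasModifyingCTE"] = (rewritten_query["hasModifyingCTE"]
                                          or outer_query["hasModifyingCTE"])
    rewritten_query["hasRecursive"] = (rewritten_query["hasRecursive"]
                                       or outer_query["hasRecursive"])
    return rewritten_query
```

The moral, under this sketch's assumptions, is simply that copying a list is not enough when the containing node also caches summary flags about that list; the summaries must be merged too.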
[ { "msg_contents": "In certain cases like with OpenID Connect, a different scope is needed. This\npatch adds an additional variable `OAUTH2_SCOPE` that can be used to configure\nthe appropriate scope for the deployment. Already there are runtime checks to\nensure that the email claim is included in the user profile, so there is no need\nfor similar checks on the configuration. This commit does introduce a check in\nthe oauth2.py if a value for OAUTH2_SCOPE is set, to prevent a breaking change.\n\nRelated issue: https://redmine.postgresql.org/issues/6627\nOIDC docs: https://openid.net/specs/openid-connect-core-1_0.html#ScopeClaims\n\nI haven't yet tested this, as I'm still in the process of setting up a local\ndevelopment environment. I hope somebody else here can help me with the quality\nassurance.\n\nSigned-off-by: Nico Rikken <nico.rikken@alliander.com>\n---\n docs/en_US/oauth2.rst | 1 +\n web/config.py | 3 +++\n web/pgadmin/authenticate/oauth2.py | 6 +++++-\n web/pgadmin/browser/tests/test_oauth2_with_mocking.py | 1 +\n 4 files changed, 10 insertions(+), 1 deletion(-)\n\ndiff --git a/docs/en_US/oauth2.rst b/docs/en_US/oauth2.rst\nindex 8947b509e..4cc2628f5 100644\n--- a/docs/en_US/oauth2.rst\n+++ b/docs/en_US/oauth2.rst\n@@ -30,6 +30,7 @@ and modify the values for the following parameters:\n \"OAUTH2_AUTHORIZATION_URL\", \"Endpoint for user authorization\"\n \"OAUTH2_API_BASE_URL\", \"Oauth2 base URL endpoint to make requests simple, ex: *https://api.github.com/*\"\n \"OAUTH2_USERINFO_ENDPOINT\", \"User Endpoint, ex: *user* (for github) and *useinfo* (for google)\"\n+ \"OAUTH2_SCOPE\", \"Oauth scope, ex: 'openid email profile'. 
Note that an 'email' claim is required in the resulting profile.\"\n \"OAUTH2_ICON\", \"The Font-awesome icon to be placed on the oauth2 button, ex: fa-github\"\n \"OAUTH2_BUTTON_COLOR\", \"Oauth2 button color\"\n \"OAUTH2_AUTO_CREATE_USER\", \"Set the value to *True* if you want to automatically\ndiff --git a/web/config.py b/web/config.py\nindex d797e26f7..e932d17fc 100644\n--- a/web/config.py\n+++ b/web/config.py\n@@ -711,6 +711,9 @@ OAUTH2_CONFIG = [\n # Name of the Endpoint, ex: user\n 'OAUTH2_USERINFO_ENDPOINT': None,\n # Font-awesome icon, ex: fa-github\n+ 'OAUTH2_SCOPE': None,\n+ # Oauth scope, ex: 'openid email profile'\n+ # Note that an 'email' claim is required in the resulting profile\n 'OAUTH2_ICON': None,\n # UI button colour, ex: #0000ff\n 'OAUTH2_BUTTON_COLOR': None,\ndiff --git a/web/pgadmin/authenticate/oauth2.py b/web/pgadmin/authenticate/oauth2.py\nindex 91903165a..5e60d35dd 100644\n--- a/web/pgadmin/authenticate/oauth2.py\n+++ b/web/pgadmin/authenticate/oauth2.py\n@@ -104,7 +104,11 @@ class OAuth2Authentication(BaseAuthentication):\n access_token_url=oauth2_config['OAUTH2_TOKEN_URL'],\n authorize_url=oauth2_config['OAUTH2_AUTHORIZATION_URL'],\n api_base_url=oauth2_config['OAUTH2_API_BASE_URL'],\n- client_kwargs={'scope': 'email profile'}\n+ # Resort to previously hardcoded scope 'email profile' in case\n+ # no OAUTH2_SCOPE is provided. 
This prevents a breaking change.\n+ client_kwargs={'scope':\n+ oauth2_config.get('OAUTH2_SCOPE',\n+ 'email profile')}\n )\n \n def get_source_name(self):\ndiff --git a/web/pgadmin/browser/tests/test_oauth2_with_mocking.py b/web/pgadmin/browser/tests/test_oauth2_with_mocking.py\nindex b170720a8..71706ebe6 100644\n--- a/web/pgadmin/browser/tests/test_oauth2_with_mocking.py\n+++ b/web/pgadmin/browser/tests/test_oauth2_with_mocking.py\n@@ -58,6 +58,7 @@ class Oauth2LoginMockTestCase(BaseTestGenerator):\n 'https://github.com/login/oauth/authorize',\n 'OAUTH2_API_BASE_URL': 'https://api.github.com/',\n 'OAUTH2_USERINFO_ENDPOINT': 'user',\n+ 'OAUTH2_SCOPE': 'email profile',\n 'OAUTH2_ICON': 'fa-github',\n 'OAUTH2_BUTTON_COLOR': '#3253a8',\n }\n-- \n2.25.1\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 12:15:46 +0000", "msg_from": "Nico Rikken <nico.rikken@alliander.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add OAUTH2_SCOPE variable for scope configuration" }, { "msg_contents": "> On 27 Aug 2021, at 14:15, Nico Rikken <nico.rikken@alliander.com> wrote:\n\n> I haven't yet tested this, as I'm still in the process of setting up a local\n> development environment. I hope somebody else here can help me with the quality\n> assurance.\n\nThis is the mailinglist for the core postgres server, for pgadmin development\nplease see the below URL for an appropriate list:\n\n\thttps://www.pgadmin.org/support/list/\n\nI’m sure someone there will be able to help.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 27 Aug 2021 16:00:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add OAUTH2_SCOPE variable for scope configuration" } ]
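One Python detail worth keeping in mind when reading the `oauth2.py` hunk above: `dict.get(key, default)` falls back to the default only when the key is absent, not when the key is present with the value `None`. The `config.py` hunk in the same patch declares `'OAUTH2_SCOPE': None` as a default, so whether the fallback ever fires depends on how pgAdmin merges user config over those defaults, which this standalone demonstration does not model:

```python
# Key present but explicitly None, mirroring the "'OAUTH2_SCOPE': None"
# default that the config.py hunk above introduces.
oauth2_config = {"OAUTH2_SCOPE": None}

# dict.get's default only applies when the key is *missing*:
scope_via_get = oauth2_config.get("OAUTH2_SCOPE", "email profile")   # -> None

# An 'or' fallback also covers present-but-None (and empty-string) values:
scope_via_or = oauth2_config.get("OAUTH2_SCOPE") or "email profile"  # -> 'email profile'
```

If the merged config can carry the key with an explicit `None`, the `or`-style fallback is the pattern that preserves the previously hardcoded `'email profile'` behavior.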
[ { "msg_contents": "The SQL standard has been ambiguous about whether null values in\nunique constraints should be considered equal or not. Different\nimplementations have different behaviors. In the SQL:202x draft, this\nhas been formalized by making this implementation-defined and adding\nan option on unique constraint definitions UNIQUE [ NULLS [NOT]\nDISTINCT ] to choose a behavior explicitly.\n\nThis patch adds this option to PostgreSQL. The default behavior\nremains UNIQUE NULLS DISTINCT. Making this happen in the btree code\nis apparently pretty easy; most of the patch is just to carry the flag \naround to all the places that need it.\n\nThe CREATE UNIQUE INDEX syntax extension is not from the standard,\nit's my own invention.\n\n(I named all the internal flags, catalog columns, etc. in the\nnegative (\"nulls not distinct\") so that the default PostgreSQL\nbehavior is the default if the flag is false. But perhaps the double\nnegatives make some code harder to read.)", "msg_date": "Fri, 27 Aug 2021 14:38:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "UNIQUE null treatment option" }, { "msg_contents": "On Fri, Aug 27, 2021 at 3:38 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> In the SQL:202x draft, this\n> has been formalized by making this implementation-defined and adding\n> an option on unique constraint definitions UNIQUE [ NULLS [NOT]\n> DISTINCT ] to choose a behavior explicitly.\n>\n> The CREATE UNIQUE INDEX syntax extension is not from the standard,\n> it's my own invention.\n>\n\nFor the unique index syntax, should this be selectable per\ncolumn/expression, rather than for the entire index as a whole?\n\n\n.m", "msg_date": "Fri, 27 Aug 2021 15:44:37 +0300", "msg_from": "Marko Tiikkaja <marko@joh.to>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "On 27.08.21 14:44, Marko Tiikkaja wrote:\n> On Fri, Aug 27, 2021 at 3:38 PM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> In the SQL:202x draft, this\n> has been formalized by making this implementation-defined and adding\n> an option on unique constraint definitions UNIQUE [ NULLS [NOT]\n> DISTINCT ] to choose a behavior explicitly.\n> \n> The CREATE UNIQUE INDEX syntax extension is not from the standard,\n> it's my own invention.\n> \n> \n> For the unique index syntax, should this be selectable per \n> column/expression, rather than for the entire index as a whole?\n\nSemantically, this would be possible, but the bookkeeping to make it \nwork seems out of proportion with the utility. And you'd have the \nunique index syntax out of sync with the unique constraint syntax, which \nwould be pretty confusing.\n\n\n", "msg_date": "Tue, 7 Sep 2021 13:17:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "Here is a rebased version of this patch.\n\nOn 27.08.21 14:38, Peter Eisentraut wrote:\n> The SQL standard has been ambiguous about whether null values in\n> unique constraints should be considered equal or not.  Different\n> implementations have different behaviors.
In the SQL:202x draft, this\n> has been formalized by making this implementation-defined and adding\n> an option on unique constraint definitions UNIQUE [ NULLS [NOT]\n> DISTINCT ] to choose a behavior explicitly.\n> \n> This patch adds this option to PostgreSQL.  The default behavior\n> remains UNIQUE NULLS DISTINCT.  Making this happen in the btree code\n> is apparently pretty easy; most of the patch is just to carry the flag \n> around to all the places that need it.\n> \n> The CREATE UNIQUE INDEX syntax extension is not from the standard,\n> it's my own invention.\n> \n> (I named all the internal flags, catalog columns, etc. in the\n> negative (\"nulls not distinct\") so that the default PostgreSQL\n> behavior is the default if the flag is false.  But perhaps the double\n> negatives make some code harder to read.)", "msg_date": "Wed, 29 Dec 2021 11:06:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "+1 for committing this feature. Consider this useful.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 13 Jan 2022 17:21:06 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "I find this patch useful. It includes changes in documentation and tests.\nCode itself looks reasonable to me. Since, unique constraint check is done\nby corresponding btree index, it makes this feature implementation\nelegant and lightweight.\n\nIn my view, it is sufficient that heap relation can have different nulls\ntreatment in unique constraints for different unique columns.
For example:\nCREATE TABLE t (i INT UNIQUE NULLS DISTINCT, a INT UNIQUE NULLS NOT\nDISTINCT);\n\nAll the tests are running ok on Linux and MacOS X.\n\nAlthough, patch doesn't apply with default git apply options. Only with the\n\"three way merge\" option (-3). Consider rebasing it, please. Then, in my\nview, it can be \"Ready for committer\".\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 13 Jan 2022 18:51:18 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "On Wed, Dec 29, 2021 at 2:06 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Here is a rebased version of this patch.\n\nBTScanInsertData.anynullkeys already effectively means \"if the index\nis a unique index, then we don't actually need to go through\n_bt_check_unique(), or perform any other checkingunique steps\". This\nis really an instruction about what to do (or not do), based on the\nspecifics of the values for the insertion scan key plus the index\ndefinition.
In other words, the code in _bt_mkscankey() that sets up\nBTScanInsertData (an insertion scankey) was written with the exact\nrequirements of btinsert() in mind -- nothing more.\n\nI wonder if the logic for setting BTScanInsertData.anynullkeys inside\n_bt_mkscankey() is the place to put your test for\nrel->rd_index->indnullsnotdistinct -- not inside _bt_doinsert(). That\nwould probably necessitate renaming anynullkeys, but that's okay. This\nfeels more natural to me because a NULL key column in a NULLS NOT\nDISTINCT unique constraint is very similar to a NULL non-key column in\nan INCLUDE index, as far as our requirements go -- and so both cases\nshould probably be dealt with at the same point.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 13 Jan 2022 10:36:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "On Thu, Jan 13, 2022 at 10:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I wonder if the logic for setting BTScanInsertData.anynullkeys inside\n> _bt_mkscankey() is the place to put your test for\n> rel->rd_index->indnullsnotdistinct -- not inside _bt_doinsert(). That\n> would probably necessitate renaming anynullkeys, but that's okay. 
This\n> feels more natural to me because a NULL key column in a NULLS NOT\n> DISTINCT unique constraint is very similar to a NULL non-key column in\n> an INCLUDE index, as far as our requirements go -- and so both cases\n> should probably be dealt with at the same point.\n\nCorrection: I meant to write \"...a NULL key column in a NULLS DISTINCT\nunique constraint is very similar...\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 13 Jan 2022 10:47:09 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": ">\n> I wonder if the logic for setting BTScanInsertData.anynullkeys inside\n> _bt_mkscankey() is the place to put your test for\n> rel->rd_index->indnullsnotdistinct -- not inside _bt_doinsert(). That\n> would probably necessitate renaming anynullkeys, but that's okay. This\n> feels more natural to me because a NULL key column in a NULLS NOT\n> DISTINCT unique constraint is very similar to a NULL non-key column in\n> an INCLUDE index, as far as our requirements go -- and so both cases\n> should probably be dealt with at the same point.\n>\n\nA good point, indeed!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nI wonder if the logic for setting BTScanInsertData.anynullkeys inside\n_bt_mkscankey() is the place to put your test for\nrel->rd_index->indnullsnotdistinct -- not inside _bt_doinsert(). That\nwould probably necessitate renaming anynullkeys, but that's okay. 
This\nfeels more natural to me because a NULL key column in a NULLS NOT\nDISTINCT unique constraint is very similar to a NULL non-key column in\nan INCLUDE index, as far as our requirements go -- and so both cases\nshould probably be dealt with at the same point.A good point, indeed!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Thu, 13 Jan 2022 23:01:49 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "On 13.01.22 19:36, Peter Geoghegan wrote:\n> I wonder if the logic for setting BTScanInsertData.anynullkeys inside\n> _bt_mkscankey() is the place to put your test for\n> rel->rd_index->indnullsnotdistinct -- not inside _bt_doinsert(). That\n> would probably necessitate renaming anynullkeys, but that's okay. This\n> feels more natural to me because a NULL key column in a NULLS NOT\n> DISTINCT unique constraint is very similar to a NULL non-key column in\n> an INCLUDE index, as far as our requirements go -- and so both cases\n> should probably be dealt with at the same point.\n\nMakes sense. Here is an updated patch with this change.\n\nI didn't end up renaming anynullkeys. I came up with names like \n\"anyalwaysdistinctkeys\", but in the end that felt too abstract, and \nmoreover, it would require rewriting a bunch of code comments that refer \nto null values in this context. 
Since as you wrote, anynullkeys is just \na local concern between two functions, this slight inaccuracy is perhaps \nbetter than some highly general but unclear terminology.", "msg_date": "Mon, 24 Jan 2022 16:50:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "Since cfbot did failed with error, probably, unrelated to the patch itself\n(see https://cirrus-ci.com/task/5330150500859904)\nand repeated check did not start automatically, I reattach patch v3 to\nrestart cfbot on this patch.\n\n\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 25 Jan 2022 13:05:17 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": ">\n> Makes sense. Here is an updated patch with this change.\n>\n> I didn't end up renaming anynullkeys. I came up with names like\n> \"anyalwaysdistinctkeys\", but in the end that felt too abstract, and\n> moreover, it would require rewriting a bunch of code comments that refer\n> to null values in this context. Since as you wrote, anynullkeys is just\n> a local concern between two functions, this slight inaccuracy is perhaps\n> better than some highly general but unclear terminology.\n\nAgree with that. With the comment it is clear how it works.\n\nI've looked at the patch v3. It seems good enough for me. CFbot tests have\nalso come green.\nSuggest it is RFC now.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nMakes sense.  Here is an updated patch with this change.\n\nI didn't end up renaming anynullkeys.  I came up with names like \n\"anyalwaysdistinctkeys\", but in the end that felt too abstract, and \nmoreover, it would require rewriting a bunch of code comments that refer \nto null values in this context.  
Since as you wrote, anynullkeys is just \na local concern between two functions, this slight inaccuracy is perhaps \nbetter than some highly general but unclear terminology.Agree with that. With the comment it is clear how it works.I've looked at the patch v3. It seems good enough for me. CFbot tests have also come green.Suggest it is RFC now.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Fri, 28 Jan 2022 16:56:11 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UNIQUE null treatment option" }, { "msg_contents": "On 28.01.22 13:56, Pavel Borisov wrote:\n> Makes sense.  Here is an updated patch with this change.\n> \n> I didn't end up renaming anynullkeys.  I came up with names like\n> \"anyalwaysdistinctkeys\", but in the end that felt too abstract, and\n> moreover, it would require rewriting a bunch of code comments that\n> refer\n> to null values in this context.  Since as you wrote, anynullkeys is\n> just\n> a local concern between two functions, this slight inaccuracy is\n> perhaps\n> better than some highly general but unclear terminology.\n> \n> Agree with that. With the comment it is clear how it works.\n> \n> I've looked at the patch v3. It seems good enough for me. CFbot tests \n> have also come green.\n> Suggest it is RFC now.\n\nCommitted. Thanks.\n\n\n", "msg_date": "Thu, 3 Feb 2022 11:54:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: UNIQUE null treatment option" } ]
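The feature committed at the end of this thread adds a NULLS NOT DISTINCT option to unique constraints and indexes. A minimal psql-style sketch of the behavior being implemented — the table and names here are invented for illustration, not taken from the patch's regression tests:

```sql
CREATE TABLE t (a int UNIQUE NULLS NOT DISTINCT, b int);

INSERT INTO t VALUES (NULL, 1);  -- accepted: first NULL key
INSERT INTO t VALUES (NULL, 2);  -- fails: NULLs now compare as equal for
                                 -- uniqueness, so this is a duplicate key
```

This is also why the anynullkeys discussion matters: under the default NULLS DISTINCT behavior a NULL key can never cause a uniqueness violation, so the btree insert path can skip the check, while under NULLS NOT DISTINCT it cannot.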
[ { "msg_contents": "\nSomehow -hackers got left off the cc:\n\n\nOn 8/22/21 6:11 PM, Andrew Dunstan wrote:\n> On 8/22/21 5:59 PM, ldh@laurent-hasson.com wrote:\n>> > -----Original Message-----\n>> > From: Andrew Dunstan <andrew@dunslane.net>\n>> > Sent: Sunday, August 22, 2021 17:27\n>> > To: Tom Lane <tgl@sss.pgh.pa.us>; ldh@laurent-hasson.com\n>> > Cc: Justin Pryzby <pryzby@telsasoft.com>; Ranier Vilela\n>> > <ranier.vf@gmail.com>; pgsql-performance@postgresql.org\n>> > Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2\n>> > and 13.4\n>> > > > On 8/22/21 4:11 PM, Tom Lane wrote:\n>> > > \"ldh@laurent-hasson.com\" <ldh@laurent-hasson.com> writes:\n>> > >> I do have a Linux install of 13.3, and things work beautifully,\n>> so this is\n>> > definitely a Windows thing here that started in V12.\n>> > > It's good to have a box around it, but that's still a pretty\n>> large box\n>> > > :-(.\n>> > >\n>> > > I'm hoping that one of our Windows-using developers will see if they\n>> > > can reproduce this, and if so, try to bisect where it started.\n>> > > Not sure how to make further progress without that.\n>> > >\n>> > >\n>> > > > Can do. Assuming the assertion that it started in Release 12 is\n>> correct, I\n>> > should be able to find it by bisecting between the branch point for 12\n>> > and the tip of that branch. That's a little over 20 probes by my\n>> > calculation.\n>> > > > cheers\n>> > > > andrew\n>> > > > --\n>> > Andrew Dunstan\n>> > EDB: https://www.enterprisedb.com\n>>\n>>\n>> I tried it on 11.13 and 12.3. Is there a place where I could download\n>> 12.1 and 12.2 and test that? Is it worth it or you think you have all\n>> you need?\n>>\n>\n> I think I have everything I need.\n>\n>\n> Step one will be to verify that the difference exists between the branch\n> point and the tip of release 12. 
Once that's done it will be a matter of\n> probing until the commit at fault is identified.\n>\n\nOK, here's what we know.\n\n\nFirst, this apparently only affects build done with NLS. And in fact\neven on release 11 the performance is much better when run on a non-NLS\nbuild. So there's lots of work to do here.\n\n\nI can't yet pinpoint the place where it got disastrously bad, because I\ncan't build with VS2017 back past commit a169155453 on the REL_13_STABLE\nbranch. That commit fixed an issue with VS2015 and newer.\n\n\nThe machine that runs bowerbird has some older VS installations, and\nchoco has vs2013 packages, so there are opportunities to explore\nfurther. I'll get back to this in a couple of days.\n\n\nThanks to my EDB colleagues Sandeep Thakkar and Tushar Ahuja for helping\nto identify the cause of the issue.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:00:38 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Fwd: Big Performance drop of Exceptions in UDFs between V11.2 and\n 13.4" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> First, this apparently only affects build done with NLS. And in fact\n> even on release 11 the performance is much better when run on a non-NLS\n> build. So there's lots of work to do here.\n\nWow ... it would not have occurred to me to check that.\n\nTesting that angle using HEAD on Linux (RHEL8), here are times\nI see for the OP's slow query:\n\nNon-NLS build, C locale:\nTime: 12452.062 ms (00:12.452)\n\nNLS build, en_US.utf8 locale:\nTime: 13596.114 ms (00:13.596)\n\nNLS build, after SET lc_messages TO 'es_ES.utf8':\nTime: 15190.689 ms (00:15.191)\n\nSo there is a cost for translating the error messages on Linux too,\nbut it's not nearly as awful as on Windows. 
I wonder if this\nboils down to a performance bug in the particular gettext version\nyou're using?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Aug 2021 13:30:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: Big Performance drop of Exceptions in UDFs between V11.2 and\n 13.4" } ]
[ { "msg_contents": "I noticed that for \\dP+ since 1c5d9270e, regclass is written without\n\"pg_catalog.\" (Alvaro and I failed to notice it in 421a2c483, too).\n\n+ if (showNested || pattern)\n+ appendPQExpBuffer(&buf,\n+ \",\\n c3.oid::regclass as \\\"%s\\\"\",\n+ gettext_noop(\"Parent name\"));\n+\n+ if (showIndexes)\n+ appendPQExpBuffer(&buf,\n+ \",\\n c2.oid::regclass as \\\"%s\\\"\",\n+ gettext_noop(\"On table\"));\n\n\\dX is new in v14, and introduced the same issue in ad600bba0 (and modifies it\nbut not fixed in a4d75c86).\n\nI searched for issues like this, which finds all 4 errors with 1 false positive\nin psql/describe.c\n\n|time grep -wF \"$(grep -oE 'pg_catalog\\.[_[:alpha:]]+' src/bin/psql/describe.c |sed -r 's/^pg_catalog\\.//; /^(char|oid|text|trigger)$/d' )\" src/bin/psql/describe.c |grep -Ev 'pg_catalog\\.|^\t*[/ ]\\*'\n|#include \"catalog/pg_am.h\"\n| \",\\n inh.inhparent::regclass as \\\"%s\\\"\",\n| \",\\n c2.oid::regclass as \\\"%s\\\"\",\n| \" es.stxrelid::regclass) AS \\\"%s\\\"\",\n| \"es.stxrelid::regclass) AS \\\"%s\\\"\",\n\nTom informs me that this is not considered to be interesting as a security patch.\n\n-- \nJustin", "msg_date": "Fri, 27 Aug 2021 14:31:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "\\dP and \\dX use ::regclass without \"pg_catalog.\"" }, { "msg_contents": "On 2021-Aug-27, Justin Pryzby wrote:\n\n> I noticed that for \\dP+ since 1c5d9270e, regclass is written without\n> \"pg_catalog.\" (Alvaro and I failed to notice it in 421a2c483, too).\n\nOops, will fix shortly.\n\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 28 Aug 2021 08:57:32 -0400", "msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\"" } ]
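For context, psql's describe queries schema-qualify functions, operators, and casts so that they behave identically under any search_path setting. A hypothetical fragment showing the two spellings at issue (the surrounding query is simplified):

```sql
-- As emitted by \dP+ / \dX: the regclass lookup depends on search_path
SELECT c3.oid::regclass AS "Parent name" FROM pg_catalog.pg_class c3;

-- As written everywhere else in describe.c: pinned to the system catalog
SELECT c3.oid::pg_catalog.regclass AS "Parent name" FROM pg_catalog.pg_class c3;
```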
[ { "msg_contents": "Hi Everyone,\n\nI am Jianhui Lu, a student participating in GSoC 2021, and my project is\n'add monitoring to pg_stat_statements to pg_systat'. And following is a\nsummary of the work I have done during the past 10 weeks.\n\nThe first part is about adding new features to pg_systat. The first and\nmost important feature is adding monitoring of pg_stat_statement. It\nenables pg_systat to show statistics about query execution. The second\nfeature is adding monitoring of pg_stat_progress_copy. It's a new feature\nin pg14. And the third feature is monitoring of pg_buffercache which tracks\nthe data in the shared buffer cache.\n\nThe second part is about compatibility. Since pg_stat_progress_copy is a\nnew feature in pg14, we won't show this view when we connect to an older\nversion. And pg_stat_statements added new columns in pg13 and changed some\ncolumn names, the new columns also won't show in the older version.\n\nThe third part of my work is about documentation. I made an asciinema video\nto show how to use pg_systat. And in the right pane, I have shown the\ncorresponding table in postgreSQL. So users will know the relationship\nbetween pg_systat and in the database. Last but not least, I rewrote the\nreadme using rst and added more information including basic introduction\nand homepage.\n\nHere are links to my commit [1] and asciinema video[2].\n\nIt's really a wonderful experience to work with the community!\n\nBest Regards,\n\nLu\n\n[1] https://gitlab.com/trafalgar_lu/pg_systat/-/commits/main/\n\n[2] https://asciinema.org/a/427202", "msg_date": "Sat, 28 Aug 2021 16:39:37 +0800", "msg_from": "Trafalgar Ricardo Lu <trafalgarricardolu@gmail.com>", "msg_from_op": true, "msg_subject": "Summary of GSoC 2021" } ]
[ { "msg_contents": "Hi,\n\nIt seems there's a redundant assignment statement conn = NULL in\npg_receivewal's StreamLog function. Attaching a tiny patch herewith.\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 28 Aug 2021 17:40:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_receivewal: remove extra conn = NULL; in StreamLog" }, { "msg_contents": "> On 28 Aug 2021, at 14:10, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> It seems there's a redundant assignment statement conn = NULL in\n> pg_receivewal's StreamLog function. Attaching a tiny patch herewith.\n> Thoughts?\n\nAgreed, while harmless this is superfluous since conn is already set to NULL\nafter the PQfinish call a few lines up (which was added in a4205fa00d526c3).\nUnless there are objections I’ll apply this tomorrow or Monday.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 28 Aug 2021 21:57:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal: remove extra conn = NULL; in StreamLog" }, { "msg_contents": "On Sun, Aug 29, 2021 at 1:27 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Aug 2021, at 14:10, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > It seems there's a redundant assignment statement conn = NULL in\n> > pg_receivewal's StreamLog function. Attaching a tiny patch herewith.\n> > Thoughts?\n>\n> Agreed, while harmless this is superfluous since conn is already set to NULL\n> after the PQfinish call a few lines up (which was added in a4205fa00d526c3).\n> Unless there are objections I’ll apply this tomorrow or Monday.\n\nThanks for picking this up. 
I added this to CF to not lose it in the\nwild - https://commitfest.postgresql.org/34/3317/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Sep 2021 14:28:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal: remove extra conn = NULL; in StreamLog" }, { "msg_contents": "> On 1 Sep 2021, at 10:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Sun, Aug 29, 2021 at 1:27 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 28 Aug 2021, at 14:10, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>>> It seems there's a redundant assignment statement conn = NULL in\n>>> pg_receivewal's StreamLog function. Attaching a tiny patch herewith.\n>>> Thoughts?\n>> \n>> Agreed, while harmless this is superfluous since conn is already set to NULL\n>> after the PQfinish call a few lines up (which was added in a4205fa00d526c3).\n>> Unless there are objections I’ll apply this tomorrow or Monday.\n> \n> Thanks for picking this up. I added this to CF to not lose it in the\n> wild - https://commitfest.postgresql.org/34/3317/\n\nPushed to master, and entry closed. Thanks.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Sep 2021 13:19:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal: remove extra conn = NULL; in StreamLog" } ]
[ { "msg_contents": "commit a4d75c86bf15220df22de0a92c819ecef9db3849\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Fri Mar 26 23:22:01 2021 +0100\n\n Extended statistics on expressions\n\nThis commit added to psql/describe.c:\n\n+ /* statistics object name (qualified with namespace) */\n+ appendPQExpBuffer(&buf, \"\\\"%s\\\".\\\"%s\\\"\",\n+ PQgetvalue(result, i, 2),\n+ PQgetvalue(result, i, 3));\n\nEverywhere else the double quotes are around the whole \"schema.object\" rather\nthan both separately: \"schema\".\"object\". The code handling servers before v14\nhas the same thing, since:\n\ncommit bc085205c8a425fcaa54e27c6dcd83101130439b\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Fri May 12 14:59:23 2017 -0300\n\n Change CREATE STATISTICS syntax\n\nsrc/bin/psql/describe.c- /* statistics object name (qualified with namespace) */\nsrc/bin/psql/describe.c: appendPQExpBuffer(&buf, \"\\\"%s\\\".\\\"%s\\\" (\",\nsrc/bin/psql/describe.c- PQgetvalue(result, i, 2),\nsrc/bin/psql/describe.c- PQgetvalue(result, i, 3));\n\nThat seems to have been first added in the patch here, but AFAIT not\nspecifically discussed.\nhttps://www.postgresql.org/message-id/20170511221330.5akgbsoyx6wm4u34%40alvherre.pgsql\n\nAt the time the patch was commited, it was the only place that used\n\"schema\".\"object\":\n$ git show bc085205c8a425fcaa54e27c6dcd83101130439b:src/bin/psql/describe.c |grep '\\\\\"\\.\\\\\"'\n appendPQExpBuffer(&buf, \"\\\"%s\\\".\\\"%s\\\" (\",\n\nAnd it's still the only place, not just in describe.c, but the entire project.\n$ git grep -Fc '\\\"%s\\\".\\\"%s\\\"' '*.c'\nsrc/bin/psql/describe.c:2\n\nI actually don't like writing it as \"a.b\" since it doesn't work to copy+paste\nthat, because that means an object called \"a.b\" in the default schema.\nBut I think for consistency it should be done the same here as everywhere else.\n\nI noticed that Peter E recently changed amcheck in the direction of consistency:\n| 4279e5bc8c pg_amcheck: Message 
style and structuring improvements\n\nI propose to change extended stats objects to be shown the same as everywhere\nelse, with double quotes around the whole %s.%s:\n\t$ git grep '\\\\\"%s\\.%s\\\\\"' '*.c' |wc -l\n\t126\n\nThis affects 9 lines of output in regression tests.\n\nNote that check constraints and indexes have the same schema as their table, so\n\\d doesn't show a schema at all, and quotes the name of the object. That\ndistinction may be relevant to how stats objects ended up being quoted like\nthis.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 28 Aug 2021 13:16:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "extended stats objects are the only thing written like \"%s\".\"%s\"" }, { "msg_contents": "On 2021-Aug-28, Justin Pryzby wrote:\n\n> Note that check constraints and indexes have the same schema as their table, so\n> \\d doesn't show a schema at all, and quotes the name of the object. That\n> distinction may be relevant to how stats objects ended up being quoted like\n> this.\n\nYeah, this was the rationale for including the schema name here.\n\nI think using \"%s.%s\" as is done everywhere else is pretty much\npointless. It's not usable as an object identifier, since you have to\nmake sure to remove the existing quotes, and unless the names work\nwithout quotes, you have to add different quotes. 
So it looks «nice»\nbut it's functionally more work.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Sat, 28 Aug 2021 14:25:21 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: extended stats objects are the only thing written like \"%s\".\"%s\"" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I think using \"%s.%s\" as is done everywhere else is pretty much\n> pointless. It's not usable as an object identifier, since you have to\n> make sure to remove the existing quotes, and unless the names work\n> without quotes, you have to add different quotes. So it looks «nice»\n> but it's functionally more work.\n\nI think what we are doing there is following the message style\nguideline that says to put double quotes around inserted strings.\nIn this case schema.object (as a whole) is the inserted string.\nPeople often confuse this with SQL double-quoted identifiers, but it\nhas nothing whatsoever to do with SQL's rules. (It's easier to make\nsense of this rule in translations where the quote marks are not\nASCII double-quotes ... 
like your example with «nice».)\n\nIn short: Justin is right, this should not be done this way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Aug 2021 15:48:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: extended stats objects are the only thing written like \"%s\".\"%s\"" }, { "msg_contents": "On 2021-Aug-28, Tom Lane wrote:\n\n> I think what we are doing there is following the message style\n> guideline that says to put double quotes around inserted strings.\n> In this case schema.object (as a whole) is the inserted string.\n> People often confuse this with SQL double-quoted identifiers, but it\n> has nothing whatsoever to do with SQL's rules. (It's easier to make\n> sense of this rule in translations where the quote marks are not\n> ASCII double-quotes ... like your example with «nice».)\n> \n> In short: Justin is right, this should not be done this way.\n\nI don't agree with the way we're applying the message guidelines here,\nbut since this is the only place where we do this, I've changed it to\nthe idiomatic way of quoting names.\n\nI only backpatched to 14 in order to avoid messing with established\noutput format in released branches, but if people really hate the extra\nquotes with a passion I'm not opposed to backpatching further.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Mon, 30 Aug 2021 14:06:02 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: extended stats objects are the only thing written like \"%s\".\"%s\"" }, { "msg_contents": "On 30.08.21 20:06, Alvaro Herrera wrote:\n> On 2021-Aug-28, Tom Lane wrote:\n> \n>> I think what we are doing there is following the message style\n>> guideline that says to put double quotes around inserted strings.\n>> In this case schema.object (as a whole) is the inserted string.\n>> 
People often confuse this with SQL double-quoted identifiers, but it\n>> has nothing whatsoever to do with SQL's rules. (It's easier to make\n>> sense of this rule in translations where the quote marks are not\n>> ASCII double-quotes ... like your example with «nice».)\n>>\n>> In short: Justin is right, this should not be done this way.\n> \n> I don't agree with the way we're applying the message guidelines here,\n> but since this is the only place where we do this, I've changed it to\n> the idiomatic way of quoting names.\n\nI agree that the current situation is not satisfactory. We should think \nabout extending the guidelines to cover this.\n\nNote that it's not necessarily enough to say, leave \\\"%s\\\".\\\"%s\\\" \nuntranslated. For example, this could create inconsistencies with \nanalogous messages that don't include a schema qualification. Also, \nunless we are being careful about escaping double-quoted strings inside \nthe substituted strings, it wouldn't be entirely correct either.\n\nA comprehensive approach across the tree would be preferable, perhaps \nwith additional APIs to support it. Also, the question when schema \nqualifications should be printed or not should be answered.\n\n\n", "msg_date": "Wed, 22 Sep 2021 11:39:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: extended stats objects are the only thing written like \"%s\".\"%s\"" } ]
[ { "msg_contents": "Hi,\n\nIt seems we have inconsistent behavior with the implementation of\n\"GENERATED BY DEFAULT AS IDENTITY\" constraint on a table column.\nHere we are not allowing(internally not replacing NULL with IDENTITY\nDEFAULT) the \"NULL\" insertion into the table column.\n\npostgres=# CREATE TABLE TEST_TBL_1(ID INTEGER GENERATED BY DEFAULT AS\nIDENTITY ,ID1 INTEGER);\nCREATE TABLE\npostgres=# insert into TEST_TBL_1 values (NULL, 10);\nERROR: null value in column \"id\" of relation \"test_tbl_1\" violates\nnot-null constraint\nDETAIL: Failing row contains (null, 10).\npostgres=# insert into TEST_TBL_1(id1) values ( 10);\nINSERT 0 1\n\n\nHowever this is allowed on normal default column:\npostgres=# create table TEST_TBL_2 (ID INTEGER DEFAULT 10 ,ID1 INTEGER);\nCREATE TABLE\npostgres=# insert into TEST_TBL_2 values (NULL, 10);\nINSERT 0 1\npostgres=# insert into TEST_TBL_2 (id1) values (20);\nINSERT 0 1\n\n\nif I understand it correctly, the use-case for supporting \"GENERATED BY\nDEFAULT AS IDENTITY\" is to have an inbuilt sequence generated DEFAULT value\nfor a column.\n\nIMHO below query should replace \"NULL\" value for ID column with the\nGENERATED IDENTITY value (should insert 1,10 for ID and ID1 respectively in\nbelow's example), similar to what we expect when we have DEFAULT constraint\non the column.\n\ninsert into TEST_TBL_1 values (NULL, 10);\n\nTO Support the above query ORACLE is having \"GENERATED BY DEFAULT ON NULL\nAS IDENTITY\" syntax. 
We can also think on similar lines and have similar\nimplementation\nor allow it under \"GENERATED BY DEFAULT AS IDENTITY\" itself.\n\nAny reason for disallowing NULL insertion?\n\nThoughts?\n\nThanks,\nHimanshu", "msg_date": "Sun, 29 Aug 2021 13:36:56 +0530", "msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>", "msg_from_op": true, "msg_subject": "inconsistent behavior with \"GENERATED BY DEFAULT AS IDENTITY\"" },
{ "msg_contents": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com> writes:\n> IMHO below query should replace \"NULL\" value for ID column with the\n> GENERATED IDENTITY value (should insert 1,10 for ID and ID1 respectively in\n> below's example), similar to what we expect when we have DEFAULT constraint\n> on the column.\n\nWhy? Ordinary DEFAULT clauses do not act that way; if you specify NULL\n(or any other value) that is what you get. If you want the default\nvalue, you can omit the column, or write DEFAULT.\n\n> Any reason for disallowing NULL insertion?\n\nConsistency and standards compliance.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Aug 2021 09:40:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inconsistent behavior with \"GENERATED BY DEFAULT AS IDENTITY\"" },
{ "msg_contents": "ok, understood.\n\nThanks Tom.\n\nRegards,\nHimanshu\n\nOn Sun, Aug 29, 2021 at 7:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com> writes:\n> > IMHO below query should replace \"NULL\" value for ID column with the\n> > GENERATED IDENTITY value (should insert 1,10 for ID and ID1 respectively\n> in\n> > below's example), similar to what we expect when we have DEFAULT\n> constraint\n> > on the column.\n>\n> Why? Ordinary DEFAULT clauses do not act that way; if you specify NULL\n> (or any other value) that is what you get. 
If you want the default\n> value, you can omit the column, or write DEFAULT.\n>\n> > Any reason for disallowing NULL insertion?\n>\n> Consistency and standards compliance.\n>\n> regards, tom lane\n>", "msg_date": "Sun, 29 Aug 2021 21:41:59 +0530", "msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: inconsistent behavior with \"GENERATED BY DEFAULT AS IDENTITY\"" } ]
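Restating the resolution against the table from the first message: an explicit NULL is stored literally (and here rejected by the identity column's implicit NOT NULL constraint), while the generated value is obtained by omitting the column or writing the DEFAULT keyword:

```sql
INSERT INTO test_tbl_1 VALUES (NULL, 10);     -- ERROR: violates not-null
INSERT INTO test_tbl_1 (id1) VALUES (10);     -- id filled from the sequence
INSERT INTO test_tbl_1 VALUES (DEFAULT, 10);  -- same, spelled explicitly
```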
[ { "msg_contents": "Good day.\n\nCurrent checksum is not calculated in intended way and\nhas the flaw.\n\nSingle round function is written as:\n\n#define CHECKSUM_COMP(checksum, value) do {\\\n uint32 __tmp = (checksum) ^ (value);\\\n (checksum) = __tmp * FNV_PRIME ^ (__tmp >> 17);\\\n} while (0)\n\nAnd looks like it was intended to be\n (checksum) = (__tmp * FNV_PRIME) ^ (__tmp >> 17);\n\nAt least original Florian Pflug suggestion were correctly written\nin this way (but with shift 1):\nhttps://www.postgresql.org/message-id/99343716-5F5A-45C8-B2F6-74B9BA357396%40phlo.org\n\nBut due to C operation precedence it is actually calculated as:\n (checksum) = __tmp * (FNV_PRIME ^ (__tmp >> 17));\n\nIt has more longer collision chains and worse: it has 256 pathological\nresult slots of shape 0xXX000000 each has 517 collisions in average.\nTotally 132352 __tmp values are collided into this 256 slots.\n\nThat is happens due to if top 16 bits are happens to be\n0x0326 or 0x0327, then `FNV_PRIME ^ (__tmp >> 17) == 0x1000000`,\nand then `__tmp * 0x1000000` keeps only lower 8 bits. That means,\n9 bits masked by 0x0001ff00 are completely lost.\n\nmix(0x03260001) == mix(0x03260101) = mix(0x0327aa01) == mix(0x0327ff01)\n(where mix is a `__tmp` to `checksum` transformation)\n\nregards,\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\nPS. 
Test program in Crystal language is attached and output for current\nCHECKSUM_COMP implementation and \"correct\" (intended).\nExcuse me for Crystal, it is prettier to write for small compiled \nscripts.", "msg_date": "Mon, 30 Aug 2021 03:18:33 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "jff: checksum algorithm is not as intended" }, { "msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> Single round function is written as:\n\n> #define CHECKSUM_COMP(checksum, value) do {\\\n> uint32 __tmp = (checksum) ^ (value);\\\n> (checksum) = __tmp * FNV_PRIME ^ (__tmp >> 17);\\\n> } while (0)\n\n> And looks like it was intended to be\n> (checksum) = (__tmp * FNV_PRIME) ^ (__tmp >> 17);\n\nI'm not following your point? Multiplication binds tighter than XOR\nin C, see e.g.\n\nhttps://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Operator_precedence\n\nSo those sure look equivalent from here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 00:21:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: jff: checksum algorithm is not as intended" } ]
[ { "msg_contents": "Hi,\n\nI see a couple of improvements to receivelog.c and pg_receivewal.c:\n\n1) ReceiveXlogStream in receivelog.c has a duplicate code to execute\nIDENTIFY_SYSTEM replication command on the server which can be\nreplaced with RunIdentifySystem().\n2) bool returning ReceiveXlogStream() in pg_receivewal.c is being used\nwithout type-casting its return return value which might generate a\nwarning with some compilers. This kind of type-casting is more common\nin other places in the postgres code base.\n\nAttaching a patch to fix the above. Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 30 Aug 2021 11:00:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "replace IDENTIFY_SYSTEM code in receivelog.c with RunIdentifySystem()" }, { "msg_contents": "On Mon, Aug 30, 2021 at 11:00:40AM +0530, Bharath Rupireddy wrote:\n> 1) ReceiveXlogStream in receivelog.c has a duplicate code to execute\n> IDENTIFY_SYSTEM replication command on the server which can be\n> replaced with RunIdentifySystem().\n\nI have looked at that.\n\n> 2) bool returning ReceiveXlogStream() in pg_receivewal.c is being used\n> without type-casting its return return value which might generate a\n> warning with some compilers. This kind of type-casting is more common\n> in other places in the postgres code base.\n\nThis is usually a pattern used for Coverity, to hint it that we don't\ncare about the error code in a given code path. IMV, that's not\nsomething to bother about for older code.\n\n> Attaching a patch to fix the above. Thoughts?\n\nThe original refactoring of IDENTIFY_SYSTEM is from 0c013e08, and it\nfeels like I just missed ReceiveXlogStream(). 
What you have here is\nan improvement.\n\n+ if (!RunIdentifySystem(conn, &sysidentifier, &servertli, NULL, NULL))\n {\n- pg_log_error(\"could not send replication command \\\"%s\\\": %s\",\n- \"IDENTIFY_SYSTEM\", PQerrorMessage(conn));\n- PQclear(res);\n+ pg_free(sysidentifier);\n return false;\n\nHere you want to free sysidentifier only if it has been set, and\nRunIdentifySystem() may fail before doing that, so you should assign\nNULL to sysidentifier when it is declared.\n--\nMichael", "msg_date": "Mon, 30 Aug 2021 15:29:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: replace IDENTIFY_SYSTEM code in receivelog.c with\n RunIdentifySystem()" }, { "msg_contents": "On Mon, Aug 30, 2021 at 11:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > 2) bool-returning ReceiveXlogStream() in pg_receivewal.c is being used\n> > without type-casting its return value which might generate a\n> > warning with some compilers. This kind of type-casting is more common\n> > in other places in the postgres code base.\n>\n> This is usually a pattern used for Coverity, to hint that we don't\n> care about the error code in a given code path. IMV, that's not\n> something to bother about for older code.\n\nShouldn't we fix it in the master branch to keep the code in sync with\nother places where we usually follow that kind of type-casting? 
IMO,\nwe should just make that change, because it isn't a major change and we\naren't going to back patch it.\n\n> + if (!RunIdentifySystem(conn, &sysidentifier, &servertli, NULL, NULL))\n> {\n> - pg_log_error(\"could not send replication command \\\"%s\\\": %s\",\n> - \"IDENTIFY_SYSTEM\", PQerrorMessage(conn));\n> - PQclear(res);\n> + pg_free(sysidentifier);\n> return false;\n>\n> Here you want to free sysidentifier only if it has been set, and\n> RunIdentifySystem() may fail before doing that, so you should assign\n> NULL to sysidentifier when it is declared.\n\nIsn't the pg_free going to take care of sysidentifier being null?\n if (ptr != NULL)\n free(ptr);\n\nDo we still need this?\nif (sysidentifier)\n pg_free(sysidentifier);\n\nIMO, let the v1 patch be as-is and not do the above.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 30 Aug 2021 13:01:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replace IDENTIFY_SYSTEM code in receivelog.c with\n RunIdentifySystem()" }, { "msg_contents": "On Mon, Aug 30, 2021 at 01:01:16PM +0530, Bharath Rupireddy wrote:\n> Shouldn't we fix it in the master branch to keep the code in sync with\n> other places where we usually follow that kind of type-casting? IMO,\n> we should just make that change, because it isn't a major change and we\n> aren't going to back patch it.\n\nOne thing is that this creates conflicts with back-branches, and\nthat's always annoying. 
I'd be fine with changing new code for that,\nthough.\n\n>> Here you want to free sysidentifier only if it has been set, and\n>> RunIdentifySystem() may fail before doing that, so you should assign\n>> NULL to sysidentifier when it is declared.\n>\n> Isn't the pg_free going to take care of sysidentifier being null?\n> if (ptr != NULL)\n> free(ptr);\n\nIt would, but you don't initialize the variable to begin with, so you\nmay finish with freeing a pointer that points to nothing, and crash\nany code using ReceiveXlogStream() while some code paths should be\nable to handle retries. I guess that compilers would not complain\nhere because they cannot understand that RunIdentifySystem() may not\nset up the variable before this function returns. I have fixed this\ninitialization, and committed the patch. Note that there is only one\nother code path using RunIdentifySystem() with the system ID as of\npg_basebackup.c, but this one would just exit() if we fail to run the\ncommand, so we don't need to care about freeing the system ID there.\n--\nMichael", "msg_date": "Tue, 31 Aug 2021 10:21:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: replace IDENTIFY_SYSTEM code in receivelog.c with\n RunIdentifySystem()" } ]
[ { "msg_contents": "Hi,\n\nI see there's scope to do the following improvements to pg_receivewal.c:\n\n1) Fetch the server system identifier in the first RunIdentifySystem\ncall and use it to identify (via pg_receivewal's ReceiveXlogStream) any\nunexpected changes that may happen in the server while pg_receivewal\nis connected to it. This can be helpful in scenarios when\npg_receivewal tries to reconnect to the server (see the code around\npg_usleep with RECONNECT_SLEEP_TIME) but something unexpected has\nhappened in the server that changed its system identifier. Once\npg_receivewal establishes the connection to the server again, then\nReceiveXlogStream has a code chunk to compare the system identifier\nthat we received in the initial connection.\n2) Move the RunIdentifySystem to identify timeline id and start LSN\nfrom the server only if pg_receivewal failed to get them from\nFindStreamingStart. This way, an extra IDENTIFY_SYSTEM command is\navoided.\n3) Place the \"replication connection shouldn't have any database name\nassociated\" error check right after RunIdentifySystem so that we can\navoid fetching the wal segment size with RetrieveWalSegSize if at all we\nwere to fail with that error. This change is similar to what\npg_recvlogical.c does.\n4) Move the RetrieveWalSegSize to just before pg_receivewal.c enters the\nmain loop to get the wal from the server. This avoids an unnecessary\nquery for pg_receivewal with \"--create-slot\" or \"--drop-slot\".\n5) Having an assertion after pg_receivewal has done a good amount of\nwork to find the start timeline and LSN might be helpful:\nAssert(stream.timeline != 0 && stream.startpos != InvalidXLogRecPtr);\n\nAttaching a patch that takes care of the above improvements. 
Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 30 Aug 2021 13:02:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "improve pg_receivewal code" }, { "msg_contents": "On Mon, Aug 30, 2021 at 1:02 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I see there's a scope to do following improvements to pg_receivewal.c:\n>\n> 1) Fetch the server system identifier in the first RunIdentifySystem\n> call and use it to identify(via pg_receivewal's ReceiveXlogStream) any\n> unexpected changes that may happen in the server while pg_receivewal\n> is connected to it. This can be helpful in scenarios when\n> pg_receivewal tries to reconnect to the server (see the code around\n> pg_usleep with RECONNECT_SLEEP_TIME) but something unexpected has\n> happnend in the server that changed the its system identifier. Once\n> the pg_receivewal establishes the conenction to server again, then the\n> ReceiveXlogStream has a code chunk to compare the system identifier\n> that we received in the initial connection.\n> 2) Move the RunIdentifySystem to identify timeline id and start LSN\n> from the server only if the pg_receivewal failed to get them from\n> FindStreamingStart. This way, an extra IDENTIFY_SYSTEM command is\n> avoided.\n> 3) Place the \"replication connetion shouldn't have any database name\n> associated\" error check right after RunIdentifySystem so that we can\n> avoid fetching wal segment size with RetrieveWalSegSize if at all we\n> were to fail with that error. This change is similar to what\n> pg_recvlogical.c does.\n> 4) Move the RetrieveWalSegSize to just before pg_receivewal.c enters\n> main loop to get the wal from the server. 
This avoids an unnecessary\n> query for pg_receivewal with \"--create-slot\" or \"--drop-slot\".\n> 5) Have an assertion after the pg_receivewal done a good amount of\n> work to find start timeline and LSN might be helpful:\n> Assert(stream.timeline != 0 && stream.startpos != InvalidXLogRecPtr);\n>\n> Attaching a patch that does take care of above improvements. Thoughts?\n\nHere's the CF entry - https://commitfest.postgresql.org/34/3315/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Sep 2021 09:20:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improve pg_receivewal code" }, { "msg_contents": "On Monday 30 August 2021, 09:32:05 CEST, Bharath Rupireddy wrote:\n> Hi,\n> \n> I see there's a scope to do following improvements to pg_receivewal.c:\n\nThank you Bharath for this patch.\n\n> \n> 1) Fetch the server system identifier in the first RunIdentifySystem\n> call and use it to identify(via pg_receivewal's ReceiveXlogStream) any\n> unexpected changes that may happen in the server while pg_receivewal\n> is connected to it. This can be helpful in scenarios when\n> pg_receivewal tries to reconnect to the server (see the code around\n> pg_usleep with RECONNECT_SLEEP_TIME) but something unexpected has\n> happnend in the server that changed the its system identifier. Once\n> the pg_receivewal establishes the conenction to server again, then the\n> ReceiveXlogStream has a code chunk to compare the system identifier\n> that we received in the initial connection.\n\nI'm not sure what kind of problem could be happening here: if you were \nsomewhat routed to another server? Or if we \"switched\" the cluster listening \non that port? \n\n> 2) Move the RunIdentifySystem to identify timeline id and start LSN\n> from the server only if the pg_receivewal failed to get them from\n> FindStreamingStart. 
This way, an extra IDENTIFY_SYSTEM command is\n> avoided.\n\nThat makes sense, even if we add another IDENTIFY_SYSTEM to check against the \none set in the first place.\n\n> 3) Place the \"replication connetion shouldn't have any database name\n> associated\" error check right after RunIdentifySystem so that we can\n> avoid fetching wal segment size with RetrieveWalSegSize if at all we\n> were to fail with that error. This change is similar to what\n> pg_recvlogical.c does.\n\nMakes sense.\n\n> 4) Move the RetrieveWalSegSize to just before pg_receivewal.c enters\n> main loop to get the wal from the server. This avoids an unnecessary\n> query for pg_receivewal with \"--create-slot\" or \"--drop-slot\".\n> 5) Have an assertion after the pg_receivewal done a good amount of\n> work to find start timeline and LSN might be helpful:\n> Assert(stream.timeline != 0 && stream.startpos != InvalidXLogRecPtr);\n> \n> Attaching a patch that does take care of above improvements. Thoughts?\n\nOverall I think it is good.\n\nHowever, you have some formatting issues, here it mixes tabs and spaces:\n\n+\t\t/*\n+\t \t * No valid data can be found in the existing output \ndirectory.\n+\t\t * Get start LSN position and current timeline ID from \nthe server.\n+\t \t */\n\nAnd here it is not formatted properly:\n\n+static char\t *server_sysid = NULL;\n\n\n\n> \n> Regards,\n> Bharath Rupireddy.\n\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 02 Sep 2021 17:34:59 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: improve pg_receivewal code" }, { "msg_contents": "On Thu, Sep 2, 2021 at 9:05 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> > 1) Fetch the server system identifier in the first RunIdentifySystem\n> > call and use it to identify(via pg_receivewal's ReceiveXlogStream) any\n> > unexpected changes that may happen in the server while pg_receivewal\n> > is connected to it. 
This can be helpful in scenarios when\n> > pg_receivewal tries to reconnect to the server (see the code around\n> > pg_usleep with RECONNECT_SLEEP_TIME) but something unexpected has\n> > happnend in the server that changed the its system identifier. Once\n> > the pg_receivewal establishes the conenction to server again, then the\n> > ReceiveXlogStream has a code chunk to compare the system identifier\n> > that we received in the initial connection.\n>\n> I'm not sure what kind of problem could be happening here: if you were\n> somewhat routed to another server ? Or if we \"switched\" the cluster listening\n> on that port ?\n\nYeah. Also, the pg_control file on the server can get corrupted for\nwhatever reason. This sys identifier check is useful in\ncase pg_receivewal connects to the server again and again.\nThese are things that sound overcautious to me; however, there's nothing\nwrong with using what ReceiveXlogStream provides. pg_basebackup does make\nuse of this already.\n\n> > 2) Move the RunIdentifySystem to identify timeline id and start LSN\n> > from the server only if the pg_receivewal failed to get them from\n> > FindStreamingStart. This way, an extra IDENTIFY_SYSTEM command is\n> > avoided.\n>\n> That makes sense, even if we add another IDENTIFY_SYSTEM to check against the\n> one set in the first place.\n>\n> > 3) Place the \"replication connetion shouldn't have any database name\n> > associated\" error check right after RunIdentifySystem so that we can\n> > avoid fetching wal segment size with RetrieveWalSegSize if at all we\n> > were to fail with that error. This change is similar to what\n> > pg_recvlogical.c does.\n>\n> Makes sense.\n>\n> > 4) Move the RetrieveWalSegSize to just before pg_receivewal.c enters\n> > main loop to get the wal from the server. 
This avoids an unnecessary\n> > query for pg_receivewal with \"--create-slot\" or \"--drop-slot\".\n> > 5) Have an assertion after the pg_receivewal done a good amount of\n> > work to find start timeline and LSN might be helpful:\n> > Assert(stream.timeline != 0 && stream.startpos != InvalidXLogRecPtr);\n> >\n> > Attaching a patch that does take care of above improvements. Thoughts?\n>\n> Overall I think it is good.\n\nThanks for your review.\n\n> However, you have some formatting issues, here it mixes tabs and spaces:\n>\n> + /*\n> + * No valid data can be found in the existing output\n> directory.\n> + * Get start LSN position and current timeline ID from\n> the server.\n> + */\n\nMy bad. I forgot to run \"git diff --check\" on the v1 patch.\n\n> And here it is not formatted properly:\n>\n> +static char *server_sysid = NULL;\n\nDone.\n\nHere's the v2 with above modifications.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 3 Sep 2021 09:23:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improve pg_receivewal code" }, { "msg_contents": "> Here's the v2 with above modifications.\n\nI was looking at this patch, and I agree that checking for the system\nID and the timeline by setting sysidentifier beforehand looks like an\nimprovement.\n\nThe extra IDENTIFY_SYSTEM done at the beginning of StreamLog() is not\na performance bottleneck as we run it only once for each loop. I\ndon't really get the argument of a server replacing another one on the\nsame port requiring to rely only on the first system ID fetched before \nstarting the loops of StreamLog() calls. 
So I would leave main()\nalone, but fill in the system ID from RunIdentifySystem() in\nStreamLog().\n--\nMichael", "msg_date": "Thu, 16 Sep 2021 13:01:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: improve pg_receivewal code" }, { "msg_contents": "On Thu, Sep 16, 2021 at 9:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > Here's the v2 with above modifications.\n>\n> I was looking at this patch, and I agree that checking for the system\n> ID and the timeline by setting sysidentifier beforehand looks like an\n> improvement.\n>\n> The extra IDENTIFY_SYSTEM done at the beginning of StreamLog() is not\n> a performance bottleneck as we run it only once for each loop. I\n> don't really get the argument of a server replacing another one on the\n> same port requiring to rely only on the first system ID fetched before\n> starting the loops of StreamLog() calls. So I would leave main()\n> alone, but fill in the system ID from RunIdentifySystem() in\n> StreamLog().\n\nThanks. I changed the code that way. PSA v3 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 17 Sep 2021 11:46:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improve pg_receivewal code" }, { "msg_contents": "On Fri, Sep 17, 2021 at 11:46:33AM +0530, Bharath Rupireddy wrote:\n> Thanks. I changed the code that way. PSA v3 patch.\n\nThanks. Done.\n--\nMichael", "msg_date": "Sat, 18 Sep 2021 10:52:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: improve pg_receivewal code" } ]
[ { "msg_contents": "Hi hackers,\nI wrote a patch to resolve the subtransactions concurrency performance\nproblems when suboverflowed.\n\nWhen we use more than PGPROC_MAX_CACHED_SUBXIDS(64) subtransactions per\ntransaction concurrently, it will lead to subtransactions performance\nproblems. \nAll backends will be stuck at acquiring the lock SubtransSLRULock.\n\nThe reproduce steps on the PG master branch:\n\n1, init a cluster, append postgresql.conf as below: \n\nmax_connections = '2500'\nmax_files_per_process = '2000'\nmax_locks_per_transaction = '64'\nmax_parallel_maintenance_workers = '8'\nmax_parallel_workers = '60'\nmax_parallel_workers_per_gather = '6'\nmax_prepared_transactions = '15000'\nmax_replication_slots = '10'\nmax_wal_senders = '64'\nmax_worker_processes = '250'\nshared_buffers = 8GB\n\n2, create table and insert some records as below:\n\nCREATE UNLOGGED TABLE contend (\n id integer,\n val integer NOT NULL\n)\nWITH (fillfactor='50');\n \nINSERT INTO contend (id, val)\nSELECT i, 0\nFROM generate_series(1, 10000) AS i;\n \nVACUUM (ANALYZE) contend;\n\n3, The script subtrans_128.sql is in the attachment. Use pgbench with\nsubtrans_128.sql as below.\npgbench -d postgres -p 33800 -n -r -f subtrans_128.sql -c 500 -j 500 -T\n3600\n\n\n4, After a while, we can get the stuck result. We can query\npg_stat_activity. All backends' wait event is SubtransSLRULock.\n We can use perf top and try to find the root cause. The result of perf top is\nas below:\n66.20% postgres [.] pg_atomic_compare_exchange_u32_impl\n 29.30% postgres [.] pg_atomic_fetch_sub_u32_impl\n 1.46% postgres [.] pg_atomic_read_u32\n 1.34% postgres [.] TransactionIdIsCurrentTransactionId\n 0.75% postgres [.] SimpleLruReadPage_ReadOnly\n 0.14% postgres [.] LWLockAttemptLock\n 0.14% postgres [.] LWLockAcquire\n 0.12% postgres [.] pg_atomic_compare_exchange_u32\n 0.09% postgres [.] HeapTupleSatisfiesMVCC\n 0.06% postgres [.] heapgetpage\n 0.03% postgres [.] sentinel_ok\n 0.03% postgres [.] 
XidInMVCCSnapshot\n 0.03% postgres [.] slot_deform_heap_tuple\n 0.03% postgres [.] ExecInterpExpr\n 0.02% postgres [.] AllocSetCheck\n 0.02% postgres [.] HeapTupleSatisfiesVisibility\n 0.02% postgres [.] LWLockRelease\n 0.02% postgres [.] TransactionIdPrecedes\n 0.02% postgres [.] SubTransGetParent\n 0.01% postgres [.] heapgettup_pagemode\n 0.01% postgres [.] CheckForSerializableConflictOutNeeded\n\n\nAfter viewing the subtrans code, it is easy to see that the global LWLock\nSubtransSLRULock is the bottleneck for subtrans concurrency.\n\nWhen a backend session assigns more than PGPROC_MAX_CACHED_SUBXIDS(64)\nsubtransactions, we will get a snapshot with suboverflowed.\nA suboverflowed snapshot does not contain all data required to determine\nvisibility, so PostgreSQL will occasionally have to resort to pg_subtrans. \nThese pages are cached in shared buffers, but you can see the overhead of\nlooking them up in the high rank of SimpleLruReadPage_ReadOnly in the perf\noutput.\n\nTo resolve this performance problem, we think about a solution which caches\nthe subtrans SLRU in a local cache. \nFirst we can query the parent transaction id from the subtrans SLRU, and copy the\nSLRU page to a local cache page.\nAfter that, if we need to query the parent transaction id again, we can query it\nfrom the local cache directly.\nIt will observably reduce the number of acquisitions and releases of the LWLock\nSubtransSLRULock.\n\nFrom all snapshots, we can get the latest xmin. Any transaction id which\nprecedes this xmin must have been committed/aborted. \nIts parent/top transaction has been written to the subtrans SLRU. Then we can\ncache the subtrans SLRU and copy it to the local cache.\n\nUse the same reproduce steps above; with our patch we cannot get the stuck\nresult.\nNote to append our GUC parameter in postgresql.conf. 
This optimization is off\nby default.\nlocal_cache_subtrans_pages=128 \n\nThe patch is based on PG master branch\n0d906b2c0b1f0d625ff63d9ace906556b1c66a68\n\n\nOur project is at https://github.com/ADBSQL/AntDB. Welcome to follow us,\nAntDB, AsiaInfo's PG-based distributed database product\n\nThanks\nPengcheng", "msg_date": "Mon, 30 Aug 2021 16:43:24 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi Pengcheng!\n\nYou are solving an important problem, thank you!\n\n> On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n> \n> To resolve this performance problem, we think about a solution which cache\n> SubtransSLRU to local cache. \n> First we can query parent transaction id from SubtransSLRU, and copy the\n> SLRU page to local cache page.\n> After that if we need query parent transaction id again, we can query it\n> from local cache directly.\n\nA copy of SLRU in each backend's cache can consume a lot of memory. Why create a copy if we can optimise the shared representation of SLRU?\n\nJFYI There is a related patch to make SimpleLruReadPage_ReadOnly() faster for bigger SLRU buffers[0].\nAlso Nik Samokhvalov recently published an interesting investigation on the topic, but for some reason his message did not pass the moderation. 
[1]\n\nAlso it's important to note that there was a community request to move SLRUs to shared_buffers [2].\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/34/2627/\n[1] https://www.postgresql.org/message-id/flat/BE73A0BB-5929-40F4-BAF8-55323DE39561%40yandex-team.ru\n[2] https://www.postgresql.org/message-id/flat/20180814213500.GA74618%4060f81dc409fc.ant.amazon.com\n\n", "msg_date": "Mon, 30 Aug 2021 15:24:57 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi Andrey,\n Thanks a lot for your reply and the reference information.\n \n The default NUM_SUBTRANS_BUFFERS is 32. In my implementation, local_cache_subtrans_pages can be adjusted dynamically.\n If we configure local_cache_subtrans_pages as 64, every backend uses only an extra 64*8192=512KB of memory. \n So the local cache is similar to a first-level cache, and the subtrans SLRU is the second-level cache.\n And I think the extra memory is well worth it. It really resolves the massive subtrans stuck issue which I mentioned in the previous email.\n \n I have viewed the patch [0] before. For SLRU buffers, adding GUC configuration parameters is very nice.\n I think for subtrans, its optimization is not enough. For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in a loop.\n This prevents repeated acquire/release of SubtransSLRULock in the SubTransGetTopmostTransaction-> SubTransGetParent loop.\n After I applied this patch, in which I optimized SubTransGetTopmostTransaction, with my test case I still got the stuck result.\n \n Regarding the [1] solution: actually, at first we tried to use the buffer manager to replace SLRU for subtrans too. And we have implemented it.\n With the test case which I mentioned in the previous mail, it was still stuck. 
By default there are 2048 subtrans entries in one page.\n When some processes get the top transaction in one page, they must pin/unpin and lock/unlock the same page repeatedly.\n I found that it was stuck at pin/unpin page for some backends.\n \n Comparing test results, pgbench with subtrans_128.sql:\n Concurrency PG master PG master with patch[0] Local cache optimize\n 300\t stuck stuck no stuck\n 500 stuck stuck no stuck\n 1000 stuck stuck no stuck\n \n Maybe we can test a different approach with my test case. For massive concurrency, if it does not get stuck, we have a good solution.\n\n[0] https://commitfest.postgresql.org/34/2627/\n[1] https://www.postgresql.org/message-id/flat/20180814213500.GA74618%4060f81dc409fc.ant.amazon.com\n\nThanks\nPengcheng\n\n-----Original Message-----\nFrom: Andrey Borodin <x4mmm@yandex-team.ru> \nSent: 30 August 2021 18:25\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: pgsql-hackers@postgresql.org\nSubject: Re: suboverflowed subtransactions concurrency performance optimize\n\nHi Pengcheng!\n\nYou are solving important problem, thank you!\n\n> On 30 Aug 2021, at 13:43, Pengchengliu <pengchengliu@tju.edu.cn> wrote:\n> \n> To resolve this performance problem, we think about a solution which \n> cache SubtransSLRU to local cache.\n> First we can query parent transaction id from SubtransSLRU, and copy \n> the SLRU page to local cache page.\n> After that if we need query parent transaction id again, we can query \n> it from local cache directly.\n\nA copy of SLRU in each backend's cache can consume a lot of memory. Why create a copy if we can optimise shared representation of SLRU?\n\nJFYI There is a related patch to make SimpleLruReadPage_ReadOnly() faster for bigger SLRU buffers[0].\nAlso Nik Samokhvalov recently published interesting investigation on the topic, but for some reason his message did not pass the moderation. 
[1]\n\nAlso it's important to note that there was a community request to move SLRUs to shared_buffers [2].\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/34/2627/\n[1] https://www.postgresql.org/message-id/flat/BE73A0BB-5929-40F4-BAF8-55323DE39561%40yandex-team.ru\n[2] https://www.postgresql.org/message-id/flat/20180814213500.GA74618%4060f81dc409fc.ant.amazon.com\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:43:02 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Mon, Aug 30, 2021 at 1:43 AM Pengchengliu <pengchengliu@tju.edu.cn>\nwrote:\n\n> Hi hackers,\n> I wrote a patch to resolve the subtransactions concurrency performance\n> problems when suboverflowed.\n>\n> When we use more than PGPROC_MAX_CACHED_SUBXIDS(64) subtransactions per\n> transaction concurrency, it will lead to subtransactions performance\n> problems.\n> All backend will be stuck at acquiring lock SubtransSLRULock.\n>\n> The reproduce steps in PG master branch:\n>\n> 1, init a cluster, append postgresql.conf as below:\n>\n> max_connections = '2500'\n> max_files_per_process = '2000'\n> max_locks_per_transaction = '64'\n> max_parallel_maintenance_workers = '8'\n> max_parallel_workers = '60'\n> max_parallel_workers_per_gather = '6'\n> max_prepared_transactions = '15000'\n> max_replication_slots = '10'\n> max_wal_senders = '64'\n> max_worker_processes = '250'\n> shared_buffers = 8GB\n>\n> 2, create table and insert some records as below:\n>\n> CREATE UNLOGGED TABLE contend (\n> id integer,\n> val integer NOT NULL\n> )\n> WITH (fillfactor='50');\n>\n> INSERT INTO contend (id, val)\n> SELECT i, 0\n> FROM generate_series(1, 10000) AS i;\n>\n> VACUUM (ANALYZE) contend;\n>\n> 3, The script subtrans_128.sql in attachment. 
use pgbench with\n> subtrans_128.sql as below.\n> pgbench -d postgres -p 33800 -n -r -f subtrans_128.sql -c 500 -j 500 -T\n> 3600\n>\n>\n> 4, After for a while, we can get the stuck result. We can query\n> pg_stat_activity. All backends wait event is SubtransSLRULock.\n> We can use pert top and try find the root cause. The result of perf top\n> as below:\n> 66.20% postgres [.] pg_atomic_compare_exchange_u32_impl\n> 29.30% postgres [.] pg_atomic_fetch_sub_u32_impl\n> 1.46% postgres [.] pg_atomic_read_u32\n> 1.34% postgres [.] TransactionIdIsCurrentTransactionId\n> 0.75% postgres [.] SimpleLruReadPage_ReadOnly\n> 0.14% postgres [.] LWLockAttemptLock\n> 0.14% postgres [.] LWLockAcquire\n> 0.12% postgres [.] pg_atomic_compare_exchange_u32\n> 0.09% postgres [.] HeapTupleSatisfiesMVCC\n> 0.06% postgres [.] heapgetpage\n> 0.03% postgres [.] sentinel_ok\n> 0.03% postgres [.] XidInMVCCSnapshot\n> 0.03% postgres [.] slot_deform_heap_tuple\n> 0.03% postgres [.] ExecInterpExpr\n> 0.02% postgres [.] AllocSetCheck\n> 0.02% postgres [.] HeapTupleSatisfiesVisibility\n> 0.02% postgres [.] LWLockRelease\n> 0.02% postgres [.] TransactionIdPrecedes\n> 0.02% postgres [.] SubTransGetParent\n> 0.01% postgres [.] heapgettup_pagemode\n> 0.01% postgres [.] 
CheckForSerializableConflictOutNeeded\n>\n>\n> After view the subtrans codes, it is easy to find that the global LWLock\n> SubtransSLRULock is the bottleneck of subtrans concurrently.\n>\n> When a bakcend session assign more than PGPROC_MAX_CACHED_SUBXIDS(64)\n> subtransactions, we will get a snapshot with suboverflowed.\n> A suboverflowed snapshot does not contain all data required to determine\n> visibility, so PostgreSQL will occasionally have to resort to pg_subtrans.\n> These pages are cached in shared buffers, but you can see the overhead of\n> looking them up in the high rank of SimpleLruReadPage_ReadOnly in the perf\n> output.\n>\n> To resolve this performance problem, we think about a solution which cache\n> SubtransSLRU to local cache.\n> First we can query parent transaction id from SubtransSLRU, and copy the\n> SLRU page to local cache page.\n> After that if we need query parent transaction id again, we can query it\n> from local cache directly.\n> It will reduce the number of acquire and release LWLock SubtransSLRULock\n> observably.\n>\n> From all snapshots, we can get the latest xmin. All transaction id which\n> precedes this xmin, it muse has been commited/abortd.\n> Their parent/top transaction has been written subtrans SLRU. Then we can\n> cache the subtrans SLRU and copy it to local cache.\n>\n> Use the same produce steps above, with our patch we cannot get the stuck\n> result.\n> Note that append our GUC parameter in postgresql.conf. 
This optimize is off\n> in default.\n> local_cache_subtrans_pages=128\n>\n> The patch is base on PG master branch\n> 0d906b2c0b1f0d625ff63d9ace906556b1c66a68\n>\n>\n> Our project in https://github.com/ADBSQL/AntDB, Welcome to follow us,\n> AntDB, AsiaInfo's PG-based distributed database product\n>\n> Thanks\n> Pengcheng\n>\n> Hi,\n\n+ uint16 valid_offset; /* how many entry is valid */\n\nhow many entry is -> how many entries are\n\n+int slru_subtrans_page_num = 32;\n\nLooks like the variable represents the number of subtrans pages. Maybe name\nthe variable slru_subtrans_page_count ?\n\n+ if (lbuffer->in_htab == false)\n\nThe condition can be written as 'if (!lbuffer->in_htab)'\n\nFor SubtransAllocLocalBuffer(), you can enclose the body of method in while\nloop so that you don't use goto statement.\n\nCheers", "msg_date": "Tue, 31 Aug 2021 10:20:56 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "31 авг. 
2021 г., в 11:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n> \n> Hi Andrey,\n> Thanks a lot for your replay and reference information.\n> \n> The default NUM_SUBTRANS_BUFFERS is 32. My implementation is local_cache_subtrans_pages can be adjusted dynamically.\n> If we configure local_cache_subtrans_pages as 64, every backend use only extra 64*8192=512KB memory.\n> So the local cache is similar to the first level cache. And subtrans SLRU is the second level cache.\n> And I think extra memory is very well worth it. It really resolve massive subtrans stuck issue which I mentioned in previous email.\n> \n> I have view the patch of [0] before. For SLRU buffers adding GUC configuration parameters are very nice.\n> I think for subtrans, its optimize is not enough. For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in loop.\n> Prevent acquire/release SubtransSLRULock in SubTransGetTopmostTransaction-> SubTransGetParent in loop.\n> After I apply this patch which I optimize SubTransGetTopmostTransaction, with my test case, I still get stuck result.\n\nSubTransGetParent() acquires only Shared lock on SubtransSLRULock. The problem may arise only when someone reads page from disk. But if you have big enough cache - this will never happen. And this cache will be much less than 512KB*max_connections.\n\nI think if we really want to fix exclusive SubtransSLRULock I think best option would be to split SLRU control lock into array of locks - one for each bank (in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch). With this approach we will have to rename s/bank/partition/g for consistency with locks and buffers partitions. 
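As an illustration of the per-bank idea mentioned above, splitting one SLRU control lock into an array of bank locks could be sketched as below. The constants, the toy LWLock type, and the SlruBankLock() name are assumptions invented for this sketch; they are not taken from the actual v17 patch.

```c
#include <assert.h>

/* Illustrative values only, not the ones used by any real patch. */
#define NUM_SLRU_BUFFERS 128    /* total SLRU buffer slots */
#define SLRU_BANK_SIZE    16    /* slots protected by one lock */
#define NUM_SLRU_BANKS   (NUM_SLRU_BUFFERS / SLRU_BANK_SIZE)

typedef struct LWLock { int dummy; } LWLock;    /* toy stand-in */

static LWLock bank_locks[NUM_SLRU_BANKS];

/*
 * Map a buffer slot to the lock protecting its bank.  Backends touching
 * slots in different banks then queue on different locks instead of all
 * serializing on a single SubtransSLRULock.
 */
static LWLock *
SlruBankLock(int slotno)
{
    assert(slotno >= 0 && slotno < NUM_SLRU_BUFFERS);
    return &bank_locks[slotno / SLRU_BANK_SIZE];
}
```

A lookup would then acquire only the lock returned by SlruBankLock() for the slot it touches, which is what lets concurrent readers of unrelated pages proceed in parallel.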
I really liked having my own banks, but consistency worth it anyway.Thanks!Best regards, Andrey Borodin.\n", "msg_date": "Fri, 3 Sep 2021 09:11:16 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Sorry, for some reason Mail.app converted message to html and mailing list mangled this html into mess. I'm resending previous message as plain text again. Sorry for the noise.\n\n> 31 авг. 2021 г., в 11:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n> \n> Hi Andrey,\n> Thanks a lot for your replay and reference information.\n> \n> The default NUM_SUBTRANS_BUFFERS is 32. My implementation is local_cache_subtrans_pages can be adjusted dynamically.\n> If we configure local_cache_subtrans_pages as 64, every backend use only extra 64*8192=512KB memory. \n> So the local cache is similar to the first level cache. And subtrans SLRU is the second level cache.\n> And I think extra memory is very well worth it. It really resolve massive subtrans stuck issue which I mentioned in previous email.\n> \n> I have view the patch of [0] before. For SLRU buffers adding GUC configuration parameters are very nice.\n> I think for subtrans, its optimize is not enough. For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in loop.\n> Prevent acquire/release SubtransSLRULock in SubTransGetTopmostTransaction-> SubTransGetParent in loop.\n> After I apply this patch which I optimize SubTransGetTopmostTransaction, with my test case, I still get stuck result.\n\nSubTransGetParent() acquires only Shared lock on SubtransSLRULock. The problem may arise only when someone reads page from disk. But if you have big enough cache - this will never happen. 
And this cache will be much less than 512KB*max_connections.\n\nI think if we really want to fix exclusive SubtransSLRULock I think best option would be to split SLRU control lock into array of locks - one for each bank (in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch). With this approach we will have to rename s/bank/partition/g for consistency with locks and buffers partitions. I really liked having my own banks, but consistency worth it anyway.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 3 Sep 2021 11:50:59 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi Andrey,\n\n> I think if we really want to fix exclusive SubtransSLRULock I think best option would be to split SLRU control lock into array of locks\n I agree with you. If we can resolve the performance issue with this approach, It should be a good solution.\n\n> one for each bank (in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch)\n I have tested with this patch. And I have modified NUM_SUBTRANS_BUFFERS to 128. With 500 concurrence, it would not be stuck indeed. But the performance is very bad. 
For a sequence scan table, it uses more than one minute.\nI think it is unacceptable in a production environment.\n\npostgres=# select count(*) from contend ;\n count \n-------\n 10127\n(1 row)\n\nTime: 86011.593 ms (01:26.012)\npostgres=# select count(*) from contend ;\n count \n-------\n 10254\n(1 row)\nTime: 79399.949 ms (01:19.400)\n\n\nWith my local subtrans optimize approach, the same env and the same test script and 500 concurrence, a sequence scan, it uses only less than 10 seconds.\n\npostgres=# select count(*) from contend ;\n count \n-------\n 10508\n(1 row)\n\nTime: 7104.283 ms (00:07.104)\n\npostgres=# select count(*) from contend ;\ncount \n-------\n 13175\n(1 row)\n\nTime: 6602.635 ms (00:06.603)\nThanks\nPengcheng\n\n-----Original Message-----\nFrom: Andrey Borodin <x4mmm@yandex-team.ru> \nSent: 2021年9月3日 14:51\nTo: Pengchengliu <pengchengliu@tju.edu.cn>\nCc: pgsql-hackers@postgresql.org\nSubject: Re: suboverflowed subtransactions concurrency performance optimize\n\nSorry, for some reason Mail.app converted message to html and mailing list mangled this html into mess. I'm resending previous message as plain text again. Sorry for the noise.\n\n> 31 авг. 2021 г., в 11:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n> \n> Hi Andrey,\n> Thanks a lot for your replay and reference information.\n> \n> The default NUM_SUBTRANS_BUFFERS is 32. My implementation is local_cache_subtrans_pages can be adjusted dynamically.\n> If we configure local_cache_subtrans_pages as 64, every backend use only extra 64*8192=512KB memory. \n> So the local cache is similar to the first level cache. And subtrans SLRU is the second level cache.\n> And I think extra memory is very well worth it. It really resolve massive subtrans stuck issue which I mentioned in previous email.\n> \n> I have view the patch of [0] before. For SLRU buffers adding GUC configuration parameters are very nice.\n> I think for subtrans, its optimize is not enough. 
For SubTransGetTopmostTransaction, we should get the SubtransSLRULock first, then call SubTransGetParent in loop.\n> Prevent acquire/release SubtransSLRULock in SubTransGetTopmostTransaction-> SubTransGetParent in loop.\n> After I apply this patch which I optimize SubTransGetTopmostTransaction, with my test case, I still get stuck result.\n\nSubTransGetParent() acquires only Shared lock on SubtransSLRULock. The problem may arise only when someone reads page from disk. But if you have big enough cache - this will never happen. And this cache will be much less than 512KB*max_connections.\n\nI think if we really want to fix exclusive SubtransSLRULock I think best option would be to split SLRU control lock into array of locks - one for each bank (in v17-0002-Divide-SLRU-buffers-into-n-associative-banks.patch). With this approach we will have to rename s/bank/partition/g for consistency with locks and buffers partitions. I really liked having my own banks, but consistency worth it anyway.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Tue, 7 Sep 2021 15:11:32 +0800", "msg_from": "\"Pengchengliu\" <pengchengliu@tju.edu.cn>", "msg_from_op": true, "msg_subject": "RE: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi Pengcheng!\n>\n> You are solving important problem, thank you!\n>\n> > 30 авг. 
2021 г., в 13:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n> >\n> > To resolve this performance problem, we think about a solution which cache\n> > SubtransSLRU to local cache.\n> > First we can query parent transaction id from SubtransSLRU, and copy the\n> > SLRU page to local cache page.\n> > After that if we need query parent transaction id again, we can query it\n> > from local cache directly.\n>\n> A copy of SLRU in each backend's cache can consume a lot of memory.\n\nYes, copying the whole SLRU into local cache seems overkill.\n\n> Why create a copy if we can optimise shared representation of SLRU?\n\ntransam.c uses a single item cache to prevent thrashing from repeated\nlookups, which reduces problems with shared access to SLRUs.\nmultitrans.c also has similar.\n\nI notice that subtrans. doesn't have this, but could easily do so.\nPatch attached, which seems separate to other attempts at tuning.\n\nOn review, I think it is also possible that we update subtrans ONLY if\nsomeone uses >PGPROC_MAX_CACHED_SUBXIDS.\nThis would make subtrans much smaller and avoid one-entry-per-page\nwhich is a major source of cacheing.\nThis would means some light changes in GetSnapshotData().\nLet me know if that seems interesting also?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Tue, 30 Nov 2021 12:19:00 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "\n\n> 30 нояб. 2021 г., в 17:19, Simon Riggs <simon.riggs@enterprisedb.com> написал(а):\n> \n> On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> \n>> Hi Pengcheng!\n>> \n>> You are solving important problem, thank you!\n>> \n>>> 30 авг. 
2021 г., в 13:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n>>> \n>>> To resolve this performance problem, we think about a solution which cache\n>>> SubtransSLRU to local cache.\n>>> First we can query parent transaction id from SubtransSLRU, and copy the\n>>> SLRU page to local cache page.\n>>> After that if we need query parent transaction id again, we can query it\n>>> from local cache directly.\n>> \n>> A copy of SLRU in each backend's cache can consume a lot of memory.\n> \n> Yes, copying the whole SLRU into local cache seems overkill.\n> \n>> Why create a copy if we can optimise shared representation of SLRU?\n> \n> transam.c uses a single item cache to prevent thrashing from repeated\n> lookups, which reduces problems with shared access to SLRUs.\n> multitrans.c also has similar.\n> \n> I notice that subtrans. doesn't have this, but could easily do so.\n> Patch attached, which seems separate to other attempts at tuning.\nI think this definitely makes sense to do.\n\n\n> On review, I think it is also possible that we update subtrans ONLY if\n> someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> This would make subtrans much smaller and avoid one-entry-per-page\n> which is a major source of cacheing.\n> This would means some light changes in GetSnapshotData().\n> Let me know if that seems interesting also?\n\nI'm afraid of unexpected performance degradation. When the system runs fine, you provision a VM of some vCPU\\RAM, and then some backend uses a little more than 64 subtransactions and all the system is stuck. 
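For concreteness, the single-item cache Simon describes above for SubTransGetTopmostTransaction(), mirroring the one transam.c already has, could look roughly like the sketch below. The typedefs and the SubTransGetTopmostUncached() helper are simplified stand-ins invented for illustration; the real code walks pg_subtrans pages through the SLRU machinery.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)
#define TransactionIdIsValid(xid) ((xid) != InvalidTransactionId)

/* Toy parent map standing in for the pg_subtrans SLRU; 0 = "no parent". */
static TransactionId parent_of[1024];

static TransactionId
SubTransGetTopmostUncached(TransactionId xid)
{
    /* Each step here models one pg_subtrans page lookup. */
    while (xid < 1024 && TransactionIdIsValid(parent_of[xid]))
        xid = parent_of[xid];
    return xid;
}

/* Single-item cache: repeated lookups of the same subxid skip the SLRU. */
static TransactionId cachedFetchSubXid = InvalidTransactionId;
static TransactionId cachedFetchTopmostXid = InvalidTransactionId;

TransactionId
SubTransGetTopmostTransaction(TransactionId xid)
{
    if (TransactionIdIsValid(cachedFetchSubXid) && xid == cachedFetchSubXid)
        return cachedFetchTopmostXid;

    cachedFetchTopmostXid = SubTransGetTopmostUncached(xid);
    cachedFetchSubXid = xid;
    return cachedFetchTopmostXid;
}
```

On a repeated lookup of the same subxid, for example when scanning many tuples written by one subtransaction, the cached pair answers immediately and no SLRU page, and hence no SubtransSLRULock, is touched.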
Or will it affect only backend using more than 64 subtransactions?\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:41:37 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Tue, Nov 30, 2021 at 5:49 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n\n> transam.c uses a single item cache to prevent thrashing from repeated\n> lookups, which reduces problems with shared access to SLRUs.\n> multitrans.c also has similar.\n>\n> I notice that subtrans. doesn't have this, but could easily do so.\n> Patch attached, which seems separate to other attempts at tuning.\n\nYeah, this definitely makes sense.\n\n> On review, I think it is also possible that we update subtrans ONLY if\n> someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> This would make subtrans much smaller and avoid one-entry-per-page\n> which is a major source of cacheing.\n> This would means some light changes in GetSnapshotData().\n> Let me know if that seems interesting also?\n\nDo you mean to say avoid setting the sub-transactions parent if the\nnumber of sun-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?\nBut the TransactionIdDidCommit(), might need to fetch the parent if\nthe transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how\nwould we handle that?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Dec 2021 11:56:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, 3 Dec 2021 at 01:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > On review, I think it is also possible that we update subtrans ONLY if\n> > someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> > This would make subtrans much smaller and avoid one-entry-per-page\n> > which 
is a major source of cacheing.\n> > This would means some light changes in GetSnapshotData().\n> > Let me know if that seems interesting also?\n>\n> Do you mean to say avoid setting the sub-transactions parent if the\n> number of sun-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?\n> But the TransactionIdDidCommit(), might need to fetch the parent if\n> the transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how\n> would we handle that?\n\nTRANSACTION_STATUS_SUB_COMMITTED is set as a transient state during\nfinal commit.\nIn that case, the top-level xid is still in procarray when nsubxids <\nPGPROC_MAX_CACHED_SUBXIDS\nso we need not consult pg_subtrans in that case, see step 4 of\nTransactionIdIsInProgress()\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Dec 2021 11:30:18 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, Dec 3, 2021 at 5:00 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, 3 Dec 2021 at 01:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > On review, I think it is also possible that we update subtrans ONLY if\n> > > someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> > > This would make subtrans much smaller and avoid one-entry-per-page\n> > > which is a major source of cacheing.\n> > > This would means some light changes in GetSnapshotData().\n> > > Let me know if that seems interesting also?\n> >\n> > Do you mean to say avoid setting the sub-transactions parent if the\n> > number of sun-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?\n> > But the TransactionIdDidCommit(), might need to fetch the parent if\n> > the transaction status is TRANSACTION_STATUS_SUB_COMMITTED, so how\n> > would we handle that?\n>\n> TRANSACTION_STATUS_SUB_COMMITTED is set as a transient state during\n> final commit.\n> In that case, the top-level xid is 
still in procarray when nsubxids <\n> PGPROC_MAX_CACHED_SUBXIDS\n> so we need not consult pg_subtrans in that case, see step 4 of.\n> TransactionIdIsInProgress()\n\nOkay I see, that there is a rule that before calling\nTransactionIdDidCommit(), we must consult TransactionIdIsInProgress()\nfor non MVCC snapshot or XidInMVCCSnapshot(). Okay so now I don't\nhave this concern, thanks for clarifying. I will think more about\nthis approach from other aspects.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:28:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Wed, 1 Dec 2021 at 06:41, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> > On review, I think it is also possible that we update subtrans ONLY if\n> > someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> > This would make subtrans much smaller and avoid one-entry-per-page\n> > which is a major source of cacheing.\n> > This would means some light changes in GetSnapshotData().\n> > Let me know if that seems interesting also?\n>\n> I'm afraid of unexpected performance degradation. When the system runs fine, you provision a VM of some vCPU\\RAM, and then some backend uses a little more than 64 subtransactions and all the system is stuck. Or will it affect only backend using more than 64 subtransactions?\n\nThat is the objective: to isolate the effect to only those that\noverflow. 
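The overall shape of the change being discussed for XidInMVCCSnapshot(), paying for a pg_subtrans lookup only when the snapshot actually suboverflowed, can be sketched as follows. The ToySnapshot struct and the toy parent map are simplified stand-ins, not the real SnapshotData or SLRU code, and the real function also checks the snapshot's xmin/xmax bounds first.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Simplified stand-in for SnapshotData. */
typedef struct
{
    TransactionId xip[64];      /* in-progress top-level xids */
    int           xcnt;
    TransactionId subxip[64];   /* in-progress subxids, when not overflowed */
    int           subxcnt;
    bool          suboverflowed;
} ToySnapshot;

/* Toy stand-in for pg_subtrans: parent of each xid, 0 = no parent. */
static TransactionId subtrans_parent[64];

static TransactionId
SubTransGetTopmostTransaction(TransactionId xid)
{
    while (xid < 64 && subtrans_parent[xid] != 0)
        xid = subtrans_parent[xid];
    return xid;
}

static bool
XidInMVCCSnapshot(TransactionId xid, const ToySnapshot *snapshot)
{
    if (!snapshot->suboverflowed)
    {
        /* Cheap path: every in-progress subxid fits in the snapshot. */
        for (int i = 0; i < snapshot->subxcnt; i++)
            if (xid == snapshot->subxip[i])
                return true;
    }
    else
    {
        /* Overflowed: only now do we pay for a subtrans lookup. */
        xid = SubTransGetTopmostTransaction(xid);
    }

    /* Either way, finish by checking the top-level xid array. */
    for (int i = 0; i < snapshot->xcnt; i++)
        if (xid == snapshot->xip[i])
            return true;
    return false;
}
```

The point of the restructuring is visible in the branch: backends whose snapshots never overflow stay on the in-memory subxip path and never touch pg_subtrans at all.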
It seems possible.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 8 Dec 2021 15:34:21 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, 3 Dec 2021 at 06:27, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 30, 2021 at 5:49 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n>\n> > transam.c uses a single item cache to prevent thrashing from repeated\n> > lookups, which reduces problems with shared access to SLRUs.\n> > multitrans.c also has similar.\n> >\n> > I notice that subtrans. doesn't have this, but could easily do so.\n> > Patch attached, which seems separate to other attempts at tuning.\n>\n> Yeah, this definitely makes sense.\n>\n> > On review, I think it is also possible that we update subtrans ONLY if\n> > someone uses >PGPROC_MAX_CACHED_SUBXIDS.\n> > This would make subtrans much smaller and avoid one-entry-per-page\n> > which is a major source of cacheing.\n> > This would means some light changes in GetSnapshotData().\n> > Let me know if that seems interesting also?\n>\n> Do you mean to say avoid setting the sub-transactions parent if the\n> number of sub-transactions is not crossing PGPROC_MAX_CACHED_SUBXIDS?\n\nYes.\n\nThis patch shows where I'm going, with changes in GetSnapshotData()\nand XidInMVCCSnapshot() and XactLockTableWait().\nPasses make check, but needs much more, so this is review-only at this\nstage to give a flavour of what is intended.\n\n(Nowhere near replacing the subtrans module as I envisage as the\nfinal outcome, meaning we don't need ExtendSUBTRANS()).\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 8 Dec 2021 16:39:11 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": 
"Hi,\n\nOn Wed, Dec 08, 2021 at 04:39:11PM +0000, Simon Riggs wrote:\n> \n> This patch shows where I'm going, with changes in GetSnapshotData()\n> and XidInMVCCSnapshot() and XactLockTableWait().\n> Passes make check, but needs much more, so this is review-only at this\n> stage to give a flavour of what is intended.\n\nThanks a lot to everyone involved in this!\n\nI can't find any entry in the commitfest for the work being done here. Did I\nmiss something? If not could you create an entry in the next commitfest to\nmake sure that it doesn't get forgotten?\n\n\n", "msg_date": "Sat, 15 Jan 2022 12:22:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Tue, 30 Nov 2021 at 12:19, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, 30 Aug 2021 at 11:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > Hi Pengcheng!\n> >\n> > You are solving important problem, thank you!\n> >\n> > > 30 авг. 2021 г., в 13:43, Pengchengliu <pengchengliu@tju.edu.cn> написал(а):\n> > >\n> > > To resolve this performance problem, we think about a solution which cache\n> > > SubtransSLRU to local cache.\n> > > First we can query parent transaction id from SubtransSLRU, and copy the\n> > > SLRU page to local cache page.\n> > > After that if we need query parent transaction id again, we can query it\n> > > from local cache directly.\n> >\n> > A copy of SLRU in each backend's cache can consume a lot of memory.\n>\n> Yes, copying the whole SLRU into local cache seems overkill.\n>\n> > Why create a copy if we can optimise shared representation of SLRU?\n>\n> transam.c uses a single item cache to prevent thrashing from repeated\n> lookups, which reduces problems with shared access to SLRUs.\n> multitrans.c also has similar.\n>\n> I notice that subtrans. 
doesn't have this, but could easily do so.\n> Patch attached, which seems separate to other attempts at tuning.\n\nRe-attached, so that the CFapp isn't confused between the multiple\npatches on this thread.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Mon, 17 Jan 2022 13:44:02 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "\n\n> 17 янв. 2022 г., в 18:44, Simon Riggs <simon.riggs@enterprisedb.com> написал(а):\n> \n> Re-attached, so that the CFapp isn't confused between the multiple\n> patches on this thread.\n\nFWIW I've looked into the patch and it looks good to me. Comments describing when the cache is useful seem valid.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 17 Jan 2022 21:21:13 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 17, 2022 at 01:44:02PM +0000, Simon Riggs wrote:\n>\n> Re-attached, so that the CFapp isn't confused between the multiple\n> patches on this thread.\n\nThanks a lot for working on this!\n\nThe patch is simple and overall looks good to me. A few comments though:\n\n\n+/*\n+ * Single-item cache for results of SubTransGetTopmostTransaction. 
It's worth having\n+ * such a cache because we frequently find ourselves repeatedly checking the\n+ * same XID, for example when scanning a table just after a bulk insert,\n+ * update, or delete.\n+ */\n+static TransactionId cachedFetchXid = InvalidTransactionId;\n+static TransactionId cachedFetchTopmostXid = InvalidTransactionId;\n\nThe comment is above the 80 chars after\ns/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this\ncomment is valid for subtrans.c.\n\nAlso, maybe naming the first variable cachedFetchSubXid would make it a bit\nclearer?\n\nIt would be nice to see some benchmarks, for both when this change is\nenough to avoid a contention (when there's a single long-running overflowed\nbackend) and when it's not enough. That will also be useful if/when working on\nthe \"rethink_subtrans\" patch.\n\n\n", "msg_date": "Mon, 7 Mar 2022 17:48:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Mon, 7 Mar 2022 at 09:49, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Jan 17, 2022 at 01:44:02PM +0000, Simon Riggs wrote:\n> >\n> > Re-attached, so that the CFapp isn't confused between the multiple\n> > patches on this thread.\n>\n> Thanks a lot for working on this!\n>\n> The patch is simple and overall looks good to me. A few comments though:\n>\n>\n> +/*\n> + * Single-item cache for results of SubTransGetTopmostTransaction. 
It's worth having\n> + * such a cache because we frequently find ourselves repeatedly checking the\n> + * same XID, for example when scanning a table just after a bulk insert,\n> + * update, or delete.\n> + */\n> +static TransactionId cachedFetchXid = InvalidTransactionId;\n> +static TransactionId cachedFetchTopmostXid = InvalidTransactionId;\n>\n> The comment is above the 80 chars after\n> s/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this\n> comment is valid for subtrans.c.\n\nWhat aspect makes it invalid? The comment seems exactly applicable to\nme; Andrey thinks so also.\n\n> Also, maybe naming the first variable cachedFetchSubXid would make it a bit\n> clearer?\n\nSure, that can be done.\n\n> It would be nice to see some benchmarks, for both when this change is\n> enough to avoid a contention (when there's a single long-running overflowed\n> backend) and when it's not enough. That will also be useful if/when working on\n> the \"rethink_subtrans\" patch.\n\nThe patch doesn't do anything about the case of when there's a single\nlong-running overflowed backend, nor does it claim that.\n\nThe patch will speed up calls to SubTransGetTopmostTransaction(), which occur in\nsrc/backend/access/heap/heapam.c\nsrc/backend/utils/time/snapmgr.c\nsrc/backend/storage/lmgr/lmgr.c\nsrc/backend/storage/ipc/procarray.c\n\nThe patch was posted because TransactionLogFetch() has a cache, yet\nSubTransGetTopmostTransaction() does not, yet the argument should be\nidentical in both cases.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:27:40 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Mon, Mar 07, 2022 at 01:27:40PM +0000, Simon Riggs wrote:\n> > +/*\n> > + * Single-item cache for results of SubTransGetTopmostTransaction. 
It's worth having\n> > + * such a cache because we frequently find ourselves repeatedly checking the\n> > + * same XID, for example when scanning a table just after a bulk insert,\n> > + * update, or delete.\n> > + */\n> > +static TransactionId cachedFetchXid = InvalidTransactionId;\n> > +static TransactionId cachedFetchTopmostXid = InvalidTransactionId;\n> >\n> > The comment is above the 80 chars after\n> > s/TransactionLogFetch/SubTransGetTopmostTransaction/, and I don't think this\n> > comment is valid for subtrans.c.\n> \n> What aspect makes it invalid? The comment seems exactly applicable to\n> me; Andrey thinks so also.\n\nSorry, I somehow missed the \"for example\", and was thinking that\nSubTransGetTopmostTransaction was used in many other places compared to\nTransactionIdDidCommit and friends.\n\n> > It would be nice to see some benchmarks, for both when this change is\n> > enough to avoid a contention (when there's a single long-running overflowed\n> > backend) and when it's not enough. 
That will also be useful if/when working on\n> > the \"rethink_subtrans\" patch.\n> \n> The patch doesn't do anything about the case of when there's a single\n> long-running overflowed backend, nor does it claim that.\n\nI was thinking that having a cache for SubTransGetTopmostTransaction could help\nat least to some extent for that problem, sorry if that's not the case.\n\nI'm still curious on how much this simple optimization can help in some\nscenarios, even if they're somewhat artificial.\n\n> The patch was posted because TransactionLogFetch() has a cache, yet\n> SubTransGetTopmostTransaction() does not, yet the argument should be\n> identical in both cases.\n\nI totally agree with that.\n\n\n", "msg_date": "Mon, 7 Mar 2022 22:17:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Mon, Mar 07, 2022 at 10:17:41PM +0800, Julien Rouhaud wrote:\n> On Mon, Mar 07, 2022 at 01:27:40PM +0000, Simon Riggs wrote:\n>> The patch was posted because TransactionLogFetch() has a cache, yet\n>> SubTransGetTopmostTransaction() does not, yet the argument should be\n>> identical in both cases.\n> \n> I totally agree with that.\n\nAgreed as well. That's worth doing in isolation and that will save\nsome lookups of pg_subtrans anyway while being simple. As mentioned\nupthread, this needed an indentation, and the renaming of\ncachedFetchXid to cachedFetchSubXid looks adapted. So.. 
Applied all\nthose things.\n--\nMichael", "msg_date": "Thu, 7 Apr 2022 14:36:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Thu, 7 Apr 2022 at 00:36, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 07, 2022 at 10:17:41PM +0800, Julien Rouhaud wrote:\n> > On Mon, Mar 07, 2022 at 01:27:40PM +0000, Simon Riggs wrote:\n> >> The patch was posted because TransactionLogFetch() has a cache, yet\n> >> SubTransGetTopmostTransaction() does not, yet the argument should be\n> >> identical in both cases.\n> >\n> > I totally agree with that.\n>\n> Agreed as well. That's worth doing in isolation and that will save\n> some lookups of pg_subtrans anyway while being simple. As mentioned\n> upthread, this needed an indentation, and the renaming of\n> cachedFetchXid to cachedFetchSubXid looks adapted. So.. Applied all\n> those things.\n\nThanks Michael, thanks all.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 10 Apr 2022 13:18:10 -0500", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 14:36:35 +0900, Michael Paquier wrote:\n> On Mon, Mar 07, 2022 at 10:17:41PM +0800, Julien Rouhaud wrote:\n> > On Mon, Mar 07, 2022 at 01:27:40PM +0000, Simon Riggs wrote:\n> >> The patch was posted because TransactionLogFetch() has a cache, yet\n> >> SubTransGetTopmostTransaction() does not, yet the argument should be\n> >> identical in both cases.\n> > \n> > I totally agree with that.\n> \n> Agreed as well. That's worth doing in isolation and that will save\n> some lookups of pg_subtrans anyway while being simple. As mentioned\n> upthread, this needed an indentation, and the renaming of\n> cachedFetchXid to cachedFetchSubXid looks adapted. So.. 
Applied all\n> those things.\n\nAs is, this strikes me as dangerous. At the very least this ought to be\nstructured so it can have assertions verifying that the cache contents are\ncorrect.\n\nIt's far from obvious that it is correct to me, fwiw. Potential issues:\n\n1) The result of SubTransGetTopmostTransaction() can change between subsequent\n calls. If TransactionXmin advances, the TransactionXmin cutoff can change\n the result. It might be unreachable or harmless, but it's not obvious that\n it is, and there's zero comments explaining why it is obvious.\n\n2) xid wraparound. There's nothing forcing SubTransGetTopmostTransaction() to\n be called regularly, so even if a backend isn't idle, the cache could just\n get more and more outdated until hitting wraparound\n\n\nTo me it also seems odd that we cache in SubTransGetTopmostTransaction(), but\nnot in SubTransGetParent(). I think it's at least as common to end up with\nsubtrans access via TransactionIdDidCommit(), which calls SubTransGetParent()\nrather than SubTransGetTopmostTransaction()? Why is\nSubTransGetTopmostTransaction() the correct layer for caching?\n\n\nI tried to find a benchmark result for this patch upthread, without\nsuccess. Has there been validation this helps with anything?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 May 2022 16:52:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Tue, May 24, 2022 at 04:52:50PM -0700, Andres Freund wrote:\n> As is, this strikes me as dangerous. At the very least this ought to be\n> structured so it can have assertions verifying that the cache contents are\n> correct.\n\nWell, under USE_ASSERT_CHECKING we could force a recalculation of the\nloop itself before re-checking and sending the cached result, as one\nthing.\n\n> It's far from obvious that it is correct to me, fwiw. 
Potential issues:\n> \n> 1) The result of SubTransGetTopmostTransaction() can change between subsequent\n> calls. If TransactionXmin advances, the TransactionXmin cutoff can change\n> the result. It might be unreachable or harmless, but it's not obvious that\n> it is, and there's zero comments explaining why it is obvious.\n\nI am not sure to follow on this one. A change in the TransactionXmin\ncutoff does not change the result retrieved for parentXid from the\nSLRU layer, because the xid cached refers to a parent still running.\n\n> 2) xid wraparound. There's nothing forcing SubTransGetTopmostTransaction() to\n> be called regularly, so even if a backend isn't idle, the cache could just\n> get more and more outdated until hitting wraparound\n\nHence, you mean that the non-regularity of the call makes it more\nexposed to an inconsistent result after a wraparound?\n\n> To me it also seems odd that we cache in SubTransGetTopmostTransaction(), but\n> not in SubTransGetParent(). I think it's at least as common to end up with\n> subtrans access via TransactionIdDidCommit(), which calls SubTransGetParent()\n> rather than SubTransGetTopmostTransaction()? Why is\n> SubTransGetTopmostTransaction() the correct layer for caching?\n\nHmm. I recall thinking about this exact point but left it out of the\ncaching to maintain a symmetry with the setter routine that does the\nsame and reverse operation on those SLRUs. Anyway, one reason to not\nuse SubTransGetParent() is that it may return an invalid XID which\nwe'd better not cache depending on its use (say, a serialized\ntransaction), and SubTransGetTopmostTransaction() looping around to we\nmake sure to never hit this case looks like the correct path to do\ndo. Well, we could also store nothing if an invalid parent is found,\nbut then the previous argument about the symmetry of the routines\nwould not apply. 
This would be beneficial about cases like the one at\nthe top of the thread about SLRU caches when subxids are overflowing\nwhen referring to the same XID. The ODBC driver likes a lot\nsavepoints, for example.\n\n> I tried to find a benchmark result for this patch upthread, without\n> success. Has there been validation this helps with anything?\n\nI have been studying that again, and you are right that I should have\nasked for much more here. A benchmark like what's presented upthread\nmay show some benefits with the case of the same savepoint used across\nmultiple queries, only if with a caching of SubTransGetParent(), with\nenough subxids exhausted to overflow the snapshots. It would be\nbetter to revisit that stuff, and the benefit is limited with only\nSubTransGetTopmostTransaction(). Point 2) is something I did not\nconsider, and that's a good one. For now, it looks better to revert\nthis part rather than tweak it post beta1.\n--\nMichael", "msg_date": "Thu, 26 May 2022 16:23:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Thu, May 26, 2022 at 12:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 24, 2022 at 04:52:50PM -0700, Andres Freund wrote:\n>\n> > 2) xid wraparound. There's nothing forcing SubTransGetTopmostTransaction() to\n> > be called regularly, so even if a backend isn't idle, the cache could just\n> > get more and more outdated until hitting wraparound\n>\n> Hence, you mean that the non-regularity of the call makes it more\n> exposed to an inconsistent result after a wraparound?\n>\n\nWon't in theory the similar cache in transam.c is also prone to\nsimilar behavior?\n\nAnyway, how about if we clear this cache for subtrans whenever\nTransactionXmin is advanced and cachedFetchSubXid precedes it? 
The\ncomments atop SubTransGetTopmostTransaction seem to state that we\ndon't care about the exact topmost parent when the intermediate one\nprecedes TransactionXmin. I think it should preserve the optimization\nbecause anyway for such cases there is a fast path in\nSubTransGetTopmostTransaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 May 2022 15:44:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "Hi,\n\nOn 2022-05-27 15:44:39 +0530, Amit Kapila wrote:\n> On Thu, May 26, 2022 at 12:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, May 24, 2022 at 04:52:50PM -0700, Andres Freund wrote:\n> >\n> > > 2) xid wraparound. There's nothing forcing SubTransGetTopmostTransaction() to\n> > > be called regularly, so even if a backend isn't idle, the cache could just\n> > > get more and more outdated until hitting wraparound\n> >\n> > Hence, you mean that the non-regularity of the call makes it more\n> > exposed to an inconsistent result after a wraparound?\n> >\n> \n> Won't in theory the similar cache in transam.c is also prone to\n> similar behavior?\n\nIt's not quite the same risk, because there we are likely to actually hit the\ncache regularly. Whereas quite normal workloads might not hit this cache for\ndays on end.\n\n\n> Anyway, how about if we clear this cache for subtrans whenever\n> TransactionXmin is advanced and cachedFetchSubXid precedes it? The\n> comments atop SubTransGetTopmostTransaction seem to state that we\n> don't care about the exact topmost parent when the intermediate one\n> precedes TransactionXmin. I think it should preserve the optimization\n> because anyway for such cases there is a fast path in\n> SubTransGetTopmostTransaction.\n\nThere's not even a proof this does speed up anything useful! 
There's not a\nsingle benchmark for the patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 May 2022 08:55:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, May 27, 2022 at 8:55 AM Andres Freund <andres@anarazel.de> wrote:\n> > Anyway, how about if we clear this cache for subtrans whenever\n> > TransactionXmin is advanced and cachedFetchSubXid precedes it? The\n> > comments atop SubTransGetTopmostTransaction seem to state that we\n> > don't care about the exact topmost parent when the intermediate one\n> > precedes TransactionXmin. I think it should preserve the optimization\n> > because anyway for such cases there is a fast path in\n> > SubTransGetTopmostTransaction.\n>\n> There's not even a proof this does speed up anything useful! There's not a\n> single benchmark for the patch.\n\nI find it hard to believe that there wasn't even a cursory effort at\nperformance validation before this was committed, but that's what it\nlooks like.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 May 2022 11:48:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On 2022-05-27 11:48:45 -0700, Peter Geoghegan wrote:\n> On Fri, May 27, 2022 at 8:55 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Anyway, how about if we clear this cache for subtrans whenever\n> > > TransactionXmin is advanced and cachedFetchSubXid precedes it? The\n> > > comments atop SubTransGetTopmostTransaction seem to state that we\n> > > don't care about the exact topmost parent when the intermediate one\n> > > precedes TransactionXmin. 
I think it should preserve the optimization\n> > > because anyway for such cases there is a fast path in\n> > > SubTransGetTopmostTransaction.\n> >\n> > There's not even a proof this does speed up anything useful! There's not a\n> > single benchmark for the patch.\n> \n> I find it hard to believe that there wasn't even a cursory effort at\n> performance validation before this was committed, but that's what it\n> looks like.\n\nYea. Imo this pretty clearly should be reverted. It has correctness issues,\ntesting issues and we don't know whether it does anything useful.\n\n\n", "msg_date": "Fri, 27 May 2022 11:59:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, May 27, 2022 at 11:59 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-05-27 11:48:45 -0700, Peter Geoghegan wrote:\n> > I find it hard to believe that there wasn't even a cursory effort at\n> > performance validation before this was committed, but that's what it\n> > looks like.\n>\n> Yea. Imo this pretty clearly should be reverted. It has correctness issues,\n> testing issues and we don't know whether it does anything useful.\n\nIt should definitely be reverted.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 May 2022 12:30:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" }, { "msg_contents": "On Fri, May 27, 2022 at 08:55:02AM -0700, Andres Freund wrote:\n> On 2022-05-27 15:44:39 +0530, Amit Kapila wrote:\n>> Won't in theory the similar cache in transam.c is also prone to\n>> similar behavior?\n\nTransactionIdDidCommit() and TransactionIdDidAbort() are used in much\nmore code paths for visibility purposes, contrary to the subtrans.c\nones.\n\n> It's not quite the same risk, because there we are likely to actually hit the\n> cache regularly. 
Whereas quite normal workloads might not hit this cache for\n> days on end.\n\nYeah. In short, this mostly depends on the use of savepoints and the\nnumber of XIDs issued until PGPROC_MAX_CACHED_SUBXIDS is reached, and\na single cache entry in this code path would reduce the pressure on\nthe SLRU lookups depending on the number of queries issued, for\nexample. One thing I know of that likes to abuse of savepoints and\ncould cause overflows to make this easier to hit is the ODBC driver\ncoupled with short queries in long transactions, where its internals\nenforce the use of a savepoint each time a query is issued by an\napplication (pretty much what the benchmark at the top of the thread\ndoes). In this case, even the single cache approach would not help\nmuch because I recall that we finish with one savepoint per query to\nbe able to rollback to any previous state within a given transaction\n(as the ODBC APIs allow).\n\nDoing a caching within SubTransGetParent() would be more interesting,\nfor sure, though the invalidation to clean the cache and to make that\nrobust enough may prove tricky.\n\nIt took me some time to come back to this thread. The change has now\nbeen reverted.\n--\nMichael", "msg_date": "Sat, 28 May 2022 15:21:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: suboverflowed subtransactions concurrency performance optimize" } ]
[ { "msg_contents": "Hi hackers,\n\nThere are several overloaded versions of timezone() function. One\nversion accepts timezone name and timestamptz and returns timestamp:\n\n=# show time zone;\n TimeZone\n---------------\n Europe/Moscow\n\n=# select timezone('MSK', '2021-08-30 12:34:56 MSK' :: timestamptz);\n timezone\n---------------------\n 2021-08-30 12:34:56\n\nThis function is marked as IMMUTABLE and it's possible to use it in\nfunctional indexes. I believe it's a bug. Since the function accepts\nthe name of the time zone, and the rules of time zones change, this\nfunction may return different results for the same arguments in the\nfuture. This makes it STABLE, or at least definitely not IMMUTABLE\n[1]. timezone(text, timestamp), which returns timestamptz should be\nSTABLE as well for the same reasons.\n\nThe proposed patch (v1) fixes this.\n\nOther versions of timezone() seem to be fine, except:\n\n=# \\df+ timezone\n...\n-[ RECORD 4 ]-------+---------------------------------------\nSchema | pg_catalog\nName | timezone\nResult data type | time with time zone\nArgument data types | text, time with time zone\nType | func\nVolatility | volatile\nParallel | safe\nOwner | eax\nSecurity | invoker\nAccess privileges |\nLanguage | internal\nSource code | timetz_zone\nDescription | adjust time with time zone to new zone\n...\n\n\nDoes anyone know the reason why, unlike other versions, it's marked\nVOLATILE? I attached an alternative version of the patch (v2), which\nfixes this too. None of the patches includes any regression tests. As\nI understand there is little reason to re-check the volatility stated\nin pg_proc.dat in runtime.\n\n[1]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 30 Aug 2021 17:19:54 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" 
}, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> There are several overloaded versions of timezone() function. One\n> version accepts timezone name and timestamptz and returns timestamp:\n> This function is marked as IMMUTABLE and it's possible to use it in\n> functional indexes. I believe it's a bug.\n\nThat's a deliberate choice IIRC. I agree that the behavior could\nchange after a tzdata update, but if the standard is that \"immutable\"\nmeans \"no conceivable future code or configuration change could alter\nthe results\", then there's not a lot of functions that will pass that\ntest :-(.\n\nAs a pretty relevant example, we're not going to stop marking text\ncomparison operators as immutable, even though we know all too well\nthat the OS' sort order might change underneath us. The loss of\nfunctionality and performance that would result from downgrading\nthose to stable is just not acceptable. It's better to treat them\nas immutable and accept the risk of sometimes having to rebuild\nindexes.\n\nI don't see a lot of argument for treating tzdata changes differently\nfrom OS locale changes.\n\n> Other versions of timezone() seem to be fine, except:\n> Source code | timetz_zone\n> Does anyone know the reason why, unlike other versions, it's marked\n> VOLATILE?\n\nLooking at the code, it decides whether to use DST or not based on\nthe current time ... which it gets using time(NULL). So the volatile\nmarking is correct for this implementation, because it could change\nintra-query. This seems like a pretty dumb choice though: I'd think\nit'd make more sense to use the value of now() as the referent.\nThen it could be stable, and it'd also be faster because it wouldn't\nneed its own kernel call.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:51:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" 
}, { "msg_contents": "I wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n>> [ why is timetz_zone volatile? ]\n\nAh ... after a bit of digging in the git history, I found this [1]:\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master Release: REL8_1_BR [35979e6c3] 2005-09-09 06:51:12 +0000\n\n Given its current definition that depends on time(NULL), timetz_zone\n is certainly no longer immutable, but must indeed be marked volatile.\n I wonder if it should use the value of now() (that is, transaction\n start time) so that it could be marked stable. But it's probably not\n important enough to be worth changing the code for ... indeed, I'm not\n even going to force an initdb for this catalog change, seeing that we\n just did one a few hours ago.\n\nI wasn't excited enough about it personally to change it, and I'm\nstill not --- but if you want to, send a patch.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=35979e6c3\n\n\n", "msg_date": "Mon, 30 Aug 2021 11:07:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "Hi Tom,\n\nThanks for the quick reply.\n\n> I don't see a lot of argument for treating tzdata changes differently\n> from OS locale changes.\n\nGot it. But in this case, what's your opinion on the differences between\ndate_trunc() and timezone()? Shouldn't date_trunc() be always IMMUTABLE as\nwell?\n\nI can see pros and cons to be IMMUTABLE _or_ STABLE when dealing with time\nzones, but at least PostgreSQL should be consistent in this, right?\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Tom,Thanks for the quick reply.> I don't see a lot of argument for treating tzdata changes differently> from OS locale changes.Got it. But in this case, what's your opinion on the differences between date_trunc() and timezone()? 
Shouldn't date_trunc() be always IMMUTABLE as well?I can see pros and cons to be IMMUTABLE _or_ STABLE when dealing with time zones, but at least PostgreSQL should be consistent in this, right?-- Best regards,Aleksander Alekseev", "msg_date": "Mon, 30 Aug 2021 19:09:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Got it. But in this case, what's your opinion on the differences between\n> date_trunc() and timezone()? Shouldn't date_trunc() be always IMMUTABLE as\n> well?\n\nNo, because date_trunc depends on the current timezone setting,\nor at least its stable variants do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 12:58:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "On Mon, Aug 30, 2021 at 12:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n> > Got it. But in this case, what's your opinion on the differences between\n> > date_trunc() and timezone()? Shouldn't date_trunc() be always IMMUTABLE\nas\n> > well?\n>\n> No, because date_trunc depends on the current timezone setting,\n> or at least its stable variants do.\n\nA light bulb went off in my head just now, because I modeled date_bin() in\npart on date_trunc(), but apparently it didn't get the memo that the\nvariant with timezone should have been marked stable.\n\nI believe it's been discussed before that it'd be safer if pg_proc.dat had\nthe same defaults as CREATE FUNCTION, and this is further evidence for that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 30, 2021 at 12:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> Aleksander Alekseev <aleksander@timescale.com> writes:> > Got it. 
But in this case, what's your opinion on the differences between> > date_trunc() and timezone()? Shouldn't date_trunc() be always IMMUTABLE as> > well?>> No, because date_trunc depends on the current timezone setting,> or at least its stable variants do.A light bulb went off in my head just now, because I modeled date_bin() in part on date_trunc(), but apparently it didn't get the memo that the variant with timezone should have been marked stable.I believe it's been discussed before that it'd be safer if pg_proc.dat had the same defaults as CREATE FUNCTION, and this is further evidence for that.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Aug 2021 13:34:06 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> I believe it's been discussed before that it'd be safer if pg_proc.dat had\n> the same defaults as CREATE FUNCTION, and this is further evidence for that.\n\nYeah, maybe so. It'd make the .dat file quite a bit bigger, but maybe\nless mistake-prone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Aug 2021 13:40:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "Hi Tom,\n\n> No, because date_trunc depends on the current timezone setting,\n> or at least its stable variants do.\n\nOnce again, many thanks for your answers!\n\n> I wasn't excited enough about it personally to change it, and I'm\n> still not --- but if you want to, send a patch.\n\nHere is the patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 1 Sep 2021 12:19:47 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" 
}, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>>> [ timetz_zone is VOLATILE ]\n>> I wasn't excited enough about it personally to change it, and I'm\n>> still not --- but if you want to, send a patch.\n\n> Here is the patch.\n\nI looked at this patch, and felt unhappy about the fact that it left\ntimetz_zone() still depending on pg_time_t and pg_localtime, which\nnothing else in date.c does. Poking at it closer, I realized that\nthe DYNTZ code path is actually completely broken, and has been\nfor years. Observe:\n\nregression=# select timezone('America/Santiago', '12:34 -02'::timetz);\n timezone \n-------------\n 11:34:00-03\n(1 row)\n\nThat's fine. But CLT, which should be entirely equivalent\nto America/Santiago, produces seeming garbage:\n\nregression=# select timezone('CLT', '12:34 -02'::timetz);\n timezone \n-------------------\n 09:51:14-04:42:46\n(1 row)\n\n<digression>\nWhat's happening there is that pg_localtime produces a struct tm\ncontaining POSIX-style values, in particular tm_year is relative\nto 1900. But DetermineTimeZoneAbbrevOffset expects a struct using\nthe PG convention that tm_year is relative to \"AD 0\". So it sees\na date in the second century AD, decides that that's way out of\nrange, and falls back to the \"LMT\" offset provided by the tzdb\ndatabase. That lines up with what you'd get from\n\nregression=# set timezone = 'America/Santiago';\nSET\nregression=# select '0121-09-03 12:34'::timestamptz;\n timestamptz \n------------------------------\n 0121-09-03 12:34:00-04:42:46\n(1 row)\n\n</digression>\n\nBasically the problem here is that this is incredibly hoary code\nthat's never been touched or tested as we revised datetime-related\nAPIs elsewhere. I'm fairly unhappy now that we don't have any\nregression test coverage for this function. However, I see no\nvery good way to make that happen, because the interesting code\npaths will (by definition) produce different results at different\ntimes of year. 
I suppose we could carry two variant expected-files,\nbut ick. The DYNTZ path is particularly problematic here, because\nthat's only used for timezones that have changed definitions over\ntime, meaning they're particularly likely to change again.\n\nAnyway, attached is a revised patch that gets rid of the antique\ncode, and it produces correct results AFAICT.\n\nBTW, it's customary to *not* include catversion bumps in submitted\npatches, because that accomplishes little except to ensure that\nyour patch will soon fail to apply. (This one already is failing.)\nIf you feel a need to remind the committer that a catversion bump\nis needed, just comment to that effect in the submission email.\n\nI'm not entirely sure what to do about the discovery that the\nDYNTZ path has pre-existing breakage. Perhaps it'd be sensible\nto back-patch this patch, minus the catalog change. I doubt that\nanyone would have a problem with the nominal change of behavior\nnear DST boundaries. Or we could just ignore the bug in the back\nbranches, since it's fairly clear that basically no one uses this\nfunction.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 05 Sep 2021 14:57:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "> BTW, it's customary to *not* include catversion bumps in submitted\n> patches\n\nThanks, Tom.\n\n> Anyway, attached is a revised patch that gets rid of the antique\n> code, and it produces correct results AFAICT.\n\nI tested your patch against the current master branch 78aa616b on\nMacOS Catalina. I have nothing to add to the patch.\n\n> I'm fairly unhappy now that we don't have any\n> regression test coverage for this function.\n\nYep, that's unfortunate. I see several tests for `AT TIME ZONE`\nsyntax, which is a syntax sugar to timezone() with timestamp[tz]\narguments. 
But considering how `timetz` type is broken in the first\nplace [1], I'm not surprised few people feel motivated to do anything\nrelated to it. Do you think there is a possibility that one day we may\nbe brave enough to get rid of this type?\n\n\n[1]: https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_timetz\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 6 Sep 2021 13:10:01 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> Anyway, attached is a revised patch that gets rid of the antique\n>> code, and it produces correct results AFAICT.\n\n> I tested your patch against the current master branch 78aa616b on\n> MacOS Catalina. I have nothing to add to the patch.\n\nThanks. Pushed, along with a quick-and-dirty patch to resolve the\nDYNTZ problem in the back branches.\n\n>> I'm fairly unhappy now that we don't have any\n>> regression test coverage for this function.\n\n> Yep, that's unfortunate. I see several tests for `AT TIME ZONE`\n> syntax, which is a syntax sugar to timezone() with timestamp[tz]\n> arguments. But considering how `timetz` type is broken in the first\n> place [1], I'm not surprised few people feel motivated to do anything\n> related to it. Do you think there is a possibility that one day we may\n> be brave enough to get rid of this type?\n\nI'm afraid not, seeing that it's required by the SQL standard.\n\nI thought about adding tests based on the CLT example I showed upthread,\nand just accepting the need for two variant result files. Maybe we\nshould do that. However, it still wouldn't be a great test, because\nit would not prove that the DST switchover happens at the right time of\nyear, or indeed at all. 
So for the moment I didn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Sep 2021 11:49:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: shouldn't timezone(text, timestamp[tz]) be STABLE?" } ]
[ { "msg_contents": "Hi,\n\nOriginally I posted it on -general, but Joe Conway suggested I repost in\nhere for greater visibility...\n\nWe hit a problem with Pg 12.6 (I know, we should upgrade, but that will\ntake long time to prepare).\n\nAnyway - it's 12.6 on aarm64.\nCouple of days there was replication slot started, and now it seems to\nbe stuck.\n\n=# select * from pg_stat_activity where pid = 22697 \\gx\n─[ RECORD 1 ]────┬──────────────────────────────────────────────────────────\ndatid │ 16591\ndatname │ canvas\npid │ 22697\nusesysid │ 16505\nusename │ <CENSORED>\napplication_name │ PostgreSQL JDBC Driver\nclient_addr │ <CENSORED>\nclient_hostname │ [null]\nclient_port │ 43160\nbackend_start │ 2021-08-18 02:12:05.758297+00\nxact_start │ [null]\nquery_start │ 2021-08-18 02:12:05.772271+00\nstate_change │ 2021-08-18 02:12:05.773428+00\nwait_event_type │ [null]\nwait_event │ [null]\nstate │ active\nbackend_xid │ [null]\nbackend_xmin │ [null]\nquery │ SELECT COUNT(1) FROM pg_publication WHERE pubname = 'cdc'\nbackend_type │ walsender\n\n=# select pg_current_wal_lsn(), pg_size_pretty( pg_current_wal_lsn() - sent_lsn), * from pg_stat_replication where pid = 22697 \\gx\n─[ RECORD 1 ]──────┬──────────────────────────────\npg_current_wal_lsn │ 1B14/718EA0B8\npg_size_pretty │ 290 GB\npid │ 22697\nusesysid │ 16505\nusename │ <CENSORED>\napplication_name │ PostgreSQL JDBC Driver\nclient_addr │ <CENSORED>\nclient_hostname │ [null]\nclient_port │ 43160\nbackend_start │ 2021-08-18 02:12:05.758297+00\nbackend_xmin │ [null]\nstate │ streaming\nsent_lsn │ 1ACC/D8689A8\nwrite_lsn │ 1ACC/D527BD8\nflush_lsn │ 1ACC/C97DF48\nreplay_lsn │ 1ACC/C97DF48\nwrite_lag │ 00:00:00.257041\nflush_lag │ 00:00:01.26665\nreplay_lag │ 00:00:01.26665\nsync_priority │ 0\nsync_state │ async\nreply_time │ 1999-12-21 03:15:13.449225+00\n\ntop shows the process using 100% of cpu. 
I tried strace'ing, but strace doesn't\nshow *anything* - it just sits there.\n\nGot backtrace:\n\n~# gdb --pid=22697 --batch -ex backtrace\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/aarch64-linux-gnu/libthread_db.so.1\".\nhash_seq_search (status=status@entry=0xffffdd90f380) at ./build/../src/backend/utils/hash/dynahash.c:1448\n1448 ./build/../src/backend/utils/hash/dynahash.c: No such file or directory.\n#0 hash_seq_search (status=status@entry=0xffffdd90f380) at ./build/../src/backend/utils/hash/dynahash.c:1448\n#1 0x0000aaaac3042060 in RelfilenodeMapInvalidateCallback (arg=<optimized out>, relid=105496194) at ./build/../src/backend/utils/cache/relfilenodemap.c:64\n#2 0x0000aaaac3033aa4 in LocalExecuteInvalidationMessage (msg=0xffff9b66eec8) at ./build/../src/backend/utils/cache/inval.c:595\n#3 0x0000aaaac2ec8274 in ReorderBufferExecuteInvalidations (rb=0xaaaac326bb00 <errordata>, txn=0xaaaac326b998 <formatted_start_time>, txn=0xaaaac326b998 <formatted_start_time>) at ./build/../src/backend/replication/logical/reorderbuffer.c:2149\n#4 ReorderBufferCommit (rb=0xaaaac326bb00 <errordata>, xid=xid@entry=2668396569, commit_lsn=187650393290540, end_lsn=<optimized out>, commit_time=commit_time@entry=683222349268077, origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at ./build/../src/backend/replication/logical/reorderbuffer.c:1770\n#5 0x0000aaaac2ebd314 in DecodeCommit (xid=2668396569, parsed=0xffffdd90f7e0, buf=0xffffdd90f960, ctx=0xaaaaf5d396a0) at ./build/../src/backend/replication/logical/decode.c:640\n#6 DecodeXactOp (ctx=ctx@entry=0xaaaaf5d396a0, buf=0xffffdd90f960, buf@entry=0xffffdd90f9c0) at ./build/../src/backend/replication/logical/decode.c:248\n#7 0x0000aaaac2ebd42c in LogicalDecodingProcessRecord (ctx=0xaaaaf5d396a0, record=0xaaaaf5d39938) at ./build/../src/backend/replication/logical/decode.c:117\n#8 0x0000aaaac2ecfdfc in XLogSendLogical () at ./build/../src/backend/replication/walsender.c:2840\n#9 
0x0000aaaac2ed2228 in WalSndLoop (send_data=send_data@entry=0xaaaac2ecfd98 <XLogSendLogical>) at ./build/../src/backend/replication/walsender.c:2189\n#10 0x0000aaaac2ed2efc in StartLogicalReplication (cmd=0xaaaaf5d175a8) at ./build/../src/backend/replication/walsender.c:1133\n#11 exec_replication_command (cmd_string=cmd_string@entry=0xaaaaf5c0eb00 \"START_REPLICATION SLOT cdc LOGICAL 1A2D/4B3640 (\\\"proto_version\\\" '1', \\\"publication_names\\\" 'cdc')\") at ./build/../src/backend/replication/walsender.c:1549\n#12 0x0000aaaac2f258a4 in PostgresMain (argc=<optimized out>, argv=argv@entry=0xaaaaf5c78cd8, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4257\n#13 0x0000aaaac2eac338 in BackendRun (port=0xaaaaf5c68070, port=0xaaaaf5c68070) at ./build/../src/backend/postmaster/postmaster.c:4484\n#14 BackendStartup (port=0xaaaaf5c68070) at ./build/../src/backend/postmaster/postmaster.c:4167\n#15 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1725\n#16 0x0000aaaac2ead364 in PostmasterMain (argc=<optimized out>, argv=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:1398\n#17 0x0000aaaac2c3ca5c in main (argc=5, argv=0xaaaaf5c07720) at ./build/../src/backend/main/main.c:228\n\nThe thing is - I can't close it with pg_terminate_backend(), and I'd\nrather not kill -9, as it will, I think, close all other connections,\nand this is prod server.\n\nThe other end of the connection was something in kubernetes, and it no\nlonger exists.\n\nIs there anything we could do about it?\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 17:18:30 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Mon, 2021-08-30 at 17:18 +0200, hubert depesz lubaczewski wrote:\n> The thing is - I can't close it with pg_terminate_backend(), and I'd\n> rather not kill -9, as it will, I think, close 
all other connections,\n> and this is prod server.\n\nOf course the cause should be fixed, but to serve your immediate need:\n\nAfter calling pg_terminate_backend(), you can attach gdb to the backend and then run\n\n print ProcessInterrupts()\n\nThat will cause the backend to exit normally without crashing the server.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 30 Aug 2021 21:09:20 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Mon, Aug 30, 2021 at 09:09:20PM +0200, Laurenz Albe wrote:\n> On Mon, 2021-08-30 at 17:18 +0200, hubert depesz lubaczewski wrote:\n> > The thing is - I can't close it with pg_terminate_backend(), and I'd\n> > rather not kill -9, as it will, I think, close all other connections,\n> > and this is prod server.\n> \n> Of course the cause should be fixed, but to serve your immediate need:\n\nYou might save a coredump of the process using gdb gcore before killing it, in\ncase someone thinks how to debug it next month.\n\nDepending on your OS, you might have to do something special to get shared\nbuffers included in the dump (or excluded, if that's what's desirable).\n\nI wonder how far up the stacktrace it's stuck ?\nYou could set a breakpoint on LogicalDecodingProcessRecord and then \"c\"ontinue,\nand see if it hits the breakpoint in a few seconds. 
If not, try the next\nframe until you know which one is being called repeatedly.\n\nMaybe CheckForInterrupts should be added somewhere...\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 30 Aug 2021 14:34:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On 8/30/21 3:34 PM, Justin Pryzby wrote:\n> On Mon, Aug 30, 2021 at 09:09:20PM +0200, Laurenz Albe wrote:\n>> On Mon, 2021-08-30 at 17:18 +0200, hubert depesz lubaczewski wrote:\n>> > The thing is - I can't close it with pg_terminate_backend(), and I'd\n>> > rather not kill -9, as it will, I think, close all other connections,\n>> > and this is prod server.\n>> \n>> Of course the cause should be fixed, but to serve your immediate need:\n> \n> You might save a coredump of the process using gdb gcore before killing it, in\n> case someone thinks how to debug it next month.\n> \n> Depending on your OS, you might have to do something special to get shared\n> buffers included in the dump (or excluded, if that's what's desirable).\n> \n> I wonder how far up the stacktrace it's stuck ?\n> You could set a breakpoint on LogicalDecodingProcessRecord and then \"c\"ontinue,\n> and see if it hits the breakpoint in a few seconds. 
If not, try the next\n> frame until you know which one is being called repeatedly.\n> \n> Maybe CheckForInterrupts should be added somewhere...\n\nThe spot in the backtrace...\n\n#0 hash_seq_search (status=status@entry=0xffffdd90f380) at \n./build/../src/backend/utils/hash/dynahash.c:1448\n\n...is in the middle of this while loop:\n8<-----------------------------------------\n while ((curElem = segp[segment_ndx]) == NULL)\n {\n /* empty bucket, advance to next */\n if (++curBucket > max_bucket)\n {\n status->curBucket = curBucket;\n hash_seq_term(status);\n return NULL; /* search is done */\n }\n if (++segment_ndx >= ssize)\n {\n segment_num++;\n segment_ndx = 0;\n segp = hashp->dir[segment_num];\n }\n }\n8<-----------------------------------------\n\nIt would be interesting to step through a few times to see if it is \nreally stuck in that loop. That would be consistent with 100% CPU and \nnot checking for interrupts I think.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Mon, 30 Aug 2021 20:15:24 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> It would be interesting to step through a few times to see if it is \n> really stuck in that loop.\n\nYeah, this single data point is not enough justification to blame\ndynahash.c (which is *extremely* battle-tested code, you'll recall).\nI'm inclined to guess that the looping is happening a few stack levels\nfurther up, in the logical-decoding code (which is, um, not so much).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Aug 2021 20:22:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On 8/30/21 8:22 PM, Tom Lane wrote:\n> Joe Conway 
<mail@joeconway.com> writes:\n>> It would be interesting to step through a few times to see if it is \n>> really stuck in that loop.\n> \n> Yeah, this single data point is not enough justification to blame\n> dynahash.c (which is *extremely* battle-tested code, you'll recall).\n> I'm inclined to guess that the looping is happening a few stack levels\n> further up, in the logical-decoding code (which is, um, not so much).\n\n\nHeh, fair point :-)\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Mon, 30 Aug 2021 21:16:51 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Mon, Aug 30, 2021 at 09:16:51PM -0400, Joe Conway wrote:\n> On 8/30/21 8:22 PM, Tom Lane wrote:\n>> Yeah, this single data point is not enough justification to blame\n>> dynahash.c (which is *extremely* battle-tested code, you'll recall).\n>> I'm inclined to guess that the looping is happening a few stack levels\n>> further up, in the logical-decoding code (which is, um, not so much).\n> \n> Heh, fair point :-)\n\nIt looks like something is messed up with the list of invalidation messages\nto process in this code path. 
Maybe some incorrect memory context\nhandling?\n--\nMichael", "msg_date": "Tue, 31 Aug 2021 10:25:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Mon, Aug 30, 2021 at 09:09:20PM +0200, Laurenz Albe wrote:\n> On Mon, 2021-08-30 at 17:18 +0200, hubert depesz lubaczewski wrote:\n> > The thing is - I can't close it with pg_terminate_backend(), and I'd\n> > rather not kill -9, as it will, I think, close all other connections,\n> > and this is prod server.\n> \n> Of course the cause should be fixed, but to serve your immediate need:\n> \n> After calling pg_terminate_backend(), you can attach gdb to the backend and then run\n> \n> print ProcessInterrupts()\n> \n> That will cause the backend to exit normally without crashing the server.\n\nI got this mail too late, and the decision was made to restart Pg. After\nrestart all cleaned up nicely.\n\nSo, while I can't help more with diagnosing the problem, I think it\nmight be good to try to figure out what could have happened.\n\nOn my end I gathered some more data:\n1. the logical replication app is debezium\n2. as far as I can tell it was patched against\n https://issues.redhat.com/browse/DBZ-1596\n3. app was gone (kubernetes cluster was shut down) in the mean time.\n4. the backend was up and running for 12 days, in the tight loop.\n\ndepesz\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:11:11 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Mon, Aug 30, 2021 at 08:15:24PM -0400, Joe Conway wrote:\n> It would be interesting to step through a few times to see if it is really\n> stuck in that loop. 
That would be consistent with 100% CPU and not checking\n> for interrupts I think.\n\nIf the problem will happen again, will do my best to get more\ninformation.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:12:21 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Tue, Aug 31, 2021 at 11:41 AM hubert depesz lubaczewski\n<depesz@depesz.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 09:09:20PM +0200, Laurenz Albe wrote:\n> > On Mon, 2021-08-30 at 17:18 +0200, hubert depesz lubaczewski wrote:\n> > > The thing is - I can't close it with pg_terminate_backend(), and I'd\n> > > rather not kill -9, as it will, I think, close all other connections,\n> > > and this is prod server.\n> >\n> > Of course the cause should be fixed, but to serve your immediate need:\n> >\n> > After calling pg_terminate_backend(), you can attach gdb to the backend and then run\n> >\n> > print ProcessInterrupts()\n> >\n> > That will cause the backend to exit normally without crashing the server.\n>\n> I got this mail too late, and the decision was made to restart Pg. After\n> restart all cleaned up nicely.\n>\n> So, while I can't help more with diagnosing the problem, I think it\n> might be good to try to figure out what could have happened.\n>\n\nOne possibility could be there are quite a few DDLs happening in this\napplication at some particular point in time which can lead to high\nCPU usage. Prior to commit d7eb52d718 in PG-14, we use to execute all\ninvalidations at each command end during logical decoding which might\nlead to such behavior temporarily. 
I think a bit of debugging when it\nshows this symptom could help us to identify if it is the problem I am\nspeculating here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Aug 2021 16:00:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" }, { "msg_contents": "On Tue, Aug 31, 2021 at 04:00:14PM +0530, Amit Kapila wrote:\n> One possibility could be there are quite a few DDLs happening in this\n> application at some particular point in time which can lead to high\n\nWhile not impossible, I'd rather say it's not very likely. We don't use\ntemporary tables, and while there are DB migrations, I don't think they\nare often, and definitely don't take many days.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 13:40:18 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": true, "msg_subject": "Re: Pg stuck at 100% cpu, for multiple days" } ]
[ { "msg_contents": "Hi,\n\nRelation invalidation was missing in case of create publication and\ndrop publication of \"FOR ALL TABLES\" publication, added so that the\npublication information can be rebuilt. Without these invalidation\nupdate/delete operations on the relation will be successful in the\npublisher which will later result in conflict in the subscriber.\nThanks to Amit for identifying the issue at [1]. Attached patch has\nthe fix for the same.\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LtTXMqu-UbcByjHw%2BaKP38t4%2Br7kyKnmBQMA-__9U52A%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Mon, 30 Aug 2021 22:39:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Added missing invalidations for all tables publication" }, { "msg_contents": "From Tuesday, August 31, 2021 1:10 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> Hi,\r\n> \r\n> Relation invalidation was missing in case of create publication and drop\r\n> publication of \"FOR ALL TABLES\" publication, added so that the publication\r\n> information can be rebuilt. Without these invalidation update/delete\r\n> operations on the relation will be successful in the publisher which will later\r\n> result in conflict in the subscriber.\r\n> Thanks to Amit for identifying the issue at [1]. 
Attached patch has the fix for the\r\n> same.\r\n> Thoughts?\r\n\r\nI have one comment about the testcase in the patch.\r\n\r\n+-- Test cache invalidation FOR ALL TABLES publication\r\n+SET client_min_messages = 'ERROR';\r\n+CREATE TABLE testpub_tbl4(a int);\r\n+CREATE PUBLICATION testpub_foralltables FOR ALL TABLES;\r\n+RESET client_min_messages;\r\n+-- fail missing REPLICA IDENTITY\r\n+UPDATE testpub_tbl4 set a = 2;\r\n+ERROR: cannot update table \"testpub_tbl4\" because it does not have a replica identity and publishes updates\r\n+HINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\r\n+DROP PUBLICATION testpub_foralltables;\r\n\r\nThe above testcases can pass without the code change in the patch, is it better\r\nto add a testcase which can show different results after applying the patch ?\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n", "msg_date": "Tue, 31 Aug 2021 02:10:53 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Added missing invalidations for all tables publication" }, { "msg_contents": "On Tue, Aug 31, 2021 at 7:40 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Tuesday, August 31, 2021 1:10 AM vignesh C <vignesh21@gmail.com> wrote:\n> > Hi,\n> >\n> > Relation invalidation was missing in case of create publication and drop\n> > publication of \"FOR ALL TABLES\" publication, added so that the publication\n> > information can be rebuilt. Without these invalidation update/delete\n> > operations on the relation will be successful in the publisher which will later\n> > result in conflict in the subscriber.\n> > Thanks to Amit for identifying the issue at [1]. 
Attached patch has the fix for the\n> > same.\n> > Thoughts?\n>\n> I have one comment about the testcase in the patch.\n>\n> +-- Test cache invalidation FOR ALL TABLES publication\n> +SET client_min_messages = 'ERROR';\n> +CREATE TABLE testpub_tbl4(a int);\n> +CREATE PUBLICATION testpub_foralltables FOR ALL TABLES;\n> +RESET client_min_messages;\n> +-- fail missing REPLICA IDENTITY\n> +UPDATE testpub_tbl4 set a = 2;\n> +ERROR: cannot update table \"testpub_tbl4\" because it does not have a replica identity and publishes updates\n> +HINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n> +DROP PUBLICATION testpub_foralltables;\n>\n> The above testcases can pass without the code change in the patch, is it better\n> to add a testcase which can show different results after applying the patch ?\n\nThanks for the comment, I have slightly modified the test case which\nwill fail without the patch. Attached v2 patch which has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 31 Aug 2021 08:31:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added missing invalidations for all tables publication" }, { "msg_contents": "At Tue, 31 Aug 2021 08:31:05 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> On Tue, Aug 31, 2021 at 7:40 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> Thanks for the comment, I have slightly modified the test case which\n> will fail without the patch. Attached v2 patch which has the changes\n> for the same.\n\nThe test works fine. The code looks fine for me except one minor\ncosmetic flaw.\n\n+\tif (!HeapTupleIsValid(tup))\n+\t\telog(ERROR, \"cache lookup failed for publication %u\",\n+\t\t\t pubid);\n\nThe last two lines don't need to be separated. 
(Almost) all other\ninstances of the same error are written that way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 31 Aug 2021 17:30:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing invalidations for all tables publication" }, { "msg_contents": "On Tue, Aug 31, 2021 at 2:00 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 31 Aug 2021 08:31:05 +0530, vignesh C <vignesh21@gmail.com> wrote in\n> > On Tue, Aug 31, 2021 at 7:40 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > Thanks for the comment, I have slightly modified the test case which\n> > will fail without the patch. Attached v2 patch which has the changes\n> > for the same.\n>\n> The test works fine. The code looks fine for me except one minor\n> cosmetic flaw.\n>\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"cache lookup failed for publication %u\",\n> + pubid);\n>\n> The last two lines don't need to be separated. (Almost) all other\n> instances of the same error are written that way.\n>\n\nThanks for the comments, the attached v3 patch has the changes for the same.\n\nRegards,\nVignesh
What do you people think about\nback-patching?\n\nAttached, please find a slightly updated patch with minor changes. I\nhave slightly changed the test to make it more meaningful. If we\ndecide to back-patch this, can you please test this on back-branches?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 6 Sep 2021 11:26:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing invalidations for all tables publication" }, { "msg_contents": "\r\nFrom Mon, Sep 6, 2021 1:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Aug 31, 2021 at 8:54 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > Thanks for the comments, the attached v3 patch has the changes for the\r\n> > same.\r\n> >\r\n> \r\n> I think this bug should be fixed in back branches (till v10). OTOH, as this is not\r\n> reported by any user and we have found it during code review so it seems\r\n> either users don't have an exact use case or they haven't noticed this yet. What\r\n> do you people think about back-patching?\r\n\r\nPersonally, I think it's ok to back-patch.\r\n\r\n> Attached, please find a slightly updated patch with minor changes. I have\r\n> slightly changed the test to make it more meaningful.\r\n\r\nThe patch looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 09:14:31 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Added missing invalidations for all tables publication" }, { "msg_contents": "> From Mon, Sep 6, 2021 1:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > On Tue, Aug 31, 2021 at 8:54 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > > Thanks for the comments, the attached v3 patch has the changes for\r\n> > > the same.\r\n> > >\r\n> >\r\n> > I think this bug should be fixed in back branches (till v10). 
OTOH, as\r\n> > this is not reported by any user and we have found it during code\r\n> > review so it seems either users don't have an exact use case or they\r\n> > haven't noticed this yet. What do you people think about back-patching?\r\n> \r\n> Personally, I think it's ok to back-patch.\r\n\r\nI found that the patch cannot be applied to back-branches(v10-v14) cleanly,\r\nso, I generate the patches for back-branches. Attached, all the patches have\r\npassed regression test.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 8 Sep 2021 02:27:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Added missing invalidations for all tables publication" }, { "msg_contents": "On Wed, Sep 8, 2021 at 7:57 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > From Mon, Sep 6, 2021 1:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Tue, Aug 31, 2021 at 8:54 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > Thanks for the comments, the attached v3 patch has the changes for\n> > > > the same.\n> > > >\n> > >\n> > > I think this bug should be fixed in back branches (till v10). OTOH, as\n> > > this is not reported by any user and we have found it during code\n> > > review so it seems either users don't have an exact use case or they\n> > > haven't noticed this yet. What do you people think about back-patching?\n> >\n> > Personally, I think it's ok to back-patch.\n>\n> I found that the patch cannot be applied to back-branches(v10-v14) cleanly,\n> so, I generate the patches for back-branches. 
Attached, all the patches have\n> passed regression test.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Sep 2021 13:57:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing invalidations for all tables publication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Sep 8, 2021 at 7:57 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>> I found that the patch cannot be applied to back-branches(v10-v14) cleanly,\n>> so, I generate the patches for back-branches. Attached, all the patches have\n>> passed regression test.\n\n> Pushed!\n\nShouldn't the CF entry for this be closed? [1]\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/34/3311/\n\n\n", "msg_date": "Sat, 11 Sep 2021 14:28:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Added missing invalidations for all tables publication" }, { "msg_contents": "On Sat, Sep 11, 2021 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Sep 8, 2021 at 7:57 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> >> I found that the patch cannot be applied to back-branches(v10-v14) cleanly,\n> >> so, I generate the patches for back-branches. Attached, all the patches have\n> >> passed regression test.\n>\n> > Pushed!\n>\n> Shouldn't the CF entry for this be closed? [1]\n>\n\nYes, and I have done that now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 13 Sep 2021 07:45:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added missing invalidations for all tables publication" } ]
[ { "msg_contents": "Hi. I noticed some code that seems the same as the nearby function\nunpack_sql_state, and I wondered why it is not just calling it?\n\nFor example,\n\n\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex a3e1c59..d91ed98 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -3313,7 +3313,7 @@ send_message_to_frontend(ErrorData *edata)\n const char *sev;\n char tbuf[12];\n int ssval;\n- int i;\n+ char *ssbuf;\n\n /* 'N' (Notice) is for nonfatal conditions, 'E' is for errors */\n pq_beginmessage(&msgbuf, (edata->elevel < ERROR) ? 'N' : 'E');\n@@ -3326,15 +3326,10 @@ send_message_to_frontend(ErrorData *edata)\n\n /* unpack MAKE_SQLSTATE code */\n ssval = edata->sqlerrcode;\n- for (i = 0; i < 5; i++)\n- {\n- tbuf[i] = PGUNSIXBIT(ssval);\n- ssval >>= 6;\n- }\n- tbuf[i] = '\\0';\n+ ssbuf = unpack_sql_state(ssval);\n\n pq_sendbyte(&msgbuf, PG_DIAG_SQLSTATE);\n- err_sendstring(&msgbuf, tbuf);\n+ err_sendstring(&msgbuf, ssbuf);\n\n /* M field is required per protocol, so always send something */\n pq_sendbyte(&msgbuf, PG_DIAG_MESSAGE_PRIMARY);\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 31 Aug 2021 09:32:41 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "unpack_sql_state not called?" }, { "msg_contents": "On Tue, Aug 31, 2021 at 09:32:41AM +1000, Peter Smith wrote:\n> Hi. I noticed some code that seems the same as the nearby function\n> unpack_sql_state, and I wondered why it is not just calling it?\n\nThis looks like a piece that could have been done in d46bc44, and\nwould not matter performance-wise. No objections from here to do\nthis simplification.\n--\nMichael", "msg_date": "Tue, 31 Aug 2021 10:31:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: unpack_sql_state not called?" 
}, { "msg_contents": "On Tue, Aug 31, 2021 at 11:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Aug 31, 2021 at 09:32:41AM +1000, Peter Smith wrote:\n> > Hi. I noticed some code that seems the same as the nearby function\n> > unpack_sql_state, and I wondered why it is not just calling it?\n>\n> This looks like a piece that could have been done in d46bc44, and\n> would not matter performance-wise. No objections from here to do\n> this simplification.\n\nThanks. Do you want me to re-post it as a patch attachment?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:01:02 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: unpack_sql_state not called?" }, { "msg_contents": "On Tue, Aug 31, 2021 at 02:01:02PM +1000, Peter Smith wrote:\n> Do you want me to re-post it as a patch attachment?\n\nNo need. Thanks.\n--\nMichael", "msg_date": "Tue, 31 Aug 2021 13:06:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: unpack_sql_state not called?" }, { "msg_contents": "Thanks for pushing!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Sep 2021 08:49:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: unpack_sql_state not called?" } ]
[ { "msg_contents": "Hi all,\n\nI found another pass where we report stats after the stats collector\nshutdown. The reproducer and the backtrace I got are here:\n\n1. psql -c \"begin; create table a (a int); select pg_sleep(30); commit;\" &\n2. pg_recvlogical --create-slot -S slot -d postgres &\n3. stop the server\n\nTRAP: FailedAssertion(\"pgstat_is_initialized && !pgstat_is_shutdown\",\nFile: \"pgstat.c\", Line: 4752, PID: 62789)\n0 postgres 0x000000010a8ed79a\nExceptionalCondition + 234\n1 postgres 0x000000010a5e03d2\npgstat_assert_is_up + 66\n2 postgres 0x000000010a5e1dc4 pgstat_send + 20\n3 postgres 0x000000010a5e1d5c\npgstat_report_replslot_drop + 108\n4 postgres 0x000000010a64c796\nReplicationSlotDropPtr + 838\n5 postgres 0x000000010a64c0e9\nReplicationSlotDropAcquired + 89\n6 postgres 0x000000010a64bf23\nReplicationSlotRelease + 99\n7 postgres 0x000000010a6d60ab ProcKill + 219\n8 postgres 0x000000010a6a350c shmem_exit + 444\n9 postgres 0x000000010a6a326a\nproc_exit_prepare + 122\n10 postgres 0x000000010a6a3163 proc_exit + 19\n11 postgres 0x000000010a8ee665 errfinish + 1109\n12 postgres 0x000000010a6e3535\nProcessInterrupts + 1445\n13 postgres 0x000000010a65f654\nWalSndWaitForWal + 164\n14 postgres 0x000000010a65edb2\nlogical_read_xlog_page + 146\n15 postgres 0x000000010a22c336\nReadPageInternal + 518\n16 postgres 0x000000010a22b860 XLogReadRecord + 320\n17 postgres 0x000000010a619c67\nDecodingContextFindStartpoint + 231\n18 postgres 0x000000010a65c105\nCreateReplicationSlot + 1237\n19 postgres 0x000000010a65b64c\nexec_replication_command + 1180\n20 postgres 0x000000010a6e6d2b PostgresMain + 2459\n21 postgres 0x000000010a5ef1a9 BackendRun + 89\n22 postgres 0x000000010a5ee6fd BackendStartup + 557\n23 postgres 0x000000010a5ed487 ServerLoop + 759\n24 postgres 0x000000010a5eac22 PostmasterMain + 6610\n25 postgres 0x000000010a4c32d3 main + 819\n26 libdyld.dylib 0x00007fff73477cc9 start + 1\n\nAt step #2, wal sender waits for another transaction started at 
step\n#1 to complete after creating the replication slot. When the server is\nstopping, wal sender process drops the slot on releasing the slot\nsince it's still RS_EPHEMERAL. Then, after dropping the slot we report\nthe message for dropping the slot (see ReplicationSlotDropPtr()).\nThese are executed in ReplicationSlotRelease() called by ProcKill()\nwhich is called during calling on_shmem_exit callbacks, which is after\nshutting down pgstats during before_shmem_exit callbacks. I’ve not\ntested yet but I think this can potentially happen also when dropping\na temporary slot. ProcKill() also calls ReplicationSlotCleanup() to\nclean up temporary slots.\n\nThere are some ideas to fix this issue but I don’t think it’s a good\nidea to move either ProcKill() or the slot releasing code to\nbefore_shmem_exit in this case, like we did for other similar\nissues[1][2]. Reporting the slot dropping message on dropping the slot\nisn’t necessarily essential actually since autovacuums periodically\ncheck already-dropped slots and report to drop the stats. So another\nidea would be to move pgstat_report_replslot_drop() to a higher layer\nsuch as ReplicationSlotDrop() and ReplicationSlotsDropDBSlots() that\nare not called during callbacks. The replication slot stats are\ndropped when it’s dropped via commands such as\npg_drop_replication_slot() and DROP_REPLICATION_SLOT. On the other\nhand, for temporary slots and ephemeral slots, we rely on autovacuums\nto drop their stats. 
Even if we delay to drop the stats for those\nslots, pg_stat_replication_slots don’t show the stats for\nalready-dropped slots.\n\nAny other ideas?\n\nRegards,\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=675c945394b36c2db0e8c8c9f6209c131ce3f0a8\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=dcac5e7ac157964f71f15d81c7429130c69c3f9b\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 31 Aug 2021 11:37:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2021-08-31 11:37:08 +0900, Masahiko Sawada wrote:\n> At step #2, wal sender waits for another transaction started at step\n> #1 to complete after creating the replication slot. When the server is\n> stopping, wal sender process drops the slot on releasing the slot\n> since it's still RS_EPHEMERAL. Then, after dropping the slot we report\n> the message for dropping the slot (see ReplicationSlotDropPtr()).\n> These are executed in ReplicationSlotRelease() called by ProcKill()\n> which is called during calling on_shmem_exit callbacks, which is after\n> shutting down pgstats during before_shmem_exit callbacks. I’ve not\n> tested yet but I think this can potentially happen also when dropping\n> a temporary slot. ProcKill() also calls ReplicationSlotCleanup() to\n> clean up temporary slots.\n> \n> There are some ideas to fix this issue but I don’t think it’s a good\n> idea to move either ProcKill() or the slot releasing code to\n> before_shmem_exit in this case, like we did for other similar\n> issues[1][2].\n\nYea, that's clearly not an option.\n\nI wonder why the replication slot stuff is in ProcKill() rather than its\nown callback. 
That's probably my fault, but I don't remember what lead\nto that.\n\n\n> Reporting the slot dropping message on dropping the slot\n> isn’t necessarily essential actually since autovacuums periodically\n> check already-dropped slots and report to drop the stats. So another\n> idea would be to move pgstat_report_replslot_drop() to a higher layer\n> such as ReplicationSlotDrop() and ReplicationSlotsDropDBSlots() that\n> are not called during callbacks. The replication slot stats are\n> dropped when it’s dropped via commands such as\n> pg_drop_replication_slot() and DROP_REPLICATION_SLOT. On the other\n> hand, for temporary slots and ephemeral slots, we rely on autovacuums\n> to drop their stats. Even if we delay to drop the stats for those\n> slots, pg_stat_replication_slots don’t show the stats for\n> already-dropped slots.\n\nYea, we could do that, but I think it'd be nicer to find a bit more\nprincipled solution...\n\nPerhaps moving this stuff out from ProcKill() into its own\nbefore_shmem_exit() callback would do the trick?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Aug 2021 20:45:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Tue, Aug 31, 2021 at 12:45 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-08-31 11:37:08 +0900, Masahiko Sawada wrote:\n> > At step #2, wal sender waits for another transaction started at step\n> > #1 to complete after creating the replication slot. When the server is\n> > stopping, wal sender process drops the slot on releasing the slot\n> > since it's still RS_EPHEMERAL. 
Then, after dropping the slot we report\n> > the message for dropping the slot (see ReplicationSlotDropPtr()).\n> > These are executed in ReplicationSlotRelease() called by ProcKill()\n> > which is called during calling on_shmem_exit callbacks, which is after\n> > shutting down pgstats during before_shmem_exit callbacks. I’ve not\n> > tested yet but I think this can potentially happen also when dropping\n> > a temporary slot. ProcKill() also calls ReplicationSlotCleanup() to\n> > clean up temporary slots.\n> >\n> > There are some ideas to fix this issue but I don’t think it’s a good\n> > idea to move either ProcKill() or the slot releasing code to\n> > before_shmem_exit in this case, like we did for other similar\n> > issues[1][2].\n>\n> Yea, that's clearly not an option.\n>\n> I wonder why the replication slot stuff is in ProcKill() rather than its\n> own callback. That's probably my fault, but I don't remember what lead\n> to that.\n>\n>\n> > Reporting the slot dropping message on dropping the slot\n> > isn’t necessarily essential actually since autovacuums periodically\n> > check already-dropped slots and report to drop the stats. So another\n> > idea would be to move pgstat_report_replslot_drop() to a higher layer\n> > such as ReplicationSlotDrop() and ReplicationSlotsDropDBSlots() that\n> > are not called during callbacks. The replication slot stats are\n> > dropped when it’s dropped via commands such as\n> > pg_drop_replication_slot() and DROP_REPLICATION_SLOT. On the other\n> > hand, for temporary slots and ephemeral slots, we rely on autovacuums\n> > to drop their stats. 
Even if we delay to drop the stats for those\n> > slots, pg_stat_replication_slots don’t show the stats for\n> > already-dropped slots.\n>\n> Yea, we could do that, but I think it'd be nicer to find a bit more\n> principled solution...\n>\n> Perhaps moving this stuff out from ProcKill() into its own\n> before_shmem_exit() callback would do the trick?\n\nYou mean to move only the part of sending the message to its own\nbefore_shmem_exit() callback? or move ReplicationSlotRelease() and\nReplicationSlotCleanup() from ProcKill() to it?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:22:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On 2021-08-31 14:22:39 +0900, Masahiko Sawada wrote:\n> You mean to move only the part of sending the message to its own\n> before_shmem_exit() callback? or move ReplicationSlotRelease() and\n> ReplicationSlotCleanup() from ProcKill() to it?\n\nThe latter.\n\n\n", "msg_date": "Mon, 30 Aug 2021 22:34:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-08-31 14:22:39 +0900, Masahiko Sawada wrote:\n> > You mean to move only the part of sending the message to its own\n> > before_shmem_exit() callback? or move ReplicationSlotRelease() and\n> > ReplicationSlotCleanup() from ProcKill() to it?\n>\n> The latter.\n\nMakes sense.\n\nI've attached the patch that moves them to its own\nbefore_shmem_exit(). 
Unless I missed to register the callback it works\nthe same as before except for where to release and clean up the slots.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 31 Aug 2021 17:14:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "At Tue, 31 Aug 2021 17:14:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2021-08-31 14:22:39 +0900, Masahiko Sawada wrote:\n> > > You mean to move only the part of sending the message to its own\n> > > before_shmem_exit() callback? or move ReplicationSlotRelease() and\n> > > ReplicationSlotCleanup() from ProcKill() to it?\n> >\n> > The latter.\n> \n> Makes sense.\n> \n> I've attached the patch that moves them to its own\n> before_shmem_exit(). Unless I missed to register the callback it works\n> the same as before except for where to release and clean up the slots.\n\nIs there any reason we need to register the callback dynamically? It\nseems to me what we need to do here is to call the functions at\nbefore-shmem-exit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 31 Aug 2021 18:34:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2021-08-31 18:34:12 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 31 Aug 2021 17:14:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> > On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2021-08-31 14:22:39 +0900, Masahiko Sawada wrote:\n> > > > You mean to move only the part of sending the message to its own\n> > > > before_shmem_exit() callback? 
or move ReplicationSlotRelease() and\n> > > > ReplicationSlotCleanup() from ProcKill() to it?\n> > >\n> > > The latter.\n> > \n> > Makes sense.\n> > \n> > I've attached the patch that moves them to its own\n> > before_shmem_exit(). Unless I missed to register the callback it works\n> > the same as before except for where to release and clean up the slots.\n> \n> Is there any reason we need to register the callback dynamically? It\n> seems to me what we need to do here is to call the functions at\n> before-shmem-exit.\n\n+1. I'd just add a ReplicationSlotInitialize() to BaseInit().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Aug 2021 10:39:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Wed, Sep 1, 2021 at 2:39 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-08-31 18:34:12 +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 31 Aug 2021 17:14:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2021-08-31 14:22:39 +0900, Masahiko Sawada wrote:\n> > > > > You mean to move only the part of sending the message to its own\n> > > > > before_shmem_exit() callback? or move ReplicationSlotRelease() and\n> > > > > ReplicationSlotCleanup() from ProcKill() to it?\n> > > >\n> > > > The latter.\n> > >\n> > > Makes sense.\n> > >\n> > > I've attached the patch that moves them to its own\n> > > before_shmem_exit(). Unless I missed to register the callback it works\n> > > the same as before except for where to release and clean up the slots.\n> >\n> > Is there any reason we need to register the callback dynamically? It\n> > seems to me what we need to do here is to call the functions at\n> > before-shmem-exit.\n>\n> +1. I'd just add a ReplicationSlotInitialize() to BaseInit().\n\n+1. 
But BaseInit() is also called by auxiliary processes, which seems\nnot necessary. So isn't it better to add it to InitPostgres() or\nInitProcess()?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 1 Sep 2021 10:05:18 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2021-09-01 10:05:18 +0900, Masahiko Sawada wrote:\n> On Wed, Sep 1, 2021 at 2:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-08-31 18:34:12 +0900, Kyotaro Horiguchi wrote:\n> > > At Tue, 31 Aug 2021 17:14:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > > On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > I've attached the patch that moves them to its own\n> > > > before_shmem_exit(). Unless I missed to register the callback it works\n> > > > the same as before except for where to release and clean up the slots.\n> > >\n> > > Is there any reason we need to register the callback dynamically? It\n> > > seems to me what we need to do here is to call the functions at\n> > > before-shmem-exit.\n> >\n> > +1. I'd just add a ReplicationSlotInitialize() to BaseInit().\n> \n> +1. But BaseInit() is also called by auxiliary processes, which seems\n> not necessary. So isn't it better to add it to InitPostgres() or\n> InitProcess()?\n\n-0.5 - I think we should default to making the environments more similar,\nrather than the opposite. With exceptions for cases where that'd cause\noverhead or undue complexity. Which I don't see here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Aug 2021 20:37:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." 
}, { "msg_contents": "On Wed, Sep 1, 2021 at 12:37 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-09-01 10:05:18 +0900, Masahiko Sawada wrote:\n> > On Wed, Sep 1, 2021 at 2:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2021-08-31 18:34:12 +0900, Kyotaro Horiguchi wrote:\n> > > > At Tue, 31 Aug 2021 17:14:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > > > On Tue, Aug 31, 2021 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > I've attached the patch that moves them to its own\n> > > > > before_shmem_exit(). Unless I missed to register the callback it works\n> > > > > the same as before except for where to release and clean up the slots.\n> > > >\n> > > > Is there any reason we need to register the callback dynamically? It\n> > > > seems to me what we need to do here is to call the functions at\n> > > > before-shmem-exit.\n> > >\n> > > +1. I'd just add a ReplicationSlotInitialize() to BaseInit().\n> >\n> > +1. But BaseInit() is also called by auxiliary processes, which seems\n> > not necessary. So isn't it better to add it to InitPostgres() or\n> > InitProcess()?\n>\n> -0.5 - I think we should default to making the environments more similar,\n> rather than the opposite. With exceptions for cases where that'd cause\n> overhead or undue complexity. Which I don't see here?\n>\n\nSorry for the late response. I'd missed this discussion for some reason.\n\nI agreed with Andres and Horiguchi-san and attached an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 10 Dec 2021 18:13:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." 
}, { "msg_contents": "At Fri, 10 Dec 2021 18:13:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> I agreed with Andres and Horiguchi-san and attached an updated patch.\n\nThanks for the new version.\n\nIt seems fine, but I have some comments from a cosmetic viewpoint.\n\n+\t/*\n+\t * Register callback to make sure cleanup and releasing the replication\n+\t * slot on exit.\n+\t */\n+\tReplicationSlotInitialize();\n\nActually all the function does is that but it looks slightly\ninconsistent with the function name. I think we can call it just\n\"initialization\" here. I think we don't need to change the function\ncomment the same way but either will do for me.\n\n+ReplicationSlotBeforeShmemExit(int code, Datum arg)\n\nThe name looks a bit too verbose. Doesn't just \"ReplicationSlotShmemExit\" work?\n\n-\t\t * so releasing here is fine. There's another cleanup in ProcKill()\n-\t\t * ensuring we'll correctly cleanup on FATAL errors as well.\n+\t\t * so releasing here is fine. There's another cleanup in\n+\t\t * ReplicationSlotBeforeShmemExit() callback ensuring we'll correctly\n+\t\t * cleanup on FATAL errors as well.\n\nI'd prefer \"during before_shmem_exit()\" than \"in\nReplicationSlotBeforeShmemExit() callback\" here. 
(But the current\nwording is also fine by me.)\n\n\nThe attached detects that bug, but I'm not sure it's worth expending\ntest time, or this might be in the server test suit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl\nindex 90da1662e3..0fb4e67697 100644\n--- a/src/bin/pg_basebackup/t/030_pg_recvlogical.pl\n+++ b/src/bin/pg_basebackup/t/030_pg_recvlogical.pl\n@@ -5,7 +5,8 @@ use strict;\n use warnings;\n use PostgreSQL::Test::Utils;\n use PostgreSQL::Test::Cluster;\n-use Test::More tests => 20;\n+use Test::More tests => 25;\n+use IPC::Run qw(pump finish timer);\n \n program_help_ok('pg_recvlogical');\n program_version_ok('pg_recvlogical');\n@@ -106,3 +107,44 @@ $node->command_ok(\n \t\t'--start', '--endpos', \"$nextlsn\", '--no-loop', '-f', '-'\n \t],\n \t'replayed a two-phase transaction');\n+\n+## Check for a crash bug caused by replication-slot cleanup after\n+## pgstat shutdown.\n+#fire up an interactive psql session\n+my $in = '';\n+my $out = '';\n+my $timer = timer(5);\n+my $h = $node->interactive_psql('postgres', \\$in, \\$out, $timer);\n+like($out, qr/psql/, \"print startup banner\");\n+\n+# open a transaction\n+$out = \"\";\n+$in .= \"BEGIN;\\nCREATE TABLE a (a int);\\n\";\n+pump $h until ($out =~ /CREATE TABLE/ || timer->is_expired);\n+ok(!timer->is_expired, 'background CREATE TABLE passed');\n+\n+# this recvlogical waits for the transaction ends\n+ok(open(my $recvlogical, '-|',\n+\t\t'pg_recvlogical', '--create-slot', '-S', 'test2',\n+\t\t'-d', $node->connstr('postgres')),\n+ 'launch background pg_recvlogical');\n+\n+$node->poll_query_until('postgres',\n+\t\t\tqq{SELECT count(*) > 0 FROM pg_stat_activity \n+\t\t\t\t\t\tWHERE backend_type='walsender'\n+\t\t\t\t\t\tAND query like 'CREATE_REPLICATION_SLOT %';});\n+# stop server while it hangs. 
This shouldn't crash server.\n+$node->stop;\n+ok(open(my $cont, '-|', 'pg_controldata', $node->data_dir),\n+ 'run pg_controldata');\n+my $stop_result = '';\n+while (<$cont>)\n+{\n+\tif (/^Database cluster state: *([^ ].*)$/)\n+\t{\n+\t\t$stop_result = $1;\n+\t\tlast;\n+\t}\n+}\n+\n+is($stop_result, 'shut down', 'server is properly shut down');", "msg_date": "Mon, 13 Dec 2021 12:11:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Mon, Dec 13, 2021 at 12:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 10 Dec 2021 18:13:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > I agreed with Andres and Horiguchi-san and attached an updated patch.\n>\n> Thanks for the new version.\n>\n> It seems fine, but I have some comments from a cosmetic viewpoint.\n>\n> + /*\n> + * Register callback to make sure cleanup and releasing the replication\n> + * slot on exit.\n> + */\n> + ReplicationSlotInitialize();\n>\n> Actually all the function does is that but it looks slightly\n> inconsistent with the function name. I think we can call it just\n> \"initialization\" here. I think we don't need to change the function\n> comment the same way but either will do for me.\n>\n> +ReplicationSlotBeforeShmemExit(int code, Datum arg)\n>\n> The name looks a bit too verbose. Doesn't just \"ReplicationSlotShmemExit\" work?\n>\n> - * so releasing here is fine. There's another cleanup in ProcKill()\n> - * ensuring we'll correctly cleanup on FATAL errors as well.\n> + * so releasing here is fine. There's another cleanup in\n> + * ReplicationSlotBeforeShmemExit() callback ensuring we'll correctly\n> + * cleanup on FATAL errors as well.\n>\n> I'd prefer \"during before_shmem_exit()\" than \"in\n> ReplicationSlotBeforeShmemExit() callback\" here. 
(But the current\n> wording is also fine by me.)\n\nThank you for the comments! Agreed with all comments.\n\nI've attached an updated patch. Please review it.\n\n> The attached detects that bug, but I'm not sure it's worth expending\n> test time, or this might be in the server test suit.\n\nThanks. It's convenient to test this issue but I'm also not sure it's\nworth adding to the test suit.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 22 Dec 2021 22:34:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2021-12-22 22:34:45 +0900, Masahiko Sawada wrote:\n> I've attached an updated patch. Please review it.\n\nSorry for dropping the ball on this again :(. I've pushed the fix with some\nvery minor polishing.\n\n\n> > The attached detects that bug, but I'm not sure it's worth expending\n> > test time, or this might be in the server test suit.\n>\n> Thanks. It's convenient to test this issue but I'm also not sure it's\n> worth adding to the test suit.\n\nI think it's definitely worth adding a test, but I don't particularly like the\nspecific test implementation. Primarily because I think it's better to test\nthis in a cluster that stays running, so that we can verify that the slot drop\nworked. It also doesn't seem necessary to create a separate cluster.\n\nI wrote the attached isolation test. I ended up not committing it yet - I'm\nworried that there could be some OS dependent output difference, due to some\nlibpq error handling issues. 
See [1], which Tom pointed out is caused by the\nissue discussed in [2].\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220215004143.dlzsn72oqsmqa7uw%40alap3.anarazel.de\n[2] https://postgr.es/m/20220215004143.dlzsn72oqsmqa7uw%40alap3.anarazel.de", "msg_date": "Mon, 14 Feb 2022 17:20:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "At Mon, 14 Feb 2022 17:20:16 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2021-12-22 22:34:45 +0900, Masahiko Sawada wrote:\n> > I've attached an updated patch. Please review it.\n> \n> Sorry for dropping the ball on this again :(. I've pushed the fix with some\n> very minor polishing.\n\nThanks!\n\n> > > The attached detects that bug, but I'm not sure it's worth expending\n> > > test time, or this might be in the server test suit.\n> >\n> > Thanks. It's convenient to test this issue but I'm also not sure it's\n> > worth adding to the test suit.\n> \n> I think it's definitely worth adding a test, but I don't particularly like the\n> specific test implementation. Primarily because I think it's better to test\n> this in a cluster that stays running, so that we can verify that the slot drop\n> worked. It also doesn't seem necessary to create a separate cluster.\n\nOne of the points I was not satisfied the TAP test is the second point\nabove. FWIW I agree to the proposed test on the direction.\n\n> I wrote the attached isolation test. I ended up not committing it yet - I'm\n> worried that there could be some OS dependent output difference, due to some\n> libpq error handling issues. See [1], which Tom pointed out is caused by the\n> issue discussed in [2].\n\nMmm.. This is..\nslot_creation_error.out\n> step s2_init: <... 
completed>\n> FATAL: terminating connection due to administrator command\n> FATAL: terminating connection due to administrator command\n\n> Greetings,\n> \n> Andres Freund\n> \n> [1] https://postgr.es/m/20220215004143.dlzsn72oqsmqa7uw%40alap3.anarazel.de\n> [2] https://postgr.es/m/20220215004143.dlzsn72oqsmqa7uw%40alap3.anarazel.de\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 15 Feb 2022 12:09:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Tue, Feb 15, 2022 at 12:09 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 14 Feb 2022 17:20:16 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > Hi,\n> >\n> > On 2021-12-22 22:34:45 +0900, Masahiko Sawada wrote:\n> > > I've attached an updated patch. Please review it.\n> >\n> > Sorry for dropping the ball on this again :(. I've pushed the fix with some\n> > very minor polishing.\n\nThanks!\n\n>\n> > > > The attached detects that bug, but I'm not sure it's worth expending\n> > > > test time, or this might be in the server test suit.\n> > >\n> > > Thanks. It's convenient to test this issue but I'm also not sure it's\n> > > worth adding to the test suit.\n> >\n> > I think it's definitely worth adding a test, but I don't particularly like the\n> > specific test implementation. Primarily because I think it's better to test\n> > this in a cluster that stays running, so that we can verify that the slot drop\n> > worked. 
It also doesn't seem necessary to create a separate cluster.\n>\n> FWIW I agree to the proposed test on the direction.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 15 Feb 2022 14:07:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2022-02-15 14:07:26 +0900, Masahiko Sawada wrote:\n> On Tue, Feb 15, 2022 at 12:09 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > FWIW I agree to the proposed test on the direction.\n> \n> +1\n\nPushed the test yesterday evening, after Tom checked if it is likely to be\nproblematic. Seems to worked without problems so far.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Feb 2022 08:58:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Tue, Feb 15, 2022 at 08:58:56AM -0800, Andres Freund wrote:\n> Pushed the test yesterday evening, after Tom checked if it is likely to be\n> problematic. 
Seems to worked without problems so far.\n\n wrasse │ 2022-02-15 09:29:06 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-02-15%2009%3A29%3A06\n flaviventris │ 2022-02-24 15:17:30 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-02-24%2015%3A17%3A30\n calliphoridae │ 2022-03-08 01:14:51 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2022-03-08%2001%3A14%3A51\n\nThe buildfarm failed to convey adequate logs for this particular test suite.\nHere's regression.diffs from the wrasse case (saved via keep_error_builds):\n\n===\ndiff -U3 /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n--- /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\tTue Feb 15 06:58:14 2022\n+++ /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\tTue Feb 15 11:38:14 2022\n@@ -29,16 +29,17 @@\n t \n (1 row)\n \n-step s2_init: <... completed>\n-ERROR: canceling statement due to user request\n step s1_view_slot: \n SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE slot_name = 'slot_creation_error'\n \n-slot_name|slot_type|active\n----------+---------+------\n-(0 rows)\n+slot_name |slot_type|active\n+-------------------+---------+------\n+slot_creation_error|logical |t \n+(1 row)\n \n step s1_c: COMMIT;\n+step s2_init: <... 
completed>\n+ERROR: canceling statement due to user request\n \n starting permutation: s1_b s1_xid s2_init s1_c s1_view_slot s1_drop_slot\n step s1_b: BEGIN;\n===\n\nI can make it fail that way by injecting a 1s delay here:\n\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -3339,6 +3339,7 @@ ProcessInterrupts(void)\n \t\t */\n \t\tif (!DoingCommandRead)\n \t\t{\n+\t\t\tpg_usleep(1 * 1000 * 1000);\n \t\t\tLockErrorCleanup();\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_QUERY_CANCELED),\n\nI plan to fix this as attached, similar to how commit c04c767 fixed the same\nchallenge in detach-partition-concurrently-[34].", "msg_date": "Fri, 18 Mar 2022 00:28:37 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "At Fri, 18 Mar 2022 00:28:37 -0700, Noah Misch <noah@leadboat.com> wrote in \n> ===\n> diff -U3 /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n> --- /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\tTue Feb 15 06:58:14 2022\n> +++ /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\tTue Feb 15 11:38:14 2022\n\n\n> I plan to fix this as attached, similar to how commit c04c767 fixed the same\n> challenge in detach-partition-concurrently-[34].\n\nIt looks correct and I confirmed that it works.\n\n\nIt looks like a similar issue with [1] but this is cleaner and stable.\n\n[1] https://www.postgresql.org/message-id/20220304.113347.2105652521035137491.horikyota.ntt@gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 18 Mar 2022 18:18:07 +0900 (JST)", "msg_from": "Kyotaro 
Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "Hi,\n\nOn 2022-03-18 00:28:37 -0700, Noah Misch wrote:\n> On Tue, Feb 15, 2022 at 08:58:56AM -0800, Andres Freund wrote:\n> > Pushed the test yesterday evening, after Tom checked if it is likely to be\n> > problematic. Seems to worked without problems so far.\n> \n> wrasse │ 2022-02-15 09:29:06 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-02-15%2009%3A29%3A06\n> flaviventris │ 2022-02-24 15:17:30 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-02-24%2015%3A17%3A30\n> calliphoridae │ 2022-03-08 01:14:51 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2022-03-08%2001%3A14%3A51\n\nHuh. Weirdly enough I saw this failure twice in a development branch\nyesterday...\n\n\n> The buildfarm failed to convey adequate logs for this particular test suite.\n> Here's regression.diffs from the wrasse case (saved via keep_error_builds):\n\nThanks for getting that!\n\n\n> ===\n> diff -U3 /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n> --- /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\tTue Feb 15 06:58:14 2022\n> +++ /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\tTue Feb 15 11:38:14 2022\n> @@ -29,16 +29,17 @@\n> t \n> (1 row)\n> \n> -step s2_init: <... 
completed>\n> -ERROR: canceling statement due to user request\n> step s1_view_slot: \n> SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE slot_name = 'slot_creation_error'\n> \n> -slot_name|slot_type|active\n> ----------+---------+------\n> -(0 rows)\n> +slot_name |slot_type|active\n> +-------------------+---------+------\n> +slot_creation_error|logical |t \n> +(1 row)\n> \n> step s1_c: COMMIT;\n> +step s2_init: <... completed>\n> +ERROR: canceling statement due to user request\n> \n> starting permutation: s1_b s1_xid s2_init s1_c s1_view_slot s1_drop_slot\n> step s1_b: BEGIN;\n> ===\n> \n> I can make it fail that way by injecting a 1s delay here:\n> \n> --- a/src/backend/tcop/postgres.c\n> +++ b/src/backend/tcop/postgres.c\n> @@ -3339,6 +3339,7 @@ ProcessInterrupts(void)\n> \t\t */\n> \t\tif (!DoingCommandRead)\n> \t\t{\n> +\t\t\tpg_usleep(1 * 1000 * 1000);\n> \t\t\tLockErrorCleanup();\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_QUERY_CANCELED),\n\nSo isolationtester still sees the blocking condition from before the cancel\nprocessing is finished and thus proceeds to the next statement - which it\nnormally should only do once the other running step is detected as still\nblocking?\n\nI wonder if we should emit <waiting> everytime a step is detected anew as\nbeing blocked to make it easier to understand issues like this.\n\n\n\n> diff --git a/contrib/test_decoding/specs/slot_creation_error.spec b/contrib/test_decoding/specs/slot_creation_error.spec\n> index 6816696..d1e35bf 100644\n> --- a/contrib/test_decoding/specs/slot_creation_error.spec\n> +++ b/contrib/test_decoding/specs/slot_creation_error.spec\n> @@ -35,7 +35,7 @@ step s2_init {\n> # The tests first start a transaction with an xid assigned in s1, then create\n> # a slot in s2. The slot creation waits for s1's transaction to end. 
Instead\n> # we cancel / terminate s2.\n> -permutation s1_b s1_xid s2_init s1_view_slot s1_cancel_s2 s1_view_slot s1_c\n> +permutation s1_b s1_xid s2_init s1_view_slot s1_cancel_s2(s2_init) s1_view_slot s1_c\n> permutation s1_b s1_xid s2_init s1_c s1_view_slot s1_drop_slot # check slot creation still works\n> -permutation s1_b s1_xid s2_init s1_terminate_s2 s1_c s1_view_slot\n> +permutation s1_b s1_xid s2_init s1_terminate_s2(s2_init) s1_c s1_view_slot\n> # can't run tests after this, due to s2's connection failure\n\nThat looks good to me.\n\nThanks!\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Mar 2022 13:24:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." }, { "msg_contents": "On Fri, Mar 18, 2022 at 01:24:15PM -0700, Andres Freund wrote:\n> On 2022-03-18 00:28:37 -0700, Noah Misch wrote:\n> > ===\n> > diff -U3 /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n> > --- /export/home/nm/farm/studio64v12_6/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\tTue Feb 15 06:58:14 2022\n> > +++ /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\tTue Feb 15 11:38:14 2022\n> > @@ -29,16 +29,17 @@\n> > t \n> > (1 row)\n> > \n> > -step s2_init: <... completed>\n> > -ERROR: canceling statement due to user request\n> > step s1_view_slot: \n> > SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE slot_name = 'slot_creation_error'\n> > \n> > -slot_name|slot_type|active\n> > ----------+---------+------\n> > -(0 rows)\n> > +slot_name |slot_type|active\n> > +-------------------+---------+------\n> > +slot_creation_error|logical |t \n> > +(1 row)\n> > \n> > step s1_c: COMMIT;\n> > +step s2_init: <... 
completed>\n> > +ERROR: canceling statement due to user request\n> > \n> > starting permutation: s1_b s1_xid s2_init s1_c s1_view_slot s1_drop_slot\n> > step s1_b: BEGIN;\n> > ===\n> > \n> > I can make it fail that way by injecting a 1s delay here:\n> > \n> > --- a/src/backend/tcop/postgres.c\n> > +++ b/src/backend/tcop/postgres.c\n> > @@ -3339,6 +3339,7 @@ ProcessInterrupts(void)\n> > \t\t */\n> > \t\tif (!DoingCommandRead)\n> > \t\t{\n> > +\t\t\tpg_usleep(1 * 1000 * 1000);\n> > \t\t\tLockErrorCleanup();\n> > \t\t\tereport(ERROR,\n> > \t\t\t\t\t(errcode(ERRCODE_QUERY_CANCELED),\n> \n> So isolationtester still sees the blocking condition from before the cancel\n> processing is finished and thus proceeds to the next statement - which it\n> normally should only do once the other running step is detected as still\n> blocking?\n\nEssentially. It called s1_view_slot too early. s2_init can remain blocked\narbitrarily long after pg_cancel_backend returns. Writing\ns1_cancel_s2(s2_init) directs the isolationtester to send the cancel, then\nwait for s2_init to finish, then wait for the cancel to finish.\n\n> I wonder if we should emit <waiting> everytime a step is detected anew as\n> being blocked to make it easier to understand issues like this.\n\nGood question.\n\n> > diff --git a/contrib/test_decoding/specs/slot_creation_error.spec b/contrib/test_decoding/specs/slot_creation_error.spec\n> > index 6816696..d1e35bf 100644\n> > --- a/contrib/test_decoding/specs/slot_creation_error.spec\n> > +++ b/contrib/test_decoding/specs/slot_creation_error.spec\n> > @@ -35,7 +35,7 @@ step s2_init {\n> > # The tests first start a transaction with an xid assigned in s1, then create\n> > # a slot in s2. The slot creation waits for s1's transaction to end. 
Instead\n> > # we cancel / terminate s2.\n> > -permutation s1_b s1_xid s2_init s1_view_slot s1_cancel_s2 s1_view_slot s1_c\n> > +permutation s1_b s1_xid s2_init s1_view_slot s1_cancel_s2(s2_init) s1_view_slot s1_c\n> > permutation s1_b s1_xid s2_init s1_c s1_view_slot s1_drop_slot # check slot creation still works\n> > -permutation s1_b s1_xid s2_init s1_terminate_s2 s1_c s1_view_slot\n> > +permutation s1_b s1_xid s2_init s1_terminate_s2(s2_init) s1_c s1_view_slot\n> > # can't run tests after this, due to s2's connection failure\n> \n> That looks good to me.\n\nPushed. Kyotaro Horiguchi had posted a patch that also changed the\npg_terminate_backend call in temp-schema-cleanup.spec. I think that one is\nfine as-is, because it does pg_terminate_backend(pg_backend_pid()). There's\nno way for a backend running that particular command to report that it's ready\nfor another query, so the problem doesn't arise.\n\n\n", "msg_date": "Fri, 18 Mar 2022 19:26:29 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot drop message is sent after pgstats shutdown." } ]
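The fix committed in the thread above hinges on an ordering subtlety: pg_cancel_backend() returns as soon as the cancel signal has been sent, while the canceled s2_init step can stay blocked arbitrarily long before it actually reports "ERROR: canceling statement due to user request". The toy model below (pure Python; all names and delays are invented for illustration — this is not PostgreSQL or isolationtester code) sketches why the spec has to wait on the canceled step itself, which is what the `s1_cancel_s2(s2_init)` annotation expresses:

```python
import threading
import time

class MockStep:
    """Model of an isolationtester step like s2_init: it stays blocked
    until a cancel signal arrives, and even then takes a while to fail
    (mimicking the pg_usleep(1s) injected into ProcessInterrupts)."""

    def __init__(self, cancel_processing_delay):
        self.cancel_requested = threading.Event()
        self.finished = threading.Event()
        self._delay = cancel_processing_delay

    def run(self):
        self.cancel_requested.wait()   # blocked on s1's open transaction
        time.sleep(self._delay)        # slow path to the cancel ERROR
        self.finished.set()

def mock_cancel(step):
    """Like pg_cancel_backend(): returns as soon as the signal is sent,
    NOT when the canceled statement has actually finished failing."""
    step.cancel_requested.set()

s2_init = MockStep(cancel_processing_delay=0.5)
threading.Thread(target=s2_init.run, daemon=True).start()

mock_cancel(s2_init)
# Buggy ordering: run the next step right after the cancel call returns.
# The canceled step is normally still running, so a query here can still
# observe the half-created slot (the failure wrasse reported).
raced_too_early = not s2_init.finished.is_set()

# Fixed ordering, i.e. s1_cancel_s2(s2_init): wait for the canceled step
# itself to complete before issuing the next step.
s2_init.finished.wait(timeout=5)
safe_after_wait = s2_init.finished.is_set()
```

Under the buggy ordering the monitor consults state while the step is still alive; waiting on the step's completion event models the `(s2_init)` blocking annotation added to the permutation lines.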
[ { "msg_contents": "Hi Hackers,\n\nIn the current version, when GUC huge_pages=try, which is the default setting, no log is output regardless of the success or failure of the HugePages acquisition. If you want to output logs, you need to set log_min_messages=DEBUG3, but it will output a huge amount of extra logs.\nWith huge_pages=try setting, if the kernel parameter vm.nr_hugepages is not enough, the administrator will not notice that HugePages is not being used.\nI think it should output a log if HugePages was not available.\n\nBy the way, in MySQL with almost the same architecture, the following log is output at the Warning level.\n\n[Warning] [MY-012677] [InnoDB] Failed to allocate 138412032 bytes. errno 1\n[Warning] [MY-012679] [InnoDB] Using conventional memory pool\n\nThe attached small patch outputs a log at the WARNING level when huge_pages = try and if the acquisition of HugePages fails.\n\nRegards, \nNoriyoshi Shinoda", "msg_date": "Tue, 31 Aug 2021 05:36:47 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Aug 31, 2021 at 1:37 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n>\n> In the current version, when GUC huge_pages=try, which is the default setting, no log is output regardless of the success or failure of the HugePages acquisition. If you want to output logs, you need to set log_min_messages=DEBUG3, but it will output a huge amount of extra logs.\n> With huge_pages=try setting, if the kernel parameter vm.nr_hugepages is not enough, the administrator will not notice that HugePages is not being used.\n> I think it should output a log if HugePages was not available.\n\nI agree that the message should be promoted to a higher level. 
But I\nthink we should also make that information available at the SQL level,\nas the log files may be truncated / rotated before you need the info,\nand it can be troublesome to find the information at the OS level, if\nyou're lucky enough to have OS access.\n\n\n", "msg_date": "Tue, 31 Aug 2021 14:57:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/08/31 15:57, Julien Rouhaud wrote:\n> On Tue, Aug 31, 2021 at 1:37 PM Shinoda, Noriyoshi (PN Japan FSIP)\n> <noriyoshi.shinoda@hpe.com> wrote:\n>>\n>> In the current version, when GUC huge_pages=try, which is the default setting, no log is output regardless of the success or failure of the HugePages acquisition. If you want to output logs, you need to set log_min_messages=DEBUG3, but it will output a huge amount of extra logs.\n>> With huge_pages=try setting, if the kernel parameter vm.nr_hugepages is not enough, the administrator will not notice that HugePages is not being used.\n>> I think it should output a log if HugePages was not available.\n\n+1\n\n-\t\t\telog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\",\n+\t\t\telog(WARNING, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\",\n\nelog() should be used only for internal errors and low-level debug logging.\nSo per your proposal, elog() is not suitable here. Instead, ereport()\nshould be used.\n\nThe log level should be LOG rather than WARNING because this message\nindicates the information about server activity that administrators are\ninterested in.\n\nThe message should be updated so that it follows the Error Message Style Guide.\nhttps://www.postgresql.org/docs/devel/error-style-guide.html\n\nWith huge_pages=on, if shared memory fails to be allocated, the error message\nis reported currently. 
Even with huge_page=try, this error message should be\nused to simplify the code as follows?\n\n errno = mmap_errno;\n- ereport(FATAL,\n+ ereport((huge_pages == HUGE_PAGES_ON) ? FATAL : LOG,\n (errmsg(\"could not map anonymous shared memory: %m\"),\n (mmap_errno == ENOMEM) ?\n errhint(\"This error usually means that PostgreSQL's request \"\n\n\n\n> I agree that the message should be promoted to a higher level. But I\n> think we should also make that information available at the SQL level,\n> as the log files may be truncated / rotated before you need the info,\n> and it can be troublesome to find the information at the OS level, if\n> you're lucky enough to have OS access.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Sep 2021 02:05:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Fujii-san, Julien-san\r\n\r\nThank you very much for your comment.\r\nI followed your comment and changed the elog function to ereport function and also changed the log level. The output message is the same as in the case of non-HugePages memory acquisition failure.I did not simplify the error messages as it would have complicated the response to the preprocessor.\r\n\r\n> I agree that the message should be promoted to a higher level. But I \r\n> think we should also make that information available at the SQL level, \r\n> as the log files may be truncated / rotated before you need the info, \r\n> and it can be troublesome to find the information at the OS level, if \r\n> you're lucky enough to have OS access.\r\n\r\nIn the attached patch, I have added an Internal GUC 'using_huge_pages' to know that it is using HugePages. 
This parameter will be True only if the instance is using HugePages.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Fujii Masao [mailto:masao.fujii@oss.nttdata.com] \r\nSent: Wednesday, September 1, 2021 2:06 AM\r\nTo: Julien Rouhaud <rjuju123@gmail.com>; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\n\r\n\r\nOn 2021/08/31 15:57, Julien Rouhaud wrote:\r\n> On Tue, Aug 31, 2021 at 1:37 PM Shinoda, Noriyoshi (PN Japan FSIP) \r\n> <noriyoshi.shinoda@hpe.com> wrote:\r\n>>\r\n>> In the current version, when GUC huge_pages=try, which is the default setting, no log is output regardless of the success or failure of the HugePages acquisition. If you want to output logs, you need to set log_min_messages=DEBUG3, but it will output a huge amount of extra logs.\r\n>> With huge_pages=try setting, if the kernel parameter vm.nr_hugepages is not enough, the administrator will not notice that HugePages is not being used.\r\n>> I think it should output a log if HugePages was not available.\r\n\r\n+1\r\n\r\n-\t\t\telog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\",\r\n+\t\t\telog(WARNING, \"mmap(%zu) with MAP_HUGETLB failed, huge pages \r\n+disabled: %m\",\r\n\r\nelog() should be used only for internal errors and low-level debug logging.\r\nSo per your proposal, elog() is not suitable here. Instead, ereport() should be used.\r\n\r\nThe log level should be LOG rather than WARNING because this message indicates the information about server activity that administrators are interested in.\r\n\r\nThe message should be updated so that it follows the Error Message Style Guide.\r\nhttps://www.postgresql.org/docs/devel/error-style-guide.html \r\n\r\nWith huge_pages=on, if shared memory fails to be allocated, the error message is reported currently. 
Even with huge_page=try, this error message should be used to simplify the code as follows?\r\n\r\n errno = mmap_errno;\r\n- ereport(FATAL,\r\n+ ereport((huge_pages == HUGE_PAGES_ON) ? FATAL : LOG,\r\n (errmsg(\"could not map anonymous shared memory: %m\"),\r\n (mmap_errno == ENOMEM) ?\r\n errhint(\"This error usually means that PostgreSQL's request \"\r\n\r\n\r\n\r\n> I agree that the message should be promoted to a higher level. But I \r\n> think we should also make that information available at the SQL level, \r\n> as the log files may be truncated / rotated before you need the info, \r\n> and it can be troublesome to find the information at the OS level, if \r\n> you're lucky enough to have OS access.\r\n\r\n+1\r\n\r\nRegards,\r\n\r\n--\r\nFujii Masao\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION", "msg_date": "Fri, 3 Sep 2021 06:28:58 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "Hello.\n\nAt Fri, 3 Sep 2021 06:28:58 +0000, \"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com> wrote in \n> Fujii-san, Julien-san\n> \n> Thank you very much for your comment.\n> I followed your comment and changed the elog function to ereport function and also changed the log level. The output message is the same as in the case of non-HugePages memory acquisition failure.I did not simplify the error messages as it would have complicated the response to the preprocessor.\n> \n> > I agree that the message should be promoted to a higher level. 
But I \n> > think we should also make that information available at the SQL level, \n> > as the log files may be truncated / rotated before you need the info, \n> > and it can be troublesome to find the information at the OS level, if \n> > you're lucky enough to have OS access.\n> \n> In the attached patch, I have added an Internal GUC 'using_huge_pages' to know that it is using HugePages. This parameter will be True only if the instance is using HugePages.\n\nIF you are thinking to show that in GUC, you might want to look into\nthe nearby thread [1], especially about the behavior when invoking\npostgres -C using_huge_pages. (Even though the word \"using\" in the\nname may suggest that the server is running, but I don't think it is\nneat that the variable shows \"no\" by the command but shows \"yes\" while\nthe same server is running.)\n\nI have some comment about the patch.\n\n-\t\tif (huge_pages == HUGE_PAGES_TRY && ptr == MAP_FAILED)\n-\t\t\telog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\",\n-\t\t\t\t allocsize);\n+\t\tif (ptr != MAP_FAILED)\n+\t\t\tusing_huge_pages = true;\n+\t\telse if (huge_pages == HUGE_PAGES_TRY)\n+\t\t\tereport(LOG,\n+\t\t\t\t\t(errmsg(\"could not map anonymous shared memory: %m\"),\n+\t\t\t\t \t (mmap_errno == ENOMEM) ?\n+\t\t\t\t \t errhint(\"This error usually means that PostgreSQL's request \"\n\nIf we set huge_pages to try and postgres falled back to regular pages,\nit emits a large message relative to its importance. The user specifed\nthat \"I'd like to use huge pages, but it's ok if not available.\", so I\nthink the message should be far smaller. 
Maybe just raising the\nDEBUG1 message to LOG along with moving to ereport might be\nsufficient.\n\n-\t\t\t\telog(DEBUG1, \"CreateFileMapping(%zu) with SEC_LARGE_PAGES failed, \"\n-\t\t\t\t\t \"huge pages disabled\",\n-\t\t\t\t\t size);\n+\t\t\t\tereport(LOG,\n+\t\t\t\t\t\t(errmsg(\"could not create shared memory segment: error code %lu\", GetLastError()),\n+\t\t\t\t\t\t errdetail(\"Failed system call was CreateFileMapping(size=%zu, name=%s).\",\n+\t\t\t\t\t\t\t\t size, szShareMem)));\n\nIt doesn't seem to be a regular user-facing message. Isn't it\nsufficient just to raise the log level to LOG?\n\n\n[1] https://www.postgresql.org/message-id/20210903.141206.103927759882272221.horikyota.ntt%40gmail.com\n\n\n", "msg_date": "Fri, 03 Sep 2021 16:49:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/03 16:49, Kyotaro Horiguchi wrote:\n> IF you are thinking to show that in GUC, you might want to look into\n> the nearby thread [1]\n\nYes, let's discuss this feature at that thread.\n\n\n> I have some comment about the patch.\n> \n> -\t\tif (huge_pages == HUGE_PAGES_TRY && ptr == MAP_FAILED)\n> -\t\t\telog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB failed, huge pages disabled: %m\",\n> -\t\t\t\t allocsize);\n> +\t\tif (ptr != MAP_FAILED)\n> +\t\t\tusing_huge_pages = true;\n> +\t\telse if (huge_pages == HUGE_PAGES_TRY)\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"could not map anonymous shared memory: %m\"),\n> +\t\t\t\t \t (mmap_errno == ENOMEM) ?\n> +\t\t\t\t \t errhint(\"This error usually means that PostgreSQL's request \"\n> \n> If we set huge_pages to try and postgres falled back to regular pages,\n> it emits a large message relative to its importance. The user specifed\n> that \"I'd like to use huge pages, but it's ok if not available.\", so I\n> think the message should be far smaller. 
Maybe just raising the\n> DEBUG1 message to LOG along with moving to ereport might be\n> sufficient.\n\nIMO, if the level is promoted to LOG, the message should be updated\nso that it follows the error message style guide. But I agree that simpler\nmessage would be better in this case. So what about something like\nthe following?\n\nLOG: could not map anonymous shared memory (%zu bytes) with huge pages enabled\nHINT: The server will map anonymous shared memory again with huge pages disabled.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Sep 2021 22:37:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> IMO, if the level is promoted to LOG, the message should be updated\n> so that it follows the error message style guide. But I agree that simpler\n> message would be better in this case. So what about something like\n> the following?\n\n> LOG: could not map anonymous shared memory (%zu bytes) with huge pages enabled\n> HINT: The server will map anonymous shared memory again with huge pages disabled.\n\nThat is not a hint. Maybe it qualifies as errdetail, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 10:27:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/03 23:27, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> IMO, if the level is promoted to LOG, the message should be updated\n>> so that it follows the error message style guide. But I agree that simpler\n>> message would be better in this case. 
So what about something like\n>> the following?\n> \n>> LOG: could not map anonymous shared memory (%zu bytes) with huge pages enabled\n>> HINT: The server will map anonymous shared memory again with huge pages disabled.\n> \n> That is not a hint. Maybe it qualifies as errdetail, though.\n\nYes, agreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 4 Sep 2021 01:35:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hello,\r\n\r\nThank you everyone for comments.\r\nIn the thread [1] that Horiguchi told me about, there is already a review going on about GUC for HugePages memory.\r\nFor this reason, I have removed the new GUC implementation and attached a patch that changes only the message at instance startup.\r\n\r\n[1]\r\nhttps://www.postgresql.org/message-id/20210903.141206.103927759882272221.hor\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Fujii Masao [mailto:masao.fujii@oss.nttdata.com] \r\nSent: Saturday, September 4, 2021 1:36 AM\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; rjuju123@gmail.com; pgsql-hackers@postgresql.org\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\n\r\n\r\nOn 2021/09/03 23:27, Tom Lane wrote:\r\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\r\n>> IMO, if the level is promoted to LOG, the message should be updated \r\n>> so that it follows the error message style guide. But I agree that \r\n>> simpler message would be better in this case. 
So what about something \r\n>> like the following?\r\n> \r\n>> LOG: could not map anonymous shared memory (%zu bytes) with huge \r\n>> pages enabled\r\n>> HINT: The server will map anonymous shared memory again with huge pages disabled.\r\n> \r\n> That is not a hint. Maybe it qualifies as errdetail, though.\r\n\r\nYes, agreed.\r\n\r\nRegards,\r\n\r\n--\r\nFujii Masao\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION", "msg_date": "Tue, 7 Sep 2021 04:09:01 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/07 13:09, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Hello,\n> \n> Thank you everyone for comments.\n> In the thread [1] that Horiguchi told me about, there is already a review going on about GUC for HugePages memory.\n> For this reason, I have removed the new GUC implementation and attached a patch that changes only the message at instance startup.\n\nThanks for updating the patch!\n\nEven with the patch, there are still some cases where huge pages is\ndisabled silently. We should report something even in these cases?\nFor example, in the platform where huge pages is not supported,\nit's silently disabled when huge_pages=try.\n\nOne big concern about the patch is that log message is always reported\nwhen shared memory fails to be allocated with huge pages enabled\nwhen huge_pages=try. Since huge_pages=try is the default setting,\nmany users would see this new log message whenever they start\nthe server. 
Those who don't need huge pages but just use the default\nsetting might think that such log messages would be noisy.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 7 Sep 2021 19:12:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Sep 07, 2021 at 07:12:36PM +0900, Fujii Masao wrote:\n> One big concern about the patch is that log message is always reported\n> when shared memory fails to be allocated with huge pages enabled\n> when huge_pages=try. Since huge_pages=try is the default setting,\n> many users would see this new log message whenever they start\n> the server. Those who don't need huge pages but just use the default\n> setting might think that such log messages would be noisy.\n\nI don't see this as any issue. We're only talking about a single message on\neach restart, which would be added in a major release. 
If it's a problem, the\nmessage could be a NOTICE or INFO, and it won't be shown by default.\n\nI think it should say \"with/out huge pages\" without \"enabled/disabled\", without\n\"again\", and without \"The server\", like:\n\n+ (errmsg(\"could not map anonymous shared memory (%zu bytes)\"\n+ \" with huge pages.\", allocsize),\n+ errdetail(\"Anonymous shared memory will be mapped \"\n+ \"without huge pages.\")));\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Sep 2021 08:16:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Tue, 7 Sep 2021 08:16:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Tue, Sep 07, 2021 at 07:12:36PM +0900, Fujii Masao wrote:\n> > One big concern about the patch is that log message is always reported\n> > when shared memory fails to be allocated with huge pages enabled\n> > when huge_pages=try. Since huge_pages=try is the default setting,\n> > many users would see this new log message whenever they start\n> > the server. Those who don't need huge pages but just use the default\n> > setting might think that such log messages would be noisy.\n> \n> I don't see this as any issue. We're only talking about a single message on\n> each restart, which would be added in a major release. If it's a problem, the\n> message could be a NOTICE or INFO, and it won't be shown by default.\n> \n> I think it should say \"with/out huge pages\" without \"enabled/disabled\", without\n> \"again\", and without \"The server\", like:\n> \n> + (errmsg(\"could not map anonymous shared memory (%zu bytes)\"\n> + \" with huge pages.\", allocsize),\n> + errdetail(\"Anonymous shared memory will be mapped \"\n> + \"without huge pages.\")));\n\nI don't dislike the message, but I'm not sure I like the message is\ntoo verbose, especially about it has DETAILS. 
It seems to me something\nlike the following is sufficient, or I'd like see it even more concise.\n\n\"fall back anonymous shared memory to non-huge pages: required %zu bytes for huge pages\"\n\nIf we emit an error message for other than mmap failure, it would be\nlike the following.\n\n\"fall back anonymous shared memory to non-huge pages: huge pages not available\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 11:17:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hello,\n\nThank you everyone for comments.\nI have attached a patch that simply changed the message like the advice from Horiguchi-san.\n\n> Even with the patch, there are still some cases where huge pages is disabled silently. We should report something even in these cases?\n> For example, in the platform where huge pages is not supported, it's silently disabled when huge_pages=try.\n\nThe area where this patch is written is inside the \"#ifdef MAP_HUGETLB #endif\" block.\nFor this reason, I think it is excluded from binaries created in an environment that does not have the MAP_HUGETLB macro.\n\n> One big concern about the patch is that log message is always reported when shared memory fails to be allocated with huge pages enabled when huge_pages=try. Since \n> huge_pages=try is the default setting, many users would see this new log message whenever they start the server. Those who don't need huge pages but just use the default \n> setting might think that such log messages would be noisy.\n\nThis patch is meant to let the admin know that HugePages isn't being used, so I'm sure you're right. 
I have no idea what to do so far.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com] \nSent: Wednesday, September 8, 2021 11:18 AM\nTo: pryzby@telsasoft.com\nCc: masao.fujii@oss.nttdata.com; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org; rjuju123@gmail.com; tgl@sss.pgh.pa.us\nSubject: Re: Improve logging when using Huge Pages\n\nAt Tue, 7 Sep 2021 08:16:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Tue, Sep 07, 2021 at 07:12:36PM +0900, Fujii Masao wrote:\n> > One big concern about the patch is that log message is always \n> > reported when shared memory fails to be allocated with huge pages \n> > enabled when huge_pages=try. Since huge_pages=try is the default \n> > setting, many users would see this new log message whenever they \n> > start the server. Those who don't need huge pages but just use the \n> > default setting might think that such log messages would be noisy.\n> \n> I don't see this as any issue. We're only talking about a single \n> message on each restart, which would be added in a major release. If \n> it's a problem, the message could be a NOTICE or INFO, and it won't be shown by default.\n> \n> I think it should say \"with/out huge pages\" without \n> \"enabled/disabled\", without \"again\", and without \"The server\", like:\n> \n> + (errmsg(\"could not map anonymous shared memory (%zu bytes)\"\n> + \" with huge pages.\", allocsize),\n> + errdetail(\"Anonymous shared memory will be mapped \"\n> + \"without huge \n> + pages.\")));\n\nI don't dislike the message, but I'm not sure I like the message is too verbose, especially about it has DETAILS. 
It seems to me something like the following is sufficient, or I'd like see it even more concise.\n\n\"fall back anonymous shared memory to non-huge pages: required %zu bytes for huge pages\"\n\nIf we emit an error message for other than mmap failure, it would be like the following.\n\n\"fall back anonymous shared memory to non-huge pages: huge pages not available\"\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 8 Sep 2021 07:52:35 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "Thanks!\n\nAt Wed, 8 Sep 2021 07:52:35 +0000, \"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com> wrote in \n> Hello,\n> \n> Thank you everyone for comments.\n> I have attached a patch that simply changed the message like the advice from Horiguchi-san.\n> \n> > Even with the patch, there are still some cases where huge pages is disabled silently. We should report something even in these cases?\n> > For example, in the platform where huge pages is not supported, it's silently disabled when huge_pages=try.\n> \n> The area where this patch is written is inside the \"#ifdef MAP_HUGETLB #endif\" block.\n> For this reason, I think it is excluded from binaries created in an environment that does not have the MAP_HUGETLB macro.\n\nAh, right.\n\n> > One big concern about the patch is that log message is always reported when shared memory fails to be allocated with huge pages enabled when huge_pages=try. Since \n> > huge_pages=try is the default setting, many users would see this new log message whenever they start the server. Those who don't need huge pages but just use the default \n> > setting might think that such log messages would be noisy.\n> \n> This patch is meant to let the admin know that HugePages isn't being used, so I'm sure you're right. 
I have no idea what to do so far.\n\nIt seems *to me* sufficient. I'm not sure what cases CreateFileMapping\nreturn ERROR_NO_SYSTEM_RESOURCES when non-huge page can be allocated\nsuccessfully, though, but that doesn't matter much, maybe.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Sep 2021 14:34:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/07 22:16, Justin Pryzby wrote:\n> On Tue, Sep 07, 2021 at 07:12:36PM +0900, Fujii Masao wrote:\n>> One big concern about the patch is that log message is always reported\n>> when shared memory fails to be allocated with huge pages enabled\n>> when huge_pages=try. Since huge_pages=try is the default setting,\n>> many users would see this new log message whenever they start\n>> the server. Those who don't need huge pages but just use the default\n>> setting might think that such log messages would be noisy.\n> \n> I don't see this as any issue. 
We're only talking about a single message on\n> each restart, which would be added in a major release.\n\nI was afraid that logging the message like "could not ..." every time\nwhen the server starts up would surprise users unnecessarily.\nBecause the message sounds like it reports a server error.\nSo it might be a good idea to change the message to something like\n"disabling huge pages" to avoid such surprise.\n\n> If it's a problem, the\n> message could be a NOTICE or INFO, and it won't be shown by default.\n\nThat's an idea, but neither NOTICE nor INFO are suitable for\nthis kind of message....\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Sep 2021 00:12:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/08 11:17, Kyotaro Horiguchi wrote:\n> I don't dislike the message, but I'm not sure I like the message is\n> too verbose, especially about it has DETAILS. 
It seems to me something\n> like the following is sufficient, or I'd like to see it even more concise.\n> \n> "fall back anonymous shared memory to non-huge pages: required %zu bytes for huge pages"\n> \n> If we emit an error message for other than mmap failure, it would be\n> like the following.\n> \n> "fall back anonymous shared memory to non-huge pages: huge pages not available"\n\nHow about simpler message like "disabling huge pages" or\n"disabling huge pages due to lack of huge pages available"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Sep 2021 00:13:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Fri, 17 Sep 2021 00:13:41 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/08 11:17, Kyotaro Horiguchi wrote:\n> > I don't dislike the message, but I'm not sure I like the message is\n> > too verbose, especially about it has DETAILS. It seems to me something\n> > like the following is sufficient, or I'd like to see it even more\n> > concise.\n> > "fall back anonymous shared memory to non-huge pages: required %zu\n> > bytes for huge pages"\n> > If we emit an error message for other than mmap failure, it would be\n> > like the following.\n> > "fall back anonymous shared memory to non-huge pages: huge pages not\n> > available"\n> \n> How about simpler message like "disabling huge pages" or\n> "disabling huge pages due to lack of huge pages available"?\n\nHonestly, I cannot have confidence in my wording, but "disabling huge\npages" sounds like something that happens on the OS layer. 
"did not\nuse/gave up using huge pages for anonymous shared memory" might work\nwell, I'm not sure, though...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Sep 2021 13:14:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hi,\n\nThank you for your comment.\n\n> I was afraid that logging the message like "could not ..." every time when the server starts up would surprise users unnecessarily.\n> Because the message sounds like it reports a server error.\n\nFujii-san, \nI was worried about the same thing as you.\nSo the attached patch gets the value of the kernel parameter vm.nr_hugepages, \nand if it is the default value of zero, the log level is the same as before. \nOn the other hand, if any value is set, the level is set to LOG.\nI hope I can find a better message other than this kind of implementation.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com] \nSent: Friday, September 17, 2021 1:15 PM\nTo: masao.fujii@oss.nttdata.com\nCc: pryzby@telsasoft.com; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org; rjuju123@gmail.com; tgl@sss.pgh.pa.us\nSubject: Re: Improve logging when using Huge Pages\n\nAt Fri, 17 Sep 2021 00:13:41 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/08 11:17, Kyotaro Horiguchi wrote:\n> > I don't dislike the message, but I'm not sure I like the message is \n> > too verbose, especially about it has DETAILS. 
It seems to me \n> > something like the following is sufficient, or I'd like to see it even \n> > more concise.\n> > "fall back anonymous shared memory to non-huge pages: required %zu \n> > bytes for huge pages"\n> > If we emit an error message for other than mmap failure, it would be \n> > like the following.\n> > "fall back anonymous shared memory to non-huge pages: huge pages not \n> > available"\n> \n> How about simpler message like "disabling huge pages" or "disabling \n> huge pages due to lack of huge pages available"?\n\nHonestly, I cannot have confidence in my wording, but "disabling huge pages" sounds like something that happens on the OS layer. "did not use/gave up using huge pages for anonymous shared memory" might work well, I'm not sure, though...\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 20 Sep 2021 08:55:13 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/09/20 17:55, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> I was worried about the same thing as you.\n> So the attached patch gets the value of the kernel parameter vm.nr_hugepages,\n> and if it is the default value of zero, the log level is the same as before.\n> On the other hand, if any value is set, the level is set to LOG.\n\nProbably I understood your point. But isn't it more confusing to users?\nBecause whether the log message is output or not is changed depending on\nthe setting of the kernel parameter. 
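[Editorial note: for illustration only, the kernel setting discussed here is exposed through procfs on Linux. The following is a minimal standalone sketch of how a process can read it; it is not the patch's actual code, and the helper name is invented.]

```c
#include <stdio.h>

/*
 * Illustrative helper only (not from the discussed patch): read the
 * current value of the vm.nr_hugepages kernel parameter via procfs.
 * Returns -1 if the file cannot be read, e.g. on non-Linux platforms.
 */
long
read_nr_hugepages(void)
{
	FILE   *f = fopen("/proc/sys/vm/nr_hugepages", "r");
	long	nr = -1;

	if (f != NULL)
	{
		if (fscanf(f, "%ld", &nr) != 1)
			nr = -1;
		fclose(f);
	}
	return nr;
}
```

The patch under discussion uses this value only to choose a log level (the old DEBUG1 behavior when it is zero, LOG otherwise), which is the behavior being questioned here.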
For example, when vm.nr_hugepages=0\nand no log message about huge pages is output, users might easily misunderstand\nthat shared memory was successfully allocated with huge pages because\nthey saw no such log message.\n\nIMO we should leave the log message \"mmap(%zu) with MAP_HUGETLB failed...\"\nas it is if users don't like additional log message output whenever\nthe server restarts. In this case, instead, maybe it's better to provide GUC or\nsomething to report whether shared memory was successfully allocated\nwith huge pages or not.\n\nOTOH, if users can accept such additional log message, I think that it's\nless confusing to report something like \"disable huge pages ...\" whenever\nmmap() with huge pages fails. I still prefer \"disable huge pages ...\" to\n\"fall back ...\" as the log message, but if many people think the latter is\nbetter, I'd follow that.\n\nAnother idea is to output \"Anonymous shared memory was allocated with\n huge pages\" when it's successfully allocated with huge pages, and to output\n \"Anonymous shared memory was allocated without huge pages\"\n when it's successfully allocated without huge pages. I'm not sure if users\n may think even this message is noisy, though.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Sep 2021 02:03:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Sep 22, 2021 at 02:03:11AM +0900, Fujii Masao wrote:\n> Another idea is to output \"Anonymous shared memory was allocated with\n> huge pages\" when it's successfully allocated with huge pages, and to output\n> \"Anonymous shared memory was allocated without huge pages\"\n> when it's successfully allocated without huge pages. 
I'm not sure if users\n> may think even this message is noisy, though.\n\n+1\n\nMaybe it could show the page size instead of \"with\"/without:\n\"Shared memory allocated with 4k/1MB/1GB pages.\"\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Sep 2021 19:23:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Tue, 21 Sep 2021 19:23:22 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Sep 22, 2021 at 02:03:11AM +0900, Fujii Masao wrote:\n> > Another idea is to output \"Anonymous shared memory was allocated with\n> > huge pages\" when it's successfully allocated with huge pages, and to output\n> > \"Anonymous shared memory was allocated without huge pages\"\n> > when it's successfully allocated without huge pages. I'm not sure if users\n> > may think even this message is noisy, though.\n> \n> +1\n\n+1. Positive phrase looks better.\n\n> Maybe it could show the page size instead of \"with\"/without:\n> \"Shared memory allocated with 4k/1MB/1GB pages.\"\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Sep 2021 11:40:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hi, all.\nThank you for your comment.\n\n> Probably I understood your point. But isn't it more confusing to users?\nAs you say, I think the last patch was rather confusing. 
For this reason, I simply reconsidered it.\nThe attached patch just outputs a log like your advice on acquiring Huge Page.\nIt is possible to limit the log output trigger only when huge_pages=try, but is it better not to always output it?\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com] \nSent: Monday, September 27, 2021 11:40 AM\nTo: pryzby@telsasoft.com\nCc: masao.fujii@oss.nttdata.com; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org; rjuju123@gmail.com; tgl@sss.pgh.pa.us\nSubject: Re: Improve logging when using Huge Pages\n\nAt Tue, 21 Sep 2021 19:23:22 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Sep 22, 2021 at 02:03:11AM +0900, Fujii Masao wrote:\n> > Another idea is to output \"Anonymous shared memory was allocated \n> > with huge pages\" when it's successfully allocated with huge pages, \n> > and to output \"Anonymous shared memory was allocated without huge pages\"\n> > when it's successfully allocated without huge pages. I'm not sure \n> > if users may think even this message is noisy, though.\n> \n> +1\n\n+1. Positive phrase looks better.\n\n> Maybe it could show the page size instead of \"with\"/without:\n> \"Shared memory allocated with 4k/1MB/1GB pages.\"\n\n+1.\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 27 Sep 2021 08:21:22 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "+ ereport(LOG, (errmsg(\"Anonymous shared memory was allocated %s huge pages.\", with_hugepages ? \"with\" : \"without\")));\n\nYou shouldn't break a sentence into pieces like this, since it breaks\ntranslation. 
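[Editorial note: a standalone sketch of the translation hazard being described, using a stand-in for gettext() and an invented message catalog; this is not PostgreSQL code.]

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for gettext(): a tiny, invented English-to-German catalog. */
const char *
lookup(const char *msgid)
{
	if (strcmp(msgid, "Anonymous shared memory was allocated %s huge pages.") == 0)
		return "Anonymer Shared Memory wurde %s Huge Pages zugewiesen.";
	if (strcmp(msgid, "anonymous shared memory was allocated with huge pages") == 0)
		return "Anonymer Shared Memory wurde mit Huge Pages zugewiesen";
	if (strcmp(msgid, "anonymous shared memory was allocated without huge pages") == 0)
		return "Anonymer Shared Memory wurde ohne Huge Pages zugewiesen";
	return msgid;				/* no translation found */
}

/* Split sentence: the interpolated English word survives translation. */
void
log_split(char *out, size_t n, int with_hugepages)
{
	snprintf(out, n,
			 lookup("Anonymous shared memory was allocated %s huge pages."),
			 with_hugepages ? "with" : "without");
}

/* Whole sentences: each variant is a complete, translatable msgid. */
void
log_whole(char *out, size_t n, int with_hugepages)
{
	snprintf(out, n, "%s",
			 lookup(with_hugepages
					? "anonymous shared memory was allocated with huge pages"
					: "anonymous shared memory was allocated without huge pages"));
}
```

With the invented catalog above, log_split() yields "Anonymer Shared Memory wurde without Huge Pages zugewiesen.", a mixed-language message, while log_whole() translates cleanly because each complete sentence has its own message id.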
You don't want an untranslated \"without\" to appear in the middle\nof the translated message.\n\nThere are cases where a component *shouldn't* be translated, like this one:\nwhere \"numeric\" should not be translated.\n\nsrc/backend/utils/adt/numeric.c: errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\nsrc/backend/utils/adt/numeric.c- \"numeric\", str)));\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 21 Oct 2021 21:38:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hi,\nThank you for your comment.\nThe attached patch stops message splitting.\nThis patch also limits the timing of message output when huge_pages = try and HugePages is not used.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Justin Pryzby [mailto:pryzby@telsasoft.com] \nSent: Friday, October 22, 2021 11:38 AM\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\nCc: masao.fujii@oss.nttdata.com; pgsql-hackers@postgresql.org; rjuju123@gmail.com; tgl@sss.pgh.pa.us; Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nSubject: Re: Improve logging when using Huge Pages\n\n+ ereport(LOG, (errmsg(\"Anonymous shared memory was \n+ allocated %s huge pages.\", with_hugepages ? \"with\" : \"without\")));\n\nYou shouldn't break a sentence into pieces like this, since it breaks translation. 
You don't want an untranslated \"without\" to appear in the middle of the translated message.\n\nThere are cases where a component *shouldn't* be translated, like this one:\nwhere \"numeric\" should not be translated.\n\nsrc/backend/utils/adt/numeric.c: errmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\nsrc/backend/utils/adt/numeric.c- \"numeric\", str)));\n\n--\nJustin", "msg_date": "Wed, 27 Oct 2021 06:39:46 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "Hi,\n\nOn Wed, Oct 27, 2021 at 06:39:46AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Thank you for your comment.\n> The attached patch stops message splitting.\n> This patch also limits the timing of message output when huge_pages = try and HugePages is not used.\n\nThanks for updating the patch.\n\nI hope we've debated almost everything about its behavior, and it's nearly\nready :)\n\n+ } else if (!with_hugepages && huge_pages == HUGE_PAGES_TRY)\n+ ereport(LOG, (errmsg(\"Anonymous shared memory was allocated without huge pages.\")));\n\nI would write this as a separate \"if\". The preceding block is a terminal FATAL\nand it seems more intuitive that way. But it's up to you (and the committer).\n\nThe errmsg() text should not be capitalized and not end with a period.\nhttps://www.postgresql.org/docs/devel/error-style-guide.html\n\nThe parenthesis around (errmsg()) are not required since 18 months ago\n(e3a87b499). 
Since this change won't be backpatched, I think it's better to\nomit them.\n\nShould it include an errcode() ?\nERRCODE_INSUFFICIENT_RESOURCES ?\nERRCODE_OUT_OF_MEMORY ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 28 Oct 2021 17:05:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Oct 27, 2021 at 3:40 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n>\n> Hi,\n> Thank you for your comment.\n> The attached patch stops message splitting.\n> This patch also limits the timing of message output when huge_pages = try and HugePages is not used.\n>\n\nI've looked at the patch. Here are comments:\n\n if (huge_pages == HUGE_PAGES_TRY && ptr == MAP_FAILED)\n elog(DEBUG1, \"mmap(%zu) with MAP_HUGETLB\nfailed, huge pages disabled: %m\",\n allocsize);\n+ else\n+ with_hugepages = true;\n\nISTM the name with_hugepages could lead to confusion since it can be\ntrue even if mmap failed and huge_pages == HUGE_PAGES_ON.\n\nAlso, with the patch, the log message is emitted also during initdb\nand starting up in single user mode:\n\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default time zone ... Asia/Tokyo\ncreating configuration files ... ok\nrunning bootstrap script ... 2021-10-29 15:45:51.408 JST [55101] LOG:\nAnonymous shared memory was allocated without huge pages.\nok\nperforming post-bootstrap initialization ... 2021-10-29 15:45:53.326\nJST [55104] LOG: Anonymous shared memory was allocated without huge\npages.\nok\nsyncing data to disk ... ok\n\nWhich is noisy. 
Perhaps it's better to log it only when\nIsPostmasterEnvironment is true.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 29 Oct 2021 16:00:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/10/29 7:05, Justin Pryzby wrote:\n> Hi,\n> \n> On Wed, Oct 27, 2021 at 06:39:46AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>> Thank you for your comment.\n>> The attached patch stops message splitting.\n>> This patch also limits the timing of message output when huge_pages = try and HugePages is not used.\n\nThe log message should be reported even when huge_pages=off (and huge pages\nare not used)? Otherwise we cannot determine whether huge pages are actually\nused or not when no such log message is found in the server log.\n\nOr it's simpler and more intuitive to log the message \"Anonymous shared\nmemory was allocated with huge pages\" only when huge pages are successfully\nrequested? If that message is logged, we can determine that huge pages are\nused whatever the setting is. OTOH, if there is no such message, we can\ndetermine that huge pages are not used for some reasons, e.g., OS doesn't\nsupport huge pages, shared_memory_type is not set to mmap, etc.\n\n\n> + } else if (!with_hugepages && huge_pages == HUGE_PAGES_TRY)\n> + ereport(LOG, (errmsg(\"Anonymous shared memory was allocated without huge pages.\")));\n> \n> I would write this as a separate \"if\". 
The preceding block is a terminal FATAL\n> and it seems more intuitive that way.\n\nAgreed.\n\n\n> Should it include an errcode() ?\n> ERRCODE_INSUFFICIENT_RESOURCES ?\n> ERRCODE_OUT_OF_MEMORY ?\n\nProbably errcode is not necessary here because it's a log message,\nnot an error one?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Nov 2021 01:24:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/10/29 16:00, Masahiko Sawada wrote:\n> Which is noisy. Perhaps it's better to log it only when\n> IsPostmasterEnvironment is true.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Nov 2021 01:25:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Fujii-san, Sawada-san,\r\n\r\nThank you for your comment.\r\n\r\n> Also, with the patch, the log message is emitted also during initdb and starting up in single user mode:\r\n\r\nCertainly the log output when executing the initdb command was noise.\r\nThe attached patch reflects the comments and uses IsPostmasterEnvironment to suppress the output message.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Fujii Masao [mailto:masao.fujii@oss.nttdata.com] \r\nSent: Tuesday, November 2, 2021 1:25 AM\r\nTo: Masahiko Sawada <sawada.mshk@gmail.com>; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\r\nCc: pgsql-hackers@postgresql.org; rjuju123@gmail.com; tgl@sss.pgh.pa.us; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Justin Pryzby <pryzby@telsasoft.com>\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\n\r\n\r\nOn 
2021/10/29 16:00, Masahiko Sawada wrote:\r\n> Which is noisy. Perhaps it's better to log it only when \r\n> IsPostmasterEnvironment is true.\r\n\r\n+1\r\n\r\nRegards,\r\n\r\n--\r\nFujii Masao\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION", "msg_date": "Tue, 2 Nov 2021 09:31:33 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "\n\nOn 2021/11/02 18:31, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Fujii-san, Sawada-san,\n> \n> Thank you for your comment.\n> \n>> Also, with the patch, the log message is emitted also during initdb and starting up in single user mode:\n> \n> Certainly the log output when executing the initdb command was a noise.\n> The attached patch reflects the comments and uses IsPostmasterEnvironment to suppress the output message.\n\nThanks for updating the patch!\n\n+\t\tereport(IsPostmasterEnvironment ? LOG : NOTICE, (errmsg(\"Anonymous shared memory was allocated without huge pages.\")));\n\nThis change causes the log message to be output with NOTICE level\neven when IsPostmasterEnvironment is false. But do we really want\nto log that NOTICE message in that case? Instead, isn't it better\nto just output the log message with LOG level only when\nIsPostmasterEnvironment is true?\n\n\nJustin and I posted other comments upthread. 
Could you consider\nwhether it's worth applying those comments to the patch?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Nov 2021 23:35:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Fujii-san, \r\n\r\nThank you for your comment.\r\nAs advised by Justin, I modified the comment according to the style guide and split the if statement.\r\nAs you say, the NOTICE log was deleted as it may not be needed.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Fujii Masao [mailto:masao.fujii@oss.nttdata.com] \r\nSent: Tuesday, November 2, 2021 11:35 PM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; pgsql-hackers@postgresql.org; Masahiko Sawada <sawada.mshk@gmail.com>\r\nCc: rjuju123@gmail.com; tgl@sss.pgh.pa.us; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Justin Pryzby <pryzby@telsasoft.com>\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\n\r\n\r\nOn 2021/11/02 18:31, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\r\n> Fujii-san, Sawada-san,\r\n> \r\n> Thank you for your comment.\r\n> \r\n>> Also, with the patch, the log message is emitted also during initdb and starting up in single user mode:\r\n> \r\n> Certainly the log output when executing the initdb command was a noise.\r\n> The attached patch reflects the comments and uses IsPostmasterEnvironment to suppress the output message.\r\n\r\nThanks for updating the patch!\r\n\r\n+\t\tereport(IsPostmasterEnvironment ? LOG : NOTICE, (errmsg(\"Anonymous \r\n+shared memory was allocated without huge pages.\")));\r\n\r\nThis change causes the log message to be output with NOTICE level even when IsPostmasterEnvironment is false. But do we really want to log that NOTICE message in that case? 
Instead, isn't it better to just output the log message with LOG level only when IsPostmasterEnvironment is true?\r\n\r\n\r\nJustin and I posted other comments upthread. Could you consider whether it's worth applying those comments to the patch?\r\n\r\n\r\nRegards,\r\n\r\n--\r\nFujii Masao\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION", "msg_date": "Mon, 8 Nov 2021 12:37:48 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Nov 2, 2021 at 1:24 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/10/29 7:05, Justin Pryzby wrote:\n> > Hi,\n> >\n> > On Wed, Oct 27, 2021 at 06:39:46AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> >> Thank you for your comment.\n> >> The attached patch stops message splitting.\n> >> This patch also limits the timing of message output when huge_pages = try and HugePages is not used.\n>\n> The log message should be reported even when huge_pages=off (and huge pages\n> are not used)? Otherwise we cannot determine whether huge pages are actually\n> used or not when no such log message is found in the server log.\n>\n> Or it's simpler and more intuitive to log the message \"Anonymous shared\n> memory was allocated with huge pages\" only when huge pages are successfully\n> requested? If that message is logged, we can determine that huge pages are\n> used whatever the setting is. OTOH, if there is no such message, we can\n> determine that huge pages are not used for some reasons, e.g., OS doesn't\n> support huge pages, shared_memory_type is not set to mmap, etc.\n\nIf users want to know whether the shared memory is allocated with huge\npages, I think it’s more intuitive to emit the log only on success\nwhen huge_pages = on/try. 
On the other hand, I guess that users might\nwant to use the message to adjust vm.nr_hugepages when it is not\nallocated with huge pages. In this case, it’d be better to log the\nmessage on failure with the request memory size (or whatever reason\nfor the failure). That is, we end up logging such a message on failure\nwhen huge_pages = on/try.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 11 Nov 2021 14:44:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Sawada-san, Fujii-san,\r\n\r\nThank you for your reviews.\r\n\r\nIn a database with huge_pages = on / off explicitly set, if memory allocation fails, the instance cannot be started, so I think that additional logs are unnecessary.\r\nThe attached patch outputs the log only when huge_pages = try, and outputs the requested size if the acquisition of Huge Pages fails.\r\n\r\nI have a completely different approach, setting GUC shared_memory_size_in_huge_pages to zero if the Huge Pages acquisition fails. This GUC is currently calculated independently of getting Huge Pages. 
The attached patch does not include this specification.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Masahiko Sawada [mailto:sawada.mshk@gmail.com] \r\nSent: Thursday, November 11, 2021 2:45 PM\r\nTo: Fujii Masao <masao.fujii@oss.nttdata.com>\r\nCc: Justin Pryzby <pryzby@telsasoft.com>; Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; PostgreSQL-development <pgsql-hackers@postgresql.org>; Julien Rouhaud <rjuju123@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\nOn Tue, Nov 2, 2021 at 1:24 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n>\r\n>\r\n>\r\n> On 2021/10/29 7:05, Justin Pryzby wrote:\r\n> > Hi,\r\n> >\r\n> > On Wed, Oct 27, 2021 at 06:39:46AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\r\n> >> Thank you for your comment.\r\n> >> The attached patch stops message splitting.\r\n> >> This patch also limits the timing of message output when huge_pages = try and HugePages is not used.\r\n>\r\n> The log message should be reported even when huge_pages=off (and huge \r\n> pages are not used)? Otherwise we cannot determine whether huge pages \r\n> are actually used or not when no such log message is found in the server log.\r\n>\r\n> Or it's simpler and more intuitive to log the message \"Anonymous \r\n> shared memory was allocated with huge pages\" only when huge pages are \r\n> successfully requested? If that message is logged, we can determine \r\n> that huge pages are used whatever the setting is. OTOH, if there is no \r\n> such message, we can determine that huge pages are not used for some \r\n> reasons, e.g., OS doesn't support huge pages, shared_memory_type is not set to mmap, etc.\r\n\r\nIf users want to know whether the shared memory is allocated with huge pages, I think it’s more intuitive to emit the log only on success when huge_pages = on/try. 
On the other hand, I guess that users might want to use the message to adjust vm.nr_hugepages when it is not allocated with huge pages. In this case, it’d be better to log the message on failure with the request memory size (or whatever reason for the failure). That is, we end up logging such a message on failure when huge_pages = on/try.\r\n\r\nRegards,\r\n\r\n--\r\nMasahiko Sawada\r\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 22 Nov 2021 01:12:37 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/3310/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". 
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:45:23 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hello,\r\n\r\n> As discussed in [1], we're taking this opportunity to return some patchsets that don't appear to be getting enough reviewer interest.\r\nThank you for your helpful comments.\r\nAs you say, my patch doesn't seem to be of much interest to reviewers either.\r\nI will discard the patch I proposed this time and consider it again.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Jacob Champion <jchampion@timescale.com> \r\nSent: Tuesday, August 2, 2022 5:45 AM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; Masahiko Sawada <sawada.mshk@gmail.com>; Fujii Masao <masao.fujii@oss.nttdata.com>\r\nCc: Justin Pryzby <pryzby@telsasoft.com>; PostgreSQL-development <pgsql-hackers@postgresql.org>; Julien Rouhaud <rjuju123@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\nAs discussed in [1], we're taking this opportunity to return some patchsets that don't appear to be getting enough reviewer interest.\r\n\r\nThis is not a rejection, since we don't necessarily think there's anything unacceptable about the entry, but it differs from a standard \"Returned with Feedback\" in that there's probably not much actionable feedback at all. Rather than code changes, what this patch needs is more community interest. 
You might\r\n\r\n- ask people for help with your approach,\r\n- see if there are similar patches that your code could supplement,\r\n- get interested parties to agree to review your patch in a CF, or\r\n- possibly present the functionality in a way that's easier to review\r\n overall.\r\n\r\n(Doing these things is no guarantee that there will be interest, but it's hopefully better than endlessly rebasing a patchset that is not receiving any feedback from the community.)\r\n\r\nOnce you think you've built up some community support and the patchset is ready for review, you (or any interested party) can resurrect the patch entry by visiting\r\n\r\n https://commitfest.postgresql.org/38/3310/ \r\n\r\nand changing the status to \"Needs Review\", and then changing the status again to \"Move to next CF\". (Don't forget the second step; hopefully we will have streamlined this in the near future!)\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com \r\n", "msg_date": "Wed, 3 Aug 2022 08:42:01 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Aug 3, 2022 at 8:42 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n> > As discussed in [1], we're taking this opportunity to return some patchsets that don't appear to be getting enough reviewer interest.\n> Thank you for your helpful comments.\n> As you say, my patch doesn't seem to be of much interest to reviewers either.\n> I will discard the patch I proposed this time and consider it again.\n\nI wonder if the problem here is that people are reluctant to add noise\nto every starting system. 
There are people who have not configured\ntheir system and don't want to see that noise, and then some people\nhave configured their system and would like to know about it if it\ndoesn't work so they can be aware of that, but don't want to use \"off\"\nbecause they don't want a hard failure. Would it be better if there\nwere a new level \"try_log\" (or something), which only logs a message\nif it tries and fails?\n\n\n", "msg_date": "Thu, 3 Nov 2022 14:10:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Thanks for your comment. \r\nI understand that some people don't like noise log. However, I don't understand the feeling of disliking the one-line log that is output when the instance is started. \r\nIn both MySQL and Oracle Database, a log is output if it fails to acquire HugePages with the same behavior as huge_pages=try in PostgreSQL.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Thursday, November 3, 2022 10:10 AM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\r\nCc: Jacob Champion <jchampion@timescale.com>; Masahiko Sawada <sawada.mshk@gmail.com>; Fujii Masao <masao.fujii@oss.nttdata.com>; Justin Pryzby <pryzby@telsasoft.com>; PostgreSQL-development <pgsql-hackers@postgresql.org>; Julien Rouhaud <rjuju123@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\nSubject: Re: Improve logging when using Huge Pages\r\n\r\nOn Wed, Aug 3, 2022 at 8:42 PM Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\r\n> > As discussed in [1], we're taking this opportunity to return some patchsets that don't appear to be getting enough reviewer interest.\r\n> Thank you for your helpful comments.\r\n> As you say, my patch doesn't seem to be of much interest to reviewers either.\r\n> I will discard the patch 
I proposed this time and consider it again.\r\n\r\nI wonder if the problem here is that people are reluctant to add noise\r\nto every starting system. There are people who have not configured\r\ntheir system and don't want to see that noise, and then some people have configured their system and would like to know about it if it doesn't work so they can be aware of that, but don't want to use \"off\"\r\nbecause they don't want a hard failure. Would it be better if there were a new level \"try_log\" (or something), which only logs a message if it tries and fails?\r\n", "msg_date": "Fri, 4 Nov 2022 10:48:57 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Nov 3, 2022 at 8:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I wonder if the problem here is that people are reluctant to add noise\n> to every starting system. There are people who have not configured\n> their system and don't want to see that noise, and then some people\n> have configured their system and would like to know about it if it\n> doesn't work so they can be aware of that, but don't want to use \"off\"\n> because they don't want a hard failure. Would it be better if there\n> were a new level \"try_log\" (or something), which only logs a message\n> if it tries and fails?\n\nI think the best thing to do is change huge_pages='on' to log a WARNING and\nfallback to regular pages if the mapping fails. That way, both dev and prod\ncan keep the same settings, since 'on' will have both visibility and\nrobustness. I don't see a good reason to refuse to start -- seems like an\nanti-pattern.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Sun, 6 Nov 2022 14:04:29 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Sun, Nov 06, 2022 at 02:04:29PM +0700, John Naylor wrote:\n> On Thu, Nov 3, 2022 at 8:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > I wonder if the problem here is that people are reluctant to add noise\n> > to every starting system. There are people who have not configured\n> > their system and don't want to see that noise, and then some people\n> > have configured their system and would like to know about it if it\n> > doesn't work so they can be aware of that, but don't want to use \"off\"\n> > because they don't want a hard failure. Would it be better if there\n> > were a new level \"try_log\" (or something), which only logs a message\n> > if it tries and fails?\n> \n> I think the best thing to do is change huge_pages='on' to log a WARNING and\n> fallback to regular pages if the mapping fails. That way, both dev and prod\n> can keep the same settings, since 'on' will have both visibility and\n> robustness. 
I don't see a good reason to refuse to start -- seems like an\n> anti-pattern.\n\nI'm glad to see there's still discussion on this topic :)\n\nAnother idea is to add a RUNTIME_COMPUTED GUC to *display* the state of\nhuge pages, so I can stop parsing /proc/maps to find it.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Nov 2022 07:04:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hi,\n\nOn 2022-11-06 14:04:29 +0700, John Naylor wrote:\n> I think the best thing to do is change huge_pages='on' to log a WARNING and\n> fallback to regular pages if the mapping fails. That way, both dev and prod\n> can keep the same settings, since 'on' will have both visibility and\n> robustness. I don't see a good reason to refuse to start -- seems like an\n> anti-pattern.\n\nHow would 'on' still have robustness if it doesn't actually do anything other\nthan cause a WARNING? The use of huge pages can have very substantial effects\non memory usage and performance. And it's easy to just have huge_pages fail:\nanother program that started could have used huge pages, or some config\nvariable was changed to increase shared memory usage...\n\nI am strongly opposed to making 'on' fall back to not using huge pages. 
That's\nwhat 'try' is for.\n\nI know of people that scripted cluster start so that they start with 'on' and\nchange the system setting of the number of huge pages according to the error\nmessage.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Nov 2022 07:14:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-06 14:04:29 +0700, John Naylor wrote:\n>> I think the best thing to do is change huge_pages='on' to log a WARNING and\n>> fallback to regular pages if the mapping fails.\n\n> I am strongly opposed to making 'on' fall back to not using huge pages. That's\n> what 'try' is for.\n\nAgreed --- changing \"on\" to be exactly like \"try\" isn't an improvement.\nIf you want \"try\" semantics, choose \"try\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Nov 2022 10:56:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Nov 8, 2022 at 4:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-06 14:04:29 +0700, John Naylor wrote:\n> >> I think the best thing to do is change huge_pages='on' to log a WARNING and\n> >> fallback to regular pages if the mapping fails.\n>\n> > I am strongly opposed to making 'on' fall back to not using huge pages. That's\n> > what 'try' is for.\n>\n> Agreed --- changing \"on\" to be exactly like \"try\" isn't an improvement.\n> If you want \"try\" semantics, choose \"try\".\n\nAgreed, but how can we make the people who want a log message happy,\nwithout upsetting the people who don't want new log messages? Hence\nmy suggestion of a new level. 
How about try_verbose?\n\n\n", "msg_date": "Tue, 8 Nov 2022 11:34:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Tue, 8 Nov 2022 11:34:53 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Nov 8, 2022 at 4:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-11-06 14:04:29 +0700, John Naylor wrote:\n> > Agreed --- changing \"on\" to be exactly like \"try\" isn't an improvement.\n> > If you want \"try\" semantics, choose \"try\".\n> \n> Agreed, but how can we make the people who want a log message happy,\n> without upsetting the people who don't want new log messages? Hence\n> my suggestion of a new level. How about try_verbose?\n\nHonestly I don't come up with other users of the new\nlog-level. Another possible issue is it might be a bit hard for people\nto connect that level to huge_pages=try, whereas I think we shouldn't\nput a description about the concrete impact range of that log-level.\n\nI came up with an alternative idea that add a new huge_pages value\ntry_report or try_verbose, which tell postgresql to *always* report\nthe result of huge_pages = try.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Nov 2022 11:47:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Nov 09, 2022 at 11:47:57AM +0900, Kyotaro Horiguchi wrote:\n> Honestly I don't come up with other users of the new\n> log-level. 
Another possible issue is it might be a bit hard for people\n> to connect that level to huge_pages=try, whereas I think we shouldn't\n> put a description about the concrete impact range of that log-level.\n> \n> I came up with an alternative idea that add a new huge_pages value\n> try_report or try_verbose, which tell postgresql to *always* report\n> the result of huge_pages = try.\n\nHere is an extra idea for the bucket of ideas: switch the user-visible\nvalue of huge_pages to 'on' when we are at \"try\" but succeed in using\nhuge pages, and switch the visible value to \"off\". The idea of Justin\nin [1] to use an internal runtime-computed GUC sounds sensible, as well\n(say a boolean effective_huge_pages?).\n\n[1]: https://www.postgresql.org/message-id/20221106130426.GG16921@telsasoft.com\n--\nMichael", "msg_date": "Wed, 9 Nov 2022 14:04:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Nov 09, 2022 at 02:04:00PM +0900, Michael Paquier wrote:\n> On Wed, Nov 09, 2022 at 11:47:57AM +0900, Kyotaro Horiguchi wrote:\n> > Honestly I don't come up with other users of the new\n> > log-level. Another possible issue is it might be a bit hard for people\n> > to connect that level to huge_pages=try, whereas I think we shouldn't\n> > put a description about the concrete impact range of that log-level.\n> > \n> > I came up with an alternative idea that add a new huge_pages value\n> > try_report or try_verbose, which tell postgresql to *always* report\n> > the result of huge_pages = try.\n> \n> Here is an extra idea for the bucket of ideas: switch the user-visible\n> value of huge_pages to 'on' when we are at \"try\" but succeed in using\n> huge pages, and switch the visible value to \"off\". 
The idea of Justin\n> in [1] to use an internal runtime-computed GUC sounds sensible, as well\n> (say a boolean effective_huge_pages?).\n> \n> [1]: https://www.postgresql.org/message-id/20221106130426.GG16921@telsasoft.com\n> --\n> Michael\n\nMichael seemed to support this idea and nobody verbalized hatred of it,\nso I implemented it. In v15, we have shared_memory_size_in_huge_pages,\nso this adds effective_huge_pages.\n\nFeel free to suggest a better name.\n\n-- \nJustin", "msg_date": "Mon, 23 Jan 2023 19:21:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Hi,\n\nOn 2023-01-23 19:21:00 -0600, Justin Pryzby wrote:\n> Michael seemed to support this idea and nobody verbalized hatred of it,\n> so I implemented it. In v15, we have shared_memory_size_in_huge_pages,\n> so this adds effective_huge_pages.\n> \n> Feel free to suggest a better name.\n\nHm. Should this be a GUC? There's a reason shared_memory_size_in_huge_pages is\none - it's so it's accessible via -C. But here we could make it a function or\nwhatnot as well.\n\nI'm ok with this being a GUC, it doesn't feel elegant, but I suspect there's\nno realistic elegant answer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 17:33:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Jan 23, 2023 at 05:33:35PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-23 19:21:00 -0600, Justin Pryzby wrote:\n> > Michael seemed to support this idea and nobody verbalized hatred of it,\n> > so I implemented it. In v15, we have shared_memory_size_in_huge_pages,\n> > so this adds effective_huge_pages.\n> > \n> > Feel free to suggest a better name.\n> \n> Hm. Should this be a GUC? There's a reason shared_memory_size_in_huge_pages is\n> one - it's so it's accessible via -C. 
But here we could make it a function or\n> whatnot as well.\n\nI have no strong opinion, but a few minor arguments in favour of a GUC:\n\n - the implementation parallels huge_pages, huge_page_size, and\n   shared_memory_size_in_huge_pages;\n - it's short;\n - it's glob()able with the others:\n\npostgres=# \\dconfig *huge*\n List of configuration parameters\n            Parameter             | Value \n----------------------------------+-------\n effective_huge_pages             | off\n huge_pages                       | try\n huge_page_size                   | 0\n shared_memory_size_in_huge_pages | 12\n\n> I'm ok with this being a GUC, it doesn't feel elegant, but I suspect there's\n> no realistic elegant answer.\n\nBTW, I didn't realize it when I made the suggestion, nor when I wrote\nthe patch, but a GUC was implemented in the v2 patch.\nhttps://www.postgresql.org/message-id/TU4PR8401MB1152CB4FEB99658BC6B35543EECF9%40TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM\n\nThe original proposal was to change the elog(DEBUG1, \"mmap..\") to LOG,\nand the thread seems to have derailed out of concern for a hypothetical\nuser who was averse to an additional line of log messages during server\nstart.\n\nI don't share that concern, but GUC or function seems better anyway,\nsince the log message is not easily available (except maybe, sometimes,\nshortly after the server restart). 
I'm currently checking for huge\npages in a nagios script by grepping /proc/pid/smaps (I *could* make use\nof a logfile if that was available, but it's better if it's a persistent\nstate/variable).\n\nWhether it's a GUC or a function, I propose to name it hugepages_active.\nIf there's no objections, I'll add to the CF.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 24 Jan 2023 19:37:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Jan 24, 2023 at 07:37:29PM -0600, Justin Pryzby wrote:\n\n> BTW, I didn't realize it when I made the suggestion, nor when I wrote\n> the patch, but a GUC was implemented in the v2 patch.\n> https://www.postgresql.org/message-id/TU4PR8401MB1152CB4FEB99658BC6B35543EECF9%40TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM\n\n> Whether it's a GUC or a function, I propose to name it hugepages_active.\n> If there's no objections, I'll add to the CF.\n\nAs such, I re-opened the previous CF.\nhttps://commitfest.postgresql.org/38/3310/", "msg_date": "Mon, 30 Jan 2023 21:56:39 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On 2023-Jan-24, Justin Pryzby wrote:\n\n> On Mon, Jan 23, 2023 at 05:33:35PM -0800, Andres Freund wrote:\n\n> > I'm ok with this being a GUC, it doesn't feel elegant, but I suspect there's\n> > no realistic elegant answer.\n\n> Whether it's a GUC or a function, I propose to name it hugepages_active.\n> If there's no objections, I'll add to the CF.\n\nMaybe I misread the code (actually I only read the patch), but -- does\nit set active=true when huge_pages=on? I think the code only works when\nhuge_pages=try. 
That might be pretty confusing; I think it should say\n\"on\" in both cases.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"There are two moments in a man's life when he should not\nspeculate: when he can afford it and when he cannot\" (Mark Twain)\n\n\n", "msg_date": "Thu, 2 Feb 2023 16:53:37 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Feb 02, 2023 at 04:53:37PM +0100, Alvaro Herrera wrote:\n> Maybe I misread the code (actually I only read the patch), but -- does\n> it set active=true when huge_pages=on? I think the code only works when\n> huge_pages=try. That might be pretty confusing; I think it should say\n> \"on\" in both cases.\n\n+1\n\nAlso, while this is indeed a runtime-computed parameter, it won't be\ninitialized until after 'postgres -C' has been handled, so 'postgres -C'\nwill always report it as \"off\". 
However, I'm not sure it makes sense to\ncheck it with 'postgres -C' anyway since you want to know if the current\nserver is using huge pages.\n\nAt the moment, I think I'd vote for a new function instead of a GUC.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Feb 2023 14:56:34 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Feb 02, 2023 at 04:53:37PM +0100, Alvaro Herrera wrote:\n> On 2023-Jan-24, Justin Pryzby wrote:\n> > On Mon, Jan 23, 2023 at 05:33:35PM -0800, Andres Freund wrote:\n> > > I'm ok with this being a GUC, it doesn't feel elegant, but I suspect there's\n> > > no realistic elegant answer.\n> \n> > Whether it's a GUC or a function, I propose to name it hugepages_active.\n> > If there's no objections, I'll add to the CF.\n> \n> Maybe I misread the code (actually I only read the patch), but -- does\n> it set active=true when huge_pages=on? I think the code only works when\n> huge_pages=try. That might be pretty confusing; I think it should say\n> \"on\" in both cases.\n\nYes - I realized that too. There's no reason this GUC should be\ninaccurate when huge_pages=on. 
(I had hoped there would be a conflict\nneeding resolution before anyone else noticed.)\n\nI don't think it makes sense to run postgres -C huge_pages_active,\nhowever, so I see no issue that that would always return \"false\".\nIf need be, maybe the documentation could say \"indicates whether huge\npages are active for the running server\".\n\nDoes anybody else want to vote for a function rather than a\nRUNTIME_COMPUTED GUC ?\n\n-- \nJustin", "msg_date": "Wed, 8 Feb 2023 17:18:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On 2023-Feb-08, Justin Pryzby wrote:\n\n> I don't think it makes sense to run postgres -C huge_pages_active,\n> however, so I see no issue that that would always return \"false\".\n\nHmm, I would initialize it to return \"unknown\" rather than \"off\" — and\nmake sure it turns \"off\" at the appropriate time. Otherwise you're just\nmoving the confusion elsewhere.\n\n> If need be, maybe the documentation could say \"indicates whether huge\n> pages are active for the running server\".\n\nDunno, that seems way too subtle.\n\n> Does anybody else want to vote for a function rather than a\n> RUNTIME_COMPUTED GUC ?\n\nI don't think I'd like to have SELECT show_block_size() et al, so I'd\nrather not go that way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"What do the years matter? 
What really matters is to realize that,\nwhen all is said and done, the best age of life is being alive\" (Mafalda)\n\n\n", "msg_date": "Thu, 9 Feb 2023 10:40:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Feb 09, 2023 at 10:40:13AM +0100, Alvaro Herrera wrote:\n> On 2023-Feb-08, Justin Pryzby wrote:\n>> I don't think it makes sense to run postgres -C huge_pages_active,\n>> however, so I see no issue that that would always return \"false\".\n> \n> Hmm, I would initialize it to return \"unknown\" rather than \"off\" — and\n> make sure it turns \"off\" at the appropriate time. Otherwise you're just\n> moving the confusion elsewhere.\n\nI think this approach would address my concerns about using a GUC.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Feb 2023 11:29:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Feb 09, 2023 at 11:29:09AM -0800, Nathan Bossart wrote:\n> On Thu, Feb 09, 2023 at 10:40:13AM +0100, Alvaro Herrera wrote:\n> > On 2023-Feb-08, Justin Pryzby wrote:\n> >> I don't think it makes sense to run postgres -C huge_pages_active,\n> >> however, so I see no issue that that would always return \"false\".\n> > \n> > Hmm, I would initialize it to return \"unknown\" rather than \"off\" — and\n> > make sure it turns \"off\" at the appropriate time. Otherwise you're just\n> > moving the confusion elsewhere.\n> \n> I think this approach would address my concerns about using a GUC.\n\nDone that way. 
This also fixes the logic in win32_shmem.c.\n\n-- \nJustin", "msg_date": "Mon, 13 Feb 2023 17:22:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Feb 13, 2023 at 05:22:45PM -0600, Justin Pryzby wrote:\n> + Reports whether huge pages are in use by the current process.\n> + See <xref linkend=\"guc-huge-pages\"/> for more information.\n\nnitpick: Should this say \"server\" instead of \"current process\"?\n\n> +static char *huge_pages_active = \"unknown\"; /* dynamically set */\n\nnitpick: Does this need to be initialized here?\n\n> +\t{\n> +\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n> +\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n> +\t\t},\n> +\t\t&huge_pages_active,\n> +\t\t\"unknown\",\n> +\t\tNULL, NULL, NULL\n> +\t},\n\nI'm curious why you chose to make this a string instead of an enum. 
There\nmight be little practical difference, but since there are only three\npossible values, I wonder if it'd be better form to make it an enum.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 Feb 2023 20:18:52 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> On Mon, Feb 13, 2023 at 05:22:45PM -0600, Justin Pryzby wrote:\n> > + Reports whether huge pages are in use by the current process.\n> > + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> \n> nitpick: Should this say \"server\" instead of \"current process\"?\n\nIt should probably say \"instance\" :)\n\n> > +static char *huge_pages_active = \"unknown\"; /* dynamically set */\n> \n> nitpick: Does this need to be initialized here?\n\nNone of the GUCs' C vars need to be initialized, since the guc machinery\nwill do it. \n\n...but the convention is that they *are* initialized - and that's now\npartially enforced.\n\nSee:\nd9d873bac67047cfacc9f5ef96ee488f2cb0f1c3\n7d25958453a60337bcb7bcc986e270792c007ea4\na73952b795632b2cf5acada8476e7cf75857e9be\n\n> > +\t{\n> > +\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n> > +\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n> > +\t\t\tNULL,\n> > +\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n> > +\t\t},\n> > +\t\t&huge_pages_active,\n> > +\t\t\"unknown\",\n> > +\t\tNULL, NULL, NULL\n> > +\t},\n> \n> I'm curious why you chose to make this a string instead of an enum. There\n> might be little practical difference, but since there are only three\n> possible values, I wonder if it'd be better form to make it an enum.\n\nIt takes more code to write as an enum - see 002.txt. 
I'm not convinced\nthis is better.\n\nBut your comment made me fix its <type>, and reconsider the strings,\nwhich I changed to active={unknown/true/false} rather than {unk/on/off}.\nIt could also be active={unknown/yes/no}...\n\n-- \nJustin", "msg_date": "Tue, 14 Feb 2023 19:32:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n>> On Mon, Feb 13, 2023 at 05:22:45PM -0600, Justin Pryzby wrote:\n>> nitpick: Does this need to be initialized here?\n> \n> None of the GUCs' C vars need to be initialized, since the guc machinery\n> will do it. \n> \n> ...but the convention is that they *are* initialized - and that's now\n> partially enforced.\n> \n> See:\n> d9d873bac67047cfacc9f5ef96ee488f2cb0f1c3\n> 7d25958453a60337bcb7bcc986e270792c007ea4\n> a73952b795632b2cf5acada8476e7cf75857e9be\n\nI see. This looked a little strange to me because many of the other\nvariables are uninitialized. In a73952b, I see that we allow the variables\nfor string GUCs to be initialized to NULL. Anyway, this is only a nitpick.\nI don't feel strongly about it.\n\n>> I'm curious why you chose to make this a string instead of an enum. There\n>> might be little practical difference, but since there are only three\n>> possible values, I wonder if it'd be better form to make it an enum.\n> \n> It takes more code to write as an enum - see 002.txt. I'm not convinced\n> this is better.\n> \n> But your comment made me fix its <type>, and reconsider the strings,\n> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n> It could also be active={unknown/yes/no}...\n\nI think unknown/true/false is fine. 
I'm okay with using a string if no one\nelse thinks it should be an enum (or a bool).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 Feb 2023 10:13:17 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n>> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n>>> I'm curious why you chose to make this a string instead of an enum. There\n>>> might be little practical difference, but since there are only three\n>>> possible values, I wonder if it'd be better form to make it an enum.\n>> \n>> It takes more code to write as an enum - see 002.txt. I'm not convinced\n>> this is better.\n>> \n>> But your comment made me fix its <type>, and reconsider the strings,\n>> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n>> It could also be active={unknown/yes/no}...\n> \n> I think unknown/true/false is fine. I'm okay with using a string if no one\n> else thinks it should be an enum (or a bool).\n\nThere's been no response for this, so I guess we can proceed with a string\nGUC.\n\n+ Reports whether huge pages are in use by the current instance.\n+ See <xref linkend=\"guc-huge-pages\"/> for more information.\n\nI still think we should say \"server\" in place of \"current instance\" here.\n\n+\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n+\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n+\t\t\tNULL,\n+\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n+\t\t},\n\nI don't think we need to use GUC_RUNTIME_COMPUTED. 
'postgres -C' seems to\nalways report \"unknown\" for this GUC, so all this would do is cause that\ncommand to error unnecessarily when the server is running.\n\nIt might be worth documenting exactly what \"unknown\" means. IIUC you'll\nonly ever see \"on\" or \"off\" via SHOW or pg_settings, which doesn't seem\ntremendously obvious.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 Mar 2023 14:16:56 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> > On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> >> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> >>> I'm curious why you chose to make this a string instead of an enum. There\n> >>> might be little practical difference, but since there are only three\n> >>> possible values, I wonder if it'd be better form to make it an enum.\n> >> \n> >> It takes more code to write as an enum - see 002.txt. I'm not convinced\n> >> this is better.\n> >> \n> >> But your comment made me fix its <type>, and reconsider the strings,\n> >> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n> >> It could also be active={unknown/yes/no}...\n> > \n> > I think unknown/true/false is fine. 
I'm okay with using a string if no one\n> > else thinks it should be an enum (or a bool).\n> \n> There's been no response for this, so I guess we can proceed with a string\n> GUC.\n\nJust happened to see this and I'm not really a fan of this being a\nstring when it's pretty clear that's not what it actually is.\n\n> + Reports whether huge pages are in use by the current instance.\n> + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> \n> I still think we should say \"server\" in place of \"current instance\" here.\n\nWe certainly use 'server' a lot more in config.sgml than we do\n'instance'. \"currently running server\" might be closer to how we\ndescribe a running PG system in other parts (we talk about \"currently\nrunning server processes\", \"while the server is running\", \"When running\na standby server\", \"when the server is running\"; \"instance\" is used much\nless and seems to more typically refer to 'state of files on disk' in my\nreading vs. 'actively running process' though there's some of each).\n\n> +\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n> +\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n> +\t\t},\n> \n> I don't think we need to use GUC_RUNTIME_COMPUTED. 'postgres -C' seems to\n> always report \"unknown\" for this GUC, so all this would do is cause that\n> command to error unnecessarily when the server is running.\n\n... or we could consider adjusting things to actually try the mmap() and\nfind out if we'd end up with huge pages or not. Naturally we'd only\nwant to do that if the server isn't running though and erroring if it is\nwould be perfectly correct. Either that or just refusing to provide it\nby an error or other approach (see below) seems entirely reasonable.\n\n> It might be worth documenting exactly what \"unknown\" means. 
IIUC you'll\n> only ever see \"on\" or \"off\" via SHOW or pg_settings, which doesn't seem\n> tremendously obvious.\n\nIf we could get rid of that case and just make this a boolean, that\nseems like it'd really be the best answer.\n\nTo that end- perhaps this isn't appropriate as a GUC at all? Why not\nhave this just be a system information function? Functionally it really\nprovides the same info- if the server is running then you get back\neither true or false, if it's not then you can't call it but that's\nhardly different from an unknown or error result.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Mar 2023 09:34:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Nathan Bossart (nathandbossart@gmail.com) wrote:\n> > On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> > > On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> > >> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> > >>> I'm curious why you chose to make this a string instead of an enum. There\n> > >>> might be little practical difference, but since there are only three\n> > >>> possible values, I wonder if it'd be better form to make it an enum.\n> > >> \n> > >> It takes more code to write as an enum - see 002.txt. I'm not convinced\n> > >> this is better.\n> > >> \n> > >> But your comment made me fix its <type>, and reconsider the strings,\n> > >> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n> > >> It could also be active={unknown/yes/no}...\n> > > \n> > > I think unknown/true/false is fine. 
I'm okay with using a string if no one\n> > > else thinks it should be an enum (or a bool).\n> > \n> > There's been no response for this, so I guess we can proceed with a string\n> > GUC.\n> \n> Just happened to see this and I'm not really a fan of this being a\n> string when it's pretty clear that's not what it actually is.\n\nI don't understand what you mean by that.\nWhy do you say it isn't a string ?\n\n> > + Reports whether huge pages are in use by the current instance.\n> > + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> > \n> > I still think we should say \"server\" in place of \"current instance\" here.\n> \n> We certainly use 'server' a lot more in config.sgml than we do\n> 'instance'. \"currently running server\" might be closer to how we\n> describe a running PG system in other parts (we talk about \"currently\n> running server processes\", \"while the server is running\", \"When running\n> a standby server\", \"when the server is running\"; \"instance\" is used much\n> less and seems to more typically refer to 'state of files on disk' in my\n> reading vs. 'actively running process' though there's some of each).\n\nI called it \"instance\" since the GUC has no meaning when it's not\nrunning. I'm fine to rename it to \"running server\".\n\n> > +\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n> > +\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n> > +\t\t\tNULL,\n> > +\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n> > +\t\t},\n> > \n> > I don't think we need to use GUC_RUNTIME_COMPUTED. 'postgres -C' seems to\n> > always report \"unknown\" for this GUC, so all this would do is cause that\n> > command to error unnecessarily when the server is running.\n> \n> ... or we could consider adjusting things to actually try the mmap() and\n> find out if we'd end up with huge pages or not.\n\nThat seems like a bad idea, since it might work one moment and fail one\nmoment later. 
If we could tell in advance whether it was going to work,\nwe wouldn't be here, and probably also wouldn't have invented\nhuge_pages=true.\n\n> > It might be worth documenting exactly what \"unknown\" means. IIUC you'll\n> > only ever see \"on\" or \"off\" via SHOW or pg_settings, which doesn't seem\n> > tremendously obvious.\n> \n> If we could get rid of that case and just make this a boolean, that\n> seems like it'd really be the best answer.\n> \n> To that end- perhaps this isn't appropriate as a GUC at all? Why not\n> have this just be a system information function? Functionally it really\n> provides the same info- if the server is running then you get back\n> either true or false, if it's not then you can't call it but that's\n> hardly different from an unknown or error result.\n\nWe talked about making it a function ~6 weeks ago.\n\nIs there an agreement to use a function, instead ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Mar 2023 10:38:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On 2023-Mar-09, Justin Pryzby wrote:\n\n> On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n\n> > > + Reports whether huge pages are in use by the current instance.\n> > > + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> > > \n> > > I still think we should say \"server\" in place of \"current instance\" here.\n> > \n> > We certainly use 'server' a lot more in config.sgml than we do\n> > 'instance'. \"currently running server\" might be closer to how we\n> > describe a running PG system in other parts (we talk about \"currently\n> > running server processes\", \"while the server is running\", \"When running\n> > a standby server\", \"when the server is running\"; \"instance\" is used much\n> > less and seems to more typically refer to 'state of files on disk' in my\n> > reading vs. 
'actively running process' though there's some of each).\n> \n> I called it \"instance\" since the GUC has no meaning when it's not\n> running. I'm fine to rename it to \"running server\".\n\nI'd rather make all these other places use \"instance\" instead. We used\nto consider these terms interchangeable, but since we introduced the\nglossary to unify the terminology, they are no longer supposed to be.\nA server (== a machine) can contain many instances, and each individual\ninstance in the server could be using huge pages or not.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n", "msg_date": "Thu, 9 Mar 2023 18:51:21 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 10:38:56AM -0600, Justin Pryzby wrote:\n> On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n>> To that end- perhaps this isn't appropriate as a GUC at all? Why not\n>> have this just be a system information function? Functionally it really\n>> provides the same info- if the server is running then you get back\n>> either true or false, if it's not then you can't call it but that's\n>> hardly different from an unknown or error result.\n> \n> We talked about making it a function ~6 weeks ago.\n> \n> Is there an agreement to use a function, instead ?\n\nIf we're tallying votes, please count me as +1 for a system information\nfunction. I think that nicely sidesteps having to return \"unknown\" in some\ncases, which I worry will just cause confusion. 
IMHO it is much more\nobvious that the value refers to the current server if you have to log in\nand execute a function to see it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Mar 2023 11:46:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 06:51:21PM +0100, Alvaro Herrera wrote:\n> I'd rather make all these other places use \"instance\" instead. We used\n> to consider these terms interchangeable, but since we introduced the\n> glossary to unify the terminology, they are no longer supposed to be.\n> A server (== a machine) can contain many instances, and each individual\n> instance in the server could be using huge pages or not.\n\nAh, good to know. I've always considered \"server\" in this context to mean\nthe server process(es) for a single instance, but I can see the value in\nhaving different terminology to clearly distinguish the process(es) from\nthe machine.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Mar 2023 11:52:31 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n> > * Nathan Bossart (nathandbossart@gmail.com) wrote:\n> > > On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> > > > On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> > > >> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> > > >>> I'm curious why you chose to make this a string instead of an enum. 
There\n> > > >>> might be little practical difference, but since there are only three\n> > > >>> possible values, I wonder if it'd be better form to make it an enum.\n> > > >> \n> > > >> It takes more code to write as an enum - see 002.txt. I'm not convinced\n> > > >> this is better.\n> > > >> \n> > > >> But your comment made me fix its <type>, and reconsider the strings,\n> > > >> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n> > > >> It could also be active={unknown/yes/no}...\n> > > > \n> > > > I think unknown/true/false is fine. I'm okay with using a string if no one\n> > > > else thinks it should be an enum (or a bool).\n> > > \n> > > There's been no response for this, so I guess we can proceed with a string\n> > > GUC.\n> > \n> > Just happened to see this and I'm not really a fan of this being a\n> > string when it's pretty clear that's not what it actually is.\n> \n> I don't understand what you mean by that.\n> Why do you say it isn't a string ?\n\nBecause it's clearly a yes/no, either the server is currently running\nwith huge pages, or it isn't. That's the definition of a boolean.\nSure, anything can be cast to text but when there's a data type that\nfits better, that's almost uniformly better to use.\n\n> > > + Reports whether huge pages are in use by the current instance.\n> > > + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> > > \n> > > I still think we should say \"server\" in place of \"current instance\" here.\n> > \n> > We certainly use 'server' a lot more in config.sgml than we do\n> > 'instance'. \"currently running server\" might be closer to how we\n> > describe a running PG system in other parts (we talk about \"currently\n> > running server processes\", \"while the server is running\", \"When running\n> > a standby server\", \"when the server is running\"; \"instance\" is used much\n> > less and seems to more typically refer to 'state of files on disk' in my\n> > reading vs. 
'actively running process' though there's some of each).\n> \n> I called it \"instance\" since the GUC has no meaning when it's not\n> running. I'm fine to rename it to \"running server\".\n\nGreat, I do think that would match better with the rest of the\ndocumentation.\n\n> > > +\t\t{\"huge_pages_active\", PGC_INTERNAL, PRESET_OPTIONS,\n> > > +\t\t\tgettext_noop(\"Indicates whether huge pages are in use.\"),\n> > > +\t\t\tNULL,\n> > > +\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED\n> > > +\t\t},\n> > > \n> > > I don't think we need to use GUC_RUNTIME_COMPUTED. 'postgres -C' seems to\n> > > always report \"unknown\" for this GUC, so all this would do is cause that\n> > > command to error unnecessarily when the server is running.\n> > \n> > ... or we could consider adjusting things to actually try the mmap() and\n> > find out if we'd end up with huge pages or not.\n> \n> That seems like a bad idea, since it might work one moment and fail one\n> moment later. If we could tell in advance whether it was going to work,\n> we wouldn't be here, and probably also wouldn't have invented\n> huge_pages=true.\n\nSure it might ... but I tend to disagree that it's actually all that\nlikely for it to end up being as inconsistent as that and it'd be nice\nto be able to see if the server will end up successfully starting (for\nthis part, at least), or not, when configured with huge pages set to on,\nor if starting with 'try' is likely to result in it actually using huge\npages, or not.\n\n> > > It might be worth documenting exactly what \"unknown\" means. IIUC you'll\n> > > only ever see \"on\" or \"off\" via SHOW or pg_settings, which doesn't seem\n> > > tremendously obvious.\n> > \n> > If we could get rid of that case and just make this a boolean, that\n> > seems like it'd really be the best answer.\n> > \n> > To that end- perhaps this isn't appropriate as a GUC at all? Why not\n> > have this just be a system information function? 
Functionally it really\n> > provides the same info- if the server is running then you get back\n> > either true or false, if it's not then you can't call it but that's\n> > hardly different from an unknown or error result.\n> \n> We talked about making it a function ~6 weeks ago.\n\nOh, good, glad I'm not the only one to have thought of that.\n\n> Is there an agreement to use a function, instead ?\n\nLooking back at the arguments for having it be a GUC ... I just don't\nreally see any of them as very strong. Not that I feel super strongly\nabout it being a function either, but it's certainly not a configuration\nvariable and it also isn't really available with postgres -C (and\ntherefore doesn't actually go along with how the *size GUCs work). It's\nliterally information about the running system that the user might be\ncurious about ... and that sure seems to fit pretty cleanly under\n'System Information Functions'.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Mar 2023 15:02:29 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2023-Mar-09, Justin Pryzby wrote:\n> > On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n> > > > + Reports whether huge pages are in use by the current instance.\n> > > > + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> > > > \n> > > > I still think we should say \"server\" in place of \"current instance\" here.\n> > > \n> > > We certainly use 'server' a lot more in config.sgml than we do\n> > > 'instance'. 
\"currently running server\" might be closer to how we\n> > > describe a running PG system in other parts (we talk about \"currently\n> > > running server processes\", \"while the server is running\", \"When running\n> > > a standby server\", \"when the server is running\"; \"instance\" is used much\n> > > less and seems to more typically refer to 'state of files on disk' in my\n> > > reading vs. 'actively running process' though there's some of each).\n> > \n> > I called it \"instance\" since the GUC has no meaning when it's not\n> > running. I'm fine to rename it to \"running server\".\n> \n> I'd rather make all these other places use \"instance\" instead. We used\n> to consider these terms interchangeable, but since we introduced the\n> glossary to unify the terminology, they are no longer supposed to be.\n> A server (== a machine) can contain many instances, and each individual\n> instance in the server could be using huge pages or not.\n\nPerhaps, but then the references to instance should probably also be\nchanged since there's certainly some that are referring to a set of data\nfiles and not to backend processes, eg:\n\n######\nThe <literal>shutdown</literal> setting is useful to have the instance\nready at the exact replay point desired. The instance will still be\nable to replay more WAL records (and in fact will have to replay WAL\nrecords since the last checkpoint next time it is started).\n######\n\nNot sure about just changing one thing at a time though or using the\n'right' term when introducing things while having the 'wrong' term be\nused next to it. Also not saying that this patch should be responsible\nfor fixing everything either. 
'Server' in the glossary does explicitly\nsay it could be used when referring to an 'instance' too though, so\n'running server' doesn't seem to be entirely wrong.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Mar 2023 15:15:29 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 03:02:29PM -0500, Stephen Frost wrote:\n> * Justin Pryzby (pryzby@telsasoft.com) wrote:\n> > On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n> > > * Nathan Bossart (nathandbossart@gmail.com) wrote:\n> > > > On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> > > > > On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> > > > >> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> > > > >>> I'm curious why you chose to make this a string instead of an enum. There\n> > > > >>> might be little practical difference, but since there are only three\n> > > > >>> possible values, I wonder if it'd be better form to make it an enum.\n> > > > >> \n> > > > >> It takes more code to write as an enum - see 002.txt. I'm not convinced\n> > > > >> this is better.\n> > > > >> \n> > > > >> But your comment made me fix its <type>, and reconsider the strings,\n> > > > >> which I changed to active={unknown/true/false} rather than {unk/on/off}.\n> > > > >> It could also be active={unknown/yes/no}...\n> > > > > \n> > > > > I think unknown/true/false is fine. 
I'm okay with using a string if no one\n> > > > > else thinks it should be an enum (or a bool).\n> > > > \n> > > > There's been no response for this, so I guess we can proceed with a string\n> > > > GUC.\n> > > \n> > > Just happened to see this and I'm not really a fan of this being a\n> > > string when it's pretty clear that's not what it actually is.\n> > \n> > I don't understand what you mean by that.\n> > Why do you say it isn't a string ?\n> \n> Because it's clearly a yes/no, either the server is currently running\n> with huge pages, or it isn't. That's the definition of a boolean.\n\nI originally implemented it as a boolean, and I changed it in response\nto the feedback that postgres -C huge_pages_active should return\n\"unknown\".\n\n> > Is there an agreement to use a function, instead ?\n\nAlvaro was -1 on using a function\nAndres is +0 (?)\nNathan is +1\nStephen is +1\n\nI'm -0.5, but I reimplemented it as a function. I hope that helps it to\nprogress. Please include a suggestion if there's better place for the\nfunction or global var.\n\n-- \nJustin", "msg_date": "Mon, 13 Mar 2023 15:03:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Greetings,\n\nOn Mon, Mar 13, 2023 at 21:03 Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Mar 09, 2023 at 03:02:29PM -0500, Stephen Frost wrote:\n> > * Justin Pryzby (pryzby@telsasoft.com) wrote:\n> > > On Thu, Mar 09, 2023 at 09:34:10AM -0500, Stephen Frost wrote:\n> > > > * Nathan Bossart (nathandbossart@gmail.com) wrote:\n> > > > > On Wed, Feb 15, 2023 at 10:13:17AM -0800, Nathan Bossart wrote:\n> > > > > > On Tue, Feb 14, 2023 at 07:32:56PM -0600, Justin Pryzby wrote:\n> > > > > >> On Mon, Feb 13, 2023 at 08:18:52PM -0800, Nathan Bossart wrote:\n> > > > > >>> I'm curious why you chose to make this a string instead of an\n> enum. 
There\n> > > > > >>> might be little practical difference, but since there are only\n> three\n> > > > > >>> possible values, I wonder if it'd be better form to make it an\n> enum.\n> > > > > >>\n> > > > > >> It takes more code to write as an enum - see 002.txt. I'm not\n> convinced\n> > > > > >> this is better.\n> > > > > >>\n> > > > > >> But your comment made me fix its <type>, and reconsider the\n> strings,\n> > > > > >> which I changed to active={unknown/true/false} rather than\n> {unk/on/off}.\n> > > > > >> It could also be active={unknown/yes/no}...\n> > > > > >\n> > > > > > I think unknown/true/false is fine. I'm okay with using a\n> string if no one\n> > > > > > else thinks it should be an enum (or a bool).\n> > > > >\n> > > > > There's been no response for this, so I guess we can proceed with\n> a string\n> > > > > GUC.\n> > > >\n> > > > Just happened to see this and I'm not really a fan of this being a\n> > > > string when it's pretty clear that's not what it actually is.\n> > >\n> > > I don't understand what you mean by that.\n> > > Why do you say it isn't a string ?\n> >\n> > Because it's clearly a yes/no, either the server is currently running\n> > with huge pages, or it isn't. That's the definition of a boolean.\n>\n> I originally implemented it as a boolean, and I changed it in response\n> to the feedback that postgres -C huge_pages_active should return\n> \"unknown\".\n\n\nI really don’t see how that’s at all useful.\n\n> > Is there an agreement to use a function, instead ?\n>\n> Alvaro was -1 on using a function\n\n\nI don’t entirely get that argument (select thisfunc(); is much worse than\nshow thisguc; ..? 
Also, the former is easier to use with other functions\nand such, as you don’t have to do current_setting(‘thisguc’)…).\n\nAndres is +0 (?)\n\n\nKinda felt like this was closer to +0.5 or more, for my part anyway.\n\nNathan is +1\n> Stephen is +1\n>\n> I'm -0.5,\n\n\nWhy..?\n\nbut I reimplemented it as a function.\n\n\nThanks!\n\n I hope that helps it to\n> progress. Please include a suggestion if there's better place for the\n> function or global var.\n\n\nWill try to give it a look tomorrow.\n\nThanks again!\n\nStephen", "msg_date": "Mon, 13 Mar 2023 21:33:31 +0100", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Mon, 13 Mar 2023 21:33:31 +0100, Stephen Frost <sfrost@snowman.net> wrote in \n> > On Thu, Mar 09, 2023 at 03:02:29PM -0500, Stephen Frost wrote:\n> > > * Justin Pryzby (pryzby@telsasoft.com) wrote:\n> > > Is there an agreement to use a function, instead ?\n> >\n> > Alvaro was -1 on using a function\n> \n> \n> I don’t entirely get that argument (select thisfunc(); is much worse than\n> show thisguc; ..? Also, the former is easier to use with other functions\n> and such, as you don’t have to do current_setting(‘thisguc’)…).\n> \n> Andres is +0 (?)\n> \n> \n> Kinda felt like this was closer to +0.5 or more, for my part anyway.\n> \n> Nathan is +1\n> > Stephen is +1\n> >\n> > I'm -0.5,\n> \n> \n> Why..?\n\nIMHO, it appears that we use GUC \"internal\" variables for the\nlifespan-long numbers of a postmaster process or an instance.\nExamples of such variables include shared_memory_size and\ns_m_s_in_huge_pages, integer_datetimes and data_checksums. I'm\nuncertain about in_hot_standby, as it can change during a postmaster's\nlifetime. However, pg_is_in_recovery() provides essentially the same\ninformation.\n\nRegarding huge_page_active, its value remains constant throughout a\npostmaster's lifespan. In this regard, GUC may be a better fit for\nthis information. The issue with using GUC for this value is that the\npostgres command cannot report the final value via the -C option,\nwhich may be the reason for the third alternative \"unknown\".\n\nI slightly prefer using a function for this, as if GUC is used, it can\nonly return \"unknown\" for the command \"postgres -C\nhuge_page_active\". 
However, apart from this advantage, I prefer using\na GUC for this information.\n\nIf we implement it as a function, I suggest naming it\n\"pg_huge_page_is_active\" or something similar (the use of \"is\" is\nsignificant here) to follow the naming convention used in\npg_is_in_recovery().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Mar 2023 14:02:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Mar 14, 2023 at 02:02:19PM +0900, Kyotaro Horiguchi wrote:\n> Regarding huge_page_active, its value remains constant throughout a\n> postmaster's lifespan. In this regard, GUC may be a better fit for\n> this information. The issue with using GUC for this value is that the\n> postgres command cannot report the final value via the -C option,\n> which may be the reason for the third alternative \"unknown\".\n> \n> I slightly prefer using a function for this, as if GUC is used, it can\n> only return \"unknown\" for the command \"postgres -C\n> huge_page_active\". However, apart from this advantage, I prefer using\n> a GUC for this information.\n\nThe main advantage of a read-only GUC over a function is that users\nwould not need to start a postmaster to know if huge pages would be\nactive or not. 
This is the main reason why a GUC would be a better\nfit, in my opinion, because it makes for a cheaper check, while still\nallowing a SQL query to check the value of the GUC.\n--\nMichael", "msg_date": "Mon, 20 Mar 2023 13:54:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Mar 20, 2023 at 01:54:46PM +0900, Michael Paquier wrote:\n> The main advantage of a read-only GUC over a function is that users\n> would not need to start a postmaster to know if huge pages would be\n> active or not. This is the main reason why a GUC would be a better\n> fit, in my opinion, because it makes for a cheaper check, while still\n> allowing a SQL query to check the value of the GUC.\n\n[ Should have read more carefully ]\n\n.. Which is something you cannot do with -C because mmap() happens\nafter the runtime-computed logic for postgres -C. It does not sound\nright to do the mmap() for a GUC check, so indeed a function may be\nmore adapted rather than move mmap() call a bit earlier in the\npostmaster startup sequence.\n--\nMichael", "msg_date": "Mon, 20 Mar 2023 14:03:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 14, 2023 at 02:02:19PM +0900, Kyotaro Horiguchi wrote:\n>> I slightly prefer using a function for this, as if GUC is used, it can\n>> only return \"unknown\" for the command \"postgres -C\n>> huge_page_active\". However, apart from this advantage, I prefer using\n>> a GUC for this information.\n\n> The main advantage of a read-only GUC over a function is that users\n> would not need to start a postmaster to know if huge pages would be\n> active or not.\n\nI'm confused here, because Horiguchi-san is saying that that\nwon't work. 
I've not checked the code lately, but I think that\n\"postgres -C var\" prints its results before actually attempting\nto establish shared memory, so I suspect Horiguchi-san is right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Mar 2023 01:09:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Mar 20, 2023 at 01:09:09AM -0400, Tom Lane wrote:\n> I'm confused here, because Horiguchi-san is saying that that\n> won't work. I've not checked the code lately, but I think that\n> \"postgres -C var\" prints its results before actually attempting\n> to establish shared memory, so I suspect Horiguchi-san is right.\n\nYes, I haven't read correctly through. Sorry for the noise.\n--\nMichael", "msg_date": "Mon, 20 Mar 2023 14:17:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Mar 20, 2023 at 02:03:13PM +0900, Michael Paquier wrote:\n> On Mon, Mar 20, 2023 at 01:54:46PM +0900, Michael Paquier wrote:\n> > The main advantage of a read-only GUC over a function is that users\n> > would not need to start a postmaster to know if huge pages would be\n> > active or not. This is the main reason why a GUC would be a better\n> > fit, in my opinion, because it makes for a cheaper check, while still\n> > allowing a SQL query to check the value of the GUC.\n> \n> [ Should have read more carefully ]\n> \n> .. Which is something you cannot do with -C because mmap() happens\n> after the runtime-computed logic for postgres -C. 
It does not sound\n> right to do the mmap() for a GUC check, so indeed a function may be\n> more adapted rather than move mmap() call a bit earlier in the\n> postmaster startup sequence.\n\nOn Mon, Mar 20, 2023 at 02:17:33PM +0900, Michael Paquier wrote:\n> On Mon, Mar 20, 2023 at 01:09:09AM -0400, Tom Lane wrote:\n> > I'm confused here, because Horiguchi-san is saying that that\n> > won't work. I've not checked the code lately, but I think that\n> > \"postgres -C var\" prints its results before actually attempting\n> > to establish shared memory, so I suspect Horiguchi-san is right.\n> \n> Yes, I haven't read correctly through. Sorry for the noise.\n\nYou set this patch to \"waiting on author\" twice. Would you let me know\nwhat I could do to help progress the patch? Right now, I have no idea.\n\nMost recently, you said it'd be better implemented as a GUC to allow\nusing -C, but then recanted because -C doesn't work for this (which is\nwhy I implemented it as a string back on 2023-02-08). Which is why I\nreset its status on 2023-03-20.\n\n2023-03-22 01:36:58 \tMichael Paquier (michael-kun) \tNew status: Waiting on Author\n2023-03-20 13:05:32 \tJustin Pryzby (justinpryzby) \tNew status: Needs review\n2023-03-20 05:03:53 \tMichael Paquier (michael-kun) \tNew status: Waiting on Author\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Mar 2023 21:19:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Mar 21, 2023 at 09:19:41PM -0500, Justin Pryzby wrote:\n> You set this patch to \"waiting on author\" twice. Would you let me know\n> what I could do to help progress the patch? 
Right now, I have no idea.\n\nMy mistake, I've been looking at an incorrect version of the patch.\nThanks for correcting me here.\n\nI have read through the proposed v5 of the patch, that seems to be the\nlatest one available:\nhttps://www.postgresql.org/message-id/ZA+Bpk/6LcYiUXnh@telsasoft.com\n\nI have noted something.. For the WIN32 case, we have that:\n\n+++ b/src/backend/port/win32_shmem.c\n@@ -327,6 +327,8 @@ retry:\n Sleep(1000);\n continue;\n }\n+\n+ huge_pages_active = ((flProtect & SEC_LARGE_PAGES) != 0);\n break;\n\nAre you sure that this is correct? This is set in\nPGSharedMemoryCreate(), part of CreateSharedMemoryAndSemaphores() in\nthe startup sequence that creates the shmem segment. However, for a\nnormal backend created by EXEC_BACKEND, SubPostmasterMain() reattaches\nto an existing shared memory segment, so we don't go through the\ncreation path that would set huge_pages_active for the process just\nstarted, (note that InitPostmasterChild() switches IsUnderPostmaster,\nbypassing the shmem segment creation).\n--\nMichael", "msg_date": "Wed, 22 Mar 2023 16:37:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Mar 22, 2023 at 04:37:01PM +0900, Michael Paquier wrote:\n> I have noted something.. For the WIN32 case, we have that:\n> \n> +++ b/src/backend/port/win32_shmem.c\n> @@ -327,6 +327,8 @@ retry:\n> Sleep(1000);\n> continue;\n> }\n> +\n> + huge_pages_active = ((flProtect & SEC_LARGE_PAGES) != 0);\n> break;\n> \n> Are you sure that this is correct? This is set in\n> PGSharedMemoryCreate(), part of CreateSharedMemoryAndSemaphores() in\n> the startup sequence that creates the shmem segment. 
However, for a\n> normal backend created by EXEC_BACKEND, SubPostmasterMain() reattaches\n> to an existing shared memory segment, so we don't go through the\n> creation path that would set huge_pages_active for the process just\n> started, (note that InitPostmasterChild() switches IsUnderPostmaster,\n> bypassing the shmem segment creation).\n\nWow, good point. I think to make it work we'd need put\nhuge_pages_active into BackendParameters and handle it in\nsave_backend_variables(). If so, that'd be strong argument for using a\nGUC, which already has all the necessary infrastructure for exposing the\nserver's state.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 22 Mar 2023 17:18:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Wed, Mar 22, 2023 at 05:18:28PM -0500, Justin Pryzby wrote:\n> Wow, good point. I think to make it work we'd need put\n> huge_pages_active into BackendParameters and handle it in\n> save_backend_variables(). If so, that'd be strong argument for using a\n> GUC, which already has all the necessary infrastructure for exposing the\n> server's state.\n\nI am afraid so, duplicating an existing infrastructure for a need like\nthe one of this thread is not really appealing.\n--\nMichael", "msg_date": "Thu, 23 Mar 2023 07:23:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Thu, 23 Mar 2023 07:23:28 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Mar 22, 2023 at 05:18:28PM -0500, Justin Pryzby wrote:\n> > Wow, good point. I think to make it work we'd need put\n> > huge_pages_active into BackendParameters and handle it in\n> > save_backend_variables(). 
If so, that'd be strong argument for using a\n> > GUC, which already has all the necessary infrastructure for exposing the\n> > server's state.\n> \n> I am afraid so, duplicating an existing infrastructure for a need like\n> the one of this thread is not really appealing.\n\nWouldn't storing the value in the shared memory itself work? Though,\nthat space does become almost dead for the server's lifetime...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Mar 2023 17:25:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 23, 2023 at 05:25:46PM +0900, Kyotaro Horiguchi wrote:\n> Wouldn't storing the value in the shared memory itself work? Though,\n> that space does become almost dead for the server's lifetime...\n\nI'm sure it's possible, but it's also not worth writing a special\nimplementation just to handle huge_pages_active, which is better written\nin 30 lines than in 300 lines.\n\nIf we needed to avoid using a GUC, maybe it'd work to add\nhuge_pages_active to PGShmemHeader. But \"avoid using gucs at all costs\"\nisn't the goal, and using a GUC parallels all the similar, and related\nthings without needing to allocate extra bits of shared_memory.\n\nOn Thu, Mar 23, 2023 at 07:23:28AM +0900, Michael Paquier wrote:\n> On Wed, Mar 22, 2023 at 05:18:28PM -0500, Justin Pryzby wrote:\n> > Wow, good point. I think to make it work we'd need put\n> > huge_pages_active into BackendParameters and handle it in\n> > save_backend_variables(). 
If so, that'd be strong argument for using a\n> > GUC, which already has all the necessary infrastructure for exposing the\n> > server's state.\n> \n> I am afraid so, duplicating an existing infrastructure for a need like\n> the one of this thread is not really appealing.\n\nThis goes back to using a GUC, and:\n\n - removes GUC_RUNTIME_COMPUTED, since that causes a useless error when\n using -C if the server is running, rather than successfully printing\n \"unknown\".\n - I also renamed it from huge_pages_active = {true,false,unknown} to\n huge_pages_STATUS = {on,off,unknown}. This parallels huge_pages,\n which is documented to accept values on/off/try. Also, true/false\n isn't how bools are displayed.\n - I realized that the rename suggested implementing it as an enum GUC,\n and re-using the existing HUGE_PAGES_{ON,OFF} values (and adding an\n \"UNKNOWN\"). Maybe this also avoids Stephen's earlier objection to\n using a string ?\n\nI mis-used cirrusci to check that the GUC does work correctly under\nwindows.\n\nIf there's continuing aversions to using a GUC, please say so, and try\nto suggest an alternate implementation you think would be justified.\nIt'd need to work under windows with EXEC_BACKEND.\n\n-- \nJustin", "msg_date": "Thu, 23 Mar 2023 20:50:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 23, 2023 at 08:50:50PM -0500, Justin Pryzby wrote:\n> I'm sure it's possible, but it's also not worth writing a special\n> implementation just to handle huge_pages_active, which is better written\n> in 30 lines than in 300 lines.\n> \n> If we needed to avoid using a GUC, maybe it'd work to add\n> huge_pages_active to PGShmemHeader. 
But \"avoid using gucs at all costs\"\n> isn't the goal, and using a GUC parallels all the similar, and related\n> things without needing to allocate extra bits of shared_memory.\n\nYeah, I kind of agree that the complications are not appealing\ncompared to the use case. FWIW, I am fine with just using \"unknown\"\neven under -C because we don't know the status without the mmap()\ncall. And we don't want to assign what would be potentially a bunch\nof memory when running that.\n\n> This goes back to using a GUC, and:\n> \n> - removes GUC_RUNTIME_COMPUTED, since that causes a useless error when\n> using -C if the server is running, rather than successfully printing\n> \"unknown\".\n> - I also renamed it from huge_pages_active = {true,false,unknown} to\n> huge_pages_STATUS = {on,off,unknown}. This parallels huge_pages,\n> which is documented to accept values on/off/try. Also, true/false\n> isn't how bools are displayed.\n> - I realized that the rename suggested implementing it as an enum GUC,\n> and re-using the existing HUGE_PAGES_{ON,OFF} values (and adding an\n> \"UNKNOWN\"). Maybe this also avoids Stephen's earlier objection to\n> using a string ?\n\nhuge_pages_status is OK here, as is reusing the enum struct to avoid\nan unnecessary duplication.\n\n> I mis-used cirrusci to check that the GUC does work correctly under\n> windows.\n\nYou mean that you abused of it in some custom branch running the CI on\ngithub? If I may ask, what did you do to make sure that huge pages\nare set when re-attaching a backend to a shmem area?\n\n> If there's continuing aversions to using a GUC, please say so, and try\n> to suggest an alternate implementation you think would be justified.\n> It'd need to work under windows with EXEC_BACKEND.\n\nLooking at the patch, it seems like that this should work even for\nEXEC_BACKEND on *nix when a backend reattaches.. 
It may be better to\nbe sure of that, as well, if it has not been tested.\n\n+++ b/src/backend/port/win32_shmem.c\n@@ -327,6 +327,10 @@ retry:\n Sleep(1000);\n continue;\n }\n+\n+ SetConfigOption(\"huge_pages_status\", (flProtect & SEC_LARGE_PAGES) ?\n+ \"on\" : \"off\", PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT)\n\nPerhaps better to just move that at the end of PGSharedMemoryCreate()\nfor WIN32?\n\n+ <varlistentry id=\"guc-huge-pages-status\" xreflabel=\"huge_pages_status\">\n+ <term><varname>huge_pages_status</varname> (<type>enum</type>)\n+ <indexterm>\n+ <primary><varname>huge_pages_status</varname> configuration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Reports the state of huge pages in the current instance.\n+ See <xref linkend=\"guc-huge-pages\"/> for more information.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nThe patch needs to provide more details about the unknown state, IMO,\nto make it crystal-clear to the users what are the limitations of this\nGUC, especially regarding the fact that this is useful when \"try\"-ing\nto allocate huge pages, and that the value can only be consulted after\nthe server has been started.\n--\nMichael", "msg_date": "Tue, 28 Mar 2023 09:35:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Mar 28, 2023 at 09:35:30AM +0900, Michael Paquier wrote:\n> The patch needs to provide more details about the unknown state, IMO,\n> to make it crystal-clear to the users what are the limitations of this\n> GUC, especially regarding the fact that this is useful when \"try\"-ing\n> to allocate huge pages, and that the value can only be consulted after\n> the server has been started.\n\nThis is still unanswered? 
Perhaps at this stage we'd better consider\nthat for v17 so as we have more time to agree on the user interface?\n--\nMichael", "msg_date": "Thu, 6 Apr 2023 11:06:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Mar 28, 2023 at 09:35:30AM +0900, Michael Paquier wrote:\n> On Thu, Mar 23, 2023 at 08:50:50PM -0500, Justin Pryzby wrote:\n> \n> You mean that you abused of it in some custom branch running the CI on\n> github? If I may ask, what did you do to make sure that huge pages\n> are set when re-attaching a backend to a shmem area?\n\nYes. I hijacked a tap test to first run \"show huge_pages_active\" and then\nalso caused it to fail, such that I could check its output.\n\nhttps://cirrus-ci.com/task/6309510885670912\nhttps://cirrus-ci.com/task/6613427737591808\n\n> > If there's continuing aversions to using a GUC, please say so, and try\n> > to suggest an alternate implementation you think would be justified.\n> > It'd need to work under windows with EXEC_BACKEND.\n> \n> Looking at the patch, it seems like that this should work even for\n> EXEC_BACKEND on *nix when a backend reattaches.. It may be better to\n> be sure of that, as well, if it has not been tested.\n\nArg, no - it wasn't working. 
I added an assert to check that a running\nserver won't output \"unknown\".\n\n> +++ b/src/backend/port/win32_shmem.c\n> @@ -327,6 +327,10 @@ retry:\n> Sleep(1000);\n> continue;\n> }\n> +\n> + SetConfigOption(\"huge_pages_status\", (flProtect & SEC_LARGE_PAGES) ?\n> + \"on\" : \"off\", PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT)\n> \n> Perhaps better to just move that at the end of PGSharedMemoryCreate()\n> for WIN32?\n\nMaybe - I don't know.\n\n> + <varlistentry id=\"guc-huge-pages-status\" xreflabel=\"huge_pages_status\">\n> + <term><varname>huge_pages_status</varname> (<type>enum</type>)\n> + <indexterm>\n> + <primary><varname>huge_pages_status</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Reports the state of huge pages in the current instance.\n> + See <xref linkend=\"guc-huge-pages\"/> for more information.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> \n> The patch needs to provide more details about the unknown state, IMO,\n> to make it crystal-clear to the users what are the limitations of this\n> GUC, especially regarding the fact that this is useful when \"try\"-ing\n> to allocate huge pages, and that the value can only be consulted after\n> the server has been started.\n\nI'm not sure I agree. It can be confusing (even harmful) to overspecify the\ndocumentation. I used the word \"instance\" specifically to refer to a running\nserver. Anyone querying the status of huge pages is going to need to\nunderstand that it doesn't make sense to ask about the memory state of a server\nthat's not running. Actually, I'm having trouble imagining that anybody would\never run postgres -C huge_page_status except if they wondered whether it might\nmisbehave. If what I've written is inadequate, could you propose something ?\n\n-- \nJustin\n\nPS. I hadn't updated this thread because my last message from you (on\nanother thread) indicated that you'd gotten busy. 
I'm hoping you'll\nrespond to these other inquiries when you're able.\n\nhttps://www.postgresql.org/message-id/ZCUZLiCeXq4Im7OG%40telsasoft.com\nhttps://www.postgresql.org/message-id/ZCUKL22GutwGrrZk%40telsasoft.com", "msg_date": "Thu, 6 Apr 2023 16:57:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 23, 2023 at 05:25:46PM +0900, Kyotaro Horiguchi wrote:\n> Wouldn't storing the value in the shared memory itself work? Though,\n> that space does become almost dead for the server's lifetime...\n\nSure, it would work. However, we'd still need an interface for the\nextra function. At this point, a GUC with an unknown state is kind of\nOK for me. Anyway, where would you stick this state?\n--\nMichael", "msg_date": "Tue, 11 Apr 2023 15:17:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "At Tue, 11 Apr 2023 15:17:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 23, 2023 at 05:25:46PM +0900, Kyotaro Horiguchi wrote:\n> > Wouldn't storing the value in the shared memory itself work? Though,\n> > that space does become almost dead for the server's lifetime...\n> \n> Sure, it would work. However, we'd still need an interface for the\n> extra function. At this point, a GUC with an unknown state is kind of\n> OK for me. Anyway, where would you stick this state?\n\n(Digging memory..)\n\nSorry for confusion but I didn't mean to stick to the function. 
Just\nI thought that some people seem to dislike having the third state for\nthe should-be-boolean variable.\n\nSo, I'm okay with GUC, having \"unknown\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 11 Apr 2023 16:41:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Mar 23, 2023 at 07:23:28AM +0900, Michael Paquier wrote:\n> On Wed, Mar 22, 2023 at 05:18:28PM -0500, Justin Pryzby wrote:\n>> Wow, good point. I think to make it work we'd need put\n>> huge_pages_active into BackendParameters and handle it in\n>> save_backend_variables(). If so, that'd be strong argument for using a\n>> GUC, which already has all the necessary infrastructure for exposing the\n>> server's state.\n> \n> I am afraid so, duplicating an existing infrastructure for a need like\n> the one of this thread is not really appealing.\n\nAFAICT this would involve adding a bool to BackendParameters and using it\nin save_backend_variables() and restore_backend_variables(), which is an\nadditional 3 lines of code. That doesn't sound too bad to me, but perhaps\nI am missing something.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 20 Apr 2023 14:16:17 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Thu, Apr 20, 2023 at 02:16:17PM -0700, Nathan Bossart wrote:\n> AFAICT this would involve adding a bool to BackendParameters and using it\n> in save_backend_variables() and restore_backend_variables(), which is an\n> additional 3 lines of code. 
That doesn't sound too bad to me, but perhaps\n> I am missing something.\n\nAppending more information to BackendParameters would be OK, still\nthis would require the extra SQL function to access it, which is\nsomething that pg_settings is able to equally offer access to if\nusing a GUC.\n--\nMichael", "msg_date": "Tue, 2 May 2023 11:17:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, May 02, 2023 at 11:17:50AM +0900, Michael Paquier wrote:\n> On Thu, Apr 20, 2023 at 02:16:17PM -0700, Nathan Bossart wrote:\n>> AFAICT this would involve adding a bool to BackendParameters and using it\n>> in save_backend_variables() and restore_backend_variables(), which is an\n>> additional 3 lines of code. That doesn't sound too bad to me, but perhaps\n>> I am missing something.\n> \n> Appending more information to BackendParameters would be OK, still\n> this would require the extra SQL function to access it, which is\n> something that pg_settings is able to equally offer access to if\n> using a GUC.\n\nFair enough. I know I've been waffling in the GUC versus function\ndiscussion, but FWIW v7 of the patch looks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Jun 2023 14:37:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Jun 12, 2023 at 02:37:15PM -0700, Nathan Bossart wrote:\n> Fair enough. 
I know I've been waffling in the GUC versus function\n> discussion, but FWIW v7 of the patch looks reasonable to me.\n\n+ Assert(strcmp(\"unknown\", GetConfigOption(\"huge_pages_status\", false, false)) != 0);\n\nNot sure that there is anything to gain with this assertion in\nCreateSharedMemoryAndSemaphores(), because this is pretty much what\ncheck_GUC_init() looks after?\n\n@@ -627,6 +627,9 @@ CreateAnonymousSegment(Size *size)\n }\n #endif\n \n+ SetConfigOption(\"huge_pages_status\", ptr == MAP_FAILED ? \"off\" : \"on\",\n+ PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT);\n\nThe choice of this location is critical, because this is just *after*\nwe've tried huge pages, but *before* doing the fallback mmap() call\nwhen the huge page allocation was on \"try\". I think that this\ndeserves a comment.\n\n@@ -327,6 +327,10 @@ retry:\n Sleep(1000);\n continue;\n }\n+\n+ SetConfigOption(\"huge_pages_status\", (flProtect & SEC_LARGE_PAGES) ?\n+ \"on\" : \"off\", PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT);\n\nHmm. I still think that it is cleaner to move that at the end of\nPGSharedMemoryCreate() for the WIN32 case. 
There are also a few FATALs\nin-between, which would make SetConfigOption() useless if there is an\nin-flight problem.\n\nDon't we need to update save_backend_variables() and add an entry\nin BackendParameters to make other processes launched by EXEC_BACKEND\ninherit the status value set?\n--\nMichael", "msg_date": "Tue, 13 Jun 2023 14:50:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Jun 13, 2023 at 02:50:30PM +0900, Michael Paquier wrote:\n> Don't we need to update save_backend_variables() and add an entry\n> in BackendParameters to make other processes launched by EXEC_BACKEND\n> inherit the status value set?\n\nI thought this was handled by read/write_nondefault_variables().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Jun 2023 23:11:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Mon, Jun 12, 2023 at 11:11:14PM -0700, Nathan Bossart wrote:\n> On Tue, Jun 13, 2023 at 02:50:30PM +0900, Michael Paquier wrote:\n>> Don't we need to update save_backend_variables() and add an entry\n>> in BackendParameters to make other processes launched by EXEC_BACKEND\n>> inherit the status value set?\n> \n> I thought this was handled by read/write_nondefault_variables().\n\nAh, you are right. I forgot this part. 
That should be OK.\n--\nMichael", "msg_date": "Tue, 13 Jun 2023 15:35:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Jun 13, 2023 at 02:50:30PM +0900, Michael Paquier wrote:\n> + Assert(strcmp(\"unknown\", GetConfigOption(\"huge_pages_status\", false, false)) != 0);\n> \n> Not sure that there is anything to gain with this assertion in\n> CreateSharedMemoryAndSemaphores(), because this is pretty much what\n> check_GUC_init() looks after?\n\nThere was a second thing that bugged me here. Would it be worth\nadding some checks on huge_pages_status to make sure that it is never\nreported as unknown when the server is up and running? I am not sure\nwhat would be the best location for that because there is nothing\nspecific to huge pages in the tests yet, but authentication/ with\n005_sspi.pl and a second one would do the job?\n--\nMichael", "msg_date": "Wed, 14 Jun 2023 09:15:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Jun 13, 2023 at 02:50:30PM +0900, Michael Paquier wrote:\n> On Mon, Jun 12, 2023 at 02:37:15PM -0700, Nathan Bossart wrote:\n> > Fair enough. 
I know I've been waffling in the GUC versus function\n> > discussion, but FWIW v7 of the patch looks reasonable to me.\n> \n> + Assert(strcmp(\"unknown\", GetConfigOption(\"huge_pages_status\", false, false)) != 0);\n> \n> Not sure that there is anything to gain with this assertion in\n> CreateSharedMemoryAndSemaphores(), because this is pretty much what\n> check_GUC_init() looks after?\n\nIt seems like you misread the assertion, so I added a comment about it.\nIndeed, the assertion addresses the other question you asked later.\n\nThat's what I already commented about, and the reason I found it\ncompelling not to use a boolean.\n\nOn Thu, Apr 06, 2023 at 04:57:58PM -0500, Justin Pryzby wrote:\n> I added an assert to check that a running server won't output\n> \"unknown\".\n\nOn Wed, Jun 14, 2023 at 09:15:35AM +0900, Michael Paquier wrote:\n> There was a second thing that bugged me here. Would it be worth\n> adding some checks on huge_pages_status to make sure that it is never\n> reported as unknown when the server is up and running?\n\n-- \nJustin", "msg_date": "Tue, 20 Jun 2023 18:44:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" }, { "msg_contents": "On Tue, Jun 20, 2023 at 06:44:20PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 13, 2023 at 02:50:30PM +0900, Michael Paquier wrote:\n>> On Mon, Jun 12, 2023 at 02:37:15PM -0700, Nathan Bossart wrote:\n>> > Fair enough. 
I know I've been waffling in the GUC versus function\n>> > discussion, but FWIW v7 of the patch looks reasonable to me.\n>> \n>> + Assert(strcmp(\"unknown\", GetConfigOption(\"huge_pages_status\", false, false)) != 0);\n>> \n>> Not sure that there is anything to gain with this assertion in\n>> CreateSharedMemoryAndSemaphores(), because this is pretty much what\n>> check_GUC_init() looks after?\n> \n> It seems like you misread the assertion, so I added a comment about it.\n> Indeed, the assertion addresses the other question you asked later.\n> \n> That's what I already commented about, and the reason I found it\n> compelling not to use a boolean.\n\nApologies for the late reply here.\n\nAt the end, I am on board with the addition of this assertion and its\nposition after PGSharedMemoryCreate().\n\nI would also move the SetConfigOption() for the WIN32 path after we\nhave passed all the checks. There are a few FATALs that can be\ntriggered so it would be a waste to call it if we are going to fail\nthe shmem creation in this path.\n\nI could not resist adding two checks in the TAP tests to make sure\nthat we don't report unknown. 
Perhaps that's not necessary, but that\n> would provide coverage in a more broader way, and these are cheap.\n> \n> I have run one indentation, while on it.\n> \n> Note to self: check that manually on Windows.\n\nI have spent a few hours on that, running more tests with\n-DEXEC_BACKEND, including Windows and macos, and applied it.\n--\nMichael", "msg_date": "Thu, 6 Jul 2023 15:20:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve logging when using Huge Pages" } ]
[ { "msg_contents": "Hi All,\n\n I'm very happy to announce that I now work for Supabase [1]. They\nhave hired me so that I can participate in, and contribute to the\nPostgres community.\n\n I'm announcing it here in the hopes that more companies feel\nencouraged to contribute to Postgres. For those who don't know my past\nwork and involvement in the Postgres community, please see the\n'PostgreSQL RDBMS' section in my resume [2] (on page 4).\n\n I'm deeply indebted to Supabase for giving me this opportunity to\nwork with, and for the Postgres community.\n\n Following is the statement by Paul (CEO) and Anthony (CTO), the\nco-founders of Supabase:\n\n Supabase is a PostgreSQL hosting service that makes PostgreSQL\nincredibly easy to use. Since our inception in 2020 we've benefited\nhugely from the work of the PostgreSQL community.\n\n We've been long-time advocates of PostgreSQL, and we're now in a\nposition to contribute back in a tangible way. We're hiring Gurjeet\nwith the explicit goal of working on PostgreSQL community\ncontributions. 
We're excited to welcome Gurjeet to the team at\nSupabase.\n\n[1]: https://supabase.io/\n[2]: https://gurjeet.singh.im/GurjeetResume.pdf\n\nPS: Hacker News announcement is at https://news.ycombinator.com/item?id=\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Mon, 30 Aug 2021 22:53:35 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Returning to Postgres community work" }, { "msg_contents": "On Mon, Aug 30, 2021 at 10:53 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> PS: Hacker News announcement is at https://news.ycombinator.com/item?id=\n\nhttps://news.ycombinator.com/item?id=28364406\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Mon, 30 Aug 2021 22:56:44 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On 8/31/21 1:53 AM, Gurjeet Singh wrote:\n> I'm very happy to announce that I now work for Supabase [1]. They\n> have hired me so that I can participate in, and contribute to the\n> Postgres community.\n\nWelcome back! :-)\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Tue, 31 Aug 2021 08:05:45 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On Tue, Aug 31, 2021 at 8:04 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Aug-30, Gurjeet Singh wrote:\n>\n> > I'm very happy to announce that I now work for Supabase [1]. They\n> > have hired me so that I can participate in, and contribute to the\n> > Postgres community.\n>\n> Hey Gurjeet, welcome back. Glad to hear you've found a good spot.\n\nThank you!\n\n> You know what I've heard? 
That your index advisor module languishes\n> unmaintained and that there's no shortage of people wishing to use it.\n\nNow there's a masterclass in making someone feel great and guilty in\nthe same sentence ;-)\n\n> Heck, we spent a lot of back-and-forth in the spanish mailing list\n> with somebody building a super obsolete version of Postgres just so that\n> they could compile your index advisor. I dunno, if you have some spare\n> time, maybe updating that one would be a valuable contribution from\n> users' perspective.\n\nAye-aye Capn' :-)\n\nEDB folks reached out to me a few months ago to assign a license to\nthe project, which I did and it is now a Postgres-licensed project\n[1].\n\nGiven the above, it is safe to assume that this tool is at least being\nmaintained by EDB, at least internally for their customers. I would\nrequest them to contribute the changes back in the open.\n\nRegardless of that, please rest assured that I will work on making it\ncompatible with the current supported versions of Postgres. Lack of\ntime is not an excuse anymore :-)\n\nThanks for bringing this to my attention!\n\n[1]: https://github.com/gurjeet/pg_adviser/blob/master/LICENSE\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Tue, 31 Aug 2021 10:02:03 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On Tue, Aug 31, 2021 at 11:24 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> Hi All,\n>\n> I'm very happy to announce that I now work for Supabase [1]. They\n> have hired me so that I can participate in, and contribute to the\n> Postgres community.\n>\n\nCongratulations! 
Glad to hear this news.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Sep 2021 14:01:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On Wed, Sep 1, 2021 at 1:02 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Tue, Aug 31, 2021 at 8:04 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > You know what I've heard? That your index advisor module languishes\n> > unmaintained and that there's no shortage of people wishing to use it.\n>\n> Now there's a masterclass in making someone feel great and guilty in\n> the same sentence ;-)\n>\n> > Heck, we spent a lot of back-and-forth in the spanish mailing list\n> > with somebody building a super obsolete version of Postgres just so that\n> > they could compile your index advisor. I dunno, if you have some spare\n> > time, maybe updating that one would be a valuable contribution from\n> > users' perspective.\n>\n> Aye-aye Capn' :-)\n>\n> EDB folks reached out to me a few months ago to assign a license to\n> the project, which I did and it is now a Postgres-licensed project\n> [1].\n>\n> Given the above, it is safe to assume that this tool is at least being\n> maintained by EDB, at least internally for their customers. I would\n> request them to contribute the changes back in the open.\n>\n> Regardless of that, please rest assured that I will work on making it\n> compatible with the current supported versions of Postgres. Lack of\n> time is not an excuse anymore :-)\n\nFor the record we created an index adviser, which can be used either\nwith powa user interface (which requires a bit more effort to setup\nbut gives a lot of additional performance info) or a standalone one in\nSQL using only pg_qualstats exension.\n\nUnlike most advisers it's using the predicates sampled from the actual\nworkload rather than with a per-single-query basis to come up with its\nsuggestion. 
As a result it can give better results as it can e.g.\nsuggest multi-column indexes to optimize multiple queries at once\nrather than suggesting multiple partially redundant indexes for each\nquery. The UI version can also check all the suggested indexes using\nhypopg to verify if they're sensible and also give a rough idea on how\nmuch the queries can benefit from it. You can see a naive example at\n[1].\n\nNote that this is compatible with all postgres version down to 9.4.\n\n[1]: https://powa.readthedocs.io/en/latest/_images/hypopg_db1.png\n\n\n", "msg_date": "Wed, 1 Sep 2021 17:19:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On Tue, Aug 31, 2021 at 11:24 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> Hi All,\n>\n> I'm very happy to announce that I now work for Supabase [1]. They\n> have hired me so that I can participate in, and contribute to the\n> Postgres community.\n>\n\nWelcome back Gurjeet.\n\n\n>\n> I'm announcing it here in the hopes that more companies feel\n> encouraged to contribute to Postgres. For those who don't know my past\n> work and involvement in the Postgres community, please see the\n> 'PostgreSQL RDBMS' section in my resume [2] (on page 4).\n>\n> I'm deeply indebted to Supabase for giving me this opportunity to\n> work with, and for the Postgres community.\n>\n> Following is the statement by Paul (CEO) and Anthony (CTO), the\n> co-founders of Supabase:\n>\n> Supabase is a PostgreSQL hosting service that makes PostgreSQL\n> incredibly easy to use. Since our inception in 2020 we've benefited\n> hugely from the work of the PostgreSQL community.\n>\n> We've been long-time advocates of PostgreSQL, and we're now in a\n> position to contribute back in a tangible way. We're hiring Gurjeet\n> with the explicit goal of working on PostgreSQL community\n> contributions. 
We're excited to welcome Gurjeet to the team at\n> Supabase.\n>\n> [1]: https://supabase.io/\n> [2]: https://gurjeet.singh.im/GurjeetResume.pdf\n>\n> PS: Hacker News announcement is at https://news.ycombinator.com/item?id=\n>\n> Best regards,\n> --\n> Gurjeet Singh http://gurjeet.singh.im/\n>\n>\n>\n\n-- \nRushabh Lathia", "msg_date": "Mon, 11 Oct 2021 22:04:44 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Returning to Postgres community work" }, { "msg_contents": "On Tue, Aug 31, 2021 at 10:02 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> On Tue, Aug 31, 2021 at 8:04 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > You know what I've heard? That your index advisor module languishes\n> > unmaintained and that there's no shortage of people wishing to use it.\n>\n> Now there's a masterclass in making someone feel great and guilty in\n> the same sentence ;-)\n>\n> > Heck, we spent a lot of back-and-forth in the spanish mailing list\n> > with somebody building a super obsolete version of Postgres just so that\n> > they could compile your index advisor. I dunno, if you have some spare\n> > time, maybe updating that one would be a valuable contribution from\n> > users' perspective.\n>\n> Aye-aye Capn' :-)\n\nAs part of helping the GSoC contributor getting onboard (see below), I\nwent through a similar process and had to figure out the Postgres\nversion, system packages, etc. (all ancient) that were needed to build\nand use it. It's no fun having to deal with software from over a\ndecade ago :-(\n\n> EDB folks reached out to me a few months ago to assign a license to\n> the project, which I did and it is now a Postgres-licensed project\n> [1].\n>\n> Given the above, it is safe to assume that this tool is at least being\n> maintained by EDB, at least internally for their customers. 
I would\n> request them to contribute the changes back in the open.\n\nAfter over a year of conversations and follow-ups, a couple of months\nago EnterpriseDB finally made it clear that they won't be contributing\ntheir changes back to the open-source version of Index Advisor. With\nthat avenue now closed, we can now freely pursue\n\n> Regardless of that, please rest assured that I will work on making it\n> compatible with the current supported versions of Postgres. Lack of\n> time is not an excuse anymore :-)\n\nOh, how wrong was I :-)\n\nI have a few updates on the current state and plans around the Index\nAdviser extension.\n\nI proposed Index Adviser as a potential project for GSoC 2023 [1].\nAhmed (CCd) has signed up as the contributor. The project has now been\naccepted/funded by GSoC. The primary goal of the project is to support\nall the active versions of Postgres. The extended goal is to support\nadditional index types. The extension currently supports Postgres\nversion 8.3, and BTree index type.\n\n[1]: https://wiki.postgresql.org/wiki/GSoC_2023#pg_adviser_.2F_index_adviser:_Recommend_Potentially_Useful_Indexes\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 11 May 2023 17:29:40 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Postgres Index Advisor status and GSoC (was: Re: Returning to\n Postgres community work)" } ]
[ { "msg_contents": "Hello,\n\nWhile working through the documentation I found an empty cell in the \ntable for the large objects privilege display with the psql command [1]. \nAnd indeed the \\dl command does not show privileges. And there is no \nmodifier + for it.\n\nThis patch adds a + modifier to the \\dl command and also to the \\lo_list \ncommand to display privilege information on large objects.\n\nI decided to move the do_lo_list function to describe.c in order to use \nthe printACLColumn helper function. And at the same time I renamed \ndo_lo_list to listLargeObjects to unify with the names of other similar \nfunctions.\n\nI don't like how I handled the + modifier in the \\lo_list command. But I \ndon't know how to do better now. This is the second time I've programmed \nin C. The first time was the 'Hello World' program. So maybe something \nis done wrong.\n\nIf it's interesting, I can add the patch to commitfest.\n\n1. \nhttps://www.postgresql.org/docs/devel/ddl-priv.html#PRIVILEGES-SUMMARY-TABLE\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 31 Aug 2021 17:14:12 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "psql: \\dl+ to list large objects privileges" }, { "msg_contents": "> On 31 Aug 2021, at 16:14, Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n\n> If it's interesting, I can add the patch to commitfest.\n\nPlease do, if it was interesting enough for you to write it, it’s interesting\nenough to be in the commitfest.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 16:35:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "On 31.08.2021 17:35, Daniel Gustafsson wrote:\n> Please do, if it was interesting enough for you to write it, it’s \n> interesting enough to be in 
the commitfest.\n\nThanks, added to the commitfest.\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 31 Aug 2021 20:13:55 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nthank you for the patch, I personally think it is a useful addition and thus it\r\ngets my vote. However, I also think that the proposed code will need some\r\nchanges.\r\n\r\nOn a high level I will recommend the addition of tests. There are similar tests\r\npresent in:\r\n ./src/test/regress/sql/psql.sql\r\n\r\nI will also inquire as to the need for renaming the function `do_lo_list` to\r\n`listLargeObjects` and its move to describe.c. from large_obj.c. In itself it is\r\nnot necessarily a blocking point, though it will require some strong arguments\r\nfor doing so.\r\n\r\nApplying the patch, generates several whitespace warnings. It will be helpful\r\nif those warnings are removed.\r\n\r\nThe patch contains:\r\n\r\n case 'l':\r\n- success = do_lo_list();\r\n+ success = listLargeObjects(show_verbose);\r\n\r\n\r\nIt might be of some interest to consider in the above to check the value of the\r\nnext character in command or emit an error if not valid. 
Such a pattern can be\r\nfound in the same switch block as for example:\r\n\r\n switch (cmd[2])\r\n {\r\n case '\\0':\r\n case '+':\r\n <snip>\r\n success = ...\r\n </snip>\r\n break;\r\n default:\r\n status = PSQL_CMD_UNKNOWN;\r\n break;\r\n }\r\n\r\n\r\nThe patch contains:\r\n\r\n else if (strcmp(cmd + 3, \"list\") == 0)\r\n- success = do_lo_list();\r\n+ success = listLargeObjects(false);\r\n+\r\n+ else if (strcmp(cmd + 3, \"list+\") == 0)\r\n+ success = listLargeObjects(true);\r\n\r\n\r\nIn a fashion similar to `exec_command_list`, it might be interesting to consider\r\nexpressing the above as:\r\n\r\n show_verbose = strchr(cmd, '+') ? true : false;\r\n <snip>\r\n else if (strcmp(cmd + 3, \"list\") == 0\r\n success = do_lo_list(show_verbose);\r\n\r\n\r\nThoughts?\r\n\r\nCheers,\r\n//Georgios", "msg_date": "Fri, 03 Sep 2021 12:25:44 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nThere is something I forgot to mention in my previous review.\r\n\r\nTypically when describing a thing in psql, it is the column \"Description\" that\r\nbelongs in the verbose version. 
For example listing indexes produces:\r\n\r\n List of relations\r\n Schema | Name | Type | Owner | Table\r\n\r\nand the verbose version:\r\n List of relations\r\n Schema | Name | Type | Owner | Table | Persistence | Access method | Size | Description\r\n\r\nSince '\\dl' already contains description, it might be worthwhile to consider to add the column `Access privileges`\r\nwithout introducing a verbose version.\r\n\r\nThoughts?\r\n\r\nCheers,\r\n//Georgios", "msg_date": "Fri, 03 Sep 2021 12:45:14 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Hello,\n\nThank you very much for review.\n\n> Since '\\dl' already contains description, it might be worthwhile to consider to add the column `Access privileges`\n> without introducing a verbose version.\n\nI thought about it.\nThe reason why I decided to add the verbose version is because of \nbackward compatibility. Perhaps the appearance of a new column in an \nexisting command may become undesirable to someone.\n\nIf we don't worry about backward compatibility, the patch will be \neasier. But I'm not sure this is the right approach.\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 3 Sep 2021 16:20:09 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Hello,\n\nThank you very mush for review.\n\nI will prepare a new version of the patch according to your comments. \nFor now, I will answer this question:\n\n> I will also inquire as to the need for renaming the function `do_lo_list` to\n> `listLargeObjects` and its move to describe.c. from large_obj.c. 
In itself it is\n> not necessarily a blocking point, though it will require some strong arguments\n> for doing so.\n\nI understand that I needed a good reason for such actions.\n\nOn the one hand all the commands for working with large objects are in \nlarge_obj.c. On the other hand, all commands for displaying the contents \nof system catalogs are in describe.c. The function do_lo_list belongs to \nboth groups.\n\nThe main reason for moving the function to describe.c is that I wanted \nto use the printACLColumn function to display lomacl column. \nprintACLColumn function is used in all the other commands to display \nprivileges and this function is locally defined in describe.c and there \nis no reason to make in public.\n\nAnother option is to duplicate the printACLColumn function (or its \ncontents) in large_obj.c. This seemed wrong to me.\nIs it any other way?\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 3 Sep 2021 16:43:43 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Hi,\n\nOn 03.09.2021 15:25, Georgios Kokolatos wrote:\n> On a high level I will recommend the addition of tests. There are similar tests\n\nTests added.\n\n> Applying the patch, generates several whitespace warnings. It will be helpful\n> if those warnings are removed.\n\nI know this is a silly mistake, and after reading this article[1] I \ntried to remove the extra spaces.\n\nCan you tell me, please, how can you get such warnings?\n\n> The patch contains:\n>\n> case 'l':\n> - success = do_lo_list();\n> + success = listLargeObjects(show_verbose);\n>\n>\n> It might be of some interest to consider in the above to check the value of the\n> next character in command or emit an error if not valid. 
Such a pattern can be\n> found in the same switch block as for example:\n>\n> switch (cmd[2])\n> {\n> case '\\0':\n> case '+':\n> <snip>\n> success = ...\n> </snip>\n> break;\n> default:\n> status = PSQL_CMD_UNKNOWN;\n> break;\n> }\n\nCheck added.\n\n> The patch contains:\n>\n> else if (strcmp(cmd + 3, \"list\") == 0)\n> - success = do_lo_list();\n> + success = listLargeObjects(false);\n> +\n> + else if (strcmp(cmd + 3, \"list+\") == 0)\n> + success = listLargeObjects(true);\n>\n>\n> In a fashion similar to `exec_command_list`, it might be interesting to consider\n> expressing the above as:\n>\n> show_verbose = strchr(cmd, '+') ? true : false;\n> <snip>\n> else if (strcmp(cmd + 3, \"list\") == 0\n> success = do_lo_list(show_verbose);\n\nI rewrote this part.\n\nNew version attached.\n\n[1] https://wiki.postgresql.org/wiki/Creating_Clean_Patches\n\n--\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 5 Sep 2021 22:47:27 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Sunday, September 5th, 2021 at 21:47, Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n\nHi,\n\n> Hi,\n>\n> On 03.09.2021 15:25, Georgios Kokolatos wrote:\n>\n> > On a high level I will recommend the addition of tests. There are similar tests\n>\n> Tests added.\n\nThanks! The tests look good. A minor nitpick would be to also add a comment on the\nlarge object to verify that it is picked up correctly.\n\nAlso:\n\n +\\lo_unlink 42\n +DROP ROLE lo_test;\n +\n\nThat last empty line can be removed.\n\n>\n> > Applying the patch, generates several whitespace warnings. 
It will be helpful\n> > if those warnings are removed.\n>\n> I know this is a silly mistake, and after reading this article[1] I tried to remove the extra spaces.\n> Can you tell me, please, how can you get such warnings?\n\nI only mentioned it because I thought you might find it useful.\nI apply patches by `git apply` and executing the command on the latest version\nof your patch, produces:\n\n $ git apply lo-list-acl-v2.patch\n lo-list-acl-v2.patch:349: new blank line at EOF.\n +\n warning: 1 line adds whitespace errors\n\nThe same can be observed highlighted by executing `git diff --color` as\nrecommended in the the article you linked.\n\n>\n> > The patch contains:\n> >\n> > case 'l':\n> > - success = do_lo_list();\n> > + success = listLargeObjects(show_verbose);\n> >\n> >\n> > It might be of some interest to consider in the above to check the value of the\n> > next character in command or emit an error if not valid. Such a pattern can be\n> > found in the same switch block as for example:\n> >\n> > switch (cmd[2])\n> > {\n> > case '\\0':\n> > case '+':\n> > <snip>\n> > success = ...\n> > </snip>\n> > break;\n> > default:\n> > status = PSQL_CMD_UNKNOWN;\n> > break;\n> > }\n>\n> Check added.\n\nThanks.\n\n>\n> > The patch contains:\n> >\n> > else if (strcmp(cmd + 3, \"list\") == 0)\n> > - success = do_lo_list();\n> > + success = listLargeObjects(false);\n> > +\n> > + else if (strcmp(cmd + 3, \"list+\") == 0)\n> > + success = listLargeObjects(true);\n> >\n> >\n> > In a fashion similar to `exec_command_list`, it might be interesting to consider\n> > expressing the above as:\n> >\n> > show_verbose = strchr(cmd, '+') ? true : false;\n> > <snip>\n> > else if (strcmp(cmd + 3, \"list\") == 0\n> > success = do_lo_list(show_verbose);\n>\n> I rewrote this part.\n\nThank you. It looks good to me.\n\n>\n> New version attached.\n\nThe new version looks good to me.\n\nI did spend a bit of time considering the addition of the verbose version of\nthe command. 
I understand your reasoning explained upstream in the same thread.\nHowever, I am not entirely convinced. The columns in consideration are,\n\"Description\" and \"Access Privileges\". Within the describe commands there are\nfour different options, listed and explained bellow:\n\n commands where description is found in verbose\n\\d \\dA \\dc \\dd \\des \\df \\dFd \\dFt \\di \\dL \\dn \\dO \\dP \\dPt \\dt \\du \\dx \\dy \\da\n\\db \\dC \\dD \\det \\dew \\dFp \\dg \\dl \\dm \\do \\dPi \\dS \\dT\n\n commands where description is not found in verbose (including object\n description)\n\\dd \\dFd \\dFt \\dL \\dx \\da \\dF \\dFp \\dl \\do \\dT\n\n commands where access privileges is found in verbose\n\\def \\dD \\des\n\n commands where access privileges is not found in verbose (including access\n privilege describing)\n\\ddp \\dp \\des \\df \\dL \\dn \\db \\dD \\dew \\dl \\dT\n\nWith the above list, I would argue that it feels more natural to include\nthe \"Access Privileges\" column in the non verbose version and be done with\nthe verbose version all together.\n\nMy apologies for the back and forth on this detail.\n\nThoughts?\n\nCheers,\n//Georgios\n\n>\n> [1] https://wiki.postgresql.org/wiki/Creating_Clean_Patches\n>\n> --\n> Pavel Luzanov\n> Postgres Professional: https://postgrespro.com\n> The Russian Postgres Company\n\n\n", "msg_date": "Mon, 06 Sep 2021 11:39:51 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Hi,\n\nOn 06.09.2021 14:39, gkokolatos@pm.me wrote:\n\n> I apply patches by `git apply` and executing the command on the latest version\n> of your patch, produces:\n>\n> $ git apply lo-list-acl-v2.patch\n> lo-list-acl-v2.patch:349: new blank line at EOF.\n> +\n> warning: 1 line adds whitespace errors\n\nThanks, this is what I was looking for. 
The patch command doesn't show these warnings\n(or I don't know the right way for use it).\n\n> I did spend a bit of time considering the addition of the verbose version of\n> the command. I understand your reasoning explained upstream in the same thread.\n> However, I am not entirely convinced. The columns in consideration are,\n> \"Description\" and \"Access Privileges\". Within the describe commands there are\n> four different options, listed and explained bellow:\n>\n> commands where description is found in verbose\n> \\d \\dA \\dc \\dd \\des \\df \\dFd \\dFt \\di \\dL \\dn \\dO \\dP \\dPt \\dt \\du \\dx \\dy \\da\n> \\db \\dC \\dD \\det \\dew \\dFp \\dg \\dl \\dm \\do \\dPi \\dS \\dT\n>\n> commands where description is not found in verbose (including object\n> description)\n> \\dd \\dFd \\dFt \\dL \\dx \\da \\dF \\dFp \\dl \\do \\dT\n>\n> commands where access privileges is found in verbose\n> \\def \\dD \\des\n>\n> commands where access privileges is not found in verbose (including access\n> privilege describing)\n> \\ddp \\dp \\des \\df \\dL \\dn \\db \\dD \\dew \\dl \\dT\n>\n> With the above list, I would argue that it feels more natural to include\n> the \"Access Privileges\" column in the non verbose version and be done with\n> the verbose version all together.\n\nMy thoughts.\nFor most object types, the Description column is shown only in the verbose\nversion of the commands. But there are several object types,\nincluding Large Objects, for which the description is shown in the normal version.\nBoth are valid options, so the Description column for large objects stays\nin the normal version of the command.\n\nRegarding the display of access privileges.\nInstances of object types for which you can manage the access privileges\nare listed in Table 5.2 [1].\n\nFor clarity, I will only show the first and last columns:\n\nTable 5.2. 
Summary of Access Privileges\n\nObject Type psql Command\n------------------------------ ------------\nDATABASE \l\nDOMAIN \dD+\nFUNCTION or PROCEDURE \df+\nFOREIGN DATA WRAPPER \dew+\nFOREIGN SERVER \des+\nLANGUAGE \dL+\nLARGE OBJECT\nSCHEMA \dn+\nSEQUENCE \dp\nTABLE (and table-like objects) \dp\nTable column \dp\nTABLESPACE \db+\nTYPE \dT+\n\nBy the way, after seeing an empty cell for Large Objects in this table,\nI decided to make a patch.\n\nNote that the \dp command is specially designed to display access privileges,\nso you don't need to pay attention to the lack of a + sign for it.\n\nIt turns out that all commands use the verbose version (or special command)\nto display access privileges. Except the \l command for the databases.\n\nNow the question.\nShould we add a second exception and display access privileges\nfor large objects with the \dl command or do the verbose version\nlike most other commands: \dl+\n?\n\nIf you still think that there is no need for a verbose version,\nI will drop it and add an 'Access privileges' column to the normal command.\n\n\n[1] https://www.postgresql.org/docs/devel/ddl-priv.html#PRIVILEGES-SUMMARY-TABLE\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 6 Sep 2021 17:10:51 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \dl+ to list large objects privileges" }, { "msg_contents": "Hi,\n\nOn 06.09.2021 14:39, gkokolatos@pm.me wrote:\n\n> Thanks! The tests look good. 
A minor nitpick would be to also add a comment on the\n> large object to verify that it is picked up correctly.\n>\n> Also:\n>\n> +\\lo_unlink 42\n> +DROP ROLE lo_test;\n> +\n>\n> That last empty line can be removed.\n\nThe new version adds a comment to a large object and removes the last empty line.\n\n--\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 7 Sep 2021 14:28:38 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi, I think this is an interesting patch. +1\r\nI tested it for the latest version, and it works well.", "msg_date": "Sat, 18 Sep 2021 02:41:54 +0000", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Hi,\n\nThank you for testing.\nAs far as I understand, for the patch to move forward, someone has to become a reviewer\nand change the status in the commitfest app.\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\nOn 18.09.2021 05:41, Neil Chen wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> Hi, I think this is an interesting patch. 
+1\n> I tested it for the latest version, and it works well.\n\n\n\n\n\n\n\nHi,\n\nThank you for testing.\nAs far as I understand, for the patch to move forward, someone has to become a reviewer\nand change the status in the commitfest app.\n\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\nOn 18.09.2021 05:41, Neil Chen wrote:\n\n\nThe following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi, I think this is an interesting patch. +1\nI tested it for the latest version, and it works well.", "msg_date": "Mon, 20 Sep 2021 11:47:02 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "On Tue, Aug 31, 2021 at 05:14:12PM +0300, Pavel Luzanov wrote:\n> I decided to move the do_lo_list function to describe.c in order to use the\n> printACLColumn helper function. And at the same time I renamed do_lo_list to\n> listLargeObjects to unify with the names of other similar functions.\n\nThe tabs were changed to spaces when you moved the function.\n\nI suggest to move the function in a separate 0001 commit, which makes no code\nchanges other than moving from one file to another. \n\nA committer would probably push them as a single patch, but this makes it\neasier to read and review the changes in 0002.\nPossibly like git diff HEAD~:src/bin/psql/large_obj.c src/bin/psql/describe.c\n\n> + if (pset.sversion >= 90000)\n\nSince a few weeks ago, psql no longer supports server versions before 9.2, so\nthe \"if\" branch can go away.\n\n> I don't like how I handled the + modifier in the \\lo_list command. But I\n> don't know how to do better now. This is the second time I've programmed in\n> C. The first time was the 'Hello World' program. 
So maybe something is done\n> wrong.\n\nI think everywhere else just uses verbose = strchr(cmd, '+') != 0;\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 Jan 2022 00:24:18 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I suggest to move the function in a separate 0001 commit, which makes no code\n> changes other than moving from one file to another. \n> A committer would probably push them as a single patch, but this makes it\n> easier to read and review the changes in 0002.\n\nYeah, I agree with that idea. It's really tough to notice small changes\nby hand when the entire code block has been moved somewhere else.\n\n> Since a few weeks ago, psql no longer supports server versions before 9.2, so\n> the \"if\" branch can go away.\n\nAnd, in fact, the patch is no longer applying per the cfbot, because\nthat hasn't been done.\n\nTo move things along a bit, I split the patch more or less as Justin\nsuggests and brought it up to HEAD. I did not study the command.c\nchanges, but the rest of it seems okay, with one exception: I don't like\nthe test case much at all. For one thing, it will fail in the buildfarm\nbecause you didn't adhere to the convention that roles created by a\nregression test must be named regress_something. For another, there's\nconsiderable overlap with testing done in largeobject.sql, which\nalready creates some commented large objects. 
That means there's\nan undesirable ordering dependency --- if somebody wanted to run\nlargeobject.sql first, the expected output of psql.sql would change.\nI wonder if we shouldn't put these new tests in largeobject.sql instead.\n(That would mean there are two expected-files to change, which is a bit\ntedious, but it's not very hard.)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 04 Jan 2022 15:42:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Justin, Tom,\n\nThanks for the review and the help in splitting the patch into two parts.\n\n> I wonder if we shouldn't put these new tests in largeobject.sql instead.\n> (That would mean there are two expected-files to change, which is a bit\n> tedious, but it's not very hard.)\n\nAs suggested, I moved the tests from psql.sql to largeobject.sql.\nThe tests are added to the beginning of the file because I need one \nlarge object with a known id and a known owner. This guarantees the \nsame output of \\dl+ as expected.\n\nI made the same changes to two files in the 'expected' directory: \nlargeobject.out and largeobject_1.out.\nAlthough I must say that I still can't understand why more than one file \nwith expected result is used for some tests.\n\nAlso, I decided to delete following line in the listLargeObjects \nfunction because all the other commands in describe.c do not contain it:\n    myopt.topt.tuples_only = false;\n\nThis changed the existing behavior, but is consistent with the other \ncommands.\n\nSecond version (after splitting) is attached.\nv2-0001-move-and-rename-do_lo_list.patch - no changes,\nv2-0002-print-large-object-acls.patch - tests moved to largeobject.sql\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 6 Jan 2022 14:41:19 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true,
"msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n>> I wonder if we shouldn't put these new tests in largeobject.sql instead.\n>> (That would mean there are two expected-files to change, which is a bit\n>> tedious, but it's not very hard.)\n\n> I made the same changes to two files in the 'expected' directory: \n> largeobject.out and largeobject_1.out.\n> Although I must say that I still can't understand why more than one file \n> with expected result is used for some tests.\n\nBecause the output sometimes varies across platforms. IIRC, the\ncase where largeobject_1.out is needed is for Windows, where the\nfile that gets inserted into one of the LOs might contain CR/LF\nnot just LF newlines, so the LO contents look different.\n\n> Also, I decided to delete following line in the listLargeObjects \n> function because all the other commands in describe.c do not contain it:\n>     myopt.topt.tuples_only = false;\n\nAgreed, I'd done that already in my version of the patch.\n\n> Second version (after splitting) is attached.\n\nPushed with some minor editorialization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jan 2022 13:13:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" }, { "msg_contents": "On 06.01.2022 21:13, Tom Lane wrote:\n>> I made the same changes to two files in the 'expected' directory:\n>> largeobject.out and largeobject_1.out.\n>> Although I must say that I still can't understand why more than one file\n>> with expected result is used for some tests.\n> Because the output sometimes varies across platforms. IIRC, the\n> case where largeobject_1.out is needed is for Windows, where the\n> file that gets inserted into one of the LOs might contain CR/LF\n> not just LF newlines, so the LO contents look different.\n\nSo simple. 
Thanks for the explanation.\n\n>> Pushed with some minor editorialization.\n\nThanks!\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 6 Jan 2022 23:50:55 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: \\dl+ to list large objects privileges" } ]
[ { "msg_contents": "Hi,\n\nThis patch adds a new log_destination, \"jsonlog\", that writes log entries\nas lines of JSON. It was originally started by David Fetter using\nthe jsonlog module by Michael Paquier (\nhttps://github.com/michaelpq/pg_plugins/blob/master/jsonlog/jsonlog.c) as a\nbasis for how to serialize the log messages. Thanks to both of them because\nthis wouldn't be possible without that starting point.\n\nThe first commit splits out the destination in log pipe messages into its\nown field. Previously it would piggyback on the \"is_last\" field. This adds\nan int to the message size but makes the rest of the code easier to follow.\n\nThe second commit adds a TAP test for log_destination \"csvlog\". This was\ndone to both confirm that the previous change didn't break anything and as\na skeleton for the test in the next commit.\n\nThe third commit adds the new log_destination \"jsonlog\". The output format\nis one line per entry with the top level output being a JSON object keyed\nwith the log fields. Newlines in the output fields are escaped as \\n so the\noutput file has exactly one line per log entry. It also includes a new test\nfor verifying the JSON output with some basic regex checks (similar to the\ncsvlog test).\n\nHere's a sample of what the log entries look like:\n\n{\"timestamp\":\"2021-08-31 10:15:25.129\nEDT\",\"user\":\"sehrope\",\"dbname\":\"postgres\",\"pid\":12012,\"remote_host\":\"[local]\",\"session_id\":\"612e397d.2eec\",\"line_num\":1,\"ps\":\"idle\",\"session_start\":\"2021-08-31\n10:15:25\nEDT\",\"vxid\":\"3/2\",\"txid\":\"0\",\"error_severity\":\"LOG\",\"application_name\":\"\n006_jsonlog.pl\",\"message\":\"statement: SELECT 1/0\"}\n\nIt builds and passes \"make check-world\" on Linux. It also includes code to\nhandle Windows as well but I have not actually tried building it there.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/", "msg_date": "Tue, 31 Aug 2021 11:34:56 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Aug 31, 2021 at 11:34:56AM -0400, Sehrope Sarkuni wrote:\n> The second commit adds a TAP test for log_destination \"csvlog\". This was\n> done to both confirm that the previous change didn't break anything and as\n> a skeleton for the test in the next commit.\n\n+note \"Before sleep\";\n+usleep(100_000);\n+note \"Before rotate\";\n+$node->logrotate();\n+note \"After rotate\";\n+usleep(100_000);\n\nDo you really need a rotation of the log files here? Wouldn't it be\nbetter to grab the position of the current log file with a fixed log\nfile name, and then slurp the file from this position with your\nexpected output? That would make the test faster, as well.\n\n> The third commit adds the new log_destination \"jsonlog\". The output format\n> is one line per entry with the top level output being a JSON object keyed\n> with the log fields. Newlines in the output fields are escaped as \\n so the\n> output file has exactly one line per log entry. It also includes a new test\n> for verifying the JSON output with some basic regex checks (similar to the\n> csvlog test).\n\n+ * Write logs in json format.\n+ */\n+static void\n+write_jsonlog(ErrorData *edata)\n+{\nRather than making elog.c larger, I think that we should try to split\nthat into more files. Why not refactoring out the CSV part first?\nYou could just call that csvlog.c, then create a new jsonlog.c for the\nmeat of the patch.\n\nThe list of fields is not up to date. 
At quick glance, you are\nmissing:\n- backend type.\n- leader PID.\n- query ID.\n- Session start timestamp (?)\n--\nMichael", "msg_date": "Wed, 1 Sep 2021 09:43:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Aug 31, 2021 at 8:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Aug 31, 2021 at 11:34:56AM -0400, Sehrope Sarkuni wrote:\n> > The second commit adds a TAP test for log_destination \"csvlog\". This was\n> > done to both confirm that the previous change didn't break anything and\n> as\n> > a skeleton for the test in the next commit.\n>\n> +note \"Before sleep\";\n> +usleep(100_000);\n> +note \"Before rotate\";\n> +$node->logrotate();\n> +note \"After rotate\";\n> +usleep(100_000);\n>\n> Do you really need a rotation of the log files here? Wouldn't it be\n> better to grab the position of the current log file with a fixed log\n> file name, and then slurp the file from this position with your\n> expected output? That would make the test faster, as well.\n>\n\nYes, that was intentional. I used the log rotation tap test as a base and\nkept that piece in there so it verifies that the csv log files are actually\nrotated. Same for the json logs.\n\n\n> Rather than making elog.c larger, I think that we should try to split\n> that into more files. Why not refactoring out the CSV part first?\n> You could just call that csvlog.c, then create a new jsonlog.c for the\n> meat of the patch.\n>\n\nThat's a good idea. I'll try that out.\n\nThe list of fields is not up to date. At quick glance, you are\n> missing:\n> - backend type.\n> - leader PID.\n> - query ID.\n> - Session start timestamp (?)\n>\n\nThanks. I'll take a look.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/", "msg_date": "Wed, 1 Sep 2021 08:33:54 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "Updated patch set is attached.\n\nThis version splits out the existing csvlog code into its own file and\ncentralizes the common helpers into a new elog-internal.h so that they're\nonly included by the actual write_xyz sources.\n\nThat makes the elog.c changes in the JSON logging patch minimal as all it's\nreally doing is invoking the new write_jsonlog(...) function.\n\nIt also adds those missing fields to the JSON logger output.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Wed, 1 Sep 2021 16:39:43 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Sep 01, 2021 at 04:39:43PM -0400, Sehrope Sarkuni wrote:\n> That makes the elog.c changes in the JSON logging patch minimal as all it's\n> really doing is invoking the new write_jsonlog(...) function.\n\nLooking at 0001, to do things in order.\n\n> @@ -46,8 +46,8 @@ typedef struct\n> \tchar\t\tnuls[2];\t\t/* always \\0\\0 */\n> \tuint16\t\tlen;\t\t\t/* size of this chunk (counts data only) */\n> \tint32\t\tpid;\t\t\t/* writer's pid */\n> -\tchar\t\tis_last;\t\t/* last chunk of message? 't' or 'f' ('T' or\n> -\t\t\t\t\t\t\t\t * 'F' for CSV case) */\n> +\tint32\t\tdest;\t\t\t/* log destination */\n> +\tchar\t\tis_last; /* last chunk of message? 't' or 'f'*/\n> \tchar\t\tdata[FLEXIBLE_ARRAY_MEMBER];\t/* data payload starts here */\n> } PipeProtoHeader;\n\nMaking PipeProtoHeader larger is not free, and that could penalize\nworkloads with a lot of short messages and many backends as the\nsyslogger relies on pipes with sync calls. Why not switching is_last\nto bits8 flags instead? 
That should be enough for the addition of\nJSON. 3 bits are enough at the end: one to know if it is the last\nchunk of message, one for CSV and one for JSON.\n--\nMichael", "msg_date": "Wed, 8 Sep 2021 15:58:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "\nOn 9/8/21 2:58 AM, Michael Paquier wrote:\n> On Wed, Sep 01, 2021 at 04:39:43PM -0400, Sehrope Sarkuni wrote:\n>> That makes the elog.c changes in the JSON logging patch minimal as all it's\n>> really doing is invoking the new write_jsonlog(...) function.\n> Looking at 0001, to do things in order.\n>\n>> @@ -46,8 +46,8 @@ typedef struct\n>> \tchar\t\tnuls[2];\t\t/* always \\0\\0 */\n>> \tuint16\t\tlen;\t\t\t/* size of this chunk (counts data only) */\n>> \tint32\t\tpid;\t\t\t/* writer's pid */\n>> -\tchar\t\tis_last;\t\t/* last chunk of message? 't' or 'f' ('T' or\n>> -\t\t\t\t\t\t\t\t * 'F' for CSV case) */\n>> +\tint32\t\tdest;\t\t\t/* log destination */\n>> +\tchar\t\tis_last; /* last chunk of message? 't' or 'f'*/\n>> \tchar\t\tdata[FLEXIBLE_ARRAY_MEMBER];\t/* data payload starts here */\n>> } PipeProtoHeader;\n> Making PipeProtoHeader larger is not free, and that could penalize\n> workloads with a lot of short messages and many backends as the\n> syslogger relies on pipes with sync calls. Why not switching is_last\n> to bits8 flags instead? That should be enough for the addition of\n> JSON. 3 bits are enough at the end: one to know if it is the last\n> chunk of message, one for CSV and one for JSON.\n\n\n\nYeah. A very simple change would be to use two different values for json\n(say 'y' and 'n'). 
A slightly more principled scheme might use the top\nbit for the end marker and the bottom 3 bits for the dest type (so up to\n8 types possible), with the rest available for future use.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 08:46:44 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Sep 08, 2021 at 08:46:44AM -0400, Andrew Dunstan wrote:\n> Yeah. A very simple change would be to use two different values for json\n> (say 'y' and 'n'). A slightly more principled scheme might use the top\n> bit for the end marker and the bottom 3 bits for the dest type (so up to\n> 8 types possible), with the rest available for future use.\n\nI was thinking to track stderr as a case where no bits are set in the\nflags for the area of the destinations, but that's a bit crazy if we\nhave a lot of margin in what can be stored. I have looked at that and\nfinished with the attached which is an improvement IMO, especially\nwhen it comes to the header validation.\n\nFWIW, while looking at my own external module for the JSON logs, I\nnoticed that I used the chunk protocol when the log redirection is\nenabled, but just enforced everything to be sent to stderr.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 11:17:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "Fwiw I was shocked when I saw the t/f T/F kluge when I went to work on\njsonlogging. 
That's the kind of dead-end short-sighted hack that just\nlays traps and barriers for future hackers to have to clean up before\nthey can do the work they want to do.\n\nPlease just put a \"format\" field (or \"channel\" field -- the logging\ndaemon doesn't really care what format) with a list of defined formats\nthat can easily be extended in the future. If you want to steal the\nhigh bit for \"is last\" and only allow 128 values instead of 256 so be\nit.\n\n\n", "msg_date": "Wed, 8 Sep 2021 22:58:51 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Sep 08, 2021 at 10:58:51PM -0400, Greg Stark wrote:\n> Please just put a \"format\" field (or \"channel\" field -- the logging\n> daemon doesn't really care what format) with a list of defined formats\n> that can easily be extended in the future. If you want to steal the\n> high bit for \"is last\" and only allow 128 values instead of 256 so be\n> it.\n\nWhich is what I just posted here:\nhttps://www.postgresql.org/message-id/YTlunSciDRl1z7ik@paquier.xyz\n\nWell, we could also do things so as we have two fields, as of\nsomething like:\ntypedef struct\n{\n[...]\nbits8\tflags:4, format:4;\n[...]\n} PipeProtoHeader;\n\nI am not sure if this is an improvement in readability though, and\nthat's less consistent with the recent practice we've been trying to\nfollow with bitmasks and flag-like options.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 12:58:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Sep 09, 2021 at 11:17:01AM +0900, Michael Paquier wrote:\n> I was thinking to track stderr as a case where no bits are set in the\n> flags for the area of the destinations, but that's a bit crazy if we\n> have a lot of margin in what can be stored. 
I have looked at that and\n> finished with the attached which is an improvement IMO, especially\n> when it comes to the header validation.\n\nOne part that was a bit flacky after more consideration is that the\nheader validation would consider as correct the case where both stderr\nand csvlog are set in the set of flags. I have finished by just using\npg_popcount() on one byte with a filter on the log destinations,\nmaking the whole more robust. If there are any objections, please let\nme know.\n--\nMichael", "msg_date": "Fri, 10 Sep 2021 10:07:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Sep 01, 2021 at 04:39:43PM -0400, Sehrope Sarkuni wrote:\n> This version splits out the existing csvlog code into its own file and\n> centralizes the common helpers into a new elog-internal.h so that they're\n> only included by the actual write_xyz sources.\n> \n> That makes the elog.c changes in the JSON logging patch minimal as all it's\n> really doing is invoking the new write_jsonlog(...) function.\n> \n> It also adds those missing fields to the JSON logger output.\n\nForking a bit this thread while looking at 0002 that adds new tests\nfor csvlog. While I agree that it would be useful to have more\ncoverage with the syslogger message chunk protocol in this area, I\nthink that having a separate test is a waste of resources. Creating a\nnew node is not cheap either, and this adds more wait phases, making\nthe tests take longer. It would be much better to extend\n004_logrotate.pl and update it to use log_destination = 'stderr,\ncsvlog', to minimize the number of nodes we create as well as the\nadditional amount of time we'd spend for those tests. 
Plugging in\nJSON into that would not be complicated either once we have in place a\nset of small routines that limit the code duplication between the\nchecks for each log destination type.\n--\nMichael", "msg_date": "Fri, 10 Sep 2021 13:07:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Fri, Sep 10, 2021 at 01:07:00PM +0900, Michael Paquier wrote:\n> Forking a bit this thread while looking at 0002 that adds new tests\n> for csvlog. While I agree that it would be useful to have more\n> coverage with the syslogger message chunk protocol in this area, I\n> think that having a separate test is a waste of resources. Creating a\n> new node is not cheap either, and this adds more wait phases, making\n> the tests take longer. It would be much better to extend\n> 004_logrotate.pl and update it to use log_destination = 'stderr,\n> csvlog', to minimize the number of nodes we create as well as the\n> additional amount of time we'd spend for those tests. 
Plugging in\n> JSON into that would not be complicated either once we have in place a\n> set of small routines that limit the code duplication between the\n> checks for each log destination type.\n\nAnd this part leads me to the attached, where the addition of the JSON\nformat would result in the addition of a couple of lines.\n--\nMichael", "msg_date": "Fri, 10 Sep 2021 15:56:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Fri, Sep 10, 2021 at 03:56:18PM +0900, Michael Paquier wrote:\n> And this part leads me to the attached, where the addition of the JSON\n> format would result in the addition of a couple of lines.\n\nOkay, I have worked through the first half of the patch set, and\napplied the improved versions of 0001 (refactoring of the chunk\nprotocol) and 0002 (addition of the tests for csvlog).\n\nI have not looked in details at 0003 and 0004 yet. Still, here are\nsome comments after a quick scan.\n\n+ * elog-internal.h\nI'd rather avoid the hyphen, and use elog_internal.h.\n\n+#define FORMATTED_TS_LEN 128\n+extern char formatted_start_time[FORMATTED_TS_LEN];\n+extern char formatted_log_time[FORMATTED_TS_LEN];\n+\n+void setup_formatted_log_time(void);\n+void setup_formatted_start_time(void);\nWe could just use a static buffer in each one of those routines, and\nreturn back a pointer to the caller.\n\n+ else if ((Log_destination & LOG_DESTINATION_JSONLOG) &&\n+ (jsonlogFile == NULL ||\n+ time_based_rotation || (size_rotation_for & LOG_DESTINATION_JSONLOG)))\n[...]\n+ /* Write to JSON log if enabled */\n+ else if (Log_destination & LOG_DESTINATION_JSONLOG)\n+ {\nThose bits in 0004 are wrong. They should be two \"if\" clauses. This\nis leading to issues when setting log_destination to multiple values\nwith jsonlog in the set of values and logging_connector = on, and the\nlogs are not getting redirected properly to the .json file. 
We would \nget the data for the .log and .csv files though. This issue also\nhappens with the original patch set applied on top of e757080. I\nthink that we should also be more careful with the way we free\nStringInfoData in send_message_to_server_log(), or we are going to\nfinish with unwelcome leaks.\n\nThe same coding pattern is now repeated three times in logfile_rotate():\n- Check if a logging type is enabled.\n- Optionally open new file, with logfile_open().\n- Business with ENFILE and EMFILE.\n- pfree() and save of the static FILE* ane file name for each type.\nI think that we have enough material for a wrapper routine that does\nthis work, where we pass down LOG_DESTINATION_* and pointers to the\nstatic FILE* and the static last file names. That would have avoided\nthe bug I found above.\n\nThe rebased patch set, that has reworked the tests to be in line with\nHEAD, also fails. They compile.\n\nSehrope, could you adjust that?\n--\nMichael", "msg_date": "Mon, 13 Sep 2021 11:22:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Sun, Sep 12, 2021 at 10:22 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Fri, Sep 10, 2021 at 03:56:18PM +0900, Michael Paquier wrote:\n> > And this part leads me to the attached, where the addition of the JSON\n> > format would result in the addition of a couple of lines.\n>\n> Okay, I have worked through the first half of the patch set, and\n> applied the improved versions of 0001 (refactoring of the chunk\n> protocol) and 0002 (addition of the tests for csvlog).\n>\n\nThanks. I finally got a chance to look through those changes. I like it.\nThe popcount and pulling out the flags are much easier to follow than the\nold hard coded letters.\n\n\n> I have not looked in details at 0003 and 0004 yet. 
Still, here are\n> some comments after a quick scan.\n>\n> + * elog-internal.h\n> I'd rather avoid the hyphen, and use elog_internal.h.\n>\n> +#define FORMATTED_TS_LEN 128\n> +extern char formatted_start_time[FORMATTED_TS_LEN];\n> +extern char formatted_log_time[FORMATTED_TS_LEN];\n> +\n> +void setup_formatted_log_time(void);\n> +void setup_formatted_start_time(void);\n> We could just use a static buffer in each one of those routines, and\n> return back a pointer to the caller.\n>\n\n+1\n\n\n> + else if ((Log_destination & LOG_DESTINATION_JSONLOG) &&\n> + (jsonlogFile == NULL ||\n> + time_based_rotation || (size_rotation_for &\n> LOG_DESTINATION_JSONLOG)))\n> [...]\n> + /* Write to JSON log if enabled */\n> + else if (Log_destination & LOG_DESTINATION_JSONLOG)\n> + {\n> Those bits in 0004 are wrong. They should be two \"if\" clauses. This\n> is leading to issues when setting log_destination to multiple values\n> with jsonlog in the set of values and logging_connector = on, and the\n> logs are not getting redirected properly to the .json file. We would\n> get the data for the .log and .csv files though. This issue also\n> happens with the original patch set applied on top of e757080. I\n> think that we should also be more careful with the way we free\n> StringInfoData in send_message_to_server_log(), or we are going to\n> finish with unwelcome leaks.\n>\n\nGood catch. Staring at that piece again, that's tricky as it tries to\naggressively free the buffer before calling write_cvslog(...). Which can't\njust be duplicated for additional destinations.\n\nI think we need to pull up the negative case (i.e. 
syslogger not available)\nbefore the other destinations and if it matches, free and exit early.\nOtherwise, free the buffer and call whatever destination routines are\nenabled.\n\n\n> The same coding pattern is now repeated three times in logfile_rotate():\n> - Check if a logging type is enabled.\n> - Optionally open new file, with logfile_open().\n> - Business with ENFILE and EMFILE.\n> - pfree() and save of the static FILE* ane file name for each type.\n> I think that we have enough material for a wrapper routine that does\n> this work, where we pass down LOG_DESTINATION_* and pointers to the\n> static FILE* and the static last file names. That would have avoided\n> the bug I found above.\n>\n\nI started on a bit of this as well. There's so much overlap already between\nthe syslog_ and csvlog code that I'm going to put that together first. Then\nthe json addition should just fall into a couple of call sites.\n\nI'm thinking of adding an internal struct to house the FILE* and file\nnames. Then all the opening, closing, and log rotation code can pass\npointers to those and centralize the pfree() and NULL checks.\n\n\n> The rebased patch set, that has reworked the tests to be in line with\n> HEAD, also fails. They compile.\n>\n> Sehrope, could you adjust that?\n\n\nYes I'm looking to sync those up and address the above later this week.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Mon, 13 Sep 2021 23:56:52 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Mon, Sep 13, 2021 at 11:56:52PM -0400, Sehrope Sarkuni wrote:\n> Good catch. Staring at that piece again, that's tricky as it tries to\n> aggressively free the buffer before calling write_cvslog(...). Which can't\n> just be duplicated for additional destinations.\n> \n> I think we need to pull up the negative case (i.e. 
syslogger not available)\n> before the other destinations and if it matches, free and exit early.\n> Otherwise, free the buffer and call whatever destination routines are\n> enabled.\n\nYes, I got a similar impression.\n\n> I started on a bit of this as well. There's so much overlap already between\n> the syslog_ and csvlog code that I'm going to put that together first. Then\n> the json addition should just fall into a couple of call sites.\n> \n> I'm thinking of adding an internal struct to house the FILE* and file\n> names. Then all the opening, closing, and log rotation code can pass\n> pointers to those and centralize the pfree() and NULL checks.\n\nAgreed on both points (using a structure and doing the refactoring as\na first patch).\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 15:06:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "Attached three patches refactor the syslogger handling of file based\ndestinations a bit ... and then a lot.\n\nFirst patch adds a new constant to represent both file based destinations.\nThis should make it easier to ensure additional destinations are handled in\n\"For all file destinations...\" situations (e.g. when we add the jsonlog\ndestination).\n\nSecond patch refactors the file descriptor serialization and re-opening in\nEXEC_BACKEND forking. Previously the code was duplicated for both stderr\nand csvlog. Again, this should simplify adding new destinations as they'd\njust invoke the same helper. There's an existing comment about not handling\nfailed opens in syslogger_parseArgs(...) 
and this patch doesn't fix that,\nbut it does provide a single location to do so in the future.\n\nThe third patch adds a new internal (to syslogger.c) structure,\nFileLogDestination, for file based log destinations and modifies the\nexisting handling for syslogFile and csvlogFile to defer to a bunch of\nhelper functions. It also renames \"syslogFile\" to \"stderr_file_info\".\nWorking through this patch, it was initially confusing that the stderr log\nfile was named \"syslogFile\", the C file is named syslogger.c, and there's\nan entirely separate syslog (the message logging API) destination. I think\nthis clears that up a bit.\n\nThe patches pass check-world on Linux. I haven't tested it on Windows but\nit does pass check-world with EXEC_BACKEND defined to try out the forking\ncode paths. Definitely needs some review to ensure it's functionally\ncorrect for the different log files.\n\nI haven't tried splitting the csvlog code out or adding the new jsonlog\natop this yet. There's enough changes here that I'd like to get this\nsettled first.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Thu, 16 Sep 2021 17:27:20 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "The previous patches failed on the cfbot Appveyor Windows build. That also\nmakes me question whether I did the EXEC_BACKEND testing correctly as it\nshould have caught that compile error. I'm going to look into that.\n\nIn the meantime the attached should correct that part of the build.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/\n\n>", "msg_date": "Thu, 16 Sep 2021 20:59:54 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Sep 16, 2021 at 05:27:20PM -0400, Sehrope Sarkuni wrote:\n> Attached three patches refactor the syslogger handling of file based\n> destinations a bit ... and then a lot.\n> \n> First patch adds a new constant to represent both file based destinations.\n> This should make it easier to ensure additional destinations are handled in\n> \"For all file destinations...\" situations (e.g. when we add the jsonlog\n> destination).\n> \n> Second patch refactors the file descriptor serialization and re-opening in\n> EXEC_BACKEND forking. Previously the code was duplicated for both stderr\n> and csvlog. Again, this should simplify adding new destinations as they'd\n> just invoke the same helper. There's an existing comment about not handling\n> failed opens in syslogger_parseArgs(...) and this patch doesn't fix that,\n> but it does provide a single location to do so in the future.\n> \n> The third patch adds a new internal (to syslogger.c) structure,\n> FileLogDestination, for file based log destinations and modifies the\n> existing handling for syslogFile and csvlogFile to defer to a bunch of\n> helper functions. It also renames \"syslogFile\" to \"stderr_file_info\".\n> Working through this patch, it was initially confusing that the stderr log\n> file was named \"syslogFile\", the C file is named syslogger.c, and there's\n> an entirely separate syslog (the message logging API) destination. I think\n> this clears that up a bit.\n> \n> The patches pass check-world on Linux. I haven't tested it on Windows but\n> it does pass check-world with EXEC_BACKEND defined to try out the forking\n> code paths. 
Definitely needs some review to ensure it's functionally\n> correct for the different log files.\n\nCompilation with EXEC_BACKEND on Linux should work, so no need to\nbother with Windows when it comes to that. There is a buildfarm\nmachine testing this configuration, for example.\n\n> I haven't tried splitting the csvlog code out or adding the new jsonlog\n> atop this yet. There's enough changes here that I'd like to get this\n> settled first.\n\nMakes sense.\n\nI am not really impressed by 0001 and the introduction of\nLOG_DESTINATIONS_WITH_FILES. So I would leave that out and keep the\nlist of destinations listed instead. And we are talking about two\nplaces here, only within syslogger.c.\n\n0002 looks like an improvement, but I think that 0003 makes the\nreadability of the code worse with the introduction of eight static\nroutines to handle all those cases.\n\nfile_log_dest_should_rotate_for_size(), file_log_dest_close(),\nfile_log_dest_check_rotate_for_size(), get_syslogger_file() don't\nbring major improvements. On the contrary,\nfile_log_dest_write_metadata and file_log_dest_rotate seem to reduce a\nbit the noise.\n--\nMichael", "msg_date": "Fri, 17 Sep 2021 10:36:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Sep 16, 2021 at 9:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> I am not really impressed by 0001 and the introduction of\n> LOG_DESTINATIONS_WITH_FILES. So I would leave that out and keep the\n> list of destinations listed instead. And we are talking about two\n> places here, only within syslogger.c.\n>\n\nI've taken that out for now. The idea was to simplify the additions when\njsonlog is added but can circle back to that later if it makes sense.\n\n\n> 0002 looks like an improvement,\n\n\nNice. 
That's left unchanged (renamed to 0001 now).\n\n\n> but I think that 0003 makes the\n> readability of the code worse with the introduction of eight static\n> routines to handle all those cases.\n>\n> file_log_dest_should_rotate_for_size(), file_log_dest_close(),\n> file_log_dest_check_rotate_for_size(), get_syslogger_file() don't\n> bring major improvements. On the contrary,\n> file_log_dest_write_metadata and file_log_dest_rotate seem to reduce a\n> bit the noise.\n>\n\nIt was helpful to split them out while working on the patch but I see your\npoint upon re-reading through the result.\n\nAttached version (renamed to 002) adds only three static functions for\nchecking rotation size, performing the actual rotation, and closing. Also\none other to handle generating the logfile names with a suffix to handle\nthe pfree-ing (wrapping logfile_open(...)).\n\nThe rest of the changes happen in place using the new structures.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Fri, 17 Sep 2021 16:36:57 -0400", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": true, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Fri, Sep 17, 2021 at 04:36:57PM -0400, Sehrope Sarkuni wrote:\n> It was helpful to split them out while working on the patch but I see your\n> point upon re-reading through the result.\n> \n> Attached version (renamed to 002) adds only three static functions for\n> checking rotation size, performing the actual rotation, and closing. 
Also\n> one other to handle generating the logfile names with a suffix to handle\n> the pfree-ing (wrapping logfile_open(...)).\n> \n> The rest of the changes happen in place using the new structures.\n\nI have looked at that in detail, and found that the introduction of\nFileLogDestination makes the code harder to follow, and that the\nintroduction of the file extension, the destination name and the\nexpected target destination LOG_DESTINATION_* had a limited impact\nbecause they are used in few places.  The last two useful pieces are\nthe FILE* handle and the last file name for current_logfiles.\n\nAttached are updated patches.  The logic of 0001 to refactor the fd\nfetch/save logic when forking the syslogger in EXEC_BACKEND builds is\nunchanged.  I have tweaked the patch with more comments and different\nroutine names though.  Patch 0002 refactors the main point that\nintroduced FileLogDestination by refactoring the per-destination file\nrotation, not forgetting the fact that the last file name and handle\nfor stderr can never be cleaned up even if LOG_DESTINATION_STDERR is\ndisabled.  Grepping after LOG_DESTINATION_CSVLOG in the code tree, I'd\nbe fine to live with this level of abstraction as the per-destination\nchanges are grouped with each other so they are hard to miss.\n\n0001 is in a rather committable shape, and I have made the code\nconsistent with HEAD.  However, I think that its handling of\n_get_osfhandle() is clunky for 64-bit compilations as long is 32b in\nWIN32 but intptr_t is platform-dependent as it could be 32b or 64b, so\natoi() would overflow if the handle is larger than INT_MAX for 64b\nbuilds:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/standard-types\nThis problem deserves a different thread.\n\nIt would be good for 0002 if an extra pair of eyes looks at it.  While\non it, I have renamed the existing last_file_name to\nlast_sys_file_name in 0002 to make the naming more consistent with\nsyslogFile. 
It is independent of 0001, so it could be done first as\nwell.\n--\nMichael", "msg_date": "Tue, 28 Sep 2021 12:30:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Sep 28, 2021 at 12:30:10PM +0900, Michael Paquier wrote:\n> 0001 is in a rather commitable shape, and I have made the code\n> consistent with HEAD. However, I think that its handling of\n> _get_osfhandle() is clunky for 64-bit compilations as long is 32b in\n> WIN32 but intptr_t is platform-dependent as it could be 32b or 64b, so\n> atoi() would overflow if the handle is larger than INT_MAX for 64b\n> builds:\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/standard-types\n> This problem deserves a different thread.\n\nThis happens to not be a problem as only 32 bits are significant for\nhandles for both Win32 and Win64. This also means that we should be\nable to remove the use for \"long\" in this code, making the routines\nmore symmetric. I have done more tests with Win32 and Win64, and\napplied it. I don't have MinGW environments at hand, but looking at\nthe upstream code that should not matter. The buildfarm will let\nus know soon enough if there is a problem thanks to the TAP tests of\npg_ctl.\n--\nMichael", "msg_date": "Wed, 29 Sep 2021 11:02:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Sep 29, 2021 at 11:02:10AM +0900, Michael Paquier wrote:\n> This happens to not be a problem as only 32 bits are significant for\n> handles for both Win32 and Win64. This also means that we should be\n> able to remove the use for \"long\" in this code, making the routines\n> more symmetric. I have done more tests with Win32 and Win64, and\n> applied it. 
I don't have MinGW environments at hand, but looking at\n> the upstream code that should not matter. The buildfarm will let\n> us know soon enough if there is a problem thanks to the TAP tests of\n> pg_ctl.\n\nSo, I have been looking at the rest of the patch set for the last\ncouple of days, and I think that I have spotted all the code paths\nthat need to be smarter when it comes to multiple file-based log\ndestinations. Attached is a new patch set:\n- 0001 does some refactoring of the file rotation in syslogger.c,\nthat's the same patch as previously posted.\n- 0002 is more refactoring of elog.c, adding routines for the start\ntimestamp, log timestamp, the backend type and an extra one to check\nif a query can be logged or not.\n- 0003 is a change to send_message_to_server_log() to be smarter\nregarding the fallback to stderr if a csvlog (or a jsonlog!) entry\ncannot be logged because the redirection is not ready yet. The code\nof HEAD processes first stderr, then csvlog, with csvlog moving back\nto stderr if not done yet. That's a bit strange, because for example\non WIN32 we would lose any csvlog entry for a service. I propose here\nto do csvlog first, and fallback to stderr so as it gets done in one\ncode path instead of two. I have spent quite a bit of time thinking\nabout the best way to handle the case of multiple file log\ndestinations here because we don't want to log multiple times to\nstderr if csvlog and jsonlog are both enabled. And I think that this\nis the simplest thing we could do.\n- 0004 moves the CSV-specific code into its own file. 
This includes\nsome refactoring of elog.c that should be moved to 0002, as this\nrequires more routines of elog.c to be published:\n-- write_pipe_chunks()\n-- error_severity()\n- 0005 is the main meat, that introduces JSON as log_destination.\nThis compiles and passes all my tests, but I have not really done an\nin-depth review of this code yet.\n\n0002 and 0004 could be more polished and most of their pieces had\nbetter be squashed together. 0003, though, would improve the case of\nWIN32 where only csvlog is enabled so as log entries are properly\nredirected to the event logs if the redirection is not done yet. I'd\nlike to move on with 0001 and 0003 as independent pieces.\n\nSehrope, any thoughts?\n--\nMichael", "msg_date": "Tue, 5 Oct 2021 16:18:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On 10/5/21, 12:22 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> - 0001 does some refactoring of the file rotation in syslogger.c,\r\n> that's the same patch as previously posted.\r\n\r\nMy compiler is unhappy with 5c6e33f:\r\n\r\n syslogger.c: In function ‘logfile_rotate_dest’:\r\n syslogger.c:1302:11: warning: ‘logFileExt’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n filename = logfile_getname(fntime, logFileExt);\r\n ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nThe attached patch seems to fix it.\r\n\r\nNathan", "msg_date": "Thu, 7 Oct 2021 06:00:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Oct 07, 2021 at 06:00:08AM +0000, Bossart, Nathan wrote:\n> The attached patch seems to fix it.\n\nThanks, sorry about that. 
I was able to see that once I compiled\nwithout assertions.\n--\nMichael", "msg_date": "Thu, 7 Oct 2021 16:25:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Oct 05, 2021 at 04:18:17PM +0900, Michael Paquier wrote:\n> 0002 and 0004 could be more polished and most of their pieces had\n> better be squashed together. 0003, though, would improve the case of\n> WIN32 where only csvlog is enabled so as log entries are properly\n> redirected to the event logs if the redirection is not done yet. I'd\n> like to move on with 0001 and 0003 as independent pieces.\n\n0001 and 0003 have been applied independently, attached is a rebased\nversion.\n--\nMichael", "msg_date": "Fri, 8 Oct 2021 12:28:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Fri, Oct 08, 2021 at 12:28:58PM +0900, Michael Paquier wrote:\n> 0001 and 0003 have been applied independently, attached is a rebased\n> version.\n\nAttached are rebased versions of the patch set, where I have done a\ncleaner split:\n- 0001 includes all the refactoring of the routines from elog.c.\n- 0002 moves csv logging into its own file.\n- 0003 introduces the JSON log.\n\n0001 and 0002, the refactoring bits, are in a rather committable\nshape, so I'd like to apply that as the last refactoring pieces I know\nof for this thread. 
0003 still needs a closer lookup, and one part I\ndo not like much in it is the split for [u]int and long values when it\ncomes to key and values.\n--\nMichael", "msg_date": "Tue, 19 Oct 2021 20:02:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Oct 19, 2021 at 08:02:02PM +0900, Michael Paquier wrote:\n> 0001 and 0002, the refactoring bits, are in a rather committable\n> shape, so I'd like to apply that as the last refactoring pieces I know\n> of for this thread. 0003 still needs a closer lookup, and one part I\n> do not like much in it is the split for [u]int and long values when it\n> comes to key and values.\n\nI have finally come around 0003 and reviewed it. There were a couple\nof issues within it, from complications in the code that did not feel\nnecessary to incorrect handling of the values logged, mostly around \nwhen values should be escaped or not. jsonlog.c has been reorganized\nso as its fields match with csvlog.c, and I have simplified the APIs\nin charge of saving the integers into a single one with an argument\nlist and an option to control if the value should be escaped or not.\n\npostgresql.conf.sample also needed a refresh.\n\nI have also spent some time on the documentation, where the list of\nJSON keys with their descriptions and types has been changed to a\ntable, for clarity. The list was a bit incorrect (incorrect fields\nand missing entries), so that should hopefully be clean now.\n\nPatch 0003 has been heavily reworked, and it would be good to have an\nextra pair of eyes on it. 
So I have switched the CF entry as \"Needs\nReview\" and added my name to the list of authors (originally this\nstuff took code portions of own module, as well).\n--\nMichael", "msg_date": "Wed, 10 Nov 2021 22:44:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "Hi,\n\nOn 2021-11-10 22:44:49 +0900, Michael Paquier wrote:\n> Patch 0003 has been heavily reworked, and it would be good to have an\n> extra pair of eyes on it. So I have switched the CF entry as \"Needs\n> Review\" and added my name to the list of authors (originally this\n> stuff took code portions of own module, as well).\n\nThe tests don't seem to pass on windows:\nhttps://cirrus-ci.com/task/5412456754315264?logs=test_bin#L47\nhttps://api.cirrus-ci.com/v1/artifact/task/5412456754315264/tap/src/bin/pg_ctl/tmp_check/log/regress_log_004_logrotate\n\npsql:<stdin>:1: ERROR: division by zero\ncould not open \"c:/cirrus/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\": The system cannot find the file specified at t/004_logrotate.pl line 87.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Jan 2022 13:34:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Sun, Jan 02, 2022 at 01:34:45PM -0800, Andres Freund wrote:\n> The tests don't seem to pass on windows:\n> https://cirrus-ci.com/task/5412456754315264?logs=test_bin#L47\n> https://api.cirrus-ci.com/v1/artifact/task/5412456754315264/tap/src/bin/pg_ctl/tmp_check/log/regress_log_004_logrotate\n> \n> psql:<stdin>:1: ERROR: division by zero\n> could not open \"c:/cirrus/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\": The system cannot find the file specified at t/004_logrotate.pl line 87.\n\nThis seems to point out that the syslogger is too 
slow to capture the\nlogrotate signal, and the patch set is introducing nothing new in\nterms of infrastructure, just an extra value for log_destination.\nThis stuff passes here, and I am not spotting something amiss after an\nextra close read.\n\nAttached is an updated patch set that increases the test timeout (5min\n-> 10min). That should help, I assume.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 16:32:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "\nOn 1/5/22 02:32, Michael Paquier wrote:\n> On Sun, Jan 02, 2022 at 01:34:45PM -0800, Andres Freund wrote:\n>> The tests don't seem to pass on windows:\n>> https://cirrus-ci.com/task/5412456754315264?logs=test_bin#L47\n>> https://api.cirrus-ci.com/v1/artifact/task/5412456754315264/tap/src/bin/pg_ctl/tmp_check/log/regress_log_004_logrotate\n>>\n>> psql:<stdin>:1: ERROR: division by zero\n>> could not open \"c:/cirrus/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\": The system cannot find the file specified at t/004_logrotate.pl line 87.\n> This seems to point out that the syslogger is too slow to capture the\n> logrotate signal, and the patch set is introducing nothing new in\n> terms of infrastructure, just an extra value for log_destination.\n> This stuff passes here, and I am not spotting something amiss after an\n> extra close read.\n>\n> Attached is an updated patch set that increases the test timeout (5min\n> -> 10min). That should help, I assume.\n\n\nITYM 3 min -> 6  min. But in any case, is that really going to solve\nthis? 
The file should exist, even if its contents are not up to date, AIUI.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 Jan 2022 13:06:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "\nOn 1/6/22 13:06, Andrew Dunstan wrote:\n> On 1/5/22 02:32, Michael Paquier wrote:\n>> On Sun, Jan 02, 2022 at 01:34:45PM -0800, Andres Freund wrote:\n>>> The tests don't seem to pass on windows:\n>>> https://cirrus-ci.com/task/5412456754315264?logs=test_bin#L47\n>>> https://api.cirrus-ci.com/v1/artifact/task/5412456754315264/tap/src/bin/pg_ctl/tmp_check/log/regress_log_004_logrotate\n>>>\n>>> psql:<stdin>:1: ERROR: division by zero\n>>> could not open \"c:/cirrus/src/bin/pg_ctl/tmp_check/t_004_logrotate_primary_data/pgdata/current_logfiles\": The system cannot find the file specified at t/004_logrotate.pl line 87.\n>> This seems to point out that the syslogger is too slow to capture the\n>> logrotate signal, and the patch set is introducing nothing new in\n>> terms of infrastructure, just an extra value for log_destination.\n>> This stuff passes here, and I am not spotting something amiss after an\n>> extra close read.\n>>\n>> Attached is an updated patch set that increases the test timeout (5min\n>> -> 10min). That should help, I assume.\n>\n> ITYM 3 min -> 6  min. But in any case, is that really going to solve\n> this? 
The file should exist, even if its contents are not up to date, AIUI.\n\n\n\nI have tested on an msys2 setup with your v8 patches and I am getting this:\n\n\n#   Failed test 'current_logfiles is sane'\n#   at t/004_logrotate.pl line 96.\n#                   'stderr log/postgresql-2022-01-06_222419.log\n# csvlog log/postgresql-2022-01-06_222419.csv\n# '\n#     doesn't match '(?^:^stderr log/postgresql-.*log\n# csvlog log/postgresql-.*csv\n# jsonlog log/postgresql-.*json$)'\n\n#   Failed test 'found expected log file content for stderr'\n#   at t/004_logrotate.pl line 103.\n#                   ''\n#     doesn't match '(?^:division by zero)'\n\n#   Failed test 'found expected log file content for jsonlog'\n#   at t/004_logrotate.pl line 105.\n#                   undef\n#     doesn't match '(?^:division by zero)'\n\n#   Failed test 'pg_current_logfile() gives correct answer with jsonlog'\n#   at t/004_logrotate.pl line 105.\n#          got: ''\n#     expected: undef\n# Looks like you failed 4 tests of 14.\n[22:37:31] t/004_logrotate.pl ...\nDubious, test returned 4 (wstat 1024, 0x400)\nFailed 4/14 subtests\n[22:37:31]\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 6 Jan 2022 18:28:26 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Jan 06, 2022 at 06:28:26PM -0500, Andrew Dunstan wrote:\n> I have tested on an msys2 setup with your v8 patches and I am getting this:\n> \n> #   Failed test 'current_logfiles is sane'\n> #   at t/004_logrotate.pl line 96.\n> #                   'stderr log/postgresql-2022-01-06_222419.log\n> # csvlog log/postgresql-2022-01-06_222419.csv\n\nYes, I was waiting for the latest results, but that did not help at\nall. 
Something is wrong with the patch, I am not sure what yet, but\nthat seems like a mistake in the backend part of it rather than the\ntests. I have switched the CF entry as waiting on author until this\nis addressed.\n--\nMichael", "msg_date": "Fri, 7 Jan 2022 15:49:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Fri, Jan 07, 2022 at 03:49:47PM +0900, Michael Paquier wrote:\n> Yes, I was waiting for the latest results, but that did not help at\n> all. Something is wrong with the patch, I am not sure what yet, but\n> that seems like a mistake in the backend part of it rather than the\n> tests. I have switched the CF entry as waiting on author until this\n> is addressed.\n\nThe issue comes from an incorrect change in syslogger_parseArgs()\nwhere I missed that the incrementation of argv by 3 has no need to be\nchanged. A build with -DEXEC_BACKEND is enough to show the failure,\nwhich caused a crash when starting up the syslogger because of a NULL\npointer dereference. The attached v9 should be enough to switch the\nCF bot to green.\n--\nMichael", "msg_date": "Mon, 10 Jan 2022 21:48:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On 1/10/22, 4:51 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> The issue comes from an incorrect change in syslogger_parseArgs()\r\n> where I missed that the incrementation of argv by 3 has no need to be\r\n> changed. A build with -DEXEC_BACKEND is enough to show the failure,\r\n> which caused a crash when starting up the syslogger because of a NULL\r\n> pointer dereference. The attached v9 should be enough to switch the\r\n> CF bot to green.\r\n\r\nI've been looking at the latest patch set intermittently and playing\r\naround with jsonlog a little. 
It seems to work well, and I don't have\r\nany significant comments about the code. 0001 and 0002 seem\r\nstraightforward and uncontroversial. IIUC 0003 simply introduces\r\njsonlog using the existing framework.\r\n\r\nI wonder if we should consider tracking each log destination as a set\r\nof function pointers. The main logging code would just loop through\r\nthe enabled log destinations and use these functions, and it otherwise\r\nwould be completely detached (i.e., no \"if jsonlog\" blocks). This\r\nmight open up the ability to define custom log destinations via\r\nmodules, too. However, I don't know if there's any real demand for\r\nsomething like this, and it should probably be done separately from\r\nintroducing jsonlog, anyway.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 20:34:26 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Tue, Jan 11, 2022 at 08:34:26PM +0000, Bossart, Nathan wrote:\n> I've been looking at the latest patch set intermittently and playing\n> around with jsonlog a little. It seems to work well, and I don't have\n> any significant comments about the code. 0001 and 0002 seem\n> straightforward and uncontroversial.\n\nThanks. I have looked again at 0001 and 0002 today and applied both,\nso it means that we are done with all the refactoring pieces proposed\nup to now.\n\n> IIUC 0003 simply introduces jsonlog using the existing framework.\n\nThis part will have to wait a bit more, but yes, this piece should be\nstraight-forward.\n\n> I wonder if we should consider tracking each log destination as a set\n> of function pointers. The main logging code would just loop through\n> the enabled log destinations and use these functions, and it otherwise\n> would be completely detached (i.e., no \"if jsonlog\" blocks). This\n> might open up the ability to define custom log destinations via\n> modules, too. 
However, I don't know if there's any real demand for\n> something like this, and it should probably be done separately from\n> introducing jsonlog, anyway.\n\nI am not sure that this is worth the complications, either.\n--\nMichael", "msg_date": "Wed, 12 Jan 2022 15:27:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Wed, Jan 12, 2022 at 03:27:19PM +0900, Michael Paquier wrote:\n> This part will have to wait a bit more, but yes, this piece should be\n> straight-forward.\n\nOkay, this last piece has been applied this morning, after more review\nand a couple of adjustments, mainly cosmetic (pg_current_logfile\nmissed a refresh, incorrect copyright in jsonlog.c, etc.). Let's see\nwhat the buildfarm thinks.\n--\nMichael", "msg_date": "Mon, 17 Jan 2022 10:48:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Mon, Jan 17, 2022 at 10:48:06AM +0900, Michael Paquier wrote:\n> Okay, this last piece has been applied this morning, after more review\n> and a couple of adjustments, mainly cosmetic (pg_current_logfile\n> missed a refresh, incorrect copyright in jsonlog.c, etc.). Let's see\n> what the buildfarm thinks.\n\nBy the way, while on it, using directly COPY to load the logs from a\ngenerated .json file can be trickier than it looks, as backslashes\nrequire an extra escap when loading the data. 
One idea, while not the\nbest performance-wise, is to rely on COPY FROM PROGRAM with commands\nlike that:\nCREATE TABLE logs (data jsonb);\nCOPY logs FROM PROGRAM 'cat logs.json | sed ''s/\\\\/\\\\\\\\/g''';\n--\nMichael", "msg_date": "Mon, 17 Jan 2022 11:12:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "So, thinking about this, there is one important piece that is missing\nhere, which is the ability to change the default format for what we\nwrite to stderr. Right now, if you have stderr output, it is always in\nthe \"plain multiline\" format, with no option to change it. If you want\na JSON log, you have to read a file. But ISTM it would be pretty useful\nif you could say \"log_default_format=json\" and get the log that we get\nin stderr in the JSON format instead.\n\n From what I hear in the container world, what they would *prefer* (but\nthey don't often get) is to receive the JSON-format logs directly in\nstderr from the daemons they run; they capture stderr and they have the\nlogs just in the format they need, without having to open the log files,\nparsing the lines to rewrite in a different format as is done currently.\n\nI think this would be a relatively easy patch to do. Opinions?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 10 Feb 2022 19:45:17 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> So, thinking about this, there is one important piece that is missing\n> here, which is the ability to change the default format for what we\n> write to stderr. Right now, if you have stderr output, it is always in\n> the \"plain multiline\" format, with no option to change it. 
If you want\n> a JSON log, you have to read a file. But ISTM it would be pretty useful\n> if you could say \"log_default_format=json\" and get the log that we get\n> in stderr in the JSON format instead.\n\n>> From what I hear in the container world, what they would *prefer* (but\n> they don't often get) is to receive the JSON-format logs directly in\n> stderr from the daemons they run; they capture stderr and they have the\n> logs just in the format they need, without having to open the log files,\n> parsing the lines to rewrite in a different format as is done currently.\n\n> I think this would be a relatively easy patch to do. Opinions?\n\nI think assuming that everything that comes out on the postmaster's stderr\nis generated by our code is hopelessly naive. See for example glibc's\nbleats when it detects malloc corruption, or when loading a shlib fails.\nSo I don't believe something like this can be made to work reliably.\n\nThe existing syslogger logic does have the ability to cope with\nsuch out-of-protocol data. So maybe, if you are using syslogger,\nyou could have it transform such messages into some\nlowest-common-denominator jsonlog format. 
But it's not going to\nwork to expect that to happen with raw stderr.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Feb 2022 20:18:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" }, { "msg_contents": "On Thu, Feb 10, 2022 at 07:45:17PM -0300, Alvaro Herrera wrote:\n> From what I hear in the container world, what they would *prefer* (but\n> they don't often get) is to receive the JSON-format logs directly in\n> stderr from the daemons they run; they capture stderr and they have the\n> logs just in the format they need, without having to open the log files,\n> parsing the lines to rewrite in a different format as is done currently.\n\nYes, I have been pinged about that, which is why there are still cases\nfor my out-of-core extension jsonlog that uses the elog hook.\n\n> I think this would be a relatively easy patch to do. Opinions?\n\nThe postmaster goes through a couple of loops with the fd to open for\nthe default format, that the syslogger inherits from the postmaster,\nand I am pretty sure that there are a couple of code paths around the\npostmaster startup that can be tricky to reason about.\n\nMaking the new parameter PGC_POSTMASTER makes things easier to handle,\nstill the postmaster generates a couple of LOG entries and redirects\nthem to stderr before loading any GUC values, which would mean that we\ncannot make sure that all the logs are valid JSON objects. If we want\nto be 100% waterproof here, we may want to track down the format to\nuse by default with a mean different than a GUC for the postmaster\nstartup? A file holding this information in the root of the data\nfolder would be one way.\n--\nMichael", "msg_date": "Fri, 11 Feb 2022 10:24:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add jsonlog log_destination for JSON server logs" } ]
[ { "msg_contents": "(Starting a new thread for greater visibility)\n\nThe attached is a fairly straightforward correction. I did want to make\nsure it was okay to bump the catversion in the PG14 branch also. I've seen\nfixes where doing that during beta was in question.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Aug 2021 14:32:47 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> (Starting a new thread for greater visibility)\n> The attached is a fairly straightforward correction. I did want to make\n> sure it was okay to bump the catversion in the PG14 branch also. I've seen\n> fixes where doing that during beta was in question.\n\nYeah, you need to bump catversion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Aug 2021 15:07:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Tue, Aug 31, 2021 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > (Starting a new thread for greater visibility)\n> > The attached is a fairly straightforward correction. I did want to make\n> > sure it was okay to bump the catversion in the PG14 branch also. I've\nseen\n> > fixes where doing that during beta was in question.\n>\n> Yeah, you need to bump catversion.\n\nDone, thanks for confirming.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Aug 2021 15:22:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Tue, Aug 31, 2021 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, you need to bump catversion.\n\n> Done, thanks for confirming.\n\nFor future reference --- I think it's potentially confusing to use\nthe same catversion number in different branches, except for the\nshort time after a new branch where the initial catalog contents\nare actually identical. So the way I'd have done this is to use\n202108311 in the back branch and 202108312 in HEAD. It's not\nterribly important, but something to note for next time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Aug 2021 15:38:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "Hi John,\n\nBy looking at timestamptz_bin() implementation I don't see why it\nshould be STABLE. Its return value depends only on the input values.\nIt doesn't look at the session parameters. 
timestamptz_in() and\ntimestamptz_out() are STABLE, that's true, but this is no concern of\ntimestamptz_bin().\n\nAm I missing something?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Sep 2021 12:32:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Wed, Sep 1, 2021 at 5:32 AM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n>\n> By looking at timestamptz_bin() implementation I don't see why it\n> should be STABLE. Its return value depends only on the input values.\n> It doesn't look at the session parameters. timestamptz_in() and\n> timestamptz_out() are STABLE, that's true, but this is no concern of\n> timestamptz_bin().\n\nI'm not quite willing to bet the answer couldn't change if the timezone\nchanges, but it's possible I'm the one missing something.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Sep 2021 13:26:26 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Wed, Sep 1, 2021 at 5:32 AM Aleksander Alekseev <aleksander@timescale.com>\n> wrote:\n>> By looking at timestamptz_bin() implementation I don't see why it\n>> should be STABLE. Its return value depends only on the input values.\n>> It doesn't look at the session parameters. timestamptz_in() and\n>> timestamptz_out() are STABLE, that's true, but this is no concern of\n>> timestamptz_bin().\n\n> I'm not quite willing to bet the answer couldn't change if the timezone\n> changes, but it's possible I'm the one missing something.\n\nAfter playing with it for awhile, it seems like the behavior is indeed\nnot TZ-dependent, but the real question is should it be?\nAs an example,\n\nregression=# set timezone to 'America/New_York';\nSET\nregression=# select date_bin('1 day', '2021-11-01 00:00 +00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);\n date_bin \n------------------------\n 2021-10-31 00:00:00-04\n(1 row)\n\nregression=# select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);\n date_bin \n------------------------\n 2021-11-08 23:00:00-05\n(1 row)\n\nI see that these two answers are both exactly multiples of 24 hours away\nfrom the given origin. 
But if I'm binning on the basis of \"days\" or\nlarger units, I would sort of expect to get local midnight, and I'm not\ngetting that once I cross a DST boundary.\n\nIf this is indeed the behavior we want, I concur with Aleksander\nthat date_bin isn't TZ-sensitive and needn't be marked STABLE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Sep 2021 14:44:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Wed, Sep 01, 2021 at 01:26:26PM -0400, John Naylor wrote:\n> On Wed, Sep 1, 2021 at 5:32 AM Aleksander Alekseev <aleksander@timescale.com>\n> wrote:\n> >\n> > By looking at timestamptz_bin() implementation I don't see why it\n> > should be STABLE. Its return value depends only on the input values.\n> > It doesn't look at the session parameters. timestamptz_in() and\n> > timestamptz_out() are STABLE, that's true, but this is no concern of\n> > timestamptz_bin().\n> \n> I'm not quite willing to bet the answer couldn't change if the timezone\n> changes, but it's possible I'm the one missing something.\n\nts=# SET timezone='-12';\nts=# SELECT date_bin('1hour', '2021-07-01 -1200', '2021-01-01');\ndate_bin | 2021-07-01 00:00:00-12\n\nts=# SET timezone='+12';\nts=# SELECT date_bin('1hour', '2021-07-01 -1200', '2021-01-01');\ndate_bin | 2021-07-02 00:00:00+12\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Sep 2021 13:50:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Sep 01, 2021 at 01:26:26PM -0400, John Naylor wrote:\n>> I'm not quite willing to bet the answer couldn't change if the timezone\n>> changes, but it's possible I'm the one missing something.\n\n> ts=# SET timezone='-12';\n> ts=# SELECT date_bin('1hour', '2021-07-01 -1200', '2021-01-01');\n> 
date_bin | 2021-07-01 00:00:00-12\n\n> ts=# SET timezone='+12';\n> ts=# SELECT date_bin('1hour', '2021-07-01 -1200', '2021-01-01');\n> date_bin | 2021-07-02 00:00:00+12\n\nYeah, but those are the same timestamptz value.\n\nAnother problem with this example as written is that the origin\nvalues being used are not the same in the two cases ... so I\nthink it's a bit accidental that the answers come out the same.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Sep 2021 15:15:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> regression=# set timezone to 'America/New_York';\n> SET\n> regression=# select date_bin('1 day', '2021-11-01 00:00\n+00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);\n> date_bin\n> ------------------------\n> 2021-10-31 00:00:00-04\n> (1 row)\n>\n> regression=# select date_bin('1 day', '2021-11-10 00:00\n+00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);\n> date_bin\n> ------------------------\n> 2021-11-08 23:00:00-05\n> (1 row)\n>\n> I see that these two answers are both exactly multiples of 24 hours away\n> from the given origin. But if I'm binning on the basis of \"days\" or\n> larger units, I would sort of expect to get local midnight, and I'm not\n> getting that once I cross a DST boundary.\n\nHmm, that's seems like a reasonable expectation. 
I can get local midnight\nif I recast to timestamp:\n\n# select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz::timestamp,\n'2021-09-01 00:00 -04'::timestamptz::timestamp);\n date_bin\n---------------------\n 2021-11-09 00:00:00\n(1 row)\n\nIt's a bit unintuitive, though.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> regression=# set timezone to 'America/New_York';> SET> regression=# select date_bin('1 day', '2021-11-01 00:00 +00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);>         date_bin       > ------------------------>  2021-10-31 00:00:00-04> (1 row)>> regression=# select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz, '2021-09-01 00:00 -04'::timestamptz);>         date_bin       > ------------------------>  2021-11-08 23:00:00-05> (1 row)>> I see that these two answers are both exactly multiples of 24 hours away> from the given origin.  But if I'm binning on the basis of \"days\" or> larger units, I would sort of expect to get local midnight, and I'm not> getting that once I cross a DST boundary.Hmm, that's seems like a reasonable expectation. I can get local midnight if I recast to timestamp:# select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz::timestamp, '2021-09-01 00:00 -04'::timestamptz::timestamp);      date_bin--------------------- 2021-11-09 00:00:00(1 row)It's a bit unintuitive, though.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Sep 2021 15:18:54 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I see that these two answers are both exactly multiples of 24 hours away\n>> from the given origin. 
But if I'm binning on the basis of \"days\" or\n>> larger units, I would sort of expect to get local midnight, and I'm not\n>> getting that once I cross a DST boundary.\n\n> Hmm, that's seems like a reasonable expectation. I can get local midnight\n> if I recast to timestamp:\n\n> # select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz::timestamp,\n> '2021-09-01 00:00 -04'::timestamptz::timestamp);\n> date_bin\n> ---------------------\n> 2021-11-09 00:00:00\n> (1 row)\n\nYeah, and then back to timestamptz if that's what you really need :-(\n\n> It's a bit unintuitive, though.\n\nAgreed. If we keep it like this, adding some documentation around\nthe point would be a good idea I think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Sep 2021 15:25:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Wed, Sep 1, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I see that these two answers are both exactly multiples of 24 hours\naway\n> >> from the given origin. But if I'm binning on the basis of \"days\" or\n> >> larger units, I would sort of expect to get local midnight, and I'm not\n> >> getting that once I cross a DST boundary.\n>\n> > Hmm, that's seems like a reasonable expectation. I can get local\nmidnight\n> > if I recast to timestamp:\n>\n> > # select date_bin('1 day', '2021-11-10 00:00\n+00'::timestamptz::timestamp,\n> > '2021-09-01 00:00 -04'::timestamptz::timestamp);\n> > date_bin\n> > ---------------------\n> > 2021-11-09 00:00:00\n> > (1 row)\n>\n> Yeah, and then back to timestamptz if that's what you really need :-(\n>\n> > It's a bit unintuitive, though.\n>\n> Agreed. 
If we keep it like this, adding some documentation around\n> the point would be a good idea I think.\n\nHaving heard no votes on changing this behavior (and it would be a bit of\nwork), I'll start on a documentation patch. And I'll go ahead and re-mark\nthe function as immutable tomorrow barring objections.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Sep 1, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> John Naylor <john.naylor@enterprisedb.com> writes:> > On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> >> I see that these two answers are both exactly multiples of 24 hours away> >> from the given origin.  But if I'm binning on the basis of \"days\" or> >> larger units, I would sort of expect to get local midnight, and I'm not> >> getting that once I cross a DST boundary.>> > Hmm, that's seems like a reasonable expectation. I can get local midnight> > if I recast to timestamp:>> > # select date_bin('1 day', '2021-11-10 00:00 +00'::timestamptz::timestamp,> > '2021-09-01 00:00 -04'::timestamptz::timestamp);> >       date_bin> > ---------------------> >  2021-11-09 00:00:00> > (1 row)>> Yeah, and then back to timestamptz if that's what you really need :-(>> > It's a bit unintuitive, though.>> Agreed.  If we keep it like this, adding some documentation around> the point would be a good idea I think.Having heard no votes on changing this behavior (and it would be a bit of work), I'll start on a documentation patch. 
And I'll go ahead and re-mark the function as immutable tomorrow barring objections.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Sep 2021 11:16:50 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Wed, Sep 1, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I see that these two answers are both exactly multiples of 24 hours\naway\n> >> from the given origin. But if I'm binning on the basis of \"days\" or\n> >> larger units, I would sort of expect to get local midnight, and I'm not\n> >> getting that once I cross a DST boundary.\n>\n> > Hmm, that's seems like a reasonable expectation. I can get local\nmidnight\n> > if I recast to timestamp:\n>\n> > # select date_bin('1 day', '2021-11-10 00:00\n+00'::timestamptz::timestamp,\n> > '2021-09-01 00:00 -04'::timestamptz::timestamp);\n> > date_bin\n> > ---------------------\n> > 2021-11-09 00:00:00\n> > (1 row)\n>\n> Yeah, and then back to timestamptz if that's what you really need :-(\n>\n> > It's a bit unintuitive, though.\n>\n> Agreed. If we keep it like this, adding some documentation around\n> the point would be a good idea I think.\n\nAttached is a draft doc patch using the above examples. 
Is there anything\nelse that would be useful to mention?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Sep 2021 11:57:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "> On Wed, Sep 1, 2021 at 3:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > John Naylor <john.naylor@enterprisedb.com> writes:\n> > > On Wed, Sep 1, 2021 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> I see that these two answers are both exactly multiples of 24 hours\naway\n> > >> from the given origin. But if I'm binning on the basis of \"days\" or\n> > >> larger units, I would sort of expect to get local midnight, and I'm\nnot\n> > >> getting that once I cross a DST boundary.\n> >\n> > > Hmm, that's seems like a reasonable expectation. I can get local\nmidnight\n> > > if I recast to timestamp:\n> >\n> > > # select date_bin('1 day', '2021-11-10 00:00\n+00'::timestamptz::timestamp,\n> > > '2021-09-01 00:00 -04'::timestamptz::timestamp);\n> > > date_bin\n> > > ---------------------\n> > > 2021-11-09 00:00:00\n> > > (1 row)\n> >\n> > Yeah, and then back to timestamptz if that's what you really need :-(\n> >\n> > > It's a bit unintuitive, though.\n> >\n> > Agreed. If we keep it like this, adding some documentation around\n> > the point would be a good idea I think.\n>\n> Attached is a draft doc patch using the above examples. Is there anything\nelse that would be useful to mention?\n\nAny thoughts on the doc patch?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Sep 2021 07:46:59 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "Hi John,\n\n> Any thoughts on the doc patch?\n\nIt so happened that I implemented a similar feature in TimescaleDB [1].\n\nI discovered that it's difficult from both developer's and user's\nperspectives to think about the behavior of the function in the\ncontext of given TZ and its complicated rules, as you are trying to do\nin the doc patch. So what we did instead is saying: for timestamptz\nthe function works as if it was timestamp. E.g. time_bucket_ng(\"3\nmonth\", \"2021 Oct 03 12:34:56 TZ\") = \"2021 Jan 01 00:00:00 TZ\" no\nmatter what TZ it is and what rules (DST, corrections, etc) it has. It\nseems to be not only logical and easy to understand, but also easy to\nimplement [2].\n\nDo you think it would be possible to adopt a similar approach in\nrespect of documenting for date_bin()? 
To be honest, I didn't try to\nfigure out what is the actual implementation of date_bin() for TZ\ncase.\n\n[1]: https://docs.timescale.com/api/latest/hyperfunctions/time_bucket_ng/\n[2]: https://github.com/timescale/timescaledb/blob/master/src/time_bucket.c#L470\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 23 Sep 2021 11:13:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "Hi hackers,\n\n> the function works as if it was timestamp. E.g. time_bucket_ng(\"3\n> month\", \"2021 Oct 03 12:34:56 TZ\") = \"2021 Jan 01 00:00:00 TZ\" no\n\nThat was a typo. What I meant was:\n\ntime_bucket_ng(\"3 month\", \"2021 Feb 03 12:34:56 TZ\")\n\nFebruary, not October. Sorry for the confusion.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 23 Sep 2021 15:06:19 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "On Thu, Sep 23, 2021 at 4:13 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n>\n> Hi John,\n>\n> > Any thoughts on the doc patch?\n>\n> It so happened that I implemented a similar feature in TimescaleDB [1].\n>\n> I discovered that it's difficult from both developer's and user's\n> perspectives to think about the behavior of the function in the\n> context of given TZ and its complicated rules, as you are trying to do\n> in the doc patch. So what we did instead is saying: for timestamptz\n> the function works as if it was timestamp. E.g. time_bucket_ng(\"3\n> month\", \"2021 Oct 03 12:34:56 TZ\") = \"2021 Jan 01 00:00:00 TZ\" no\n> matter what TZ it is and what rules (DST, corrections, etc) it has. 
It\n> seems to be not only logical and easy to understand, but also easy to\n> implement [2].\n>\n> Do you think it would be possible to adopt a similar approach in\n> respect of documenting for date_bin()? To be honest, I didn't try to\n> figure out what is the actual implementation of date_bin() for TZ\n> case.\n\nI think you have a point that it could be stated more simply and generally.\nI'll try to move in that direction.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Sep 2021 10:49:51 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" }, { "msg_contents": "I wrote:\n\n> I think you have a point that it could be stated more simply and\ngenerally. I'll try to move in that direction.\n\nOn second thought, I don't think this is helpful. Concrete examples are\neasier to reason about.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Sep 2021 12:16:47 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: mark the timestamptz variant of date_bin() as stable" } ]
[ { "msg_contents": "Hello,\r\n\r\nThere was a brief discussion [1] back in February on allowing user\r\nmapping for LDAP, in order to open up some more complex authorization\r\nlogic (and slightly reduce the need for LDAP-to-Postgres user\r\nsynchronization). Attached is an implementation of this that separates\r\nthe LDAP authentication and authorization identities, and lets the\r\nclient control the former with an `ldapuser` connection option or its\r\nassociated PGLDAPUSER envvar.\r\n\r\nThis isn't as useful as, say, authorization based on LDAP attributes or\r\ngroup membership, but it seems like a necessary step towards that\r\nfeature, since we'll need to separate authn and authz anyway. This\r\nprovides some feature parity with other auth methods like gss, and it\r\nsolves the \"let anyone who can authenticate against LDAP connect as X\r\nrole\" use case trivially.\r\n\r\nThere is precedent for allowing the DBA to choose whether to map a bare\r\nusername or the \"full\" identity expansion -- see for example\r\ninclude_realm=1 for gss and clientname=DN for cert -- so I added an\r\nldap_map_dn=1 option which can be used to map the whole DN. (I'm not\r\nentirely convinced that it's useful, but maybe there are some\r\ndeployments that put authorization information into the LDAP topology,\r\nlike \"everyone in this particular subtree is an admin\".) Unlike\r\ninclude_realm, this is only allowed with an explicit map. I don't\r\nanticipate people using a full DN as a database username, and I'm\r\nworried that doing that without normalization could cause some major\r\nconfusion and/or security problems.\r\n\r\nWhen using a newer client with an older server, the new `ldapuser`\r\noption will cause a connection failure. For the case where PGUSER and\r\nPGLDAPUSER are identical, that failure is technically unnecessary, and\r\nI briefly considered stripping the `ldapuser` option from the\r\nconnection string in that case so that we could have wider\r\ncompatibility. 
I now think that's a bad idea, because suddenly\r\nintroducing a hard connection failure (or new success) just because\r\nyour PGLDAPUSER variable changed would be a major support hazard. The\r\nTODO is still in the code to remind me to have the conversation.\r\n\r\nWDYT?\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/94f6b945f9ca8cabe2b9d2a38ec19dca6f90a083.camel%40vmware.com", "msg_date": "Tue, 31 Aug 2021 19:39:59 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "[PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Tue, 2021-08-31 at 19:39 +0000, Jacob Champion wrote:\r\n> Hello,\r\n> \r\n> There was a brief discussion [1] back in February on allowing user\r\n> mapping for LDAP, in order to open up some more complex authorization\r\n> logic (and slightly reduce the need for LDAP-to-Postgres user\r\n> synchronization). Attached is an implementation of this that separates\r\n> the LDAP authentication and authorization identities, and lets the\r\n> client control the former with an `ldapuser` connection option or its\r\n> associated PGLDAPUSER envvar.\r\n\r\nThe cfbot found a failure in postgres_fdw, which I completely neglected\r\nin my design. I think the desired functionality should be to allow the\r\nldapuser connection option during CREATE USER MAPPING but not CREATE\r\nSERVER. I'll have a v2 up today to fix that.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 1 Sep 2021 15:42:35 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Wed, 2021-09-01 at 15:42 +0000, Jacob Champion wrote:\r\n> The cfbot found a failure in postgres_fdw, which I completely neglected\r\n> in my design. 
I think the desired functionality should be to allow the\r\n> ldapuser connection option during CREATE USER MAPPING but not CREATE\r\n> SERVER.\r\n\r\nFixed in v2, attached.\r\n\r\n--Jacob", "msg_date": "Wed, 1 Sep 2021 18:43:08 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Wed, Sep 1, 2021 at 11:43 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Wed, 2021-09-01 at 15:42 +0000, Jacob Champion wrote:\n> > The cfbot found a failure in postgres_fdw, which I completely neglected\n> > in my design. I think the desired functionality should be to allow the\n> > ldapuser connection option during CREATE USER MAPPING but not CREATE\n> > SERVER.\n>\n> Fixed in v2, attached.\n>\n> --Jacob\n>\nHi,\n\n+ if (strcmp(val, \"1\") == 0)\n+ hbaline->ldap_map_dn = true;\n+ else\n+ hbaline->ldap_map_dn = false;\n\nThe above can be shortened as:\n\n hbaline->ldap_map_dn = strcmp(val, \"1\") == 0;\n\nCheers\n\nOn Wed, Sep 1, 2021 at 11:43 AM Jacob Champion <pchampion@vmware.com> wrote:On Wed, 2021-09-01 at 15:42 +0000, Jacob Champion wrote:\n> The cfbot found a failure in postgres_fdw, which I completely neglected\n> in my design. 
I think the desired functionality should be to allow the\n> ldapuser connection option during CREATE USER MAPPING but not CREATE\n> SERVER.\n\nFixed in v2, attached.\n\n--Jacob\nHi,\n\n+       if (strcmp(val, \"1\") == 0)\n+           hbaline->ldap_map_dn = true;\n+       else\n+           hbaline->ldap_map_dn = false;\n\nThe above can be shortened as:\n\n  hbaline->ldap_map_dn = strcmp(val, \"1\") == 0;\n\nCheers", "msg_date": "Wed, 1 Sep 2021 12:59:32 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Wed, 2021-09-01 at 12:59 -0700, Zhihong Yu wrote:\r\n> +       if (strcmp(val, \"1\") == 0)\r\n> +           hbaline->ldap_map_dn = true;\r\n> +       else\r\n> +           hbaline->ldap_map_dn = false;\r\n> \r\n> The above can be shortened as:\r\n> \r\n>   hbaline->ldap_map_dn = strcmp(val, \"1\") == 0;\r\n\r\nI usually prefer simplifying those conditionals, too, but in this case\r\nI think it'd be a pretty big departure from the existing style. 
See for\n> example the handling of include_realm and compat_realm just after this\n> hunk.\n>\n> --Jacob\n>\nHi,\nI looked at v2-Allow-user-name-mapping-with-LDAP.patch\nand src/backend/postmaster/postmaster.c in master branch but didn't find\nwhat you mentioned.\n\nI still think my recommendation is concise.\n\nCheers", "msg_date": "Wed, 1 Sep 2021 14:20:39 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Wed, 2021-09-01 at 14:20 -0700, Zhihong Yu wrote:\r\n> I looked at v2-Allow-user-name-mapping-with-LDAP.patch\r\n> and src/backend/postmaster/postmaster.c in master branch but didn't\r\n> find what you mentioned.\r\n\r\nThis hunk is in src/backend/libpq/hba.c, in the parse_hba_auth_opt()\r\nfunction. 
The code there uses the less concise form throughout, as far\r\nas I can see.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 1 Sep 2021 21:34:32 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Wed, Sep 1, 2021 at 8:43 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Wed, 2021-09-01 at 15:42 +0000, Jacob Champion wrote:\n> > The cfbot found a failure in postgres_fdw, which I completely neglected\n> > in my design. I think the desired functionality should be to allow the\n> > ldapuser connection option during CREATE USER MAPPING but not CREATE\n> > SERVER.\n>\n> Fixed in v2, attached.\n\nA couple of quick comments from a quick look-over:\n\nI'm a bit hesitant about the ldapuser libpq parameter. Do we really\nwant to limit ourselves to just ldap, if we allow this? I mean, why\nnot allow say radius or pam to also specify a different username for\nthe external system? If we want to do that, now or in the future, we\nshould have a much more generic parameter name, something like\nauthuser?\n\nWhy do we actually need ldap_map_dn? Shouldn't this just be what\nhappens if you specify map= on an ldap connection?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 28 Sep 2021 15:38:50 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Tue, 2021-09-28 at 15:38 +0200, Magnus Hagander wrote:\r\n> I'm a bit hesitant about the ldapuser libpq parameter. Do we really\r\n> want to limit ourselves to just ldap, if we allow this? I mean, why\r\n> not allow say radius or pam to also specify a different username for\r\n> the external system? 
If we want to do that, now or in the future, we\r\n> should have a much more generic parameter name, something like\r\n> authuser?\r\n\r\nI'd be on board with a more general option name.\r\n\r\nSo from the perspective of a SASL exchange, PGUSER would be the\r\nauthorization identity, and PGAUTHUSER, say, would be the\r\nauthentication identity. Is \"auth\" a clear enough prefix that users and\r\ndevs will understand what the difference is between the two?\r\n\r\n | authn authz\r\n---------+-----------------------------------\r\n envvar | PGAUTHUSER PGUSER\r\nconninfo | authuser user\r\nfrontend | conn->pgauthuser conn->pguser backend | port->auth_user port->user_name\r\n\r\n> Why do we actually need ldap_map_dn? Shouldn't this just be what\r\n> happens if you specify map= on an ldap connection?\r\n\r\nFor simple-bind setups, I think requiring users to match an entire DN\r\nis probably unnecessary (and/or dangerous once you start getting into\r\nregex mapping), so the map uses the bare username by default. My intent\r\nwas for that to have the same default behavior as cert maps.\r\n\r\nThanks,\r\n--Jacob\r\n", "msg_date": "Tue, 28 Sep 2021 18:02:38 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Tue, 2021-09-28 at 18:02 +0000, Jacob Champion wrote:\r\n> On Tue, 2021-09-28 at 15:38 +0200, Magnus Hagander wrote:\r\n> > I'm a bit hesitant about the ldapuser libpq parameter. Do we really\r\n> > want to limit ourselves to just ldap, if we allow this? I mean, why\r\n> > not allow say radius or pam to also specify a different username for\r\n> > the external system? 
If we want to do that, now or in the future, we\r\n> > should have a much more generic parameter name, something like\r\n> > authuser?\r\n> \r\n> I'd be on board with a more general option name.\r\n> \r\n> So from the perspective of a SASL exchange, PGUSER would be the\r\n> authorization identity, and PGAUTHUSER, say, would be the\r\n> authentication identity. Is \"auth\" a clear enough prefix that users and\r\n> devs will understand what the difference is between the two?\r\n> \r\n> | authn authz\r\n> ---------+-----------------------------------\r\n> envvar | PGAUTHUSER PGUSER\r\n> conninfo | authuser user\r\n> frontend | conn->pgauthuser conn->pguser backend | port->auth_user port->user_name\r\n> \r\n> > Why do we actually need ldap_map_dn? Shouldn't this just be what\r\n> > happens if you specify map= on an ldap connection?\r\n> \r\n> For simple-bind setups, I think requiring users to match an entire DN\r\n> is probably unnecessary (and/or dangerous once you start getting into\r\n> regex mapping), so the map uses the bare username by default. My intent\r\n> was for that to have the same default behavior as cert maps.\r\n> \r\n> Thanks,\r\n> --Jacob\r\n\r\n", "msg_date": "Tue, 28 Sep 2021 18:08:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Tue, 2021-09-28 at 18:08 +0000, Jacob Champion wrote:\r\n> > | authn authz\r\n> > ---------+-----------------------------------\r\n> > envvar | PGAUTHUSER PGUSER\r\n> > conninfo | authuser user\r\n> > frontend | conn->pgauthuser conn->pguser backend | port->auth_user port->user_name\r\n\r\nUgh, PEBKAC problems today... apologies. 
This should have been\r\n\r\n | authn authz\r\n---------+-----------------------------------\r\n envvar | PGAUTHUSER PGUSER\r\nconninfo | authuser user\r\nfrontend | conn->pgauthuser conn->pguser\r\n backend | port->auth_user port->user_name\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 28 Sep 2021 18:15:01 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Tue, 2021-09-28 at 18:15 +0000, Jacob Champion wrote:\r\n> | authn authz\r\n> ---------+-----------------------------------\r\n> envvar | PGAUTHUSER PGUSER\r\n> conninfo | authuser user\r\n> frontend | conn->pgauthuser conn->pguser\r\n> backend | port->auth_user port->user_name\r\n\r\nv3 attached, which uses the above naming scheme and removes the stale\r\nTODO. Changes in since-v2.\r\n\r\n--Jacob", "msg_date": "Fri, 29 Oct 2021 17:38:20 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" }, { "msg_contents": "On Fri, 2021-10-29 at 17:38 +0000, Jacob Champion wrote:\r\n> v3 attached, which uses the above naming scheme and removes the stale\r\n> TODO. Changes in since-v2.\r\n\r\nv4 rebases over the recent TAP changes.\r\n\r\n--Jacob", "msg_date": "Thu, 17 Feb 2022 19:15:57 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support pg_ident mapping for LDAP" } ]
[ { "msg_contents": "Hi hackers,\n\nI'm not a perl specialist and it seems to me that the Win32 build is broken.\nThe Win32 build is still important because of the 32-bit clients still in\nuse.\nI'm investigating the problem.\n\n-----------------------------------------------------------------------------------------------\nDetected hardware platform: Win32\nFiles src/bin/pgbench/exprscan.l\nFiles src/bin/pgbench/exprparse.y\nFiles src/bin/psql/psqlscanslash.l\nGenerating configuration headers...\nMicrosoft(R) Build Engine version 16.11.0+0538acc04 for .NET Framework\nCopyright (C) Microsoft Corporation. All rights reserved.\n\nBuilding the projects in this solution one at a time. To enable a parallel\nbuild, add the \"-m\" switch.\nBuild started 31/08/2021 19:35:11.\nProject \"C:\\dll\\postgres\\postgres_head\\pgsql.sln\" on node 1 (default targets).\nC:\\dll\\postgres\\postgres_head\\pgsql.sln.metaproj : error MSB4126: The\nspecified solution configuration \"Release|x86\" is invalid. Please specify\na valid solution configuration using the Configuration\nand Platform properties (e.g. MSBuild.exe Solution.sln /p:Configuration=Debug\n/p:Platform=\"Any CPU\") or leave those properties blank to use\nthe default solution configuration. [C:\\dll\\postgres\\postgres\n_head\\pgsql.sln]\nDone building project \"C:\\dll\\postgres\\postgres_head\\pgsql.sln\"\n(default targets) -- FAILED.\n\n\nBuild FAILED.\n\n\"C:\\dll\\postgres\\postgres_head\\pgsql.sln\" (default target) (1) ->\n(ValidateSolutionConfiguration target) ->\n C:\\dll\\postgres\\postgres_head\\pgsql.sln.metaproj : error MSB4126: The\nspecified solution configuration \"Release|x86\" is invalid. 
Please specify\na valid solution configuration using the Configuration and Platform\nproperties (e.g. MSBuild.exe Solution.sln\n/p:Configuration=Debug /p:Platform=\"Any CPU\") or leave those\nproperties blank to use the default solution configuration.\n[C:\\dll\\postgres\\postgr\nes_head\\pgsql.sln]\n--------------------------------------------------------------------------------\n\nregards,\nRanier Vilela", "msg_date": "Tue, 31 Aug 2021 19:49:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres Win32 build broken?" }, { "msg_contents": "On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n> I'm not a perl specialist and it seems to me that the Win32 build is broken.\n> The Win32 build is still important because of the 32-bit clients still in\n> use.\n> I'm investigating the problem.\n\nBeing able to see the command you are using for build.pl, your\nbuildenv.pl and/or config.pl, as well as your build dependencies\nshould help to know what's wrong.\n\nMSVC builds are tested by various buildfarm members on a daily basis,\nand nothing is red.  I also have a x86 and x64 configuration with\nVS2015 that prove to work as of HEAD at de1d4fe, FWIW.  Now, by\nexperience, one could say that N Windows PG developpers finish with at\nleast (N+1) different environments. 
Basically Simon Riggs's theorem\napplied to Windows development..\n--\nMichael", "msg_date": "Wed, 1 Sep 2021 10:52:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "Em ter., 31 de ago. de 2021 às 22:52, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n> > I'm not a perl specialist and it seems to me that the Win32 build is\n> broken.\n> > The Win32 build is still important because of the 32-bit clients still in\n> > use.\n> > I'm investigating the problem.\n>\n> Being able to see the command you are using for build.pl, your\n> buildenv.pl and/or config.pl, as well as your build dependencies\n> should help to know what's wrong.\n>\nWhen I build Postgres to post, I basically don't change anything.\nEverything is the head's default.\nconfig.pl does not exist\ncommand to build, either on x64 or Win32.\nbuild.bat <enter>\n\n\n>\n> MSVC builds are tested by various buildfarm members on a daily basis,\n> and nothing is red. I also have a x86 and x64 configuration with\n> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. Now, by\n> experience, one could say that N Windows PG developpers finish with at\n> least (N+1) different environments. Basically Simon Riggs's theorem\n> applied to Windows development..\n>\nI'm using the latest msvc 2019.\n From the error message, there is no Release|x86, but Release|Win32.\nBut I still haven't found where are setting this \"Release|x86\"\n\nregards,\nRanier Vilela\n\nEm ter., 31 de ago. 
de 2021 às 22:52, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n> > I'm not a perl specialist and it seems to me that the Win32 build is\n> broken.\n> > The Win32 build is still important because of the 32-bit clients still in\n> > use.\n> > I'm investigating the problem.\n>\n> Being able to see the command you are using for build.pl, your\n> buildenv.pl and/or config.pl, as well as your build dependencies\n> should help to know what's wrong.\n>\nWhen I build Postgres to post, I basically don't change anything.\nEverything is the head's default.\nconfig.pl does not exist\ncommand to build, either on x64 or Win32.\nbuild.bat <enter>\n\n\n>\n> MSVC builds are tested by various buildfarm members on a daily basis,\n> and nothing is red. I also have a x86 and x64 configuration with\n> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. Now, by\n> experience, one could say that N Windows PG developpers finish with at\n> least (N+1) different environments. Basically Simon Riggs's theorem\n> applied to Windows development..\n>\nI'm using the latest msvc 2019.\n From the error message, there is no Release|x86, but Release|Win32.\nBut I still haven't found where are setting this \"Release|x86\"\n\nregards,\nRanier Vilela", "msg_date": "Wed, 1 Sep 2021 08:13:06 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres Win32 build broken?" 
}, { "msg_contents": "\nOn 8/31/21 9:52 PM, Michael Paquier wrote:\n> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n>> I'm not a perl specialist and it seems to me that the Win32 build is broken.\n>> The Win32 build is still important because of the 32-bit clients still in\n>> use.\n>> I'm investigating the problem.\n> Being able to see the command you are using for build.pl, your\n> buildenv.pl and/or config.pl, as well as your build dependencies\n> should help to know what's wrong.\n>\n> MSVC builds are tested by various buildfarm members on a daily basis,\n> and nothing is red. I also have a x86 and x64 configuration with\n> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. Now, by\n> experience, one could say that N Windows PG developpers finish with at\n> least (N+1) different environments. Basically Simon Riggs's theorem\n> applied to Windows development..\n\n\n\nI am seeing the same result as Ranier using VS2017 and VS 2019.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 16:00:58 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "\nOn 9/1/21 4:00 PM, Andrew Dunstan wrote:\n> On 8/31/21 9:52 PM, Michael Paquier wrote:\n>> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n>>> I'm not a perl specialist and it seems to me that the Win32 build is broken.\n>>> The Win32 build is still important because of the 32-bit clients still in\n>>> use.\n>>> I'm investigating the problem.\n>> Being able to see the command you are using for build.pl, your\n>> buildenv.pl and/or config.pl, as well as your build dependencies\n>> should help to know what's wrong.\n>>\n>> MSVC builds are tested by various buildfarm members on a daily basis,\n>> and nothing is red. I also have a x86 and x64 configuration with\n>> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. 
Now, by\n>> experience, one could say that N Windows PG developpers finish with at\n>> least (N+1) different environments. Basically Simon Riggs's theorem\n>> applied to Windows development..\n>\n>\n> I am seeing the same result as Ranier using VS2017 and VS 2019.\n>\n>\n\nBut not with VS2013. If you need to build 32 bit client libraries, using\nan older VS release is probably your best bet.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 18:49:31 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "On Wed, Sep 01, 2021 at 06:49:31PM -0400, Andrew Dunstan wrote:\n> On 9/1/21 4:00 PM, Andrew Dunstan wrote:\n>> I am seeing the same result as Ranier using VS2017 and VS 2019.\n>\n> But not with VS2013. If you need to build 32 bit client libraries, using\n> an older VS release is probably your best bet.\n\nThat's annoying. Should we be more careful with the definition of\n{platform} in DeterminePlatform for those versions of VS?\n--\nMichael", "msg_date": "Thu, 2 Sep 2021 08:56:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "Em qua., 1 de set. 
de 2021 às 19:49, Andrew Dunstan <andrew@dunslane.net>\nescreveu:\n\n>\n> On 9/1/21 4:00 PM, Andrew Dunstan wrote:\n> > On 8/31/21 9:52 PM, Michael Paquier wrote:\n> >> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n> >>> I'm not a perl specialist and it seems to me that the Win32 build is\n> broken.\n> >>> The Win32 build is still important because of the 32-bit clients still\n> in\n> >>> use.\n> >>> I'm investigating the problem.\n> >> Being able to see the command you are using for build.pl, your\n> >> buildenv.pl and/or config.pl, as well as your build dependencies\n> >> should help to know what's wrong.\n> >>\n> >> MSVC builds are tested by various buildfarm members on a daily basis,\n> >> and nothing is red. I also have a x86 and x64 configuration with\n> >> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. Now, by\n> >> experience, one could say that N Windows PG developpers finish with at\n> >> least (N+1) different environments. Basically Simon Riggs's theorem\n> >> applied to Windows development..\n> >\n> >\n> > I am seeing the same result as Ranier using VS2017 and VS 2019.\n> >\n> >\n>\n> But not with VS2013. If you need to build 32 bit client libraries, using\n> an older VS release is probably your best bet.\n>\nThanks Andrew, but I finally got a workaround for the problem.\nset MSBFLAGS=/p:Platform=\"Win32\"\n\nNow Postgres builds fine in 32 bits with the latest msvc (2019).\nIs it worth documenting this?\n\nregards,\nRanier Vilela", "msg_date": "Wed, 1 Sep 2021 21:01:03 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "> On Sep 1, 2021, at 8:01 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> \n>> Em qua., 1 de set. 
de 2021 às 19:49, Andrew Dunstan <andrew@dunslane.net> escreveu:\n>> \n>> On 9/1/21 4:00 PM, Andrew Dunstan wrote:\n>> > On 8/31/21 9:52 PM, Michael Paquier wrote:\n>> >> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n>> >>> I'm not a perl specialist and it seems to me that the Win32 build is broken.\n>> >>> The Win32 build is still important because of the 32-bit clients still in\n>> >>> use.\n>> >>> I'm investigating the problem.\n>> >> Being able to see the command you are using for build.pl, your\n>> >> buildenv.pl and/or config.pl, as well as your build dependencies\n>> >> should help to know what's wrong.\n>> >>\n>> >> MSVC builds are tested by various buildfarm members on a daily basis,\n>> >> and nothing is red. I also have a x86 and x64 configuration with\n>> >> VS2015 that prove to work as of HEAD at de1d4fe, FWIW. Now, by\n>> >> experience, one could say that N Windows PG developpers finish with at\n>> >> least (N+1) different environments. Basically Simon Riggs's theorem\n>> >> applied to Windows development..\n>> >\n>> >\n>> > I am seeing the same result as Ranier using VS2017 and VS 2019.\n>> >\n>> >\n>> \n>> But not with VS2013. If you need to build 32 bit client libraries, using\n>> an older VS release is probably your best bet.\n> Thanks Andrew, but I finally got a workaround for the problem.\n> set MSBFLAGS=/p:Platform=\"Win32\"\n> \n> Now Postgres builds fine in 32 bits with the latest msvc (2019).\n> Is it worth documenting this?\n\nBetter to try to automate it.\n\nCheers\n\nAndrew", "msg_date": "Wed, 1 Sep 2021 21:58:09 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" }, { "msg_contents": "\nOn 9/1/21 8:01 PM, Ranier Vilela wrote:\n> Em qua., 1 de set. 
de 2021 às 19:49, Andrew Dunstan\n> <andrew@dunslane.net <mailto:andrew@dunslane.net>> escreveu:\n>\n>\n>     On 9/1/21 4:00 PM, Andrew Dunstan wrote:\n>     > On 8/31/21 9:52 PM, Michael Paquier wrote:\n>     >> On Tue, Aug 31, 2021 at 07:49:40PM -0300, Ranier Vilela wrote:\n>     >>> I'm not a perl specialist and it seems to me that the Win32\n>     build is broken.\n>     >>> The Win32 build is still important because of the 32-bit\n>     clients still in\n>     >>> use.\n>     >>> I'm investigating the problem.\n>     >> Being able to see the command you are using for build.pl\n>     <http://build.pl>, your\n>     >> buildenv.pl <http://buildenv.pl> and/or config.pl\n>     <http://config.pl>, as well as your build dependencies\n>     >> should help to know what's wrong.\n>     >>\n>     >> MSVC builds are tested by various buildfarm members on a daily\n>     basis,\n>     >> and nothing is red.  I also have a x86 and x64 configuration with\n>     >> VS2015 that prove to work as of HEAD at de1d4fe, FWIW.  Now, by\n>     >> experience, one could say that N Windows PG developpers finish\n>     with at\n>     >> least (N+1) different environments.  Basically Simon Riggs's\n>     theorem\n>     >> applied to Windows development..\n>     >\n>     >\n>     > I am seeing the same result as Ranier using VS2017 and VS 2019.\n>     >\n>     >\n>\n>     But not with VS2013. If you need to build 32 bit client libraries,\n>     using\n>     an older VS release is probably your best bet.\n>\n> Thanks Andrew, but I finally got a workaround for the problem.\n> set MSBFLAGS=/p:Platform=\"Win32\"\n>\n> Now Postgres builds fine in 32 bits with the latest msvc (2019).\n> Is it worth documenting this?\n>\n>\n\n\nI think we should be able to detect this and do it automatically in\nsrc/tools/msvc/build.pl, or possibly in the project files.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 2 Sep 2021 07:43:54 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Win32 build broken?" } ]
[ { "msg_contents": "To whom it may concern,\n\n\nI am trying to get a project completed to enhance PostgreSQL arithmetic and elementary functions\n\nprowess by means of two new High Precision mixed decimal number types in a self installing\n\nextension. Hopefully, I want this to be a free or low cost project.\n\n\nIs there anyone who can read these project specifications and email back to\n\nme here, at poweruserm@live.com.au, to give me a quote for this project?\n\nThey are in my top posting at this discussion thread, at:\n\n\nhttps://github.com/dvarrazzo/pgmp/issues/22\n\n\nThe extension could be called HPPM, High Precision Postgresql Mathematics. It is\n\nto be written in C, and will need a number of offline installers for major operating\n\nsystems, like Windows 10/11 or rpm based Linux. The project could be hosted on SourceForge\n\nor GitHub.\n\n\nIf anyone on this list is interested, or knows which direction to point me in,\n\ncould they please reply to me here, at poweruserm@live.com.au?\n\n\nZM.", "msg_date": "Wed, 1 Sep 2021 08:36:07 +0000", "msg_from": "A Z <poweruserm@live.com.au>", "msg_from_op": true, "msg_subject": "Question about creation of a new PostgreSQL Extension." }, { "msg_contents": "To whom it may concern,\n\n\nI am trying to get a project completed to enhance PostgreSQL arithmetic and elementary functions\n\nprowess by means of two new High Precision mixed decimal number types in a self installing\n\nextension. Hopefully, I want this to be a free or low cost project.\n\n\nIs there anyone who can read these project specifications and email back to\n\nme here, at poweruserm@live.com.au, to give me a quote for this project?\n\nThey are in my top posting at this discussion thread, at:\n\n\nhttps://github.com/dvarrazzo/pgmp/issues/22\n\n\nThe extension could be called HPPM, High Precision Postgresql Mathematics. It is\n\nto be written in C, and will need a number of offline installers for major operating\n\nsystems, like Windows 10/11 or rpm based Linux. The project could be hosted on SourceForge\n\nor GitHub.\n\n\nIf anyone on this list is interested, or knows which direction to point me in,\n\ncould they please reply to me here, at poweruserm@live.com.au?\n\n\nZM.", "msg_date": "Wed, 1 Sep 2021 08:41:58 +0000", "msg_from": "A Z <poweruserm@live.com.au>", "msg_from_op": true, "msg_subject": "Question about creation of a new PostgreSQL Extension." } ]
[ { "msg_contents": "It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\nIn Progress and opened the November one for new entries. Jaime Casanova has\nvolunteered for CFM [0], so let’s help him close the 284 still open items in\nthe queue.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/20210826231608.GA7242@ahch-to\n\n\n\n", "msg_date": "Wed, 1 Sep 2021 15:10:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "2021-09 Commitfest" }, { "msg_contents": "On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:\n> It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\n> In Progress and opened the November one for new entries. Jaime Casanova has\n> volunteered for CFM [0], so let’s help him close the 284 still open items in\n> the queue.\n> \n\nThank you Daniel for editing the commitfest entries, that's something I\ncannot do.\n\nAnd you're right, we have 284 patches in the queue (excluding committed, \nreturned with feedback, withdrawn and rejected)... 18 of them for more than\n10 commitfests!\n\nNeeds review: 192. \nWaiting on Author: 68. \nReady for Committer: 24\n\nIf you have a patch in this commitfest, please check in\nhttp://commitfest.cputube.org/ if your patch still applies and passes\ntests. 
\n\nThanks to all of you for your great work!\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 1 Sep 2021 09:26:33 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Wed, Sep 1, 2021 at 4:26 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:\n> > It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\n> > In Progress and opened the November one for new entries. Jaime Casanova has\n> > volunteered for CFM [0], so let’s help him close the 284 still open items in\n> > the queue.\n> >\n>\n> Thank you Daniel for editing the commitfest entries, that's something I\n> cannot do.\n\nI've added cf admin permissions to you as well now.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 1 Sep 2021 18:32:38 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Wed, Sep 01, 2021 at 06:32:38PM +0200, Magnus Hagander wrote:\n> On Wed, Sep 1, 2021 at 4:26 PM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:\n> > > It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\n> > > In Progress and opened the November one for new entries. Jaime Casanova has\n> > > volunteered for CFM [0], so let’s help him close the 284 still open items in\n> > > the queue.\n> > >\n> >\n> > Thank you Daniel for editing the commitfest entries, that's something I\n> > cannot do.\n> \n> I've added cf admin permissions to you as well now.\n> \n\nI have the power! mwahahaha!\neh! 
i mean, thanks Magnus ;)\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 1 Sep 2021 11:51:42 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Wed, Sep 01, 2021 at 09:26:33AM -0500, Jaime Casanova wrote:\n> On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:\n> > It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\n> > In Progress and opened the November one for new entries. Jaime Casanova has\n> > volunteered for CFM [0], so let’s help him close the 284 still open items in\n> > the queue.\n> > \n> \n> Thank you Daniel for editing the commitfest entries, that's something I\n> cannot do.\n> \n> And you're right, we have 284 patches in the queue (excluding committed, \n> returned with feedback, withdrawn and rejected)... 18 of them for more than\n> 10 commitfests!\n> \n> Needs review: 192. \n> Waiting on Author: 68. \n> Ready for Committer: 24\n> \n\nHi everyone,\n\nOn the first 10 days of this commitfest some numbers have moved, mostly\nthanks to Daniel Gustafsson and good work from committers:\n\nNeeds review: 171. \nWaiting on Author: 79. \nReady for Committer: 15.\n\nHow could we advance on the \"needs review\" queue? It's just too long!\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sat, 11 Sep 2021 00:52:10 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Hi Jaime,\n\n> Needs review: 171.\n> Waiting on Author: 79.\n> Ready for Committer: 15.\n>\n> How could we advance on the \"needs review\" queue? 
It's just too long!\n\nFor the record, some patches marked as \"Needs review\" are in fact\nrotted and need to be rebased http://cfbot.cputube.org/ I notified\nseveral authors and changed the status to \"Waiting for Author\", but\nsomehow I don't feel comfortable doing it for 40+ patches at once...\nAlso, I recall that in the past the fact that the patch doesn't pass\nCI was not considered enough not to review it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 11 Sep 2021 23:58:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sat, Sep 11, 2021 at 12:52:10AM -0500, Jaime Casanova wrote:\n> On Wed, Sep 01, 2021 at 09:26:33AM -0500, Jaime Casanova wrote:\n> > On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:\n> > > It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to\n> > > In Progress and opened the November one for new entries. Jaime Casanova has\n> > > volunteered for CFM [0], so let’s help him close the 284 still open items in\n> > > the queue.\n> > > \n> > \n> > Thank you Daniel for editing the commitfest entries, that's something I\n> > cannot do.\n> > \n> > And you're right, we have 284 patches in the queue (excluding committed, \n> > returned with feedback, withdrawn and rejected)... 18 of them for more than\n> > 10 commitfests!\n> > \n> > Needs review: 192. \n> > Waiting on Author: 68. \n> > Ready for Committer: 24\n> > \n> \n> Hi everyone,\n> \n> On the first 10 days of this commitfest some numbers have moved, mostly\n> thanks to Daniel Gustafsson and good work from committers:\n> \n> Needs review: 171. \n> Waiting on Author: 79. 
\n> Ready for Committer: 15.\n> \n\nHi,\n\nDuring this commitfest there were around 40 patches committed; there were\nsome patches already committed at the beginning.\n\n Committed: 55.\n\nIn the last hours Michael Paquier made a scan over the patch queue and\neven after that we still have a lot of patches open.\n\n Needs review: 131. \n Waiting on Author: 47. \n Ready for Committer: 12. \n\nI understand this CF was in the middle of the release of 14 and that\naffected too.\n\nAnyway we need to advance to a close, so I need help with:\n\n- what should we do with WoA patches? moving them to the Next CF?\n- How can we reduce the number of Needs Review patches? some of them\n have been in silence for more than a month!\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 08:53:23 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Fri, Oct 01, 2021 at 08:53:23AM -0500, Jaime Casanova wrote:\n> \n> Anyway we need to advance to a close, so I need help with:\n> \n> - what should we do with WoA patches? moving them to the Next CF?\n\nCorrecting myself, we cannot move WoA patches. So we should just close\nthem with RwF.\n\nBarring objections I will do that in the next couple of hours.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:31:31 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> Correcting myself, we cannot move WoA patches. So we should just close\n> them with RwF.\n\nUh, really? 
I don't think that's been common practice in the past.\nI thought we generally just pushed everything forward to the next CF\nwith the same status.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Oct 2021 13:34:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Fri, Oct 01, 2021 at 01:34:45PM -0400, Tom Lane wrote:\n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> > Correcting myself, we cannot move WoA patches. So we should just close\n> > them with RwF.\n> \n> Uh, really? I don't think that's been common practice in the past.\n> I thought we generally just pushed everything forward to the next CF\n> with the same status.\n> \n\nActually i thought the same thing but found that I couldn't.\n\nJust tried again to get the error message: \"A patch in status Waiting on\nAuthor cannot be moved to next commitfest.\"\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:40:12 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "\nOn 10/1/21 1:31 PM, Jaime Casanova wrote:\n> On Fri, Oct 01, 2021 at 08:53:23AM -0500, Jaime Casanova wrote:\n>> Anyway we need to advance to a close, so I need help with:\n>>\n>> - what should we do with WoA patches? moving them to the Next CF?\n> Correcting myself, we cannot move WoA patches. So we should just close\n> them with RwF.\n>\n> Barring objections I will do that in the next couple of hours.\n>\n\nIsn't the usual procedure to change their status, move them, and then\nchange it back again? 
ISTR something like that when I managed a CF.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 1 Oct 2021 13:43:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Fri, Oct 01, 2021 at 01:43:23PM -0400, Andrew Dunstan wrote:\n> \n> On 10/1/21 1:31 PM, Jaime Casanova wrote:\n> > On Fri, Oct 01, 2021 at 08:53:23AM -0500, Jaime Casanova wrote:\n> >> Anyway we need to advance to a close, so I need help with:\n> >>\n> >> - what should we do with WoA patches? moving them to the Next CF?\n> > Correcting myself, we cannot move WoA patches. So we should just close\n> > them with RwF.\n> >\n> > Barring objections I will do that in the next couple of hours.\n> >\n> \n> Isn't the usual procedure to change their status, move them, and then\n> change it back again? ISTR something like that when I managed a CF.\n> \n\nReally?! That sounds tedious!\nI will do that but we should improve that process.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:49:24 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> On Fri, Oct 01, 2021 at 01:43:23PM -0400, Andrew Dunstan wrote:\n>> Isn't the usual procedure to change their status, move them, and then\n>> change it back again? ISTR something like that when I managed a CF.\n\n> Really?! 
That sounds tedious!\n> I will do that but we should improve that process.\n\nIndeed, that seems pretty silly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Oct 2021 14:15:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "> On 1 Oct 2021, at 19:49, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:\n> On Fri, Oct 01, 2021 at 01:43:23PM -0400, Andrew Dunstan wrote:\n\n>> Isn't the usual procedure to change their status, move them, and then\n>> change it back again? ISTR something like that when I managed a CF.\n\nCorrect, if one looks at the activity log for an old entry the pattern of\nmoving to needs review, then to the next CF, then WoA is clearly visible.\n\n> Really?!\n\nSadly yes.\n\n> That sounds tedious!\n\nCorrect.\n\n> I will do that but we should improve that process.\n\nCorrect again.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 1 Oct 2021 20:29:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Fri, Oct 01, 2021 at 08:29:08PM +0200, Daniel Gustafsson wrote:\n> Correct, if one looks at the activity log for an old entry the pattern of\n> moving to needs review, then to the next CF, then WoA is clearly visible.\n\nThat's the tricky part. It does not really make sense either to keep\nmoving patches that are waiting on author for months. The scan of the\nCF app I have done was about those idle patches waiting on author for\nmonths. 
It takes time as authors and/or reviewers tend to sometimes\nnot update the status of a patch so the state in the app does not\nreflect the reality, but this vacuuming limits the noise in for the\nnext CFs.\n\n>> That sounds tedious!\n> \n> Correct.\n\nIt consumes power.\n--\nMichael", "msg_date": "Sat, 2 Oct 2021 14:30:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> That's the tricky part. It does not really make sense either to keep\n> moving patches that are waiting on author for months. The scan of the\n> CF app I have done was about those idle patches waiting on author for\n> months. It takes time as authors and/or reviewers tend to sometimes\n> not update the status of a patch so the state in the app does not\n> reflect the reality, but this vacuuming limits the noise in for the\n> next CFs.\n\nYeah. I have been thinking of looking through the oldest CF entries\nand proposing that we just reject any that look permanently stalled.\nIt doesn't do much good to leave things in the list when there's\nno apparent interest in pushing them to conclusion. 
But I've not\ndone the legwork yet, and I'm a little worried about the push-back\nthat will inevitably result.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 10:52:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On 2021-Oct-01, Daniel Gustafsson wrote:\n\n> > On 1 Oct 2021, at 19:49, Jaime Casanova <jcasanov@systemguards.com.ec> wrote:\n\n> > I will do that but we should improve that process.\n> \n> Correct again.\n\nI think if we all agree that this is a desired workflow, then we should\nupdate the app to allow WoA patches to be moved to next CF.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n", "msg_date": "Sat, 2 Oct 2021 12:18:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On 2021-Oct-02, Tom Lane wrote:\n\n> Yeah. I have been thinking of looking through the oldest CF entries\n> and proposing that we just reject any that look permanently stalled.\n> It doesn't do much good to leave things in the list when there's\n> no apparent interest in pushing them to conclusion. But I've not\n> done the legwork yet, and I'm a little worried about the push-back\n> that will inevitably result.\n\nI was just going to say the same thing yesterday, and reference [1]\nwhen I did it once in 2019. 
I think it was a useful cleanup exercise.\nIn hindsight, some of these patches were resubmitted later, and those\nare either still ongoing or are already committed.\n[1] https://postgr.es/m/20190930182818.GA25331@alvherre.pgsql\n\n\n(I did have the luxury of a local copy of the commitfest database, which\nis perhaps a service we could offer to CFMs to make their lives easier.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Digital and video cameras have this adjustment and film cameras don't for the\nsame reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)\n\n\n", "msg_date": "Sat, 2 Oct 2021 12:20:09 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I think if we all agree that this is a desired workflow, then we should\n> update the app to allow WoA patches to be moved to next CF.\n\nI'm fairly astonished that anyone would have thought that that\n*wasn't* an expected case. For example, if someone reviews a\npatch and sets the status to WoA on the last day of the CF,\nwhat then? You can't expect the patch author to respond\ninstantaneously.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 11:24:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Oct-02, Tom Lane wrote:\n>> Yeah. I have been thinking of looking through the oldest CF entries\n>> and proposing that we just reject any that look permanently stalled.\n\n> I was just going to say the same thing yesterday, and reference [1]\n> when I did it once in 2019. I think it was a useful cleanup exercise.\n> [1] https://postgr.es/m/20190930182818.GA25331@alvherre.pgsql\n\nRight. 
Michael and Jaime have been doing some of that too in the last\nfew days, but obviously a CFM should only do that unilaterally in very\nclear-cut cases of patch abandonment. I was intending to go after some\nwhere maybe a bit of community consensus is needed for rejection.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 11:32:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "> On 2 Oct 2021, at 17:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I think if we all agree that this is a desired workflow, then we should\n>> update the app to allow WoA patches to be moved to next CF.\n> \n> I'm fairly astonished that anyone would have thought that that\n> *wasn't* an expected case.\n\nAFAIK this is a case of everyone agreeing and noone having had the time (or\npriorities) to hack on the CF app to make it happen. \n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 2 Oct 2021 20:21:36 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sat, Oct 02, 2021 at 11:32:01AM -0400, Tom Lane wrote:\n> Right. Michael and Jaime have been doing some of that too in the last\n> few days, but obviously a CFM should only do that unilaterally in very\n> clear-cut cases of patch abandonment. I was intending to go after some\n> where maybe a bit of community consensus is needed for rejection.\n\nOne thing I have used in this process is what I'd call the two-week\nrule: if a patch is listed in the CF app as waiting on author for two\nweeks at the middle of the CF, and if it has stalled with the same\nstate by the end of the commit fest with the thread remaining idle, it\nis rather safe to switch the patch as returned with feedback. 
I have\ntried to follow this rule for the last couple of years and received\nfew complains when done this way. The CF patch tester has proved to\nbe really helpful regarding that, even if some patches have sometimes\na state in the CF app that does not reflect what the thread tells. In\nshort, it is important to check the state of the patches mid-CF\npinging the related threads if necessary, and at the end of the CF.\n--\nMichael", "msg_date": "Sun, 3 Oct 2021 16:15:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sat, Oct 2, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Oct 01, 2021 at 08:29:08PM +0200, Daniel Gustafsson wrote:\n> > Correct, if one looks at the activity log for an old entry the pattern of\n> > moving to needs review, then to the next CF, then WoA is clearly visible.\n>\n> That's the tricky part. It does not really make sense either to keep\n> moving patches that are waiting on author for months. The scan of the\n> CF app I have done was about those idle patches waiting on author for\n> months. It takes time as authors and/or reviewers tend to sometimes\n> not update the status of a patch so the state in the app does not\n> reflect the reality, but this vacuuming limits the noise in for the\n> next CFs.\n>\n\nI'm pretty sure this is the original reason for adding this -- to enforce\nthat this review happens.\n\nPrior to this being added, all patches moved would end up in \"needs review\"\nstatus. When we changed it so that the patch would keep it's status in the\nnext CF, we explicitly wanted to avoid having lots of patches in WoA status\nin the new CF.\n\nBut this was 5 years ago, and the feature was new at the time. 
This may\nhave been wrong already then, or it may simply be that we use the system in\na different way now (and we for example did not have the cfbot back then).\nEither one of those is a good reason to re-visit the decision. And it\ncertainly sounds from this thread that nobody is actually arguing to keep\nthat behaviour -- unless that changes knowing the original reason?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 3 Oct 2021 12:23:33 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" },
{ "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Sat, Oct 2, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> That's the tricky part. It does not really make sense either to keep\n>> moving patches that are waiting on author for months.\n\n> I'm pretty sure this is the original reason for adding this -- to enforce\n> that this review happens.\n\nThe CF tool is in no way able to enforce that, though. All it's doing\nis making extra work for the CFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Oct 2021 09:48:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sat, Oct 02, 2021 at 11:32:01AM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Oct-02, Tom Lane wrote:\n> >> Yeah. I have been thinking of looking through the oldest CF entries\n> >> and proposing that we just reject any that look permanently stalled.\n> \n> > I was just going to say the same thing yesterday, and reference [1]\n> > when I did it once in 2019. I think it was a useful cleanup exercise.\n> > [1] https://postgr.es/m/20190930182818.GA25331@alvherre.pgsql\n> \n> Right. Michael and Jaime have been doing some of that too in the last\n> few days, but obviously a CFM should only do that unilaterally in very\n> clear-cut cases of patch abandonment. 
I was intending to go after some\n> where maybe a bit of community consensus is needed for rejection.\n> \n\nI have done so with 2 or 3 patches that has been stalled more than one\nmonth and after asking in the thread if I receive no answer for 2 or 3\nweeks.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 3 Oct 2021 11:20:21 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sat, Oct 02, 2021 at 12:20:09PM -0300, Alvaro Herrera wrote:\n> On 2021-Oct-02, Tom Lane wrote:\n> \n> > Yeah. I have been thinking of looking through the oldest CF entries\n> > and proposing that we just reject any that look permanently stalled.\n> > It doesn't do much good to leave things in the list when there's\n> > no apparent interest in pushing them to conclusion. But I've not\n> > done the legwork yet, and I'm a little worried about the push-back\n> > that will inevitably result.\n> \n> I was just going to say the same thing yesterday, and reference [1]\n> when I did it once in 2019. I think it was a useful cleanup exercise.\n> In hindsight, some of these patches were resubmitted later, and those\n> are either still ongoing or are already committed.\n> [1] https://postgr.es/m/20190930182818.GA25331@alvherre.pgsql\n> \n> \n> (I did have the luxury of a local copy of the commitfest database, which\n> is perhaps a service we could offer to CFMs to make their lives easier.)\n> \n\nRight now, an option to bulk move everything in their current states to\nNext CF would be handy... 
There are still 139 remaining patches to move.\n\n11 of them \"Ready for Committer\"\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 3 Oct 2021 11:23:21 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Sun, Oct 03, 2021 at 11:20:21AM -0500, Jaime Casanova wrote:\n> On Sat, Oct 02, 2021 at 11:32:01AM -0400, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > On 2021-Oct-02, Tom Lane wrote:\n> > >> Yeah. I have been thinking of looking through the oldest CF entries\n> > >> and proposing that we just reject any that look permanently stalled.\n> > \n> > > I was just going to say the same thing yesterday, and reference [1]\n> > > when I did it once in 2019. I think it was a useful cleanup exercise.\n> > > [1] https://postgr.es/m/20190930182818.GA25331@alvherre.pgsql\n> > \n> > Right. Michael and Jaime have been doing some of that too in the last\n> > few days, but obviously a CFM should only do that unilaterally in very\n> > clear-cut cases of patch abandonment. I was intending to go after some\n> > where maybe a bit of community consensus is needed for rejection.\n> > \n> \n> I have done so with 2 or 3 patches that has been stalled more than one\n> month and after asking in the thread if I receive no answer for 2 or 3\n> weeks.\n> \n\nActually it should be some kind of rule of thumb (that could be used as\nguide) for doing so. Keeping around patches that has no expectation of\nbeing worked on makes us no favor and the queue keeps growing. 
\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 3 Oct 2021 11:25:53 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" },
{ "msg_contents": "On Sun, Oct 3, 2021 at 3:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Sat, Oct 2, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> That's the tricky part. It does not really make sense either to keep\n> >> moving patches that are waiting on author for months.\n>\n> > I'm pretty sure this is the original reason for adding this -- to enforce\n> > that this review happens.\n>\n> The CF tool is in no way able to enforce that, though. All it's doing\n> is making extra work for the CFM.\n>\n\nI've now deployed this:\nhttps://git.postgresql.org/gitweb/?p=pgcommitfest2.git;a=commitdiff;h=65073ba7614ba539aff961e59c9eddbbb8d095f9\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 4 Oct 2021 12:06:40 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" },
{ "msg_contents": "> On 4 Oct 2021, at 12:06, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Sun, Oct 3, 2021 at 3:48 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n> Magnus Hagander <magnus@hagander.net <mailto:magnus@hagander.net>> writes:\n> > On Sat, Oct 2, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> wrote:\n> >> That's the tricky part. It does not really make sense either to keep\n> >> moving patches that are waiting on author for months.\n> \n> > I'm pretty sure this is the original reason for adding this -- to enforce\n> > that this review happens.\n> \n> The CF tool is in no way able to enforce that, though. All it's doing\n> is making extra work for the CFM.\n> \n> I've now deployed this: https://git.postgresql.org/gitweb/?p=pgcommitfest2.git;a=commitdiff;h=65073ba7614ba539aff961e59c9eddbbb8d095f9 <https://git.postgresql.org/gitweb/?p=pgcommitfest2.git;a=commitdiff;h=65073ba7614ba539aff961e59c9eddbbb8d095f9>\nAFAICT this should now allow for WoA patches to be moved to the next CF, but\ntrying that on a patch in the current CF failed with \"Invalid existing patch\nstatus\" in a red topbar. 
Did I misunderstand what this change was?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 14:41:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Mon, Oct 4, 2021 at 2:41 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 4 Oct 2021, at 12:06, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sun, Oct 3, 2021 at 3:48 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:\n> tgl@sss.pgh.pa.us>> wrote:\n> > Magnus Hagander <magnus@hagander.net <mailto:magnus@hagander.net>>\n> writes:\n> > > On Sat, Oct 2, 2021 at 7:31 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n> > >> That's the tricky part. It does not really make sense either to keep\n> > >> moving patches that are waiting on author for months.\n> >\n> > > I'm pretty sure this is the original reason for adding this -- to\n> enforce\n> > > that this review happens.\n> >\n> > The CF tool is in no way able to enforce that, though. All it's doing\n> > is making extra work for the CFM.\n> >\n> > I've now deployed this:\n> https://git.postgresql.org/gitweb/?p=pgcommitfest2.git;a=commitdiff;h=65073ba7614ba539aff961e59c9eddbbb8d095f9\n> <\n> https://git.postgresql.org/gitweb/?p=pgcommitfest2.git;a=commitdiff;h=65073ba7614ba539aff961e59c9eddbbb8d095f9\n> >\n> AFAICT this should now allow for WoA patches to be moved to the next CF,\n> but\n> trying that on a patch in the current CF failed with \"Invalid existing\n> patch\n> status\" in a red topbar. Did I misunderstand what this change was?\n>\n\nUgh. i missed one of the two checks. 
That's what I get for not testing\nproperly when the change \"was so simple\"...\n\nPlease try again.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Mon, 4 Oct 2021 14:56:21 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "> On 4 Oct 2021, at 14:56, Magnus Hagander <magnus@hagander.net> wrote:\n\n> Ugh. i missed one of the two checks. That's what I get for not testing properly when the change \"was so simple\"...\n> \n> Please try again. \n\nIt works now, I was able to move a patch (3128) over to the 2021-11 CF. It\ndoes bring up the below warning(?) in a blue bar when the move was performed\nwhich at first made me think it hadn't worked.\n\n    \"The status of this patch cannot be changed in this commitfest.  You must\n    modify it in the one where it's open!\"\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 15:01:11 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "On Mon, Oct 4, 2021 at 3:05 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 4 Oct 2021, at 14:56, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > Ugh. i missed one of the two checks. That's what I get for not testing\n> properly when the change \"was so simple\"...\n> >\n> > Please try again.\n>\n> It works now, I was able to move a patch (3128) over to the 2021-11 CF. It\n> does bring up the below warning(?) in a blue bar when the move was\n> performed\n> which at first made me think it hadn't worked.\n>\n> \"The status of this patch cannot be changed in this commitfest. You\n> must\n> modify it in the one where it's open!\"\n>\n\nDid you try it with more than one patch? It could be a held back message\nthat got delivered late (yes, there are some such cases, sadly). 
I ask\nbecause I'm failing to reproduce this one...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n", "msg_date": "Mon, 4 Oct 2021 18:51:56 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-09 Commitfest" }, { "msg_contents": "> On 4 Oct 2021, at 18:51, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Mon, Oct 4, 2021 at 3:05 PM Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> wrote:\n> > On 4 Oct 2021, at 14:56, Magnus Hagander <magnus@hagander.net <mailto:magnus@hagander.net>> wrote:\n> \n> > Ugh. i missed one of the two checks. That's what I get for not testing properly when the change \"was so simple\"...\n> > \n> > Please try again. \n> \n> It works now, I was able to move a patch (3128) over to the 2021-11 CF. It\n> does bring up the below warning(?) 
in a blue bar when the move was performed\n> which at first made me think it hadn't worked.\n> \n> \"The status of this patch cannot be changed in this commitfest. You must\n> modify it in the one where it's open!\"\n> \n> Did you try it with more than one patch? It could be a held back message that got delivered late (yes, there are some such cases, sadly). I ask because I'm failing to reproduce this one... \n\nIt was on the very same entry and I only tested that one, so it sounds likely\nthat it was a message that was late to the party.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 20:38:06 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: 2021-09 Commitfest" } ]
[ { "msg_contents": "Hello,\n\nin psycopg 3 we are currently using PQexecParams - although with no\nparams - to send COPY commands. The reason is mostly to avoid people\nto send COPY together with other statements. Especially if other\noperations are chained after COPY: we would only notice them after\ncopy is finished. Data changes might have been applied by then, so\nthrowing an exception seems impolite (the result might have been\napplied already) but managing the result is awkward too.\n\nSomeone [1] has pointed out this conversation [2] which suggests that\nCOPY with extended protocol might break in the future.\n\n[1] https://github.com/psycopg/psycopg/issues/78\n[2] https://www.postgresql.org/message-id/flat/CAMsr%2BYGvp2wRx9pPSxaKFdaObxX8DzWse%2BOkWk2xpXSvT0rq-g%40mail.gmail.com#CAMsr+YGvp2wRx9pPSxaKFdaObxX8DzWse+OkWk2xpXSvT0rq-g@mail.gmail.com\n\nAs far as PostgreSQL is concerned, would it be better to stick to\nPQexec with COPY, and if people append statements afterwards they\nwould be the ones to deal with the consequences? (being the server\napplying the changes, the client throwing an exception)\n\nThank you very much\n\n-- Daniele\n\n\n", "msg_date": "Wed, 1 Sep 2021 18:25:29 +0200", "msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>", "msg_from_op": true, "msg_subject": "Is it safe to use the extended protocol with COPY?" }, { "msg_contents": "Daniele Varrazzo <daniele.varrazzo@gmail.com> writes:\n> Someone [1] has pointed out this conversation [2] which suggests that\n> COPY with extended protocol might break in the future.\n\nAs was pointed out in that same thread, the odds of us actually\nbreaking that case are nil. I wouldn't recommend changing your\ncode on this basis.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Sep 2021 12:41:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is it safe to use the extended protocol with COPY?" 
}, { "msg_contents": "On Wed, Sep 1, 2021 at 2:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniele Varrazzo <daniele.varrazzo@gmail.com> writes:\n> > Someone [1] has pointed out this conversation [2] which suggests that\n> > COPY with extended protocol might break in the future.\n>\n> As was pointed out in that same thread, the odds of us actually\n> breaking that case are nil. I wouldn't recommend changing your\n> code on this basis.\n\nI agree that there doesn't seem to be any risk of a wire protocol\nchange in the near future, but it might still be a good idea to change\nany code that does this on the grounds that the current wire protocol\nmakes reliable error handling impossible - unless you wait to send\nSync until you see how the server responds to the earlier messages.[1]\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoa4eA%2BcPXaiGQmEBp9XisVd3ZE9dbvnbZEvx9UcMiw2tg%40mail.gmail.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Sep 2021 14:09:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it safe to use the extended protocol with COPY?" } ]
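The thread above turns on a client-side policy: when a driver initiates COPY it wants exactly one COPY statement in the command string, because results of any statements chained after it only surface once the copy has finished (and possibly after its data changes have been applied). The sketch below is a hypothetical illustration of such a guard, not psycopg's actual code; its statement splitting is deliberately naive (a real driver would need full SQL lexing to handle comments, E'...' strings, tagged dollar quotes, and so on):

```python
# Hypothetical client-side guard (NOT psycopg's real implementation): accept a
# command string for COPY only if it contains a single top-level statement.
# The splitter understands only '...' literals and anonymous $$...$$ dollar
# quotes, so treat it purely as a sketch of the idea discussed in the thread.

def split_statements(sql):
    """Split sql on semicolons that sit outside quoted regions (naive)."""
    parts, buf, i, n = [], [], 0, len(sql)
    while i < n:
        if sql[i] == "'":                      # skip a single-quoted literal
            j = sql.find("'", i + 1)
            j = n - 1 if j == -1 else j
            buf.append(sql[i:j + 1])
            i = j + 1
        elif sql.startswith("$$", i):          # skip an anonymous dollar quote
            j = sql.find("$$", i + 2)
            j = n if j == -1 else j + 2
            buf.append(sql[i:j])
            i = j
        elif sql[i] == ";":                    # top-level statement boundary
            parts.append("".join(buf).strip())
            buf = []
            i += 1
        else:
            buf.append(sql[i])
            i += 1
    tail = "".join(buf).strip()
    if tail:
        parts.append(tail)
    return parts

def is_single_copy(sql):
    """True if sql is exactly one statement and that statement is a COPY."""
    stmts = split_statements(sql)
    return len(stmts) == 1 and stmts[0].lower().startswith("copy")

assert is_single_copy("COPY t FROM STDIN")
assert not is_single_copy("COPY t FROM STDIN; SELECT 1")    # chained statement
assert is_single_copy("COPY t FROM STDIN WHERE c = 'a;b'")  # ';' inside literal
```

With a guard like this a driver can reject "COPY t FROM STDIN; SELECT 1" up front, instead of discovering the trailing SELECT's result only after the copy has completed.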
[ { "msg_contents": "Hi,\n\nBack in 2017, Michael and Magnus apparently fixed a bug report[1] about\nfailing basebackups on Windows due to its concurrent file access\nsemantics:\n\ncommit 9951741bbeb3ec37ca50e9ce3df1808c931ff6a6\nAuthor: Magnus Hagander <magnus@hagander.net>\nDate: Wed Jan 4 10:48:30 2017 +0100\n\n Attempt to handle pending-delete files on Windows\n\nI think this has been re-broken by:\n\ncommit bed90759fcbcd72d4d06969eebab81e47326f9a2\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Oct 9 16:20:12 2020 -0400\n\n Fix our Windows stat() emulation to handle file sizes > 4GB.\n\nThere's code in there that appears to understand the\nERROR_PENDING_DELETE stuff, but it seems to be too late, as this bit\nwill fail with ERROR_ACCESS_DENIED first:\n\n /* fast not-exists check */\n if (GetFileAttributes(name) == INVALID_FILE_ATTRIBUTES)\n {\n _dosmaperr(GetLastError());\n return -1;\n }\n\n... and if you comment that out, then the CreateFile() call will fail\nand we'll return before we get to the code that purports to grok\npending deletes. I don't really understand that code, but I can\nreport that it's not reached.\n\nThis came up because in our work on AIO, we have extra io worker\nprocesses that might have file handles open even in a single session\nscenario like 010_pg_basebackup.pl, so we make these types of problems\nmore likely to hit (hence also my CF entry to fix a related problem in\nDROP TABLESPACE). 
But that's just chance: I assume basebackup could\nfail for anyone in 14 for the same reason due to any other backend\nthat hasn't processed a sinval to close the file yet.\n\nPerhaps we need some combination of the old way (that apparently knew\nhow to detect pending deletes) and the new way (that knows about large\nfiles)?\n\n[1] https://www.postgresql.org/message-id/flat/20160712083220.1426.58667%40wrigleys.postgresql.org\n\n\n", "msg_date": "Thu, 2 Sep 2021 10:10:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Sep 2, 2021 at 10:10 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Perhaps we need some combination of the old way (that apparently knew\n> how to detect pending deletes) and the new way (that knows about large\n> files)?\n\nI tried that, but as far as I can tell, the old approach didn't really\nwork either :-(\n\nA disruptive solution that works in my tests: we could reuse the\nglobal barrier proposed in CF #2962. If you see EACCES, ask every\nbackend to close all vfds at their next CFI() and wait for them all to\nfinish, and then retry. If you get EACCES again it really means\nEACCES, but you'll very probably get ENOENT.\n\nThe cheapest solution would be to assume EACCES really means ENOENT,\nbut that seems unacceptably incorrect.\n\nI suspect it might be possible to use underdocumented/unstable NtXXX()\ninterfaces to get at the information, but I don't know much about\nthat.\n\nIs there another way that is cheap, correct and documented?\n\n\n", "msg_date": "Thu, 2 Sep 2021 22:28:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> A disruptive solution that works in my tests: we could reuse the\n> global barrier proposed in CF #2962. 
If you see EACCES, ask every\n> backend to close all vfds at their next CFI() and wait for them all to\n> finish, and then retry. If you get EACCES again it really means\n> EACCES, but you'll very probably get ENOENT.\n\nThat seems quite horrid :-(. But if it works, doesn't that mean that\nsomewhere we are opening a problematic file without the correct\nsharing flags?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Sep 2021 06:31:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Sep 2, 2021 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > A disruptive solution that works in my tests: we could reuse the\n> > global barrier proposed in CF #2962. If you see EACCES, ask every\n> > backend to close all vfds at their next CFI() and wait for them all to\n> > finish, and then retry. If you get EACCES again it really means\n> > EACCES, but you'll very probably get ENOENT.\n>\n> That seems quite horrid :-(. But if it works, doesn't that mean that\n> somewhere we are opening a problematic file without the correct\n> sharing flags?\n\nI'm no expert, but not AFAICS. We managed to delete the file while\nsome other backend had it open, which FILE_SHARE_DELETE allowed. We\njust can't open it or create a new file with the same name until it's\nreally gone (all handles closed).\n\n\n", "msg_date": "Thu, 2 Sep 2021 22:51:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Sep 2, 2021 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That seems quite horrid :-(. But if it works, doesn't that mean that\n>> somewhere we are opening a problematic file without the correct\n>> sharing flags?\n\n> I'm no expert, but not AFAICS. 
We managed to delete the file while\n> some other backend had it open, which FILE_SHARE_DELETE allowed. We\n> just can't open it or create a new file with the same name until it's\n> really gone (all handles closed).\n\nRight, but we shouldn't ever need to access such a file --- we couldn't do\nso on Unix, certainly. So for the open() case, it's sufficient to return\nENOENT, and the problem is only to make sure that that's what we return if\nthe underlying error is ERROR_DELETE_PENDING.\n\nIt's harder if the desire is to create a new file of the same name.\nI'm inclined to think that the best answer might be \"if it hurts,\ndon't do that\". We should not have such a case for ordinary relation\nfiles or WAL files, but maybe there are individual other cases where\nsome redesign is indicated?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Sep 2021 07:12:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Sep 2, 2021 at 11:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I'm no expert, but not AFAICS. We managed to delete the file while\n> > some other backend had it open, which FILE_SHARE_DELETE allowed. We\n> > just can't open it or create a new file with the same name until it's\n> > really gone (all handles closed).\n>\n> Right, but we shouldn't ever need to access such a file --- we couldn't do\n> so on Unix, certainly. So for the open() case, it's sufficient to return\n> ENOENT, and the problem is only to make sure that that's what we return if\n> the underlying error is ERROR_DELETE_PENDING.\n\nYeah. The problem is that it still shows up in directory listings\nAFAIK, so something like basebackup.c sees it, and even if it didn't,\nit reads the directory, and then stats the files, and then opens the\nfiles at different times. 
The non-public API way to ask for the real\nreason after such a failure would apparently be to call\nNtFileCreate(), which can return STATUS_DELETE_PENDING. I don't know\nwhat complications that might involve, but I see now that we have code\nthat digs such non-public APIs out of ntdll.dll already (for long dead\nOS versions only). Hmm.\n\n(Another thing you can't do is delete the directory that contains such\na file, which is a problem for DROP TABLESPACE and the reason I\ndeveloped the global barrier thing.)\n\n> It's harder if the desire is to create a new file of the same name.\n> I'm inclined to think that the best answer might be \"if it hurts,\n> don't do that\". We should not have such a case for ordinary relation\n> files or WAL files, but maybe there are individual other cases where\n> some redesign is indicated?\n\nI guess GetNewRelFileNode()’s dilemma branch is an example; it'd\nprobably be nicer to users to treat a pending-deleted file as a\ncollision.\n\n if (access(rpath, F_OK) == 0)\n {\n /* definite collision */\n collides = true;\n }\n else\n {\n /*\n * Here we have a little bit of a dilemma: if errno is something\n * other than ENOENT, should we declare a collision and loop? In\n * practice it seems best to go ahead regardless of the errno. If\n * there is a colliding file we will get an smgr failure when we\n * attempt to create the new relation file.\n */\n collides = false;\n }\n\n\n", "msg_date": "Fri, 3 Sep 2021 00:44:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Fri, Sep 3, 2021 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> NtFileCreate()\n\nErm, that's spelled NtCreateFile. 
I see Michael mentioned this\nbefore[1]; I don't think it's only available in kernel mode though,\nthe docs[2] say \"This function is the user-mode equivalent to the\nZwCreateFile function\", and other open source user space stuff is\nusing it. It's explicitly internal and subject to change though,\nhence my desire to avoid it.\n\n[1] https://www.postgresql.org/message-id/flat/a9c76882-27c7-9c92-7843-21d5521b70a9%40postgrespro.ru\n[2] https://docs.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntcreatefile\n\n\n", "msg_date": "Fri, 3 Sep 2021 01:01:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "At Fri, 3 Sep 2021 01:01:43 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Fri, Sep 3, 2021 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > NtFileCreate()\n> \n> Erm, that's spelled NtCreateFile. I see Michael mentioned this\n> before[1]; I don't think it's only available in kernel mode though,\n> the docs[2] say \"This function is the user-mode equivalent to the\n> ZwCreateFile function\", and other open source user space stuff is\n> using it. It's explicitly internal and subject to change though,\n> hence my desire to avoid it.\n> \n> [1] https://www.postgresql.org/message-id/flat/a9c76882-27c7-9c92-7843-21d5521b70a9%40postgrespro.ru\n> [2] https://docs.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntcreatefile\n\nMight be stupid, if a delete-pending'ed file can obstruct something,\ncouldn't we change unlink on Windows to rename to a temporary random\nname then remove it? We do something like it explicitly while WAL\nfile removal. 
(It may cause degradation on bulk file deletion, and we\nmay need further fix so that such being-deleted files are excluded\nwhile running a directory scan, though..)\n\nHowever, looking [1], with that strategy there may be a case where\nsuch \"deleted\" files may be left alone forever, though.\n\n\n[1] https://www.postgresql.org/message-id/002101d79fc2%24c96dff60%245c49fe20%24%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Sep 2021 11:01:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Fri, Sep 3, 2021 at 2:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Might be stupid, if a delete-pending'ed file can obstruct something,\n> couldn't we change unlink on Windows to rename to a temporary random\n> name then remove it? We do something like it explicitly while WAL\n> file removal. (It may cause degradation on bulk file deletion, and we\n> may need further fix so that such being-deleted files are excluded\n> while running a directory scan, though..)\n>\n> However, looking [1], with that strategy there may be a case where\n> such \"deleted\" files may be left alone forever, though.\n\nIt's a good idea. I tested it and it certainly does fix the\nbasebackup problem I've seen (experimental patch attached). But,\nyeah, I'm also a bit worried that that path could be fragile and need\nspecial handling in lots of places.\n\nI also tried writing a new open() wrapper using the lower level\nNtCreateFile() interface, and then an updated stat() wrapper built on\ntop of that. As a non-Windows person, getting that to (mostly) work\ninvolved a fair amount of suffering. 
I can share that if someone is\ninterested, but while learning about that family of interfaces, I\nrealised we could keep the existing Win32-based code, but also\nretrieve the NT status, leading to a very small change (experimental\npatch attached).\n\nThe best idea is probably to set FILE_DISPOSITION_DELETE |\nFILE_DISPOSITION_POSIX_SEMANTICS before unlinking. This appears to be\na silver bullet, but isn't available on ancient Windows releases that\nwe support, or file systems other than local NTFS. So maybe we need a\ncombination of that + STATUS_DELETE_PENDING as shown in the attached.\nI'll look into that next.", "msg_date": "Mon, 6 Sep 2021 01:32:55 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "Hi,\n\nOn 2021-09-06 01:32:55 +1200, Thomas Munro wrote:\n> It's a good idea. I tested it and it certainly does fix the\n> basebackup problem I've seen (experimental patch attached). But,\n> yeah, I'm also a bit worried that that path could be fragile and need\n> special handling in lots of places.\n\nIt's also expensive-ish.\n\n\n> I also tried writing a new open() wrapper using the lower level\n> NtCreateFile() interface, and then an updated stat() wrapper built on\n> top of that. As a non-Windows person, getting that to (mostly) work\n> involved a fair amount of suffering. I can share that if someone is\n> interested, but while learning about that family of interfaces, I\n> realised we could keep the existing Win32-based code, but also\n> retrieve the NT status, leading to a very small change (experimental\n> patch attached).\n\nIs it guaranteed, or at least reliable, that the status we fetch with\nRtlGetLastNtStatus is actually from the underlying filesystem operation,\nrather than some other work that happens during the win32->nt translation?\nE.g. a memory allocation or such? 
Presumably most of such work happens before\nthe actual nt \"syscall\", but ...\n\n\n> The best idea is probably to set FILE_DISPOSITION_DELETE |\n> FILE_DISPOSITION_POSIX_SEMANTICS before unlinking. This appears to be\n> a silver bullet, but isn't available on ancient Windows releases that\n> we support, or file systems other than local NTFS. So maybe we need a\n> combination of that + STATUS_DELETE_PENDING as shown in the attached.\n> I'll look into that next.\n\nWhen was that introduced?\n\nI'd be ok to only fix these bugs on e.g. Win10, Win Server 2019, Win Server\n2016 or such. I don't think we need to support OSs that the vendor doesn't\nsupport - and I wouldn't count \"only security fixes\" as support in this\ncontext.\n main extended\nWindows 10 Oct 14, 2025\nWindows Server 2019 Jan 9, 2024 Jan 9, 2029\nWindows Server 2016 Jan 11, 2022 Jan 12, 2027\nWindows 7 Jan 13, 2015 Jan 14, 2020\nWindows Vista Apr 10, 2012 Apr 11, 2017\n\n\nOne absurd detail here is that the deault behaviour changed sometime in\nWindows 10's lifetime:\nhttps://stackoverflow.com/questions/60424732/did-the-behaviour-of-deleted-files-open-with-fileshare-delete-change-on-windows\n\n\"The behavior changed in recent releases of Windows 10 -- without notice\nAFAIK. DeleteFileW now tries to use POSIX semantics if the filesystem supports\nit. NTFS does.\"\n\n\n> #ifndef FRONTEND\n> -\tAssert(pgwin32_signal_event != NULL);\t/* small chance of pg_usleep() */\n> +\t/* XXX When called by stat very early on, this fails! */\n> +\t//Assert(pgwin32_signal_event != NULL);\t/* small chance of pg_usleep() */\n> #endif\n\nPerhaps we should move the win32 signal initialization into StartupHacks()?\nThere's some tension around it using ereport(), and MemoryContextInit() only\nbeing called a tad later, but that seems resolvable.\n\n\n> +\t * Our open wrapper will report STATUS_DELETE_PENDING as ENOENT. 
We pass\n> +\t * in a special private flag to say that it's _pgstat64() calling, to\n> +\t * activate a mode that allows directories to be opened for limited\n> +\t * purposes.\n> +\t *\n> +\t * XXX Think about fd pressure, since we're opening an fd?\n> \t */\n\nIf I understand https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/getmaxstdio?view=msvc-160\netc correctly, it looks like there is. But only at the point we do\n_open_osfhandle(). So perhaps we should a pgwin32_open() version returning a\nhandle and make pgwin32_open() a wrapper around that?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Sep 2021 14:44:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-09-06 01:32:55 +1200, Thomas Munro wrote:\n>> The best idea is probably to set FILE_DISPOSITION_DELETE |\n>> FILE_DISPOSITION_POSIX_SEMANTICS before unlinking. This appears to be\n>> a silver bullet, but isn't available on ancient Windows releases that\n>> we support, or file systems other than local NTFS. So maybe we need a\n>> combination of that + STATUS_DELETE_PENDING as shown in the attached.\n>> I'll look into that next.\n\n> When was that introduced?\n\nGoogling says that it was introduced in Win10, although in RS2 (version\n1703, general availability in 4/2017) not the initial release.\n\n> I'd be ok to only fix these bugs on e.g. Win10, Win Server 2019, Win Server\n> 2016 or such.\n\nYeah, particularly if the fix is trivial on the newer systems and\nincredibly complicated otherwise. 
Between the effort needed and\nthe risk of introducing new bugs, I'm really not excited about\nan invasive fix for this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Sep 2021 17:55:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Mon, Sep 6, 2021 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'd be ok to only fix these bugs on e.g. Win10, Win Server 2019, Win Server\n> > 2016 or such.\n>\n> Yeah, particularly if the fix is trivial on the newer systems and\n> incredibly complicated otherwise. Between the effort needed and\n> the risk of introducing new bugs, I'm really not excited about\n> an invasive fix for this.\n\nIf DeleteFile() is automatically using\nFILE_DISPOSITION_POSIX_SEMANTICS by default when possible on recent\nreleases as per the SO link that Andres posted above (\"18363.657\ndefinitely has the new behavior\"), then that's great news and maybe we\nshouldn't even bother to try to request that mode ourselves explicitly\n(eg in some kind of unlink wrapper). 
Then we'd need just one\naccomodation for older systems and non-NTFS systems, not two, and I\ncurrently think that should be the short and sweet approach shown in\n0001-Handle-STATUS_DELETE_PENDING-on-Windows.patch, with some tidying\nand adjustments per feedback.\n\n\n", "msg_date": "Mon, 6 Sep 2021 10:22:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "> If DeleteFile() is automatically using\n> FILE_DISPOSITION_POSIX_SEMANTICS by default when possible on recent\n> releases as per the SO link that Andres posted above (\"18363.657\n> definitely has the new behavior\"), then that's great news and maybe we\n> shouldn't even bother to try to request that mode ourselves explicitly\n> (eg in some kind of unlink wrapper). Then we'd need just one\n> accomodation for older systems and non-NTFS systems, not two, and I\n> currently think that should be the short and sweet approach shown in\n> 0001-Handle-STATUS_DELETE_PENDING-on-Windows.patch, with some tidying\n> and adjustments per feedback.\n\nHaving a non-invasive fix for this long-standing issue would be really\ngreat, even if that means reducing the scope of systems where this can\nbe fixed.\n\nThe last time I poked at the bear (54fb8c7d), there was a test posted\nby Alexander Lakhin that was really useful in making sure that\nconcurrency is correctly handled when a file is unlinked:\nhttps://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709@gmail.com\n\nIt worked with VS but not on MinGW. 
How does your patch react to this\ntest?\n--\nMichael", "msg_date": "Mon, 6 Sep 2021 15:36:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Mon, Sep 6, 2021 at 9:44 AM Andres Freund <andres@anarazel.de> wrote:\n> Is it guaranteed, or at least reliable, that the status we fetch with\n> RtlGetLastNtStatus is actually from the underlying filesystem operation,\n> rather than some other work that happens during the win32->nt translation?\n> E.g. a memory allocation or such? Presumably most of such work happens before\n> the actual nt \"syscall\", but ...\n\nI don't know. I know at least that it's thread-local, so that's\nsomething. I guess it's plausible that CreateFile() might want to\nfree a temporary buffer that it used for conversion to NT pathname\nformat, and whatever code it uses to do that might clobber the NT\nstatus. Nothing like that seems to happen in common cases though, and\nI guess it would also be clobbered on success. Frustrating.\n\nAlright then, here also is the version that bypasses CreateFile() and\ngoes straight to NtCreateFile(). This way, the status can't possibly\nbe clobbered before we see it, but maybe there are other risks due to\nusing a much wider set of unstable ntdll interfaces...\n\nBoth versions pass all tests on CI, including the basebackup one in a\nscenario where an unlinked file has an open descriptor, but still need\na bit more tidying.\n\n> \"The behavior changed in recent releases of Windows 10 -- without notice\n> AFAIK. DeleteFileW now tries to use POSIX semantics if the filesystem supports\n> it. NTFS does.\"\n\nNice find. I wonder if this applies also to rename()...\n\n> > #ifndef FRONTEND\n> > - Assert(pgwin32_signal_event != NULL); /* small chance of pg_usleep() */\n> > + /* XXX When called by stat very early on, this fails! 
*/\n> > + //Assert(pgwin32_signal_event != NULL); /* small chance of pg_usleep() */\n> > #endif\n>\n> Perhaps we should move the win32 signal initialization into StartupHacks()?\n> There's some tension around it using ereport(), and MemoryContextInit() only\n> being called a tad later, but that seems resolvable.\n\nThe dependencies among open(), pg_usleep(),\npgwin32_signal_initialize() and read_backend_variables() are not very\nnice. I don't have a fix for that yet.\n\n> > + * XXX Think about fd pressure, since we're opening an fd?\n\n> If I understand https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/getmaxstdio?view=msvc-160\n> etc correctly, it looks like there is. But only at the point we do\n> _open_osfhandle(). So perhaps we should a pgwin32_open() version returning a\n> handle and make pgwin32_open() a wrapper around that?\n\nYeah. Done, in both variants.\n\nI haven't tried it, but I suspect the difference between stat() and\nlstat() could be handled with FILE_OPEN_REPARSE_POINT (as\nNtCreateFile() calls it) or FILE_FLAG_OPEN_REPARSE_POINT (as\nCreateFile() calls it).", "msg_date": "Mon, 6 Sep 2021 23:45:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Mon, Sep 6, 2021 at 6:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Having a non-invasive fix for this long-standing issue would be really\n> great, even if that means reducing the scope of systems where this can\n> be fixed.\n\nI hope those patches fix at least the basebackup problem on all\nrelevant OS versions, until the better POSIX thing is everywhere (they\ncan't fix all related problems though, since zombie files still stop\nyou creating new ones with the same name or deleting the containing\ndirectory). 
I didn't try to find out how far back those APIs go, but\nthey look ancient/fundamental and widely used by other software...\nBut do they qualify as non-invasive?\n\n> The last time I poked at the bear (54fb8c7d), there was a test posted\n> by Alexander Lakhin that was really useful in making sure that\n> concurrency is correctly handled when a file is unlinked:\n> https://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709@gmail.com\n\nThanks. It's a confusing topic with many inconclusive threads.\n\n> It worked with VS but not on MinGW. How does your patch react to this\n> test?\n\nThanks. Adding Alexander in CC in case he has ideas/feedback.\n\n\n", "msg_date": "Tue, 7 Sep 2021 01:04:46 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "06.09.2021 16:04, Thomas Munro wrote:\n> On Mon, Sep 6, 2021 at 6:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> The last time I poked at the bear (54fb8c7d), there was a test posted\n>> by Alexander Lakhin that was really useful in making sure that\n>> concurrency is correctly handled when a file is unlinked:\n>> https://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709@gmail.com\nThe new approach looks very promising. Knowing that the file is really\nin the DELETE_PENDING state simplifies a lot.\nI've tested the patch v2_0001_Check... 
with my demo tests [1] and [2],\nand it definitely works.\n\n[1]\nhttps://www.postgresql.org/message-id/e5179494-715e-f8a3-266b-0cf52adac8f4%40gmail.com\n[2]\nhttps://www.postgresql.org/message-id/c3427edf-d7c0-ff57-90f6-b5de3bb62709@gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 7 Sep 2021 09:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Tue, Sep 07, 2021 at 09:00:01AM +0300, Alexander Lakhin wrote:\n> The new approach looks very promising. Knowing that the file is really\n> in the DELETE_PENDING state simplifies a lot.\n> I've tested the patch v2_0001_Check... with my demo tests [1] and [2],\n> and it definitely works.\n\nOho, nice. Just to be sure. You are referring to\nv2-0001-Check*.patch posted here, right?\nhttps://www.postgresql.org/message-id/CA+hUKGKj3p+2AciBGacCf_cXE0JLCYevWHexvOpK6uL1+V-zag@mail.gmail.com\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 15:05:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "Hello Michael,\n07.09.2021 09:05, Michael Paquier wrote:\n> On Tue, Sep 07, 2021 at 09:00:01AM +0300, Alexander Lakhin wrote:\n>> The new approach looks very promising. Knowing that the file is really\n>> in the DELETE_PENDING state simplifies a lot.\n>> I've tested the patch v2_0001_Check... with my demo tests [1] and [2],\n>> and it definitely works.\n> Oho, nice. Just to be sure. 
You are referring to\n> v2-0001-Check*.patch posted here, right?\n> https://www.postgresql.org/message-id/CA+hUKGKj3p+2AciBGacCf_cXE0JLCYevWHexvOpK6uL1+V-zag@mail.gmail.com\nYes, I've tested that one, on the master branch (my tests needed a minor\nmodification due to PostgresNode changes).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 7 Sep 2021 10:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Tue, Sep 7, 2021 at 7:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 07.09.2021 09:05, Michael Paquier wrote:\n> > On Tue, Sep 07, 2021 at 09:00:01AM +0300, Alexander Lakhin wrote:\n> >> The new approach looks very promising. Knowing that the file is really\n> >> in the DELETE_PENDING state simplifies a lot.\n> >> I've tested the patch v2_0001_Check... with my demo tests [1] and [2],\n> >> and it definitely works.\n\n> > Oho, nice. Just to be sure. You are referring to\n> > v2-0001-Check*.patch posted here, right?\n> > https://www.postgresql.org/message-id/CA+hUKGKj3p+2AciBGacCf_cXE0JLCYevWHexvOpK6uL1+V-zag@mail.gmail.com\n\n> Yes, I've tested that one, on the master branch (my tests needed a minor\n> modification due to PostgresNode changes).\n\nThanks very much!\n\nTime to tidy up some loose ends. There are a couple of judgement\ncalls involved. Here's what Andres and I came up with in an off-list\nchat. Any different suggestions?\n\n1. I abandoned the "direct NtCreateFile()" version for now. I guess\nusing more and wider unstable interfaces might expose us to greater\nrisk of silent API/behavior changes or have subtle bugs. If we ever\nhave a concrete reason to believe that RtlGetLastNtStatus() is not\nreliable here, we could reconsider.\n\n2. I dropped the assertion that the signal event has been created\nbefore the first call to the open() wrapper. 
Instead, I taught\npg_usleep() to fall back to plain old SleepEx() if the signal stuff\nisn't up yet. Other solutions are possible of course, but it struck\nme as a bad idea to place initialisation ordering constraints on very\nbasic facilities like open() and stat().\n\nI should point out explicitly that with this patch, stat() benefits\nfrom open()'s tolerance for sharing violations, as a side effect.\nThat is, it'll retry for a short time in the hope that whoever opened\nour file without allowing sharing will soon go away. I don't know how\nuseful that bandaid loop really is in practice, but I don't see why\nwe'd want that for open() and not stat(), so this change seems good to\nme on consistency grounds at the very least.\n\n3. We fixed the warnings about macro redefinition with #define\nUMDF_USING_NTSTATUS and #include <ntstatus.h> in win32_port.h. (Other\nideas considered: (1) Andres reported that it also works to move the\n#include to ~12 files that need things from it, ie things that were\nsuppressed from windows.h by that macro and must now be had from\nntstatus.h, but the files you have to change are probably different in\nback branches if we decide to do that, (2) I tried defining that macro\nlocally in files that need it, *before* including c.h/postgres.h, and\nthen locally include ntstatus.h afterwards, but that seems to violate\nproject style and generally seems weird.)\n\nAnother thing to point out explicitly is that I added a new file\nsrc/port/win32ntdll.c, which is responsible for fishing out the NT\nfunction pointers. It was useful to be able to do that in the\nabandoned NtCreateFile() variant because it needed three of them and I\ncould reduce boiler-plate noise with a static array of function names\nto loop over. 
In this version the array has just one element, but I'd\nstill rather centralise this stuff in one place and make it easy to\nadd any more of these that we eventually find a need for.\n\nBTW, I also plan to help Victor get his \"POSIX semantics\" patch[1]\ninto the tree (and extend it to cover more ops). That should make\nthese problems go away in a more complete way IIUC, but doesn't work\neverywhere (not sure if we have any build farm animals where it\ndoesn't work, if so it might be nice to change that), so it's\ncomplementary to this patch. (My earlier idea that that stuff would\nmagically start happening for free on all relevant systems some time\nsoon has faded.)\n\n[1] https://www.postgresql.org/message-id/flat/a529b660-da15-5b62-21a0-9936768210fd%40postgrespro.ru", "msg_date": "Fri, 10 Sep 2021 17:04:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Fri, Sep 10, 2021 at 5:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Sep 7, 2021 at 7:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > 07.09.2021 09:05, Michael Paquier wrote:\n> > > On Tue, Sep 07, 2021 at 09:00:01AM +0300, Alexander Lakhin wrote:\n> > >> The new approach looks very promising. Knowing that the file is really\n> > >> in the DELETE_PENDING state simplifies a lot.\n> > >> I've tested the patch v2_0001_Check... with my demo tests [1] and [2],\n> > >> and it definitely works.\n\nSince our handling of that stuff never really worked the way we wanted\n(or if it did, then Windows' behaviour changed, possibly well over a\ndecade ago, from what I could dig up), this isn't an open item\ncandidate for 14 after all, it's a pre-existing condition. So I\npropose to push this fix to master only soon, and then let it stew\nthere for a little while to see how the buildfarm Windows variants and\nthe Windows hacker community testing on master react. 
If it looks\ngood, we can back-patch it a bit later, perhaps some more convenient\ntime WRT the release.\n\nI added a CF entry to see if anyone else wants to review it and get CI.\n\nOne small detail I'd like to draw attention to is this bit, which\ndiffers slightly from the [non-working] previous attempts by mapping\nto two different errors:\n\n+ * If there's no O_CREAT flag, then we'll pretend the file is\n+ * invisible. With O_CREAT, we have no choice but to report that\n+ * there's a file in the way (which wouldn't happen on Unix).\n\n...\n\n+ if (fileFlags & O_CREAT)\n+ err = ERROR_FILE_EXISTS;\n+ else\n+ err = ERROR_FILE_NOT_FOUND;\n\n\n", "msg_date": "Thu, 23 Sep 2021 14:57:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Sep 23, 2021 at 4:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> One small detail I'd like to draw attention to is this bit, which\n> differs slightly from the [non-working] previous attempts by mapping\n> to two different errors:\n>\n> + * If there's no O_CREAT flag, then we'll pretend the file is\n> + * invisible. With O_CREAT, we have no choice but to report that\n> + * there's a file in the way (which wouldn't happen on Unix).\n>\n> ...\n>\n> + if (fileFlags & O_CREAT)\n> + err = ERROR_FILE_EXISTS;\n> + else\n> + err = ERROR_FILE_NOT_FOUND;\n>\n\nWhen GetTempFileName() finds a duplicated file name and the file is pending\nfor deletion, it fails with an \"ERROR_ACCESS_DENIED\" error code. 
That may\ndescribe the situation better than \"ERROR_FILE_EXISTS\".\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Thu, 23 Sep 2021 11:05:47 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Sep 23, 2021 at 9:13 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Thu, Sep 23, 2021 at 4:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> One small detail I'd like to draw attention to is this bit, which\n>> differs slightly from the [non-working] previous attempts by mapping\n>> to two different errors:\n>>\n>> + * If there's no O_CREAT flag, then we'll pretend the file is\n>> + * invisible. 
With O_CREAT, we have no choice but to report that\n>> + * there's a file in the way (which wouldn't happen on Unix).\n>>\n>> ...\n>>\n>> + if (fileFlags & O_CREAT)\n>> + err = ERROR_FILE_EXISTS;\n>> + else\n>> + err = ERROR_FILE_NOT_FOUND;\n>\n>\n> When GetTempFileName() finds a duplicated file name and the file is pending for deletion, it fails with an \"ERROR_ACCESS_DENIED\" error code. That may describe the situation better than \"ERROR_FILE_EXISTS\".\n\nThanks for looking. Why do you think that's better? I assume that's\njust the usual NT->Win32 error conversion at work.\n\nThe only case I can think of so far in our tree where you'd notice\nthis change of errno for the O_CREAT case is relfilenode creation[1],\nand there it's just a case of printing a different message. Trying to\ncreate a relfilenode that exists already in delete-pending state will\nfail, but with this change we'll log the %m string for EEXIST instead\nof EACCES (what you see today) or ENOENT (which seems nonsensical, \"I\ncan't create your file because it doesn't exist\", and what you'd get\nwith this patch if I didn't have the special case for O_CREAT). So I\nthink that's pretty arguably an improvement.\n\nAs for how likely you are to reach that case... hmm, I don't know what\naccess() returns for a file in delete-pending state. The docs say\n\"The function returns -1 if the named file does not exist or does not\nhave the given mode\", so perhaps it returns 0 for such a case, in\nwhich case we'll consider it a collision and keep searching for\nanother free relfilenode. If that's the case, it's probably really\nreally unlikely you'll reach the case described in the previous\nparagraph, so it probably doesn't matter much.\n\nDo we have any other code paths where this finer point could cause\nproblems? 
Looking around at code that handles EEXIST, most of it is\ndirectory creation (unaffected by this patch), and then\nsrc/port/mkdtemp.c for which this change is appropriate (it implements\nPOSIX mkdtemp(), which shouldn't report EACCES to its caller if\nsomething it tries bumps into a delete-pending file, it should see\nEEXIST and try a new name, which I think it will do with this patch,\nthrough its call to open(O_CREAT | O_EXCL)).\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJz_pZTF9mckn6XgSv69%2BjGwdgLkxZ6b3NWGLBCVjqUZA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 28 Sep 2021 13:49:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Tue, Sep 28, 2021 at 2:50 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Sep 23, 2021 at 9:13 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> > When GetTempFileName() finds a duplicated file name and the file is\n> pending for deletion, it fails with an \"ERROR_ACCESS_DENIED\" error code.\n> That may describe the situation better than \"ERROR_FILE_EXISTS\".\n>\n> Thanks for looking. Why do you think that's better? I assume that's\n> just the usual NT->Win32 error conversion at work.\n>\nWhen a function returns an error caused by accessing a file\nin DELETE_PENDING you should expect an EACCES. Nonetheless, if we can\nemulate a POSIX behaviour by mapping it to EEXIST, that works for me. I\nalso consider that having the logic for DELETE_PENDING in a single function\nis an improvement.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 29 Sep 2021 12:26:25 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "This patch doesn't compile on Windows according to Appveyor, seemingly because\nof a syntax error in the new win32ntdll.h file, but the MSVC logs are hard on\nthe eye so it might be unrelated.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 3 Nov 2021 12:02:53 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "> On 3 Nov 2021, at 12:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> This patch doesn't compile on Windows according to Appveyor, seemingly because\n> of a syntax error in the new win32ntdll.h file, but the MSVC logs are hard on\n> the eye so it might be unrelated.\n\nAs the thread has stalled with a patch that doesn't apply, I'm marking this\npatch Returned with Feedback. 
Please feel free to resubmit when a new patch is\nready.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 15:11:53 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Dec 2, 2021 at 3:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 3 Nov 2021, at 12:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > This patch doesn't compile on Windows according to Appveyor, seemingly because\n> > of a syntax error in the new win32ntdll.h file, but the MSVC logs are hard on\n> > the eye so it might be unrelated.\n>\n> As the thread has stalled with a patch that doesn't apply, I'm marking this\n> patch Returned with Feedback. Please feel free to resubmit when a new patch is\n> ready.\n\nI think this was broken by WIN32_LEAN_AND_MEAN (and since gained a\nmerge conflict, but that's easy to fix). I'll try to figure out the\nright system header hacks to unbreak it...\n\n\n", "msg_date": "Sat, 4 Dec 2021 18:18:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Sat, Dec 4, 2021 at 6:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Dec 2, 2021 at 3:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > This patch doesn't compile on Windows according to Appveyor, seemingly because\n> > > of a syntax error in the new win32ntdll.h file, but the MSVC logs are hard on\n> > > the eye so it might be unrelated.\n\n> I think this was broken by WIN32_LEAN_AND_MEAN (and since gained a\n> merge conflict, but that's easy to fix). 
I'll try to figure out the\n> right system header hacks to unbreak it...\n\nShort version: It needed <winternl.h>.\n\nLong version: Where Unix shares headers between user space and kernel\nwith #ifdef _KERNEL, today I learned that Windows seems to have two\nuniverses of headers, with some stuff defined in both places. You\ncan't cross the streams. I had already defined UMDF_USING_NTSTATUS,\nwhich tells <windows.h> that you're planning to include <ntstatus.h>,\nto avoid a bunch of double-definitions (the other approach I'd found\non the 'net was to #define and #undef WIN32_NO_STATUS in the right\nplaces), but when WIN32_LEAN_AND_MEAN was added, that combination lost\nthe definition of NTSTATUS, which is needed by various macros like\nWAIT_OBJECT_0 (it's used in casts). It's supposed to come from\n<ntdef.h>, but if you include that directly you get more double\ndefinitions of other random stuff. Eventually I learned that\n<winternl.h> fixes that. No doubt this is eroding the gains made by\nWIN32_LEAN_AND_MEAN, but I don't see how to avoid it until we do the\nwork to stop including <windows.h> in win32_port.h. Well, I do know\none way... I noticed that <bcrypt.h> just defines NTSTATUS itself if\nit sees that <ntdef.h> hasn't been included (by testing its include\nguard). I tried that and it worked, but it seems pretty ugly and not\nsomething that we should be doing.", "msg_date": "Mon, 6 Dec 2021 21:17:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Dec 4, 2021 at 6:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I think this was broken by WIN32_LEAN_AND_MEAN (and since gained a\n> > merge conflict, but that's easy to fix). 
I'll try to figure out the\n> right system header hacks to unbreak it...\n\nShort version: It needed <winternl.h>.\n\nLong version: Where Unix shares headers between user space and kernel\nwith #ifdef _KERNEL, today I learned that Windows seems to have two\nuniverses of headers, with some stuff defined in both places. You\ncan't cross the streams. I had already defined UMDF_USING_NTSTATUS,\nwhich tells <windows.h> that you're planning to include <ntstatus.h>,\nto avoid a bunch of double-definitions (the other approach I'd found\non the 'net was to #define and #undef WIN32_NO_STATUS in the right\nplaces), but when WIN32_LEAN_AND_MEAN was added, that combination lost\nthe definition of NTSTATUS, which is needed by various macros like\nWAIT_OBJECT_0 (it's used in casts). It's supposed to come from\n<ntdef.h>, but if you include that directly you get more double\ndefinitions of other random stuff. Eventually I learned that\n<winternl.h> fixes that. No doubt this is eroding the gains made by\nWIN32_LEAN_AND_MEAN, but I don't see how to avoid it until we do the\nwork to stop including <windows.h> in win32_port.h. Well, I do know\none way... I noticed that <bcrypt.h> just defines NTSTATUS itself if\nit sees that <ntdef.h> hasn't been included (by testing its include\nguard). I tried that and it worked, but it seems pretty ugly and not\nsomething that we should be doing.", "msg_date": "Mon, 6 Dec 2021 21:17:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Dec 4, 2021 at 6:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I think this was broken by WIN32_LEAN_AND_MEAN (and since gained a\n> > merge conflict, but that's easy to fix). I'll try to figure out the\n> > right system header hacks to unbreak it...\n>\n> Short version: It needed <winternl.h>.\n\nSlight improvement: now I include <winternl.h> only from\nsrc/port/open.c and src/port/win32ntdll.c, so I avoid the extra\ninclude for the other ~1500 translation units. That requires a small\nextra step to work, see comment in win32ntdll.h. I checked that this\nstill cross-compiles OK under mingw on Linux. This is the version\nthat I'm planning to push to master only tomorrow if there are no\nobjections.", "msg_date": "Thu, 9 Dec 2021 21:16:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" }, { "msg_contents": "On Thu, Dec 9, 2021 at 9:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Slight improvement: now I include <winternl.h> only from\n> src/port/open.c and src/port/win32ntdll.c, so I avoid the extra\n> include for the other ~1500 translation units. That requires a small\n> extra step to work, see comment in win32ntdll.h. I checked that this\n> still cross-compiles OK under mingw on Linux. This is the version\n> that I'm planning to push to master only tomorrow if there are no\n> objections.\n\nDone. I'll keep an eye on the build farm.\n\n\n", "msg_date": "Fri, 10 Dec 2021 16:25:07 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: stat() vs ERROR_DELETE_PENDING, round N + 1" } ]
[ { "msg_contents": "Hi,\n\nWhen reviewing other patches, I noticed two typos:\n\n1.\nsrc/backend/parser/gram.y\nALTER TABLE <name> ALTER [COLUMN] <colname> RESET ( column_parameter = value [, ... ] )\n\nRESET cannot specify value.\n\n2.\nsrc/backend/utils/adt/xid8funcs.c\n* Same as pg_current_xact_if_assigned() but doesn't assign a new xid if there\n\npg_current_xact_if_assigned() should be pg_current_xact_id()\n\nBest regards,\nHou zhijie", "msg_date": "Thu, 2 Sep 2021 11:54:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix typo in comments" }, { "msg_contents": "\n\nOn 2021/09/02 20:54, houzj.fnst@fujitsu.com wrote:\n> Hi,\n> \n> When reviewing other patches, I noticed two typos:\n\nThanks! Both fixes look good to me.\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Sep 2021 21:47:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix typo in comments" }, { "msg_contents": "\n\nOn 2021/09/02 21:47, Fujii Masao wrote:\n> \n> \n> On 2021/09/02 20:54, houzj.fnst@fujitsu.com wrote:\n>> Hi,\n>>\n>> When reviewing other patches, I noticed two typos:\n> \n> Thanks! Both fixes look good to me.\n> Barring any objection, I will commit the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 6 Sep 2021 17:10:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix typo in comments" } ]
[ { "msg_contents": "I had a customer point out to me that we're inconsistent in how we\nspell read-only. Turns out we're not as inconsistent as I initially\nthought :), but that they did manage to spot the one actual log\nmessage we have that writes it differently than everything else -- but\nthat broke their grepping...\n\nAlmost everywhere we use read-only. Attached patch changes the one log\nmessage where we didn't, as well as a few places in the docs for it. I\ndid not bother with things like comments in the code.\n\nTwo questions:\n\n1. Is it worth fixing? Or just silly nitpicking?\n\n2. What about translations? This string exists in translations --\nshould we just \"fix\" it there, without touching the translated string?\nOr try to fix both? Or leave it for the translators who will get a\ndiff on it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 2 Sep 2021 20:20:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Read-only vs read only vs readonly" }, { "msg_contents": "On 9/2/21, 11:30 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\r\n> I had a customer point out to me that we're inconsistent in how we\r\n> spell read-only. Turns out we're not as inconsistent as I initially\r\n> thought :), but that they did manage to spot the one actual log\r\n> message we have that writes it differently than everything else -- but\r\n> that broke their grepping...\r\n>\r\n> Almost everywhere we use read-only. Attached patch changes the one log\r\n> message where we didn't, as well as a few places in the docs for it. I\r\n> did not bother with things like comments in the code.\r\n> \r\n> Two questions:\r\n>\r\n> 1. Is it worth fixing? Or just silly nitpicking?\r\n\r\nIt seems entirely reasonable to me to consistently use \"read-only\" in\r\nthe log messages and documentation.\r\n\r\n> 2. What about translations? 
This string exists in translations --\r\n> should we just \"fix\" it there, without touching the translated string?\r\n> Or try to fix both? Or leave it for the translators who will get a\r\n> diff on it?\r\n\r\nI don't have a strong opinion, but if I had to choose, I would say to\r\nleave it to the translators to decide.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Sep 2021 22:07:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Read-only vs read only vs readonly" }, { "msg_contents": "At Thu, 2 Sep 2021 22:07:02 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 9/2/21, 11:30 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\n> > I had a customer point out to me that we're inconsistent in how we\n> > spell read-only. Turns out we're not as inconsistent as I initially\n> > thought :), but that they did manage to spot the one actual log\n> > message we have that writes it differently than everything else -- but\n> > that broke their grepping...\n> >\n> > Almost everywhere we use read-only. Attached patch changes the one log\n> > message where we didn't, as well as a few places in the docs for it. I\n> > did not bother with things like comments in the code.\n> > \n> > Two questions:\n> >\n> > 1. Is it worth fixing? Or just silly nitpicking?\n> \n> It seems entirely reasonable to me to consistently use \"read-only\" in\n> the log messages and documentation.\n> \n> > 2. What about translations? This string exists in translations --\n> > should we just \"fix\" it there, without touching the translated string?\n> > Or try to fix both? Or leave it for the translators who will get a\n> > diff on it?\n> \n> I don't have a strong opinion, but if I had to choose, I would say to\n> leave it to the translators to decide.\n\n+1 for both. As a translator, it seems that that kind of changes are\nusual. 
Many changes about full-stops, spacings, capitalizing and so\nhappen especially at near-release season like now.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Sep 2021 15:10:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read-only vs read only vs readonly" }, { "msg_contents": "On Fri, Sep 3, 2021 at 8:10 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 2 Sep 2021 22:07:02 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in\n> > On 9/2/21, 11:30 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\n> > > I had a customer point out to me that we're inconsistent in how we\n> > > spell read-only. Turns out we're not as inconsistent as I initially\n> > > thought :), but that they did manage to spot the one actual log\n> > > message we have that writes it differently than everything else -- but\n> > > that broke their grepping...\n> > >\n> > > Almost everywhere we use read-only. Attached patch changes the one log\n> > > message where we didn't, as well as a few places in the docs for it. I\n> > > did not bother with things like comments in the code.\n> > >\n> > > Two questions:\n> > >\n> > > 1. Is it worth fixing? Or just silly nitpicking?\n> >\n> > It seems entirely reasonable to me to consistently use \"read-only\" in\n> > the log messages and documentation.\n> >\n> > > 2. What about translations? This string exists in translations --\n> > > should we just \"fix\" it there, without touching the translated string?\n> > > Or try to fix both? Or leave it for the translators who will get a\n> > > diff on it?\n> >\n> > I don't have a strong opinion, but if I had to choose, I would say to\n> > leave it to the translators to decide.\n>\n> +1 for both. As a translator, it seems that that kind of changes are\n> usual. 
Many changes about full-stops, spacings, capitalizing and so\n> happen especially at near-release season like now.\n\nThanks for the input. I've applied this and back-patched to 14 since\nit's not out yet and there is translation still to be done. I didn't\nbackpatch it further back than that to avoid the need for translation\nupdates there.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 7 Sep 2021 22:05:58 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Read-only vs read only vs readonly" } ]
[ { "msg_contents": "Hello,\n\nLacking a tool to edit postgresql.conf programmatically, people resort \nto passing cluster options on the command line. While passing all \nnon-default options in this way may sound like an abuse of the feature, \nIMHO pg_ctl should not blindly truncate generated command lines at \nMAXPGPATH (1024 characters) and then run that, resulting in:\n\n/bin/sh: Syntax error: end of file unexpected (expecting word)\npg_ctl: could not start server\nExamine the log output.\n\nThe attached patch tries to fix it in the least intrusive way.\n\nWhile we're at it, is it supposed that pg_ctl is a very short-lived \nprocess and is therefore allowed to leak memory? I've noticed some \nplaces where I would like to add a free() call.\n\n-- Ph.", "msg_date": "Thu, 02 Sep 2021 23:36:13 +0200", "msg_from": "Phil Krylov <phil@krylov.eu>", "msg_from_op": true, "msg_subject": "[PATCH] pg_ctl should not truncate command lines at 1024 characters" }, { "msg_contents": "Em qui., 2 de set. de 2021 às 18:36, Phil Krylov <phil@krylov.eu> escreveu:\n\n> Hello,\n>\n> Lacking a tool to edit postgresql.conf programmatically, people resort\n> to passing cluster options on the command line. While passing all\n> non-default options in this way may sound like an abuse of the feature,\n> IMHO pg_ctl should not blindly truncate generated command lines at\n> MAXPGPATH (1024 characters) and then run that, resulting in:\n>\nThe msvc docs says that limit for the command line is 32,767 characters,\nwhile ok for me, I think if not it would be better to check this limit?\n\n\n> /bin/sh: Syntax error: end of file unexpected (expecting word)\n> pg_ctl: could not start server\n> Examine the log output.\n>\n> The attached patch tries to fix it in the least intrusive way.\n>\n> While we're at it, is it supposed that pg_ctl is a very short-lived\n> process and is therefore allowed to leak memory? 
I've noticed some\n> places where I would like to add a free() call.\n>\n+1 to add free.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 2 Sep 2021 19:36:08 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_ctl should not truncate command lines at 1024\n characters" }, { "msg_contents": "On 2021-09-03 00:36, Ranier Vilela wrote:\n\n> The msvc docs says that limit for the command line is 32,767 \n> characters,\n> while ok for me, I think if not it would be better to check this limit?\n\nWell, it's ARG_MAX in POSIX, and ARG_MAX is defined as 256K in Darwin, \n512K in FreeBSD, 128K in Linux; _POSIX_ARG_MAX is defined as 4096 on all \nthree platforms. Windows may differ too. Anyways, allocating even 128K \nin precious stack space is too much, that's why I suggest to use \npsprintf(). 
As for checking any hard limit, I don't think it would have \nmuch value - somehow we got the original command line, thus it is \nsupported by the system, so we can just pass it on.\n\n-- Ph.\n\n\n", "msg_date": "Fri, 03 Sep 2021 00:57:16 +0200", "msg_from": "Phil Krylov <phil@krylov.eu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_ctl should not truncate command lines at 1024\n characters" }, { "msg_contents": "Phil Krylov <phil@krylov.eu> writes:\n> IMHO pg_ctl should not blindly truncate generated command lines at \n> MAXPGPATH (1024 characters) and then run that, resulting in:\n\nFair enough.\n\n> The attached patch tries to fix it in the least intrusive way.\n\nSeems reasonable. We didn't have psprintf when this code was written,\nbut now that we do, it's hardly any more complicated to do it without\nthe length restriction.\n\n> While we're at it, is it supposed that pg_ctl is a very short-lived \n> process and is therefore allowed to leak memory? I've noticed some \n> places where I would like to add a free() call.\n\nI think that these free() calls you propose to add are a complete\nwaste of code space. Certainly a free() right before an exit() call\nis that; if anything, it's *delaying* recycling the memory space for\nsome useful purpose. But no part of pg_ctl runs long enough for it\nto be worth worrying about small leaks.\n\nI do not find your proposed test case to be a useful expenditure\nof test cycles, either. If it ever fails, we'd learn nothing,\nexcept that that particular platform has a surprisingly small\ncommand line length limit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Sep 2021 20:09:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_ctl should not truncate command lines at 1024\n characters" }, { "msg_contents": "On 2021-09-03 02:09, Tom Lane wrote:\n> I think that these free() calls you propose to add are a complete\n> waste of code space. 
Certainly a free() right before an exit() call\n> is that; if anything, it's *delaying* recycling the memory space for\n> some useful purpose. But no part of pg_ctl runs long enough for it\n> to be worth worrying about small leaks.\n\nOK, I have removed the free() before exit().\n\n> I do not find your proposed test case to be a useful expenditure\n> of test cycles, either. If it ever fails, we'd learn nothing,\n> except that that particular platform has a surprisingly small\n> command line length limit.\n\nHmm, it's a test case that fails with the current code and stops failing \nwith my fix, so I've put it there to show the problem. But, truly, it \ndoes not bring much value after the fix is applied.\n\nAttaching the new version, with the test case and free-before-exit \nremoved.\n\n-- Ph.", "msg_date": "Fri, 03 Sep 2021 10:17:47 +0200", "msg_from": "Phil Krylov <phil@krylov.eu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_ctl should not truncate command lines at 1024\n characters" }, { "msg_contents": "Phil Krylov <phil@krylov.eu> writes:\n> Attaching the new version, with the test case and free-before-exit \n> removed.\n\nPushed with minor cosmetic adjustments. Thanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 21:06:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_ctl should not truncate command lines at 1024\n characters" } ]
[ { "msg_contents": "Few tap test files have the \"tempdir_short\" variable which isn't in\nuse. The attached patch removes the same\n\nRegards,\nAmul", "msg_date": "Fri, 3 Sep 2021 10:53:19 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Unused variable in TAP tests file" }, { "msg_contents": "On Fri, Sep 03, 2021 at 10:53:19AM +0530, Amul Sul wrote:\n> Few tap test files have the \"tempdir_short\" variable which isn't in\n> use. The attached patch removes the same\n\nIndeed. Let's clean up that. Thanks!\n--\nMichael", "msg_date": "Fri, 3 Sep 2021 16:03:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unused variable in TAP tests file" }, { "msg_contents": "On Fri, Sep 03, 2021 at 04:03:36PM +0900, Michael Paquier wrote:\n> Indeed. Let's clean up that. Thanks!\n\nAnd done.\n--\nMichael", "msg_date": "Mon, 6 Sep 2021 11:28:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unused variable in TAP tests file" }, { "msg_contents": "Thank you !\n\nRegards,\nAmul\n\nOn Mon, Sep 6, 2021 at 7:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 03, 2021 at 04:03:36PM +0900, Michael Paquier wrote:\n> > Indeed. Let's clean up that. Thanks!\n>\n> And done.\n> --\n> Michael\n\n\n", "msg_date": "Mon, 6 Sep 2021 09:44:01 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unused variable in TAP tests file" } ]
[ { "msg_contents": "Hi all\n\nI want to share a patch with you, in which I add a guc parameter 'enable_send_stop' to enable set the value of SendStop in postmaster.c more conveniently. SendStop enable postmaster to send SIGSTOP rather than SIGQUIT to its children when some backend dumps core, and this variable is originally set with -T parameter when start postgres, which is inconvenient to control in some scenarios.\n\nThanks & Best Regards", "msg_date": "Fri, 03 Sep 2021 15:00:00 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?QWRkIGd1YyB0byBlbmFibGUgc2VuZCBTSUdTVE9QIHRvIHBlZXJzIHdoZW4gYmFja2VuZCBl?=\n =?UTF-8?B?eGl0cyBhYm5vcm1hbGx5?=" }, { "msg_contents": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com> writes:\n> I want to share a patch with you, in which I add a guc parameter 'enable_send_stop' to enable set the value of SendStop in postmaster.c more conveniently. SendStop enable postmaster to send SIGSTOP rather than SIGQUIT to its children when some backend dumps core, and this variable is originally set with -T parameter when start postgres, which is inconvenient to control in some scenarios.\n\nTBH, I'd sooner rip out SendStop, and simplify the related postmaster\nlogic. I've never used it in twenty-some years of Postgres hacking,\nand I doubt anyone else has used it much either. It's not worth the\noverhead of a GUC. (The argument that you need it in situations\nwhere you can't control the postmaster's command line seems pretty\nthin, too. 
I'm much more worried about somebody turning it on by\naccident and then complaining that the cluster freezes upon crash.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 10:39:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?B?QWRkIGd1YyB0byBlbmFibGUgc2VuZCBTSUdTVE9QIHRvIHBlZXJzIHdoZW4gYmFja2VuZCBl?=\n =?UTF-8?B?eGl0cyBhYm5vcm1hbGx5?=" }, { "msg_contents": "On 2021-Sep-03, Tom Lane wrote:\n\n> \"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com> writes:\n> > I want to share a patch with you, in which I add a guc parameter 'enable_send_stop' to enable set the value of SendStop in postmaster.c more conveniently. SendStop enable postmaster to send SIGSTOP rather than SIGQUIT to its children when some backend dumps core, and this variable is originally set with -T parameter when start postgres, which is inconvenient to control in some scenarios.\n> \n> TBH, I'd sooner rip out SendStop, and simplify the related postmaster\n> logic.\n\nI wrote a patch to do that in 2012, after this exchange:\nhttps://postgr.es/m/1333124720-sup-6193@alvh.no-ip.org\nI obviously doesn't apply at all anymore, but the thing that prevented\nme from sending it was I couldn't find what the mentioned feature was\nthat would cause all backends to dump core at the time of a crash.\nSo it seemed to me that we would be ripping out a feature I had used,\nwith no replacement.\n\n(It applies cleanly on top of 36b7e3da17bc.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Cuando no hay humildad las personas se degradan\" (A. 
Christie)", "msg_date": "Fri, 3 Sep 2021 17:16:06 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add guc to enable send SIGSTOP to peers when backend exits\n abnormally" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-03, Tom Lane wrote:\n>> TBH, I'd sooner rip out SendStop, and simplify the related postmaster\n>> logic.\n\n> I wrote a patch to do that in 2012, after this exchange:\n> https://postgr.es/m/1333124720-sup-6193@alvh.no-ip.org\n> I obviously doesn't apply at all anymore, but the thing that prevented\n> me from sending it was I couldn't find what the mentioned feature was\n> that would cause all backends to dump core at the time of a crash.\n\nOh, I think you misunderstood what I wrote. I was thinking of the\nancient habit of most kernels to dump cores to a file just named\n\"core\"; so that even if you went around and manually SIGABRT'd\neach stopped process, the cores would all overwrite each other,\nleaving you with little to show for the exercise. Nowadays you're\nmore likely to get \"core.NNN\" for each PID, so that it could in\nprinciple be useful to force all the backends to dump core for later\nanalysis. But I know of no mechanism that would do that for you.\n\nHowever, thinking about this afresh, it seems like that Berkeley-era\ncomment about \"the wily post_hacker\" was never very apropos. If what\nyou wanted was a few GB of core files for later analysis, it'd make\nmore sense to have the postmaster send SIGABRT or the like. That\nsaves a bunch of tedious manual steps, plus the cluster isn't left\nin a funny state that requires yet more manual cleanup steps.\n\nSo I'm thinking that the *real* use-case for this is for developers\nto attach with gdb and do on-the-fly investigation of the state of\nother backends, rather than forcing core-dumps. 
However, it's still\na pretty half-baked feature because there's no easy way to clean up\nafterwards.\n\nThe other elephant in the room is that by the time the postmaster\nhas reacted to the initial backend crash, it's dubious whether the\nstate of other processes is still able to tell you much. (IME,\nat least, the postmaster doesn't hear about it until the kernel\nhas finished writing out the dying process's core image, which\ntakes approximately forever compared to modern CPU speeds.)\n\n> So it seemed to me that we would be ripping out a feature I had used,\n> with no replacement.\n\nIf we had a really useful feature here I'd be all over it.\nBut it looks more like somebody's ten-minute hack, so the\nfact that it's undocumented and obscure-to-invoke seems\nappropriate to me.\n\n(BTW, I think we had exactly this discussion way back when\nPeter cleaned up the postmaster/postgres command line switches.\nJust about all the other old switches have equivalent GUCs,\nand IIRC it is not an oversight that SendStop was left out.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 17:44:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add guc to enable send SIGSTOP to peers when backend exits\n abnormally" } ]
[ { "msg_contents": "To who it may concern,\n\nI am trying to get a project completed to enhance PostgreSQL arithmetic and elementary functions\nprowess by means of two new High Precision mixed decimal number types in a self installing\nextension. Hopefully, I want this to be a free or low cost project.\n\nIs there anyone who can read these project specifications and email back to\nme here, at poweruserm@live.com.au, to give me a quote for this project?\nThey are in my top posting at this discussion thread, at:\n\nhttps://github.com/dvarrazzo/pgmp/issues/22\n\nThe extension could be called HPPM, High Precision Postgresql Mathematics. It is\nto be written in C, and will need a number of offline installers for major operating\nsystems, like Windows 10/11 or rpm based Linux. The project could be hosted on SourceForge\nor GitHub.\n\nIf anyone on this list is interested, or knows which direction to point me in,\ncould they please reply to me here, at poweruserm@live.com.au?\n\n\nZM.", "msg_date": "Fri, 3 Sep 2021 08:17:29 +0000", "msg_from": "A Z <poweruserm@live.com.au>", "msg_from_op": true, "msg_subject": "Question about an Extension Project" }, { "msg_contents": "Hi,\n\nI don't want to sound overly rude, but I suggest not spamming this list \nwith the same message when you don't get an answer right away. If no one \nanswered the first time, they're not going to answer the second time.\n\nProviding a quote usually requires some sort of a business relationship, \nand I doubt people on this list will rush to do that when the person is \nentirely anonymous, without any prior history in the community, etc.\n\n\nAs for the technical side, I only quickly skimmed the specification, and \nit's entirely unclear to me\n\n(a) Why? What's the whole point of the proposed extension, and why e.g. \npgmp is not suitable to achieve that.\n\n(b) What? Are the proposed data types & aritmetics a completely new \nthing, or is that already implemented somewhere? I doubt people on this \nlist will be interested in inventing entirely new ways to do math (as \nopposed to using a library that already exists).\n\n\nregards\n\nOn 9/3/21 10:17 AM, A Z wrote:\n> To who it may concern,\n> \n> I am trying to get a project completed to enhance PostgreSQL arithmetic \n> and elementary functions\n> prowess by means of two new High Precision mixed decimal number types in \n> a self installing\n> extension. 
Hopefully, I want this to be a free or low cost project.\n> \n> Is there anyone who can read these project specifications and email back to\n> me here, at poweruserm@live.com.au, to give me a quote for this project?\n> They are in my top posting at this discussion thread, at:\n> \n> https://github.com/dvarrazzo/pgmp/issues/22\n> \n> The extension could be called HPPM, High Precision Postgresql \n> Mathematics. It is\n> to be written in C, and will need a number of offline installers for \n> major operating\n> systems, like Windows 10/11 or rpm based Linux. The project could be \n> hosted on SourceForge\n> or GitHub.\n> \n> If anyone on this list is interested, or knows which direction to point \n> me in,\n> could they please reply to me here, at poweruserm@live.com.au?\n> \n> \n> ZM.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Sep 2021 11:41:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Question about an Extension Project" } ]
[ { "msg_contents": "Set the volatility of the timestamptz version of date_bin() back to immutable\n\n543f36b43d was too hasty in thinking that the volatility of date_bin()\nhad to match date_trunc(), since only the latter references\nsession_timezone.\n\nBump catversion\n\nPer feedback from Aleksander Alekseev\nBackpatch to v14, as the former commit was\n\nBranch\n------\nREL_14_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/67c33a114f38edbd66f68d7b2c0cb7b03611ec48\n\nModified Files\n--------------\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 2 +-\n2 files changed, 2 insertions(+), 2 deletions(-)", "msg_date": "Fri, 03 Sep 2021 17:42:28 +0000", "msg_from": "John Naylor <john.naylor@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Set the volatility of the timestamptz version of date_bin()\n back" }, { "msg_contents": "On 2021-Sep-03, John Naylor wrote:\n\n> Set the volatility of the timestamptz version of date_bin() back to immutable\n> \n> 543f36b43d was too hasty in thinking that the volatility of date_bin()\n> had to match date_trunc(), since only the latter references\n> session_timezone.\n> \n> Bump catversion\n\nThese catversion bumps in branch 14 this late in the cycle seem suspect.\nDidn't we have some hesitation to push multirange unnest around beta2\nprecisely because of a desire to avoid catversion bumps?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:45:50 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" }, { "msg_contents": "On Fri, Sep 3, 2021 at 1:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n>\n> On 2021-Sep-03, John Naylor wrote:\n> These catversion bumps in branch 14 this late in the cycle seem suspect.\n> Didn't we have some hesitation to push multirange unnest around 
beta2\n> precisely because of a desire to avoid catversion bumps?\n\nThis was for correcting a mistake (although the first commit turned out to\nbe a mistake itself), so I understood it to be necessary.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Sep 3, 2021 at 1:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:>> On 2021-Sep-03, John Naylor wrote:> These catversion bumps in branch 14 this late in the cycle seem suspect.> Didn't we have some hesitation to push multirange unnest around beta2> precisely because of a desire to avoid catversion bumps?This was for correcting a mistake (although the first commit turned out to be a mistake itself), so I understood it to be necessary.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Sep 2021 13:50:45 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" }, { "msg_contents": "On 2021-Sep-03, John Naylor wrote:\n\n> On Fri, Sep 3, 2021 at 1:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2021-Sep-03, John Naylor wrote:\n> > These catversion bumps in branch 14 this late in the cycle seem suspect.\n> > Didn't we have some hesitation to push multirange unnest around beta2\n> > precisely because of a desire to avoid catversion bumps?\n> \n> This was for correcting a mistake (although the first commit turned out to\n> be a mistake itself), so I understood it to be necessary.\n\nA crazy idea might have been to return to the original value.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:56:50 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" }, { "msg_contents": "On Fri, Sep 03, 2021 at 01:56:50PM -0400, Alvaro Herrera wrote:\n> On 2021-Sep-03, John 
Naylor wrote:\n> \n> > On Fri, Sep 3, 2021 at 1:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n> > >\n> > > On 2021-Sep-03, John Naylor wrote:\n> > > These catversion bumps in branch 14 this late in the cycle seem suspect.\n> > > Didn't we have some hesitation to push multirange unnest around beta2\n> > > precisely because of a desire to avoid catversion bumps?\n> > \n> > This was for correcting a mistake (although the first commit turned out to\n> > be a mistake itself), so I understood it to be necessary.\n> \n> A crazy idea might have been to return to the original value.\n\n+1. I think the catversion usually is always increased even in a \"revert\", but\nin this exceptional case [0] it would be nice if beta4/rc1 had the same number\nas b3.\n\n[0] two commits close to each other, with no other catalog changes, and with\nthe specific goal of allowing trivial upgrade from b3.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Sep 2021 13:27:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" }, { "msg_contents": "John Naylor <john.naylor@postgresql.org> writes:\n> Set the volatility of the timestamptz version of date_bin() back to immutable\n> 543f36b43d was too hasty in thinking that the volatility of date_bin()\n> had to match date_trunc(), since only the latter references\n> session_timezone.\n\n> Bump catversion\n\nWhat you should have done here, at least in the back branch, was *revert*\ncatversion to what it had been. 
As things stand, it would force users of\n14beta3 to initdb or pg_upgrade to move to 14.0, for no reason whatsoever.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 16:42:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-03, John Naylor wrote:\n>> On Fri, Sep 3, 2021 at 1:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n>> wrote:\n>>> These catversion bumps in branch 14 this late in the cycle seem suspect.\n>>> Didn't we have some hesitation to push multirange unnest around beta2\n>>> precisely because of a desire to avoid catversion bumps?\n\n>> This was for correcting a mistake (although the first commit turned out to\n>> be a mistake itself), so I understood it to be necessary.\n\n> A crazy idea might have been to return to the original value.\n\nYes, that is what should have been done, as I complained over\nin pgsql-committers before seeing this exchange. As things\nstand, a pg_upgrade is going to be forced on beta3 users\nwithout even the excuse of fixing a bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Sep 2021 16:47:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Set the volatility of the timestamptz version of\n date_bin() back" } ]
[ { "msg_contents": "Windows 10 supports Unix sockets as reported, e.g., here\nhttps://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n\nWe run the tests on MobilityDB using an ephemeral instance that is created\nby the test suite and torn down afterwards.\nhttps://github.com/MobilityDB/MobilityDB/blob/develop/test/scripts/test.sh\nFor this we use Unix sockets and thus the pg_ctl command is configured as\nfollows\n\nPGCTL=\"${BIN_DIR}/pg_ctl -w -D ${DBDIR} -l ${WORKDIR}/log/postgres.log -o\n-k -o ${WORKDIR}/lock -o -h -o ''\"\n\nThe log file reports things are working as expected\n\n2021-09-05 14:10:53.366 CEST [32170] LOG: starting PostgreSQL 13.3 on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0,\n64-bit\n2021-09-05 14:10:53.372 CEST [32170] LOG: listening on Unix socket\n\"/home/esteban/src/MobilityDB/build/tmptest/lock/.s.PGSQL.5432\"\n2021-09-05 14:10:53.394 CEST [32171] LOG: database system was shut down at\n2021-09-05 14:10:52 CEST\n2021-09-05 14:10:53.412 CEST [32170] LOG: database system is ready to\naccept connections\n\nWe are trying to port MobilityDB on Windows using msys2. 
In this case the\nabove command does not work as reported in the corresponding log\n\n2021-09-05 14:34:10.553 CEST [19060] LOG: starting PostgreSQL 13.4 on\nx86_64-w64-mingw32, compiled by gcc.exe (Rev5, Built by MSYS2 project)\n10.3.0, 64-bit\n2021-09-05 14:34:10.558 CEST [19060] LOG: could not translate host name\n\"''\", service \"5432\" to address: Unknown host\n2021-09-05 14:34:10.558 CEST [19060] WARNING: could not create listen\nsocket for \"''\"\n2021-09-05 14:34:10.558 CEST [19060] FATAL: could not create any TCP/IP\nsockets\n2021-09-05 14:34:10.560 CEST [19060] LOG: database system is shut down\n\nAny ideas on how to solve this ?\n\nEsteban", "msg_date": "Sun, 5 Sep 2021 14:38:18 +0200", "msg_from": "Esteban Zimanyi <esteban.zimanyi@ulb.be>", "msg_from_op": true, "msg_subject": "Fwd: Problem with Unix sockets when porting MobilityDB for Windows" } ]
[ { "msg_contents": "Hi all,\n\nRunning the recovery tests in a parallel run, enough to bloat a\nmachine in resources, sometimes leads me to the following failure:\nok 19 - walsender termination logged\n# poll_query_until timed out executing this query:\n# SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n\nThis corresponds to the following part of the test, where a WAL sender\nis SIGSTOP'd and SIGCONT'd:\n$node_primary3->poll_query_until('postgres',\n \"SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\",\n \"lost\")\n or die \"timed out waiting for slot to be lost\";\n\nThere is already a default timeout of 180s applied as of the default\nof PostgresNode::poll_query_until(), so it seems to me that there\ncould be a different issue hiding here.\n\nThanks,\n--\nMichael", "msg_date": "Mon, 6 Sep 2021 09:17:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "Hello\n\nOn 2021-Sep-06, Michael Paquier wrote:\n\n> Running the recovery tests in a parallel run, enough to bloat a\n> machine in resources, sometimes leads me to the following failure:\n> ok 19 - walsender termination logged\n> # poll_query_until timed out executing this query:\n> # SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n\nHmm, I've never seen that, and I do run tests in parallel quite often.\nWould you please attach the log files for that test in a failed run?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. 
:)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Mon, 6 Sep 2021 11:59:42 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-06, Michael Paquier wrote:\n>> # poll_query_until timed out executing this query:\n>> # SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n\n> Hmm, I've never seen that, and I do run tests in parallel quite often.\n\nI scraped the buildfarm logs looking for similar failures, and didn't\nfind any. (019_replslot_limit.pl hasn't failed at all in the farm\nsince the last fix it received, in late July.) I wonder if Michael's\nsetup had any unusual settings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Sep 2021 12:03:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 06, 2021 at 12:03:32PM -0400, Tom Lane wrote:\n> I scraped the buildfarm logs looking for similar failures, and didn't\n> find any. (019_replslot_limit.pl hasn't failed at all in the farm\n> since the last fix it received, in late July.)\n\nThe interesting bits are in 019_replslot_limit_primary3.log. 
In a\nfailed run, I can see that we get immediately a process termination,\nas follows:\n2021-09-07 07:52:53.402 JST [22890] LOG: terminating process 23082 to release replication slot \"rep3\"\n2021-09-07 07:52:53.442 JST [23082] standby_3 FATAL: terminating connection due to administrator command\n2021-09-07 07:52:53.442 JST [23082] standby_3 STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n2021-09-07 07:52:53.452 JST [23133] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n\nIn a successful run, the pattern is different:\n2021-09-07 09:27:39.832 JST [57114] standby_3 FATAL: terminating connection due to administrator command\n2021-09-07 09:27:39.832 JST [57114] standby_3 STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n2021-09-07 09:27:39.832 JST [57092] LOG: invalidating slot \"rep3\" because its restart_lsn 0/7000D8 exceeds max_slot_wal_keep_size\n2021-09-07 09:27:39.833 JST [57092] LOG: checkpoint complete: wrote\n19 buffers (14.8%); 0 WAL file(s) added, 1 removed, 0 recycled;\nwrite=0.025 s, sync=0.001 s, total=0.030 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=1024 kB, estimate=1024 kB\n2021-09-07 09:27:39.833 JST [57092] LOG: checkpoints are occurring too frequently (0 seconds apart)\n2021-09-07 09:27:39.833 JST [57092] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n2021-09-07 09:27:39.851 JST [57126] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n\nThe slot invalidation is forgotten because we don't complete a\ncheckpoint that does the work it should do, no? 
There is a completed\ncheckpoint before we query pg_replication_slots, and the buildfarm\nshows the same thing.\n\n> I wonder if Michael's setup had any unusual settings.\n\nThe way I use configure and build options has caught bugs with code\nordering in the past, but this one looks like just a timing issue with\nthe test itself. I can only see that with Big Sur 11.5.2, and I just\ngot fresh logs this morning with a new failure, as of the attached.\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 09:37:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "At Tue, 7 Sep 2021 09:37:10 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Sep 06, 2021 at 12:03:32PM -0400, Tom Lane wrote:\n> > I scraped the buildfarm logs looking for similar failures, and didn't\n> > find any. (019_replslot_limit.pl hasn't failed at all in the farm\n> > since the last fix it received, in late July.)\n> \n> The interesting bits are in 019_replslot_limit_primary3.log. 
In a\n> failed run, I can see that we get immediately a process termination,\n> as follows:\n> 2021-09-07 07:52:53.402 JST [22890] LOG: terminating process 23082 to release replication slot \"rep3\"\n> 2021-09-07 07:52:53.442 JST [23082] standby_3 FATAL: terminating connection due to administrator command\n> 2021-09-07 07:52:53.442 JST [23082] standby_3 STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n> 2021-09-07 07:52:53.452 JST [23133] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n> \n> In a successful run, the pattern is different:\n> 2021-09-07 09:27:39.832 JST [57114] standby_3 FATAL: terminating connection due to administrator command\n> 2021-09-07 09:27:39.832 JST [57114] standby_3 STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n> 2021-09-07 09:27:39.832 JST [57092] LOG: invalidating slot \"rep3\" because its restart_lsn 0/7000D8 exceeds max_slot_wal_keep_size\n> 2021-09-07 09:27:39.833 JST [57092] LOG: checkpoint complete: wrote\n> 19 buffers (14.8%); 0 WAL file(s) added, 1 removed, 0 recycled;\n> write=0.025 s, sync=0.001 s, total=0.030 s; sync files=0,\n> longest=0.000 s, average=0.000 s; distance=1024 kB, estimate=1024 kB\n> 2021-09-07 09:27:39.833 JST [57092] LOG: checkpoints are occurring too frequently (0 seconds apart)\n> 2021-09-07 09:27:39.833 JST [57092] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n> 2021-09-07 09:27:39.851 JST [57126] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n> \n> The slot invalidation is forgotten because we don't complete a\n> checkpoint that does the work it should do, no? There is a completed\n> checkpoint before we query pg_replication_slots, and the buildfarm\n> shows the same thing.\n\nIt seems like the \"kill 'STOP'\" in the script didn't suspend the\nprocesses before advancing WAL. 
The attached uses 'ps' command to\ncheck that since I didn't come up with the way to do the same in Perl.\n\nI'm still not sure it works as expected, though. (Imagining the case\nwhere the state changes before the process actually stops..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 07 Sep 2021 12:01:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On 2021-Sep-07, Kyotaro Horiguchi wrote:\n\n> It seems like the \"kill 'STOP'\" in the script didn't suspend the\n> processes before advancing WAL. The attached uses 'ps' command to\n> check that since I didn't come up with the way to do the same in Perl.\n\nAh! so we tell the kernel to send the signal, but there's no guarantee\nabout the timing for the reaction from the other process. Makes sense.\n\nYour proposal is to examine the other process' state until we see that\nit gets the T flag. I wonder how portable this is; I suspect not very.\n`ps` is pretty annoying, meaning not consistently implemented -- GNU's\nmanpage says there are \"UNIX options\", \"BSD options\" and \"GNU long\noptions\", so it seems hard to believe that there is one set of options\nthat will work everywhere.\n\nI found a Perl module (Proc::ProcessTable) that can be used to get the\nlist of processes and their metadata, but it isn't in core Perl and it\ndoesn't look very well maintained either, so that one's out.\n\nAnother option might be to wait on the kernel -- do something that would\ninvolve the kernel taking action on the other process, acting like a\nbarrier of sorts. I don't know if this actually works, but we could\ntry. 
Something like sending SIGSTOP first, then \"kill 0\" -- or just\nsend SIGSTOP twice:\n\ndiff --git a/src/test/recovery/t/019_replslot_limit.pl b/src/test/recovery/t/019_replslot_limit.pl\nindex e065c5c008..e8f323066a 100644\n--- a/src/test/recovery/t/019_replslot_limit.pl\n+++ b/src/test/recovery/t/019_replslot_limit.pl\n@@ -346,6 +346,8 @@ $logstart = get_log_size($node_primary3);\n # freeze walsender and walreceiver. Slot will still be active, but walreceiver\n # won't get anything anymore.\n kill 'STOP', $senderpid, $receiverpid;\n+kill 'STOP', $senderpid, $receiverpid;\n+\n advance_wal($node_primary3, 2);\n \n my $max_attempts = 180;\n\n\n\n> +\t# Haven't found the means to do the same on Windows\n> +\treturn if $TestLib::windows_os;\n\nI suppose if it came down to something like your patch, we could do\nsomething simple here like \"if Windows, sleep 2s and hope for the best\".\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n", "msg_date": "Fri, 17 Sep 2021 18:59:24 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Fri, Sep 17, 2021 at 06:59:24PM -0300, Alvaro Herrera wrote:\n> On 2021-Sep-07, Kyotaro Horiguchi wrote:\n> > It seems like the \"kill 'STOP'\" in the script didn't suspend the\n> > processes before advancing WAL. The attached uses 'ps' command to\n> > check that since I didn't come up with the way to do the same in Perl.\n> \n> Ah! so we tell the kernel to send the signal, but there's no guarantee\n> about the timing for the reaction from the other process. 
Makes sense.\n\nAgreed.\n\n> Your proposal is to examine the other process' state until we see that\n> it gets the T flag. I wonder how portable this is; I suspect not very.\n> `ps` is pretty annoying, meaning not consistently implemented -- GNU's\n> manpage says there are \"UNIX options\", \"BSD options\" and \"GNU long\n> options\", so it seems hard to believe that there is one set of options\n> that will work everywhere.\n\nI like this, and it's the most-robust way. I agree there's no portable way,\nso I'd modify it to be fail-open. Run a \"ps\" command that works on the OP's\nsystem. If the output shows the process in a state matching [DRS], we can\nconfidently sleep a bit for signal delivery to finish. If the command fails\nor prints something else (including state T, which we need check explicitly),\nassume SIGSTOP delivery is complete. If some other platform shows this race\nin the future, we can add an additional \"ps\" command.\n\nIf we ever get the \"stop events\" system\n(https://postgr.es/m/flat/CAPpHfdtSEOHX8dSk9Qp+Z++i4BGQoffKip6JDWngEA+g7Z-XmQ@mail.gmail.com),\nit would be useful for crafting this kind of test without problem seen here.\n\n> I found a Perl module (Proc::ProcessTable) that can be used to get the\n> list of processes and their metadata, but it isn't in core Perl and it\n> doesn't look very well maintained either, so that one's out.\n\nAgreed, that one's out.\n\n> Another option might be to wait on the kernel -- do something that would\n> involve the kernel taking action on the other process, acting like a\n> barrier of sorts. I don't know if this actually works, but we could\n> try. 
Something like sending SIGSTOP first, then \"kill 0\" -- or just\n> send SIGSTOP twice:\n> \n> diff --git a/src/test/recovery/t/019_replslot_limit.pl b/src/test/recovery/t/019_replslot_limit.pl\n> index e065c5c008..e8f323066a 100644\n> --- a/src/test/recovery/t/019_replslot_limit.pl\n> +++ b/src/test/recovery/t/019_replslot_limit.pl\n> @@ -346,6 +346,8 @@ $logstart = get_log_size($node_primary3);\n> # freeze walsender and walreceiver. Slot will still be active, but walreceiver\n> # won't get anything anymore.\n> kill 'STOP', $senderpid, $receiverpid;\n> +kill 'STOP', $senderpid, $receiverpid;\n> +\n> advance_wal($node_primary3, 2);\n> \n> my $max_attempts = 180;\n\nIf this fixes things for the OP, I'd like it slightly better than the \"ps\"\napproach. It's less robust, but I like the brevity.\n\nAnother alternative might be to have walreceiver reach walsender via a proxy\nPerl script. Then, make that proxy able to accept an instruction to pause\npassing data until further notice. However, I like two of your options better\nthan this one.\n\n\n", "msg_date": "Fri, 17 Sep 2021 20:41:00 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Fri, Sep 17, 2021 at 08:41:00PM -0700, Noah Misch wrote:\n> If this fixes things for the OP, I'd like it slightly better than the \"ps\"\n> approach. It's less robust, but I like the brevity.\n> \n> Another alternative might be to have walreceiver reach walsender via a proxy\n> Perl script. Then, make that proxy able to accept an instruction to pause\n> passing data until further notice. However, I like two of your options better\n> than this one.\n\nCould it be possible to rely on a combination of wait events set in WAL\nsenders and pg_stat_replication to assume that a WAL sender is in a\nstopped state? 
I would think about something like that in the top of\nmy mind (perhaps this would need 2 WAL senders, one stopped and one\nstill running):\n1) SIGSTOP WAL sender 1.\n2) Check WAL sender 1 is in WalSenderMain. If not retry 1) after a\nSIGCONT.\n3) Generate some WAL, and look at pg_stat_replication to see if there\nhas been some progress in 1), but that 2) is done.\n--\nMichael", "msg_date": "Sat, 18 Sep 2021 15:32:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On 2021-Sep-18, Michael Paquier wrote:\n\n> Could it be possible to rely on a combination of wait events set in WAL\n> senders and pg_stat_replication to assume that a WAL sender is in a\n> stopped state?\n\nHmm, sounds a possibly useful idea to explore, but I would only do so if\nthe other ideas prove fruitless, because it sounds like it'd have more\nmoving parts. Can you please first test if the idea of sending the signal\ntwice is enough? If that doesn't work, let's try Horiguchi-san's idea\nof using some `ps` flags to find the process.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Sat, 18 Sep 2021 17:19:04 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Sat, Sep 18, 2021 at 05:19:04PM -0300, Alvaro Herrera wrote:\n> Hmm, sounds a possibly useful idea to explore, but I would only do so if\n> the other ideas prove fruitless, because it sounds like it'd have more\n> moving parts. Can you please first test if the idea of sending the signal\n> twice is enough?\n\nThis idea does not work. 
I got one failure after 5 tries.\n\n> If that doesn't work, let's try Horiguchi-san's idea\n> of using some `ps` flags to find the process.\n\nTried this one as well, to see the same failure. I was just looking\nat the state of the test while it was querying pg_replication_slots\nand that was the expected state after the WAL sender received SIGCONT:\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\ntoto 12663 0.0 0.0 5014468 3384 ?? Ss 8:30PM 0:00.00 postgres: primary3: walsender toto [local] streaming 0/720000 \ntoto 12662 0.0 0.0 4753092 3936 ?? Ts 8:30PM 0:00.01 postgres: standby_3: walreceiver streaming 0/7000D8 \n\nThe test gets the right PIDs, as the logs showed:\nok 17 - have walsender pid 12663\nok 18 - have walreceiver pid 12662\n\nSo it does not seem that this is an issue with the signals.\nPerhaps we'd better wait for a checkpoint to complete by for example\nscanning the logs before running the query on pg_replication_slots to\nmake sure that the slot is invalidated?\n--\nMichael", "msg_date": "Mon, 20 Sep 2021 21:12:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On 2021-Sep-20, Michael Paquier wrote:\n\n> > Can you please first test if the idea of sending the signal twice is\n> > enough?\n> \n> This idea does not work. 
I got one failure after 5 tries.\n\nOK, thanks for taking the time to test it.\n\n> > If that doesn't work, let's try Horiguchi-san's idea of using some\n> > `ps` flags to find the process.\n> \n> Tried this one as well, to see the same failure.\n\nHmm, do you mean that you used Horiguchi-san's patch in [1] and the\nfailure still occurred?\n[1] https://postgr.es/m/20210907.120106.1483239898065111540.horikyota.ntt@gmail.com\n\n> I was just looking at the state of the test while it was querying\n> pg_replication_slots and that was the expected state after the WAL\n> sender received SIGCONT:\n> USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\n> toto 12663 0.0 0.0 5014468 3384 ?? Ss 8:30PM 0:00.00 postgres: primary3: walsender toto [local] streaming 0/720000 \n> toto 12662 0.0 0.0 4753092 3936 ?? Ts 8:30PM 0:00.01 postgres: standby_3: walreceiver streaming 0/7000D8 \n> \n> The test gets the right PIDs, as the logs showed:\n> ok 17 - have walsender pid 12663\n> ok 18 - have walreceiver pid 12662\n\nAs I understood, Horiguchi-san's point isn't that the PIDs might be\nwrong -- the point is to make sure that the process is in state T before\nmoving on to the next step in the test.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 20 Sep 2021 09:38:29 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 20, 2021 at 09:38:29AM -0300, Alvaro Herrera wrote:\n> On 2021-Sep-20, Michael Paquier wrote:\n>>> If that doesn't work, let's try Horiguchi-san's idea of using some\n>>> `ps` flags to find the process.\n>> \n>> Tried this one as well, to see the same failure.\n> \n> Hmm, do you mean that you used Horiguchi-san's patch in [1] and the\n> failure still occurred?\n> [1] https://postgr.es/m/20210907.120106.1483239898065111540.horikyota.ntt@gmail.com\n\nYes, that's what I mean.\n--\nMichael", 
"msg_date": "Mon, 20 Sep 2021 22:18:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 20, 2021 at 09:38:29AM -0300, Alvaro Herrera wrote:\n> On 2021-Sep-20, Michael Paquier wrote:\n>> The test gets the right PIDs, as the logs showed:\n>> ok 17 - have walsender pid 12663\n>> ok 18 - have walreceiver pid 12662\n> \n> As I understood, Horiguchi-san's point isn't that the PIDs might be\n> wrong -- the point is to make sure that the process is in state T before\n> moving on to the next step in the test.\n\nI have spent more time on this issue, as it looks that I am the only\none with an environment able to reproduce it (Big Sur 11.6).\n\nAs far as I can see, the states of the WAL sender and receiver are\nfine, after adding some extra debugging with ps called from the test\nitself, and I have checked that they are SIGSTOP'd or SIGCONT'd when a\nfailure shows up.\n\nIn a sequence that passes, we have the following sequence:\n- Start checkpoint.\n- SIGSTOP sent to WAL sender and receiver.\n- Advance WAL (CREATE TABLE, DROP TABLE and pg_switch_wal)\n- Check that WAL sender is stopped\n- SIGCONT on WAL sender.\n- Invalidate the slot, checkpoint completes.\n- Check state of pg_replication_slots.\n- Check that slot invalidation happened in the logs.\n\nA failed sequence starts a checkpoint, but never completes it when\nwe reach the check on pg_replication_slots and the test remains\nstuck so the slot is never invalidated. I have played a bit with the\ntest and switched a bit the location of the test \"slot invalidation\nlogged\" that watches the logs, and the test fails to find the slot\ninvalidation, as a result of the checkpoint not finishing.\n\nTo keep the instance around for debugging, I have just launched an\nextra checkpoint after the SIGCONT sent to the WAL sender. 
It remains\nstuck as an effect of the first one:\n kill 'CONT', $senderpid;\n+$node_primary3->safe_psql('postgres', 'checkpoint;');\n\nWith that, I am able to grab the checkpointer of primary3 to see where\nit is waiting:\n * frame #0: 0x00007fff204f8c4a libsystem_kernel.dylib`kevent + 10\n frame #1: 0x0000000105a81a43 postgres`WaitEventSetWaitBlock(set=0x00007fb87f008748, cur_timeout=-1, occurred_events=0x00007ffeea765400, nevents=1) at latch.c:1601:7\n frame #2: 0x0000000105a80fd0 postgres`WaitEventSetWait(set=0x00007fb87f008748, timeout=-1, occurred_events=0x00007ffeea765400, nevents=1, wait_event_info=134217769) at latch.c:1396:8\n frame #3: 0x0000000105a80b46 postgres`WaitLatch(latch=0x00000001069ae7a4, wakeEvents=33, timeout=-1, wait_event_info=134217769) at latch.c:473:6\n frame #4: 0x0000000105a97011 postgres`ConditionVariableTimedSleep(cv=0x00000001069d8860, timeout=-1, wait_event_info=134217769) at condition_variable.c:163:10\n frame #5: 0x0000000105a96f32 postgres`ConditionVariableSleep(cv=0x00000001069d8860, wait_event_info=134217769) at condition_variable.c:100:9\n frame #6: 0x0000000105a299cf postgres`InvalidatePossiblyObsoleteSlot(s=0x00000001069d8780, oldestLSN=8388608, invalidated=0x00007ffeea76559f) at slot.c:1264:4\n frame #7: 0x0000000105a296bd postgres`InvalidateObsoleteReplicationSlots(oldestSegno=8) at slot.c:1333:7\n frame #8: 0x00000001055edbe6 postgres`CreateCheckPoint(flags=192) at xlog.c:9275:6\n frame #9: 0x00000001059b753d postgres`CheckpointerMain at checkpointer.c:448:5\n frame #10: 0x00000001059b470d postgres`AuxiliaryProcessMain(auxtype=CheckpointerProcess) at auxprocess.c:153:4\n frame #11: 0x00000001059c8912 postgres`StartChildProcess(type=CheckpointerProcess) at postmaster.c:5498:3\n frame #12: 0x00000001059c68fe postgres`PostmasterMain(argc=4, argv=0x00007fb87e505400) at postmaster.c:1458:21\n frame #13: 0x000000010589e1bf postgres`main(argc=4, argv=0x00007fb87e505400) at main.c:198:3\n frame #14: 0x00007fff20544f3d 
libdyld.dylib`start + 1\n\nSo there is really something fishy here IMO, something else than just\na test mis-design and it looks like a race condition, perhaps around\nInvalidateObsoleteReplicationSlots().\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 22 Sep 2021 16:27:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Wed, Sep 22, 2021 at 12:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 20, 2021 at 09:38:29AM -0300, Alvaro Herrera wrote:\n> > On 2021-Sep-20, Michael Paquier wrote:\n> >> The test gets the right PIDs, as the logs showed:\n> >> ok 17 - have walsender pid 12663\n> >> ok 18 - have walreceiver pid 12662\n> >\n> > As I understood, Horiguchi-san's point isn't that the PIDs might be\n> > wrong -- the point is to make sure that the process is in state T before\n> > moving on to the next step in the test.\n>\n> I have spent more time on this issue, as it looks that I am the only\n> one with an environment able to reproduce it (Big Sur 11.6).\n>\n> As far as I can see, the states of the WAL sender and receiver are\n> fine, after adding some extra debugging with ps called from the test\n> itself, and I have checked that they are SIGSTOP'd or SIGCONT'd when a\n> failure shows up.\n>\n> In a sequence that passes, we have the following sequence:\n> - Start checkpoint.\n> - SIGSTOP sent to WAL sender and receiver.\n> - Advance WAL (CREATE TABLE, DROP TABLE and pg_switch_wal)\n> - Check that WAL sender is stopped\n> - SIGCONT on WAL sender.\n>\n\nAm I understanding correctly that after sending SIGCONT to the WAL\nsender, the checkpoint's SIGTERM signal for the WAL sender is received\nand it releases the slot and terminates itself?\n\n> - Invalidate the slot, checkpoint completes.\n\nAfter which checkpoint invalidates the slot and completes.\n\nNow, in the failed run, it appears that due to some reason WAL sender\nhas not 
released the slot. Is it possible to see if the WAL sender is\nstill alive when a checkpoint is stuck at ConditionVariableSleep? And\nif it is active, what is its call stack?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 25 Sep 2021 17:12:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Sat, Sep 25, 2021 at 05:12:42PM +0530, Amit Kapila wrote:\n> Now, in the failed run, it appears that due to some reason WAL sender\n> has not released the slot. Is it possible to see if the WAL sender is\n> still alive when a checkpoint is stuck at ConditionVariableSleep? And\n> if it is active, what is its call stack?\n\nI got again a failure today, so I have used this occasion to check that\nwhen the checkpoint gets stuck the WAL sender process getting SIGCONT\nis still around, waiting for a write to happen:\n* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP\n frame #0: 0x00007fff20320c4a libsystem_kernel.dylib`kevent + 10\n frame #1: 0x000000010fe50a43 postgres`WaitEventSetWaitBlock(set=0x00007f884d80a690, cur_timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1) at latch.c:1601:7\n frame #2: 0x000000010fe4ffd0 postgres`WaitEventSetWait(set=0x00007f884d80a690, timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1, wait_event_info=100663297) at latch.c:1396:8\n frame #3: 0x000000010fc586c4 postgres`secure_write(port=0x00007f883eb04080, ptr=0x00007f885006a040, len=122694) at be-secure.c:298:3\n frame #4: 0x000000010fc66d81 postgres`internal_flush at pqcomm.c:1352:7\n frame #5: 0x000000010fc665b9 postgres`internal_putbytes(s=\"E, len=1) at pqcomm.c:1298:8\n frame #6: 0x000000010fc66be3 postgres`socket_putmessage(msgtype='E', s=\"SFATAL\", len=112) at pqcomm.c:1479:6\n frame #7: 0x000000010fc67318 postgres`pq_endmessage(buf=0x00007ffee0396118) at pqformat.c:301:9\n frame #8: 0x00000001100a469f 
postgres`send_message_to_frontend(edata=0x000000011030d640) at elog.c:3431:3\n frame #9: 0x00000001100a066d postgres`EmitErrorReport at elog.c:1546:3\n frame #10: 0x000000011009ff99 postgres`errfinish(filename=\"postgres.c\", lineno=3193, funcname=\"ProcessInterrupts\") at elog.c:597:2\n * frame #11: 0x000000010fe8e2f5 postgres`ProcessInterrupts at postgres.c:3191:4\n frame #12: 0x000000010fe0bbe5 postgres`WalSndLoop(send_data=(postgres`XLogSendPhysical at walsender.c:2550)) at walsender.c:2285:3\n frame #13: 0x000000010fe0754f postgres`StartReplication(cmd=0x00007f881d808790) at walsender.c:738:3\n frame #14: 0x000000010fe06149 postgres`exec_replication_command(cmd_string=\"START_REPLICATION SLOT \\\"rep3\\\" 0/700000 TIMELINE 1\") at walsender.c:1652:6\n frame #15: 0x000000010fe91eb8 postgres`PostgresMain(dbname=\"\", username=\"mpaquier\") at postgres.c:4493:12\n\nIt logs its FATAL \"terminating connection due to administrator\ncommand\" coming from ProcessInterrupts(), and then it sits idle on\nClientWrite.\n--\nMichael", "msg_date": "Mon, 27 Sep 2021 15:02:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 27, 2021 at 11:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Sep 25, 2021 at 05:12:42PM +0530, Amit Kapila wrote:\n> > Now, in the failed run, it appears that due to some reason WAL sender\n> > has not released the slot. Is it possible to see if the WAL sender is\n> > still alive when a checkpoint is stuck at ConditionVariableSleep? 
And\n> > if it is active, what is its call stack?\n>\n> I got again a failure today, so I have used this occasion to check that\n> when the checkpoint gets stuck the WAL sender process getting SIGCONT\n> is still around, waiting for a write to happen:\n> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP\n> frame #0: 0x00007fff20320c4a libsystem_kernel.dylib`kevent + 10\n> frame #1: 0x000000010fe50a43 postgres`WaitEventSetWaitBlock(set=0x00007f884d80a690, cur_timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1) at latch.c:1601:7\n> frame #2: 0x000000010fe4ffd0 postgres`WaitEventSetWait(set=0x00007f884d80a690, timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1, wait_event_info=100663297) at latch.c:1396:8\n> frame #3: 0x000000010fc586c4 postgres`secure_write(port=0x00007f883eb04080, ptr=0x00007f885006a040, len=122694) at be-secure.c:298:3\n..\n..\n> frame #15: 0x000000010fe91eb8 postgres`PostgresMain(dbname=\"\", username=\"mpaquier\") at postgres.c:4493:12\n>\n> It logs its FATAL \"terminating connection due to administrator\n> command\" coming from ProcessInterrupts(), and then it sits idle on\n> ClientWrite.\n>\n\nSo, it seems on your machine it has passed the following condition in\nsecure_write:\nif (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN))\n\nIf so, this indicates write failure which seems odd to me and probably\nsomething machine-specific or maybe some different settings in your\nbuild or machine. BTW, if SSL or GSS is enabled that might have caused\nit in some way. 
I think the best way is to debug the secure_write\nduring this occurrence.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 27 Sep 2021 11:53:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 27, 2021 at 11:53:07AM +0530, Amit Kapila wrote:\n> So, it seems on your machine it has passed the following condition in\n> secure_write:\n> if (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN))\n\nYep.\n\n> If so, this indicates write failure which seems odd to me and probably\n> something machine-specific or maybe some different settings in your\n> build or machine. BTW, if SSL or GSS is enabled that might have caused\n> it in some way. I think the best way is to debug the secure_write\n> during this occurrence.\n\nYeah, but we don't use any of them in the context of this test, so\nthis is something on a simple send(), no? Hmm. That would not be the\nfirst issue we see with macos these days with interrupted syscalls...\nAnd actually in this stack I can see that errno gets set to EINTR.\n--\nMichael", "msg_date": "Mon, 27 Sep 2021 15:43:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Mon, Sep 27, 2021 at 12:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 27, 2021 at 11:53:07AM +0530, Amit Kapila wrote:\n> > So, it seems on your machine it has passed the following condition in\n> > secure_write:\n> > if (n < 0 && !port->noblock && (errno == EWOULDBLOCK || errno == EAGAIN))\n>\n> Yep.\n>\n> > If so, this indicates write failure which seems odd to me and probably\n> > something machine-specific or maybe some different settings in your\n> > build or machine. BTW, if SSL or GSS is enabled that might have caused\n> > it in some way. 
I think the best way is to debug the secure_write\n> > during this occurrence.\n>\n> Yeah, but we don't use any of them in the context of this test, so\n> this is something on a simple send(), no? Hmm. That would not be the\n> first issue we see with macos these days with interrupted syscalls...\n> And actually in this stack I can see that errno gets set to EINTR.\n>\n\nIf errno is EINTR, then how would the code pass the above if check as\nit has a condition ((errno == EWOULDBLOCK || errno == EAGAIN))?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 27 Sep 2021 14:27:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On 2021-Sep-27, Michael Paquier wrote:\n\n> I got again a failure today, so I have used this occasion to check that\n> when the checkpoint gets stuck the WAL sender process getting SIGCONT\n> is still around, waiting for a write to happen:\n> * thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP\n> frame #0: 0x00007fff20320c4a libsystem_kernel.dylib`kevent + 10\n> frame #1: 0x000000010fe50a43 postgres`WaitEventSetWaitBlock(set=0x00007f884d80a690, cur_timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1) at latch.c:1601:7\n> frame #2: 0x000000010fe4ffd0 postgres`WaitEventSetWait(set=0x00007f884d80a690, timeout=-1, occurred_events=0x00007ffee0395fd0, nevents=1, wait_event_info=100663297) at latch.c:1396:8\n> frame #3: 0x000000010fc586c4 postgres`secure_write(port=0x00007f883eb04080, ptr=0x00007f885006a040, len=122694) at be-secure.c:298:3\n> frame #4: 0x000000010fc66d81 postgres`internal_flush at pqcomm.c:1352:7\n> frame #5: 0x000000010fc665b9 postgres`internal_putbytes(s=\"E, len=1) at pqcomm.c:1298:8\n> frame #6: 0x000000010fc66be3 postgres`socket_putmessage(msgtype='E', s=\"SFATAL\", len=112) at pqcomm.c:1479:6\n> frame #7: 0x000000010fc67318 postgres`pq_endmessage(buf=0x00007ffee0396118) at 
pqformat.c:301:9\n> frame #8: 0x00000001100a469f postgres`send_message_to_frontend(edata=0x000000011030d640) at elog.c:3431:3\n> frame #9: 0x00000001100a066d postgres`EmitErrorReport at elog.c:1546:3\n> frame #10: 0x000000011009ff99 postgres`errfinish(filename=\"postgres.c\", lineno=3193, funcname=\"ProcessInterrupts\") at elog.c:597:2\n> * frame #11: 0x000000010fe8e2f5 postgres`ProcessInterrupts at postgres.c:3191:4\n> frame #12: 0x000000010fe0bbe5 postgres`WalSndLoop(send_data=(postgres`XLogSendPhysical at walsender.c:2550)) at walsender.c:2285:3\n> frame #13: 0x000000010fe0754f postgres`StartReplication(cmd=0x00007f881d808790) at walsender.c:738:3\n> frame #14: 0x000000010fe06149 postgres`exec_replication_command(cmd_string=\"START_REPLICATION SLOT \\\"rep3\\\" 0/700000 TIMELINE 1\") at walsender.c:1652:6\n> frame #15: 0x000000010fe91eb8 postgres`PostgresMain(dbname=\"\", username=\"mpaquier\") at postgres.c:4493:12\n\nAh, so the problem here is that the walsender is not exiting. That also\ncauses the checkpointer to hang waiting for it. 
I wonder if this is\nrelated to the problem reported in\nhttps://www.postgresql.org/message-id/adce2c09-3bfc-4666-997a-c21991cb1eb1.mengjuan.cmj%40alibaba-inc.com\nA patch was proposed on that thread on September 22nd, can to try with\nthat and see if this problem still reproduces?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n", "msg_date": "Sat, 2 Oct 2021 19:00:01 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" }, { "msg_contents": "On Sat, Oct 02, 2021 at 07:00:01PM -0300, Alvaro Herrera wrote:\n> A patch was proposed on that thread on September 22nd, can to try with\n> that and see if this problem still reproduces?\n\nYes, the failure still shows up, even with a timeout set at 30s which\nis the default of the patch.\n--\nMichael", "msg_date": "Wed, 6 Oct 2021 13:24:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Timeout failure in 019_replslot_limit.pl" } ]
[ { "msg_contents": "Hi hackers,\n\nThere is an ongoing effort of cleaning up orphaned files using undo logs \n[1].\n\nUntil this get implemented and databases get upgraded to the version(s) \nthat will benefit from this ongoing work, one still need to handle \norphaned files (if any).\n\nI started to work on this subject some time ago by creating an extension \nthat is able to list the orphaned files, see [2].\n\nI have in mind to extend the extension capability to provide 4 more APIs to:\n\n * move orphaned files to a dedicated backup directory (that the\n extension will create on the fly)\n * list the files that are part of this backup directory\n * move back the files from this backup directory to their original\n location\n * remove files that are located in the backup directory\n\nThat would help dealing with existing orphaned files in a more secure \nway (means less error prone).\n\nWould that make sense sharing this work with you and later add this \nextension in the contrib directory?\n\nThanks in advance for your feedback,\nBertrand\n\n[1]: https://commitfest.postgresql.org/34/3228/\n\n[2]: https://github.com/bdrouvot/pg_orphaned", "msg_date": "Mon, 6 Sep 2021 08:47:57 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Extension proposal to deal with existing orphaned files" } ]
[ { "msg_contents": "Hi\n\nI met a problem when using logical replication. Maybe it's a bug in logical replication. \nWhen publishing a partition table without replica identity, update\nor delete operation can be successful in some cases.\n\nFor example:\ncreate table tbl1 (a int) partition by range ( a );\ncreate table tbl1_part1 partition of tbl1 for values from (1) to (101);\ncreate table tbl1_part2 partition of tbl1 for values from (101) to (200);\ninsert into tbl1 select generate_series(1, 10);\ndelete from tbl1 where a=1;\ncreate publication pub for table tbl1;\ndelete from tbl1 where a=2;\n \nThe last DELETE statement can be executed successfully, but it should report\nerror message about missing a replica identity.\n\nI found this problem on HEAD and I could reproduce this problem at PG13 and\nPG14. (Logical replication of partition table was introduced in PG13.)\n\nRegards\nTang\n\n\n", "msg_date": "Mon, 6 Sep 2021 07:58:50 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "[BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "", "msg_date": "Mon, 6 Sep 2021 08:19:34 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Mon, Sep 6, 2021 at 1:49 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Mon, Sep 6, 2021 3:59 PM tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> > I met a problem when using logical replication. 
Maybe it's a bug in logical\n> > replication.\n> > When publishing a partition table without replica identity, update\n> > or delete operation can be successful in some cases.\n> >\n> > For example:\n> > create table tbl1 (a int) partition by range ( a );\n> > create table tbl1_part1 partition of tbl1 for values from (1) to (101);\n> > create table tbl1_part2 partition of tbl1 for values from (101) to (200);\n> > insert into tbl1 select generate_series(1, 10);\n> > delete from tbl1 where a=1;\n> > create publication pub for table tbl1;\n> > delete from tbl1 where a=2;\n> >\n> > The last DELETE statement can be executed successfully, but it should report\n> > error message about missing a replica identity.\n> >\n> > I found this problem on HEAD and I could reproduce this problem at PG13 and\n> > PG14. (Logical replication of partition table was introduced in PG13.)\n\nAdding Amit L and Peter E who were involved in this work (commit:\n17b9e7f9) to see if they have opinions on this matter.\n\n>\n> I can reproduce this bug.\n>\n> I think the reason is it didn't invalidate all the leaf partitions' relcache\n> when add a partitioned table to the publication, so the publication info was\n> not rebuilt.\n>\n> The following code only invalidate the target table:\n> ---\n> PublicationAddTables\n> publication_add_relation\n> /* Invalidate relcache so that publication info is rebuilt. 
*/\n> CacheInvalidateRelcache(targetrel);\n> ---\n>\n> In addition, this problem can happen in both ADD TABLE, DROP\n> TABLE, and SET TABLE cases, so we need to invalidate the leaf partitions'\n> recache in all these cases.\n>\n\nFew comments:\n=============\n {\n@@ -664,7 +673,13 @@ PublicationDropTables(Oid pubid, List *rels, bool\nmissing_ok)\n\n ObjectAddressSet(obj, PublicationRelRelationId, prid);\n performDeletion(&obj, DROP_CASCADE, 0);\n+\n+ relids = GetPubPartitionOptionRelations(relids, PUBLICATION_PART_LEAF,\n+ relid);\n }\n+\n+ /* Invalidate relcache so that publication info is rebuilt. */\n+ InvalidatePublicationRels(relids);\n }\n\nWe already register the invalidation for the main table in\nRemovePublicationRelById which is called via performDeletion. I think\nit is better to perform invalidation for partitions at that place.\nSimilarly is there a reason for not doing invalidations of partitions\nin publication_add_relation()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Sep 2021 09:32:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "From Tues, Sep 7, 2021 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Sep 6, 2021 at 1:49 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I can reproduce this bug.\r\n> >\r\n> > I think the reason is it didn't invalidate all the leaf partitions'\r\n> > relcache when add a partitioned table to the publication, so the\r\n> > publication info was not rebuilt.\r\n> >\r\n> > The following code only invalidate the target table:\r\n> > ---\r\n> > PublicationAddTables\r\n> > publication_add_relation\r\n> > /* Invalidate relcache so that publication info is rebuilt. 
*/\r\n> > CacheInvalidateRelcache(targetrel);\r\n> > ---\r\n> >\r\n> > In addition, this problem can happen in both ADD TABLE, DROP TABLE,\r\n> > and SET TABLE cases, so we need to invalidate the leaf partitions'\r\n> > recache in all these cases.\r\n> >\r\n> \r\n> Few comments:\r\n> =============\r\n> {\r\n> @@ -664,7 +673,13 @@ PublicationDropTables(Oid pubid, List *rels, bool\r\n> missing_ok)\r\n> \r\n> ObjectAddressSet(obj, PublicationRelRelationId, prid);\r\n> performDeletion(&obj, DROP_CASCADE, 0);\r\n> +\r\n> + relids = GetPubPartitionOptionRelations(relids, PUBLICATION_PART_LEAF,\r\n> + relid);\r\n> }\r\n> +\r\n> + /* Invalidate relcache so that publication info is rebuilt. */\r\n> + InvalidatePublicationRels(relids);\r\n> }\r\n> \r\n> We already register the invalidation for the main table in\r\n> RemovePublicationRelById which is called via performDeletion. I think it is\r\n> better to perform invalidation for partitions at that place.\r\n> Similarly is there a reason for not doing invalidations of partitions in\r\n> publication_add_relation()?\r\n\r\nThanks for the comment. I originally intended to reduce the number of invalid\r\nmessage when add/drop serval tables while each table has lots of partitions which\r\ncould exceed the MAX_RELCACHE_INVAL_MSGS. But that seems a rare case, so ,\r\nI changed the code as suggested.\r\n\r\nAttach new version patches which addressed the comment.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 7 Sep 2021 06:08:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "> > > ---\r\n> > > PublicationAddTables\r\n> > > publication_add_relation\r\n> > > /* Invalidate relcache so that publication info is rebuilt. 
*/\r\n> > > CacheInvalidateRelcache(targetrel);\r\n> > > ---\r\n> > >\r\n> > > In addition, this problem can happen in both ADD TABLE, DROP TABLE,\r\n> > > and SET TABLE cases, so we need to invalidate the leaf partitions'\r\n> > > recache in all these cases.\r\n> > >\r\n> >\r\n> > Few comments:\r\n> > =============\r\n> > {\r\n> > @@ -664,7 +673,13 @@ PublicationDropTables(Oid pubid, List *rels, bool\r\n> > missing_ok)\r\n> >\r\n> > ObjectAddressSet(obj, PublicationRelRelationId, prid);\r\n> > performDeletion(&obj, DROP_CASCADE, 0);\r\n> > +\r\n> > + relids = GetPubPartitionOptionRelations(relids, PUBLICATION_PART_LEAF,\r\n> > + relid);\r\n> > }\r\n> > +\r\n> > + /* Invalidate relcache so that publication info is rebuilt. */\r\n> > + InvalidatePublicationRels(relids);\r\n> > }\r\n> >\r\n> > We already register the invalidation for the main table in\r\n> > RemovePublicationRelById which is called via performDeletion. I think it is\r\n> > better to perform invalidation for partitions at that place.\r\n> > Similarly is there a reason for not doing invalidations of partitions in\r\n> > publication_add_relation()?\r\n> \r\n> Thanks for the comment. I originally intended to reduce the number of invalid\r\n> message when add/drop serval tables while each table has lots of partitions which\r\n> could exceed the MAX_RELCACHE_INVAL_MSGS. But that seems a rare case, so ,\r\n> I changed the code as suggested.\r\n> \r\n> Attach new version patches which addressed the comment.\r\n\r\nThanks for your patch. 
I confirmed that the problem I reported was fixed.\r\n\r\nBesides, Your v2 patch also fixed an existing a problem about \"DROP PUBLICATION\" on HEAD.\r\n(Publication was dropped but it still reported errors about replica identity when trying to\r\nupdate or delete a partition table.)\r\n\r\nRegards\r\nTang\r\n", "msg_date": "Tue, 7 Sep 2021 07:35:17 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Tues, Sep 7, 2021 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Sep 6, 2021 at 1:49 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > I can reproduce this bug.\n> > >\n> > > I think the reason is it didn't invalidate all the leaf partitions'\n> > > relcache when add a partitioned table to the publication, so the\n> > > publication info was not rebuilt.\n> > >\n> > > The following code only invalidate the target table:\n> > > ---\n> > > PublicationAddTables\n> > > publication_add_relation\n> > > /* Invalidate relcache so that publication info is rebuilt. */\n> > > CacheInvalidateRelcache(targetrel);\n> > > ---\n> > >\n> > > In addition, this problem can happen in both ADD TABLE, DROP TABLE,\n> > > and SET TABLE cases, so we need to invalidate the leaf partitions'\n> > > recache in all these cases.\n> > >\n> >\n> > Few comments:\n> > =============\n> > {\n> > @@ -664,7 +673,13 @@ PublicationDropTables(Oid pubid, List *rels, bool\n> > missing_ok)\n> >\n> > ObjectAddressSet(obj, PublicationRelRelationId, prid);\n> > performDeletion(&obj, DROP_CASCADE, 0);\n> > +\n> > + relids = GetPubPartitionOptionRelations(relids, PUBLICATION_PART_LEAF,\n> > + relid);\n> > }\n> > +\n> > + /* Invalidate relcache so that publication info is rebuilt. 
*/\n> > + InvalidatePublicationRels(relids);\n> > }\n> >\n> > We already register the invalidation for the main table in\n> > RemovePublicationRelById which is called via performDeletion. I think it is\n> > better to perform invalidation for partitions at that place.\n> > Similarly is there a reason for not doing invalidations of partitions in\n> > publication_add_relation()?\n>\n> Thanks for the comment. I originally intended to reduce the number of invalid\n> message when add/drop serval tables while each table has lots of partitions which\n> could exceed the MAX_RELCACHE_INVAL_MSGS. But that seems a rare case, so ,\n> I changed the code as suggested.\n>\n> Attach new version patches which addressed the comment.\n\nThanks for fixing this issue. The bug gets fixed by the patch, I did\nnot find any issues in my testing.\nI just had one minor comment:\n\nWe could clean the table at the end by calling DROP TABLE testpub_parted2:\n+-- still fail, because parent's publication replicates updates\n+UPDATE testpub_parted2 SET a = 2;\n+ERROR: cannot update table \"testpub_parted2\" because it does not\nhave a replica identity and publishes updates\n+HINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n+ALTER PUBLICATION testpub_forparted DROP TABLE testpub_parted;\n+-- works again, because update is no longer replicated\n+UPDATE testpub_parted2 SET a = 2;\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 14 Sep 2021 20:10:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> >\r\n> > Attach new version patches which addressed the comment.\r\n> \r\n> Thanks for fixing this issue. 
The bug gets fixed by the patch, I did not find any\r\n> issues in my testing.\r\n> I just had one minor comment:\r\n> \r\n> We could clean the table at the end by calling DROP TABLE testpub_parted2:\r\n> +-- still fail, because parent's publication replicates updates UPDATE\r\n> +testpub_parted2 SET a = 2;\r\n> +ERROR: cannot update table \"testpub_parted2\" because it does not\r\n> have a replica identity and publishes updates\r\n> +HINT: To enable updating the table, set REPLICA IDENTITY using ALTER\r\n> TABLE.\r\n> +ALTER PUBLICATION testpub_forparted DROP TABLE testpub_parted;\r\n> +-- works again, because update is no longer replicated UPDATE\r\n> +testpub_parted2 SET a = 2;\r\n\r\nThanks for the comment.\r\nAttach new version patches which clean the table at the end.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 16 Sep 2021 01:45:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Thu, Sep 16, 2021 at 7:15 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\n> > On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>\n> Thanks for the comment.\n> Attach new version patches which clean the table at the end.\n>\n\n+ * For partitioned table contained in the publication, we must\n+ * invalidate all partitions contained in the respective partition\n+ * trees, not just the one explicitly mentioned in the publication.\n\nCan we slightly change the above comment as: \"For the partitioned\ntables, we must invalidate all partitions contained in the respective\npartition hierarchies, not just the one explicitly mentioned in the\npublication. 
This is required because we implicitly publish the child\ntables when the parent table is published.\"\n\nApart from this, the patch looks good to me.\n\nI think we need to back-patch this till v13. What do you think? If\nyes, then can you please prepare and test the patches for\nback-branches? Does anyone else have opinions on back-patching this?\n\nI think this is not a show-stopper bug, so even if we decide to\nback-patch, I will do it next week after 14 RC1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Sep 2021 15:34:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Thursday, September 16, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > > On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Thanks for the comment.\r\n> > Attach new version patches which clean the table at the end.\r\n> >\r\n> \r\n> + * For partitioned table contained in the publication, we must\r\n> + * invalidate all partitions contained in the respective partition\r\n> + * trees, not just the one explicitly mentioned in the publication.\r\n> \r\n> Can we slightly change the above comment as: \"For the partitioned tables, we\r\n> must invalidate all partitions contained in the respective partition hierarchies,\r\n> not just the one explicitly mentioned in the publication. This is required\r\n> because we implicitly publish the child tables when the parent table is\r\n> published.\"\r\n> \r\n> Apart from this, the patch looks good to me.\r\n> \r\n> I think we need to back-patch this till v13. What do you think? 
\r\n\r\nI agreed.\r\n\r\nAttach patches for back-branch, each has passed regression tests and pgindent.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 17 Sep 2021 06:06:02 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Fri, Sep 17, 2021 at 11:36 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, September 16, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Thanks for the comment.\n> > > Attach new version patches which clean the table at the end.\n> > >\n> >\n> > + * For partitioned table contained in the publication, we must\n> > + * invalidate all partitions contained in the respective partition\n> > + * trees, not just the one explicitly mentioned in the publication.\n> >\n> > Can we slightly change the above comment as: \"For the partitioned tables, we\n> > must invalidate all partitions contained in the respective partition hierarchies,\n> > not just the one explicitly mentioned in the publication. This is required\n> > because we implicitly publish the child tables when the parent table is\n> > published.\"\n> >\n> > Apart from this, the patch looks good to me.\n> >\n> > I think we need to back-patch this till v13. What do you think?\n>\n> I agreed.\n>\n> Attach patches for back-branch, each has passed regression tests and pgindent.\n>\n\nThanks, your patches look good to me. 
I'll push them sometime next\nweek after Tuesday unless there are any comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Sep 2021 16:07:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Fri, Sep 17, 2021 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks, your patches look good to me. I'll push them sometime next\n> week after Tuesday unless there are any comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 22 Sep 2021 14:40:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Fri, Sep 17, 2021 at 7:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Sep 17, 2021 at 11:36 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > On Thursday, September 16, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > > On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > Thanks for the comment.\n> > > > Attach new version patches which clean the table at the end.\n> > > >\n> > >\n> > > + * For partitioned table contained in the publication, we must\n> > > + * invalidate all partitions contained in the respective partition\n> > > + * trees, not just the one explicitly mentioned in the publication.\n> > >\n> > > Can we slightly change the above comment as: \"For the partitioned tables, we\n> > > must invalidate all partitions contained in the respective partition hierarchies,\n> > > not just the one explicitly mentioned in the publication. 
This is required\n> > > because we implicitly publish the child tables when the parent table is\n> > > published.\"\n> > >\n> > > Apart from this, the patch looks good to me.\n> > >\n> > > I think we need to back-patch this till v13. What do you think?\n> >\n> > I agreed.\n> >\n> > Attach patches for back-branch, each has passed regression tests and pgindent.\n>\n> Thanks, your patches look good to me. I'll push them sometime next\n> week after Tuesday unless there are any comments.\n\nThanks Amit, Tang, and Hou for this.\n\nSorry that I didn't comment on this earlier, but I think either\nGetPubPartitionOptionRelations() or InvalidatePublicationRels()\nintroduced in the commit 4548c76738b should lock the partitions, just\nlike to the parent partitioned table would be, before invalidating\nthem. There may be some hazards to invalidating a relation without\nlocking it.\n\nFor example, maybe add a 'lockmode' parameter to\nGetPubPartitionOptionRelations() which it passes down to\nfind_all_inheritors() instead of NoLock as now. 
And make all sites\nexcept GetPublicationRelations() pass ShareUpdateExclusiveLock for it.\nMaybe like the attached.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Oct 2021 16:09:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Thu, Oct 7, 2021 at 12:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Sep 17, 2021 at 7:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Sep 17, 2021 at 11:36 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > On Thursday, September 16, 2021 6:05 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > > > On Tuesday, September 14, 2021 10:41 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > On Tue, Sep 7, 2021 at 11:38 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > Thanks for the comment.\n> > > > > Attach new version patches which clean the table at the end.\n> > > > >\n> > > >\n> > > > + * For partitioned table contained in the publication, we must\n> > > > + * invalidate all partitions contained in the respective partition\n> > > > + * trees, not just the one explicitly mentioned in the publication.\n> > > >\n> > > > Can we slightly change the above comment as: \"For the partitioned tables, we\n> > > > must invalidate all partitions contained in the respective partition hierarchies,\n> > > > not just the one explicitly mentioned in the publication. This is required\n> > > > because we implicitly publish the child tables when the parent table is\n> > > > published.\"\n> > > >\n> > > > Apart from this, the patch looks good to me.\n> > > >\n> > > > I think we need to back-patch this till v13. What do you think?\n> > >\n> > > I agreed.\n> > >\n> > > Attach patches for back-branch, each has passed regression tests and pgindent.\n> >\n> > Thanks, your patches look good to me. 
I'll push them sometime next\n> > week after Tuesday unless there are any comments.\n>\n> Thanks Amit, Tang, and Hou for this.\n>\n> Sorry that I didn't comment on this earlier, but I think either\n> GetPubPartitionOptionRelations() or InvalidatePublicationRels()\n> introduced in the commit 4548c76738b should lock the partitions, just\n> like to the parent partitioned table would be, before invalidating\n> them. There may be some hazards to invalidating a relation without\n> locking it.\n>\n\nI see your point but then on the same lines didn't the existing code\n\"for all tables\" case (where we call CacheInvalidateRelcacheAll()\nwithout locking all relations) have a similar problem. Also, in your\npatch, you are assuming that the callers of GetPublicationRelations()\nwill lock all the relations but what when it gets called from\nAlterPublicationOptions()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 8 Oct 2021 09:17:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "Hi Amit,\n\nOn Fri, Oct 8, 2021 at 12:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Oct 7, 2021 at 12:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Sorry that I didn't comment on this earlier, but I think either\n> > GetPubPartitionOptionRelations() or InvalidatePublicationRels()\n> > introduced in the commit 4548c76738b should lock the partitions, just\n> > like to the parent partitioned table would be, before invalidating\n> > them. There may be some hazards to invalidating a relation without\n> > locking it.\n>\n> I see your point but then on the same lines didn't the existing code\n> \"for all tables\" case (where we call CacheInvalidateRelcacheAll()\n> without locking all relations) have a similar problem.\n\nThere might be. 
I checked to see how other callers/modules use\nCacheInvalidateRelcacheAll(), though it seems that only the functions\nin publicationcmds.c use it or really was invented in 665d1fad99e for\nuse by publication commands.\n\nMaybe I need to look harder than I've done for any examples of hazard.\n\n> Also, in your\n> patch, you are assuming that the callers of GetPublicationRelations()\n> will lock all the relations but what when it gets called from\n> AlterPublicationOptions()?\n\nAh, my bad. I hadn't noticed that one for some reason.\n\nNow that you mention it, I do find it somewhat concerning (on the\nsimilar grounds as what prompted my previous email) that\nAlterPublicationOptions() does away with any locking on the affected\nrelations.\n\nAnyway, I'll think a bit more about the possible hazards of not doing\nthe locking and will reply again if there's indeed a problem(s) that\nneeds to be fixed.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Oct 2021 12:41:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Wed, Oct 13, 2021 at 9:11 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Amit,\n>\n> On Fri, Oct 8, 2021 at 12:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Oct 7, 2021 at 12:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Sorry that I didn't comment on this earlier, but I think either\n> > > GetPubPartitionOptionRelations() or InvalidatePublicationRels()\n> > > introduced in the commit 4548c76738b should lock the partitions, just\n> > > like to the parent partitioned table would be, before invalidating\n> > > them. 
There may be some hazards to invalidating a relation without\n> > > locking it.\n> >\n> > I see your point but then on the same lines didn't the existing code\n> > \"for all tables\" case (where we call CacheInvalidateRelcacheAll()\n> > without locking all relations) have a similar problem.\n>\n> There might be. I checked to see how other callers/modules use\n> CacheInvalidateRelcacheAll(), though it seems that only the functions\n> in publicationcmds.c use it or really was invented in 665d1fad99e for\n> use by publication commands.\n>\n> Maybe I need to look harder than I've done for any examples of hazard.\n>\n> > Also, in your\n> > patch, you are assuming that the callers of GetPublicationRelations()\n> > will lock all the relations but what when it gets called from\n> > AlterPublicationOptions()?\n>\n> Ah, my bad. I hadn't noticed that one for some reason.\n>\n> Now that you mention it, I do find it somewhat concerning (on the\n> similar grounds as what prompted my previous email) that\n> AlterPublicationOptions() does away with any locking on the affected\n> relations.\n>\n> Anyway, I'll think a bit more about the possible hazards of not doing\n> the locking and will reply again if there's indeed a problem(s) that\n> needs to be fixed.\n>\n\nI think you can try to reproduce the problem via the debugger. You can\nstop before calling GetPubPartitionOptionRelations in\npublication_add_relation() in session-1 and then from another session\n(say session-2) try to delete one of the partition table (without\nreplica identity). Then stop in session-2 somewhere after acquiring\nlock to the corresponding partition relation. Now, continue in\nsession-1 and invalidate the rels and let it complete the command. 
I\nthink session-2 will complete the update without processing the\ninvalidations.\n\nIf the above is true, then, this breaks the following behavior\nspecified in the documentation: \"The tables added to a publication\nthat publishes UPDATE and/or DELETE operations must have REPLICA\nIDENTITY defined. Otherwise, those operations will be disallowed on\nthose tables.\". Also, I think such updates won't be replicated on\nsubscribers as there is no replica identity or primary key column.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:24:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" }, { "msg_contents": "On Mon, Oct 18, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 13, 2021 at 9:11 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Anyway, I'll think a bit more about the possible hazards of not doing\n> > the locking and will reply again if there's indeed a problem(s) that\n> > needs to be fixed.\n> >\n>\n> I think you can try to reproduce the problem via the debugger. You can\n> stop before calling GetPubPartitionOptionRelations in\n> publication_add_relation() in session-1 and then from another session\n> (say session-2) try to delete one of the partition table (without\n> replica identity). Then stop in session-2 somewhere after acquiring\n> lock to the corresponding partition relation. Now, continue in\n> session-1 and invalidate the rels and let it complete the command. I\n> think session-2 will complete the update without processing the\n> invalidations.\n>\n\nIn the last sentence, it should be delete rather than update.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Oct 2021 12:26:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Unexpected action when publishing partition tables" } ]
[ { "msg_contents": "Hi,\n\nSingle lines entered in PSQL interactive-mode, containing just\nwhitespace or an SQL comment (\"--...\"), don't seem to be stored\ncorrectly in the history.\nFor example, such lines are currently prepended to the history of the\nnext command entered, rather than having their own history entry.\nAlso, if HISTCONTROL=ignorespace is in effect, if a line is entered\nthat starts with a space and the rest of the line is whitespace or an\nSQL comment, then it prevents the next command entered from being\nsaved in the history.\n\nI've attached a patch that corrects the behaviour.\nFor the type of lines mentioned, the patch makes the history behave\nmore like Bash history.\n\n[I noticed this problem in PSQL interactive-mode history when typing\nin a long SQL command which I then decided to just comment, using a\n\"--\" prefix, and enter it, to store it in the history, so I could\nlater recall it from the history after first executing some other\ncommands.]\n\n\nBelow are some examples of problem scenarios, and results BEFORE/AFTER\nthe patch is applied:\n\n(1)\n\n<space><ENTER>\nSELECT 1;<ENTER>\n\nBEFORE PATCH:\nResults in a single history entry, with <space> on the 1st line and\n\"SELECT 1;\" on the 2nd line.\nAFTER PATCH:\nResults in two history entries, 1st contains <space> and the 2nd\ncontains \"SELECT 1;\".\n\n\n(2)\n\n-- my comment<ENTER>\nSELECT 1;<ENTER>\n\nBEFORE PATCH:\nResults in a single history entry, containing \"-- my comment\" on the\n1st line and \"SELECT 1;\" on the 2nd line.\nAFTER PATCH:\nResults in two history entries, 1st contains \"-- my comment\" and the\n2nd contains \"SELECT 1;\".\n\n\n(3)\n{--variable=HISTCONTROL=ignorespace}\n\n<space><ENTER>\nSELECT 1;<ENTER>\n\nBEFORE PATCH:\nNo history entry is saved.\nAFTER PATCH:\nResults in one history entry, containing \"SELECT 1;\".\n\n\n(4)\n{--variable=HISTCONTROL=ignorespace}\n\n<space>-- my comment<ENTER>\nSELECT 1;<ENTER>\n\nBEFORE PATCH:\nNo history entry is 
saved.\nAFTER PATCH:\nResults in one history entry, containing \"SELECT 1;\".\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Tue, 7 Sep 2021 00:13:35 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Correct handling of blank/commented lines in PSQL interactive-mode\n history" }, { "msg_contents": "On Mon, Sep 6, 2021 at 7:13 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> I've attached a patch that corrects the behaviour.\n> For the type of lines mentioned, the patch makes the history behave\n> more like Bash history.\n>\n\nI have my doubts that you've really fixed anything here since Bash is a\nline-oriented shell while psql is a statement-oriented one. This is a\nfeature.\n\nWhat you are observing is, I think, a side-effect of that fact that\ncomments cannot terminate statements. That seems reasonable. In short,\nyour BEFORE results make sense and don't require fixing.\n\nDavid J.\n\nOn Mon, Sep 6, 2021 at 7:13 AM Greg Nancarrow <gregn4422@gmail.com> wrote:I've attached a patch that corrects the behaviour.\nFor the type of lines mentioned, the patch makes the history behave\nmore like Bash history.I have my doubts that you've really fixed anything here since Bash is a line-oriented shell while psql is a statement-oriented one.  This is a feature.What you are observing is, I think, a side-effect of that fact that comments cannot terminate statements.  That seems reasonable.  In short, your BEFORE results make sense and don't require fixing.David J.", "msg_date": "Mon, 6 Sep 2021 07:50:15 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Mon, 2021-09-06 at 07:50 -0700, David G. 
Johnston wrote:\n> On Mon, Sep 6, 2021 at 7:13 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > I've attached a patch that corrects the behaviour.\n> > For the type of lines mentioned, the patch makes the history behave\n> > more like Bash history.\n> \n> I have my doubts that you've really fixed anything here since Bash is a\n> line-oriented shell while psql is a statement-oriented one.  This is a feature.\n> What you are observing is, I think, a side-effect of that fact that\n> comments cannot terminate statements.  That seems reasonable.\n> In short, your BEFORE results make sense and don't require fixing.\n\nI think that psql's behavior should be governed more by usefulness than\nby considerations like \"comments cannot terminate statements\".\n\nI agree with Greg that the current behavior is annoying and would\nwelcome the change.  This has bothered me before.\n\nThat multi-line statements that contain a line with a space are omitted\nfrom the history when HISTCONTROL is set to \"ignorespace\" seems like\na bug to me.\n\nSo +1 on the idea of the patch, although I didn't scrutinize the\nimplementation.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 06 Sep 2021 18:02:45 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Mon, Sep 06, 2021 at 07:50:15AM -0700, David G. Johnston wrote:\n> On Mon, Sep 6, 2021 at 7:13 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > I've attached a patch that corrects the behaviour.\n> > For the type of lines mentioned, the patch makes the history behave\n> > more like Bash history.\n\nThe behavior of bash is configurable here:\n\n|HISTCONTROL\n| A colon-separated list of values controlling how commands are saved on the history list. 
If the list of values includes ignorespace, lines which begin with a space character are not saved in the history\n| list....\n\n> I have my doubts that you've really fixed anything here since Bash is a\n> line-oriented shell while psql is a statement-oriented one. This is a\n> feature.\n\nHm, I don't think bash is \"line oriented\" ? You can type anything into it that\nyou'd put in a shell script. For example:\n\n$ for a in `seq 1 3`\n> do\n> echo $a\n> done\n1\n2\n3\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Sep 2021 14:37:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On 2021-Sep-06, Laurenz Albe wrote:\n\n> I agree with Greg that the current behavior is annoying and would\n> welcome the change. This has bothered me before.\n\nIt has bothered me too. I am particularly bothered by the uselessness\nthat M-# results in -- namely, inserting a # at the start of the buffer.\nThis is quite useless: in bash, the # starts a comment, so M-# makes the\nentry a comment; but in psql it doesn't have that effect. If somebody\nwere to make M-# make the buffer a /* ... */ comment, it would become\nquite handy.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 6 Sep 2021 21:28:53 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Sep-06, Laurenz Albe wrote:\n>> I agree with Greg that the current behavior is annoying and would\n>> welcome the change. This has bothered me before.\n\n> It has bothered me too.\n\nI'm not here to claim that the current behavior is perfect. 
However,\nAFAICT the patch as-presented breaks the principle that text goes into\nthe history at the moment it's sent to the server. In particular, we\nmight make an entry for text that *never* got to the server because you\ncleared the buffer instead. I don't find that to be an improvement.\nIt breaks one of the primary use-cases for history, ie keeping a record\nof what you did.\n\nWe could perhaps finesse that point by deciding that comment lines\nthat are handled this way will never be sent to the server --- but\nI'm sure people will complain about that, too. I've definitely heard\npeople complain because \"--\" comments are stripped from what's sent\n(so I'd look favorably on a patch to undo that).\n\nI think the questions around empty-line handling are largely\northogonal to this, and we'll just confuse ourselves if we\ndiscuss that at the same time. Likewise for M-#.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 14:50:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "[ this is a digression from the main point of the thread, but ... ]\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I am particularly bothered by the uselessness\n> that M-# results in -- namely, inserting a # at the start of the buffer.\n\nFixing that might be as simple as the attached. I've not beat on\nit hard though.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Sep 2021 15:16:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On 07.09.21 21:16, Tom Lane wrote:\n> [ this is a digression from the main point of the thread, but ... 
]\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I am particularly bothered by the uselessness\n>> that M-# results in -- namely, inserting a # at the start of the buffer.\n> \n> Fixing that might be as simple as the attached. I've not beat on\n> it hard though.\n\nI see this in my .inputrc, although I don't remember when/how I put it \nthere:\n\n$if psql\nset comment-begin --\nset expand-tilde on\n$endif\n\n\n", "msg_date": "Fri, 17 Sep 2021 11:11:35 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "I wrote:\n> We could perhaps finesse that point by deciding that comment lines\n> that are handled this way will never be sent to the server --- but\n> I'm sure people will complain about that, too. I've definitely heard\n> people complain because \"--\" comments are stripped from what's sent\n> (so I'd look favorably on a patch to undo that).\n\nIn hopes of moving this thread forward, I experimented with doing that\nbit, basically by simplifying the {whitespace} rule in psqlscan.l\nto be just \"ECHO;\". 
That caused massive regression test failures,\nof which this'll do as a sample:\n\n --\n -- This should fail\n --\n copy (select * from test1) (t,id) to stdout;\n ERROR: syntax error at or near \"(\"\n-LINE 1: copy (select * from test1) (t,id) to stdout;\n+LINE 4: copy (select * from test1) (t,id) to stdout;\n ^\n --\n -- Test JOIN\n\nOf course, the problem is that since we're now including the three \"--\"\nlines in what's sent to the server, it thinks the \"copy\" is on line 4.\nI do not think we want such a behavior change: people don't tend to\nthink that such comments are part of the query.\n\nI then experimented with combining the psqlscan.l change with mainloop.c\nchanges akin to what Greg had proposed, so as to discard leading comments\nat the level of mainloop.c rather than inside the lexer. I didn't have\nmuch luck getting to a behavior that I thought could be acceptable,\nalthough maybe with more sweat it'd be possible.\n\nOne thing I noticed along the line is that because the history mechanism\nrecords raw input lines while psqlscan.l discards dash-dash comments,\nit's already the case that history entries don't match up too well with\nwhat's sent to the server. So I'm not sure that my upthread complaint\nabout that holds water, and I'm less against Greg's original patch than\nI was.\n\nTrying to gather together the various issues mentioned on this thread,\nI see:\n\n* Initial input lines that are blank (whitespace, maybe including a\ncomment) are merged into the next command's history entry; but since\nsaid lines don't give rise to any text sent to the server, there's\nnot really any reason why they couldn't be treated as text to be\nemitted to the history file immediately. This is what Greg originally\nset out to change. 
After my experiments mentioned above, I'm quite\ndoubtful that his patch is correct in detail (I'm afraid that it\nprobably emits stuff too soon in some cases), but it could likely be\nfixed if we could just get agreement that a change of that sort is OK.\n\n* It's not great that dash-dash comments aren't included in what we\nsend to the server. However, changing that is a lot trickier than\nit looks. I think we want to continue suppressing comments that\nprecede the query proper. Including comments that are within the\nquery text (ahead of the trailing semi) is not so hard, but comments\nfollowing the semicolon look too much like comments-ahead-of-the-\nnext-query. Perhaps that issue should be left for another day ...\nalthough it does feel like it interacts with the first point.\n\n* Misbehavior of M-# was also mentioned. Does anyone object to\nthe draft patch I posted for that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Nov 2021 18:30:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Sat, 2021-11-27 at 18:30 -0500, Tom Lane wrote:\n> Trying to gather together the various issues mentioned on this thread,\n> I see:\n> \n> * Initial input lines that are blank (whitespace, maybe including a\n> comment) are merged into the next command's history entry; but since\n> said lines don't give rise to any text sent to the server, there's\n> not really any reason why they couldn't be treated as text to be\n> emitted to the history file immediately.  This is what Greg originally\n> set out to change.  
After my experiments mentioned above, I'm quite\n> doubtful that his patch is correct in detail (I'm afraid that it\n> probably emits stuff too soon in some cases), but it could likely be\n> fixed if we could just get agreement that a change of that sort is OK.\n\nFor me, it is just a mild annoyance to have unrelated comments\npreceding the query be part of the query's history file entry.\nIf that is difficult to improve, I can live with it the way it is.\n\n> * It's not great that dash-dash comments aren't included in what we\n> send to the server.  However, changing that is a lot trickier than\n> it looks.  I think we want to continue suppressing comments that\n> precede the query proper.  Including comments that are within the\n> query text (ahead of the trailing semi) is not so hard, but comments\n> following the semicolon look too much like comments-ahead-of-the-\n> next-query.  Perhaps that issue should be left for another day ...\n> although it does feel like it interacts with the first point.\n\nIf we treat double-dash comments differently from /* */ ones,\nthat is indeed odd. I personally haven't been bothered by it, though.\n\n> * Misbehavior of M-# was also mentioned.  Does anyone object to\n> the draft patch I posted for that?\n\nNo, I think that is a clear improvement.\n\nThere was one other problem mentioned in the original mail, and that\nseems to be the most serious one to me:\n\n> psql\npsql (14.1)\nType \"help\" for help.\n\ntest=> \\set HISTCONTROL ignorespace \ntest=> -- line that starts with space\ntest=> SELECT 42;\n ?column? 
\n══════════\n 42\n(1 row)\n\nNow that query is not added to the history file at all.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:07:48 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> There was one other problem mentioned in the original mail, and that\n> seems to be the most serious one to me:\n> [ HISTCONTROL behavior ]\n\nThe actual behavior of that option (which perhaps isn't adequately\ndocumented) is that it suppresses a history entry if the first\ncharacter of the possibly-multi-line entry is a space. It certainly\ncan't operate on a per-line basis, or you'd be likely to lose chunks\nof a single SQL command, so I think that definition is fine as\nit is (ignoring the whole question of whether the feature is sane\nat all ... but if you don't think so, why would you use it?)\n\nGreg's patch would fix this specifically by ensuring that the line\nwith the space and comment is treated as a separate history entry.\nSo I don't really see that as a separate bug. Or, if you will,\nthe fact that people see it as a bug confirms that such a line\nshould be treated as a separate history entry.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:43:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On 2021-Nov-29, Tom Lane wrote:\n\n> Greg's patch would fix this specifically by ensuring that the line\n> with the space and comment is treated as a separate history entry.\n> So I don't really see that as a separate bug. 
Or, if you will,\n> the fact that people see it as a bug confirms that such a line\n> should be treated as a separate history entry.\n\nI wonder if these things would be easier to deal with or more convenient\nif we thought of -- as starting a line-scoped comment, and /* */ as\nstarting a query-scoped comment, and treat both types differently. That\nis, a -- comment would not be part of the subsequent command (and they\nwould become two separate history entries), but a /* */ comment would be\npart of the command, and so both the comment and the query would be\nsaved as a single history entry.\n\nSo with ignorespace, then you can get either behavior:\n\n /* don't put neither comment nor command in history */\nselect 1;\nand then nothing gets put in history; but if you do\n\n -- don't put this *comment* in history, but do put command\nselect 1;\n\nthen the comment is ignored, but the query does end up in history.\n\n\nPerhaps one problem is how to behave with intra-query -- comments.\nSurely they should not split the command in three parts, but I'm not\nsure to what extent that is possible to implement.\n\nThis doesn't actually fix the issue that Greg was complaining about,\nbecause his problem is precisely the -- comments. 
But would Greg and\nother users be satisfied if we made that distinction?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 29 Nov 2021 12:23:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Mon, 2021-11-29 at 09:43 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > There was one other problem mentioned in the original mail, and that\n> > seems to be the most serious one to me:\n> > [ HISTCONTROL behavior ]\n> \n> The actual behavior of that option (which perhaps isn't adequately\n> documented) is that it suppresses a history entry if the first\n> character of the possibly-multi-line entry is a space.  It certainly\n> can't operate on a per-line basis, or you'd be likely to lose chunks\n> of a single SQL command, so I think that definition is fine as\n> it is (ignoring the whole question of whether the feature is sane\n> at all ... but if you don't think so, why would you use it?)\n> \n> Greg's patch would fix this specifically by ensuring that the line\n> with the space and comment is treated as a separate history entry.\n> So I don't really see that as a separate bug.  Or, if you will,\n> the fact that people see it as a bug confirms that such a line\n> should be treated as a separate history entry.\n\nAh, yes. 
You are right with both the explanation for the behavior\nand stating that it points towards treating leading comments as\nbeing separate from the query.\n\nAnd, thinking about HISTCONTROL, it does not seem sane in the\ncontext of SQL, and I would never use it.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 16:25:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I wonder if these things would be easier to deal with or more convenient\n> if we thought of -- as starting a line-scoped comment, and /* */ as\n> starting a query-scoped comment, and treat both types differently. That\n> is, a -- comment would not be part of the subsequent command (and they\n> would become two separate history entries), but a /* */ comment would be\n> part of the command, and so both the comment and the query would be\n> saved as a single history entry.\n\nThe hack I was fooling with yesterday would have had that effect,\nalthough it was a consequence of the fact that I was too lazy\nto parse slash-star comments ;-).  But basically what I was\ntrying to do was to force a line that was only whitespace\n(possibly plus dash-dash comment) to be treated as a separate\nhistory entry, while not suppressing dash-dash comments\naltogether as the current code does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 10:58:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "After some further hackery, here's a set of patches that I think\nmight be acceptable. 
They're actually fairly independent, although\nthey touch different aspects of the same behavior.\n\n0001 gets rid of psqlscan.l's habit of suppressing dash-dash comments,\nbut only once we have collected some non-whitespace query input.\nThe upshot of this is that dash-dash comments will get sent to the\nserver as long as they are within the query proper, that is after the\nfirst non-whitespace token and before the ending semicolon. Comments\nthat are between queries are still suppressed, because not doing that\nseems to result in far too much behavioral change. As it stands,\nthough, there are just a few regression test result changes.\n\n0002 is a simplified version of Greg's patch. I think we only need\nto look at the state of the query_buf to see if any input has been\ncollected in order to determine if we are within or between queries.\nI'd originally thought this'd need to be a lot more complicated,\nbut as long as psqlscan.l continues to drop pre-query comments,\nthis seems to be enough.\n\n0003 is the same patch I posted before to adjust M-# behavior.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Nov 2021 15:56:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Tue, Nov 30, 2021 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> After some further hackery, here's a set of patches that I think\n> might be acceptable. They're actually fairly independent, although\n> they touch different aspects of the same behavior.\n>\n> 0001 gets rid of psqlscan.l's habit of suppressing dash-dash comments,\n> but only once we have collected some non-whitespace query input.\n> The upshot of this is that dash-dash comments will get sent to the\n> server as long as they are within the query proper, that is after the\n> first non-whitespace token and before the ending semicolon. 
Comments\n> that are between queries are still suppressed, because not doing that\n> seems to result in far too much behavioral change. As it stands,\n> though, there are just a few regression test result changes.\n>\n> 0002 is a simplified version of Greg's patch. I think we only need\n> to look at the state of the query_buf to see if any input has been\n> collected in order to determine if we are within or between queries.\n> I'd originally thought this'd need to be a lot more complicated,\n> but as long as psqlscan.l continues to drop pre-query comments,\n> this seems to be enough.\n>\n> 0003 is the same patch I posted before to adjust M-# behavior.\n>\n\nI did some testing of the patches against the 4 problems that I\noriginally reported, and they fixed all of them.\n0002 is definitely simpler than my original effort.\nThe patches LGTM.\nThanks for working on this.\n(BTW, the patches are in Windows CRLF format, so on Linux at least I\nneeded to convert them using dos2unix so they'd apply using Git)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 30 Nov 2021 10:37:42 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On 2021-Nov-29, Tom Lane wrote:\n\n> After some further hackery, here's a set of patches that I think\n> might be acceptable. 
They're actually fairly independent, although\n> they touch different aspects of the same behavior.\n\nI tried the collection and I find the overall behavior quite convenient.\nI think it works just as I wish it would.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n", "msg_date": "Mon, 29 Nov 2021 20:58:58 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> (BTW, the patches are in Windows CRLF format, so on Linux at least I\n> needed to convert them using dos2unix so they'd apply using Git)\n\nHmm. Applying \"od -c\" to the copy of that message that's in my\nPG list folder shows clearly that there's no \\r in it, nor do\nI see any when I save off the attachment. I suppose this must\nbe an artifact of the way that your MUA treats text attachments;\nor maybe the mail got mangled on its way to you.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 19:08:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Tue, Nov 30, 2021 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > (BTW, the patches are in Windows CRLF format, so on Linux at least I\n> > needed to convert them using dos2unix so they'd apply using Git)\n>\n> Hmm. Applying \"od -c\" to the copy of that message that's in my\n> PG list folder shows clearly that there's no \\r in it, nor do\n> I see any when I save off the attachment. 
I suppose this must\n> be an artifact of the way that your MUA treats text attachments;\n> or maybe the mail got mangled on its way to you.\n>\n\nYeah, sorry, looks like it could be a Gmail issue for me.\nWhen I alternatively downloaded your patches from the pgsql-hackers\narchive, they're in Unix format, as you say.\nAfter a bit of investigation, it seems that patch attachments (like\nyours) with a Context-Type of \"text/x-diff\" download through Gmail in\nCRLF format for me (I'm running a browser on Windows, but my Postgres\ndevelopment environment is in a Linux VM). So those must get converted\nfrom Unix to CRLF format if downloaded using a browser running on\nWindows.\nThe majority of patch attachments (?) seem to have a Context-Type of\n\"application/octet-stream\" or \"text/x-patch\", and these seem to\ndownload raw (in their original Unix format).\nI guess the attachment context-type is varying according to the mail\nclient used for posting.\n\nSorry for the noise.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 30 Nov 2021 12:15:32 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Tue, Nov 30, 2021 at 12:15 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Yeah, sorry, looks like it could be a Gmail issue for me.\n> When I alternatively downloaded your patches from the pgsql-hackers\n> archive, they're in Unix format, as you say.\n> After a bit of investigation, it seems that patch attachments (like\n> yours) with a Context-Type of \"text/x-diff\" download through Gmail in\n> CRLF format for me (I'm running a browser on Windows, but my Postgres\n> development environment is in a Linux VM). So those must get converted\n> from Unix to CRLF format if downloaded using a browser running on\n> Windows.\n> The majority of patch attachments (?) 
seem to have a Context-Type of\n> \"application/octet-stream\" or \"text/x-patch\", and these seem to\n> download raw (in their original Unix format).\n> I guess the attachment context-type is varying according to the mail\n> client used for posting.\n>\n\nOops, typos, I meant to say \"Content-Type\".\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 30 Nov 2021 13:21:47 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> After a bit of investigation, it seems that patch attachments (like\n> yours) with a Context-Type of \"text/x-diff\" download through Gmail in\n> CRLF format for me (I'm running a browser on Windows, but my Postgres\n> development environment is in a Linux VM). So those must get converted\n> from Unix to CRLF format if downloaded using a browser running on\n> Windows.\n> The majority of patch attachments (?) seem to have a Context-Type of\n> \"application/octet-stream\" or \"text/x-patch\", and these seem to\n> download raw (in their original Unix format).\n\nInteresting. I can probably adjust my MUA to send \"text/x-patch\",\nbut I'll have to look around to see where that's determined.\n(I dislike using \"application/octet-stream\" for this, because\nthe archives won't show that as text, they only let you download\nthe attachment. 
Maybe that's more Safari's fault than the\narchives per se, not sure.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 23:12:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Mon, Nov 29, 2021 at 11:12:35PM -0500, Tom Lane wrote:\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > After a bit of investigation, it seems that patch attachments (like\n> > yours) with a Context-Type of \"text/x-diff\" download through Gmail in\n> > CRLF format for me (I'm running a browser on Windows, but my Postgres\n> > development environment is in a Linux VM). So those must get converted\n> > from Unix to CRLF format if downloaded using a browser running on\n> > Windows.\n> > The majority of patch attachments (?) seem to have a Context-Type of\n> > \"application/octet-stream\" or \"text/x-patch\", and these seem to\n> > download raw (in their original Unix format).\n> \n> Interesting. I can probably adjust my MUA to send \"text/x-patch\",\n> but I'll have to look around to see where that's determined.\n> (I dislike using \"application/octet-stream\" for this, because\n> the archives won't show that as text, they only let you download\n> the attachment. 
Maybe that's more Safari's fault than the\n> archives per se, not sure.)\n\nIt would be interesting to know if \"text/x-patch\" is better than\n\"text/x-diff\" --- I currently use the latter.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 16:26:53 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Nov 29, 2021 at 11:12:35PM -0500, Tom Lane wrote:\n>> Interesting. I can probably adjust my MUA to send \"text/x-patch\",\n>> but I'll have to look around to see where that's determined.\n\n> It would be interesting to know if \"text/x-patch\" is better than\n> \"text/x-diff\" --- I currently use the latter.\n\nI found out that where that is coming from is \"file -i\", so I'm a\nbit loath to modify it. 
Is there any hard documentation as to why\n\"text/x-patch\" should be preferred?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Nov 2021 16:35:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "On Tue, Nov 30, 2021 at 04:35:13PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, Nov 29, 2021 at 11:12:35PM -0500, Tom Lane wrote:\n> >> Interesting. I can probably adjust my MUA to send \"text/x-patch\",\n> >> but I'll have to look around to see where that's determined.\n> \n> > It would be interesting to know if \"text/x-patch\" is better than\n> > \"text/x-diff\" --- I currently use the latter.\n> \n> I found out that where that is coming from is \"file -i\", so I'm a\n> bit loath to modify it. Is there any hard documentation as to why\n> \"text/x-patch\" should be preferred?\n\nI thought this was happening from /etc/mime.types:\n\n\ttext/x-diff diff patch\n\nThe file extensions 'diff' and 'patch' trigger mime to use text/x-diff\nfor its attachments, at least on Debian. Based on that, I assumed\n\"text/x-diff\" was more standardized than \"text/x-patch\".\n\nHowever, it seems file -i also looks at the contents since a file with a\nsingle word in it is not recognized as a diff:\n\n\t$ git diff > /rtmp/x.diff\n\t$ file -i /rtmp/x.diff\n\t/rtmp/x.diff: text/x-diff; charset=us-ascii\n\t -----------\n\t$ echo test > /rtmp/x.diff\n\t$ file -i /rtmp/x.diff\n\t/rtmp/x.diff: text/plain; charset=us-ascii\n\t ----------\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 16:52:31 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Nov-29, Tom Lane wrote:\n>> After some further hackery, here's a set of patches that I think\n>> might be acceptable. They're actually fairly independent, although\n>> they touch different aspects of the same behavior.\n\n> I tried the collection and I find the overall behavior quite convenient.\n> I think it works just as I wish it would.\n\nHearing no further comments, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Dec 2021 12:26:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correct handling of blank/commented lines in PSQL\n interactive-mode history" } ]
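An editorial aside on the attachment issue in the thread above: the CRLF mangling Greg describes, where a patch downloaded through a Windows browser arrives in "DOS" format, can be detected and undone mechanically. A minimal Python sketch (illustrative only, not part of any project tooling):

```python
def has_crlf(data: bytes) -> bool:
    """Report whether a file uses CRLF ("DOS") line endings."""
    return b"\r\n" in data

def normalize_to_unix(data: bytes) -> bytes:
    """Convert CRLF line endings back to plain LF; LF-only input is untouched."""
    return data.replace(b"\r\n", b"\n")

# A patch that picked up CRLF endings on download, as described above:
mangled = b"--- a/src/foo.c\r\n+++ b/src/foo.c\r\n@@ -1 +1 @@\r\n"
print(has_crlf(mangled))                     # True
print(has_crlf(normalize_to_unix(mangled)))  # False
```

This is the same conversion the `dos2unix` utility performs. Note the blanket replace assumes the patched files themselves use LF endings; a diff of files that genuinely contain CRLF would be damaged by it.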
[ { "msg_contents": "With Letsencrypt now protecting web servers left and right, it makes\nsense to me to just re-use the cert that the server may already have\ninstalled.\n\nI've tested this on debian with the client compiled from the master branch,\nagainst a 13.3 server.\n\nThis is my first patch to postgresql, so I apologize for any process\nerrors. I tried to follow\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nHope this list takes attachments.\n\n-- \ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se <thomas@habets.pp.se>\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;", "msg_date": "Mon, 6 Sep 2021 16:42:07 +0100", "msg_from": "Thomas Habets <thomas@habets.se>", "msg_from_op": true, "msg_subject": "[PATCH] Add `verify-system` sslmode to use system CA pool for server\n cert" }, { "msg_contents": "Thomas Habets <thomas@habets.se> writes:\n> With Letsencrypt now protecting web servers left and right, it makes\n> sense to me to just re-use the cert that the server may already have\n> installed.\n\nI'm confused by your description of this patch. AFAIK, OpenSSL verifies\nagainst the system-wide CA pool by default. Why do we need to do\nanything?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Sep 2021 15:47:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, 6 Sep 2021 20:47:37 +0100, Tom Lane <tgl@sss.pgh.pa.us> said:\n> I'm confused by your description of this patch. AFAIK, OpenSSL verifies\n> against the system-wide CA pool by default. Why do we need to do\n> anything?\n\nExperimentally, no it doesn't. 
Or if it does, then it doesn't verify\nthe CN/altnames of the cert.\n\nsslmode=require allows self-signed and name mismatch.\n\nverify-ca errors out if there is no ~/.postgresql/root.crt. verify-full too.\n\nIt seems that currently postgresql verifies the name if and only if\nverify-full is used, and then only against ~/.postgresql/root.crt CA file.\n\nBut could be that I missed a config option?\n\n--\ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;\n\n\n", "msg_date": "Mon, 6 Sep 2021 15:21:13 -0700", "msg_from": "thomas@habets.se", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 9/6/21 6:21 PM, thomas@habets.se wrote:\n> On Mon, 6 Sep 2021 20:47:37 +0100, Tom Lane <tgl@sss.pgh.pa.us> said:\n>> I'm confused by your description of this patch. AFAIK, OpenSSL verifies\n>> against the system-wide CA pool by default. Why do we need to do\n>> anything?\n> Experimentally, no it doesn't. Or if it does, then it doesn't verify\n> the CN/altnames of the cert.\n>\n> sslmode=require allows self-signed and name mismatch.\n>\n> verify-ca errors out if there is no ~/.postgresql/root.crt. verify-full too.\n>\n> It seems that currently postgresql verifies the name if and only if\n> verify-full is used, and then only against ~/.postgresql/root.crt CA file.\n>\n> But could be that I missed a config option?\n\n\n\nThat's my understanding. But can't you specify a CA cert in the system's\nCA store if necessary? e.g. 
on my Fedora system I think it's\n/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Sep 2021 10:16:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, 7 Sep 2021 15:16:51 +0100, Andrew Dunstan <andrew@dunslane.net> said:\n> can't you specify a CA cert in the system's\n> CA store if necessary? e.g. on my Fedora system I think it's\n> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt\n\nI could, but that seems more like a workaround, where I have to change\nthings around as LetsEncrypt switches to another root (I believe they\nhave in the past, but I'm not sure), or the server decides to switch\nfrom LetsEncrypt to something else. Then all clients need to update.\n\nSuch a decision could actually be made by whoever runs the webserver,\nnot the database, and the database just reuses the cert and gets a\nfree ride for cert renewals.\n\nSo in other words postgresql currently doesn't use the system database\nat all, and the workaround is to find and copy from the system\ndatabase. I agree that is a workaround.\n\nIf you think this is enough of a corner case that the workaround is\nacceptable, or the added complexity of another sslmode setting isn't\nworth fixing this edge case, then I assume you have more knowledge\nabout how postgres is used in the field than I do.\n\nBut it's not just about today. I would hope that now with LE that\nevery user of SSL starts using \"real\" certs. 
Postgres default settings\nimply that most people who even enable SSL will not verify the CA nor\nthe name, which is a shame.\n\n--\ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;\n\n\n", "msg_date": "Tue, 7 Sep 2021 07:57:40 -0700", "msg_from": "thomas@habets.se", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "thomas@habets.se writes:\n> On Tue, 7 Sep 2021 15:16:51 +0100, Andrew Dunstan <andrew@dunslane.net> said:\n>> can't you specify a CA cert in the system's\n>> CA store if necessary? e.g. on my Fedora system I think it's\n>> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt\n\n> I could, but that seems more like a workaround, where I have to change\n> things around as LetsEncrypt switches to another root (I believe they\n> have in the past, but I'm not sure), or the server decides to switch\n> from LetsEncrypt to something else. Then all clients need to update.\n\nI experimented with loading a real (not self-signed, not from a private\nCA) cert into the server, and I can confirm these results when trying\nto use sslmode=verify-ca or sslmode=verify-full:\n\n* libpq fails the connection if ~/.postgresql/root.crt is missing\nor empty.\n\n* If I put an irrelevant cert into ~/.postgresql/root.crt, then\nlibpq reports \"SSL error: certificate verify failed\". So the\nverification insists that the server's cert chain to whatever\nis in root.crt.\n\nThis does seem to make it unreasonably painful to use a real SSL cert\nfor a PG server. If you've gone to the trouble and expense of getting\na real cert, it should not be on you to persuade the clients that\nit's valid. 
I agree with Thomas that copying the system trust store\ninto users' home directories is a completely horrid idea, from both\nthe ease-of-use and maintenance standpoints.\n\nThis is not how I supposed it worked, so I'm coming around to the idea\nthat we need to do something. I don't like the details of Thomas'\nproposal though; specifically I don't see a need to invent a new sslmode\nvalue. I think it should just be \"if ~/.postgresql/root.crt doesn't\nexist, use the system's default trust store\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 11:47:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 9/7/21 10:57 AM, thomas@habets.se wrote:\n> On Tue, 7 Sep 2021 15:16:51 +0100, Andrew Dunstan <andrew@dunslane.net> said:\n>> can't you specify a CA cert in the system's\n>> CA store if necessary? e.g. on my Fedora system I think it's\n>> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt\n> I could, but that seems more like a workaround, where I have to change\n> things around as LetsEncrypt switches to another root (I believe they\n> have in the past, but I'm not sure), or the server decides to switch\n> from LetsEncrypt to something else. Then all clients need to update.\n>\n> Such a decision could actually be made by whoever runs the webserver,\n> not the database, and the database just reuses the cert and gets a\n> free ride for cert renewals.\n>\n> So in other words postgresql currently doesn't use the system database\n> at all, and the workaround is to find and copy from the system\n> database. 
I agree that is a workaround.\n>\n> If you think this is enough of a corner case that the workaround is\n> acceptable, or the added complexity of another sslmode setting isn't\n> worth fixing this edge case, then I assume you have more knowledge\n> about postgres is used in the field than I do.\n>\n> But it's not just about today. I would hope that now with LE that\n> every user of SSL starts using \"real\" certs. Postgres default settings\n> imply that most people who even enable SSL will not verify the CA nor\n> the name, which is a shame.\n\n\nIt would be if it were true, but it's not. In talks I give on\nPostgreSQL+SSL I highly recommend people use verify-full. And the CA\ndoesn't have to be one that's publicly known. We cater for both public\nand private CAs.\n\nYou don't have to copy anything to achieve what you want. Just set the\nsslrootcert parameter of your connection to point to the system file. e.g.\n\npsql \"sslmode=verify-full sslrootcert=/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt ...\"\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Sep 2021 11:52:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> You don't have to copy anything to achieve what you want. Just set the\n> sslrootcert parameter of your connection to point to the system file. e.g.\n\n> psql \"sslmode=verify-full sslrootcert=/etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt ...\"\n\nWhile that does work for me, it seems pretty OS-specific and\nuser-unfriendly. 
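An editorial aside on the point just made: the platform-specific bundle path need not be memorized, because OpenSSL carries compiled-in default trust-store locations that can be queried programmatically. A sketch using Python's ssl module (illustrative; the reported paths vary by platform and build, and none of this is libpq behaviour):

```python
import ssl

# OpenSSL's compiled-in default trust-store locations for this build.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile)  # e.g. /usr/lib/ssl/cert.pem, platform-dependent
print(paths.openssl_capath)  # e.g. /usr/lib/ssl/certs, platform-dependent

# A client context that trusts the system store without naming any file:
# roughly what "use the system's default trust store" means for a client.
ctx = ssl.create_default_context()  # loads system CAs, verifies hostnames
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

The point is only that a TLS stack can locate the system store itself; the user never has to supply the OS-specific path by hand.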
Why should ordinary users need to know that\nmuch about their platform's OpenSSL installation?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 12:48:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 9/7/21 11:47 AM, Tom Lane wrote:\n>\n> This is not how I supposed it worked, \n\n\nThat happens to me more than I usually admit -)\n\n\n> so I'm coming around to the idea\n> that we need to do something. I don't like the details of Thomas'\n> proposal though; specifically I don't see a need to invent a new sslmode\n> value. I think it should just be \"if ~/.postgresql/root.crt doesn't\n> exist, use the system's default trust store\".\n>\n> \t\t\t\n\n\nI agree sslmode is the wrong vehicle.\n\nAn alternative might be to allow a magic value for sslrootcert, say\n\"system\" which would make it go and look in the system's store wherever\nthat is, without the user having to know exactly where. OTOH it would\nrequire that the user knows that the system's store is being used, which\nmight not be a bad thing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Sep 2021 12:50:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/7/21 11:47 AM, Tom Lane wrote:\n>> so I'm coming around to the idea\n>> that we need to do something. I don't like the details of Thomas'\n>> proposal though; specifically I don't see a need to invent a new sslmode\n>> value. 
I think it should just be \"if ~/.postgresql/root.crt doesn't\n>> exist, use the system's default trust store\".\n\n> An alternative might be to allow a magic value for sslrootcert, say\n> \"system\" which would make it go and look in the system's store wherever\n> that is, without the user having to know exactly where. OTOH it would\n> require that the user knows that the system's store is being used, which\n> might not be a bad thing.\n\nYeah, that would mostly fix the usability concern. I guess what it\ncomes down to is whether you think that public or private certs are\nlikely to be the majority use-case in the long run. The shortage of\nprevious requests for this feature says that right now, just about\neveryone is using self-signed or private-CA certs for Postgres\nservers. So it would likely be a long time, if ever, before public-CA\ncerts become the majority use-case.\n\nOn the other hand, even if I'm using a private CA, there's a lot\nto be said for adding its root cert to system-level trust stores\nrather than copying it into individual users' home directories.\nSo I still feel like there's a pretty good case for allowing use\nof the system store to happen by default. (As I said, I'd always\nthought that was *already* what would happen.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 12:58:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, 7 Sept 2021 at 12:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I guess what it\n> comes down to is whether you think that public or private certs are\n> likely to be the majority use-case in the long run. The shortage of\n> previous requests for this feature says that right now, just about\n> everyone is using self-signed or private-CA certs for Postgres\n> servers. 
So it would likely be a long time, if ever, before public-CA\n> certs become the majority use-case.\n\nWell the main thing making public CA certs a pain is precisely tools\nthat are a pain to configure to use public CA certs so it's a bit of a\nchicken and egg problem. Projects like LetsEncrypt are all about\nmaking public CA certs work easily without any additional effort.\n\nHowever I have a different question. Are the system certificates\nintended or general purpose certificates? Do they have their intended\nuses annotated on the certificates? Does SSL Verification have any\nlogic deciding which certificates are appropriate for signing servers?\n\nI ask because the only authority I'm personally aware of is the web\nbrowser consortium that approves signers for web site domains. That's\nwhat web browsers need but I'm not sure those are the same authorities\nthat are appropriate for internal services like databases.\n\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 17 Sep 2021 12:53:58 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> However I have a different question. Are the system certificates\n> intended or general purpose certificates? Do they have their intended\n> uses annotated on the certificates? Does SSL Verification have any\n> logic deciding which certificates are appropriate for signing servers?\n\nAFAIK, once you've stuck a certificate into the system store, it\nwill be trusted by every service on your machine. Most distros\nship system-store contents that are basically just designed for\nweb browsers, because the web is the only widely-applicable use\ncase. 
Like you said, chicken and egg problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Sep 2021 14:53:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Hm. Let's Encrypt's FAQ tells me I'm on the right track with that\nquestion but the distinctions are far more coarse than I was worried\nabout:\n\n\nDoes Let’s Encrypt issue certificates for anything other than SSL/TLS\nfor websites?\n\nLet’s Encrypt certificates are standard Domain Validation\ncertificates, so you can use them for any server that uses a domain\nname, like web servers, mail servers, FTP servers, and many more.\n\nEmail encryption and code signing require a different type of\ncertificate that Let’s Encrypt does not issue.\n\n\nSo it sounds like, at least for SSL connections, we should use the\nsame certificate authorities used to authenticate web sites. If ever\nwe implemented signed extensions, for example, it might require\ndifferent certificates -- I don't know what that means for the SSL\nvalidation rules and the storage for them.\n", "msg_date": "Fri, 17 Sep 2021 17:35:58 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Hi,\n\nI manage a bunch of Postgres servers at Oslo University and we use real ssl\ncerts on all our servers.\n\nI was actually really surprised to discover that the libpq default is\nsslmode=require and that the root cert defaults to a file under the user’s\nhome directory. 
I have been planning to use our management system\n(CFEngine) to globally change the client settings to verify-ca and to use\nthe system trust store.\n\nSo that’s a +1 to use the system cert store for client connections.\n\nI also agree that the proposed patch is not the right way to go as it is\nessentially the same as verify-full, and I think that the correct fix would\nbe to change the default.\n\nThanks\nC\n\n", "msg_date": "Sat, 18 Sep 2021 01:09:49 +0200", "msg_from": "Cameron Murdoch <cam@macaroon.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Sat, 18 Sept 2021 at 00:10, Cameron Murdoch <cam@macaroon.net> wrote:\n\n> I also agree that the proposed patch is not the right way to go as it is\n> essentially the same as verify-full, and I think that the correct fix would\n> be to change the default.\n>\n\nBut these are two changes:\n1. Actually verify against a CA\n2. Actually check the CN/altnames\n\nAnything short of \"verify-full\" is in my view \"not checking\". Even with a\nprivate CA this allows for a lot of lateral movement in an org, as if you\nhave one cert you have them all, for impersonation purposes.\n\nChanging such a default is a big change. Maybe long term it's worth the\nshort term pain, though. 
Long term it'd be the config of least surprise, in\nmy opinion.\nBut note that one has to think about all the settings, such that the\ndefault is not more checking than \"require\", which might also be surprising.\n\nA magic setting of the file to be \"system\" sounds good for my use cases, at\nleast.\n\n\n\n-- \ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se <thomas@habets.pp.se>\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;\n\n\n", "msg_date": "Sat, 18 Sep 2021 11:57:05 +0100", "msg_from": "Thomas Habets <thomas@habets.se>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Sat, 18 Sep 2021 at 12:57, Thomas Habets <thomas@habets.se> wrote:\n\n>\n> But these are two changes:\n> 1. Actually verify against a CA\n> 2. Actually check the CN/altnames\n>\n> Anything short of \"verify-full\" is in my view \"not checking\". Even with a\n> private CA this allows for a lot of lateral movement in an org, as if you\n> have one cert you have them all, for impersonation purposes.\n>\n\n100% agree. I suspect that many postgres users are not completely aware\nthat by default their ssl connections do not check the CA or CN/altnames.\n\n\n> Changing such a default is a big change.\n>\n\nAgreed. It is going to break existing installs that rely on the current\nbehaviour.\n\nThere are two defaults to worry about here:\n\nsslmode=prefer\nsslrootcert=~/.postgresql/root.crt\n\nHaving sslrootcert use the system trust store if ~/.postgresql/root.crt\ndoesn’t exist would seem like a good change.\n\nChanging sslmode to default to something else would most likely break a\nton of existing installations, and there are plenty of use cases where ssl\nisn’t used. Trying ssl first and without afterwards probably is still a\nsensible default. However…\n\nI haven’t completely thought this through, but what if the sslmode=prefer\nlogic was:\n\n1. Try ssl first, with both CA and CN checking (ie same as verify-full)\n2. Print warnings appropriate to what type of ssl connection can be made\n3. If all else fails, try without ssl.\n\nIn other words start with verify-full and downgrade gracefully to prefer,\nbut actually tell the user that this has happened.\n\nEssentially sslmode=prefer is a type of opportunistic encryption. I’m\nsuggesting making it try stronger levels of ssl opportunistically. 
Require,\nverify-ca and verify-full can keep their semantics, or rather, they should\nall try verify-full first and then downgrade (with warnings logged) to the\nlevel they actually enforce.\n\nThanks\nC\n\n", "msg_date": "Sat, 18 Sep 2021 14:20:27 +0200", "msg_from": "Cameron Murdoch <cam@macaroon.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\n\nOn 9/17/21 5:35 PM, Greg Stark wrote:\n> Hm. Let's Encrypt's FAQ tells me I'm on the right track with that\n> question but the distinctions are far more coarse than I was worried\n> about:\n>\n>\n> Does Let’s Encrypt issue certificates for anything other than SSL/TLS\n> for websites?\n>\n> Let’s Encrypt certificates are standard Domain Validation\n> certificates, so you can use them for any server that uses a domain\n> name, like web servers, mail servers, FTP servers, and many more.\n>\n> Email encryption and code signing require a different type of\n> certificate that Let’s Encrypt does not issue.\n\n\n\nPresumably this should be a certificate something like our client certs,\nwhere the subject designates a user id or similar (e.g. an email\naddress) rather than a domain name.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 19 Sep 2021 17:04:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Sat, 2021-09-18 at 14:20 +0200, Cameron Murdoch wrote:\r\n> Having sslrootcert use the system trust store if\r\n> ~/.postgresql/root.crt doesn’t exist would seem like a good change.\r\n\r\nFallback behavior can almost always be exploited given the right\r\ncircumstances. 
IMO, if I've told psql to use a root cert, it really\r\nneeds to do that and not trust anything else.\r\n\r\n> Changing sslmode to default to something else would mostly likely\r\n> break a ton of existing installations, and there are plenty of use\r\n> cases were ssl isn’t used. Trying ssl first and without afterwards\r\n> probably is still a sensible default. However…\r\n\r\nThe discussion on changing the sslmode default behavior seems like it\r\ncan be separated from the use of system certificates. Not to shut down\r\nthat branch of the conversation, but is there enough tentative support\r\nfor an \"sslrootcert=system\" option to move forward with that, while\r\nalso discussing potential changes to the sslmode defaults?\r\n\r\nThe NSS patchset [1] also deals with this problem. FWIW, it currently\r\ntreats an empty ssldatabase setting as \"use the system's (Mozilla's)\r\ntrusted roots\".\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se\r\n", "msg_date": "Wed, 22 Sep 2021 18:36:00 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 9/22/21 2:36 PM, Jacob Champion wrote:\n> On Sat, 2021-09-18 at 14:20 +0200, Cameron Murdoch wrote:\n>> Having sslrootcert use the system trust store if\n>> ~/.postgresql/root.crt doesn’t exist would seem like a good change.\n> Fallback behavior can almost always be exploited given the right\n> circumstances. IMO, if I've told psql to use a root cert, it really\n> needs to do that and not trust anything else.\n>\n>> Changing sslmode to default to something else would mostly likely\n>> break a ton of existing installations, and there are plenty of use\n>> cases were ssl isn’t used. Trying ssl first and without afterwards\n>> probably is still a sensible default. 
However…\n> The discussion on changing the sslmode default behavior seems like it\n> can be separated from the use of system certificates. Not to shut down\n> that branch of the conversation, but is there enough tentative support\n> for an \"sslrootcert=system\" option to move forward with that, while\n> also discussing potential changes to the sslmode defaults?\n>\n> The NSS patchset [1] also deals with this problem. FWIW, it currently\n> treats an empty ssldatabase setting as \"use the system's (Mozilla's)\n> trusted roots\".\n>\n\n\nI think we need to be consistent on this. NSS builds and OpenSSL builds\nshould act the same, mutatis mutandis.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 22 Sep 2021 14:59:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 22 Sep 2021, at 20:59, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I think we need to be consistent on this. NSS builds and OpenSSL builds\n> should act the same, mutatis mutandis.\n\nI 100% agree. Different TLS backends should be able use different truststores\netc but once the server is running they must be identical in terms of how they\ninteract with a connecting client. I've tried hard to match our OpenSSL\nimplementation when hacking on the NSS support, but no doubt I've slipped up\nsomewhere so indepth reviews like what Jacob et.al have done is needed (and\nvery welcome).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 22 Sep 2021 22:12:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Sep 7, 2021 at 12:58:44PM -0400, Tom Lane wrote:\n> Yeah, that would mostly fix the usability concern. 
I guess what it\n> comes down to is whether you think that public or private certs are\n> likely to be the majority use-case in the long run. The shortage of\n> previous requests for this feature says that right now, just about\n> everyone is using self-signed or private-CA certs for Postgres\n> servers. So it would likely be a long time, if ever, before public-CA\n> certs become the majority use-case.\n> \n> On the other hand, even if I'm using a private CA, there's a lot\n> to be said for adding its root cert to system-level trust stores\n> rather than copying it into individual users' home directories.\n> So I still feel like there's a pretty good case for allowing use\n> of the system store to happen by default. (As I said, I'd always\n> thought that was *already* what would happen.)\n\nI don't think public CA's are a good idea for complex setups since\nthey open the ability for an external party to create certificates that\nare trusted by your server's CA, e.g., certificate authentication. 
I\ncan see public certs being useful for default installs where the client\n_only_ wants to verify the server is valid.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 27 Sep 2021 21:09:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, 28 Sep 2021 02:09:11 +0100, Bruce Momjian <bruce@momjian.us> said:\n> I don't think public CA's are not a good idea for complex setups since\n> they open the ability for an external party to create certificates that\n> are trusted by your server's CA, e.g., certificate authentication.\n\nI'm not arguing for, and in fact would argue against, public CA for\nclient certs.\n\nSo that's a separate issue.\n\nNote that mTLS prevents a MITM attack that exposes server data even if\nserver cert is compromised or re-issued, so if the install is using\nclient certs (with private CA) then the public CA for server matters\nmuch less.\n\nYou can end up at the wrong server, yes, and provide data as INSERT,\nbut can't steal or corrupt existing data.\n\nAnd you say for complex setups. Fair enough. But currently I'd say the\ndefault is wrong, and what should be default is not configurable.\n\n--\ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. 
./_\" };\n} me_t;\n\n\n", "msg_date": "Tue, 28 Sep 2021 02:54:39 -0700", "msg_from": "thomas@habets.se", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Sep 28, 2021 at 02:54:39AM -0700, thomas@habets.se wrote:\n> On Tue, 28 Sep 2021 02:09:11 +0100, Bruce Momjian <bruce@momjian.us> said:\n> > I don't think public CA's are not a good idea for complex setups since\n> > they open the ability for an external party to create certificates that\n> > are trusted by your server's CA, e.g., certificate authentication.\n> \n> I'm not arguing for, and in fact would argue against, public CA for\n> client certs.\n> \n> So that's a separate issue.\n> \n> Note that mTLS prevents a MITM attack that exposes server data even if\n> server cert is compromised or re-issued, so if the install is using\n> client certs (with private CA) then the public CA for server matters\n> much less.\n> \n> You can end up at the wrong server, yes, and provide data as INSERT,\n> but can't steal or corrupt existing data.\n> \n> And you say for complex setups. Fair enough. But currently I'd say the\n> default is wrong, and what should be default is not configurable.\n\nAgreed, I think this needs much more discussion and documentation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 17:14:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Oct 4, 2021 at 9:14 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, Sep 28, 2021 at 02:54:39AM -0700, thomas@habets.se wrote:\n> > And you say for complex setups. Fair enough. 
But currently I'd say the\n> > default is wrong, and what should be default is not configurable.\n>\n> Agreed, I think this needs much more discussion and documentation.\n\nI'd like to try to get this conversation started again. To pique\ninterest I've attached a new version of 0001, which implements\n`sslrootcert=system` instead as suggested upthread. In 0002 I went\nfurther and switched the default sslmode to `verify-full` when using\nthe system CA roots, because I feel pretty strongly that anyone\ninterested in using public CA systems is also interested in verifying\nhostnames. (Otherwise, why make the switch?)\n\nNotes:\n- 0001, like Thomas' original patch, uses\nSSL_CTX_set_default_verify_paths(). This will load both a default file\nand a default directory. This is probably what most people want if\nthey're using the system roots -- just give me whatever the local\nsystem wants me to use! -- but sslrootcert currently deals with files\nonly, I think. Is that a problem?\n- The implementation in 0002 goes all the way down to\nconninfo_add_defaults(). Maybe this is overly complex. Should I just\nmake sslmode a derived option, via connectOptions2()?\n\nThanks,\n--Jacob", "msg_date": "Mon, 24 Oct 2022 17:03:23 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, 25 Oct 2022 01:03:23 +0100, Jacob Champion\n<jchampion@timescale.com> said:\n> I'd like to try to get this conversation started again. To pique\n> interest I've attached a new version of 0001, which implements\n> `sslrootcert=system` instead as suggested upthread. In 0002 I went\n> further and switched the default sslmode to `verify-full` when using\n> the system CA roots, because I feel pretty strongly that anyone\n> interested in using public CA systems is also interested in verifying\n> hostnames. 
(Otherwise, why make the switch?)\n\nYeah I agree that not forcing verify-full when using system CAs is a\ngiant foot-gun, and many will stop configuring just until it works.\n\nIs there any argument for not checking hostname when using a CA pool\nfor which literally anyone can create a cert that passes?\n\nIt makes sense for self-signed, or \"don't care\", since that provides\nat least protection against passive attacks, but if someone went out\nof their way to get a third party signed cert, then it doesn't.\n\nOne downside to this approach is that now one option will change the\nvalue of another option. For SSL mode (my rejected patch :-) ) that\nmakes maybe some more sense.\n\nFor users, what is more surprising: A foot-gun that sounds safe, or\none option that overrides another?\n\n--\ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;\n\n\n", "msg_date": "Tue, 25 Oct 2022 13:01:57 +0200", "msg_from": "thomas@habets.se", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 2022-10-25 Tu 07:01, thomas@habets.se wrote:\n> On Tue, 25 Oct 2022 01:03:23 +0100, Jacob Champion\n> <jchampion@timescale.com> said:\n>> I'd like to try to get this conversation started again. To pique\n>> interest I've attached a new version of 0001, which implements\n>> `sslrootcert=system` instead as suggested upthread. In 0002 I went\n>> further and switched the default sslmode to `verify-full` when using\n>> the system CA roots, because I feel pretty strongly that anyone\n>> interested in using public CA systems is also interested in verifying\n>> hostnames. 
(Otherwise, why make the switch?)\n> Yeah I agree that not forcing verify-full when using system CAs is a\n> giant foot-gun, and many will stop configuring just until it works.\n>\n> Is there any argument for not checking hostname when using a CA pool\n> for which literally anyone can create a cert that passes?\n>\n> It makes sense for self-signed, or \"don't care\", since that provides\n> at least protection against passive attacks, but if someone went out\n> of their way to get a third party signed cert, then it doesn't.\n>\n> One downside to this approach is that now one option will change the\n> value of another option. For SSL mode (my rejected patch :-) ) that\n> makes maybe some more sense.\n>\n> For users, what is more surprising: A foot-gun that sounds safe, or\n> one option that overrides another?\n\n\nI don't find too much difficulty in having one option's default depend\non another's value, as long as it's documented.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 25 Oct 2022 10:26:41 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Oct 25, 2022 at 4:01 AM <thomas@habets.se> wrote:\n> Yeah I agree that not forcing verify-full when using system CAs is a\n> giant foot-gun, and many will stop configuring just until it works.\n>\n> Is there any argument for not checking hostname when using a CA pool\n> for which literally anyone can create a cert that passes?\n\nI don't think so. For verify-ca to make any sense, the system CA pool\nwould need to be very strictly curated, and IMO we already have that\nuse case covered today.\n\nIf there are no valuable use cases for weaker checks, then we could go\neven further than my 0002 and just reject any weaker sslmodes\noutright. 
That'd be nice.\n\n--Jacob\n\n\n", "msg_date": "Tue, 25 Oct 2022 13:17:05 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Oct 25, 2022 at 7:26 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I don't find too much difficulty in having one option's default depend\n> on another's value, as long as it's documented.\n\nMy patch is definitely missing the documentation for that part right\nnow; I wanted to get feedback on the approach before wordsmithing too\nmuch.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 25 Oct 2022 13:20:59 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Oct 25, 2022 at 1:20 PM Jacob Champion <jchampion@timescale.com> wrote:\n> I wanted to get feedback on the approach before wordsmithing too\n> much.\n\nI've added this to tomorrow's CF [1]. Thomas, if you get (or already\nhave) a PG community username, I can add you as an author.\n\nThanks,\n--Jacob\n\n[1] https://commitfest.postgresql.org/40/3990/\n\n\n", "msg_date": "Mon, 31 Oct 2022 16:36:16 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, 31 Oct 2022 23:36:16 +0000, Jacob Champion\n<jchampion@timescale.com> said:\n>> I wanted to get feedback on the approach before wordsmithing too\n>> much.\n>\n> I've added this to tomorrow's CF [1]. Thomas, if you get (or already\n> have) a PG community username, I can add you as an author.\n\nSweet. 
I just created an account with username `habets`.\n\n--\ntypedef struct me_s {\n char name[] = { \"Thomas Habets\" };\n char email[] = { \"thomas@habets.se\" };\n char kernel[] = { \"Linux\" };\n char *pgpKey[] = { \"http://www.habets.pp.se/pubkey.txt\" };\n char pgp[] = { \"9907 8698 8A24 F52F 1C2E 87F6 39A4 9EEA 460A 0169\" };\n char coolcmd[] = { \"echo '. ./_&. ./_'>_;. ./_\" };\n} me_t;\n\n\n", "msg_date": "Tue, 1 Nov 2022 13:30:06 +0100", "msg_from": "thomas@habets.se", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Nov 1, 2022 at 5:30 AM <thomas@habets.se> wrote:\n> Sweet. I just created an account with username `habets`.\n\nAdded!\n\nOpenSSL 3.0.0 doesn't get along with one of my new tests:\n\n # Failed test 'sslrootcert=system does not connect with private CA: matches'\n # at /Users/admin/pgsql/src/test/ssl/t/001_ssltests.pl line 453.\n # 'psql: error: connection to server at \"127.0.0.1\", port 56124\nfailed: SSL error: unregistered scheme'\n # doesn't match '(?^:SSL error: certificate verify failed)'\n # Looks like you failed 1 test of 191.\n\nI'm not familiar with \"unregistered scheme\" in this context and will\nneed to dig in.\n\n--Jacob\n\n\n", "msg_date": "Tue, 1 Nov 2022 10:03:29 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Nov 1, 2022 at 10:03 AM Jacob Champion <jchampion@timescale.com> wrote:\n> I'm not familiar with \"unregistered scheme\" in this context and will\n> need to dig in.\n\nUnfortunately I can't reproduce with 3.0.0 on Ubuntu :(\n\nI'm suspicious that it may be related to [1], in which case the\nproblem might be fixed by upgrading to the latest OpenSSL. 
But that's\njust a guess.\n\n--Jacob\n\n[1] https://github.com/openssl/openssl/issues/18691\n\n\n", "msg_date": "Tue, 1 Nov 2022 10:55:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Nov 1, 2022 at 10:55 AM Jacob Champion <jchampion@timescale.com>\nwrote:\n> On Tue, Nov 1, 2022 at 10:03 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > I'm not familiar with \"unregistered scheme\" in this context and will\n> > need to dig in.\n>\n> Unfortunately I can't reproduce with 3.0.0 on Ubuntu :(\n\nSorry, when rereading my own emails I suspect they didn't make much\nsense to readers. The failure I'm talking about is in cfbot [1], on the\nMonterey/Meson build, which is using OpenSSL 3.0.0. I unfortunately\ncannot reproduce this on my own Ubuntu machine.\n\nThere is an additional test failure with LibreSSL, which doesn't appear\nto honor the SSL_CERT_FILE environment variable. This isn't a problem in\nproduction -- if you're using LibreSSL, you'd presumably understand that\nyou can't use that envvar -- but it makes testing difficult, because I\ndon't yet know a way to tell LibreSSL to use a different set of roots\nfor the duration of a test. Has anyone dealt with this before?\n\n> If there are no valuable use cases for weaker checks, then we could go\n> even further than my 0002 and just reject any weaker sslmodes\n> outright. 
That'd be nice.\n\nI plan to take this approach in a future v3, with the opinion that it'd\nbe better for this feature to start life as strict as possible.\n\n--Jacob\n\n[1] https://cirrus-ci.com/task/6176610722775040\n\n\n", "msg_date": "Thu, 3 Nov 2022 16:39:17 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Thu, Nov 3, 2022 at 4:39 PM Jacob Champion <jchampion@timescale.com> wrote:\n> There is an additional test failure with LibreSSL, which doesn't appear\n> to honor the SSL_CERT_FILE environment variable. This isn't a problem in\n> production -- if you're using LibreSSL, you'd presumably understand that\n> you can't use that envvar -- but it makes testing difficult, because I\n> don't yet know a way to tell LibreSSL to use a different set of roots\n> for the duration of a test. Has anyone dealt with this before?\n\nFixed in v3, with a large hammer (configure-time checks). Hopefully\nI've missed a simpler solution.\n\n> > If there are no valuable use cases for weaker checks, then we could go\n> > even further than my 0002 and just reject any weaker sslmodes\n> > outright. That'd be nice.\n\nDone. sslrootcert=system now prevents you from explicitly setting a\nweaker sslmode, to try to cement it as a Do What I Mean sort of\nfeature. If you need something weird then you can still jump through\nthe hoops by setting sslrootcert to a real file, same as today.\n\nThe macOS/OpenSSL 3.0.0 failure is still unfixed.\n\nThanks,\n--Jacob", "msg_date": "Mon, 7 Nov 2022 17:04:14 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Nov 07, 2022 at 05:04:14PM -0800, Jacob Champion wrote:\n> Done. 
sslrootcert=system now prevents you from explicitly setting a\n> weaker sslmode, to try to cement it as a Do What I Mean sort of\n> feature. If you need something weird then you can still jump through\n> the hoops by setting sslrootcert to a real file, same as today.\n> \n> The macOS/OpenSSL 3.0.0 failure is still unfixed.\n\nErr, could you look at that? I am switching the patch as waiting on\nauthor.\n--\nMichael", "msg_date": "Fri, 2 Dec 2022 14:26:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Thu, Dec 1, 2022 at 9:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Nov 07, 2022 at 05:04:14PM -0800, Jacob Champion wrote:\n> > The macOS/OpenSSL 3.0.0 failure is still unfixed.\n>\n> Err, could you look at that? I am switching the patch as waiting on\n> author.\n\nThanks for the nudge -- running with OpenSSL 3.0.7 in CI did not fix\nthe issue. I suspect a problem with our error stack handling...\n\nSeparately from this, our brew cache in Cirrus is extremely out of\ndate. Is there something that's supposed to be running `brew update`\n(or autoupdate) that is stuck or broken?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 2 Dec 2022 09:58:34 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Fri, Dec 2, 2022 at 9:58 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Thanks for the nudge -- running with OpenSSL 3.0.7 in CI did not fix\n> the issue. I suspect a problem with our error stack handling...\n\nIt is a problem with the error queue, but *whose* problem is probably\nup for debate. 
The queue looks like this after SSL_connect() returns:\n\n error:16000069:STORE\nroutines:ossl_store_get0_loader_int:unregistered\nscheme:crypto/store/store_register.c:237:scheme=file\n error:80000002:system library:file_open:No such file or\ndirectory:providers/implementations/storemgmt/file_store.c:269:calling\nstat(/usr/local/etc/openssl@3/certs)\n error:16000069:STORE\nroutines:ossl_store_get0_loader_int:unregistered\nscheme:crypto/store/store_register.c:237:scheme=file\n error:80000002:system library:file_open:No such file or\ndirectory:providers/implementations/storemgmt/file_store.c:269:calling\nstat(/usr/local/etc/openssl@3/certs)\n error:16000069:STORE\nroutines:ossl_store_get0_loader_int:unregistered\nscheme:crypto/store/store_register.c:237:scheme=file\n error:80000002:system library:file_open:No such file or\ndirectory:providers/implementations/storemgmt/file_store.c:269:calling\nstat(/usr/local/etc/openssl@3/certs)\n error:0A000086:SSL\nroutines:tls_post_process_server_certificate:certificate verify\nfailed:ssl/statem/statem_clnt.c:1883:\n\nNote that the error we care about is at the end, not the front.\n\nWe are not the first using Homebrew to run into this, and best I can\ntell, it is a brew-specific bug. The certificate directory that's been\nconfigured isn't actually installed by the formula. (A colleague here\nwas able to verify the same behavior on their local machine, so it's\nnot a Cirrus problem.)\n\nThe confusing \"unrecognized scheme\" message has thrown at least a few\npeople off the scent. That refers to an OpenSSL STORE URI, not the URI\ndescribing the peer. (Why `file://` is considered \"unregistered\" is\nbeyond me, considering the documentation says that file URI support is\nbuilt into libcrypto.) 
From inspection, that error is put onto the\nqueue before checking to see if the certificate directory exists, and\nthen it's popped back off the queue if the directory is found(?!).\nUnfortunately, the directory isn't there for Homebrew, which means we\nget both errors, the first of which is not actually helpful. And then\nit pushes the pair of errors two more times, for reasons I haven't\nbothered looking into yet.\n\nMaybe this is considered an internal error caused by a packaging bug,\nin which case I expect the formula maintainers to ask why it worked\nfor 1.1. Maybe it's a client error because we're not looking for the\nbest error on the queue, in which case I ask how we're supposed to\nknow which error is the most interesting. (I actually kind of know the\nanswer to this -- OpenSSL's builtin clients appear to check the front\nof the queue first, to see if it's an SSL-related error, and then if\nit's not they grab the error at the end of the queue instead. To which\nI ask: *what?*) Maybe clients are expected to present the entirety of\nthe queue. But then, why are three separate copies of the same errors\nspamming the queue? We can't present that.\n\nI'm considering filing an issue with OpenSSL, to see what they suggest\na responsible client should do in this situation...\n\n> Separately from this, our brew cache in Cirrus is extremely out of\n> date. 
Is there something that's supposed to be running `brew update`\n> (or autoupdate) that is stuck or broken?\n\n(If this is eventually considered a bug in the formula, we'll need to\nupdate to get the fix regardless.)\n\n--Jacob\n\n\n", "msg_date": "Mon, 5 Dec 2022 10:53:32 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Dec 5, 2022 at 10:53 AM Jacob Champion <jchampion@timescale.com> wrote:\n> We are not the first using Homebrew to run into this, and best I can\n> tell, it is a brew-specific bug. The certificate directory that's been\n> configured isn't actually installed by the formula. (A colleague here\n> was able to verify the same behavior on their local machine, so it's\n> not a Cirrus problem.)\n\nCorrection -- it is installed, but then it's removed during `brew\ncleanup`. I asked about it over on their discussion board [1].\n\n> (If this is eventually considered a bug in the formula, we'll need to\n> update to get the fix regardless.)\n\nFor now, it's worked around in v4. This should finally get the cfbot\nfully green.\n\n(The \"since diff\" is now in range-diff format; if you use them, let me\nknow if this is more or less useful than before.)\n\nThanks!\n--Jacob\n\n[1] https://github.com/orgs/Homebrew/discussions/4030", "msg_date": "Thu, 8 Dec 2022 15:10:11 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Thu, Dec 8, 2022 at 3:10 PM Jacob Champion <jchampion@timescale.com> wrote:\n> For now, it's worked around in v4. 
This should finally get the cfbot\n> fully green.\n\nCirrus's switch to M1 Macs changed the Homebrew installation path, so\nv5 adjusts the workaround accordingly.\n\n--Jacob", "msg_date": "Tue, 3 Jan 2023 13:06:16 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Huge +1 from me. On Azure we're already using public CAs to sign\ncertificates for our managed postgres offerings[1][2]. Right now, our\ncustomers have to go to the hassle of downloading a specific root cert or\nfinding their OS default location. Neither of these allow us to give users\na simple copy-pastable connection string that uses secure settings. This\nwould change this and make it much easier for our customers to use secure\nconnections to their database.\n\nI have two main questions:\n1. From the rest of the thread it's not entirely clear to me why this patch\ngoes for the sslrootcert=system approach, instead of changing what\nsslrootcert='' means when using verify-full. Like Tom Lane suggested, we\ncould change it to try ~/.postgresql/root.crt and if that doesn't exist\nmake it try the system store, instead of erroring out like it does now when\n~/.postgresql/root.crt doesn't exist. This approach seems nicer to me, as\nit doesn't require introducing another special keyword. It would also\nremove the need for the changing of defaults depending on the value of\nsslrootcert. NOTE: For sslmode=verify-ca we should still error out if\n~/.postgresql/root.crt doesn't exist, because as mentioned upthread it is\ntrivial to get a cert from these CAs.\n\n2. 
Should we allow the same approach with ssl_ca_file on the server side,\nfor client cert validation?\n\n\n[1]:\nhttps://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-connect-tls-ssl\n[2]:\nhttps://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/howto-ssl-connection-security\n\nOn Fri, 6 Jan 2023 at 10:42, Jacob Champion <jchampion@timescale.com> wrote:\n\n> On Thu, Dec 8, 2022 at 3:10 PM Jacob Champion <jchampion@timescale.com>\n> wrote:\n> > For now, it's worked around in v4. This should finally get the cfbot\n> > fully green.\n>\n> Cirrus's switch to M1 Macs changed the Homebrew installation path, so\n> v5 adjusts the workaround accordingly.\n>\n> --Jacob\n>\n
It would also remove the need for the changing of\n> defaults depending on the value of sslrootcert. NOTE: For\n> sslmode=verify-ca we should still error out if ~/.postgresql/root.crt\n> doesn't exist, because as mentioned upthread it is trivial to get a\n> cert from these CAs.\n\n\nOne reason might be that it doesn't give you any way not to fall back on\nthe system store. Maybe that's important, maybe not. I don't know that\nthere would be much extra ease in doing it the other way, you're going\nto have to specify some ssl options anyway.\n\n\n>\n> 2. Should we allow the same approach with ssl_ca_file on the server\n> side, for client cert validation?\n\n\n+1 for doing this, although I think client certs are less likely to have\nbeen issued by a public CA.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 6 Jan 2023 08:49:57 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> One reason might be that it doesn't give you any way not to fall back on\n> the system store.\n\nTo not fall back to the system store you could still provide the exact path\nto the CA cert file.\n\n> +1 for doing this, although I think client certs are less likely to have\n> been issued by a public CA.\n\nI totally agree that it's less likely. And I definitely don't want to block this\npatch on this feature. Especially since configuring your database server\nis much easier than configuring ALL the clients that ever connect to your\ndatabase.\n\nHowever, I would like to give a use case where use public CA signed\nclient authentication can make sense:\nAuthenticating different nodes in a citus cluster to each other. 
If such\nnodes already have a public CA signed certificate for their hostname\nto attest their identity for regular clients, then you can set up client\nside auth on each of the nodes so that each node in the\ncluster can connect as any user to each of the other nodes in\nthe cluster by authenticating with that same certificate.\n\n\n", "msg_date": "Fri, 6 Jan 2023 15:28:03 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 2023-01-06 Fr 09:28, Jelte Fennema wrote:\n>> One reason might be that it doesn't give you any way not to fall back on\n>> the system store.\n> To not fall back to the system store you could still provide the exact path\n> to the CA cert file.\n\n\nI guess. I don't have strong feelings one way or the other about this.\n\n\n>\n>> +1 for doing this, although I think client certs are less likely to have\n>> been issued by a public CA.\n> I totally agree that it's less likely. And I definitely don't want to block this\n> patch on this feature. Especially since configuring your database server\n> is much easier than configuring ALL the clients that ever connect to your\n> database.\n>\n> However, I would like to give a use case where use public CA signed\n> client authentication can make sense:\n> Authenticating different nodes in a citus cluster to each other. If such\n> nodes already have a public CA signed certificate for their hostname\n> to attest their identity for regular clients, then you can set up client\n> side auth on each of the nodes so that each node in the\n> cluster can connect as any user to each of the other nodes in\n> the cluster by authenticating with that same certificate.\n\n\nYeah, I have done that sort of thing with pgbouncer auth using an ident\nmap. 
(There's probably a good case for making ident maps more useful by\nadopting the +role mechanism from pg_hba.conf processing, but that's a\nseparate issue).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 6 Jan 2023 10:18:55 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Fri, Jan 6, 2023 at 2:18 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> Huge +1 from me. On Azure we're already using public CAs to sign certificates for our managed postgres offerings[1][2]. Right now, our customers have to go to the hassle of downloading a specific root cert or finding their OS default location. Neither of these allow us to give users a simple copy-pastable connection string that uses secure settings. This would change this and make it much easier for our customers to use secure connections to their database.\n\nThanks! Timescale Cloud is in the same boat.\n\n> I have two main questions:\n> 1. From the rest of the thread it's not entirely clear to me why this patch goes for the sslrootcert=system approach, instead of changing what sslrootcert='' means when using verify-full. Like Tom Lane suggested, we could change it to try ~/.postgresql/root.crt and if that doesn't exist make it try the system store, instead of erroring out like it does now when ~/.postgresql/root.crt doesn't exist.\n\nI mentioned it briefly upthread, but to expand: For something this\ncritical to security, I don't like solutions that don't say exactly\nwhat they do. 
What does the following connection string mean?\n\n $ psql 'host=example.org sslmode=verify-full'\n\nIf it sometimes means to use root.crt (if one exists) and sometimes to\nuse the system store, then\n1) it's hard to audit the actual behavior without knowing the state of\nthe filesystem, and\n2) if I want to connect to a server using the system store, and *only*\nthe system store, then I have to make sure that the default root.crt\nnever exists. But what if other software on my system relies on it?\n\nIt also provides a bigger target for exploit chains, because I can\nremove somebody's root.crt file and their connections may try to\ncontinue with an effectively weaker verification level instead of\nerroring out. I realize that for many people this is a nonissue (\"if\nyou can delete the root cert, you can probably do much worse\") but IME\narbitrary file deletion vulnerabilities are treated with less concern\nthan arbitrary file writes.\n\nBy contrast,\n\n $ psql 'host=example.org sslrootcert=system sslmode=verify-full'\n\nhas a clear meaning. We'll never use a root.crt.\n\n(A downside of reusing sslrootcert like this is the cross-version\nhazard. The connection string 'host=example.org sslrootcert=system'\nmeans something strong with this patchset, but something very weak to\nlibpq 15 and prior. So clients should probably continue to specify\nsslmode=verify-full explicitly for the foreseeable future.)\n\n> This approach seems nicer to me, as it doesn't require introducing another special keyword.\n\nMaybe I'm being overly aspirational, but one upside to that special\nkeyword is that it's a clear signal that the user wants to use the\npublic CA model. So we have the opportunity to remove footguns\naggressively when we see this mode. In the future we may have further\nopportunities to strengthen sslrootcert=system (OCSP and/or\nmust-staple support?) 
that would be harder to roll out by default if\nwe're just trying to guess what the user wants.\n\n> It would also remove the need for the changing of defaults depending on the value of sslrootcert.\n\nAgreed. Personally I think the benefit of 0002 outweighs its cost, but\nmaybe there's a better way to implement it.\n\n> 2. Should we allow the same approach with ssl_ca_file on the server side, for client cert validation?\n\nI don't know enough about this use case to implement it safely. We'd\nhave to make sure the HBA entry is checking the hostname (so that we\ndo the reverse DNS dance), and I guess we'd need to introduce a new\nclientcert verify-* mode? Also, it seems like server operators are\nmore likely to know exactly which roots they need, at least compared\nto clients. I agree the feature is useful, but I'm not excited about\nattaching it to this patchset.\n\n--Jacob", "msg_date": "Fri, 6 Jan 2023 10:20:23 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Thanks for clarifying your reasoning. I now agree that sslrootcert=system\nis the best option.\n\n> > 2. Should we allow the same approach with ssl_ca_file on the server side, for client cert validation?\n>\n> I don't know enough about this use case to implement it safely. We'd\n> have to make sure the HBA entry is checking the hostname (so that we\n> do the reverse DNS dance), and I guess we'd need to introduce a new\n> clientcert verify-* mode? Also, it seems like server operators are\n> more likely to know exactly which roots they need, at least compared\n> to clients. I agree the feature is useful, but I'm not excited about\n> attaching it to this patchset.\n\nThe main thing would be to have ssl_ca_file=system check against\nthe certs from the system CA store. 
And probably we would want\nto disallow clientcert=verify-ca when ssl_ca_file is set to system.\nOther than that I don't think anything is necessary. I definitely agree\nthat this patch should not be blocked on this. But it seems simple\nenough to implement and imho it would be a bit confusing if ssl_ca_file\ndoes not support the \"system\" value, but sslrootcert does.\n\nI also took a closer look at the code, and the only comment I have is:\n\n> appendPQExpBuffer(&conn->errorMessage,\n\nThese calls can all be replaced by the recently added libpq_append_conn_error\n\nFinally I tested this against a Postgres server I created on Azure and\nthe new value works as expected. The only thing that I think would be\ngood to change is the error message when sslmode=verify-full, and\nsslrootcert is not provided, but ~/.postgresql/root.crt is also not available.\nI think it would be good for the error to mention sslrootcert=system\n\n> psql: error: connection to server at \"xxx.postgres.database.azure.com\" (123.456.789.123), port 5432 failed: root certificate file \"/home/jelte/.postgresql/root.crt\" does not exist\n> Either provide the file or change sslmode to disable server certificate verification.\n\nChanging that last line to something like (maybe removing the part\nabout disabling server certificate verification):\n> Either provide the file using the sslrootcert parameter, or use sslrootcert=system to use the OS certificate store, or change sslmode to disable server certificate verification.\n\nOn Fri, 6 Jan 2023 at 19:20, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Jan 6, 2023 at 2:18 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> >\n> > Huge +1 from me. On Azure we're already using public CAs to sign certificates for our managed postgres offerings[1][2]. Right now, our customers have to go to the hassle of downloading a specific root cert or finding their OS default location. 
Neither of these allow us to give users a simple copy-pastable connection string that uses secure settings. This would change this and make it much easier for our customers to use secure connections to their database.\n>\n> Thanks! Timescale Cloud is in the same boat.\n>\n> > I have two main questions:\n> > 1. From the rest of the thread it's not entirely clear to me why this patch goes for the sslrootcert=system approach, instead of changing what sslrootcert='' means when using verify-full. Like Tom Lane suggested, we could change it to try ~/.postgresql/root.crt and if that doesn't exist make it try the system store, instead of erroring out like it does now when ~/.postgresql/root.crt doesn't exist.\n>\n> I mentioned it briefly upthread, but to expand: For something this\n> critical to security, I don't like solutions that don't say exactly\n> what they do. What does the following connection string mean?\n>\n> $ psql 'host=example.org sslmode=verify-full'\n>\n> If it sometimes means to use root.crt (if one exists) and sometimes to\n> use the system store, then\n> 1) it's hard to audit the actual behavior without knowing the state of\n> the filesystem, and\n> 2) if I want to connect to a server using the system store, and *only*\n> the system store, then I have to make sure that the default root.crt\n> never exists. But what if other software on my system relies on it?\n>\n> It also provides a bigger target for exploit chains, because I can\n> remove somebody's root.crt file and their connections may try to\n> continue with an effectively weaker verification level instead of\n> erroring out. I realize that for many people this is a nonissue (\"if\n> you can delete the root cert, you can probably do much worse\") but IME\n> arbitrary file deletion vulnerabilities are treated with less concern\n> than arbitrary file writes.\n>\n> By contrast,\n>\n> $ psql 'host=example.org sslrootcert=system sslmode=verify-full'\n>\n> has a clear meaning. 
We'll never use a root.crt.\n>\n> (A downside of reusing sslrootcert like this is the cross-version\n> hazard. The connection string 'host=example.org sslrootcert=system'\n> means something strong with this patchset, but something very weak to\n> libpq 15 and prior. So clients should probably continue to specify\n> sslmode=verify-full explicitly for the foreseeable future.)\n>\n> > This approach seems nicer to me, as it doesn't require introducing another special keyword.\n>\n> Maybe I'm being overly aspirational, but one upside to that special\n> keyword is that it's a clear signal that the user wants to use the\n> public CA model. So we have the opportunity to remove footguns\n> aggressively when we see this mode. In the future we may have further\n> opportunities to strengthen sslrootcert=system (OCSP and/or\n> must-staple support?) that would be harder to roll out by default if\n> we're just trying to guess what the user wants.\n>\n> > It would also remove the need for the changing of defaults depending on the value of sslrootcert.\n>\n> Agreed. Personally I think the benefit of 0002 outweighs its cost, but\n> maybe there's a better way to implement it.\n>\n> > 2. Should we allow the same approach with ssl_ca_file on the server side, for client cert validation?\n>\n> I don't know enough about this use case to implement it safely. We'd\n> have to make sure the HBA entry is checking the hostname (so that we\n> do the reverse DNS dance), and I guess we'd need to introduce a new\n> clientcert verify-* mode? Also, it seems like server operators are\n> more likely to know exactly which roots they need, at least compared\n> to clients. 
I agree the feature is useful, but I'm not excited about\n> attaching it to this patchset.\n>\n> --Jacob\n\n\n", "msg_date": "Mon, 9 Jan 2023 16:07:17 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\nOn 2023-01-09 Mo 10:07, Jelte Fennema wrote:\n> Thanks for clarifying your reasoning. I now agree that ssrootcert=system\n> is now the best option.\n\n\nCool, that looks like a consensus.\n\n\n>\n>>> 2. Should we allow the same approach with ssl_ca_file on the server side, for client cert validation?\n>> I don't know enough about this use case to implement it safely. We'd\n>> have to make sure the HBA entry is checking the hostname (so that we\n>> do the reverse DNS dance), and I guess we'd need to introduce a new\n>> clientcert verify-* mode? Also, it seems like server operators are\n>> more likely to know exactly which roots they need, at least compared\n>> to clients. I agree the feature is useful, but I'm not excited about\n>> attaching it to this patchset.\n\n\nI'm confused. A client cert might not have a hostname at all, and isn't\nused to verify the connecting address, but to verify the username. It\nneeds to have a CN/DN equal to the user name of the connection, or that\nmaps to that name via pg_ident.conf.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 9 Jan 2023 10:40:34 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Jan 9, 2023 at 7:40 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I'm confused. A client cert might not have a hostname at all, and isn't\n> used to verify the connecting address, but to verify the username. 
It\n> needs to have a CN/DN equal to the user name of the connection, or that\n> maps to that name via pg_ident.conf.\n\nRight. But I don't know anything about the security model for using a\npublicly issued server certificate as a client certificate. So if you\ntell me that your only requirement is that the hostname/CN matches an\nentry in your ident file, and that you don't need to verify that the\ncertificate identifying example.org is actually coming from\nexample.org, or do any sort of online revocation processing to help\nmitigate the risks from that, or even handle wildcards or SANs in the\ncert -- fine, but I don't know the right questions to ask to review\nthat case for safety or correctness. It'd be better to ask someone who\nis already comfortable with it.\n\n From my perspective, ssl_ca_file=system sure *looks* like something\nreasonable for me to choose as a DBA, but I'm willing to guess it's\nnot actually reasonable for 99% of people. (If you get your pg_ident\nrule wrong, for example, the number of people who can attack you is\nscoped by the certificates issued by your CA... which for 'system'\nwould be the entire world.) By contrast I would have no problem\nrecommending sslrootcert=system as a default: a certificate can still\nbe misissued, but a would-be attacker would still have to get you to\nconnect to them. 
That model and its risks are, I think, generally well\nunderstood by the community.\n\n--Jacob\n\n\n", "msg_date": "Tue, 10 Jan 2023 13:07:32 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Jan 9, 2023 at 7:07 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> I also took a closer look at the code, and the only comment I have is:\n>\n> > appendPQExpBuffer(&conn->errorMessage,\n>\n> These calls can all be replaced by the recently added libpq_append_conn_error\n\nArgh, thanks for the catch. Fixed.\n\n> Finally I tested this against a Postgres server I created on Azure and\n> the new value works as expected. The only thing that I think would be\n> good to change is the error message when sslmode=verify-full, and\n> sslrootcert is not provided, but ~/.postgresql/root.crt is also not available.\n> I think it would be good for the error to mention sslrootcert=system\n\nGood idea. The wording I chose in v6 is\n\n Either provide the file, use the system's trusted roots with\nsslrootcert=system, or change sslmode to disable server certificate\nverification.\n\nWhat do you think?\n\nThanks!\n--Jacob", "msg_date": "Tue, 10 Jan 2023 15:15:47 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "LGTM. As far as I can tell this is ready for a committer.\n\nOn Wed, 11 Jan 2023 at 00:15, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Mon, Jan 9, 2023 at 7:07 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> > I also took a closer look at the code, and the only comment I have is:\n> >\n> > > appendPQExpBuffer(&conn->errorMessage,\n> >\n> > These calls can all be replaced by the recently added libpq_append_conn_error\n>\n> Argh, thanks for the catch. 
Fixed.\n>\n> > Finally I tested this against a Postgres server I created on Azure and\n> > the new value works as expected. The only thing that I think would be\n> > good to change is the error message when sslmode=verify-full, and\n> > sslrootcert is not provided, but ~/.postgresql/root.crt is also not available.\n> > I think it would be good for the error to mention sslrootcert=system\n>\n> Good idea. The wording I chose in v6 is\n>\n> Either provide the file, use the system's trusted roots with\n> sslrootcert=system, or change sslmode to disable server certificate\n> verification.\n>\n> What do you think?\n>\n> Thanks!\n> --Jacob\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:37:17 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Jan 11, 2023 at 6:37 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> LGTM. As far as I can tell this is ready for a committer.\n\nThanks for the reviews!\n\n--Jacob\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:27:32 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Jan 11, 2023 at 6:27 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Wed, Jan 11, 2023 at 6:37 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> >\n> > LGTM. As far as I can tell this is ready for a committer.\n>\n> Thanks for the reviews!\n>\n\nSorry to jump in (very) late in this game. So first, I like this general\napproach :)\n\nIt feels icky to have to add configure tests just to make a test work. But\nI guess there isn't really a way around that if we want to test the full\nthing.\n\nHowever, shouldn't we be using X509_get_default_cert_file_env() to get the\nname of the env? 
Granted it's unlikely to be anything else, but if it's an\nAPI you're supposed to use. (In an ideal world that function would not\nreturn anything in LibreSSL but I think it does include something, and then\njust ignores it?)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 11 Jan 2023 19:23:23 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Jan 11, 2023 at 10:23 AM Magnus Hagander <magnus@hagander.net> wrote:\n> Sorry to jump in (very) late in this game. So first, I like this general approach :)\n\nThanks!\n\n> It feels icky to have to add configure tests just to make a test work. But I guess there isn't really a way around that if we want to test the full thing.\n\nI agree...\n\n> However, shouldn't we be using X509_get_default_cert_file_env() to get the name of the env? 
Granted it's unlikely to be anything else, but if it's an API you're supposed to use. (In an ideal world that function would not return anything in LibreSSL but I think it does include something, and then just ignores it?)\n\nI think you're right, but before I do that, is the cure better than\nthe disease? It seems like that would further complicate a part of the\nPerl tests that is already unnecessarily complicated. (The Postgres\ncode doesn't use the envvar at all.) Unless you already know of an\nOpenSSL-alike that doesn't use that same envvar name?\n\n--Jacob\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:06:45 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Jan 11, 2023 at 8:06 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Wed, Jan 11, 2023 at 10:23 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > Sorry to jump in (very) late in this game. So first, I like this general\n> approach :)\n>\n> Thanks!\n>\n> > It feels icky to have to add configure tests just to make a test work.\n> But I guess there isn't really a way around that if we want to test the\n> full thing.\n>\n> I agree...\n>\n> > However, shouldn't we be using X509_get_default_cert_file_env() to get\n> the name of the env? Granted it's unlikely to be anything else, but if\n> it's an API you're supposed to use. (In an ideal world that function would\n> not return anything in LibreSSL but I think it does include something, and\n> then just ignores it?)\n>\n> I think you're right, but before I do that, is the cure better than\n> the disease? It seems like that would further complicate a part of the\n> Perl tests that is already unnecessarily complicated. (The Postgres\n> code doesn't use the envvar at all.) Unless you already know of an\n> OpenSSL-alike that doesn't use that same envvar name?\n>\n\nFair point. 
No, I have not run into one, I just recalled having seen the\nAPI :)\n\nAnd you're right -- I didn't consider that we were looking at that one in\nthe *perl* code, not the C code. In the C code it would've been a trivial\nreplacement. In the perl, I agree it's not worth it -- at least not until\nwe run into a platform where it *would* matter.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
", "msg_date": "Wed, 11 Jan 2023 22:58:47 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "FYI the last patch does not apply cleanly anymore. So a rebase is needed.\n\n\n", "msg_date": "Thu, 16 Feb 2023 10:35:07 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Thu, Feb 16, 2023 at 1:35 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> FYI the last patch does not apply cleanly anymore. So a rebase is needed.\n\nThanks for the nudge, v7 rebases over the configure conflict from 9244c11afe.\n\n--Jacob", "msg_date": "Thu, 16 Feb 2023 10:38:25 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On 2/16/23 10:38, Jacob Champion wrote:\n> Thanks for the nudge, v7 rebases over the configure conflict from 9244c11afe.\n\nI think/hope this is well-baked enough for a potential commit this CF,\nso I've adjusted the target version. Let me know if there are any\nconcerns about the approach.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Tue, 28 Feb 2023 15:49:07 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "It does look like a rebase for meson.build would be helpful. 
I'm not\nmarking it waiting on author just for meson.build conflicts but it\nwould be perhaps more likely to be picked up by a committer if it's\nshowing green in cfbot.\n\n\n", "msg_date": "Tue, 14 Mar 2023 14:00:27 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Tue, Mar 14, 2023 at 11:01 AM Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n> It does look like a rebase for meson.build would be helpful. I'm not\n> marking it waiting on author just for meson.build conflicts but it\n> would be perhaps more likely to be picked up by a committer if it's\n> showing green in cfbot.\n\nRebased over yesterday's Meson changes in v8.\n\nThanks!\n--Jacob", "msg_date": "Tue, 14 Mar 2023 12:20:21 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 14 Mar 2023, at 20:20, Jacob Champion <jchampion@timescale.com> wrote:\n\n> Rebased over yesterday's Meson changes in v8.\n\nI had a look at this and agree that it's something we should do. The patch\nseems quite close to committable, I just have a few comments on it:\n\n+ # Let tests differentiate between vanilla OpenSSL and LibreSSL.\n+ AC_CHECK_DECLS([LIBRESSL_VERSION_NUMBER], [], [], [#include <openssl/opensslv.h>])\nWe have a check for SSL_CTX_set_cert_cb which is specifically done since it's\nnot present in Libressl. Rather than spending more cycles in autoconf/meson,\ncouldn't we use HAVE_SSL_CTX_SET_CERT_CB for this test? 
(Longer term, maybe we\nshould make the checks properly distinguish between OpenSSL and LibreSSL as\nthey are diverging, but that's not for this patch to tackle.)\n\n\n+ # brew cleanup removes the empty certs directory in OPENSSLDIR, causing\n+ # OpenSSL to report unexpected errors (\"unregistered scheme\") during\n+ # verification failures. Put it back for now as a workaround.\n+ #\n+ # https://github.com/orgs/Homebrew/discussions/4030\n+ #\n+ # Note that $(brew --prefix openssl) will give us the opt/ prefix but not\n+ # the etc/ prefix, so we hardcode the full path here. openssl@3 is pinned\n+ # above to try to minimize the chances of this changing beneath us, but it's\n+ # brittle...\n+ mkdir -p \"/opt/homebrew/etc/openssl@3/certs\"\nI can agree with the comment that this seems brittle. How about moving the installation of openssl to after the brew cleanup stage to avoid the need for this? While that may leave more in the cache, it seems more palatable. Something like this essentially:\n\n\tbrew install <everything but openssl>\n\tbrew cleanup -s\t\n\t# Comment about why OpenSSL is kept separate\n\tbrew install openssl@3\n\n\n+ libpq_append_conn_error(conn, \"weak sslmode \\\"%s\\\" may not be used with sslrootcert=system\",\n+ conn->sslmode);\nI think we should help the user by indicating which sslmode we allow in this\nmessage.\n\n\n+\n+\t/*\n+\t * sslmode is not specified. 
Let it be filled in with the compiled\n+\t * default for now, but if sslrootcert=system, we'll override the\n+\t * default later before returning.\n+\t */\n+\tsslmode_default = option;\nAs a note to self and other reviewers, "git am" misplaced this when applying the\npatch such that the result was syntactically correct but semantically wrong,\ncausing very weird test errors.\n\n\n+\tsslmode_default->val = strdup("verify-full");\nThis needs to be checked for OOM error.\n\n\n- if (fnbuf[0] != '\\0' &&\n- stat(fnbuf, &buf) == 0)\n+ if (strcmp(fnbuf, "system") == 0)\nI'm not a fan of magic values, but sadly I don't have a better idea for this.\nWe should however document that the keyword takes precedence over a file with\nthe same name (even though the collision is unlikely).\n\n\n+ if (SSL_CTX_set_default_verify_paths(SSL_context) != 1)\nOpenSSL documents this as "A missing default location is still treated as a\nsuccess.", is that something we need to document or in any way deal with?\n(Skimming the OpenSSL code I'm not sure it's actually correct in v3+, but I\nmight very well have missed something.)\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 31 Mar 2023 11:14:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On 3/31/23 02:14, Daniel Gustafsson wrote:\n>> On 14 Mar 2023, at 20:20, Jacob Champion <jchampion@timescale.com> wrote:\n> \n>> Rebased over yesterday's Meson changes in v8.\n> \n> I had a look at this and agree that it's something we should do.\n\nGreat, thanks for the review!\n\n> + # Let tests differentiate between vanilla OpenSSL and LibreSSL.\n> + AC_CHECK_DECLS([LIBRESSL_VERSION_NUMBER], [], [], [#include <openssl/opensslv.h>])\n> We have a check for SSL_CTX_set_cert_cb which is specifically done since it's\n> not present in Libressl. 
Rather than spending more cycles in autoconf/meson,\n> couldn't we use HAVE_SSL_CTX_SET_CERT_CB for this test? (Longer term, maybe we\n> should make the checks properly distinguish between OpenSSL and LibreSSL as\n> they are diverging, but that's not for this patch to tackle.)\n\nI can make that change; note that it'll also skip some of the new tests\nwith OpenSSL 1.0.1, where there's no SSL_CTX_set_cert_cb. If that's\nacceptable, it should be an easy switch.\n\n> I can agree with the comment that this seems brittle. How about moving the installation of openssl to after the brew cleanup stage to avoid the need for this? While that may leave more in the cache, it seems more palatable. Something like this essentially:\n> \n> \tbrew install <everything but openssl>\n> \tbrew cleanup -s\t\n> \t# Comment about why OpenSSL is kept separate\n> \tbrew install openssl@3\n\nThat looks much better to me, but it didn't work when I tried it. One or\nmore of the packages above it (and/or the previous cache?) has already\ninstalled OpenSSL as one of its dependencies, so the last `brew install`\nbecomes a no-op. I tried an `install --force` as well, but that didn't\nseem to do anything differently. :/\n\n> + libpq_append_conn_error(conn, "weak sslmode \\"%s\\" may not be used with sslrootcert=system",\n> + conn->sslmode);\n> I think we should help the user by indicating which sslmode we allow in this\n> message.\n\nAdded in v9.\n\n> +\n> +\t/*\n> +\t * sslmode is not specified. Let it be filled in with the compiled\n> +\t * default for now, but if sslrootcert=system, we'll override the\n> +\t * default later before returning.\n> +\t */\n> +\tsslmode_default = option;\n> As a note to self and other reviewers, "git am" misplaced this when applying the\n> patch such that the result was syntactically correct but semantically wrong,\n> causing very weird test errors.\n\nLovely... 
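One way to make misapplied hunks of the kind described above less likely is to generate patches with more context lines than git's default of three. A minimal sketch, with a throwaway repository invented purely for illustration:

```shell
# Build a scratch repository and emit a patch with ten lines of context,
# giving "git am"/"git apply" more surrounding text to anchor each hunk.
repo=$(mktemp -d)
if command -v git >/dev/null 2>&1; then
    git -C "$repo" init -q
    seq 1 20 > "$repo/file.txt"
    git -C "$repo" add file.txt
    git -C "$repo" -c user.email=ci@example.invalid -c user.name=ci \
        commit -qm 'add file'
    # Change line 10, then diff with widened context (-U10).
    seq 1 20 | sed 's/^10$/ten/' > "$repo/file.txt"
    git -C "$repo" diff -U10 -- file.txt > "$repo/wide-context.patch"
    hunks=$(grep -c '^@@' "$repo/wide-context.patch")
    echo "hunks in wide-context.patch: $hunks"
else
    hunks=skipped
    echo "git not available; skipping"
fi
```

The tradeoff is larger patch files, but each hunk carries enough surrounding text that a mechanical apply is much less likely to land in the wrong place.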
I've formatted v9 with a longer patch context.\n\n> +\tsslmode_default->val = strdup(\"verify-full\");\n> This needs to be checked for OOM error.\n\nWhoops, should be fixed now.\n\n> - if (fnbuf[0] != '\\0' &&\n> - stat(fnbuf, &buf) == 0)\n> + if (strcmp(fnbuf, \"system\") == 0)\n> I'm not a fan of magic values, but sadly I don't have a better idea for this.\n> We should however document that the keyword takes precedence over a file with\n> the same name (even though the collision is unlikely).\n\nAdded a note to the docs.\n\n> + if (SSL_CTX_set_default_verify_paths(SSL_context) != 1)\n> OpenSSL documents this as \"A missing default location is still treated as a\n> success.\", is that something we need to document or in any way deal with?\n> (Skimming the OpenSSL code I'm not sure it's actually correct in v3+, but I\n> might very well have missed something.)\n\nI think it's still true in v3+, because that sounds exactly like the\nbrew issue we're working around in Cirrus. I'm not sure if there's much\nfor us to do in that case, short of reimplementing the OpenSSL defaults\nlogic and checking it ourselves. (And even that would look different\nbetween OpenSSL and LibreSSL...)\n\nIs there something we could document that's more helpful than \"make sure\nyour installation isn't broken\"?\n\n--Jacob", "msg_date": "Fri, 31 Mar 2023 10:59:44 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 31 Mar 2023, at 19:59, Jacob Champion <jchampion@timescale.com> wrote:\n\n>> + # Let tests differentiate between vanilla OpenSSL and LibreSSL.\n>> + AC_CHECK_DECLS([LIBRESSL_VERSION_NUMBER], [], [], [#include <openssl/opensslv.h>])\n>> We have a check for SSL_CTX_set_cert_cb which is specifically done since it's\n>> not present in Libressl. 
Rather than spending more cycles in autoconf/meson,\n>> couldn't we use HAVE_SSL_CTX_SET_CERT_CB for this test? (Longer term, maybe we\n>> should make the checks properly distinguish between OpenSSL and LibreSSL as\n>> they are diverging, but that's not for this patch to tackle.)\n> \n> I can make that change; note that it'll also skip some of the new tests\n> with OpenSSL 1.0.1, where there's no SSL_CTX_set_cert_cb. If that's\n> acceptable, it should be an easy switch.\n\nI'm not sure I follow, AFAICT it's present all the way till 3.1 at least? What\nam I missing?\n\n>> I can agree with the comment that this seems brittle. How about moving the installation of openssl to after the brew cleanup stage to avoid the need for this? While that may leave more in the cache, it seems more palatable. Something like this essentially:\n>> \n>> \tbrew install <everything but openssl>\n>> \tbrew cleanup -s\t\n>> \t# Comment about why OpenSSL is kept separate\n>> \tbrew install openssl@3\n> \n> That looks much better to me, but it didn't work when I tried it. One or\n> more of the packages above it (and/or the previous cache?) has already\n> installed OpenSSL as one of its dependencies, so the last `brew install`\n> becomes a no-op. I tried an `install --force` as well, but that didn't\n> seem to do anything differently. :/\n\nUgh, that's very unfortunate, I guess we're stuck with this then. 
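For anyone hitting the same Homebrew breakage, the workaround quoted earlier really is just recreating the pruned directory. A hedged sketch — run here against a scratch prefix so nothing on the host is modified; in CI the prefix would be the Homebrew one (e.g. /opt/homebrew):

```shell
# Recreate the certs directory under OPENSSLDIR that "brew cleanup" prunes.
# PREFIX defaults to a scratch directory; override it to target a real tree.
PREFIX="${PREFIX:-$(mktemp -d)}"
CERTS_DIR="$PREFIX/etc/openssl@3/certs"
mkdir -p "$CERTS_DIR"   # no-op if the directory already exists
echo "certs directory present: $CERTS_DIR"
```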
If we can't\nmake brew cleanup not remove it then any hack applied to make it stick around\nwill be equally brittle so we might as well mkdir it back.\n\n>> + if (SSL_CTX_set_default_verify_paths(SSL_context) != 1)\n>> OpenSSL documents this as \"A missing default location is still treated as a\n>> success.\", is that something we need to document or in any way deal with?\n>> (Skimming the OpenSSL code I'm not sure it's actually correct in v3+, but I\n>> might very well have missed something.)\n> \n> I think it's still true in v3+, because that sounds exactly like the\n> brew issue we're working around in Cirrus. I'm not sure if there's much\n> for us to do in that case, short of reimplementing the OpenSSL defaults\n> logic and checking it ourselves. (And even that would look different\n> between OpenSSL and LibreSSL...)\n\nRight, that's clearly not something we want to do.\n\n> Is there something we could document that's more helpful than \"make sure\n> your installation isn't broken\"?\n\nI wonder if there is an openssl command line example for verifying defaults\nthat we can document and refer to?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 2 Apr 2023 22:35:57 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Sun, Apr 2, 2023 at 1:36 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 31 Mar 2023, at 19:59, Jacob Champion <jchampion@timescale.com> wrote:\n> > I can make that change; note that it'll also skip some of the new tests\n> > with OpenSSL 1.0.1, where there's no SSL_CTX_set_cert_cb. If that's\n> > acceptable, it should be an easy switch.\n>\n> I'm not sure I follow, AFAICT it's present all the way till 3.1 at least? 
What\n> am I missing?\n\nI don't see it anywhere in my 1.0.1 setup, and Meson doesn't define\nHAVE_SSL_CTX_SET_CERT_CB when built against it.\n\n> > Is there something we could document that's more helpful than \"make sure\n> > your installation isn't broken\"?\n>\n> I wonder if there is an openssl command line example for verifying defaults\n> that we can document and refer to?\n\nWe could maybe have them connect to a known host:\n\n $ echo Q | openssl s_client -connect postgresql.org:443 -verify_return_error\n\nAlternatively, OpenSSL will show you the OPENSSLDIR:\n\n $ openssl version -d\n OPENSSLDIR: \"/usr/lib/ssl\"\n\nand then we could tell users to ensure they have a populated certs/\ndirectory or a cert.pem file underneath it. That'll be prone to rot,\nthough (e.g. OpenSSL 3 introduces the default store in addition to the\ndefault file+directory).\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 3 Apr 2023 12:04:50 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 3 Apr 2023, at 21:04, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Sun, Apr 2, 2023 at 1:36 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 31 Mar 2023, at 19:59, Jacob Champion <jchampion@timescale.com> wrote:\n>>> I can make that change; note that it'll also skip some of the new tests\n>>> with OpenSSL 1.0.1, where there's no SSL_CTX_set_cert_cb. If that's\n>>> acceptable, it should be an easy switch.\n>> \n>> I'm not sure I follow, AFAICT it's present all the way till 3.1 at least? What\n>> am I missing?\n> \n> I don't see it anywhere in my 1.0.1 setup, and Meson doesn't define\n> HAVE_SSL_CTX_SET_CERT_CB when built against it.\n\nDoh, sorry, my bad. I read and wrote 1.0.1 but was thinking about 1.0.2. You\nare right, in 1.0.1 that API does not exist. 
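To make the version question concrete, a compile probe of roughly the following shape — a sketch of what the existing autoconf/meson check effectively does, with a working C compiler and OpenSSL headers assumed — distinguishes builds that have SSL_CTX_set_cert_cb from those that do not:

```shell
# Try to compile a reference to SSL_CTX_set_cert_cb, configure-style.
workdir=$(mktemp -d)
cat > "$workdir/conftest.c" <<'EOF'
#include <openssl/ssl.h>

int
main(void)
{
    /* Referencing the symbol is enough for a declaration/link check. */
    return SSL_CTX_set_cert_cb != 0 ? 0 : 1;
}
EOF
if cc "$workdir/conftest.c" -lssl -lcrypto -o "$workdir/conftest" 2>/dev/null; then
    echo "SSL_CTX_set_cert_cb: found"
else
    echo "SSL_CTX_set_cert_cb: not found (old OpenSSL, a LibreSSL variant, or no headers)"
fi
```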
I'm not all too concerned with\nskipping these tests on OpenSSL versions that by the time 16 ships are 6 years\nEOL - and I'm not convinced that spending meson/autoconf cycles to include them\nis warranted.\n\nLonger term I'd want to properly distinguish between LibreSSL and OpenSSL, but\nthen we should have a bigger discussion on what we want to use these values for.\n\n>>> Is there something we could document that's more helpful than "make sure\n>>> your installation isn't broken"?\n>> \n>> I wonder if there is an openssl command line example for verifying defaults\n>> that we can document and refer to?\n> \n> We could maybe have them connect to a known host:\n> \n> $ echo Q | openssl s_client -connect postgresql.org:443 -verify_return_error\n\nSomething along these lines is probably best, if we do it at all. Needs\nsleeping on.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Apr 2023 21:40:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Mon, Apr 3, 2023 at 12:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Doh, sorry, my bad. I read and wrote 1.0.1 but was thinking about 1.0.2. You\n> are right, in 1.0.1 that API does not exist. I'm not all too concerned with\n> skipping these tests on OpenSSL versions that by the time 16 ships are 6 years\n> EOL - and I'm not convinced that spending meson/autoconf cycles to include them\n> is warranted.\n\nCool. v10 keys off of HAVE_SSL_CTX_SET_CERT_CB, instead.\n\n> > We could maybe have them connect to a known host:\n> >\n> > $ echo Q | openssl s_client -connect postgresql.org:443 -verify_return_error\n>\n> Something along these lines is probably best, if we do it at all. 
Needs\n> sleeping on.\n\nSounds good.\n\nThanks!\n--Jacob", "msg_date": "Mon, 3 Apr 2023 14:09:57 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 3 Apr 2023, at 23:09, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Mon, Apr 3, 2023 at 12:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Doh, sorry, my bad. I read and wrote 1.0.1 but was thinking about 1.0.2. You\n>> are right, in 1.0.1 that API does not exist. I'm not all too concerned with\n>> skipping this tests on OpenSSL versions that by the time 16 ships are 6 years\n>> EOL - and I'm not convinced that spending meson/autoconf cycles to include them\n>> is warranted.\n> \n> Cool. v10 keys off of HAVE_SSL_CTX_SET_CERT_CB, instead.\n\nI squashed and pushed v10 with a few small comment tweaks for typos and some\npgindenting. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 5 Apr 2023 23:27:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Apr 5, 2023 at 2:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> I squashed and pushed v10 with a few small comment tweaks for typos and some\n> pgindenting. Thanks!\n\nThank you very much!\n\n--Jacob\n\n\n", "msg_date": "Wed, 5 Apr 2023 14:29:44 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On 05.04.23 23:29, Jacob Champion wrote:\n> On Wed, Apr 5, 2023 at 2:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> I squashed and pushed v10 with a few small comment tweaks for typos and some\n>> pgindenting. 
Thanks!\n> \n> Thank you very much!\n\nThis patch (8eda731465) makes the ssl tests fail for me:\n\nnot ok 121 - sslrootcert=system does not connect with private CA: matches\n\n# Failed test 'sslrootcert=system does not connect with private CA: \nmatches'\n# at t/001_ssltests.pl line 479.\n# 'psql: error: connection to server at \"127.0.0.1\", \nport 53971 failed: SSL SYSCALL error: Undefined error: 0'\n# doesn't match '(?^:SSL error: certificate verify failed)'\n\nThis is with OpenSSL 3.1.0 from macOS/Homebrew.\n\nIf I instead use OpenSSL 1.1.1t, then the tests pass.\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 09:11:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 09:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 05.04.23 23:29, Jacob Champion wrote:\n>> On Wed, Apr 5, 2023 at 2:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> I squashed and pushed v10 with a few small comment tweaks for typos and some\n>>> pgindenting. 
Thanks!\n>> Thank you very much!\n> \n> This patch (8eda731465) makes the ssl tests fail for me:\n> \n> not ok 121 - sslrootcert=system does not connect with private CA: matches\n> \n> # Failed test 'sslrootcert=system does not connect with private CA: matches'\n> # at t/001_ssltests.pl line 479.\n> # 'psql: error: connection to server at \"127.0.0.1\", port 53971 failed: SSL SYSCALL error: Undefined error: 0'\n> # doesn't match '(?^:SSL error: certificate verify failed)'\n> \n> This is with OpenSSL 3.1.0 from macOS/Homebrew.\n> \n> If I instead use OpenSSL 1.1.1t, then the tests pass.\n\nThanks for the report, I'll look at it today as an open item for 16.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 09:19:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 09:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 05.04.23 23:29, Jacob Champion wrote:\n>> On Wed, Apr 5, 2023 at 2:27 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> I squashed and pushed v10 with a few small comment tweaks for typos and some\n>>> pgindenting. 
Thanks!\n>> Thank you very much!\n> \n> This patch (8eda731465) makes the ssl tests fail for me:\n> \n> not ok 121 - sslrootcert=system does not connect with private CA: matches\n> \n> # Failed test 'sslrootcert=system does not connect with private CA: matches'\n> # at t/001_ssltests.pl line 479.\n> # 'psql: error: connection to server at \"127.0.0.1\", port 53971 failed: SSL SYSCALL error: Undefined error: 0'\n> # doesn't match '(?^:SSL error: certificate verify failed)'\n> \n> This is with OpenSSL 3.1.0 from macOS/Homebrew.\n> \n> If I instead use OpenSSL 1.1.1t, then the tests pass.\n\nI am unable to reproduce this (or any failure) with OpenSSL 3.1 built from\nsource (or 3.0 or 3.1.1-dev) or installed via homebrew (on macOS 12 with Intel\nCPU). Do you have any more clues from logs what might've happened?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 11:24:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Apr 12, 2023 at 2:24 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 12 Apr 2023, at 09:11, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > # Failed test 'sslrootcert=system does not connect with private CA: matches'\n> > # at t/001_ssltests.pl line 479.\n> > # 'psql: error: connection to server at \"127.0.0.1\", port 53971 failed: SSL SYSCALL error: Undefined error: 0'\n> > # doesn't match '(?^:SSL error: certificate verify failed)'\n> >\n> > This is with OpenSSL 3.1.0 from macOS/Homebrew.\n> >\n> > If I instead use OpenSSL 1.1.1t, then the tests pass.\n>\n> I am unable to reproduce this (or any failure) with OpenSSL 3.1 built from\n> source (or 3.0 or 3.1.1-dev) or installed via homebrew (on macOS 12 with Intel\n> CPU). 
Do you have any more clues from logs what might've happened?\n\nThis looks similar (but not identical) to the brew bug we're working\naround for Cirrus, in which `brew cleanup` breaks the OpenSSL\ninstallation and turns certificate verification failures into\nbizarrely unhelpful messages.\n\nPeter, you should have a .../etc/openssl@3/certs directory somewhere\nin your Homebrew installation prefix -- do you, or has Homebrew\nremoved it by mistake?\n\n--Jacob\n\n\n", "msg_date": "Wed, 12 Apr 2023 09:54:20 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On 12.04.23 18:54, Jacob Champion wrote:\n> Peter, you should have a .../etc/openssl@3/certs directory somewhere\n> in your Homebrew installation prefix -- do you, or has Homebrew\n> removed it by mistake?\n\nI don't have that, but I don't have it for openssl@1.1 either. I have\n\n~$ ll /usr/local/etc/openssl@3\ntotal 76\ndrwxr-xr-x 7 peter admin 224 2023-03-08 08:49 misc/\nlrwxr-xr-x 1 peter admin 27 2023-03-21 13:41 cert.pem -> \n../ca-certificates/cert.pem\n-rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf\n-rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf.dist\n-rw-r--r-- 1 peter admin 351 2023-03-08 08:57 fipsmodule.cnf\n-rw-r--r-- 1 peter admin 12386 2023-03-13 10:49 openssl.cnf\n-rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.default\n-rw-r--r-- 1 peter admin 12292 2023-03-08 08:49 openssl.cnf.dist\n-rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.dist.default\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 21:43:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 21:43, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> 
\n> On 12.04.23 18:54, Jacob Champion wrote:\n>> Peter, you should have a .../etc/openssl@3/certs directory somewhere\n>> in your Homebrew installation prefix -- do you, or has Homebrew\n>> removed it by mistake?\n> \n> I don't have that, but I don't have it for openssl@1.1 either.\n\nThe important bit is that your OPENSSLDIR points to a directory which has the\ncontent OpenSSL needs.\n\n> I have\n> \n> ~$ ll /usr/local/etc/openssl@3\n> total 76\n> drwxr-xr-x 7 peter admin 224 2023-03-08 08:49 misc/\n> lrwxr-xr-x 1 peter admin 27 2023-03-21 13:41 cert.pem -> ../ca-certificates/cert.pem\n> -rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf\n> -rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf.dist\n> -rw-r--r-- 1 peter admin 351 2023-03-08 08:57 fipsmodule.cnf\n> -rw-r--r-- 1 peter admin 12386 2023-03-13 10:49 openssl.cnf\n> -rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.default\n> -rw-r--r-- 1 peter admin 12292 2023-03-08 08:49 openssl.cnf.dist\n> -rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.dist.default\n\nAssuming that's your OPENSSLDIR, then that looks like it should (it's precisely\nwhat I have).\n\nJust to further rule out any issues in the installation, if you run the command\nfrom upthread, does that properly verify postgresql.org?\n\necho Q | <path to>openssl@3/bin/openssl s_client -connect postgresql.org:443 -verify_return_error\n\nIs the failure repeatable enough that you might be able to tease something out\nof the log? 
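As a related local check that needs no network access, the default verify locations consulted by SSL_CTX_set_default_verify_paths() can be inspected directly (a sketch; the exact OPENSSLDIR layout varies by packaging):

```shell
# Report whether the default CA file/directory under OPENSSLDIR exist for
# the openssl binary on PATH.
if command -v openssl >/dev/null 2>&1; then
    ssldir=$(openssl version -d | sed -e 's/^OPENSSLDIR: "//' -e 's/"$//')
    report="OPENSSLDIR: $ssldir"
    if [ -e "$ssldir/cert.pem" ]; then
        report="$report; cert.pem present"
    else
        report="$report; cert.pem missing"
    fi
    if [ -d "$ssldir/certs" ]; then
        report="$report; certs/ present"
    else
        report="$report; certs/ missing"
    fi
else
    report="no openssl binary on PATH"
fi
echo "$report"
```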
I've been trying again today but have been unable to reproduce this =(\n\nWe don't have great coverage of macOS in the buildfarm sadly, I wonder if we can\nget sifaka to run the SSL tests if we ask nicely?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 21:57:27 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> We don't have great coverage of macOS in the buildfarm sadly, I wonder if we can\n> get sifaka to run the SSL tests if we ask nicely?\n\nI was just looking into that, but it seems like it'd be a mess.\n\nI have a modern openssl installation from MacPorts, but if\nI try to select that I am going to end up compiling with\n-I/opt/local/include -L/opt/local/lib, which exposes all of the\nmetric buttload of stuff that MacPorts tends to pull in. sifaka\nis intended to test in a reasonably-default macOS environment,\nand that would be far from it.\n\nPlausible alternatives include:\n\n1. Hand-built private copy of openssl. longfin is set up that way,\nbut I'm not really eager to duplicate that approach, especially if\nwe want to test cutting-edge openssl.\n\n2. Run a second BF animal that's intentionally pointed at the MacPorts\nenvironment, in hopes of testing what MacPorts users would see.\n\n#2 feels like it might not be a waste of cycles, and certainly that\nmachine is underworked at the moment. 
Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Apr 2023 16:23:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 22:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> We don't have great coverage of macOS in the buildfarm sadly, I wonder if can\n>> get sifaka to run the SSL tests if we ask nicely?\n> \n> I was just looking into that, but it seems like it'd be a mess.\n> \n> I have a modern openssl installation from MacPorts, but if\n> I try to select that I am going to end up compiling with\n> -I/opt/local/include -L/opt/local/lib, which exposes all of the\n> metric buttload of stuff that MacPorts tends to pull in. sifaka\n> is intended to test in a reasonably-default macOS environment,\n> and that would be far from it.\n\nThat makes sense.\n\n> Plausible alternatives include:\n> \n> 1. Hand-built private copy of openssl. longfin is set up that way,\n> but I'm not really eager to duplicate that approach, especially if\n> we want to test cutting-edge openssl.\n> \n> 2. Run a second BF animal that's intentionally pointed at the MacPorts\n> environment, in hopes of testing what MacPorts users would see.\n> \n> #2 feels like it might not be a waste of cycles, and certainly that\n> machine is underworked at the moment. Thoughts?\n\nI think #2 would be a good addition. 
Most won't build OpenSSL themselves so\nseeing builds from envs that are reasonable to expect in the wild is more\ninteresting.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 22:29:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "(Peter, your emails are being redirected to spam for me, FYI.\nSomething about messagingengine.)\n\nOn Wed, Apr 12, 2023 at 12:57 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 12 Apr 2023, at 21:43, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > On 12.04.23 18:54, Jacob Champion wrote:\n> >> Peter, you should have a .../etc/openssl@3/certs directory somewhere\n> >> in your Homebrew installation prefix -- do you, or has Homebrew\n> >> removed it by mistake?\n> >\n> > I don't have that, but I don't have it for openssl@1.1 either.\n\nAFAIK this behavior started with 3.x.\n\n> The important bit is that your OPENSSLDIR points to a directory which has the\n> content OpenSSL needs.\n>\n> > I have\n> >\n> > ~$ ll /usr/local/etc/openssl@3\n> > total 76\n> > drwxr-xr-x 7 peter admin 224 2023-03-08 08:49 misc/\n> > lrwxr-xr-x 1 peter admin 27 2023-03-21 13:41 cert.pem -> ../ca-certificates/cert.pem\n> > -rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf\n> > -rw-r--r-- 1 peter admin 412 2023-03-21 13:41 ct_log_list.cnf.dist\n> > -rw-r--r-- 1 peter admin 351 2023-03-08 08:57 fipsmodule.cnf\n> > -rw-r--r-- 1 peter admin 12386 2023-03-13 10:49 openssl.cnf\n> > -rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.default\n> > -rw-r--r-- 1 peter admin 12292 2023-03-08 08:49 openssl.cnf.dist\n> > -rw-r--r-- 1 peter admin 12292 2023-03-21 13:41 openssl.cnf.dist.default\n>\n> Assuming that's your OPENSSLDIR, then that looks like it should (it's precisely\n> what I have).\n\nIt surprises me that you can get a successful test with a missing\ncerts 
directory. If I remove the workaround in Cirrus, I get the\nfollowing error, which looks the same to me:\n\n [20:40:00.253](0.000s) not ok 121 - sslrootcert=system does not\nconnect with private CA: matches\n [20:40:00.253](0.000s) # Failed test 'sslrootcert=system does\nnot connect with private CA: matches'\n # at /Users/admin/pgsql/src/test/ssl/t/001_ssltests.pl line 479.\n [20:40:00.253](0.000s) # 'psql: error:\nconnection to server at \"127.0.0.1\", port 57681 failed: SSL SYSCALL\nerror: Undefined error: 0'\n # doesn't match '(?^:SSL error: certificate verify failed)'\n\n(That broken error message has changed since 3.0; now it's busted in a\nnew way as of 3.1, I guess.)\n\nDoes the test start passing if you create an empty certs directory? It\nstill wouldn't explain why Daniel's setup is succeeding...\n\n--Jacob\n\n\n", "msg_date": "Wed, 12 Apr 2023 13:52:35 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Oh! I was a little behind on MacPorts updates, and after\npulling the latest (taking their openssl from 3.0.8 to 3.1.0)\nI can duplicate Peter's problem:\n\n# +++ tap check in src/test/ssl +++\nt/001_ssltests.pl .. 120/? \n# Failed test 'sslrootcert=system does not connect with private CA: matches'\n# at t/001_ssltests.pl line 479.\n# 'psql: error: connection to server at \"127.0.0.1\", port 58910 failed: SSL SYSCALL error: Undefined error: 0'\n# doesn't match '(?^:SSL error: certificate verify failed)'\nt/001_ssltests.pl .. 196/? # Looks like you failed 1 test of 205.\nt/001_ssltests.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/205 subtests \nt/002_scram.pl ..... ok \nt/003_sslinfo.pl ... 
ok \n\nTest Summary Report\n-------------------\nt/001_ssltests.pl (Wstat: 256 Tests: 205 Failed: 1)\n Failed test: 121\n Non-zero exit status: 1\nFiles=3, Tests=247, 14 wallclock secs ( 0.02 usr 0.01 sys + 2.04 cusr 1.54 csys = 3.61 CPU)\nResult: FAIL\nmake: *** [check] Error 1\n\nSo whatever this is, it's not strictly Homebrew's issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Apr 2023 16:56:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On 12.04.23 22:52, Jacob Champion wrote:\n> It surprises me that you can get a successful test with a missing\n> certs directory. If I remove the workaround in Cirrus, I get the\n> following error, which looks the same to me:\n> \n> [20:40:00.253](0.000s) not ok 121 - sslrootcert=system does not\n> connect with private CA: matches\n> [20:40:00.253](0.000s) # Failed test 'sslrootcert=system does\n> not connect with private CA: matches'\n> # at /Users/admin/pgsql/src/test/ssl/t/001_ssltests.pl line 479.\n> [20:40:00.253](0.000s) # 'psql: error:\n> connection to server at \"127.0.0.1\", port 57681 failed: SSL SYSCALL\n> error: Undefined error: 0'\n> # doesn't match '(?^:SSL error: certificate verify failed)'\n> \n> (That broken error message has changed since 3.0; now it's busted in a\n> new way as of 3.1, I guess.)\n> \n> Does the test start passing if you create an empty certs directory? 
It\n> still wouldn't explain why Daniel's setup is succeeding...\n\nAfter\n\nmkdir /usr/local/etc/openssl@3/certs\n\nthe tests pass!\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 23:23:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 12.04.23 22:52, Jacob Champion wrote:\n>> Does the test start passing if you create an empty certs directory? It\n>> still wouldn't explain why Daniel's setup is succeeding...\n\n> After\n> mkdir /usr/local/etc/openssl@3/certs\n> the tests pass!\n\nLikewise, though MacPorts unsurprisingly uses a different place:\n\n$ openssl info -configdir\n/opt/local/libexec/openssl3/etc/openssl\n$ sudo mkdir /opt/local/libexec/openssl3/etc/openssl/certs\n$ make check PG_TEST_EXTRA=ssl\n... success!\n\nSo this smells to me like a new OpenSSL bug: they should tolerate\na missing certs dir like they used to. Who wants to file it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Apr 2023 17:40:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 23:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 12.04.23 22:52, Jacob Champion wrote:\n>>> Does the test start passing if you create an empty certs directory? It\n>>> still wouldn't explain why Daniel's setup is succeeding...\n> \n>> After\n>> mkdir /usr/local/etc/openssl@3/certs\n>> the tests pass!\n> \n> Likewise, though MacPorts unsurprisingly uses a different place:\n> \n> $ openssl info -configdir\n> /opt/local/libexec/openssl3/etc/openssl\n> $ sudo mkdir /opt/local/libexec/openssl3/etc/openssl/certs\n> $ make check PG_TEST_EXTRA=ssl\n> ... 
success!\n> \n> So this smells to me like a new OpenSSL bug: they should tolerate\n> a missing certs dir like they used to. Who wants to file it?\n\nThey are specifying that: \"A missing default location is still treated as a\nsuccess\". That leaves out the interesting bit of what a success means here,\nand how it should work when verifications are requested. That being said, the\nsame is written in the 1.1.1 manpage.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 12 Apr 2023 23:46:01 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 12 Apr 2023, at 23:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 12 Apr 2023, at 23:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> So this smells to me like a new OpenSSL bug: they should tolerate\n>> a missing certs dir like they used to. Who wants to file it?\n> \n> They are specifying that: \"A missing default location is still treated as a\n> success\". That leaves out the interesting bit of what a success means here,\n> and how it should work when verifications are requested. That being said, the\n> same is written in the 1.1.1 manpage.\n\nAfter a little bit of digging I have a vague idea.\n\nOpenSSL will treat a missing default location as a success simply due to the\nfact that it mainly just stores the path, loading of the certs is deferred\nuntil use (which maps well to the error we are seeing). Patching OpenSSL to\nreport all errors makes no difference, a missing default is indeed not an error\neven with errors turned on.\n\nThe change in OpenSSL 3 is the addition of certificate stores via ossl_store\nAPI. 
When SSL_CTX_set_default_verify_paths() is called it will in 1.1.1 set\nthe default (hardcoded) filename and path; in 3 it also sets the default store.\nStores are initialized with a URL, and the default store falls back to using the\ndefault certs dir as the URI as no store is set.\n\nIf I patch OpenSSL 3 to skip setting the default store, the tests pass even\nwith a missing cert directory. This is effectively the 1.1.1 behavior.\n\nThe verification error we are hitting is given to us in the verify_cb which\nwe've short circuited. The issue we have is that we cannot get PGconn in\nverify_cb so logging an error is tricky.\n\nI need to sleep on this before I do some more digging to figure out if OpenSSL\nconsiders this to be the intended behavior, a regression in 3, or if we have a\nbug in how we catch verification errors which is exposed by a non-existing\nstore. I'll add an open item for this in the morning to track how we'd like to\nproceed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Apr 2023 01:25:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Wed, Apr 12, 2023, 19:30 Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> The issue we have is that we cannot get PGconn in\n> verify_cb so logging an error is tricky.\n\n\nHm, the man page talks about a \"ex_data mechanism\" which seems to be\nreferring to this Rube Goldberg device\nhttps://www.openssl.org/docs/man3.1/man3/SSL_get_ex_data.html\n\nIt looks like X509_STORE_CTX_set_app_data() and\nX509_STORE_CTX_get_app_data() would be convenience macros to do this.\n\n\n", "msg_date": "Wed, 12 Apr 2023 21:34:09 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 12 Apr 
2023, at 22:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Plausible alternatives include:\n>> 2. Run a second BF animal that's intentionally pointed at the MacPorts\n>> environment, in hopes of testing what MacPorts users would see.\n>> \n>> #2 feels like it might not be a waste of cycles, and certainly that\n>> machine is underworked at the moment. Thoughts?\n\n> I think #2 would be a good addition. Most won't build OpenSSL themselves so\n> seeing builds from envs that are reasonable to expect in the wild is more\n> interesting.\n\nI have an animal cranked up and awaiting approval; however, it fails\nthe src/test/ldap tests in v11, apparently because aa1419e63 was not\nback-patched. Barring objections, I'll back-patch that before\nbringing the animal on-line (or Thomas can do it, if he wishes).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Apr 2023 01:43:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 12 Apr 2023, at 22:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2. Run a second BF animal that's intentionally pointed at the MacPorts\n>> environment, in hopes of testing what MacPorts users would see.\n\n> I think #2 would be a good addition. Most won't build OpenSSL themselves so\n> seeing builds from envs that are reasonable to expect in the wild is more\n> interesting.\n\nDone, reporting as \"indri\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Apr 2023 12:39:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 13 Apr 2023, at 18:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 12 Apr 2023, at 22:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> 2. 
Run a second BF animal that's intentionally pointed at the MacPorts\n>>> environment, in hopes of testing what MacPorts users would see.\n> \n>> I think #2 would be a good addition. Most won't build OpenSSL themselves so\n>> seeing builds from envs that are reasonable to expect in the wild is more\n>> interesting.\n> \n> Done, reporting as \"indri\".\n\nGreat, thanks heaps! That will for sure be helpful going forward.\n\nRegarding the thread; I hope to have a suggestion for a way forward regarding\nthe open issue later tonight.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Apr 2023 18:42:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 13 Apr 2023, at 18:42, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Regarding the thread; I hope to have a suggestion for a way forward regarding\n> the open issue later tonight.\n\nAfter reading OpenSSL code and documentation, I think the simplest solution is\nto explicitly check for X509 errors when OpenSSL reports SSL_ERROR_SYSCALL.\nIt's not documented why this particular errorcode is used, but AFAICT it's\nbecause while it is a cert verification failure, the cause of it is an IO error\nin reading a non-existing file or directory.\n\nThe attached diff passes the tests on OpenSSL 1.0.1 through 3.1 as well as on\nLibreSSL. Thoughts?\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 14 Apr 2023 00:26:19 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The attached diff passes the tests on OpenSSL 1.0.1 through 3.1 as well as on\n> LibreSSL. Thoughts?\n\n1. You can't assume that errno starts out zero, unless you zero it\nright before SSL_connect.\n\n2. 
I wonder whether it's safe to assume that errno (a/k/a SOCK_ERRNO)\ncan't be clobbered by SSL_get_verify_result.\n\n3. It seems weird to refer to errno directly just a couple lines away\nfrom where we refer to it via SOCK_ERRNO. Will this even compile\non Windows?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Apr 2023 18:52:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 14 Apr 2023, at 00:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> The attached diff passes the tests on OpenSSL 1.0.1 through 3.1 as well as on\n>> LibreSSL. Thoughts?\n> \n> 1. You can't assume that errno starts out zero, unless you zero it\n> right before SSL_connect.\n\nMaybe we should do that regardless of this? We do for reading and writing but\nnot in open_client_SSL, and I can't off the top of my head think of a good\nreason not to?\n\n> 2. I wonder whether it's safe to assume that errno (a/k/a SOCK_ERRNO)\n> can't be clobbered by SSL_get_verify_result.\n> \n> 3. It seems weird to refer to errno directly just a couple lines away\n> from where we refer to it via SOCK_ERRNO. Will this even compile\n> on Windows?\n\nGood points, it should of course be SOCK_ERRNO. The attached saves off errno\nand reinstates it to avoid clobbering. Will test it on Windows in the morning\nas well.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 14 Apr 2023 01:09:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Good points, it should of course be SOCK_ERRNO. The attached saves off errno\n> and reinstates it to avoid clobbering. 
Will test it on Windows in the morning\n> as well.\n\nI think instead of this:\n\n+ SOCK_ERRNO_SET(save_errno);\n\nyou could just do this:\n\n libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n+ SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n\nAlthough ... we're already assuming that SSL_get_error and ERR_get_error\ndon't clobber errno. Maybe SSL_get_verify_result doesn't either.\nOr we could make it look like this:\n\n+ SOCK_ERRNO_SET(0);\n ERR_clear_error();\n r = SSL_connect(conn->ssl);\n if (r <= 0)\n+ int save_errno = SOCK_ERRNO;\n int err = SSL_get_error(conn->ssl, r);\n unsigned long ecode;\n\n ...\n\n- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n+ SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n\nto remove all doubt.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Apr 2023 19:27:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 14 Apr 2023, at 01:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Good points, it should of course be SOCK_ERRNO. The attached saves off errno\n>> and reinstates it to avoid clobbering. Will test it on Windows in the morning\n>> as well.\n> \n> I think instead of this:\n> \n> + SOCK_ERRNO_SET(save_errno);\n> \n> you could just do this:\n> \n> libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n> - SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n> + SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n> \n> Although ... we're already assuming that SSL_get_error and ERR_get_error\n> don't clobber errno. 
Maybe SSL_get_verify_result doesn't either.\n> Or we could make it look like this:\n> \n> + SOCK_ERRNO_SET(0);\n> ERR_clear_error();\n> r = SSL_connect(conn->ssl);\n> if (r <= 0)\n> + int save_errno = SOCK_ERRNO;\n> int err = SSL_get_error(conn->ssl, r);\n> unsigned long ecode;\n> \n> ...\n> \n> - SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n> + SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n> \n> to remove all doubt.\n\nI mainly put save_errno back into SOCK_ERRNO for greppability, I don't have any\nstrong opinions either way so I went with the latter suggestion. Attached v3\ndoes the above change and passes the tests both with a broken and working\nsystem CA pool. Unless objections from those with failing local envs I propose\nthis is pushed to close the open item.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 14 Apr 2023 10:04:27 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I mainly put save_errno back into SOCK_ERRNO for greppability, I don't have any\n> strong opinions either way so I went with the latter suggestion. Attached v3\n> does the above change and passes the tests both with a broken and working\n> system CA pool. 
Unless objections from those with failing local envs I propose\n> this is pushed to close the open item.\n\nOne more question when looking at it with fresh eyes: should the argument\nof X509_verify_cert_error_string be \"ecode\" or \"vcode\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Apr 2023 09:51:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "\n\n> On 14 Apr 2023, at 15:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I mainly put save_errno back into SOCK_ERRNO for greppability, I don't have any\n>> strong opinions either way so I went with the latter suggestion. Attached v3\n>> does the above change and passes the tests both with a broken and working\n>> system CA pool. Unless objections from those with failing local envs I propose\n>> this is pushed to close the open item.\n> \n> One more question when looking at it with fresh eyes: should the argument\n> of X509_verify_cert_error_string be \"ecode\" or \"vcode\"?\n\nGood catch, it should be vcode.\n\n--\nDaniel Gustafsson\n\n\n\n\n", "msg_date": "Fri, 14 Apr 2023 16:20:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 14 Apr 2023, at 16:20, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 14 Apr 2023, at 15:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> I mainly put save_errno back into SOCK_ERRNO for greppability, I don't have any\n>>> strong opinions either way so I went with the latter suggestion. Attached v3\n>>> does the above change and passes the tests both with a broken and working\n>>> system CA pool. 
Unless objections from those with failing local envs I propose\n>>> this is pushed to close the open item.\n>> \n>> One more question when looking at it with fresh eyes: should the argument\n>> of X509_verify_cert_error_string be \"ecode\" or \"vcode\"?\n> \n> Good catch, it should be vcode.\n\nAnd again with the attachment.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 14 Apr 2023 16:20:57 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Fri, Apr 14, 2023 at 7:20 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> And again with the attachment.\n\nAfter some sleep... From inspection I think the final EOF branch could\nbe masked by the new branch, if verification has failed but was already\nignored.\n\nTo test that, I tried hanging up on the client partway through the\nserver handshake, and I got some strange results. With the patch, using\nsslmode=require and OpenSSL 1.0.1, I see:\n\n connection to server at \"127.0.0.1\", port 50859 failed: SSL error:\ncertificate verify failed: self signed certificate\n\nWhich is wrong -- we shouldn't care about the self-signed failure if\nwe're not using verify-*. I was going to suggest a patch like the following:\n\n> if (r == -1)\n> - libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n> - SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n> + {\n> + /*\n> + * If we get an X509 error here without an error in the\n> + * socket layer it means that verification failed without\n> + * it being a protocol error. 
A common cause is trying to\n> + * a default system CA which is missing or broken.\n> + */\n> + if (!save_errno && vcode != X509_V_OK)\n> + libpq_append_conn_error(conn, \"SSL error: certificate verify failed: %s\",\n> + X509_verify_cert_error_string(vcode));\n> + else\n> + libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n> + SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n> + }\n> else\n> libpq_append_conn_error(conn, \"SSL SYSCALL error: EOF detected\");\n\nBut then I tested my case against PG15, and I didn't get the EOF message\nI expected:\n\n connection to server at \"127.0.0.1\", port 50283 failed: SSL SYSCALL\nerror: Success\n\nSo it appears that this (hanging up on the client during the handshake)\nis _also_ a case where we could get a SYSCALL error with a zero errno,\nand my patch doesn't actually fix the misleading error message.\n\nThat makes me worried, but I don't really have a concrete suggestion to\nmake it better, yet. I'm not opposed to pushing this as a fix for the\ntests, but I suspect we'll have to iterate on this more.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 14 Apr 2023 10:34:36 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 14 Apr 2023, at 19:34, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Fri, Apr 14, 2023 at 7:20 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> And again with the attachment.\n> \n> After some sleep... From inspection I think the final EOF branch could\n> be masked by the new branch, if verification has failed but was already\n> ignored.\n> \n> To test that, I tried hanging up on the client partway through the\n> server handshake, and I got some strange results. 
With the patch, using\n> sslmode=require and OpenSSL 1.0.1, I see:\n> \n> connection to server at \"127.0.0.1\", port 50859 failed: SSL error:\n> certificate verify failed: self signed certificate\n> \n> Which is wrong -- we shouldn't care about the self-signed failure if\n> we're not using verify-*. I was going to suggest a patch like the following:\n> \n>> if (r == -1)\n>> - libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n>> - SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));\n>> + {\n>> + /*\n>> + * If we get an X509 error here without an error in the\n>> + * socket layer it means that verification failed without\n>> + * it being a protocol error. A common cause is trying to\n>> + * a default system CA which is missing or broken.\n>> + */\n>> + if (!save_errno && vcode != X509_V_OK)\n>> + libpq_append_conn_error(conn, \"SSL error: certificate verify failed: %s\",\n>> + X509_verify_cert_error_string(vcode));\n>> + else\n>> + libpq_append_conn_error(conn, \"SSL SYSCALL error: %s\",\n>> + SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));\n>> + }\n>> else\n>> libpq_append_conn_error(conn, \"SSL SYSCALL error: EOF detected\");\n> \n> But then I tested my case against PG15, and I didn't get the EOF message\n> I expected:\n> \n> connection to server at \"127.0.0.1\", port 50283 failed: SSL SYSCALL\n> error: Success\n\nThis \"error: Success\" error has been reported to the list numerous times as\nmisleading, and I'd love to make progress on improving error reporting during\nthe v17 cycle.\n\n> So it appears that this (hanging up on the client during the handshake)\n> is _also_ a case where we could get a SYSCALL error with a zero errno,\n> and my patch doesn't actually fix the misleading error message.\n> \n> That makes me worried, but I don't really have a concrete suggestion to\n> make it better, yet. I'm not opposed to pushing this as a fix for the\n> tests, but I suspect we'll have to iterate on this more.\n\nSo, taking a step back. 
We know that libpq error reporting for SSL errors\nisn't great, the permutations of sslmodes and OpenSSL versions and the very\nfine-grained error handling API of OpenSSL make it hard to generalize well.\nThat's not what we're trying to solve here.\n\nWhat we are trying solve is this one case where we know exactly what went\nwrong, and we know that the error message as-is will be somewhere between\nmisleading and utterly bogus. The committed feature is working as intended,\nand the connection is refused as it should when no CA is available, but we know\nit's a situation which is quite easy to get oneself into (a typo in an\nenvironment variable can be enough). So what we can do is pinpoint that\nspecific case and leave the unknowns to the current error reporting for\nconsistency with older postgres versions.\n\nThe attached checks for the specific known error, and leave all the other cases\nto the same logging that we have today. It relies on the knowledge that system\nsslrootcert configs has deferred loading, and will run with verify-full. So if\nwe see an X509 failure in loading the local issuer cert here then we know the\nthe user wanted to use the system CA pool for certificate verification but the\nroot CA cannot be loaded for some reason.\n\n--\nDaniel Gustafsson", "msg_date": "Sat, 15 Apr 2023 00:36:40 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Fri, Apr 14, 2023 at 3:36 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> This \"error: Success\" error has been reported to the list numerous times as\n> misleading, and I'd love to make progress on improving error reporting during\n> the v17 cycle.\n\nAgreed!\n\n> The attached checks for the specific known error, and leave all the other cases\n> to the same logging that we have today. 
It relies on the knowledge that system\n> sslrootcert configs has deferred loading, and will run with verify-full. So if\n> we see an X509 failure in loading the local issuer cert here then we know the\n> the user wanted to use the system CA pool for certificate verification but the\n> root CA cannot be loaded for some reason.\n\nThis LGTM; I agree with your reasoning. Note that it won't fix the\n(completely different) misleading error message for OpenSSL 3.0, but\nsince that's an *actively* unhelpful error message coming back from\nOpenSSL, I don't think we want to override it. For 3.1, we have no\ninformation and we're trying to fill in the gaps.\n\n--Jacob\n\n\n", "msg_date": "Mon, 17 Apr 2023 09:20:37 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 17 Apr 2023, at 18:20, Jacob Champion <jchampion@timescale.com> wrote:\n\n> Note that it won't fix the\n> (completely different) misleading error message for OpenSSL 3.0, but\n> since that's an *actively* unhelpful error message coming back from\n> OpenSSL, I don't think we want to override it.\n\nAgreed, the best we can do there is to memorize it in the test.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 18 Apr 2023 14:32:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 18 Apr 2023, at 14:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 17 Apr 2023, at 18:20, Jacob Champion <jchampion@timescale.com> wrote:\n> \n>> Note that it won't fix the\n>> (completely different) misleading error message for OpenSSL 3.0, but\n>> since that's an *actively* unhelpful error message coming back from\n>> OpenSSL, I don't think we want to override it.\n> \n> Agreed, the best we can do there is to 
memorize it in the test.\n\nThis has been done and the open item marked as completed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 21 Apr 2023 09:55:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "On Fri, Apr 21, 2023 at 12:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> This has been done and the open item marked as completed.\n\nThanks! Now that the weirdness is handled by the tests, I think we can\nremove the Cirrus workaround. Something like the attached, which\npasses the macOS Meson suite for me.\n\n--Jacob", "msg_date": "Fri, 21 Apr 2023 09:56:52 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" }, { "msg_contents": "> On 21 Apr 2023, at 18:56, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Fri, Apr 21, 2023 at 12:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> This has been done and the open item marked as completed.\n> \n> Thanks! Now that the weirdness is handled by the tests, I think we can\n> remove the Cirrus workaround. Something like the attached, which\n> passes the macOS Meson suite for me.\n\nAgreed, I had this on my TODO list for when the test fix patch had landed.\nVerified in CI for me as well so pushed. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 24 Apr 2023 11:48:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `verify-system` sslmode to use system CA pool for\n server cert" } ]
[ { "msg_contents": "If we create a column name longer than 64 bytes, it will be truncated in\nPostgreSQL to max (NAMEDATALEN) length.\n\nFor example: \"\nVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\"\nwill be truncated in database to \"\nVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\"\n\nBut in the codebase we could work with full column name - SQL functions\nlike INSERT/UPDATE work with long names without problem, automatically\nsearches for suitable column (thank you for it).\n\nBut if we try to update it with \"json_populate_recordset\" using full name,\nit will not just ignore column with long name - data in that record will be\nnulled.\n\nHow to reproduce:\n1. create table wow(\"\nVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\"\ntext);\n2. select * from\njson_populate_recordset(null::wow,'[{\"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\":\n\"haha\"}]');\n3. \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\"\nbecomes null.\n\n\nP.S. Why do I need columns with more than 64 bytes length - because I use\nnon-Latin characters in column and table names, so In fact I have only 32\nchars because of Unicode. (PostgreSQL: NAMEDATALEN increase because of\nnon-latin languages\n<https://www.postgresql.org/message-id/CALSd-crdmj9PGdvdioU%3Da5W7P%3DTgNmEB2QP9wiF6DTUbBuMXrQ%40mail.gmail.com>\n)\n\nIf we create a column name longer than 64 bytes, it will be truncated in PostgreSQL to max (NAMEDATALEN) length. 
For example: \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\" will be truncated in database to \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\"But in the codebase we could work with full column name - SQL functions like INSERT/UPDATE work with long names without problem, automatically searches for suitable column (thank you for it).But if we try to update it with \"json_populate_recordset\" using full name, it will not just ignore column with long name - data in that record will be nulled.How to reproduce:1. create table wow(\"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\" text);2. select * from json_populate_recordset(null::wow,'[{\"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\": \"haha\"}]');3. \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\" becomes null.P.S. Why do I need columns with more than 64 bytes length - because I use non-Latin characters in column and table names, so In fact I have only 32 chars because of Unicode. 
(PostgreSQL: NAMEDATALEN increase because of non-latin languages)", "msg_date": "Tue, 7 Sep 2021 10:27:04 +0700", "msg_from": "=?UTF-8?B?0JTQtdC90LjRgSDQoNC+0LzQsNC90LXQvdC60L4=?=\n <deromanenko@gmail.com>", "msg_from_op": true, "msg_subject": "Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "On Tue, Sep 7, 2021 at 11:27 AM Денис Романенко <deromanenko@gmail.com> wrote:\n>\n> If we create a column name longer than 64 bytes, it will be truncated in PostgreSQL to max (NAMEDATALEN) length.\n>\n> For example: \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\" will be truncated in database to \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\"\n>\n> But in the codebase we could work with full column name - SQL functions like INSERT/UPDATE work with long names without problem, automatically searches for suitable column (thank you for it).\n>\n> But if we try to update it with \"json_populate_recordset\" using full name, it will not just ignore column with long name - data in that record will be nulled.\n>\n> How to reproduce:\n> 1. create table wow(\"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\" text);\n> 2. select * from json_populate_recordset(null::wow,'[{\"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongName\": \"haha\"}]');\n> 3. \"VeryLongNameVeryLongNameVeryLongNameVeryLongNameVeryLongNameVer\" becomes null.\n\nYes, that's because json identifiers have different rules from\nrelation identifiers. Your only option here is to use the real /\ntruncated identifier. 
Also I don't think it would be a good thing to\nadd a way to truncate identifiers in json objects using the\nNAMEDATALEN limit, as this could easily lead to invalid json object\nthat should be valid.\n\n\n", "msg_date": "Tue, 7 Sep 2021 13:11:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "On Tue, Sep 07, 2021 at 01:11:49PM +0800, Julien Rouhaud wrote:\n> Yes, that's because json identifiers have different rules from\n> relation identifiers. Your only option here is to use the real /\n> truncated identifier. Also I don't think it would be a good thing to\n> add a way to truncate identifiers in json objects using the\n> NAMEDATALEN limit, as this could easily lead to invalid json object\n> that should be valid.\n\nYeah. We should try to work toward removing the limits on NAMEDATALEN\nfor the attribute names. Easier said than done :)\n--\nMichael", "msg_date": "Tue, 7 Sep 2021 14:31:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "On Tue, Sep 7, 2021 at 1:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Yeah. We should try to work toward removing the limits on NAMEDATALEN\n> for the attribute names. Easier said than done :)\n\nYes, but even if we eventually fix that my impression is that we would\nstill enforce a limit of 128 characters (or bytes) as this is the SQL\nspecification. 
So trying to rely on json identifiers having the same\nrules as SQL identifiers sounds like the wrong approach.\n\n\n", "msg_date": "Tue, 7 Sep 2021 14:00:28 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Tue, Sep 7, 2021 at 1:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Yeah. We should try to work toward removing the limits on NAMEDATALEN\n>> for the attribute names. Easier said than done :)\n\n> Yes, but even if we eventually fix that my impression is that we would\n> still enforce a limit of 128 characters (or bytes) as this is the SQL\n> specification.\n\nProbably not. I think SQL says that's the minimum expectation; and\neven if they say it should be that exactly, there is no reason we'd\nsuddenly start slavishly obeying that part of the spec after ignoring\nit for years ;-).\n\nThere would still be a limit of course, but it would stem from the max\ntuple width in the associated catalog, so on the order of 7kB or so.\n(Hmm ... perhaps it'd be wise to set a limit of say a couple of kB,\njust so that the implementation limit is crisp rather than being\na little bit different in each catalog and each release.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 10:08:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "On 8/09/21 2:08 am, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>> On Tue, Sep 7, 2021 at 1:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>> Yeah. We should try to work toward removing the limits on NAMEDATALEN\n>>> for the attribute names. 
Easier said than done :)\n>> Yes, but even if we eventually fix that my impression is that we would\n>> still enforce a limit of 128 characters (or bytes) as this is the SQL\n>> specification.\n> Probably not. I think SQL says that's the minimum expectation; and\n> even if they say it should be that exactly, there is no reason we'd\n> suddenly start slavishly obeying that part of the spec after ignoring\n> it for years ;-).\n>\n> There would still be a limit of course, but it would stem from the max\n> tuple width in the associated catalog, so on the order of 7kB or so.\n> (Hmm ... perhaps it'd be wise to set a limit of say a couple of kB,\n> just so that the implementation limit is crisp rather than being\n> a little bit different in each catalog and each release.)\n>\n> \t\t\tregards, tom lane\n>\n>\nHow about 4kB (unless there are systems for which this is too large)?\n\nThat should be easy to remember.\n\n\nCheers,\nGavin\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 07:15:08 +1200", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" }, { "msg_contents": "On Tue, Sep 7, 2021 at 10:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>\n> > Yes, but even if we eventually fix that my impression is that we would\n> > still enforce a limit of 128 characters (or bytes) as this is the SQL\n> > specification.\n>\n> Probably not. 
I think SQL says that's the minimum expectation;\n\nAh, I didn't know that.\n\n> and\n> even if they say it should be that exactly, there is no reason we'd\n> suddenly start slavishly obeying that part of the spec after ignoring\n> it for years ;-).\n\nWell, yes, but we ignored it for years due to a technical limitation.\nAnd the result of that is that we make migration to postgres harder.\n\nIf we somehow find a way to remove this limitation, ignoring the spec\nagain (assuming that the spec gives a hard limit) will make migration\nfrom postgres harder and will also probably bring other problems\n(allowing kB-long identifiers will lead to bigger caches for instances,\nwhich can definitely bite hard). Is it really worth it?\n\n\n", "msg_date": "Wed, 8 Sep 2021 06:56:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Data loss when '\"json_populate_recorset\" with long column name" } ]
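The truncation mismatch described in this thread can be reproduced in a few lines. A sketch with an illustrative 79-byte column name (assuming the default NAMEDATALEN of 64, so identifiers keep their first 63 bytes; the byte counts assume single-byte characters):

```sql
-- The 79-byte column name is silently truncated to its first 63 bytes
-- at CREATE time (the server emits a NOTICE about the truncation):
CREATE TABLE t (
    extremely_long_column_name_that_goes_past_the_sixty_three_byte_identifier_limit int
);

-- The JSON key keeps its full 79 bytes, so it matches no column and the
-- field silently comes back NULL:
SELECT * FROM json_populate_recordset(NULL::t,
  '[{"extremely_long_column_name_that_goes_past_the_sixty_three_byte_identifier_limit": 42}]');

-- Spelling the key with the truncated (first 63 bytes) identifier does match:
SELECT * FROM json_populate_recordset(NULL::t,
  '[{"extremely_long_column_name_that_goes_past_the_sixty_three_byte_": 42}]');
```

As the thread concludes, the only workaround today is to use the truncated identifier as the JSON key; JSON object keys themselves are never truncated.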
[ { "msg_contents": "Hi,\n\nWhile working on [1], we found that EXPLAIN(VERBOSE) to CTE with SEARCH \nBREADTH FIRST ends up ERROR.\n\nThis can be reproduced at the current HEAD(4c3478859b7359912d7):\n\n =# create table graph0( f int, t int, label text);\n CREATE TABLE\n\n =# insert into graph0 values (1, 2, 'arc 1 -> 2'),(1, 3, 'arc 1 -> \n3'),(2, 3, 'arc 2 -> 3'),(1, 4, 'arc 1 -> 4'),(4, 5, 'arc 4 -> 5');\n INSERT 0 5\n\n =# explain(verbose) with recursive search_graph(f, t, label) as (\n select * from graph0 g\n union all\n select g.*\n from graph0 g, search_graph sg\n where g.f = sg.t\n ) search breadth first by f, t set seq\n select * from search_graph order by seq;\n ERROR: failed to find plan for CTE sg\n\nIs this a bug?\n\n\n[1] \nhttps://www.postgresql.org/message-id/flat/cf8501bcd95ba4d727cbba886ba9eea8@oss.nttdata.com\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 07 Sep 2021 12:41:02 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "torikoshia <torikoshia@oss.nttdata.com> writes:\n> While working on [1], we found that EXPLAIN(VERBOSE) to CTE with SEARCH \n> BREADTH FIRST ends up ERROR.\n\nYeah. It's failing here:\n\n * We're deparsing a Plan tree so we don't have a CTE\n * list. But the only place we'd see a Var directly\n * referencing a CTE RTE is in a CteScan plan node, and we\n * can look into the subplan's tlist instead.\n\n if (!dpns->inner_plan)\n elog(ERROR, \"failed to find plan for CTE %s\",\n rte->eref->aliasname);\n\nThe problematic Var is *not* in a CteScan plan node; it's in a\nWorkTableScan node. 
It's not clear to me whether this is a bug\nin the planner's handling of SEARCH BREADTH FIRST, or if the plan\nis as-intended and ruleutils.c is failing to cope.\n\nEither way, this deserves an open item...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 14:31:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "On 07.09.21 20:31, Tom Lane wrote:\n> torikoshia <torikoshia@oss.nttdata.com> writes:\n>> While working on [1], we found that EXPLAIN(VERBOSE) to CTE with SEARCH\n>> BREADTH FIRST ends up ERROR.\n> \n> Yeah. It's failing here:\n> \n> * We're deparsing a Plan tree so we don't have a CTE\n> * list. But the only place we'd see a Var directly\n> * referencing a CTE RTE is in a CteScan plan node, and we\n> * can look into the subplan's tlist instead.\n> \n> if (!dpns->inner_plan)\n> elog(ERROR, \"failed to find plan for CTE %s\",\n> rte->eref->aliasname);\n> \n> The problematic Var is *not* in a CteScan plan node; it's in a\n> WorkTableScan node. It's not clear to me whether this is a bug\n> in the planner's handling of SEARCH BREADTH FIRST, or if the plan\n> is as-intended and ruleutils.c is failing to cope.\n\nThe search clause is resolved by the rewriter, so it's unlikely that the \nplanner is doing something wrong. 
Either the rewriting produces \nsomething incorrect (but then one might expect that the query results \nwould be wrong), or the structures constructed by rewriting are not \neasily handled by ruleutils.c.\n\nIf we start from the example in the documentation \n<https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-RECURSIVE>:\n\n\"\"\"\nWITH RECURSIVE search_tree(id, link, data, depth) AS (\n SELECT t.id, t.link, t.data, 0\n FROM tree t\n UNION ALL\n SELECT t.id, t.link, t.data, depth + 1\n FROM tree t, search_tree st\n WHERE t.id = st.link\n)\nSELECT * FROM search_tree ORDER BY depth;\n\nTo get a stable sort, add data columns as secondary sorting columns.\n\"\"\"\n\nIn order to handle that part about the stable sort, the query \nconstructed internally is something like\n\nWITH RECURSIVE search_tree(id, link, data, seq) AS (\n SELECT t.id, t.link, t.data, ROW(0, id, link)\n FROM tree t\n UNION ALL\n SELECT t.id, t.link, t.data, ROW(seq.depth + 1, id, link)\n FROM tree t, search_tree st\n WHERE t.id = st.link\n)\nSELECT * FROM search_tree ORDER BY seq;\n\nThe bit \"seq.depth\" isn't really valid when typed in like that, I think, \nbut of course internally this is all wired together with numbers rather \nthan identifiers. I suspect that that is what ruleutils.c trips over.\n\n\n", "msg_date": "Thu, 9 Sep 2021 12:03:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "On 2021-09-09 19:03, Peter Eisentraut wrote:\n> On 07.09.21 20:31, Tom Lane wrote:\n>> torikoshia <torikoshia@oss.nttdata.com> writes:\n>>> While working on [1], we found that EXPLAIN(VERBOSE) to CTE with \n>>> SEARCH\n>>> BREADTH FIRST ends up ERROR.\n>> \n>> Yeah. It's failing here:\n>> \n>> * We're deparsing a Plan tree so we don't have a \n>> CTE\n>> * list. 
But the only place we'd see a Var \n>> directly\n>> * referencing a CTE RTE is in a CteScan plan \n>> node, and we\n>> * can look into the subplan's tlist instead.\n>> \n>> if (!dpns->inner_plan)\n>> elog(ERROR, \"failed to find plan for CTE %s\",\n>> rte->eref->aliasname);\n>> \n>> The problematic Var is *not* in a CteScan plan node; it's in a\n>> WorkTableScan node. It's not clear to me whether this is a bug\n>> in the planner's handling of SEARCH BREADTH FIRST, or if the plan\n>> is as-intended and ruleutils.c is failing to cope.\n> \n> The search clause is resolved by the rewriter, so it's unlikely that\n> the planner is doing something wrong. Either the rewriting produces\n> something incorrect (but then one might expect that the query results\n> would be wrong), or the structures constructed by rewriting are not\n> easily handled by ruleutils.c.\n> \n> If we start from the example in the documentation\n> <https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-RECURSIVE>:\n> \n> \"\"\"\n> WITH RECURSIVE search_tree(id, link, data, depth) AS (\n> SELECT t.id, t.link, t.data, 0\n> FROM tree t\n> UNION ALL\n> SELECT t.id, t.link, t.data, depth + 1\n> FROM tree t, search_tree st\n> WHERE t.id = st.link\n> )\n> SELECT * FROM search_tree ORDER BY depth;\n> \n> To get a stable sort, add data columns as secondary sorting columns.\n> \"\"\"\n> \n> In order to handle that part about the stable sort, the query\n> constructed internally is something like\n> \n> WITH RECURSIVE search_tree(id, link, data, seq) AS (\n> SELECT t.id, t.link, t.data, ROW(0, id, link)\n> FROM tree t\n> UNION ALL\n> SELECT t.id, t.link, t.data, ROW(seq.depth + 1, id, link)\n> FROM tree t, search_tree st\n> WHERE t.id = st.link\n> )\n> SELECT * FROM search_tree ORDER BY seq;\n> \n> The bit \"seq.depth\" isn't really valid when typed in like that, I\n> think, but of course internally this is all wired together with\n> numbers rather than identifiers. 
I suspect that that is what\n> ruleutils.c trips over.\n\nThanks for your advice, it seems right.\n\nEXPLAIN VERBOSE can be output without error when I assigned testing \npurpose CoercionForm to 'seq.depth + 1'.\n\nI've attached the patch for the changes made for this test for your \nreference, but I'm not sure it's appropriate for creating a new \nCoercionForm to fix the issue..\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 10 Sep 2021 23:10:43 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "\nOn 9/10/21 10:10 AM, torikoshia wrote:\n> On 2021-09-09 19:03, Peter Eisentraut wrote:\n>> On 07.09.21 20:31, Tom Lane wrote:\n>>> torikoshia <torikoshia@oss.nttdata.com> writes:\n>>>> While working on [1], we found that EXPLAIN(VERBOSE) to CTE with\n>>>> SEARCH\n>>>> BREADTH FIRST ends up ERROR.\n>>>\n>>> Yeah. It's failing here:\n>>>\n>>> * We're deparsing a Plan tree so we don't have\n>>> a CTE\n>>> * list. But the only place we'd see a Var\n>>> directly\n>>> * referencing a CTE RTE is in a CteScan plan\n>>> node, and we\n>>> * can look into the subplan's tlist instead.\n>>>\n>>> if (!dpns->inner_plan)\n>>> elog(ERROR, \"failed to find plan for CTE %s\",\n>>> rte->eref->aliasname);\n>>>\n>>> The problematic Var is *not* in a CteScan plan node; it's in a\n>>> WorkTableScan node. It's not clear to me whether this is a bug\n>>> in the planner's handling of SEARCH BREADTH FIRST, or if the plan\n>>> is as-intended and ruleutils.c is failing to cope.\n>>\n>> The search clause is resolved by the rewriter, so it's unlikely that\n>> the planner is doing something wrong. Either the rewriting produces\n>> something incorrect (but then one might 
expect that the query results\n>> would be wrong), or the structures constructed by rewriting are not\n>> easily handled by ruleutils.c.\n>>\n>> If we start from the example in the documentation\n>> <https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-RECURSIVE>:\n>>\n>>\n>> \"\"\"\n>> WITH RECURSIVE search_tree(id, link, data, depth) AS (\n>> SELECT t.id, t.link, t.data, 0\n>> FROM tree t\n>> UNION ALL\n>> SELECT t.id, t.link, t.data, depth + 1\n>> FROM tree t, search_tree st\n>> WHERE t.id = st.link\n>> )\n>> SELECT * FROM search_tree ORDER BY depth;\n>>\n>> To get a stable sort, add data columns as secondary sorting columns.\n>> \"\"\"\n>>\n>> In order to handle that part about the stable sort, the query\n>> constructed internally is something like\n>>\n>> WITH RECURSIVE search_tree(id, link, data, seq) AS (\n>> SELECT t.id, t.link, t.data, ROW(0, id, link)\n>> FROM tree t\n>> UNION ALL\n>> SELECT t.id, t.link, t.data, ROW(seq.depth + 1, id, link)\n>> FROM tree t, search_tree st\n>> WHERE t.id = st.link\n>> )\n>> SELECT * FROM search_tree ORDER BY seq;\n>>\n>> The bit \"seq.depth\" isn't really valid when typed in like that, I\n>> think, but of course internally this is all wired together with\n>> numbers rather than identifiers. I suspect that that is what\n>> ruleutils.c trips over.\n>\n> Thanks for your advice, it seems right.\n>\n> EXPLAIN VERBOSE can be output without error when I assigned testing\n> purpose CoercionForm to 'seq.depth + 1'.\n>\n> I've attached the patch for the changes made for this test for your\n> reference, but I'm not sure it's appropriate for creating a new\n> CoercionForm to fix the issue..\n\n\n\nThis is listed as an open item for release 14. Is it planned to commit\nthe patch? 
If not, we should close the item.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 15 Sep 2021 11:05:02 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/10/21 10:10 AM, torikoshia wrote:\n>> I've attached the patch for the changes made for this test for your\n>> reference, but I'm not sure it's appropriate for creating a new\n>> CoercionForm to fix the issue..\n\n> This is listed as an open item for release 14. Is it planned to commit\n> the patch? If not, we should close the item.\n\nI do not think that patch is a proper solution, but we do need to do\nsomething about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Sep 2021 11:28:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "I wrote:\n> I do not think that patch is a proper solution, but we do need to do\n> something about this.\n\nI poked into this and decided it's an ancient omission within ruleutils.c.\nThe reason we've not seen it before is probably that you can't get to the\ncase through the parser. The SEARCH stuff is generating a query structure\nbasically equivalent to\n\nregression=# with recursive cte (x,r) as (\nselect 42 as x, row(i, 2.3) as r from generate_series(1,3) i\nunion all \nselect x, row((c.r).f1, 4.5) from cte c\n) \nselect * from cte;\nERROR: record type has not been registered\n\nand as you can see, expandRecordVariable fails to figure out what\nthe referent of \"c.r\" is. 
I think that could be fixed (by looking\ninto the non-recursive term), but given the lack of field demand,\nI'm not feeling that it's urgent.\n\nSo the omission is pretty obvious from the misleading comment:\nactually, Vars referencing RTE_CTE RTEs can also appear in WorkTableScan\nnodes, and we're not doing anything to support that. But we only reach\nthis code when trying to resolve a field of a Var of RECORD type, which\nis a case that it seems like the parser can't produce.\n\nIt doesn't look too hard to fix: we just have to find the RecursiveUnion\nthat goes with the WorkTableScan, and drill down into that, much as we\nwould do in the CteScan case. See attached draft patch. I'm too tired\nto beat on this heavily or add a test case, but I have verified that it\npasses check-world and handles the example presented in this thread.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 Sep 2021 19:40:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "\nOn 9/15/21 7:40 PM, Tom Lane wrote:\n> I wrote:\n>> I do not think that patch is a proper solution, but we do need to do\n>> something about this.\n> I poked into this and decided it's an ancient omission within ruleutils.c.\n> The reason we've not seen it before is probably that you can't get to the\n> case through the parser. The SEARCH stuff is generating a query structure\n> basically equivalent to\n>\n> regression=# with recursive cte (x,r) as (\n> select 42 as x, row(i, 2.3) as r from generate_series(1,3) i\n> union all \n> select x, row((c.r).f1, 4.5) from cte c\n> ) \n> select * from cte;\n> ERROR: record type has not been registered\n>\n> and as you can see, expandRecordVariable fails to figure out what\n> the referent of \"c.r\" is. 
I think that could be fixed (by looking\n> into the non-recursive term), but given the lack of field demand,\n> I'm not feeling that it's urgent.\n>\n> So the omission is pretty obvious from the misleading comment:\n> actually, Vars referencing RTE_CTE RTEs can also appear in WorkTableScan\n> nodes, and we're not doing anything to support that. But we only reach\n> this code when trying to resolve a field of a Var of RECORD type, which\n> is a case that it seems like the parser can't produce.\n>\n> It doesn't look too hard to fix: we just have to find the RecursiveUnion\n> that goes with the WorkTableScan, and drill down into that, much as we\n> would do in the CteScan case. See attached draft patch. I'm too tired\n> to beat on this heavily or add a test case, but I have verified that it\n> passes check-world and handles the example presented in this thread.\n>\n> \t\t\t\n\n\nLooks like a nice simple fix. Thanks for working on this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 Sep 2021 09:15:43 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "På torsdag 16. september 2021 kl. 
01:40:31, Tom Lane <tgl@sss.pgh.pa.us \n<mailto:tgl@sss.pgh.pa.us>> wrote: \n[...]\n regression=# with recursive cte (x,r) as (\n select 42 as x, row(i, 2.3) as r from generate_series(1,3) i\n union all \n select x, row((c.r).f1, 4.5) from cte c\n ) \n select * from cte;\n ERROR: record type has not been registered \n\n\n\n\nFWIW; I saw this Open Item was set to fixed, but I'm still getting this error \nin 388726753b638fb9938883bdd057b2ffe6f950f5 \n\n--\n Andreas Joseph Krogh", "msg_date": "Thu, 16 Sep 2021 18:53:07 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <andreas@visena.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> On Thursday 16 September 2021 at 01:40:31, Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote: \n> [...]\n> regression=# with recursive cte (x,r) as (\n> select 42 as x, row(i, 2.3) as r from generate_series(1,3) i\n> union all \n> select x, row((c.r).f1, 4.5) from cte c\n> ) \n> select * from cte;\n> ERROR: record type has not been registered \n\n> FWIW; I saw this Open Item was set to fixed, but I'm still getting this error \n> in 388726753b638fb9938883bdd057b2ffe6f950f5 \n\nThe open item was not about that parser shortcoming, nor did this patch\nclaim to fix it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Sep 2021 12:57:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "On Thursday 16 September 2021 at 18:57:39, Tom Lane <tgl@sss.pgh.pa.us \n<mailto:tgl@sss.pgh.pa.us>> wrote: \n[...]\n > FWIW; I saw this Open Item was set to fixed, but I'm still getting this \nerror\n > in 388726753b638fb9938883bdd057b2ffe6f950f5\n\n The open item was not about that parser shortcoming, nor did this patch\n claim to fix it.\n\n regards, tom lane \n\nOk, sorry for the noise. 
\n\n\n\n--\n Andreas Joseph Krogh", "msg_date": "Thu, 16 Sep 2021 19:29:14 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <andreas@visena.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "On 2021-09-16 08:40, Tom Lane wrote:\n> I wrote:\n>> I do not think that patch is a proper solution, but we do need to do\n>> something about this.\n> \n> I poked into this and decided it's an ancient omission within \n> ruleutils.c.\n> The reason we've not seen it before is probably that you can't get to \n> the\n> case through the parser. The SEARCH stuff is generating a query \n> structure\n> basically equivalent to\n> \n> regression=# with recursive cte (x,r) as (\n> select 42 as x, row(i, 2.3) as r from generate_series(1,3) i\n> union all\n> select x, row((c.r).f1, 4.5) from cte c\n> )\n> select * from cte;\n> ERROR: record type has not been registered\n> \n> and as you can see, expandRecordVariable fails to figure out what\n> the referent of \"c.r\" is. I think that could be fixed (by looking\n> into the non-recursive term), but given the lack of field demand,\n> I'm not feeling that it's urgent.\n> \n> So the omission is pretty obvious from the misleading comment:\n> actually, Vars referencing RTE_CTE RTEs can also appear in \n> WorkTableScan\n> nodes, and we're not doing anything to support that. But we only reach\n> this code when trying to resolve a field of a Var of RECORD type, which\n> is a case that it seems like the parser can't produce.\n> \n> It doesn't look too hard to fix: we just have to find the \n> RecursiveUnion\n> that goes with the WorkTableScan, and drill down into that, much as we\n> would do in the CteScan case. See attached draft patch. 
I'm too tired\n> to beat on this heavily or add a test case, but I have verified that it\n> passes check-world and handles the example presented in this thread.\n> \n> \t\t\tregards, tom lane\n\nThanks for looking into this and fixing it!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Sep 2021 21:43:14 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "Hi!\n\nIt seems like this patch causes another problem.\n\nIf I explain a simple row generator **without** verbose, it fails:\n\npostgres=# EXPLAIN (VERBOSE FALSE)\n WITH RECURSIVE gen (n) AS (\n VALUES (1)\n UNION ALL\n SELECT n+1\n FROM gen\n WHERE n < 3\n )\n SELECT * FROM gen\n ;\nERROR: could not find RecursiveUnion for WorkTableScan with wtParam 0\n\nThat’s the new error message introduced by the patch.\n\nThe same with verbose works just fine:\n\npostgres=# EXPLAIN (VERBOSE TRUE)\n WITH RECURSIVE gen (n) AS (\n VALUES (1)\n UNION ALL\n SELECT n+1\n FROM gen\n WHERE n < 3\n )\n SELECT * FROM gen\n ;\n QUERY PLAN\n-----------------------------------------------------------------------------\n CTE Scan on gen (cost=2.95..3.57 rows=31 width=4)\n Output: gen.n\n CTE gen\n -> Recursive Union (cost=0.00..2.95 rows=31 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n -> WorkTable Scan on gen gen_1 (cost=0.00..0.23 rows=3 width=4)\n Output: (gen_1.n + 1)\n Filter: (gen_1.n < 3)\n(9 rows)\n\nBoth variants work fine before that patch (4ac0f450b698442c3273ddfe8eed0e1a7e56645f).\n\nMarkus Winand\nwinand.at\n\n> On 21.09.2021, at 14:43, torikoshia <torikoshia@oss.nttdata.com> wrote:\n> \n> On 2021-09-16 08:40, Tom Lane wrote:\n>> I wrote:\n>>> I do not think that patch is a proper solution, but we do need to do\n>>> something about this.\n>> I poked into this and decided it's an ancient omission within ruleutils.c.\n>> The reason 
we've not seen it before is probably that you can't get to the\n>> case through the parser. The SEARCH stuff is generating a query structure\n>> basically equivalent to\n>> regression=# with recursive cte (x,r) as (\n>> select 42 as x, row(i, 2.3) as r from generate_series(1,3) i\n>> union all\n>> select x, row((c.r).f1, 4.5) from cte c\n>> )\n>> select * from cte;\n>> ERROR: record type has not been registered\n>> and as you can see, expandRecordVariable fails to figure out what\n>> the referent of \"c.r\" is. I think that could be fixed (by looking\n>> into the non-recursive term), but given the lack of field demand,\n>> I'm not feeling that it's urgent.\n>> So the omission is pretty obvious from the misleading comment:\n>> actually, Vars referencing RTE_CTE RTEs can also appear in WorkTableScan\n>> nodes, and we're not doing anything to support that. But we only reach\n>> this code when trying to resolve a field of a Var of RECORD type, which\n>> is a case that it seems like the parser can't produce.\n>> It doesn't look too hard to fix: we just have to find the RecursiveUnion\n>> that goes with the WorkTableScan, and drill down into that, much as we\n>> would do in the CteScan case. See attached draft patch. I'm too tired\n>> to beat on this heavily or add a test case, but I have verified that it\n>> passes check-world and handles the example presented in this thread.\n>> \t\t\tregards, tom lane\n> \n> Thanks for looking into this and fixing it!\n> \n> -- \n> Regards,\n> \n> --\n> Atsushi Torikoshi\n> NTT DATA CORPORATION\n> \n> \n> \n> \n\n\n\n", "msg_date": "Mon, 11 Oct 2021 12:22:41 +0200", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "On 11.10.21 12:22, Markus Winand wrote:\n> Both variants work fine before that patch (4ac0f450b698442c3273ddfe8eed0e1a7e56645f).\n\nThat commit is a message wording patch. 
Are you sure you meant that one?\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 16:27:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "\n> On 11.10.2021, at 16:27, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 11.10.21 12:22, Markus Winand wrote:\n>> Both variants work fine before that patch (4ac0f450b698442c3273ddfe8eed0e1a7e56645f).\n> \n> That commit is a message wording patch. Are you sure you meant that one?\n> \n\nWhat I meant is that it was still working on 4ac0f450b698442c3273ddfe8eed0e1a7e56645f, but not on the next (3f50b82639637c9908afa2087de7588450aa866b).\n\n-markus\n\n", "msg_date": "Mon, 11 Oct 2021 16:32:16 +0200", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" }, { "msg_contents": "Markus Winand <markus.winand@winand.at> writes:\n> What I meant is that it was still working on 4ac0f450b698442c3273ddfe8eed0e1a7e56645f, but not on the next (3f50b82639637c9908afa2087de7588450aa866b).\n\nYeah, silly oversight in that patch. Will push a fix shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Oct 2021 10:57:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN(VERBOSE) to CTE with SEARCH BREADTH FIRST fails" } ]
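Condensing the thread, the two reproducers at issue were the original SEARCH BREADTH FIRST case (against the `graph0` table from the opening message) and the plain-EXPLAIN case that regressed between the first fix and the follow-up; per the discussion above, both are addressed on branches carrying both fixes:

```sql
-- Originally failed with: ERROR: failed to find plan for CTE sg
EXPLAIN (VERBOSE)
WITH RECURSIVE search_graph(f, t, label) AS (
    SELECT * FROM graph0 g
  UNION ALL
    SELECT g.*
    FROM graph0 g, search_graph sg
    WHERE g.f = sg.t
) SEARCH BREADTH FIRST BY f, t SET seq
SELECT * FROM search_graph ORDER BY seq;

-- Regression introduced by the first fix; failed with:
-- ERROR: could not find RecursiveUnion for WorkTableScan with wtParam 0
EXPLAIN (VERBOSE FALSE)
WITH RECURSIVE gen (n) AS (
    VALUES (1)
  UNION ALL
    SELECT n + 1 FROM gen WHERE n < 3
)
SELECT * FROM gen;
```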
[ { "msg_contents": "Hi hackers,\n\nI noticed that `SET TIME ZONE` / `SET timezone TO` don't work with\nabbreviations:\n\n```\n# select * from pg_timezone_names where abbrev = 'MSK';\n name | abbrev | utc_offset | is_dst\n-------------------+--------+------------+--------\n Europe/Moscow | MSK | 03:00:00 | f\n Europe/Simferopol | MSK | 03:00:00 | f\n W-SU | MSK | 03:00:00 | f\n\n97394 (master) =# set time zone 'Europe/Moscow';\nSET\n\n97394 (master) =# set time zone 'MSK';\nERROR: invalid value for parameter \"TimeZone\": \"MSK\"\n```\n\nHowever, I can use both Europe/Moscow and MSK in timestamptz_in():\n\n```\n# select '2021-09-07 12:34:56 Europe/Moscow' :: timestamptz;\n timestamptz\n------------------------\n 2021-09-07 12:34:56+03\n\n# select '2021-09-07 12:34:56 MSK' :: timestamptz;\n timestamptz\n------------------------\n 2021-09-07 12:34:56+03\n```\n\nPostgreSQL was built on MacOS Catalina without the `--with-system-tzdata=` flag.\n\nIs it a bug or this behavior is intentional (something to do with SQL\nstandard, perhaps)?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 7 Sep 2021 10:43:56 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[BUG?] 
SET TIME ZONE doesn't work with abbreviations" }, { "msg_contents": "On Tuesday, September 7, 2021, Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi hackers,\n>\n> I noticed that `SET TIME ZONE` / `SET timezone TO` don't work with\n> abbreviations:\n>\n> Is it a bug or this behavior is intentional (something to do with SQL\n> standard, perhaps)?\n>\n>\nWell, given that the limitation is documented I’d have to say it is\nintentional:\n\nYou cannot set the configuration parameters TimeZone\n<https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TIMEZONE>\n or log_timezone\n<https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TIMEZONE>\nto\na time zone abbreviation, but you can use abbreviations in date/time input\nvalues and with the AT TIME ZONE operator.\n\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES\n\nDavid J.", "msg_date": "Tue, 7 Sep 2021 06:57:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG?] 
SET TIME ZONE doesn't work with abbreviations" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I noticed that `SET TIME ZONE` / `SET timezone TO` don't work with\n> abbreviations:\n\nThat's intentional, per the fine manual:\n\n A time zone abbreviation, for example <literal>PST</literal>. Such a\n specification merely defines a particular offset from UTC, in\n contrast to full time zone names which can imply a set of daylight\n savings transition rules as well. The recognized abbreviations\n are listed in the <literal>pg_timezone_abbrevs</literal> view (see <xref\n linkend=\"view-pg-timezone-abbrevs\"/>). You cannot set the\n configuration parameters <xref linkend=\"guc-timezone\"/> or\n <xref linkend=\"guc-log-timezone\"/> to a time\n zone abbreviation, but you can use abbreviations in\n date/time input values and with the <literal>AT TIME ZONE</literal>\n operator.\n\nI'm too caffeine-deprived to remember the exact reasoning right now,\nbut it was likely along the lines of \"you don't really want to do\nthat because it won't track DST changes\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 10:00:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG?] SET TIME ZONE doesn't work with abbreviations" }, { "msg_contents": "David, Tom,\n\n> Well, given that the limitation is documented I’d have to say it is intentional:\n> [...]\n\n> That's intentional, per the fine manual:\n> [...]\n\nMy bad, I missed this. Many thanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 7 Sep 2021 18:04:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [BUG?] SET TIME ZONE doesn't work with abbreviations" } ]
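The behavior discussed in this thread, side by side — abbreviations are accepted in date/time input and with AT TIME ZONE, but not as the TimeZone setting (a sketch; server output omitted):

```sql
SET TIME ZONE 'Europe/Moscow';                 -- full zone name: OK, carries DST rules
SET TIME ZONE 'MSK';                           -- abbreviation: ERROR, as reported above

SELECT '2021-09-07 12:34:56 MSK'::timestamptz; -- abbreviation OK in input values
SELECT now() AT TIME ZONE 'MSK';               -- ...and with the AT TIME ZONE operator

-- An abbreviation is just a fixed UTC offset, with no transition rules attached:
SELECT * FROM pg_timezone_abbrevs WHERE abbrev = 'MSK';
```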
[ { "msg_contents": "Hi,\n\nIt seems like we use \"superuser\" as a standard term across the entire\ncode base i.e. error messages, docs, code comments. But there are\nstill a few code comments that use the term \"super user\". Can we\nreplace those with \"superuser\"? Attaching a tiny patch to do that.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 7 Sep 2021 18:14:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Use \"superuser\" instead of \"super user\" in code comments" }, { "msg_contents": "> On 7 Sep 2021, at 14:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> It seems like we use \"superuser\" as a standard term across the entire\n> code base i.e. error messages, docs, code comments. But there are\n> still a few code comments that use the term \"super user\". Can we\n> replace those with \"superuser\"? Attaching a tiny patch to do that.\n\nGood catch, superuser is the term we should use for this. There is one\nadditional “super user” in src/test/regress/sql/conversion.sql (and its\ncorresponding resultfile) which can be included in this. Unless there are\nobjections I’ll apply this with the testfile fixup.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 7 Sep 2021 15:10:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Use \"superuser\" instead of \"super user\" in code comments" }, { "msg_contents": "On Tue, Sep 7, 2021 at 6:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 Sep 2021, at 14:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > It seems like we use \"superuser\" as a standard term across the entire\n> > code base i.e. error messages, docs, code comments. But there are\n> > still a few code comments that use the term \"super user\". Can we\n> > replace those with \"superuser\"? 
Attaching a tiny patch to do that.\n>\n> Good catch, superuser is the term we should use for this. There is one\n> additional “super user” in src/test/regress/sql/conversion.sql (and its\n> corresponding resultfile) which can be included in this. Unless there are\n> objections I’ll apply this with the testfile fixup.\n\nThanks for picking this up. Here's v2 including the change in\nconversion.sql and .out.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 7 Sep 2021 19:18:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use \"superuser\" instead of \"super user\" in code comments" }, { "msg_contents": "> On 7 Sep 2021, at 15:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Tue, Sep 7, 2021 at 6:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 7 Sep 2021, at 14:44, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>>> It seems like we use \"superuser\" as a standard term across the entire\n>>> code base i.e. error messages, docs, code comments. But there are\n>>> still a few code comments that use the term \"super user\". Can we\n>>> replace those with \"superuser\"? Attaching a tiny patch to do that.\n>> \n>> Good catch, superuser is the term we should use for this. There is one\n>> additional “super user” in src/test/regress/sql/conversion.sql (and its\n>> corresponding resultfile) which can be included in this. Unless there are\n>> objections I’ll apply this with the testfile fixup.\n> \n> Thanks for picking this up. Here's v2 including the change in\n> conversion.sql and .out.\n\nDone, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 17:04:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Use \"superuser\" instead of \"super user\" in code comments" } ]
[ { "msg_contents": "Hi,\n\nI have a 13.4 based setup (physical streaming replication) where the \nreplica produces the attached log upon startup, and when the first message is \nsent from the primary.\n\nThere is the FATAL from when the WAL receiver shuts down, but I think it \nwould be a benefit to have report_invalid_record() log at ERROR level \ninstead to highlight to the admin that there is a serious problem.\n\nFeel free to contact me off-list for the setup (20Mb).\n\nThoughts?\n\nBest regards,\n Jesper", "msg_date": "Tue, 7 Sep 2021 09:16:07 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Increase log level in xlogreader.c ?" } ]
[ { "msg_contents": "In postgres.h, there are these macros for working with compressed\ntoast:\n\n vvvvvvvv\n/* Decompressed size and compression method of an external compressed Datum */\n#define VARDATA_COMPRESSED_GET_EXTSIZE(PTR) \\\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo & VARLENA_EXTSIZE_MASK)\n#define VARDATA_COMPRESSED_GET_COMPRESS_METHOD(PTR) \\\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo >> VARLENA_EXTSIZE_BITS)\n\n/* Same, when working directly with a struct varatt_external */\n#define VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) \\\n ((toast_pointer).va_extinfo & VARLENA_EXTSIZE_MASK)\n#define VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer) \\\n ((toast_pointer).va_extinfo >> VARLENA_EXTSIZE_BITS)\n\n\nOn the first line, is the comment \"external\" correct? It took me quite\na while to realize that the first two macros are the methods to be\nused on an *inline* compressed Datum, when the second set is used for\nvarlenas in toast tables.\n\nContext: Me figuring out https://github.com/credativ/toastinfo/blob/master/toastinfo.c#L119-L128\n\nChristoph\n\n\n", "msg_date": "Tue, 7 Sep 2021 17:55:54 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "VARDATA_COMPRESSED_GET_COMPRESS_METHOD comment?" 
}, { "msg_contents": "On Tue, Sep 7, 2021 at 11:56 AM Christoph Berg <myon@debian.org> wrote:\n> In postgres.h, there are these macros for working with compressed\n> toast:\n>\n> vvvvvvvv\n> /* Decompressed size and compression method of an external compressed Datum */\n> #define VARDATA_COMPRESSED_GET_EXTSIZE(PTR) \\\n> (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo & VARLENA_EXTSIZE_MASK)\n> #define VARDATA_COMPRESSED_GET_COMPRESS_METHOD(PTR) \\\n> (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo >> VARLENA_EXTSIZE_BITS)\n>\n> /* Same, when working directly with a struct varatt_external */\n> #define VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) \\\n> ((toast_pointer).va_extinfo & VARLENA_EXTSIZE_MASK)\n> #define VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer) \\\n> ((toast_pointer).va_extinfo >> VARLENA_EXTSIZE_BITS)\n>\n> On the first line, is the comment \"external\" correct? It took me quite\n> a while to realize that the first two macros are the methods to be\n> used on an *inline* compressed Datum, when the second set is used for\n> varlenas in toast tables.\n\nWell ... technically the second set are used on a TOAST pointer, which\nis not really the same thing as a varlena. The varlena would start\nwith a 1-byte header identifying it as a TOAST pointer, and then\nthere'd be a 1-byte saying what kind of TOAST pointer it is, which\nwould be VARTAG_ONDISK if this is coming from a tuple on disk, and\nthen the TOAST pointer would start after that. So toast_pointer =\nvarlena_pointer + 2, if I'm not confused here.\n\nBut I agree with you that referring to the argument to\nVARDATA_COMPRESSED_GET_EXTSIZE or\nVARDATA_COMPRESSED_GET_COMPRESS_METHOD as an \"external compressed\nDatum\" doesn't seem quite right. 
It is compressed, but it is not\nexternal, at least in the sense that I understand that term.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Sep 2021 14:13:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: VARDATA_COMPRESSED_GET_COMPRESS_METHOD comment?" }, { "msg_contents": "Re: Robert Haas\n> But I agree with you that referring to the argument to\n> VARDATA_COMPRESSED_GET_EXTSIZE or\n> VARDATA_COMPRESSED_GET_COMPRESS_METHOD as an \"external compressed\n> Datum\" doesn't seem quite right. It is compressed, but it is not\n> external, at least in the sense that I understand that term.\n\nHow about \"compressed-in-line Datum\" like on the comment 5 lines above?\n\n/* caution: this will not work on an external or compressed-in-line Datum */\n/* caution: this will return a possibly unaligned pointer */\n#define VARDATA_ANY(PTR) \\\n (VARATT_IS_1B(PTR) ? VARDATA_1B(PTR) : VARDATA_4B(PTR))\n\n/* Decompressed size and compression method of an external compressed Datum */\n#define VARDATA_COMPRESSED_GET_EXTSIZE(PTR) \\\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo & VARLENA_EXTSIZE_MASK)\n#define VARDATA_COMPRESSED_GET_COMPRESS_METHOD(PTR) \\\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo >> VARLENA_EXTSIZE_BITS)\n\nThis \"external\" there cost me about one hour of extra poking around\nuntil I realized this is actually the macro I wanted.\n\n-> /* Decompressed size and compression method of a compressed-in-line Datum */\n\nChristoph\n\n\n", "msg_date": "Wed, 8 Sep 2021 17:33:22 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: VARDATA_COMPRESSED_GET_COMPRESS_METHOD comment?" 
}, { "msg_contents": "On Wed, Sep 8, 2021 at 11:33 AM Christoph Berg <myon@debian.org> wrote:\n> How about \"compressed-in-line Datum\" like on the comment 5 lines above?\n\nThat seems reasonable to me, but I think Tom Lane is responsible for\nthe current form of that comment, so it'd be nice to hear what he\nthinks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Sep 2021 12:27:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: VARDATA_COMPRESSED_GET_COMPRESS_METHOD comment?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Sep 8, 2021 at 11:33 AM Christoph Berg <myon@debian.org> wrote:\n>> How about \"compressed-in-line Datum\" like on the comment 5 lines above?\n\n> That seems reasonable to me, but I think Tom Lane is responsible for\n> the current form of that comment, so it'd be nice to hear what he\n> thinks.\n\nHmm ... looks like I copied-and-pasted that comment to the wrong place\nwhile rearranging stuff in aeb1631ed. The comment just below is\noff-point too. Will fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Sep 2021 14:01:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: VARDATA_COMPRESSED_GET_COMPRESS_METHOD comment?" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI'd like to gauge interest in parallelizing the archiver process.\r\nFrom a quick scan, I was only able to find one recent thread [0] that\r\nbrought up this topic, and ISTM the conventional wisdom is to use a\r\nbackup utility like pgBackRest that does things in parallel behind-\r\nthe-scenes. My experience is that the generating-more-WAL-than-we-\r\ncan-archive problem is pretty common, and parallelization seems to\r\nhelp quite a bit, so perhaps it's a good time to consider directly\r\nsupporting parallel archiving in PostgreSQL.\r\n\r\nBased on previous threads I've seen, I believe many in the community\r\nwould like to replace archive_command entirely, but what I'm proposing\r\nhere would build on the existing tools. I'm currently thinking of\r\nsomething a bit like autovacuum_max_workers, but the archive workers\r\nwould be created once and would follow a competing consumers model.\r\nAnother approach I'm looking at is to use background worker processes,\r\nalthough I'm not sure if linking such a critical piece of\r\nfunctionality to max_worker_processes is a good idea. However, I do\r\nsee that logical replication uses background workers.\r\n\r\nAnyway, I'm curious what folks think about this. 
I think it'd help\r\nsimplify server administration for many users.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/20180828060221.x33gokifqi3csjj4%40depesz.com\r\n\r\n", "msg_date": "Tue, 7 Sep 2021 22:36:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "parallelizing the archiver" }, { "msg_contents": "On Wed, Sep 8, 2021 at 6:36 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> I'd like to gauge interest in parallelizing the archiver process.\n> [...]\n> Based on previous threads I've seen, I believe many in the community\n> would like to replace archive_command entirely, but what I'm proposing\n> here would build on the existing tools.\n\nHaving a new implementation that would remove the archive_command is\nprobably a better long term solution, but I don't know of anyone\nworking on that and it's probably gonna take some time. Right now we\nhave a lot of users that face archiving bottleneck so I think it would\nbe a good thing to implement parallel archiving, fully compatible with\ncurrent archive_command, as a short term solution.\n\n\n", "msg_date": "Wed, 8 Sep 2021 14:38:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 9/7/21, 11:38 PM, \"Julien Rouhaud\" <rjuju123@gmail.com> wrote:\r\n> On Wed, Sep 8, 2021 at 6:36 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>\r\n>> I'd like to gauge interest in parallelizing the archiver process.\r\n>> [...]\r\n>> Based on previous threads I've seen, I believe many in the community\r\n>> would like to replace archive_command entirely, but what I'm proposing\r\n>> here would build on the existing tools.\r\n>\r\n> Having a new implementation that would remove the archive_command is\r\n> probably a better long term solution, but I don't know of anyone\r\n> working on that and it's probably gonna take some time. 
Right now we\r\n> have a lot of users that face archiving bottleneck so I think it would\r\n> be a good thing to implement parallel archiving, fully compatible with\r\n> current archive_command, as a short term solution.\r\n\r\nThanks for chiming in. I'm planning to work on a patch next week.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 9 Sep 2021 22:30:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 6:30 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> Thanks for chiming in. I'm planning to work on a patch next week.\n\nGreat news!\n\nAbout the technical concerns:\n\n> I'm currently thinking of\n> something a bit like autovacuum_max_workers, but the archive workers\n> would be created once and would follow a competing consumers model.\n\nIn this approach, the launched archiver workers would be kept as long\nas the instance is up, or should they be stopped if they're not\nrequired anymore, e.g. if there was a temporary write activity spike?\nI think we should make sure that at least one worker is always up.\n\n> Another approach I'm looking at is to use background worker processes,\n> although I'm not sure if linking such a critical piece of\n> functionality to max_worker_processes is a good idea. However, I do\n> see that logical replication uses background workers.\n\nI think that using background workers is a good approach, and the\nvarious guc in that area should allow users to properly configure\narchiving too. If that's not the case, it might be an opportunity to\nadd some new infrastructure that could benefit all bgworkers users.\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:08:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 8 сент. 
2021 г., в 03:36, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> \n> Anyway, I'm curious what folks think about this. I think it'd help\n> simplify server administration for many users.\n\nBTW this thread is also related [0].\n\nMy 2 cents.\nIt's OK if external tool is responsible for concurrency. Do we want this complexity in core? Many users do not enable archiving at all.\nMaybe just add parallelism API for external tool?\nIt's much easier to control concurrency in external tool that in PostgreSQL core. Maintaining parallel worker is a tremendously harder than spawning goroutine, thread, task or whatever.\nExternal tool needs to know when xlog segment is ready and needs to report when it's done. Postgres should just ensure that external archiever\\restorer is running.\nFor example external tool could read xlog names from stdin and report finished files from stdout. I can prototype such tool swiftly :)\nE.g. postgres runs ```wal-g wal-archiver``` and pushes ready segment filenames on stdin. And no more listing of archive_status and hacky algorithms to predict next WAL name and completition time!\n\nThoughts?\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/CA%2BTgmobhAbs2yabTuTRkJTq_kkC80-%2Bjw%3DpfpypdOJ7%2BgAbQbw%40mail.gmail.com\n\n", "msg_date": "Fri, 10 Sep 2021 10:28:20 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 1:28 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> It's OK if external tool is responsible for concurrency. Do we want this complexity in core? Many users do not enable archiving at all.\n> Maybe just add parallelism API for external tool?\n> It's much easier to control concurrency in external tool that in PostgreSQL core. 
Maintaining parallel worker is a tremendously harder than spawning goroutine, thread, task or whatever.\n\nYes, but it also means that it's up to every single archiving tool to\nimplement a somewhat hackish parallel version of an archive_command,\nhoping that core won't break it. If this problem is solved in\npostgres core whithout API change, then all existing tool will\nautomatically benefit from it (maybe not the one who used to have\nhacks to make it parallel though, but it seems easier to disable it\nrather than implement it).\n\n> External tool needs to know when xlog segment is ready and needs to report when it's done. Postgres should just ensure that external archiever\\restorer is running.\n> For example external tool could read xlog names from stdin and report finished files from stdout. I can prototype such tool swiftly :)\n> E.g. postgres runs ```wal-g wal-archiver``` and pushes ready segment filenames on stdin. And no more listing of archive_status and hacky algorithms to predict next WAL name and completition time!\n\nYes, but that requires fundamental design changes for the archive\ncommands right? So while I agree it could be a better approach\noverall, it seems like a longer term option. As far as I understand,\nwhat Nathan suggested seems more likely to be achieved in pg15 and\ncould benefit from a larger set of backup solutions. This can give us\nenough time to properly design a better approach for designing a new\narchiving approach.\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:52:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 10 сент. 2021 г., в 10:52, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n> \n> On Fri, Sep 10, 2021 at 1:28 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> \n>> It's OK if external tool is responsible for concurrency. Do we want this complexity in core? 
Many users do not enable archiving at all.\n>> Maybe just add parallelism API for external tool?\n>> It's much easier to control concurrency in external tool that in PostgreSQL core. Maintaining parallel worker is a tremendously harder than spawning goroutine, thread, task or whatever.\n> \n> Yes, but it also means that it's up to every single archiving tool to\n> implement a somewhat hackish parallel version of an archive_command,\n> hoping that core won't break it.\nI'm not proposing to remove existing archive_command. Just deprecate it one-WAL-per-call form.\n\n> If this problem is solved in\n> postgres core whithout API change, then all existing tool will\n> automatically benefit from it (maybe not the one who used to have\n> hacks to make it parallel though, but it seems easier to disable it\n> rather than implement it).\nTrue hacky tools already can coordinate swarm of their processes and are prepared that they are called multiple times concurrently :)\n\n>> External tool needs to know when xlog segment is ready and needs to report when it's done. Postgres should just ensure that external archiever\\restorer is running.\n>> For example external tool could read xlog names from stdin and report finished files from stdout. I can prototype such tool swiftly :)\n>> E.g. postgres runs ```wal-g wal-archiver``` and pushes ready segment filenames on stdin. And no more listing of archive_status and hacky algorithms to predict next WAL name and completition time!\n> \n> Yes, but that requires fundamental design changes for the archive\n> commands right? So while I agree it could be a better approach\n> overall, it seems like a longer term option. As far as I understand,\n> what Nathan suggested seems more likely to be achieved in pg15 and\n> could benefit from a larger set of backup solutions. This can give us\n> enough time to properly design a better approach for designing a new\n> archiving approach.\n\nIt's a very simplistic approach. 
If some GUC is set - archiver will just feed ready files to stdin of archive command. What fundamental design changes we need?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 10 Sep 2021 11:03:46 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > 10 сент. 2021 г., в 10:52, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n> >\n> > Yes, but it also means that it's up to every single archiving tool to\n> > implement a somewhat hackish parallel version of an archive_command,\n> > hoping that core won't break it.\n> I'm not proposing to remove existing archive_command. Just deprecate it one-WAL-per-call form.\n\nWhich is a big API beak.\n\n> It's a very simplistic approach. If some GUC is set - archiver will just feed ready files to stdin of archive command. What fundamental design changes we need?\n\nI'm talking about the commands themselves. Your suggestion is to\nchange archive_command to be able to spawn a daemon, and it looks like\na totally different approach. I'm not saying that having a daemon\nbased approach to take care of archiving is a bad idea, I'm saying\nthat trying to fit that with the current archive_command + some new\nGUC looks like a bad idea.\n\n\n", "msg_date": "Fri, 10 Sep 2021 14:11:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 10 сент. 2021 г., в 11:11, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n> \n> On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> \n>>> 10 сент. 
2021 г., в 10:52, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n>>> \n>>> Yes, but it also means that it's up to every single archiving tool to\n>>> implement a somewhat hackish parallel version of an archive_command,\n>>> hoping that core won't break it.\n>> I'm not proposing to remove existing archive_command. Just deprecate it one-WAL-per-call form.\n> \n> Which is a big API beak.\nHuge extension, not a break.\n\n>> It's a very simplistic approach. If some GUC is set - archiver will just feed ready files to stdin of archive command. What fundamental design changes we need?\n> \n> I'm talking about the commands themselves. Your suggestion is to\n> change archive_command to be able to spawn a daemon, and it looks like\n> a totally different approach. I'm not saying that having a daemon\n> based approach to take care of archiving is a bad idea, I'm saying\n> that trying to fit that with the current archive_command + some new\n> GUC looks like a bad idea.\nIt fits nicely, even in corner cases. E.g. restore_command run from pg_rewind seems compatible with this approach.\nOne more example: after failover DBA can just ```ls|wal-g wal-push``` to archive all WALs unarchived before network partition.\n\nThis is simple yet powerful approach, without any contradiction to existing archive_command API.\nWhy it's a bad idea?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 10 Sep 2021 11:29:22 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Tue, Sep 7, 2021 at 6:36 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Based on previous threads I've seen, I believe many in the community\n> would like to replace archive_command entirely, but what I'm proposing\n> here would build on the existing tools. 
I'm currently thinking of\n> something a bit like autovacuum_max_workers, but the archive workers\n> would be created once and would follow a competing consumers model.\n\nTo me, it seems way more beneficial to think about being able to\ninvoke archive_command with many files at a time instead of just one.\nI think for most plausible archive commands that would be way more\nefficient than what you propose here. It's *possible* that if we had\nthat, we'd still want this, but I'm not even convinced.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Sep 2021 09:13:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 9:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> To me, it seems way more beneficial to think about being able to\n> invoke archive_command with many files at a time instead of just one.\n> I think for most plausible archive commands that would be way more\n> efficient than what you propose here. It's *possible* that if we had\n> that, we'd still want this, but I'm not even convinced.\n\nThose approaches don't really seems mutually exclusive? In both case\nyou will need to internally track the status of each WAL file and\nhandle non contiguous file sequences. In case of parallel commands\nyou only need additional knowledge that some commands is already\nworking on a file. Wouldn't it be even better to eventually be able\nlaunch multiple batches of multiple files rather than a single batch?\n\nIf we start with parallelism first, the whole ecosystem could\nimmediately benefit from it as is. 
To be able to handle multiple\nfiles in a single command, we would need some way to let the server\nknow which files were successfully archived and which files weren't,\nso it requires a different communication approach than the command\nreturn code.\n\nBut as I said, I'm not convinced that using the archive_command\napproach for that is the best approach If I understand correctly,\nmost of the backup solutions would prefer to have a daemon being\nlaunched and use it at a queuing system. Wouldn't it be better to\nhave a new archive_mode, e.g. \"daemon\", and have postgres responsible\nto (re)start it, and pass information through the daemon's\nstdin/stdout or something like that?\n\n\n", "msg_date": "Fri, 10 Sep 2021 22:19:29 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 10:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Those approaches don't really seems mutually exclusive? In both case\n> you will need to internally track the status of each WAL file and\n> handle non contiguous file sequences. In case of parallel commands\n> you only need additional knowledge that some commands is already\n> working on a file. Wouldn't it be even better to eventually be able\n> launch multiple batches of multiple files rather than a single batch?\n\nWell, I guess I'm not convinced. Perhaps people with more knowledge of\nthis than I may already know why it's beneficial, but in my experience\ncommands like 'cp' and 'scp' are usually limited by the speed of I/O,\nnot the fact that you only have one of them running at once. 
Running\nseveral at once, again in my experience, is typically not much faster.\nOn the other hand, scp has a LOT of startup overhead, so it's easy to\nsee the benefits of batching.\n\n[rhaas pgsql]$ touch x y z\n[rhaas pgsql]$ time sh -c 'scp x cthulhu: && scp y cthulhu: && scp z cthulhu:'\nx 100% 207KB 78.8KB/s 00:02\ny 100% 0 0.0KB/s 00:00\nz 100% 0 0.0KB/s 00:00\n\nreal 0m9.418s\nuser 0m0.045s\nsys 0m0.071s\n[rhaas pgsql]$ time sh -c 'scp x y z cthulhu:'\nx 100% 207KB 273.1KB/s 00:00\ny 100% 0 0.0KB/s 00:00\nz 100% 0 0.0KB/s 00:00\n\nreal 0m3.216s\nuser 0m0.017s\nsys 0m0.020s\n\n> If we start with parallelism first, the whole ecosystem could\n> immediately benefit from it as is. To be able to handle multiple\n> files in a single command, we would need some way to let the server\n> know which files were successfully archived and which files weren't,\n> so it requires a different communication approach than the command\n> return code.\n\nThat is possibly true. I think it might work to just assume that you\nhave to retry everything if it exits non-zero, but that requires the\narchive command to be smart enough to do something sensible if an\nidentical file is already present in the archive.\n\n> But as I said, I'm not convinced that using the archive_command\n> approach for that is the best approach If I understand correctly,\n> most of the backup solutions would prefer to have a daemon being\n> launched and use it at a queuing system. Wouldn't it be better to\n> have a new archive_mode, e.g. \"daemon\", and have postgres responsible\n> to (re)start it, and pass information through the daemon's\n> stdin/stdout or something like that?\n\nSure. Actually, I think a background worker would be better than a\nseparate daemon. 
Then it could just talk to shared memory directly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Sep 2021 11:22:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 11:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Well, I guess I'm not convinced. Perhaps people with more knowledge of\n> this than I may already know why it's beneficial, but in my experience\n> commands like 'cp' and 'scp' are usually limited by the speed of I/O,\n> not the fact that you only have one of them running at once. Running\n> several at once, again in my experience, is typically not much faster.\n> On the other hand, scp has a LOT of startup overhead, so it's easy to\n> see the benefits of batching.\n\nI totally agree that batching as many file as possible in a single\ncommand is probably what's gonna achieve the best performance. But if\nthe archiver only gets an answer from the archive_command once it\ntried to process all of the file, it also means that postgres won't be\nable to remove any WAL file until all of them could be processed. It\nmeans that users will likely have to limit the batch size and\ntherefore pay more startup overhead than they would like. In case of\narchiving on server with high latency / connection overhead it may be\nbetter to be able to run multiple commands in parallel. I may be\noverthinking here and definitely having feedback from people with more\nexperience around that would be welcome.\n\n> That is possibly true. I think it might work to just assume that you\n> have to retry everything if it exits non-zero, but that requires the\n> archive command to be smart enough to do something sensible if an\n> identical file is already present in the archive.\n\nYes, it could be. I think that we need more feedback for that too.\n\n> Sure. 
Actually, I think a background worker would be better than a\n> separate daemon. Then it could just talk to shared memory directly.\n\nI thought about it too, but I was under the impression that most\npeople would want to implement a custom daemon (or already have) with\nsome more parallel/thread friendly language.\n\n\n", "msg_date": "Fri, 10 Sep 2021 23:48:54 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 10 сент. 2021 г., в 19:19, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n> Wouldn't it be better to\n> have a new archive_mode, e.g. \"daemon\", and have postgres responsible\n> to (re)start it, and pass information through the daemon's\n> stdin/stdout or something like that?\n\nWe don't even need to introduce new archive_mode.\nCurrently archive_command has no expectations regarding stdin\\stdout.\nLet's just say that we will push new WAL names to stdin until archive_command exits.\nAnd if archive_command prints something to stdout we will interpret it as archived WAL names.\nThat's it.\n\nExisting archive_commands will continue as is.\n\nCurrently information about what is archived is stored on filesystem in archive_status dir. We do not need to change anything.\nIf archive_command exits (with any exit code) we will restart it if there are WAL files that still were not archived.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 10 Sep 2021 20:55:21 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 9/10/21, 8:22 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Fri, Sep 10, 2021 at 10:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n>> Those approaches don't really seems mutually exclusive? In both case\r\n>> you will need to internally track the status of each WAL file and\r\n>> handle non contiguous file sequences. 
In case of parallel commands\r\n>> you only need additional knowledge that some commands is already\r\n>> working on a file. Wouldn't it be even better to eventually be able\r\n>> launch multiple batches of multiple files rather than a single batch?\r\n>\r\n> Well, I guess I'm not convinced. Perhaps people with more knowledge of\r\n> this than I may already know why it's beneficial, but in my experience\r\n> commands like 'cp' and 'scp' are usually limited by the speed of I/O,\r\n> not the fact that you only have one of them running at once. Running\r\n> several at once, again in my experience, is typically not much faster.\r\n> On the other hand, scp has a LOT of startup overhead, so it's easy to\r\n> see the benefits of batching.\r\n>\r\n> [...]\r\n>\r\n>> If we start with parallelism first, the whole ecosystem could\r\n>> immediately benefit from it as is. To be able to handle multiple\r\n>> files in a single command, we would need some way to let the server\r\n>> know which files were successfully archived and which files weren't,\r\n>> so it requires a different communication approach than the command\r\n>> return code.\r\n>\r\n> That is possibly true. I think it might work to just assume that you\r\n> have to retry everything if it exits non-zero, but that requires the\r\n> archive command to be smart enough to do something sensible if an\r\n> identical file is already present in the archive.\r\n\r\nMy initial thinking was similar to Julien's. Assuming I have an\r\narchive_command that handles one file, I can just set\r\narchive_max_workers to 3 and reap the benefits. If I'm using an\r\nexisting utility that implements its own parallelism, I can keep\r\narchive_max_workers at 1 and continue using it. This would be a\r\nsimple incremental improvement.\r\n\r\nThat being said, I think the discussion about batching is a good one\r\nto have. 
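For concreteness, a batch-capable command might look something like the sketch below. The multi-file calling convention (the server passing every ready segment name in one invocation) is hypothetical, and cp stands in for a real transfer tool like scp so the idea is runnable locally:

```shell
# Hypothetical batch form of an archive command: the server would pass up
# to archive_batch_size ready segment names as "$@", so the copy tool is
# started once per batch instead of once per file.
archive_batch() {
    dest=$1; shift
    for f in "$@"; do
        # same guard as the docs' single-file example: never overwrite
        [ ! -e "$dest/$(basename "$f")" ] || return 1
    done
    cp -- "$@" "$dest"/
}
```

A command like this can still only report success or failure for the whole batch through its exit code, so the server would have to retry the entire batch after a non-zero exit.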
If the overhead described in your SCP example is\r\nrepresentative of a typical archive_command, then parallelism does\r\nseem a bit silly. We'd essentially be using a ton more resources when\r\nthere's obvious room for improvement via reducing amount of overhead\r\nper archive. I think we could easily make the batch size configurable\r\nso that existing archive commands would work (e.g.,\r\narchive_batch_size=1). However, unlike the simple parallel approach,\r\nyou'd likely have to adjust your archive_command if you wanted to make\r\nuse of batching. That doesn't seem terrible to me, though. As\r\ndiscussed above, there are some implementation details to work out for\r\narchive failures, but nothing about that seems intractable to me.\r\nPlus, if you still wanted to parallelize things, feeding your\r\narchive_command several files at a time could still be helpful.\r\n\r\nI'm currently leaning toward exploring the batching approach first. I\r\nsuppose we could always make a prototype of both solutions for\r\ncomparison with some \"typical\" archive commands if that would help\r\nwith the discussion.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Sep 2021 17:06:59 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, 2021-09-10 at 23:48 +0800, Julien Rouhaud wrote:\r\n> I totally agree that batching as many file as possible in a single\r\n> command is probably what's gonna achieve the best performance. But if\r\n> the archiver only gets an answer from the archive_command once it\r\n> tried to process all of the file, it also means that postgres won't be\r\n> able to remove any WAL file until all of them could be processed. It\r\n> means that users will likely have to limit the batch size and\r\n> therefore pay more startup overhead than they would like. 
In case of\r\n> archiving on server with high latency / connection overhead it may be\r\n> better to be able to run multiple commands in parallel.\r\n\r\nWell, users would also have to limit the parallelism, right? If\r\nconnections are high-overhead, I wouldn't imagine that running hundreds\r\nof them simultaneously would work very well in practice. (The proof\r\nwould be in an actual benchmark, obviously, but usually I would rather\r\nhave one process handling a hundred items than a hundred processes\r\nhandling one item each.)\r\n\r\nFor a batching scheme, would it be that big a deal to wait for all of\r\nthem to be archived before removal?\r\n\r\n> > That is possibly true. I think it might work to just assume that you\r\n> > have to retry everything if it exits non-zero, but that requires the\r\n> > archive command to be smart enough to do something sensible if an\r\n> > identical file is already present in the archive.\r\n> \r\n> Yes, it could be. I think that we need more feedback for that too.\r\n\r\nSeems like this is the sticking point. What would be the smartest thing\r\nfor the command to do? If there's a destination file already, checksum\r\nit and make sure it matches the source before continuing?\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 10 Sep 2021 17:07:01 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 11:49 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I totally agree that batching as many file as possible in a single\n> command is probably what's gonna achieve the best performance. But if\n> the archiver only gets an answer from the archive_command once it\n> tried to process all of the file, it also means that postgres won't be\n> able to remove any WAL file until all of them could be processed. 
It\n> means that users will likely have to limit the batch size and\n> therefore pay more startup overhead than they would like. In case of\n> archiving on server with high latency / connection overhead it may be\n> better to be able to run multiple commands in parallel. I may be\n> overthinking here and definitely having feedback from people with more\n> experience around that would be welcome.\n\nThat's a fair point. I'm not sure how much it matters, though. I think\nyou want to imagine a system where there are let's say 10 WAL files\nbeing archived per second. Using fork() + exec() to spawn a shell\ncommand 10 times per second is a bit expensive, whether you do it\nserially or in parallel, and even if the command is something with a\nless-insane startup overhead than scp. If we start a shell command say\nevery 3 seconds and give it 30 files each time, we can reduce the\nstartup costs we're paying by ~97% at the price of having to wait up\nto 3 additional seconds to know that archiving succeeded for any\nparticular file. That sounds like a pretty good trade-off, because the\nmain benefit of removing old files is that it keeps us from running\nout of disk space, and you should not be running a busy system in such\na way that it is ever within 3 seconds of running out of disk space,\nso whatever.\n\nIf on the other hand you imagine a system that's not very busy, say 1\nWAL file being archived every 10 seconds, then using a batch size of\n30 would very significantly delay removal of old files. However, on\nthis system, batching probably isn't really needed. The rate of WAL\nfile generation is low enough that if you pay the startup cost of your\narchive_command for every file, you're probably still doing just fine.\n\nProbably, any kind of parallelism or batching needs to take this kind\nof time-based thinking into account. For batching, the rate at which\nfiles are generated should affect the batch size. 
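To put rough numbers on the startup-cost math (hypothetical rates, just to make the trade-off concrete):

```shell
# At 10 WAL files per second, launching one command every 3 seconds means
# each invocation covers ~30 files, so fork()+exec() startup costs fall
# from one per file to one per batch.
wal_per_sec=10
interval_sec=3
batch=$((wal_per_sec * interval_sec))
awk -v b="$batch" \
    'BEGIN { printf "batch=%d, %.1f%% fewer command startups\n", b, 100 * (1 - 1 / b) }'
# -> batch=30, 96.7% fewer command startups
```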
For parallelism, it\nshould affect the number of processes used.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:09:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 9/10/21, 10:12 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> If on the other hand you imagine a system that's not very busy, say 1\r\n> WAL file being archived every 10 seconds, then using a batch size of\r\n> 30 would very significantly delay removal of old files. However, on\r\n> this system, batching probably isn't really needed. The rate of WAL\r\n> file generation is low enough that if you pay the startup cost of your\r\n> archive_command for every file, you're probably still doing just fine.\r\n>\r\n> Probably, any kind of parallelism or batching needs to take this kind\r\n> of time-based thinking into account. For batching, the rate at which\r\n> files are generated should affect the batch size. For parallelism, it\r\n> should affect the number of processes used.\r\n\r\nI was thinking that archive_batch_size would be the maximum batch\r\nsize. If the archiver only finds a single file to archive, that's all\r\nit'd send to the archive command. If it finds more, it'd send up to\r\narchive_batch_size to the command.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 10 Sep 2021 17:18:59 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Sep 10, 2021 at 1:07 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> That being said, I think the discussion about batching is a good one\n> to have. 
If the overhead described in your SCP example is\n> representative of a typical archive_command, then parallelism does\n> seem a bit silly.\n\nI think that's pretty realistic, because a lot of people's archive\ncommands are going to actually be, or need to use, scp specifically.\nHowever, there are also cases where people are using commands that\njust put the file in some local directory (maybe on a remote mount\npoint) and I would expect the startup overhead to be much less in\nthose cases. Maybe people are archiving via HTTPS or similar as well,\nand then you again have some connection overhead though, I suspect,\nnot as much as scp, since web pages do not take 3 seconds to get an\nhttps connection going. I don't know why scp is so crazy slow.\n\nEven in the relatively low-overhead cases, though, I think we would\nwant to do some real testing to see if the benefits are as we expect.\nSee http://postgr.es/m/20200420211018.w2qphw4yybcbxksl@alap3.anarazel.de\nand downthread for context. I was *convinced* that parallel backup was\na win. Benchmarking was a tad underwhelming, but there was a clear if\nmodest benefit by running a synthetic test of copying a lot of files\nserially or in parallel, with the files spread across multiple\nfilesystems on the same physical box. However, when Andres modified my\ntest program to use posix_fadvise(), posix_fallocate(), and\nsync_file_range() while doing the copies, the benefits of parallelism\nlargely evaporated, and in fact in some cases enabling parallelism\ncaused major regressions. 
In other words, the apparent benefits of\nparallelism were really due to suboptimal behaviors in the Linux page\ncache and some NUMA effects that were in fact avoidable.\n\nSo I'm suspicious that the same things might end up being true here.\nIt's not exactly the same, because the goal of WAL archiving is to\nkeep up with the rate of WAL generation, and the goal of a backup is\n(unless max-rate is used) to finish as fast as possible, and that\ndifference in goals might end up being significant. Also, you can make\nan argument that some people will benefit from a parallelism feature\neven if a perfectly-implemented archive_command doesn't, because many\npeople use really terrible archive_commands. But all that said, I\nthink the parallel backup discussion is still a cautionary tale to\nwhich some attention ought to be paid.\n\n> We'd essentially be using a ton more resources when\n> there's obvious room for improvement via reducing amount of overhead\n> per archive. I think we could easily make the batch size configurable\n> so that existing archive commands would work (e.g.,\n> archive_batch_size=1). However, unlike the simple parallel approach,\n> you'd likely have to adjust your archive_command if you wanted to make\n> use of batching. That doesn't seem terrible to me, though. As\n> discussed above, there are some implementation details to work out for\n> archive failures, but nothing about that seems intractable to me.\n> Plus, if you still wanted to parallelize things, feeding your\n> archive_command several files at a time could still be helpful.\n\nYep.\n\n> I'm currently leaning toward exploring the batching approach first. 
I\n> suppose we could always make a prototype of both solutions for\n> comparison with some \"typical\" archive commands if that would help\n> with the discussion.\n\nYeah, I think the concerns here are more pragmatic than philosophical,\nat least for me.\n\nI had kind of been thinking that the way to attack this problem is to\ngo straight to allowing for a background worker, because the other\nproblem with archive_command is that running a shell command like cp,\nscp, or rsync is not really safe. It won't fsync your data, it might\nnot fail if the file is in the archive already, and it definitely\nwon't succeed without doing anything if there's a byte for byte\nidentical file in the archive and fail if there's a file with\ndifferent contents already in the archive. Fixing that stuff by\nrunning different shell commands is hard, but it wouldn't be that hard\nto do it in C code, and you could then also extend whatever code you\nwrote to do batching and parallelism; starting more workers isn't\nhard.\n\nHowever, I can't see the idea of running a shell command going away\nany time soon, in spite of its numerous and severe drawbacks. Such an\ninterface provides a huge degree of flexibility and allows system\nadmins to whack around behavior easily, which you don't get if you\nhave to code every change in C. So I think command-based enhancements\nare fine to pursue also, even though I don't think it's the ideal\nplace for most users to end up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:42:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 10 сент. 2021 г., в 22:18, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> \n> I was thinking that archive_batch_size would be the maximum batch\n> size. If the archiver only finds a single file to archive, that's all\n> it'd send to the archive command. 
If it finds more, it'd send up to\n> archive_batch_size to the command.\n\nI think that the concept of a \"batch\" is misleading.\nIf you pass filenames via stdin you don't need to know all names upfront.\nJust send more names to the pipe if the archive_command is still running and more segments have just become available.\nThis way the level of parallelism will adapt to the workload.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 10 Sep 2021 23:34:00 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > > 10 сент. 2021 г., в 10:52, Julien Rouhaud <rjuju123@gmail.com> написал(а):\n> > > Yes, but it also means that it's up to every single archiving tool to\n> > > implement a somewhat hackish parallel version of an archive_command,\n> > > hoping that core won't break it.\n\nWe've got too many archiving tools as it is, if you want my 2c on that.\n\n> > I'm not proposing to remove existing archive_command. Just deprecate its one-WAL-per-call form.\n> \n> Which is a big API break.\n\nWe definitely need to stop being afraid of this. We completely changed\naround how restores work and made pretty much all of the backup/restore\ntools have to make serious changes when we released v12.\n\nI definitely don't think that we should be making assumptions that\nchanging archive command to start running things in parallel isn't\n*also* an API break too, in any case. It is also a change and there's\ndefinitely a good chance that it'd break some of the archivers out\nthere. If we're going to make a change here, let's make a sensible one.\n\n> > It's a very simplistic approach. If some GUC is set - archiver will just feed ready files to stdin of archive command. 
What fundamental design changes do we need?\n\nHaven't really thought about this proposal but it does sound\ninteresting.\n\nThanks,\n\nStephen", "msg_date": "Tue, 14 Sep 2021 16:14:58 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Wed, Sep 15, 2021 at 4:14 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> > > I'm not proposing to remove existing archive_command. Just deprecate its one-WAL-per-call form.\n> >\n> > Which is a big API break.\n>\n> We definitely need to stop being afraid of this. We completely changed\n> around how restores work and made pretty much all of the backup/restore\n> tools have to make serious changes when we released v12.\n\nI never said that we should avoid an API break at all costs, I said\nthat if we break the API we should introduce something better. The\nproposal to pass multiple file names to the archive command said\nnothing about how to tell which ones were successfully archived and\nwhich ones weren't, which is a big problem in my opinion. But I think\nwe should also consider a different approach, such as maintaining some\nkind of daemon and asynchronously passing all the WAL file names,\nwaiting for answers. Or maybe something else. It's just that simply\n\"passing multiple file names\" doesn't seem like a big enough win to\njustify an API break to me.\n\n> I definitely don't think that we should be making assumptions that\n> changing archive command to start running things in parallel isn't\n> *also* an API break too, in any case. It is also a change and there's\n> definitely a good chance that it'd break some of the archivers out\n> there. 
If we're going to make a change here, let's make a sensible one.\n\nBut doing parallel archiving can and should be controlled with a GUC,\nso if your archive_command isn't compatible you can simply just not\nuse it (on top of having a default of not using parallel archiving, at\nleast for some times). It doesn't seem like a big problem.\n\n\n", "msg_date": "Wed, 15 Sep 2021 12:39:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 9/10/21, 10:42 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I had kind of been thinking that the way to attack this problem is to\r\n> go straight to allowing for a background worker, because the other\r\n> problem with archive_command is that running a shell command like cp,\r\n> scp, or rsync is not really safe. It won't fsync your data, it might\r\n> not fail if the file is in the archive already, and it definitely\r\n> won't succeed without doing anything if there's a byte for byte\r\n> identical file in the archive and fail if there's a file with\r\n> different contents already in the archive. Fixing that stuff by\r\n> running different shell commands is hard, but it wouldn't be that hard\r\n> to do it in C code, and you could then also extend whatever code you\r\n> wrote to do batching and parallelism; starting more workers isn't\r\n> hard.\r\n>\r\n> However, I can't see the idea of running a shell command going away\r\n> any time soon, in spite of its numerous and severe drawbacks. Such an\r\n> interface provides a huge degree of flexibility and allows system\r\n> admins to whack around behavior easily, which you don't get if you\r\n> have to code every change in C. So I think command-based enhancements\r\n> are fine to pursue also, even though I don't think it's the ideal\r\n> place for most users to end up.\r\n\r\nI've given this quite a bit of thought. 
I hacked together a batching\r\napproach for benchmarking, and it seemed to be a decent improvement,\r\nbut you're still shelling out every N files, and all the stuff about\r\nshell commands not being ideal that you mentioned still applies.\r\nPerhaps it's still a good improvement, and maybe we should still do\r\nit, but I get the idea that many believe we can still do better. So,\r\nI looked into adding support for setting up archiving via an\r\nextension.\r\n\r\nThe attached patch is a first try at adding alternatives for\r\narchive_command, restore_command, archive_cleanup_command, and\r\nrecovery_end_command. It adds the GUCs archive_library,\r\nrestore_library, archive_cleanup_library, and recovery_end_library.\r\nEach of these accepts a library name that is loaded at startup,\r\nsimilar to shared_preload_libraries. _PG_init() is still used for\r\ninitialization, and you can use the same library for multiple purposes\r\nby checking the new exported variables (e.g.,\r\nprocess_archive_library_in_progress). The library is then responsible\r\nfor implementing the relevant function, such as _PG_archive() or\r\n_PG_restore(). The attached patch also demonstrates a simple\r\nimplementation for an archive_library that is similar to the sample\r\narchive_command in the documentation.\r\n\r\nI tested the sample archive_command in the docs against the sample\r\narchive_library implementation in the patch, and I saw about a 50%\r\nspeedup. (The archive_library actually syncs the files to disk, too.)\r\nThis is similar to the improvement from batching.\r\n\r\nOf course, there are drawbacks to using an extension. Besides the\r\nobvious added complexity of building an extension in C versus writing\r\na shell command, the patch disallows changing the libraries without\r\nrestarting the server. Also, the patch makes no effort to simplify\r\nerror handling, memory management, etc. 
This is left as an exercise\r\nfor the extension author.\r\n\r\nI'm sure there are other ways to approach this, but I thought I'd give\r\nit a try to see what was possible and to get the conversation started.\r\n\r\nNathan", "msg_date": "Thu, 30 Sep 2021 04:47:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 9/29/21, 9:49 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> I'm sure there are other ways to approach this, but I thought I'd give\r\n> it a try to see what was possible and to get the conversation started.\r\n\r\nBTW I am also considering the background worker approach that was\r\nmentioned upthread. My current thinking is that the backup extension\r\nwould define a special background worker that communicates with the\r\narchiver via shared memory. As noted upthread, this would enable\r\nextension authors to do whatever batching, parallelism, etc. that they\r\nwant, and it should also prevent failures from taking down the\r\narchiver process. However, this approach might not make sense for\r\nthings like recovery_end_command that are only executed once. Maybe\r\nit's okay to leave that one alone for now.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 1 Oct 2021 18:05:36 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "\n\n> 30 сент. 2021 г., в 09:47, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> \n> The attached patch is a first try at adding alternatives for\n> archive_command\nLooks like an interesting alternative design.\n\n> I tested the sample archive_command in the docs against the sample\n> archive_library implementation in the patch, and I saw about a 50%\n> speedup. (The archive_library actually syncs the files to disk, too.)\nWhy test sample against sample? 
I think if one tests this against a real archive tool doing archive_status lookups and ready->done renaming, the results will be much different.\n\n> Of course, there are drawbacks to using an extension. Besides the\n> obvious added complexity of building an extension in C versus writing\n> a shell command, the patch disallows changing the libraries without\n> restarting the server. Also, the patch makes no effort to simplify\n> error handling, memory management, etc. This is left as an exercise\n> for the extension author.\nI think the real problem with an extension is quite different from the ones mentioned above.\nThere are many archive tools that already feature parallel archiving: PgBackrest, wal-e, wal-g, pg_probackup, pghoard, pgbarman and others. These tools by far outweigh tools that don't look into archive_status to parallelize archival.\nAnd we are going to ask them: also add a C extension, without any feasible benefit to the user. You only get some restrictions, like a system restart to enable the shared library.\n\nI think we need a design that legalises already existing de-facto standard features in archive tools. Or even better - enables these tools to be more efficient, reliable etc. Either way we will create legacy code from scratch.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 2 Oct 2021 00:07:49 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/1/21, 12:08 PM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\r\n> 30 сент. 2021 г., в 09:47, Bossart, Nathan <bossartn@amazon.com> написал(а):\r\n>> I tested the sample archive_command in the docs against the sample\r\n>> archive_library implementation in the patch, and I saw about a 50%\r\n>> speedup. (The archive_library actually syncs the files to disk, too.)\r\n>> This is similar to the improvement from batching.\r\n> Why test sample agains sample? 
I think if one tests this agains real archive tool doing archive_status lookup and ready->done renaming results will be much different.\r\n\r\nMy intent was to demonstrate the impact of reducing the amount of\r\noverhead when archiving. I don't doubt that third party archive tools\r\ncan show improvements by doing batching/parallelism behind the scenes.\r\n\r\n>> Of course, there are drawbacks to using an extension. Besides the\r\n>> obvious added complexity of building an extension in C versus writing\r\n>> a shell command, the patch disallows changing the libraries without\r\n>> restarting the server. Also, the patch makes no effort to simplify\r\n>> error handling, memory management, etc. This is left as an exercise\r\n>> for the extension author.\r\n> I think the real problem with extension is quite different than mentioned above.\r\n> There are many archive tools that already feature parallel archiving. PgBackrest, wal-e, wal-g, pg_probackup, pghoard, pgbarman and others. These tools by far outweight tools that don't look into archive_status to parallelize archival.\r\n> And we are going to ask them: add also a C extension without any feasible benefit to the user. You only get some restrictions like system restart to enable shared library.\r\n>\r\n> I think we need a design that legalises already existing de-facto standard features in archive tools. Or event better - enables these tools to be more efficient, reliable etc. Either way we will create legacy code from the scratch.\r\n\r\nMy proposal wouldn't require any changes to any of these utilities.\r\nThis design just adds a new mechanism that would allow end users to\r\nset up archiving a different way with less overhead in hopes that it\r\nwill help them keep up. I suspect a lot of work has been put into the\r\narchive tools you mentioned to make sure they can keep up with high\r\nrates of WAL generation, so I'm skeptical that anything we do here\r\nwill really benefit them all that much. 
Ideally, we'd do something\r\nthat improves matters for everyone, though. I'm open to suggestions.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 1 Oct 2021 21:21:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 10/1/21, 12:08 PM, \"Andrey Borodin\" <x4mmm@yandex-team.ru> wrote:\n> > 30 сент. 2021 г., в 09:47, Bossart, Nathan <bossartn@amazon.com> написал(а):\n> >> Of course, there are drawbacks to using an extension. Besides the\n> >> obvious added complexity of building an extension in C versus writing\n> >> a shell command, the patch disallows changing the libraries without\n> >> restarting the server. Also, the patch makes no effort to simplify\n> >> error handling, memory management, etc. This is left as an exercise\n> >> for the extension author.\n> > I think the real problem with extension is quite different than mentioned above.\n> > There are many archive tools that already feature parallel archiving. PgBackrest, wal-e, wal-g, pg_probackup, pghoard, pgbarman and others. These tools by far outweight tools that don't look into archive_status to parallelize archival.\n> > And we are going to ask them: add also a C extension without any feasible benefit to the user. You only get some restrictions like system restart to enable shared library.\n> >\n> > I think we need a design that legalises already existing de-facto standard features in archive tools. Or event better - enables these tools to be more efficient, reliable etc. Either way we will create legacy code from the scratch.\n> \n> My proposal wouldn't require any changes to any of these utilities.\n> This design just adds a new mechanism that would allow end users to\n> set up archiving a different way with less overhead in hopes that it\n> will help them keep up. 
I suspect a lot of work has been put into the\n> archive tools you mentioned to make sure they can keep up with high\n> rates of WAL generation, so I'm skeptical that anything we do here\n> will really benefit them all that much. Ideally, we'd do something\n> that improves matters for everyone, though. I'm open to suggestions.\n\nThis is something we've contemplated quite a bit and the last thing\nthat I'd want to have is a requirement to configure a whole bunch of\nadditional parameters to enable this. Why do we need to have so many\nnew GUCs? I would have thought we'd probably be able to get away with\njust having the appropriate hooks and then telling folks to load the\nextension in shared_preload_libraries..\n\nAs for the hooks themselves, I'd certainly hope that they'd be designed\nto handle batches of WAL rather than individual ones as that's long been\none of the main issues with the existing archive command approach. I\nappreciate that maybe that's less of an issue with a shared library but\nit's still something to consider.\n\nAdmittedly, I haven't looked in depth at this patch set and am just\ngoing off of the description of it provided in the thread, so perhaps\nI missed something.\n\nThanks,\n\nStephen", "msg_date": "Mon, 4 Oct 2021 22:20:41 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/4/21, 7:21 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> This is something we've contemplated quite a bit and the last thing\r\n> that I'd want to have is a requirement to configure a whole bunch of\r\n> additional parameters to enable this. Why do we need to have so many\r\n> new GUCs? I would have thought we'd probably be able to get away with\r\n> just having the appropriate hooks and then telling folks to load the\r\n> extension in shared_preload_libraries..\r\n\r\nThat would certainly simplify my patch quite a bit. 
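With the GUCs gone, the user-facing setup might reduce to something like this (the module name is purely hypothetical):

```
# postgresql.conf sketch: load a hypothetical archive module at startup
# rather than configuring separate *_library GUCs
shared_preload_libraries = 'my_archiver'
archive_mode = on
```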
I'll do it this\r\nway in the next revision.\r\n\r\n> As for the hooks themselves, I'd certainly hope that they'd be designed\r\n> to handle batches of WAL rather than individual ones as that's long been\r\n> one of the main issues with the existing archive command approach. I\r\n> appreciate that maybe that's less of an issue with a shared library but\r\n> it's still something to consider.\r\n\r\nWill do. This seems like it should be easier with the hook because we\r\ncan provide a way to return which files were successfully archived.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 03:07:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 10/4/21, 7:21 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > This is something we've contemplated quite a bit and the last thing\n> > that I'd want to have is a requirement to configure a whole bunch of\n> > additional parameters to enable this. Why do we need to have so many\n> > new GUCs? I would have thought we'd probably be able to get away with\n> > just having the appropriate hooks and then telling folks to load the\n> > extension in shared_preload_libraries..\n> \n> That would certainly simplify my patch quite a bit. I'll do it this\n> way in the next revision.\n> \n> > As for the hooks themselves, I'd certainly hope that they'd be designed\n> > to handle batches of WAL rather than individual ones as that's long been\n> > one of the main issues with the existing archive command approach. I\n> > appreciate that maybe that's less of an issue with a shared library but\n> > it's still something to consider.\n> \n> Will do. 
This seems like it should be easier with the hook because we\n> can provide a way to return which files were successfully archived.\n\nIt's also been discussed, at least around the water cooler (as it were\nin pandemic times- aka our internal slack channels..) that the existing\narchive command might be reimplemented as an extension using these. Not\nsure if that's really necessary but it was a thought. In any case,\nthanks for working on this!\n\nStephen", "msg_date": "Mon, 4 Oct 2021 23:18:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/4/21, 8:19 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> It's also been discussed, at least around the water cooler (as it were\r\n> in pandemic times- aka our internal slack channels..) that the existing\r\n> archive command might be reimplemented as an extension using these. Not\r\n> sure if that's really necessary but it was a thought. In any case,\r\n> thanks for working on this!\r\n\r\nInteresting. I like the idea of having one code path for everything\r\ninstead of branching for the hook and non-hook paths. Thanks for\r\nsharing your thoughts.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 03:31:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Tue, Oct 5, 2021 at 5:32 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 10/4/21, 8:19 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > It's also been discussed, at least around the water cooler (as it were\n> > in pandemic times- aka our internal slack channels..) that the existing\n> > archive command might be reimplemented as an extension using these. Not\n> > sure if that's really necessary but it was a thought. In any case,\n> > thanks for working on this!\n>\n> Interesting. 
I like the idea of having one code path for everything\n> instead of branching for the hook and non-hook paths. Thanks for\n> sharing your thoughts.\n>\n\nI remember having had this discussion a few times, I think mainly with\nStephen and David as well (but not on their internal slack channels :P).\n\nI definitely think that's the way to go. It gives a single path for\neverything which makes it simpler in the most critical parts. And once you\nhave picked an implementation other than it, you're now completely rid of\nthe old implementation. And of course the good old idea that having an\nextension already using the API is a good way to show that the API is in a\ngood place.\n\nAs much as I dislike our current interface in archive_command, and would\nlike to see it go away completely, I do believe we need to ship something\nthat has it - if nothing else then for backwards compatibility. But an\nextension like this would also make it easier to eventually, down the road,\ndeprecate this solution.\n\nOh, and please put said implementation in a better place than contrib :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Oct 5, 2021 at 5:32 AM Bossart, Nathan <bossartn@amazon.com> wrote:On 10/4/21, 8:19 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> It's also been discussed, at least around the water cooler (as it were\n> in pandemic times- aka our internal slack channels..) that the existing\n> archive command might be reimplemented as an extension using these.  Not\n> sure if that's really necessary but it was a thought.  In any case,\n> thanks for working on this!\n\nInteresting.  I like the idea of having one code path for everything\ninstead of branching for the hook and non-hook paths.  
Thanks for\nsharing your thoughts.I remember having had this discussion a few times, I think mainly with Stephen and David as well (but not on their internal slack channels :P).I definitely think that's the way to go. It gives a single path for everything which makes it simpler in the most critical parts. And once you have picked an implementation other than it, you're now completely rid of the old implementation.  And of course the good old idea that having an extension already using the API is a good way to show that the API is in a good place. As much as I dislike our current interface in archive_command, and would like to see it go away completely, I do believe we need to ship something that has it - if nothing else then for backwards compatibility. But an extension like this would also make it easier to eventually, down the road, deprecate this solution. Oh, and please put said implementation in a better place than contrib :)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 6 Oct 2021 22:33:18 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/6/21, 1:34 PM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\r\n> I definitely think that's the way to go. It gives a single path for everything which makes it simpler in the most critical parts. And once you have picked an implementation other than it, you're now completely rid of the old implementation. And of course the good old idea that having an extension already using the API is a good way to show that the API is in a good place. \r\n>\r\n> As much as I dislike our current interface in archive_command, and would like to see it go away completely, I do believe we need to ship something that has it - if nothing else then for backwards compatibility. 
But an extension like this would also make it easier to eventually, down the road, deprecate this solution. \r\n>\r\n> Oh, and please put said implementation in a better place than contrib :)\r\n\r\nI've attached an attempt at moving the archive_command logic to its\r\nown module and replacing it with a hook. This was actually pretty\r\nstraightforward.\r\n\r\nI think the biggest question is where to put the archive_command\r\nmodule, which I've called shell_archive. The only existing directory\r\nthat looked to me like it might work is src/test/modules. It might be\r\nrather bold to relegate this functionality to a test module so\r\nquickly, but on the other hand, perhaps it's the right thing to do\r\ngiven we intend to deprecate it in the future. I'm curious what\r\nothers think about this.\r\n\r\nI'm still working on the documentation updates, which are quite\r\nextensive. I haven't included any of those in the patch yet.\r\n\r\nNathan", "msg_date": "Mon, 18 Oct 2021 23:24:58 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Mon, Oct 18, 2021 at 7:25 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I think the biggest question is where to put the archive_command\n> module, which I've called shell_archive. The only existing directory\n> that looked to me like it might work is src/test/modules. It might be\n> rather bold to relegate this functionality to a test module so\n> quickly, but on the other hand, perhaps it's the right thing to do\n> given we intend to deprecate it in the future. I'm curious what\n> others think about this.\n\nI don't see that as being a viable path forward based on my customer\ninteractions working here at EDB.\n\nI am not quite sure why we wouldn't just compile the functions into\nthe server. Function pointers can point to core functions as surely\nas loadable modules. 
The present design isn't too congenial to that\nbecause it's relying on the shared library loading mechanism to wire\nthe thing in place - but there's no reason it has to be that way.\nLogical decoding plugins don't work that way, for example. We could\nstill have a GUC, say call it archive_method, that selects the module\n-- with 'shell' being a builtin method, and others being loadable as\nmodules. If you set archive_method='shell' then you enable this\nmodule, and it has its own GUC, say call it archive_command, to\nconfigure the behavior.\n\nAn advantage of this approach is that it's perfectly\nbackward-compatible. I understand that archive_command is a hateful\nthing to many people here, but software has to serve the user base,\nnot just the developers. Lots of people use archive_command and rely\non it -- and are not interested in installing yet another piece of\nout-of-core software to do what $OTHERDB has built in.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 08:50:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/19/21 8:50 AM, Robert Haas wrote:\n> On Mon, Oct 18, 2021 at 7:25 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>> I think the biggest question is where to put the archive_command\n>> module, which I've called shell_archive. The only existing directory\n>> that looked to me like it might work is src/test/modules. It might be\n>> rather bold to relegate this functionality to a test module so\n>> quickly, but on the other hand, perhaps it's the right thing to do\n>> given we intend to deprecate it in the future. I'm curious what\n>> others think about this.\n> \n> I don't see that as being a viable path forward based on my customer\n> interactions working here at EDB.\n> \n> I am not quite sure why we wouldn't just compile the functions into\n> the server. 
Function pointers can point to core functions as surely\n> as loadable modules. The present design isn't too congenial to that\n> because it's relying on the shared library loading mechanism to wire\n> the thing in place - but there's no reason it has to be that way.\n> Logical decoding plugins don't work that way, for example. We could\n> still have a GUC, say call it archive_method, that selects the module\n> -- with 'shell' being a builtin method, and others being loadable as\n> modules. If you set archive_method='shell' then you enable this\n> module, and it has its own GUC, say call it archive_command, to\n> configure the behavior.\n> \n> An advantage of this approach is that it's perfectly\n> backward-compatible. I understand that archive_command is a hateful\n> thing to many people here, but software has to serve the user base,\n> not just the developers. Lots of people use archive_command and rely\n> on it -- and are not interested in installing yet another piece of\n> out-of-core software to do what $OTHERDB has built in.\n\n+1 to all of this, certainly for the time being. The archive_command \nmechanism is not great, but it is simple, and this part is not really \nwhat makes writing a good archive command hard.\n\nI had also originally envisioned this as a default extension in core, but \nhaving the default 'shell' method built-in is certainly simpler.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 19 Oct 2021 09:38:37 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Tue, Oct 19, 2021 at 2:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Oct 18, 2021 at 7:25 PM Bossart, Nathan <bossartn@amazon.com>\n> wrote:\n> > I think the biggest question is where to put the archive_command\n> > module, which I've called shell_archive. The only existing directory\n> > that looked to me like it might work is src/test/modules. 
It might be\n> > rather bold to relegate this functionality to a test module so\n> > quickly, but on the other hand, perhaps it's the right thing to do\n> > given we intend to deprecate it in the future. I'm curious what\n> > others think about this.\n>\n> I don't see that as being a viable path forward based on my customer\n> interactions working here at EDB.\n>\n> I am not quite sure why we wouldn't just compile the functions into\n> the server. Function pointers can point to core functions as surely\n> as loadable modules. The present design isn't too congenial to that\n> because it's relying on the shared library loading mechanism to wire\n> the thing in place - but there's no reason it has to be that way.\n> Logical decoding plugins don't work that way, for example. We could\n> still have a GUC, say call it archive_method, that selects the module\n> -- with 'shell' being a builtin method, and others being loadable as\n> modules. If you set archive_method='shell' then you enable this\n> module, and it has its own GUC, say call it archive_command, to\n> configure the behavior.\n>\n\nYeah, seems reasonable. It wouldn't serve as well as an example to\ndevelopers, but then it's probably not the \"loadable module\" part of\nbuilding it that people need examples of. So as long as it's using the same\ninternal APIs and just happens to be compiled in by default, I see no\nproblem with that.\n\nBut, is logical decoding really that great an example? I mean, we build\npgoutput.so as a library, we don't provide it compiled-in. So we could\nbuild the \"shell archiver\" based on that pattern, in which case we should\ncreate a postmaster/shell_archiver directory or something like that?\n\nIt should definitely not go under \"test\".\n\n\nAn advantage of this approach is that it's perfectly\n> backward-compatible. I understand that archive_command is a hateful\n> thing to many people here, but software has to serve the user base,\n> not just the developers. 
Lots of people use archive_command and rely\n> on it -- and are not interested in installing yet another piece of\n> out-of-core software to do what $OTHERDB has built in.\n>\n\nBackwards compatibility is definitely a must, I'd say. Regardless of\nexactly how the backwards-compatible plugin is shipped, it should be what's\nturned on by default.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Tue, 19 Oct 2021 16:19:04 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/19/21, 6:39 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n> On 10/19/21 8:50 AM, Robert Haas wrote:\r\n>> I am not quite sure why we wouldn't just compile the functions into\r\n>> the server. Function pointers can point to core functions as surely\r\n>> as loadable modules. The present design isn't too congenial to that\r\n>> because it's relying on the shared library loading mechanism to wire\r\n>> the thing in place - but there's no reason it has to be that way.\r\n>> Logical decoding plugins don't work that way, for example. 
We could\r\n>> still have a GUC, say call it archive_method, that selects the module\r\n>> -- with 'shell' being a builtin method, and others being loadable as\r\n>> modules. If you set archive_method='shell' then you enable this\r\n>> module, and it has its own GUC, say call it archive_command, to\r\n>> configure the behavior.\r\n>>\r\n>> An advantage of this approach is that it's perfectly\r\n>> backward-compatible. I understand that archive_command is a hateful\r\n>> thing to many people here, but software has to serve the user base,\r\n>> not just the developers. Lots of people use archive_command and rely\r\n>> on it -- and are not interested in installing yet another piece of\r\n>> out-of-core software to do what $OTHERDB has built in.\r\n>\r\n> +1 to all of this, certainly for the time being. The archive_command\r\n> mechanism is not great, but it is simple, and this part is not really\r\n> what makes writing a good archive command hard.\r\n>\r\n> I had also originally envisioned this a default extension in core, but\r\n> having the default 'shell' method built-in is certainly simpler.\r\n\r\nI have no problem building it this way. It's certainly better for\r\nbackward compatibility, which I think everyone here feels is\r\nimportant.\r\n\r\nRobert's proposed design is a bit more like my original proof-of-\r\nconcept [0]. There, I added an archive_library GUC which was\r\nbasically an extension of shared_preload_libraries (which creates some\r\ninteresting problems in the library loading logic). You could only\r\nset one of archive_command or archive_library at any given time. 
When\r\nthe archive_library was set, we ran that library's _PG_init() just\r\nlike we do for any other library, and then we set the archiver\r\nfunction pointer to the library's _PG_archive() function.\r\n\r\nIIUC the main difference between this design and what Robert proposes\r\nis that we'd also move the existing archive_command stuff somewhere\r\nelse and then access it via the archiver function pointer. I think\r\nthat is clearly better than branching based on whether archive_command\r\nor archive_library is set. (BTW I'm not wedded to these GUCs. If\r\nfolks would rather create something like the archive_method GUC, I\r\nthink that would work just as well.)\r\n\r\nMy original proof-of-concept also attempted to handle a bunch of other\r\nshell command GUCs, but perhaps I'd better keep this focused on\r\narchive_command for now. What we do here could serve as an example of\r\nhow to adjust the other shell command GUCs later on. I'll go ahead\r\nand rework my patch to look more like what is being discussed here,\r\nalthough I expect the exact design for the interface will continue to\r\nevolve based on the feedback in this thread.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/E9035E94-EC76-436E-B6C9-1C03FBD8EF54%40amazon.com\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 16:10:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Tue, Oct 19, 2021 at 10:19 AM Magnus Hagander <magnus@hagander.net> wrote:\n> But, is logical decoding really that great an example? I mean, we build pgoutput.so as a library, we don't provide it compiled-in. 
So we could build the \"shell archiver\" based on that pattern, in which case we should create a postmaster/shell_archiver directory or something like that?\n\nWell, I guess you could also use parallel contexts as an example.\nThere, the core facilities that most people will use are baked into\nthe server, but you can provide your own in an extension and the\nparallel context stuff will happily call it for you if you so request.\n\nI don't think the details here are too important. I'm just saying that\nnot everything needs to depend on _PG_init() as a way of bootstrapping\nitself. TBH, if I ran the zoo and also had infinite time to tinker\nwith stuff like this, I'd probably make a pass through the hooks we\nalready have and try to refactor as many of them as possible to use\nsome mechanism other than _PG_init() to bootstrap themselves. That\nmechanism actually sucks. When we use other mechanisms -- like a\nlanguage \"C\" function that knows the shared object name and function\nname -- then load is triggered when it's needed, and the user gets the\nbehavior they want. Similarly with logical decoding and FDWs -- you,\nas the user, say that you want this or that kind of logical decoding\nor FDW or C function or whatever -- and then the system either notices\nthat it's already loaded and does what you want, or notices that it's\nnot loaded and loads it, and then does what you want.\n\nBut when the bootstrapping mechanism is _PG_init(), then the user has\ngot to make sure the library is loaded at the correct time. They have\nto know whether it should go into shared_preload_libraries or whether\nit should be put into one of the other various GUCs or if it can be\nloaded on the fly with LOAD. If they don't load it in the right way,\nor if it doesn't get loaded at all, well then probably it just\nsilently doesn't work. 
Plus there can be weird cases if it gets loaded\ninto some backends but not others and things like that.\n\nAnd here we seem to have an opportunity to improve the interface by\nnot depending on it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Oct 2021 12:12:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> Backwards compatibility is definitely a must, I'd say. Regardless of\n> exactly how the backwards-compatible plugin is shipped, it should be what's\n> turned on by default.\n\nI keep seeing this thrown around and I don't quite get why we feel this\nis the case. I'm not completely against trying to maintain backwards\ncompatibility, but at the same time, we just went through changing quite\na bit around in v12 with the restore command and that's the other half\nof this. Why are we so concerned about backwards compatibility here\nwhen there was hardly any complaint raised about breaking it in the\nrestore case?\n\nIf maintaining compatibility makes this a lot more difficult or ugly,\nthen I'm against doing so. I don't know that to be the case, none of\nthe proposed approaches really sound all that bad to me, but I certainly\ndon't think we should be entirely avoiding the idea of breaking\nbackwards compatibility here. We literally just did that and while\nthere's been some noise about it, it's hardly risen to the level of\nbeing \"something we should never, ever, even consider doing again\" as\nseems to be implied on this thread.\n\nFor those who might argue that maintaining compatibility for archive\ncommand is somehow more important than for restore command- allow me to\nsave you the trouble and just let you know that I don't buy off on such\nan argument. If anything, it should be the opposite. 
You back up your\ndatabase all the time and you're likely to see much more quickly if that\nstops working. Database restores, on the other hand, are nearly always\ndone in times of great stress and when you want things to be very clear\nand easy to follow and for everything to 'just work'.\n\nThanks,\n\nStephen", "msg_date": "Tue, 19 Oct 2021 14:50:34 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/19/21, 9:14 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> My original proof-of-concept also attempted to handle a bunch of other\r\n> shell command GUCs, but perhaps I'd better keep this focused on\r\n> archive_command for now. What we do here could serve as an example of\r\n> how to adjust the other shell command GUCs later on. I'll go ahead\r\n> and rework my patch to look more like what is being discussed here,\r\n> although I expect the exact design for the interface will continue to\r\n> evolve based on the feedback in this thread.\r\n\r\nAlright, I reworked the patch a bit to maintain backward\r\ncompatibility. My initial intent for 0001 was to just do a clean\r\nrefactor to move the shell archiving stuff to its own file. However,\r\nafter I did that, I realized that adding the hook wouldn't be too much\r\nmore work, so I did that as well. This seems to be enough to support\r\ncustom archiving modules. I included a basic example of such a module\r\nin 0002. 0002 is included primarily for demonstration purposes.\r\n\r\nI do wonder if there are some further enhancements we should make to\r\nthe archiving module interface. With 0001 applied, archive_command is\r\nsilently ignored if you've preloaded a library that uses the hook.\r\nThere's no way to indicate that you actually want to use\r\narchive_command or that you want to use a specific library as the\r\narchive library. 
On the other hand, just adding the hook keeps things\r\nsimple, and it doesn't preclude future improvements in this area.\r\n\r\nNathan", "msg_date": "Wed, 20 Oct 2021 22:20:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/20/21, 3:23 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Alright, I reworked the patch a bit to maintain backward\r\n> compatibility. My initial intent for 0001 was to just do a clean\r\n> refactor to move the shell archiving stuff to its own file. However,\r\n> after I did that, I realized that adding the hook wouldn't be too much\r\n> more work, so I did that as well. This seems to be enough to support\r\n> custom archiving modules. I included a basic example of such a module\r\n> in 0002. 0002 is included primarily for demonstration purposes.\r\n\r\nIt looks like the FreeBSD build is failing because sys/wait.h is\r\nmissing. Here is an attempt at fixing that.\r\n\r\nNathan", "msg_date": "Thu, 21 Oct 2021 19:51:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Tue, Oct 19, 2021 at 2:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I keep seeing this thrown around and I don't quite get why we feel this\n> is the case. I'm not completely against trying to maintain backwards\n> compatibility, but at the same time, we just went through changing quite\n> a bit around in v12 with the restore command and that's the other half\n> of this. Why are we so concerned about backwards compatibility here\n> when there was hardly any complaint raised about breaking it in the\n> restore case?\n\nThere are 0 references to restore_command in the v12 release notes.\nJust in case you had the version number wrong in this email, I\ncompared the documentation for restore_command in v10 to the\ndocumentation in v14. 
The differences seem to be only cosmetic. So I'm\nnot sure what functional change you think we made. It was probably\nless significant than what was being discussed here in regards to\narchive_command.\n\nAlso, more to the point, when there's a need to break backward\ncompatibility in order to get some improvement, it's worth\nconsidering, but here there just isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 16:19:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Oct 19, 2021 at 2:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I keep seeing this thrown around and I don't quite get why we feel this\n> > is the case. I'm not completely against trying to maintain backwards\n> > compatibility, but at the same time, we just went through changing quite\n> > a bit around in v12 with the restore command and that's the other half\n> > of this. Why are we so concerned about backwards compatibility here\n> > when there was hardly any complaint raised about breaking it in the\n> > restore case?\n> \n> There are 0 references to restore_command in the v12 release notes.\n> Just in case you had the version number wrong in this email, I\n> compared the documentation for restore_command in v10 to the\n> documentation in v14. The differences seem to be only cosmetic. So I'm\n> not sure what functional change you think we made. 
It was probably\n> less significant than what was being discussed here in regards to\n> archive_command.\n\nrestore_command used to be in recovery.conf, which disappeared with v12\nand it now has to go into postgresql.auto.conf or postgresql.conf.\n\nThat's a huge breaking change.\n\n> Also, more to the point, when there's a need to break backward\n> compatibility in order to get some improvement, it's worth\n> considering, but here there just isn't.\n\nThere won't be any thought towards a backwards-incompatible capability\nif everyone is saying that we can't possibly break it. That's why I was\ncommenting on it.\n\nThanks,\n\nStephen", "msg_date": "Thu, 21 Oct 2021 16:28:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Thu, Oct 21, 2021 at 4:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> restore_command used to be in recovery.conf, which disappeared with v12\n> and it now has to go into postgresql.auto.conf or postgresql.conf.\n>\n> That's a huge breaking change.\n\nNot in the same sense. Moving the functionality to a different\nconfiguration file can and probably did cause a lot of problems for\npeople, but the same basic functionality was still available.\n\n(Also, I'm pretty sure that the recovery.conf changes would have\nhappened years earlier if there hadn't been backward compatibility\nconcerns, from Simon in particular. So saying that there was \"hardly\nany complaint raised\" in that case doesn't seem to me to be entirely\naccurate.)\n\n> > Also, more to the point, when there's a need to break backward\n> > compatibility in order to get some improvement, it's worth\n> > considering, but here there just isn't.\n>\n> There won't be any thought towards a backwards-incompatible capability\n> if everyone is saying that we can't possibly break it. That's why I was\n> commenting on it.\n\nI can't speak for anyone else, but that is not what I am saying. 
I am\nopen to the idea of breaking it if we thereby get some valuable\nbenefit which cannot be obtained otherwise. But Nathan has now\nimplemented something which, from the sound of it, will allow us to\nobtain all of the available benefits with no incompatibilities. If we\nthink of additional benefits that we cannot obtain without\nincompatibilities, then we can consider that situation when it arises.\nIn the meantime, there's no need to go looking for reasons to break\nstuff that works in existing releases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Oct 2021 17:05:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Thu, Oct 21, 2021 at 11:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Oct 21, 2021 at 4:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > restore_command used to be in recovery.conf, which disappeared with v12\n> > and it now has to go into postgresql.auto.conf or postgresql.conf.\n> >\n> > That's a huge breaking change.\n>\n> Not in the same sense. Moving the functionality to a different\n> configuration file can and probably did cause a lot of problems for\n> people, but the same basic functionality was still available.\n>\n\nYeah.\n\nAnd as a bonus it got a bunch of people to upgrade their backup software\nthat suddenly stopped working. Or in some case, to install backup software\ninstead of using the hand-rolled scripts. So there were some good\nside-effects specifically to breaking it as well.\n\n\n\n(Also, I'm pretty sure that the recovery.conf changes would have\n> happened years earlier if there hadn't been backward compatibility\n> concerns, from Simon in particular. 
So saying that there was \"hardly\n> any complaint raised\" in that case doesn't seem to me to be entirely\n> accurate.)\n>\n> > > Also, more to the point, when there's a need to break backward\n> > > compatibility in order to get some improvement, it's worth\n> > > considering, but here there just isn't.\n> >\n> > There won't be any thought towards a backwards-incompatible capability\n> > if everyone is saying that we can't possibly break it.  That's why I was\n> > commenting on it.\n>\n> I can't speak for anyone else, but that is not what I am saying. I am\n> open to the idea of breaking it if we thereby get some valuable\n> benefit which cannot be obtained otherwise. But Nathan has now\n> implemented something which, from the sound of it, will allow us to\n> obtain all of the available benefits with no incompatibilities. If we\n> think of additional benefits that we cannot obtain without\n> incompatibilities, then we can consider that situation when it arises.\n> In the meantime, there's no need to go looking for reasons to break\n> stuff that works in existing releases.\n>\n\n Agreed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
", "msg_date": "Fri, 22 Oct 2021 16:33:47 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Thu, Oct 21, 2021 at 9:51 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 10/20/21, 3:23 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > Alright, I reworked the patch a bit to maintain backward\n> > compatibility.  My initial intent for 0001 was to just do a clean\n> > refactor to move the shell archiving stuff to its own file. 
However,\n> after I did that, I realized that adding the hook wouldn't be too much\n> more work, so I did that as well. This seems to be enough to support\n> custom archiving modules. I included a basic example of such a module\n> in 0002. 0002 is included primarily for demonstration purposes.\n>\n> It looks like the FreeBSD build is failing because sys/wait.h is\n> missing. Here is an attempt at fixing that.\n>\n\nI still like the idea of loading the library via a special parameter,\narchive_library or such.\n\nOne reason for that is that adding/removing modules in\nshared_preload_libraries has a terrible UX in that you have to replace the\nwhole thing. This makes it much more complex to deal with when different\nmodules just want to add to it.\n\nE.g. my awesome backup program could set\narchive_library='my_awesome_backups', and know it didn't break anything\nelse. But it couldn't set shared_preload_libraries='my_awesome_backups',\nbecause then it might break a bunch of other modules that used to be there.\nSo it has to go try to parse the whole config and figure out where to make\nsuch modifications.\n\nNow, this could *also* be solved by allowing shared_preload_library to be a\n\"list\" instead of a string, and allow postgresql.conf to accept syntax like\nshared_preload_libraries+='my_awesome_backups'.\n\nBut without that level of functionality available, I think a separate\nparameter for the archive library would be a good thing.\n\nOther than that:\n+\n+/*\n+ * Is WAL archiving configured? For consistency with previous releases,\nthis\n+ * checks that archive_command is set when archiving via shell is enabled.\n+ * Otherwise, we just check that an archive function is set, and it is the\n+ * responsibility of that archive function to ensure it is properly\nconfigured.\n+ */\n+#define XLogArchivingConfigured() \\\n+ (PG_archive && (PG_archive != shell_archive ||\nXLogArchiveCommand[0] != '\\0'))\n\n\nWouldn't that be better as a callback into the module? 
So that\nshell_archive would implement the check for XLogArchiveCommand. Then\nanother third party module can make its own decision on what to check. And\nPGarchive would then be a struct that holds a function pointer to the\narchive command and another function pointer to the isenabled command? (I\nthink having a struct for it would be useful regardless -- for possible\nfuture extensions with more API points).\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
", "msg_date": "Fri, 22 Oct 2021 16:42:01 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/22/21, 7:43 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\r\n> I still like the idea of loading the library via a special\r\n> parameter, archive_library or such.\r\n\r\nMy first attempt [0] added a GUC like this, so I can speak to some of\r\nthe interesting design decisions that follow.\r\n\r\nThe simplest thing we could do would be to add the archive_library GUC\r\nand to load that just like the library is at the end of\r\nshared_preload_libraries.  This would mean that the archive library\r\ncould be specified in either GUC, and there would effectively be no\r\ndifference between the two.\r\n\r\nThe next thing we could consider doing is adding a new boolean called\r\nprocess_archive_library_in_progress, which would be analogous to\r\nprocess_shared_preload_libraries_in_progress.  If a library is loaded\r\nfrom the archive_library GUC, its _PG_init() will be called with\r\nprocess_archive_library_in_progress set.  This also means that if a\r\nlibrary is specified in both shared_preload_libraries and\r\narchive_library, we'd call its _PG_init() twice.  The library could\r\nthen branch based on whether\r\nprocess_shared_preload_libraries_in_progress or\r\nprocess_archive_library_in_progress was set.\r\n\r\nAnother approach would be to add a new initialization function (e.g.,\r\nPG_archive_init()) that would be used if the library is being loaded\r\nfrom archive_library.  Like before, you can use the library for both\r\nshared_preload_libraries and archive_library, but your initialization\r\nlogic would be expected to go in separate functions. 
However, there\r\nstill wouldn't be anything forcing that.  A library could still break\r\nthe rules and do everything in _PG_init() and be loaded via\r\nshared_preload_libraries.\r\n\r\nOne more thing we could do is to discover the relevant symbols for\r\narchiving in the library loading function.  Rather than expecting the\r\ninitialization function to set the hook correctly, we'd just look up\r\nthe _PG_archive() function during loading.  Again, a library could\r\nprobably still break the rules and do everything in\r\n_PG_init()/shared_preload_libraries, but there would at least be a\r\nnicer interface available.\r\n\r\nI believe the main drawbacks of going down this path are the\r\nadditional complexity in the backend and the slippery slope of adding\r\nall kinds of new GUCs in the future.  My original patch also tried to\r\ndo something similar for some other shell command GUCs\r\n(archive_cleanup_command, restore_command, and recovery_end_command).\r\nWhile I'm going to try to keep this focused on archive_command for\r\nnow, presumably we'd eventually want the ability to use hooks for all\r\nof these things.  I don't know if we really want to incur a new GUC\r\nfor every single one of these.  To be clear, I'm not against adding a\r\nGUC if it seems like the right thing to do.  I just want to make sure\r\nwe are aware of the tradeoffs compared to a simple\r\nshared_preload_libraries approach with its terrible UX.\r\n\r\n> Wouldn't that be better as a callback into the module? So that\r\n> shell_archive would implement the check for XLogArchiveCommand. Then\r\n> another third party module can make its own decision on what to\r\n> check. And PGarchive would then be a struct that holds a function\r\n> pointer to the archive command and another function pointer to the\r\n> isenabled command? (I think having a struct for it would be useful\r\n> regardless -- for possible future extensions with more API points).\r\n\r\n+1.  This crossed my mind, too. 
I'll add this in the next revision.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/E9035E94-EC76-436E-B6C9-1C03FBD8EF54%40amazon.com\r\n\r\n", "msg_date": "Fri, 22 Oct 2021 17:42:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Fri, Oct 22, 2021 at 1:42 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Another approach would be to add a new initialization function (e.g.,\n> PG_archive_init()) that would be used if the library is being loaded\n> from archive_library. Like before, you can use the library for both\n> shared_preload_libraries and archive_library, but your initialization\n> logic would be expected to go in separate functions. However, there\n> still wouldn't be anything forcing that. A library could still break\n> the rules and do everything in _PG_init() and be loaded via\n> shared_preload_libraries.\n\nI was imagining something like what logical decoding does. In that\ncase, you make a _PG_output_plugin_init function and it returns a\ntable of callbacks. Then the core code invokes those callbacks at the\nappropriate times.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Oct 2021 19:34:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/22/21, 4:35 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I was imagining something like what logical decoding does. In that\r\n> case, you make a _PG_output_plugin_init function and it returns a\r\n> table of callbacks. Then the core code invokes those callbacks at the\r\n> appropriate times.\r\n\r\nHere is an attempt at doing this. Archive modules are expected to\r\ndeclare _PG_archive_module_init(), which can define GUCs, register\r\nbackground workers, etc. This function must at least define the\r\narchive callbacks. For now, I've introduced two callbacks. 
The first\r\nis for checking that the archive module is configured, and the second\r\ncontains the actual archiving logic.\r\n\r\nI've written this so that the same library can be used for multiple\r\npurposes (e.g., it could be in shared_preload_libraries and\r\narchive_library). I don't know if that's really necessary, but it\r\nseemed to me like a reasonable way to handle the changes to the\r\nlibrary loading logic that we need anyway.\r\n\r\n0002 is still a sample backup module, but I also added some handling\r\nfor preexisting archives. If the preexisting archive file has the\r\nsame contents as the current file to archive, archiving is allowed to\r\ncontinue. If the contents don't match, archiving fails. This sample\r\nmodule could still produce unexpected results if two servers were\r\nsending archives to the same directory. I stopped short of adding\r\nhandling for that case, but that might be a good thing to tackle next.\r\n\r\nNathan", "msg_date": "Sun, 24 Oct 2021 06:15:13 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Thu, Oct 21, 2021 at 11:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Oct 21, 2021 at 4:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > restore_command used to be in recovery.conf, which disappeared with v12\n> > > and it now has to go into postgresql.auto.conf or postgresql.conf.\n> > >\n> > > That's a huge breaking change.\n> >\n> > Not in the same sense. Moving the functionality to a different\n> > configuration file can and probably did cause a lot of problems for\n> > people, but the same basic functionality was still available.\n> \n> Yeah.\n> \n> And as a bonus it got a bunch of people to upgrade their backup software\n> that suddenly stopped working. 
Or in some case, to install backup software\n> instead of using the hand-rolled scripts. So there were some good\n> side-effects specifically to breaking it as well.\n\nI feel like there's some confusion here- just to clear things up, I\nwasn't suggesting that we wouldn't include the capability, just that we\nshould be open to changing the interface/configuration based on what\nmakes sense and not, necessarily, insist on perfect backwards\ncompatibility. Seems everyone else has come out in support of that as\nwell at this point and so I don't think there's much more to say here.\n\nThe original complaint I had made was that it felt like folks were\npushing hard on backwards compatibility for the sake of it and I was\njust trying to make sure it's clear that we can, and do, break backwards\ncompatibility sometimes and the bar to clear isn't necessarily all that\nhigh, though of course we should be gaining something if we do decide to\nmake such a change.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 Oct 2021 12:26:13 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Sun, Oct 24, 2021 at 2:15 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Here is an attempt at doing this. Archive modules are expected to\n> declare _PG_archive_module_init(), which can define GUCs, register\n> background workers, etc. This function must at least define the\n> archive callbacks. For now, I've introduced two callbacks. The first\n> is for checking that the archive module is configured, and the second\n> contains the actual archiving logic.\n\nI don't see why this patch should need to make any changes to\ninternal_load_library(), PostmasterMain(), SubPostmasterMain(), or any\nother central point of control, and I don't think it should.\npgarch_archiveXlog() can just load the library at the time it's\nneeded. 
That way it only gets loaded in the archiver process, and the\nrequired changes are much more localized. Like instead of asserting\nthat the functions are initialized, just\nload_external_function(libname, \"_PG_archive_module_init\") and call it\nif they aren't.\n\nI think the attempt in check_archive_command()/check_archive_library()\nto force exactly one of those two to be set is not going to work well\nand should be removed. In general, GUCs whose legal values depend on\nthe values of other GUCs don't end up working out well. I think what\nshould happen instead is that if archive_library=shell then\narchive_command does whatever it does; otherwise archive_command is\nwithout effect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:01:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/25/21, 10:02 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I don't see why this patch should need to make any changes to\r\n> internal_load_library(), PostmasterMain(), SubPostmasterMain(), or any\r\n> other central point of control, and I don't think it should.\r\n> pgarch_archiveXlog() can just load the library at the time it's\r\n> needed. That way it only gets loaded in the archiver process, and the\r\n> required changes are much more localized. Like instead of asserting\r\n> that the functions are initialized, just\r\n> load_external_function(libname, \"_PG_archive_module_init\") and call it\r\n> if they aren't.\r\n\r\nIIUC this would mean that archive modules that need to define GUCs or\r\nregister background workers would have to separately define a\r\n_PG_init() and be loaded via shared_preload_libraries in addition to\r\narchive_library. 
That doesn't seem too terrible to me, but it was\r\nsomething I was trying to avoid.\r\n\r\n> I think the attempt in check_archive_command()/check_archive_library()\r\n> to force exactly one of those two to be set is not going to work well\r\n> and should be removed. In general, GUCs whose legal values depend on\r\n> the values of other GUCs don't end up working out well. I think what\r\n> should happen instead is that if archive_library=shell then\r\n> archive_command does whatever it does; otherwise archive_command is\r\n> without effect.\r\n\r\nI'm fine with this approach. I'll go this route in the next revision.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 17:14:26 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Mon, Oct 25, 2021 at 1:14 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> IIUC this would mean that archive modules that need to define GUCs or\n> register background workers would have to separately define a\n> _PG_init() and be loaded via shared_preload_libraries in addition to\n> archive_library. That doesn't seem too terrible to me, but it was\n> something I was trying to avoid.\n\nHmm. 
That doesn't seem like a terrible goal, but I think we should try\nto find some way of achieving it that looks tidier than this does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 13:17:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/25/21, 10:18 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Mon, Oct 25, 2021 at 1:14 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> IIUC this would mean that archive modules that need to define GUCs or\r\n>> register background workers would have to separately define a\r\n>> _PG_init() and be loaded via shared_preload_libraries in addition to\r\n>> archive_library. That doesn't seem too terrible to me, but it was\r\n>> something I was trying to avoid.\r\n>\r\n> Hmm. That doesn't seem like a terrible goal, but I think we should try\r\n> to find some way of achieving it that looks tidier than this does.\r\n\r\nWe could just treat archive_library as if it is tacked onto the\r\nshared_preload_libraries list. I think I can make that look\r\nrelatively tidy.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 17:48:34 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/25/21, 10:50 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 10/25/21, 10:18 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n>> On Mon, Oct 25, 2021 at 1:14 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>> IIUC this would mean that archive modules that need to define GUCs or\r\n>>> register background workers would have to separately define a\r\n>>> _PG_init() and be loaded via shared_preload_libraries in addition to\r\n>>> archive_library. That doesn't seem too terrible to me, but it was\r\n>>> something I was trying to avoid.\r\n>>\r\n>> Hmm. 
That doesn't seem like a terrible goal, but I think we should try\r\n>> to find some way of achieving it that looks tidier than this does.\r\n>\r\n> We could just treat archive_library as if it is tacked onto the\r\n> shared_preload_libraries list. I think I can make that look\r\n> relatively tidy.\r\n\r\nAlright, here is an attempt at that. With this revision, archive\r\nlibraries are preloaded (and _PG_init() is called), and the archiver\r\nis responsible for calling _PG_archive_module_init() to get the\r\ncallbacks. I've also removed the GUC check hooks as previously\r\ndiscussed.\r\n\r\nNathan", "msg_date": "Mon, 25 Oct 2021 19:45:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On Mon, Oct 25, 2021 at 3:45 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Alright, here is an attempt at that. With this revision, archive\n> libraries are preloaded (and _PG_init() is called), and the archiver\n> is responsible for calling _PG_archive_module_init() to get the\n> callbacks. I've also removed the GUC check hooks as previously\n> discussed.\n\nI would need to spend more time on this to have a detailed opinion on\nall of it, but I agree that part looks better this way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Oct 2021 16:29:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/25/21, 1:29 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Mon, Oct 25, 2021 at 3:45 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Alright, here is an attempt at that. With this revision, archive\r\n>> libraries are preloaded (and _PG_init() is called), and the archiver\r\n>> is responsible for calling _PG_archive_module_init() to get the\r\n>> callbacks. 
I've also removed the GUC check hooks as previously\r\n>> discussed.\r\n>\r\n> I would need to spend more time on this to have a detailed opinion on\r\n> all of it, but I agree that part looks better this way.\r\n\r\nGreat. Unless I see additional feedback on the basic design shortly,\r\nI'll give the documentation updates a try.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 25 Oct 2021 20:38:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 10/25/21, 1:41 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Great. Unless I see additional feedback on the basic design shortly,\r\n> I'll give the documentation updates a try.\r\n\r\nOkay, here is a more complete patch with a first attempt at the\r\ndocumentation changes. I tried to keep the changes to the existing\r\ndocs as minimal as possible, and then I added a new chapter that\r\ndescribes what goes into creating an archive module. Separately, I\r\nsimplified the basic_archive module, moved it to src/test/modules,\r\nand added a simple test. My goal is for this to serve as a basic\r\nexample and to provide some test coverage on the new infrastructure.\r\n\r\nNathan", "msg_date": "Wed, 27 Oct 2021 04:10:07 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 10/25/21, 1:41 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n> > Great. Unless I see additional feedback on the basic design shortly,\n> > I'll give the documentation updates a try.\n> \n> Okay, here is a more complete patch with a first attempt at the\n> documentation changes. I tried to keep the changes to the existing\n> docs as minimal as possible, and then I added a new chapter that\n> describes what goes into creating an archive module. 
Separately, I\n> simplified the basic_archive module, moved it to src/test/modules,\n> and added a simple test. My goal is for this to serve as a basic\n> example and to provide some test coverage on the new infrastructure.\n\nDefinitely interested and plan to look at this more shortly, and\ngenerally this all sounds good, but maybe we should have it be posted\nunder a new thread as it's moved pretty far from the subject and folks\nmight not appreciate what this is about at this point..?\n\nThanks,\n\nStephen", "msg_date": "Mon, 1 Nov 2021 13:57:26 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: parallelizing the archiver" }, { "msg_contents": "On 11/1/21, 10:57 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Definitely interested and plan to look at this more shortly, and\r\n> generally this all sounds good, but maybe we should have it be posted\r\n> under a new thread as it's moved pretty far from the subject and folks\r\n> might not appreciate what this is about at this point..?\r\n\r\nDone: https://postgr.es/m/668D2428-F73B-475E-87AE-F89D67942270%40amazon.com\r\n\r\nLooking forward to your feedback.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 1 Nov 2021 18:56:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: parallelizing the archiver" } ]
[ { "msg_contents": "Hi,\n\nI find a problem related to tablespace on win32 (Server 2019).\n\n> postgres=# create tablespace tbs location 'C:\Users\postgres\postgres_install\aa\..\aa';\n> CREATE TABLESPACE\n> postgres=# create table tbl(col int) tablespace tbs;\n> ERROR: could not stat directory \"pg_tblspc/16384/PG_15_202109061/12754\": Invalid argument\n> postgres=# drop tablespace tbs;\n> WARNING: could not open directory \"pg_tblspc/16384/PG_15_202109061\": No such file or directory\n> ERROR: could not stat file \"pg_tblspc/16384\": Invalid argument\n\nI find that canonicalize_path() only removes the trailing '..'; in this case, the '..' is not removed, and \npgsymlink() succeeds.\n\nBut, in fact, if I double-click the directory (%PGDATA%\pg_tblspac\16387), the error message is shown:\n> The filename, directory name, or volume label syntax is incorrect.\n\nSince pgsymlink() seems right and I'm not sure I can change the behavior of canonicalize_path(), \nI want to add an error check (patch is attached).\n\nAny comments?\n\nRegards,\nShenhao Wang", "msg_date": "Wed, 8 Sep 2021 10:16:46 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "drop tablespace failed when location contains .. on win32" }, { "msg_contents": "\nOn 9/8/21 6:16 AM, wangsh.fnst@fujitsu.com wrote:\n> Hi,\n>\n> I find a problem related to tablespace on win32 (Server 2019).\n>\n>> postgres=# create tablespace tbs location 'C:\Users\postgres\postgres_install\aa\..\aa';\n>> CREATE TABLESPACE\n>> postgres=# create table tbl(col int) tablespace tbs;\n>> ERROR: could not stat directory \"pg_tblspc/16384/PG_15_202109061/12754\": Invalid argument\n>> postgres=# drop tablespace tbs;\n>> WARNING: could not open directory \"pg_tblspc/16384/PG_15_202109061\": No such file or directory\n>> ERROR: could not stat file \"pg_tblspc/16384\": Invalid argument\n> I find that canonicalize_path() only removes the trailing '..', in this case, '..' 
is not removed, and \n> pgsymlink succeeds.\n\n\nThat seems like a bug. It's not very canonical :-)\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 08:54:25 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "On Wed, Sep 08, 2021 at 08:54:25AM -0400, Andrew Dunstan wrote:\n> That seems like a bug. It's not very canonical :-)\n\nYes, better to fix that. I fear that more places are impacted than\njust the tablespace code paths.\n--\nMichael", "msg_date": "Thu, 9 Sep 2021 11:19:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Hi, \n\n> -----Original Message-----\n> From: Michael Paquier <michael@paquier.xyz>\n> \n> On Wed, Sep 08, 2021 at 08:54:25AM -0400, Andrew Dunstan wrote:\n> > That seems like a bug. It's not very canonical :-)\n> \n> Yes, better to fix that. I fear that more places are impacted than\n> just the tablespace code paths.\n> --\n> Michael\n\nDo you mean changing the action of canonicalize_path(), like removing all the (..) ?\n\nI'm willing to fix this problem.\n\nRegards\nShenhao Wang\n\n\n", "msg_date": "Thu, 9 Sep 2021 02:35:52 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "On Thu, Sep 09, 2021 at 02:35:52AM +0000, wangsh.fnst@fujitsu.com wrote:\n> Do you mean changing the action of canonicalize_path(), like removing all the (..) 
?\n> \n> I'm willing to fix this problem.\n\nLooking at canonicalize_path(), we already have some logic around\npending_strips to remove paths when we find a \"/..\" in the path, so\nthat's a matter of adjusting this area to properly trim the previous\ndirectory.\n\nOn *nix platforms, we don't apply this much caution either, say a\nsimple /tmp/path/../path/ results in this same path used in the link\nfrom pg_tblspc. But we are speaking about Windows here, and junction\npoints.\n\nBased on the lack of complaints over the years, that does not seem\nreally worth backpatching. 
Just my 2c on this point.\n\nReading the first complaint, I remember I proposed that as a part of a\nlarger patch.\n\nhttps://www.postgresql.org/message-id/20190425.170855.39056106.horiguchi.kyotaro%40lab.ntt.co.jp\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Sep 2021 13:34:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "\nOn 9/8/21 11:44 PM, Michael Paquier wrote:\n> On Thu, Sep 09, 2021 at 02:35:52AM +0000, wangsh.fnst@fujitsu.com wrote:\n>> Do you mean changing the action of canonicalize_path(), like remove all the (..) ?\n>>\n>> I'm willing to fix this problem.\n> Looking at canonicalize_path(), we have already some logic around\n> pending_strips to remove paths when we find a \"/..\" in the path, so\n> that's a matter of adjusting this area to trim properly the previous\n> directory.\n>\n> On *nix platforms, we don't apply this much caution either, say a\n> simple /tmp/path/../path/ results in this same path used in the link\n> from pg_tblspc. But we are speaking about Windows here, and junction\n> points.\n>\n> Based on the lack of complains over the years, that does not seem\n> really worth backpatching. Just my 2c on this point.\n\n\n\nMaybe, although it's arguably a bug.\n\n\nI think I would say that we should remove any \".\" or \"..\" element in the\npath except at the beginning, and in the case of \"..\" also remove the\npreceding element, unless someone can convince me that there's a problem\nwith that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 9 Sep 2021 08:30:29 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. 
on win32" }, { "msg_contents": "Hi,\n\n> -----Original Message-----\n> From: Andrew Dunstan <andrew@dunslane.net>\n> Sent: Thursday, September 9, 2021 8:30 PM\n\n> I think I would say that we should remove any \".\" or \"..\" element in the\n> path except at the beginning, and in the case of \"..\" also remove the\n> preceding element, unless someone can convince me that there's a problem\n> with that.\n\nThese WIP patches try to remove all the '.' or '..' in the path except at\nthe beginning.\n\n0001 is a small fix, because I find that is_absolute_path is not appropriate, \nsee comment in skip_drive:\n> * On Windows, a path may begin with \"C:\" or \"//network/\".\n\nBut this modification will lead to a regress test failure on Windows:\n> -- Will fail with bad path\n> CREATE TABLESPACE regress_badspace LOCATION '/no/such/location';\n> -ERROR: directory \"/no/such/location\" does not exist\n> +ERROR: tablespace location must be an absolute path\n\nDo you think this modification is necessary ?\n\nRest of the modification is in 0002. I think this patch need more test and review.\n\n0003 is a test extension for me to check the action of canonicalize_path.\nDo you think is necessary to add this test extension(and some test scripts) to master ?\nIf necessary, maybe I can use the taptest to test the action of canonicalize_path\nin Linux and Windows.\n\n\nI will add this to next commitfest after further test .\n\nRegards.\nShenhao Wang", "msg_date": "Sun, 12 Sep 2021 07:33:23 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. 
on win32" }, { "msg_contents": "Hi,\n\n> -----Original Message-----\n> From: Andrew Dunstan <andrew@dunslane.net>\n> Sent: Thursday, September 9, 2021 8:30 PM\n\n> I think I would say that we should remove any \".\" or \"..\" element in the\n> path except at the beginning, and in the case of \"..\" also remove the\n> preceding element, unless someone can convince me that there's a problem\n> with that.\n\nThese WIP patches try to remove all the '.' or '..' in the path except at\nthe beginning.\n\n0001 is a small fix, because I find that is_absolute_path is not appropriate, \nsee comment in skip_drive:\n> * On Windows, a path may begin with \"C:\" or \"//network/\".\n\nBut this modification will lead to a regression test failure on Windows:\n> -- Will fail with bad path\n> CREATE TABLESPACE regress_badspace LOCATION '/no/such/location';\n> -ERROR: directory \"/no/such/location\" does not exist\n> +ERROR: tablespace location must be an absolute path\n\nDo you think this modification is necessary?\n\nThe rest of the modification is in 0002. I think this patch needs more testing and review.\n\n0003 is a test extension for me to check the action of canonicalize_path.\nDo you think it is necessary to add this test extension (and some test scripts) to master?\nIf necessary, maybe I can use the taptest to test the action of canonicalize_path\nin Linux and Windows.\n\n\nI will add this to the next commitfest after further testing.\n\nRegards.\nShenhao Wang", "msg_date": "Sun, 12 Sep 2021 07:33:23 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. 
on win32" }, { "msg_contents": "At Mon, 13 Sep 2021 16:06:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Sep 12, 2021 at 07:33:23AM +0000, wangsh.fnst@fujitsu.com wrote:\n> > 0001 is a small fix, because I find that is_absolute_path is not appropriate, \n> > see comment in skip_drive:\n> > > * On Windows, a path may begin with \"C:\" or \"//network/\".\n> \n> #define is_absolute_path(filename) \\\n> ( \\\n> - IS_DIR_SEP((filename)[0]) || \\\n> + (IS_DIR_SEP((filename)[0]) && IS_DIR_SEP((filename)[1])) || \\\n> (isalpha((unsigned char) ((filename)[0])) && (filename)[1] == ':' && \\\n> IS_DIR_SEP((filename)[2])) \\\n> With this change you would consider a path beginning with \"/foo/..\" as\n> not being an absolute path, but that's not correct. Or am I missing\n> something obvious?\n\nMmm. I haven't thought that so seriously, but '/hoge/foo/bar' doesn't\nseem to be an absolute path on Windows since it lacks\n\"<dirver-letter>:\" or \"//hostname\" part. If we're on drive D:,\n\"/Program\\ Files\" doesn't mean \"C:\\Program\\ Files\" but \"D:\\Program\\\nFiles\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Sep 2021 17:36:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Hi, \n\n> -----Original Message-----\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Sent: Monday, September 13, 2021 4:36 PM\n> To: michael@paquier.xyz\n\n> Mmm. I haven't thought that so seriously, but '/hoge/foo/bar' doesn't\n> seem to be an absolute path on Windows since it lacks\n> \"<dirver-letter>:\" or \"//hostname\" part. If we're on drive D:,\n> \"/Program\\ Files\" doesn't mean \"C:\\Program\\ Files\" but \"D:\\Program\\\n> Files\".\n\nI don't know this. 
After some tests, I think it's better to consider '/hoge/foo/bar'\nas an absolute path.\n\n0001 and 0002 are the bugfix patches.\n0003 is the test patch that I have tested on Linux and Windows.\n\nWaiting for comments.\nAdd to the commitfest: https://commitfest.postgresql.org/35/3331/\n\nRegards,\nShenhao Wang", "msg_date": "Sun, 26 Sep 2021 08:40:15 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com> writes:\n> I don't know this. After some tests, I think it's better to consider '/hoge/foo/bar'\n> as an absolute path.\n\nAgreed. I think we are considering \"absolute path\" here as a\nsyntactic concept; Windows' weird rules about drive letters\ndon't really matter for the purposes of path canonicalization.\n\n> 0001 and 0002 are the bugfix patches.\n> 0003 is the test patch that I have tested on Linux and Windows.\n> Waiting for comments.\n\nI tried to read 0001 but really couldn't make sense of the logic\nat all, because it's seriously underdocumented. At minimum you\nneed an API spec comment for canonicalize_path_sub, explaining\nwhat it's supposed to do and why. This is a significant rewrite\nof what was already tricky code, so we can't skimp on\ndocumentation. I'd put some effort into choosing more descriptive\nnames, too (\"sub\" doesn't mean much, especially here where it's\nnot clear if it means \"subroutine\" or \"path component\").\n\nI did notice that you dropped the separate step to collapse\nadjacent separators (i.e, reduce \"foo//bar\" to \"foo/bar\"), which\nseems like probably a bad idea. 
I think such cases might confuse\ncanonicalize_path_sub, and even if it manages to do the right\nthing, that requirement will complicate its invariants won't it?\n\nAnother thing I happened to notice is that join_path_components\nis going out of its way to not generate \"foo/./bar\", but if\nwe are fixing canonicalize_path to be able to delete the \"./\",\nthat seems like a waste of code now.\n\nI am not entirely convinced that 0002 isn't re-introducing the\nsecurity hole that the existing code seeks to plug. That one\nis going to require more justification.\n\nI concur with the upthread comments that there's little chance\nwe'll commit 0003 as-is; the code-to-benefit ratio is too high.\nInstead, you might consider adding test_canonicalize_path in\nsrc/test/regress/regress.c, and then adding a smaller number\nof regression test cases using that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Nov 2021 17:43:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "On Wed, Nov 10, 2021 at 05:43:31PM -0500, Tom Lane wrote:\n> Another thing I happened to notice is that join_path_components\n> is going out of its way to not generate \"foo/./bar\", but if\n> we are fixing canonicalize_path to be able to delete the \"./\",\n> that seems like a waste of code now.\n> \n> I am not entirely convinced that 0002 isn't re-introducing the\n> security hole that the existing code seeks to plug. That one\n> is going to require more justification.\n\nAt the same time, do we have any need for doing 0002 at all if\nwe do 0001? 
The paths are canonicalized before checking them in\npath_contains_parent_reference().\n\n> I concur with the upthread comments that there's little chance\n> we'll commit 0003 as-is; the code-to-benefit ratio is too high.\n> Instead, you might consider adding test_canonicalize_path in\n> src/test/regress/regress.c, and then adding a smaller number\n> of regression test cases using that.\n\nSounds like a good idea to me. I would move these in misc.source for\nanything that require an absolute path.\n\n0001 is indeed in need of more comments and documentation so as one\ndoes not get lost if reading through this code in the future. Changes\nin trim_directory(), for example, should explain what is returned and\nwhy.\n\n+ isabs = is_absolute_path(path);\n+ tmppath = strdup(path);\nIf possible, it would be nice to cut any need for malloc() allocations\nin this code.\n--\nMichael", "msg_date": "Fri, 19 Nov 2021 16:50:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Hi,\n\nThis patch is a wanted bugfix and has been waiting for an update for 2 months.\n\nDo you plan to send a new version soon?\n\n\n", "msg_date": "Mon, 17 Jan 2022 08:40:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Hi\n\n> This patch is a wanted bugfix and has been waiting for an update for 2 months.\n> \n> Do you plan to send a new version soon?\n\nYes, I will send a new version before next weekend\n\nRegards\n\nShenhao Wang\n\n\n", "msg_date": "Tue, 18 Jan 2022 01:08:01 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. 
on win32" }, { "msg_contents": "On Tue, Jan 18, 2022 at 01:08:01AM +0000, wangsh.fnst@fujitsu.com wrote:\n> Yes, I will send a new version before next weekend\n\nThanks!\n--\nMichael", "msg_date": "Tue, 18 Jan 2022 10:19:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Hi, \n\nThe new version is attached.\n\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n> I tried to read 0001 but really couldn't make sense of the logic\n> at all, because it's seriously underdocumented. At minimum you\n> need an API spec comment for canonicalize_path_sub, explaining\n> what it's supposed to do and why.\n\nI have added some comments, but I'm not sure these comments are enough\nor easy to understand.\n\n> I did notice that you dropped the separate step to collapse\n> adjacent separators (i.e, reduce \"foo//bar\" to \"foo/bar\"), which\n> seems like probably a bad idea.\n\nI have added these back.\n\nMichael Paquier <michael@paquier.xyz> wrote:\n> for example, should explain what is returned and\n> why.\n> + isabs = is_absolute_path(path);\n> + tmppath = strdup(path);\n> If possible, it would be nice to cut any need for malloc() allocations\n> in this code.\n\nThank you for the advice. In this version, I do not use malloc().\n\n> > I concur with the upthread comments that there's little chance\n> > we'll commit 0003 as-is; the code-to-benefit ratio is too high.\n> > Instead, you might consider adding test_canonicalize_path in\n> > src/test/regress/regress.c, and then adding a smaller number\n> > of regression test cases using that.\n> \n> Sounds like a good idea to me. I would move these in misc.source for\n> anything that require an absolute path.\n\nI don't fully understand this. 
So, I did not change the test patch.\n\nRegards,\nShenhao Wang", "msg_date": "Mon, 24 Jan 2022 11:21:12 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "On Mon, Jan 24, 2022 at 11:21:12AM +0000, wangsh.fnst@fujitsu.com wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I concur with the upthread comments that there's little chance\n>>> we'll commit 0003 as-is; the code-to-benefit ratio is too high.\n>>> Instead, you might consider adding test_canonicalize_path in\n>>> src/test/regress/regress.c, and then adding a smaller number\n>>> of regression test cases using that.\n>> \n>> Sounds like a good idea to me. I would move these in misc.source for\n>> anything that require an absolute path.\n> \n> I don't fully understand this. So, I did not change the test patch.\n\nIn order to make the tests cheap, there is no need to have a separate\nmodule in src/test/modules/ as your patch 0002 is doing. Instead, you\nshould move the C code of your SQL function test_canonicalize_path()\nto src/test/regress/regress.c, then add some tests in\nsrc/test/regress/sql/, with a SQL function created in the test script\nthat feeds from what would be added to regress.so.\n\nPlease note that my previous comment has become incorrect as of\ndc9c3b0, that has removed the concept of input/output files in the\nregression tests, but you can do the same with a \\getenv to get access\nto absolute paths for the tests. There are many examples in the tree\nfor that, one is copy.sql.\n--\nMichael", "msg_date": "Mon, 24 Jan 2022 20:46:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. 
on win32" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> In order to make the tests cheap, there is no need to have a separate\n> module in src/test/modules/ as your patch 0002 is doing. Instead, you\n> should move the C code of your SQL function test_canonicalize_path()\n> to src/test/regress/regress.c, then add some tests in\n> src/test/regress/sql/, with a SQL function created in the test script\n> that feeds from what would be added to regress.so.\n\nHere's a revised patch version that does it like that. I also\nreviewed and simplified the canonicalize_path logic. I think\nthis is committable.\n\n(I suspect that adminpack's checks for unsafe file names could\nnow be simplified substantially, because many of the corner cases\nit worries about are no longer possible, as evidenced by the change\nin error message there. I've not pursued that, however.)\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 30 Jan 2022 16:50:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "On Sun, Jan 30, 2022 at 04:50:03PM -0500, Tom Lane wrote:\n> Here's a revised patch version that does it like that. I also\n> reviewed and simplified the canonicalize_path logic. I think\n> this is committable.\n\nThanks for the updated version. The range of the tests looks fine\nenough, and the CF bot does not complain. 
The code is\nstraightforward and pretty clear in terms of the handling of \".\",\n\"..\" and the N-depth handling necessary.\n\nShould we have tests for WIN32 (aka for drive letters and \"//\")?\nThis could be split into its own separate test file to limit the\ndamage with the alternate outputs, and the original complaint was from\nthere.\n\n> (I suspect that adminpack's checks for unsafe file names could\n> now be simplified substantially, because many of the corner cases\n> it worries about are no longer possible, as evidenced by the change\n> in error message there. I've not pursued that, however.)\n\nFine by me to leave this part for later.\n--\nMichael", "msg_date": "Mon, 31 Jan 2022 17:15:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Should we have tests for WIN32 (aka for drive letters and \"//\")?\n> This could be split into its own separate test file to limit the\n> damage with the alternate outputs, and the original complaint was from\n> there.\n\nI thought about it and concluded that the value couldn't justify\nthe pain-in-the-neck factor of adding a platform-specific variant\nresult file. skip_drive() is pretty simple and decoupled from what\nwe're trying to test here, plus it hasn't changed in decades and\nis unlikely to do so in future.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jan 2022 07:23:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. 
Instead, you\n>> should move the C code of your SQL function test_canonicalize_path()\n>> to src/test/regress/regress.c, then add some tests in\n>> src/test/regress/sql/, with a SQL function created in the test script\n>> that feeds from what would be added to regress.so.\n> Here's a revised patch version that does it like that. I also\n> reviewed and simplified the canonicalize_path logic. I think\n> this is committable.\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 11:58:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 1/30/22 16:50, Tom Lane wrote:\n>> Here's a revised patch version that does it like that. I also\n>> reviewed and simplified the canonicalize_path logic. I think\n>> this is committable.\n\n> LGTM\n\nPushed, thanks for looking.\n\nI think I'm also going to have a look at simplifying some of the\ndependent code, just because it feels weird to leave that unfinished.\nIn particular, Shenhao-san suggested upthread that we could remove\npath_contains_parent_reference(). I complained about that at the\ntime, but I hadn't quite absorbed the fact that an absolute path\nis now *guaranteed* not to have any \"..\" after canonicalize_path.\nSo the existing calls in adminpack.c and genfile.c are certainly\ndead code. We probably want to keep path_contains_parent_reference()\nin case some extension is using it, but seeing that its API spec\nalready requires the input to be canonicalized, it could be simplified\nto just check for \"..\" at the start.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jan 2022 12:18:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: drop tablespace failed when location contains .. on win32" } ]
[ { "msg_contents": "Hi,\n\nWhile working on one of the internal features, we found that it is a\nbit difficult to run pg_waldump for a normal user to know WAL info and\nstats of a running postgres database instance in the cloud. Many\ntimes, users or DBAs or developers would want to get and analyze the\nfollowing:\n1) raw WAL record associated with an LSN or raw WAL records between a\nstart LSN and end LSN for feeding to some other functionality\n2) WAL statistics associated with an LSN or between start LSN and end\nLSN for debugging or analytical purposes. The WAL stats are the number\nof inserts, updates, deletes, index inserts, commits, checkpoints,\naborts, wal record sizes, FPI (Full Page Image) count etc., which are\nbasically everything that we get with pg_waldump --stats option plus\nsome other information as we may feel will be useful.\n\nAn available option is to use pg_waldump, a standalone program\nemitting human-readable WAL info into a standard output/file. This\nworks well when users have access to the system on which postgres is\nrunning. But for a postgres database instance running in cloud\nenvironments, starting the pg_waldump, fetching and presenting its\noutput to the users in a structured way may be a bit hard to do.\n\nHow about we create a new extension, called pg_walinspect (synonymous\nto pageinspect extension) with a bunch of SQL-callable functions that\nget the raw WAL records or stats out of a running postgres database\ninstance in a more structured way that is easily consumable by all the\nusers or DBAs or developers? We can also provide these functionalities\ninto the core postgres (in xlogfuncs.c) instead of a new extension,\nbut we would like it to be pluggable so that the functions will be\nused only if required.\n\n[1] shows a rough sketch of the functions that the new pg_walinspect\nextension can provide. 
These are not exhaustive; we can\nadd/remove/modify as we move further.\n\nWe would like to invite more thoughts from the hackers.\n\nCredits: Thanks to Satya Narlapuram, Chen Liang (for some initial\nwork), Tianyu Zhang and Ashutosh Sharma (copied in cc) for internal\ndiscussions.\n\n[1]\na) bytea pg_get_wal_record(pg_lsn lsn); and bytea\npg_get_wal_record(pg_lsn lsn, text wal_dir); - Returns a single row of\nraw WAL record of bytea type. WAL data is read from pg_wal or\nspecified wal_dir directory.\n\nb) bytea[] pg_get_wal_record(pg_lsn start_lsn, pg_lsn end_lsn); and\nbytea[] pg_get_wal_record(pg_lsn start_lsn, pg_lsn end_lsn, text\nwal_dir); - Returns multiple rows of raw WAL records of bytea type,\none row per WAL record. WAL data is read from pg_wal or specified\nwal_dir directory.\n\nCREATE TYPE walinspect_stats_type AS (stat1, stat2, stat3 …. statN);\nc) walinspect_stats_type pg_get_wal_stats(pg_lsn lsn); and\nwalinspect_stats_type pg_get_wal_stats(pg_lsn lsn, text wal_dir); -\nReturns a single row of WAL record’s stats of walinspect_stats_type\ntype. WAL data is read from pg_wal or specified wal_dir directory.\n\nd) walinspect_stats_type[] pg_get_wal_stats(pg_lsn start_lsn, pg_lsn\nend_lsn); and walinspect_stats_type[] pg_get_wal_stats(pg_lsn\nstart_lsn, pg_lsn end_lsn, text wal_dir); - Returns multiple rows of\nWAL record stats of walinspect_stats_type type, one row per WAL\nrecord. 
WAL data is read from pg_wal or\nspecified wal_dir directory.\n\nCREATE TYPE walinspect_lsn_range_type AS (pg_lsn start_lsn, pg_lsn end_lsn);\ng) walinspect_lsn_range_type walinspect_get_lsn_range(text\nwal_dir); - Returns a single row of start LSN and end LSN of the WAL\nrecords available under pg_wal or specified wal_dir directory.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Sep 2021 19:18:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On 9/8/21, 6:49 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> How about we create a new extension, called pg_walinspect (synonymous\r\n> to pageinspect extension) with a bunch of SQL-callable functions that\r\n> get the raw WAL records or stats out of a running postgres database\r\n> instance in a more structured way that is easily consumable by all the\r\n> users or DBAs or developers? 
We can also provide these functionalities\r\n> into the core postgres (in xlogfuncs.c) instead of a new extension,\r\n> but we would like it to be pluggable so that the functions will be\r\n> used only if required.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 9 Sep 2021 22:49:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Sep 09, 2021 at 10:49:46PM +0000, Bossart, Nathan wrote:\n> +1\n\nA backend approach has the advantage that you can use the proper locks\nto make sure that a segment is not recycled or removed by a concurrent\ncheckpoint, so that would be reliable.\n--\nMichael", "msg_date": "Fri, 10 Sep 2021 10:51:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "\nOn 9/10/21 12:49 AM, Bossart, Nathan wrote:\n> On 9/8/21, 6:49 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> How about we create a new extension, called pg_walinspect (synonymous\n>> to pageinspect extension) with a bunch of SQL-callable functions that\n>> get the raw WAL records or stats out of a running postgres database\n>> instance in a more structured way that is easily consumable by all the\n>> users or DBAs or developers? 
We can also provide these functionalities\n>> into the core postgres (in xlogfuncs.c) instead of a new extension,\n>> but we would like it to be pluggable so that the functions will be\n>> used only if required.\n> +1\n>\n> Nathan\n>\n+1\n\nBertrand\n\n\n\n", "msg_date": "Fri, 10 Sep 2021 09:04:10 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Sep 10, 2021 at 7:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 09, 2021 at 10:49:46PM +0000, Bossart, Nathan wrote:\n> > +1\n>\n> A backend approach has the advantage that you can use the proper locks\n> to make sure that a segment is not recycled or removed by a concurrent\n> checkpoint, so that would be reliable.\n\nThanks for sharing your thoughts. IMO, using locks for showing WAL\nstats isn't a good way, because these new functions may block the\ncheckpointer from removing/recycling the WAL files. We don't want to\ndo that. If a user has asked for the stats of an LSN/range of LSNs, we\nprovide the info if it is/they are available in the pg_wal directory;\notherwise we can throw warnings/errors. 
This behaviour is pretty much\nin sync with what pg_waldump does right now.\n\nAnd, some users may not need these new functions at all, so in such\ncases going the extension route makes it more usable.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 10 Sep 2021 19:59:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Sep 8, 2021 at 07:18:08PM +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> While working on one of the internal features, we found that it is a\n> bit difficult to run pg_waldump for a normal user to know WAL info and\n> stats of a running postgres database instance in the cloud. Many a\n> times users or DBAs or developers would want to get and analyze\n> following:\n\nUh, are we going to implement everything that is only available at the\ncommand line as an extension just for people who are using managed cloud\nservices where the command line is not available and the cloud provider\nhas not made that information accessible? 
Seems like this might lead to\na lot of duplicated effort.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 18:07:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On 10/05/21 18:07, Bruce Momjian wrote:\n> Uh, are we going to implement everything that is only available at the\n> command line as an extension just for people who are using managed cloud\n> services\n\nOne extension that runs a curated menu of command-line tools for you\nand returns their stdout?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 5 Oct 2021 18:22:19 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On 10/5/21 15:07, Bruce Momjian wrote:\n> On Wed, Sep 8, 2021 at 07:18:08PM +0530, Bharath Rupireddy wrote:\n>> While working on one of the internal features, we found that it is a\n>> bit difficult to run pg_waldump for a normal user to know WAL info and\n>> stats of a running postgres database instance in the cloud. Many a\n>> times users or DBAs or developers would want to get and analyze\n>> following:\n> \n> Uh, are we going to implement everything that is only available at the\n> command line as an extension just for people who are using managed cloud\n> services where the command line is not available and the cloud provider\n> has not made that information accessible? Seems like this might lead to\n> a lot of duplicated effort.\n\nNo. For most command line utilities, there's no reason to expose them in\nSQL or they already are exposed in SQL. 
For example, everything in\npg_controldata is already available via SQL functions.\n\nSpecifically exposing pg_waldump functionality in SQL could speed up\nfinding bugs in the PG logical replication code. We found and fixed a\nfew over this past year, but there are more lurking out there.\n\nHaving the extension in core is actually the only way to avoid\nduplicated effort, by having shared code which the pg_dump binary and\nthe extension both wrap or call.\n\n-Jeremy\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Tue, 5 Oct 2021 15:30:07 -0700", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Oct 5, 2021 at 06:22:19PM -0400, Chapman Flack wrote:\n> On 10/05/21 18:07, Bruce Momjian wrote:\n> > Uh, are we going to implement everything that is only available at the\n> > command line as an extension just for people who are using managed cloud\n> > services\n> \n> One extension that runs a curated menu of command-line tools for you\n> and returns their stdout?\n\nYes, that would make sense, and something the cloud service providers\nwould write.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 20:38:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Oct 5, 2021 at 03:30:07PM -0700, Jeremy Schneider wrote:\n> On 10/5/21 15:07, Bruce Momjian wrote:\n> > On Wed, Sep 8, 2021 at 07:18:08PM +0530, Bharath Rupireddy wrote:\n> >> While working on one of the internal features, we found that it is a\n> >> bit difficult to run pg_waldump for a normal user to know WAL info and\n> >> stats of a running postgres database instance in 
the cloud. Many a\n> >> times users or DBAs or developers would want to get and analyze\n> >> following:\n> > \n> > Uh, are we going to implement everything that is only available at the\n> > command line as an extension just for people who are using managed cloud\n> > services where the command line is not available and the cloud provider\n> > has not made that information accessible? Seems like this might lead to\n> > a lot of duplicated effort.\n> \n> No. For most command line utilities, there's no reason to expose them in\n> SQL or they already are exposed in SQL. For example, everything in\n> pg_controldata is already available via SQL functions.\n\nThat's a good example.\n\n> Specifically exposing pg_waldump functionality in SQL could speed up\n> finding bugs in the PG logical replication code. We found and fixed a\n> few over this past year, but there are more lurking out there.\n\nUh, why is pg_waldump more important than other command line tool\ninformation?\n\n> Having the extension in core is actually the only way to avoid\n> duplicated effort, by having shared code which the pg_dump binary and\n> the extension both wrap or call.\n\nUh, aren't you duplicating code by having pg_waldump as a command-line\ntool and an extension?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 5 Oct 2021 20:43:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On 10/5/21 17:43, Bruce Momjian wrote:\n> On Tue, Oct 5, 2021 at 03:30:07PM -0700, Jeremy Schneider wrote:\n>> Specifically exposing pg_waldump functionality in SQL could speed up\n>> finding bugs in the PG logical replication code. 
We found and fixed a\n>> few over this past year, but there are more lurking out there.\n> \n> Uh, why is pg_waldump more important than other command line tool\n> information?\n\nGoing down the list of all other utilities in src/bin:\n\n* pg_amcheck, pg_config, pg_controldata: already available in SQL\n* psql, pgbench, pg_dump: already available as client-side apps\n* initdb, pg_archivecleanup, pg_basebackup, pg_checksums, pg_ctl,\npg_resetwal, pg_rewind, pg_upgrade, pg_verifybackup: can't think of any\npossible use case outside server OS access, most of these are too low\nlevel and don't even make sense in SQL\n* pg_test_fsync, pg_test_timing: marginally interesting ideas in SQL,\ndon't feel any deep interest myself\n\nSpeaking selfishly, there are a few reasons I would be specifically\ninterested in pg_waldump (the only remaining one on the list).\n\n.\n\nFirst, to better support existing features around logical replication\nand decoding.\n\nIn particular, it seems inconsistent to me that all the replication\nmanagement SQL functions take LSNs as arguments - and yet there's no\nSQL-based way to find the LSNs that you are supposed to pass into these\nfunctions.\n\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION\n\nOver the past few years I've been pulled in to help several large PG\nusers who ran into these bugs, and it's very painful - because the only\nreal remediation is to drop and recreate the replication slot, which\nmeans either re-copying all the data to the downstream system or\nfiguring out a way to resync it. Some PG users have 3rd party tools like\nHVR which can do row-by-row resync (IIUC), but no matter how you slice\nit, we're talking about a lot of pain for people replicating large data\nsets between multiple systems. 
In most cases, the only/best option even\nwith very large tables is to just make a fresh copy of the data - which\ncan translate to a business outage of hours or even days.\n\nMy favorite example is the SQL function pg_replication_slot_advance() -\nthis could really help PG users find less painful solutions to broken\ndecoding, however it's not really possible to /know/ an LSN to advance\nto without inspecting WAL. ISTM there's a strong use case here for a SQL\ninterface on WAL inspection.\n\n.\n\nSecond: debugging and troubleshooting logical replication and decoding bugs.\n\nI helped track down a few logical replication bugs and get fixed into\ncode at postgresql.org this past year. (But I give credit to others who\nare much better at C than I am, and who did a lot more work than I did\non these bugs!)\n\nLogical decoding bugs are some of the hardest to fix - because all you\nhave is a WAL stream, but you don't know the SQL or workload patterns or\nPG code paths which created that WAL stream.\n\nDumping the WAL - knowing which objects and which types of operations\nare involved and stats like number of changes or number of\nsubtransactions - this helps identify which transaction and what SQL in\nthe application triggered the failure, which can help find short-term\nworkarounds. Businesses need those short-term workarounds so they can\nkeep going while we work on finding and fixing bugs, which can take some\ntime. This is another place where I think a SQL interface to WAL would\nbe helpful to PG users. Especially the ability to filter and trace a\nsingle transaction through a large number of WAL files in the directory.\n\n.\n\nThird: statistics on WAL - especially full page writes. Giving users the\nfull power of SQL allows much more sophisticated analysis of the WAL\nrecords. 
Personally, I'd probably find myself importing all the WAL\nstats into the DB anyway because SQL is my preferred data manipulation\nlanguage.\n\n\n>> Having the extension in core is actually the only way to avoid\n>> duplicated effort, by having shared code which the pg_dump binary and\n>> the extension both wrap or call.\n> \n> Uh, aren't you duplicating code by having pg_waldump as a command-line\n> tool and an extension?\n\nWell this whole conversation is just theoretical anyway until the code\nis shared. :) But if Bharath is writing functions to decode WAL, then\nwouldn't we just have pg_waldump use these same functions in order to\navoid duplicating code?\n\nBharath - was some code already posted and I just missed it?\n\nLooking at the proposed API from the initial email, I like that there's\nboth stats functionality and WAL record inspection functionality\n(similar to pg_waldump). I like that there's the ability to pull the\nrecords as raw bytea data, however I think we're also going to want an\nability in v1 of the patch to decode it (similar to pageinspect\nheap_page_item_attrs, etc).\n\nAnother feature that might be interesting down the road would be the\nability to provide filtering of WAL records for security purposes. For\nexample, allowing a user to only dump raw WAL records for one particular\ndatabase, or maybe excluding WAL records that change system catalogs or\nthe like. But I probably wouldn't start here, personally.\n\nNow then.... as Blaise Pascal said in 1657 (and as was also said by\nWinston Churchill, Mark Twain, etc).... 
\"I'm sorry I wrote you such a\nlong letter; I didn't have time to write a short one.\"\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Wed, 6 Oct 2021 09:56:34 -0700", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Oct 6, 2021 at 09:56:34AM -0700, Jeremy Schneider wrote:\n> On 10/5/21 17:43, Bruce Momjian wrote:\n> > On Tue, Oct 5, 2021 at 03:30:07PM -0700, Jeremy Schneider wrote:\n> >> Specifically exposing pg_waldump functionality in SQL could speed up\n> >> finding bugs in the PG logical replication code. We found and fixed a\n> >> few over this past year, but there are more lurking out there.\n> > \n> > Uh, why is pg_waldump more important than other command line tool\n> > information?\n> \n> Going down the list of all other utilities in src/bin:\n> \n> * pg_amcheck, pg_config, pg_controldata: already available in SQL\n> * psql, pgbench, pg_dump: already available as client-side apps\n> * initdb, pg_archivecleanup, pg_basebackup, pg_checksums, pg_ctl,\n> pg_resetwal, pg_rewind, pg_upgrade, pg_verifybackup: can't think of any\n> possible use case outside server OS access, most of these are too low\n> level and don't even make sense in SQL\n> * pg_test_fsync, pg_test_timing: marginally interesting ideas in SQL,\n> don't feel any deep interest myself\n> \n> Speaking selfishly, there are a few reasons I would be specifically\n> interested in pg_waldump (the only remaining one on the list).\n\nThis is the analysis I was looking for to understand if copying the\nfeatures of command-line tools in extensions was a wise direction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 6 Oct 2021 13:19:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", 
"msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On 2021-Oct-06, Jeremy Schneider wrote:\n\n> Well this whole conversation is just theoretical anyway until the code\n> is shared. :) But if Bharath is writing functions to decode WAL, then\n> wouldn't we just have pg_waldump use these same functions in order to\n> avoid duplicating code?\n\nActually, a lot of the code is already shared, since the rmgrdesc\nroutines are in src/backend. Keep in mind that it was there before\npg_xlogdump existed, to support WAL_DEBUG. When pg_xlogdump was added,\nwhat we did was allow that backend-only code be compilable in a frontend\nenvironment. Also, we already have xlogreader.\n\nSo pg_waldump itself is mostly scaffolding to let the frontend\nenvironment get argument values to pass to backend-enabled code. The\nonly really interesting, novel thing is the --stats mode ... and I bet\nyou can write that with some SQL-level aggregation of the raw data, no\nneed for any C code.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 6 Oct 2021 14:23:33 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Oct 6, 2021 at 10:26 PM Jeremy Schneider\n<schneider@ardentperf.com> wrote:\n>\n> On 10/5/21 17:43, Bruce Momjian wrote:\n> > On Tue, Oct 5, 2021 at 03:30:07PM -0700, Jeremy Schneider wrote:\n> >> Specifically exposing pg_waldump functionality in SQL could speed up\n> >> finding bugs in the PG logical replication code. 
We found and fixed a\n> >> few over this past year, but there are more lurking out there.\n> >\n> > Uh, why is pg_waldump more important than other command line tool\n> > information?\n>\n> Going down the list of all other utilities in src/bin:\n>\n> * pg_amcheck, pg_config, pg_controldata: already available in SQL\n> * psql, pgbench, pg_dump: already available as client-side apps\n> * initdb, pg_archivecleanup, pg_basebackup, pg_checksums, pg_ctl,\n> pg_resetwal, pg_rewind, pg_upgrade, pg_verifybackup: can't think of any\n> possible use case outside server OS access, most of these are too low\n> level and don't even make sense in SQL\n> * pg_test_fsync, pg_test_timing: marginally interesting ideas in SQL,\n> don't feel any deep interest myself\n>\n> Speaking selfishly, there are a few reasons I would be specifically\n> interested in pg_waldump (the only remaining one on the list).\n\nThanks Jeremy for the analysis.\n\n> First, to better support existing features around logical replication\n> and decoding.\n>\n> In particular, it seems inconsistent to me that all the replication\n> management SQL functions take LSNs as arguments - and yet there's no\n> SQL-based way to find the LSNs that you are supposed to pass into these\n> functions.\n>\n> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION\n>\n> Over the past few years I've been pulled in to help several large PG\n> users who ran into these bugs, and it's very painful - because the only\n> real remediation is to drop and recreate the replication slot, which\n> means either re-copying all the data to the downstream system or\n> figuring out a way to resync it. Some PG users have 3rd party tools like\n> HVR which can do row-by-row resync (IIUC), but no matter how you slice\n> it, we're talking about a lot of pain for people replicating large data\n> sets between multiple systems. 
In most cases, the only/best option even\n> with very large tables is to just make a fresh copy of the data - which\n> can translate to a business outage of hours or even days.\n>\n> My favorite example is the SQL function pg_replication_slot_advance() -\n> this could really help PG users find less painful solutions to broken\n> decoding, however it's not really possible to /know/ an LSN to advance\n> to without inspecting WAL. ISTM there's a strong use case here for a SQL\n> interface on WAL inspection.\n>\n> Second: debugging and troubleshooting logical replication and decoding bugs.\n>\n> I helped track down a few logical replication bugs and get fixed into\n> code at postgresql.org this past year. (But I give credit to others who\n> are much better at C than I am, and who did a lot more work than I did\n> on these bugs!)\n>\n> Logical decoding bugs are some of the hardest to fix - because all you\n> have is a WAL stream, but you don't know the SQL or workload patterns or\n> PG code paths which created that WAL stream.\n>\n> Dumping the WAL - knowing which objects and which types of operations\n> are involved and stats like number of changes or number of\n> subtransactions - this helps identify which transaction and what SQL in\n> the application triggered the failure, which can help find short-term\n> workarounds. Businesses need those short-term workarounds so they can\n> keep going while we work on finding and fixing bugs, which can take some\n> time. This is another place where I think a SQL interface to WAL would\n> be helpful to PG users. Especially the ability to filter and trace a\n> single transaction through a large number of WAL files in the directory.\n>\n> Third: statistics on WAL - especially full page writes. Giving users the\n> full power of SQL allows much more sophisticated analysis of the WAL\n> records. 
Personally, I'd probably find myself importing all the WAL\n> stats into the DB anyway because SQL is my preferred data manipulation\n> language.\n\nJust to add to the above points, with the new extension pg_walinspect\nwe will have following advantages:\n1) Usability - SQL callable functions will be easier to use for the\nusers/admins/developers.\n2) Access Control - we can provide better access control for the WAL data/stats.\n3) Emitting the actual WAL data(as bytea structure) and stats via SQL\ncallable functions will help to analyze and answer questions like how\nmuch WAL data is being generated in the system, what kind of WAL data\nit is, how many FPWs are happening and so on. Jermey has already given\nmore realistic use cases.\n4) I came across this - there's a similar capability in SQL server -\nhttps://www.mssqltips.com/sqlservertip/3076/how-to-read-the-sql-server-database-transaction-log/\n\n> >> Having the extension in core is actually the only way to avoid\n> >> duplicated effort, by having shared code which the pg_dump binary and\n> >> the extension both wrap or call.\n> >\n> > Uh, aren't you duplicating code by having pg_waldump as a command-line\n> > tool and an extension?\n>\n> Well this whole conversation is just theoretical anyway until the code\n> is shared. :) But if Bharath is writing functions to decode WAL, then\n> wouldn't we just have pg_waldump use these same functions in order to\n> avoid duplicating code?\n>\n> Bharath - was some code already posted and I just missed it?\n>\n> Looking at the proposed API from the initial email, I like that there's\n> both stats functionality and WAL record inspection functionality\n> (similar to pg_waldump). I like that there's the ability to pull the\n> records as raw bytea data, however I think we're also going to want an\n> ability in v1 of the patch to decode it (similar to pageinspect\n> heap_page_item_attrs, etc).\n\nI'm yet to start working on the patch. 
I will be doing it soon.\n\n> Another feature that might be interesting down the road would be the\n> ability to provide filtering of WAL records for security purposes. For\n> example, allowing a user to only dump raw WAL records for one particular\n> database, or maybe excluding WAL records that change system catalogs or\n> the like. But I probably wouldn't start here, personally.\n\n+1.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Oct 2021 10:43:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Oct 7, 2021 at 10:43 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n> > Looking at the proposed API from the initial email, I like that there's\n> > both stats functionality and WAL record inspection functionality\n> > (similar to pg_waldump). I like that there's the ability to pull the\n> > records as raw bytea data, however I think we're also going to want an\n> > ability in v1 of the patch to decode it (similar to pageinspect\n> > heap_page_item_attrs, etc).\n>\n> I'm yet to start working on the patch. I will be doing it soon.\n\nThanks all. 
Here's the v1 patch set for the new extension pg_walinspect.\nNote that I didn't include the documentation part now, I will be doing it a\nbit later.\n\nPlease feel free to review and provide your thoughts.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 18 Nov 2021 18:43:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Nov 18, 2021 at 6:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Oct 7, 2021 at 10:43 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Looking at the proposed API from the initial email, I like that there's\n> > > both stats functionality and WAL record inspection functionality\n> > > (similar to pg_waldump). I like that there's the ability to pull the\n> > > records as raw bytea data, however I think we're also going to want an\n> > > ability in v1 of the patch to decode it (similar to pageinspect\n> > > heap_page_item_attrs, etc).\n> >\n> > I'm yet to start working on the patch. I will be doing it soon.\n>\n> Thanks all. Here's the v1 patch set for the new extension pg_walinspect. Note that I didn't include the documentation part now, I will be doing it a bit later.\n>\n> Please feel free to review and provide your thoughts.\n\nThe v1 patch set was failing to compile on Windows. Here's the v2\npatch set fixing that.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 25 Nov 2021 15:49:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Nov 25, 2021 at 3:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thanks all. Here's the v1 patch set for the new extension pg_walinspect. 
Note that I didn't include the documentation part now, I will be doing it a bit later.\n> >\n> > Please feel free to review and provide your thoughts.\n>\n> The v1 patch set was failing to compile on Windows. Here's the v2\n> patch set fixing that.\n\nI forgot to specify this: the v1 patch set was failing to compile on\nWindows with errors shown at [1]. Thanks to Julien Rouhaud who\nsuggested to use PGDLLIMPORT in an off-list discussion.\n\n[1] (Link target) ->\n pg_walinspect.obj : error LNK2001: unresolved external symbol\nforkNames [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n pg_walinspect.obj : error LNK2001: unresolved external symbol\npg_comp_crc32c [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n pg_walinspect.obj : error LNK2001: unresolved external symbol\nwal_segment_size [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n pg_walinspect.obj : error LNK2001: unresolved external symbol\nRmgrTable [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n .\\Release\\pg_walinspect\\pg_walinspect.dll : fatal error LNK1120: 4\nunresolved externals [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n\n 5 Error(s)\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 25 Nov 2021 17:54:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Nov 25, 2021 at 5:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 25, 2021 at 3:49 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Thanks all. Here's the v1 patch set for the new extension pg_walinspect. Note that I didn't include the documentation part now, I will be doing it a bit later.\n> > >\n> > > Please feel free to review and provide your thoughts.\n> >\n> > The v1 patch set was failing to compile on Windows. 
Here's the v2\n> > patch set fixing that.\n>\n> I forgot to specify this: the v1 patch set was failing to compile on\n> Windows with errors shown at [1]. Thanks to Julien Rouhaud who\n> suggested to use PGDLLIMPORT in an off-list discussion.\n>\n> [1] (Link target) ->\n> pg_walinspect.obj : error LNK2001: unresolved external symbol\n> forkNames [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n> pg_walinspect.obj : error LNK2001: unresolved external symbol\n> pg_comp_crc32c [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n> pg_walinspect.obj : error LNK2001: unresolved external symbol\n> wal_segment_size [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n> pg_walinspect.obj : error LNK2001: unresolved external symbol\n> RmgrTable [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n> .\\Release\\pg_walinspect\\pg_walinspect.dll : fatal error LNK1120: 4\n> unresolved externals [C:\\Users\\bhara\\postgres\\pg_walinspect.vcxproj]\n>\n> 5 Error(s)\n\nHere's the v3 patch-set with fixes for the compiler warnings reported\nin the cf bot at\nhttps://cirrus-ci.com/task/4979131497578496?logs=gcc_warning#L506.\n\nPlease review.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 4 Jan 2022 22:01:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "So I looked at this patch and I have the same basic question as Bruce.\nDo we really want to expose every binary tool associated with Postgres\nas an extension? Obviously this is tempting for cloud provider users\nwhich is not an unreasonable argument. But it does have consequences.\n\n1) Some things like pg_waldump are running code that is not normally\nunder user control. 
This could have security issues or reliability\nissues.\n\nOn that front I'm especially concerned that pg_verify_raw_wal_record()\nfor example would let an attacker feed arbitrary hand crafted xlog\nrecords into the parser which is not normally something a user can do.\nIf they feed it something it's not expecting it might be easy to cause\na crash and server restart.\n\nThere's also a bit of concern about data retention. Generally in\nPostgres when rows are deleted there's very weak guarantees about the\ndata really being wiped. We definitely don't wipe it from disk of\ncourse. And things like pageinspect could expose it long after it's\nbeen deleted. But one might imagine after pageinspect shows it's gone\nand/or after a vacuum full the data is actually purged. But then\nsomething like pg_walinspect would make even that insufficient.\n\n2) There's no documentation. I'm guessing you hesitated to write\ndocumentation until the interface is settled but actually sometimes\nwriting documentation helps expose things in the interface that look\nstrange when you try to explain them.\n\n3) And the interface does look a bit strange. Like what's the deal\nwith pg_get_wal_record_info_2() ? I gather it's just a SRF version of\npg_get_wal_record_info() but that's a strange name. And then what's\nthe point of pg_get_wal_record_info() at all? Why wouldn't the SRF be\nsufficient even for the specific case of a single record?\n\nAnd the stats functions seem a bit out of place to me. If the SRF\nreturned the data in the right format the user should be able to do\naggregate queries to generate these stats easily enough. If anything a\nsimple SQL function to do the aggregations could be provided.\n\nNow this is starting to get into the realm of bikeshedding but... Some\nof the code is taken straight from pg_waldump and does things like:\n\n+ appendStringInfo(&rec_blk_ref, \"blkref #%u: rel %u/%u/%u fork %s blk %u\",\n\nBut that's kind of out of place for an SQL interface. 
It makes it hard\nto write queries since things like the relid, block number etc are in\nthe string. If I wanted to use these functions I would expect to be\ndoing something like \"select all the decoded records pertaining to\nblock n\".\n\nAll that said, I don't want to gatekeep based on this kind of\ncriticism. The existing code is based on pg_waldump and if we want an\nextension to expose that then that's a reasonable place to start. We\ncan work on a better format for the data later it doesn't mean we\nshouldn't start with something we have today.\n\n4) This isn't really an issue with your patch at all but why on earth\ndo we have a bitvector for WAL compression methods?! Like, what does\nit mean to have multiple compression methods set? That should just be\na separate field with values for each type of compression surely?\n\nI suppose this raises the issue of what happens if someone fixes that.\nThey'll now have to update pg_waldump *and* pg_walinspect. I don't\nthink that would actually be a lot of work but it's definitely more\nthan just one. Also, perhaps they should be in the same contrib\ndirectory so at least people won't forget there are two.\n\n\n", "msg_date": "Mon, 31 Jan 2022 16:40:09 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Jan 31, 2022 at 04:40:09PM -0500, Greg Stark wrote:\n> 4) This isn't really an issue with your patch at all but why on earth\n> do we have a bitvector for WAL compression methods?! Like, what does\n> it mean to have multiple compression methods set? 
That should just be\n> a separate field with values for each type of compression surely?\n\nI don't have an answer to your question, but the discussion was here.\n\nIn the versions of the patches I sent on Mar 15, Mar 21, May 18, May 24, Jun\n13, I avoided \"one bit per compression method\", but Michael thought this was\nsimpler.\n\nhttps://www.postgresql.org/message-id/20210622031358.GF29179@telsasoft.com\nOn Mon, Jun 21, 2021 at 10:13:58PM -0500, Justin Pryzby wrote:\n> +/* compression methods supported */\n> +#define BKPIMAGE_COMPRESS_PGLZ 0x04\n> +#define BKPIMAGE_COMPRESS_ZLIB 0x08\n> +#define BKPIMAGE_COMPRESS_LZ4 0x10\n> +#define BKPIMAGE_COMPRESS_ZSTD 0x20\n> +#define BKPIMAGE_IS_COMPRESSED(info) \\\n> + ((info & (BKPIMAGE_COMPRESS_PGLZ | BKPIMAGE_COMPRESS_ZLIB | \\\n> + BKPIMAGE_COMPRESS_LZ4 | BKPIMAGE_COMPRESS_ZSTD)) != 0)\n> \n> You encouraged saving bits here, so I'm surprised to see that your patches\n> use one bit per compression method: 2 bits to support no/pglz/lz4, 3 to add\n> zstd, and the previous patch used 4 bits to also support zlib.\n> \n> There are spare bits available for that, but now there can be an inconsistency\n> if two bits are set. Also, 2 bits could support 4 methods (including \"no\").\n\nOn Tue, Jun 22, 2021 at 12:53:46PM +0900, Michael Paquier wrote:\n> Yeah, I know. I have just finished with that to get something\n> readable for the sake of the tests. As you say, the point is moot\n> just we add one new method, anyway, as we need just one new bit.\n> And that's what I would like to do for v15 with LZ4 as the resulting\n> patch is simple. 
It would be an idea to discuss more compression\n> methods here once we hear more from users when this is released in the\n> field, re-considering at this point if more is necessary or not.\n\n\n", "msg_date": "Mon, 31 Jan 2022 16:28:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Additionally I've looked at the tests and I'm not sure but I don't\nthink this arrangement is going to work. I don't have the time to run\nCLOBBER_CACHE and CLOBBER_CACHE_ALWAYS tests but I know they run\n*really* slowly. So the test can't just do a CHECKPOINT and then trust\nthat the next few transactions will still be in the wal to decode\nlater. There could have been many more timed checkpoints in between.\n\nI think the way to do it is to create either a backup label or a\nreplication slot. Then you can inspect the lsn of the label or slot\nand decode all transactions between that point and the current point.\n\nI also think you should try to have a wider set of wal records in that\nrange to test decoding records with and without full page writes, with\nDDL records, etc.\n\nI do like that the tests don't actually have the decoded record info\nin the test though. But they can make a minimal effort to check that the\nrecords they think they're testing are actually being tested. Insert\ninto a temporary table and then run a few queries with WHERE clauses\nto test for a heap insert and a btree insert, test that the right relid\nis present, and test that a full page write is present (if full page\nwrites are enabled I guess). 
You don't need an exhaustive set of\nchecks because you're not testing that wal logging works properly,\njust that the tests aren't accidentally passing because they're not\nfinding any interesting records.\n\n\n", "msg_date": "Wed, 2 Feb 2022 12:01:12 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Feb 1, 2022 at 3:10 AM Greg Stark <stark@mit.edu> wrote:\n>\n> So I looked at this patch and I have the same basic question as Bruce.\n\nThanks a lot for the comments.\n\n> Do we really want to expose every binary tool associated with Postgres\n> as an extension? Obviously this is tempting for cloud provider users\n> which is not an unreasonable argument. But it does have consequences.\n\nPerhaps not every tool needs to be exposed, but given the advantages\nthat pg_walinspect can provide it's a good candidate to have it as a\ncore extension. Some of the advantages are - debugging, WAL analysis,\nfeeding WAL stats and info to dashboards to show customers and answer\ntheir queries, RCA etc., for educational purposes - one can understand\nthe WAL structure, stats, different record types etc. Another nice\nthing is getting raw WAL data out of the running server (of course all\nthe users can't get it only the allowed ones, currently users with\npg_monitor role, if required we can change it to some other predefined\nrole). For instance, the raw WAL data can be fed to external page\nrepair tools to apply on a raw page (one can get this from pageinspect\nextension).\n\n> 1) Some things like pg_waldump are running code that is not normally\n> under user control. This could have security issues or reliability\n> issues.\n\nI understand this and also I think the same concern applies to\npageinspect tool which exposes getting raw page data. 
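For comparison, pageinspect already exposes raw on-disk data through plain SQL. A short illustrative sketch (t1 is a stand-in table name; get_raw_page and heap_page_items are existing pageinspect functions):

```sql
-- pageinspect (an existing contrib extension) lets privileged users read raw
-- page contents via SQL, including deleted-but-not-yet-vacuumed tuples.
CREATE EXTENSION IF NOT EXISTS pageinspect;

SELECT lp, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('t1', 0));  -- line pointers of block 0 of t1
```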
The\npg_walinspect functions are currently accessible by users with the\npg_monitor role; if required, we can change this to some other\npredefined role.\n\n> On that front I'm especially concerned that pg_verify_raw_wal_record()\n> for example would let an attacker feed arbitrary hand crafted xlog\n> records into the parser which is not normally something a user can do.\n> If they feed it something it's not expecting it might be easy to cause\n> a crash and server restart.\n\nThis function does nothing (no writes) to the server; it just checks\nthe CRC of the WAL record. If one can make the server crash\nwith some input, then that is a problem with the server code\nwhich needs to be fixed. But IMO this function itself doesn't pose\nsuch a concern.\n\n> There's also a bit of concern about data retention. Generally in\n> Postgres when rows are deleted there's very weak guarantees about the\n> data really being wiped. We definitely don't wipe it from disk of\n> course. And things like pageinspect could expose it long after it's\n> been deleted. But one might imagine after pageinspect shows it's gone\n> and/or after a vacuum full the data is actually purged. But then\n> something like pg_walinspect would make even that insufficient.\n\nThe idea of pg_walinspect is to get the WAL info, data and stats out\nof a running postgres server; if the WAL isn't available, the\nfunctions will say so.\n\n> 2) There's no documentation. I'm guessing you hesitated to write\n> documentation until the interface is settled but actually sometimes\n> writing documentation helps expose things in the interface that look\n> strange when you try to explain them.\n\nI will send out the new patch set with documentation soon.\n\n> 3) And the interface does look a bit strange. Like what's the deal\n> with pg_get_wal_record_info_2() ? I gather it's just a SRF version of\n> pg_get_wal_record_info() but that's a strange name. 
And then what's\n> the point of pg_get_wal_record_info() at all? Why wouldn't the SRF be\n> sufficient even for the specific case of a single record?\n\nI agree, pg_get_wal_record_info_2 is a poor name.\npg_get_wal_record_info_2 takes a range of LSNs (start and end) to give\nthe wal info, whereas pg_get_wal_record_info just takes one LSN. Maybe\nI will change pg_get_wal_record_info_2 to pg_get_wal_record_info_range\nor pg_get_wal_records_info, or some other name if that is better. If the\nsuggestion is to overload pg_get_wal_record_info (one variant with a\nsingle LSN and another with start and end LSNs), I'm okay with that too.\nOtherwise, I can have pg_get_wal_record_info take start and end LSNs\n(end LSN defaulting to NULL) and return a set of records.\n\n> And the stats functions seem a bit out of place to me. If the SRF\n> returned the data in the right format the user should be able to do\n> aggregate queries to generate these stats easily enough. If anything a\n> simple SQL function to do the aggregations could be provided.\n>\n> Now this is starting to get into the realm of bikeshedding but... Some\n> of the code is taken straight from pg_waldump and does things like:\n>\n> + appendStringInfo(&rec_blk_ref, "blkref #%u: rel %u/%u/%u fork %s blk %u",\n>\n> But that's kind of out of place for an SQL interface. It makes it hard\n> to write queries since things like the relid, block number etc are in\n> the string. If I wanted to use these functions I would expect to be\n> doing something like "select all the decoded records pertaining to\n> block n".\n\nI will think more about this and change it in the next version of the\npatch set; perhaps I will add more columns to the functions.\n\n> All that said, I don't want to gatekeep based on this kind of\n> criticism. The existing code is based on pg_waldump and if we want an\n> extension to expose that then that's a reasonable place to start. 
We\n> can work on a better format for the data later it doesn't mean we\n> shouldn't start with something we have today.\n\nIMO, we can always extend the functions in the future, once the\npg_walinspect extension gets in with a minimal set of much-required\nbasic functions.\n\n> I suppose this raises the issue of what happens if someone fixes that.\n> They'll now have to update pg_waldump *and* pg_walinspect. I don't\n> think that would actually be a lot of work but it's definitely more\n> than just one. Also, perhaps they should be in the same contrib\n> directory so at least people won't forget there are two.\n\nCurrently, all the tools are placed in src/bin and extensions in the\ncontrib directory. I don't think we ever keep an extension in src/bin\nor vice versa. Having said that, maybe we can add comments noting that\nchanges/fixes need to be made in both pg_waldump and pg_walinspect. We\nalso have to deal with this situation in some of the existing tools\nsuch as pg_controldata.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 5 Feb 2022 20:01:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Jan 31, 2022 at 4:40 PM Greg Stark <stark@mit.edu> wrote:\n> So I looked at this patch and I have the same basic question as Bruce.\n> Do we really want to expose every binary tool associated with Postgres\n> as an extension? Obviously this is tempting for cloud provider users\n> which is not an unreasonable argument. But it does have consequences.\n>\n> 1) Some things like pg_waldump are running code that is not normally\n> under user control. This could have security issues or reliability\n> issues.\n\nFor what it's worth, I am generally in favor of having something like\nthis in PostgreSQL. I think it's wrong of us to continue assuming that\neveryone has command-line access. 
Even when that's true, it's not\nnecessarily convenient. If you choose to use a relational database,\nyou may be the sort of person who likes SQL. And if you are, you may\nwant to have the database tell you what's going on via SQL rather than\ncommand-line tools or operating system utilities. Imagine if we didn't\nhave pg_stat_activity and you had to get that information by running a\nseparate binary. Would anyone like that? Why is this case any\ndifferent?\n\nA few years ago we exposed data from pg_control via SQL and similar\nconcerns were raised - but it turns out to be pretty useful. I don't\nknow why this shouldn't be equally useful. Sure, there's some\nduplication in functionality, but it's not a huge maintenance burden\nfor the project, and people (including me) like having it available. I\nthink the same things will be true here.\n\nIf decoding WAL causes security problems, that's something we better\nfix, because WAL is constantly decoded on standbys and via logical\ndecoding on systems all over the place. I agree that we can't let\nusers supply their own hand-crafted WAL records to be decoded without\ncausing more trouble than we can handle, but if it's not safe to\ndecode the WAL the system generated than we are in a lot of trouble\nalready.\n\nI hasten to say that I'm not endorsing every detail or indeed any\ndetail of the proposed patch, and some of the concerns you mention\nlater sound well-founded to me. 
But I disagree with the idea that we\nshouldn't have both a command-line utility that roots through files on\ndisk and an SQL interface that works with a running system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 6 Feb 2022 10:45:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sun, Feb 6, 2022 at 9:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 4:40 PM Greg Stark <stark@mit.edu> wrote:\n> > So I looked at this patch and I have the same basic question as Bruce.\n> > Do we really want to expose every binary tool associated with Postgres\n> > as an extension? Obviously this is tempting for cloud provider users\n> > which is not an unreasonable argument. But it does have consequences.\n> >\n> > 1) Some things like pg_waldump are running code that is not normally\n> > under user control. This could have security issues or reliability\n> > issues.\n>\n> For what it's worth, I am generally in favor of having something like\n> this in PostgreSQL. I think it's wrong of us to continue assuming that\n> everyone has command-line access. Even when that's true, it's not\n> necessarily convenient. If you choose to use a relational database,\n> you may be the sort of person who likes SQL. And if you are, you may\n> want to have the database tell you what's going on via SQL rather than\n> command-line tools or operating system utilities. Imagine if we didn't\n> have pg_stat_activity and you had to get that information by running a\n> separate binary. Would anyone like that? Why is this case any\n> different?\n>\n> A few years ago we exposed data from pg_control via SQL and similar\n> concerns were raised - but it turns out to be pretty useful. I don't\n> know why this shouldn't be equally useful. 
Sure, there's some\n> duplication in functionality, but it's not a huge maintenance burden\n> for the project, and people (including me) like having it available. I\n> think the same things will be true here.\n>\n> If decoding WAL causes security problems, that's something we better\n> fix, because WAL is constantly decoded on standbys and via logical\n> decoding on systems all over the place. I agree that we can't let\n> users supply their own hand-crafted WAL records to be decoded without\n> causing more trouble than we can handle, but if it's not safe to\n> decode the WAL the system generated than we are in a lot of trouble\n> already.\n>\n> I hasten to say that I'm not endorsing every detail or indeed any\n> detail of the proposed patch, and some of the concerns you mention\n> later sound well-founded to me. But I disagree with the idea that we\n> shouldn't have both a command-line utility that roots through files on\n> disk and an SQL interface that works with a running system.\n\nThanks Robert for your comments.\n\n> + appendStringInfo(&rec_blk_ref, \"blkref #%u: rel %u/%u/%u fork %s blk %u\",\n>\n> But that's kind of out of place for an SQL interface. It makes it hard\n> to write queries since things like the relid, block number etc are in\n> the string. If I wanted to use these functions I would expect to be\n> doing something like \"select all the decoded records pertaining to\n> block n\".\n\nThanks Greg for your review of the patches. Since there can be\nmultiple blkref for WAL record type HEAP2 (for multi inserts\nbasically) [1], I couldn't find a better way to break it and represent\nit as a non-text column. IMO this is simpler and users can easily find\nout answers to \"how many WAL records my relation generated between\nlsn1 and lsn2 or how many WAL records of type Heap exist and so on?\",\nsee [2]. 
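Such questions reduce to plain aggregates over the returned set. For example (an illustrative sketch only; it assumes nothing beyond the function signature and the lsn, resource_manager and block_ref columns already used in the queries at [2]):

```sql
-- Per-resource-manager WAL record counts in an LSN range, plus how many of
-- those records carry full-page writes (block_ref mentions FPW for those).
SELECT resource_manager,
       count(*) AS n_records,
       count(*) FILTER (WHERE block_ref LIKE '%FPW%') AS n_fpw
FROM pg_get_wal_records_info('0/13C0A98', '0/0157A160')
GROUP BY resource_manager
ORDER BY n_records DESC;
```

A breakdown like this is roughly what a separate stats function would otherwise have to compute.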
I've also added a test case to just show this in 0002 patch.\n\nHere's the v4 patch set that has the following changes along with\nGreg's review comments addressed:\n\n1) Added documentation as 0003 patch.\n2) Removed CHECKPOINT commands from tests as it is unnecessary.\n3) Added input validation code and tests.\n4) A few more comments have been added.\n5) Currently, only superusers can create the extension, but users with\nthe pg_monitor role can use the functions.\n6) Test cases are basic yet they cover all the functions, error cases\nwith input validations, I don't think we need to add many more test\ncases as suggested upthread, but I'm open to add a few more if I miss\nany use-case.\n\nPlease review the v4 patch set further and let me know your thoughts.\n\n[1]\nrmgr: Heap2 len (rec/tot): 64/ 8256, tx: 0, lsn:\n0/014A9070, prev 0/014A8FF8, desc: VISIBLE cutoff xid 709 flags 0x01,\nblkref #0: rel 1663/12757/16384 fork vm blk 0 FPW, blkref #1: rel\n1663/12757/16384 blk 0\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn:\n0/014AB0C8, prev 0/014A9070, desc: VISIBLE cutoff xid 709 flags 0x01,\nblkref #0: rel 1663/12757/16384 fork vm blk 0, blkref #1: rel\n1663/12757/16384 blk 1\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn:\n0/014AB108, prev 0/014AB0C8, desc: VISIBLE cutoff xid 709 flags 0x01,\nblkref #0: rel 1663/12757/16384 fork vm blk 0, blkref #1: rel\n1663/12757/16384 blk 2\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn:\n0/014AB148, prev 0/014AB108, desc: VISIBLE cutoff xid 709 flags 0x01,\nblkref #0: rel 1663/12757/16384 fork vm blk 0, blkref #1: rel\n1663/12757/16384 blk 3\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn:\n0/014AB188, prev 0/014AB148, desc: VISIBLE cutoff xid 709 flags 0x01,\nblkref #0: rel 1663/12757/16384 fork vm blk 0, blkref #1: rel\n1663/12757/16384 blk 4\n\n[2]\npostgres=# select count(*) from pg_get_wal_records_info('0/13C0A98',\n'0/0157A160') where block_ref like '%16384%' and rmgr like 'Heap';\n count\n-------\n 10100\n(1 
row)\n\npostgres=# select count(*) from t1;\n count\n-------\n 10100\n(1 row)\n\npostgres=#\n\npostgres=# select count(*) from pg_get_wal_records_info('0/13C0A98',\n'0/0157A160') where block_ref like '%FPW%';\n count\n-------\n 78\n(1 row)\n\npostgres=#\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 07:56:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sun, Feb 6, 2022 at 7:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> For what it's worth, I am generally in favor of having something like\n> this in PostgreSQL. I think it's wrong of us to continue assuming that\n> everyone has command-line access. Even when that's true, it's not\n> necessarily convenient. If you choose to use a relational database,\n> you may be the sort of person who likes SQL. And if you are, you may\n> want to have the database tell you what's going on via SQL rather than\n> command-line tools or operating system utilities. Imagine if we didn't\n> have pg_stat_activity and you had to get that information by running a\n> separate binary. Would anyone like that? Why is this case any\n> different?\n\n+1. An SQL interface is significantly easier to work with. Especially\nbecause it can use the built-in LSN type, pg_lsn.\n\nI don't find the slippery slope argument convincing. There aren't that\nmany other things that are like pg_waldump, but haven't already been\nexposed via an SQL interface. Offhand, I can't think of any.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Feb 2022 08:25:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "\nOn 2/6/22 10:45, Robert Haas wrote:\n> For what it's worth, I am generally in favor of having something like\n> this in PostgreSQL. 
I think it's wrong of us to continue assuming that\n> everyone has command-line access. Even when that's true, it's not\n> necessarily convenient. If you choose to use a relational database,\n> you may be the sort of person who likes SQL.\n\n\n\nAlmost completely off topic, but this reminded me of an incident about\n30 years ago at my first gig as an SA/DBA. There was an application\nprogrammer who insisted on loading a set of values from a text file into\na temp table (it was Ingres, anyone remember that?). Why? Because he\nknew how to write \"Select * from mytable order by mycol\" but didn't know\nhow to drive the Unix sort utility at the command line. When I was\nunable to restrain myself from smiling at this he got very angry and\nyelled at me loudly.\n\nSo, yes, some people do like SQL and hate the command line.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 17:33:10 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Feb 10, 2022 at 9:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Feb 6, 2022 at 7:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > For what it's worth, I am generally in favor of having something like\n> > this in PostgreSQL. I think it's wrong of us to continue assuming that\n> > everyone has command-line access. Even when that's true, it's not\n> > necessarily convenient. If you choose to use a relational database,\n> > you may be the sort of person who likes SQL. And if you are, you may\n> > want to have the database tell you what's going on via SQL rather than\n> > command-line tools or operating system utilities. Imagine if we didn't\n> > have pg_stat_activity and you had to get that information by running a\n> > separate binary. Would anyone like that? Why is this case any\n> > different?\n>\n> +1. 
An SQL interface is significantly easier to work with. Especially\n> because it can use the built-in LSN type, pg_lsn.\n>\n> I don't find the slippery slope argument convincing. There aren't that\n> many other things that are like pg_waldump, but haven't already been\n> exposed via an SQL interface. Offhand, I can't think of any.\n\nOn Sat, Feb 12, 2022 at 4:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> Almost completely off topic, but this reminded me of an incident about\n> 30 years ago at my first gig as an SA/DBA. There was an application\n> programmer who insisted on loading a set of values from a text file into\n> a temp table (it was Ingres, anyone remember that?). Why? Because he\n> knew how to write \"Select * from mytable order by mycol\" but didn't know\n> how to drive the Unix sort utility at the command line. When I was\n> unable to restrain myself from smiling at this he got very angry and\n> yelled at me loudly.\n>\n> So, yes, some people do like SQL and hate the command line.\n\nThanks a lot for the comments. I'm looking forward to the review of\nthe latest v4 patches posted at [1].\n\n[1] https://www.postgresql.org/message-id/CALj2ACUS9%2B54QGPtUjk76dcYW-AMKp3hPe-U%2BpQo2-GpE4kjtA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 12 Feb 2022 17:03:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Here are few comments:\n\n+/*\n+ * Verify the authenticity of the given raw WAL record.\n+ */\n+Datum\n+pg_verify_raw_wal_record(PG_FUNCTION_ARGS)\n+{\n\n\nDo we really need this function? I see that whenever the record is\nread, we verify it. So could there be a scenario where any of these\nfunctions would return an invalid WAL record?\n\n--\n\nShould we add a function that returns the pointer to the first and\nprobably the last WAL record in the WAL segment? 
This would help users\nto inspect the wal records in the entire wal segment if they wish to\ndo so.\n\n--\n\n+PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n+PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n+PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n+PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n+PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n\nI think we should allow all these functions to be executed in wait and\n*nowait* mode. If a user specifies nowait mode, the function should\nreturn if no WAL data is present, rather than waiting for new WAL data\nto become available, default behaviour could be anything you like.\n\n--\n\n+Datum\n+pg_get_wal_records_info(PG_FUNCTION_ARGS)\n+{\n+#define PG_GET_WAL_RECORDS_INFO_COLS 10\n\n\nWe could probably have another variant of this function that would\nwork even if the end pointer is not specified, in which case the\ndefault end pointer would be the last WAL record in the WAL segment.\nCurrently it mandates the use of an end pointer which slightly reduces\nflexibility.\n\n--\n\n+\n+/*\n+ * Get the first valid raw WAL record lsn.\n+ */\n+Datum\n+pg_get_first_valid_wal_record_lsn(PG_FUNCTION_ARGS)\n\n\nI think this function should return a pointer to the nearest valid WAL\nrecord which can be the previous WAL record to the LSN entered by the\nuser or the next WAL record. If a user unknowingly enters an lsn that\ndoes not exist then in such cases we should probably return the lsn of\nthe previous WAL record instead of hanging or waiting for the new WAL\nrecord to arrive.\n\n--\n\nAnother important point I would like to mention here is - have we made\nan attempt to ensure that we try to share as much of code with\npg_waldump as possible so that if any changes happens in the\npg_waldump in future it gets applied here as well and additionally it\nwill also reduce the code duplication.\n\nI haven't yet looked into the code in detail. I will have a look at it\nasap. 
thanks.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Sat, Feb 12, 2022 at 5:03 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Feb 10, 2022 at 9:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Sun, Feb 6, 2022 at 7:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > For what it's worth, I am generally in favor of having something like\n> > > this in PostgreSQL. I think it's wrong of us to continue assuming that\n> > > everyone has command-line access. Even when that's true, it's not\n> > > necessarily convenient. If you choose to use a relational database,\n> > > you may be the sort of person who likes SQL. And if you are, you may\n> > > want to have the database tell you what's going on via SQL rather than\n> > > command-line tools or operating system utilities. Imagine if we didn't\n> > > have pg_stat_activity and you had to get that information by running a\n> > > separate binary. Would anyone like that? Why is this case any\n> > > different?\n> >\n> > +1. An SQL interface is significantly easier to work with. Especially\n> > because it can use the built-in LSN type, pg_lsn.\n> >\n> > I don't find the slippery slope argument convincing. There aren't that\n> > many other things that are like pg_waldump, but haven't already been\n> > exposed via an SQL interface. Offhand, I can't think of any.\n>\n> On Sat, Feb 12, 2022 at 4:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> > Almost completely off topic, but this reminded me of an incident about\n> > 30 years ago at my first gig as an SA/DBA. There was an application\n> > programmer who insisted on loading a set of values from a text file into\n> > a temp table (it was Ingres, anyone remember that?). Why? Because he\n> > knew how to write \"Select * from mytable order by mycol\" but didn't know\n> > how to drive the Unix sort utility at the command line. 
When I was\n> > unable to restrain myself from smiling at this he got very angry and\n> > yelled at me loudly.\n> >\n> > So, yes, some people do like SQL and hate the command line.\n>\n> Thanks a lot for the comments. I'm looking forward to the review of\n> the latest v4 patches posted at [1].\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACUS9%2B54QGPtUjk76dcYW-AMKp3hPe-U%2BpQo2-GpE4kjtA%40mail.gmail.com\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:31:51 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Feb 14, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Here are few comments:\n\nThanks for reviewing the patches.\n\n> +/*\n> + * Verify the authenticity of the given raw WAL record.\n> + */\n> +Datum\n> +pg_verify_raw_wal_record(PG_FUNCTION_ARGS)\n> +{\n>\n>\n> Do we really need this function? I see that whenever the record is\n> read, we verify it. So could there be a scenario where any of these\n> functions would return an invalid WAL record?\n\nYes, this function can be useful. Imagine a case where raw WAL records\nare fetched from one server using pg_get_wal_record_info and sent over\nthe network to another server (for fixing some of the corrupted data\npages or for whatever reasons), using pg_verify_raw_wal_record one can\nverify authenticity.\n\n> Should we add a function that returns the pointer to the first and\n> probably the last WAL record in the WAL segment? This would help users\n> to inspect the wal records in the entire wal segment if they wish to\n> do so.\n\nGood point. One can do this already with pg_get_wal_records_info and\npg_walfile_name_offset. 
Usually, the LSN format itself can give an\nidea about the WAL file it is in.\n\npostgres=# select lsn, pg_walfile_name_offset(lsn) from\npg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn asc\nlimit 1;\n lsn | pg_walfile_name_offset\n-----------+-------------------------------\n 0/5000038 | (000000010000000000000005,56)\n(1 row)\n\npostgres=# select lsn, pg_walfile_name_offset(lsn) from\npg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn desc\nlimit 1;\n lsn | pg_walfile_name_offset\n-----------+-------------------------------------\n 0/5FFFFC0 | (000000010000000000000005,16777152)\n(1 row)\n\nHaving said that, we can always add a function or a view (with the\nabove sort of queries) to pg_walinspect - given an LSN can give the\nvalid start record in that wal file (by following previous lsn links)\nand valid end record lsn. IMO, that's not required now, maybe later\nonce the initial version of pg_walinspect gets committed, as we\nalready have a way to achieve what we wanted here.\n\n> +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n>\n> I think we should allow all these functions to be executed in wait and\n> *nowait* mode. If a user specifies nowait mode, the function should\n> return if no WAL data is present, rather than waiting for new WAL data\n> to become available, default behaviour could be anything you like.\n\nCurrently, pg_walinspect uses read_local_xlog_page which waits in the\nwhile(1) loop if a future LSN is specified. 
As read_local_xlog_page is\nan implementation of XLogPageReadCB, which doesn't have a wait/nowait\nparameter, if we really need a wait/nowait mode behaviour, we need to\ndo extra things: either add a backend-level global wait variable (set it\nbefore XLogReadRecord; if set, read_local_xlog_page can just exit\nwithout waiting; reset it after XLogReadRecord), or add an extra\nbool wait variable to XLogReaderState and use it in\nread_local_xlog_page.\n\nAnother problem with the wait mode is: wait until when? Since we\ndon't want to wait forever when a far-future LSN is specified,\nwe could think of adding a timeout (if WAL up to the given future LSN\nhasn't been generated within the timeout, just return). As I said\nupthread, I think all of these functions can be deferred to future\npg_walinspect versions once it gets committed with the most useful\nfunctions as proposed in the v4 patch set.\n\n> +Datum\n> +pg_get_wal_records_info(PG_FUNCTION_ARGS)\n> +{\n> +#define PG_GET_WAL_RECORDS_INFO_COLS 10\n>\n>\n> We could probably have another variant of this function that would\n> work even if the end pointer is not specified, in which case the\n> default end pointer would be the last WAL record in the WAL segment.\n> Currently it mandates the use of an end pointer which slightly reduces\n> flexibility.\n\nThe last WAL record in the WAL segment may not be of much use (one can\nfigure out the last valid WAL record in a wal file as mentioned\nabove), but the WAL records info up to the server's current flush LSN\nwould be useful functionality. 
But that too, can be found\nusing something like \"select lsn, prev_lsn, resource_manager from\npg_get_wal_records_info('0/8099568', pg_current_wal_lsn());\"\n\n> +\n> +/*\n> + * Get the first valid raw WAL record lsn.\n> + */\n> +Datum\n> +pg_get_first_valid_wal_record_lsn(PG_FUNCTION_ARGS)\n>\n>\n> I think this function should return a pointer to the nearest valid WAL\n> record which can be the previous WAL record to the LSN entered by the\n> user or the next WAL record. If a user unknowingly enters an lsn that\n> does not exist then in such cases we should probably return the lsn of\n> the previous WAL record instead of hanging or waiting for the new WAL\n> record to arrive.\n\nIs it useful? If there's a strong reason, how about naming\npg_get_next_valid_wal_record_lsn returning the next valid wal record\nLSN and pg_get_previous_valid_wal_record_lsn returning the previous\nvalid wal record LSN ? If you think having two functions is too much,\nthen, how about pg_get_first_valid_wal_record_lsn returning both the\nnext valid wal record LSN and its previous wal record LSN?\n\n> Another important point I would like to mention here is - have we made\n> an attempt to ensure that we try to share as much of code with\n> pg_waldump as possible so that if any changes happens in the\n> pg_waldump in future it gets applied here as well and additionally it\n> will also reduce the code duplication.\n\nI tried, please have a look at the patch. Also, I added a note at the\nbeginning of pg_walinspect and pg_waldump to consider fixing\nissues/changing the code in both the places also.\n\n> I haven't yet looked into the code in detail. I will have a look at it\n> asap. 
thanks.\n\nThat will be great.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 16 Feb 2022 01:01:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Feb 15, 2022 at 2:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +/*\n> > + * Verify the authenticity of the given raw WAL record.\n> > + */\n> > +Datum\n> > +pg_verify_raw_wal_record(PG_FUNCTION_ARGS)\n> > +{\n> >\n> >\n> > Do we really need this function? I see that whenever the record is\n> > read, we verify it. So could there be a scenario where any of these\n> > functions would return an invalid WAL record?\n>\n> Yes, this function can be useful. Imagine a case where raw WAL records\n> are fetched from one server using pg_get_wal_record_info and sent over\n> the network to another server (for fixing some of the corrupted data\n> pages or for whatever reasons), using pg_verify_raw_wal_record one can\n> verify authenticity.\n\nAs I also said before, and so did Greg, I think giving the user a way\nto supply WAL records that we will then try to decode is never going\nto be OK. It's going to be a recipe for security bugs and crash bugs,\nand there's no compelling use case for it that I can see. 
I support\nthis patch set only to the extent that it decodes locally generated\nWAL read directly from the WAL stream.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 15:27:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Feb 16, 2022 at 1:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Feb 15, 2022 at 2:31 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > +/*\n> > > + * Verify the authenticity of the given raw WAL record.\n> > > + */\n> > > +Datum\n> > > +pg_verify_raw_wal_record(PG_FUNCTION_ARGS)\n> > > +{\n> > >\n> > >\n> > > Do we really need this function? I see that whenever the record is\n> > > read, we verify it. So could there be a scenario where any of these\n> > > functions would return an invalid WAL record?\n> >\n> > Yes, this function can be useful. Imagine a case where raw WAL records\n> > are fetched from one server using pg_get_wal_record_info and sent over\n> > the network to another server (for fixing some of the corrupted data\n> > pages or for whatever reasons), using pg_verify_raw_wal_record one can\n> > verify authenticity.\n>\n> As I also said before, and so did Greg, I think giving the user a way\n> to supply WAL records that we will then try to decode is never going\n> to be OK. It's going to be a recipe for security bugs and crash bugs,\n> and there's no compelling use case for it that I can see. I support\n> this patch set only to the extent that it decodes locally generated\n> WAL read directly from the WAL stream.\n\nAgreed, I will remove pg_verify_raw_wal_record function in the next\nversion of the patch set. 
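As an aside on the pg_walfile_name_offset() examples upthread: the LSN-to-segment-file mapping those queries rely on is plain arithmetic, so it can be checked without a running server. The sketch below is only an illustration in Python, assuming the default 16MB wal_segment_size and timeline 1, and it ignores the segment-boundary edge case the server function handles specially:

```python
WAL_SEG_SIZE = 16 * 1024 * 1024                # default wal_segment_size (16 MB)
SEGS_PER_XLOGID = 0x100000000 // WAL_SEG_SIZE  # 256 segments per "xlogid"

def parse_lsn(lsn):
    """Turn a pg_lsn string such as '0/5000038' into a 64-bit integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def walfile_name_offset(lsn, tli=1):
    """Rough Python equivalent of pg_walfile_name_offset() for a 16MB
    segment size: returns (segment file name, byte offset in segment)."""
    x = parse_lsn(lsn)
    segno = x // WAL_SEG_SIZE
    name = "%08X%08X%08X" % (tli, segno // SEGS_PER_XLOGID, segno % SEGS_PER_XLOGID)
    return name, x % WAL_SEG_SIZE
```

With the LSNs from the queries upthread, walfile_name_offset('0/5000038') gives ('000000010000000000000005', 56) and walfile_name_offset('0/5FFFFC0') gives ('000000010000000000000005', 16777152), matching the server's output.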
Thanks.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 16 Feb 2022 08:54:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Feb 16, 2022 at 1:01 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Feb 14, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Here are few comments:\n>\n> Thanks for reviewing the patches.\n>\n> > +/*\n> > + * Verify the authenticity of the given raw WAL record.\n> > + */\n> > +Datum\n> > +pg_verify_raw_wal_record(PG_FUNCTION_ARGS)\n> > +{\n> >\n> >\n> > Do we really need this function? I see that whenever the record is\n> > read, we verify it. So could there be a scenario where any of these\n> > functions would return an invalid WAL record?\n>\n> Yes, this function can be useful. Imagine a case where raw WAL records\n> are fetched from one server using pg_get_wal_record_info and sent over\n> the network to another server (for fixing some of the corrupted data\n> pages or for whatever reasons), using pg_verify_raw_wal_record one can\n> verify authenticity.\n>\n\nI don't think that's the use case of this patch. Unless there is some\nother valid reason, I would suggest you remove it.\n\n> > Should we add a function that returns the pointer to the first and\n> > probably the last WAL record in the WAL segment? This would help users\n> > to inspect the wal records in the entire wal segment if they wish to\n> > do so.\n>\n> Good point. One can do this already with pg_get_wal_records_info and\n> pg_walfile_name_offset. 
Usually, the LSN format itself can give an\n> idea about the WAL file it is in.\n>\n> postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn asc\n> limit 1;\n> lsn | pg_walfile_name_offset\n> -----------+-------------------------------\n> 0/5000038 | (000000010000000000000005,56)\n> (1 row)\n>\n> postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn desc\n> limit 1;\n> lsn | pg_walfile_name_offset\n> -----------+-------------------------------------\n> 0/5FFFFC0 | (000000010000000000000005,16777152)\n> (1 row)\n>\n\nThe workaround you are suggesting is not very user friendly and FYKI\npg_wal_records_info simply hangs at times when we specify the higher\nand lower limit of lsn in a wal file.\n\nTo make things easier for the end users I would suggest we add a\nfunction that can return a valid first and last lsn in a walfile. The\noutput of this function can be used to inspect the wal records in the\nentire wal file if they wish to do so and I am sure they will. So, it\nshould be something like this:\n\nselect first_valid_lsn, last_valid_lsn from\npg_get_first_last_valid_wal_record('wal-segment-name');\n\nAnd above function can directly be used with pg_get_wal_records_info() like\n\nselect pg_get_wal_records_info(pg_get_first_last_valid_wal_record('wal-segment'));\n\nI think this is a pretty basic ASK that we expect to be present in the\nmodule like this.\n\n> > +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> > +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> >\n> > I think we should allow all these functions to be executed in wait and\n> > *nowait* mode. 
If a user specifies nowait mode, the function should\n> > return if no WAL data is present, rather than waiting for new WAL data\n> > to become available, default behaviour could be anything you like.\n>\n> Currently, pg_walinspect uses read_local_xlog_page which waits in the\n> while(1) loop if a future LSN is specified. As read_local_xlog_page is\n> an implementation of XLogPageReadCB, which doesn't have a wait/nowait\n> parameter, if we really need a wait/nowait mode behaviour, we need to\n> do extra things(either add a backend-level global wait variable, set\n> before XLogReadRecord, if set, read_local_xlog_page can just exit\n> without waiting and reset after the XLogReadRecord or add an extra\n> bool wait variable to XLogReaderState and use it in\n> read_local_xlog_page).\n>\n\nI am not asking to do any changes in the backend code. Please check -\nhow pg_waldump does this when a user requests to stop once the endptr\nhas reached. If not for all functions at least for a few functions we\ncan do this if it is doable.\n\n>\n> > +Datum\n> > +pg_get_wal_records_info(PG_FUNCTION_ARGS)\n> > +{\n> > +#define PG_GET_WAL_RECORDS_INFO_COLS 10\n> >\n> >\n> > We could probably have another variant of this function that would\n> > work even if the end pointer is not specified, in which case the\n> > default end pointer would be the last WAL record in the WAL segment.\n> > Currently it mandates the use of an end pointer which slightly reduces\n> > flexibility.\n>\n> Last WAL record in the WAL segment may not be of much use(one can\n> figure out the last valid WAL record in a wal file as mentioned\n> above), but the WAL records info till the latest current flush LSN of\n> the server would be a useful functionality. 
But that too, can be found\n> using something like \"select lsn, prev_lsn, resource_manager from\n> pg_get_wal_records_info('0/8099568', pg_current_wal_lsn());\"\n>\n\nWhat if a user wants to inspect all the valid wal records from a\nstartptr (startlsn) and he doesn't know the endptr? Why should he/she\nbe mandated to get the endptr and supply it to this function? I don't\nthink we should force users to do that. I think this is again a very\nbasic ASK that can be done in this version itself. It is not at all\nany advanced thing that we can think of doing in the future.\n\n> > +\n> > +/*\n> > + * Get the first valid raw WAL record lsn.\n> > + */\n> > +Datum\n> > +pg_get_first_valid_wal_record_lsn(PG_FUNCTION_ARGS)\n> >\n> >\n> > I think this function should return a pointer to the nearest valid WAL\n> > record which can be the previous WAL record to the LSN entered by the\n> > user or the next WAL record. If a user unknowingly enters an lsn that\n> > does not exist then in such cases we should probably return the lsn of\n> > the previous WAL record instead of hanging or waiting for the new WAL\n> > record to arrive.\n>\n> Is it useful?\n\nIt is useful in the same way as returning the next valid wal pointer\nis. Why should a user wait for the next valid wal pointer to be\navailable instead the function should identify the previous valid wal\nrecord and return it and put an appropriate message to the user.\n\nIf there's a strong reason, how about naming\n> pg_get_next_valid_wal_record_lsn returning the next valid wal record\n> LSN and pg_get_previous_valid_wal_record_lsn returning the previous\n> valid wal record LSN ? 
If you think having two functions is too much,\n> then, how about pg_get_first_valid_wal_record_lsn returning both the\n> next valid wal record LSN and its previous wal record LSN?\n>\n\nThe latter one looks better.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 16 Feb 2022 09:04:38 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Feb 16, 2022 at 9:04 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> I don't think that's the use case of this patch. Unless there is some\n> other valid reason, I would suggest you remove it.\n\nRemoved the function pg_verify_raw_wal_record. Robert and Greg also\nvoted for removal upthread.\n\n> > > Should we add a function that returns the pointer to the first and\n> > > probably the last WAL record in the WAL segment? This would help users\n> > > to inspect the wal records in the entire wal segment if they wish to\n> > > do so.\n> >\n> > Good point. One can do this already with pg_get_wal_records_info and\n> > pg_walfile_name_offset. 
Usually, the LSN format itself can give an\n> > idea about the WAL file it is in.\n> >\n> > postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> > pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn asc\n> > limit 1;\n> > lsn | pg_walfile_name_offset\n> > -----------+-------------------------------\n> > 0/5000038 | (000000010000000000000005,56)\n> > (1 row)\n> >\n> > postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> > pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn desc\n> > limit 1;\n> > lsn | pg_walfile_name_offset\n> > -----------+-------------------------------------\n> > 0/5FFFFC0 | (000000010000000000000005,16777152)\n> > (1 row)\n> >\n>\n> The workaround you are suggesting is not very user friendly and FYKI\n> pg_wal_records_info simply hangs at times when we specify the higher\n> and lower limit of lsn in a wal file.\n>\n> To make things easier for the end users I would suggest we add a\n> function that can return a valid first and last lsn in a walfile. The\n> output of this function can be used to inspect the wal records in the\n> entire wal file if they wish to do so and I am sure they will. 
So, it\n> should be something like this:\n>\n> select first_valid_lsn, last_valid_lsn from\n> pg_get_first_last_valid_wal_record('wal-segment-name');\n>\n> And above function can directly be used with pg_get_wal_records_info() like\n>\n> select pg_get_wal_records_info(pg_get_first_last_valid_wal_record('wal-segment'));\n>\n> I think this is a pretty basic ASK that we expect to be present in the\n> module like this.\n\nAdded a new function that returns the first and last valid WAL record\nLSN of a given WAL file.\n\n> > > +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> > > +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> > > +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> > > +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> > > +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> > >\n> > > I think we should allow all these functions to be executed in wait and\n> > > *nowait* mode. If a user specifies nowait mode, the function should\n> > > return if no WAL data is present, rather than waiting for new WAL data\n> > > to become available, default behaviour could be anything you like.\n> >\n> > Currently, pg_walinspect uses read_local_xlog_page which waits in the\n> > while(1) loop if a future LSN is specified. As read_local_xlog_page is\n> > an implementation of XLogPageReadCB, which doesn't have a wait/nowait\n> > parameter, if we really need a wait/nowait mode behaviour, we need to\n> > do extra things(either add a backend-level global wait variable, set\n> > before XLogReadRecord, if set, read_local_xlog_page can just exit\n> > without waiting and reset after the XLogReadRecord or add an extra\n> > bool wait variable to XLogReaderState and use it in\n> > read_local_xlog_page).\n> >\n>\n> I am not asking to do any changes in the backend code. Please check -\n> how pg_waldump does this when a user requests to stop once the endptr\n> has reached. 
If not for all functions at least for a few functions we\n> can do this if it is doable.\n\nI've added a new function read_local_xlog_page_2 (similar to\nread_local_xlog_page but works in wait and no wait mode) and the\ncallers can specify whether to wait or not wait using private_data.\nActually, I wanted to use the private_data structure of\nread_local_xlog_page but the logical decoding already has context as\nprivate_data, that is why I had to have a new function. I know it\ncreates a bit of duplicate code, but its cleaner than using\nbackend-local variables or additional flags in XLogReaderState or\nadding wait/no-wait boolean to page_read callback. Any other\nsuggestions are welcome here.\n\nWith this, I'm able to have wait/no wait versions for any functions.\nBut for now, I'm having wait/no wait for two functions\n(pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\nsense.\n\n> > > +Datum\n> > > +pg_get_wal_records_info(PG_FUNCTION_ARGS)\n> > > +{\n> > > +#define PG_GET_WAL_RECORDS_INFO_COLS 10\n> > >\n> > >\n> > > We could probably have another variant of this function that would\n> > > work even if the end pointer is not specified, in which case the\n> > > default end pointer would be the last WAL record in the WAL segment.\n> > > Currently it mandates the use of an end pointer which slightly reduces\n> > > flexibility.\n> >\n> > Last WAL record in the WAL segment may not be of much use(one can\n> > figure out the last valid WAL record in a wal file as mentioned\n> > above), but the WAL records info till the latest current flush LSN of\n> > the server would be a useful functionality. But that too, can be found\n> > using something like \"select lsn, prev_lsn, resource_manager from\n> > pg_get_wal_records_info('0/8099568', pg_current_wal_lsn());\"\n> >\n>\n> What if a user wants to inspect all the valid wal records from a\n> startptr (startlsn) and he doesn't know the endptr? 
Why should he/she\n> be mandated to get the endptr and supply it to this function? I don't\n> think we should force users to do that. I think this is again a very\n> basic ASK that can be done in this version itself. It is not at all\n> any advanced thing that we can think of doing in the future.\n\nAgreed. Added new functions that emits wal records info/stats till the\nend of the WAL at the moment.\n\n> > > +\n> > > +/*\n> > > + * Get the first valid raw WAL record lsn.\n> > > + */\n> > > +Datum\n> > > +pg_get_first_valid_wal_record_lsn(PG_FUNCTION_ARGS)\n> > >\n> > >\n> > > I think this function should return a pointer to the nearest valid WAL\n> > > record which can be the previous WAL record to the LSN entered by the\n> > > user or the next WAL record. If a user unknowingly enters an lsn that\n> > > does not exist then in such cases we should probably return the lsn of\n> > > the previous WAL record instead of hanging or waiting for the new WAL\n> > > record to arrive.\n> >\n> > Is it useful?\n>\n> It is useful in the same way as returning the next valid wal pointer\n> is. Why should a user wait for the next valid wal pointer to be\n> available instead the function should identify the previous valid wal\n> record and return it and put an appropriate message to the user.\n>\n> If there's a strong reason, how about naming\n> > pg_get_next_valid_wal_record_lsn returning the next valid wal record\n> > LSN and pg_get_previous_valid_wal_record_lsn returning the previous\n> > valid wal record LSN ? 
If you think having two functions is too much,\n> > then, how about pg_get_first_valid_wal_record_lsn returning both the\n> > next valid wal record LSN and its previous wal record LSN?\n> >\n>\n> The latter one looks better.\n\nModified.\n\nAttaching v5 patch set, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 25 Feb 2022 16:32:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n\n+--\n+-- pg_get_wal_records_info()\n+--\n+CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n+ IN end_lsn pg_lsn,\n+ IN wait_for_wal boolean DEFAULT false,\n+ OUT lsn pg_lsn,\n\nWhat does the wait_for_wal flag mean here when one has already\nspecified the start and end lsn? AFAIU, If a user has specified a\nstart and stop LSN, it means that the user knows the extent to which\nhe/she wants to display the WAL records in which case we need to stop\nonce the end lsn has reached . So what is the meaning of wait_for_wal\nflag? Does it look sensible to have the wait_for_wal flag here? To me\nit doesn't.\n\n==\n\n+--\n+-- pg_get_wal_records_info_till_end_of_wal()\n+--\n+CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n+ OUT lsn pg_lsn,\n+ OUT prev_lsn pg_lsn,\n+ OUT xid xid,\n\nWhy is this function required? Is pg_get_wal_records_info() alone not\nenough? I think it is. See if we can make end_lsn optional in\npg_get_wal_records_info() and lets just have it alone. I think it can\ndo the job of pg_get_wal_records_info_till_end_of_wal function.\n\n==\n\n+--\n+-- pg_get_wal_stats_till_end_of_wal()\n+--\n+CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n+ OUT resource_manager text,\n+ OUT count int8,\n\nAbove comment applies to this function as well. 
Isn't pg_get_wal_stats() enough?\n\n==\n\n\n+ if (loc <= read_upto)\n+ break;\n+\n+ /* Let's not wait for WAL to be available if\nindicated */\n+ if (loc > read_upto &&\n+ state->private_data != NULL)\n+ {\n\nWhy loc > read_upto? The first if condition is (loc <= read_upto)\nfollowed by the second if condition - (loc > read_upto). Is the first\nif condition (loc <= read_upto) not enough to indicate that loc >\nread_upto?\n\n==\n\n+#define IsEndOfWALReached(state) \\\n+ (state->private_data != NULL && \\\n+ (((ReadLocalXLOGPage2Private *)\nxlogreader->private_data)->no_wait == true) && \\\n+ (((ReadLocalXLOGPage2Private *)\nxlogreader->private_data)->reached_end_of_wal == true))\n\n\nI think we should either use state or xlogreader. First line says\nstate->private_data and second line xlogreader->private_data.\n\n==\n\n+ (((ReadLocalXLOGPage2Private *)\nxlogreader->private_data)->reached_end_of_wal == true))\n+\n\nThere is a new patch coming to make the end of WAL messages less\nscary. It introduces the EOW flag in xlogreaderstate maybe we can use\nthat instead of introducing new flags in private area to represent the\nend of WAL.\n\n==\n\n+/*\n+ * XLogReaderRoutine->page_read callback for reading local xlog files\n+ *\n+ * This function is same as read_local_xlog_page except that it works in both\n+ * wait and no wait mode. The callers can specify about waiting in private_data\n+ * of XLogReaderState.\n+ */\n+int\n+read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n+ int reqLen, XLogRecPtr\ntargetRecPtr, char *cur_page)\n+{\n+ XLogRecPtr read_upto,\n\nDo we really need this function? Can't we make use of an existing WAL\nreader function - read_local_xlog_page()?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Fri, Feb 25, 2022 at 4:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Feb 16, 2022 at 9:04 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > I don't think that's the use case of this patch. 
Unless there is some\n> > other valid reason, I would suggest you remove it.\n>\n> Removed the function pg_verify_raw_wal_record. Robert and Greg also\n> voted for removal upthread.\n>\n> > > > Should we add a function that returns the pointer to the first and\n> > > > probably the last WAL record in the WAL segment? This would help users\n> > > > to inspect the wal records in the entire wal segment if they wish to\n> > > > do so.\n> > >\n> > > Good point. One can do this already with pg_get_wal_records_info and\n> > > pg_walfile_name_offset. Usually, the LSN format itself can give an\n> > > idea about the WAL file it is in.\n> > >\n> > > postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> > > pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn asc\n> > > limit 1;\n> > > lsn | pg_walfile_name_offset\n> > > -----------+-------------------------------\n> > > 0/5000038 | (000000010000000000000005,56)\n> > > (1 row)\n> > >\n> > > postgres=# select lsn, pg_walfile_name_offset(lsn) from\n> > > pg_get_wal_records_info('0/5000000', '0/5FFFFFF') order by lsn desc\n> > > limit 1;\n> > > lsn | pg_walfile_name_offset\n> > > -----------+-------------------------------------\n> > > 0/5FFFFC0 | (000000010000000000000005,16777152)\n> > > (1 row)\n> > >\n> >\n> > The workaround you are suggesting is not very user friendly and FYKI\n> > pg_wal_records_info simply hangs at times when we specify the higher\n> > and lower limit of lsn in a wal file.\n> >\n> > To make things easier for the end users I would suggest we add a\n> > function that can return a valid first and last lsn in a walfile. The\n> > output of this function can be used to inspect the wal records in the\n> > entire wal file if they wish to do so and I am sure they will. 
So, it\n> > should be something like this:\n> >\n> > select first_valid_lsn, last_valid_lsn from\n> > pg_get_first_last_valid_wal_record('wal-segment-name');\n> >\n> > And above function can directly be used with pg_get_wal_records_info() like\n> >\n> > select pg_get_wal_records_info(pg_get_first_last_valid_wal_record('wal-segment'));\n> >\n> > I think this is a pretty basic ASK that we expect to be present in the\n> > module like this.\n>\n> Added a new function that returns the first and last valid WAL record\n> LSN of a given WAL file.\n>\n> > > > +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> > > > +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> > > > +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> > > > +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> > > > +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> > > >\n> > > > I think we should allow all these functions to be executed in wait and\n> > > > *nowait* mode. If a user specifies nowait mode, the function should\n> > > > return if no WAL data is present, rather than waiting for new WAL data\n> > > > to become available, default behaviour could be anything you like.\n> > >\n> > > Currently, pg_walinspect uses read_local_xlog_page which waits in the\n> > > while(1) loop if a future LSN is specified. As read_local_xlog_page is\n> > > an implementation of XLogPageReadCB, which doesn't have a wait/nowait\n> > > parameter, if we really need a wait/nowait mode behaviour, we need to\n> > > do extra things(either add a backend-level global wait variable, set\n> > > before XLogReadRecord, if set, read_local_xlog_page can just exit\n> > > without waiting and reset after the XLogReadRecord or add an extra\n> > > bool wait variable to XLogReaderState and use it in\n> > > read_local_xlog_page).\n> > >\n> >\n> > I am not asking to do any changes in the backend code. Please check -\n> > how pg_waldump does this when a user requests to stop once the endptr\n> > has reached. 
If not for all functions at least for a few functions we\n> > can do this if it is doable.\n>\n> I've added a new function read_local_xlog_page_2 (similar to\n> read_local_xlog_page but works in wait and no wait mode) and the\n> callers can specify whether to wait or not wait using private_data.\n> Actually, I wanted to use the private_data structure of\n> read_local_xlog_page but the logical decoding already has context as\n> private_data, that is why I had to have a new function. I know it\n> creates a bit of duplicate code, but its cleaner than using\n> backend-local variables or additional flags in XLogReaderState or\n> adding wait/no-wait boolean to page_read callback. Any other\n> suggestions are welcome here.\n>\n> With this, I'm able to have wait/no wait versions for any functions.\n> But for now, I'm having wait/no wait for two functions\n> (pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\n> sense.\n>\n> > > > +Datum\n> > > > +pg_get_wal_records_info(PG_FUNCTION_ARGS)\n> > > > +{\n> > > > +#define PG_GET_WAL_RECORDS_INFO_COLS 10\n> > > >\n> > > >\n> > > > We could probably have another variant of this function that would\n> > > > work even if the end pointer is not specified, in which case the\n> > > > default end pointer would be the last WAL record in the WAL segment.\n> > > > Currently it mandates the use of an end pointer which slightly reduces\n> > > > flexibility.\n> > >\n> > > Last WAL record in the WAL segment may not be of much use(one can\n> > > figure out the last valid WAL record in a wal file as mentioned\n> > > above), but the WAL records info till the latest current flush LSN of\n> > > the server would be a useful functionality. 
But that too, can be found\n> > > using something like \"select lsn, prev_lsn, resource_manager from\n> > > pg_get_wal_records_info('0/8099568', pg_current_wal_lsn());\"\n> > >\n> >\n> > What if a user wants to inspect all the valid wal records from a\n> > startptr (startlsn) and he doesn't know the endptr? Why should he/she\n> > be mandated to get the endptr and supply it to this function? I don't\n> > think we should force users to do that. I think this is again a very\n> > basic ASK that can be done in this version itself. It is not at all\n> > any advanced thing that we can think of doing in the future.\n>\n> Agreed. Added new functions that emits wal records info/stats till the\n> end of the WAL at the moment.\n>\n> > > > +\n> > > > +/*\n> > > > + * Get the first valid raw WAL record lsn.\n> > > > + */\n> > > > +Datum\n> > > > +pg_get_first_valid_wal_record_lsn(PG_FUNCTION_ARGS)\n> > > >\n> > > >\n> > > > I think this function should return a pointer to the nearest valid WAL\n> > > > record which can be the previous WAL record to the LSN entered by the\n> > > > user or the next WAL record. If a user unknowingly enters an lsn that\n> > > > does not exist then in such cases we should probably return the lsn of\n> > > > the previous WAL record instead of hanging or waiting for the new WAL\n> > > > record to arrive.\n> > >\n> > > Is it useful?\n> >\n> > It is useful in the same way as returning the next valid wal pointer\n> > is. Why should a user wait for the next valid wal pointer to be\n> > available instead the function should identify the previous valid wal\n> > record and return it and put an appropriate message to the user.\n> >\n> > If there's a strong reason, how about naming\n> > > pg_get_next_valid_wal_record_lsn returning the next valid wal record\n> > > LSN and pg_get_previous_valid_wal_record_lsn returning the previous\n> > > valid wal record LSN ? 
If you think having two functions is too much,\n> > > then, how about pg_get_first_valid_wal_record_lsn returning both the\n> > > next valid wal record LSN and its previous wal record LSN?\n> > >\n> >\n> > The latter one looks better.\n>\n> Modified.\n>\n> Attaching v5 patch set, please review it further.\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Wed, 2 Mar 2022 20:11:52 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Mar 2, 2022 at 8:12 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n\nThanks for reviewing.\n\n> +--\n> +-- pg_get_wal_records_info()\n> +--\n> +CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n> + IN end_lsn pg_lsn,\n> + IN wait_for_wal boolean DEFAULT false,\n> + OUT lsn pg_lsn,\n>\n> What does the wait_for_wal flag mean here when one has already\n> specified the start and end lsn? AFAIU, If a user has specified a\n> start and stop LSN, it means that the user knows the extent to which\n> he/she wants to display the WAL records in which case we need to stop\n> once the end lsn has reached . So what is the meaning of wait_for_wal\n> flag? Does it look sensible to have the wait_for_wal flag here? To me\n> it doesn't.\n\nUsers can always specify a future end_lsn and set wait_for_wal to\ntrue, then the pg_get_wal_records_info/pg_get_wal_stats functions can\nwait for the WAL. IMO, this is useful. If you remember you were okay\nwith wait/nowait versions for some of the functions upthread [1]. 
I'm\nnot going to retain this behaviour for both\npg_get_wal_records_info/pg_get_wal_stats as it is similar to\npg_waldump's --follow option.\n\n> ==\n>\n> +--\n> +-- pg_get_wal_records_info_till_end_of_wal()\n> +--\n> +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> + OUT lsn pg_lsn,\n> + OUT prev_lsn pg_lsn,\n> + OUT xid xid,\n>\n> Why is this function required? Is pg_get_wal_records_info() alone not\n> enough? I think it is. See if we can make end_lsn optional in\n> pg_get_wal_records_info() and lets just have it alone. I think it can\n> do the job of pg_get_wal_records_info_till_end_of_wal function.\n>\n> ==\n>\n> +--\n> +-- pg_get_wal_stats_till_end_of_wal()\n> +--\n> +CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n> + OUT resource_manager text,\n> + OUT count int8,\n>\n> Above comment applies to this function as well. Isn't pg_get_wal_stats() enough?\n\nI'm doing the following input validations for these functions to not\ncause any issues with invalid LSN. If I were to have the default value\nfor end_lsn as 0/0, I can't perform input validations right? That is\nthe reason I'm having separate functions {pg_get_wal_records_info,\npg_get_wal_stats}_till_end_of_wal() versions.\n\n /* Validate input. */\n if (XLogRecPtrIsInvalid(start_lsn))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"invalid WAL record start LSN\")));\n\n if (XLogRecPtrIsInvalid(end_lsn))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"invalid WAL record end LSN\")));\n\n if (start_lsn >= end_lsn)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"WAL record start LSN must be less than end LSN\")));\n\n> ==\n>\n>\n> + if (loc <= read_upto)\n> + break;\n> +\n> + /* Let's not wait for WAL to be available if\n> indicated */\n> + if (loc > read_upto &&\n> + state->private_data != NULL)\n> + {\n>\n> Why loc > read_upto? 
The first if condition is (loc <= read_upto)\n> followed by the second if condition - (loc > read_upto). Is the first\n> if condition (loc <= read_upto) not enough to indicate that loc >\n> read_upto?\n\nYeah, that's unnecessary, I improved the comment there and removed loc\n> read_upto.\n\n> ==\n>\n> +#define IsEndOfWALReached(state) \\\n> + (state->private_data != NULL && \\\n> + (((ReadLocalXLOGPage2Private *)\n> xlogreader->private_data)->no_wait == true) && \\\n> + (((ReadLocalXLOGPage2Private *)\n> xlogreader->private_data)->reached_end_of_wal == true))\n>\n> I think we should either use state or xlogreader. First line says\n> state->private_data and second line xlogreader->private_data.\n\nI've changed it to use state instead of xlogreader.\n\n> ==\n>\n> + (((ReadLocalXLOGPage2Private *)\n> xlogreader->private_data)->reached_end_of_wal == true))\n> +\n>\n> There is a new patch coming to make the end of WAL messages less\n> scary. It introduces the EOW flag in xlogreaderstate maybe we can use\n> that instead of introducing new flags in private area to represent the\n> end of WAL.\n\nYeah that would be great. But we never know which one gets committed\nfirst. Until then it's not good to have dependencies on two \"on-going\"\npatches. Later, we can change.\n\n> ==\n>\n> +/*\n> + * XLogReaderRoutine->page_read callback for reading local xlog files\n> + *\n> + * This function is same as read_local_xlog_page except that it works in both\n> + * wait and no wait mode. The callers can specify about waiting in private_data\n> + * of XLogReaderState.\n> + */\n> +int\n> +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> + int reqLen, XLogRecPtr\n> targetRecPtr, char *cur_page)\n> +{\n> + XLogRecPtr read_upto,\n>\n> Do we really need this function? Can't we make use of an existing WAL\n> reader function - read_local_xlog_page()?\n\nI clearly explained the reasons upthread [2]. 
Please let me know if\nyou have more thoughts/doubts here, we can connect offlist.\n\nAttaching v6 patch set with above review comments addressed. Please\nreview it further.\n\n[1] https://www.postgresql.org/message-id/CAE9k0P%3D9SReU_613TXytZmpwL3ZRpnC5zrf96UoNCATKpK-UxQ%40mail.gmail.com\n+PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n+PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n+PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n+PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n+PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n\nI think we should allow all these functions to be executed in wait and\n*nowait* mode. If a user specifies nowait mode, the function should\nreturn if no WAL data is present, rather than waiting for new WAL data\nto become available, default behaviour could be anything you like.\n\n[2] https://www.postgresql.org/message-id/CALj2ACUtqWX95uAj2VNJED0PnixEeQ%3D0MEzpouLi%2Bzd_iTugRA%40mail.gmail.com\nI've added a new function read_local_xlog_page_2 (similar to\nread_local_xlog_page but works in wait and no wait mode) and the\ncallers can specify whether to wait or not wait using private_data.\nActually, I wanted to use the private_data structure of\nread_local_xlog_page but the logical decoding already has context as\nprivate_data, that is why I had to have a new function. I know it\ncreates a bit of duplicate code, but its cleaner than using\nbackend-local variables or additional flags in XLogReaderState or\nadding wait/no-wait boolean to page_read callback. 
Any other\nsuggestions are welcome here.\n\nWith this, I'm able to have wait/no wait versions for any functions.\nBut for now, I'm having wait/no wait for two functions\n(pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\nsense.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 2 Mar 2022 22:37:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Mar 2, 2022 at 10:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 8:12 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n>\n> Thanks for reviewing.\n>\n> > +--\n> > +-- pg_get_wal_records_info()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n> > + IN end_lsn pg_lsn,\n> > + IN wait_for_wal boolean DEFAULT false,\n> > + OUT lsn pg_lsn,\n> >\n> > What does the wait_for_wal flag mean here when one has already\n> > specified the start and end lsn? AFAIU, If a user has specified a\n> > start and stop LSN, it means that the user knows the extent to which\n> > he/she wants to display the WAL records in which case we need to stop\n> > once the end lsn has reached . So what is the meaning of wait_for_wal\n> > flag? Does it look sensible to have the wait_for_wal flag here? To me\n> > it doesn't.\n>\n> Users can always specify a future end_lsn and set wait_for_wal to\n> true, then the pg_get_wal_records_info/pg_get_wal_stats functions can\n> wait for the WAL. IMO, this is useful. If you remember you were okay\n> with wait/nowait versions for some of the functions upthread [1]. I'm\n> not going to retain this behaviour for both\n> pg_get_wal_records_info/pg_get_wal_stats as it is similar to\n> pg_waldump's --follow option.\n>\n\nIt is not at all similar to pg_waldumps behaviour. 
Please check the\nbehaviour of pg_waldump properly. Does it wait for any wal records\nwhen a user has specified a stop pointer? It doesn't and it shouldn't.\nI mean, does it even make sense to wait for the WAL when a stop pointer\nis specified? And it's quite understandable that if a user has asked\npg_walinspect to stop at a certain point, it must. Also, what if there\nare already WAL records after the stop pointer? In that case, does it\neven make sense to have a wait flag? What would be the meaning of the\nwait flag in that case?\n\nFurther, have you checked the wait_for_wal flag behaviour? Is it even working?\n\n> >\n> > +--\n> > +-- pg_get_wal_records_info_till_end_of_wal()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> > +                                        OUT lsn pg_lsn,\n> > +                                        OUT prev_lsn pg_lsn,\n> > +                                        OUT xid xid,\n> >\n> > Why is this function required? Is pg_get_wal_records_info() alone not\n> > enough? I think it is. See if we can make end_lsn optional in\n> > pg_get_wal_records_info() and lets just have it alone. I think it can\n> > do the job of pg_get_wal_records_info_till_end_of_wal function.\n> >\n> > ==\n> >\n> > +--\n> > +-- pg_get_wal_stats_till_end_of_wal()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n> > +                                   OUT resource_manager text,\n> > +                                   OUT count int8,\n> >\n> > Above comment applies to this function as well. Isn't pg_get_wal_stats() enough?\n>\n> I'm doing the following input validations for these functions to not\n> cause any issues with invalid LSN. If I were to have the default value\n> for end_lsn as 0/0, I can't perform input validations right? That is\n> the reason I'm having separate functions {pg_get_wal_records_info,\n> pg_get_wal_stats}_till_end_of_wal() versions.\n>\n
You cannot have multiple functions doing different things when\none single function can do all the job.\n\n> > ==\n> >\n> >\n> > + if (loc <= read_upto)\n> > + break;\n> > +\n> > + /* Let's not wait for WAL to be available if\n> > indicated */\n> > + if (loc > read_upto &&\n> > + state->private_data != NULL)\n> > + {\n> >\n> > Why loc > read_upto? The first if condition is (loc <= read_upto)\n> > followed by the second if condition - (loc > read_upto). Is the first\n> > if condition (loc <= read_upto) not enough to indicate that loc >\n> > read_upto?\n>\n> Yeah, that's unnecessary, I improved the comment there and removed loc\n> > read_upto.\n>\n> > ==\n> >\n> > +#define IsEndOfWALReached(state) \\\n> > + (state->private_data != NULL && \\\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->no_wait == true) && \\\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->reached_end_of_wal == true))\n> >\n> > I think we should either use state or xlogreader. First line says\n> > state->private_data and second line xlogreader->private_data.\n>\n> I've changed it to use state instead of xlogreader.\n>\n> > ==\n> >\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->reached_end_of_wal == true))\n> > +\n> >\n> > There is a new patch coming to make the end of WAL messages less\n> > scary. It introduces the EOW flag in xlogreaderstate maybe we can use\n> > that instead of introducing new flags in private area to represent the\n> > end of WAL.\n>\n> Yeah that would be great. But we never know which one gets committed\n> first. Until then it's not good to have dependencies on two \"on-going\"\n> patches. Later, we can change.\n>\n> > ==\n> >\n> > +/*\n> > + * XLogReaderRoutine->page_read callback for reading local xlog files\n> > + *\n> > + * This function is same as read_local_xlog_page except that it works in both\n> > + * wait and no wait mode. 
The callers can specify about waiting in private_data\n> > + * of XLogReaderState.\n> > + */\n> > +int\n> > +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> > + int reqLen, XLogRecPtr\n> > targetRecPtr, char *cur_page)\n> > +{\n> > + XLogRecPtr read_upto,\n> >\n> > Do we really need this function? Can't we make use of an existing WAL\n> > reader function - read_local_xlog_page()?\n>\n> I clearly explained the reasons upthread [2]. Please let me know if\n> you have more thoughts/doubts here, we can connect offlist.\n>\n> Attaching v6 patch set with above review comments addressed. Please\n> review it further.\n>\n> [1] https://www.postgresql.org/message-id/CAE9k0P%3D9SReU_613TXytZmpwL3ZRpnC5zrf96UoNCATKpK-UxQ%40mail.gmail.com\n> +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n>\n> I think we should allow all these functions to be executed in wait and\n> *nowait* mode. If a user specifies nowait mode, the function should\n> return if no WAL data is present, rather than waiting for new WAL data\n> to become available, default behaviour could be anything you like.\n>\n> [2] https://www.postgresql.org/message-id/CALj2ACUtqWX95uAj2VNJED0PnixEeQ%3D0MEzpouLi%2Bzd_iTugRA%40mail.gmail.com\n> I've added a new function read_local_xlog_page_2 (similar to\n> read_local_xlog_page but works in wait and no wait mode) and the\n> callers can specify whether to wait or not wait using private_data.\n> Actually, I wanted to use the private_data structure of\n> read_local_xlog_page but the logical decoding already has context as\n> private_data, that is why I had to have a new function. 
I know it\n> creates a bit of duplicate code, but its cleaner than using\n> backend-local variables or additional flags in XLogReaderState or\n> adding wait/no-wait boolean to page_read callback. Any other\n> suggestions are welcome here.\n>\n> With this, I'm able to have wait/no wait versions for any functions.\n> But for now, I'm having wait/no wait for two functions\n> (pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\n> sense.\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Thu, 3 Mar 2022 07:52:00 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Hi.\n\n+#ifdef FRONTEND\n+/*\n+ * Functions that are currently not needed in the backend, but are better\n+ * implemented inside xlogreader.c because of the internal facilities available\n+ * here.\n+ */\n+\n #endif\t\t\t\t\t\t\t/* FRONTEND */\n\nWhy didn't you remove the emptied #ifdef section?\n\n+int\n+read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n+\t\t\t\t\t int reqLen, XLogRecPtr targetRecPtr, char *cur_page)\n\nThe difference from the original function is that this function has one\nadditional if-block in the middle. I think we can insert the code directly in\nthe original function.\n\n+\t\t\t/*\n+\t\t\t * We are trying to read future WAL. Let's not wait for WAL to be\n+\t\t\t * available if indicated.\n+\t\t\t */\n+\t\t\tif (state->private_data != NULL)\n\nHowever, in the first place it seems to me there's no need for the\nfunction to take care of no_wait affairs.\n\nIf, for example, pg_get_wal_record_info() is called with no_wait = true, it is\nenough for the function to identify the bleeding edge of WAL first and then\nloop only up to that LSN. 
So I think there is no need for the new function, nor for any\nmodification on the original function.\n\nThe changes will reduce the footprint of the patch largely, I think.\n\nAt Wed, 2 Mar 2022 22:37:43 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Mar 2, 2022 at 8:12 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n> \n> Thanks for reviewing.\n> \n> > +--\n> > +-- pg_get_wal_records_info()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n> > +                                        IN end_lsn pg_lsn,\n> > +                                        IN wait_for_wal boolean DEFAULT false,\n> > +                                        OUT lsn pg_lsn,\n> >\n> > What does the wait_for_wal flag mean here when one has already\n> > specified the start and end lsn? AFAIU, If a user has specified a\n> > start and stop LSN, it means that the user knows the extent to which\n> > he/she wants to display the WAL records in which case we need to stop\n> > once the end lsn has reached . So what is the meaning of wait_for_wal\n> > flag? Does it look sensible to have the wait_for_wal flag here? To me\n> > it doesn't.\n> \n> Users can always specify a future end_lsn and set wait_for_wal to\n> true, then the pg_get_wal_records_info/pg_get_wal_stats functions can\n> wait for the WAL. IMO, this is useful. If you remember you were okay\n> with wait/nowait versions for some of the functions upthread [1]. I'm\n> not going to retain this behaviour for both\n> pg_get_wal_records_info/pg_get_wal_stats as it is similar to\n> pg_waldump's --follow option.\n\nI agree to this for now. However, I prefer that NULL or invalid\nend_lsn is equivalent to pg_current_wal_lsn().\n\n> > ==\n> >\n> > +--\n> > +-- pg_get_wal_records_info_till_end_of_wal()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> > +                                        OUT lsn pg_lsn,\n> > +                                        OUT prev_lsn pg_lsn,\n> > +                                        OUT xid xid,\n> >\n> > Why is this function required? 
Is pg_get_wal_records_info() alone not\n> > enough? I think it is. See if we can make end_lsn optional in\n> > pg_get_wal_records_info() and lets just have it alone. I think it can\n> > do the job of pg_get_wal_records_info_till_end_of_wal function.\n\nI rather agree to Ashutosh. This feature can be covered by\npg_get_wal_records(start_lsn, NULL, false).\n\n> > ==\n> >\n> > +--\n> > +-- pg_get_wal_stats_till_end_of_wal()\n> > +--\n> > +CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n> > + OUT resource_manager text,\n> > + OUT count int8,\n> >\n> > Above comment applies to this function as well. Isn't pg_get_wal_stats() enough?\n> \n> I'm doing the following input validations for these functions to not\n> cause any issues with invalid LSN. If I were to have the default value\n> for end_lsn as 0/0, I can't perform input validations right? That is\n> the reason I'm having separate functions {pg_get_wal_records_info,\n> pg_get_wal_stats}_till_end_of_wal() versions.\n> \n> /* Validate input. */\n> if (XLogRecPtrIsInvalid(start_lsn))\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"invalid WAL record start LSN\")));\n> \n> if (XLogRecPtrIsInvalid(end_lsn))\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"invalid WAL record end LSN\")));\n> \n> if (start_lsn >= end_lsn)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"WAL record start LSN must be less than end LSN\")));\n\nI don't think that validations are worth doing at least for the first\ntwo, as far as that value has a special meaning. 
I see it useful if\npg_get_wal_records_info() means dump the all available records for the\nmoment, or records of the last segment, page or something.\n\n> > ==\n> >\n> >\n> > + if (loc <= read_upto)\n> > + break;\n> > +\n> > + /* Let's not wait for WAL to be available if\n> > indicated */\n> > + if (loc > read_upto &&\n> > + state->private_data != NULL)\n> > + {\n> >\n> > Why loc > read_upto? The first if condition is (loc <= read_upto)\n> > followed by the second if condition - (loc > read_upto). Is the first\n> > if condition (loc <= read_upto) not enough to indicate that loc >\n> > read_upto?\n> \n> Yeah, that's unnecessary, I improved the comment there and removed loc\n> > read_upto.\n> \n> > ==\n> >\n> > +#define IsEndOfWALReached(state) \\\n> > + (state->private_data != NULL && \\\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->no_wait == true) && \\\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->reached_end_of_wal == true))\n> >\n> > I think we should either use state or xlogreader. First line says\n> > state->private_data and second line xlogreader->private_data.\n> \n> I've changed it to use state instead of xlogreader.\n> \n> > ==\n> >\n> > + (((ReadLocalXLOGPage2Private *)\n> > xlogreader->private_data)->reached_end_of_wal == true))\n> > +\n> >\n> > There is a new patch coming to make the end of WAL messages less\n> > scary. It introduces the EOW flag in xlogreaderstate maybe we can use\n> > that instead of introducing new flags in private area to represent the\n> > end of WAL.\n> \n> Yeah that would be great. But we never know which one gets committed\n> first. Until then it's not good to have dependencies on two \"on-going\"\n> patches. Later, we can change.\n> \n> > ==\n> >\n> > +/*\n> > + * XLogReaderRoutine->page_read callback for reading local xlog files\n> > + *\n> > + * This function is same as read_local_xlog_page except that it works in both\n> > + * wait and no wait mode. 
The callers can specify about waiting in private_data\n> > + * of XLogReaderState.\n> > + */\n> > +int\n> > +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> > + int reqLen, XLogRecPtr\n> > targetRecPtr, char *cur_page)\n> > +{\n> > + XLogRecPtr read_upto,\n> >\n> > Do we really need this function? Can't we make use of an existing WAL\n> > reader function - read_local_xlog_page()?\n> \n> I clearly explained the reasons upthread [2]. Please let me know if\n> you have more thoughts/doubts here, we can connect offlist.\n\n*I* also think the function is not needed, as explained above. Why do\nwe need that function while we know how far we can read WAL records\n*before* calling the function?\n\n> Attaching v6 patch set with above review comments addressed. Please\n> review it further.\n> \n> [1] https://www.postgresql.org/message-id/CAE9k0P%3D9SReU_613TXytZmpwL3ZRpnC5zrf96UoNCATKpK-UxQ%40mail.gmail.com\n> +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> \n> I think we should allow all these functions to be executed in wait and\n> *nowait* mode. 
If a user specifies nowait mode, the function should\n> return if no WAL data is present, rather than waiting for new WAL data\n> to become available, default behaviour could be anything you like.\n> \n> [2] https://www.postgresql.org/message-id/CALj2ACUtqWX95uAj2VNJED0PnixEeQ%3D0MEzpouLi%2Bzd_iTugRA%40mail.gmail.com\n> I've added a new function read_local_xlog_page_2 (similar to\n> read_local_xlog_page but works in wait and no wait mode) and the\n> callers can specify whether to wait or not wait using private_data.\n> Actually, I wanted to use the private_data structure of\n> read_local_xlog_page but the logical decoding already has context as\n> private_data, that is why I had to have a new function. I know it\n> creates a bit of duplicate code, but its cleaner than using\n> backend-local variables or additional flags in XLogReaderState or\n> adding wait/no-wait boolean to page_read callback. Any other\n> suggestions are welcome here.\n> \n> With this, I'm able to have wait/no wait versions for any functions.\n> But for now, I'm having wait/no wait for two functions\n> (pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\n> sense.\n> \n> Regards,\n> Bharath Rupireddy.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 03 Mar 2022 11:50:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "Here are a few comments.\n\n1)\n> > > ==\n> > >\n> > > +--\n> > > +-- pg_get_wal_records_info_till_end_of_wal()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> > > + OUT lsn pg_lsn,\n> > > + OUT prev_lsn pg_lsn,\n> > > + OUT xid xid,\n> > >\n> > > Why is this function required? Is pg_get_wal_records_info() alone not\n> > > enough? I think it is. 
See if we can make end_lsn optional in\n> > > pg_get_wal_records_info() and lets just have it alone. I think it can\n> > > do the job of pg_get_wal_records_info_till_end_of_wal function.\n>\n> I rather agree to Ashutosh. This feature can be covered by\n> pg_get_wal_records(start_lsn, NULL, false).\n> I don't think that validations are worth doing at least for the first\n> two, as far as that value has a special meaning. I see it useful if\n> pg_get_wal_records_info() means dump the all available records for the\n> moment, or records of the last segment, page or something.\n> *I* also think the function is not needed, as explained above. Why do\n> we need that function while we know how far we can read WAL records\n> *before* calling the function?\n\nI agree with this. The function prototype comes first and the\nvalidation can be done accordingly. I feel we can even support\n'pg_get_wal_record_info' with the same name. All 3 function's\nobjectives are the same. So it is better to use the same name\n(pg_wal_record_info) with different prototypes.\n\n2) The function 'pg_get_first_valid_wal_record_lsn' looks redundant as\nwe are getting the same information from the function\n'pg_get_first_and_last_valid_wal_record_lsn'. With this function, we\ncan easily fetch the first lsn. So IMO we should remove\n'pg_get_first_valid_wal_record_lsn'.\n\n3) The word 'get' should be removed from the function name(*_get_*) as\nall the functions of the extension are used only to get the\ninformation. 
It will also sync with xlogfuncs's naming conventions\nlike pg_current_wal_lsn, pg_walfile_name, etc.\n\n4) The function names can be modified with lesser words by retaining\nthe existing meaning.\n:s/pg_get_raw_wal_record/pg_wal_raw_record\n:s/pg_get_first_valid_wal_record_lsn/pg_wal_first_lsn\n:s/pg_get_first_and_last_valid_wal_record_lsn/pg_wal_first_and_last_lsn\n:s/pg_get_wal_record_info/pg_wal_record_info\n:s/pg_get_wal_stats/pg_wal_stats\n\n5) Even 'pg_get_wal_stats' and 'pg_get_wal_stats_till_end_of_wal' can\nbe clubbed as one function.\n\nThe above comments are trying to simplify the extension APIs and to\nmake it easy for the user to understand and use it.\n\nThanks & Regards,\nNitin Jadhav\n\n\n\n\nOn Thu, Mar 3, 2022 at 8:20 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hi.\n>\n> +#ifdef FRONTEND\n> +/*\n> + * Functions that are currently not needed in the backend, but are better\n> + * implemented inside xlogreader.c because of the internal facilities available\n> + * here.\n> + */\n> +\n> #endif /* FRONTEND */\n>\n> Why didn't you remove the emptied #ifdef section?\n>\n> +int\n> +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> + int reqLen, XLogRecPtr targetRecPtr, char *cur_page)\n>\n> The difference with the original function is this function has one\n> additional if-block amid. I think we can insert the code directly in\n> the original function.\n>\n> + /*\n> + * We are trying to read future WAL. Let's not wait for WAL to be\n> + * available if indicated.\n> + */\n> + if (state->private_data != NULL)\n>\n> However, in the first place it seems to me there's not need for the\n> function to take care of no_wait affairs.\n>\n> If, for expample, pg_get_wal_record_info() with no_wait = true, it is\n> enough that the function identifies the bleeding edge of WAL then loop\n> until the LSN. 
So I think no need for the new function, nor for any\n> modification on the origical function.\n>\n> The changes will reduce the footprint of the patch largely, I think.\n>\n> At Wed, 2 Mar 2022 22:37:43 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Wed, Mar 2, 2022 at 8:12 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n> >\n> > Thanks for reviewing.\n> >\n> > > +--\n> > > +-- pg_get_wal_records_info()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n> > > + IN end_lsn pg_lsn,\n> > > + IN wait_for_wal boolean DEFAULT false,\n> > > + OUT lsn pg_lsn,\n> > >\n> > > What does the wait_for_wal flag mean here when one has already\n> > > specified the start and end lsn? AFAIU, If a user has specified a\n> > > start and stop LSN, it means that the user knows the extent to which\n> > > he/she wants to display the WAL records in which case we need to stop\n> > > once the end lsn has reached . So what is the meaning of wait_for_wal\n> > > flag? Does it look sensible to have the wait_for_wal flag here? To me\n> > > it doesn't.\n> >\n> > Users can always specify a future end_lsn and set wait_for_wal to\n> > true, then the pg_get_wal_records_info/pg_get_wal_stats functions can\n> > wait for the WAL. IMO, this is useful. If you remember you were okay\n> > with wait/nowait versions for some of the functions upthread [1]. I'm\n> > not going to retain this behaviour for both\n> > pg_get_wal_records_info/pg_get_wal_stats as it is similar to\n> > pg_waldump's --follow option.\n>\n> I agree to this for now. 
However, I prefer that NULL or invalid\n> end_lsn is equivalent to pg_current_wal_lsn().\n>\n> > > ==\n> > >\n> > > +--\n> > > +-- pg_get_wal_records_info_till_end_of_wal()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> > > + OUT lsn pg_lsn,\n> > > + OUT prev_lsn pg_lsn,\n> > > + OUT xid xid,\n> > >\n> > > Why is this function required? Is pg_get_wal_records_info() alone not\n> > > enough? I think it is. See if we can make end_lsn optional in\n> > > pg_get_wal_records_info() and lets just have it alone. I think it can\n> > > do the job of pg_get_wal_records_info_till_end_of_wal function.\n>\n> I rather agree to Ashutosh. This feature can be covered by\n> pg_get_wal_records(start_lsn, NULL, false).\n>\n> > > ==\n> > >\n> > > +--\n> > > +-- pg_get_wal_stats_till_end_of_wal()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n> > > + OUT resource_manager text,\n> > > + OUT count int8,\n> > >\n> > > Above comment applies to this function as well. Isn't pg_get_wal_stats() enough?\n> >\n> > I'm doing the following input validations for these functions to not\n> > cause any issues with invalid LSN. If I were to have the default value\n> > for end_lsn as 0/0, I can't perform input validations right? That is\n> > the reason I'm having separate functions {pg_get_wal_records_info,\n> > pg_get_wal_stats}_till_end_of_wal() versions.\n> >\n> > /* Validate input. 
*/\n> > if (XLogRecPtrIsInvalid(start_lsn))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid WAL record start LSN\")));\n> >\n> > if (XLogRecPtrIsInvalid(end_lsn))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid WAL record end LSN\")));\n> >\n> > if (start_lsn >= end_lsn)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"WAL record start LSN must be less than end LSN\")));\n>\n> I don't think that validations are worth doing at least for the first\n> two, as far as that value has a special meaning. I see it useful if\n> pg_get_wal_records_info() means dump the all available records for the\n> moment, or records of the last segment, page or something.\n>\n> > > ==\n> > >\n> > >\n> > > + if (loc <= read_upto)\n> > > + break;\n> > > +\n> > > + /* Let's not wait for WAL to be available if\n> > > indicated */\n> > > + if (loc > read_upto &&\n> > > + state->private_data != NULL)\n> > > + {\n> > >\n> > > Why loc > read_upto? The first if condition is (loc <= read_upto)\n> > > followed by the second if condition - (loc > read_upto). Is the first\n> > > if condition (loc <= read_upto) not enough to indicate that loc >\n> > > read_upto?\n> >\n> > Yeah, that's unnecessary, I improved the comment there and removed loc\n> > > read_upto.\n> >\n> > > ==\n> > >\n> > > +#define IsEndOfWALReached(state) \\\n> > > + (state->private_data != NULL && \\\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->no_wait == true) && \\\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->reached_end_of_wal == true))\n> > >\n> > > I think we should either use state or xlogreader. 
First line says\n> > > state->private_data and second line xlogreader->private_data.\n> >\n> > I've changed it to use state instead of xlogreader.\n> >\n> > > ==\n> > >\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->reached_end_of_wal == true))\n> > > +\n> > >\n> > > There is a new patch coming to make the end of WAL messages less\n> > > scary. It introduces the EOW flag in xlogreaderstate maybe we can use\n> > > that instead of introducing new flags in private area to represent the\n> > > end of WAL.\n> >\n> > Yeah that would be great. But we never know which one gets committed\n> > first. Until then it's not good to have dependencies on two \"on-going\"\n> > patches. Later, we can change.\n> >\n> > > ==\n> > >\n> > > +/*\n> > > + * XLogReaderRoutine->page_read callback for reading local xlog files\n> > > + *\n> > > + * This function is same as read_local_xlog_page except that it works in both\n> > > + * wait and no wait mode. The callers can specify about waiting in private_data\n> > > + * of XLogReaderState.\n> > > + */\n> > > +int\n> > > +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> > > + int reqLen, XLogRecPtr\n> > > targetRecPtr, char *cur_page)\n> > > +{\n> > > + XLogRecPtr read_upto,\n> > >\n> > > Do we really need this function? Can't we make use of an existing WAL\n> > > reader function - read_local_xlog_page()?\n> >\n> > I clearly explained the reasons upthread [2]. Please let me know if\n> > you have more thoughts/doubts here, we can connect offlist.\n>\n> *I* also think the function is not needed, as explained above. Why do\n> we need that function while we know how far we can read WAL records\n> *before* calling the function?\n>\n> > Attaching v6 patch set with above review comments addressed. 
Please\n> > review it further.\n> >\n> > [1] https://www.postgresql.org/message-id/CAE9k0P%3D9SReU_613TXytZmpwL3ZRpnC5zrf96UoNCATKpK-UxQ%40mail.gmail.com\n> > +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> > +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> >\n> > I think we should allow all these functions to be executed in wait and\n> > *nowait* mode. If a user specifies nowait mode, the function should\n> > return if no WAL data is present, rather than waiting for new WAL data\n> > to become available, default behaviour could be anything you like.\n> >\n> > [2] https://www.postgresql.org/message-id/CALj2ACUtqWX95uAj2VNJED0PnixEeQ%3D0MEzpouLi%2Bzd_iTugRA%40mail.gmail.com\n> > I've added a new function read_local_xlog_page_2 (similar to\n> > read_local_xlog_page but works in wait and no wait mode) and the\n> > callers can specify whether to wait or not wait using private_data.\n> > Actually, I wanted to use the private_data structure of\n> > read_local_xlog_page but the logical decoding already has context as\n> > private_data, that is why I had to have a new function. I know it\n> > creates a bit of duplicate code, but its cleaner than using\n> > backend-local variables or additional flags in XLogReaderState or\n> > adding wait/no-wait boolean to page_read callback. 
Any other\n> > suggestions are welcome here.\n> >\n> > With this, I'm able to have wait/no wait versions for any functions.\n> > But for now, I'm having wait/no wait for two functions\n> > (pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\n> > sense.\n> >\n> > Regards,\n> > Bharath Rupireddy.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\n\n\n", "msg_date": "Thu, 3 Mar 2022 20:08:28 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "I think we should also see if we can allow end users to input timeline\ninformation with the pg_walinspect functions. This will help the end\nusers to get information about WAL records from previous timeline\nwhich can be helpful in case of restored servers.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Mar 3, 2022 at 8:20 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hi.\n>\n> +#ifdef FRONTEND\n> +/*\n> + * Functions that are currently not needed in the backend, but are better\n> + * implemented inside xlogreader.c because of the internal facilities available\n> + * here.\n> + */\n> +\n> #endif /* FRONTEND */\n>\n> Why didn't you remove the emptied #ifdef section?\n>\n> +int\n> +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> + int reqLen, XLogRecPtr targetRecPtr, char *cur_page)\n>\n> The difference with the original function is this function has one\n> additional if-block amid. I think we can insert the code directly in\n> the original function.\n>\n> + /*\n> + * We are trying to read future WAL. 
Let's not wait for WAL to be\n> + * available if indicated.\n> + */\n> + if (state->private_data != NULL)\n>\n> However, in the first place it seems to me there's no need for the\n> function to take care of no_wait affairs.\n>\n> If, for example, pg_get_wal_record_info() with no_wait = true, it is\n> enough that the function identifies the bleeding edge of WAL then loop\n> until the LSN. So I think no need for the new function, nor for any\n> modification on the original function.\n>\n> The changes will reduce the footprint of the patch largely, I think.\n>\n> At Wed, 2 Mar 2022 22:37:43 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Wed, Mar 2, 2022 at 8:12 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Some review comments on v5 patch (v5-0001-pg_walinspect.patch)\n> >\n> > Thanks for reviewing.\n> >\n> > > +--\n> > > +-- pg_get_wal_records_info()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n> > > + IN end_lsn pg_lsn,\n> > > + IN wait_for_wal boolean DEFAULT false,\n> > > + OUT lsn pg_lsn,\n> > >\n> > > What does the wait_for_wal flag mean here when one has already\n> > > specified the start and end lsn? AFAIU, if a user has specified a\n> > > start and stop LSN, it means that the user knows the extent to which\n> > > he/she wants to display the WAL records in which case we need to stop\n> > > once the end lsn has been reached. So what is the meaning of wait_for_wal\n> > > flag? Does it look sensible to have the wait_for_wal flag here? To me\n> > > it doesn't.\n> >\n> > Users can always specify a future end_lsn and set wait_for_wal to\n> > true, then the pg_get_wal_records_info/pg_get_wal_stats functions can\n> > wait for the WAL. IMO, this is useful. If you remember you were okay\n> > with wait/nowait versions for some of the functions upthread [1]. 
I'm\n> > not going to retain this behaviour for both\n> > pg_get_wal_records_info/pg_get_wal_stats as it is similar to\n> > pg_waldump's --follow option.\n>\n> I agree to this for now. However, I prefer that NULL or invalid\n> end_lsn is equivalent to pg_current_wal_lsn().\n>\n> > > ==\n> > >\n> > > +--\n> > > +-- pg_get_wal_records_info_till_end_of_wal()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_records_info_till_end_of_wal(IN start_lsn pg_lsn,\n> > > + OUT lsn pg_lsn,\n> > > + OUT prev_lsn pg_lsn,\n> > > + OUT xid xid,\n> > >\n> > > Why is this function required? Is pg_get_wal_records_info() alone not\n> > > enough? I think it is. See if we can make end_lsn optional in\n> > > pg_get_wal_records_info() and lets just have it alone. I think it can\n> > > do the job of pg_get_wal_records_info_till_end_of_wal function.\n>\n> I rather agree to Ashutosh. This feature can be covered by\n> pg_get_wal_records(start_lsn, NULL, false).\n>\n> > > ==\n> > >\n> > > +--\n> > > +-- pg_get_wal_stats_till_end_of_wal()\n> > > +--\n> > > +CREATE FUNCTION pg_get_wal_stats_till_end_of_wal(IN start_lsn pg_lsn,\n> > > + OUT resource_manager text,\n> > > + OUT count int8,\n> > >\n> > > Above comment applies to this function as well. Isn't pg_get_wal_stats() enough?\n> >\n> > I'm doing the following input validations for these functions to not\n> > cause any issues with invalid LSN. If I were to have the default value\n> > for end_lsn as 0/0, I can't perform input validations right? That is\n> > the reason I'm having separate functions {pg_get_wal_records_info,\n> > pg_get_wal_stats}_till_end_of_wal() versions.\n> >\n> > /* Validate input. 
*/\n> > if (XLogRecPtrIsInvalid(start_lsn))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid WAL record start LSN\")));\n> >\n> > if (XLogRecPtrIsInvalid(end_lsn))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid WAL record end LSN\")));\n> >\n> > if (start_lsn >= end_lsn)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"WAL record start LSN must be less than end LSN\")));\n>\n> I don't think that validations are worth doing at least for the first\n> two, as far as that value has a special meaning. I see it useful if\n> pg_get_wal_records_info() means dump the all available records for the\n> moment, or records of the last segment, page or something.\n>\n> > > ==\n> > >\n> > >\n> > > + if (loc <= read_upto)\n> > > + break;\n> > > +\n> > > + /* Let's not wait for WAL to be available if\n> > > indicated */\n> > > + if (loc > read_upto &&\n> > > + state->private_data != NULL)\n> > > + {\n> > >\n> > > Why loc > read_upto? The first if condition is (loc <= read_upto)\n> > > followed by the second if condition - (loc > read_upto). Is the first\n> > > if condition (loc <= read_upto) not enough to indicate that loc >\n> > > read_upto?\n> >\n> > Yeah, that's unnecessary, I improved the comment there and removed loc\n> > > read_upto.\n> >\n> > > ==\n> > >\n> > > +#define IsEndOfWALReached(state) \\\n> > > + (state->private_data != NULL && \\\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->no_wait == true) && \\\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->reached_end_of_wal == true))\n> > >\n> > > I think we should either use state or xlogreader. 
First line says\n> > > state->private_data and second line xlogreader->private_data.\n> >\n> > I've changed it to use state instead of xlogreader.\n> >\n> > > ==\n> > >\n> > > + (((ReadLocalXLOGPage2Private *)\n> > > xlogreader->private_data)->reached_end_of_wal == true))\n> > > +\n> > >\n> > > There is a new patch coming to make the end of WAL messages less\n> > > scary. It introduces the EOW flag in xlogreaderstate maybe we can use\n> > > that instead of introducing new flags in private area to represent the\n> > > end of WAL.\n> >\n> > Yeah that would be great. But we never know which one gets committed\n> > first. Until then it's not good to have dependencies on two \"on-going\"\n> > patches. Later, we can change.\n> >\n> > > ==\n> > >\n> > > +/*\n> > > + * XLogReaderRoutine->page_read callback for reading local xlog files\n> > > + *\n> > > + * This function is same as read_local_xlog_page except that it works in both\n> > > + * wait and no wait mode. The callers can specify about waiting in private_data\n> > > + * of XLogReaderState.\n> > > + */\n> > > +int\n> > > +read_local_xlog_page_2(XLogReaderState *state, XLogRecPtr targetPagePtr,\n> > > + int reqLen, XLogRecPtr\n> > > targetRecPtr, char *cur_page)\n> > > +{\n> > > + XLogRecPtr read_upto,\n> > >\n> > > Do we really need this function? Can't we make use of an existing WAL\n> > > reader function - read_local_xlog_page()?\n> >\n> > I clearly explained the reasons upthread [2]. Please let me know if\n> > you have more thoughts/doubts here, we can connect offlist.\n>\n> *I* also think the function is not needed, as explained above. Why do\n> we need that function while we know how far we can read WAL records\n> *before* calling the function?\n>\n> > Attaching v6 patch set with above review comments addressed. 
Please\n> > review it further.\n> >\n> > [1] https://www.postgresql.org/message-id/CAE9k0P%3D9SReU_613TXytZmpwL3ZRpnC5zrf96UoNCATKpK-UxQ%40mail.gmail.com\n> > +PG_FUNCTION_INFO_V1(pg_get_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_first_valid_wal_record_lsn);\n> > +PG_FUNCTION_INFO_V1(pg_verify_raw_wal_record);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_record_info);\n> > +PG_FUNCTION_INFO_V1(pg_get_wal_records_info);\n> >\n> > I think we should allow all these functions to be executed in wait and\n> > *nowait* mode. If a user specifies nowait mode, the function should\n> > return if no WAL data is present, rather than waiting for new WAL data\n> > to become available, default behaviour could be anything you like.\n> >\n> > [2] https://www.postgresql.org/message-id/CALj2ACUtqWX95uAj2VNJED0PnixEeQ%3D0MEzpouLi%2Bzd_iTugRA%40mail.gmail.com\n> > I've added a new function read_local_xlog_page_2 (similar to\n> > read_local_xlog_page but works in wait and no wait mode) and the\n> > callers can specify whether to wait or not wait using private_data.\n> > Actually, I wanted to use the private_data structure of\n> > read_local_xlog_page but the logical decoding already has context as\n> > private_data, that is why I had to have a new function. I know it\n> > creates a bit of duplicate code, but its cleaner than using\n> > backend-local variables or additional flags in XLogReaderState or\n> > adding wait/no-wait boolean to page_read callback. 
Any other\n> > suggestions are welcome here.\n> >\n> > With this, I'm able to have wait/no wait versions for any functions.\n> > But for now, I'm having wait/no wait for two functions\n> > (pg_get_wal_records_info and pg_get_wal_stats) for which it makes more\n> > sense.\n> >\n> > Regards,\n> > Bharath Rupireddy.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n", "msg_date": "Thu, 3 Mar 2022 20:32:19 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Feb 25, 2022 at 6:03 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Added a new function that returns the first and last valid WAL record\n> LSN of a given WAL file.\n\nSounds like fuzzy thinking to me. WAL records can cross file\nboundaries, and forgetting about that leads to all sorts of problems.\nJust give people one function that decodes a range of LSNs and call it\ngood. Why do you need anything else? If people want to get the first\nrecord that begins in a segment or the first record any portion of\nwhich is in a particular segment or the last record that begins in a\nsegment or the last record that ends in a segment or any other such\nthing, they can use a WHERE clause for that... 
and if you think they\ncan't, then that should be good cause to rethink the return value of\nthe one-and-only SRF that I think you need here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Mar 2022 11:35:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Mar 3, 2022 at 10:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Feb 25, 2022 at 6:03 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Added a new function that returns the first and last valid WAL record\n> > LSN of a given WAL file.\n>\n> Sounds like fuzzy thinking to me. WAL records can cross file\n> boundaries, and forgetting about that leads to all sorts of problems.\n> Just give people one function that decodes a range of LSNs and call it\n> good. Why do you need anything else? If people want to get the first\n> record that begins in a segment or the first record any portion of\n> which is in a particular segment or the last record that begins in a\n> segment or the last record that ends in a segment or any other such\n> thing, they can use a WHERE clause for that... and if you think they\n> can't, then that should be good cause to rethink the return value of\n> the one-and-only SRF that I think you need here.\n\nThanks Robert.\n\nThanks to others for your review comments.\n\nHere's the v7 patch set. These patches are based on the motive that\n\"keep it simple and short yet effective and useful\". With that in\nmind, I have not implemented the wait mode for any of the functions\n(as it doesn't look an effective use-case and requires adding a new\npage_read callback, instead throw error if future LSN is specified),\nalso these functions will give WAL information on the current server's\ntimeline. 
Having said that, I'm open to adding new functions in future\nonce this initial version gets in, if there's a requirement and users\nask for the new functions.\n\nPlease review the v7 patch set and provide your thoughts.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 4 Mar 2022 15:53:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Thanks Bharath for working on all my review comments. I took a quick\nlook at the new version of the patch (v7-pg_walinspect.patch) and this\nversion looks a lot better. I'll do some detailed review later (maybe\nnext week or so) and let you know my further comments, if any.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Fri, Mar 4, 2022 at 3:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 3, 2022 at 10:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Feb 25, 2022 at 6:03 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Added a new function that returns the first and last valid WAL record\n> > > LSN of a given WAL file.\n> >\n> > Sounds like fuzzy thinking to me. WAL records can cross file\n> > boundaries, and forgetting about that leads to all sorts of problems.\n> > Just give people one function that decodes a range of LSNs and call it\n> > good. Why do you need anything else? If people want to get the first\n> > record that begins in a segment or the first record any portion of\n> > which is in a particular segment or the last record that begins in a\n> > segment or the last record that ends in a segment or any other such\n> > thing, they can use a WHERE clause for that... 
and if you think they\n> > can't, then that should be good cause to rethink the return value of\n> > the one-and-only SRF that I think you need here.\n>\n> Thanks Robert.\n>\n> Thanks to others for your review comments.\n>\n> Here's the v7 patch set. These patches are based on the motive that\n> \"keep it simple and short yet effective and useful\". With that in\n> mind, I have not implemented the wait mode for any of the functions\n> (as it doesn't look an effective use-case and requires adding a new\n> page_read callback, instead throw error if future LSN is specified),\n> also these functions will give WAL information on the current server's\n> timeline. Having said that, I'm open to adding new functions in future\n> once this initial version gets in, if there's a requirement and users\n> ask for the new functions.\n>\n> Please review the v7 patch set and provide your thoughts.\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Fri, 4 Mar 2022 17:55:08 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, 2022-03-02 at 22:37 +0530, Bharath Rupireddy wrote:\n> \n> Attaching v6 patch set with above review comments addressed. Please\n> review it further.\n\n* Don't issue WARNINGs or other messages for ordinary situations, like\nwhen pg_get_wal_records_info() hits the end of WAL.\n\n* It feels like the APIs that allow waiting for the end of WAL are\nslightly off. Can't you just do pg_get_wal_records_info(start_lsn,\nleast(pg_current_wal_flush_lsn(), end_lsn)) if you want the non-waiting \nbehavior? Try to make the API more orthogonal, where a few basic\nfunctions can be combined to give you everything you need, rather than\nspecifying extra parameters and issuing WARNINGs. I \n\n* In the docs, include some example output. 
I don't see any output in\nthe tests, which makes sense because it's mostly non-deterministic, but\nit would be helpful to see sample output of at least\npg_get_wal_records_info().\n\n* Is pg_get_wal_stats() even necessary, or can you get the same\ninformation with a query over pg_get_wal_records_info()? For instance,\nif you want to group by transaction ID rather than rmgr, then\npg_get_wal_stats() is useless.\n\n* Would be nice to have a pg_wal_file_is_valid() or similar, which\nwould test that it exists, and the header matches the filename (e.g. if\nit was recycled but not used, that would count as invalid). I think\npg_get_first_valid_wal_record_lsn() would make some cases look invalid\neven if the file is valid -- for example, if a wal record spans many\nwal segments, the segments might look invalid because they contain no\ncomplete records, but the file itself is still valid and contains valid\nwal data.\n\n* Is there a reason you didn't include the timeline ID in\npg_get_wal_records_info()?\n\n* Can we mark this extension 'trusted'? I'm not 100% clear on the\nstandards for that marker, but it seems reasonable for a database owner\nwith the right privileges might want to install it.\n\n* pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\nfunction should require pg_read_server_files? Or at least\npg_read_all_data?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 10 Mar 2022 00:22:05 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2022-03-02 at 22:37 +0530, Bharath Rupireddy wrote:\n> >\n> > Attaching v6 patch set with above review comments addressed. Please\n> > review it further.\n\nThanks Jeff for reviewing it. 
I've posted the latest v7 patch-set\nupthread [1] which is having more simple-yet-useful-and-effective\nfunctions.\n\n> * Don't issue WARNINGs or other messages for ordinary situations, like\n> when pg_get_wal_records_info() hits the end of WAL.\n\nv7 patch-set [1] has no warnings, but the functions will error out if\nfuture LSN is specified.\n\n> * It feels like the APIs that allow waiting for the end of WAL are\n> slightly off. Can't you just do pg_get_wal_records_info(start_lsn,\n> least(pg_current_wal_flush_lsn(), end_lsn)) if you want the non-waiting\n> behavior? Try to make the API more orthogonal, where a few basic\n> functions can be combined to give you everything you need, rather than\n> specifying extra parameters and issuing WARNINGs. I\n\nv7 patch-set [1] onwards waiting mode has been removed for all of the\nfunctions, again to keep things simple-yet-useful-and-effective.\nHowever, we can always add new pg_walinspect functions that wait for\nfuture WAL in the next versions once basic stuff gets committed and if\nmany users ask for it.\n\n> * In the docs, include some example output. I don't see any output in\n> the tests, which makes sense because it's mostly non-deterministic, but\n> it would be helpful to see sample output of at least\n> pg_get_wal_records_info().\n\n+1. Added for pg_get_wal_records_info and pg_get_wal_stats.\n\n> * Is pg_get_wal_stats() even necessary, or can you get the same\n> information with a query over pg_get_wal_records_info()? For instance,\n> if you want to group by transaction ID rather than rmgr, then\n> pg_get_wal_stats() is useless.\n\nYes, you are right pg_get_wal_stats provides WAL stats per resource\nmanager which is similar to pg_waldump with --start, --end and --stats\noption. It provides more information than pg_get_wal_records_info and\nis a good way of getting stats than adding more columns to\npg_get_wal_records_info, calculating percentage in sql and having\ngroup by clause. 
IMO, pg_get_wal_stats is more readable and useful.\n\n> * Would be nice to have a pg_wal_file_is_valid() or similar, which\n> would test that it exists, and the header matches the filename (e.g. if\n> it was recycled but not used, that would count as invalid). I think\n> pg_get_first_valid_wal_record_lsn() would make some cases look invalid\n> even if the file is valid -- for example, if a wal record spans many\n> wal segments, the segments might look invalid because they contain no\n> complete records, but the file itself is still valid and contains valid\n> wal data.\n\nActually I haven't tried testing a single WAL record spanning many WAL\nfiles yet(I'm happy to try it if someone suggests such a use-case). In\nthat case too I assume pg_get_first_valid_wal_record_lsn() shouldn't\nhave a problem because it just gives the next valid LSN and it's\nprevious LSN using existing WAL reader API XLogFindNextRecord(). It\nopens up the WAL file segments using (some dots to connect -\npage_read/read_local_xlog_page, WALRead,\nsegment_open/wal_segment_open). Thoughts?\n\nI don't think it's necessary to have a function pg_wal_file_is_valid()\nthat given a WAL file name as input checks whether a WAL file exists\nor not, probably not in the core (xlogfuncs.c) too. These kinds of\nfunctions can open up challenges in terms of user input validation and\nmay cause unnecessary problems, please see some related discussion\n[2].\n\n> * Is there a reason you didn't include the timeline ID in\n> pg_get_wal_records_info()?\n\nI'm right now allowing the functions to read WAL from the current\nserver's timeline which I have mentioned in the docs. The server's\ncurrent timeline is available via pg_control_checkpoint()'s\ntimeline_id. So, having timeline_id as a column doesn't make sense.\nAgain this is to keep things simple-yet-useful-and-effective. 
However,\nwe can add new pg_walinspect functions to read WAL from historic as\nwell as current timelines in the next versions once basic stuff gets\ncommitted and if many users ask for it.\n\n+ <para>\n+ All the functions of this module will provide the WAL information using the\n+ current server's timeline ID.\n+ </para>\n\n> * Can we mark this extension 'trusted'? I'm not 100% clear on the\n> standards for that marker, but it seems reasonable for a database owner\n> with the right privileges might want to install it.\n\n'trusted' extensions concept is added by commit 50fc694 [3]. Since\npg_walinspect deals with WAL, we strictly want to control who creates\nand can execute functions exposed by it, so I don't know if 'trusted'\nis a good idea here. Also, pageinspect isn't a 'trusted' extension.\n\n> * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> function should require pg_read_server_files? Or at least\n> pg_read_all_data?\n\npg_read_all_data may not be the right choice, but pg_read_server_files\nis. However, does it sound good if some functions are allowed to be\nexecuted by users with a pg_monitor role and others\npg_get_raw_wal_record by users with pg_read_server_files? Since the\nextension itself can be created by superusers, isn't the\npg_get_raw_wal_record sort of safe with pg_monitor itself?\n\n If hackers don't agree, I'm happy to grant execution on\npg_get_raw_wal_record() to the pg_read_server_files role.\n\nAttaching the v8 patch-set resolving above comments and some tests for\nchecking function permissions. 
Please review it further.\n\n[1] https://www.postgresql.org/message-id/CALj2ACWtToUQ5hCCBJP%2BmKeVUcN-g7cMb9XvhAcicPxUDsdcKg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmobYrTgMEF0SV%2ByDYyCCh44DAGjZVs7BYGrD8xD3vwNjHA%40mail.gmail.com\n[3] commit 50fc694e43742ce3d04a5e9f708432cb022c5f0d\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Jan 29 18:42:43 2020 -0500\n\n Invent \"trusted\" extensions, and remove the pg_pltemplate catalog.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Mar 2022 22:15:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Mar 10, 2022 at 3:22 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> * Can we mark this extension 'trusted'? I'm not 100% clear on the\n> standards for that marker, but it seems reasonable for a database owner\n> with the right privileges might want to install it.\n\nI'm not clear on the standard either, exactly, but might not that\nallow the database owner to get a peek at what's happening in other\ndatabases?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 15:00:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 3:22 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > * Can we mark this extension 'trusted'? 
I'm not 100% clear on the\n> > standards for that marker, but it seems reasonable for a database owner\n> > with the right privileges might want to install it.\n> \n> I'm not clear on the standard either, exactly, but might not that\n> allow the database owner to get a peek at what's happening in other\n> databases?\n\nThe standard is basically that all of the functions it brings are\nwritten to enforce the PG privilege system and you aren't able to use\nthe extension to bypass those privileges. In some cases that means that\nthe C-language functions installed have if(!superuser) ereport() calls\nthroughout them- that's a fine answer, but it's perhaps not very helpful\nto mark those as trusted. In other cases, the C-language functions\ninstalled don't directly provide access to data, such as the PostGIS\nfunctions.\n\nI've not looked back on this thread, but I'd expect pg_walinspect to\nneed those superuser checks and with those it *could* be marked as\ntrusted, but that again brings into question how useful it is to mark it\nthusly.\n\nIn an ideal world, we might have a pg_readwal predefined role which\nallows a role which was GRANT'd that role to be able to read WAL\ntraffic, and then the pg_walinspect extension could check that the\ncalling role has that predefined role, and other functions and\nextensions could also check that rather than any existing superuser\nchecks. A cloud provider or such could then include in their setup of a\nnew instance something like:\n\nGRANT pg_readwal TO admin_user WITH ADMIN OPTION;\n\nPresuming that there isn't anything that ends up in the WAL that's an\nissue for the admin_user to have access to.\n\nI certainly don't think we should allow either database owners or\nregular users on a system the ability to access the WAL traffic of the\nentire system. 
More forcefully- we should *not* be throwing more access\nrights towards $owners in general and should be thinking about how we\ncan allow admins, providers, whomever, the ability to control what\nrights users are given. If they're all lumped under 'owner' then\nthere's no way for people to provide granular access to just those\nthings they wish and intend to.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Mar 2022 15:54:24 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "At Thu, 10 Mar 2022 22:15:42 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Wed, 2022-03-02 at 22:37 +0530, Bharath Rupireddy wrote:\n> > >\n> > > Attaching v6 patch set with above review comments addressed. Please\n> > > review it further.\n> \n> Thanks Jeff for reviewing it. I've posted the latest v7 patch-set\n> upthread [1] which is having more simple-yet-useful-and-effective\n> functions.\n> \n> > * Don't issue WARNINGs or other messages for ordinary situations, like\n> > when pg_get_wal_records_info() hits the end of WAL.\n> \n> v7 patch-set [1] has no warnings, but the functions will error out if\n> future LSN is specified.\n> \n> > * It feels like the APIs that allow waiting for the end of WAL are\n> > slightly off. Can't you just do pg_get_wal_records_info(start_lsn,\n> > least(pg_current_wal_flush_lsn(), end_lsn)) if you want the non-waiting\n> > behavior? Try to make the API more orthogonal, where a few basic\n> > functions can be combined to give you everything you need, rather than\n> > specifying extra parameters and issuing WARNINGs. 
I\n> \n> v7 patch-set [1] onwards waiting mode has been removed for all of the\n> functions, again to keep things simple-yet-useful-and-effective.\n> However, we can always add new pg_walinspect functions that wait for\n> future WAL in the next versions once basic stuff gets committed and if\n> many users ask for it.\n> \n> > * In the docs, include some example output. I don't see any output in\n> > the tests, which makes sense because it's mostly non-deterministic, but\n> > it would be helpful to see sample output of at least\n> > pg_get_wal_records_info().\n> \n> +1. Added for pg_get_wal_records_info and pg_get_wal_stats.\n> \n> > * Is pg_get_wal_stats() even necessary, or can you get the same\n> > information with a query over pg_get_wal_records_info()? For instance,\n> > if you want to group by transaction ID rather than rmgr, then\n> > pg_get_wal_stats() is useless.\n> \n> Yes, you are right pg_get_wal_stats provides WAL stats per resource\n> manager which is similar to pg_waldump with --start, --end and --stats\n> option. It provides more information than pg_get_wal_records_info and\n> is a good way of getting stats than adding more columns to\n> pg_get_wal_records_info, calculating percentage in sql and having\n> group by clause. IMO, pg_get_wal_stats is more readable and useful.\n> \n> > * Would be nice to have a pg_wal_file_is_valid() or similar, which\n> > would test that it exists, and the header matches the filename (e.g. if\n> > it was recycled but not used, that would count as invalid). I think\n> > pg_get_first_valid_wal_record_lsn() would make some cases look invalid\n> > even if the file is valid -- for example, if a wal record spans many\n> > wal segments, the segments might look invalid because they contain no\n> > complete records, but the file itself is still valid and contains valid\n> > wal data.\n> \n> Actually I haven't tried testing a single WAL record spanning many WAL\n> files yet(I'm happy to try it if someone suggests such a use-case). 
In\n> that case too I assume pg_get_first_valid_wal_record_lsn() shouldn't\n> have a problem because it just gives the next valid LSN and it's\n> previous LSN using existing WAL reader API XLogFindNextRecord(). It\n> opens up the WAL file segments using (some dots to connect -\n> page_read/read_local_xlog_page, WALRead,\n> segment_open/wal_segment_open). Thoughts?\n> \n> I don't think it's necessary to have a function pg_wal_file_is_valid()\n> that given a WAL file name as input checks whether a WAL file exists\n> or not, probably not in the core (xlogfuncs.c) too. These kinds of\n> functions can open up challenges in terms of user input validation and\n> may cause unnecessary problems, please see some related discussion\n> [2].\n> \n> > * Is there a reason you didn't include the timeline ID in\n> > pg_get_wal_records_info()?\n> \n> I'm right now allowing the functions to read WAL from the current\n> server's timeline which I have mentioned in the docs. The server's\n> current timeline is available via pg_control_checkpoint()'s\n> timeline_id. So, having timeline_id as a column doesn't make sense.\n> Again this is to keep things simple-yet-useful-and-effective. However,\n> we can add new pg_walinspect functions to read WAL from historic as\n> well as current timelines in the next versions once basic stuff gets\n> committed and if many users ask for it.\n> \n> + <para>\n> + All the functions of this module will provide the WAL information using the\n> + current server's timeline ID.\n> + </para>\n> \n> > * Can we mark this extension 'trusted'? I'm not 100% clear on the\n> > standards for that marker, but it seems reasonable for a database owner\n> > with the right privileges might want to install it.\n> \n> 'trusted' extensions concept is added by commit 50fc694 [3]. Since\n> pg_walinspect deals with WAL, we strictly want to control who creates\n> and can execute functions exposed by it, so I don't know if 'trusted'\n> is a good idea here. 
Also, pageinspect isn't a 'trusted' extension.\n> \n> > * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> > function should require pg_read_server_files? Or at least\n> > pg_read_all_data?\n> \n> pg_read_all_data may not be the right choice, but pg_read_server_files\n> is. However, does it sound good if some functions are allowed to be\n> executed by users with a pg_monitor role and others\n> pg_get_raw_wal_record by users with pg_read_server_files? Since the\n> extension itself can be created by superusers, isn't the\n> pg_get_raw_wal_record sort of safe with pg_mointor itself?\n> \n> If hackers don't agree, I'm happy to grant execution on\n> pg_get_raw_wal_record() to the pg_read_server_files role.\n> \n> Attaching the v8 patch-set resolving above comments and some tests for\n> checking function permissions. Please review it further.\n> \n> [1] https://www.postgresql.org/message-id/CALj2ACWtToUQ5hCCBJP%2BmKeVUcN-g7cMb9XvhAcicPxUDsdcKg%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CA%2BTgmobYrTgMEF0SV%2ByDYyCCh44DAGjZVs7BYGrD8xD3vwNjHA%40mail.gmail.com\n> [3] commit 50fc694e43742ce3d04a5e9f708432cb022c5f0d\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Jan 29 18:42:43 2020 -0500\n> \n> Invent \"trusted\" extensions, and remove the pg_pltemplate catalog.\n\nI played with this a bit, and would like to share some thoughts on it.\n\nIt seems to me too rigorous that pg_get_wal_records_info/stats()\nreject future LSNs as end-LSN and I think WARNING or INFO and stop at\nthe real end-of-WAL is more kind to users. I think the same with the\nrestriction that start and end LSN are required to be different.\n\nThe definition of end-lsn is fuzzy here. If I fed a future LSN to the\nfunctions, they tell me the beginning of the current insertion point\nin error message. On the other hand they don't accept the same\nvalue as end-LSN. 
I think it is right that they tell the current\ninsertion point and they should take the end-LSN as the LSN to stop\nreading.\n\nI think pg_get_wal_stats() is worth having but I think it should be\nimplemented in SQL. Currently pg_get_wal_records_info() doesn't tell\nabout FPI since pg_waldump doesn't but it is internally collected (of\ncourse!) and easily revealed. If we do that, the\npg_get_wal_records_stats() would be reduced to the following SQL\nstatement\n\nSELECT resource_manager resmgr,\n count(*) AS N,\n\t (count(*) * 100 / sum(count(*)) OVER tot)::numeric(5,2) AS \"%N\",\n\t sum(total_length) AS \"combined size\",\n\t (sum(total_length) * 100 / sum(sum(total_length)) OVER tot)::numeric(5,2) AS \"%combined size\",\n\t sum(fpi_len) AS fpilen,\n\t (sum(fpi_len) * 100 / sum(sum(fpi_len)) OVER tot)::numeric(5,2) AS \"%fpilen\"\n\t FROM pg_get_wal_records_info('0/1000000', '0/175DD7f')\n \t GROUP by resource_manager\n\t WINDOW tot AS ()\n\t ORDER BY \"combined size\" desc;\n\nThe only difference with pg_waldump is that the statement above doesn't\nshow lines for the resource managers that aren't contained in the\nresult of pg_get_wal_records_info(). But I don't think that matters.\n\n\nSometimes the field description has very long (28kb long) content. It\nmakes the result output almost unreadable and I had a bit of a hard time\nstruggling with the output full of '-'s. 
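(For readability, the long fields can also be truncated on the SQL side rather than by a built-in limit; a sketch against the patch's pg_get_wal_records_info(), with an illustrative LSN range and cut-off:

```sql
-- Keep only the first 80 characters of the potentially huge text columns
SELECT lsn, xid, resource_manager,
       left(description, 80) AS description,
       left(block_ref, 80)   AS block_ref
  FROM pg_get_wal_records_info('0/1000000', '0/175DD7F');
```
)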
I would like to have a default\nlimit on the length of such fields that can be long but I'm not sure\nwe want that.\n\n\nThe difference between pg_get_wal_record_info and _records_ other than\nthe number of argument is the former accepts incorrect LSNs.\n\nThe following works,\n pg_get_wal_record_info('0/1000000');\n pg_get_wal_records_info('0/1000000');\n\nbut this doesn't\n pg_get_wal_records_info('0/1000000', '0/1000000');\n> ERROR: WAL start LSN must be less than end LSN\n\nBut the following works\n pg_get_wal_records_info('0/1000000', '0/1000029');\n> 0/1000028 | 0/0 | 0\n\nSo I think we can consolidate the two functions as:\n\n- pg_get_wal_records_info('0/1000000');\n\n (current behavior) finds the first record and shows all records\n thereafter.\n\n- pg_get_wal_records_info('0/1000000', '0/1000000');\n\n finds the first record since the start lsn and shows it.\n\n- pg_get_wal_records_info('0/1000000', '0/1000030');\n\n finds the first record since the start lsn then shows records up to\n the end-lsn.\n\n\nAnd about pg_get_raw_wal_record(). I don't see any use-case of the\nfunction alone on SQL interface. 
Even if we need to inspect broken\nWAL files, it needs profound knowledge of WAL format and tools that\ndon't work on SQL interface.\n\nHowever, like pageinspect, if we separate the WAL-record fetching and\nparsing it could be thought of as useful.\n\npg_get_wal_records_info would look like:\n\nSELECT * FROM pg_walinspect_parse(raw)\n FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn));\n\nAnd pg_get_wal_stats would look like:\n\nSELECT * FROM pg_walinspect_stat(pg_walinspect_parse(raw))\n FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn));\n\nRegards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:38:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "Sorry, some minor non-syntactical corrections.\n\nAt Fri, 11 Mar 2022 11:38:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I played with this a bit, and would like to share some thoughts on it.\n> \n> It seems to me too rigorous that pg_get_wal_records_info/stats()\n> reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> the real end-of-WAL is more kind to users. I think the same with the\n> restriction that start and end LSN are required to be different.\n> \n> The definition of end-lsn is fuzzy here. If I fed a future LSN to the\n> functions, they tell me the beginning of the current insertion point\n> in error message. On the other hand they don't accept the same\n> value as end-LSN. I think it is right that they tell the current\n> insertion point and they should take the end-LSN as the LSN to stop\n> reading.\n> \n> I think pg_get_wal_stats() is worth to have but I think it should be\n> implemented in SQL. Currently pg_get_wal_records_info() doesn't tell\n> about FPI since pg_waldump doesn't but it is internally collected (of\n> course!) 
and easily revealed. If we do that, the\n> pg_get_wal_records_stats() would be reduced to the following SQL\n> statement\n> \n> SELECT resource_manager resmgr,\n> count(*) AS N,\n> \t (count(*) * 100 / sum(count(*)) OVER tot)::numeric(5,2) AS \"%N\",\n> \t sum(total_length) AS \"combined size\",\n> \t (sum(total_length) * 100 / sum(sum(total_length)) OVER tot)::numeric(5,2) AS \"%combined size\",\n> \t sum(fpi_len) AS fpilen,\n> \t (sum(fpi_len) * 100 / sum(sum(fpi_len)) OVER tot)::numeric(5,2) AS \"%fpilen\"\n> \t FROM pg_get_wal_records_info('0/1000000', '0/175DD7f')\n> \t GROUP by resource_manager\n> \t WINDOW tot AS ()\n> \t ORDER BY \"combined size\" desc;\n> \n> The only difference with pg_waldump is the statement above doesn't\n> show lines for the resource managers that don't contained in the\n> result of pg_get_wal_records_info(). But I don't think that matters.\n> \n> \n> Sometimes the field description has very long (28kb long) content. It\n> makes the result output almost unreadable and I had a bit hard time\n> struggling with the output full of '-'s. I would like have a default\n> limit on the length of such fields that can be long but I'm not sure\n> we want that.\n> \n> \n- The difference between pg_get_wal_record_info and _records_ other than\n- the number of argument is the former accepts incorrect LSNs.\n\nThe discussion is somewhat confused after some twists and turns. It\nshould be something like the following.\n\npg_get_wal_record_info and pg_get_wal_records_info are almost the same\nsince the latter can show a single record. However it is a bit\nannoying to do that. Other than not accepting the same LSNs for\nstart and end, it doesn't show a record when there's no record in the\nspecified LSN range. 
But I don't think there's no usefulness of the\nbehavior.\n\nThe following works,\n pg_get_wal_record_info('0/1000000');\n pg_get_wal_records_info('0/1000000');\n\nbut this doesn't\n pg_get_wal_records_info('0/1000000', '0/1000000');\n> ERROR: WAL start LSN must be less than end LSN\n\nAnd the following shows no records.\n pg_get_wal_records_info('0/1000000', '0/1000001');\n pg_get_wal_records_info('0/1000000', '0/1000028');\n\nBut the following works\n pg_get_wal_records_info('0/1000000', '0/1000029');\n> 0/1000028 | 0/0 | 0\n\n\n\n> So I think we can consolidate the two functions as:\n> \n> - pg_get_wal_records_info('0/1000000');\n> \n> (current behavior) find the first record and show all records\n> thereafter.\n> \n> - pg_get_wal_records_info('0/1000000', '0/1000000');\n> \n> finds the first record since the start lsn and show it.\n> \n> - pg_get_wal_records_info('0/1000000', '0/1000030');\n> \n> finds the first record since the start lsn then show records up to\n> the end-lsn.\n> \n> \n> And about pg_get_raw_wal_record(). I don't see any use-case of the\n> function alone on SQL interface. 
Even if we need to inspect broken\n> WAL files, it needs profound knowledge of WAL format and tools that\n> doesn't work on SQL interface.\n> \n> However like pageinspect, if we separate the WAL-record fetching and\n> parsing it could be thought as useful.\n> \n> pg_get_wal_records_info woule be like:\n> \n> SELECT * FROM pg_walinspect_parse(raw)\n> FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn));\n> \n> And pg_get_wal_stats woule be like:\n> \n> SELECT * FROM pg_walinpect_stat(pg_walinspect_parse(raw))\n> FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn)));\n\n\nRegards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:52:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Sorry, some minor non-syntactical corrections.\n>\n> At Fri, 11 Mar 2022 11:38:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I played with this a bit, and would like to share some thoughts on it.\n> >\n> > It seems to me too rigorous that pg_get_wal_records_info/stats()\n> > reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> > the real end-of-WAL is more kind to users. I think the same with the\n> > restriction that start and end LSN are required to be different.\n> >\n> > The definition of end-lsn is fuzzy here. If I fed a future LSN to the\n> > functions, they tell me the beginning of the current insertion point\n> > in error message. On the other hand they don't accept the same\n> > value as end-LSN. 
I think it is right that they tell the current\n> > insertion point and they should take the end-LSN as the LSN to stop\n> > reading.\n> >\n> > I think pg_get_wal_stats() is worth to have but I think it should be\n> > implemented in SQL. Currently pg_get_wal_records_info() doesn't tell\n> > about FPI since pg_waldump doesn't but it is internally collected (of\n> > course!) and easily revealed. If we do that, the\n> > pg_get_wal_records_stats() would be reduced to the following SQL\n> > statement\n> >\n> > SELECT resource_manager resmgr,\n> > count(*) AS N,\n> > (count(*) * 100 / sum(count(*)) OVER tot)::numeric(5,2) AS \"%N\",\n> > sum(total_length) AS \"combined size\",\n> > (sum(total_length) * 100 / sum(sum(total_length)) OVER tot)::numeric(5,2) AS \"%combined size\",\n> > sum(fpi_len) AS fpilen,\n> > (sum(fpi_len) * 100 / sum(sum(fpi_len)) OVER tot)::numeric(5,2) AS \"%fpilen\"\n> > FROM pg_get_wal_records_info('0/1000000', '0/175DD7f')\n> > GROUP by resource_manager\n> > WINDOW tot AS ()\n> > ORDER BY \"combined size\" desc;\n> >\n> > The only difference with pg_waldump is the statement above doesn't\n> > show lines for the resource managers that don't contained in the\n> > result of pg_get_wal_records_info(). But I don't think that matters.\n> >\n> >\n> > Sometimes the field description has very long (28kb long) content. It\n> > makes the result output almost unreadable and I had a bit hard time\n> > struggling with the output full of '-'s. I would like have a default\n> > limit on the length of such fields that can be long but I'm not sure\n> > we want that.\n> >\n> >\n> - The difference between pg_get_wal_record_info and _records_ other than\n> - the number of argument is the former accepts incorrect LSNs.\n>\n> The discussion is somewhat confused after some twists and turns.. It\n> should be something like the following.\n>\n> pg_get_wal_record_info and pg_get_wal_records_info are almost same\n> since the latter can show a single record. 
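(Concretely, fetching a single record's info with the plural function looks like this, reusing the illustrative LSNs from the examples above:

```sql
-- One record only: start at the record's LSN, end one byte past its start
SELECT * FROM pg_get_wal_records_info('0/1000028', '0/1000029');
```
)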
However it is a bit\n> annoying to do that. Since, other than it doens't accept same LSNs for\n> start and end, it doesn't show a record when there' no record in the\n> specfied LSN range. But I don't think there's no usefulness of the\n> behavior.\n>\n> The following works,\n> pg_get_wal_record_info('0/1000000');\n\nThis does work but it doesn't show any WARNING message for the start\npointer adjustment. I think it should.\n\n> pg_get_wal_records_info('0/1000000');\n>\n\nI think this is fine. It should be working because the user hasn't\nspecified the end pointer so we assume the default end pointer is\nend-of-WAL.\n\n> but this doesn't\n> pg_get_wal_records_info('0/1000000', '0/1000000');\n> > ERROR: WAL start LSN must be less than end LSN\n>\n\nI think this behaviour is fine. We cannot have the same start and end\nlsn pointers.\n\n> And the following shows no records.\n> pg_get_wal_records_info('0/1000000', '0/1000001');\n> pg_get_wal_records_info('0/1000000', '0/1000028');\n>\n\nI think we should be erroring out here saying - couldn't find any\nvalid WAL record between given start and end lsn because there exists\nno valid wal records between the specified start and end lsn pointers.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 11 Mar 2022 17:15:09 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> - The difference between pg_get_wal_record_info and _records_ other than\n> - the number of argument is the former accepts incorrect LSNs.\n>\n> The discussion is somewhat confused after some twists and turns.. It\n> should be something like the following.\n>\n> pg_get_wal_record_info and pg_get_wal_records_info are almost same\n> since the latter can show a single record. However it is a bit\n> annoying to do that. 
Since, other than it doens't accept same LSNs for\n> start and end, it doesn't show a record when there' no record in the\n> specfied LSN range. But I don't think there's no usefulness of the\n> behavior.\n>\n\nSo, do you want the pg_get_wal_record_info function to be removed as\nwe can use pg_get_wal_records_info() to achieve what it does?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 11 Mar 2022 17:35:50 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:08 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > Attaching the v8 patch-set resolving above comments and some tests for\n> > checking function permissions. Please review it further.\n>\n> I played with this a bit, and would like to share some thoughts on it.\n\nThanks a lot Kyotaro-san for reviewing.\n\n> It seems to me too rigorous that pg_get_wal_records_info/stats()\n> reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> the real end-of-WAL is more kind to users. I think the same with the\n> restriction that start and end LSN are required to be different.\n\nThrowing error on future LSNs is the same behaviour for all of the\npg_walinspect function input LSNs. IMO it is a cleaner thing to do\nrather than confuse the users with different behaviours for each\nfunction. The principle is this - pg_walinspect functions can't show\nfuture WAL info. Having said that, I agree to make it a WARNING\ninstead of ERROR, for the simple reason that ERROR aborts the txn and\nthe applications can retry without aborting the txn. For instance,\npg_terminate_backend emits a WARNING if the PID isn't a postgres\nprocess id.\n\nPS: WARNING may not be a better idea than ERROR if we turn\npg_get_wal_stats a SQL function, see my response below.\n\n> The definition of end-lsn is fuzzy here. 
If I fed a future LSN to the\n> functions, they tell me the beginning of the current insertion point\n> in error message. On the other hand they don't accept the same\n> value as end-LSN. I think it is right that they tell the current\n> insertion point and they should take the end-LSN as the LSN to stop\n> reading.\n\nThe future LSN is determined by this:\n\nif (!RecoveryInProgress())\n    available_lsn = GetFlushRecPtr(NULL);\nelse\n    available_lsn = GetXLogReplayRecPtr(NULL);\n\nGetFlushRecPtr returns the last flushed byte + 1, meaning this is the end\nLSN currently known in the server, but it is not the start LSN of the\nlast WAL record in the server. Same goes with GetXLogReplayRecPtr\nwhich gives lastReplayedEndRecPtr end+1 position. I picked\nGetFlushRecPtr and GetXLogReplayRecPtr to determine the future WAL LSN\nbecause this is how read_local_xlog_page determines up to where to read WAL\nand goes into wait mode, but I wanted to avoid the wait mode completely\nfor all the pg_walinspect functions (to keep things simple for now),\nhence doing similar checks within the input validation code and\nemitting a warning.\n\nAnd you are right that when we emit something like below, users tend to use\n0/15B6D68 (from the DETAIL message) as the end LSN. I don't want to\nignore this DETAIL message altogether as it gives an idea where the\nserver is. How about rephrasing the DETAIL message a bit, something\nlike \"Database system flushed the WAL up to WAL LSN %X/%X.\" or some\nother better phrasing?\n\nWARNING: WAL start LSN cannot be a future WAL LSN\nDETAIL: Last known WAL LSN on the database system is 0/15B6D68.\n\nIf users aren't sure what the end record LSN is, they can\njust use pg_get_wal_records_info and pg_get_wal_stats without end LSN:\nselect * from pg_get_wal_records_info('0/15B6D68');\nselect * from pg_get_wal_stats('0/15B6D68');\n\n> I think pg_get_wal_stats() is worth to have but I think it should be\n> implemented in SQL. 
Currently pg_get_wal_records_info() doesn't tell\n> about FPI since pg_waldump doesn't but it is internally collected (of\n> course!) and easily revealed. If we do that, the\n> pg_get_wal_records_stats() would be reduced to the following SQL\n> statement\n>\n> SELECT resource_manager resmgr,\n> count(*) AS N,\n> (count(*) * 100 / sum(count(*)) OVER tot)::numeric(5,2) AS \"%N\",\n> sum(total_length) AS \"combined size\",\n> (sum(total_length) * 100 / sum(sum(total_length)) OVER tot)::numeric(5,2) AS \"%combined size\",\n> sum(fpi_len) AS fpilen,\n> (sum(fpi_len) * 100 / sum(sum(fpi_len)) OVER tot)::numeric(5,2) AS \"%fpilen\"\n> FROM pg_get_wal_records_info('0/1000000', '0/175DD7f')\n> GROUP by resource_manager\n> WINDOW tot AS ()\n> ORDER BY \"combined size\" desc;\n>\n> The only difference with pg_waldump is the statement above doesn't\n> show lines for the resource managers that don't contained in the\n> result of pg_get_wal_records_info(). But I don't think that matters.\n\nYeah, this is better. One problem with the above is when\npg_get_wal_records_info emits a warning for future LSN. But this\nshouldn't stop us doing it via SQL. Instead I would let all the\npg_walinspect functions emit errors as opposed to WARNING. 
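(Another option on the caller's side is to clamp the end pointer so it can never be in the future, along the lines of Jeff's earlier least() suggestion in this thread; an illustrative sketch with made-up LSNs:

```sql
-- Clamp end_lsn to the server's current flush position
SELECT *
  FROM pg_get_wal_records_info('0/10A3E50',
         least(pg_current_wal_flush_lsn(), '0/25B6F00'::pg_lsn));
```
)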
Thoughts?\n\npostgres=# SELECT resource_manager, count(*) AS count,\n(count(*) * 100 / sum(count(*)) OVER tot)::numeric(5,2) AS count_percentage,\nsum(total_length) AS combined_size,\n(sum(total_length) * 100 / sum(sum(total_length)) OVER\ntot)::numeric(5,2) AS combined_size_percentage\nFROM pg_get_wal_records_info('0/10A3E50', '0/25B6F00')\nGROUP BY resource_manager\nWINDOW tot AS ()\nORDER BY combined_size desc;\nWARNING: WAL end LSN cannot be a future WAL LSN\nDETAIL: Last known WAL LSN on the database system is 0/15CAA70.\n resource_manager | count | count_percentage | combined_size |\ncombined_size_percentage\n------------------+-------+------------------+---------------+--------------------------\n | 1 | 100.00 | |\n(1 row)\n\n> Sometimes the field description has very long (28kb long) content. It\n> makes the result output almost unreadable and I had a bit hard time\n> struggling with the output full of '-'s. I would like have a default\n> limit on the length of such fields that can be long but I'm not sure\n> we want that.\n\nYeah, it's a text column, let's leave it as-is, if required users can\nalways ignore the description columns.\n\n> And about pg_get_raw_wal_record(). I don't see any use-case of the\n> function alone on SQL interface. Even if we need to inspect broken\n> WAL files, it needs profound knowledge of WAL format and tools that\n> doesn't work on SQL interface.\n>However like pageinspect, if we separate the WAL-record fetching and\n> parsing it could be thought as useful.\n> SELECT * FROM pg_walinspect_parse(raw)\n> FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn));\n>\n> And pg_get_wal_stats woule be like:\n>\n> SELECT * FROM pg_walinpect_stat(pg_walinspect_parse(raw))\n> FROM (SELECT * FROM pg_walinspect_get_raw(start_lsn, end_lsn)));\n\nImagine pg_get_raw_wal_record function feeding raw WAL record to an\nexternal tool/extension that understands the WAL. Apart from this, I\ndon't have a concrete reason either. 
I'm open to removing this\nfunction as well and adding it along with the raw WAL parsing function\nin future.\n\nI haven't thought about the raw WAL parsing functions for now. In\nfact, there are many functions we can add to pg_walinspect - functions\nwith wait mode for future WAL, WAL parsing, function to return all the\nWAL record info/stats given a WAL file name, functions to return WAL\ninfo/stats from historic timelines as well, function to see if the\ngiven WAL file is valid and so on. We can park these functions for\nfuture versions of pg_walinspect once the extension itself with basic\nyet-useful-and-effective functions gets in. I will make a note of\nthese functions and will work in future based on how pg_walinspect\ngets received by the users and community out there.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 11 Mar 2022 17:45:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Some comments on pg_walinspect-docc.patch this time:\n\n+ <varlistentry>\n+ <term>\n+ <function>pg_get_wal_record_info(in_lsn pg_lsn, lsn OUT pg_lsn,\nprev_lsn OUT pg_lsn, xid OUT xid, resource_manager OUT text, length\nOUT int4, total_length OUT int4, description OUT text, block_ref OUT\ntext, data OUT bytea, data_len OUT int4)</function>\n+ </term>\n\nYou may shorten this by mentioning just the function input parameters\nand specify \"returns record\" like shown below. 
So no need to specify\nall the OUT params.\n\npg_get_wal_record_info(in_lsn pg_lsn) returns record.\n\nPlease check the documentation for other functions for reference.\n\n==\n\n+ <term>\n+ <function>pg_get_wal_records_info(start_lsn pg_lsn, end_lsn\npg_lsn DEFAULT NULL, lsn OUT pg_lsn, prev_lsn OUT pg_lsn, xid OUT xid,\nresource_manager OUT text, length OUT int4, total_length OUT int4,\ndescription OUT text, block_ref OUT text, data OUT bytea, data_len OUT\nint4)</function>\n+ </term>\n\nSame comment applies here as well. In the return type you can just\nmention - \"returns setof record\" like shown below:\n\npg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn) returns setof records.\n\nYou may also check for such optimizations at other places. I might\nhave missed some.\n\n==\n\n+<screen>\n+postgres=# select prev_lsn, xid, resource_manager, length,\ntotal_length, block_ref from pg_get_wal_records_info('0/158A7F0',\n'0/1591400');\n+ prev_lsn | xid | resource_manager | length | total_length |\n block_ref\n+-----------+-----+------------------+--------+--------------+--------------------------------------------------------------------------\n+ 0/158A7B8 | 735 | Heap | 54 | 7838 | blkref\n#0: rel 1663/5/2619 blk 18 (FPW); hole: offset: 88, length: 408\n+ 0/158A7F0 | 735 | Btree | 53 | 8133 | blkref\n#0: rel 1663/5/2696 blk 1 (FPW); hole: offset: 1632, length: 112\n+ 0/158C6A8 | 735 | Heap | 53 | 873 | blkref\n#0: rel 1663/5/1259 blk 0 (FPW); hole: offset: 212, length: 7372\n\nInstead of specifying column names in the targetlist I think it's\nbetter to use \"*\" so that it will display all the output columns. Also\nyou may shorten the gap between start and end lsn to reduce the output\nsize.\n\n==\n\nAny reason for not specifying author name in the .sgml file. Do you\nwant me to add my name to the author? 
:)\n\n <para>\n Ashutosh Sharma <email>ashu.coek88@gmail.com</email>\n </para>\n </sect2>\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Mar 10, 2022 at 10:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Wed, 2022-03-02 at 22:37 +0530, Bharath Rupireddy wrote:\n> > >\n> > > Attaching v6 patch set with above review comments addressed. Please\n> > > review it further.\n>\n> Thanks Jeff for reviewing it. I've posted the latest v7 patch-set\n> upthread [1] which is having more simple-yet-useful-and-effective\n> functions.\n>\n> > * Don't issue WARNINGs or other messages for ordinary situations, like\n> > when pg_get_wal_records_info() hits the end of WAL.\n>\n> v7 patch-set [1] has no warnings, but the functions will error out if\n> future LSN is specified.\n>\n> > * It feels like the APIs that allow waiting for the end of WAL are\n> > slightly off. Can't you just do pg_get_wal_records_info(start_lsn,\n> > least(pg_current_wal_flush_lsn(), end_lsn)) if you want the non-waiting\n> > behavior? Try to make the API more orthogonal, where a few basic\n> > functions can be combined to give you everything you need, rather than\n> > specifying extra parameters and issuing WARNINGs. I\n>\n> v7 patch-set [1] onwards waiting mode has been removed for all of the\n> functions, again to keep things simple-yet-useful-and-effective.\n> However, we can always add new pg_walinspect functions that wait for\n> future WAL in the next versions once basic stuff gets committed and if\n> many users ask for it.\n>\n> > * In the docs, include some example output. I don't see any output in\n> > the tests, which makes sense because it's mostly non-deterministic, but\n> > it would be helpful to see sample output of at least\n> > pg_get_wal_records_info().\n>\n> +1. 
Added for pg_get_wal_records_info and pg_get_wal_stats.\n>\n> > * Is pg_get_wal_stats() even necessary, or can you get the same\n> > information with a query over pg_get_wal_records_info()? For instance,\n> > if you want to group by transaction ID rather than rmgr, then\n> > pg_get_wal_stats() is useless.\n>\n> Yes, you are right pg_get_wal_stats provides WAL stats per resource\n> manager which is similar to pg_waldump with --start, --end and --stats\n> option. It provides more information than pg_get_wal_records_info and\n> is a good way of getting stats than adding more columns to\n> pg_get_wal_records_info, calculating percentage in sql and having\n> group by clause. IMO, pg_get_wal_stats is more readable and useful.\n>\n> > * Would be nice to have a pg_wal_file_is_valid() or similar, which\n> > would test that it exists, and the header matches the filename (e.g. if\n> > it was recycled but not used, that would count as invalid). I think\n> > pg_get_first_valid_wal_record_lsn() would make some cases look invalid\n> > even if the file is valid -- for example, if a wal record spans many\n> > wal segments, the segments might look invalid because they contain no\n> > complete records, but the file itself is still valid and contains valid\n> > wal data.\n>\n> Actually I haven't tried testing a single WAL record spanning many WAL\n> files yet(I'm happy to try it if someone suggests such a use-case). In\n> that case too I assume pg_get_first_valid_wal_record_lsn() shouldn't\n> have a problem because it just gives the next valid LSN and it's\n> previous LSN using existing WAL reader API XLogFindNextRecord(). It\n> opens up the WAL file segments using (some dots to connect -\n> page_read/read_local_xlog_page, WALRead,\n> segment_open/wal_segment_open). Thoughts?\n>\n> I don't think it's necessary to have a function pg_wal_file_is_valid()\n> that given a WAL file name as input checks whether a WAL file exists\n> or not, probably not in the core (xlogfuncs.c) too. 
These kinds of\n> functions can open up challenges in terms of user input validation and\n> may cause unnecessary problems, please see some related discussion\n> [2].\n>\n> > * Is there a reason you didn't include the timeline ID in\n> > pg_get_wal_records_info()?\n>\n> I'm right now allowing the functions to read WAL from the current\n> server's timeline which I have mentioned in the docs. The server's\n> current timeline is available via pg_control_checkpoint()'s\n> timeline_id. So, having timeline_id as a column doesn't make sense.\n> Again this is to keep things simple-yet-useful-and-effective. However,\n> we can add new pg_walinspect functions to read WAL from historic as\n> well as current timelines in the next versions once basic stuff gets\n> committed and if many users ask for it.\n>\n> + <para>\n> + All the functions of this module will provide the WAL information using the\n> + current server's timeline ID.\n> + </para>\n>\n> > * Can we mark this extension 'trusted'? I'm not 100% clear on the\n> > standards for that marker, but it seems reasonable for a database owner\n> > with the right privileges might want to install it.\n>\n> 'trusted' extensions concept is added by commit 50fc694 [3]. Since\n> pg_walinspect deals with WAL, we strictly want to control who creates\n> and can execute functions exposed by it, so I don't know if 'trusted'\n> is a good idea here. Also, pageinspect isn't a 'trusted' extension.\n>\n> > * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> > function should require pg_read_server_files? Or at least\n> > pg_read_all_data?\n>\n> pg_read_all_data may not be the right choice, but pg_read_server_files\n> is. However, does it sound good if some functions are allowed to be\n> executed by users with a pg_monitor role and others\n> pg_get_raw_wal_record by users with pg_read_server_files? 
Since the\n> extension itself can be created by superusers, isn't the\n> pg_get_raw_wal_record sort of safe with pg_monitor itself?\n>\n> If hackers don't agree, I'm happy to grant execution on\n> pg_get_raw_wal_record() to the pg_read_server_files role.\n>\n> Attaching the v8 patch-set resolving above comments and some tests for\n> checking function permissions. Please review it further.\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACWtToUQ5hCCBJP%2BmKeVUcN-g7cMb9XvhAcicPxUDsdcKg%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CA%2BTgmobYrTgMEF0SV%2ByDYyCCh44DAGjZVs7BYGrD8xD3vwNjHA%40mail.gmail.com\n> [3] commit 50fc694e43742ce3d04a5e9f708432cb022c5f0d\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Jan 29 18:42:43 2020 -0500\n>\n> Invent \"trusted\" extensions, and remove the pg_pltemplate catalog.\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Fri, 11 Mar 2022 19:53:06 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 11, 2022 at 8:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> >\n> - The difference between pg_get_wal_record_info and _records_ other than\n> - the number of arguments is that the former accepts incorrect LSNs.\n>\n> The discussion is somewhat confused after some twists and turns.. It\n> should be something like the following.\n>\n> pg_get_wal_record_info and pg_get_wal_records_info are almost the same\n> since the latter can show a single record. However it is a bit annoying\n> to do that. Since, other than not accepting the same LSNs for start and\n> end, it doesn't show a record when there's no record in the specified\n> LSN range. But I don't think the behavior is useless.\n\nI would like to reassert the usability of pg_get_wal_record_info and\npg_get_wal_records_info:\n\npg_get_wal_record_info(lsn):\nif lsn is invalid i.e.
'0/0' - throws an error\nif lsn is future lsn - throws an error\nif lsn looks okay, it figures out the next available valid WAL record\nand returns info about that\n\npg_get_wal_records_info(start_lsn, end_lsn default null) -> if start\nand end lsns are provided no end_lsn would give the WAL records info\ntill the end of WAL,\nif start_lsn is invalid i.e. '0/0' - throws an error\nif start_lsn is future lsn - throws an error\nif end_lsn isn't provided by the user - calculates the end_lsn as\nserver's current flush lsn\nif end_lsn is provided by the user - throws an error if it's future LSN\nif start_lsn and end_lsn look okay, it returns info about all WAL\nrecords from the next available valid WAL record of start_lsn until\nend_lsn\n\nSo, both pg_get_wal_record_info and pg_get_wal_records_info are necessary IMHO.\n\nComing to the behaviour when input lsn is '0/1000000', it's an issue\nwith XLogSegmentOffset(lsn, wal_segment_size) != 0 check, which I will\nfix in the next version.\n\n if (*first_record != lsn && XLogSegmentOffset(lsn, wal_segment_size) != 0)\n ereport(WARNING,\n (errmsg_plural(\"first record is after %X/%X, at %X/%X,\nskipping over %u byte\",\n \"first record is after %X/%X, at %X/%X,\nskipping over %u bytes\",\n (*first_record - lsn),\n LSN_FORMAT_ARGS(lsn),\n LSN_FORMAT_ARGS(*first_record),\n (uint32) (*first_record - lsn))));\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 11 Mar 2022 21:12:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Mar 10, 2022 at 9:38 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> It seems to me too rigorous that pg_get_wal_records_info/stats()\n> reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> the real end-of-WAL is more kind to users. 
I think the same with the\n> restriction that start and end LSN are required to be different.\n\nIn his review just yesterday, Jeff suggested this: \"Don't issue\nWARNINGs or other messages for ordinary situations, like when\npg_get_wal_records_info() hits the end of WAL.\" I think he's entirely\nright, and I don't think any patch that does otherwise should get\ncommitted. It is worth remembering that the results of queries are\noften examined by something other than a human being sitting at a psql\nterminal. Any tool that uses this is going to want to understand what\nhappened from the result set, not by parsing strings that may show up\ninside warning messages.\n\nI think that the right answer here is to have a function that returns\none row per record parsed, and each row should also include the start\nand end LSN of the record. If for some reason the WAL records return\nstart after the specified start LSN (e.g. because we skip over a page\nheader) or end before the specified end LSN (e.g. because we reach\nend-of-WAL) the user can figure it out from looking at the LSNs in the\noutput rows and comparing them to the LSNs provided as input.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 15:39:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, 2022-03-10 at 15:54 -0500, Stephen Frost wrote:\n> The standard is basically that all of the functions it brings are\n> written to enforce the PG privilege system and you aren't able to use\n> the extension to bypass those privileges. In some cases that means\n> that\n\nEvery extension should follow that standard, right? 
If it doesn't (e.g.\ncreating dangerous functions and granting them to public), then even\nsuperuser should not install it.\n\n> the C-language functions installed have if(!superuser) ereport()\n> calls\n\nI'm curious why not rely on the grant system where possible? I thought\nwe were trying to get away from explicit superuser checks.\n\n> I've not looked back on this thread, but I'd expect pg_walinspect to\n> need those superuser checks and with those it *could* be marked as\n> trusted, but that again brings into question how useful it is to mark\n> it\n> thusly.\n\nAs long as any functions are safely accessible to public or a\npredefined role, there is some utility for the 'trusted' marker.\n\nAs this patch is currently written, pg_monitor has access these\nfunctions, though I don't think that's the right privilege level at\nleast for pg_get_raw_wal_record().\n\n> I certainly don't think we should allow either database owners or\n> regular users on a system the ability to access the WAL traffic of\n> the\n> entire system.\n\nAgreed. That was not what I intended by asking if it should be marked\n'trusted'. The marker only allows the non-superuser to run the CREATE\nEXTENSION command; it's up to the extension script to decide whether\nany non-superusers can do anything at all with the extension.\n\n> More forcefully- we should *not* be throwing more access\n> rights towards $owners in general and should be thinking about how we\n> can allow admins, providers, whomever, the ability to control what\n> rights users are given. 
If they're all lumped under 'owner' then\n> there's no way for people to provide granular access to just those\n> things they wish and intend to.\n\nNot sure I understand, but that sounds like a larger discussion.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 11 Mar 2022 19:24:12 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Fri, Mar 11, 2022 at 7:53 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Some comments on pg_walinspect-docc.patch this time:\n>\n> + <varlistentry>\n> + <term>\n> + <function>pg_get_wal_record_info(in_lsn pg_lsn, lsn OUT pg_lsn,\n> prev_lsn OUT pg_lsn, xid OUT xid, resource_manager OUT text, length\n> OUT int4, total_length OUT int4, description OUT text, block_ref OUT\n> text, data OUT bytea, data_len OUT int4)</function>\n> + </term>\n>\n> You may shorten this by mentioning just the function input parameters\n> and specify \"returns record\" like shown below. So no need to specify\n> all the OUT params.\n>\n> pg_get_wal_record_info(in_lsn pg_lsn) returns record.\n>\n> Please check the documentation for other functions for reference.\n>\n> ==\n>\n> + <term>\n> + <function>pg_get_wal_records_info(start_lsn pg_lsn, end_lsn\n> pg_lsn DEFAULT NULL, lsn OUT pg_lsn, prev_lsn OUT pg_lsn, xid OUT xid,\n> resource_manager OUT text, length OUT int4, total_length OUT int4,\n> description OUT text, block_ref OUT text, data OUT bytea, data_len OUT\n> int4)</function>\n> + </term>\n>\n> Same comment applies here as well. In the return type you can just\n> mention - \"returns setof record\" like shown below:\n>\n> pg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn) returns setof records.\n>\n> You may also check for such optimizations at other places. 
I might\n> have missed some.\n\nI used the way verify_heapam shows the columns as it looks good IMO\nand we can't show sample outputs for all of the functions in the\ndocumentation.\n\n> ==\n>\n> +<screen>\n> +postgres=# select prev_lsn, xid, resource_manager, length,\n> total_length, block_ref from pg_get_wal_records_info('0/158A7F0',\n> '0/1591400');\n> + prev_lsn | xid | resource_manager | length | total_length |\n> block_ref\n> +-----------+-----+------------------+--------+--------------+--------------------------------------------------------------------------\n> + 0/158A7B8 | 735 | Heap | 54 | 7838 | blkref\n> #0: rel 1663/5/2619 blk 18 (FPW); hole: offset: 88, length: 408\n> + 0/158A7F0 | 735 | Btree | 53 | 8133 | blkref\n> #0: rel 1663/5/2696 blk 1 (FPW); hole: offset: 1632, length: 112\n> + 0/158C6A8 | 735 | Heap | 53 | 873 | blkref\n> #0: rel 1663/5/1259 blk 0 (FPW); hole: offset: 212, length: 7372\n>\n> Instead of specifying column names in the targetlist I think it's\n> better to use \"*\" so that it will display all the output columns. Also\n> you may shorten the gap between start and end lsn to reduce the output\n> size.\n\nAll columns are giving huge output, especially because of data and\ndescription columns hence I'm not showing them in the sample output.\n\n> ==\n>\n> Any reason for not specifying author name in the .sgml file. Do you\n> want me to add my name to the author? :)\n\nHaha. Thanks. 
I will add in the v9 patch set which I will post in a while.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 12 Mar 2022 17:13:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sat, Mar 12, 2022 at 2:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 9:38 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > It seems to me too rigorous that pg_get_wal_records_info/stats()\n> > reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> > the real end-of-WAL is more kind to users. I think the same with the\n> > restriction that start and end LSN are required to be different.\n>\n> In his review just yesterday, Jeff suggested this: \"Don't issue\n> WARNINGs or other messages for ordinary situations, like when\n> pg_get_wal_records_info() hits the end of WAL.\" I think he's entirely\n> right, and I don't think any patch that does otherwise should get\n> committed. It is worth remembering that the results of queries are\n> often examined by something other than a human being sitting at a psql\n> terminal. Any tool that uses this is going to want to understand what\n> happened from the result set, not by parsing strings that may show up\n> inside warning messages.\n>\n> I think that the right answer here is to have a function that returns\n> one row per record parsed, and each row should also include the start\n> and end LSN of the record. If for some reason the WAL records return\n> start after the specified start LSN (e.g. because we skip over a page\n> header) or end before the specified end LSN (e.g. because we reach\n> end-of-WAL) the user can figure it out from looking at the LSNs in the\n> output rows and comparing them to the LSNs provided as input.\n\nThanks Robert. 
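For example (the LSN values here are illustrative, and the start_lsn/end_lsn output columns are assumed), a tool could detect both conditions purely from the result set:

```sql
-- Sketch only: compare the LSNs in the returned rows against the LSNs
-- passed as input, instead of parsing WARNING text.
SELECT min(start_lsn) > '0/158A7F0'::pg_lsn AS skipped_bytes_at_start,
       max(end_lsn)   < '0/1591400'::pg_lsn AS stopped_before_end
FROM pg_get_wal_records_info('0/158A7F0', '0/1591400');
```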
I've removed the WARNING part and added end_lsn as suggested.\n\nThanks Kyotaro-san, Ashutosh and Jeff for your review. I tried to\naddress most, if not all, of your review comments.\n\nHere's the v9 patch-set, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 12 Mar 2022 17:13:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "At Fri, 11 Mar 2022 15:39:13 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Thu, Mar 10, 2022 at 9:38 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > It seems to me too rigorous that pg_get_wal_records_info/stats()\n> > reject future LSNs as end-LSN and I think WARNING or INFO and stop at\n> > the real end-of-WAL is more kind to users. I think the same with the\n> > restriction that start and end LSN are required to be different.\n> \n> In his review just yesterday, Jeff suggested this: \"Don't issue\n> WARNINGs or other messages for ordinary situations, like when\n> pg_get_wal_records_info() hits the end of WAL.\" I think he's entirely\n> right, and I don't think any patch that does otherwise should get\n\nIt depends on what we think is \"ordinary\" here. If we don't assume\nthat the specified LSN range is completely filled out, the case above\nis ordinary and there is no need for any WARNING or INFO. I'm fine\nwith that definition here.\n\n> committed. It is worth remembering that the results of queries are\n> often examined by something other than a human being sitting at a psql\n> terminal. Any tool that uses this is going to want to understand what\n> happened from the result set, not by parsing strings that may show up\n> inside warning messages.\n\nRight. I don't think a WARNING should be required to evaluate\nthe result.
And I think a WARNING like 'reached end-of-wal before end LSN' is\nthe kind that is not required for evaluating the result, since each\nWAL row contains at least the start LSN.\n\n> I think that the right answer here is to have a function that returns\n> one row per record parsed, and each row should also include the start\n> and end LSN of the record. If for some reason the WAL records return\n> start after the specified start LSN (e.g. because we skip over a page\n> header) or end before the specified end LSN (e.g. because we reach\n> end-of-WAL) the user can figure it out from looking at the LSNs in the\n> output rows and comparing them to the LSNs provided as input.\n\nI agree with you here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Mar 2022 10:58:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Thu, 2022-03-10 at 15:54 -0500, Stephen Frost wrote:\n> > The standard is basically that all of the functions it brings are\n> > written to enforce the PG privilege system and you aren't able to use\n> > the extension to bypass those privileges. In some cases that means\n> > that\n> \n> Every extension should follow that standard, right? If it doesn't (e.g.\n> creating dangerous functions and granting them to public), then even\n> superuser should not install it.\n\nEvery extension that's intended to be installed on a multi-user PG\nsystem where the admin cares about security in the database, sure. I\ndisagree that this applies universally to every system or every\nextension.
Those are standards that modules we distribute in contrib\nshould meet but I don't know that we necessarily have to have them for,\nsay, modules in test.\n\n> > the C-language functions installed have if(!superuser) ereport()\n> > calls\n> \n> I'm curious why not rely on the grant system where possible? I thought\n> we were trying to get away from explicit superuser checks.\n\nWe don't yet have capabilities for everything. I agree that we should\nbe getting away from explicit superuser checks and explained below how\nwe might be able to in this case.\n\n> > I've not looked back on this thread, but I'd expect pg_walinspect to\n> > need those superuser checks and with those it *could* be marked as\n> > trusted, but that again brings into question how useful it is to mark\n> > it\n> > thusly.\n> \n> As long as any functions are safely accessible to public or a\n> predefined role, there is some utility for the 'trusted' marker.\n\nI'm not sure that I agree, though I'm also not sure that it's a useful\nthing to debate. Still, if all of the functions in a particular\nextension have explicit if (!superuser) ereport() checks in them, then\nwhile installing it is 'safe' and it could be marked as 'trusted'\nthere's very little point in doing so as the only person who can get\nanything useful from those functions is a superuser, and a superuser can\ninstall non-trusted extensions anyway. How is it useful to mark such an\nextension as 'trusted'?\n\n> As this patch is currently written, pg_monitor has access these\n> functions, though I don't think that's the right privilege level at\n> least for pg_get_raw_wal_record().\n\nYeah, I agree that pg_monitor isn't the right thing for such a function\nto be checking.\n\n> > I certainly don't think we should allow either database owners or\n> > regular users on a system the ability to access the WAL traffic of\n> > the\n> > entire system.\n> \n> Agreed. That was not what I intended by asking if it should be marked\n> 'trusted'. 
The marker only allows the non-superuser to run the CREATE\n> EXTENSION command; it's up to the extension script to decide whether\n> any non-superusers can do anything at all with the extension.\n\nSure.\n\n> > More forcefully- we should *not* be throwing more access\n> > rights towards $owners in general and should be thinking about how we\n> > can allow admins, providers, whomever, the ability to control what\n> > rights users are given. If they're all lumped under 'owner' then\n> > there's no way for people to provide granular access to just those\n> > things they wish and intend to.\n> \n> Not sure I understand, but that sounds like a larger discussion.\n\nThe point I was trying to make is that it's better to move in the\ndirection of things like pg_read_all_data rather than just declaring\nthat the owner of a database can read all of the tables in that\ndatabase, as an example. Maybe we want to implicitly have such\nprivilege for the owner of the database too, but we should first make it\nsomething that's able to be GRANT'd out to non-owners so that it's not\nrequired that all of those privileges be given out together at once.\n\nNote that to be considered an 'owner' of an object you have to be a\nmember of the role that owns the object, which means that said role is\nnecessarily able to also become the owning role too. 
Lumping lots of\nprivileges together- all the rights that being an 'owner' of the object\nconveys, plus the ability to also become that role directly and do\nthings as that role, works actively against the general idea of 'least\nprivilege'.\n\nThanks,\n\nStephen", "msg_date": "Mon, 14 Mar 2022 10:55:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Mar 14, 2022 at 8:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> > As this patch is currently written, pg_monitor has access these\n> > functions, though I don't think that's the right privilege level at\n> > least for pg_get_raw_wal_record().\n>\n> Yeah, I agree that pg_monitor isn't the right thing for such a function\n> to be checking.\n\nOn Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> function should require pg_read_server_files? Or at least\n> pg_read_all_data?\n\nThe v9 patch set posted at [1] grants execution on\npg_get_raw_wal_record() to the pg_monitor role.\n\npg_read_all_data may not be the right choice, but pg_read_server_files\nis as these functions do read the WAL files on the server. 
If okay,\nI'm happy to grant execution on pg_get_raw_wal_record() to the\npg_read_server_files role.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACVRH-z8mZLyFkpLvY4qRhxQCqU_BLkFTtwt%2BTPZNhfEVg%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Mar 2022 07:21:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Mar 15, 2022 at 7:21 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Mar 14, 2022 at 8:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > > As this patch is currently written, pg_monitor has access these\n> > > functions, though I don't think that's the right privilege level at\n> > > least for pg_get_raw_wal_record().\n> >\n> > Yeah, I agree that pg_monitor isn't the right thing for such a function\n> > to be checking.\n>\n> On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> > function should require pg_read_server_files? Or at least\n> > pg_read_all_data?\n>\n> The v9 patch set posted at [1] grants execution on\n> pg_get_raw_wal_record() to the pg_monitor role.\n>\n> pg_read_all_data may not be the right choice, but pg_read_server_files\n> is as these functions do read the WAL files on the server. 
If okay,\n> I'm happy to grant execution on pg_get_raw_wal_record() to the\n> pg_read_server_files role.\n>\n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACVRH-z8mZLyFkpLvY4qRhxQCqU_BLkFTtwt%2BTPZNhfEVg%40mail.gmail.com\n\nAttaching v10 patch set which allows pg_get_raw_wal_record to be\nexecuted by either superuser or users with pg_read_server_files role,\nno other change from v9 patch set.\n\nPlease review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 16 Mar 2022 13:11:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Greetings,\n\n* Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> On Tue, Mar 15, 2022 at 7:21 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Mar 14, 2022 at 8:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > >\n> > > > As this patch is currently written, pg_monitor has access these\n> > > > functions, though I don't think that's the right privilege level at\n> > > > least for pg_get_raw_wal_record().\n> > >\n> > > Yeah, I agree that pg_monitor isn't the right thing for such a function\n> > > to be checking.\n> >\n> > On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > >\n> > > * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> > > function should require pg_read_server_files? Or at least\n> > > pg_read_all_data?\n> >\n> > The v9 patch set posted at [1] grants execution on\n> > pg_get_raw_wal_record() to the pg_monitor role.\n> >\n> > pg_read_all_data may not be the right choice, but pg_read_server_files\n> > is as these functions do read the WAL files on the server. 
If okay,\n> > I'm happy to grant execution on pg_get_raw_wal_record() to the\n> > pg_read_server_files role.\n> >\n> > Thoughts?\n> >\n> > [1] https://www.postgresql.org/message-id/CALj2ACVRH-z8mZLyFkpLvY4qRhxQCqU_BLkFTtwt%2BTPZNhfEVg%40mail.gmail.com\n> \n> Attaching v10 patch set which allows pg_get_raw_wal_record to be\n> executed by either superuser or users with pg_read_server_files role,\n> no other change from v9 patch set.\n\nIn a quick look, that seems reasonable to me. If folks want to give out\naccess to this function individually they're also able to do so, which\nis good. Doesn't seem worthwhile to introduce a new predefined role for\nthis one function.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Mar 2022 10:26:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "I can see that the pg_get_wal_records_info function shows the details\nof the WAL record whose existence is beyond the user specified\nstop/end lsn pointer. 
See below:\n\nashu@postgres=# select * from pg_get_wal_records_info('0/01000028',\n'0/01000029');\n-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nstart_lsn | 0/1000028\nend_lsn | 0/100009F\nprev_lsn | 0/0\nxid | 0\nresource_manager | XLOG\nrecord_length | 114\nfpi_length | 0\ndescription | CHECKPOINT_SHUTDOWN redo 0/1000028; tli 1; prev tli\n1; fpw true; xid 0:3; oid 10000; multi 1; offset 0; oldest xid 3 in DB\n1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0;\noldest running xid 0; shutdown\nblock_ref |\ndata_length | 88\ndata |\n\\x28000001000000000100000001000000010000000000000003000000000000001027000001000000000000000300000001000000010000000100000072550000a5c4316200000000000000000000000000000000ff7f0000\n\nIn this case, the end lsn pointer specified by the user is\n'0/01000029'. There is only one WAL record which starts before this\nspecified end lsn pointer whose start pointer is at 01000028, but that\nWAL record ends at 0/100009F which is way beyond the specified end\nlsn. So, how come we are able to display the complete WAL record info?\nAFAIU, end lsn is the lsn pointer where you need to stop reading the\nWAL data. 
If that is true, then there exists no valid WAL record\nbetween the start and end lsn in this particular case.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Wed, Mar 16, 2022 at 7:56 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Bharath Rupireddy (bharath.rupireddyforpostgres@gmail.com) wrote:\n> > On Tue, Mar 15, 2022 at 7:21 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 14, 2022 at 8:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > >\n> > > > > As this patch is currently written, pg_monitor has access these\n> > > > > functions, though I don't think that's the right privilege level at\n> > > > > least for pg_get_raw_wal_record().\n> > > >\n> > > > Yeah, I agree that pg_monitor isn't the right thing for such a function\n> > > > to be checking.\n> > >\n> > > On Thu, Mar 10, 2022 at 1:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > > >\n> > > > * pg_get_raw_wal_record() seems too powerful for pg_monitor. Maybe that\n> > > > function should require pg_read_server_files? Or at least\n> > > > pg_read_all_data?\n> > >\n> > > The v9 patch set posted at [1] grants execution on\n> > > pg_get_raw_wal_record() to the pg_monitor role.\n> > >\n> > > pg_read_all_data may not be the right choice, but pg_read_server_files\n> > > is as these functions do read the WAL files on the server. If okay,\n> > > I'm happy to grant execution on pg_get_raw_wal_record() to the\n> > > pg_read_server_files role.\n> > >\n> > > Thoughts?\n> > >\n> > > [1] https://www.postgresql.org/message-id/CALj2ACVRH-z8mZLyFkpLvY4qRhxQCqU_BLkFTtwt%2BTPZNhfEVg%40mail.gmail.com\n> >\n> > Attaching v10 patch set which allows pg_get_raw_wal_record to be\n> > executed by either superuser or users with pg_read_server_files role,\n> > no other change from v9 patch set.\n>\n> In a quick look, that seems reasonable to me. If folks want to give out\n> access to this function individually they're also able to do so, which\n> is good. 
Doesn't seem worthwhile to introduce a new predefined role for\n> this one function.\n>\n> Thanks,\n>\n> Stephen\n\n\n", "msg_date": "Wed, 16 Mar 2022 20:49:12 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "At Wed, 16 Mar 2022 20:49:12 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> I can see that the pg_get_wal_records_info function shows the details\n> of the WAL record whose existence is beyond the user specified\n> stop/end lsn pointer. See below:\n> \n> ashu@postgres=# select * from pg_get_wal_records_info('0/01000028',\n> '0/01000029');\n> -[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> start_lsn | 0/1000028\n> end_lsn | 0/100009F\n> prev_lsn | 0/0\n...\n> record_length | 114\n...\n> In this case, the end lsn pointer specified by the user is\n> '0/01000029'. There is only one WAL record which starts before this\n> specified end lsn pointer whose start pointer is at 01000028, but that\n> WAL record ends at 0/100009F which is way beyond the specified end\n> lsn. So, how come we are able to display the complete WAL record info?\n> AFAIU, end lsn is the lsn pointer where you need to stop reading the\n> WAL data. If that is true, then there exists no valid WAL record\n> between the start and end lsn in this particular case.\n\nYou're right considering how pg_waldump behaves. pg_waldump works\nalmost the way as you described above. 
The record above actually ends\nat 1000099 and pg_waldump shows that record by specifying -s 0/1000028\n-e 0/100009a, but not for -e 0/1000099.\n\n# I personally think the current behavior is fine, though..\n\n\nIt still suggests unspecifiable end-LSN..\n\n> select * from pg_get_wal_records_info('4/4B28EB68', '4/4C000060');\n> ERROR: cannot accept future end LSN\n> DETAIL: Last known WAL LSN on the database system is 4/4C000060.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 17 Mar 2022 14:18:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Wed, Mar 16, 2022 at 8:49 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> I can see that the pg_get_wal_records_info function shows the details\n> of the WAL record whose existence is beyond the user specified\n> stop/end lsn pointer. 
See below:\n>\n> ashu@postgres=# select * from pg_get_wal_records_info('0/01000028',\n> '0/01000029');\n> -[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> start_lsn | 0/1000028\n> end_lsn | 0/100009F\n> prev_lsn | 0/0\n> xid | 0\n> resource_manager | XLOG\n> record_length | 114\n> fpi_length | 0\n> description | CHECKPOINT_SHUTDOWN redo 0/1000028; tli 1; prev tli\n> 1; fpw true; xid 0:3; oid 10000; multi 1; offset 0; oldest xid 3 in DB\n> 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0;\n> oldest running xid 0; shutdown\n> block_ref |\n> data_length | 88\n> data |\n> \\x28000001000000000100000001000000010000000000000003000000000000001027000001000000000000000300000001000000010000000100000072550000a5c4316200000000000000000000000000000000ff7f0000\n>\n> In this case, the end lsn pointer specified by the user is\n> '0/01000029'. There is only one WAL record which starts before this\n> specified end lsn pointer whose start pointer is at 01000028, but that\n> WAL record ends at 0/100009F which is way beyond the specified end\n> lsn. So, how come we are able to display the complete WAL record info?\n> AFAIU, end lsn is the lsn pointer where you need to stop reading the\n> WAL data. If that is true, then there exists no valid WAL record\n> between the start and end lsn in this particular case.\n\nThanks Ashutosh, it's an edge case and I don't think we would want to\nshow a WAL record that ends at LSN after the user specified end-lsn\nwhich doesn't look good. I fixed it in the v11 patch set. 
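A pseudocode-style sketch of that boundary check (identifier names are illustrative; the actual v11 code may differ):

```c
/* Sketch only: stop before emitting a record that would end past the
 * user-supplied end_lsn; such a record is no longer shown. */
while ((record = XLogReadRecord(xlogreader, &errormsg)) != NULL)
{
    /* EndRecPtr points just past the record that was read. */
    if (xlogreader->EndRecPtr > end_lsn)
        break;

    /* ... build one output row for this record ... */
}
```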
Now, the\npg_get_wal_records_info will show records only upto user specified\nend_lsn, it doesn't show the last record which starts at LSN < end_lsn\nbut ends at LSN > end_lsn, see [1].\n\nPlease review the v11 patch set further.\n\n[1]\npostgres=# select start_lsn, end_lsn, prev_lsn from\npg_get_wal_records_info('0/01000028', '0/01000029');\n start_lsn | end_lsn | prev_lsn\n-----------+---------+----------\n(0 rows)\n\npostgres=# select start_lsn, end_lsn, prev_lsn from\npg_get_wal_records_info('0/01000028', '0/100009F');\n start_lsn | end_lsn | prev_lsn\n-----------+-----------+----------\n 0/1000028 | 0/100009F | 0/0\n(1 row)\n\npostgres=# select start_lsn, end_lsn, prev_lsn from\npg_get_wal_records_info('0/01000028', '0/10000A0');\n start_lsn | end_lsn | prev_lsn\n-----------+-----------+----------\n 0/1000028 | 0/100009F | 0/0\n(1 row)\n\npostgres=# select start_lsn, end_lsn, prev_lsn from\npg_get_wal_records_info('0/01000028', '0/0100009E');\n start_lsn | end_lsn | prev_lsn\n-----------+---------+----------\n(0 rows)\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 17 Mar 2022 13:25:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Mar 17, 2022 at 10:48 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> It still suggests unspecifiable end-LSN..\n>\n> > select * from pg_get_wal_records_info('4/4B28EB68', '4/4C000060');\n> > ERROR: cannot accept future end LSN\n> > DETAIL: Last known WAL LSN on the database system is 4/4C000060.\n\nThanks Kyotaro-san. We can change the detail message to show (current\nflush lsn/last replayed lsn - 1), that's what I've done in v11 posted\nupthread at [1]. 
The problem is that all the pg_walinspect functions\nwould wait for the first valid record in read_local_xlog_page() via\nInitXLogReaderState()->XLogFindNextRecord(), see[2].\n\nWe have two things to do:\n1) Just document the behaviour \"pg_walinspect functions will wait for\nthe first valid WAL record if there is none found after the specified\ninput LSN/start LSN.\". This seems easier but some may see it as a\nproblem.\n2) Have read_local_xlog_page_2 which doesn't wait for future WAL LSN\nunlike read_local_xlog_page and like pg_waldump's WALDumpReadPage. It\nrequires a new function read_local_xlog_page_2 that almost looks like\nread_local_xlog_page except wait (pg_usleep) loop, we can avoid code\nduplication by moving the read_local_xlog_page code to a static\nfunction read_local_xlog_page_guts(existing params, bool wait):\n\nread_local_xlog_page(params)\n read_local_xlog_page_guts(existing params, false);\n\nread_local_xlog_page_2(params)\n read_local_xlog_page_guts(existing params, true);\n\nread_local_xlog_page_guts:\n if (wait) wait for future wal; ---> existing pg_usleep code in\nread_local_xlog_page.\n else return;\n\nI'm fine either way, please let me know your thoughts on this?\n\n[1] https://www.postgresql.org/message-id/CALj2ACU8XjbYbMwh5x6hEUJdpRoG9%3DPO52_tuOSf1%3DMO7WtsmQ%40mail.gmail.com\n[2]\npostgres=# select pg_current_wal_flush_lsn();\n pg_current_wal_flush_lsn\n--------------------------\n 0/1624430\n(1 row)\n\npostgres=# select * from pg_get_wal_record_info('0/1624430');\nERROR: cannot accept future input LSN\nDETAIL: Last known WAL LSN on the database system is 0/162442F.\npostgres=# select * from pg_get_wal_record_info('0/162442f'); --->\nwaits for the first valid record in read_local_xlog_page.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 17 Mar 2022 13:53:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL 
data and WAL stats" }, { "msg_contents": "Hi Bharath,\n\nDue to recent commits on master, the pg_walinpect module is not\ncompiling. Kindly update the patch.\n\npg_walinspect.c: In function ‘GetXLogRecordInfo’:\npg_walinspect.c:362:39: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘max_block_id’\n 362 | for (block_id = 0; block_id <= record->max_block_id; block_id++)\n | ^~\npg_walinspect.c:382:29: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 382 | uint8 bimg_info = record->blocks[block_id].bimg_info;\n | ^~\npg_walinspect.c:385:21: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 385 | fpi_len += record->blocks[block_id].bimg_len;\n | ^~\npg_walinspect.c:402:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 402 | record->blocks[block_id].hole_offset,\n | ^~\npg_walinspect.c:403:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 403 | record->blocks[block_id].hole_length,\n | ^~\npg_walinspect.c:405:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 405 | record->blocks[block_id].hole_length -\n | ^~\npg_walinspect.c:406:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 406 | record->blocks[block_id].bimg_len,\n | ^~\npg_walinspect.c:414:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 414 | record->blocks[block_id].hole_offset,\n | ^~\npg_walinspect.c:415:16: error: ‘XLogReaderState’ {aka ‘struct\nXLogReaderState’} has no member named ‘blocks’\n 415 | record->blocks[block_id].hole_length);\n | ^~\nmake: *** [../../src/Makefile.global:941: pg_walinspect.o] Error 1\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Mar 17, 2022 at 1:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 17, 2022 at 10:48 AM Kyotaro 
Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > It still suggests unspecifiable end-LSN..\n> >\n> > > select * from pg_get_wal_records_info('4/4B28EB68', '4/4C000060');\n> > > ERROR: cannot accept future end LSN\n> > > DETAIL: Last known WAL LSN on the database system is 4/4C000060.\n>\n> Thanks Kyotaro-san. We can change the detail message to show (current\n> flush lsn/last replayed lsn - 1), that's what I've done in v11 posted\n> upthread at [1]. The problem is that all the pg_walinspect functions\n> would wait for the first valid record in read_local_xlog_page() via\n> InitXLogReaderState()->XLogFindNextRecord(), see[2].\n>\n> We have two things to do:\n> 1) Just document the behaviour \"pg_walinspect functions will wait for\n> the first valid WAL record if there is none found after the specified\n> input LSN/start LSN.\". This seems easier but some may see it as a\n> problem.\n> 2) Have read_local_xlog_page_2 which doesn't wait for future WAL LSN\n> unlike read_local_xlog_page and like pg_waldump's WALDumpReadPage. 
It\n> requires a new function read_local_xlog_page_2 that almost looks like\n> read_local_xlog_page except wait (pg_usleep) loop, we can avoid code\n> duplication by moving the read_local_xlog_page code to a static\n> function read_local_xlog_page_guts(existing params, bool wait):\n>\n> read_local_xlog_page(params)\n> read_local_xlog_page_guts(existing params, false);\n>\n> read_local_xlog_page_2(params)\n> read_local_xlog_page_guts(existing params, true);\n>\n> read_local_xlog_page_guts:\n> if (wait) wait for future wal; ---> existing pg_usleep code in\n> read_local_xlog_page.\n> else return;\n>\n> I'm fine either way, please let me know your thoughts on this?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACU8XjbYbMwh5x6hEUJdpRoG9%3DPO52_tuOSf1%3DMO7WtsmQ%40mail.gmail.com\n> [2]\n> postgres=# select pg_current_wal_flush_lsn();\n> pg_current_wal_flush_lsn\n> --------------------------\n> 0/1624430\n> (1 row)\n>\n> postgres=# select * from pg_get_wal_record_info('0/1624430');\n> ERROR: cannot accept future input LSN\n> DETAIL: Last known WAL LSN on the database system is 0/162442F.\n> postgres=# select * from pg_get_wal_record_info('0/162442f'); --->\n> waits for the first valid record in read_local_xlog_page.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n\n\n", "msg_date": "Fri, 18 Mar 2022 20:07:02 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Hi,\n\nFirst look at this patch, so I might be repeating stuff already commented on /\ndiscussed.\n\nOn 2022-03-17 13:25:35 +0530, Bharath Rupireddy wrote:\n> +--\n> +-- pg_get_raw_wal_record()\n> +--\n> +CREATE FUNCTION pg_get_raw_wal_record(IN in_lsn pg_lsn,\n> + OUT start_lsn pg_lsn,\n> + OUT end_lsn pg_lsn,\n> + OUT prev_lsn pg_lsn,\n> + OUT record_length int4,\n> + OUT record bytea\n> +)\n> +AS 'MODULE_PATHNAME', 'pg_get_raw_wal_record'\n> +LANGUAGE C CALLED ON 
NULL INPUT PARALLEL SAFE;\n\nWhat is raw about the function?\n\nWhy \"CALLED ON NULL INPUT\"? It doesn't make sense to call the function with a\nNULL lsn, does it? Also, that's the default, why is it restated here?\n\n\n> +REVOKE EXECUTE ON FUNCTION pg_get_raw_wal_record(pg_lsn) FROM PUBLIC;\n> +GRANT EXECUTE ON FUNCTION pg_get_raw_wal_record(pg_lsn) TO pg_read_server_files;\n> +\n> +--\n> +-- pg_get_wal_record_info()\n> +--\n> +CREATE FUNCTION pg_get_wal_record_info(IN in_lsn pg_lsn,\n> + OUT start_lsn pg_lsn,\n> + OUT end_lsn pg_lsn,\n> + OUT prev_lsn pg_lsn,\n> + OUT xid xid,\n> + OUT resource_manager text,\n> + OUT record_length int4,\n> + OUT fpi_length int4,\n> +\tOUT description text,\n> + OUT block_ref text,\n> + OUT data_length int4,\n> + OUT data bytea\n> +)\n> +AS 'MODULE_PATHNAME', 'pg_get_wal_record_info'\n> +LANGUAGE C CALLED ON NULL INPUT PARALLEL SAFE;\n> +\n> +REVOKE EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) FROM PUBLIC;\n> +GRANT EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) TO pg_monitor;\n\nI don't think it's appropriate for pg_monitor to see all the data in the WAL.\n\n> +--\n> +-- pg_get_wal_stats()\n> +--\n> +CREATE FUNCTION pg_get_wal_stats(IN start_lsn pg_lsn,\n> + IN end_lsn pg_lsn DEFAULT NULL,\n> + OUT resource_manager text,\n> + OUT count int8,\n> + OUT count_percentage float4,\n> + OUT record_length int8,\n> + OUT record_length_percentage float4,\n> + OUT fpi_length int8,\n> + OUT fpi_length_percentage float4\n> + )\n> +RETURNS SETOF record AS $$\n> +SELECT resource_manager,\n> + count(*) AS cnt,\n> + CASE WHEN count(*) > 0 THEN (count(*) * 100 / sum(count(*)) OVER total)::numeric(5,2) ELSE 0 END AS \"count_%\",\n> + sum(record_length) AS trecl,\n> + CASE WHEN sum(record_length) > 0 THEN (sum(record_length) * 100 / sum(sum(record_length)) OVER total)::numeric(5,2) ELSE 0 END AS \"trecl_%\",\n> + sum(fpi_length) AS tfpil,\n> + CASE WHEN sum(fpi_length) > 0 THEN (sum(fpi_length) * 100 / sum(sum(fpi_length)) OVER 
total)::numeric(5,2) ELSE 0 END AS \"tfpil_%\"\n> +FROM pg_get_wal_records_info(start_lsn, end_lsn)\n> +GROUP BY resource_manager\n> +WINDOW total AS ();\n> +$$ LANGUAGE SQL CALLED ON NULL INPUT PARALLEL SAFE;\n\nThis seems like an exceedingly expensive way to compute this. Not just because\nof doing the grouping, window etc, but also because it's serializing the\n\"data\" field from pg_get_wal_records_info() just to never use it. With any\nappreciatable amount of data the return value pg_get_wal_records_info() will\nbe serialized into a on-disk tuplestore.\n\nThis is probably close to an order of magnitude slower than pg_waldump\n--stats. Which imo renders this largely useless.\n\nThe column names don't seem great either. \"tfpil\"?\n\n\n> +/*\n> + * Module load callback.\n> + */\n> +void\n> +_PG_init(void)\n> +{\n> +\t/* Define custom GUCs and install hooks here, if any. */\n> +\n> +\t/*\n> +\t * Have EmitWarningsOnPlaceholders(\"pg_walinspect\"); if custom GUCs are\n> +\t * defined.\n> +\t */\n> +}\n> +\n> +/*\n> + * Module unload callback.\n> + */\n> +void\n> +_PG_fini(void)\n> +{\n> +\t/* Uninstall hooks, if any. */\n> +}\n\nWhy have this stuff if it's not used?\n\n\n> +/*\n> + * Validate given LSN and return the LSN up to which the server has WAL.\n> + */\n> +static XLogRecPtr\n> +ValidateInputLSN(XLogRecPtr lsn)\n> +{\n> +\tXLogRecPtr curr_lsn;\n> +\n> +\t/* Validate input WAL LSN. 
*/\n> +\tif (XLogRecPtrIsInvalid(lsn))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"invalid WAL LSN\")));\n> +\n> +\t/*\n> +\t * We determine the current LSN of the server similar to how page_read\n> +\t * callback read_local_xlog_page does.\n> +\t */\n> +\tif (!RecoveryInProgress())\n> +\t\tcurr_lsn = GetFlushRecPtr(NULL);\n> +\telse\n> +\t\tcurr_lsn = GetXLogReplayRecPtr(NULL);\n> +\n> +\tAssert(!XLogRecPtrIsInvalid(curr_lsn));\n> +\n> +\tif (lsn >= curr_lsn)\n> +\t{\n> +\t\t/*\n> +\t \t * GetFlushRecPtr or GetXLogReplayRecPtr gives \"end+1\" LSN of the last\n> +\t\t * record flushed or replayed respectively. But let's use the LSN up\n> +\t\t * to \"end\" in user facing message.\n> +\t \t */\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"cannot accept future input LSN\"),\n> +\t\t\t\t errdetail(\"Last known WAL LSN on the database system is %X/%X.\",\n> +\t\t\t\t\t\t LSN_FORMAT_ARGS(curr_lsn - 1))));\n> +\t}\n\n> +\treturn curr_lsn;\n> +}\n> +\n> +/*\n> + * Validate given start LSN and end LSN, return the new end LSN in case user\n> + * hasn't specified one.\n> + */\n> +static XLogRecPtr\n> +ValidateStartAndEndLSNs(XLogRecPtr start_lsn, XLogRecPtr end_lsn)\n> +{\n> +\tXLogRecPtr curr_lsn;\n> +\n> +\t/* Validate WAL start LSN. */\n> +\tif (XLogRecPtrIsInvalid(start_lsn))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"invalid WAL start LSN\")));\n> +\n> +\tif (!RecoveryInProgress())\n> +\t\tcurr_lsn = GetFlushRecPtr(NULL);\n> +\telse\n> +\t\tcurr_lsn = GetXLogReplayRecPtr(NULL);\n> +\n> +\tif (start_lsn >= curr_lsn)\n> +\t{\n> +\t\t/*\n> +\t \t * GetFlushRecPtr or GetXLogReplayRecPtr gives \"end+1\" LSN of the last\n> +\t\t * record flushed or replayed respectively. 
But let's use the LSN up\n> +\t\t * to \"end\" in user facing message.\n> +\t \t */\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"cannot accept future start LSN\"),\n> +\t\t\t\t errdetail(\"Last known WAL LSN on the database system is %X/%X.\",\n> +\t\t\t\t\t\t LSN_FORMAT_ARGS(curr_lsn - 1))));\n> +\t}\n\n> +\t/*\n> +\t * If end_lsn is specified, let's ensure that it's not a future LSN i.e.\n> +\t * something the database system doesn't know about.\n> +\t */\n> +\tif (!XLogRecPtrIsInvalid(end_lsn) &&\n> +\t\t(end_lsn >= curr_lsn))\n> +\t{\n> +\t\t/*\n> +\t \t * GetFlushRecPtr or GetXLogReplayRecPtr gives \"end+1\" LSN of the last\n> +\t\t * record flushed or replayed respectively. But let's use the LSN up\n> +\t\t * to \"end\" in user facing message.\n> +\t \t */\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"cannot accept future end LSN\"),\n> +\t\t\t\t errdetail(\"Last known WAL LSN on the database system is %X/%X.\",\n> +\t\t\t\t\t\t LSN_FORMAT_ARGS(curr_lsn - 1))));\n> +\t}\n> +\n> +\t/*\n> +\t * When end_lsn is not specified let's read up to the last WAL position\n> +\t * known to be on the server.\n> +\t */\n> +\tif (XLogRecPtrIsInvalid(end_lsn))\n> +\t\tend_lsn = curr_lsn;\n> +\n> +\tAssert(!XLogRecPtrIsInvalid(end_lsn));\n> +\n> +\tif (start_lsn >= end_lsn)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"WAL start LSN must be less than end LSN\")));\n> +\n> +\treturn end_lsn;\n> +}\n\nThese two functions are largely redundant, that doesn't seem great.\n\n\n> +Datum\n> +pg_get_raw_wal_record(PG_FUNCTION_ARGS)\n> +{\n> +#define PG_GET_RAW_WAL_RECORD_COLS 5\n> +\tXLogRecPtr\tlsn;\n> +\tXLogRecord *record;\n> +\tXLogRecPtr\tfirst_record;\n> +\tXLogReaderState *xlogreader;\n> +\tbytea\t*raw_record;\n> +\tuint32\trec_len;\n> +\tchar\t*raw_record_data;\n> +\tTupleDesc\ttupdesc;\n> +\tDatum\tresult;\n> 
+\tHeapTuple\ttuple;\n> +\tDatum\tvalues[PG_GET_RAW_WAL_RECORD_COLS];\n> +\tbool\tnulls[PG_GET_RAW_WAL_RECORD_COLS];\n> +\tint\ti = 0;\n> +\n> +\tlsn = PG_GETARG_LSN(0);\n> +\n> +\t(void) ValidateInputLSN(lsn);\n> +\n> +\t/* Build a tuple descriptor for our result type. */\n> +\tif (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n> +\t\telog(ERROR, \"return type must be a row type\");\n> +\n> +\txlogreader = InitXLogReaderState(lsn, &first_record);\n> +\n> +\tAssert(xlogreader);\n> +\n> +\trecord = ReadNextXLogRecord(xlogreader, first_record);\n> +\n> +\trec_len = XLogRecGetTotalLen(xlogreader);\n> +\n> +\tAssert(rec_len > 0);\n> +\n\nMost of this has another copy in pg_get_wal_record_info(). Can more of this be\ndeduplicated?\n\n\n> +/*\n> + * Get WAL record info.\n> + */\n> +static void\n> +GetXLogRecordInfo(XLogReaderState *record, XLogRecPtr lsn,\n> +\t\t\t\t Datum *values, bool *nulls, uint32 ncols)\n> +{\n> +\tconst char *id;\n> +\tconst RmgrData *desc;\n> +\tuint32\tfpi_len = 0;\n> +\tRelFileNode rnode;\n> +\tForkNumber\tforknum;\n> +\tBlockNumber blk;\n> +\tint\tblock_id;\n> +\tStringInfoData rec_desc;\n> +\tStringInfoData rec_blk_ref;\n> +\tStringInfoData temp;\n> +\tbytea\t*data;\n> +\tchar\t*main_data;\n> +\tuint32\tmain_data_len;\n> +\tint\ti = 0;\n> +\n> +\tdesc = &RmgrTable[XLogRecGetRmid(record)];\n> +\tinitStringInfo(&rec_desc);\n> +\tid = desc->rm_identify(XLogRecGetInfo(record));\n> +\n> +\tif (id == NULL)\n> +\t\tappendStringInfo(&rec_desc, \"UNKNOWN (%x) \", XLogRecGetInfo(record) & ~XLR_INFO_MASK);\n> +\telse\n> +\t\tappendStringInfo(&rec_desc, \"%s \", id);\n> +\n> +\tinitStringInfo(&temp);\n> +\tdesc->rm_desc(&temp, record);\n> +\tappendStringInfo(&rec_desc, \"%s\", temp.data);\n> +\tpfree(temp.data);\n> +\tinitStringInfo(&rec_blk_ref);\n\nThis seems unnecessarily wasteful. You serialize into one stringinfo, just to\nthen copy that stringinfo into another stringinfo. 
Just to then allocate yet\nanother stringinfo.\n\n\n> +\t/* Block references (detailed format). */\n\nThis comment seems copied from pg_waldump, but doesn't make sense here,\nbecause there's no short format.\n\n\n> +\tfor (block_id = 0; block_id <= record->max_block_id; block_id++)\n> +\t{\n> +\t\tif (!XLogRecHasBlockRef(record, block_id))\n> +\t\t\tcontinue;\n> +\n> +\t\tXLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blk);\n> +\n> +\t\tif (forknum != MAIN_FORKNUM)\n> +\t\t\tappendStringInfo(&rec_blk_ref,\n> +\t\t\t\t\t\t\t\"blkref #%u: rel %u/%u/%u fork %s blk %u\",\n> +\t\t\t\t\t\t\tblock_id, rnode.spcNode, rnode.dbNode,\n> +\t\t\t\t\t\t\trnode.relNode, get_forkname(forknum), blk);\n> +\t\telse\n> +\t\t\tappendStringInfo(&rec_blk_ref,\n> +\t\t\t\t\t\t\t\"blkref #%u: rel %u/%u/%u blk %u\",\n> +\t\t\t\t\t\t\tblock_id, rnode.spcNode, rnode.dbNode,\n> +\t\t\t\t\t\t\trnode.relNode, blk);\n> +\n> +\t\tif (XLogRecHasBlockImage(record, block_id))\n> +\t\t{\n> +\t\t\tuint8\t\tbimg_info = record->blocks[block_id].bimg_info;\n> +\n> +\t\t\t/* Calculate the amount of FPI data in the record. 
*/\n> +\t\t\tfpi_len += record->blocks[block_id].bimg_len;\n> +\n> +\t\t\tif (BKPIMAGE_COMPRESSED(bimg_info))\n> +\t\t\t{\n> +\t\t\t\tconst char *method;\n> +\n> +\t\t\t\tif ((bimg_info & BKPIMAGE_COMPRESS_PGLZ) != 0)\n> +\t\t\t\t\tmethod = \"pglz\";\n> +\t\t\t\telse if ((bimg_info & BKPIMAGE_COMPRESS_LZ4) != 0)\n> +\t\t\t\t\tmethod = \"lz4\";\n> +\t\t\t\telse\n> +\t\t\t\t\tmethod = \"unknown\";\n> +\n> +\t\t\t\tappendStringInfo(&rec_blk_ref, \" (FPW%s); hole: offset: %u, length: %u, \"\n> +\t\t\t\t\t\t\t\t \"compression saved: %u, method: %s\",\n> +\t\t\t\t\t\t\t\t XLogRecBlockImageApply(record, block_id) ?\n> +\t\t\t\t\t\t\t\t \"\" : \" for WAL verification\",\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].hole_offset,\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].hole_length,\n> +\t\t\t\t\t\t\t\t BLCKSZ -\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].hole_length -\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].bimg_len,\n> +\t\t\t\t\t\t\t\t method);\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\tappendStringInfo(&rec_blk_ref, \" (FPW%s); hole: offset: %u, length: %u\",\n> +\t\t\t\t\t\t\t\t XLogRecBlockImageApply(record, block_id) ?\n> +\t\t\t\t\t\t\t\t \"\" : \" for WAL verification\",\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].hole_offset,\n> +\t\t\t\t\t\t\t\t record->blocks[block_id].hole_length);\n> +\t\t\t}\n> +\t\t}\n> +\t}\n\nTo me duplicating this much code from waldump seems like a bad idea from a\nmaintainability POV.\n\n\n\n> +/*\n> + * Get info and data of all WAL records between start LSN and end LSN.\n> + */\n> +static void\n> +GetWALRecordsInfoInternal(FunctionCallInfo fcinfo, XLogRecPtr start_lsn,\n> +\t\t\t\t\t\t XLogRecPtr end_lsn)\n> +{\n> +#define PG_GET_WAL_RECORDS_INFO_COLS 11\n> +\tXLogRecPtr\tfirst_record;\n> +\tXLogReaderState *xlogreader;\n> +\tReturnSetInfo *rsinfo;\n> +\tTupleDesc\ttupdesc;\n> +\tTuplestorestate *tupstore;\n> +\tMemoryContext per_query_ctx;\n> +\tMemoryContext oldcontext;\n> 
+\tDatum\tvalues[PG_GET_WAL_RECORDS_INFO_COLS];\n> +\tbool\tnulls[PG_GET_WAL_RECORDS_INFO_COLS];\n> +\n> +\trsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +\n> +\t/* Check to see if caller supports us returning a tuplestore. */\n> +\tif (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"set-valued function called in context that cannot accept a set\")));\n> +\tif (!(rsinfo->allowedModes & SFRM_Materialize))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"materialize mode required, but it is not allowed in this context\")));\n> +\n> +\t/* Build a tuple descriptor for our result type. */\n> +\tif (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)\n> +\t\telog(ERROR, \"return type must be a row type\");\n> +\n> +\t/* Build tuplestore to hold the result rows. */\n> +\tper_query_ctx = rsinfo->econtext->ecxt_per_query_memory;\n> +\toldcontext = MemoryContextSwitchTo(per_query_ctx);\n> +\ttupstore = tuplestore_begin_heap(true, false, work_mem);\n> +\trsinfo->returnMode = SFRM_Materialize;\n> +\trsinfo->setResult = tupstore;\n> +\trsinfo->setDesc = tupdesc;\n\nThis should likely use the infrastructure introduced in 5b81703787bfc1e6072c8e37125eba0c5598b807.\n\n\n> +\tfor (;;)\n> +\t{\n> +\t\t(void) ReadNextXLogRecord(xlogreader, first_record);\n> +\n> +\t\t/*\n> +\t\t * Let's not show the record info if it is spanning more than the\n> +\t\t * end_lsn. EndRecPtr is \"end+1\" of the last read record, hence\n> +\t\t * use \"end\" here.\n> +\t\t */\n> +\t\tif ((xlogreader->EndRecPtr - 1) <= end_lsn)\n> +\t\t{\n> +\t\t\tGetXLogRecordInfo(xlogreader, xlogreader->currRecPtr, values, nulls,\n> +\t\t\t\t\t\t \t PG_GET_WAL_RECORDS_INFO_COLS);\n> +\n> +\t\t\ttuplestore_putvalues(tupstore, tupdesc, values, nulls);\n> +\t\t}\n> +\n> +\t\t/* Exit loop if read up to end_lsn. 
*/\n> +\t\tif (xlogreader->EndRecPtr >= end_lsn)\n> +\t\t\tbreak;\n\nSeems weird to have both of these conditions separately.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Mar 2022 16:48:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 18, 2022 at 8:07 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n> Due to recent commits on master, the pg_walinpect module is not\n> compiling. Kindly update the patch.\n\nThanks Nitin. Here's an updated v12 patch-set. I will respond to\nAndres comments in a while.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 20 Mar 2022 14:28:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sat, Mar 19, 2022 at 5:18 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> First look at this patch, so I might be repeating stuff already commented on /\n> discussed.\n\nThanks for taking a look at the patch.\n\n> On 2022-03-17 13:25:35 +0530, Bharath Rupireddy wrote:\n> > +--\n> > +-- pg_get_raw_wal_record()\n>\n> What is raw about the function?\n\nIt right now gives data starting from the output of XLogReadRecord\nupto XLogRecGetTotalLen(xlogreader); length. Given that XLogReadRecord\nreturns a pointer to the decoded record's header, I'm not sure it's\nthe right choice. Actually, this function's intention(not an immediate\nuse-case though), is to feed the WAL record to another function and\nthen, say, repair a corrupted page given a base data page.\n\nAs I said upthread, I'm open to removing this function for now, when a\nrealistic need comes we can add it back. It also raised some concerns\naround the security and permissions. Thoughts?\n\n> Why \"CALLED ON NULL INPUT\"? 
It doesn't make sense to call the function with a\n> NULL lsn, does it? Also, that's the default, why is it restated here?\n\npg_get_wal_records_info needed that option (if end_lsn being the\ndefault, providing WAL info up to the end of WAL). Also, we can emit a\nbetter error message (\"invalid WAL start LSN\") instead of a generic one.\nI wanted to keep the error message and code the same across all the functions,\nhence the CALLED ON NULL INPUT option for pg_get_raw_wal_record.\n\n> > +REVOKE EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) FROM PUBLIC;\n> > +GRANT EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) TO pg_monitor;\n>\n> I don't think it's appropriate for pg_monitor to see all the data in the WAL.\n\nHow about pg_read_server_files or some other role?\n\n> > +-- pg_get_wal_stats()\n>\n> This seems like an exceedingly expensive way to compute this. Not just because\n> of doing the grouping, window etc, but also because it's serializing the\n> \"data\" field from pg_get_wal_records_info() just to never use it. With any\n> appreciatable amount of data the return value pg_get_wal_records_info() will\n> be serialized into a on-disk tuplestore.\n>\n> This is probably close to an order of magnitude slower than pg_waldump\n> --stats. Which imo renders this largely useless.\n\nYeah, that's true. Do you suggest having pg_get_wal_stats() as a\nc-function like in the v8 patch [1]?\n\nSee some numbers at [2] with pg_get_wal_stats using\npg_get_wal_records_info and pg_get_wal_records_info as a direct\nc-function like in the v8 patch [1]. A direct c-function always fares\nbetter (84 msec vs 1400 msec).\n\n> The column names don't seem great either. \"tfpil\"?\n\n
The actual column name the user sees is\nfpi_length.\n\n> > +void\n> > +_PG_init(void)\n>\n> > +void\n> > +_PG_fini(void)\n>\n> Why have this stuff if it's not used?\n\nI kept it as a placeholder for future code additions, for instance\ntest_decoding.c and ssl_passphrase_func.c has empty _PG_init(),\n_PG_fini(). If okay, I can mention there like \"placeholder for now\",\notherwise I can remove it.\n\n> > +static XLogRecPtr\n> > +ValidateInputLSN(XLogRecPtr lsn)\n>\n> > +static XLogRecPtr\n> > +ValidateStartAndEndLSNs(XLogRecPtr start_lsn, XLogRecPtr end_lsn)\n> > +{\n>\n> These two functions are largely redundant, that doesn't seem great.\n\nI will modify it in the next version.\n\n> > +Datum\n> > +pg_get_raw_wal_record(PG_FUNCTION_ARGS)\n>\n> Most of this has another copy in pg_get_wal_record_info(). Can more of this be\n> deduplicated?\n\nI will do, if we decide on whether or not to have the\npg_get_raw_wal_record function at all? Please see my comments above.\n\n> > + initStringInfo(&temp);\n> > + desc->rm_desc(&temp, record);\n> > + appendStringInfo(&rec_desc, \"%s\", temp.data);\n> > + pfree(temp.data);\n> > + initStringInfo(&rec_blk_ref);\n>\n> This seems unnecessarily wasteful. You serialize into one stringinfo, just to\n> then copy that stringinfo into another stringinfo. Just to then allocate yet\n> another stringinfo.\n\nYeah, I will remove it. Looks like all the rm_desc callbacks append to\nthe passed-in buffer and not reset it, so we should be good.\n\n> > + /* Block references (detailed format). 
*/\n>\n> This comment seems copied from pg_waldump, but doesn't make sense here,\n> because there's no short format.\n\nYes, I will remove it.\n\n> > + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> > + {\n>\n> To me duplicating this much code from waldump seems like a bad idea from a\n> maintainability POV.\n\nEven if we were to put the above code from pg_walinspect and\npg_waldump into, say, walutils.c or some other existing file, we would\nhave to do an if (pg_walinspect) appendStringInfo else if (pg_waldump)\nprintf() sort of thing there, which is clumsy, isn't it? Also, unnecessary if\nconditions would need to be executed for every record. For maintainability,\nI added a note in pg_walinspect.c and pg_waldump.c to consider fixing\nthings in both places (of course this might sound like the dumbest way of doing\nit, but IMHO it's sensible, given the if(pg_walinspect)-else\nif(pg_waldump) sorts of code that we would need in the common\nfunctions). Thoughts?\n\n> > +/*\n> > + * Get info and data of all WAL records between start LSN and end LSN.\n> > + */\n> > +static void\n> > +GetWALRecordsInfoInternal(FunctionCallInfo fcinfo, XLogRecPtr start_lsn,\n>\n> This should likely use the infrastructure introduced in 5b81703787bfc1e6072c8e37125eba0c5598b807.\n\nYes, I will change it.\n\n> > + for (;;)\n> > + {\n> > + (void) ReadNextXLogRecord(xlogreader, first_record);\n> > +\n> > + /*\n> > + * Let's not show the record info if it is spanning more than the\n> > + * end_lsn. EndRecPtr is \"end+1\" of the last read record, hence\n> > + * use \"end\" here.\n> > + */\n> > + if ((xlogreader->EndRecPtr - 1) <= end_lsn)\n> > + {\n> > + GetXLogRecordInfo(xlogreader, xlogreader->currRecPtr, values, nulls,\n> > + PG_GET_WAL_RECORDS_INFO_COLS);\n> > +\n> > + tuplestore_putvalues(tupstore, tupdesc, values, nulls);\n> > + }\n> > +\n> > + /* Exit loop if read up to end_lsn. 
*/\n> > + if (xlogreader->EndRecPtr >= end_lsn)\n> > + break;\n>\n> Seems weird to have both of these conditions separately.\n\nYeah. It is to handle some edge cases to print the WAL upto end_lsn\nand avoid waiting in read_local_xlog_page. I will change it.\n\nActually, there's an open point as specified in [3]. Any thoughts on it?\n\n[1] https://www.postgresql.org/message-id/CALj2ACWhcbW_s4BXLyCpLWcCppZN9ncTXHbn4dv1F0Vpe0kxqA%40mail.gmail.com\n[2] with pg_get_wal_stats using pg_get_wal_stats:\nTime: 1394.919 ms (00:01.395)\nTime: 1403.199 ms (00:01.403)\nTime: 1408.138 ms (00:01.408)\nTime: 1397.670 ms (00:01.398)\n\nwith pg_get_wal_stats as a c-function like in v8 patch [1]:\nTime: 84.319 ms\nTime: 84.303 ms\nTime: 84.208 ms\nTime: 84.452 ms\n\nuse case:\ncreate extension pg_walinspect;\n\ncreate table foo(col int);\ninsert into foo select * from generate_series(1, 100000);\nupdate foo set col = col*2+1;\ndelete from foo;\n\n\\timing on\nselect * from pg_get_wal_stats('0/01000028');\n\\timing off\n\noutput:\npostgres=# select * from pg_get_wal_stats('0/01000028');\n resource_manager | count | count_percentage | record_length |\nrecord_length_percentage | fpi_length | fpi_length_percentage\n------------------+--------+------------------+---------------+--------------------------+------------+-----------------------\n Storage | 13 | 0 | 546 |\n 0 | 0 | 0\n CLOG | 1 | 0 | 30 |\n 0 | 0 | 0\n Database | 2 | 0 | 84 |\n 0 | 0 | 0\n Btree | 13078 | 3.1 | 1486990 |\n 4.97 | 461512 | 23.13\n Heap | 404835 | 95.84 | 26354653 |\n 88.17 | 456576 | 22.88\n Transaction | 721 | 0.17 | 178933 |\n 0.6 | 0 | 0\n Heap2 | 3056 | 0.72 | 1131836 |\n 3.79 | 376932 | 18.89\n Standby | 397 | 0.09 | 23226 |\n 0.08 | 0 | 0\n XLOG | 316 | 0.07 | 716027 |\n 2.4 | 700164 | 35.09\n(9 rows)\n\n[3] https://www.postgresql.org/message-id/CALj2ACVBST5Us6-eDz4q_Gem3rUHSC7AYNOB7tmp9Yqq6PHsXw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 22 Mar 2022 21:57:51 +0530", 
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-22 21:57:51 +0530, Bharath Rupireddy wrote:\n> On Sat, Mar 19, 2022 at 5:18 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-03-17 13:25:35 +0530, Bharath Rupireddy wrote:\n> > > +--\n> > > +-- pg_get_raw_wal_record()\n> >\n> > What is raw about the function?\n> \n> It right now gives data starting from the output of XLogReadRecord\n> upto XLogRecGetTotalLen(xlogreader); length. Given that XLogReadRecord\n> returns a pointer to the decoded record's header, I'm not sure it's\n> the right choice. Actually, this function's intention(not an immediate\n> use-case though), is to feed the WAL record to another function and\n> then, say, repair a corrupted page given a base data page.\n> \n> As I said upthread, I'm open to removing this function for now, when a\n> realistic need comes we can add it back. It also raised some concerns\n> around the security and permissions. Thoughts?\n\nI'm ok with having it with appropriate permissions, I just don't like the\nname.\n\n\n> > Why \"CALLED ON NULL INPUT\"? It doesn't make sense to call the function with a\n> > NULL lsn, does it? Also, that's the default, why is it restated here?\n> \n> pg_get_wal_records_info needed that option (if end_lsn being the\n> default, providing WAL info upto the end of WAL). Also, we can emit\n> better error message (\"invalid WAL start LSN\") instead of generic one.\n> I wanted to keep error message and code same across all the functions\n> hence CALLED ON NULL INPUT option for pg_get_raw_wal_record.\n\nI think it should be strict if it behaves strict. I fail to see what\nconsistency in error messages is worth here. 
And I'd probably just create two\ndifferent functions for begin and begin & end LSN and mark those as strict as\nwell.\n\n\n> > > +REVOKE EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) FROM PUBLIC;\n> > > +GRANT EXECUTE ON FUNCTION pg_get_wal_record_info(pg_lsn) TO pg_monitor;\n> >\n> > I don't think it's appropriate for pg_monitor to see all the data in the WAL.\n> \n> How about pg_read_server_files or some other?\n\nThat seems more appropriate.\n\n\n> > > +-- pg_get_wal_stats()\n> >\n> > This seems like an exceedingly expensive way to compute this. Not just because\n> > of doing the grouping, window etc, but also because it's serializing the\n> > \"data\" field from pg_get_wal_records_info() just to never use it. With any\n> > appreciatable amount of data the return value pg_get_wal_records_info() will\n> > be serialized into a on-disk tuplestore.\n> >\n> > This is probably close to an order of magnitude slower than pg_waldump\n> > --stats. Which imo renders this largely useless.\n> \n> Yeah that's true. Do you suggest having pg_get_wal_stats() a\n> c-function like in v8 patch [1]?\n\nYes.\n\n\n> SEe some numbers at [2] with pg_get_wal_stats using\n> pg_get_wal_records_info and pg_get_wal_records_info as a direct\n> c-function like in v8 patch [1]. A direct c-function always fares\n> better (84 msec vs 1400msec).\n\nThat indeed makes the view as is pretty much useless. And it'd probably be\nworse in a workload with longer records / many FPIs.\n\n\n> > > +void\n> > > +_PG_init(void)\n> >\n> > > +void\n> > > +_PG_fini(void)\n> >\n> > Why have this stuff if it's not used?\n> \n> I kept it as a placeholder for future code additions, for instance\n> test_decoding.c and ssl_passphrase_func.c has empty _PG_init(),\n> _PG_fini(). If okay, I can mention there like \"placeholder for now\",\n> otherwise I can remove it.\n\nThat's not comparable, the test_decoding case has it as a placeholder because\nit serves as a template to create further output plugins. 
Something not the\ncase here. So please remove.\n\n\n> > > + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> > > + {\n> >\n> > To me duplicating this much code from waldump seems like a bad idea from a\n> > maintainability POV.\n> \n> Even if we were to put the above code from pg_walinspect and\n> pg_waldump into, say, walutils.c or some other existing file, there we\n> had to make if (pg_walinspect) appendStringInfo else if (pg_waldump)\n> printf() sort of thing, isn't it clumsy?\n\nWhy is that needed? Just use the stringinfo in both places? You're outputting\nthe exact same thing in both places right now. There's already a stringinfo in\nXLogDumpDisplayRecord() these days (there wasn't back when pg_xlogddump was\nwritten), so you could just convert at least the relevant printfs in\nXLogDumpDisplayRecord().\n\n\n> Also, unnecessary if\n> conditions need to be executed for every record. For maintainability,\n> I added a note in pg_walinspect.c and pg_waldump.c to consider fixing\n> things in both places (of course this might sound dumbest way of doing\n> it, IMHO, it's sensible, given the if(pg_walinspect)-else\n> if(pg_waldump) sorts of code that we need to do in the common\n> functions). Thoughts?\n\nIMO we shouldn't merge this with as much duplication as there is right now,\nthe notes don't change that it's a PITA to maintain.\n\n\n> Yeah. It is to handle some edge cases to print the WAL upto end_lsn\n> and avoid waiting in read_local_xlog_page. I will change it.\n> \n> Actually, there's an open point as specified in [3]. 
Any thoughts on it?\n\nSeems more user-friendly to wait - it's otherwise hard for a user to know what\nLSN to put in.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Mar 2022 11:00:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "At Tue, 22 Mar 2022 11:00:06 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-03-22 21:57:51 +0530, Bharath Rupireddy wrote:\n> > > This is probably close to an order of magnitude slower than pg_waldump\n> > > --stats. Which imo renders this largely useless.\n> > \n> > Yeah that's true. Do you suggest having pg_get_wal_stats() a\n> > c-function like in v8 patch [1]?\n> \n> Yes.\n>\n> > SEe some numbers at [2] with pg_get_wal_stats using\n> > pg_get_wal_records_info and pg_get_wal_records_info as a direct\n> > c-function like in v8 patch [1]. A direct c-function always fares\n> > better (84 msec vs 1400msec).\n> \n> That indeed makes the view as is pretty much useless. And it'd probably be\n> worse in a workload with longer records / many FPIs.\n\nFWIW agreed. The SQL version is too slow..\n\n\n> > > > + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> > > > + {\n> > >\n> > > To me duplicating this much code from waldump seems like a bad idea from a\n> > > maintainability POV.\n> > \n> > Even if we were to put the above code from pg_walinspect and\n> > pg_waldump into, say, walutils.c or some other existing file, there we\n> > had to make if (pg_walinspect) appendStringInfo else if (pg_waldump)\n> > printf() sort of thing, isn't it clumsy?\n> \n> Why is that needed? Just use the stringinfo in both places? You're outputting\n> the exact same thing in both places right now. 
There's already a stringinfo in\n> XLogDumpDisplayRecord() these days (there wasn't back when pg_xlogddump was\n> written), so you could just convert at least the relevant printfs in\n> XLogDumpDisplayRecord().\n\n> > Also, unnecessary if\n> > conditions need to be executed for every record. For maintainability,\n> > I added a note in pg_walinspect.c and pg_waldump.c to consider fixing\n> > things in both places (of course this might sound dumbest way of doing\n> > it, IMHO, it's sensible, given the if(pg_walinspect)-else\n> > if(pg_waldump) sorts of code that we need to do in the common\n> > functions). Thoughts?\n> \n> IMO we shouldn't merge this with as much duplication as there is right now,\n> the notes don't change that it's a PITA to maintain.\n\nThe two places emit different outputs but the only difference is the\ndelimiter between two blockrefs. (By the way, the current code forgets\nto insert a delimiter there). So even if the function took \"bool\nis_waldump\", it is used only when appending a line delimiter. It\nwould be nicer if the \"bool is_waldump\" were \"char *delimiter\".\nOthers might think differently, though..\n\nSo, the function looks like this.\n\nStringInfo XLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n\t\t\t\t\t\t\tuint32 &fpi_len);\n\n\n> > Yeah. It is to handle some edge cases to print the WAL upto end_lsn\n> > and avoid waiting in read_local_xlog_page. I will change it.\n> > \n> > Actually, there's an open point as specified in [3]. Any thoughts on it?\n> \n> Seems more user-friendly to wait - it's otherwise hard for a user to know what\n> LSN to put in.\n\nI'm not sure it is user-friendly that the function \"freeze\"s for a\nreason uncertain to the user.. Even if any results are accumulated\nbefore waiting, all of them vanishes by entering Ctrl-C to release the\n\"freeze\".\n\nAbout the usefulness of the waiting behavior, it depends on what we\nthink the function's major use cases are. 
Robert (AFAIU) thinks it as\na simple WAL dumper that is intended to use in some automated\nmechanism. The start/end LSNs simply identify the records to emit.\nNo warning/errors and no waits except for apparently invalid inputs.\n\nI thought it as a means by which to manually inspect wal on SQL\ninterface but don't have a strong opinion on the waiting behavior.\n(Because I can avoid that by giving a valid LSN pair to the function\nif I don't want it to \"freeze\".)\n\n\nAnyway, the opinions here on the interface are described as follows.\n\nA. as a diag interface for human use.\n\n 1. If the whole region is filled with records, return them all.\n 2. If start-LSN is too past, starts from the first available record.\n\n 3-1. If start-LSN is in future, wait for the record to come.\n 4-1. If end-LSN is in future, waits for new records.\n 5-1. If end-LSN is too past, error out?\n\nB. as a simple WAL dumper\n\n 1. If the whole region is filled with records, return them all.\n 2. If start-LSN is too past, starts from the first available record.\n\n 3-2. If start-LSN is in future, returns nothing.\n 4-2. If end-LSN is in future, ends with the last available record.\n 5-2. If end-LSN is too past, returns nothing.\n\n1 and 2 are uncontroversial.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Mar 2022 11:51:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "At Wed, 23 Mar 2022 11:51:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The two places emit different outputs but the only difference is the\n> delimiter between two blockrefs. (By the way, the current code forgets\n> to insert a delimiter there). So even if the function took \"bool\n> is_waldump\", it is used only when appending a line delimiter. 
It\n> would be nicer if the \"bool is_waldump\" were \"char *delimiter\".\n> Others might think differently, though..\n> \n> So, the function looks like this.\n> \n> StringInfo XLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n> \t\t\t\t\t\t\tuint32 &fpi_len);\n\nBy the way, xlog_block_info@xlogrecovery.c has the subset of the\nfunction. So the function can be shared with the callers of\nxlog_block_info but I'm not sure it is not too-much...\n\nStringInfo XLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n\t\t \t\t\t\t\tbool fpw_detail, uint32 &fpi_len);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Mar 2022 11:57:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Tue, Mar 22, 2022 at 11:30 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > Why \"CALLED ON NULL INPUT\"? It doesn't make sense to call the function with a\n> > > NULL lsn, does it? Also, that's the default, why is it restated here?\n> >\n> > pg_get_wal_records_info needed that option (if end_lsn being the\n> > default, providing WAL info upto the end of WAL). Also, we can emit\n> > better error message (\"invalid WAL start LSN\") instead of generic one.\n> > I wanted to keep error message and code same across all the functions\n> > hence CALLED ON NULL INPUT option for pg_get_raw_wal_record.\n>\n> I think it should be strict if it behaves strict. I fail to see what\n> consistency in error messages is worth here. And I'd probably just create two\n> different functions for begin and begin & end LSN and mark those as strict as\n> well.\n\nI'm okay with changing them to be STRICT. 
Right now, the behaviour of\npg_get_wal_records_info is this:\nCREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n IN end_lsn pg_lsn DEFAULT NULL,\n\nselect pg_get_wal_records_info(start_lsn, end_lsn);\nif start_lsn is future, then errors out\nif end_lsn is future, then errors out\notherwise, returns WAL records info between start_lsn and end_lsn\n\nselect pg_get_wal_records_info(start_lsn);\nif start_lsn is future, then errors out\nsets end_lsn = current server lsn and returns WAL records info between\nstart_lsn and end_lsn\n\nSame is true for pg_get_wal_stats.\n\nGetting WAL records info provided start_lsn until end-of-WAL is a\nbasic ask and a good function to have. Now, if I were to make\npg_get_wal_records_info STRICT, then I would need to have another\nfunction like pg_get_wal_records_info_till_end_of_wal/pg_get_wal_stats_till_end_of_wal\nmuch like ones in few of my initial patches upthread.\n\nIs it okay to have these functions pg_get_wal_records_info(start_lsn,\nend_lsn)/pg_get_wal_stats(start_lsn, end_lsn) and\npg_get_wal_records_info_till_end_of_wal(start_lsn)/pg_get_wal_stats_till_end_of_wal(start_lsn)?\nThis way, it will be more clear to the user actually than to stuff\nmore than one behaviour in a single function with default values.\n\nPlease let me know your thoughts.\n\n> > Yeah. It is to handle some edge cases to print the WAL upto end_lsn\n> > and avoid waiting in read_local_xlog_page. I will change it.\n> >\n> > Actually, there's an open point as specified in [3]. Any thoughts on it?\n>\n> Seems more user-friendly to wait - it's otherwise hard for a user to know what\n> LSN to put in.\n\nI agree with Kyotaro-san that the wait behavior isn't a good choice,\nbecause CTRL+C would not emit the accumulated info/stats unlike\npg_waldump. 
Also, with wait behaviour it's easy for a user to trick\nthe server with an unreasonably futuristic WAL LSN, say F/FFFFFFFF.\nAlso, if we use pg_walinspect functions, say, within a WAL monitoring\napp, the wait behaviour isn't good there as it might look like the\nfunctions hanging the app. We might think about adding a timeout for\nwaiting, but that doesn't seem an elegant way. Users/Apps can easily\nfigure out the LSNs to get WAL info/stats - either they can use\npg_current_wal_XXXX or by looking at the control file or server logs\nor pg_stat_replication, what not. LSNs are everywhere within the\npostgres eco-system.\n\nInstead, the functions simply can figure out what's current server LSN\nat-the-moment and choose to error out if any of the provided input LSN\nis beyond that as it's being done currently. This looks simpler and\nuser-friendly.\n\nOn Wed, Mar 23, 2022 at 8:27 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 23 Mar 2022 11:51:25 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > The two places emit different outputs but the only difference is the\n> > delimiter between two blockrefs. (By the way, the current code forgets\n> > to insert a delimiter there). So even if the function took \"bool\n> > is_waldump\", it is used only when appending a line delimiter. It\n> > would be nicer if the \"bool is_waldump\" were \"char *delimiter\".\n> > Others might think differently, though..\n> >\n> > So, the function looks like this.\n> >\n> > StringInfo XLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n> > uint32 &fpi_len);\n>\n> By the way, xlog_block_info@xlogrecovery.c has the subset of the\n> function. So the function can be shared with the callers of\n> xlog_block_info but I'm not sure it is not too-much...\n>\n> StringInfo XLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n> bool fpw_detail, uint32 &fpi_len);\n>\n\nYes, putting them in a common function is a good idea. 
I'm thinking\nsomething like below.\nStringInfo\nXLogBlockRefInfos(XLogReaderState *record, char *delimiter,\n uint32 *fpi_len, bool detailed_format)\n\nI will try to put the common functions in xlogreader.h/.c, so that\nboth pg_waldump and pg_walinspect can make use of it. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 23 Mar 2022 18:25:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Tue, Mar 22, 2022 at 11:30 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > To me duplicating this much code from waldump seems like a bad idea from a\n> > > maintainability POV.\n> >\n> > Even if we were to put the above code from pg_walinspect and\n> > pg_waldump into, say, walutils.c or some other existing file, there we\n> > had to make if (pg_walinspect) appendStringInfo else if (pg_waldump)\n> > printf() sort of thing, isn't it clumsy?\n>\n> Why is that needed? Just use the stringinfo in both places? You're outputting\n> the exact same thing in both places right now. There's already a stringinfo in\n> XLogDumpDisplayRecord() these days (there wasn't back when pg_xlogddump was\n> written), so you could just convert at least the relevant printfs in\n> XLogDumpDisplayRecord().\n>\n> > Also, unnecessary if\n> > conditions need to be executed for every record. For maintainability,\n> > I added a note in pg_walinspect.c and pg_waldump.c to consider fixing\n> > things in both places (of course this might sound dumbest way of doing\n> > it, IMHO, it's sensible, given the if(pg_walinspect)-else\n> > if(pg_waldump) sorts of code that we need to do in the common\n> > functions). 
Thoughts?\n>\n> IMO we shouldn't merge this with as much duplication as there is right now,\n> the notes don't change that it's a PITA to maintain.\n\nHere's a refactoring patch that basically moves the pg_waldump's\nfunctions and stats structures to xlogreader.h/.c so that the\npg_walinspect can reuse them. If it looks okay, I will send the\npg_walinspect patches based on it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 23 Mar 2022 21:36:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "At Wed, 23 Mar 2022 21:36:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Here's a refactoring patch that basically moves the pg_waldump's\n> functions and stats structures to xlogreader.h/.c so that the\n> pg_walinspect can reuse them. If it looks okay, I will send the\n> pg_walinspect patches based on it.\n\n\n+void\n+XLogRecGetBlockRefInfo(XLogReaderState *record, char *delimiter,\n+\t\t\t\t\t uint32 *fpi_len, bool detailed_format,\n+\t\t\t\t\t StringInfo buf)\n...\n+\t\tif (detailed_format && delimiter)\n+\t\t\tappendStringInfoChar(buf, '\\n');\n\nIt is odd that the variable \"delimiter\" is used as a bool in the\nfunction, though it is a \"char *\", which I meant that it is used as\ndelimiter string (assuming that you might want to insert \", \" between\ntwo blkref descriptions).\n\n\n+get_forkname(ForkNumber num)\n\nforkNames[] is public and used in reinit.c. I think we don't need\nthis function.\n\n\n+#define MAX_XLINFO_TYPES 16\n...\n+\tXLogRecStats\trmgr_stats[RM_NEXT_ID];\n+\tXLogRecStats\trecord_stats[RM_NEXT_ID][MAX_XLINFO_TYPES];\n+} XLogStats;\n+\n\nThis doesn't seem to be a part of xlogreader. Couldn't we add a new\nmodule \"xlogstats\"? 
XLogRecGetBlockRefInfo also doesn't seem to me as\na part of xlogreader, the xlogstats looks like a better place.\n\n\n+#define XLOG_GET_STATS_PERCENTAGE(n_pct, rec_len_pct, fpi_len_pct, \\\n+\t\t\t\t\t\t\t\t tot_len_pct, total_count, \\\n\nIt doesn't need to be a macro. However in the first place I don't\nthink it is useful to have. Rather it may be harmful since it doesn't\nreduce complexity much but instead just hides details. If we want to\navoid tedious repetitions of the same statements, a macro like the\nfollowing may work.\n\n#define CALC_PCT(num, denom) ((denom) == 0 ? 0.0 : 100.0 * (num) / (denom))\n...\n> n_pct = CALC_PCT(n, total_count);\n> rec_len_pct = CALC_PCT(rec_len, total_rec_len);\n> fpi_len_pct = CALC_PCT(fpi_len, total_fpi_len);\n> tot_len_pct = CALC_PCT(tot_len, total_len);\n\nBut it does not seem that different if we directly write out the detail.\n\n> n_pct = (total_count == 0 ? 0 : 100.0 * n / total_count);\n> rec_len_pct = (total_rec_len == 0 ? 0 : 100.0 * rec_len / total_rec_len);\n> fpi_len_pct = (total_fpi_len == 0 ? 0 : 100.0 * fpi_len / total_fpi_len);\n> tot_len_pct = (total_len == 0 ? 
0 : 100.0 * tot_len / total_len);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:52:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Thu, Mar 24, 2022 at 10:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> +void\n> +XLogRecGetBlockRefInfo(XLogReaderState *record, char *delimiter,\n> + uint32 *fpi_len, bool detailed_format,\n> + StringInfo buf)\n> ...\n> + if (detailed_format && delimiter)\n> + appendStringInfoChar(buf, '\\n');\n>\n> It is odd that the variable \"delimiter\" is used as a bool in the\n> function, though it is a \"char *\", which I meant that it is used as\n> delimiter string (assuming that you might want to insert \", \" between\n> two blkref descriptions).\n\nI'm passing NULL if the delimiter isn't required (for pg_walinspect)\nand I wanted to check if it's passed, so I was using the delimiter in\nthe condition. However, I now changed it to delimiter != NULL.\n\n> +get_forkname(ForkNumber num)\n>\n> forkNames[] is public and used in reinit.c. I think we don't need\n> this function.\n\nYes. I removed it.\n\n> +#define MAX_XLINFO_TYPES 16\n> ...\n> + XLogRecStats rmgr_stats[RM_NEXT_ID];\n> + XLogRecStats record_stats[RM_NEXT_ID][MAX_XLINFO_TYPES];\n> +} XLogStats;\n> +\n>\n> This doesn't seem to be a part of xlogreader. Couldn't we add a new\n> module \"xlogstats\"? XLogRecGetBlockRefInfo also doesn't seem to me as\n> a part of xlogreader, the xlogstats looks like a better place.\n\nI'm not sure if it's worth adding new files xlogstats.h/.c just for 2\nstructures, 1 macro, and 2 functions with no plan to add new stats\nstructures or functions. Since xlogreader is the one that reads the\nWAL, and is being included by both backend and other modules (tools\nand extensions) IMO it's the right place. 
However, I can specify in\nxlogreader that if at all the new stats related structures or\nfunctions are going to be added, it's good to move them into a new\nheader and .c file.\n\nThoughts?\n\n> +#define XLOG_GET_STATS_PERCENTAGE(n_pct, rec_len_pct, fpi_len_pct, \\\n> + tot_len_pct, total_count, \\\n>\n> It doesn't need to be a macro. However in the first place I don't\n> think it is useful to have. Rather it may be harmful since it doesn't\n> reduce complexity much but instead just hides details. If we want to\n> avoid tedious repetitions of the same statements, a macro like the\n> following may work.\n>\n> #define CALC_PCT (num, denom) ((denom) == 0 ? 0.0 ? 100.0 * (num) / (denom))\n> ...\n> > n_pct = CALC_PCT(n, total_count);\n> > rec_len_pct = CALC_PCT(rec_len, total_rec_len);\n> > fpi_len_pct = CALC_PCT(fpi_len, total_fpi_len);\n> > tot_len_pct = CALC_PCT(tot_len, total_len);\n>\n> But it is not seem that different if we directly write out the detail.\n>\n> > n_pct = (total_count == 0 ? 0 : 100.0 * n / total_count);\n> > rec_len_pct = (total_rec_len == 0 ? 0 : 100.0 * rec_len / total_rec_len);\n> > fpi_len_pct = (total_fpi_len == 0 ? 0 : 100.0 * fpi_len / total_fpi_len);\n> > tot_len_pct = (total_len == 0 ? 0 : 100.0 * tot_len / total_len);\n\nI removed the XLOG_GET_STATS_PERCENTAGE macro.\n\nAttaching v14 patch-set here with. 
It has bunch of other changes along\nwith the above:\n\n1) Used STRICT for all the functions and introduced _till_end_of_wal\nversions for pg_get_wal_records_info and pg_get_wal_stats.\n2) Most of the code is reused between pg_walinspect and pg_waldump and\nalso within pg_walinspect.\n3) Added read_local_xlog_page_no_wait without duplicating the code so\nthat the pg_walinspect functions don't wait even while finding the\nfirst valid WAL record.\n4) No function waits for future WAL lsn even to find the first valid WAL record.\n5) Addressed the review comments raised upthread by Andres.\n\nI hope this version makes the patch cleaner, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 24 Mar 2022 15:02:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 15:02:29 +0530, Bharath Rupireddy wrote:\n> On Thu, Mar 24, 2022 at 10:22 AM Kyotaro Horiguchi\n> > This doesn't seem to be a part of xlogreader. Couldn't we add a new\n> > module \"xlogstats\"? XLogRecGetBlockRefInfo also doesn't seem to me as\n> > a part of xlogreader, the xlogstats looks like a better place.\n> \n> I'm not sure if it's worth adding new files xlogstats.h/.c just for 2\n> structures, 1 macro, and 2 functions with no plan to add new stats\n> structures or functions. Since xlogreader is the one that reads the\n> WAL, and is being included by both backend and other modules (tools\n> and extensions) IMO it's the right place. However, I can specify in\n> xlogreader that if at all the new stats related structures or\n> functions are going to be added, it's good to move them into a new\n> header and .c file.\n\nI don't like that location for XLogRecGetBlockRefInfo(). 
How about putting it\nin xlogdesc.c - that kind of fits?\n\nAnd what do you think about creating src/backend/access/rmgrdesc/stats.c for\nXLogRecStoreStats()? It's not a perfect location, but not too bad either.\n\nXLogRecGetLen() would be ok in xlogreader, but stats.c also would work?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 11:47:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 25, 2022 at 12:18 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-24 15:02:29 +0530, Bharath Rupireddy wrote:\n> > On Thu, Mar 24, 2022 at 10:22 AM Kyotaro Horiguchi\n> > > This doesn't seem to be a part of xlogreader. Couldn't we add a new\n> > > module \"xlogstats\"? XLogRecGetBlockRefInfo also doesn't seem to me as\n> > > a part of xlogreader, the xlogstats looks like a better place.\n> >\n> > I'm not sure if it's worth adding new files xlogstats.h/.c just for 2\n> > structures, 1 macro, and 2 functions with no plan to add new stats\n> > structures or functions. Since xlogreader is the one that reads the\n> > WAL, and is being included by both backend and other modules (tools\n> > and extensions) IMO it's the right place. However, I can specify in\n> > xlogreader that if at all the new stats related structures or\n> > functions are going to be added, it's good to move them into a new\n> > header and .c file.\n>\n> I don't like that location for XLogRecGetBlockRefInfo(). How about putting it\n> in xlogdesc.c - that kind of fits?\n\nDone.\n\n> And what do you think about creating src/backend/access/rmgrdesc/stats.c for\n> XLogRecStoreStats()? It's not a perfect location, but not too bad either.\n>\n> XLogRecGetLen() would be ok in xlogreader, but stats.c also would work?\n\nI've added a new xlogstats.c/.h (as suggested by Kyotaro-san as well)\nfile under src/backend/access/transam/. 
I don't think the new file\nfits well under rmgrdesc.\n\nAttaching v15 patch-set, please have a look at it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 25 Mar 2022 12:11:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "Hi Bharath,\n\nFirst look at the patch, bear with me if any of the following comments are\nrepeated.\n1. With pg_get_wal_record(lsn), say a WAL record start, end lsn range\ncontains the specified LSN, wouldn't it be more meaningful to show the\ncorresponding WAL record.\nFor example, upon providing '0/17335E7' as input, and I see get the WAL\nrecord ('0/1733618', '0/173409F') as output and not the one with start and\nend lsn as ('0/17335E0', '0/1733617').\n\nWith pg_walfile_name(lsn), we can find the WAL segment file name that\ncontains the specified LSN.\n\n2. I see the following output for pg_get_wal_record. 
Need to have a look at\nthe spaces I suppose.\nrkn=# select * from pg_get_wal_record('0/4041728');\n start_lsn | end_lsn | prev_lsn | record_length |\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n record\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-----------+-----------+-----------+---------------+---------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------
---------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n 0/4041728 | 0/40421AF | 0/40416F0 | 2670 
|\n\\x6e0a0000d2020000f016040400000000000a0000fef802b400007f7f00000000408fe738a25500000300000000000000010000007f06000000\n4000003b0a00000000000012000000100101007f7f7f7f0885e738a25500003000c815380a03000885e738a255000000007f7f7f7f7f7f0000000078674301296000003000f8150020042000000000709e1203909\ndba01c89c8601609cd00030986008f8956804d202000000000000000000000000120006001f00030820ffff5f04000000000001400000010000000000000004000000000080bf0200030000000000000000006100\n0000610000000000000000000000000000000000000000000000000000000000000000000000330100000000000000bc02000001000000010000000000803f00000000000000b0060000010000000000000017000\n\n3. Should these functions be running in standby mode too? We do not allow\nWAL control functions to be executed during recovery right?\n\n4. If the wal segment corresponding to the start lsn is removed, but there\nare WAL records which could be read in the specified input lsn range, would\nit be better to output the existing WAL records displaying a message that\nit is a partial list of WAL records and the WAL files corresponding to the\nrest are already removed, rather than erroring out saying \"requested WAL\nsegment has already been removed\"?\n\n5. Following are very minor comments in the code\n\n - Correct the function description by removing \"return the LSN up to\n which the server has WAL\" for IsFutureLSN\n - In GetXLogRecordInfo, good to have pfree in place for rec_desc,\n rec_blk_ref, data\n - In GetXLogRecordInfo, can avoid calling XLogRecGetInfo(record)\n multiple times by capturing in a variable\n - In GetWALDetailsGuts, setting end_lsn could be done in single if else\n and similarly we can club the if statements verifying if the start lsn is a\n future lsn.\n\nThanks,\nRKN\n\nHi Bharath,First look at the patch, bear with me if any of the following comments are repeated.1. 
", "msg_date": "Fri, 25 Mar 2022 20:37:31 +0530", "msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Fri, Mar 25, 2022 at 8:37 PM RKN Sai Krishna\n<rknsaiforpostgres@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n> First look at the patch, bear with me if any of the following comments are repeated.\n\nThanks RKN, for playing around with the patches.\n\n> 1. With pg_get_wal_record(lsn), say a WAL record start, end lsn range contains the specified LSN, wouldn't it be more meaningful to show the corresponding WAL record.\n\nIn general, all the functions will first look for a first valid WAL\nrecord from the given input lsn/start lsn(XLogFindNextRecord) and then\ngive info of all the valid records including the first valid WAL\nrecord until either the given end lsn or till end of WAL depending on\nthe function used.\n\n> For example, upon providing '0/17335E7' as input, and I see get the WAL record ('0/1733618', '0/173409F') as output and not the one with start and end lsn as ('0/17335E0', '0/1733617').\n\nIf '0/17335E7' is an LSN containing a valid WAL record,\npg_get_wal_record gives the info of that, otherwise if there's any\nnext valid WAL record, it finds and gives that info. 
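To make that lookup behaviour concrete, something like the following should show it (an untested sketch; the LSNs and output columns are the ones from the example discussed above):

```sql
-- '0/17335E7' itself doesn't start a record, so the function scans
-- forward and reports the next valid record, starting at '0/1733618'.
select start_lsn, end_lsn, prev_lsn from pg_get_wal_record('0/17335E7');
```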
'0/17335E0' is\nbefore '0/17335E7' the input lsn, so it doesn't show that record, but\nthe next valid record.\n\nAll the pg_walinspect functions don't look for the nearest valid WAL\nrecord (could be previous to input lsn or next to input lsn), but they\nlook for the next valid WAL record. This is because the xlogreader\ninfra now has no API for backward iteration from a given LSN ( it has\nXLogFindNextRecord and XLogReadRecord which scans the WAL in forward\ndirection). But, it's a good idea to have XLogFindPreviousRecord and\nXLogReadPreviousRecord versions (as we have links for previous WAL\nrecord in each WAL record) but that's a separate discussion.\n\n> With pg_walfile_name(lsn), we can find the WAL segment file name that contains the specified LSN.\n\nYes.\n\n> 2. I see the following output for pg_get_wal_record. Need to have a look at the spaces I suppose.\n\nI believe this is something psql does for larger column outputs for\npretty-display. When used in a non-psql client, the column values are\nreturned properly. Nothing to do with the pg_walinspect patches here.\n\n> 3. Should these functions be running in standby mode too? We do not allow WAL control functions to be executed during recovery right?\n\nThere are functions that can be executable during recovery\npg_last_wal_receive_lsn, pg_last_wal_replay_lsn. The pg_walinspect\nfunctions are useful even in recovery and I don't see a strong reason\nto not support them. Hence, I'm right now supporting them.\n\n> 4. 
If the wal segment corresponding to the start lsn is removed, but there are WAL records which could be read in the specified input lsn range, would it be better to output the existing WAL records displaying a message that it is a partial list of WAL records and the WAL files corresponding to the rest are already removed, rather than erroring out saying \"requested WAL segment has already been removed\"?\n\n\"requested WAL segment %s has already been removed\" is a common error\nacross the xlogreader infra (see wal_segment_open) and I don't want to\ninvent a new behaviour. And all the segment_open callbacks report an\nerror when they are not finding the WAL file that they are looking\nfor.\n\n> 5. Following are very minor comments in the code\n>\n> Correct the function description by removing \"return the LSN up to which the server has WAL\" for IsFutureLSN\n\nThat's fine, because it actually returns curr_lsn via the function\nparam curr_lsn. However, I modified the comment a bit.\n\n> In GetXLogRecordInfo, good to have pfree in place for rec_desc, rec_blk_ref, data\n\nNo, we are just returning pointer to the string, not deep copying, see\nCStringGetTextDatum. 
All the functions get executed within a\nfunction's memory context and after handing off the results to the\nclient that gets deleted, deallocating all the memory.\n\n> In GetXLogRecordInfo, can avoid calling XLogRecGetInfo(record) multiple times by capturing in a variable\n\nXLogRecGetInfo is not a function, it's a macro, so that's fine.\n#define XLogRecGetInfo(decoder) ((decoder)->record->header.xl_info)\n\n> In GetWALDetailsGuts, setting end_lsn could be done in single if else and similarly we can club the if statements verifying if the start lsn is a future lsn.\n\nThe existing if conditions are:\n\n if (IsFutureLSN(start_lsn, &curr_lsn))\n if (!till_end_of_wal && end_lsn >= curr_lsn)\n if (till_end_of_wal)\n if (start_lsn >= end_lsn)\n\nI clubbed them like this:\n if (!till_end_of_wal)\n if (IsFutureLSN(start_lsn, &curr_lsn))\n if (!till_end_of_wal && end_lsn >= curr_lsn)\n else if (till_end_of_wal)\n\nOther if conditions are serving different purposes, so I'm leaving them as-is.\n\nAttaching v16 patch-set, only change in v16-0002-pg_walinspect.patch,\nothers remain the same.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 26 Mar 2022 10:31:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sat, 2022-03-26 at 10:31 +0530, Bharath Rupireddy wrote:\n> Attaching v16 patch-set, only change in v16-0002-pg_walinspect.patch,\n> others remain the same.\n\nI looked more closely at this patch.\n\n* It seems that pg_get_wal_record() is not returning the correct raw\ndata for the record. I tested with pg_logical_emit_message, and the\nmessage isn't there. pg_get_wal_record_info() uses XLogRecordGetData(),\nwhich seems closer to what I expect.\n\n* I'm a little unclear on the purpose of pg_get_wal_record(). What does\nit offer that the other functions don't?\n\n* I don't think we need the stats at all. 
We can run GROUP BY queries\non the results of pg_get_wal_records_info().\n\n* Include the xlinfo portion of the wal record in addition to the rmgr,\nlike pg_waldump --stats=record shows. That way we can GROUP BY that as\nwell.\n\n* I don't think we need the changes to xlogutils.c. You calculate the\nend pointer based on the flush pointer, anyway, so we should never need\nto wait (and when I take it out, the tests still pass).\n\n\nI think we can radically simplify it without losing functionality,\nunless I'm missing something.\n\n1. Eliminate pg_get_wal_record(),\npg_get_wal_records_info_till_end_of_wal(), pg_get_wal_stats(),\npg_get_wal_stats_till_end_of_wal().\n\n2. Rename pg_get_wal_record_info -> pg_get_wal_record\n\n3. Rename pg_get_wal_records_info -> pg_get_wal_records\n\n4. For pg_get_wal_records, if end_lsn is NULL, read until the end of\nWAL.\n\n5. For pg_get_wal_record and pg_get_wal_records, also return the xlinfo\nusing rm_identify() if available.\n\n6. Remove changes to xlogutils.\n\n7. Remove the refactor to pull the stats out to a separate file,\nbecause stats aren't needed. \n\n8. With only two functions in the API, it may even make sense to just\nmake it a part of postgres rather than a separate module.\n\nHowever, I'm a little behind on this discussion thread, so perhaps I'm\nmissing some important context. I'll try to catch up soon.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Fri, 01 Apr 2022 16:35:38 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "Hi,\n\nOn 2022-04-01 16:35:38 -0700, Jeff Davis wrote:\n> * I don't think we need the stats at all. We can run GROUP BY queries\n> on the results of pg_get_wal_records_info().\n\nIt's well over an order of magnitude slower. And I don't see how that can be\navoided. 
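For reference, the pure-SQL aggregation being measured here would be something along these lines (a sketch only; the function name and columns are the ones discussed upthread, and the LSN bounds are placeholders):

```sql
-- Per-resource-manager record counts computed at the SQL level,
-- i.e. without a dedicated C stats function.
select resource_manager, count(*) as cnt
from pg_get_wal_records_info('0/14E0568', '0/14F2568')
group by resource_manager
order by cnt desc;
```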
That makes it practically useless.\n\nSee numbers at the bottom of\nhttps://postgr.es/m/CALj2ACUvU2fGLw7keEpxZhGAoMQ9vrCPX-13hexQPoR%2BQRbuOw%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Apr 2022 16:44:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Sat, Apr 2, 2022 at 5:05 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2022-03-26 at 10:31 +0530, Bharath Rupireddy wrote:\n> > Attaching v16 patch-set, only change in v16-0002-pg_walinspect.patch,\n> > others remain the same.\n>\n> I looked more closely at this patch.\n\nThanks Jeff for reviewing this.\n\n> * It seems that pg_get_wal_record() is not returning the correct raw\n> data for the record. I tested with pg_logical_emit_message, and the\n> message isn't there. pg_get_wal_record_info() uses XLogRecordGetData(),\n> which seems closer to what I expect.\n>\n> * I'm a little unclear on the purpose of pg_get_wal_record(). What does\n> it offer that the other functions don't?\n\nMy intention is to return the overall undecoded WAL record [5] i.e.\nthe data starting from XLogReadRecord's output [6] till length\nXLogRecGetTotalLen(xlogreader);. Please see [7], where Andres agreed\nto have this function, I also mentioned a possible use-case there.\n\npg_get_wal_record_info returns the main data of the WAL record\n(xl_heap_delete, xl_heap_insert, xl_heap_multi_insert, xl_heap_update\nand so on).\n\n> * I don't think we need the stats at all. We can run GROUP BY queries\n> on the results of pg_get_wal_records_info().\n\nAs identified in [1], SQL-version of stats function is way slower in\nnormal cases, hence it was agreed (by Andres, Kyotaro-san and myself)\nto have a C-function for stats.\n\n> * Include the xlinfo portion of the wal record in addition to the rmgr,\n> like pg_waldump --stats=record shows. 
That way we can GROUP BY that as\n> well.\n>\n> 5. For pg_get_wal_record and pg_get_wal_records, also return the xlinfo\n> using rm_identify() if available.\n\nYes, that's already part of the description column (much like\npg_waldump does) and the users can still do it with GROUP BY and\nHAVING clauses, see [4].\n\n> * I don't think we need the changes to xlogutils.c. You calculate the\n> end pointer based on the flush pointer, anyway, so we should never need\n> to wait (and when I take it out, the tests still pass).\n>\n> 6. Remove changes to xlogutils.\n\nAs mentioned in [1], read_local_xlog_page_no_wait required because the\nfunctions can still wait in read_local_xlog_page for WAL while finding\nthe first valid record after the given input LSN (the use case is\nsimple - just provide input LSN closer to server's current flush LSN,\nmay be off by 3 or 4 bytes).\n\nAlso, I tried to keep the changes minimal with the\nread_local_xlog_page_guts static function. IMO, that shouldn't be a\nproblem.\n\n> I think we can radically simplify it without losing functionality,\n> unless I'm missing something.\n>\n> 1. Eliminate pg_get_wal_record(),\n> pg_get_wal_records_info_till_end_of_wal(), pg_get_wal_stats(),\n> pg_get_wal_stats_till_end_of_wal().\n>\n> 4. For pg_get_wal_records, if end_lsn is NULL, read until the end of\n> WAL.\n\nIt's pretty much clear to the users with till_end_of_wal functions\ninstead of cooking many things into the same functions with default\nvalues for input LSNs as NULL which also requires the functions to be\n\"CALLED ON NULL INPUT\" types which isn't good. This was also suggested\nby Andres, see [2], and I agree with it.\n\n> 2. Rename pg_get_wal_record_info -> pg_get_wal_record\n>\n> 3. Rename pg_get_wal_records_info -> pg_get_wal_records\n\nAs these functions aren't returning the WAL record data, but info\nabout it (decoded data), I would like to retain the function names\nas-is.\n\n> 8. 
With only two functions in the API, it may even make sense to just\nmake it a part of postgres rather than a separate module.\n\nAs said above, I would like to have till_end_of_wal versions. Firstly,\npg_walinspect functions may not be needed by everyone, the extension\nprovides a way for the users to install if required. Also, many\nhackers have suggested new functions [3], but, right now the idea is\nto get pg_walinspect onboard with simple-yet-useful functions and then\nthink of extending it with new functions later.\n\n[1] https://www.postgresql.org/message-id/CALj2ACUvU2fGLw7keEpxZhGAoMQ9vrCPX-13hexQPoR%2BQRbuOw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20220322180006.hgbsoldgbljyrcm7%40alap3.anarazel.de\n[3] There are many functions we can add to pg_walinspect - functions\nwith wait mode for future WAL, WAL parsing, function to return all the\nWAL record info/stats given a WAL file name, functions to return WAL\ninfo/stats from historic timelines as well, function to see if the\ngiven WAL file is valid and so on.\n[4]\npostgres=# select count(resource_manager), description from\npg_get_wal_records_info('0/14E0568', '0/14F2568') group by description\nhaving description like '%INSERT_LEAF%';\n count | description\n-------+---------------------\n 7 | INSERT_LEAF off 108\n 1 | INSERT_LEAF off 111\n 1 | INSERT_LEAF off 135\n 1 | INSERT_LEAF off 142\n 3 | INSERT_LEAF off 143\n 1 | INSERT_LEAF off 144\n 1 | INSERT_LEAF off 145\n 1 | INSERT_LEAF off 146\n 1 | INSERT_LEAF off 274\n 1 | INSERT_LEAF off 405\n(10 rows)\n\n[5]\n/*\n * The overall layout of an XLOG record is:\n * Fixed-size header (XLogRecord struct)\n * XLogRecordBlockHeader struct\n * XLogRecordBlockHeader struct\n * ...\n * XLogRecordDataHeader[Short|Long] struct\n * block data\n * block data\n * ...\n * main data\n\n[6]\nXLogRecord *\nXLogReadRecord(XLogReaderState *state, char **errormsg)\n{\n decoded = XLogNextRecord(state, errormsg);\n if (decoded)\n {\n /*\n * This function 
returns a pointer to the record's header, not the\n * actual decoded record. The caller will access the decoded record\n * through the XLogRecGetXXX() macros, which reach the decoded\n * recorded as xlogreader->record.\n */\n Assert(state->record == decoded);\n return &decoded->header;\n }\n\n[7] https://www.postgresql.org/message-id/20220322180006.hgbsoldgbljyrcm7%40alap3.anarazel.de\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:15:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, 2022-04-04 at 09:15 +0530, Bharath Rupireddy wrote:\n> My intention is to return the overall undecoded WAL record [5] i.e.\n> the data starting from XLogReadRecord's output [6] till length\n> XLogRecGetTotalLen(xlogreader);. Please see [7], where Andres agreed\n> to have this function, I also mentioned a possible use-case there.\n\nThe current patch does not actually do this: it's returning a pointer\ninto the DecodedXLogRecord struct, which doesn't have the raw bytes of\nthe WAL record.\n\nTo return the raw bytes of the record is not entirely trivial: it seems\nwe have to look in the decoded record and either find a pointer into\nreadBuf, or readRecordBuf, depending on whether the record crosses a\nboundary or not. If we find a good way to do this I'm fine keeping the\nfunction, but if not, we can leave it for v16.\n\n> pg_get_wal_record_info returns the main data of the WAL record\n> (xl_heap_delete, xl_heap_insert, xl_heap_multi_insert, xl_heap_update\n> and so on).\n\nWe also discussed just removing the main data from the output here.\nIt's not terribly useful, and could be arbitrarily large. 
Similar to\nhow we leave out the backup block data and images.\n\n> As identified in [1], SQL-version of stats function is way slower in\n> normal cases, hence it was agreed (by Andres, Kyotaro-san and myself)\n> to have a C-function for stats.a pointer into \n\nNow I agree. We should also have an equivalent of \"pg_waldump --\nstats=record\" though, too.\n\n> Yes, that's already part of the description column (much like\n> pg_waldump does) and the users can still do it with GROUP BY and\n> HAVING clauses, see [4].\n\nI still think an extra column for the results of rm_identify() would\nmake sense. Not critical, but seems useful.\n\n> As mentioned in [1], read_local_xlog_page_no_wait required because\n> the\n> functions can still wait in read_local_xlog_page for WAL while\n> finding\n> the first valid record after the given input LSN (the use case is\n> simple - just provide input LSN closer to server's current flush LSN,\n> may be off by 3 or 4 bytes).\n\nDid you look into using XLogReadAhead() rather than XLogReadRecord()?\n\n> It's pretty much clear to the users with till_end_of_wal functions\n> instead of cooking many things into the same functions with default\n> values for input LSNs as NULL which also requires the functions to be\n> \"CALLED ON NULL INPUT\" types which isn't good. This was also\n> suggested\n> by Andres, see [2], and I agree with it.\n\nOK, it's a matter of taste I suppose. I don't have a strong opinion.\n\n> > 2. Rename pg_get_wal_record_info -> pg_get_wal_record\n> > \n> > 3. 
Rename pg_get_wal_records_info -> pg_get_wal_records\n> \n> As these functions aren't returning the WAL record data, but info\n> about it (decoded data), I would like to retain the function names\n> as-is.\n\nThe name pg_get_wal_records_info bothers me slightly, but I don't have\na better suggestion.\n\n\nOne other thought: functions like pg_logical_emit_message() return an\nLSN, but if you feed that into pg_walinspect you get the *next* record.\nThat makes sense because pg_logical_emit_message() returns the result\nof XLogInsertRecord(), which is the end of the last inserted record.\nBut it can be slightly annoying/confusing. I don't have any particular\nsuggestion, but maybe it's worth a mention in the docs or something?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 05 Apr 2022 22:02:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL\n stats" }, { "msg_contents": "On Wed, Apr 6, 2022 at 10:32 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2022-04-04 at 09:15 +0530, Bharath Rupireddy wrote:\n> > My intention is to return the overall undecoded WAL record [5] i.e.\n> > the data starting from XLogReadRecord's output [6] till length\n> > XLogRecGetTotalLen(xlogreader);. Please see [7], where Andres agreed\n> > to have this function, I also mentioned a possible use-case there.\n>\n> The current patch does not actually do this: it's returning a pointer\n> into the DecodedXLogRecord struct, which doesn't have the raw bytes of\n> the WAL record.\n>\n> To return the raw bytes of the record is not entirely trivial: it seems\n> we have to look in the decoded record and either find a pointer into\n> readBuf, or readRecordBuf, depending on whether the record crosses a\n> boundary or not. 
If we find a good way to do this I'm fine keeping the\n> function, but if not, we can leave it for v16.\n\nWith no immediate use of raw WAL data without a WAL record parsing\nfunction, I'm dropping that function for now.\n\n> > pg_get_wal_record_info returns the main data of the WAL record\n> > (xl_heap_delete, xl_heap_insert, xl_heap_multi_insert, xl_heap_update\n> > and so on).\n>\n> We also discussed just removing the main data from the output here.\n> It's not terribly useful, and could be arbitrarily large. Similar to\n> how we leave out the backup block data and images.\n\nDone.\n\n> > As identified in [1], SQL-version of stats function is way slower in\n> > normal cases, hence it was agreed (by Andres, Kyotaro-san and myself)\n> > to have a C-function for stats.a pointer into\n>\n> Now I agree. We should also have an equivalent of \"pg_waldump --\n> stats=record\" though, too.\n\nAdded a parameter per_record (with default being false, emitting\nper-rmgr stats) to pg_get_wal_stats and\npg_get_wal_stats_till_end_of_wal, when set returns per-record stats,\nmuch like pg_waldump.\n\n> > Yes, that's already part of the description column (much like\n> > pg_waldump does) and the users can still do it with GROUP BY and\n> > HAVING clauses, see [4].\n>\n> I still think an extra column for the results of rm_identify() would\n> make sense. 
Not critical, but seems useful.\n\nAdded rm_identify as record_type column in pg_get_wal_record_info,\npg_get_wal_records_info, pg_get_wal_record_info_till_end_of_wal.\nRemoved the rm_identify from the description column as it's\nunnecessary now here.\n\n> > As mentioned in [1], read_local_xlog_page_no_wait required because\n> > the\n> > functions can still wait in read_local_xlog_page for WAL while\n> > finding\n> > the first valid record after the given input LSN (the use case is\n> > simple - just provide input LSN closer to server's current flush LSN,\n> > may be off by 3 or 4 bytes).\n>\n> Did you look into using XLogReadAhead() rather than XLogReadRecord()?\n\nI don't think XLogReadAhead will help either, as it calls page_read\ncallback, XLogReadAhead->XLogDecodeNextRecord->ReadPageInternal->page_read->read_local_xlog_page\n(which again waits for future WAL).\n\nPer our internal discussion, I'm keeping the\nread_local_xlog_page_no_wait as it offers a better solution without\nmuch code duplication.\n\n> The name pg_get_wal_records_info bothers me slightly, but I don't have\n> a better suggestion.\n\nIMO, pg_get_wal_records_info looks okay, hence didn't change it.\n\n> One other thought: functions like pg_logical_emit_message() return an\n> LSN, but if you feed that into pg_walinspect you get the *next* record.\n> That makes sense because pg_logical_emit_message() returns the result\n> of XLogInsertRecord(), which is the end of the last inserted record.\n> But it can be slightly annoying/confusing. I don't have any particular\n> suggestion, but maybe it's worth a mention in the docs or something?\n\nYes, all the pg_walinspect functions would find the next valid WAL\nrecord to the input/start LSN and start returning the details from\nthen.\n\nIMO, the descriptions of these functions have already specified it:\n\npg_get_wal_record_info\n Gets WAL record information of a given LSN. 
If the given LSN isn't\n containing a valid WAL record, it gives the information of the next\n available valid WAL record. This function emits an error if a future (the\n\nall other functions say this:\n Gets information/statistics of all the valid WAL records between/from\n\nAttaching v17 patch-set with the above review comments addressed.\nPlease have a look at it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 6 Apr 2022 14:15:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Wed, Apr 6, 2022 at 2:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Attaching v17 patch-set with the above review comments addressed.\n> Please have a look at it.\n\nHad to rebase because of 5c279a6d350 (Custom WAL Resource Managers.).\nPlease find v18 patch-set.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 7 Apr 2022 15:35:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Thu, Apr 7, 2022 at 3:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n\nI am facing the below doc build failure on my machine due to this work:\n\n./filelist.sgml:<!ENTITY pgwalinspect SYSTEM \"pgwalinspect.sgml\">\nTabs appear in SGML/XML files\nmake: *** [check-tabs] Error 1\n\nThe attached patch fixes this for me.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 11 Apr 2022 16:21:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Apr 11, 2022 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 7, 2022 at 3:35 PM Bharath Rupireddy\n> 
<bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n>\n> I am facing the below doc build failure on my machine due to this work:\n>\n> ./filelist.sgml:<!ENTITY pgwalinspect SYSTEM \"pgwalinspect.sgml\">\n> Tabs appear in SGML/XML files\n> make: *** [check-tabs] Error 1\n>\n> The attached patch fixes this for me.\n\nThanks. It looks like there's a TAB in between. Your patch LGTM.\n\nI'm wondering why this hasn't been caught in the build farm members\n(or it may have been found but I'm missing to locate it.).\n\nCan you please provide me with the doc build command to catch these\nkinds of errors?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 11 Apr 2022 18:33:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" }, { "msg_contents": "On Mon, Apr 11, 2022 at 6:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 7, 2022 at 3:35 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> >\n> > I am facing the below doc build failure on my machine due to this work:\n> >\n> > ./filelist.sgml:<!ENTITY pgwalinspect SYSTEM \"pgwalinspect.sgml\">\n> > Tabs appear in SGML/XML files\n> > make: *** [check-tabs] Error 1\n> >\n> > The attached patch fixes this for me.\n>\n> Thanks. It looks like there's a TAB in between. Your patch LGTM.\n>\n> I'm wondering why this hasn't been caught in the build farm members\n> (or it may have been found but I'm missing to locate it.).\n>\n> Can you please provide me with the doc build command to catch these\n> kinds of errors?\n>\n\nNothing special. 
In the doc/src/sgml, I did make clean followed by make check.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Apr 2022 09:26:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walinspect - a new extension to get raw WAL data and WAL stats" } ]
[ { "msg_contents": "Good morning, \n\nI am having a problem performing a migration using the pg_upgrade command. \n\nWhen I run it, it shows me an incompatibility error for the date/time format. \n\nWith the pg_controldata command I can see that on the 8.3 cluster the date/time type is: floating-point numbers \n\nOn the 9.3 cluster the date/time type is: 64-bit integers. \n\nI am running these commands: \n\n1 - pg_upgrade.exe -u postgres -d \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\data\" -D \"F:\\PostgreSQL\\9.3\\data\" -b \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\bin\" -B \"F:\\PostgreSQL\\9.3\\bin\" \n\nError: mapped win32 error code 2 to 2\n Old and new pg_controldata date/time storage types do not match. \n\n You will need to rebuild the new server with configure option\n --disable-integer-datetimes or get server binaries built with those options. \n\n2 - pg_upgrade.exe -u postgres --disable-integer-datetimes -d \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\data\" -D \"F:\\PostgreSQL\\9.3\\data\" -b \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\bin\" -B \"F:\\PostgreSQL\\9.3\\bin\" \n\nError: pg_upgrade.exe: illegal option -- disable-integer-datetimes \n\n Try \"pg_upgrade --help\" for more information. \n\nCould you please help me figure out how I can solve this problem so that I can deploy the new version? \n\nThank you, and I look forward to your reply.\n\n-- \nRegards \n\nOswaldo\nPlanin Sistemas", "msg_date": "Wed, 08 Sep 2021 11:21:30 -0300", "msg_from": "oswaldo.bregnoles@planin.com.br", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Migra=C3=A7=C3=A3o_Postgresql_8=2E3_para_vers=C3=A3o_P?=\n =?UTF-8?Q?ostgresql_9=2E3?=" }, { "msg_contents": "On Wed, Sep 8, 2021 at 11:21 AM <oswaldo.bregnoles@planin.com.br>\nwrote:\n\n> Good morning,\n>\n> I am having a problem performing a migration using the pg_upgrade command.\n>\n> When I run it, it shows me an incompatibility error for the date/time format.\n>\n> With the pg_controldata command I can see that on the 8.3 cluster the date/time type is\n> : floating-point numbers\n>\n> On the 9.3 cluster the date/time type is: 64-bit integers.\n>\n> I am running these commands:\n>\n> 1 - pg_upgrade.exe -u postgres -d \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\data\" -D \"F:\\PostgreSQL\\9.3\\data\" -b \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\bin\" -B \"F:\\PostgreSQL\\9.3\\bin\"\n>\n> Error: mapped win32 error code 2 to 2\n> Old and new pg_controldata date/time storage types do not match.\n>\n> You will need to rebuild the new server with configure option\n> --disable-integer-datetimes or get server binaries built with those options.\n>\n> 2 - pg_upgrade.exe -u postgres --disable-integer-datetimes -d \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\data\" -D \"F:\\PostgreSQL\\9.3\\data\" -b \"F:\\Arquivos de Programas (x86)\\PostgreSQL\\8.3\\bin\" -B \"F:\\PostgreSQL\\9.3\\bin\"\n>\n> Error: pg_upgrade.exe: illegal option -- disable-integer-datetimes\n>\n> Try \"pg_upgrade --help\" for more information.\n>\n> Could you please help me figure out how I can solve this problem so that I can deploy the new version?\n>\n> Thank you, and I look forward to your reply.\n>\nHello Oswaldo,\nI'm sorry, but this list is not the appropriate place for these questions.\nPlease send these questions to:\nhttps://www.postgresql.org/list/pgsql-general/\n\nRanier Vilela", "msg_date": "Wed, 8 Sep 2021 13:20:21 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Migra=C3=A7=C3=A3o_Postgresql_8=2E3_para_vers=C3=A3o_Postgresq?=\n\t=?UTF-8?Q?l_9=2E3?=" }, { "msg_contents": "On Wed, Sep 8, 2021 at 12:20 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Hello Oswaldo,\n> I'm sorry, but this list is not the appropriate place for these questions.\n> Please send these questions to:\n> https://www.postgresql.org/list/pgsql-general/\n\nThe Portuguese list might be a better choice:\n\nhttps://www.postgresql.org/list/pgsql-pt-geral/\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Sep 2021 10:51:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re=3A_Migra=C3=A7=C3=A3o_Postgresql_8=2E3_para_vers=C3=A3o_Postgresq?=\n\t=?UTF-8?Q?l_9=2E3?=" } ]
[ { "msg_contents": "Hi,\n\nWhile hacking on AIO I wanted to build the windows portion from linux. That\nworks surprisingly well with cross-building using --host=x86_64-w64-mingw32 .\n\nWhat didn't work as well was running things under wine. It turns out that the\nserver itself works ok, but that initdb hangs because of a bug in wine ([1]),\nleading to the bootstrap process hanging while trying to read more input.\n\n\nWhich made me wonder: What is really the point of doing so much setup as part\nof initdb? Of course a wine bug isn't a reason to change anything, but I see\nother reasons it might be worth thinking about moving more of initdb's logic\ninto the backend.\n\nThere of course is historical raisins for things happening in initdb - the\nsetup logic didn't use to be C. But now that it is C, it seems a bit absurd to\nread bootstrap data in initdb, write the data to a pipe, and then read it\nagain in the backend. It for sure doesn't make things faster.\n\nIf more of initdb happened in the backend, it seems plausible that we could\navoid the restart of the server between bootstrap and the later setup phases -\nwhich likely would result in a decent speedup. And trialing different\nmax_connection and shared_buffer settings would be a lot faster without\nretries.\n\nBesides potential speedups I also think there's architectural reasons to\nprefer doing some of initdb's work in the backend - it would allow to avoid\nsome duplicated infrastructure and avoid leaking subsystem details to one more\nplace outside the subsystem.\n\n\nThe reason I CCed Peter is that he at some point proposed ([2]) having the\nbackend initialize itself via a base backup. I think if we generally moved\nmore of the data directory initialization into the backend that'd probably\narchitecturally work a bit better.\n\n\nI'm not planning to work on this in the near future. But I would like to do so\nat some point. 
And it might be worth considering pushing future additions to\ninitidb to be moved server-side via functions that initdb calls, rather than\nhaving initdb control everything.\n\nGreetings,\n\nAndres Freund\n\n[1] https://bugs.winehq.org/show_bug.cgi?id=51719\n[2] https://www.postgresql.org/message-id/61b8d18d-c922-ac99-b990-a31ba63cdcbb%402ndquadrant.com\n\n\n", "msg_date": "Wed, 8 Sep 2021 12:07:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Why does bootstrap and later initdb stages happen via client?" }, { "msg_contents": "\nOn 9/8/21 3:07 PM, Andres Freund wrote:\n> Hi,\n>\n> While hacking on AIO I wanted to build the windows portion from linux. That\n> works surprisingly well with cross-building using --host=x86_64-w64-mingw32 .\n>\n> What didn't work as well was running things under wine. It turns out that the\n> server itself works ok, but that initdb hangs because of a bug in wine ([1]),\n> leading to the bootstrap process hanging while trying to read more input.\n>\n>\n> Which made me wonder: What is really the point of doing so much setup as part\n> of initdb? Of course a wine bug isn't a reason to change anything, but I see\n> other reasons it might be worth thinking about moving more of initdb's logic\n> into the backend.\n>\n> There of course is historical raisins for things happening in initdb - the\n> setup logic didn't use to be C. But now that it is C, it seems a bit absurd to\n> read bootstrap data in initdb, write the data to a pipe, and then read it\n> again in the backend. 
It for sure doesn't make things faster.\n>>\n>> I guess the downside would be that we'd need to teach the backend how to\ndo more stuff that only needs to be done once per cluster, and then that\ncode would be dead space for the rest of the lifetime of the cluster.\n\n\nMaybe the difference is sufficiently small that it doesn't matter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Sep 2021 16:24:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why does bootstrap and later initdb stages happen via client?" }, { "msg_contents": "Hi,\n\nOn September 8, 2021 1:24:00 PM PDT, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>On 9/8/21 3:07 PM, Andres Freund wrote:\n>> There of course is historical raisins for things happening in initdb - the\n>> setup logic didn't use to be C. But now that it is C, it seems a bit absurd to\n>> read bootstrap data in initdb, write the data to a pipe, and then read it\n>> again in the backend. It for sure doesn't make things faster.\n>\n>\n>I guess the downside would be that we'd need to teach the backend how to\n>do more stuff that only needs to be done once per cluster, and then that\n>code would be dead space for the rest of the lifetime of the cluster.\n>\n>\n>Maybe the difference is sufficiently small that it doesn't matter.\n\nUnused code doesn't itself cost much - the OS won't even page it in. And disk space wise, there's not much difference between code in initdb and code in postgres. It's callsites to the code that can be problematic. But we're already paying the price via --boot and a fair number of if (bootstrap) blocks.\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Wed, 08 Sep 2021 14:48:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why does bootstrap and later initdb stages happen via client?" }, { "msg_contents": "On 08.09.21 21:07, Andres Freund wrote:\n> There of course is historical raisins for things happening in initdb - the\n> setup logic didn't use to be C. But now that it is C, it seems a bit absurd to\n> read bootstrap data in initdb, write the data to a pipe, and then read it\n> again in the backend. It for sure doesn't make things faster.\n\nA couple of things I was looking into a while ago: We could probably \nget a bit of performance by replacing the line-by-line substitutions \n(replace_token()) by processing the whole buffer at once. And we could \nget even more performance by not doing any post-processing of the files \nat all. For example, we don't need to replace_token() SIZEOF_POINTER, \nwhich is known at compile time. Handling ENCODING, LC_COLLATE, etc. is \nnot quite as obvious, but moving some of that logic into the backend \ncould be helpful in that direction.\n\n\n", "msg_date": "Thu, 9 Sep 2021 14:11:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Why does bootstrap and later initdb stages happen via client?" }, { "msg_contents": "\nOn 9/8/21 5:48 PM, Andres Freund wrote:\n> Hi,\n>\n> On September 8, 2021 1:24:00 PM PDT, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 9/8/21 3:07 PM, Andres Freund wrote:\n>>> There of course is historical raisins for things happening in initdb - the\n>>> setup logic didn't use to be C. But now that it is C, it seems a bit absurd to\n>>> read bootstrap data in initdb, write the data to a pipe, and then read it\n>>> again in the backend. 
It for sure doesn't make things faster.\n>>\n>> I guess the downside would be that we'd need to teach the backend how to\n>> do more stuff that only needs to be done once per cluster, and then that\n>> code would be dead space for the rest of the lifetime of the cluster.\n>>\n>>\n>> Maybe the difference is sufficiently small that it doesn't matter.\n> Unused code doesn't itself cost much - the OS won't even page it in. And disk space wise, there's not much difference between code in initdb and code in postgres. It's callsites to the code that can be problematic. But there were already paying the price via --boot and a fair number of if (bootstrap) blocks.\n>\n\nFair enough. You're quite right, of course, the original design of\ninitdb.c was to do what the preceding shell script did as closely as\npossible. It does leak a bit of memory, which doesn't matter in the\ncontext of a short-lived program - but that shouldn't be too hard to\nmanage in the backend.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 9 Sep 2021 11:52:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Why does bootstrap and later initdb stages happen via client?" } ]
[ { "msg_contents": "Hi hackers,\n\nIn PG12 and PG13 LookupFuncName would return procedures as well as\nfunctions while in PG14 since commit e56bce5d [0] it would disregard\nall procedures\nand not return them as match.\n\nIs this intended behaviour or an unintended side effect of the refactoring?\n\nSven\n\n[0] https://github.com/postgres/postgres/commit/e56bce5d\n\n\n", "msg_date": "Thu, 9 Sep 2021 02:01:11 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": true, "msg_subject": "Regression in PG14 LookupFuncName" }, { "msg_contents": "Sven Klemm <sven@timescale.com> writes:\n> In PG12 and PG13 LookupFuncName would return procedures as well as\n> functions while in PG14 since commit e56bce5d [0] it would disregard\n> all procedures\n> and not return them as match.\n> Is this intended behaviour or an unintended side effect of the refactoring?\n\nIt was intentional, because all internal callers of LookupFuncName only\nwant to see functions. See the last few messages in the referenced\ndiscussion thread:\n\nhttps://www.postgresql.org/message-id/flat/3742981.1621533210%40sss.pgh.pa.us\n\nYou should be able to use LookupFuncWithArgs if you want a different\ndefinition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Sep 2021 20:13:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression in PG14 LookupFuncName" }, { "msg_contents": "> It was intentional, because all internal callers of LookupFuncName only\n> want to see functions. See the last few messages in the referenced\n> discussion thread:\n>\n> https://www.postgresql.org/message-id/flat/3742981.1621533210%40sss.pgh.pa.us\n\nThank you for the clarification.\n\n--\nRegards, Sven Klemm\n\n\n", "msg_date": "Thu, 9 Sep 2021 12:02:43 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Regression in PG14 LookupFuncName" } ]
[ { "msg_contents": "Hello.\n\nSometimes, I needed to infer where a past checkpoint make wal segments\nunnecessary up to, or just to know the LSN at a past point in\ntime. But there's no convenient source for that.\n\nThe attached small patch enables me (or us) to do that by looking into\nserver log files.\n\n\n> LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.008 s, sync=0.035 s, total=0.064 s; sync files=4, longest=0.017 s, average=0.009 s; distance=16420 kB, estimate=16420 kB, redo=0/30091D8\n\nDoes that make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 09 Sep 2021 14:58:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Show redo LSN in checkpoint logs" } ]
[ { "msg_contents": "Hello hackers,\n\nIn pg_import_system_collations() there is this fragment of code:\n\nenc = pg_get_encoding_from_locale(localebuf, false);\nif (enc < 0)\n{\n\t/* error message printed by pg_get_encoding_from_locale() */\n\tcontinue;\n}\n\nHowever, false passed to pg_get_encoding_from_locale() means \nwrite_message argument is false, so no error message is ever printed.\nI propose an obvious patch (see attachment).\n\nIntroduced in aa17c06fb in January 2017 when debug was replaced by \nfalse, so I guess back-patching through 10 would be appropriate.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru", "msg_date": "Thu, 9 Sep 2021 13:45:05 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "missing warning in pg_import_system_collations" }, { "msg_contents": "On Thu, Sep 9, 2021 at 3:45 AM Anton Voloshin <\na.voloshin@postgrespro.ru> wrote:\n\n> Hello hackers,\n>\n> In pg_import_system_collations() there is this fragment of code:\n>\n> enc = pg_get_encoding_from_locale(localebuf, false);\n> if (enc < 0)\n> {\n>         /* error message printed by pg_get_encoding_from_locale() */\n>         continue;\n> }\n>\n> However, false passed to pg_get_encoding_from_locale() means\n> write_message argument is false, so no error message is ever printed.\n> I propose an obvious patch (see attachment).\n>\nYeah, seems correct to me.\nThe comment clearly expresses the intention.\n\n\n> Introduced in aa17c06fb in January 2017 when debug was replaced by\n> false, so I guess back-patching through 10 would be appropriate.\n>\nThis is an oversight.\n\n+1 from me.\n\nRanier Vilela", "msg_date": "Thu, 9 Sep 2021 08:46:25 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: missing warning in pg_import_system_collations" }, { "msg_contents": "Anton Voloshin <a.voloshin@postgrespro.ru> writes:\n> In pg_import_system_collations() there is this fragment of code:\n\n> enc = pg_get_encoding_from_locale(localebuf, false);\n> if (enc < 0)\n> {\n> \t/* error message printed by pg_get_encoding_from_locale() */\n> \tcontinue;\n> }\n\n> However, false passed to pg_get_encoding_from_locale() means \n> write_message argument is false, so no error message is ever printed.\n> I propose an obvious patch (see attachment).\n> Introduced in aa17c06fb in January 2017 when debug was replaced by \n> false, so I guess back-patching through 10 would be appropriate.\n\nI don't think this is obvious at all.\n\nIn the original coding (before aa17c06fb, when this code was in initdb),\nwe printed a warning if \"debug\" was true and otherwise printed nothing.\nThe other \"if (debug)\" cases in the code that got moved over were\ntranslated to \"elog(DEBUG1)\", but there isn't any API to make\npg_get_encoding_from_locale() log at that level.\n\nWhat you propose to do here would promote 
this case from\nought-to-be-DEBUG1 to WARNING, which seems to me to be way too much in the\nuser's face. Or, if there actually is a case for complaining, then all\nthose messages ought to be WARNING not DEBUG1. But I'm inclined to think\nthat having pg_import_system_collations silently ignore unusable locales\nis the right thing most of the time.\n\nAssuming we don't want to change pg_get_encoding_from_locale()'s API,\nthe simplest fix is to duplicate its error message, so more or less\n\n if (enc < 0)\n {\n- /* error message printed by pg_get_encoding_from_locale() */\n+ elog(DEBUG1, \"could not determine encoding for locale \\\"%s\\\"\",\n+ localebuf)));\n continue;\n }\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 10:51:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: missing warning in pg_import_system_collations" }, { "msg_contents": "On 09/09/2021 21:51, Tom Lane wrote:\n> What you propose to do here would promote this case from\n> ought-to-be-DEBUG1 to WARNING, which seems to me to be way too much in the\n> user's face. Or, if there actually is a case for complaining, then all\n> those messages ought to be WARNING not DEBUG1. ...\n> \n> Assuming we don't want to change pg_get_encoding_from_locale()'s API,\n> the simplest fix is to duplicate its error message, so more or less\n> \n> if (enc < 0)\n> {\n> - /* error message printed by pg_get_encoding_from_locale() */\n> + elog(DEBUG1, \"could not determine encoding for locale \\\"%s\\\"\",\n> + localebuf)));\n> continue;\n> }\n\nUpon thinking a little more, I agree.\nThe warnings I happen to get from initdb on my current machine (with \nmany various locales installed, more than on a typical box) are:\n\nperforming post-bootstrap initialization ... 
2021-09-09 22:04:01.678 +07 \n[482312] WARNING: could not determine encoding for locale \n\"hy_AM.armscii8\": codeset is \"ARMSCII-8\"\n2021-09-09 22:04:01.679 +07 [482312] WARNING: could not determine \nencoding for locale \"ka_GE\": codeset is \"GEORGIAN-PS\"\n2021-09-09 22:04:01.679 +07 [482312] WARNING: could not determine \nencoding for locale \"kk_KZ\": codeset is \"PT154\"\n2021-09-09 22:04:01.679 +07 [482312] WARNING: could not determine \nencoding for locale \"kk_KZ.rk1048\": codeset is \"RK1048\"\n2021-09-09 22:04:01.686 +07 [482312] WARNING: could not determine \nencoding for locale \"tg_TJ\": codeset is \"KOI8-T\"\n2021-09-09 22:04:01.686 +07 [482312] WARNING: could not determine \nencoding for locale \"th_TH\": codeset is \"TIS-620\"\nok\n\nWhile they are definitely interesting as DEBUG1, not so as a WARNING.\n\nSo, +1 from me for your proposed elog(DEBUG1, ...); patch. Thank you.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n", "msg_date": "Thu, 9 Sep 2021 22:17:35 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: missing warning in pg_import_system_collations" }, { "msg_contents": "Anton Voloshin <a.voloshin@postgrespro.ru> writes:\n> On 09/09/2021 21:51, Tom Lane wrote:\n>> Assuming we don't want to change pg_get_encoding_from_locale()'s API,\n>> the simplest fix is to duplicate its error message, so more or less\n>> \n>> if (enc < 0)\n>> {\n>> - /* error message printed by pg_get_encoding_from_locale() */\n>> + elog(DEBUG1, \"could not determine encoding for locale \\\"%s\\\"\",\n>> + localebuf)));\n>> continue;\n>> }\n\n> Upon thinking a little more, I agree.\n\nAnother approach we could take is to deem the comment incorrect and\njust remove it, codifying the current behavior of silently ignoring\nunrecognized encodings. 
The reason that seems like it might be\nappropriate is that the logic immediately below this bit silently\nignores encodings that are known but are frontend-only:\n\n if (!PG_VALID_BE_ENCODING(enc))\n continue; /* ignore locales for client-only encodings */\n\nIt's sure not very clear to me why one case deserves a message and the\nother not. Perhaps they both do, which would lead to adding another\nDEBUG1 message here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 14:37:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: missing warning in pg_import_system_collations" }, { "msg_contents": "On 10/09/2021 01:37, Tom Lane wrote:\n> Another approach we could take is to deem the comment incorrect and\n> just remove it, codifying the current behavior of silently ignoring\n> unrecognized encodings. The reason that seems like it might be\n> appropriate is that the logic immediately below this bit silently\n> ignores encodings that are known but are frontend-only:\n> \n> if (!PG_VALID_BE_ENCODING(enc))\n> continue; /* ignore locales for client-only encodings */\n> \n> It's sure not very clear to me why one case deserves a message and the\n> other not. Perhaps they both do, which would lead to adding another\n> DEBUG1 message here.\n\nI'm not an expert in locales, but I think it makes some sense to be \nsilent about encodings we have consciously decided to ignore as we have \nthem in our tables, but marked them as frontend-only \n(!PG_VALID_BE_ENCODING(enc)).\nJust like it makes sense to do give a debug-level warning about \nencodings seen in locale -a output but not recognized by us at all \n(pg_get_encoding_from_locale(localebuf, false) < 0).\n\nTherefore I think your patch with duplicated error message is better \nthan what we have currently. 
I don't see how adding debug-level messages \nabout skipping frontend-only encodings would be of any significant use here.\n\nUnless someone more experienced in locales' subtleties would like to \nchime in.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:47:48 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: missing warning in pg_import_system_collations" }, { "msg_contents": "Anton Voloshin <a.voloshin@postgrespro.ru> writes:\n> On 10/09/2021 01:37, Tom Lane wrote:\n>> It's sure not very clear to me why one case deserves a message and the\n>> other not. Perhaps they both do, which would lead to adding another\n>> DEBUG1 message here.\n\n> I'm not an expert in locales, but I think it makes some sense to be \n> silent about encodings we have consciously decided to ignore as we have \n> them in our tables, but marked them as frontend-only \n> (!PG_VALID_BE_ENCODING(enc)).\n\nI'm not really buying that. It seems to me that the only reason anyone\nwould examine this debug output at all is that they want to know \"why\ndidn't this locale (which I can see in 'locale -a' output) get imported?\".\nSo the only cases I'm inclined to not log about are when we skip a locale\nbecause there's already a matching pg_collation entry.\n\nI experimented with the attached draft patch. 
The debug output on my\nRHEL8 box (with a more-or-less-default set of locales) looks like\n\n2021-09-11 12:13:09.908 EDT [41731] DEBUG: could not identify encoding for locale \"hy_AM.armscii8\"\n2021-09-11 12:13:09.909 EDT [41731] DEBUG: could not identify encoding for locale \"ka_GE\"\n2021-09-11 12:13:09.909 EDT [41731] DEBUG: could not identify encoding for locale \"ka_GE.georgianps\"\n2021-09-11 12:13:09.909 EDT [41731] DEBUG: could not identify encoding for locale \"kk_KZ\"\n2021-09-11 12:13:09.909 EDT [41731] DEBUG: could not identify encoding for locale \"kk_KZ.pt154\"\n2021-09-11 12:13:09.926 EDT [41731] DEBUG: could not identify encoding for locale \"tg_TJ\"\n2021-09-11 12:13:09.926 EDT [41731] DEBUG: could not identify encoding for locale \"tg_TJ.koi8t\"\n2021-09-11 12:13:09.926 EDT [41731] DEBUG: could not identify encoding for locale \"th_TH\"\n2021-09-11 12:13:09.926 EDT [41731] DEBUG: could not identify encoding for locale \"th_TH.tis620\"\n2021-09-11 12:13:09.926 EDT [41731] DEBUG: could not identify encoding for locale \"thai\"\n2021-09-11 12:13:09.929 EDT [41731] DEBUG: skipping client-only locale \"zh_CN.gb18030\"\n2021-09-11 12:13:09.929 EDT [41731] DEBUG: skipping client-only locale \"zh_CN.gbk\"\n2021-09-11 12:13:09.930 EDT [41731] DEBUG: skipping client-only locale \"zh_HK\"\n2021-09-11 12:13:09.930 EDT [41731] DEBUG: skipping client-only locale \"zh_HK.big5hkscs\"\n2021-09-11 12:13:09.930 EDT [41731] DEBUG: skipping client-only locale \"zh_SG.gbk\"\n2021-09-11 12:13:09.930 EDT [41731] DEBUG: skipping client-only locale \"zh_TW\"\n2021-09-11 12:13:09.930 EDT [41731] DEBUG: skipping client-only locale \"zh_TW.big5\"\n\nI don't see a good reason to think that someone would be less confused\nabout why we reject zh_HK than why we reject th_TH. 
So I think if we're\ngoing to worry about this then we should add both messages.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 11 Sep 2021 12:19:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: missing warning in pg_import_system_collations" } ]
[ { "msg_contents": "Hi,\n\nThe attached patch adds a small test for recovery_end_command execution.\n\nCurrently, patch tests execution of recovery_end_command by creating\ndummy file, I am not wedded only to this approach, other suggestions\nalso welcome.\n\nAlso, we don't have a good test for archive_cleanup_command as well, I\nam not sure how we could test that which executes with every\nrestart-point.\n\nThanks to my colleague Neha Sharma for confirming the test execution on Windows.\n\nRegards,\nAmul", "msg_date": "Thu, 9 Sep 2021 16:48:16 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "TAP test for recovery_end_command" }, { "msg_contents": "On Thu, Sep 9, 2021, at 8:18 AM, Amul Sul wrote:\n> The attached patch adds a small test for recovery_end_command execution.\nAdditional coverage is always a good thing.\n\n> Currently, patch tests execution of recovery_end_command by creating\n> dummy file, I am not wedded only to this approach, other suggestions\n> also welcome.\nThis test file is for archiving only. It seems 020_archive_status.pl is more\nappropriate for testing this parameter.\n\n> Also, we don't have a good test for archive_cleanup_command as well, I\n> am not sure how we could test that which executes with every\n> restart-point.\nSetup a replica and stop it. It triggers a restartpoint during the shutdown.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Sun, 12 Sep 2021 21:25:32 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" },
{ "msg_contents": "On Sun, Sep 12, 2021 at 09:25:32PM -0300, Euler Taveira wrote:\n> On Thu, Sep 9, 2021, at 8:18 AM, Amul Sul wrote:\n>> Also, we don't have a good test for archive_cleanup_command as well, I\n>> am not sure how we could test that which executes with every\n>> restart-point.\n>\n> Setup a replica and stop it. It triggers a restartpoint during the shutdown.\n\n+$node_standby2->append_conf('postgresql.conf',\n+ \"recovery_end_command='echo recovery_ended > $recovery_end_command_file'\");\nThis is not going to work on Windows.\n--\nMichael", "msg_date": "Mon, 13 Sep 2021 12:14:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Sep 13, 2021 at 8:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Sep 12, 2021 at 09:25:32PM -0300, Euler Taveira wrote:\n> > On Thu, Sep 9, 2021, at 8:18 AM, Amul Sul wrote:\n> >> Also, we don't have a good test for archive_cleanup_command as well, I\n> >> am not sure how we could test that which executes with every\n> >> restart-point.\n> >\n> > Setup a replica and stop it. 
It triggers a restartpoint during the shutdown.\n>\n> +$node_standby2->append_conf('postgresql.conf',\n> + \"recovery_end_command='echo recovery_ended > $recovery_end_command_file'\");\n> This is not going to work on Windows.\n\nUnfortunately, I don't have Windows, but my colleague Neha Sharma has\nconfirmed it works there.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 13 Sep 2021 09:34:32 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Sep 13, 2021 at 5:56 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Sep 9, 2021, at 8:18 AM, Amul Sul wrote:\n>\n> The attached patch adds a small test for recovery_end_command execution.\n>\n> Additional coverage is always a good thing.\n>\n\nThanks for the confirmation.\n\n> Currently, patch tests execution of recovery_end_command by creating\n> dummy file, I am not wedded only to this approach, other suggestions\n> also welcome.\n>\n> This test file is for archiving only. It seems 020_archive_status.pl is more\n> appropriate for testing this parameter.\n>\n\nOk, moved to 020_archive_status.pl in the attached version.\n\n> Also, we don't have a good test for archive_cleanup_command as well, I\n> am not sure how we could test that which executes with every\n> restart-point.\n>\n> Setup a replica and stop it. It triggers a restartpoint during the shutdown.\n\nYeah, added that test too. I triggered the restartpoint via a\nCHECKPOINT command in the attached version.\n\nNote that I haven't tested the current version on Windows, will\ncross-check that tomorrow.\n\nRegards,\nAmul", "msg_date": "Mon, 13 Sep 2021 18:39:13 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Sep 13, 2021, at 10:09 AM, Amul Sul wrote:\n> Yeah, added that test too. 
I triggered the restartpoint via a\n> CHECKPOINT command in the attached version.\n+# archive_cleanup_command executed with every restart points\n+ok( !-f \"$archive_cleanup_command_file\",\n+ 'archive_cleanup_command not executed yet');\n\nWhy are you including a test whose result is known? Fresh cluster does\nnot contain archive_cleanup_command.done or recovery_end_command.done.\n\n+# Checkpoint will trigger restart point on standby.\n+$standby3->safe_psql('postgres', q{CHECKPOINT});\n+ok(-f \"$archive_cleanup_command_file\",\n+ 'archive_cleanup_command executed on checkpoint');\n\nIs this test reliable?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Sep 13, 2021, at 10:09 AM, Amul Sul wrote:Yeah, added that test too. I triggered the restartpoint via aCHECKPOINT command in the attached version.+# archive_cleanup_command executed with every restart points+ok( !-f \"$archive_cleanup_command_file\",+\t'archive_cleanup_command not executed yet');Why are you including a test whose result is known? Fresh cluster doesnot contain archive_cleanup_command.done or recovery_end_command.done.+# Checkpoint will trigger restart point on standby.+$standby3->safe_psql('postgres', q{CHECKPOINT});+ok(-f \"$archive_cleanup_command_file\",+\t'archive_cleanup_command executed on checkpoint');Is this test reliable?--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Mon, 13 Sep 2021 12:09:20 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Sep 13, 2021 at 8:39 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Sep 13, 2021, at 10:09 AM, Amul Sul wrote:\n>\n> Yeah, added that test too. 
I triggered the restartpoint via a\n> CHECKPOINT command in the attached version.\n+# archive_cleanup_command executed with every restart points\n+ok( !-f \"$archive_cleanup_command_file\",\n+ 'archive_cleanup_command not executed yet');\n\nWhy are you including a test whose result is known? Fresh cluster does\nnot contain archive_cleanup_command.done or recovery_end_command.done.\n\n+# Checkpoint will trigger restart point on standby.\n+$standby3->safe_psql('postgres', q{CHECKPOINT});\n+ok(-f \"$archive_cleanup_command_file\",\n+ 'archive_cleanup_command executed on checkpoint');\n\nIs this test reliable?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 13 Sep 2021 12:09:20 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Sep 13, 2021 at 8:39 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Sep 13, 2021, at 10:09 AM, Amul Sul wrote:\n>\n> Yeah, added that test too. 
Hope that makes sense.\n\nRegards,\nAmul", "msg_date": "Tue, 14 Sep 2021 10:34:09 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "Hi,\n\nOn 2021-09-14 10:34:09 +0530, Amul Sul wrote:\n> +# recovery_end_command_file executed only on recovery end which can happen on\n> +# promotion.\n> +$standby3->promote;\n> +ok(-f \"$recovery_end_command_file\",\n> +\t'recovery_end_command executed after promotion');\n\nIt'd be good to test what happens when recovery_end_command fails...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Oct 2021 13:10:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 6, 2021 at 1:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-09-14 10:34:09 +0530, Amul Sul wrote:\n> > +# recovery_end_command_file executed only on recovery end which can happen on\n> > +# promotion.\n> > +$standby3->promote;\n> > +ok(-f \"$recovery_end_command_file\",\n> > + 'recovery_end_command executed after promotion');\n>\n> It'd be good to test what happens when recovery_end_command fails...\n>\n\nThanks for the suggestion, added the same in the attached version.\n\nRegards,\nAmul", "msg_date": "Wed, 6 Oct 2021 18:49:10 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 06, 2021 at 06:49:10PM +0530, Amul Sul wrote:\n> Thanks for the suggestion, added the same in the attached version.\n\nHmm. The run-time of 020_archive_status.p bumps from 4.7s to 5.8s on\nmy laptop, so the change is noticeable. I agree that it would be good\nto have more coverage for those commands, but I also think that we\nshould make things cheaper if we can, particularly knowing that those\ncommands just touch a file. 
This patch creates two standbys for its\npurpose, but do we need that much?\n\nOn top of that, 020_archive_status.pl does not look like the correct\nplace for this set of tests. 002_archiving.pl would be a better\ncandidate, where we already have two standbys that get promoted, so\nyou could have both the failure and success cases there. There should\nbe no need for extra wait phases either.\n\n+$standby4->append_conf('postgresql.conf',\n+ \"recovery_end_command = 'echo recovery_ended > /non_existing_dir/file'\");\nI am wondering how this finishes on Windows.\n--\nMichael", "msg_date": "Wed, 20 Oct 2021 14:39:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 20, 2021 at 11:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 06, 2021 at 06:49:10PM +0530, Amul Sul wrote:\n> > Thanks for the suggestion, added the same in the attached version.\n>\n> Hmm. The run-time of 020_archive_status.p bumps from 4.7s to 5.8s on\n> my laptop, so the change is noticeable. I agree that it would be good\n> to have more coverage for those commands, but I also think that we\n> should make things cheaper if we can, particularly knowing that those\n> commands just touch a file. This patch creates two standbys for its\n> purpose, but do we need that much?\n>\n> On top of that, 020_archive_status.pl does not look like the correct\n> place for this set of tests. 002_archiving.pl would be a better\n> candidate, where we already have two standbys that get promoted, so\n> you could have both the failure and success cases there. 
There should\n> be no need for extra wait phases either.\n>\n\nUnderstood, moved tests to 002_archiving.pl in the attached version.\n\n> +$standby4->append_conf('postgresql.conf',\n> + \"recovery_end_command = 'echo recovery_ended > /non_existing_dir/file'\");\n> I am wondering how this finishes on Windows.\n\nMy colleague Neha Sharma has confirmed that the attached version is\npassing on the Windows.\n\nRegards,\nAmul", "msg_date": "Mon, 25 Oct 2021 14:42:28 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Mon, Oct 25, 2021 at 02:42:28PM +0530, Amul Sul wrote:\n> Understood, moved tests to 002_archiving.pl in the attached version.\n\nThanks for the new patch. I have reviewed its contents, and there\nwere a couple of things that caught my attention while putting my\nhands on it.\n\n+$node_standby->append_conf('postgresql.conf',\n+\t\"archive_cleanup_command = 'echo archive_cleanuped > $archive_cleanup_command_file'\");\n+$node_standby->append_conf('postgresql.conf',\n+\t\"recovery_end_command = 'echo recovery_ended > $recovery_end_command_file'\");\nThis can be formatted with a single append_conf() call and qq() to\nhave the correct set of quotes.\n\n+$node_primary->safe_psql('postgres', \"CHECKPOINT\");\n my $current_lsn =\n $node_primary->safe_psql('postgres', \"SELECT pg_current_wal_lsn();\");\nThis had better say that the checkpoint is necessary because we need\none before switching to a new segment on the primary, as much as the\ncheckpoint on the first standby is needed to trigger the command whose\nexecution is checked.\n\n+$node_standby2->append_conf('postgresql.conf',\n+\t\"archive_cleanup_command = 'echo xyz > $data_dir/unexisting_dir/xyz.file'\");\n+$node_standby2->append_conf('postgresql.conf',\n+\t\"recovery_end_command = 'echo xyz > $data_dir/unexisting_dir/xyz.file'\");\n[...]\n+# Failing to execute archive_cleanup_command and/or 
recovery_end_command does\n+# not affect promotion.\n+is($node_standby2->safe_psql( 'postgres', q{SELECT pg_is_in_recovery()}), 'f',\n+\t\"standby promoted successfully despite incorrect archive_cleanup_command and recovery_end_command\");\n\nThis SQL test is mostly useless IMO, as the promote() call done above\nensures that this state is reached properly, and the same thing could\nbe with the removals of RECOVERYHISTORY and RECOVERYXLOG. I think\nthat it would be better to check directly if the commands are run or\nnot. This is simple to test: look at the logs from a position just\nbefore the promotion, slurp the log file of $standby2 from this\nposition, and finally compare its contents with a regex of your\nchoice. I have chosen a simple \"qr/WARNING:.*recovery_end_command/s\"\nfor the purpose of this test. Having a test for\narchive_cleanup_command here would be nice, but that would be much\nmore costly than the end-of-recovery command, so I have left that\nout. Perhaps we could just append it in the conf as a dummy, as you\ndid, though, but its execution is not deterministic in this test so we\nare better without for now IMO.\n\nperltidy was also complaining a bit, this is fixed as of the attached.\nI have checked things on my own Windows dev box, while on it.\n--\nMichael", "msg_date": "Wed, 27 Oct 2021 13:07:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 27, 2021 at 9:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 25, 2021 at 02:42:28PM +0530, Amul Sul wrote:\n> > Understood, moved tests to 002_archiving.pl in the attached version.\n>\n> Thanks for the new patch. 
I have reviewed its contents, and there\n> were a couple of things that caught my attention while putting my\n> hands on it.\n>\n> +$node_standby->append_conf('postgresql.conf',\n> + \"archive_cleanup_command = 'echo archive_cleanuped > $archive_cleanup_command_file'\");\n> +$node_standby->append_conf('postgresql.conf',\n> + \"recovery_end_command = 'echo recovery_ended > $recovery_end_command_file'\");\n> This can be formatted with a single append_conf() call and qq() to\n> have the correct set of quotes.\n>\n> +$node_primary->safe_psql('postgres', \"CHECKPOINT\");\n> my $current_lsn =\n> $node_primary->safe_psql('postgres', \"SELECT pg_current_wal_lsn();\");\n> This had better say that the checkpoint is necessary because we need\n> one before switching to a new segment on the primary, as much as the\n> checkpoint on the first standby is needed to trigger the command whose\n> execution is checked.\n>\n> +$node_standby2->append_conf('postgresql.conf',\n> + \"archive_cleanup_command = 'echo xyz > $data_dir/unexisting_dir/xyz.file'\");\n> +$node_standby2->append_conf('postgresql.conf',\n> + \"recovery_end_command = 'echo xyz > $data_dir/unexisting_dir/xyz.file'\");\n> [...]\n> +# Failing to execute archive_cleanup_command and/or recovery_end_command does\n> +# not affect promotion.\n> +is($node_standby2->safe_psql( 'postgres', q{SELECT pg_is_in_recovery()}), 'f',\n> + \"standby promoted successfully despite incorrect archive_cleanup_command and recovery_end_command\");\n>\n> This SQL test is mostly useless IMO, as the promote() call done above\n> ensures that this state is reached properly, and the same thing could\n> be with the removals of RECOVERYHISTORY and RECOVERYXLOG. I think\n> that it would be better to check directly if the commands are run or\n> not. This is simple to test: look at the logs from a position just\n> before the promotion, slurp the log file of $standby2 from this\n> position, and finally compare its contents with a regex of your\n> choice. 
I have chosen a simple \"qr/WARNING:.*recovery_end_command/s\"\n> for the purpose of this test. Having a test for\n> archive_cleanup_command here would be nice, but that would be much\n> more costly than the end-of-recovery command, so I have left that\n> out. Perhaps we could just append it in the conf as a dummy, as you\n> did, though, but its execution is not deterministic in this test so we\n> are better without for now IMO.\n>\n> perltidy was also complaining a bit, this is fixed as of the attached.\n> I have checked things on my own Windows dev box, while on it.\n\nThanks for the updated version. The patch is much better than before\nexcept needing minor changes to the test description that testing\nrecovery_end_command_file before promotion, I did the same in the\nattached version.\n\nRegards,\nAmul", "msg_date": "Wed, 27 Oct 2021 10:32:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 27, 2021 at 10:32:22AM +0530, Amul Sul wrote:\n> Thanks for the updated version. The patch is much better than before\n> except needing minor changes to the test description that testing\n> recovery_end_command_file before promotion, I did the same in the\n> attached version.\n\n ok(!-f $recovery_end_command_file,\n- 'recovery_end_command executed after promotion');\n+ 'recovery_end_command not executed yet');\nIndeed :p\n--\nMichael", "msg_date": "Wed, 27 Oct 2021 14:20:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Wed, Oct 27, 2021 at 02:20:50PM +0900, Michael Paquier wrote:\n> ok(!-f $recovery_end_command_file,\n> - 'recovery_end_command executed after promotion');\n> + 'recovery_end_command not executed yet');\n> Indeed :p\n\nWhile looking at that this morning, I have noticed an extra bug. 
If\nthe path of the data folder included a space, the command would have\nfailed. I have to wonder if we should do something like\ncp_history_files, but that did not seem necessary to me once we rely\non each command being executed from the root of the data folder.\n\nAnyway, I am curious to see what the buildfarm thinks about all that,\nparticularly with Msys, so I have applied the patch. I am keeping an\neye on things, though.\n--\nMichael", "msg_date": "Thu, 28 Oct 2021 10:54:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Thu, Oct 28, 2021 at 7:24 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 27, 2021 at 02:20:50PM +0900, Michael Paquier wrote:\n> > ok(!-f $recovery_end_command_file,\n> > - 'recovery_end_command executed after promotion');\n> > + 'recovery_end_command not executed yet');\n> > Indeed :p\n>\n> While looking at that this morning, I have noticed an extra bug. If\n> the path of the data folder included a space, the command would have\n> failed. I have to wonder if we should do something like\n> cp_history_files, but that did not seem necessary to me once we rely\n> on each command being executed from the root of the data folder.\n>\n> Anyway, I am curious to see what the buildfarm thinks about all that,\n> particularly with Msys, so I have applied the patch. I am keeping an\n> eye on things, though.\n\nThanks a lot, Michael.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 28 Oct 2021 09:25:43 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test for recovery_end_command" }, { "msg_contents": "On Thu, Oct 28, 2021 at 09:25:43AM +0530, Amul Sul wrote:\n> Thanks a lot, Michael.\n\nSo.. The buildfarm members running on Windows and running the\nrecovery tests are jacana (MinGW) and fairywren (Msys), both reporting\ngreen. 
drongo (MSVC) skips those tests, but I have tested MSVC by\nmyself so I think that we are good here.\n--\nMichael", "msg_date": "Fri, 29 Oct 2021 10:54:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test for recovery_end_command" } ]
[ { "msg_contents": "I was wondering what happens when the 32 TiB per table limit is\nreached, so I faked 32767 1 GiB sparse files using dd and then tried\ninserting more rows.\n\nOn a cassert-enabled build I got:\n\nTRAP: FailedAssertion(\"tagPtr->blockNum != P_NEW\", File: \"./build/../src/backend/storage/buffer/buf_table.c\", Line: 125)\n\nOn a normal build, I got:\n\nERROR: cannot extend file \"base/18635/53332\" beyond 4294967295 blocks\nORT: mdextend, md.c:443\n\nShouldn't the cassert build raise the ERROR instead as well?\n\nPostgreSQL 13.4.\n\nChristoph\n-- \nSenior Consultant, Tel.: +49 2166 9901 187\ncredativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Sascha Heuer, Geoff Richardson,\nPeter Lilley; Unser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Thu, 9 Sep 2021 13:52:41 +0200", "msg_from": "Christoph Berg <christoph.berg@credativ.de>", "msg_from_op": true, "msg_subject": "trap instead of error on 32 TiB table" }, { "msg_contents": "On Thu, Sep 9, 2021 at 7:52 AM Christoph Berg\n<christoph.berg@credativ.de> wrote:\n> Shouldn't the cassert build raise the ERROR instead as well?\n\nWe should definitely get an ERROR in both cases, not an assertion\nfailure. 
Exactly which ERROR we get seems negotiable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Sep 2021 09:30:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: trap instead of error on 32 TiB table" }, { "msg_contents": "Christoph Berg <christoph.berg@credativ.de> writes:\n> I was wondering what happens when the 32 TiB per table limit is\n> reached, so I faked 32767 1 GiB sparse files using dd and then tried\n> inserting more rows.\n\n> On a cassert-enabled build I got:\n\n> TRAP: FailedAssertion(\"tagPtr->blockNum != P_NEW\", File: \"./build/../src/backend/storage/buffer/buf_table.c\", Line: 125)\n\nCan you provide a stack trace from that?\n\n(or else a recipe for reproducing the bug ... I'm not excited\nabout reverse-engineering the details of the method)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 09:44:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: trap instead of error on 32 TiB table" }, { "msg_contents": "Re: Tom Lane\n> Can you provide a stack trace from that?\n\nPG log:\n\nTRAP: FailedAssertion(\"tagPtr->blockNum != P_NEW\", File: \"./build/../src/backend/storage/buffer/buf_table.c\", Line: 125)\npostgres: 13/main: cbe postgres [local] INSERT(ExceptionalCondition+0x7d)[0x558b6223d44d]\npostgres: 13/main: cbe postgres [local] INSERT(BufTableInsert+0x89)[0x558b620bafb9]\npostgres: 13/main: cbe postgres [local] INSERT(+0x441827)[0x558b620bf827]\npostgres: 13/main: cbe postgres [local] INSERT(ReadBufferExtended+0x7a)[0x558b620c021a]\npostgres: 13/main: cbe postgres [local] INSERT(RelationGetBufferForTuple+0x250)[0x558b61da7850]\npostgres: 13/main: cbe postgres [local] INSERT(heap_insert+0x8b)[0x558b61d965cb]\npostgres: 13/main: cbe postgres [local] INSERT(+0x123b89)[0x558b61da1b89]\npostgres: 13/main: cbe postgres [local] INSERT(+0x30294c)[0x558b61f8094c]\npostgres: 13/main: cbe postgres [local] 
INSERT(+0x303660)[0x558b61f81660]\npostgres: 13/main: cbe postgres [local] INSERT(standard_ExecutorRun+0x115)[0x558b61f4eaa5]\npostgres: 13/main: cbe postgres [local] INSERT(+0x47fb72)[0x558b620fdb72]\npostgres: 13/main: cbe postgres [local] INSERT(+0x4809be)[0x558b620fe9be]\npostgres: 13/main: cbe postgres [local] INSERT(PortalRun+0x1c2)[0x558b620fee72]\npostgres: 13/main: cbe postgres [local] INSERT(+0x47c2e0)[0x558b620fa2e0]\npostgres: 13/main: cbe postgres [local] INSERT(PostgresMain+0x1a53)[0x558b620fc153]\npostgres: 13/main: cbe postgres [local] INSERT(+0x3e7f74)[0x558b62065f74]\npostgres: 13/main: cbe postgres [local] INSERT(PostmasterMain+0xd78)[0x558b62066e68]\npostgres: 13/main: cbe postgres [local] INSERT(main+0x796)[0x558b61d40356]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7f57071fad0a]\npostgres: 13/main: cbe postgres [local] INSERT(_start+0x2a)[0x558b61d403fa]\n2021-09-09 13:19:31.024 CEST [1533530] LOG: server process (PID 1534108) was terminated by signal 6: Aborted\n2021-09-09 13:19:31.024 CEST [1533530] DETAIL: Failed process was running: insert into huge select generate_series(1,10);\n\ngdb bt:\n\nCore was generated by `postgres: 13/main: cbe postgres [local] INSERT '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50\t../sysdeps/unix/sysv/linux/raise.c: Datei oder Verzeichnis nicht gefunden.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f57071f9537 in __GI_abort () at abort.c:79\n#2 0x0000558b6223d46e in ExceptionalCondition (conditionName=conditionName@entry=0x558b623b2577 \"tagPtr->blockNum != P_NEW\",\n errorType=errorType@entry=0x558b6229b016 \"FailedAssertion\",\n fileName=fileName@entry=0x558b623b2598 \"./build/../src/backend/storage/buffer/buf_table.c\", lineNumber=lineNumber@entry=125)\n at ./build/../src/backend/utils/error/assert.c:67\n#3 0x0000558b620bafb9 in BufTableInsert 
(tagPtr=tagPtr@entry=0x7ffec8919330, hashcode=hashcode@entry=960067002, buf_id=<optimized out>)\n at ./build/../src/backend/storage/buffer/buf_table.c:125\n#4 0x0000558b620bf827 in BufferAlloc (foundPtr=0x7ffec891932b, strategy=0x0, blockNum=4294967295, forkNum=MAIN_FORKNUM,\n relpersistence=112 'p', smgr=0x558b62ed4b38) at ./build/../src/backend/storage/buffer/bufmgr.c:1234\n#5 ReadBuffer_common (smgr=0x558b62ed4b38, relpersistence=<optimized out>, forkNum=forkNum@entry=MAIN_FORKNUM,\n blockNum=blockNum@entry=4294967295, mode=mode@entry=RBM_ZERO_AND_LOCK, strategy=0x0, hit=0x7ffec89193d7)\n at ./build/../src/backend/storage/buffer/bufmgr.c:761\n#6 0x0000558b620c021a in ReadBufferExtended (reln=0x7f56fb4f8120, forkNum=forkNum@entry=MAIN_FORKNUM,\n blockNum=blockNum@entry=4294967295, mode=mode@entry=RBM_ZERO_AND_LOCK, strategy=strategy@entry=0x0)\n at ./build/../src/backend/storage/buffer/bufmgr.c:677\n#7 0x0000558b61da7056 in ReadBufferBI (relation=relation@entry=0x7f56fb4f8120, targetBlock=targetBlock@entry=4294967295,\n mode=mode@entry=RBM_ZERO_AND_LOCK, bistate=bistate@entry=0x0) at ./build/../src/backend/access/heap/hio.c:87\n#8 0x0000558b61da7850 in RelationGetBufferForTuple (relation=relation@entry=0x7f56fb4f8120, len=32, otherBuffer=otherBuffer@entry=0,\n options=options@entry=0, bistate=bistate@entry=0x0, vmbuffer=vmbuffer@entry=0x7ffec89194c8, vmbuffer_other=0x0)\n at ./build/../src/backend/access/heap/hio.c:598\n#9 0x0000558b61d965cb in heap_insert (relation=relation@entry=0x7f56fb4f8120, tup=tup@entry=0x558b62f0b780, cid=cid@entry=0,\n options=options@entry=0, bistate=bistate@entry=0x0) at ./build/../src/backend/access/heap/heapam.c:1868\n#10 0x0000558b61da1b89 in heapam_tuple_insert (relation=0x7f56fb4f8120, slot=0x558b62f0b6b0, cid=0, options=0, bistate=0x0)\n at ./build/../src/backend/access/heap/heapam_handler.c:251\n#11 0x0000558b61f8094c in table_tuple_insert (bistate=0x0, options=0, cid=<optimized out>, slot=0x558b62f0b6b0, rel=0x7f56fb4f8120)\n 
at ./build/../src/include/access/tableam.h:1156\n#12 ExecInsert (mtstate=0x558b62f0ab50, slot=0x558b62f0b6b0, planSlot=0x558b62f0b6b0, srcSlot=0x0, returningRelInfo=0x558b62f0aa38,\n estate=0x558b62f0a7c0, canSetTag=true) at ./build/../src/backend/executor/nodeModifyTable.c:642\n#13 0x0000558b61f81660 in ExecModifyTable (pstate=0x558b62f0ab50) at ./build/../src/backend/executor/nodeModifyTable.c:2321\n#14 0x0000558b61f4eaa5 in ExecProcNode (node=0x558b62f0ab50) at ./build/../src/include/executor/executor.h:248\n#15 ExecutePlan (execute_once=<optimized out>, dest=0x558b62e6d908, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_INSERT, use_parallel_mode=<optimized out>, planstate=0x558b62f0ab50, estate=0x558b62f0a7c0)\n at ./build/../src/backend/executor/execMain.c:1632\n#16 standard_ExecutorRun (queryDesc=0x558b62e68920, direction=<optimized out>, count=0, execute_once=<optimized out>)\n at ./build/../src/backend/executor/execMain.c:350\n#17 0x0000558b620fdb72 in ProcessQuery (plan=0x558b62e6d798, sourceText=0x558b62e22000 \"insert into huge values ('boom');\", params=0x0,\n queryEnv=0x0, dest=0x558b62e6d908, qc=0x7ffec8919a00) at ./build/../src/backend/tcop/pquery.c:160\n#18 0x0000558b620fe9be in PortalRunMulti (portal=portal@entry=0x558b62eade60, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x558b62e6d908, altdest=altdest@entry=0x558b62e6d908,\n qc=qc@entry=0x7ffec8919a00) at ./build/../src/backend/tcop/pquery.c:1263\n#19 0x0000558b620fee72 in PortalRun (portal=portal@entry=0x558b62eade60, count=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x558b62e6d908,\n altdest=altdest@entry=0x558b62e6d908, qc=0x7ffec8919a00) at ./build/../src/backend/tcop/pquery.c:786\n#20 0x0000558b620fa2e0 in exec_simple_query (query_string=0x558b62e22000 \"insert into huge values ('boom');\")\n at 
./build/../src/backend/tcop/postgres.c:1239\n#21 0x0000558b620fc153 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x558b62e71f90, dbname=<optimized out>,\n username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4339\n#22 0x0000558b62065f74 in BackendRun (port=0x558b62e6b360) at ./build/../src/backend/postmaster/postmaster.c:4526\n#23 BackendStartup (port=0x558b62e6b360) at ./build/../src/backend/postmaster/postmaster.c:4210\n#24 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1739\n#25 0x0000558b62066e68 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x558b62e1ae80)\n at ./build/../src/backend/postmaster/postmaster.c:1412\n#26 0x0000558b61d40356 in main (argc=5, argv=0x558b62e1ae80) at ./build/../src/backend/main/main.c:210\n\n(Both should be from the same instance.)\n\n> (or else a recipe for reproducing the bug ... I'm not excited\n> about reverse-engineering the details of the method)\n\nCreate a table, note the relfilenode, shut down PG, and\n\nfn=53332\ndd if=/dev/zero bs=8k of=$fn seek=131071 count=1\nfor i in {1..32766}; do dd if=/dev/zero bs=8k of=$fn.$i seek=131071 count=1; done\ndd if=/dev/zero bs=8k of=$fn.32767 seek=131070 count=1\nrm ${fn}_fsm\n\n... and then start PG and insert enough to fill the last block.\n\nChristoph\n-- \nSenior Consultant, Tel.: +49 2166 9901 187\ncredativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Sascha Heuer, Geoff Richardson,\nPeter Lilley; Unser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Thu, 9 Sep 2021 16:04:29 +0200", "msg_from": "Christoph Berg <christoph.berg@credativ.de>", "msg_from_op": true, "msg_subject": "Re: trap instead of error on 32 TiB table" }, { "msg_contents": "Christoph Berg <christoph.berg@credativ.de> writes:\n>> Can you provide a stack trace from that?\n\n> #2 0x0000558b6223d46e in ExceptionalCondition (conditionName=conditionName@entry=0x558b623b2577 \"tagPtr->blockNum != P_NEW\",\n> errorType=errorType@entry=0x558b6229b016 \"FailedAssertion\",\n> fileName=fileName@entry=0x558b623b2598 \"./build/../src/backend/storage/buffer/buf_table.c\", lineNumber=lineNumber@entry=125)\n> at ./build/../src/backend/utils/error/assert.c:67\n> #3 0x0000558b620bafb9 in BufTableInsert (tagPtr=tagPtr@entry=0x7ffec8919330, hashcode=hashcode@entry=960067002, buf_id=<optimized out>)\n> at ./build/../src/backend/storage/buffer/buf_table.c:125\n> #4 0x0000558b620bf827 in BufferAlloc (foundPtr=0x7ffec891932b, strategy=0x0, blockNum=4294967295, forkNum=MAIN_FORKNUM,\n> relpersistence=112 'p', smgr=0x558b62ed4b38) at ./build/../src/backend/storage/buffer/bufmgr.c:1234\n\nAh, thanks. I don't think it's unreasonable for BufTableInsert to contain\nthat assertion --- we shouldn't be trying to allocate a buffer for an\nillegal block number.\n\nThe regular error comes from mdextend, but that is too late under this\nworldview, because smgrextend expects to be given a zero-filled buffer\nto write out. 
I think where we ought to be making the check is right\nwhere ReadBuffer_common replaces P_NEW:\n\n /* Substitute proper block number if caller asked for P_NEW */\n if (isExtend)\n+ {\n blockNum = smgrnblocks(smgr, forkNum);\n+ if (blockNum == InvalidBlockNumber)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n+ errmsg(\"cannot extend file \\\"%s\\\" beyond %u blocks\",\n+ relpath(smgr->smgr_rnode, forkNum),\n+ InvalidBlockNumber)));\n+ }\n\nHaving done that, the check in md.c could be reduced to an Assert,\nalthough there's something to be said for leaving it as-is as an\nextra layer of defense.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 10:25:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: trap instead of error on 32 TiB table" }, { "msg_contents": "On 09/09/2021 17:25, Tom Lane wrote:\n> Having done that, the check in md.c could be reduced to an Assert,\n> although there's something to be said for leaving it as-is as an\n> extra layer of defense.\n\nSome operations call smgrextend() directly, like B-tree index creation. \nWe don't want those operations to hit an assertion either.\n\n- Heikki\n\n\n", "msg_date": "Thu, 9 Sep 2021 22:54:32 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: trap instead of error on 32 TiB table" }, { "msg_contents": "Re: Heikki Linnakangas\n> On 09/09/2021 17:25, Tom Lane wrote:\n> > Having done that, the check in md.c could be reduced to an Assert,\n> > although there's something to be said for leaving it as-is as an\n> > extra layer of defense.\n> \n> Some operations call smgrextend() directly, like B-tree index creation. 
We\n> don't want those operations to hit an assertion either.\n\nThanks, I can now see a proper error on 15devel with cassert enabled:\n\n# insert into huge select generate_series(1,1000);\nFEHLER: 54000: cannot extend relation base/13550/16384 beyond 4294967295 blocks\nORT: ReadBuffer_common, bufmgr.c:831\n\n\n\nSome months ago I had already tried what happens on running into\nanother limit, namely the end of WAL. Back then I was attributing the\nresult to \"won't happen anyway\", but since we are talking asserts,\nthere is the result:\n\n/usr/lib/postgresql/15/bin/pg_resetwal -l 00000001FFFFFFFF000000FF -D $PWD\n select pg_current_wal_lsn();\n pg_current_wal_lsn\n────────────────────\n FFFFFFFF/FF000150\ncreate table foo (id bigint);\nrepeat a few times: insert into foo select generate_series(1,200000);\n\n15devel, cassert:\n\nTRAP: FailedAssertion(\"XLogRecPtrToBytePos(*EndPos) == endbytepos\", File: \"./build/../src/backend/access/transam/xlog.c\", Line: 1324, PID: 1651661)\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(ExceptionalCondition+0x9a)[0x564ad15461ba]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x223022)[0x564ad115f022]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(XLogInsert+0x653)[0x564ad116adf3]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(heap_insert+0x3ae)[0x564ad10f0a2e]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x1bf8e9)[0x564ad10fb8e9]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x35e30c)[0x564ad129a30c]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x35eedc)[0x564ad129aedc]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(standard_ExecutorRun+0x115)[0x564ad12695b5]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x4da312)[0x564ad1416312]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x4db0ee)[0x564ad14170ee]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(PortalRun+0x2ec)[0x564ad14176bc]\npostgres: 15/regress: cbe postgres ::1(45564) 
INSERT(+0x4d72b6)[0x564ad14132b6]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(PostgresMain+0x181c)[0x564ad1414edc]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(+0x43fd80)[0x564ad137bd80]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(PostmasterMain+0xca0)[0x564ad137cd10]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(main+0x221)[0x564ad10973d1]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7f63bcbe1d0a]\npostgres: 15/regress: cbe postgres ::1(45564) INSERT(_start+0x2a)[0x564ad10979ca]\n2021-09-10 11:44:40.997 CEST [1651617] LOG: request to flush past end of generated WAL; request FFFFFFFF/FFFFE000, current position 0/50\n2021-09-10 11:44:42.019 CEST [1651613] LOG: Serverprozess (PID 1651661) wurde von Signal 6 beendet: Abgebrochen\n2021-09-10 11:44:42.019 CEST [1651613] DETAIL: Der fehlgeschlagene Prozess führte aus: insert into foo select generate_series(1,200000);\n\nThe system properly (?) recovers, resuming at some FFFFFFFF/FFE614B8\nposition (i.e. discarding the part that was overflowing). 
However, if\nI push it by moving closer to the end by doing smaller inserts, I can\nget it into an infinite recovery loop:\n\n2021-09-10 11:48:41.050 CEST [1652403] LOG: Datenbanksystem wurde beim Herunterfahren unterbrochen; letzte bekannte Aktion am 2021-09-10 11:48:40 CEST\n2021-09-10 11:48:41.051 CEST [1652403] LOG: Datenbanksystem wurde nicht richtig heruntergefahren; automatische Wiederherstellung läuft\n2021-09-10 11:48:41.051 CEST [1652403] LOG: Redo beginnt bei FFFFFFFF/FFFFDF78\n2021-09-10 11:48:41.051 CEST [1652403] LOG: ungültige Datensatzlänge bei FFFFFFFF/FFFFDFB0: 24 erwartet, 0 erhalten\n2021-09-10 11:48:41.051 CEST [1652403] LOG: redo done at FFFFFFFF/FFFFDF78 system usage: CPU: Benutzer: 0,00 s, System: 0,00 s, verstrichen: 0,00 s\nTRAP: FailedAssertion(\"((XLogPageHeader) cachedPos)->xlp_magic == XLOG_PAGE_MAGIC\", File: \"./build/../src/backend/access/transam/xlog.c\", Line: 1982, PID: 1652404)\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(ExceptionalCondition+0x9a)[0x564ad15461ba]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(+0x2221e8)[0x564ad115e1e8]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(XLogInsertRecord+0x587)[0x564ad115ea27]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(XLogInsert+0x653)[0x564ad116adf3]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(CreateCheckPoint+0x64e)[0x564ad11608ee]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(CheckpointerMain+0x3d4)[0x564ad136db34]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(AuxiliaryProcessMain+0xef)[0x564ad136bacf]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(+0x43d116)[0x564ad1379116]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(+0x43f71a)[0x564ad137b71a]\npostgres: 15/regress: checkpointer performing end-of-recovery 
checkpoint(PostmasterMain+0xca0)[0x564ad137cd10]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(main+0x221)[0x564ad10973d1]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7f63bcbe1d0a]\npostgres: 15/regress: checkpointer performing end-of-recovery checkpoint(_start+0x2a)[0x564ad10979ca]\n2021-09-10 11:48:41.791 CEST [1651613] LOG: Checkpointer-Prozess (PID 1652404) wurde von Signal 6 beendet: Abgebrochen\n2021-09-10 11:48:41.791 CEST [1651613] LOG: aktive Serverprozesse werden abgebrochen\n2021-09-10 11:48:41.791 CEST [1651613] LOG: alle Serverprozesse beendet; initialisiere neu\n... goto 10\n\nDoing the same test on 15devel without cassert, the inserting query\ngets stuck in a busy loop (no wait_event) that ^C won't terminate.\n\nTwo backtraces from that running process:\n\n(gdb) bt\n#0 0x0000563604fdb63b in memset (__len=8192, __ch=0, __dest=0x7f710f9b6000) at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:71\n#1 AdvanceXLInsertBuffer (upto=upto@entry=18446744073709543424, opportunistic=opportunistic@entry=false)\n at ./build/../src/backend/access/transam/xlog.c:2220\n#2 0x0000563604fdb920 in GetXLogBuffer (ptr=18446744073709543424) at ./build/../src/backend/access/transam/xlog.c:1959\n#3 0x0000563604fdd1b1 in CopyXLogRecordToWAL (EndPos=18446744073709543480, StartPos=18446744073709543392,\n rdata=0x56360562b1c0 <hdr_rdt>, isLogSwitch=false, write_len=63) at ./build/../src/backend/access/transam/xlog.c:1558\n#4 XLogInsertRecord (rdata=rdata@entry=0x56360562b1c0 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=18446744073709543392, flags=<optimized out>,\n num_fpi=num_fpi@entry=0) at ./build/../src/backend/access/transam/xlog.c:1123\n#5 0x0000563604fe774a in XLogInsert (rmid=rmid@entry=10 '\\n', info=info@entry=0 '\\000')\n at ./build/../src/backend/access/transam/xloginsert.c:480\n#6 0x0000563604f87b27 in heap_insert (relation=relation@entry=0x7f710f6d7938, tup=tup@entry=0x5636061bc288, cid=cid@entry=0,\n options=options@entry=0, 
bistate=bistate@entry=0x0) at ./build/../src/backend/access/heap/heapam.c:2208\n#7 0x0000563604f8fe89 in heapam_tuple_insert (relation=0x7f710f6d7938, slot=0x5636061bc1f8, cid=0, options=0, bistate=0x0)\n at ./build/../src/backend/access/heap/heapam_handler.c:252\n#8 0x00005636050ff35c in table_tuple_insert (bistate=0x0, options=0, cid=<optimized out>, slot=0x5636061bc1f8, rel=<optimized out>)\n at ./build/../src/include/access/tableam.h:1374\n#9 ExecInsert (mtstate=0x5636061a5ad8, resultRelInfo=0x5636061a5ce8, slot=0x5636061bc1f8, planSlot=0x5636061bb778,\n estate=0x5636061a5868, canSetTag=<optimized out>) at ./build/../src/backend/executor/nodeModifyTable.c:934\n#10 0x00005636051004c7 in ExecModifyTable (pstate=<optimized out>) at ./build/../src/backend/executor/nodeModifyTable.c:2561\n#11 0x00005636050d599d in ExecProcNode (node=0x5636061a5ad8) at ./build/../src/include/executor/executor.h:257\n#12 ExecutePlan (execute_once=<optimized out>, dest=0x5636061b22f8, direction=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_INSERT, use_parallel_mode=<optimized out>, planstate=0x5636061a5ad8, estate=0x5636061a5868)\n at ./build/../src/backend/executor/execMain.c:1551\n#13 standard_ExecutorRun (queryDesc=0x563606109d08, direction=<optimized out>, count=0, execute_once=<optimized out>)\n at ./build/../src/backend/executor/execMain.c:361\n#14 0x00005636052501e2 in ProcessQuery (plan=0x5636061b2218,\n sourceText=0x5636060e7128 \"insert into foo select generate_series(1,500000);\", params=0x0, queryEnv=0x0, dest=0x5636061b22f8,\n qc=0x7ffdbe808cc0) at ./build/../src/backend/tcop/pquery.c:160\n#15 0x0000563605250dd9 in PortalRunMulti (portal=portal@entry=0x56360614b3a8, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5636061b22f8, altdest=altdest@entry=0x5636061b22f8,\n qc=qc@entry=0x7ffdbe808cc0) at ./build/../src/backend/tcop/pquery.c:1266\n#16 0x000056360525129c in PortalRun 
(portal=portal@entry=0x56360614b3a8, count=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x5636061b22f8,\n altdest=altdest@entry=0x5636061b22f8, qc=0x7ffdbe808cc0) at ./build/../src/backend/tcop/pquery.c:786\n#17 0x000056360524d2dd in exec_simple_query (query_string=0x5636060e7128 \"insert into foo select generate_series(1,500000);\")\n at ./build/../src/backend/tcop/postgres.c:1214\n#18 0x000056360524f20f in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffdbe809190, dbname=<optimized out>,\n username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4488\n#19 0x00005636051cc178 in BackendRun (port=<optimized out>, port=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:4521\n#20 BackendStartup (port=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:4243\n#21 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1765\n#22 0x00005636051cd000 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x5636060e1ea0)\n at ./build/../src/backend/postmaster/postmaster.c:1437\n#23 0x0000563604f457f1 in main (argc=5, argv=0x5636060e1ea0) at ./build/../src/backend/main/main.c:199\n\n(gdb) bt\n#0 0x0000563604fdb6cf in AdvanceXLInsertBuffer (upto=upto@entry=18446744073709543424, opportunistic=opportunistic@entry=false)\n at ./build/../src/backend/access/transam/xlog.c:2147\n#1 0x0000563604fdb920 in GetXLogBuffer (ptr=18446744073709543424) at ./build/../src/backend/access/transam/xlog.c:1959\n#2 0x0000563604fdd1b1 in CopyXLogRecordToWAL (EndPos=18446744073709543480, StartPos=18446744073709543392,\n rdata=0x56360562b1c0 <hdr_rdt>, isLogSwitch=false, write_len=63) at ./build/../src/backend/access/transam/xlog.c:1558\n#3 XLogInsertRecord (rdata=rdata@entry=0x56360562b1c0 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=18446744073709543392, flags=<optimized out>,\n num_fpi=num_fpi@entry=0) at ./build/../src/backend/access/transam/xlog.c:1123\n#4 0x0000563604fe774a in 
XLogInsert (rmid=rmid@entry=10 '\\n', info=info@entry=0 '\\000')\n at ./build/../src/backend/access/transam/xloginsert.c:480\n#5 0x0000563604f87b27 in heap_insert (relation=relation@entry=0x7f710f6d7938, tup=tup@entry=0x5636061bc288, cid=cid@entry=0,\n options=options@entry=0, bistate=bistate@entry=0x0) at ./build/../src/backend/access/heap/heapam.c:2208\n...\n\n\nChristoph\n-- \nSenior Consultant, Tel.: +49 2166 9901 187\ncredativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Sascha Heuer, Geoff Richardson,\nPeter Lilley; Unser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Fri, 10 Sep 2021 12:06:33 +0200", "msg_from": "Christoph Berg <christoph.berg@credativ.de>", "msg_from_op": true, "msg_subject": "The End of the WAL" } ]
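[Editorial note on the thread above: the block-number arithmetic behind the 32 TiB relation limit and Christoph's dd recipe can be checked directly. The sketch below assumes the default build-time constants (BLCKSZ = 8192 bytes, RELSEG_SIZE = 131072 blocks, i.e. 1 GiB segment files); non-default builds shift these limits, and the code is illustrative arithmetic, not PostgreSQL source.]

```python
# Arithmetic behind the 32 TiB relation limit discussed above.
# Assumes the default build-time constants BLCKSZ = 8192 and
# RELSEG_SIZE = 131072 (blocks per 1 GiB segment file).

BLCKSZ = 8192                       # default page size in bytes
RELSEG_SIZE = 131072                # blocks per 1 GiB segment file
INVALID_BLOCK_NUMBER = 0xFFFFFFFF   # BlockNumber is uint32; this value is reserved

# Valid block numbers run 0 .. 0xFFFFFFFE, so a relation holds at most
# 2^32 - 1 blocks -- matching the "beyond 4294967295 blocks" error text.
max_blocks = INVALID_BLOCK_NUMBER
max_bytes = max_blocks * BLCKSZ
print(max_bytes)                    # 35184372080640, i.e. 32 TiB - 8 KiB

# Christoph's dd recipe: the main file plus segments .1 .. .32766 are full
# (seek=131071 count=1 gives 131072 blocks each), and segment .32767 holds
# one block fewer (seek=131070), so the relation ends exactly one block
# short of InvalidBlockNumber and the very next extension must fail.
full_segments = 32767               # main file + .1 .. .32766
recipe_blocks = full_segments * RELSEG_SIZE + (RELSEG_SIZE - 1)
print(recipe_blocks == max_blocks)  # True
```

With the default 8 kB page size this works out to just under 2^45 bytes, which is why the limit is quoted as 32 TiB.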
[ { "msg_contents": "We don't actually prevent you from scrolling a NO SCROLL cursor:\n\nregression=# begin;\nBEGIN\nregression=*# declare c no scroll cursor for select * from int8_tbl;\nDECLARE CURSOR\nregression=*# fetch all from c;\n q1 | q2 \n------------------+-------------------\n 123 | 456\n 123 | 4567890123456789\n 4567890123456789 | 123\n 4567890123456789 | 4567890123456789\n 4567890123456789 | -4567890123456789\n(5 rows)\n\nregression=*# fetch absolute 2 from c;\n q1 | q2 \n-----+------------------\n 123 | 4567890123456789\n(1 row)\n\nThere are probably specific cases where you do get an error,\nbut we don't have a blanket you-can't-do-that check. Should we?\n\nThe reason this came to mind is that while poking at [1]\nI noticed that commit ba2c6d6ce has created some user-visible\nanomalies for non-scrollable cursors WITH HOLD. If you advance\nthe cursor a bit, commit, and then try to scroll the cursor,\nit will work but the part of the output that you advanced over\nwill be missing. I think we should probably throw an error\nto prevent that from being visible. I'm worried though that\nputting in a generic prohibition may break applications that\nused to get away with this kind of thing.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAPV2KRjd%3DErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 09 Sep 2021 13:10:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "We don't enforce NO SCROLL cursor restrictions" }, { "msg_contents": "On 9/9/21 7:10 PM, Tom Lane wrote:\n> We don't actually prevent you from scrolling a NO SCROLL cursor:\n> \n> There are probably specific cases where you do get an error,\n> but we don't have a blanket you-can't-do-that check. Should we?\n\n\nI would say yes. 
NO SCROLL means no scrolling; or at least should.\n\nOn the other hand, if there is no optimization or other meaningful\ndifference between SCROLL and NO SCROLL, then we can just document it as\na no-op that is only provided for standard syntax compliance.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 9 Sep 2021 19:47:41 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: We don't enforce NO SCROLL cursor restrictions" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 9/9/21 7:10 PM, Tom Lane wrote:\n>> There are probably specific cases where you do get an error,\n>> but we don't have a blanket you-can't-do-that check. Should we?\n\n> I would say yes. NO SCROLL means no scrolling; or at least should.\n> On the other hand, if there is no optimization or other meaningful\n> difference between SCROLL and NO SCROLL, then we can just document it as\n> a no-op that is only provided for standard syntax compliance.\n\nThere are definitely optimizations that happen or don't happen\ndepending on the SCROLL option. I think ba2c6d6ce may be the\nfirst patch that introduces any user-visible semantic difference,\nbut I'm not completely sure about that.\n\n[ pokes at it some more ... ] Hm, we let you do this:\n\nregression=# begin;\nBEGIN\nregression=*# declare c cursor for select * from int8_tbl for update;\nDECLARE CURSOR\nregression=*# fetch all from c;\n q1 | q2 \n------------------+-------------------\n 123 | 456\n 123 | 4567890123456789\n 4567890123456789 | 123\n 4567890123456789 | 4567890123456789\n 4567890123456789 | -4567890123456789\n(5 rows)\n\nregression=*# fetch absolute 2 from c;\n q1 | q2 \n-----+------------------\n 123 | 4567890123456789\n(1 row)\n\nwhich definitely flies in the face of the fact that we disallow\ncombining SCROLL and FOR UPDATE:\n\nregression=*# declare c scroll cursor for select * from int8_tbl for update;\nERROR: DECLARE SCROLL CURSOR ... 
FOR UPDATE is not supported\nDETAIL: Scrollable cursors must be READ ONLY.\n\nI don't recall the exact reason for that prohibition, but I wonder\nif there aren't user-visible anomalies reachable from the fact that\nyou can bypass it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 14:09:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: We don't enforce NO SCROLL cursor restrictions" }, { "msg_contents": "I wrote:\n> [ pokes at it some more ... ] Hm, we let you do this:\n> ...\n> which definitely flies in the face of the fact that we disallow\n> combining SCROLL and FOR UPDATE:\n> regression=*# declare c scroll cursor for select * from int8_tbl for update;\n> ERROR: DECLARE SCROLL CURSOR ... FOR UPDATE is not supported\n> DETAIL: Scrollable cursors must be READ ONLY.\n> \n> I don't recall the exact reason for that prohibition, but I wonder\n> if there aren't user-visible anomalies reachable from the fact that\n> you can bypass it.\n\nI dug in the archives. The above-quoted error message was added by\nme in 048efc25e, responding to Heikki's complaint here:\n\nhttps://www.postgresql.org/message-id/471F69FE.5000500%40enterprisedb.com\n\nWhat I now see is that I put that check at the wrong level. 
It\nsuccessfully blocks off the case Heikki complained of:\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo (id integer);\nINSERT INTO foo SELECT a from generate_series(1,10) a;\nBEGIN;\nDECLARE c CURSOR FOR SELECT id FROM foo FOR UPDATE;\nFETCH 2 FROM c;\nUPDATE foo set ID=20 WHERE CURRENT OF c;\nFETCH RELATIVE 0 FROM c;\nCOMMIT;\n\nThe FETCH RELATIVE 0 fails with\n\nERROR: cursor can only scan forward\nHINT: Declare it with SCROLL option to enable backward scan.\n\nHowever, if you replace that with the should-be-equivalent\n\nFETCH ABSOLUTE 2 FROM c;\n\nthen what you get is not an error but\n\n id \n----\n 3\n(1 row)\n\nwhich is for certain anomalous, because that is not the row you\nsaw as being row 2 in the initial FETCH.\n\nThe reason for this behavior is that the only-scan-forward check\nis in the relatively low-level function PortalRunSelect, which\nis passed a \"forward\" flag and a count. The place that interprets\nFETCH_ABSOLUTE and friends is DoPortalRunFetch, and what it's doing\nin this particular scenario is to rewind to start and fetch forwards,\nthus bypassing PortalRunSelect's error check. And, since the query\nis using FOR UPDATE, this table scan sees the row with ID=2 as already\ndead. (Its replacement with ID=20 has been installed at the end of\nthe table, so while it would be visible to the cursor, it's not at\nthe same position as before.)\n\nSo basically, we *do* have this check and have done since 2007,\nbut it's not water-tight for all variants of FETCH. 
I think\ntightening it up in HEAD and v14 is a no-brainer, but I'm a bit\nmore hesitant about whether to back-patch into stable branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Sep 2021 15:21:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: We don't enforce NO SCROLL cursor restrictions" }, { "msg_contents": "I wrote:\n> The reason for this behavior is that the only-scan-forward check\n> is in the relatively low-level function PortalRunSelect, which\n> is passed a \"forward\" flag and a count. The place that interprets\n> FETCH_ABSOLUTE and friends is DoPortalRunFetch, and what it's doing\n> in this particular scenario is to rewind to start and fetch forwards,\n> thus bypassing PortalRunSelect's error check.\n\nAfter some further study, I've reached a few conclusions:\n\n* The missing bit in pquery.c is exactly that we'll allow a portal\nrewind even with a no-scroll cursor. I think that the reason it's\nlike that is that the code was mainly interested in closing off\ncases where we'd attempt to run a plan backwards, to protect plan\nnode types that can't do that. As far as the executor is concerned,\nrewind-to-start is okay in any case. However, as we see from this\nthread, that definition doesn't protect us against anomalous results\nfrom volatile queries. So putting an error check in DoPortalRewind\nseems to be enough to fix this, as in patch 0001 below. (This also\nfixes one bogus copied-and-pasted comment, and adjusts the one\nregression test case that breaks.)\n\n* The anomaly for held cursors boils down to ba2c6d6ce having ignored\nthis good advice in portal.h:\n\n\t * ... Also note that various code inspects atStart and atEnd, but\n\t * only the portal movement routines should touch portalPos.\n\nThus, PersistHoldablePortal has no business changing the cursor's\natStart/atEnd/portalPos. 
The only thing that resetting portalPos\nactually bought us was to make the tuplestore_skiptuples call a bit\nfurther down into a no-op, but we can just bypass that call for a\nno-scroll cursor, as in 0002 below. However, 0002 does have a\ndependency on 0001, because if we allow tuplestore_rescan on the\nholdStore it will expose the fact that the tuplestore doesn't contain\nthe whole cursor result. (I was a bit surprised to find that those\nwere the only two places where we weren't positioning in the holdStore\nby dead reckoning, but it seems to be the case.)\n\nI was feeling nervous about back-patching 0001 already, and finding\nthat one of our own regression tests was dependent on the omission\nof this check doesn't make me any more confident. However, I'd really\nlike to be able to back-patch 0002 to get rid of the held-cursor\npositioning anomaly. What I think might be an acceptable compromise\nin the back branches is to have DoPortalRewind complain only if\n(a) it needs to reposition a no-scroll cursor AND (b) the cursor has\na holdStore, ie it's held over from some previous transaction.\nThe extra restriction (b) should prevent most people from running into\nthe error check, even if they've been sloppy about marking cursors\nscrollable. In HEAD we'd drop the restriction (b) and commit 0001 as\nshown. I'm kind of inclined to do that in v14 too, but there's an\nargument to be made that it's too late in the beta process to be\nchanging user-visible semantics without great need.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 09 Sep 2021 18:54:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: We don't enforce NO SCROLL cursor restrictions" } ]
[ { "msg_contents": "Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.\n\nThis switches the default ACL to what the documentation has recommended\nsince CVE-2018-1058. Upgrades will carry forward any old ownership and\nACL. Sites that declined the 2018 recommendation should take a fresh\nlook. Recipes for commissioning a new database cluster from scratch may\nneed to create a schema, grant more privileges, etc. Out-of-tree test\nsuites may require such updates.\n\nReviewed by Peter Eisentraut.\n\nDiscussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b073c3ccd06e4cb845e121387a43faa8c68a7b62\n\nModified Files\n--------------\ncontrib/postgres_fdw/expected/postgres_fdw.out | 2 +-\ncontrib/postgres_fdw/sql/postgres_fdw.sql | 2 +-\ndoc/src/sgml/ddl.sgml | 56 ++++++++++++++------------\ndoc/src/sgml/user-manag.sgml | 19 ++++-----\nsrc/bin/initdb/initdb.c | 3 +-\nsrc/bin/pg_dump/pg_dump.c | 28 ++++++++-----\nsrc/bin/pg_dump/t/002_pg_dump.pl | 19 ++++-----\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_namespace.dat | 2 +-\nsrc/pl/plperl/expected/plperl_setup.out | 4 ++\nsrc/pl/plperl/sql/plperl_setup.sql | 4 ++\nsrc/test/regress/input/tablespace.source | 5 ++-\nsrc/test/regress/output/tablespace.source | 4 +-\n13 files changed, 86 insertions(+), 64 deletions(-)", "msg_date": "Fri, 10 Sep 2021 06:39:18 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.\n\nI've just stumbled across a testing problem created by this commit:\nif you try to skip the tablespace test, the rest of the run falls\nover, because this bit doesn't get executed:\n\n-- Rest of this suite can use the public 
schema freely.\nGRANT ALL ON SCHEMA public TO public;\n\nSkipping the tablespace test is something I've been accustomed to do\nwhen testing replication with the standby on the same machine as the\nprimary, because otherwise you've got to fool with keeping the\nstandby from overwriting the primary's tablespaces. This hack made\nthat a lot more painful.\n\nI'm inclined to think the cleanest fix is to move this step into a\nnew script, say \"test_setup.sql\", that is scheduled by itself just\nafter tablespace.sql. It's sort of annoying to fire up a psql+backend\nfor just one command, but perhaps there's other stuff that could be\nput there too.\n\nAnother possibility is to add that GRANT to the list of stuff that\npg_regress.c does by default. If there's actually reason for\ntablespace.sql to run without that, it could revoke and re-grant\nthe public permissions. This way would have the advantage of\nbeing less likely to break other test suites.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Dec 2021 12:52:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Fri, Dec 17, 2021 at 12:52:39PM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.\n> \n> I've just stumbled across a testing problem created by this commit:\n> if you try to skip the tablespace test, the rest of the run falls\n> over, because this bit doesn't get executed:\n> \n> -- Rest of this suite can use the public schema freely.\n> GRANT ALL ON SCHEMA public TO public;\n> \n> Skipping the tablespace test is something I've been accustomed to do\n> when testing replication with the standby on the same machine as the\n> primary, because otherwise you've got to fool with keeping the\n> standby from overwriting the primary's tablespaces. 
This hack made\n> that a lot more painful.\n> \n> I'm inclined to think the cleanest fix is to move this step into a\n> new script, say \"test_setup.sql\", that is scheduled by itself just\n> after tablespace.sql.\n\nI like that solution for your use case.\n\n> It's sort of annoying to fire up a psql+backend\n> for just one command, but perhaps there's other stuff that could be\n> put there too.\n\nYes. The src/test/regress suite would be in a better place if one could run\nmost test files via a schedule containing only two files, the setup file and\nthe file of interest. Adding things like the \"CREATE TABLE tenk1\" to the\nsetup file would help that.\n\n\n", "msg_date": "Fri, 17 Dec 2021 10:25:18 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, Dec 17, 2021 at 12:52:39PM -0500, Tom Lane wrote:\n>> It's sort of annoying to fire up a psql+backend\n>> for just one command, but perhaps there's other stuff that could be\n>> put there too.\n\n> Yes. The src/test/regress suite would be in a better place if one could run\n> most test files via a schedule containing only two files, the setup file and\n> the file of interest. Adding things like the \"CREATE TABLE tenk1\" to the\n> setup file would help that.\n\nIf we're thinking of a generalized setup file, putting it after the\ntablespace test feels pretty weird. 
What was your motivation for\ndoing this at the end of tablespace.source rather than the start?\nIt doesn't look like that test in itself had any interesting\ndependencies on public not being writable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Dec 2021 13:41:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Fri, Dec 17, 2021 at 01:41:00PM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Fri, Dec 17, 2021 at 12:52:39PM -0500, Tom Lane wrote:\n> >> It's sort of annoying to fire up a psql+backend\n> >> for just one command, but perhaps there's other stuff that could be\n> >> put there too.\n> \n> > Yes. The src/test/regress suite would be in a better place if one could run\n> > most test files via a schedule containing only two files, the setup file and\n> > the file of interest. Adding things like the \"CREATE TABLE tenk1\" to the\n> > setup file would help that.\n> \n> If we're thinking of a generalized setup file, putting it after the\n> tablespace test feels pretty weird. What was your motivation for\n> doing this at the end of tablespace.source rather than the start?\n\nI did it that way so a bit of the \"make check\" suite would exercise the\nstandard user experience. That's a minor concern, so putting the setup file\nbefore the tablespace file is fine. 
Various contrib and TAP suites will still\ntest the standard user experience.\n\n> It doesn't look like that test in itself had any interesting\n> dependencies on public not being writable.\n\nRight.\n\n\n", "msg_date": "Fri, 17 Dec 2021 11:47:20 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, Dec 17, 2021 at 01:41:00PM -0500, Tom Lane wrote:\n>> If we're thinking of a generalized setup file, putting it after the\n>> tablespace test feels pretty weird. What was your motivation for\n>> doing this at the end of tablespace.source rather than the start?\n\n> I did it that way so a bit of the \"make check\" suite would exercise the\n> standard user experience. That's a minor concern, so putting the setup file\n> before the tablespace file is fine. Various contrib and TAP suites will still\n> test the standard user experience.\n\nCheck. I'll make it so in a little bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Dec 2021 14:57:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Fri, Sep 10, 2021 at 2:39 AM Noah Misch <noah@leadboat.com> wrote:\n> Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.\n>\n> This switches the default ACL to what the documentation has recommended\n> since CVE-2018-1058. Upgrades will carry forward any old ownership and\n> ACL. Sites that declined the 2018 recommendation should take a fresh\n> look. Recipes for commissioning a new database cluster from scratch may\n> need to create a schema, grant more privileges, etc. Out-of-tree test\n> suites may require such updates.\n\nI was looking at the changes that this commit made to ddl.sgml today\nand I feel that it's not quite ideal. 
Under \"Constrain ordinary users\nto user-private schemas\" it first says \"To implement this, first issue\n<literal>REVOKE CREATE ON SCHEMA public FROM PUBLIC</literal>\" and\nthen later says, oh but wait, you actually don't need to do that\nunless you're upgrading. That seems a bit backwards to me: I think we\nshould talk about the current state of play first, and then add the\nnotes about upgrading afterwards.\n\nHere's a proposed patch to do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 29 Nov 2022 14:22:59 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Tue, Nov 29, 2022 at 02:22:59PM -0500, Robert Haas wrote:\n> Here's a proposed patch to do that.\n\nIf I'm not wrong, your message includes a diffstat but without the patch\nitself.\n\n\n", "msg_date": "Tue, 29 Nov 2022 13:31:59 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "On Tue, Nov 29, 2022 at 2:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Nov 29, 2022 at 02:22:59PM -0500, Robert Haas wrote:\n> > Here's a proposed patch to do that.\n>\n> If I'm not wrong, your message includes a diffstat but without the patch\n> itself.\n\nD'oh.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 29 Nov 2022 14:34:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Tue, Nov 29, 2022 at 02:22:59PM -0500, Robert Haas wrote:\n> On Fri, Sep 10, 2021 at 2:39 AM Noah Misch <noah@leadboat.com> wrote:\n> > Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner.\n> >\n> > This switches the default
ACL to what the documentation has recommended\n> > since CVE-2018-1058. Upgrades will carry forward any old ownership and\n> > ACL. Sites that declined the 2018 recommendation should take a fresh\n> > look. Recipes for commissioning a new database cluster from scratch may\n> > need to create a schema, grant more privileges, etc. Out-of-tree test\n> > suites may require such updates.\n> \n> I was looking at the changes that this commit made to ddl.sgml today\n> and I feel that it's not quite ideal. Under \"Constrain ordinary users\n> to user-private schemas\" it first says \"To implement this, first issue\n> <literal>REVOKE CREATE ON SCHEMA public FROM PUBLIC</literal>\" and\n> then later says, oh but wait, you actually don't need to do that\n> unless you're upgrading. That seems a bit backwards to me: I think we\n> should talk about the current state of play first, and then add the\n> notes about upgrading afterwards.\n\nIn general, the documentation should prefer simpler decision trees.\nEspecially so where the wrong choice causes no error, yet leaves a security\nvulnerability. The unconditional REVOKE has no drawbacks; it's harmless where\nit's a no-op. That was the rationale behind the current text. 
Upgrades\naren't the only issue; another DBA may have changed the ACL since initdb.\n\n\n", "msg_date": "Tue, 29 Nov 2022 23:07:01 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 2:07 AM Noah Misch <noah@leadboat.com> wrote:\n> In general, the documentation should prefer simpler decision trees.\n\nTrue, but I found the current text confusing, which is also something\nto consider.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 08:39:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 08:39:23AM -0500, Robert Haas wrote:\n> On Wed, Nov 30, 2022 at 2:07 AM Noah Misch <noah@leadboat.com> wrote:\n> > In general, the documentation should prefer simpler decision trees.\n> \n> True, but I found the current text confusing, which is also something\n> to consider.\n\nCould remove the paragraph about v14. Could have that paragraph say\nexplicitly that the REVOKE is a no-op. Would either of those be an\nimprovement?\n\n\n", "msg_date": "Wed, 30 Nov 2022 07:01:36 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 10:01 AM Noah Misch <noah@leadboat.com> wrote:\n> On Wed, Nov 30, 2022 at 08:39:23AM -0500, Robert Haas wrote:\n> > On Wed, Nov 30, 2022 at 2:07 AM Noah Misch <noah@leadboat.com> wrote:\n> > > In general, the documentation should prefer simpler decision trees.\n> >\n> > True, but I found the current text confusing, which is also something\n> > to consider.\n>\n> Could remove the paragraph about v14. 
Could have that paragraph say\n> explicitly that the REVOKE is a no-op. Would either of those be an\n> improvement?\n\nWell, I thought what I proposed was a nice improvement, but I guess if\nyou don't like it I'm not inclined to spend a lot of time discussing\nother possibilities. If we get some opinions from more people that may\nmake it clearer which direction to go; if I'm the only one that\ndoesn't like the way it is now, it's probably not that important.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 10:44:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Nov 30, 2022 at 10:01 AM Noah Misch <noah@leadboat.com> wrote:\n>> Could remove the paragraph about v14. Could have that paragraph say\n>> explicitly that the REVOKE is a no-op. Would either of those be an\n>> improvement?\n\n> Well, I thought what I proposed was a nice improvement, but I guess if\n> you don't like it I'm not inclined to spend a lot of time discussing\n> other possibilities. If we get some opinions from more people that may\n> make it clearer which direction to go; if I'm the only one that\n> doesn't like the way it is now, it's probably not that important.\n\nHey, I'll step up to the plate ;-)\n\nI agree that it's confusing to tell people to do a REVOKE that might do\nnothing. A parenthetical note explaining that might help, but the text\nis pretty dense already, so really I'd rather have that info in a\nseparate para.\n\nAlso, I'd like to structure things so that the first para covers what\nyou need to know in a clean v15+ installation, and details that only\napply in upgrade scenarios are in the second para. 
The upgrade scenario\nis going to be interesting to fewer and fewer people over time, so let's\nnot clutter the lede with it.\n\nSo maybe about like this?\n\n Constrain ordinary users to user-private schemas. To implement\n this pattern, for every user needing to create non-temporary\n objects, create a schema with the same name as that user. (Recall\n that the default search path starts with $user, which resolves to\n the user name. Therefore, if each user has a separate schema, they\n access their own schemas by default.) Also ensure that no other\n schemas have public CREATE privileges. This pattern is a secure\n schema usage pattern unless an untrusted user is the database\n owner or holds the CREATEROLE privilege, in which case no secure\n schema usage pattern exists.\n\n In PostgreSQL 15 and later, the default configuration supports\n this usage pattern. In prior versions, or when using a database\n that has been upgraded from a prior version, you will need to\n remove the public CREATE privilege from the public schema (issue\n REVOKE CREATE ON SCHEMA public FROM PUBLIC). Then consider\n auditing the public schema for objects named like objects in\n schema pg_catalog.\n\nThis is close to what Robert wrote, but not exactly the same,\nso probably it will make neither of you happy ;-)\n\nBTW, is \"create a schema with the same name\" sufficient detail?\nYou have to either make it owned by that user, or explicitly\ngrant CREATE permission on it. 
I'm not sure if that detail\nbelongs here, but it feels like maybe it does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:35:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Wed, 30 Nov 2022 at 17:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nBTW, is \"create a schema with the same name\" sufficient detail?\n> You have to either make it owned by that user, or explicitly\n> grant CREATE permission on it. I'm not sure if that detail\n> belongs here, but it feels like maybe it does.\n\n\nIt might be worth mentioning AUTHORIZATION. The easiest way to create an\nappropriately named schema for a user is \"CREATE SCHEMA AUTHORIZATION\nusername\".\n\nOn Wed, 30 Nov 2022 at 17:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\nBTW, is \"create a schema with the same name\" sufficient detail?\nYou have to either make it owned by that user, or explicitly\ngrant CREATE permission on it.  I'm not sure if that detail\nbelongs here, but it feels like maybe it does.It might be worth mentioning AUTHORIZATION. The easiest way to create an appropriately named schema for a user is \"CREATE SCHEMA AUTHORIZATION username\".", "msg_date": "Wed, 30 Nov 2022 17:57:20 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 3:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> BTW, is \"create a schema with the same name\" sufficient detail?\n> You have to either make it owned by that user, or explicitly\n> grant CREATE permission on it. 
I'm not sure if that detail\n> belongs here, but it feels like maybe it does.\n>\n>\nI'd mention the ownership variant and suggest using the AUTHORIZATION\nclause, with an explicit example.\n\nCREATE SCHEMA role_name AUTHORIZATION role_name;\n\nDavid J.", "msg_date": "Wed, 30 Nov 2022 15:58:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Also, I'd like to structure things so that the first para covers what\n> you need to know in a clean v15+ installation, and details that only\n> apply in upgrade scenarios are in the second para. The upgrade scenario\n> is going to be interesting to fewer and fewer people over time, so let's\n> not clutter the lede with it.\n\nRight, that was my main feeling about this.\n\n> So maybe about like this?\n>\n> Constrain ordinary users to user-private schemas. To implement\n> this pattern, for every user needing to create non-temporary\n> objects, create a schema with the same name as that user. (Recall\n> that the default search path starts with $user, which resolves to\n> the user name. Therefore, if each user has a separate schema, they\n> access their own schemas by default.) Also ensure that no other\n> schemas have public CREATE privileges. 
This pattern is a secure\n> schema usage pattern unless an untrusted user is the database\n> owner or holds the CREATEROLE privilege, in which case no secure\n> schema usage pattern exists.\n>\n> In PostgreSQL 15 and later, the default configuration supports\n> this usage pattern. In prior versions, or when using a database\n> that has been upgraded from a prior version, you will need to\n> remove the public CREATE privilege from the public schema (issue\n> REVOKE CREATE ON SCHEMA public FROM PUBLIC). Then consider\n> auditing the public schema for objects named like objects in\n> schema pg_catalog.\n>\n> This is close to what Robert wrote, but not exactly the same,\n> so probably it will make neither of you happy ;-)\n\nI haven't looked at how it's different from what I wrote exactly, but\nit seems fine to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 23:32:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" }, { "msg_contents": "On Wed, Nov 30, 2022 at 05:35:01PM -0500, Tom Lane wrote:\n> Also, I'd like to structure things so that the first para covers what\n> you need to know in a clean v15+ installation, and details that only\n> apply in upgrade scenarios are in the second para. The upgrade scenario\n> is going to be interesting to fewer and fewer people over time, so let's\n> not clutter the lede with it.\n> \n> So maybe about like this?\n> \n> Constrain ordinary users to user-private schemas. To implement\n> this pattern, for every user needing to create non-temporary\n> objects, create a schema with the same name as that user. (Recall\n> that the default search path starts with $user, which resolves to\n> the user name. Therefore, if each user has a separate schema, they\n> access their own schemas by default.) Also ensure that no other\n> schemas have public CREATE privileges. 
This pattern is a secure\n> schema usage pattern unless an untrusted user is the database\n> owner or holds the CREATEROLE privilege, in which case no secure\n> schema usage pattern exists.\n\nThis is free from the problem found in ddl-create-public-reorg-really.patch.\nHowever, the word \"other\" doesn't belong there. (The per-user schemas should\nnot have public CREATE privilege.) I would also move that same sentence up\nfront, like this:\n\n Constrain ordinary users to user-private schemas. To implement this\n pattern, first ensure that no schemas have public CREATE privileges.\n Then, for every user needing to create non-temporary objects, create a\n schema with the same name as that user. (Recall that the default search\n path starts with $user, which resolves to the user name. Therefore, if\n each user has a separate schema, they access their own schemas by\n default.) This pattern is a secure schema usage pattern unless an\n untrusted user is the database owner or holds the CREATEROLE privilege, in\n which case no secure schema usage pattern exists.\n\nWith that, I think you have improved on the status quo. Thanks.\n\n> In PostgreSQL 15 and later, the default configuration supports\n> this usage pattern. In prior versions, or when using a database\n> that has been upgraded from a prior version, you will need to\n> remove the public CREATE privilege from the public schema (issue\n> REVOKE CREATE ON SCHEMA public FROM PUBLIC). Then consider\n> auditing the public schema for objects named like objects in\n> schema pg_catalog.\n\n> BTW, is \"create a schema with the same name\" sufficient detail?\n> You have to either make it owned by that user, or explicitly\n> grant CREATE permission on it. I'm not sure if that detail\n> belongs here, but it feels like maybe it does.\n\nMaybe. 
Failing to GRANT that will yield a clear error when the user starts\nwork, so it's not critical to explain here.\n\n\n", "msg_date": "Thu, 1 Dec 2022 00:25:33 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "On 2022-Dec-01, Noah Misch wrote:\n\n> This is free from the problem found in ddl-create-public-reorg-really.patch.\n> However, the word \"other\" doesn't belong there. (The per-user schemas should\n> not have public CREATE privilege.) I would also move that same sentence up\n> front, like this:\n> \n> Constrain ordinary users to user-private schemas. To implement this\n> pattern, first ensure that no schemas have public CREATE privileges.\n> Then, for every user needing to create non-temporary objects, create a\n> schema with the same name as that user. (Recall that the default search\n> path starts with $user, which resolves to the user name. Therefore, if\n> each user has a separate schema, they access their own schemas by\n> default.) This pattern is a secure schema usage pattern unless an\n> untrusted user is the database owner or holds the CREATEROLE privilege, in\n> which case no secure schema usage pattern exists.\n\n+1 LGTM\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 1 Dec 2022 12:16:39 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema, now owned by\n pg_databas" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Dec-01, Noah Misch wrote:\n>> This is free from the problem found in ddl-create-public-reorg-really.patch.\n>> However, the word \"other\" doesn't belong there. (The per-user schemas should\n>> not have public CREATE privilege.) 
I would also move that same sentence up\n>> front, like this:\n>> \n>> Constrain ordinary users to user-private schemas. To implement this\n>> pattern, first ensure that no schemas have public CREATE privileges.\n>> Then, for every user needing to create non-temporary objects, create a\n>> schema with the same name as that user. (Recall that the default search\n>> path starts with $user, which resolves to the user name. Therefore, if\n>> each user has a separate schema, they access their own schemas by\n>> default.) This pattern is a secure schema usage pattern unless an\n>> untrusted user is the database owner or holds the CREATEROLE privilege, in\n>> which case no secure schema usage pattern exists.\n\n> +1 LGTM\n\nSounds good. I'll make it so in a bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Dec 2022 09:24:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Revoke PUBLIC CREATE from public schema,\n now owned by pg_databas" } ]
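[Editorial note: the recipe this thread converged on can be condensed into a short SQL sketch. The role name alice below is a hypothetical example, not something taken from the thread, and the statements need a live PostgreSQL server, so they are shown only as an illustration of the pattern discussed above.]

```sql
-- On a cluster upgraded from a pre-v15 release (or one whose ACL a DBA has
-- changed since initdb), drop the world-writable default first.  On a fresh
-- v15+ cluster this REVOKE is a harmless no-op.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- Give each ordinary user a private schema named after the role.  With
-- AUTHORIZATION, the schema name defaults to the role name and the role
-- becomes the schema's owner, so no separate GRANT CREATE is needed; the
-- default search_path entry $user then resolves to this schema.
CREATE ROLE alice LOGIN;
CREATE SCHEMA AUTHORIZATION alice;
```

As noted in the thread, this pattern is only secure so long as no untrusted user is the database owner or holds CREATEROLE.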
[ { "msg_contents": "Hi,\nI was looking at backend_progress.c and noticed that the filename and path\nwere wrong in the header.\n\nHere is patch which corrects the mistake.\n\nPlease take a look.\n\nThanks", "msg_date": "Fri, 10 Sep 2021 10:11:34 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "incorrect file name in backend_progress.c header" }, { "msg_contents": "On Fri, Sep 10, 2021 at 10:11:34AM -0700, Zhihong Yu wrote:\n> Hi,\n> I was looking at backend_progress.c and noticed that the filename and path\n> were wrong in the header.\n> \n> Here is patch which corrects the mistake.\n\nFor the record, I don't really like boilerplate, but fixing the boilerplates\nall at once is at least better than fixing them one at a time.\n\nWould you want to prepare a patch handling all of these ?\n\n$ find src/ -name '*.c' -type f -print0 |xargs -r0 awk '{fn=gensub(\".*/\", \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\", \"\", 1, fn)} FNR==1 && /---/{top=1} /\\*\\//{top=0} !top{next} FNR==3 && NF==2 && $2!=fn{print FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline; if ($0!~FILENAME){print FILENAME,\"foot\",$2}}'\nsrc/backend/catalog/pg_publication.c foot pg_publication.c\nsrc/backend/utils/activity/wait_event.c foot src/backend/postmaster/wait_event.c\nsrc/backend/utils/activity/backend_status.c foot src/backend/postmaster/backend_status.c\nsrc/backend/utils/adt/version.c foot \nsrc/backend/replication/logical/reorderbuffer.c foot src/backend/replication/reorderbuffer.c\nsrc/backend/replication/logical/snapbuild.c foot src/backend/replication/snapbuild.c\nsrc/backend/replication/logical/logicalfuncs.c foot src/backend/replication/logicalfuncs.c\nsrc/backend/optimizer/util/inherit.c foot src/backend/optimizer/path/inherit.c\nsrc/backend/optimizer/util/appendinfo.c foot src/backend/optimizer/path/appendinfo.c\nsrc/backend/commands/publicationcmds.c foot publicationcmds.c\nsrc/backend/commands/subscriptioncmds.c foot 
subscriptioncmds.c\nsrc/interfaces/libpq/fe-misc.c head fe-misc.c FILE\nsrc/bin/scripts/common.c head common common.c\nsrc/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n\nThere's some function comments wrong too.\nIn case someone want to fix them together.\n\ncommit 4fba6c5044da43c1fa263125e422e869ae449ae7\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sun Sep 5 18:14:39 2021 -0500\n\n Wrong function name in header comment\n \n Found like this\n \n for f in `find src -type f -name '*.c'`; do awk -v p=0 '/^\\/\\* *-*$/{h=$0; getline; h=h\"\\n\"$0; g=gensub( \"[^_[:alnum:]].*\", \"\", 1, $2); p=1} 0&&/^{/{p=0; print h}; /^ \\*\\/$/{h=h\"\\n\"$0; getline a; h=h\"\\n\"a; getline f; h=h\"\\n\"f; l=length(g); if (substr(f,1,7) == substr(g,1,7) && substr(f,1,l) != substr(g,1,l)) print FILENAME,g,f,\"\\n\"h; next} 0&&/^[^s/ {]/{p=0; h=\"\"; next} 0&&p{h=h\"\\n\"$0}' \"$f\"; done |less\n\ndiff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\nindex cedb3848dd..e53d381e19 100644\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -105,7 +105,7 @@ compare_path_costs(Path *path1, Path *path2, CostSelector criterion)\n }\n \n /*\n- * compare_path_fractional_costs\n+ * compare_fractional_path_costs\n *\t Return -1, 0, or +1 according as path1 is cheaper, the same cost,\n *\t or more expensive than path2 for fetching the specified fraction\n *\t of the total tuples.\ndiff --git a/src/common/pg_lzcompress.c b/src/common/pg_lzcompress.c\nindex a30a2c2eb8..72e6a7ea61 100644\n--- a/src/common/pg_lzcompress.c\n+++ b/src/common/pg_lzcompress.c\n@@ -825,7 +825,7 @@ pglz_decompress(const char *source, int32 slen, char *dest,\n \n \n /* ----------\n- * pglz_max_compressed_size -\n+ * pglz_maximum_compressed_size -\n *\n *\t\tCalculate the maximum compressed size for a given amount of raw data.\n *\t\tReturn the maximum size, or total compressed size if maximum size is\n\n\n", "msg_date": 
"Fri, 10 Sep 2021 12:56:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: incorrect file name in backend_progress.c header" }, { "msg_contents": "On Fri, Sep 10, 2021 at 10:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Sep 10, 2021 at 10:11:34AM -0700, Zhihong Yu wrote:\n> > Hi,\n> > I was looking at backend_progress.c and noticed that the filename and\n> path\n> > were wrong in the header.\n> >\n> > Here is patch which corrects the mistake.\n>\n> For the record, I don't really like boilerplate, but fixing the\n> boilerplates\n> all at once is at least better than fixing them one at a time.\n>\n> Would you want to prepare a patch handling all of these ?\n>\n> $ find src/ -name '*.c' -type f -print0 |xargs -r0 awk '{fn=gensub(\".*/\",\n> \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\", \"\", 1, fn)} FNR==1\n> && /---/{top=1} /\\*\\//{top=0} !top{next} FNR==3 && NF==2 && $2!=fn{print\n> FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline; if ($0!~FILENAME){print\n> FILENAME,\"foot\",$2}}'\n> src/backend/catalog/pg_publication.c foot pg_publication.c\n> src/backend/utils/activity/wait_event.c foot\n> src/backend/postmaster/wait_event.c\n> src/backend/utils/activity/backend_status.c foot\n> src/backend/postmaster/backend_status.c\n> src/backend/utils/adt/version.c foot\n> src/backend/replication/logical/reorderbuffer.c foot\n> src/backend/replication/reorderbuffer.c\n> src/backend/replication/logical/snapbuild.c foot\n> src/backend/replication/snapbuild.c\n> src/backend/replication/logical/logicalfuncs.c foot\n> src/backend/replication/logicalfuncs.c\n> src/backend/optimizer/util/inherit.c foot\n> src/backend/optimizer/path/inherit.c\n> src/backend/optimizer/util/appendinfo.c foot\n> src/backend/optimizer/path/appendinfo.c\n> src/backend/commands/publicationcmds.c foot publicationcmds.c\n> src/backend/commands/subscriptioncmds.c foot subscriptioncmds.c\n> 
src/interfaces/libpq/fe-misc.c head fe-misc.c FILE\n> src/bin/scripts/common.c head common common.c\n> src/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n>\n> There's some function comments wrong too.\n> In case someone want to fix them together.\n>\n> commit 4fba6c5044da43c1fa263125e422e869ae449ae7\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun Sep 5 18:14:39 2021 -0500\n>\n> Wrong function name in header comment\n>\n> Found like this\n>\n> for f in `find src -type f -name '*.c'`; do awk -v p=0 '/^\\/\\*\n> *-*$/{h=$0; getline; h=h\"\\n\"$0; g=gensub( \"[^_[:alnum:]].*\", \"\", 1, $2);\n> p=1} 0&&/^{/{p=0; print h}; /^ \\*\\/$/{h=h\"\\n\"$0; getline a; h=h\"\\n\"a;\n> getline f; h=h\"\\n\"f; l=length(g); if (substr(f,1,7) == substr(g,1,7) &&\n> substr(f,1,l) != substr(g,1,l)) print FILENAME,g,f,\"\\n\"h; next} 0&&/^[^s/\n> {]/{p=0; h=\"\"; next} 0&&p{h=h\"\\n\"$0}' \"$f\"; done |less\n>\n> diff --git a/src/backend/optimizer/util/pathnode.c\n> b/src/backend/optimizer/util/pathnode.c\n> index cedb3848dd..e53d381e19 100644\n> --- a/src/backend/optimizer/util/pathnode.c\n> +++ b/src/backend/optimizer/util/pathnode.c\n> @@ -105,7 +105,7 @@ compare_path_costs(Path *path1, Path *path2,\n> CostSelector criterion)\n> }\n>\n> /*\n> - * compare_path_fractional_costs\n> + * compare_fractional_path_costs\n> * Return -1, 0, or +1 according as path1 is cheaper, the same cost,\n> * or more expensive than path2 for fetching the specified fraction\n> * of the total tuples.\n> diff --git a/src/common/pg_lzcompress.c b/src/common/pg_lzcompress.c\n> index a30a2c2eb8..72e6a7ea61 100644\n> --- a/src/common/pg_lzcompress.c\n> +++ b/src/common/pg_lzcompress.c\n> @@ -825,7 +825,7 @@ pglz_decompress(const char *source, int32 slen, char\n> *dest,\n>\n>\n> /* ----------\n> - * pglz_max_compressed_size -\n> + * pglz_maximum_compressed_size -\n> *\n> * Calculate the maximum compressed size for a given amount\n> of raw data.\n> * Return the maximum size, or total 
compressed size if\n> maximum size is\n>\n\nHi,\nFor the first list, do you want to include the path to the file for\nIDENTIFICATION ?\nIf so, I can prepare a patch covering the files in that list.\n\nCheers\n\nOn Fri, Sep 10, 2021 at 10:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Fri, Sep 10, 2021 at 10:11:34AM -0700, Zhihong Yu wrote:\n> Hi,\n> I was looking at backend_progress.c and noticed that the filename and path\n> were wrong in the header.\n> \n> Here is patch which corrects the mistake.\n\nFor the record, I don't really like boilerplate, but fixing the boilerplates\nall at once is at least better than fixing them one at a time.\n\nWould you want to prepare a patch handling all of these ?\n\n$ find src/ -name '*.c' -type f -print0 |xargs -r0 awk '{fn=gensub(\".*/\", \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\", \"\", 1, fn)} FNR==1 && /---/{top=1} /\\*\\//{top=0} !top{next} FNR==3 && NF==2 && $2!=fn{print FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline; if ($0!~FILENAME){print FILENAME,\"foot\",$2}}'\nsrc/backend/catalog/pg_publication.c foot pg_publication.c\nsrc/backend/utils/activity/wait_event.c foot src/backend/postmaster/wait_event.c\nsrc/backend/utils/activity/backend_status.c foot src/backend/postmaster/backend_status.c\nsrc/backend/utils/adt/version.c foot \nsrc/backend/replication/logical/reorderbuffer.c foot src/backend/replication/reorderbuffer.c\nsrc/backend/replication/logical/snapbuild.c foot src/backend/replication/snapbuild.c\nsrc/backend/replication/logical/logicalfuncs.c foot src/backend/replication/logicalfuncs.c\nsrc/backend/optimizer/util/inherit.c foot src/backend/optimizer/path/inherit.c\nsrc/backend/optimizer/util/appendinfo.c foot src/backend/optimizer/path/appendinfo.c\nsrc/backend/commands/publicationcmds.c foot publicationcmds.c\nsrc/backend/commands/subscriptioncmds.c foot subscriptioncmds.c\nsrc/interfaces/libpq/fe-misc.c head fe-misc.c FILE\nsrc/bin/scripts/common.c head common 
common.c\nsrc/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n\nThere's some function comments wrong too.\nIn case someone want to fix them together.\n\ncommit 4fba6c5044da43c1fa263125e422e869ae449ae7\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate:   Sun Sep 5 18:14:39 2021 -0500\n\n    Wrong function name in header comment\n\n    Found like this\n\n    for f in `find src -type f -name '*.c'`; do awk -v p=0 '/^\\/\\* *-*$/{h=$0; getline; h=h\"\\n\"$0; g=gensub( \"[^_[:alnum:]].*\", \"\", 1, $2); p=1} 0&&/^{/{p=0; print h}; /^ \\*\\/$/{h=h\"\\n\"$0; getline a; h=h\"\\n\"a; getline f; h=h\"\\n\"f; l=length(g); if (substr(f,1,7) == substr(g,1,7) && substr(f,1,l) != substr(g,1,l)) print FILENAME,g,f,\"\\n\"h; next} 0&&/^[^s/ {]/{p=0; h=\"\"; next} 0&&p{h=h\"\\n\"$0}' \"$f\"; done |less\n\ndiff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\nindex cedb3848dd..e53d381e19 100644\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -105,7 +105,7 @@ compare_path_costs(Path *path1, Path *path2, CostSelector criterion)\n }\n\n /*\n- * compare_path_fractional_costs\n+ * compare_fractional_path_costs\n  *       Return -1, 0, or +1 according as path1 is cheaper, the same cost,\n  *       or more expensive than path2 for fetching the specified fraction\n  *       of the total tuples.\ndiff --git a/src/common/pg_lzcompress.c b/src/common/pg_lzcompress.c\nindex a30a2c2eb8..72e6a7ea61 100644\n--- a/src/common/pg_lzcompress.c\n+++ b/src/common/pg_lzcompress.c\n@@ -825,7 +825,7 @@ pglz_decompress(const char *source, int32 slen, char *dest,\n\n\n /* ----------\n- * pglz_max_compressed_size -\n+ * pglz_maximum_compressed_size -\n  *\n  *             Calculate the maximum compressed size for a given amount of raw data.\n  *             Return the maximum size, or total compressed size if maximum size isHi,For the first list, do yu want to include the path to the file for IDENTIFICATION ? 
If so, I can prepare a patch covering the files in that list.Cheers", "msg_date": "Fri, 10 Sep 2021 11:07:23 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: incorrect file name in backend_progress.c header" }, { "msg_contents": "On Fri, Sep 10, 2021 at 11:07 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Sep 10, 2021 at 10:56 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> On Fri, Sep 10, 2021 at 10:11:34AM -0700, Zhihong Yu wrote:\n>> > Hi,\n>> > I was looking at backend_progress.c and noticed that the filename and\n>> path\n>> > were wrong in the header.\n>> >\n>> > Here is patch which corrects the mistake.\n>>\n>> For the record, I don't really like boilerplate, but fixing the\n>> boilerplates\n>> all at once is at least better than fixing them one at a time.\n>>\n>> Would you want to prepare a patch handling all of these ?\n>>\n>> $ find src/ -name '*.c' -type f -print0 |xargs -r0 awk '{fn=gensub(\".*/\",\n>> \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\", \"\", 1, fn)} FNR==1\n>> && /---/{top=1} /\\*\\//{top=0} !top{next} FNR==3 && NF==2 && $2!=fn{print\n>> FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline; if ($0!~FILENAME){print\n>> FILENAME,\"foot\",$2}}'\n>> src/backend/catalog/pg_publication.c foot pg_publication.c\n>> src/backend/utils/activity/wait_event.c foot\n>> src/backend/postmaster/wait_event.c\n>> src/backend/utils/activity/backend_status.c foot\n>> src/backend/postmaster/backend_status.c\n>> src/backend/utils/adt/version.c foot\n>> src/backend/replication/logical/reorderbuffer.c foot\n>> src/backend/replication/reorderbuffer.c\n>> src/backend/replication/logical/snapbuild.c foot\n>> src/backend/replication/snapbuild.c\n>> src/backend/replication/logical/logicalfuncs.c foot\n>> src/backend/replication/logicalfuncs.c\n>> src/backend/optimizer/util/inherit.c foot\n>> src/backend/optimizer/path/inherit.c\n>> src/backend/optimizer/util/appendinfo.c foot\n>> 
src/backend/optimizer/path/appendinfo.c\n>> src/backend/commands/publicationcmds.c foot publicationcmds.c\n>> src/backend/commands/subscriptioncmds.c foot subscriptioncmds.c\n>> src/interfaces/libpq/fe-misc.c head fe-misc.c FILE\n>> src/bin/scripts/common.c head common common.c\n>> src/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n>>\n>> There's some function comments wrong too.\n>> In case someone want to fix them together.\n>>\n>> commit 4fba6c5044da43c1fa263125e422e869ae449ae7\n>> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n>> Date: Sun Sep 5 18:14:39 2021 -0500\n>>\n>> Wrong function name in header comment\n>>\n>> Found like this\n>>\n>> for f in `find src -type f -name '*.c'`; do awk -v p=0 '/^\\/\\*\n>> *-*$/{h=$0; getline; h=h\"\\n\"$0; g=gensub( \"[^_[:alnum:]].*\", \"\", 1, $2);\n>> p=1} 0&&/^{/{p=0; print h}; /^ \\*\\/$/{h=h\"\\n\"$0; getline a; h=h\"\\n\"a;\n>> getline f; h=h\"\\n\"f; l=length(g); if (substr(f,1,7) == substr(g,1,7) &&\n>> substr(f,1,l) != substr(g,1,l)) print FILENAME,g,f,\"\\n\"h; next} 0&&/^[^s/\n>> {]/{p=0; h=\"\"; next} 0&&p{h=h\"\\n\"$0}' \"$f\"; done |less\n>>\n>> diff --git a/src/backend/optimizer/util/pathnode.c\n>> b/src/backend/optimizer/util/pathnode.c\n>> index cedb3848dd..e53d381e19 100644\n>> --- a/src/backend/optimizer/util/pathnode.c\n>> +++ b/src/backend/optimizer/util/pathnode.c\n>> @@ -105,7 +105,7 @@ compare_path_costs(Path *path1, Path *path2,\n>> CostSelector criterion)\n>> }\n>>\n>> /*\n>> - * compare_path_fractional_costs\n>> + * compare_fractional_path_costs\n>> * Return -1, 0, or +1 according as path1 is cheaper, the same\n>> cost,\n>> * or more expensive than path2 for fetching the specified fraction\n>> * of the total tuples.\n>> diff --git a/src/common/pg_lzcompress.c b/src/common/pg_lzcompress.c\n>> index a30a2c2eb8..72e6a7ea61 100644\n>> --- a/src/common/pg_lzcompress.c\n>> +++ b/src/common/pg_lzcompress.c\n>> @@ -825,7 +825,7 @@ pglz_decompress(const char *source, int32 slen, char\n>> 
*dest,\n>>\n>>\n>> /* ----------\n>> - * pglz_max_compressed_size -\n>> + * pglz_maximum_compressed_size -\n>> *\n>> * Calculate the maximum compressed size for a given amount\n>> of raw data.\n>> * Return the maximum size, or total compressed size if\n>> maximum size is\n>>\n>\n> Hi,\n> For the first list, do yu want to include the path to the file for\n> IDENTIFICATION ?\n> If so, I can prepare a patch covering the files in that list.\n>\n> Cheers\n>\n\nHere is updated patch covering the first list.", "msg_date": "Fri, 10 Sep 2021 11:15:42 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: incorrect file name in backend_progress.c header" }, { "msg_contents": "> For the first list, do you want to include the path to the file for\n> IDENTIFICATION ?\n> If so, I can prepare a patch covering the files in that list.\n\nSince there's so few exceptions to the \"rule\", I think they should be fixed for\nconsistency.\n\nHere's an awk which finds a few more - including the one in your original\nreport.\n\n$ find src -name '*.[ch]' -type f -print0 |xargs -r0 awk '{fn=gensub(\".*/\", \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\", \"\", 1, fn)} FNR==1 && /---$/{top=1} /\\*\\//{top=0} !top{next} FNR>1 && FNR<4 && NF==2 && $2!=fn{print FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline; if ($0!~FILENAME){print FILENAME,\"foot\",$2}}'\n\nsrc/include/utils/dynahash.h head dynahash.h dynahash\nsrc/include/replication/pgoutput.h foot pgoutput.h\nsrc/backend/catalog/pg_publication.c foot pg_publication.c\nsrc/backend/utils/activity/wait_event.c foot src/backend/postmaster/wait_event.c\nsrc/backend/utils/activity/backend_status.c foot src/backend/postmaster/backend_status.c\nsrc/backend/utils/activity/backend_progress.c head backend_progress.c progress.c\nsrc/backend/utils/adt/version.c foot \nsrc/backend/replication/logical/reorderbuffer.c foot 
src/backend/replication/reorderbuffer.c\nsrc/backend/replication/logical/snapbuild.c foot src/backend/replication/snapbuild.c\nsrc/backend/replication/logical/logicalfuncs.c foot src/backend/replication/logicalfuncs.c\nsrc/backend/optimizer/util/inherit.c foot src/backend/optimizer/path/inherit.c\nsrc/backend/optimizer/util/appendinfo.c foot src/backend/optimizer/path/appendinfo.c\nsrc/backend/commands/publicationcmds.c foot publicationcmds.c\nsrc/backend/commands/subscriptioncmds.c foot subscriptioncmds.c\nsrc/interfaces/libpq/fe-misc.c head fe-misc.c FILE\nsrc/bin/scripts/common.c head common common.c\nsrc/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:20:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: incorrect file name in backend_progress.c header" }, { "msg_contents": "On Fri, Sep 10, 2021 at 11:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> > For the first list, do you want to include the path to the file for\n> > IDENTIFICATION ?\n> > If so, I can prepare a patch covering the files in that list.\n>\n> Since there's so few exceptions to the \"rule\", I think they should be\n> fixed for\n> consistency.\n>\n> Here's an awk which finds a few more - including the one in your original\n> report.\n>\n> $ find src -name '*.[ch]' -type f -print0 |xargs -r0 awk\n> '{fn=gensub(\".*/\", \"\", \"1\", FILENAME)} FILENAME~/scripts/{fn=gensub(\"\\\\.c\",\n> \"\", 1, fn)} FNR==1 && /---$/{top=1} /\\*\\//{top=0} !top{next} FNR>1 && FNR<4\n> && NF==2 && $2!=fn{print FILENAME,\"head\",fn,$2} /IDENTIFICATION/{getline;\n> if ($0!~FILENAME){print FILENAME,\"foot\",$2}}'\n>\n> src/include/utils/dynahash.h head dynahash.h dynahash\n> src/include/replication/pgoutput.h foot pgoutput.h\n> src/backend/catalog/pg_publication.c foot pg_publication.c\n> src/backend/utils/activity/wait_event.c foot\n> src/backend/postmaster/wait_event.c\n> 
src/backend/utils/activity/backend_status.c foot\n> src/backend/postmaster/backend_status.c\n> src/backend/utils/activity/backend_progress.c head backend_progress.c\n> progress.c\n> src/backend/utils/adt/version.c foot\n> src/backend/replication/logical/reorderbuffer.c foot\n> src/backend/replication/reorderbuffer.c\n> src/backend/replication/logical/snapbuild.c foot\n> src/backend/replication/snapbuild.c\n> src/backend/replication/logical/logicalfuncs.c foot\n> src/backend/replication/logicalfuncs.c\n> src/backend/optimizer/util/inherit.c foot\n> src/backend/optimizer/path/inherit.c\n> src/backend/optimizer/util/appendinfo.c foot\n> src/backend/optimizer/path/appendinfo.c\n> src/backend/commands/publicationcmds.c foot publicationcmds.c\n> src/backend/commands/subscriptioncmds.c foot subscriptioncmds.c\n> src/interfaces/libpq/fe-misc.c head fe-misc.c FILE\n> src/bin/scripts/common.c head common common.c\n> src/port/pgcheckdir.c head pgcheckdir.c src/port/pgcheckdir.c\n>\n> --\n> Justin\n>\n\nHere is updated patch covering second batch of files.", "msg_date": "Fri, 10 Sep 2021 11:28:17 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: incorrect file name in backend_progress.c header" } ]
[ { "msg_contents": "I noticed that the new parameter remove_temp_files_after_crash is currently set to a default value of \"true\" in the version 14 release. It seems this was discussed in this thread [1], and it doesn't look to me like there's been a lot of stress testing of this feature.\r\n\r\nIn our fleet there have been cases where we have seen hundreds of thousands of temp files generated. I found a case where we helped a customer that had a little over 2.2 million temp files. Single threaded cleanup of these takes a significant amount of time and delays recovery. In RDS, we mitigated this by moving the pgsql_tmp directory aside, start the engine and then separately remove the old temp files.\r\n\r\nAfter noticing the current plans to default this GUC to \"on\" in v14, just thought I'd raise the question of whether this should get a little more discussion or testing with higher numbers of temp files?\r\n\r\nRegards,\r\nShawn McCoy\r\nDatabase Engineer\r\nAmazon Web Services\r\n\r\n[1] https://www.postgresql.org/message-id/CAH503wDKdYzyq7U-QJqGn%3DGm6XmoK%2B6_6xTJ-Yn5WSvoHLY1Ww%40mail.gmail.com", "msg_date": "Fri, 10 Sep 2021 20:58:20 +0000", "msg_from": "\"McCoy, Shawn\" <shamccoy@amazon.com>", "msg_from_op": true, "msg_subject": "Remove_temp_files_after_crash and significant recovery/startup time" }, { "msg_contents": "\"McCoy, Shawn\" <shamccoy@amazon.com> writes:\n> I noticed that the new parameter remove_temp_files_after_crash is currently set to a default value of \"true\" in the version 14 release. It seems this was discussed in this thread [1], and it doesn't look to me like there's been a lot of stress testing of this feature.\n\nProbably not ...\n\n> In our fleet there have been cases where we have seen hundreds of thousands of temp files generated. I found a case where we helped a customer that had a little over 2.2 million temp files. Single threaded cleanup of these takes a significant amount of time and delays recovery. In RDS, we mitigated this by moving the pgsql_tmp directory aside, start the engine and then separately remove the old temp files.\n\nTBH, I think the thing to be asking questions about is how come you had so\nmany temp files in the first place. 
Sounds like something is misadjusted\nsomewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Sep 2021 17:32:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove_temp_files_after_crash and significant recovery/startup\n time" }, { "msg_contents": "On 9/10/21 10:58 PM, McCoy, Shawn wrote:\n> I noticed that the new parameter remove_temp_files_after_crash is \n> currently set to a default value of \"true\" in the version 14 release. It \n> seems this was discussed in this thread [1], and it doesn't look to me \n> like there's been a lot of stress testing of this feature.\n> \n\nNot sure what could we learn from a stress test? IMHO it's fairly \nnatural that if there are many temporary files and/or if deleting a file \nis expensive on a given filesystem, the cleanup may take time.\n\n> In our fleet there have been cases where we have seen hundreds of \n> thousands of temp files generated.  I found a case where we helped a \n> customer that had a little over 2.2 million temp files.  Single threaded \n> cleanup of these takes a significant amount of time and delays recovery. \n> In RDS, we mitigated this by moving the pgsql_tmp directory aside, start \n> the engine and then separately remove the old temp files.\n> \n> After noticing the current plans to default this GUC to \"on\" in v14, \n> just thought I'd raise the question of whether this should get a little \n> more discussion or testing with higher numbers of temp files?\n> \n\nI doubt we can lean anything new from such testing.\n\nOf course, we can discuss the default for the GUC. 
I see it as a trade \noff between risk of running out of disk space and increased recovery \ntime, and perhaps the decision to prioritize lower risk of running out \nof disk space was not the right one ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Sep 2021 23:57:24 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove_temp_files_after_crash and significant recovery/startup\n time" }, { "msg_contents": "On Fri, Sep 10, 2021, at 5:58 PM, McCoy, Shawn wrote:\n> I noticed that the new parameter remove_temp_files_after_crash is currently set to a default value of \"true\" in the version 14 release. It seems this was discussed in this thread [1], and it doesn't look to me like there's been a lot of stress testing of this feature.\n> \n> In our fleet there have been cases where we have seen hundreds of thousands of temp files generated. I found a case where we helped a customer that had a little over 2.2 million temp files. Single threaded cleanup of these takes a significant amount of time and delays recovery. In RDS, we mitigated this by moving the pgsql_tmp directory aside, start the engine and then separately remove the old temp files.\n2.2 million temporary files? I'm wondering in what circumstances your system is\ngenerating those temporary files. Low work_mem and thousands of connections?\nLow work_mem and a huge analytic query? When I designed this feature I thought\nabout some extreme cases, that's why this behavior is controlled by a GUC. 
We\ncan probably add a sentence that explains the recovery delay caused by dozens\nof thousands of temporary files.\n\n> \n> After noticing the current plans to default this GUC to \"on\" in v14, just thought I'd raise the question of whether this should get a little more discussion or testing with higher numbers of temp files?\n> \nCrash a backend is per se a rare condition (at least it should be). Crash while\nhaving millions of temporary files in your PGDATA is an even rarer condition. I\nsaw several cases related to this issue and none of them generates millions of\ntemporary files (at most a thousand files). IMO the benefits outweigh the\nissues as I explained in [1]. Service continuity (for the vast majority of\ncases) justifies turning it on by default.\n\nIf your Postgres instance is generating millions of temporary files, it seems\nyour setup needs some tuning.\n\n\n[1] https://www.postgresql.org/message-id/CAH503wDKdYzyq7U-QJqGn%3DGm6XmoK%2B6_6xTJ-Yn5WSvoHLY1Ww%40mail.gmail.com\n \n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Sep 10, 2021, at 5:58 PM, McCoy, Shawn wrote:I noticed that the new parameter remove_temp_files_after_crash is currently set to a default value of \"true\" in the version 14 release. It seems this was discussed in this thread [1], and it doesn't look to me like there's been a lot of\n stress testing of this feature. In our fleet there have been cases where we have seen hundreds of thousands of temp files generated.  I found a case where we helped a customer that had a little over 2.2 million temp files.  Single threaded cleanup of these takes a significant\n amount of time and delays recovery. In RDS, we mitigated this by moving the pgsql_tmp directory aside, start the engine and then separately remove the old temp files.2.2 million temporary files? I'm wondering in what circumstances your system isgenerating those temporary files. Low work_mem and thousands of connections?Low work_mem and a huge analytic query? 
When I designed this feature I thoughtabout some extreme cases, that's why this behavior is controlled by a GUC. Wecan probably add a sentence that explains the recovery delay caused by dozensof thousands of temporary files.After noticing the current plans to default this GUC to \"on\" in v14, just thought I'd raise the question of whether this should get a little more discussion or testing with higher numbers of temp files? Crash a backend is per se a rare condition (at least it should be). Crash whilehaving millions of temporary files in your PGDATA is an even rarer condition. Isaw several cases related to this issue and none of them generates millions oftemporary files (at most a thousand files). IMO the benefits  outweigh theissues as I explained in [1]. Service continuity (for the vast majority ofcases) justifies turning it on by default.If your Postgres instance is generating millions of temporary files, it seemsyour setup needs some tuning.[1] https://www.postgresql.org/message-id/CAH503wDKdYzyq7U-QJqGn%3DGm6XmoK%2B6_6xTJ-Yn5WSvoHLY1Ww%40mail.gmail.com --Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Fri, 10 Sep 2021 19:10:00 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Remove=5Ftemp=5Ffiles=5Fafter=5Fcrash_and_significant_reco?=\n =?UTF-8?Q?very/startup_time?=" }, { "msg_contents": "On 9/10/21 14:57, Tomas Vondra wrote:\n> On 9/10/21 10:58 PM, McCoy, Shawn wrote:\n>> I noticed that the new parameter remove_temp_files_after_crash is\n>> currently set to a default value of \"true\" in the version 14 release.\n>> It seems this was discussed in this thread [1], and it doesn't look to\n>> me like there's been a lot of stress testing of this feature.\n> \n> Not sure what could we learn from a stress test? 
IMHO it's fairly\n> natural that if there are many temporary files and/or if deleting a file\n> is expensive on a given filesystem, the cleanup may take time.\n\nThe thing that comes to mind for me is just getting a sense of what the\ncurve looks like for number of files versus startup time. If I can find\nsome time then I'll poke around and share numbers.\n\nI remember awhile ago, I worked with a PostgreSQL user who had a major\noutage crisis on their primary production database. They were having\nsome minor issues, and they decided to do a \"quick\" restart to see if it\nwould clear things out. The restart ended up taking something like a day\nor two and the business was down the whole time. Working with them to\nfigure out what was happening, we found out that their very-DDL-heavy\nworkload had combined with a stuck checkpointer process. No checkpoints\nhad been completed for over a week. Only choices were waiting for WAL to\nreplay or taking data loss; we couldn't even get out of pain with a\nrestore from backup - sinces a restore still required replaying all the\nsame WAL.\n\nThere are certain core features in a database that you really need to be\nas reliable and robust as possible. IMO, for critical production\ndatabases, quick-as-possible-restarts are one of those.\n\n\n>> In our fleet there have been cases where we have seen hundreds of\n>> thousands of temp files generated.  I found a case where we helped a\n>> customer that had a little over 2.2 million temp files.  Single\n>> threaded cleanup of these takes a significant amount of time and\n>> delays recovery. 
In RDS, we mitigated this by moving the pgsql_tmp\n>> directory aside, start the engine and then separately remove the old\n>> temp files.\n>>\n>> After noticing the current plans to default this GUC to \"on\" in v14,\n>> just thought I'd raise the question of whether this should get a\n>> little more discussion or testing with higher numbers of temp files?\n>>\n> \n> I doubt we can lean anything new from such testing.\n> \n> Of course, we can discuss the default for the GUC. I see it as a trade\n> off between risk of running out of disk space and increased recovery\n> time, and perhaps the decision to prioritize lower risk of running out\n> of disk space was not the right one ...\n\nI'm doing a little asking around with colleagues. I'm having trouble\nfinding cases where people went end-to-end and figured out exactly what\nin the workload was causing the high number of temp files. However,\nthere seems to be a fair number of incidents with numbers of temp files\nin the hundreds of thousands.\n\nOne thing that seems possible is that in some of these cases, the temp\nfiles were accumulating across many engine crashes - those cases would\nnot be an issue once you started cleaning up on every restart. However I\nsuspect there are still some cases where high connection counts and some\nerratic workload characteristic or bugs are causing accumulation without\nmultiple crashes. If I learn more, I'll relay it along.\n\nFrankly, if the GUC defaults to off, then we're a lot less likely to\nfind out if there /are/ issues. Kinda like LLVM and parallel query... at\nsome point you just have to turn it on... even if you're not 100% sure\nwhere all the sharp edges are yet... PostgreSQL meme: \"I test in someone\nelse's production\"\n\nAll of that said, FWIW, if a restart is taking too long then a user can\nalways turn the GUC off and cancel/retry the startup. 
So this is not the\nsame as a stuck checkpointer, because there's simple recourse.\n\nFor my part, I appreciate this discussion. I missed it if these points\nwere debated when the feature was first committed and I can see\narguments both ways. It's not without precedent to have a new feature\nturned off by default for its' first major release version. But we're\ntalking about a corner case situation, and it's not like users are\nwithout recourse.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Wed, 15 Sep 2021 14:24:35 -0700", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Remove_temp_files_after_crash and significant recovery/startup\n time" } ]
[ { "msg_contents": "Hi,\n\nWe have two static check_permissions functions (one in slotfuncs.c\nanother in logicalfuncs.c) with the same name and same code for\nchecking the privileges for using replication slots. Why can't we have\na single function CheckReplicationSlotPermissions in slot.c? This way,\nwe can get rid of redundant code. Attaching a patch for it.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 11 Sep 2021 13:58:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove duplicate static function check_permissions in slotfuncs.c and\n logicalfuncs.c" }, { "msg_contents": "On Sat, Sep 11, 2021, at 5:28 AM, Bharath Rupireddy wrote:\n> We have two static check_permissions functions (one in slotfuncs.c\n> another in logicalfuncs.c) with the same name and same code for\n> checking the privileges for using replication slots. Why can't we have\n> a single function CheckReplicationSlotPermissions in slot.c? This way,\n> we can get rid of redundant code. Attaching a patch for it.\nGood catch! Your patch looks good to me.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Sun, 12 Sep 2021 13:46:35 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Remove_duplicate_static_function_check=5Fpermissions_in_sl?=\n =?UTF-8?Q?otfuncs.c_and_logicalfuncs.c?=" }, { "msg_contents": "On 9/11/21, 1:31 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> We have two static check_permissions functions (one in slotfuncs.c\r\n> another in logicalfuncs.c) with the same name and same code for\r\n> checking the privileges for using replication slots. Why can't we have\r\n> a single function CheckReplicationSlotPermissions in slot.c? This way,\r\n> we can get rid of redundant code. Attaching a patch for it.\r\n\r\n+1\r\n\r\n+/*\r\n+ * Check whether the user has privilege to use replication slots.\r\n+ */\r\n+void\r\n+CheckReplicationSlotPermissions(void)\r\n+{\r\n+\tif (!superuser() && !has_rolreplication(GetUserId()))\r\n+\t\tereport(ERROR,\r\n+\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\r\n+\t\t\t\t (errmsg(\"must be superuser or replication role to use replication slots\"))));\r\n+}\r\n\r\nnitpick: It looks like there's an extra set of parentheses around\r\nerrmsg().\r\n\r\nNathan\r\n\r\n", "msg_date": "Sun, 12 Sep 2021 23:02:49 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Remove duplicate static function check_permissions in slotfuncs.c\n and\n logicalfuncs.c" }, { "msg_contents": "On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan wrote:\n> nitpick: It looks like there's an extra set of parentheses around\n> errmsg().\nIndeed. 
Even the requirement for extra parenthesis around auxiliary function\ncalls was removed in v12 (e3a87b4991cc2d00b7a3082abb54c5f12baedfd1).\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Sun, 12 Sep 2021 22:14:36 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Remove_duplicate_static_function_check=5Fpermissions_in_sl?=\n =?UTF-8?Q?otfuncs.c_and_logicalfuncs.c?=" }, { "msg_contents": "On Sun, Sep 12, 2021 at 10:14:36PM -0300, Euler Taveira wrote:\n> On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan wrote:\n>> nitpick: It looks like there's an extra set of parentheses around\n>> errmsg().\n>\n> Indeed. Even the requirement for extra parenthesis around auxiliary function\n
Even the requirement for extra parenthesis around auxiliary function\n> calls was removed in v12 (e3a87b4991cc2d00b7a3082abb54c5f12baedfd1).\n\nThe same commit says that the new code can be written in any way.\nHaving said that, I will leave it to the committer to take a call on\nwhether or not to remove the extra parenthesis.\n \"\n While new code can be written either way, code intended to be\n back-patched will need to use extra parens for awhile yet.\n \"\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 13 Sep 2021 08:47:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove duplicate static function check_permissions in slotfuncs.c\n and logicalfuncs.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 8:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Sep 12, 2021 at 10:14:36PM -0300, Euler Taveira wrote:\n> > On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan wrote:\n> >> nitpick: It looks like there's an extra set of parentheses around\n> >> errmsg().\n> >\n> > Indeed. Even the requirement for extra parenthesis around auxiliary function\n> > calls was removed in v12 (e3a87b4991cc2d00b7a3082abb54c5f12baedfd1).\n>\n> Yes. The patch makes sense. I am not seeing any other places that\n> could be grouped, so that looks fine as-is.\n\nThanks all for taking a look at the patch. 
Here's the CF entry -\nhttps://commitfest.postgresql.org/35/3319/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 13 Sep 2021 08:51:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove duplicate static function check_permissions in slotfuncs.c\n and logicalfuncs.c" }, { "msg_contents": "On Mon, Sep 13, 2021 at 08:51:18AM +0530, Bharath Rupireddy wrote:\n> On Mon, Sep 13, 2021 at 8:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Sun, Sep 12, 2021 at 10:14:36PM -0300, Euler Taveira wrote:\n>>> On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan wrote:\n>>>> nitpick: It looks like there's an extra set of parentheses around\n>>>> errmsg().\n>>>\n>>> Indeed. Even the requirement for extra parenthesis around auxiliary function\n>>> calls was removed in v12 (e3a87b4991cc2d00b7a3082abb54c5f12baedfd1).\n\nApplied. Not using those extra parenthesis is the most common\npattern, so tweaked this way.\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 10:23:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove duplicate static function check_permissions in\n slotfuncs.c and logicalfuncs.c" }, { "msg_contents": "On 2021-Sep-14, Michael Paquier wrote:\n\n> On Mon, Sep 13, 2021 at 08:51:18AM +0530, Bharath Rupireddy wrote:\n> > On Mon, Sep 13, 2021 at 8:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On Sun, Sep 12, 2021 at 10:14:36PM -0300, Euler Taveira wrote:\n> >>> On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan wrote:\n> >>>> nitpick: It looks like there's an extra set of parentheses around\n> >>>> errmsg().\n> >>>\n> >>> Indeed. Even the requirement for extra parenthesis around auxiliary function\n> >>> calls was removed in v12 (e3a87b4991cc2d00b7a3082abb54c5f12baedfd1).\n> \n> Applied. 
Not using those extra parenthesis is the most common\n> pattern, so tweaked this way.\n\nThe parentheses that commit e3a87b4991cc removed the requirement for are\nthose that the committed code still has, starting at the errcode() line.\nThe ones in errmsg() were redundant and have never been necessary.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n", "msg_date": "Tue, 14 Sep 2021 12:57:47 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove duplicate static function check_permissions in\n slotfuncs.c and logicalfuncs.c" }, { "msg_contents": "On Tue, Sep 14, 2021 at 12:57:47PM -0300, Alvaro Herrera wrote:\n> The parentheses that commit e3a87b4991cc removed the requirement for are\n> those that the committed code still has, starting at the errcode() line.\n> The ones in errmsg() were redundant and have never been necessary.\n\nIndeed, thanks!\n--\nMichael", "msg_date": "Wed, 15 Sep 2021 07:18:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove duplicate static function check_permissions in\n slotfuncs.c and logicalfuncs.c" } ]
[ { "msg_contents": "Hi hackers!\n\nThere's a lot of compression discussions nowadays. And that's cool!\nRecently Naresh Chainani in private discussion shared with me the idea to compress temporary files on disk.\nAnd I was thrilled to find no evidence of implementation of this interesting idea.\n\nI've prototyped Random Access Compressed File for fun[0]. The code is very dirty proof-of-concept.\nI compress Buffile by one block at a time. There are directory pages to store information about the size of each compressed block. If any byte of the block is changed - whole block is recompressed. Wasted space is never reused. If compressed block is more then BLCSZ - unknown bad things will happen :)\n\nHere are some my observations.\n\n0. The idea seems feasible. API of fd.c used by buffile.c can easily be abstracted for compressed temporary files. Seeks are necessary, but they are not very frequent. It's easy to make temp file compression GUC-controlled.\n\n1. Temp file footprint can be easily reduced. For example query\ncreate unlogged table y as select random()::text t from generate_series(0,9999999) g;\nuses for toast index build 140000000 bytes of temp file. With patch this value is reduced to 40841704 (x3.42 smaller).\n\n2. I have not found any evidence of performance improvement. I've only benchmarked patch on my laptop. And RAM (page cache) diminished any difference between writing compressed block and uncompressed block.\n\nHow do you think: does it worth to pursue the idea? OLTP systems rarely rely on data spilled to disk.\nAre there any known good random access compressed file libs? 
So we could avoid reinventing the wheel.\nMaybe someone tried this approach before?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/postgres_g/commit/426cd767694b88e64f5e6bee99fc653c45eb5abd\n\n", "msg_date": "Sat, 11 Sep 2021 17:31:37 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Compressing temporary files" }, { "msg_contents": "On Sat, Sep 11, 2021 at 05:31:37PM +0500, Andrey Borodin wrote:\n> How do you think: does it worth to pursue the idea? OLTP systems rarely rely on data spilled to disk.\n> Are there any known good random access compressed file libs? So we could avoid reinventing the wheel.\n> Maybe someone tried this approach before?\n\nWhy are temporary tables more useful for compression than other database\nfiles?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 6 Oct 2021 10:24:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Compressing temporary files" }, { "msg_contents": "On Sat, Sep 11, 2021 at 8:31 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I've prototyped Random Access Compressed File for fun[0]. The code is very dirty proof-of-concept.\n> I compress Buffile by one block at a time. There are directory pages to store information about the size of each compressed block. If any byte of the block is changed - whole block is recompressed. Wasted space is never reused. If compressed block is more then BLCSZ - unknown bad things will happen :)\n\nJust reading this description, I suppose it's also Bad if the block is\nrecompressed and the new compressed size is larger than the previous\ncompressed size. Or do you have some way to handle that?\n\nI think it's probably quite tricky to make this work if the temporary\nfiles can be modified after the data is first written. 
If you have a\ntemporary file that's never changed after the fact, then you could\ncompress all the blocks and maintain, on the side, an index that says\nwhere the compressed version of each block starts. That could work\nwhether or not the blocks expand when you try to compress them, and\nyou could even skip compression for blocks that get bigger when\n\"compressed\" or which don't compress nicely, just by including a\nboolean flag in your index saying whether that particular block is\ncompressed or not. But as soon as you have a case where the blocks can\nget modified after they are created, then I don't see how to make it\nwork nicely. You can't necessarily fit the new version of the block in\nthe space allocated for the old version of the block, and putting it\nelsewhere could turn sequential I/O into random I/O.\n\nLeaving all that aside, I think this feature has *some* potential,\nbecause I/O is expensive and compression could let us do less of it.\nThe problem is that a lot of the I/O that PostgreSQL thinks it does\nisn't real I/O. Everybody is pretty much forced to set work_mem\nconservatively to avoid OOM, which means a large proportion of\noperations that exceed work_mem and thus spill to files don't actually\nresult in real I/O. They end up fitting in memory after all; it's only\nthat the memory in question belongs to the OS rather than to\nPostgreSQL. And for operations of that type, which I believe to be\nvery common, compression is strictly a loss. You're doing extra CPU\nwork to avoid I/O that isn't actually happening.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Oct 2021 10:53:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compressing temporary files" }, { "msg_contents": "Hi,\n\nOn 9/11/21 2:31 PM, Andrey Borodin wrote:\n> Hi hackers!\n> \n> There's a lot of compression discussions nowadays. And that's cool! 
\n> Recently Naresh Chainani in private discussion shared with me the\n> idea to compress temporary files on disk. And I was thrilled to find\n> no evidence of implementation of this interesting idea.\n> \n> I've prototyped Random Access Compressed File for fun[0]. The code is\n> very dirty proof-of-concept. I compress Buffile by one block at a\n> time. There are directory pages to store information about the size\n> of each compressed block. If any byte of the block is changed - whole\n> block is recompressed. Wasted space is never reused. If compressed\n> block is more then BLCSZ - unknown bad things will happen :)\n> \n\nMight be an interesting feature, and the approach seems reasonable too\n(of course, it's a PoC, so it has rough edges that'd need to be solved).\n\nNot sure if compressing it at the 8kB block granularity is good or bad.\nPresumably larger compression blocks would give better compression, but\nthat's a detail we would investigate later.\n\n> Here are some my observations.\n> \n> 0. The idea seems feasible. API of fd.c used by buffile.c can easily\n> be abstracted for compressed temporary files. Seeks are necessary,\n> but they are not very frequent. It's easy to make temp file\n> compression GUC-controlled.\n> \n\nHmm. How much more expensive the seeks are, actually? If we compress the\nfiles block by block, then it's decompression of 8kB of data. Of course,\nthat's not free, but if you compare it to doing less I/O, it may easily\nbe a significant win.\n\n> 1. Temp file footprint can be easily reduced. For example query \n> create unlogged table y as select random()::text t from\n> generate_series(0,9999999) g; uses for toast index build 140000000\n> bytes of temp file. With patch this value is reduced to 40841704\n> (x3.42 smaller).\n> \n\nThat seems a bit optimistic, really. The problem is that while random()\nis random, it means we're only dealing with 10 characters in the text\nvalue. 
That's pretty redundant, and the compression benefits from that.\n\nBut then again, data produced by queries (which we may need to sort,\nwhich generates temp files) is probably redundant too.\n\n> 2. I have not found any evidence of performance improvement. I've\n> only benchmarked patch on my laptop. And RAM (page cache) diminished\n> any difference between writing compressed block and uncompressed\n> block.\n> \n\nI expect the performance improvement to be less direct, requiring\ncontention for resources (memory and I/O bandwidth). If you have\nmultiple sessions and memory pressure, that'll force temporary files\nfrom page cache to disk. The compression will reduce the memory pressure\n(because of less data written to page cache), possibly even eliminating\nthe need to write dirty pages to disk. And if we still have to write\ndata to disk, this reduces the amount we have to write.\n\nOf course, it may also reduce the disk space required for temp files,\nwhich is also nice.\n\n> How do you think: does it worth to pursue the idea? OLTP systems\n> rarely rely on data spilled to disk. Are there any known good random\n> access compressed file libs? So we could avoid reinventing the\n> wheel. Maybe someone tried this approach before?\n> \n\nI'd say it's worth investigating further.\n\nNot sure about existing solutions / libraries for this problem, but my\nguess is the overall approach is roughly what you implemented.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 Oct 2021 17:02:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Compressing temporary files" }, { "msg_contents": " On Sat, Sep 11, 2021, 6:01 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi hackers!\n>\n> There's a lot of compression discussions nowadays. 
And that's cool!\n> Recently Naresh Chainani in private discussion shared with me the idea to compress temporary files on disk.\n> And I was thrilled to find no evidence of implementation of this interesting idea.\n>\n> I've prototyped Random Access Compressed File for fun[0]. The code is very dirty proof-of-concept.\n> I compress Buffile by one block at a time. There are directory pages to store information about the size of each compressed block. If any byte of the block is changed - whole block is recompressed. Wasted space is never reused. If compressed block is more then BLCSZ - unknown bad things will happen :)\n>\n> Here are some my observations.\n>\n> 0. The idea seems feasible. API of fd.c used by buffile.c can easily be abstracted for compressed temporary files. Seeks are necessary, but they are not very frequent. It's easy to make temp file compression GUC-controlled.\n>\n> 1. Temp file footprint can be easily reduced. For example query\n> create unlogged table y as select random()::text t from generate_series(0,9999999) g;\n> uses for toast index build 140000000 bytes of temp file. With patch this value is reduced to 40841704 (x3.42 smaller).\n>\n> 2. I have not found any evidence of performance improvement. I've only benchmarked patch on my laptop. And RAM (page cache) diminished any difference between writing compressed block and uncompressed block.\n>\n> How do you think: does it worth to pursue the idea? OLTP systems rarely rely on data spilled to disk.\n> Are there any known good random access compressed file libs? So we could avoid reinventing the wheel.\n> Maybe someone tried this approach before?\n\nAre you proposing to compress the temporary files being created by the\npostgres processes under $PGDATA/base/pgsql_tmp? Are there any other\ndirectories that postgres processes would write temporary files to?\n\nAre you proposing to compress the temporary files that get generated\nduring the execution of queries? 
IIUC, the temp files under the\npgsql_tmp directory get cleaned up at the end of each txn right? In\nwhat situations the temporary files under the pgsql_tmp directory\nwould remain even after the txns that created them are\ncommitted/aborted? Here's one scenario: if a backend crashes while\nexecuting a huge analytic query, I can understand that the temp files\nwould remain in pgsql_tmp and we have the commit [1] cleaning them on\nrestart. Any other scenarios that fill up the pgsql_tmp directory?\n\n[1] commit cd91de0d17952b5763466cfa663e98318f26d357\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Thu Mar 18 16:05:03 2021 +0100\n\n Remove temporary files after backend crash\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Oct 2021 19:17:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compressing temporary files" } ]
[ { "msg_contents": "Hi,\n\nBTW, this only happens when the third parameter is large. Here is an\nexample that consistently crash here:\n\nselect regexp_count('jaime.casanova', 'valid', 102481);\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sat, 11 Sep 2021 13:03:57 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "new regexp_*(text, text, int) functions crash" }, { "msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> BTW, this only happens when the third parameter is large. Here is an\n> example that consistently crash here:\n> select regexp_count('jaime.casanova', 'valid', 102481);\n\nHah ... pg_regexec has never had a check that the search start position\nis sane. Will fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Sep 2021 14:42:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: new regexp_*(text, text, int) functions crash" } ]
[ { "msg_contents": "Hi everyone,\n\nI tried an old test that at some point crashed the database... that is\nalready fixed.\n\nSo now it gives a good ERROR message:\n\n\"\"\"\npostgres=# create table t1 (col1 text, col2 text);\nCREATE TABLE\npostgres=# create unique index on t1 ((col1 || col2));\nCREATE INDEX\npostgres=# insert into t1 values((select array_agg(md5(g::text))::text from\npostgres(# generate_series(1, 256) g), version());\nERROR: index row requires 8552 bytes, maximum size is 8191\n\"\"\"\n\ngreat, so I reduced the length of the index row size:\n\n\"\"\"\npostgres=# insert into t1 values((select array_agg(md5(g::text))::text from generate_series(1, 200) g), version());\nERROR: index row size 6704 exceeds btree version 4 maximum 2704 for index \"t1_expr_idx\"\nDETAIL: Index row references tuple (0,1) in relation \"t1\".\nHINT: Values larger than 1/3 of a buffer page cannot be indexed.\nConsider a function index of an MD5 hash of the value, or use full text indexing.\n\"\"\"\n\nSo, what is it? the index row size could be up to 8191 or cannot be\ngreater than 2704?\n\nregards,\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 12 Sep 2021 00:50:41 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Confusing messages about index row size" }, { "msg_contents": "On Sunday, September 12, 2021, Jaime Casanova <jcasanov@systemguards.com.ec>\nwrote:\n>\n>\n> So, what is it? the index row size could be up to 8191 or cannot be\n> greater than 2704?\n>\n\nThe wording doesn’t change between the two: The size cannot be greater than\n8191 regardless of the index type being used. 
This check is first,\nprobably because it is cheaper, and just normal abstraction layering, but\nit doesn’t preclude individual indexes imposing their own constraint, as\nevidenced by the lower maximum of 2704 in this specific setup.\n\nIt may be non-ideal from a UX perspective to have a moving target in the\nerror messages, but they are consistent and accurate, and doesn’t seem\nworthwhile to expend much effort on usability since the errors should\nthemselves be rare.\n\nDavid J.\n", "msg_date": "Sat, 11 Sep 2021 23:03:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Confusing messages about index row size" } ]
[ { "msg_contents": "In commit aa769f80e, I back-patched the same postgres-fdw.sgml change,\nincluding $SUBJECT, to v12, but I noticed the type info on each FDW\noption is present in HEAD only. :-( Here is a patch to remove\n$SUBJECT from the back branches for consistency.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sun, 12 Sep 2021 17:44:16 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Doc: Extra type info on postgres-fdw option import_generated in back\n branches" }, { "msg_contents": "On Sun, Sep 12, 2021 at 5:44 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> In commit aa769f80e, I back-patched the same postgres-fdw.sgml change,\n> including $SUBJECT, to v12, but I noticed the type info on each FDW\n> option is present in HEAD only. :-( Here is a patch to remove\n> $SUBJECT from the back branches for consistency.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 13 Sep 2021 17:45:47 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Doc: Extra type info on postgres-fdw option import_generated in\n back branches" } ]
[ { "msg_contents": "Hello,\n\nI've created a Postgresql C/C++ Aggregate Extension implementing Private Information Retrieval (PIR) using Homomorphic Encryption. The open sourced version can be found here: https://github.com/ReverseControl/MuchPIR .\n\nIn essence, with PIR we can retrieve data from any row in a table without revealing to the server doing the search which row data was retrieved, or whether the data was found at all.\n\nI am seeking feedback from the postgres community on this extension. Is it something of interest? Is it something anyone would like to contribute to and make better? Is there similar work already publicly available? Any reference would be greatly appreciated.\n\nThank you.\n\nSent with [ProtonMail](https://protonmail.com/) Secure Email.\n", "msg_date": "Sun, 12 Sep 2021 13:02:44 +0000", "msg_from": "\"Private Information Retrieval(PIR)\" <postgresql-pir@pm.me>", "msg_from_op": true, "msg_subject": "Private Information Retrieval (PIR) as a C/C++ Aggregate Extension" }, { "msg_contents": "Hi!\n\n> 12 сент. 
2021 г., в 18:02, Private Information Retrieval(PIR) <postgresql-pir@pm.me> написал(а):\n> \n> I've created a Postgresql C/C++ Aggregate Extension implementing Private Information Retrieval (PIR) using Homomorphic Encryption. The open sourced version can be found here: https://github.com/ReverseControl/MuchPIR .\n> \n> In essence, with PIR we can retrieve data from any row in a table without revealing to the server doing the search which row data was retrieved, or whether the data was found at all. \n> \n> I am seeking feedback from the postgres community on this extension. Is it something of interest? Is it something anyone would like to contribute to and make better? Is there similar work already publicly available? Any reference would be greatly appreciated.\n\nPIR seem to be interesting functionality.\nAs far as I understand in terms of a database PIR is special kind of an aggregator, which extracts some part of data unknown to server.\n\nOne question came to my mind. Can we limit the amount of extracted data? It makes sense to protect the database from copy.\n\nAlso you may be interested in differential privacy data exploration [0,1]. This is a kind of data aggregation which protects data from deducing single row by means of aggregation. Implementation could be resemblant to MuchPIR.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://en.wikipedia.org/wiki/Differential_privacy\n[1] https://cs.uwaterloo.ca/~ilyas/papers/GeSIGMOD2019.pdf \n\n", "msg_date": "Sun, 12 Sep 2021 22:00:11 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Private Information Retrieval (PIR) as a C/C++ Aggregate\n Extension" }, { "msg_contents": "Yes, that is accurate. You can think of PIR as an aggregator.\n\nThe amount of data per query is already limited. In fact, the naive implementation of Information Theoretic PIR requires the transmission of the entire database. 
MuchPIR implementation makes use of the already optimized query/response presented in [1]. As for protection of the database per copy: anyone who already has access to your database can copy it if they so wish so. PIR's threat model revolves around keeping data query/result private even when everything beyond your private zone is untrusted. Data copy is not a concern.\n\nThere is one configuration in which the query can be reduced to about 1 MB in size. Comes at a cost somewhere else though. There is an optimization that reduces the query size by more than half, but that is not available in the demo. The query result however is fixed in size, per configuration, up to compression.\n\nYes, our particular implementation does lend itself to other uses falling under Differential Privacy. In fact, we have already worked out the technical details for several such use cases: retrieval on keyword match, or ID match, sum aggregator, and string search. The most remarkable part of string search is that searches can be done with using wildcards as well, though the returned data will be how many hits occurred. The size of the string to be searched remains very small, but we are working to improve every aspect of MuchPIR and the technology we are building on top of it.\n\n\nMucPIR Team\n\n[1] https://eprint.iacr.org/2017/1142.pdf\n\n\n\nSent with ProtonMail Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n\nOn Sunday, September 12th, 2021 at 1:00 PM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> Hi!\n>\n> > 12 сент. 2021 г., в 18:02, Private Information Retrieval(PIR) postgresql-pir@pm.me написал(а):\n> >\n> > I've created a Postgresql C/C++ Aggregate Extension implementing Private Information Retrieval (PIR) using Homomorphic Encryption. 
The open sourced version can be found here: https://github.com/ReverseControl/MuchPIR .\n> >\n> > In essence, with PIR we can retrieve data from any row in a table without revealing to the server doing the search which row data was retrieved, or whether the data was found at all.\n> >\n> > I am seeking feedback from the postgres community on this extension. Is it something of interest? Is it something anyone would like to contribute to and make better? Is there similar work already publicly available? Any reference would be greatly appreciated.\n>\n> PIR seem to be interesting functionality.\n>\n> As far as I understand in terms of a database PIR is special kind of an aggregator, which extracts some part of data unknown to server.\n>\n> One question came to my mind. Can we limit the amount of extracted data? It makes sense to protect the database from copy.\n>\n> Also you may be interested in differential privacy data exploration [0,1]. This is a kind of data aggregation which protects data from deducing single row by means of aggregation. Implementation could be resemblant to MuchPIR.\n>\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://en.wikipedia.org/wiki/Differential_privacy\n>\n> [1] https://cs.uwaterloo.ca/~ilyas/papers/GeSIGMOD2019.pdf\n\n\n", "msg_date": "Mon, 13 Sep 2021 14:45:30 +0000", "msg_from": "\"Private Information Retrieval(PIR)\" <postgresql-pir@pm.me>", "msg_from_op": true, "msg_subject": "Re: Private Information Retrieval (PIR) as a C/C++ Aggregate\n Extension" } ]
[ { "msg_contents": "Hi Tomas,\n\nJust noted that this query crash the server. Execute it in the\nregression database:\n\n\"\"\"\nupdate brintest_multi set inetcol = '192.168.204.50/0'::inet;\n\"\"\"\n\nAttached is the backtrace. Let me know if you need something else to\ntrack it.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Sun, 12 Sep 2021 19:44:47 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "brin multi minmax crash for inet value" }, { "msg_contents": "On Sun, Sep 12, 2021 at 07:44:47PM -0500, Jaime Casanova wrote:\n> Hi Tomas,\n> \n> Just noted that this query crash the server. Execute it in the\n> regression database:\n\nIf I'm not wrong, this is the crash fixed by e1fbe1181 in April.\n\nCould you check what HEAD your server is compiled from ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 12 Sep 2021 20:23:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: brin multi minmax crash for inet value" }, { "msg_contents": "On Sun, Sep 12, 2021 at 08:23:44PM -0500, Justin Pryzby wrote:\n> Could you check what HEAD your server is compiled from ?\n\nThat works on HEAD for me.\n--\nMichael", "msg_date": "Mon, 13 Sep 2021 12:00:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: brin multi minmax crash for inet value" }, { "msg_contents": "On Sun, Sep 12, 2021 at 08:23:44PM -0500, Justin Pryzby wrote:\n> On Sun, Sep 12, 2021 at 07:44:47PM -0500, Jaime Casanova wrote:\n> > Hi Tomas,\n> > \n> > Just noted that this query crash the server. 
Execute it in the\n> > regression database:\n> \n> If I'm not wrong, this is the crash fixed by e1fbe1181 in April.\n> \n> Could you check what HEAD your server is compiled from ?\n> \n\nThat was with yesterday's head but trying with today's head this\nsame update works fine.\n\nMaybe there is something else happening here, will try to investigate\ntomorrow.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 13 Sep 2021 01:19:39 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: brin multi minmax crash for inet value" }, { "msg_contents": "\nOn 9/13/21 8:19 AM, Jaime Casanova wrote:\n> On Sun, Sep 12, 2021 at 08:23:44PM -0500, Justin Pryzby wrote:\n>> On Sun, Sep 12, 2021 at 07:44:47PM -0500, Jaime Casanova wrote:\n>>> Hi Tomas,\n>>>\n>>> Just noted that this query crash the server. Execute it in the\n>>> regression database:\n>>\n>> If I'm not wrong, this is the crash fixed by e1fbe1181 in April.\n>>\n>> Could you check what HEAD your server is compiled from ?\n>>\n> \n> That was with yesterday's head but trying with today's head this\n> same update works fine.\n> \n> Maybe there is something else happening here, will try to investigate\n> tomorrow.\n> \n\nPer the backtrace the value is very close to 0\n\n delta = -1.1641532182693481e-08\n\nso I suspect this might be a rounding error when calculating the delta \nas a difference between two inet values. That's harmless in practice, \nbut it may trigger the assert.\n\nI wonder if the delta should be calculated differently. Currently we \ncalculate it \"byte by byte\" adding up the smaller differences. 
But that \nhas this rounding issue.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Sep 2021 13:43:24 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: brin multi minmax crash for inet value" } ]
[ { "msg_contents": "Hello.\n\nAs reported in [1] it seems that walsender can suffer timeout in\ncertain cases. It is not clearly confirmed, but I suspect that\nthere's the case where LogicalRepApplyLoop keeps running the innermost\nloop without receiving keepalive packet for longer than\nwal_sender_timeout (not wal_receiver_timeout). Of course that can be\nresolved by giving sufficient processing power to the subscriber if\nnot. But if that happens between servers with equal processing\npower, it is reasonable to \"fix\" this. Theoretically I think this can\nhappen with equally-powered servers if the connecting network is\nsufficiently fast, because sending reordered changes is\nsimpler and faster than applying the changes on the subscriber.\n\nI think we don't want to call GetCurrentTimestamp every iteration of\nthe innermost loop. Even if we call it every N iterations, I don't\ncome up with a proper N that fits any workload. So one possible\nsolution would be using SIGALRM. Is it worth doing? Or is there any\nother way?\n\nEven if we won't fix this, we might need to add a description about\nthis restriction in the documentation?\n\nAny thoughts?\n\n[1] https://www.postgresql.org/message-id/CAEDsCzhBtkNDLM46_fo_HirFYE2Mb3ucbZrYqG59ocWqWy7-xA%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Sep 2021 10:31:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "walsender timeout on logical replication set" }, { "msg_contents": "On Mon, Sep 13, 2021 at 7:01 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> As reported in [1] it seems that walsender can suffer timeout in\n> certain cases. 
It is not clearly confirmed, but I suspect that\n> there's the case where LogicalRepApplyLoop keeps running the innermost\n> loop without receiving keepalive packet for longer than\n> wal_sender_timeout (not wal_receiver_timeout).\n>\n\nWhy is that happening? In the previous investigation in this area [1]\nyour tests revealed that after reading a WAL page, we always send keep\nalive, so even if the transaction is large, we should send some\nkeepalive in-between.\n\nThe other thing that I am not able to understand from Abhishek's reply\n[2] is why increasing wal_sender_timeout/wal_receiver_timeout leads to\nthe removal of required WAL segments. As per my understanding, we\nshouldn't remove WAL unless we get confirmation that the subscriber\nhas processed it.\n\n[1] - https://www.postgresql.org/message-id/20210610.150016.1709823354377067679.horikyota.ntt%40gmail.com\n[2] - https://www.postgresql.org/message-id/CAEDsCzjEHLxgqa4d563CKFwSbgBvvnM91Cqfq_qoZDXCkyOsiw%40mail.gmail.com\n\nNote - I have added Abhishek to see if he has answers to any of these questions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Sep 2021 10:18:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender timeout on logical replication set" }, { "msg_contents": "Thank you very much for coming in!\n\nAt Fri, 17 Sep 2021 10:18:11 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Sep 13, 2021 at 7:01 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Hello.\n> >\n> > As reported in [1] it seems that walsender can suffer timeout in\n> > certain cases. It is not clearly confirmed, but I suspect that\n> > there's the case where LogicalRepApplyLoop keeps running the innermost\n> > loop without receiving keepalive packet for longer than\n> > wal_sender_timeout (not wal_receiver_timeout).\n> \n> Why is that happening? 
In the previous investigation in this area [1]\n> your tests revealed that after reading a WAL page, we always send keep\n> alive, so even if the transaction is large, we should send some\n> keepalive in-between.\n\nWe fixed too-many keepalives (aka keepalive-flood) in that thread, but\nthis is an issue of a long absence of subscriber response. What I'm\nsuspecting, or assuming, here is:\n\n- The publisher is working fine. It doesn't send extra keepalives so\n  much and does send regular keepalives with wal_sender_timeout/2 by\n  the sender's clock.\n\n- The network conveys all the data in time.\n\n- The subscriber consumes received data at less than half the speed at\n  which the publisher sends data. In this case, while the burst\n  traffic is coming, the publisher keeps sending for\n  wal_sender_timeout/2 seconds and it may not send a keepalive for the\n  same duration. This is the correct behavior. On the other hand, the\n  subscriber is kept busy without receiving a keepalive for\n  wal_sender_timeout seconds. AFAICS LogicalRepApplyLoop doesn't send\n  a response unless a keepalive comes while in the inner-most loop.\n\nIf wal_sender_timeout is relatively short (5 seconds, in the report),\nburst (or gap-less) logical replication traffic can easily continue\nfor more than 2.5 seconds. If wal_sender_timeout is longer (1\nmin, ditto), bursts of replication traffic lasting more than\nwal_sender_timeout/2 become relatively infrequent.\n\nHowever, I'm not sure how it makes things worse again to increase it\nfurther to 5 min.\n\nIs my diagnosis correct that while the innermost loop in LogicalRepApplyLoop\n[A] is busy, it doesn't have a chance to send a reply until a keepalive\ncomes in? If so, walsender timeout due to slowness of the\nsubscriber happens, and we might need to break the innermost loop to\ngive the subscriber a chance to send a response at appropriate\nintervals. 
This is what I wanted to propose.\n\n[A]\nbackend/replication/logical/worker.c:2565@today's master\n> \t/* Loop to process all available data (without blocking). */\n> \tfor (;;)\n\n\n\n> The other thing that I am not able to understand from Abhishek's reply\n> [2] is why increasing wal_sender_timeout/wal_receiver_timeout leads to\n> the removal of required WAL segments. As per my understanding, we\n> shouldn't remove WAL unless we get confirmation that the subscriber\n> has processed it.\n> \n> [1] - https://www.postgresql.org/message-id/20210610.150016.1709823354377067679.horikyota.ntt%40gmail.com\n> [2] - https://www.postgresql.org/message-id/CAEDsCzjEHLxgqa4d563CKFwSbgBvvnM91Cqfq_qoZDXCkyOsiw%40mail.gmail.com\n> \n> Note - I have added Abhishek to see if he has answers to any of these questions.\n\nOuch! max_slot_wal_keep_size was introduced at 13. 
But that alone cannot cause wal removal..\n\n[1] https://www.postgresql.org/message-id/flat/CAEDsCzjEHLxgqa4d563CKFwSbgBvvnM91Cqfq_qoZDXCkyOsiw%40mail.gmail.com#72da631f3af885b06669ddc1636a0a63\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Sep 2021 16:18:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: walsender timeout on logical replication set" }, { "msg_contents": "On Fri, Sep 17, 2021 at 12:48 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thank you very much for coming in!\n>\n> At Fri, 17 Sep 2021 10:18:11 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Mon, Sep 13, 2021 at 7:01 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > Hello.\n> > >\n> > > As reported in [1] it seems that walsender can suffer timeout in\n> > > certain cases. It is not clearly confirmed, but I suspect that\n> > > there's the case where LogicalRepApplyLoop keeps running the innermost\n> > > loop without receiving keepalive packet for longer than\n> > > wal_sender_timeout (not wal_receiver_timeout).\n> >\n> > Why is that happening? In the previous investigation in this area [1]\n> > your tests revealed that after reading a WAL page, we always send keep\n> > alive, so even if the transaction is large, we should send some\n> > keepalive in-between.\n>\n> We fixed too-many keepalives (aka keepalive-flood) in that thread, but\n> this is an issue of a long absence of subscriber response. What I'm\n> suspecting, or assuming, here is:\n>\n> - The publisher is working fine. 
It doesn't send extra keepalives so\n> much and does send regular keepalives with wal_sender_timeout/2 by\n> the sender's clock.\n>\n\nI think the publisher should also send a keepalive even after a certain amount\nof WAL is consumed, via the below code:\nWalSndWaitForWal()\n{\n...\nif (MyWalSnd->flush < sentPtr &&\nMyWalSnd->write < sentPtr &&\n!waiting_for_ping_response)\nWalSndKeepalive(false);\n...\n}\n\n> - The network conveys all the data in time.\n>\n> - The subscriber consumes received data at less than half the speed at\n> which the publisher sends data. In this case, while the burst\n> traffic is coming, the publisher keeps sending for\n> wal_sender_timeout/2 seconds and it may not send a keepalive for the\n> same duration. This is the correct behavior. On the other hand, the\n> subscriber is kept busy without receiving a keepalive for\n> wal_sender_timeout seconds. AFAICS LogicalRepApplyLoop doesn't send\n> a response unless a keepalive comes while in the inner-most loop.\n>\n\nOne way this could happen is that the apply is taking a long time\nbecause of contention on the subscriber, say there are a lot of other\noperations going on in the subscriber or it is stuck for some reason.\n\n> If wal_sender_timeout is relatively short (5 seconds, in the report),\n> burst (or gap-less) logical replication traffic can easily continue\n> for more than 2.5 seconds. If wal_sender_timeout is longer (1\n> min, ditto), bursts of replication traffic lasting more than\n> wal_sender_timeout/2 become relatively infrequent.\n>\n> However, I'm not sure how it makes things worse again to increase it\n> further to 5 min.\n>\n\nThere might be a possibility that the subscriber is stuck or is extremely\nslow due to other operations.\n\n> Is my diagnosis correct that while the innermost loop in LogicalRepApplyLoop\n> [A] is busy, it doesn't have a chance to send a reply until a keepalive\n> comes in? 
If so, walsender timeout due to slowness of the\n> subscriber happens, and we might need to break the innermost loop to\n> give the subscriber a chance to send a response at appropriate\n> intervals. This is what I wanted to propose.\n>\n\nI was thinking increasing wal_sender/receiver_timeout should solve\nthis problem. I am not sure why it leads to loss of WAL in the OP's\ncase.\n\n> [A]\n> backend/replication/logical/worker.c:2565@today's master\n> > /* Loop to process all available data (without blocking). */\n> > for (;;)\n>\n>\n>\n> > The other thing that I am not able to understand from Abhishek's reply\n> > [2] is why increasing wal_sender_timeout/wal_receiver_timeout leads to\n> > the removal of required WAL segments. As per my understanding, we\n> > shouldn't remove WAL unless we get confirmation that the subscriber\n> > has processed it.\n> >\n> > [1] - https://www.postgresql.org/message-id/20210610.150016.1709823354377067679.horikyota.ntt%40gmail.com\n> > [2] - https://www.postgresql.org/message-id/CAEDsCzjEHLxgqa4d563CKFwSbgBvvnM91Cqfq_qoZDXCkyOsiw%40mail.gmail.com\n> >\n> > Note - I have added Abhishek to see if he has answers to any of these questions.\n>\n> Ouch! max_slot_wal_keep_size was introduced at 13. 
So I have no idea\n> of how required segments can be removed on the publisher for now.\n>\n> == From the first report [1]\n> sourcedb=# select * from pg_replication_slots;\n> ...\n> restart_lsn 116D0/C36886F8\n> confirmed_flush_lsn 116D0/C3E5D370\n>\n> targetdb=# show wal_receiver_status_interval;\n> wal_receiver_status_interval 2s\n>\n> targetdb=# select * from pg_stat_subscription;\n> ..\n> received_lsn 116D1/2BA8F170\n> last_msg_send_time 2021-08-20 09:05:15.398423+09\n> last_msg_receipt_time 2021-08-20 09:05:15.398471+09\n> latest_end_lsn 116D1/2BA8F170\n> latest_end_time 2021-08-20 09:05:15.398423+09\n> ==\n>\n> There is a gap with about 105 segments (1.7GB) between how far the\n> subscriber advanced and the publisher's idea of how far the subscriber\n> advanced. But that alone cannot cause wal removal..\n>\n\nYeah, that is quite strange.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Sep 2021 15:46:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: walsender timeout on logical replication set" } ]
[ { "msg_contents": "Adding -hackers, sorry for the duplicate.\n\nThis seems to be deficient, citing\nhttps://www.postgresql.org/message-id/flat/0d1b394b-bec9-8a71-a336-44df7078b295%40gmail.com\n\nI'm proposing something like the attached. Ideally, there would be a central\nplace to put details, and the other places could refer to that.\n\nSince the autoanalyze patch was reverted, this should be easily applied to\nbackbranches, which is probably most of its value.\n\ncommit 4ad2c8f6fd8eb26d76b226e68d3fdb8f0658f113\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Jul 22 16:06:18 2021 -0500\n\n documentation deficiencies for ANALYZE of partitioned tables\n \n This is partially extracted from 1b5617eb844cd2470a334c1d2eec66cf9b39c41a,\n which was reverted.\n\ndiff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\nindex 36f975b1e5..decfabff5d 100644\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n@@ -290,6 +290,14 @@\n to meaningful statistical changes.\n </para>\n \n+ <para>\n+ Tuples changed in partitions and inheritence children do not count towards\n+ analyze on the parent table. If the parent table is empty or rarely\n+ changed, it may never be processed by autovacuum. It is necessary to\n+ periodically run an manual <command>ANALYZE</command> to keep the statistics\n+ of the table hierarchy up to date.\n+ </para>\n+\n <para>\n As with vacuuming for space recovery, frequent updates of statistics\n are more useful for heavily-updated tables than for seldom-updated\n@@ -347,6 +355,18 @@\n <command>ANALYZE</command> commands on those tables on a suitable schedule.\n </para>\n </tip>\n+\n+ <tip>\n+ <para>\n+ The autovacuum daemon does not issue <command>ANALYZE</command> commands for\n+ partitioned tables. Inheritence parents will only be analyzed if the\n+ parent is changed - changes to child tables do not trigger autoanalyze on\n+ the parent table. 
It is necessary to periodically run an manual\n+ <command>ANALYZE</command> to keep the statistics of the table hierarchy up to\n+ date.\n+ </para>\n+ </tip>\n+\n </sect2>\n \n <sect2 id=\"vacuum-for-visibility-map\">\n@@ -817,6 +837,18 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu\n </programlisting>\n is compared to the total number of tuples inserted, updated, or deleted\n since the last <command>ANALYZE</command>.\n+\n+ Partitioned tables are not processed by autovacuum, and their statistics\n+ should be updated by manually running <command>ANALYZE</command> when the\n+ table is first populated, and whenever the distribution of data in its\n+ partitions changes significantly.\n+ </para>\n+\n+ <para>\n+ Partitioned tables are not processed by autovacuum. Statistics\n+ should be collected by running a manual <command>ANALYZE</command> when it is\n+ first populated, and updated whenever the distribution of data in its\n+ partitions changes significantly.\n </para>\n \n <para>\ndiff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\nindex 89ff58338e..b84853fd6f 100644\n--- a/doc/src/sgml/perform.sgml\n+++ b/doc/src/sgml/perform.sgml\n@@ -1765,9 +1765,11 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;\n <title>Run <command>ANALYZE</command> Afterwards</title>\n \n <para>\n+\n Whenever you have significantly altered the distribution of data\n within a table, running <link linkend=\"sql-analyze\"><command>ANALYZE</command></link> is strongly recommended. This\n includes bulk loading large amounts of data into the table. Running\n+\n <command>ANALYZE</command> (or <command>VACUUM ANALYZE</command>)\n ensures that the planner has up-to-date statistics about the\n table. 
With no statistics or obsolete statistics, the planner might\ndiff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml\nindex c423aeeea5..20ffbc2d7a 100644\n--- a/doc/src/sgml/ref/analyze.sgml\n+++ b/doc/src/sgml/ref/analyze.sgml\n@@ -250,22 +250,33 @@ ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replacea\n </para>\n \n <para>\n- If the table being analyzed has one or more children,\n- <command>ANALYZE</command> will gather statistics twice: once on the\n- rows of the parent table only, and a second time on the rows of the\n- parent table with all of its children. This second set of statistics\n- is needed when planning queries that traverse the entire inheritance\n- tree. The autovacuum daemon, however, will only consider inserts or\n- updates on the parent table itself when deciding whether to trigger an\n- automatic analyze for that table. If that table is rarely inserted into\n- or updated, the inheritance statistics will not be up to date unless you\n- run <command>ANALYZE</command> manually.\n+ If the table being analyzed is partitioned, <command>ANALYZE</command>\n+ will gather statistics by sampling blocks randomly from its partitions;\n+ in addition, it will recurse into each partition and update its statistics.\n+ (However, in multi-level partitioning scenarios, each leaf partition\n+ will only be analyzed once.)\n+ By constrast, if the table being analyzed has inheritance children,\n+ <command>ANALYZE</command> will gather statistics for it twice:\n+ once on the rows of the parent table only, and a second time on the\n+ rows of the parent table with all of its children. This second set of\n+ statistics is needed when planning queries that traverse the entire\n+ inheritance tree. 
The child tables themselves are not individually\n+ analyzed in this case.\n </para>\n \n <para>\n- If any of the child tables are foreign tables whose foreign data wrappers\n- do not support <command>ANALYZE</command>, those child tables are ignored while\n- gathering inheritance statistics.\n+ The autovacuum daemon does not process partitioned tables or inheritence\n+ parents. It is usually necessary to periodically run a manual\n+ <command>ANALYZE</command> to keep the statistics of the table hierarchy\n+ up to date (except for nonempty inheritence parents which undergo\n+ modifications of their own table data).\n+ See...\n+ </para>\n+\n+ <para>\n+ If any of the child tables or partitions are foreign tables whose foreign\n+ data wrappers do not support <command>ANALYZE</command>, those tables are\n+ ignored while gathering inheritance statistics.\n </para>\n \n <para>\n\n\n", "msg_date": "Sun, 12 Sep 2021 22:54:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "document the need to analyze partitioned tables" }, { "msg_contents": "On Sun, Sep 12, 2021 at 8:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Adding -hackers, sorry for the duplicate.\n>\n> This seems to be deficient, citing\n>\n> https://www.postgresql.org/message-id/flat/0d1b394b-bec9-8a71-a336-44df7078b295%40gmail.com\n>\n> I'm proposing something like the attached. 
Ideally, there would be a\n> central\n> place to put details, and the other places could refer to that.\n>\n> Since the autoanalyze patch was reverted, this should be easily applied to\n> backbranches, which is probably most of its value.\n>\n> commit 4ad2c8f6fd8eb26d76b226e68d3fdb8f0658f113\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu Jul 22 16:06:18 2021 -0500\n>\n> documentation deficiencies for ANALYZE of partitioned tables\n>\n> This is partially extracted from\n> 1b5617eb844cd2470a334c1d2eec66cf9b39c41a,\n> which was reverted.\n>\n> diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> index 36f975b1e5..decfabff5d 100644\n> --- a/doc/src/sgml/maintenance.sgml\n> +++ b/doc/src/sgml/maintenance.sgml\n> @@ -290,6 +290,14 @@\n> to meaningful statistical changes.\n> </para>\n>\n> + <para>\n> + Tuples changed in partitions and inheritence children do not count\n> towards\n> + analyze on the parent table. If the parent table is empty or rarely\n> + changed, it may never be processed by autovacuum. It is necessary to\n> + periodically run an manual <command>ANALYZE</command> to keep the\n> statistics\n> + of the table hierarchy up to date.\n> + </para>\n> +\n> <para>\n> As with vacuuming for space recovery, frequent updates of statistics\n> are more useful for heavily-updated tables than for seldom-updated\n> @@ -347,6 +355,18 @@\n> <command>ANALYZE</command> commands on those tables on a suitable\n> schedule.\n> </para>\n> </tip>\n> +\n> + <tip>\n> + <para>\n> + The autovacuum daemon does not issue <command>ANALYZE</command>\n> commands for\n> + partitioned tables. Inheritence parents will only be analyzed if the\n> + parent is changed - changes to child tables do not trigger\n> autoanalyze on\n> + the parent table. 
It is necessary to periodically run an manual\n> + <command>ANALYZE</command> to keep the statistics of the table\n> hierarchy up to\n> + date.\n> + </para>\n> + </tip>\n> +\n> </sect2>\n>\n> <sect2 id=\"vacuum-for-visibility-map\">\n> @@ -817,6 +837,18 @@ analyze threshold = analyze base threshold + analyze\n> scale factor * number of tu\n> </programlisting>\n> is compared to the total number of tuples inserted, updated, or\n> deleted\n> since the last <command>ANALYZE</command>.\n> +\n> + Partitioned tables are not processed by autovacuum, and their\n> statistics\n> + should be updated by manually running <command>ANALYZE</command> when\n> the\n> + table is first populated, and whenever the distribution of data in its\n> + partitions changes significantly.\n> + </para>\n> +\n> + <para>\n> + Partitioned tables are not processed by autovacuum. Statistics\n> + should be collected by running a manual <command>ANALYZE</command>\n> when it is\n> + first populated, and updated whenever the distribution of data in its\n> + partitions changes significantly.\n> </para>\n>\n> <para>\n> diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\n> index 89ff58338e..b84853fd6f 100644\n> --- a/doc/src/sgml/perform.sgml\n> +++ b/doc/src/sgml/perform.sgml\n> @@ -1765,9 +1765,11 @@ SELECT * FROM x, y, a, b, c WHERE something AND\n> somethingelse;\n> <title>Run <command>ANALYZE</command> Afterwards</title>\n>\n> <para>\n> +\n> Whenever you have significantly altered the distribution of data\n> within a table, running <link\n> linkend=\"sql-analyze\"><command>ANALYZE</command></link> is strongly\n> recommended. This\n> includes bulk loading large amounts of data into the table. Running\n> +\n> <command>ANALYZE</command> (or <command>VACUUM ANALYZE</command>)\n> ensures that the planner has up-to-date statistics about the\n> table. 
With no statistics or obsolete statistics, the planner might\n> diff --git a/doc/src/sgml/ref/analyze.sgml b/doc/src/sgml/ref/analyze.sgml\n> index c423aeeea5..20ffbc2d7a 100644\n> --- a/doc/src/sgml/ref/analyze.sgml\n> +++ b/doc/src/sgml/ref/analyze.sgml\n> @@ -250,22 +250,33 @@ ANALYZE [ VERBOSE ] [ <replaceable\n> class=\"parameter\">table_and_columns</replacea\n> </para>\n>\n> <para>\n> - If the table being analyzed has one or more children,\n> - <command>ANALYZE</command> will gather statistics twice: once on the\n> - rows of the parent table only, and a second time on the rows of the\n> - parent table with all of its children. This second set of statistics\n> - is needed when planning queries that traverse the entire inheritance\n> - tree. The autovacuum daemon, however, will only consider inserts or\n> - updates on the parent table itself when deciding whether to trigger an\n> - automatic analyze for that table. If that table is rarely inserted\n> into\n> - or updated, the inheritance statistics will not be up to date unless\n> you\n> - run <command>ANALYZE</command> manually.\n> + If the table being analyzed is partitioned, <command>ANALYZE</command>\n> + will gather statistics by sampling blocks randomly from its\n> partitions;\n> + in addition, it will recurse into each partition and update its\n> statistics.\n> + (However, in multi-level partitioning scenarios, each leaf partition\n> + will only be analyzed once.)\n> + By constrast, if the table being analyzed has inheritance children,\n> + <command>ANALYZE</command> will gather statistics for it twice:\n> + once on the rows of the parent table only, and a second time on the\n> + rows of the parent table with all of its children. This second set of\n> + statistics is needed when planning queries that traverse the entire\n> + inheritance tree. 
The child tables themselves are not individually\n> +    analyzed in this case.\n>    </para>\n>\n>    <para>\n> -    If any of the child tables are foreign tables whose foreign data\n> wrappers\n> -    do not support <command>ANALYZE</command>, those child tables are\n> ignored while\n> -    gathering inheritance statistics.\n> +    The autovacuum daemon does not process partitioned tables or\n> inheritence\n> +    parents.  It is usually necessary to periodically run a manual\n> +    <command>ANALYZE</command> to keep the statistics of the table\n> hierarchy\n> +    up to date (except for nonempty inheritence parents which undergo\n> +    modifications of their own table data).\n> +    See...\n> +  </para>\n> +\n> +  <para>\n> +    If any of the child tables or partitions are foreign tables whose\n> foreign\n> +    data wrappers do not support <command>ANALYZE</command>, those tables\n> are\n> +    ignored while gathering inheritance statistics.\n>    </para>\n>\n>    <para>\n>\n>\nHi,\nMinor comment:\n\nperiodically run an manual -> periodically run a manual\n\nCheers", "msg_date": "Sun, 12 Sep 2021 21:38:50 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "Cleaned up and attached as a .patch.\n\nThe patch implementing autoanalyze on partitioned tables should revert relevant\nportions of this patch.", "msg_date": "Fri, 8 Oct 2021 07:58:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "Hi,\n\nOn 10/8/21 14:58, Justin Pryzby wrote:\n> Cleaned up and attached as a .patch.\n> \n> The patch implementing autoanalyze on partitioned tables should\n> revert relevant portions of this patch.\n\nI went through this patch and I'd like to propose a couple changes, per \nthe 0002 patch:\n\n1) I've reworded the changes in maintenance.sgml a bit. 
It sounded a bit \nstrange before, but I'm not a native speaker so maybe it's worse ...\n\n2) Remove unnecessary whitespace changes in perform.sgml.\n\n3) Simplify the analyze.sgml changes a bit - it was trying to cram too \nmuch stuff into a single paragraph, so I split that.\n\nDoes that seem OK, or did I omit something important?\n\nFWIW I think it's really confusing we have inheritance and partitioning, \nand partitions and child tables. And sometimes we use partitioning in \nthe generic sense (i.e. including the inheritance approach), and \nsometimes only the declarative variant. Same for partitions vs child \ntables. I can't even imagine how confusing this has to be for people \njust learning this stuff. They must be in permanent WTF?! state ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 21 Jan 2022 18:21:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "Thanks for looking at this\n\nOn Fri, Jan 21, 2022 at 06:21:57PM +0100, Tomas Vondra wrote:\n> Hi,\n> \n> On 10/8/21 14:58, Justin Pryzby wrote:\n> > Cleaned up and attached as a .patch.\n> > \n> > The patch implementing autoanalyze on partitioned tables should\n> > revert relevant portions of this patch.\n> \n> I went through this patch and I'd like to propose a couple changes, per the\n> 0002 patch:\n> \n> 1) I've reworded the changes in maintenance.sgml a bit. It sounded a bit\n> strange before, but I'm not a native speaker so maybe it's worse ...\n\n+ autoanalyze on the parent table. 
If your queries require statistics on \n+ parent relations for proper planning, it's necessary to periodically run \n\nYou added two references to \"relations\", but everything else talks about\n\"tables\", which is all that analyze processes.\n\n> 2) Remove unnecessary whitespace changes in perform.sgml.\n\nThose were a note to myself and to any reviewer - should that be updated too ?\n\n> 3) Simplify the analyze.sgml changes a bit - it was trying to cram too much\n> stuff into a single paragraph, so I split that.\n> \n> Does that seem OK, or did I omit something important?\n\n+ If the table being analyzed has one or more children,\n\nI think you're referring to both legacy inheritance and partitioning. That\nshould be more clear.\n\n+ <command>ANALYZE</command> gathers two sets of statistics: once on the rows\n+ of the parent table only, and a second one including rows of both the parent\n+ table and all child relations. This second set of statistics is needed when\n\nI think it should say \".. and all of its children\".\n\n> FWIW I think it's really confusing we have inheritance and partitioning, and\n> partitions and child tables. And sometimes we use partitioning in the\n> generic sense (i.e. including the inheritance approach), and sometimes only\n> the declarative variant. Same for partitions vs child tables. I can't even\n> imagine how confusing this has to be for people just learning this stuff.\n> They must be in permanent WTF?! state ...\n\nThe docs were cleaned up some in 0c06534bd. 
At least the word \"partitioned\"\nshould never be used for legacy inheritance - but \"partitioning\" is.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 21 Jan 2022 12:02:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On 1/21/22 19:02, Justin Pryzby wrote:\n> Thanks for looking at this\n> \n> On Fri, Jan 21, 2022 at 06:21:57PM +0100, Tomas Vondra wrote:\n>> Hi,\n>>\n>> On 10/8/21 14:58, Justin Pryzby wrote:\n>>> Cleaned up and attached as a .patch.\n>>>\n>>> The patch implementing autoanalyze on partitioned tables should\n>>> revert relevant portions of this patch.\n>>\n>> I went through this patch and I'd like to propose a couple changes, per the\n>> 0002 patch:\n>>\n>> 1) I've reworded the changes in maintenance.sgml a bit. It sounded a bit\n>> strange before, but I'm not a native speaker so maybe it's worse ...\n> \n> + autoanalyze on the parent table. If your queries require statistics on\n> + parent relations for proper planning, it's necessary to periodically run\n> \n> You added two references to \"relations\", but everything else talks about\n> \"tables\", which is all that analyze processes.\n> \n\nGood point, that should use \"tables\" too.\n\n>> 2) Remove unnecessary whitespace changes in perform.sgml.\n> \n> Those were a note to myself and to any reviewer - should that be updated too ?\n> \n\nAh, I see. I don't think that part needs updating - it talks about \nhaving to analyze after a bulk load, and that applies to all tables \nanyway. 
I don't think it needs to mention partitioned tables need an \nanalyze too.\n\n>> 3) Simplify the analyze.sgml changes a bit - it was trying to cram too much\n>> stuff into a single paragraph, so I split that.\n>>\n>> Does that seem OK, or did I omit something important?\n> \n> + If the table being analyzed has one or more children,\n> \n> I think you're referring to both legacy inheritance and partitioning. That\n> should be more clear.\n> \n\nI think it applies to both types of partitioning - it's just that in the \ndeclarative partitioning case the table is always empty so no stats with \ninherit=false are built.\n\n> + <command>ANALYZE</command> gathers two sets of statistics: once on the rows\n> + of the parent table only, and a second one including rows of both the parent\n> + table and all child relations. This second set of statistics is needed when\n> \n> I think it should say \".. and all of its children\".\n> \n\nOK\n\n>> FWIW I think it's really confusing we have inheritance and partitioning, and\n>> partitions and child tables. And sometimes we use partitioning in the\n>> generic sense (i.e. including the inheritance approach), and sometimes only\n>> the declarative variant. Same for partitions vs child tables. I can't even\n>> imagine how confusing this has to be for people just learning this stuff.\n>> They must be in permanent WTF?! state ...\n> \n> The docs were cleaned up some in 0c06534bd. 
At least the word \"partitioned\"\n> should never be used for legacy inheritance - but \"partitioning\" is.\n> \n\nOK\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 21 Jan 2022 19:31:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, Jan 21, 2022 at 1:31 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> [ new patch ]\n\nThis patch is originally by Justin. The latest version is by Tomas. I\nthink the next step is for Justin to say whether he's OK with the\nlatest version that Tomas posted. If he is, then I suggest that he\nalso mark it Ready for Committer, and that Tomas commit it. If he's\nnot, he should say what he wants changed and either post a new version\nhimself or wait for Tomas to do that.\n\nI think the fact that this is classified as a \"Bug Fix\" in the CommitFest\napplication is not particularly good. I would prefer to see it\nclassified under \"Documentation\". I'm prepared to concede that\ndocumentation can have bugs as a general matter, but nobody's data is\ngetting eaten because the documentation wasn't updated. In fact, this\nis the fourth patch from the \"bug fix\" section I've studied this\nafternoon, and, well, none of them have been back-patchable code\ndefects.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 17:23:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Mon, Mar 14, 2022 at 05:23:54PM -0400, Robert Haas wrote:\n> On Fri, Jan 21, 2022 at 1:31 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > [ new patch ]\n> \n> This patch is originally by Justin. The latest version is by Tomas. 
I\n> think the next step is for Justin to say whether he's OK with the\n> latest version that Tomas posted. If he is, then I suggest that he\n> also mark it Ready for Committer, and that Tomas commit it. If he's\n> not, he should say what he wants changed and either post a new version\n> himself or wait for Tomas to do that.\n\nYes, I think it can be Ready. Done.\n\nI amended some of Tomas' changes (see 0003, attached as txt).\n\n@cfbot: the *.patch file is for your consumption, and the others are only there\nto show my changes.\n\n> I think the fact that is classified as a \"Bug Fix\" in the CommitFest\n> application is not particularly good. I would prefer to see it\n> classified under \"Documentation\". I'm prepared to concede that\n> documentation can have bugs as a general matter, but nobody's data is\n> getting eaten because the documentation wasn't updated. In fact, this\n> is the fourth patch from the \"bug fix\" section I've studied this\n> afternoon, and, well, none of them have been back-patchable code\n> defects.\n\nIn fact, I consider this to be back-patchable back to v10. IMO it's an\nomission that this isn't documented. Not all bugs cause data to be eaten. If\nsomeone reads the existing documentation, they might conclude that their\npartitioned tables don't need to be analyzed, and they would've been better\nserved by not reading the docs.\n\n-- \nJustin", "msg_date": "Tue, 15 Mar 2022 18:00:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On 3/16/22 00:00, Justin Pryzby wrote:\n> On Mon, Mar 14, 2022 at 05:23:54PM -0400, Robert Haas wrote:\n>> On Fri, Jan 21, 2022 at 1:31 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>> [ new patch ]\n>>\n>> This patch is originally by Justin. The latest version is by Tomas. 
I\n>> think the next step is for Justin to say whether he's OK with the\n>> latest version that Tomas posted. If he is, then I suggest that he\n>> also mark it Ready for Committer, and that Tomas commit it. If he's\n>> not, he should say what he wants changed and either post a new version\n>> himself or wait for Tomas to do that.\n> \n> Yes, I think it can be Ready. Done.\n> \n> I amended some of Tomas' changes (see 0003, attached as txt).\n> \n> @cfbot: the *.patch file is for your consumption, and the others are only there\n> to show my changes.\n> \n>> I think the fact that is classified as a \"Bug Fix\" in the CommitFest\n>> application is not particularly good. I would prefer to see it\n>> classified under \"Documentation\". I'm prepared to concede that\n>> documentation can have bugs as a general matter, but nobody's data is\n>> getting eaten because the documentation wasn't updated. In fact, this\n>> is the fourth patch from the \"bug fix\" section I've studied this\n>> afternoon, and, well, none of them have been back-patchable code\n>> defects.\n> \n> In fact, I consider this to be back-patchable back to v10. IMO it's an\n> omission that this isn't documented. Not all bugs cause data to be eaten. 
If\n> someone reads the existing documentation, they might conclude that their\n> partitioned tables don't need to be analyzed, and they would've been better\n> served by not reading the docs.\n> \n\nI've pushed the last version, and backpatched it to 10 (not sure I'd\ncall it a bugfix, but I certainly agree with Justin it's worth\nmentioning in the docs, even on older branches).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:05:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "> On 28 Mar 2022, at 15:05, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> I've pushed the last version, and backpatched it to 10 (not sure I'd\n> call it a bugfix, but I certainly agree with Justin it's worth\n> mentioning in the docs, even on older branches).\n\nI happened to spot a small typo in this commit in the ANALYZE docs, and have\njust pushed a fix all the way down to 10 as per the original commit.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 12:17:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Mon, 2022-03-28 at 15:05 +0200, Tomas Vondra wrote:\n> I've pushed the last version, and backpatched it to 10 (not sure I'd\n> call it a bugfix, but I certainly agree with Justin it's worth\n> mentioning in the docs, even on older branches).\n\nI'd like to suggest an improvement to this. The current wording could\nbe read to mean that dead tuples won't get cleaned up in partitioned tables.\n\n\nBy the way, where are the statistics of a partitioned table used? 
The actual\ntables scanned are always the partitions, and in the execution plans that\nI have seen, the optimizer always used the statistics of the partitions.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 05 Oct 2022 10:37:01 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On 10/5/22 13:37, Laurenz Albe wrote:\n> On Mon, 2022-03-28 at 15:05 +0200, Tomas Vondra wrote:\n>> I've pushed the last version, and backpatched it to 10 (not sure I'd\n>> call it a bugfix, but I certainly agree with Justin it's worth\n>> mentioning in the docs, even on older branches).\n> \n> I'd like to suggest an improvement to this. The current wording could\n> be read to mean that dead tuples won't get cleaned up in partitioned tables.\n> \n> \n> By the way, where are the statistics of a partitioned table used? The actual\n> tables scanned are always the partitions, and in the execution plans that\n> I have seen, the optimizer always used the statistics of the partitions.\n\nFor example, they are used to estimate the selectivity of a join clause:\n\nCREATE TABLE test (id integer, val integer) PARTITION BY hash (id);\nCREATE TABLE test_0 PARTITION OF test\n FOR VALUES WITH (modulus 2, remainder 0);\nCREATE TABLE test_1 PARTITION OF test\n FOR VALUES WITH (modulus 2, remainder 1);\n\nINSERT INTO test (SELECT q, q FROM generate_series(1,10) AS q);\nVACUUM ANALYZE test;\nINSERT INTO test (SELECT q, q%2 FROM generate_series(11,200) AS q);\nVACUUM ANALYZE test_0,test_1;\n\nEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF)\nSELECT * FROM test t1, test t2 WHERE t1.id = t2.val;\nVACUUM ANALYZE test;\nEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF)\nSELECT * FROM test t1, test t2 WHERE t1.id = t2.val;\n\nHere, without actual statistics on the parent table, we make a wrong prediction.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 6 Oct 2022 11:02:07 +0500", "msg_from": 
"Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, Oct 05, 2022 at 10:37:01AM +0200, Laurenz Albe wrote:\n> On Mon, 2022-03-28 at 15:05 +0200, Tomas Vondra wrote:\n>> I've pushed the last version, and backpatched it to 10 (not sure I'd\n>> call it a bugfix, but I certainly agree with Justin it's worth\n>> mentioning in the docs, even on older branches).\n> \n> I'd like to suggest an improvement to this. The current wording could\n> be read to mean that dead tuples won't get cleaned up in partitioned tables.\n\nWell, dead tuples won't get cleaned up in partitioned tables, as\npartitioned tables do not have storage. But I see what you mean. Readers\nmight misinterpret this to mean that autovacuum will not process the\npartitions. There's a good definition of what the docs mean by\n\"partitioned table\" [0], but FWIW it took me some time before I\nconsistently read \"partitioned table\" to mean \"only the thing with relkind\nset to 'p'\" and not \"both the partitioned table and its partitions.\" So,\nwhile the current wording it technically correct, I think it'd be\nreasonable to expand it to help avoid confusion.\n\nHere is my take on the wording:\n\n\tSince all the data for a partitioned table is stored in its partitions,\n\tautovacuum does not process partitioned tables. Instead, autovacuum\n\tprocesses the individual partitions that are regular tables. This\n\tmeans that autovacuum only gathers statistics for the regular tables\n\tthat serve as partitions and not for the partitioned tables. 
Since\n\tqueries may rely on a partitioned table's statistics, you should\n\tcollect statistics via the ANALYZE command when it is first populated,\n\tand again whenever the distribution of data in its partitions changes\n\tsignificantly.\n\n[0] https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 15:27:47 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Thu, Jan 12, 2023 at 03:27:47PM -0800, Nathan Bossart wrote:\n> On Wed, Oct 05, 2022 at 10:37:01AM +0200, Laurenz Albe wrote:\n> > On Mon, 2022-03-28 at 15:05 +0200, Tomas Vondra wrote:\n> >> I've pushed the last version, and backpatched it to 10 (not sure I'd\n> >> call it a bugfix, but I certainly agree with Justin it's worth\n> >> mentioning in the docs, even on older branches).\n> > \n> > I'd like to suggest an improvement to this. The current wording could\n> > be read to mean that dead tuples won't get cleaned up in partitioned tables.\n> \n> Well, dead tuples won't get cleaned up in partitioned tables, as\n> partitioned tables do not have storage. But I see what you mean. Readers\n> might misinterpret this to mean that autovacuum will not process the\n> partitions. There's a good definition of what the docs mean by\n> \"partitioned table\" [0], but FWIW it took me some time before I\n> consistently read \"partitioned table\" to mean \"only the thing with relkind\n> set to 'p'\" and not \"both the partitioned table and its partitions.\" So,\n> while the current wording it technically correct, I think it'd be\n> reasonable to expand it to help avoid confusion.\n> \n> Here is my take on the wording:\n> \n> \tSince all the data for a partitioned table is stored in its partitions,\n> \tautovacuum does not process partitioned tables. 
Instead, autovacuum\n> \tprocesses the individual partitions that are regular tables. This\n> \tmeans that autovacuum only gathers statistics for the regular tables\n> \tthat serve as partitions and not for the partitioned tables. Since\n> \tqueries may rely on a partitioned table's statistics, you should\n> \tcollect statistics via the ANALYZE command when it is first populated,\n> \tand again whenever the distribution of data in its partitions changes\n> \tsignificantly.\n\nUh, what about autovacuum's handling of partitioned tables? This makes\nit sound like it ignores them because it talks about manual ANALYZE.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 17 Jan 2023 15:53:24 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Tue, Jan 17, 2023 at 03:53:24PM -0500, Bruce Momjian wrote:\n> On Thu, Jan 12, 2023 at 03:27:47PM -0800, Nathan Bossart wrote:\n> > On Wed, Oct 05, 2022 at 10:37:01AM +0200, Laurenz Albe wrote:\n> > > On Mon, 2022-03-28 at 15:05 +0200, Tomas Vondra wrote:\n> > >> I've pushed the last version, and backpatched it to 10 (not sure I'd\n> > >> call it a bugfix, but I certainly agree with Justin it's worth\n> > >> mentioning in the docs, even on older branches).\n> > > \n> > > I'd like to suggest an improvement to this. The current wording could\n> > > be read to mean that dead tuples won't get cleaned up in partitioned tables.\n> > \n> > Well, dead tuples won't get cleaned up in partitioned tables, as\n> > partitioned tables do not have storage. But I see what you mean. Readers\n> > might misinterpret this to mean that autovacuum will not process the\n> > partitions. 
There's a good definition of what the docs mean by\n> > \"partitioned table\" [0], but FWIW it took me some time before I\n> > consistently read \"partitioned table\" to mean \"only the thing with relkind\n> > set to 'p'\" and not \"both the partitioned table and its partitions.\" So,\n> > while the current wording is technically correct, I think it'd be\n> > reasonable to expand it to help avoid confusion.\n> > \n> > Here is my take on the wording:\n> > \n> > \tSince all the data for a partitioned table is stored in its partitions,\n> > \tautovacuum does not process partitioned tables. 
the clarification the docs need is to say:\n\"Partitioned tables are not *themselves* processed by autovacuum.\"\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 17 Jan 2023 15:00:50 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Tue, Jan 17, 2023 at 03:00:50PM -0600, Justin Pryzby wrote:\n> On Tue, Jan 17, 2023 at 03:53:24PM -0500, Bruce Momjian wrote:\n> > On Thu, Jan 12, 2023 at 03:27:47PM -0800, Nathan Bossart wrote:\n> > > Here is my take on the wording:\n> > > \n> > > \tSince all the data for a partitioned table is stored in its partitions,\n> > > \tautovacuum does not process partitioned tables. Instead, autovacuum\n> > > \tprocesses the individual partitions that are regular tables. This\n> > > \tmeans that autovacuum only gathers statistics for the regular tables\n> > > \tthat serve as partitions and not for the partitioned tables. Since\n> > > \tqueries may rely on a partitioned table's statistics, you should\n> > > \tcollect statistics via the ANALYZE command when it is first populated,\n> > > \tand again whenever the distribution of data in its partitions changes\n> > > \tsignificantly.\n> > \n> > Uh, what about autovacuum's handling of partitioned tables? This makes\n> > it sound like it ignores them because it talks about manual ANALYZE.\n> \n> If we're referring to the *partitioned* table, then it does ignore them.\n> See:\n> \n> |commit 6f8127b7390119c21479f5ce495b7d2168930e82\n> |Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> |Date: Mon Aug 16 17:27:52 2021 -0400\n> |\n> | Revert analyze support for partitioned tables\n\nYes, I see that patch was trying to combine the statistics of individual\npartitions into a partitioned table summary.\n\n> Maybe (all?) 
the clarification the docs need is to say:\n> \"Partitioned tables are not *themselves* processed by autovacuum.\"\n\nYes, I think the lack of autovacuum needs to be specifically mentioned\nsince most people assume autovacuum handles _all_ statistics updating.\n\nCan someone summarize how bad it is we have no statistics on partitioned\ntables? It sounds bad to me. \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Tue, 17 Jan 2023 16:16:20 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Tue, 2023-01-17 at 16:16 -0500, Bruce Momjian wrote:\n> On Tue, Jan 17, 2023 at 03:00:50PM -0600, Justin Pryzby wrote:\n> > Maybe (all?) the clarification the docs need is to say:\n> > \"Partitioned tables are not *themselves* processed by autovacuum.\"\n> \n> Yes, I think the lack of autovacuum needs to be specifically mentioned\n> since most people assume autovacuum handles _all_ statistics updating.\n> \n> Can someone summarize how bad it is we have no statistics on partitioned\n> tables?  It sounds bad to me.\n\nAndrey Lepikhov had an example earlier in this thread[1]. It doesn't take\nan exotic query. \n\nAttached is a new version of my patch that tries to improve the wording.\n\nYours,\nLaurenz Albe\n\n [1]: https://postgr.es/m/3df5c68b-13aa-53d0-c0ec-ed98e6972e2e%40postgrespro.ru", "msg_date": "Wed, 18 Jan 2023 10:15:18 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, Jan 18, 2023 at 10:15:18AM +0100, Laurenz Albe wrote:\n> On Tue, 2023-01-17 at 16:16 -0500, Bruce Momjian wrote:\n> > On Tue, Jan 17, 2023 at 03:00:50PM -0600, Justin Pryzby wrote:\n> > > Maybe (all?) 
the clarification the docs need is to say:\n> > > \"Partitioned tables are not *themselves* processed by autovacuum.\"\n> > \n> > Yes, I think the lack of autovacuum needs to be specifically mentioned\n> > since most people assume autovacuum handles _all_ statistics updating.\n\nThat's what 61fa6ca79 aimed to do. Laurenz is suggesting further\nclarification.\n\n> > Can someone summarize how bad it is we have no statistics on partitioned\n> > tables?� It sounds bad to me.\n> \n> Andrey Lepikhov had an example earlier in this thread[1]. It doesn't take\n> an exotic query. \n> \n> Attached is a new version of my patch that tries to improve the wording.\n\nI tweaked this a bit to end up with:\n\n> - Partitioned tables are not processed by autovacuum. Statistics\n> - should be collected by running a manual <command>ANALYZE</command> when it is\n> + The leaf partitions of a partitioned table are normal tables and are processed\n> + by autovacuum; however, autovacuum does not process the partitioned table itself.\n> + This is no problem as far as <command>VACUUM</command> is concerned, since\n> + there's no need to vacuum the empty, partitioned table. But, as mentioned in\n> + <xref linkend=\"vacuum-for-statistics\"/>, it also means that autovacuum won't\n> + run <command>ANALYZE</command> on the partitioned table.\n> + Although statistics are automatically gathered on its leaf partitions, some queries also need\n> + statistics on the partitioned table to run optimally. 
You should collect statistics by\n> +    running a manual <command>ANALYZE</command> when the partitioned table is\n> first populated, and again whenever the distribution of data in its\n> partitions changes significantly.\n> </para>\n\n\"partitions are normal tables\" was technically wrong, as partitions can\nalso be partitioned.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 18 Jan 2023 11:49:19 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, Jan 18, 2023 at 11:49:19AM -0600, Justin Pryzby wrote:\n> On Wed, Jan 18, 2023 at 10:15:18AM +0100, Laurenz Albe wrote:\n> > On Tue, 2023-01-17 at 16:16 -0500, Bruce Momjian wrote:\n> > > On Tue, Jan 17, 2023 at 03:00:50PM -0600, Justin Pryzby wrote:\n> > > > Maybe (all?) the clarification the docs need is to say:\n> > > > \"Partitioned tables are not *themselves* processed by autovacuum.\"\n> > > \n> > > Yes, I think the lack of autovacuum needs to be specifically mentioned\n> > > since most people assume autovacuum handles _all_ statistics updating.\n> \n> That's what 61fa6ca79 aimed to do. Laurenz is suggesting further\n> clarification.\n\nAh, makes sense, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 18 Jan 2023 13:11:12 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 2023-01-18 at 11:49 -0600, Justin Pryzby wrote:\n> I tweaked this a bit to end up with:\n> \n> > -    Partitioned tables are not processed by autovacuum.  
Statistics\n> > -    should be collected by running a manual <command>ANALYZE</command> when it is\n> > +    The leaf partitions of a partitioned table are normal tables and are processed\n> > +    by autovacuum; however, autovacuum does not process the partitioned table itself.\n> > +    This is no problem as far as <command>VACUUM</command> is concerned, since\n> > +    there's no need to vacuum the empty, partitioned table.  But, as mentioned in\n> > +    <xref linkend=\"vacuum-for-statistics\"/>, it also means that autovacuum won't\n> > +    run <command>ANALYZE</command> on the partitioned table.\n> > +    Although statistics are automatically gathered on its leaf partitions, some queries also need\n> > +    statistics on the partitioned table to run optimally.  You should collect statistics by\n> > +    running a manual <command>ANALYZE</command> when the partitioned table is\n> >      first populated, and again whenever the distribution of data in its\n> >      partitions changes significantly.\n> >     </para>\n> \n> \"partitions are normal tables\" was technically wrong, as partitions can\n> also be partitioned.\n\nI am fine with your tweaks. I think this is good to go.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 18 Jan 2023 20:26:10 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, Jan 18, 2023 at 10:15:18AM +0100, Laurenz Albe wrote:\n> On Tue, 2023-01-17 at 16:16 -0500, Bruce Momjian wrote:\n> > On Tue, Jan 17, 2023 at 03:00:50PM -0600, Justin Pryzby wrote:\n> > > Maybe (all?) 
the clarification the docs need is to say:\n> > > \"Partitioned tables are not *themselves* processed by autovacuum.\"\n> > \n> > Yes, I think the lack of autovacuum needs to be specifically mentioned\n> > since most people assume autovacuum handles _all_ statistics updating.\n> > \n> > Can someone summarize how bad it is we have no statistics on partitioned\n> > tables?  It sounds bad to me.\n> \n> Andrey Lepikhov had an example earlier in this thread[1]. It doesn't take\n> an exotic query. \n> \n> Attached is a new version of my patch that tries to improve the wording.\n\nAh, yes, that is the example I saw but could not re-find. Here is the\noutput:\n\n\tCREATE TABLE test (id integer, val integer) PARTITION BY hash (id);\n\t\n\tCREATE TABLE test_0 PARTITION OF test\n\t FOR VALUES WITH (modulus 2, remainder 0);\n\tCREATE TABLE test_1 PARTITION OF test\n\t FOR VALUES WITH (modulus 2, remainder 1);\n\t\n\tINSERT INTO test (SELECT q, q FROM generate_series(1,10) AS q);\n\t\n\tVACUUM ANALYZE test;\n\t\n\tINSERT INTO test (SELECT q, q%2 FROM generate_series(11,200) AS q);\n\t\n\tVACUUM ANALYZE test_0,test_1;\n\t\n\tEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF)\n\tSELECT * FROM test t1, test t2 WHERE t1.id = t2.val;\n\t QUERY PLAN \n\t---------------------------------------------------------------------------------------------------------\n\t Hash Join (cost=7.50..15.25 rows=200 width=16) (actual rows=105 loops=1)\n\t Hash Cond: (t1.id = t2.val)\n\t -> Append (cost=0.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t -> Seq Scan on test_0 t1_1 (cost=0.00..2.13 rows=113 width=8) (actual rows=113 loops=1)\n\t -> Seq Scan on test_1 t1_2 (cost=0.00..1.87 rows=87 width=8) (actual rows=87 loops=1)\n\t -> Hash (cost=5.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t Buckets: 1024 Batches: 1 Memory Usage: 16kB\n\t -> Append (cost=0.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t -> Seq Scan on test_0 t2_1 (cost=0.00..2.13 rows=113 width=8) (actual rows=113 
loops=1)\n\t -> Seq Scan on test_1 t2_2 (cost=0.00..1.87 rows=87 width=8) (actual rows=87 loops=1)\n\t\n\tVACUUM ANALYZE test;\n\t\n\tEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF)\n\tSELECT * FROM test t1, test t2 WHERE t1.id = t2.val;\n\t QUERY PLAN \n\t---------------------------------------------------------------------------------------------------------\n\t Hash Join (cost=7.50..15.25 rows=200 width=16) (actual rows=105 loops=1)\n\t Hash Cond: (t2.val = t1.id)\n\t -> Append (cost=0.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t -> Seq Scan on test_0 t2_1 (cost=0.00..2.13 rows=113 width=8) (actual rows=113 loops=1)\n\t -> Seq Scan on test_1 t2_2 (cost=0.00..1.87 rows=87 width=8) (actual rows=87 loops=1)\n\t -> Hash (cost=5.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t Buckets: 1024 Batches: 1 Memory Usage: 16kB\n\t -> Append (cost=0.00..5.00 rows=200 width=8) (actual rows=200 loops=1)\n\t -> Seq Scan on test_0 t1_1 (cost=0.00..2.13 rows=113 width=8) (actual rows=113 loops=1)\n\t -> Seq Scan on test_1 t1_2 (cost=0.00..1.87 rows=87 width=8) (actual rows=87 loops=1)\n\nI see the inner side uses 'val' in the first EXPLAIN and 'id' in the\nsecond, and you are right that 'val' has mostly 0/1.\n\nIs it possible to document when partition table statistics helps?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Wed, 18 Jan 2023 16:23:41 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 2023-01-18 at 16:23 -0500, Bruce Momjian wrote:\n> Is it possible to document when partition table statistics helps?\n\nI think it would be difficult to come up with an exhaustive list.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 19 Jan 2023 13:50:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Thu, Jan 19, 2023 at 01:50:05PM +0100, Laurenz Albe wrote:\n> On Wed, 2023-01-18 at 16:23 -0500, Bruce Momjian wrote:\n> > Is it possible to document when partition table statistics helps?\n> \n> I think it would be difficult to come up with an exhaustive list.\n\nI was afraid of that. I asked only because most people assume\nautovacuum handles _all_ statistics needs, but this case is not handled.\nDo people even have any statistics maintenance process anymore, and if\nnot, how would they know they need to run a manual ANALYZE?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:56:54 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Thu, 2023-01-19 at 15:56 -0500, Bruce Momjian wrote:\n> On Thu, Jan 19, 2023 at 01:50:05PM +0100, Laurenz Albe wrote:\n> > On Wed, 2023-01-18 at 16:23 -0500, Bruce Momjian wrote:\n> > > Is it possible to document when partition table statistics helps?\n> > \n> > I think it would be difficult to come up with an exhaustive list.\n> \n> I was afraid of that.  I asked only because most people assume\n> autovacuum handles _all_ statistics needs, but this case is not handled.\n> Do people even have any statistics maintenance process anymore, and if\n> not, how would they know they need to run a manual ANALYZE?\n\nProbably not. I think this would warrant an entry in the TODO list:\n\"make autovacuum collect statistics for partitioned tables\".\n\nEven if we cannot give better advice than \"run ANALYZE manually if\nthe execution plan looks fishy\", the patch is still an improvement,\nisn't it?\n\nI have already seen several questions by people who read the current\ndocumentation and were worried that autovacuum wouldn't clean up their\npartitioned tables.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:33:57 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, Jan 20, 2023 at 10:33:57AM +0100, Laurenz Albe wrote:\n> On Thu, 2023-01-19 at 15:56 -0500, Bruce Momjian wrote:\n> > On Thu, Jan 19, 2023 at 01:50:05PM +0100, Laurenz Albe wrote:\n> > > On Wed, 2023-01-18 at 16:23 -0500, Bruce Momjian wrote:\n> > > > Is it possible to document when partition table statistics helps?\n> > > \n> > > I think it would be difficult to come up with an exhaustive 
list.\n> > \n> > I was afraid of that.  I asked only because most people assume\n> > autovacuum handles _all_ statistics needs, but this case is not handled.\n> > Do people even have any statistics maintenance process anymore, and if\n> > not, how would they know they need to run a manual ANALYZE?\n> \n> Probably not. I think this would warrant an entry in the TODO list:\n> \"make autovacuum collect statistics for partitioned tables\".\n\nWe have it already:\n\n\tHave autoanalyze of parent tables occur when child tables are modified\n\n> Even if we cannot give better advice than \"run ANALYZE manually if\n> the execution plan looks fishy\", the patch is still an improvement,\n> isn't it?\n\nYes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:54:13 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 18 Jan 2023 at 22:15, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Attached is a new version of my patch that tries to improve the wording.\n\nI had a look at this and agree that we should adjust the paragraph in\nquestion if people are finding it confusing.\n\nFor your wording, I found I had a small problem with calling\npartitions of a partitioned table \"normal tables\" in:\n\n+ The partitions of a partitioned table are normal tables and get processed\n+ by autovacuum, but autovacuum doesn't process the partitioned table itself.\n\nI started to adjust that but since the text is fairly short it turned\nout quite different from what you had.\n\nI ended up with:\n\n+    With partitioned tables, since these do not directly store tuples, these\n+    do not require autovacuum to perform any <command>VACUUM</command>\n+    operations. 
Autovacuum simply performs a <command>VACUUM</command> on the\n+ partitioned table's partitions the same as it does with normal tables.\n+ However, the same is true for <command>ANALYZE</command> operations, and\n+ this can be problematic as there are various places in the query planner\n+ that attempt to make use of table statistics for partitioned tables when\n+ partitioned tables are queried. For now, you can work around this problem\n+ by running a manual <command>ANALYZE</command> command on the partitioned\n+ table when the partitioned table is first populated, and again whenever\n+ the distribution of data in its partitions changes significantly.\n\nwhich I've also attached in patch form.\n\nI know there's been a bit of debate on the wording and a few patches,\nso I may not be helping. If nobody is against the above, then I don't\nmind going ahead with it and backpatching to whichever version this\nfirst applies to. I just felt I wasn't 100% happy with what was being\nproposed.\n\nDavid", "msg_date": "Wed, 25 Jan 2023 16:26:24 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 2023-01-25 at 16:26 +1300, David Rowley wrote:\n> On Wed, 18 Jan 2023 at 22:15, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > Attached is a new version of my patch that tries to improve the wording.\n> \n> I had a look at this and agree that we should adjust the paragraph in\n> question if people are finding it confusing.\n> \n> I started to adjust that but since the text is fairly short it turned\n> out quite different from what you had.\n> \n> I ended up with:\n> \n> +    With partitioned tables, since these do not directly store tuples, these\n> +    do not require autovacuum to perform any <command>VACUUM</command>\n> +    operations.  
Autovacuum simply performs a <command>VACUUM</command> on the\n> +    partitioned table's partitions the same as it does with normal tables.\n> +    However, the same is true for <command>ANALYZE</command> operations, and\n> +    this can be problematic as there are various places in the query planner\n> +    that attempt to make use of table statistics for partitioned tables when\n> +    partitioned tables are queried.  For now, you can work around this problem\n> +    by running a manual <command>ANALYZE</command> command on the partitioned\n> +    table when the partitioned table is first populated, and again whenever\n> +    the distribution of data in its partitions changes significantly.\n> \n> which I've also attached in patch form.\n> \n> I know there's been a bit of debate on the wording and a few patches,\n> so I may not be helping.  If nobody is against the above, then I don't\n> mind going ahead with it and backpatching to whichever version this\n> first applies to. I just felt I wasn't 100% happy with what was being\n> proposed.\n\nThanks, your help is welcome.\n\nDid you see Justin's wording suggestion in\nhttps://postgr.es/m/20230118174919.GA9837%40telsasoft.com ?\nHe didn't attach it as a patch, so you may have missed it.\nI was pretty happy with that.\n\nI think your first sentence is a bit clumsy and might be streamlined to\n\n  Partitioned tables do not directly store tuples and consequently do not\n  require autovacuum to perform any <command>VACUUM</command> operations.\n\nAlso, I am a little bit unhappy about\n\n1. Your paragraph states that partitioned tables need no autovacuum,\n   but doesn't state unmistakably that they will never be treated\n   by autovacuum.\n\n2. 
You make a distinction between table partitions and \"normal tables\",\n   but really there is no distinction.\n\nPerhaps I am being needlessly picky here...\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 25 Jan 2023 07:46:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 25 Jan 2023 at 19:46, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Did you see Justin's wording suggestion in\n> https://postgr.es/m/20230118174919.GA9837%40telsasoft.com ?\n> He didn't attach it as a patch, so you may have missed it.\n> I was pretty happy with that.\n\nI didn't pay too much attention as I tend to apply patches to obtain\nthe full context of the change. Manually trying to apply a patch from\nan email is not something I like to do.\n\n> I think your first sentence is a bit clumsy and might be streamlined to\n>\n>     Partitioned tables do not directly store tuples and consequently do not\n>     require autovacuum to perform any <command>VACUUM</command> operations.\n\nThat seems better than what I had.\n\n> Also, I am a little bit unhappy about\n>\n> 1. Your paragraph states that partitioned tables need no autovacuum,\n>    but doesn't state unmistakably that they will never be treated\n>    by autovacuum.\n\nhmm. I assume the reader realises from the text that lack of any\ntuples means VACUUM is not required.  The remaining part of what\nautovacuum does not do is explained when the text goes on to say that\nANALYZE operations are also not performed on partitioned tables. I'm\nnot sure what is left that's mistakable there.\n\n> 2. You make a distinction between table partitions and \"normal tables\",\n>    but really there is no distinction.\n\nWe may have different mental models here. 
This relates to the part\nthat I wasn't keen on in your patch, i.e:\n\n+ The partitions of a partitioned table are normal tables and get processed\n+ by autovacuum\n\nWhile I agree that the majority of partitions are likely to be\nrelkind='r', which you might ordinarily consider a \"normal table\", you\njust might change your mind when you try to INSERT or UPDATE records\nthat would violate the partition constraint. Some partitions might\nalso be themselves partitioned tables and others might be foreign\ntables. That does not really matter much when it comes to what\nautovacuum does or does not do, but I'm not really keen to imply in\nour documents that partitions are \"normal tables\".\n\nDavid\n\n\n", "msg_date": "Wed, 25 Jan 2023 21:43:15 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, 25 Jan 2023 at 21:43, David Rowley <dgrowleyml@gmail.com> wrote:\n> While I agree that the majority of partitions are likely to be\n> relkind='r', which you might ordinarily consider a \"normal table\", you\n> just might change your mind when you try to INSERT or UPDATE records\n> that would violate the partition constraint. Some partitions might\n> also be themselves partitioned tables and others might be foreign\n> tables. 
That does not really matter much when it comes to what\n> autovacuum does or does not do, but I'm not really keen to imply in\n> our documents that partitions are \"normal tables\".\n\nBased on the above, I'm setting this to waiting on author.\n\nDavid\n\n\n", "msg_date": "Thu, 13 Jul 2023 10:21:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "> On 13 Jul 2023, at 00:21, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> On Wed, 25 Jan 2023 at 21:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>> While I agree that the majority of partitions are likely to be\n>> relkind='r', which you might ordinarily consider a \"normal table\", you\n>> just might change your mind when you try to INSERT or UPDATE records\n>> that would violate the partition constraint. Some partitions might\n>> also be themselves partitioned tables and others might be foreign\n>> tables. That does not really matter much when it comes to what\n>> autovacuum does or does not do, but I'm not really keen to imply in\n>> our documents that partitions are \"normal tables\".\n> \n> Based on the above, I'm setting this to waiting on author.\n\nBased on the above, and that the thread has been stalled for months, I'm\nmarking this returned with feedback. 
Please feel free to resubmit a new\nversion of the patch to a future CF.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 22:24:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "Sorry for dropping the ball on this; I'll add it to the next commitfest.\n\nOn Wed, 2023-01-25 at 21:43 +1300, David Rowley wrote:\n> > I think your first sentence it a bit clumsy and might be streamlined to\n> > \n> >   Partitioned tables do not directly store tuples and consequently do not\n> >   require autovacuum to perform any <command>VACUUM</command> operations.\n> \n> That seems better than what I had.\n\nOk, I went with it.\n\n> > Also, I am a little bit unhappy about\n> > \n> > 1. Your paragraph states that partitioned table need no autovacuum,\n> >    but doesn't state unmistakably that they will never be treated\n> >    by autovacuum.\n> \n> hmm. I assume the reader realises from the text that lack of any\n> tuples means VACUUM is not required.  The remaining part of what\n> autovacuum does not do is explained when the text goes on to say that\n> ANALYZE operations are also not performed on partitioned tables. I'm\n> not sure what is left that's mistakable there.\n\nI rewrote the paragraph a little so that it looks clearer to me.\nI hope it is OK for you as well.\n\n> > 2. You make a distinction between table partitions and \"normal tables\",\n> >    but really there is no distiction.\n> \n> We may have different mental models here. 
This relates to the part\n> that I wasn't keen on in your patch, i.e:\n> \n> +    The partitions of a partitioned table are normal tables and get processed\n> +    by autovacuum\n> \n> While I agree that the majority of partitions are likely to be\n> relkind='r', which you might ordinarily consider a \"normal table\", you\n> just might change your mind when you try to INSERT or UPDATE records\n> that would violate the partition constraint. Some partitions might\n> also be themselves partitioned tables and others might be foreign\n> tables. That does not really matter much when it comes to what\n> autovacuum does or does not do, but I'm not really keen to imply in\n> our documents that partitions are \"normal tables\".\n\nAgreed, there are differences between partitions and normal tables.\nAnd this is not the place in the documentation where we would like to\nget into detail about the differences.\n\nAttached is the next version of my patch.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 06 Sep 2023 05:53:56 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Wed, Sep 6, 2023 at 05:53:56AM +0200, Laurenz Albe wrote:\n> > We may have different mental models here. This relates to the part\n> > that I wasn't keen on in your patch, i.e:\n> > \n> > +    The partitions of a partitioned table are normal tables and get processed\n> > +    by autovacuum\n> > \n> > While I agree that the majority of partitions are likely to be\n> > relkind='r', which you might ordinarily consider a \"normal table\", you\n> > just might change your mind when you try to INSERT or UPDATE records\n> > that would violate the partition constraint. Some partitions might\n> > also be themselves partitioned tables and others might be foreign\n> > tables. 
That does not really matter much when it comes to what\n> > autovacuum does or does not do, but I'm not really keen to imply in\n> > our documents that partitions are \"normal tables\".\n> \n> Agreed, there are differences between partitions and normal tables.\n> And this is not the place in the documentation where we would like to\n> get into detail about the differences.\n> \n> Attached is the next version of my patch.\n\nI adjusted your patch to be shorter and clearer, patch attached. I am\nplanning to apply this back to PG 11.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 29 Sep 2023 18:08:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, 2023-09-29 at 18:08 -0400, Bruce Momjian wrote:\n> On Wed, Sep  6, 2023 at 05:53:56AM +0200, Laurenz Albe wrote:\n> > > We may have different mental models here. This relates to the part\n> > > that I wasn't keen on in your patch, i.e:\n> > > \n> > > +    The partitions of a partitioned table are normal tables and get processed\n> > > +    by autovacuum\n> > > \n> > > While I agree that the majority of partitions are likely to be\n> > > relkind='r', which you might ordinarily consider a \"normal table\", you\n> > > just might change your mind when you try to INSERT or UPDATE records\n> > > that would violate the partition constraint. Some partitions might\n> > > also be themselves partitioned tables and others might be foreign\n> > > tables. 
That does not really matter much when it comes to what\n> > > autovacuum does or does not do, but I'm not really keen to imply in\n> > > our documents that partitions are \"normal tables\".\n> > \n> > Agreed, there are differences between partitions and normal tables.\n> > And this is not the place in the documentation where we would like to\n> > get into detail about the differences.\n> > \n> > Attached is the next version of my patch.\n> \n> I adjusted your patch to be shorter and clearer, patch attached.  I am\n> planning to apply this back to PG 11.\n\nThanks for looking at this.\n\nI am mostly fine with your version, but it does not directly state that\nautovacuum does not process partitioned tables. I think this should be\nclarified in the beginning.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 30 Sep 2023 00:39:43 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Sat, Sep 30, 2023 at 12:39:43AM +0200, Laurenz Albe wrote:\n> On Fri, 2023-09-29 at 18:08 -0400, Bruce Momjian wrote:\n> > On Wed, Sep  6, 2023 at 05:53:56AM +0200, Laurenz Albe wrote:\n> > > > We may have different mental models here. This relates to the part\n> > > > that I wasn't keen on in your patch, i.e:\n> > > > \n> > > > +    The partitions of a partitioned table are normal tables and get processed\n> > > > +    by autovacuum\n> > > > \n> > > > While I agree that the majority of partitions are likely to be\n> > > > relkind='r', which you might ordinarily consider a \"normal table\", you\n> > > > just might change your mind when you try to INSERT or UPDATE records\n> > > > that would violate the partition constraint. Some partitions might\n> > > > also be themselves partitioned tables and others might be foreign\n> > > > tables. 
That does not really matter much when it comes to what\n> > > > autovacuum does or does not do, but I'm not really keen to imply in\n> > > > our documents that partitions are \"normal tables\".\n> > > \n> > > Agreed, there are differences between partitions and normal tables.\n> > > And this is not the place in the documentation where we would like to\n> > > get into detail about the differences.\n> > > \n> > > Attached is the next version of my patch.\n> > \n> > I adjusted your patch to be shorter and clearer, patch attached.  I am\n> > planning to apply this back to PG 11.\n> \n> Thanks for looking at this.\n> \n> I am mostly fine with your version, but it does not directly state that\n> autovacuum does not process partitioned tables. I think this should be\n> clarified in the beginning.\n\nVery good point! Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 29 Sep 2023 22:34:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, 2023-09-29 at 22:34 -0400, Bruce Momjian wrote:\n> Very good point! Updated patch attached.\n\nThanks! Some small corrections:\n\n> diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> index 9cf9d030a8..be1c522575 100644\n> --- a/doc/src/sgml/maintenance.sgml\n> +++ b/doc/src/sgml/maintenance.sgml\n> @@ -861,10 +861,16 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu\n>     </para>\n>  \n>     <para>\n> - Partitioned tables are not processed by autovacuum. 
Statistics\n> - should be collected by running a manual <command>ANALYZE</command> when it is\n> - first populated, and again whenever the distribution of data in its\n> - partitions changes significantly.\n> + Partitioned tables do not directly store tuples and consequently\n> + autovacuum does not <command>VACUUM</command> them. (Autovacuum does\n\n... does not <command>VACUUM</command> or <command>ANALYZE</command> them.\n\nPerhaps it would be shorter to say \"does not process them\" like the\noriginal wording.\n\n> + perform <command>VACUUM</command> on table partitions just like other\n\nJust like *on* other tables, right?\n\n> + tables.) Unfortunately, this also means that autovacuum doesn't\n> + run <command>ANALYZE</command> on partitioned tables, and this\n> + can cause suboptimal plans for queries that reference partitioned\n> + table statistics. You can work around this problem by manually\n> + running <command>ANALYZE</command> on partitioned tables when they\n> + are first populated, and again whenever the distribution of data in\n> + their partitions changes significantly.\n>     </para>\n>  \n>     <para>\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 02 Oct 2023 04:48:20 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Mon, Oct 2, 2023 at 04:48:20AM +0200, Laurenz Albe wrote:\n> On Fri, 2023-09-29 at 22:34 -0400, Bruce Momjian wrote:\n> > Very good point! Updated patch attached.\n> \n> Thanks! 
Some small corrections:\n> \n> > diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> > index 9cf9d030a8..be1c522575 100644\n> > --- a/doc/src/sgml/maintenance.sgml\n> > +++ b/doc/src/sgml/maintenance.sgml\n> > @@ -861,10 +861,16 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu\n> >     </para>\n> >  \n> >     <para>\n> > - Partitioned tables are not processed by autovacuum. Statistics\n> > - should be collected by running a manual <command>ANALYZE</command> when it is\n> > - first populated, and again whenever the distribution of data in its\n> > - partitions changes significantly.\n> > + Partitioned tables do not directly store tuples and consequently\n> > + autovacuum does not <command>VACUUM</command> them. (Autovacuum does\n> \n> ... does not <command>VACUUM</command> or <command>ANALYZE</command> them.\n> \n> Perhaps it would be shorter to say \"does not process them\" like the\n> original wording.\n> \n> > + perform <command>VACUUM</command> on table partitions just like other\n> \n> Just like *on* other tables, right?\n\nGood points, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 6 Oct 2023 12:20:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, 2023-10-06 at 12:20 -0400, Bruce Momjian wrote:\n> Good points, updated patch attached.\n\nThat patch is good to go, as far as I am concerned.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 06 Oct 2023 18:49:05 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" }, { "msg_contents": "On Fri, Oct 6, 2023 at 06:49:05PM +0200, Laurenz Albe wrote:\n> On Fri, 2023-10-06 at 12:20 -0400, Bruce 
Momjian wrote:\n> > Good points, updated patch attached.\n> \n> That patch is good to go, as far as I am concerned.\n\nPatch applied back to PG 11, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 10 Oct 2023 15:14:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: document the need to analyze partitioned tables" } ]
[ { "msg_contents": "Hi all,\n\nI'm trying to execute a PROCEDURE (with COMMIT inside) called from a\nbackground worker using SPI but I'm always getting the error below:\n\n2021-09-13 09:36:43.568 -03 [23845] LOG: worker_spi worker 2 initialized\nwith schema2.counted\n2021-09-13 09:36:43.568 -03 [23846] LOG: worker_spi worker 1 initialized\nwith schema1.counted\n2021-09-13 09:36:43.571 -03 [23846] ERROR: invalid transaction termination\n2021-09-13 09:36:43.571 -03 [23846] CONTEXT: PL/pgSQL function\nschema1.counted_proc() line 1 at COMMIT\nSQL statement \"CALL \"schema1\".\"counted_proc\"()\"\n2021-09-13 09:36:43.571 -03 [23846] STATEMENT: CALL\n\"schema1\".\"counted_proc\"()\n2021-09-13 09:36:43.571 -03 [23845] ERROR: invalid transaction termination\n2021-09-13 09:36:43.571 -03 [23845] CONTEXT: PL/pgSQL function\nschema2.counted_proc() line 1 at COMMIT\nSQL statement \"CALL \"schema2\".\"counted_proc\"()\"\n2021-09-13 09:36:43.571 -03 [23845] STATEMENT: CALL\n\"schema2\".\"counted_proc\"()\n2021-09-13 09:36:43.571 -03 [23838] LOG: background worker \"worker_spi\"\n(PID 23845) exited with exit code 1\n2021-09-13 09:36:43.571 -03 [23838] LOG: background worker \"worker_spi\"\n(PID 23846) exited with exit code 1\n\nI changed the worker_spi example (attached) a bit to execute a simple\nprocedure. Even using SPI_connect_ext(SPI_OPT_NONATOMIC) I'm getting the\nerror \"invalid transaction termination\".\n\nIs there something wrong with the attached example, or am I missing\nsomething?\n\nRegards,\n\n-- \nFabrízio de Royes Mello", "msg_date": "Mon, 13 Sep 2021 09:48:45 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Is SPI + Procedures (with COMMIT) inside a bgworker broken?" 
}, { "msg_contents": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com> writes:\n> I'm trying to execute a PROCEDURE (with COMMIT inside) called from a\n> background worker using SPI but I'm always getting the error below:\n> 2021-09-13 09:36:43.571 -03 [23846] ERROR: invalid transaction termination\n\nThe direct cause of that is that SPI_execute() doesn't permit the called\nquery to perform COMMIT/ROLLBACK, which is because most callers would fail\nto cope with that. You can instruct SPI to allow that by replacing the\nSPI_execute() call with something like\n\n\t\tSPIExecuteOptions options;\n\n\t\t...\n\t\tmemset(&options, 0, sizeof(options));\n\t\toptions.allow_nonatomic = true;\n\n\t\tret = SPI_execute_extended(buf.data, &options);\n\n\nHowever, that's not enough to make this example work :-(.\nI find that it still fails inside the procedure's COMMIT,\nwith\n\n2021-09-13 15:14:54.775 EDT worker_spi[476310] ERROR: portal snapshots (0) did not account for all active snapshots (1)\n2021-09-13 15:14:54.775 EDT worker_spi[476310] CONTEXT: PL/pgSQL function schema4.counted_proc() line 1 at COMMIT\n SQL statement \"CALL \"schema4\".\"counted_proc\"()\"\n\nI think what this indicates is that worker_spi_main's cavalier\nmanagement of the active snapshot isn't up to snuff for this\nuse-case. The error is coming from ForgetPortalSnapshots, which\nis expecting that all active snapshots are attached to Portals;\nbut that one isn't.\n\nProbably the most appropriate fix is to make worker_spi_main\nset up a Portal to run the query inside of. 
There are other\nbits of code that are not happy if they're not inside a Portal,\nso if you're hoping to run arbitrary SQL this way, sooner or\nlater you're going to have to cross that bridge.\n\n(I remain of the opinion that replication/logical/worker.c\nis going to have to do that eventually, too...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Sep 2021 15:30:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is SPI + Procedures (with COMMIT) inside a bgworker broken?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 4:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> The direct cause of that is that SPI_execute() doesn't permit the called\n> query to perform COMMIT/ROLLBACK, which is because most callers would fail\n> to cope with that. You can instruct SPI to allow that by replacing the\n> SPI_execute() call with something like\n>\n> SPIExecuteOptions options;\n>\n> ...\n> memset(&options, 0, sizeof(options));\n> options.allow_nonatomic = true;\n>\n> ret = SPI_execute_extended(buf.data, &options);\n>\n\nI completely forgot about the SPI execute options... Thanks for the\nexplanation!!!\n\n\n> However, that's not enough to make this example work :-(.\n> I find that it still fails inside the procedure's COMMIT,\n> with\n>\n> 2021-09-13 15:14:54.775 EDT worker_spi[476310] ERROR: portal snapshots\n(0) did not account for all active snapshots (1)\n> 2021-09-13 15:14:54.775 EDT worker_spi[476310] CONTEXT: PL/pgSQL\nfunction schema4.counted_proc() line 1 at COMMIT\n> SQL statement \"CALL \"schema4\".\"counted_proc\"()\"\n>\n> I think what this indicates is that worker_spi_main's cavalier\n> management of the active snapshot isn't up to snuff for this\n> use-case. 
The error is coming from ForgetPortalSnapshots, which\n> is expecting that all active snapshots are attached to Portals;\n> but that one isn't.\n>\n\nThat is exactly the root cause of all my investigation.\n\nAt Timescale we have a scheduler (background worker) that launches another\nbackground worker to \"execute a job\", and by executing a job it means to\ncall a function [1] or a procedure [2] directly without a SPI.\n\nBut now a user raised an issue about snapshots [3] and when I saw the code\nfor the first time I tried to use SPI and it didn't work as expected.\n\nEven tweaking worker_spi to execute the procedure without SPI by calling\nExecuteCallStmt (attached) we end up with the same situation about the\nactive snapshots:\n\n2021-09-13 20:14:36.654 -03 [21483] LOG: worker_spi worker 2 initialized\nwith schema2.counted\n2021-09-13 20:14:36.655 -03 [21484] LOG: worker_spi worker 1 initialized\nwith schema1.counted\n2021-09-13 20:14:36.657 -03 [21483] ERROR: portal snapshots (0) did not\naccount for all active snapshots (1)\n2021-09-13 20:14:36.657 -03 [21483] CONTEXT: PL/pgSQL function\nschema2.counted_proc() line 1 at COMMIT\n2021-09-13 20:14:36.657 -03 [21484] ERROR: portal snapshots (0) did not\naccount for all active snapshots (1)\n2021-09-13 20:14:36.657 -03 [21484] CONTEXT: PL/pgSQL function\nschema1.counted_proc() line 1 at COMMIT\n2021-09-13 20:14:36.659 -03 [21476] LOG: background worker \"worker_spi\"\n(PID 21483) exited with exit code 1\n2021-09-13 20:14:36.659 -03 [21476] LOG: background worker \"worker_spi\"\n(PID 21484) exited with exit code 1\n\n\n> Probably the most appropriate fix is to make worker_spi_main\n> set up a Portal to run the query inside of. 
There are other\n> bits of code that are not happy if they're not inside a Portal,\n> so if you're hoping to run arbitrary SQL this way, sooner or\n> later you're going to have to cross that bridge.\n>\n\nI started digging with it [4] by creating a Portal from scratch to execute\nthe Function or Procedure and it worked.\n\nWe're wondering if we can avoid the parser for PortalRun, can we??\n\nRegards,\n\n[1]\nhttps://github.com/timescale/timescaledb/blob/master/tsl/src/bgw_policy/job.c#L726\n[2]\nhttps://github.com/timescale/timescaledb/blob/master/tsl/src/bgw_policy/job.c#L741\n[3] https://github.com/timescale/timescaledb/issues/3545\n[4]\nhttps://github.com/fabriziomello/timescaledb/blob/issue/3545/tsl/src/bgw_policy/job.c#L824\n\n-- \nFabrízio de Royes Mello", "msg_date": "Mon, 13 Sep 2021 20:31:34 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is SPI + Procedures (with COMMIT) inside a bgworker broken?" } ]
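Pulling together the API calls discussed in the thread above, the SPI call site in a background worker would look roughly like the sketch below. This is PostgreSQL 14+ extension code and will not compile standalone; `buf.data` and the `elog` handling are assumed from the worker_spi example, and the `SPI_connect_ext(SPI_OPT_NONATOMIC)` line reflects my assumption (worth verifying against spi.c) that the SPI connection itself must also be nonatomic before `allow_nonatomic` takes effect.

```c
/*
 * Sketch only (PostgreSQL 14+ extension code; will not compile standalone).
 * Based on worker_spi_main, with Tom's SPI_execute_extended() change.
 */
SPIExecuteOptions options;
int         ret;

StartTransactionCommand();

/* assumption: the SPI connection must be opened nonatomically as well */
SPI_connect_ext(SPI_OPT_NONATOMIC);

memset(&options, 0, sizeof(options));
options.allow_nonatomic = true;     /* allow COMMIT/ROLLBACK in the query */

ret = SPI_execute_extended(buf.data, &options);
if (ret < 0)
    elog(FATAL, "SPI_execute_extended failed: error code %d", ret);

SPI_finish();
CommitTransactionCommand();
```

Even with this, the thread shows that the procedure's internal COMMIT still fails until the caller's active-snapshot handling is reworked (for example by running the statement inside a Portal), so the fragment above covers only the SPI side of the fix.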
[ { "msg_contents": "Hi,\n\nI was reading src/backend/utils/resowner/README today and noticed the\nfollowing paragraph that mentions the types of objects/resources\n*directly* supported by the module.\n\n===\nCurrently, ResourceOwners contain direct support for recording ownership of\nbuffer pins, lmgr locks, and catcache, relcache, plancache, tupdesc, and\nsnapshot references. Other objects can be associated with a ResourceOwner by\nrecording the address of the owning ResourceOwner in such an object. There is\nan API for other modules to get control during ResourceOwner release, so that\nthey can scan their own data structures to find the objects that need to be\ndeleted.\n===\n\nIt seems a bunch of other object/resource types have been integrated\ninto the resowner mechanism since the list was last updated (2008).\n\nAttached patch updates the list. Not sure though if we should keep\nthe current format of the list, which after updating becomes a bit too\nlong.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 22:44:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "resowner module README needs update?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 10:44:11PM +0900, Amit Langote wrote:\n> It seems a bunch of other object/resource types have been integrated\n> into the resowner mechanism since the list was last updated (2008).\n> \n> Attached patch updates the list. Not sure though if we should keep\n> the current format of the list, which after updating becomes a bit too\n> long.\n\n Currently, ResourceOwners contain direct support for recording ownership of\n-buffer pins, lmgr locks, and catcache, relcache, plancache, tupdesc, and\n-snapshot references. Other objects can be associated with a ResourceOwner by\n-recording the address of the owning ResourceOwner in such an object. 
There is\n-an API for other modules to get control during ResourceOwner release, so that\n-they can scan their own data structures to find the objects that need to be\n-deleted.\n+buffer pins, lmgr locks, and catcache, relcache, plancache, tupdesc, snapshot\n+references, temporary files, dynamic shared memory segments, JIT contexts,\n+cryptohash contexts, and HMAX contexts. Other objects can be associated with\n+a ResourceOwner by recording the address of the owning ResourceOwner in such\n+an object. There is an API for other modules to get control during\n+ResourceOwner release, so that they can scan their own data structures to find\n+the objects that need to be deleted.\n\ns/HMAX/HMAC/.\n\nJust updating this list is a recipe for having it out-of-sync again.\nWhat about instead redirecting users to look at ResourceOwnerData in\nresowner.c about the types of resources owners that exist? I would\nsuggest something like that:\n\"Currently, ResourceOwners contain direct support for various built-in\ntypes (see ResourceOwnerData in src/backend/utils/resowner/resowner.c).\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 09:17:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: resowner module README needs update?" }, { "msg_contents": "Thanks for looking.\n\nOn Tue, Sep 14, 2021 at 9:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Sep 13, 2021 at 10:44:11PM +0900, Amit Langote wrote:\n> > It seems a bunch of other object/resource types have been integrated\n> > into the resowner mechanism since the list was last updated (2008).\n> >\n> > Attached patch updates the list. Not sure though if we should keep\n> > the current format of the list, which after updating becomes a bit too\n> > long.\n>\n> Currently, ResourceOwners contain direct support for recording ownership of\n> -buffer pins, lmgr locks, and catcache, relcache, plancache, tupdesc, and\n> -snapshot references. 
Other objects can be associated with a ResourceOwner by\n> -recording the address of the owning ResourceOwner in such an object. There is\n> -an API for other modules to get control during ResourceOwner release, so that\n> -they can scan their own data structures to find the objects that need to be\n> -deleted.\n> +buffer pins, lmgr locks, and catcache, relcache, plancache, tupdesc, snapshot\n> +references, temporary files, dynamic shared memory segments, JIT contexts,\n> +cryptohash contexts, and HMAX contexts. Other objects can be associated with\n> +a ResourceOwner by recording the address of the owning ResourceOwner in such\n> +an object. There is an API for other modules to get control during\n> +ResourceOwner release, so that they can scan their own data structures to find\n> +the objects that need to be deleted.\n>\n> s/HMAX/HMAC/.\n\nOops.\n\n> Just updating this list is a recipe for having it out-of-sync again.\n> What about instead redirecting users to look at ResourceOwnerData in\n> resowner.c about the types of resources owners that exist? I would\n> suggest something like that:\n> \"Currently, ResourceOwners contain direct support for various built-in\n> types (see ResourceOwnerData in src/backend/utils/resowner/resowner.c).\n\nYeah, that might be better.\n\nPatch updated. Given the new text, I thought it might be better to\nmove the paragraph right next to the description of the ResourceOwner\nAPI at the beginning of the section, because the context seems clearer\nthat way. Thoughts?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Sep 2021 12:18:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: resowner module README needs update?" }, { "msg_contents": "On Tue, Sep 14, 2021 at 12:18:21PM +0900, Amit Langote wrote:\n> Patch updated. 
Given the new text, I thought it might be better to\n> move the paragraph right next to the description of the ResourceOwner\n> API at the beginning of the section, because the context seems clearer\n> that way. Thoughts?\n\nNo objections here.\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 19:37:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: resowner module README needs update?" }, { "msg_contents": "On Tue, Sep 14, 2021 at 07:37:28PM +0900, Michael Paquier wrote:\n> No objections here.\n\nDone as of cae6fc2.\n--\nMichael", "msg_date": "Wed, 15 Sep 2021 16:09:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: resowner module README needs update?" }, { "msg_contents": "On Wed, Sep 15, 2021 at 4:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Sep 14, 2021 at 07:37:28PM +0900, Michael Paquier wrote:\n> > No objections here.\n>\n> Done as of cae6fc2.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Sep 2021 16:42:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: resowner module README needs update?" } ]
[ { "msg_contents": "All,\n\nI'm considering a new design for a specialized table am. It would simplify\nthe design if TIDs grew forever and I didn't have to implement TID reuse\nlogic.\n\nThe current 48 bit TID is big, but I can see extreme situations where it\nmight not be quite big enough. If every row that gets updated needs a TID,\nand something like an IoT app is updating huge numbers of rows per second\nusing multiple connections in parallel, there might be a problem. This is\nespecially true if each connection requests a batch of TIDs and then\ndoesn't use all of them.\n\nAre there any plans in the works to widen the TID?\n\nI saw some notes on this in the Zedstore project, but there hasn't been\nmuch activity in that project for almost a year.\n\nChris\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nAll,I'm considering a new design for a specialized table am. It would simplify the design if TIDs grew forever and I didn't have to implement TID reuse logic.The current 48 bit TID is big, but I can see extreme situations where it might not be quite big enough. If every row that gets updated needs a TID, and something like an IoT app is updating huge numbers of rows per second using multiple connections in parallel, there might be a problem. This is especially true if each connection requests a batch of TIDs and then doesn't use all of them. Are there any plans in the works to widen the TID?I saw some notes on this in the Zedstore project, but there hasn't been much activity in that project for almost a year.Chris-- Chris Cleveland312-339-2677 mobile", "msg_date": "Mon, 13 Sep 2021 10:49:48 -0500", "msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>", "msg_from_op": true, "msg_subject": "64 bit TID?" }, { "msg_contents": "On Mon, 13 Sept 2021 at 17:50, Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n>\n> All,\n>\n> I'm considering a new design for a specialized table am. 
It would simplify the design if TIDs grew forever and I didn't have to implement TID reuse logic.\n\nTID reuse logic also helps clean up index tuples for deleted table\ntuples. I would suggest to implement TID reuse logic if only to\nprevent indexes from growing indefinately (or TID limits reached,\nwhichever first).\n\n> The current 48 bit TID is big, but I can see extreme situations where it might not be quite big enough. If every row that gets updated needs a TID, and something like an IoT app is updating huge numbers of rows per second using multiple connections in parallel, there might be a problem.\n\nIf your table contains such large amounts of (versions of) tuples, you\nmight want to partition your table(s), as that allows the system to\nmove some bits of tuple identification to the the relation identifier.\n\n> This is especially true if each connection requests a batch of TIDs and then doesn't use all of them.\n\nFor the HeapAM this is never the case; TIDs cannot be allocated\nwithout use (albeit some may be used for rolled-back and thus dead\ntuples).\n\n> Are there any plans in the works to widen the TID?\n\nThis was recently discussed here [0] as well, but to the best of my\nknowledge no material proposal to update the APIs has been suggested\nas of yet.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/0bbeb784050503036344e1f08513f13b2083244b.camel%40j-davis.com\n\n\n", "msg_date": "Mon, 13 Sep 2021 19:12:42 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit TID?" 
}, { "msg_contents": "> > Are there any plans in the works to widen the TID?\n>\n> This was recently discussed here [0] as well, but to the best of my\n> knowledge no material proposal to update the APIs has been suggested\n> as of yet.\n>\n> [0] https://www.postgresql.org/message-id/flat/0bbeb784050503036344e1f08513f13b2083244b.camel%40j-davis.com\n\nWow, thank you, that is some thread. It discusses the issues\nthoroughly. As I see it, there are three options:\n\n1. Make it possible to use the unused 5 bits in the existing TID\nscheme. The advantages: we get the full 48 bits, and it may not take a\nlot of work, and it makes Jeff Davis' work with Columnar easier.\n\n2. Go to a flat 64-bit logical TID. The advantages: certain types of\ntable AMs work better, including Columnar and LSM tree-based AMs\n(which I'm currently working on).\n\n3. Go to a variable-length TID. The advantages: you can stuff any kind\nof payload into the TID, which would make clustered tables and certain\nfancy indexes easier, but would be far more work.\n\nI would contribute patches myself, but I'm not *yet* skilled enough in\nthe ways of Postgres to do so.\n\nQuestions:\n\nWould widening the existing ItemPointer to 64 bits now preclude a\nvariable-length TID in the future? Or make it more difficult?\n\nHow much work would it take?\n\nSince the thread ended in May, has the group reached any kind of\nconsensus on the issue?\n-- \nChris Cleveland\n312-339-2677 mobile\n\n\n", "msg_date": "Mon, 13 Sep 2021 17:29:46 -0500", "msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>", "msg_from_op": true, "msg_subject": "Re: 64 bit TID?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 3:30 PM Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n> Wow, thank you, that is some thread. 
It discusses the issues\n> thoroughly.\n\nIf somebody wants to make TIDs (or some generalized TID-like thing\nthat tableam knows about) into logical identifiers, then they must\nalso answer the question: identifiers of what?\n\nTIDs from Postgres heapam identify a physical version, or perhaps a\nHOT chain -- which is not how TIDs work in other DB systems that use a\nheap structure. This is the only reason why we can mostly think of\nindexes as data structures that don't need to be involved in\nconcurrency control. Postgres index access methods don't usually need\nto know anything about locks that protect the logical structure of the\ndatabase.\n\nThe option of just creating a new distinct TID (for the same logical\nrow) buys us the ability to keep index access methods rather separate\nfrom everything else -- which helps with extensibility. No logical\nlocks are required in Postgres. Complicated designs that bleed into\nother parts of the system (designs like ARIES/KVL and ARIES/IM) are\nunnecessary.\n\n> Questions:\n>\n> Would widening the existing ItemPointer to 64 bits now preclude a\n> variable-length TID in the future? Or make it more difficult?\n>\n> How much work would it take?\n\nIf it was just a matter of changing the data structure then I think it\nwould be far easier.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Sep 2021 17:36:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 64 bit TID?" }, { "msg_contents": "On Mon, Sep 13, 2021 at 5:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If somebody wants to make TIDs (or some generalized TID-like thing\n> that tableam knows about) into logical identifiers, then they must\n> also answer the question: identifiers of what?\n>\n> TIDs from Postgres heapam identify a physical version, or perhaps a\n> HOT chain -- which is not how TIDs work in other DB systems that use a\n> heap structure. 
This is the only reason why we can mostly think of\n> indexes as data structures that don't need to be involved in\n> concurrency control. Postgres index access methods don't usually need\n> to know anything about locks that protect the logical structure of the\n> database.\n\nThe 1993 paper \"Options in Physical Database Design\" gives a useful\noverview of the challenges here. Especially for an extensibile system\nlike Postgres relative to a system with a traditional design\nimplementing classic ARIES.\n\nI think that you need an ACM membership to get a copy. The relevant\nsection starts out like this:\n\n\"\"\"\nItem Representation\n-------------------\n\nPhysical representation types for abstract data types\nis only slowly gaining research attention for object-\noriented database systems but will likely become a\nvery important tuning option. Examples include sets\nrepresented as bit maps, arrays, or lists and matrices\nrepresented densely or sparsely, by row or by column\nor as tiles, e.g. [MaV93]. The goal is to bring physical\ndata independence to object-oriented and scientific\ndatabases and their applications.\n\nPhysical pointers, references, or object identifiers to\nrepresent relationships support \"navigation\" through a\ndatabase, which is very good for single-instance\nretrievals and often improves set matching, but also\ncreates a new type of updates, structural updates,\nwhich may increase the complexity of concurrency\ncontrol and recovery [CSL90, ChK84, RoR85,\nShC90].\n\"\"\"\n\nThis seems to be a fundamental trade-off that is tied inextricably to\nthe design of many other things.\n\nThat doesn't stop anybody from creating a column store using the\ntableam. But it does mean that they will need to be very careful about\ndefining what exact \"logical vs physical vs physiological\" tradeoff\nthey've chosen. 
It's rather subtle stuff.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Sep 2021 17:55:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: 64 bit TID?" } ]
[ { "msg_contents": "Hi,\n\nI noticed that postgres.h is included from relcache.h (starting in [1]) and\nwanted to fix that - it violates our usual policy against including postgres.h\nfrom within headers.\n\nBut then I noticed that that causes pg_upgrade/file.c to fail to compile:\n\nIn file included from /home/andres/src/postgresql/src/include/access/visibilitymap.h:20,\n from /home/andres/src/postgresql/src/bin/pg_upgrade/file.c:22:\n/home/andres/src/postgresql/src/include/utils/relcache.h:53:8: error: unknown type name ‘Datum’\n 53 | extern Datum *RelationGetIndexRawAttOptions(Relation relation);\n\nWhich is presumably why the postgres.h include was added in [1]. The only\nreason this didn't fail before is because there wasn't any other reference to\nDatum (or any of the other postgres.h types) in relcache.h before this commit.\n\n\nI guess the best solution is to add include the \"full\" postgres.h explicitly\nfrom file.c like several other places, like e.g. src/bin/pg_controldata/pg_controldata.c\ndo:\n\n/*\n * We have to use postgres.h not postgres_fe.h here, because there's so much\n * backend-only stuff in the XLOG include files we need. But we need a\n * frontend-ish environment otherwise. Hence this ugly hack.\n */\n#define FRONTEND 1\n\n\nI was also wondering if we should put something in c.h and postgres.h to avoid\nredundant includes? 
Currently there's a few .c files that \"use\" the header\nguards, all scanners that we include into the parsers.\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] commit 911e70207703799605f5a0e8aad9f06cff067c63\nAuthor: Alexander Korotkov <akorotkov@postgresql.org>\nDate: 2020-03-30 19:17:11 +0300\n\n Implement operator class parameters\n\n\n", "msg_date": "Mon, 13 Sep 2021 16:26:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I noticed that postgres.h is included from relcache.h (starting in [1]) and\n> wanted to fix that - it violates our usual policy against including postgres.h\n> from within headers.\n\nUgh, yeah, that's entirely against policy.\n\nAs for the fix ... what in the world is pg_upgrade doing including\nrelcache.h? It seems like there's a more fundamental problem here:\neither relcache.h is declaring something that needs to be elsewhere,\nor pg_upgrade is doing something it should not.\n\n> I was also wondering if we should put something in c.h and postgres.h to avoid\n> redundant includes?\n\nNo. 
If anything, I'd want to throw an error for \"redundant\" includes\nof these files, because it's a pretty good red flag about\npoorly-thought-out header modularization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Sep 2021 22:40:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Hi,\n\nOn 2021-09-13 22:40:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I noticed that postgres.h is included from relcache.h (starting in [1]) and\n> > wanted to fix that - it violates our usual policy against including postgres.h\n> > from within headers.\n> \n> Ugh, yeah, that's entirely against policy.\n> \n> As for the fix ... what in the world is pg_upgrade doing including\n> relcache.h? It seems like there's a more fundamental problem here:\n> either relcache.h is declaring something that needs to be elsewhere,\n> or pg_upgrade is doing something it should not.\n\nIt's not directly including relcache. pg_upgrade/file.c needs a few symbols\nfrom visibilitymap.h, for the upgrade of the visibilitymap from old to new\nformat. And visibilitymap needs relcache.h because several of its routines\ntake Relation params.\n\nWe could split visibilitymap.h into two, or we could forward-declare Relation\nand not include relcache...\n\n\n> > I was also wondering if we should put something in c.h and postgres.h to avoid\n> > redundant includes?\n> \n> No. If anything, I'd want to throw an error for \"redundant\" includes\n> of these files, because it's a pretty good red flag about\n> poorly-thought-out header modularization.\n\nI think we might be thinking of the same. What I meant with \"avoid\" was to\nraise a warning or error. 
If we were to do that, it's probably worth doing the\nbuild system ugliness to do this only when building postgres code, rather than\nextensions...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Sep 2021 19:57:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-09-13 22:40:19 -0400, Tom Lane wrote:\n>> As for the fix ... what in the world is pg_upgrade doing including\n>> relcache.h? It seems like there's a more fundamental problem here:\n>> either relcache.h is declaring something that needs to be elsewhere,\n>> or pg_upgrade is doing something it should not.\n\n> We could split visibilitymap.h into two, or we could forward-declare Relation\n> and not include relcache...\n\nWithout having looked at the details, I think using a forward-declare\nto avoid including relcache.h in visibilitymap.h might be a reasonably\nnon-painful fix. OTOH, in the long run it might be worth the effort\nto split visibilitymap.h to separate useful file-contents knowledge\nfrom backend function declarations.\n\n>> No. If anything, I'd want to throw an error for \"redundant\" includes\n>> of these files, because it's a pretty good red flag about\n>> poorly-thought-out header modularization.\n\n> I think we might be thinking of the same. What I meant with \"avoid\" was to\n> raise a warning or error.\n\nAh, we are on the same page then. I misunderstood what you wrote.\n\n> If we were to do that, it's probably worth doing the\n> build system ugliness to do this only when building postgres code, rather than\n> extensions...\n\nAs long as we do this in HEAD only, I'm not sure why extensions\nneed an exception. 
Perhaps it will result in somebody pointing out\nadditional poorly-thought-out header contents, but I don't think\nthat's bad.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Sep 2021 23:53:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On Tue, Sep 14, 2021 at 5:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I noticed that postgres.h is included from relcache.h (starting in [1]) and\n> > wanted to fix that - it violates our usual policy against including postgres.h\n> > from within headers.\n>\n> Ugh, yeah, that's entirely against policy.\n\nI see. This is my oversight, sorry for that.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 18 Sep 2021 02:24:17 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On Tue, Sep 14, 2021 at 6:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-09-13 22:40:19 -0400, Tom Lane wrote:\n> >> As for the fix ... what in the world is pg_upgrade doing including\n> >> relcache.h? It seems like there's a more fundamental problem here:\n> >> either relcache.h is declaring something that needs to be elsewhere,\n> >> or pg_upgrade is doing something it should not.\n>\n> > We could split visibilitymap.h into two, or we could forward-declare Relation\n> > and not include relcache...\n>\n> Without having looked at the details, I think using a forward-declare\n> to avoid including relcache.h in visibilitymap.h might be a reasonably\n> non-painful fix.\n\nI like that idea, but I didn't find an appropriate existing header for\nforward-declaration of Relation. relation.h isn't suitable, because\nit includes primnodes.h. 
A separate header for just\nforward-definition of Relation seems too much.\n\n> TOH, in the long run it might be worth the effort\n> to split visibilitymap.h to separate useful file-contents knowledge\n> from backend function declarations.\n\nI've drafted a patch splitting visibilitymap_maros.h from\nvisibilitymap.h. What do you think?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 18 Sep 2021 02:51:09 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Hi,\n\nOn 2021-09-18 02:51:09 +0300, Alexander Korotkov wrote:\n> On Tue, Sep 14, 2021 at 6:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Without having looked at the details, I think using a forward-declare\n> > to avoid including relcache.h in visibilitymap.h might be a reasonably\n> > non-painful fix.\n> \n> I like that idea, but I didn't find an appropriate existing header for\n> forward-declaration of Relation. relation.h isn't suitable, because\n> it includes primnodes.h. A separate header for just\n> forward-definition of Relation seems too much.\n\nI was just thinking of doing something like the attached.\n\n\n> > TOH, in the long run it might be worth the effort\n> > to split visibilitymap.h to separate useful file-contents knowledge\n> > from backend function declarations.\n> \n> I've drafted a patch splitting visibilitymap_maros.h from\n> visibilitymap.h. 
What do you think?\n\nI'd name it visibilitymapdefs.h or such, mostly because that's what other\nheaders are named like...\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 17 Sep 2021 17:06:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Hi,\n\nOn Sat, Sep 18, 2021 at 3:06 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-09-18 02:51:09 +0300, Alexander Korotkov wrote:\n> > On Tue, Sep 14, 2021 at 6:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Without having looked at the details, I think using a forward-declare\n> > > to avoid including relcache.h in visibilitymap.h might be a reasonably\n> > > non-painful fix.\n> >\n> > I like that idea, but I didn't find an appropriate existing header for\n> > forward-declaration of Relation. relation.h isn't suitable, because\n> > it includes primnodes.h. A separate header for just\n> > forward-definition of Relation seems too much.\n>\n> I was just thinking of doing something like the attached.\n\nI see now. I think I'm rather favoring splitting visibilitymap.h.\n\n> > > TOH, in the long run it might be worth the effort\n> > > to split visibilitymap.h to separate useful file-contents knowledge\n> > > from backend function declarations.\n> >\n> > I've drafted a patch splitting visibilitymap_maros.h from\n> > visibilitymap.h. What do you think?\n>\n> I'd name it visibilitymapdefs.h or such, mostly because that's what other\n> headers are named like...\n\nGood point. The revised patch is attached.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 18 Sep 2021 21:58:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On 2021-Sep-18, Alexander Korotkov wrote:\n\n> I see now. 
I think I'm rather favoring splitting visibilitymap.h.\n\nAgreed, this looks sane to me. However, I think the\nVM_ALL_{VISIBLE,FROZEN} macros should remain in visibilitymap.h, since\nthey depend on the visibilitymap_get_status function (and pg_upgrade\ndoesn't use them).\n\nThere's a typo \"maros\" for \"macros\" in the new header file. (Also, why\ndoes the copyright line say \"portions\" if no portion under another\ncopyright? I think we don't say \"portions\" when there is only one\ncopyright statement line.)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 18 Sep 2021 17:35:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On Sat, Sep 18, 2021 at 11:35 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Sep-18, Alexander Korotkov wrote:\n>\n> > I see now. I think I'm rather favoring splitting visibilitymap.h.\n>\n> Agreed, this looks sane to me. However, I think the\n> VM_ALL_{VISIBLE,FROZEN} macros should remain in visibilitymap.h, since\n> they depend on the visibilitymap_get_status function (and pg_upgrade\n> doesn't use them).\n>\n> There's a typo \"maros\" for \"macros\" in the new header file. (Also, why\n> does the copyright line say \"portions\" if no portion under another\n> copyright? I think we don't say \"portions\" when there is only one\n> copyright statement line.)\n\nThank you for the feedback. 
All changes are accepted.\n\nAny objections to pushing this?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 19 Sep 2021 18:45:45 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On 2021-09-19 18:45:45 +0300, Alexander Korotkov wrote:\n> Any objections to pushing this?\n\nlgtm\n\nI assume you're planning to backpatch this?\n\n\n", "msg_date": "Mon, 20 Sep 2021 12:48:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On Mon, Sep 20, 2021 at 10:48 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-09-19 18:45:45 +0300, Alexander Korotkov wrote:\n> > Any objections to pushing this?\n>\n> lgtm\n\nThanks!\n\n> I assume you're planning to backpatch this?\n\nYes.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 21 Sep 2021 02:05:43 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Mon, Sep 20, 2021 at 10:48 PM Andres Freund <andres@anarazel.de> wrote:\n>> I assume you're planning to backpatch this?\n\n> Yes.\n\nProbably good to wait 24 hours until 14rc1 has been tagged.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Sep 2021 19:07:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" }, { "msg_contents": "On Tue, Sep 21, 2021 at 2:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Mon, Sep 20, 2021 at 10:48 PM Andres Freund <andres@anarazel.de> wrote:\n> >> I 
assume you're planning to backpatch this?\n>\n> > Yes.\n>\n> Probably good to wait 24 hours until 14rc1 has been tagged.\n\nOK, NP!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 21 Sep 2021 03:08:28 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres.h included from relcache.h - but removing it breaks\n pg_upgrade" } ]
[ { "msg_contents": "Hi,\n\nI would like to propose a patch that removes the duplicate code\nsetting database state in the control file.\n\nThe patch is straightforward but the only concern is that in\nStartupXLOG(), SharedRecoveryState now gets updated only with spin\nlock; earlier it also had ControlFileLock in addition to that. AFAICU,\nI don't see any problem there, since until the startup process exits\nother backends could not connect and write a WAL record.\n\nRegards,\nAmul Sul.\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Sep 2021 11:35:01 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On 9/13/21, 11:06 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> The patch is straightforward but the only concern is that in\r\n> StartupXLOG(), SharedRecoveryState now gets updated only with spin\r\n> lock; earlier it also had ControlFileLock in addition to that. AFAICU,\r\n> I don't see any problem there, since until the startup process exits\r\n> other backends could not connect and write a WAL record.\r\n\r\nIt looks like ebdf5bf intentionally made sure that we hold\r\nControlFileLock while updating SharedRecoveryInProgress (now\r\nSharedRecoveryState after 4e87c48). The thread for this change [0]\r\nhas some additional details.\r\n\r\nAs far as the patch goes, I'm not sure why SetControlFileDBState()\r\nneeds to be exported, and TBH I don't know if this change is really a\r\nworthwhile improvement. ISTM the main benefit is that it could help\r\navoid cases where we update the state but not the time. 
However,\r\nthere's still nothing preventing that, and I don't get the idea that\r\nit was really a big problem to begin with.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CAB7nPqTS5J3-G_zTow0Kc5oqZn877RDDN1Mfcqm2PscAS7FnAw%40mail.gmail.com\r\n\r\n", "msg_date": "Tue, 14 Sep 2021 19:22:03 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Wed, Sep 15, 2021 at 12:52 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 9/13/21, 11:06 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\n> > The patch is straightforward but the only concern is that in\n> > StartupXLOG(), SharedRecoveryState now gets updated only with spin\n> > lock; earlier it also had ControlFileLock in addition to that. AFAICU,\n> > I don't see any problem there, since until the startup process exits\n> > other backends could not connect and write a WAL record.\n>\n> It looks like ebdf5bf intentionally made sure that we hold\n> ControlFileLock while updating SharedRecoveryInProgress (now\n> SharedRecoveryState after 4e87c48). The thread for this change [0]\n> has some additional details.\n>\n\nYeah, I saw that; ebdf5bf's main intention was to minimize the gap\nbetween both of them, which was quite big previously. The comments\nadded by the same commit also describe the case where backends can\nwrite WAL while the control file still says we are not in\nDB_IN_PRODUCTION, and IIUC that seems to be acceptable.\nThen the question is what would be wrong if a process can see an\ninconsistent shared memory view for a small window? 
However,\n> there's still nothing preventing that, and I don't get the idea that\n> it was really a big problem to begin with.\n>\n\nOh ok, I was working on a different patch[1] where I want to call this\nfunction from checkpointer, but I agree exporting function is not in\nthe scope of this patch.\n\nRegards,\nAmul\n\n1] https://postgr.es/m/CAAJ_b97KZzdJsffwRK7w0XU5HnXkcgKgTR69t8cOZztsyXjkQw@mail.gmail.com\n\n\n", "msg_date": "Wed, 15 Sep 2021 17:16:23 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On 9/15/21, 4:47 AM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> On Wed, Sep 15, 2021 at 12:52 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> It looks like ebdf5bf intentionally made sure that we hold\r\n>> ControlFileLock while updating SharedRecoveryInProgress (now\r\n>> SharedRecoveryState after 4e87c48). The thread for this change [0]\r\n>> has some additional details.\r\n>>\r\n>\r\n> Yeah, I saw that and ebdf5bf main intention was to minimize the gap\r\n> between both of them which was quite big previously. The comments\r\n> added by the same commit also describe the case that backends can\r\n> write WAL and the control file is still referring not in\r\n> DB_IN_PRODUCTION and IIUC, which seems to be acceptable.\r\n> Then the question is what would be wrong if a process can see an\r\n> inconsistent shared memory view for a small window? 
Might be\r\n> wait-promoting might behave unexpectedly, that I have to test.\r\n\r\nFor your proposed change, I would either leave out this particular\r\ncall site or add a \"WithLock\" version of the function.\r\n\r\nvoid\r\nSetControlFileDBState(DBState state)\r\n{\r\n LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\r\n SetControlFileDBStateWithLock(state);\r\n LWLockRelease(ControlFileLock);\r\n}\r\n\r\nvoid\r\nSetControlFileDBStateWithLock(DBState state)\r\n{\r\n Assert(LWLockHeldByMeInMode(ControlFileLock, LW_EXCLUSIVE));\r\n\r\n ControlFile->state = state;\r\n ControlFile->time = (pg_time_t) time(NULL);\r\n UpdateControlFile();\r\n}\r\n\r\n>> As far as the patch goes, I'm not sure why SetControlFileDBState()\r\n>> needs to be exported, and TBH I don't know if this change is really a\r\n>> worthwhile improvement. ISTM the main benefit is that it could help\r\n>> avoid cases where we update the state but not the time. However,\r\n>> there's still nothing preventing that, and I don't get the idea that\r\n>> it was really a big problem to begin with.\r\n>>\r\n>\r\n> Oh ok, I was working on a different patch[1] where I want to call this\r\n> function from checkpointer, but I agree exporting function is not in\r\n> the scope of this patch.\r\n\r\nAh, I was missing this context. Perhaps this should be included in\r\nthe patch set for the other thread, especially if it will need to be\r\nexported.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 15 Sep 2021 22:49:39 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Wed, Sep 15, 2021 at 10:49:39PM +0000, Bossart, Nathan wrote:\n> Ah, I was missing this context. 
Perhaps this should be included in\n> the patch set for the other thread, especially if it will need to be\n> exported.\n\nThis part of the patch is mentioned at the top of the thread:\n- LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n- ControlFile->state = DB_IN_PRODUCTION;\n- ControlFile->time = (pg_time_t) time(NULL);\n-\n+ SetControlFileDBState(DB_IN_PRODUCTION);\n SpinLockAcquire(&XLogCtl->info_lck);\n XLogCtl->SharedRecoveryState = RECOVERY_STATE_DONE;\n SpinLockRelease(&XLogCtl->info_lck);\n\nThere is an assumption in this code to update SharedRecoveryState\n*while* holding ControlFileLock. For example, see the following\ncomment in xlog.c, ReadRecord():\n/*\n * We update SharedRecoveryState while holding the lock on\n * ControlFileLock so both states are consistent in shared \n * memory. \n */\n--\nMichael", "msg_date": "Thu, 16 Sep 2021 08:47:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Thu, Sep 16, 2021 at 5:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Sep 15, 2021 at 10:49:39PM +0000, Bossart, Nathan wrote:\n> > Ah, I was missing this context. Perhaps this should be included in\n> > the patch set for the other thread, especially if it will need to be\n> > exported.\n>\n> This part of the patch is mentioned at the top of the thread:\n> - LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> - ControlFile->state = DB_IN_PRODUCTION;\n> - ControlFile->time = (pg_time_t) time(NULL);\n> -\n> + SetControlFileDBState(DB_IN_PRODUCTION);\n> SpinLockAcquire(&XLogCtl->info_lck);\n> XLogCtl->SharedRecoveryState = RECOVERY_STATE_DONE;\n> SpinLockRelease(&XLogCtl->info_lck);\n>\n> There is an assumption in this code to update SharedRecoveryState\n> *while* holding ControlFileLock. 
For example, see the following\n> comment in xlog.c, ReadRecord():\n> /*\n> * We update SharedRecoveryState while holding the lock on\n> * ControlFileLock so both states are consistent in shared\n> * memory.\n> */\n\nOk, understood, let's do that update with ControlFileLock, thanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 20 Sep 2021 10:04:53 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Thu, Sep 16, 2021 at 4:19 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 9/15/21, 4:47 AM, \"Amul Sul\" <sulamul@gmail.com> wrote:\n> > On Wed, Sep 15, 2021 at 12:52 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> It looks like ebdf5bf intentionally made sure that we hold\n> >> ControlFileLock while updating SharedRecoveryInProgress (now\n> >> SharedRecoveryState after 4e87c48). The thread for this change [0]\n> >> has some additional details.\n> >>\n> >\n> > Yeah, I saw that and ebdf5bf main intention was to minimize the gap\n> > between both of them which was quite big previously. The comments\n> > added by the same commit also describe the case that backends can\n> > write WAL and the control file is still referring not in\n> > DB_IN_PRODUCTION and IIUC, which seems to be acceptable.\n> > Then the question is what would be wrong if a process can see an\n> > inconsistent shared memory view for a small window? 
Might be\n> > wait-promoting might behave unexpectedly, that I have to test.\n>\n> For your proposed change, I would either leave out this particular\n> call site or add a \"WithLock\" version of the function.\n>\n> void\n> SetControlFileDBState(DBState state)\n> {\n> LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> SetControlFileDBStateWithLock(state);\n> LWLockRelease(ControlFileLock);\n> }\n>\n> void\n> SetControlFileDBStateWithLock(DBState state)\n> {\n> Assert(LWLockHeldByMeInMode(ControlFileLock, LW_EXCLUSIVE));\n>\n> ControlFile->state = state;\n> ControlFile->time = (pg_time_t) time(NULL);\n> UpdateControlFile();\n> }\n>\n\n+1, since skipping ControlFileLock for the DBState update is not the\nright thing, let's have two different functions as per your suggestion\n-- did the same in the attached version, thanks.\n\n\n> >> As far as the patch goes, I'm not sure why SetControlFileDBState()\n> >> needs to be exported, and TBH I don't know if this change is really a\n> >> worthwhile improvement. ISTM the main benefit is that it could help\n> >> avoid cases where we update the state but not the time. However,\n> >> there's still nothing preventing that, and I don't get the idea that\n> >> it was really a big problem to begin with.\n> >>\n> >\n> > Oh ok, I was working on a different patch[1] where I want to call this\n> > function from checkpointer, but I agree exporting function is not in\n> > the scope of this patch.\n>\n> Ah, I was missing this context. 
Perhaps this should be included in\n> the patch set for the other thread, especially if it will need to be\n> exported.\n>\n\nOk, reverted those changes in the attached version.\n\nI have one additional concern about the way we update the control\nfile, at every place where doing the update, we need to set control\nfile update time explicitly, why can't the time update line be moved\nto UpdateControlFile() so that time gets automatically updated?\n\nRegards,\nAmul", "msg_date": "Mon, 20 Sep 2021 11:36:29 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On 9/19/21, 11:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> +1, since skipping ControlFileLock for the DBState update is not the\r\n> right thing, let's have two different functions as per your suggestion\r\n> -- did the same in the attached version, thanks.\r\n\r\nI see that the attached patch reorders the call to UpdateControlFile()\r\nto before SharedRecoveryState is updated, which seems to go against\r\nthe intent of ebdf5bf. I'm not sure if this really creates that much\r\nof a problem in practice, but it is a behavior change.\r\n\r\nAlso, I still think it might be better to include this patch in the\r\npatch set where the exported function is needed. On its own, this is\r\na very small amount of refactoring that might not be totally\r\nnecessary.\r\n\r\n> I have one additional concern about the way we update the control\r\n> file, at every place where doing the update, we need to set control\r\n> file update time explicitly, why can't the time update line be moved\r\n> to UpdateControlFile() so that time gets automatically updated?\r\n\r\nI see a few places where UpdateControlFile() is called without\r\nupdating ControlFile->time. 
I haven't found any obvious reason for\r\nthat, so perhaps it would be okay to move it to update_controlfile().\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 20 Sep 2021 23:14:36 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Tue, Sep 21, 2021 at 4:44 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 9/19/21, 11:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\n> > +1, since skipping ControlFileLock for the DBState update is not the\n> > right thing, let's have two different functions as per your suggestion\n> > -- did the same in the attached version, thanks.\n>\n> I see that the attached patch reorders the call to UpdateControlFile()\n> to before SharedRecoveryState is updated, which seems to go against\n> the intent of ebdf5bf. I'm not sure if this really creates that much\n> of a problem in practice, but it is a behavior change.\n>\n\nI had to have a thought on the same and didn't see any problem and\ntest suits also fine but that doesn't mean the change is perfect, the\nissue might be hard to reproduce if there are any. Let's see what\nothers think and for now, to be safe I have reverted this change.\n\n> Also, I still think it might be better to include this patch in the\n> patch set where the exported function is needed. On its own, this is\n> a very small amount of refactoring that might not be totally\n> necessary.\n>\n\nWell, the other patch set is quite big and complex. 
In my experience,\nusually, people avoid downloading big sets due to lack of time and\nsuch small refactoring patches usually don't get much detailed\nattention.\n\nAlso, even though this patch is small, it is independent and has\nnothing to do with the other patch set, whether that gets committed\nor not.\nStill, the proposed improvement might not be a big one, but it is\nnice to have.\n\n> > I have one additional concern about the way we update the control\n> > file, at every place where doing the update, we need to set control\n> > file update time explicitly, why can't the time update line be moved\n> > to UpdateControlFile() so that time gets automatically updated?\n>\n> I see a few places where UpdateControlFile() is called without\n> updating ControlFile->time. I haven't found any obvious reason for\n> that, so perhaps it would be okay to move it to update_controlfile().\n>\n\nOk, thanks, did the same in the attached version.\n\nRegards,\nAmul Sul", "msg_date": "Tue, 21 Sep 2021 10:33:55 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On 9/20/21, 10:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> On Tue, Sep 21, 2021 at 4:44 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> On 9/19/21, 11:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n>> > I have one additional concern about the way we update the control\r\n>> > file, at every place where doing the update, we need to set control\r\n>> > file update time explicitly, why can't the time update line be moved\r\n>> > to UpdateControlFile() so that time gets automatically updated?\r\n>>\r\n>> I see a few places where UpdateControlFile() is called without\r\n>> updating ControlFile->time. 
I haven't found any obvious reason for\r\n>> that, so perhaps it would be okay to move it to update_controlfile().\r\n>>\r\n>\r\n> Ok, thanks, did the same in the attached version.\r\n\r\nvoid\r\nUpdateControlFile(void)\r\n{\r\n+\tControlFile->time = (pg_time_t) time(NULL);\r\n\tupdate_controlfile(DataDir, ControlFile, true);\r\n}\r\n\r\nShouldn't we update the time in update_controlfile()? Also, can you\r\nsplit this change into two patches (i.e., one for the timestamp change\r\nand another for the refactoring)? Otherwise, this looks reasonable to\r\nme.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 21 Sep 2021 16:13:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Tue, Sep 21, 2021 at 9:43 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 9/20/21, 10:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\n> > On Tue, Sep 21, 2021 at 4:44 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> On 9/19/21, 11:07 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\n> >> > I have one additional concern about the way we update the control\n> >> > file, at every place where doing the update, we need to set control\n> >> > file update time explicitly, why can't the time update line be moved\n> >> > to UpdateControlFile() so that time gets automatically updated?\n> >>\n> >> I see a few places where UpdateControlFile() is called without\n> >> updating ControlFile->time. 
I haven't found any obvious reason for\n> >> that, so perhaps it would be okay to move it to update_controlfile().\n> >>\n> >\n> > Ok, thanks, did the same in the attached version.\n>\n> void\n> UpdateControlFile(void)\n> {\n> + ControlFile->time = (pg_time_t) time(NULL);\n> update_controlfile(DataDir, ControlFile, true);\n> }\n>\n> Shouldn't we update the time in update_controlfile()?\n\nIf you see the callers of update_controlfile() except for\nRewriteControlFile() no one else updates the timestamp before calling\nit, I am not sure if that is intentional or not. That was the one\nreason that was added in UpdateControlFile(). And another reason is\nthat if you look at all the deleting lines followed by\nUpdateControlFile() & moving that to UpdateControlFile() wouldn't\nchange anything drastically.\n\nIMO, anything going to change should update the timestamp as well,\nthat could be a bug then.\n\n> Also, can you\n> split this change into two patches (i.e., one for the timestamp change\n> and another for the refactoring)? Otherwise, this looks reasonable to\n> me.\n\nDone, thanks for the review.\n\nRegards,\nAmul", "msg_date": "Thu, 23 Sep 2021 10:32:01 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On 9/22/21, 10:03 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> On Tue, Sep 21, 2021 at 9:43 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Shouldn't we update the time in update_controlfile()?\r\n>\r\n> If you see the callers of update_controlfile() except for\r\n> RewriteControlFile() no one else updates the timestamp before calling\r\n> it, I am not sure if that is intentional or not. That was the one\r\n> reason that was added in UpdateControlFile(). 
And another reason is\r\n> that if you look at all the deleting lines followed by\r\n> UpdateControlFile() & moving that to UpdateControlFile() wouldn't\r\n> change anything drastically.\r\n>\r\n> IMO, anything going to change should update the timestamp as well,\r\n> that could be a bug then.\r\n\r\nI'm inclined to agree that anything that calls update_controlfile()\r\nshould update the timestamp. However, I wonder if the additional\r\ncalls to time() would have a noticeable impact.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 1 Oct 2021 17:47:45 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Fri, Oct 01, 2021 at 05:47:45PM +0000, Bossart, Nathan wrote:\n> I'm inclined to agree that anything that calls update_controlfile()\n> should update the timestamp.\n\npg_control.h tells that:\npg_time_t time; /* time stamp of last pg_control update */\nSo, yes, that would be more consistent.\n\n> However, I wonder if the additional\n> calls to time() would have a noticeable impact.\n\nI would not take that lightly either. Now, I don't think that any of\nthe code paths where UpdateControlFile() or update_controlfile() is\ncalled are hot enough to worry about that.\n\n UpdateControlFile(void)\n {\n+ ControlFile->time = (pg_time_t) time(NULL);\n update_controlfile(DataDir, ControlFile, true);\n }\nI have to admit that it is a bit strange to do that in the backend but\nnot the frontend, so there is a good argument for doing that directly\nin update_controlfile(). pg_resetwal does an update of the time, but\npg_rewind does not.\n--\nMichael", "msg_date": "Sat, 2 Oct 2021 14:40:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." 
}, { "msg_contents": "On 10/1/21, 10:40 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Fri, Oct 01, 2021 at 05:47:45PM +0000, Bossart, Nathan wrote:\r\n>> I'm inclined to agree that anything that calls update_controlfile()\r\n>> should update the timestamp.\r\n>\r\n> pg_control.h tells that:\r\n> pg_time_t time; /* time stamp of last pg_control update */\r\n> So, yes, that would be more consistent.\r\n>\r\n>> However, I wonder if the additional\r\n>> calls to time() would have a noticeable impact.\r\n>\r\n> I would not take that lightly either. Now, I don't think that any of\r\n> the code paths where UpdateControlFile() or update_controlfile() is\r\n> called are hot enough to worry about that.\r\n>\r\n> UpdateControlFile(void)\r\n> {\r\n> + ControlFile->time = (pg_time_t) time(NULL);\r\n> update_controlfile(DataDir, ControlFile, true);\r\n> }\r\n> I have to admit that it is a bit strange to do that in the backend but\r\n> not the frontend, so there is a good argument for doing that directly\r\n> in update_controlfile(). pg_resetwal does an update of the time, but\r\n> pg_rewind does not.\r\n\r\nI don't see any recent updates to this thread from Amul, so I'm\r\nmarking this one as waiting-for-author.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 10 Nov 2021 20:00:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." 
}, { "msg_contents": "On Thu, Nov 11, 2021 at 1:30 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/1/21, 10:40 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > On Fri, Oct 01, 2021 at 05:47:45PM +0000, Bossart, Nathan wrote:\n> >> I'm inclined to agree that anything that calls update_controlfile()\n> >> should update the timestamp.\n> >\n> > pg_control.h tells that:\n> > pg_time_t time; /* time stamp of last pg_control update */\n> > So, yes, that would be more consistent.\n> >\n> >> However, I wonder if the additional\n> >> calls to time() would have a noticeable impact.\n> >\n> > I would not take that lightly either. Now, I don't think that any of\n> > the code paths where UpdateControlFile() or update_controlfile() is\n> > called are hot enough to worry about that.\n> >\n> > UpdateControlFile(void)\n> > {\n> > + ControlFile->time = (pg_time_t) time(NULL);\n> > update_controlfile(DataDir, ControlFile, true);\n> > }\n> > I have to admit that it is a bit strange to do that in the backend but\n> > not the frontend, so there is a good argument for doing that directly\n> > in update_controlfile(). pg_resetwal does an update of the time, but\n> > pg_rewind does not.\n>\n\nThanks for the inputs -- moved timestamp setting inside update_controlfile().\n\n> I don't see any recent updates to this thread from Amul, so I'm\n> marking this one as waiting-for-author.\n>\n\nSorry for the delay, please have a look at the attached version --\nchanging status to Needs review, thanks.\n\nRegards,\nAmul", "msg_date": "Thu, 25 Nov 2021 10:21:40 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." 
}, { "msg_contents": "On Thu, Nov 25, 2021 at 10:21:40AM +0530, Amul Sul wrote:\n> Thanks for the inputs -- moved timestamp setting inside update_controlfile().\n\nI have not check the performance implication of that with a micro\nbenchmark or the like, but I can get behind 0001 on consistency\ngrounds between the backend and the frontend. 0002 does not seem\nworth the trouble, though, as it is changing only two code paths.\n--\nMichael", "msg_date": "Thu, 25 Nov 2021 16:04:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Thu, Nov 25, 2021 at 04:04:23PM +0900, Michael Paquier wrote:\n> I have not check the performance implication of that with a micro\n> benchmark or the like, but I can get behind 0001 on consistency\n> grounds between the backend and the frontend.\n\n /* Now create pg_control */\n InitControlFile(sysidentifier);\n- ControlFile->time = checkPoint.time;\n ControlFile->checkPoint = checkPoint.redo;\n ControlFile->checkPointCopy = checkPoint;\n0001 has a mistake here, no? The initial control file creation goes\nthrough WriteControlFile(), and not update_controlfile(), so this\nchange means that we would miss setting up this timestamp for the\nfirst time.\n\n@@ -714,7 +714,6 @@ GuessControlValues(void)\n ControlFile.checkPointCopy.oldestActiveXid = InvalidTransactionId;\n\n ControlFile.state = DB_SHUTDOWNED;\n- ControlFile.time = (pg_time_t) time(NULL);\nThis one had better not be removed, either, as we require pg_resetwal\nto guess a set of control file values. Removing the one in\nRewriteControlFile() is fine, on the contrary.\n--\nMichael", "msg_date": "Fri, 26 Nov 2021 15:46:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." 
}, { "msg_contents": "On Fri, Nov 26, 2021 at 12:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 25, 2021 at 04:04:23PM +0900, Michael Paquier wrote:\n> > I have not check the performance implication of that with a micro\n> > benchmark or the like, but I can get behind 0001 on consistency\n> > grounds between the backend and the frontend.\n>\n> /* Now create pg_control */\n> InitControlFile(sysidentifier);\n> - ControlFile->time = checkPoint.time;\n> ControlFile->checkPoint = checkPoint.redo;\n> ControlFile->checkPointCopy = checkPoint;\n> 0001 has a mistake here, no? The initial control file creation goes\n> through WriteControlFile(), and not update_controlfile(), so this\n> change means that we would miss setting up this timestamp for the\n> first time.\n>\n> @@ -714,7 +714,6 @@ GuessControlValues(void)\n> ControlFile.checkPointCopy.oldestActiveXid = InvalidTransactionId;\n>\n> ControlFile.state = DB_SHUTDOWNED;\n> - ControlFile.time = (pg_time_t) time(NULL);\n> This one had better not be removed, either, as we require pg_resetwal\n> to guess a set of control file values. Removing the one in\n> RewriteControlFile() is fine, on the contrary.\n\nMy bad, sorry for the sloppy change, corrected it in the attached version.\n\nRegards,\nAmul", "msg_date": "Fri, 26 Nov 2021 14:48:13 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Fri, Nov 26, 2021 at 2:48 PM Amul Sul <sulamul@gmail.com> wrote:\n> > ControlFile.state = DB_SHUTDOWNED;\n> > - ControlFile.time = (pg_time_t) time(NULL);\n> > This one had better not be removed, either, as we require pg_resetwal\n> > to guess a set of control file values. Removing the one in\n> > RewriteControlFile() is fine, on the contrary.\n>\n> My bad, sorry for the sloppy change, corrected it in the attached version.\n\nThanks for the patch. 
By moving the time update to update_controlfile,\nthe patch ensures that we have the correct last updated time. Earlier\nwe were missing (in some places) to update the time before calling\nUpdateControlFile.\n\nIsn't it better if we update the ControlFile->time at the end of the\nupdate_controlfile, after file write/sync?\n\nWhy do we even need UpdateControlFile which just calls another\nfunction? It may be there for usability and readability, but can't the\npg backend code just call update_controlfile(DataDir, ControlFile,\ntrue); directly so that a function call cost can be avoided?\nOtherwise, why can't we make UpdateControlFile an inline function? I'm\nnot sure if any of the compilers will ever optimize by inlining it\nwithout the \"inline\" keyword.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 28 Nov 2021 07:53:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Sun, Nov 28, 2021 at 07:53:13AM +0530, Bharath Rupireddy wrote:\n> Isn't it better if we update the ControlFile->time at the end of the\n> update_controlfile, after file write/sync?\n\nI don't quite understand your point here. We want to update the\ncontrol file's timestamp when it is written, before calculating its\nCRC.\n\n> Why do we even need UpdateControlFile which just calls another\n> function? It may be there for usability and readability, but can't the\n> pg backend code just call update_controlfile(DataDir, ControlFile,\n> true); directly so that a function call cost can be avoided?\n> Otherwise, why can't we make UpdateControlFile an inline function? I'm\n> not sure if any of the compilers will ever optimize by inlining it\n> without the \"inline\" keyword.\n\nI would leave it as-is as UpdateControlFile() is a public API old\nenough to vote (a70e74b0). 
Anyway, that's a useful wrapper for the\nbackend.\n--\nMichael", "msg_date": "Sun, 28 Nov 2021 13:32:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Sun, Nov 28, 2021 at 10:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n> We want to update the\n> control file's timestamp when it is written, before calculating its\n> CRC.\n\nOkay.\n\n> > Why do we even need UpdateControlFile which just calls another\n> > function? It may be there for usability and readability, but can't the\n> > pg backend code just call update_controlfile(DataDir, ControlFile,\n> > true); directly so that a function call cost can be avoided?\n> > Otherwise, why can't we make UpdateControlFile an inline function? I'm\n> > not sure if any of the compilers will ever optimize by inlining it\n> > without the \"inline\" keyword.\n>\n> I would leave it as-is as UpdateControlFile() is a public API old\n> enough to vote (a70e74b0). Anyway, that's a useful wrapper for the\n> backend.\n\nIn that case, why can't we inline UpdateControlFile to avoid the\nfunction call cost? Do you see any issues with it?\n\nBTW, the v6 patch proposed by Amul at [1], looks good to me.\n\n[1] - https://www.postgresql.org/message-id/CAAJ_b94_s-VQs3Vtn_X-ReYr1DzaEakwPi80D1UYSmV3-f%2B_pw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:28:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Mon, Nov 29, 2021 at 09:28:23AM +0530, Bharath Rupireddy wrote:\n> In that case, why can't we inline UpdateControlFile to avoid the\n> function call cost? 
Do you see any issues with it?\n\nThis routine is IMO not something worth bothering about.\n\n> BTW, the v6 patch proposed by Amul at [1], looks good to me.\n\nYes, I have no problems with this part, so done.\n--\nMichael", "msg_date": "Mon, 29 Nov 2021 13:42:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." }, { "msg_contents": "On Mon, Nov 29, 2021 at 10:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 29, 2021 at 09:28:23AM +0530, Bharath Rupireddy wrote:\n> > In that case, why can't we inline UpdateControlFile to avoid the\n> > function call cost? Do you see any issues with it?\n>\n> This routine is IMO not something worth bothering about.\n>\n> > BTW, the v6 patch proposed by Amul at [1], looks good to me.\n>\n> Yes, I have no problems with this part, so done.\n\nThank you, Michael.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 29 Nov 2021 10:18:59 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate code updating ControleFile's DBState." } ]
[ { "msg_contents": "Fellow Postgres Admins and Developers,\n\nWith the arrival of ARM compute nodes on AWS and an existing fleet of\nPostgres clusters running on x86_64 nodes the question arises how to\nmigrate existing Postgres clusters to ARM64 nodes, ideally with zero\ndowntime, as one is used to.\n\nInitial experiments show no observable problems when copying PGDATA or in\nfact using physical streaming replication between the two CPU\narchitectures. In our case Postgres is using Docker based on Ubuntu 18.04\nbase images and PGDG packages for Postgres 13. On top of that, we checked\nexisting indexes with the amcheck\n<https://www.postgresql.org/docs/current/amcheck.html> extension, which did\nnot reveal any issues.\n\nHowever experiments are not valid to exclude all corner cases, thus we are\ncurious to hear other input on that matter, as we believe this is of\nrelevance to a bigger audience and ARM is not unlikely to be available on\nother non AWS platforms going forward.\n\nIt is our understanding that AWS RDS in fact for Postgres 12 and Postgres\n13 allows the change from x86 nodes to ARM nodes on the fly, which gives us\nsome indication that if done right, both platforms are indeed compatible.\n\nLooking forward to your input and discussion points!\n\n\n-- \nJan Mußler\nEngineering Manager - Team Acid & Team Aruha | Zalando SE", "msg_date": "Tue, 14 Sep 2021 11:50:59 +0200", "msg_from": "=?UTF-8?Q?Jan_Mu=C3=9Fler?= <jan.mussler@zalando.de>", "msg_from_op": true, "msg_subject": "Physical replication from x86_64 to ARM64" },
{ "msg_contents": "Hi Jan,\n\n> Initial experiments show no observable problems when copying PGDATA or in\nfact using physical streaming replication between the two CPU architectures.\n\nThat's an interesting result. The topic of physical replication\ncompatibility interested me much back in 2017 and I raised this question on\nPGCon [1]. As I recall the compatibility is not guaranteed, nor tested, and\nnot going to be, because the community doesn't have resources for this. The\nconsensus was that to migrate without downtime the user has to use logical\nreplication. Thus what you observe should be considered a hack, and if\nsomething will go wrong, you are on your own.\n\nOf course, there is a possibility that something has changed in the past\nfour years. 
I'm sure somebody on the mailing list will correct me in this\ncase.\n\n[1] https://wiki.postgresql.org/wiki/PgCon_2017_Developer_Meeting\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 14 Sep 2021 14:49:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Physical replication from x86_64 to ARM64" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> Initial experiments show no observable problems when copying PGDATA or in\n>> fact using physical streaming replication between the two CPU architectures.\n\n> That's an interesting result. The topic of physical replication\n> compatibility interested me much back in 2017 and I raised this question on\n> PGCon [1]. As I recall the compatibility is not guaranteed, nor tested, and\n> not going to be, because the community doesn't have resources for this.\n\nYeah. As far as the hardware goes, if you have the same endianness,\nstruct alignment rules, and floating-point format [1], then physical\nreplication ought to work. 
Where things get far stickier is if the\noperating systems aren't identical, because then you have very great\nrisk of text sorting rules not being the same, leading to index\ncorruption [2]. In modern practice that tends to be a bigger issue\nthan the hardware, and we don't have any good way to check for it.\n\n\t\t\tregards, tom lane\n\n[1] all of which are checked by pg_control fields, btw\n[2] https://wiki.postgresql.org/wiki/Locale_data_changes\n\n\n", "msg_date": "Tue, 14 Sep 2021 10:11:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Physical replication from x86_64 to ARM64" }, { "msg_contents": "Hi,\n\n\nOn September 14, 2021 7:11:25 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Aleksander Alekseev <aleksander@timescale.com> writes:\n>>> Initial experiments show no observable problems when copying PGDATA or in\n>>> fact using physical streaming replication between the two CPU architectures.\n>\n>> That's an interesting result. The topic of physical replication\n>> compatibility interested me much back in 2017 and I raised this question on\n>> PGCon [1]. As I recall the compatibility is not guaranteed, nor tested, and\n>> not going to be, because the community doesn't have resources for this.\n>\n>Yeah. As far as the hardware goes, if you have the same endianness,\n>struct alignment rules, and floating-point format [1], then physical\n>replication ought to work. Where things get far stickier is if the\n>operating systems aren't identical, because then you have very great\n>risk of text sorting rules not being the same, leading to index\n>corruption [2]. In modern practice that tends to be a bigger issue\n>than the hardware, and we don't have any good way to check for it.\n\nI'd also be worried about subtle changes in floating point math results, and that subsequently leading to index mismatches. 
Be that because the hardware gives differing results, or because of libc differences.\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 14 Sep 2021 08:07:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Physical replication from x86_64 to ARM64" }, { "msg_contents": "> On Tue, Sep 14, 2021 at 08:07:19AM -0700, Andres Freund wrote:\n>\n> >Yeah. As far as the hardware goes, if you have the same endianness,\n> >struct alignment rules, and floating-point format [1], then physical\n> >replication ought to work. Where things get far stickier is if the\n> >operating systems aren't identical, because then you have very great\n> >risk of text sorting rules not being the same, leading to index\n> >corruption [2]. In modern practice that tends to be a bigger issue\n> >than the hardware, and we don't have any good way to check for it.\n>\n> I'd also be worried about subtle changes in floating point math results, and that subsequently leading to index mismatches. Be that because the hardware gives differing results, or because of libc differences.\n\nThe question about the hardware side I find interesting, as at least in\nthe Armv8 case there are claims to be fully IEEE 754 compliant [1]. From\nwhat I see some parts, which are not specified in this standard, are\nalso implemented similarly on Arm and x86 ([2], [3]). On top of that\nmany compilers implement at least a partial level of IEEE 754 compliance\n(e.g. for gcc [4]) by default. The only strange difference I found is\nthe x87 FPU unit (without SSE2, see [5]), but I'm not sure what the\nconsequences of the extra precision could be here. 
All in all it sounds like, at least\nfrom the hardware perspective, the chances of having subtle\ndifferences in floating point math on Arm are small -- am I missing anything?\n\n[1]: https://developer.arm.com/architectures/instruction-sets/floating-point\n[2]: https://en.wikipedia.org/wiki/Single-precision_floating-point_format#Single-precision_examples\n[3]: https://en.wikipedia.org/wiki/Double-precision_floating-point_format\n[4]: https://gcc.gnu.org/wiki/FloatingPointMath\n[5]: https://gcc.gnu.org/wiki/x87note\n\n\n", "msg_date": "Wed, 15 Sep 2021 11:40:32 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Physical replication from x86_64 to ARM64" } ]