[
{
"msg_contents": "The tablesync worker in logical replication performs the table data\nsync in a single transaction which means it will copy the initial data\nand then catch up with apply worker in the same transaction. There is\na comment in LogicalRepSyncTableStart (\"We want to do the table data\nsync in a single transaction.\") saying so but I can't find the\nconcrete theory behind the same. Is there any fundamental problem if\nwe commit the transaction after initial copy and slot creation in\nLogicalRepSyncTableStart and then allow the apply of transactions as\nit happens in apply worker? I have tried doing so in the attached (a\nquick prototype to test) and didn't find any problems with regression\ntests. I have tried a few manual tests as well to see if it works and\ndidn't find any problem. Now, it is quite possible that it is\nmandatory to do the way we are doing currently, or maybe something\nelse is required to remove this requirement but I think we can do\nbetter with respect to comments in this area.\n\nThe reason why I am looking into this area is to support the logical\ndecoding of prepared transactions. See the problem [1] reported by\nPeter Smith. Basically, when we stream prepared transactions in the\ntablesync worker, it will simply commit the same due to the\nrequirement of maintaining a single transaction for the entire\nduration of copy and streaming of transactions. Now, we can fix that\nproblem by disabling the decoding of prepared xacts in tablesync\nworker. But that will arise to a different kind of problems like the\nprepare will not be sent by the publisher but a later commit might\nmove lsn to a later step which will allow it to catch up till the\napply worker. 
So, now the prepared transaction will be skipped by both\ntablesync and apply worker.\n\nI think apart from unblocking the development of 'logical decoding of\nprepared xacts', it will make the code consistent between apply and\ntablesync worker and reduce the chances of future bugs in this area.\nBasically, it will reduce the checks related to am_tablesync_worker()\nat various places in the code.\n\nI see that this code is added as part of commit\n7c4f52409a8c7d85ed169bbbc1f6092274d03920 (Logical replication support\nfor initial data copy).\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAHut+PuEMk4SO8oGzxc_ftzPkGA8uC-y5qi-KRqHSy_P0i30DA@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 3 Dec 2020 14:57:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> The tablesync worker in logical replication performs the table data\n> sync in a single transaction which means it will copy the initial data\n> and then catch up with apply worker in the same transaction. There is\n> a comment in LogicalRepSyncTableStart (\"We want to do the table data\n> sync in a single transaction.\") saying so but I can't find the\n> concrete theory behind the same. Is there any fundamental problem if\n> we commit the transaction after initial copy and slot creation in\n> LogicalRepSyncTableStart and then allow the apply of transactions as\n> it happens in apply worker? I have tried doing so in the attached (a\n> quick prototype to test) and didn't find any problems with regression\n> tests. I have tried a few manual tests as well to see if it works and\n> didn't find any problem. Now, it is quite possible that it is\n> mandatory to do the way we are doing currently, or maybe something\n> else is required to remove this requirement but I think we can do\n> better with respect to comments in this area.\n\nIf we commit the initial copy, the data upto the initial copy's\nsnapshot will be visible downstream. If we apply the changes by\ncommitting changes per transaction, the data visible to the other\ntransactions will differ as the apply progresses. You haven't\nclarified whether we will respect the transaction boundaries in the\napply log or not. I assume we will. Whereas if we apply all the\nchanges in one go, other transactions either see the data before\nresync or after it without any intermediate states. That will not\nviolate consistency, I think.\n\nThat's all I can think of as the reason behind doing a whole resync as\na single transaction.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 3 Dec 2020 19:04:31 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 7:04 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 3, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The tablesync worker in logical replication performs the table data\n> > sync in a single transaction which means it will copy the initial data\n> > and then catch up with apply worker in the same transaction. There is\n> > a comment in LogicalRepSyncTableStart (\"We want to do the table data\n> > sync in a single transaction.\") saying so but I can't find the\n> > concrete theory behind the same. Is there any fundamental problem if\n> > we commit the transaction after initial copy and slot creation in\n> > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > it happens in apply worker? I have tried doing so in the attached (a\n> > quick prototype to test) and didn't find any problems with regression\n> > tests. I have tried a few manual tests as well to see if it works and\n> > didn't find any problem. Now, it is quite possible that it is\n> > mandatory to do the way we are doing currently, or maybe something\n> > else is required to remove this requirement but I think we can do\n> > better with respect to comments in this area.\n>\n> If we commit the initial copy, the data upto the initial copy's\n> snapshot will be visible downstream. If we apply the changes by\n> committing changes per transaction, the data visible to the other\n> transactions will differ as the apply progresses.\n>\n\nIt is not clear what you mean by the above. The way you have written\nappears that you are saying that instead of copying the initial data,\nI am saying to copy it transaction-by-transaction. But that is not the\ncase. 
I am saying copy the initial data by using REPEATABLE READ\nisolation level as we are doing now, commit it and then process\ntransaction-by-transaction till we reach sync-point (point till where\napply worker has already received the data).\n\n> You haven't\n> clarified whether we will respect the transaction boundaries in the\n> apply log or not. I assume we will.\n>\n\nIt will be transaction-by-transaction.\n\n> Whereas if we apply all the\n> changes in one go, other transactions either see the data before\n> resync or after it without any intermediate states.\n>\n\nWhat is the problem even if the user is able to see the data after the\ninitial copy?\n\n> That will not\n> violate consistency, I think.\n>\n\nI am not sure how consistency will be broken.\n\n> That's all I can think of as the reason behind doing a whole resync as\n> a single transaction.\n>\n\nThanks for sharing your thoughts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Dec 2020 19:26:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
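The flow Amit describes in the message above — copy under REPEATABLE READ, commit, then replay until the sync point — could be sketched roughly as follows. This is an illustration of the idea only, not the actual C code in tablesync.c; the slot name and table are made up, and the slot-creation step is really a replication-protocol command issued over the walsender connection, shown here as a comment:

```sql
-- Sketch of the proposed tablesync flow (hypothetical; the real worker
-- drives this from C over a replication connection):
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Create the slot and use its exported snapshot, so COPY sees exactly
-- the data up to the slot's consistent point:
--   CREATE_REPLICATION_SLOT "pg_sync_slot" LOGICAL pgoutput USE_SNAPSHOT
COPY public.t1 FROM STDIN;  -- initial data copy under that snapshot
COMMIT;                     -- proposal: commit here, instead of holding the
                            -- transaction open through the whole catch-up
-- ...then decode and apply changes transaction-by-transaction until the
-- worker reaches the LSN the apply worker has already processed.
```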
{
"msg_contents": "On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Is there any fundamental problem if\n> we commit the transaction after initial copy and slot creation in\n> LogicalRepSyncTableStart and then allow the apply of transactions as\n> it happens in apply worker?\n\nNo fundamental problem. Both approaches are fine. Committing the\ninitial copy then doing the rest in individual txns means an\nincomplete sync state for the table becomes visible, which may not be\nideal. Ideally we'd do something like sync the data into a clone of\nthe table then swap the table relfilenodes out once we're synced up.\n\nIMO the main advantage of committing as we go is that it would let us\nuse a non-temporary slot and support recovering an incomplete sync and\nfinishing it after interruption by connection loss, crash, etc. That\nwould be advantageous for big table syncs or where the sync has lots\nof lag to replay. But it means we have to remember sync states, and\ngive users a way to cancel/abort them. Otherwise forgotten temp slots\nfor syncs will cause a mess on the upstream.\n\nIt also allows the sync slot to advance, freeing any held upstream\nresources before the whole sync is done, which is good if the upstream\nis busy and generating lots of WAL.\n\nFinally, committing as we go means we won't exceed the cid increment\nlimit in a single txn.\n\n> The reason why I am looking into this area is to support the logical\n> decoding of prepared transactions. See the problem [1] reported by\n> Peter Smith. Basically, when we stream prepared transactions in the\n> tablesync worker, it will simply commit the same due to the\n> requirement of maintaining a single transaction for the entire\n> duration of copy and streaming of transactions. 
Now, we can fix that\n> problem by disabling the decoding of prepared xacts in tablesync\n> worker.\n\nTablesync should indeed only receive a txn when the commit arrives, it\nshould not attempt to handle uncommitted prepared xacts.\n\n> But that will arise to a different kind of problems like the\n> prepare will not be sent by the publisher but a later commit might\n> move lsn to a later step which will allow it to catch up till the\n> apply worker. So, now the prepared transaction will be skipped by both\n> tablesync and apply worker.\n\nI'm not sure I understand. If what you describe is possible then\nthere's already a bug in prepared xact handling. Prepared xact commit\nprogress should be tracked by commit lsn, not by prepare lsn.\n\nCan you set out the ordering of events in more detail?\n\n> I think apart from unblocking the development of 'logical decoding of\n> prepared xacts', it will make the code consistent between apply and\n> tablesync worker and reduce the chances of future bugs in this area.\n> Basically, it will reduce the checks related to am_tablesync_worker()\n> at various places in the code.\n\nI think we made similar changes in pglogical to switch to applying\nsync work in individual txns.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:22:44 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 7:53 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> > The reason why I am looking into this area is to support the logical\n> > decoding of prepared transactions. See the problem [1] reported by\n> > Peter Smith. Basically, when we stream prepared transactions in the\n> > tablesync worker, it will simply commit the same due to the\n> > requirement of maintaining a single transaction for the entire\n> > duration of copy and streaming of transactions. Now, we can fix that\n> > problem by disabling the decoding of prepared xacts in tablesync\n> > worker.\n>\n> Tablesync should indeed only receive a txn when the commit arrives, it\n> should not attempt to handle uncommitted prepared xacts.\n>\n\nWhy? If we go with the approach of the commit as we go for individual\ntransactions in the tablesync worker then this shouldn't be a problem.\n\n> > But that will arise to a different kind of problems like the\n> > prepare will not be sent by the publisher but a later commit might\n> > move lsn to a later step which will allow it to catch up till the\n> > apply worker. So, now the prepared transaction will be skipped by both\n> > tablesync and apply worker.\n>\n> I'm not sure I understand. If what you describe is possible then\n> there's already a bug in prepared xact handling. Prepared xact commit\n> progress should be tracked by commit lsn, not by prepare lsn.\n>\n\nOh no, I am talking about commit of some other transaction.\n\n> Can you set out the ordering of events in more detail?\n>\n\nSure. 
It will be something like where apply worker is ahead of sync worker:\n\nAssume t1 has some data which tablesync worker has to first copy.\n\ntx1\nBegin;\nInsert into t1....\nPrepare Transaction 'foo'\n\ntx2\nBegin;\nInsert into t1....\nCommit\n\napply worker\n• tx1: replays - does not apply anything because\nshould_apply_changes_for_rel thinks relation is not ready\n• tx2: replays - does not apply anything because\nshould_apply_changes_for_rel thinks relation is not ready\n\ntablesync worker\n• tx1: handles: BEGIN - INSERT - PREPARE 'foo'; (but tablesync gets\nnothing because say we disable 2-PC for it)\n• tx2: handles: BEGIN - INSERT - COMMIT;\n• tablesync exits\n\nNow the situation is that the apply worker has skipped the prepared\nxact data and tablesync worker has not received it, so not applied it.\nNext, when we get Commit Prepared for tx1, it will silently commit the\nprepared transaction without any data being updated. The commit\nprepared won't error out in subscriber because the prepare would have\nbeen successful even though the data is skipped via\nshould_apply_changes_for_rel.\n\n> > I think apart from unblocking the development of 'logical decoding of\n> > prepared xacts', it will make the code consistent between apply and\n> > tablesync worker and reduce the chances of future bugs in this area.\n> > Basically, it will reduce the checks related to am_tablesync_worker()\n> > at various places in the code.\n>\n> I think we made similar changes in pglogical to switch to applying\n> sync work in individual txns.\n>\n\noh, cool. Did you make some additional changes as you have mentioned\nin the earlier part of the email?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 08:22:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
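The ordering of events laid out in the message above can be written as plain SQL on the publisher side. This is only a restatement of the scenario from the mail (t1 and the values are placeholders; 'foo' follows the example's transaction identifier):

```sql
-- tx1: prepared while the tablesync worker for t1 is still catching up
BEGIN;
INSERT INTO t1 VALUES (1);
PREPARE TRANSACTION 'foo';  -- decoded as a prepare; nothing reaches the
                            -- tablesync worker if 2PC decoding is disabled
                            -- for it

-- tx2: an ordinary transaction committing afterwards
BEGIN;
INSERT INTO t1 VALUES (2);
COMMIT;                     -- advances the sync position past tx1's prepare,
                            -- letting tablesync catch up and exit

COMMIT PREPARED 'foo';      -- later: both workers now consider t1 ready, so
                            -- this commits downstream without tx1's data
```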
{
"msg_contents": "On Fri, Dec 4, 2020 at 7:53 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Is there any fundamental problem if\n> > we commit the transaction after initial copy and slot creation in\n> > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > it happens in apply worker?\n>\n> No fundamental problem. Both approaches are fine. Committing the\n> initial copy then doing the rest in individual txns means an\n> incomplete sync state for the table becomes visible, which may not be\n> ideal. Ideally we'd do something like sync the data into a clone of\n> the table then swap the table relfilenodes out once we're synced up.\n>\n> IMO the main advantage of committing as we go is that it would let us\n> use a non-temporary slot and support recovering an incomplete sync and\n> finishing it after interruption by connection loss, crash, etc. That\n> would be advantageous for big table syncs or where the sync has lots\n> of lag to replay. But it means we have to remember sync states, and\n> give users a way to cancel/abort them. Otherwise forgotten temp slots\n> for syncs will cause a mess on the upstream.\n>\n> It also allows the sync slot to advance, freeing any held upstream\n> resources before the whole sync is done, which is good if the upstream\n> is busy and generating lots of WAL.\n>\n> Finally, committing as we go means we won't exceed the cid increment\n> limit in a single txn.\n>\n\n\nYeah, all these are advantages of processing\ntransaction-by-transaction. IIUC, we need to primarily do two things\nto achieve it, one is to have an additional state in the catalog (say\ncatch up) which will say that the initial copy is done. Then we need\nto have a permanent slot using which we can track the progress of the\nslot so that after restart (due to crash, connection break, etc.) 
we\ncan start from the appropriate position.\n\nApart from the above, I think with the current design of tablesync we\ncan see partial data of transactions because we allow all the\ntablesync workers to run parallelly. Consider the below scenario:\n\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\nCREATE TABLE mytbl2(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nTx1\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl2(somedata, text) VALUES (1, 1);\nCOMMIT;\n\nCREATE PUBLICATION mypublication FOR TABLE mytbl;\n\nCREATE SUBSCRIPTION mysub\n CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypublication;\n\nTx2\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 2);\nINSERT INTO mytbl2(somedata, text) VALUES (1, 2);\nCommit;\n\nTx3\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 3);\nINSERT INTO mytbl2(somedata, text) VALUES (1, 3);\nCommit;\n\nNow, I could see the below results on subscriber:\n\npostgres=# select * from mytbl1;\n id | somedata | text\n----+----------+------\n(0 rows)\n\n\npostgres=# select * from mytbl2;\n id | somedata | text\n----+----------+------\n 1 | 1 | 1\n 2 | 1 | 2\n 3 | 1 | 3\n(3 rows)\n\nBasically, the results for Tx1, Tx2, Tx3 are visible for mytbl2 but\nnot for mytbl1. To reproduce this I have stopped the tablesync workers\n(via debugger) for mytbl1 and mytbl2 in LogicalRepSyncTableStart\nbefore it changes the relstate to SUBREL_STATE_SYNCWAIT. Then allowed\nTx2 and Tx3 to be processed by apply worker and then allowed tablesync\nworker for mytbl2 to proceed. After that, I can see the above state.\n\nNow, won't this behavior be considered as transaction inconsistency\nwhere partial transaction data or later transaction data is visible? I\ndon't think we can have such a situation on the master (publisher)\nnode or in physical standby.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:29:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
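While reproducing a scenario like the one above, the per-table sync progress can be observed from the subscriber with a catalog query (standard catalog; note the in-memory SYNCWAIT/CATCHUP states never reach the catalog, only the on-disk ones are visible):

```sql
-- Per-table sync state for all subscriptions on the subscriber:
SELECT srrelid::regclass AS relation, srsubstate, srsublsn
FROM pg_subscription_rel;
-- srsubstate: 'i' = initialize, 'd' = data is being copied,
--             's' = synchronized, 'r' = ready (normal replication)
```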
{
"msg_contents": "On Fri, Dec 4, 2020 at 10:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 4, 2020 at 7:53 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n> > On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Is there any fundamental problem if\n> > > we commit the transaction after initial copy and slot creation in\n> > > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > > it happens in apply worker?\n> >\n> > No fundamental problem. Both approaches are fine. Committing the\n> > initial copy then doing the rest in individual txns means an\n> > incomplete sync state for the table becomes visible, which may not be\n> > ideal. Ideally we'd do something like sync the data into a clone of\n> > the table then swap the table relfilenodes out once we're synced up.\n> >\n> > IMO the main advantage of committing as we go is that it would let us\n> > use a non-temporary slot and support recovering an incomplete sync and\n> > finishing it after interruption by connection loss, crash, etc. That\n> > would be advantageous for big table syncs or where the sync has lots\n> > of lag to replay. But it means we have to remember sync states, and\n> > give users a way to cancel/abort them. Otherwise forgotten temp slots\n> > for syncs will cause a mess on the upstream.\n> >\n> > It also allows the sync slot to advance, freeing any held upstream\n> > resources before the whole sync is done, which is good if the upstream\n> > is busy and generating lots of WAL.\n> >\n> > Finally, committing as we go means we won't exceed the cid increment\n> > limit in a single txn.\n> >\n>\n> Yeah, all these are advantages of processing\n> transaction-by-transaction. IIUC, we need to primarily do two things\n> to achieve it, one is to have an additional state in the catalog (say\n> catch up) which will say that the initial copy is done. 
Then we need\n> to have a permanent slot using which we can track the progress of the\n> slot so that after restart (due to crash, connection break, etc.) we\n> can start from the appropriate position.\n>\n> Apart from the above, I think with the current design of tablesync we\n> can see partial data of transactions because we allow all the\n> tablesync workers to run parallelly. Consider the below scenario:\n>\n> CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n> CREATE TABLE mytbl2(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n>\n> Tx1\n> BEGIN;\n> INSERT INTO mytbl1(somedata, text) VALUES (1, 1);\n> INSERT INTO mytbl2(somedata, text) VALUES (1, 1);\n> COMMIT;\n>\n> CREATE PUBLICATION mypublication FOR TABLE mytbl;\n>\n\noops, the above statement should be CREATE PUBLICATION mypublication\nFOR TABLE mytbl1, mytbl2;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:35:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 10:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 4, 2020 at 7:53 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n> > On Thu, 3 Dec 2020 at 17:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Is there any fundamental problem if\n> > > we commit the transaction after initial copy and slot creation in\n> > > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > > it happens in apply worker?\n> >\n> > No fundamental problem. Both approaches are fine. Committing the\n> > initial copy then doing the rest in individual txns means an\n> > incomplete sync state for the table becomes visible, which may not be\n> > ideal. Ideally we'd do something like sync the data into a clone of\n> > the table then swap the table relfilenodes out once we're synced up.\n> >\n> > IMO the main advantage of committing as we go is that it would let us\n> > use a non-temporary slot and support recovering an incomplete sync and\n> > finishing it after interruption by connection loss, crash, etc. That\n> > would be advantageous for big table syncs or where the sync has lots\n> > of lag to replay. But it means we have to remember sync states, and\n> > give users a way to cancel/abort them. Otherwise forgotten temp slots\n> > for syncs will cause a mess on the upstream.\n> >\n> > It also allows the sync slot to advance, freeing any held upstream\n> > resources before the whole sync is done, which is good if the upstream\n> > is busy and generating lots of WAL.\n> >\n> > Finally, committing as we go means we won't exceed the cid increment\n> > limit in a single txn.\n> >\n>\n>\n> Yeah, all these are advantages of processing\n> transaction-by-transaction. IIUC, we need to primarily do two things\n> to achieve it, one is to have an additional state in the catalog (say\n> catch up) which will say that the initial copy is done. 
Then we need\n> to have a permanent slot using which we can track the progress of the\n> slot so that after restart (due to crash, connection break, etc.) we\n> can start from the appropriate position.\n>\n> Apart from the above, I think with the current design of tablesync we\n> can see partial data of transactions because we allow all the\n> tablesync workers to run parallelly. Consider the below scenario:\n>\n..\n..\n>\n> Basically, the results for Tx1, Tx2, Tx3 are visible for mytbl2 but\n> not for mytbl1. To reproduce this I have stopped the tablesync workers\n> (via debugger) for mytbl1 and mytbl2 in LogicalRepSyncTableStart\n> before it changes the relstate to SUBREL_STATE_SYNCWAIT. Then allowed\n> Tx2 and Tx3 to be processed by apply worker and then allowed tablesync\n> worker for mytbl2 to proceed. After that, I can see the above state.\n>\n> Now, won't this behavior be considered as transaction inconsistency\n> where partial transaction data or later transaction data is visible? I\n> don't think we can have such a situation on the master (publisher)\n> node or in physical standby.\n>\n\nOn briefly checking the pglogical code [1], it seems this problem\nwon't be there in pglogical. Because it seems to first copy all the\ntables (via pglogical_sync_table) in one process and then catch with\nthe apply worker in a transaction-by-transaction manner. Am, I reading\nit correctly? If so then why we followed a different approach for\nin-core solution or is it that the pglogical has improved over time\nbut all the improvements can't be implemented in-core because of some\nmissing features?\n\n[1] - https://github.com/2ndQuadrant/pglogical\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:32:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 3, 2020 at 7:04 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Dec 3, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > The tablesync worker in logical replication performs the table data\n> > > sync in a single transaction which means it will copy the initial data\n> > > and then catch up with apply worker in the same transaction. There is\n> > > a comment in LogicalRepSyncTableStart (\"We want to do the table data\n> > > sync in a single transaction.\") saying so but I can't find the\n> > > concrete theory behind the same. Is there any fundamental problem if\n> > > we commit the transaction after initial copy and slot creation in\n> > > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > > it happens in apply worker? I have tried doing so in the attached (a\n> > > quick prototype to test) and didn't find any problems with regression\n> > > tests. I have tried a few manual tests as well to see if it works and\n> > > didn't find any problem. Now, it is quite possible that it is\n> > > mandatory to do the way we are doing currently, or maybe something\n> > > else is required to remove this requirement but I think we can do\n> > > better with respect to comments in this area.\n> >\n> > If we commit the initial copy, the data upto the initial copy's\n> > snapshot will be visible downstream. If we apply the changes by\n> > committing changes per transaction, the data visible to the other\n> > transactions will differ as the apply progresses.\n> >\n>\n> It is not clear what you mean by the above. The way you have written\n> appears that you are saying that instead of copying the initial data,\n> I am saying to copy it transaction-by-transaction. But that is not the\n> case. 
I am saying copy the initial data by using REPEATABLE READ\n> isolation level as we are doing now, commit it and then process\n> transaction-by-transaction till we reach sync-point (point till where\n> apply worker has already received the data).\n\nCraig in his mail has clarified this. The changes after the initial\nCOPY will be visible before the table sync catches up.\n\n>\n> > You haven't\n> > clarified whether we will respect the transaction boundaries in the\n> > apply log or not. I assume we will.\n> >\n>\n> It will be transaction-by-transaction.\n>\n> > Whereas if we apply all the\n> > changes in one go, other transactions either see the data before\n> > resync or after it without any intermediate states.\n> >\n>\n> What is the problem even if the user is able to see the data after the\n> initial copy?\n>\n> > That will not\n> > violate consistency, I think.\n> >\n>\n> I am not sure how consistency will be broken.\n\nSome of the transactions applied by apply workers may not have been\napplied by the resync and vice versa. If the intermediate states of\ntable resync worker are visible, this difference in applied\ntransaction will result in loss of consistency if those transactions\nare changing the table being resynced and some other table in the same\ntransaction. The changes won't be atomically visible. Thinking more\nabout this, this problem exists today for a table being resynced, but\nat least it's only the table being resynced that is behind the other\ntables so it's predictable.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 4 Dec 2020 19:12:30 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 7:12 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 3, 2020 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 3, 2020 at 7:04 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 3, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > The tablesync worker in logical replication performs the table data\n> > > > sync in a single transaction which means it will copy the initial data\n> > > > and then catch up with apply worker in the same transaction. There is\n> > > > a comment in LogicalRepSyncTableStart (\"We want to do the table data\n> > > > sync in a single transaction.\") saying so but I can't find the\n> > > > concrete theory behind the same. Is there any fundamental problem if\n> > > > we commit the transaction after initial copy and slot creation in\n> > > > LogicalRepSyncTableStart and then allow the apply of transactions as\n> > > > it happens in apply worker? I have tried doing so in the attached (a\n> > > > quick prototype to test) and didn't find any problems with regression\n> > > > tests. I have tried a few manual tests as well to see if it works and\n> > > > didn't find any problem. Now, it is quite possible that it is\n> > > > mandatory to do the way we are doing currently, or maybe something\n> > > > else is required to remove this requirement but I think we can do\n> > > > better with respect to comments in this area.\n> > >\n> > > If we commit the initial copy, the data upto the initial copy's\n> > > snapshot will be visible downstream. If we apply the changes by\n> > > committing changes per transaction, the data visible to the other\n> > > transactions will differ as the apply progresses.\n> > >\n> >\n> > It is not clear what you mean by the above. 
The way you have written\n> > appears that you are saying that instead of copying the initial data,\n> > I am saying to copy it transaction-by-transaction. But that is not the\n> > case. I am saying copy the initial data by using REPEATABLE READ\n> > isolation level as we are doing now, commit it and then process\n> > transaction-by-transaction till we reach sync-point (point till where\n> > apply worker has already received the data).\n>\n> Craig in his mail has clarified this. The changes after the initial\n> COPY will be visible before the table sync catches up.\n>\n\nI think the problem is not that the changes are visible after COPY\nrather it is that we don't have a mechanism to restart if it crashes\nafter COPY unless we do all the sync up in one transaction. Assume we\ncommit after COPY and then process transaction-by-transaction and it\nerrors out (due to connection loss) or crashes, in-between one of the\nfollowing transactions after COPY then after the restart we won't know\nfrom where to start for that relation. This is because the catalog\n(pg_subscription_rel) will show the state as 'd' (data is being\ncopied) and the slot would have gone as it was a temporary slot. But\nas mentioned in one of my emails above [1] we can solve these problems\nwhich Craig also seems to be advocating for as there are many\nadvantages of not doing the entire sync (initial copy + stream changes\nfor that relation) in one single transaction. It will allow us to\nsupport decode of prepared xacts in the subscriber. Also, it seems\npglogical already does processing transaction-by-transaction after the\ninitial copy. The only thing which is not clear to me is why we\nhaven't decided to go ahead initially and it would be probably better\nif the original authors would also chime-in to at least clarify the\nsame.\n\n> >\n> > > You haven't\n> > > clarified whether we will respect the transaction boundaries in the\n> > > apply log or not. 
I assume we will.\n> > >\n> >\n> > It will be transaction-by-transaction.\n> >\n> > > Whereas if we apply all the\n> > > changes in one go, other transactions either see the data before\n> > > resync or after it without any intermediate states.\n> > >\n> >\n> > What is the problem even if the user is able to see the data after the\n> > initial copy?\n> >\n> > > That will not\n> > > violate consistency, I think.\n> > >\n> >\n> > I am not sure how consistency will be broken.\n>\n> Some of the transactions applied by apply workers may not have been\n> applied by the resync and vice versa. If the intermediate states of\n> table resync worker are visible, this difference in applied\n> transaction will result in loss of consistency if those transactions\n> are changing the table being resynced and some other table in the same\n> transaction. The changes won't be atomically visible. Thinking more\n> about this, this problem exists today for a table being resynced, but\n> at least it's only the table being resynced that is behind the other\n> tables so it's predictable.\n>\n\nYeah, I have already shown that this problem [1] exists today and it\nwon't be predictable when the number of tables to be synced are more.\nI am not sure why but it seems acceptable to original authors that the\ndata of transactions are visibly partially during the initial\nsynchronization phase for a subscription. I don't see it documented\nclearly either.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Ld9XaLoTZCoKF_gET7kc1fDf8CPR3CM48MQb1N1jDLYg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 5 Dec 2020 07:34:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, 5 Dec 2020, 10:03 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Dec 4, 2020 at 7:12 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Dec 3, 2020 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > On Thu, Dec 3, 2020 at 7:04 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 3, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > > >\n> > > > > The tablesync worker in logical replication performs the table data\n> > > > > sync in a single transaction which means it will copy the initial\n> data\n> > > > > and then catch up with apply worker in the same transaction. There\n> is\n> > > > > a comment in LogicalRepSyncTableStart (\"We want to do the table\n> data\n> > > > > sync in a single transaction.\") saying so but I can't find the\n> > > > > concrete theory behind the same. Is there any fundamental problem\n> if\n> > > > > we commit the transaction after initial copy and slot creation in\n> > > > > LogicalRepSyncTableStart and then allow the apply of transactions\n> as\n> > > > > it happens in apply worker? I have tried doing so in the attached\n> (a\n> > > > > quick prototype to test) and didn't find any problems with\n> regression\n> > > > > tests. I have tried a few manual tests as well to see if it works\n> and\n> > > > > didn't find any problem. Now, it is quite possible that it is\n> > > > > mandatory to do the way we are doing currently, or maybe something\n> > > > > else is required to remove this requirement but I think we can do\n> > > > > better with respect to comments in this area.\n> > > >\n> > > > If we commit the initial copy, the data upto the initial copy's\n> > > > snapshot will be visible downstream. 
If we apply the changes by\n> > > > committing changes per transaction, the data visible to the other\n> > > > transactions will differ as the apply progresses.\n> > > >\n> > >\n> > > It is not clear what you mean by the above. The way you have written\n> > > appears that you are saying that instead of copying the initial data,\n> > > I am saying to copy it transaction-by-transaction. But that is not the\n> > > case. I am saying copy the initial data by using REPEATABLE READ\n> > > isolation level as we are doing now, commit it and then process\n> > > transaction-by-transaction till we reach sync-point (point till where\n> > > apply worker has already received the data).\n> >\n> > Craig in his mail has clarified this. The changes after the initial\n> > COPY will be visible before the table sync catches up.\n> >\n>\n> I think the problem is not that the changes are visible after COPY\n> rather it is that we don't have a mechanism to restart if it crashes\n> after COPY unless we do all the sync up in one transaction. Assume we\n> commit after COPY and then process transaction-by-transaction and it\n> errors out (due to connection loss) or crashes, in-between one of the\n> following transactions after COPY then after the restart we won't know\n> from where to start for that relation. This is because the catalog\n> (pg_subscription_rel) will show the state as 'd' (data is being\n> copied) and the slot would have gone as it was a temporary slot. But\n> as mentioned in one of my emails above [1] we can solve these problems\n> which Craig also seems to be advocating for as there are many\n> advantages of not doing the entire sync (initial copy + stream changes\n> for that relation) in one single transaction. It will allow us to\n> support decode of prepared xacts in the subscriber. Also, it seems\n> pglogical already does processing transaction-by-transaction after the\n> initial copy. 
The only thing which is not clear to me is why we\n> haven't decided to go ahead initially and it would be probably better\n> if the original authors would also chime-in to at least clarify the\n> same.\n>\n\nIt's partly a resource management issue.\n\nReplication origins are a limited resource. We need to use a replication\norigin for any sync we want to be durable across restarts.\n\nThen again so are slots and we use temp slots for each sync.\n\nIf a sync fails cleanup on the upstream side is simple with a temp slot.\nWith persistent slots we have more risk of creating upstream issues. But\nthen, so long as the subscriber exists it can deal with that. And if the\nsubscriber no longer exists its primary slot is an issue too.\n\nIt'd help if we could register pg_shdepend entries between catalog entries\nand slots, and from a main subscription slot to any extra slots used for\nresynchronization.\n\nAnd I should write a patch for a resource retention summarisation view.\n\n\n> I am not sure why but it seems acceptable to original authors that the\n> data of transactions are visibly partially during the initial\n> synchronization phase for a subscription.\n\n\nI don't think there's much alternative there.\n\nPg would need some kind of cross commit visibility control mechanism that\nseparates durable commit from visibility",
"msg_date": "Mon, 7 Dec 2020 08:50:31 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi,\n\nI wanted to float another idea to solve these tablesync/apply worker problems.\n\nThis idea may or may not have merit. Please consider it.\n\n~\n\nBasically, I was wondering why can't the \"tablesync\" worker just\ngather messages in a similar way to how the current streaming feature\ngathers messages into a \"changes\" file, so that they can be replayed\nlater.\n\ne.g. Imagine if\n\nA) The \"tablesync\" worker (after the COPY) does not ever apply any of\nthe incoming messages, but instead it just gobbles them into a\n\"changes\" file until it decides it has reached SYNCDONE state and\nexits.\n\nB) Then, when the \"apply\" worker proceeds, if it detects the existence\nof the \"changes\" file it will replay/apply_dispatch all those gobbled\nmessages before just continuing as normal.\n\nSo\n- IIUC this kind of replay is like how the current code stream commit\napplies the streamed \"changes\" file.\n- \"tablesync\" worker would only be doing table sync (COPY) as its name\nsuggests. Any detected \"changes\" are recorded and left for the \"apply\"\nworker to handle.\n- \"tablesync\" worker would just operate in single tx with a temporary\nslot as per current code\n- Then the \"apply\" worker would be the *only* worker that actually\napplies anything. (as its name suggests)\n\nThoughts?\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 7 Dec 2020 14:44:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 6:20 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Sat, 5 Dec 2020, 10:03 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Dec 4, 2020 at 7:12 PM Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> I think the problem is not that the changes are visible after COPY\n>> rather it is that we don't have a mechanism to restart if it crashes\n>> after COPY unless we do all the sync up in one transaction. Assume we\n>> commit after COPY and then process transaction-by-transaction and it\n>> errors out (due to connection loss) or crashes, in-between one of the\n>> following transactions after COPY then after the restart we won't know\n>> from where to start for that relation. This is because the catalog\n>> (pg_subscription_rel) will show the state as 'd' (data is being\n>> copied) and the slot would have gone as it was a temporary slot. But\n>> as mentioned in one of my emails above [1] we can solve these problems\n>> which Craig also seems to be advocating for as there are many\n>> advantages of not doing the entire sync (initial copy + stream changes\n>> for that relation) in one single transaction. It will allow us to\n>> support decode of prepared xacts in the subscriber. Also, it seems\n>> pglogical already does processing transaction-by-transaction after the\n>> initial copy. The only thing which is not clear to me is why we\n>> haven't decided to go ahead initially and it would be probably better\n>> if the original authors would also chime-in to at least clarify the\n>> same.\n>\n>\n> It's partly a resource management issue.\n>\n> Replication origins are a limited resource. We need to use a replication origin for any sync we want to be durable across restarts.\n>\n> Then again so are slots and we use temp slots for each sync.\n>\n> If a sync fails cleanup on the upstream side is simple with a temp slot. With persistent slots we have more risk of creating upstream issues. 
But then, so long as the subscriber exists it can deal with that. And if the subscriber no longer exists its primary slot is an issue too.\n>\n\nI think if the only issue is slot clean up, then the same exists today\nfor the slot created by the apply worker (or which I think you are\nreferring to as a primary slot). This can only happen if the\nsubscriber goes away without dropping the subscription. Also, if we\nare worried about using up too many slots then the slots used by\ntablesync workers will probably be freed sooner.\n\n> It'd help if we could register pg_shdepend entries between catalog entries and slots, and from a main subscription slot to any extra slots used for resynchronization.\n>\n\nWhich catalog entries you are referring to here?\n\n> And I should write a patch for a resource retention summarisation view.\n>\n\nThat would be great.\n\n>>\n>> I am not sure why but it seems acceptable to original authors that the\n>> data of transactions are visibly partially during the initial\n>> synchronization phase for a subscription.\n>\n>\n> I don't think there's much alternative there.\n>\n\nI am not sure about this. I think it is primarily to allow some more\nparallelism among apply and sync workers. One primitive way to achieve\nparallelism and don't have this problem is to allow apply worker to\nwait till all the tablesync workers are in DONE state. Then we will\nnever have an inconsistency problem or the prepared xact problem. Now,\nsurely if large copies are required for multiple relations then we\nwould delay a bit to replay transactions partially by the apply worker\nbut don't know how much that matters as compared to transaction\nvisibility issue and anyway we would have achieved the maximum\nparallelism by allowing copy via multiple workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 09:21:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, 7 Dec 2020 at 11:44, Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> Basically, I was wondering why can't the \"tablesync\" worker just\n> gather messages in a similar way to how the current streaming feature\n> gathers messages into a \"changes\" file, so that they can be replayed\n> later.\n>\n>\nSee the related thread \"Logical archiving\"\n\nhttps://www.postgresql.org/message-id/20D9328B-A189-43D1-80E2-EB25B9284AD6@yandex-team.ru\n\nwhere I addressed some parts of this topic in detail earlier today.\n\nA) The \"tablesync\" worker (after the COPY) does not ever apply any of\n> the incoming messages, but instead it just gobbles them into a\n> \"changes\" file until it decides it has reached SYNCDONE state and\n> exits.\n>\n\nThis has a few issues.\n\nMost importantly, the sync worker must cooperate with the main apply worker\nto achieve a consistent end-of-sync cutover. The sync worker must have\nreplayed the pending changes in order to make this cut-over, because the\nnon-sync apply worker will need to start applying changes on top of the\nresync'd table potentially as soon as the next transaction it starts\napplying, so it needs to see the rows there.\n\nDoing this would also add another round of write multiplication since the\ndata would get spooled then applied to WAL then heap. Write multiplication\nis already an issue for logical replication so adding to it isn't\nparticularly desirable without a really compelling reason. With the write\nmultiplication comes disk space management issues for big transactions as\nwell as the obvious performance/throughput impact.\n\nIt adds even more latency between upstream commit and downstream apply,\nsomething that is again already an issue for logical replication.\n\nRight now we don't have any concept of a durable and locally flushed spool.\n\nIt's not impossible to do as you suggest but the cutover requirement makes\nit far from simple. 
As discussed in the logical archiving thread I think\nit'd be good to have something like this, and there are times the write\nmultiplication price would be well worth paying. But it's not easy.\n\nB) Then, when the \"apply\" worker proceeds, if it detects the existence\n> of the \"changes\" file it will replay/apply_dispatch all those gobbled\n> messages before just continuing as normal.\n>\n\nThat's going to introduce a really big stall in the apply worker's progress\nin many cases. During that time it won't be receiving from upstream (since\nwe don't spool logical changes to disk at this time) so the upstream lag\nwill grow. That will impact synchronous replication, pg_wal size\nmanagement, catalog bloat, etc. It'll also leave the upstream logical\ndecoding session idle, so when it resumes it may create a spike of I/O and\nCPU load as it catches up, as well as a spike of network traffic. And\ndepending on how close the upstream write rate is to the max decode speed,\nnetwork throughput max, and downstream apply speed max, it may take some\ntime to catch up over the resulting lag.\n\nNot a big fan of that approach.",
"msg_date": "Mon, 7 Dec 2020 12:31:53 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 10:02 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Mon, 7 Dec 2020 at 11:44, Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>>\n>> Basically, I was wondering why can't the \"tablesync\" worker just\n>> gather messages in a similar way to how the current streaming feature\n>> gathers messages into a \"changes\" file, so that they can be replayed\n>> later.\n>>\n>\n> See the related thread \"Logical archiving\"\n>\n> https://www.postgresql.org/message-id/20D9328B-A189-43D1-80E2-EB25B9284AD6@yandex-team.ru\n>\n> where I addressed some parts of this topic in detail earlier today.\n>\n>> A) The \"tablesync\" worker (after the COPY) does not ever apply any of\n>> the incoming messages, but instead it just gobbles them into a\n>> \"changes\" file until it decides it has reached SYNCDONE state and\n>> exits.\n>\n>\n> This has a few issues.\n>\n> Most importantly, the sync worker must cooperate with the main apply worker to achieve a consistent end-of-sync cutover.\n>\n\nIn this idea, there is no need to change the end-of-sync cutover. It\nwill work as it is now. I am not sure what makes you think so.\n\n> The sync worker must have replayed the pending changes in order to make this cut-over, because the non-sync apply worker will need to start applying changes on top of the resync'd table potentially as soon as the next transaction it starts applying, so it needs to see the rows there.\n>\n\nThe change here would be that the apply worker will check for changes\nfile and if it exists then apply them before it changes the relstate\nto SUBREL_STATE_READY in process_syncing_tables_for_apply(). So, it\nwill not miss seeing any rows.\n\n> Doing this would also add another round of write multiplication since the data would get spooled then applied to WAL then heap. 
Write multiplication is already an issue for logical replication so adding to it isn't particularly desirable without a really compelling reason.\n>\n\nIt will solve our problem of allowing decoding of prepared xacts in\npgoutput. I have explained the problem above [1]. The other idea which\nwe discussed is to allow having an additional state in\npg_subscription_rel, make the slot as permanent in tablesync worker,\nand then process transaction-by-transaction in apply worker. Does that\napproach sounds better? Is there any bigger change involved in this\napproach (making tablesync slot permanent) which I am missing?\n\n> With the write multiplication comes disk space management issues for big transactions as well as the obvious performance/throughput impact.\n>\n> It adds even more latency between upstream commit and downstream apply, something that is again already an issue for logical replication.\n>\n> Right now we don't have any concept of a durable and locally flushed spool.\n>\n\nI think we have a concept quite close to it for writing changes for\nin-progress xacts as done in PG-14. It is not durable but that\nshouldn't be a big problem if we allow syncing the changes file.\n\n> It's not impossible to do as you suggest but the cutover requirement makes it far from simple. As discussed in the logical archiving thread I think it'd be good to have something like this, and there are times the write multiplication price would be well worth paying. But it's not easy.\n>\n>> B) Then, when the \"apply\" worker proceeds, if it detects the existence\n>> of the \"changes\" file it will replay/apply_dispatch all those gobbled\n>> messages before just continuing as normal.\n>\n>\n> That's going to introduce a really big stall in the apply worker's progress in many cases. During that time it won't be receiving from upstream (since we don't spool logical changes to disk at this time) so the upstream lag will grow. 
That will impact synchronous replication, pg_wal size management, catalog bloat, etc. It'll also leave the upstream logical decoding session idle, so when it resumes it may create a spike of I/O and CPU load as it catches up, as well as a spike of network traffic. And depending on how close the upstream write rate is to the max decode speed, network throughput max, and downstream apply speed max, it may take some time to catch up over the resulting lag.\n>\n\nThis is just for the initial tablesync phase. I think it is equivalent\nto saying that during basebackup, we need to parallelly start physical\nreplication. I agree that sometimes it can take a lot of time to copy\nlarge tables but it will be just one time and no worse than the other\nsituations like basebackup.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KFsjf6x-S7b0dJLvEL3tcn9x-voBJiFoGsccyH5xgDzQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 13:27:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 9:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 7, 2020 at 6:20 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n>\n> >>\n> >> I am not sure why but it seems acceptable to original authors that the\n> >> data of transactions are visibly partially during the initial\n> >> synchronization phase for a subscription.\n> >\n> >\n> > I don't think there's much alternative there.\n> >\n>\n> I am not sure about this. I think it is primarily to allow some more\n> parallelism among apply and sync workers. One primitive way to achieve\n> parallelism and don't have this problem is to allow apply worker to\n> wait till all the tablesync workers are in DONE state.\n>\n\nAs the slot of apply worker is created before all the tablesync\nworkers it should never miss any LSN which tablesync workers would\nhave processed. Also, the table sync workers should not process any\nxact if the apply worker has not processed anything. I think tablesync\ncurrently always processes one transaction (because we call\nprocess_sync_tables at commit of a txn) even if that is not required\nto be in sync with the apply worker. This should solve both the\nproblems (a) visibility of partial transactions (b) allow prepared\ntransactions because tablesync worker no longer needs to combine\nmultiple transactions data.\n\nI think the other advantages of this would be that it would reduce the\nload (both CPU and I/O) on the publisher-side by allowing to decode\nthe data only once instead of for each table sync worker once and\nseparately for the apply worker. I think it will use fewer resources\nto finish the work.\n\nIs there any flaw in this idea which I am missing?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 14:21:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 7, 2020 at 9:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 7, 2020 at 6:20 AM Craig Ringer\n> > <craig.ringer@enterprisedb.com> wrote:\n> > >\n> >\n> > >>\n> > >> I am not sure why but it seems acceptable to original authors that the\n> > >> data of transactions are visibly partially during the initial\n> > >> synchronization phase for a subscription.\n> > >\n> > >\n> > > I don't think there's much alternative there.\n> > >\n> >\n> > I am not sure about this. I think it is primarily to allow some more\n> > parallelism among apply and sync workers. One primitive way to achieve\n> > parallelism and don't have this problem is to allow apply worker to\n> > wait till all the tablesync workers are in DONE state.\n> >\n>\n> As the slot of apply worker is created before all the tablesync\n> workers it should never miss any LSN which tablesync workers would\n> have processed. Also, the table sync workers should not process any\n> xact if the apply worker has not processed anything. I think tablesync\n> currently always processes one transaction (because we call\n> process_sync_tables at commit of a txn) even if that is not required\n> to be in sync with the apply worker.\n>\n\nOne more thing to consider here is that currently in tablesync worker,\nwe create a slot with CRS_USE_SNAPSHOT option which creates a\ntransaction snapshot on the publisher, and then we use the same\nsnapshot for a copy from the publisher. After this, when we try to\nreceive the data from the publisher using the same slot, it will be in\nsync with the COPY. I think to keep the same consistency between COPY\nand the data we receive from the publisher in this approach, we need\nto export the snapshot while creating a slot in the apply worker by\nusing CRS_EXPORT_SNAPSHOT and then use the same snapshot by all the\ntablesync workers doing the copy. 
In tablesync workers, we can use the\nSET TRANSACTION SNAPSHOT command after \"BEGIN READ ONLY ISOLATION\nLEVEL REPEATABLE READ\" to achieve it. That way the COPY will use the\nsame snapshot as is used for receiving the changes in apply worker and\nthe data will be in sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Dec 2020 10:57:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> As the slot of apply worker is created before all the tablesync\n> workers it should never miss any LSN which tablesync workers would\n> have processed. Also, the table sync workers should not process any\n> xact if the apply worker has not processed anything. I think tablesync\n> currently always processes one transaction (because we call\n> process_sync_tables at commit of a txn) even if that is not required\n> to be in sync with the apply worker. This should solve both the\n> problems (a) visibility of partial transactions (b) allow prepared\n> transactions because tablesync worker no longer needs to combine\n> multiple transactions data.\n>\n> I think the other advantages of this would be that it would reduce the\n> load (both CPU and I/O) on the publisher-side by allowing to decode\n> the data only once instead of for each table sync worker once and\n> separately for the apply worker. I think it will use fewer resources\n> to finish the work.\n\nYes, I observed this same behavior.\n\nIIUC the only way for the tablesync worker to go from CATCHUP mode to\nSYNCDONE is via the call to process_sync_tables.\n\nBut a side-effect of this is, when messages arrive during this CATCHUP\nphase one tx will be getting handled by the tablesync worker before\nthe process_sync_tables() is ever encountered.\n\nI have created and attached a simple patch which allows the tablesync\nto detect if there is anything to do *before* it enters the apply main\nloop. Calling process_sync_tables() before the apply main loop offers\na quick way out so the message handling will not be split\nunnecessarily between the workers.\n\n~\n\nThe result of the patch is demonstrated by the following test/logs\nwhich are also attached.\nNote: I added more logging (not in this patch) to make it easier to\nsee what is going on.\n\nLOGS1. 
Current code.\nTest: 10 x INSERTS done at CATCHUP time.\nResult: tablesync worker does 1 x INSERT, then apply worker skips 1\nand does remaining 9 x INSERTs.\n\nLOGS2. Patched code.\nTest: Same 10 x INSERTS done at CATCHUP time.\nResult: tablesync can exit early. apply worker handles all 10 x INSERTs.\n\nLOGS3. Patched code.\nTest: 2PC PREPARE then COMMIT PREPARED [1] done at CATCHUP time\npsql -d test_pub -c \"BEGIN;INSERT INTO test_tab VALUES(1,\n'foo');PREPARE TRANSACTION 'test_prepared_tab';\"\npsql -d test_pub -c \"COMMIT PREPARED 'test_prepared_tab';\"\nResult: The PREPARE and COMMIT PREPARED are both handled by the apply\nworker. This avoids complications which the split otherwise causes.\n[1] 2PC prepare test requires v29 patch from\nhttps://www.postgresql.org/message-id/flat/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3%2BQ%40mail.gmail.com\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 8 Dec 2020 17:22:49 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 11:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Yes, I observed this same behavior.\n>\n> IIUC the only way for the tablesync worker to go from CATCHUP mode to\n> SYNCDONE is via the call to process_sync_tables.\n>\n> But a side-effect of this is, when messages arrive during this CATCHUP\n> phase one tx will be getting handled by the tablesync worker before\n> the process_sync_tables() is ever encountered.\n>\n> I have created and attached a simple patch which allows the tablesync\n> to detect if there is anything to do *before* it enters the apply main\n> loop. Calling process_sync_tables() before the apply main loop offers\n> a quick way out so the message handling will not be split\n> unnecessarily between the workers.\n>\n\nYeah, this demonstrates the idea can work but as mentioned in my\nprevious email [1] this needs much more work to make the COPY and then\nlater fetching the changes from the publisher consistently. So, let me\nsummarize the discussion so far. We wanted to enhance the tablesync\nphase of Logical Replication to enable decoding of prepared\ntransactions [2]. The problem was when we stream prepared transactions\nin the tablesync worker, it will simply commit the same due to the\nrequirement of maintaining a single transaction for the entire\nduration of copy and streaming of transactions afterward. We can't\nsimply disable the decoding of prepared xacts for tablesync workers\nbecause it can skip some of the prepared xacts forever on subscriber\nas explained in one of the emails above [3]. Now, while investigating\nthe solutions to enhance tablesync to support decoding at prepare\ntime, I found that due to the current design of tablesync we can see\npartial data of transactions on subscribers which is also explained in\nthe email above with an example [4]. 
This problem of visibility has\nbeen there since Logical Replication was introduced in PostgreSQL, and\nthe only answer I got till now is that there doesn't seem to be any\nother alternative, which I think is not true, and I have provided one\nalternative as well.\n\nNext, we have discussed three different solutions, all of which will\nsolve the first problem (allow the tablesync worker to decode\ntransactions at prepare time) and one of which solves both the first\nand second problem (partial transaction data visibility).\n\nSolution-1: Allow the table-sync worker to use multiple transactions.\nThe reason for doing it in a single transaction is that if after the\ninitial COPY we commit and then crash while streaming changes of other\ntransactions, the state of the table won't be known after the restart,\nas we are using a temporary slot so we don't know from where to restart\nsyncing the table.\n\nIIUC, we need to primarily do two things to achieve multiple\ntransactions: one is to have an additional state in the catalog (say\ncatch up) which will say that the initial copy is done. Then we need\nto have a permanent slot using which we can track the progress of the\nslot so that after a restart (due to crash, connection break, etc.) we\ncan start from the appropriate position. Now, this will allow us to do\nless work after recovering from a crash because we will know the\nrestart point. As Craig mentioned, it also allows the sync slot to\nadvance, freeing any held upstream resources before the whole sync is\ndone, which is good if the upstream is busy and generating lots of\nWAL. 
Now, here as we don't need to replay the individual\ntransactions in tablesync worker in a single transaction, it will\nallow us to send decode prepared to the subscriber. This has some\ndisadvantages such as each transaction processed by tablesync worker\nneeds to be durably written to file and it can also lead to some apply\nlag later when we process the same by apply worker.\n\nSolution-3: Allow the table-sync workers to just perform initial COPY\nand then once the COPY is done for all relations the apply worker will\nstream all the future changes. Now, surely if large copies are\nrequired for multiple relations then we would delay a bit to replay\ntransactions partially by the apply worker but don't know how much\nthat matters as compared to transaction visibility issue and anyway we\nwould have achieved the maximum parallelism by allowing copy via\nmultiple workers. This would reduce the load (both CPU and I/O) on the\npublisher-side by allowing to decode the data only once instead of for\neach table sync worker once and separately for the apply worker. I\nthink it will use fewer resources to finish the work.\n\nCurrently, in tablesync worker, we create a slot with CRS_USE_SNAPSHOT\noption which creates a transaction snapshot on the publisher, and then\nwe use the same snapshot for COPY from the publisher. After this, when\nwe try to receive the data from the publisher using the same slot, it\nwill be in sync with the COPY. I think to keep the same consistency\nbetween COPY and the data we receive from the publisher in this\napproach, we need to export the snapshot while creating a slot in the\napply worker by using CRS_EXPORT_SNAPSHOT and then use the same\nsnapshot by all the tablesync workers doing the copy. In tablesync\nworkers, we can use the SET TRANSACTION SNAPSHOT command after \"BEGIN\nREAD ONLY ISOLATION LEVEL REPEATABLE READ\" to use the exported\nsnapshot. 
That way the COPY will use the same snapshot as is used for\nreceiving the changes in apply worker and the data will be in sync.\n\nThen we also need a way to export snapshot while the apply worker is\nalready receiving the changes because users can use 'ALTER\nSUBSCRIPTION name REFRESH PUBLICATION' which allows new tables to be\nsynced. I think we need to introduce a new command in\nexec_replication_command() to export the snapshot from the existing\nslot and then use it by the new tablesync worker.\n\n\nAmong the above three solutions, the first two will solve the first\nproblem (allow the tablesync worker to decode transactions at prepare\ntime) and the third solution will solve both the first and second\nproblem (partial transaction data visibility). The third solution\nrequires quite some redesign of how the Logical Replication work is\nsynchronized between apply and tablesync workers and might turn out to\nbe a bigger implementation effort. I am tentatively thinking to go\nwith a first or second solution at this stage and anyway if later\npeople feel that we need some bigger redesign then we can go with\nsomething on the lines of Solution-3.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BQC74wRQmbYT%2BMmOs%3DYbdUjuq0_A9CBbVoQMB1Ryi-OA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAHut+PuEMk4SO8oGzxc_ftzPkGA8uC-y5qi-KRqHSy_P0i30DA@mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAA4eK1KFsjf6x-S7b0dJLvEL3tcn9x-voBJiFoGsccyH5xgDzQ%40mail.gmail.com\n[4] - https://www.postgresql.org/message-id/CAA4eK1Ld9XaLoTZCoKF_gET7kc1fDf8CPR3CM48MQb1N1jDLYg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Dec 2020 15:46:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 8, 2020 at 11:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Yes, I observed this same behavior.\n> >\n> > IIUC the only way for the tablesync worker to go from CATCHUP mode to\n> > SYNCDONE is via the call to process_sync_tables.\n> >\n> > But a side-effect of this is, when messages arrive during this CATCHUP\n> > phase one tx will be getting handled by the tablesync worker before\n> > the process_sync_tables() is ever encountered.\n> >\n> > I have created and attached a simple patch which allows the tablesync\n> > to detect if there is anything to do *before* it enters the apply main\n> > loop. Calling process_sync_tables() before the apply main loop offers\n> > a quick way out so the message handling will not be split\n> > unnecessarily between the workers.\n> >\n>\n> Yeah, this demonstrates the idea can work but as mentioned in my\n> previous email [1] this needs much more work to make the COPY and then\n> later fetching the changes from the publisher consistently. So, let me\n> summarize the discussion so far. We wanted to enhance the tablesync\n> phase of Logical Replication to enable decoding of prepared\n> transactions [2]. The problem was when we stream prepared transactions\n> in the tablesync worker, it will simply commit the same due to the\n> requirement of maintaining a single transaction for the entire\n> duration of copy and streaming of transactions afterward. We can't\n> simply disable the decoding of prepared xacts for tablesync workers\n> because it can skip some of the prepared xacts forever on subscriber\n> as explained in one of the emails above [3]. 
Now, while investigating\n> the solutions to enhance tablesync to support decoding at prepare\n> time, I found that due to the current design of tablesync we can see\n> partial data of transactions on subscribers which is also explained in\n> the email above with an example [4]. This problem of visibility is\n> there since the Logical Replication is introduced in PostgreSQL and\n> the only answer I got till now is that there doesn't seem to be any\n> other alternative which I think is not true and I have provided one\n> alternative as well.\n>\n> Next, we have discussed three different solutions all of which will\n> solve the first problem (allow the tablesync worker to decode\n> transactions at prepare time) and one of which solves both the first\n> and second problem (partial transaction data visibility).\n>\n> Solution-1: Allow the table-sync worker to use multiple transactions.\n> The reason for doing it in a single transaction is that if after\n> initial COPY we commit and then crash while streaming changes of other\n> transactions, the state of the table won't be known after the restart\n> as we are using temporary slot so we don't from where to restart\n> syncing the table.\n>\n> IIUC, we need to primarily do two things to achieve multiple\n> transactions, one is to have an additional state in the catalog (say\n> catch up) which will say that the initial copy is done. Then we need\n> to have a permanent slot using which we can track the progress of the\n> slot so that after restart (due to crash, connection break, etc.) we\n> can start from the appropriate position. Now, this will allow us to do\n> less work after recovering from a crash because we will know the\n> restart point. As Craig mentioned, it also allows the sync slot to\n> advance, freeing any held upstream resources before the whole sync is\n> done, which is good if the upstream is busy and generating lots of\n> WAL. 
Finally, committing as we go means we won't exceed the cid\n> increment limit in a single txn.\n>\n> Solution-2: The next solution we discussed is to make \"tablesync\"\n> worker just gather messages after COPY in a similar way to how the\n> current streaming of in-progress transaction feature gathers messages\n> into a \"changes\" file so that they can be replayed later by the apply\n> worker. Now, here as we don't need to replay the individual\n> transactions in tablesync worker in a single transaction, it will\n> allow us to send decode prepared to the subscriber. This has some\n> disadvantages such as each transaction processed by tablesync worker\n> needs to be durably written to file and it can also lead to some apply\n> lag later when we process the same by apply worker.\n>\n> Solution-3: Allow the table-sync workers to just perform initial COPY\n> and then once the COPY is done for all relations the apply worker will\n> stream all the future changes. Now, surely if large copies are\n> required for multiple relations then we would delay a bit to replay\n> transactions partially by the apply worker but don't know how much\n> that matters as compared to transaction visibility issue and anyway we\n> would have achieved the maximum parallelism by allowing copy via\n> multiple workers. This would reduce the load (both CPU and I/O) on the\n> publisher-side by allowing to decode the data only once instead of for\n> each table sync worker once and separately for the apply worker. I\n> think it will use fewer resources to finish the work.\n>\n> Currently, in tablesync worker, we create a slot with CRS_USE_SNAPSHOT\n> option which creates a transaction snapshot on the publisher, and then\n> we use the same snapshot for COPY from the publisher. After this, when\n> we try to receive the data from the publisher using the same slot, it\n> will be in sync with the COPY. 
I think to keep the same consistency\n> between COPY and the data we receive from the publisher in this\n> approach, we need to export the snapshot while creating a slot in the\n> apply worker by using CRS_EXPORT_SNAPSHOT and then use the same\n> snapshot by all the tablesync workers doing the copy. In tablesync\n> workers, we can use the SET TRANSACTION SNAPSHOT command after \"BEGIN\n> READ ONLY ISOLATION LEVEL REPEATABLE READ\" to use the exported\n> snapshot. That way the COPY will use the same snapshot as is used for\n> receiving the changes in apply worker and the data will be in sync.\n>\n> Then we also need a way to export snapshot while the apply worker is\n> already receiving the changes because users can use 'ALTER\n> SUBSCRIPTION name REFRESH PUBLICATION' which allows new tables to be\n> synced. I think we need to introduce a new command in\n> exec_replication_command() to export the snapshot from the existing\n> slot and then use it by the new tablesync worker.\n>\n>\n> Among the above three solutions, the first two will solve the first\n> problem (allow the tablesync worker to decode transactions at prepare\n> time) and the third solution will solve both the first and second\n> problem (partial transaction data visibility). The third solution\n> requires quite some redesign of how the Logical Replication work is\n> synchronized between apply and tablesync workers and might turn out to\n> be a bigger implementation effort. 
I am tentatively thinking to go\n> with a first or second solution at this stage and anyway if later\n> people feel that we need some bigger redesign then we can go with\n> something on the lines of Solution-3.\n>\n> Thoughts?\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1%2BQC74wRQmbYT%2BMmOs%3DYbdUjuq0_A9CBbVoQMB1Ryi-OA%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/CAHut+PuEMk4SO8oGzxc_ftzPkGA8uC-y5qi-KRqHSy_P0i30DA@mail.gmail.com\n> [3] - https://www.postgresql.org/message-id/CAA4eK1KFsjf6x-S7b0dJLvEL3tcn9x-voBJiFoGsccyH5xgDzQ%40mail.gmail.com\n> [4] - https://www.postgresql.org/message-id/CAA4eK1Ld9XaLoTZCoKF_gET7kc1fDf8CPR3CM48MQb1N1jDLYg%40mail.gmail.com\n>\n> --\n\nHi Amit,\n\n- Solution-3 has become too complicated to be attempted by me. Anyway,\nwe may be better off just focusing on eliminating the new problems\nexposed by the 2PC work [1], rather than burning too much effort to fix\nsome other quirk which apparently has existed for years.\n[1] https://www.postgresql.org/message-id/CAHut%2BPtm7E5Jj92tJWPtnnjbNjJN60_%3DaGGKYW3h23b7J%3DqeDg%40mail.gmail.com\n\n- Solution-2 has some potential lag problems, and maybe file resource\nproblems as well. This idea did not get a very favourable response\nwhen I first proposed it.\n\n- This leaves Solution-1 as the best viable option to fix the current\nknown 2PC trouble.\n\n~~\n\nSo I will try to write a patch for the proposed Solution-1.\n\n---\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 10 Dec 2020 20:49:13 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 3:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Dec 8, 2020 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 8, 2020 at 11:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Yes, I observed this same behavior.\n> > >\n> > > IIUC the only way for the tablesync worker to go from CATCHUP mode to\n> > > SYNCDONE is via the call to process_sync_tables.\n> > >\n> > > But a side-effect of this is, when messages arrive during this CATCHUP\n> > > phase one tx will be getting handled by the tablesync worker before\n> > > the process_sync_tables() is ever encountered.\n> > >\n> > > I have created and attached a simple patch which allows the tablesync\n> > > to detect if there is anything to do *before* it enters the apply main\n> > > loop. Calling process_sync_tables() before the apply main loop offers\n> > > a quick way out so the message handling will not be split\n> > > unnecessarily between the workers.\n> > >\n> >\n> > Yeah, this demonstrates the idea can work but as mentioned in my\n> > previous email [1] this needs much more work to make the COPY and then\n> > later fetching the changes from the publisher consistently. So, let me\n> > summarize the discussion so far. We wanted to enhance the tablesync\n> > phase of Logical Replication to enable decoding of prepared\n> > transactions [2]. The problem was when we stream prepared transactions\n> > in the tablesync worker, it will simply commit the same due to the\n> > requirement of maintaining a single transaction for the entire\n> > duration of copy and streaming of transactions afterward. We can't\n> > simply disable the decoding of prepared xacts for tablesync workers\n> > because it can skip some of the prepared xacts forever on subscriber\n> > as explained in one of the emails above [3]. 
Now, while investigating\n> > the solutions to enhance tablesync to support decoding at prepare\n> > time, I found that due to the current design of tablesync we can see\n> > partial data of transactions on subscribers which is also explained in\n> > the email above with an example [4]. This problem of visibility is\n> > there since the Logical Replication is introduced in PostgreSQL and\n> > the only answer I got till now is that there doesn't seem to be any\n> > other alternative which I think is not true and I have provided one\n> > alternative as well.\n> >\n> > Next, we have discussed three different solutions all of which will\n> > solve the first problem (allow the tablesync worker to decode\n> > transactions at prepare time) and one of which solves both the first\n> > and second problem (partial transaction data visibility).\n> >\n> > Solution-1: Allow the table-sync worker to use multiple transactions.\n> > The reason for doing it in a single transaction is that if after\n> > initial COPY we commit and then crash while streaming changes of other\n> > transactions, the state of the table won't be known after the restart\n> > as we are using temporary slot so we don't from where to restart\n> > syncing the table.\n> >\n> > IIUC, we need to primarily do two things to achieve multiple\n> > transactions, one is to have an additional state in the catalog (say\n> > catch up) which will say that the initial copy is done. Then we need\n> > to have a permanent slot using which we can track the progress of the\n> > slot so that after restart (due to crash, connection break, etc.) we\n> > can start from the appropriate position. Now, this will allow us to do\n> > less work after recovering from a crash because we will know the\n> > restart point. As Craig mentioned, it also allows the sync slot to\n> > advance, freeing any held upstream resources before the whole sync is\n> > done, which is good if the upstream is busy and generating lots of\n> > WAL. 
Finally, committing as we go means we won't exceed the cid\n> > increment limit in a single txn.\n> >\n> > Solution-2: The next solution we discussed is to make \"tablesync\"\n> > worker just gather messages after COPY in a similar way to how the\n> > current streaming of in-progress transaction feature gathers messages\n> > into a \"changes\" file so that they can be replayed later by the apply\n> > worker. Now, here as we don't need to replay the individual\n> > transactions in tablesync worker in a single transaction, it will\n> > allow us to send decode prepared to the subscriber. This has some\n> > disadvantages such as each transaction processed by tablesync worker\n> > needs to be durably written to file and it can also lead to some apply\n> > lag later when we process the same by apply worker.\n> >\n> > Solution-3: Allow the table-sync workers to just perform initial COPY\n> > and then once the COPY is done for all relations the apply worker will\n> > stream all the future changes. Now, surely if large copies are\n> > required for multiple relations then we would delay a bit to replay\n> > transactions partially by the apply worker but don't know how much\n> > that matters as compared to transaction visibility issue and anyway we\n> > would have achieved the maximum parallelism by allowing copy via\n> > multiple workers. This would reduce the load (both CPU and I/O) on the\n> > publisher-side by allowing to decode the data only once instead of for\n> > each table sync worker once and separately for the apply worker. I\n> > think it will use fewer resources to finish the work.\n> >\n> > Currently, in tablesync worker, we create a slot with CRS_USE_SNAPSHOT\n> > option which creates a transaction snapshot on the publisher, and then\n> > we use the same snapshot for COPY from the publisher. After this, when\n> > we try to receive the data from the publisher using the same slot, it\n> > will be in sync with the COPY. 
I think to keep the same consistency\n> > between COPY and the data we receive from the publisher in this\n> > approach, we need to export the snapshot while creating a slot in the\n> > apply worker by using CRS_EXPORT_SNAPSHOT and then use the same\n> > snapshot by all the tablesync workers doing the copy. In tablesync\n> > workers, we can use the SET TRANSACTION SNAPSHOT command after \"BEGIN\n> > READ ONLY ISOLATION LEVEL REPEATABLE READ\" to use the exported\n> > snapshot. That way the COPY will use the same snapshot as is used for\n> > receiving the changes in apply worker and the data will be in sync.\n> >\n> > Then we also need a way to export snapshot while the apply worker is\n> > already receiving the changes because users can use 'ALTER\n> > SUBSCRIPTION name REFRESH PUBLICATION' which allows new tables to be\n> > synced. I think we need to introduce a new command in\n> > exec_replication_command() to export the snapshot from the existing\n> > slot and then use it by the new tablesync worker.\n> >\n> >\n> > Among the above three solutions, the first two will solve the first\n> > problem (allow the tablesync worker to decode transactions at prepare\n> > time) and the third solution will solve both the first and second\n> > problem (partial transaction data visibility). The third solution\n> > requires quite some redesign of how the Logical Replication work is\n> > synchronized between apply and tablesync workers and might turn out to\n> > be a bigger implementation effort. 
I am tentatively thinking to go\n> > with a first or second solution at this stage and anyway if later\n> > people feel that we need some bigger redesign then we can go with\n> > something on the lines of Solution-3.\n> >\n> > Thoughts?\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1%2BQC74wRQmbYT%2BMmOs%3DYbdUjuq0_A9CBbVoQMB1Ryi-OA%40mail.gmail.com\n> > [2] - https://www.postgresql.org/message-id/CAHut+PuEMk4SO8oGzxc_ftzPkGA8uC-y5qi-KRqHSy_P0i30DA@mail.gmail.com\n> > [3] - https://www.postgresql.org/message-id/CAA4eK1KFsjf6x-S7b0dJLvEL3tcn9x-voBJiFoGsccyH5xgDzQ%40mail.gmail.com\n> > [4] - https://www.postgresql.org/message-id/CAA4eK1Ld9XaLoTZCoKF_gET7kc1fDf8CPR3CM48MQb1N1jDLYg%40mail.gmail.com\n> >\n> > --\n>\n> Hi Amit,\n>\n> - Solution-3 has become too complicated to be attempted by me. Anyway,\n> we may be better to just focus on eliminating the new problems exposed\n> by the 2PC work [1], rather than burning too much effort to fix some\n> other quirk which apparently has existed for years.\n> [1] https://www.postgresql.org/message-id/CAHut%2BPtm7E5Jj92tJWPtnnjbNjJN60_%3DaGGKYW3h23b7J%3DqeDg%40mail.gmail.com\n>\n> - Solution-2 has some potential lag problems, and maybe file resource\n> problems as well. This idea did not get a very favourable response\n> when I first proposed it.\n>\n> - This leaves Solution-1 as the best viable option to fix the current\n> known 2PC trouble.\n>\n> ~~\n>\n> So I will try to write a patch for the proposed Solution-1.\n\nYeah, even I think that the Solution-1 is best for solving the problem for 2PC.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Dec 2020 15:45:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 8:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> So I will try to write a patch for the proposed Solution-1.\n>\n\nHi Amit.\n\nFYI, here is my v3 WIP patch for the Solution1.\n\nThis patch applies onto the v30 patch set [1] from the other 2PC thread:\n[1] https://www.postgresql.org/message-id/CAFPTHDYA8yE6tEmQ2USYS68kNt%2BkM%3DSwKgj%3Djy4AvFD5e9-UTQ%40mail.gmail.com\n\nAlthough incomplete, it does continue to pass all the make check, and\nsrc/test/subscription TAP tests.\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n\nTODO / Known Issues:\n\n* The tablesync replication origin/lsn logic all needs to be updated\nso that tablesync knows where to restart based on information held by\nthe now permanent slot.\n\n* the current implementation of tablesync drop slot (e.g. from DROP\nSUBSCRIPTION) or finish_sync_worker regenerates the tablesync slot\nname so it knows what slot to drop. The current code may be ok for\nnormal use cases, but if there is an ALTER SUBSCRIPTION ... SET\n(slot_name = newname) it would fail to be able to find the tablesync\nslot. Some redesign may be needed for this part.\n\n* help / comments / cleanup\n\n* There is temporary \"!!>>\" excessive logging of mine scattered around\nwhich I added to help my testing during development\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 15 Dec 2020 21:01:41 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v4 WIP patch for the Solution1.\n\nThis patch applies onto the v30 patch set [1] from other 2PC thread:\n[1] https://www.postgresql.org/message-id/CAFPTHDYA8yE6tEmQ2USYS68kNt%2BkM%3DSwKgj%3Djy4AvFD5e9-UTQ%40mail.gmail.com\n\nAlthough incomplete it does still pass all the make check, and\nsrc/test/subscription TAP tests.\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync now sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for apply worker)\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply\n\nTODO / Known Issues:\n\n* the current implementation of tablesync drop slot (e.g. from\nDropSubscription or finish_sync_worker) regenerates the tablesync slot\nname so it knows what slot to drop. The current code might be ok for\nnormal use cases, but if there is an ALTER SUBSCRIPTION ... SET\n(slot_name = newname) it would fail to be able to find the tablesync\nslot.\n\n* I think if there are crashed tablesync workers then they are not\nknown to DropSubscription. So this might be a problem to cleanup slots\nand/or origin tracking belonging to those unknown workers.\n\n* help / comments / cleanup\n\n* There is temporary \"!!>>\" excessive logging of mine scattered around\nwhich I added to help my testing during development\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Sat, 19 Dec 2020 00:11:03 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> TODO / Known Issues:\n>\n> * the current implementation of tablesync drop slot (e.g. from\n> DropSubscription or finish_sync_worker) regenerates the tablesync slot\n> name so it knows what slot to drop.\n>\n\nIf you always drop the slot at finish_sync_worker, then in which case\ndo you need to drop it during DropSubscription? Is it when the table\nsync workers are crashed?\n\n> The current code might be ok for\n> normal use cases, but if there is an ALTER SUBSCRIPTION ... SET\n> (slot_name = newname) it would fail to be able to find the tablesync\n> slot.\n>\n\nSure, but the same will be true for the apply worker slot as well. I\nagree the problem would be more for table sync workers but I think we\ncan solve it, see below.\n\n> * I think if there are crashed tablesync workers then they are not\n> known to DropSubscription. So this might be a problem to cleanup slots\n> and/or origin tracking belonging to those unknown workers.\n>\n\nYeah, I think we can do two things to avoid this and the previous\nproblem. (a) We can generate the slot_name for the table sync worker\nbased on only subscription_id and rel_id. (b) Immediately after\ncreating the slot, advance the replication origin with the position\n(origin_startpos) we get from walrcv_create_slot, this will help us to\nstart from the right location.\n\nDo you see anything which will still not be addressed after doing the above?\n\nI understand why you are trying to create this patch atop logical\ndecoding of 2PC patch but I think it is better to create this as an\nindependent patch and then use it to test 2PC problem. Also, please\nexplain what kind of testing you did to ensure that it works properly\nafter the table sync worker restarts after the crash.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Dec 2020 12:10:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 12:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 18, 2020 at 6:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> I understand why you are trying to create this patch atop logical\n> decoding of 2PC patch but I think it is better to create this as an\n> independent patch and then use it to test 2PC problem. Also, please\n> explain what kind of testing you did to ensure that it works properly\n> after the table sync worker restarts after the crash.\n>\n\nFew other comments:\n==================\n1.\n* FIXME 3 - Crashed tablesync workers may also have remaining slots\nbecause I don't think\n+ * such workers are even iterated by this loop, and nobody else is\nremoving them.\n+ */\n+ if (slotname)\n+ {\n\nThe above FIXME is not clear to me. Actually, the crashed workers\nshould restart, finish their work, and drop the slots. So not sure\nwhat exactly this FIXME refers to?\n\n2.\nDropSubscription()\n{\n..\nReplicationSlotDropAtPubNode(\n+ NULL,\n+ conninfo, /* use conninfo to make a new connection. */\n+ subname,\n+ syncslotname);\n..\n}\n\nWith the above call, it will form a connection with the publisher and\ndrop the required slots. I think we need to save the connection info\nso that we don't need to connect/disconnect for each slot to be\ndropped. Later in this function, we again connect and drop the apply\nworker slot. I think we should connect just once drop the apply and\ntable sync slots if any.\n\n3.\nReplicationSlotDropAtPubNode(WalReceiverConn *wrconn_given, char\n*conninfo, char *subname, char *slotname)\n{\n..\n+ PG_TRY();\n..\n+ PG_CATCH();\n+ {\n+ /* NOP. Just gobble any ERROR. 
*/\n+ }\n+ PG_END_TRY();\n\nWhy are we suppressing the error instead of handling it the error in\nthe same way as we do while dropping the apply worker slot in\nDropSubscription?\n\n4.\n@@ -139,6 +141,28 @@ finish_sync_worker(void)\n get_rel_name(MyLogicalRepWorker->relid))));\n CommitTransactionCommand();\n\n+ /*\n+ * Cleanup the tablesync slot.\n+ */\n+ {\n+ extern void ReplicationSlotDropAtPubNode(\n+ WalReceiverConn *wrconn_given, char *conninfo, char *subname, char *slotname);\n\nThis is not how we export functions at other places?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Dec 2020 10:55:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v5 WIP patch for the Solution1.\n\nThis patch still applies onto the v30 patch set [1] from other 2PC thread:\n[1] https://www.postgresql.org/message-id/CAFPTHDYA8yE6tEmQ2USYS68kNt%2BkM%3DSwKgj%3Djy4AvFD5e9-UTQ%40mail.gmail.com\n\n(I understand you would like this to be delivered as a separate patch\nindependent of v30. I will convert it ASAP)\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply\n\nTODO / Known Issues:\n\n* I think if there are crashed tablesync workers they may not be known\nto DropSubscription current code. This might be a problem to cleanup\nslots and/or origin tracking belonging to those unknown workers.\n\n* Help / comments / cleanup\n\n* There is temporary \"!!>>\" excessive logging of mine scattered around\nwhich I added to help my testing during development\n\n* Address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 21 Dec 2020 20:15:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 5:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 18, 2020 at 6:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > TODO / Known Issues:\n> >\n> > * the current implementation of tablesync drop slot (e.g. from\n> > DropSubscription or finish_sync_worker) regenerates the tablesync slot\n> > name so it knows what slot to drop.\n> >\n>\n> If you always drop the slot at finish_sync_worker, then in which case\n> do you need to drop it during DropSubscription? Is it when the table\n> sync workers are crashed?\n\nYes. It is not the normal case. But if the tablesync never yet got to\nSYNCDONE state (maybe crashed) then finish_sync_worker may not be\ncalled.\nSo I think a rogue tablesync slot might still exist during DropSubscription.\n\n>\n> > The current code might be ok for\n> > normal use cases, but if there is an ALTER SUBSCRIPTION ... SET\n> > (slot_name = newname) it would fail to be able to find the tablesync\n> > slot.\n> >\n>\n> Sure, but the same will be true for the apply worker slot as well. I\n> agree the problem would be more for table sync workers but I think we\n> can solve it, see below.\n>\n> > * I think if there are crashed tablesync workers then they are not\n> > known to DropSubscription. So this might be a problem to cleanup slots\n> > and/or origin tracking belonging to those unknown workers.\n> >\n>\n> Yeah, I think we can do two things to avoid this and the previous\n> problem. (a) We can generate the slot_name for the table sync worker\n> based on only subscription_id and rel_id. (b) Immediately after\n> creating the slot, advance the replication origin with the position\n> (origin_startpos) we get from walrcv_create_slot, this will help us to\n> start from the right location.\n>\n> Do you see anything which will still not be addressed after doing the above?\n\n(a) V5 Patch is updated as suggested.\n(b) V5 Patch is updated as suggested. 
Now calling replorigin_advance.\nNo problems seen so far. All TAP tests pass, but more testing needed\nfor the origin stuff\n\n>\n> I understand why you are trying to create this patch atop logical\n> decoding of 2PC patch but I think it is better to create this as an\n> independent patch and then use it to test 2PC problem.\n\nOK. The latest patch still applies to v30 just for my convenience\ntoday, but I will head towards converting this to an independent patch\nASAP.\n\n> Also, please\n> explain what kind of testing you did to ensure that it works properly\n> after the table sync worker restarts after the crash.\n\nSo far tested like this - I caused the tablesync to crash after\nCOPYDONE (but before SYNCDONE) by sending a row to cause a PK\nviolation while holding the tablesync at the CATCHUP state in the\ndebugger. The tablesync then handles the insert, encounters the PK\nviolation error, and re-launches. Then I can remove the extra row so\nthe PK violation does not happen, so the (re-launched) tablesync can\ncomplete and finish normally. The Apply worker then takes over.\n\nI have attached some captured/annotated logging of my test scenario\nwhich I ran using the V4 patch (the log has a lot of extra temporary\noutput to help see what is going on)\n\n---\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Mon, 21 Dec 2020 20:35:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Few other comments:\n> ==================\n\nThanks for your feedback.\n\n> 1.\n> * FIXME 3 - Crashed tablesync workers may also have remaining slots\n> because I don't think\n> + * such workers are even iterated by this loop, and nobody else is\n> removing them.\n> + */\n> + if (slotname)\n> + {\n>\n> The above FIXME is not clear to me. Actually, the crashed workers\n> should restart, finish their work, and drop the slots. So not sure\n> what exactly this FIXME refers to?\n\nYes, normally if the tablesync can complete it should behave like that.\nBut I think there are other scenarios where it may be unable to\nclean-up after itself. For example:\n\ni) Maybe the crashed tablesync worker cannot finish. e.g. A row insert\nhandled by tablesync can give a PK violation which also will crash\nagain and again for each re-launched/replacement tablesync worker.\nThis can be reproduced in the debugger. If the DropSubscription\ndoesn't clean-up the tablesync's slot then nobody will.\n\nii) Also DROP SUBSCRIPTION code has locking (see code commit) \"to\nensure that the launcher doesn't restart new worker during dropping\nthe subscription\". So executing DROP SUBSCRIPTION will prevent a newly\ncrashed tablesync from re-launching, so it won’t be able to take care\nof its own slot. If the DropSubscription doesn't clean-up that\ntablesync's slot then nobody will.\n\n>\n> 2.\n> DropSubscription()\n> {\n> ..\n> ReplicationSlotDropAtPubNode(\n> + NULL,\n> + conninfo, /* use conninfo to make a new connection. */\n> + subname,\n> + syncslotname);\n> ..\n> }\n>\n> With the above call, it will form a connection with the publisher and\n> drop the required slots. I think we need to save the connection info\n> so that we don't need to connect/disconnect for each slot to be\n> dropped. Later in this function, we again connect and drop the apply\n> worker slot. 
I think we should connect just once drop the apply and\n> table sync slots if any.\n\nOK. IIUC this is a suggestion for more efficient connection usage,\nrather than actual bug right? I have added this suggestion to my TODO\nlist.\n\n>\n> 3.\n> ReplicationSlotDropAtPubNode(WalReceiverConn *wrconn_given, char\n> *conninfo, char *subname, char *slotname)\n> {\n> ..\n> + PG_TRY();\n> ..\n> + PG_CATCH();\n> + {\n> + /* NOP. Just gobble any ERROR. */\n> + }\n> + PG_END_TRY();\n>\n> Why are we suppressing the error instead of handling it the error in\n> the same way as we do while dropping the apply worker slot in\n> DropSubscription?\n\nThis function is common - it is also called from the tablesync\nfinish_sync_worker. But in the finish_sync_worker case I wanted to\navoid throwing an ERROR which would cause the tablesync to crash and\nrelaunch (and crash/relaunch/repeat...) when all it was trying to do\nin the first place was just cleanup and exit the process. Perhaps the\nerror suppression should be conditional depending where this function\nis called from?\n\n>\n> 4.\n> @@ -139,6 +141,28 @@ finish_sync_worker(void)\n> get_rel_name(MyLogicalRepWorker->relid))));\n> CommitTransactionCommand();\n>\n> + /*\n> + * Cleanup the tablesync slot.\n> + */\n> + {\n> + extern void ReplicationSlotDropAtPubNode(\n> + WalReceiverConn *wrconn_given, char *conninfo, char *subname, char *slotname);\n>\n> This is not how we export functions at other places?\n\nFixed in latest v5 patch -\nhttps://www.postgresql.org/message-id/CAHut%2BPvmDJ_EO11_up%3D_cRbOjhdWCMG-n7kF-mdRhjtCHcjHRA%40mail.gmail.com\n\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 21 Dec 2020 20:47:00 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Dec 21, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Few other comments:\n> > ==================\n>\n> Thanks for your feedback.\n>\n> > 1.\n> > * FIXME 3 - Crashed tablesync workers may also have remaining slots\n> > because I don't think\n> > + * such workers are even iterated by this loop, and nobody else is\n> > removing them.\n> > + */\n> > + if (slotname)\n> > + {\n> >\n> > The above FIXME is not clear to me. Actually, the crashed workers\n> > should restart, finish their work, and drop the slots. So not sure\n> > what exactly this FIXME refers to?\n>\n> Yes, normally if the tablesync can complete it should behave like that.\n> But I think there are other scenarios where it may be unable to\n> clean-up after itself. For example:\n>\n> i) Maybe the crashed tablesync worker cannot finish. e.g. A row insert\n> handled by tablesync can give a PK violation which also will crash\n> again and again for each re-launched/replacement tablesync worker.\n> This can be reproduced in the debugger. If the DropSubscription\n> doesn't clean-up the tablesync's slot then nobody will.\n>\n> ii) Also DROP SUBSCRIPTION code has locking (see code commit) \"to\n> ensure that the launcher doesn't restart new worker during dropping\n> the subscription\".\n>\n\nYeah, I have also read that comment but do you know how it is\npreventing relaunch? How does the subscription lock help?\n\n> So executing DROP SUBSCRIPTION will prevent a newly\n> crashed tablesync from re-launching, so it won’t be able to take care\n> of its own slot. If the DropSubscription doesn't clean-up that\n> tablesync's slot then nobody will.\n>\n\n\n> >\n> > 2.\n> > DropSubscription()\n> > {\n> > ..\n> > ReplicationSlotDropAtPubNode(\n> > + NULL,\n> > + conninfo, /* use conninfo to make a new connection. 
*/\n> > + subname,\n> > + syncslotname);\n> > ..\n> > }\n> >\n> > With the above call, it will form a connection with the publisher and\n> > drop the required slots. I think we need to save the connection info\n> > so that we don't need to connect/disconnect for each slot to be\n> > dropped. Later in this function, we again connect and drop the apply\n> > worker slot. I think we should connect just once drop the apply and\n> > table sync slots if any.\n>\n> OK. IIUC this is a suggestion for more efficient connection usage,\n> rather than actual bug right?\n>\n\nYes, it is for effective connection usage.\n\n> I have added this suggestion to my TODO\n> list.\n>\n> >\n> > 3.\n> > ReplicationSlotDropAtPubNode(WalReceiverConn *wrconn_given, char\n> > *conninfo, char *subname, char *slotname)\n> > {\n> > ..\n> > + PG_TRY();\n> > ..\n> > + PG_CATCH();\n> > + {\n> > + /* NOP. Just gobble any ERROR. */\n> > + }\n> > + PG_END_TRY();\n> >\n> > Why are we suppressing the error instead of handling it the error in\n> > the same way as we do while dropping the apply worker slot in\n> > DropSubscription?\n>\n> This function is common - it is also called from the tablesync\n> finish_sync_worker. But in the finish_sync_worker case I wanted to\n> avoid throwing an ERROR which would cause the tablesync to crash and\n> relaunch (and crash/relaunch/repeat...) when all it was trying to do\n> in the first place was just cleanup and exit the process. Perhaps the\n> error suppression should be conditional depending where this function\n> is called from?\n>\n\nYeah, that could be one way and if you follow my previous suggestion\nthis function might change a bit more.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Dec 2020 18:08:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v6 WIP patch for the Solution1.\n\nThis patch still applies onto the v30 patch set [1] from other 2PC thread:\n[1] https://www.postgresql.org/message-id/CAFPTHDYA8yE6tEmQ2USYS68kNt%2BkM%3DSwKgj%3Djy4AvFD5e9-UTQ%40mail.gmail.com\n\n(I understand you would like this to be delivered as a separate patch\nindependent of v30. I will convert it ASAP)\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply\n\nTODO / Known Issues:\n\n* Crashed tablesync workers may not be known to DropSubscription\ncurrent code. This might be a problem to cleanup slots and/or origin\ntracking belonging to those unknown workers.\n\n* There seems to be a race condition during DROP SUBSCRIPTION. It\nmanifests as the TAP test 007 hanging. Logging shows it seems to be\nduring replorigin_drop when called from DropSubscription. It is timing\nrelated and quite rare - e.g. Only happens 1x every 10x running\nsubscription TAP tests.\n\n* Help / comments / cleanup\n\n* There is temporary \"!!>>\" excessive logging of mine scattered around\nwhich I added to help my testing during development\n\n* Address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 22 Dec 2020 22:13:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 11:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 21, 2020 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Dec 21, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Few other comments:\n> > > ==================\n> >\n> > Thanks for your feedback.\n> >\n> > > 1.\n> > > * FIXME 3 - Crashed tablesync workers may also have remaining slots\n> > > because I don't think\n> > > + * such workers are even iterated by this loop, and nobody else is\n> > > removing them.\n> > > + */\n> > > + if (slotname)\n> > > + {\n> > >\n> > > The above FIXME is not clear to me. Actually, the crashed workers\n> > > should restart, finish their work, and drop the slots. So not sure\n> > > what exactly this FIXME refers to?\n> >\n> > Yes, normally if the tablesync can complete it should behave like that.\n> > But I think there are other scenarios where it may be unable to\n> > clean-up after itself. For example:\n> >\n> > i) Maybe the crashed tablesync worker cannot finish. e.g. A row insert\n> > handled by tablesync can give a PK violation which also will crash\n> > again and again for each re-launched/replacement tablesync worker.\n> > This can be reproduced in the debugger. If the DropSubscription\n> > doesn't clean-up the tablesync's slot then nobody will.\n> >\n> > ii) Also DROP SUBSCRIPTION code has locking (see code commit) \"to\n> > ensure that the launcher doesn't restart new worker during dropping\n> > the subscription\".\n> >\n>\n> Yeah, I have also read that comment but do you know how it is\n> preventing relaunch? How does the subscription lock help?\n\nHmmm. I did see there is a matching lock in get_subscription_list of\nlauncher.c, which may be what that code comment was referring to. 
But\nI also am currently unsure how this lock prevents anybody (e.g.\nprocess_syncing_tables_for_apply) from executing another\nlogicalrep_worker_launch.\n\n>\n> > So executing DROP SUBSCRIPTION will prevent a newly\n> > crashed tablesync from re-launching, so it won’t be able to take care\n> > of its own slot. If the DropSubscription doesn't clean-up that\n> > tablesync's slot then nobody will.\n> >\n>\n>\n> > >\n> > > 2.\n> > > DropSubscription()\n> > > {\n> > > ..\n> > > ReplicationSlotDropAtPubNode(\n> > > + NULL,\n> > > + conninfo, /* use conninfo to make a new connection. */\n> > > + subname,\n> > > + syncslotname);\n> > > ..\n> > > }\n> > >\n> > > With the above call, it will form a connection with the publisher and\n> > > drop the required slots. I think we need to save the connection info\n> > > so that we don't need to connect/disconnect for each slot to be\n> > > dropped. Later in this function, we again connect and drop the apply\n> > > worker slot. I think we should connect just once drop the apply and\n> > > table sync slots if any.\n> >\n> > OK. IIUC this is a suggestion for more efficient connection usage,\n> > rather than actual bug right?\n> >\n>\n> Yes, it is for effective connection usage.\n>\n\nI have addressed this in the latest patch [v6]\n\n> >\n> > >\n> > > 3.\n> > > ReplicationSlotDropAtPubNode(WalReceiverConn *wrconn_given, char\n> > > *conninfo, char *subname, char *slotname)\n> > > {\n> > > ..\n> > > + PG_TRY();\n> > > ..\n> > > + PG_CATCH();\n> > > + {\n> > > + /* NOP. Just gobble any ERROR. */\n> > > + }\n> > > + PG_END_TRY();\n> > >\n> > > Why are we suppressing the error instead of handling it the error in\n> > > the same way as we do while dropping the apply worker slot in\n> > > DropSubscription?\n> >\n> > This function is common - it is also called from the tablesync\n> > finish_sync_worker. 
But in the finish_sync_worker case I wanted to\n> > avoid throwing an ERROR which would cause the tablesync to crash and\n> > relaunch (and crash/relaunch/repeat...) when all it was trying to do\n> > in the first place was just cleanup and exit the process. Perhaps the\n> > error suppression should be conditional depending where this function\n> > is called from?\n> >\n>\n> Yeah, that could be one way and if you follow my previous suggestion\n> this function might change a bit more.\n\nI have addressed this in the latest patch [v6]\n\n---\n[v6] https://www.postgresql.org/message-id/CAHut%2BPuCLty2HGNT6neyOcUmBNxOLo%3DybQ2Yv-nTR4kFY-8QLw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 22 Dec 2020 22:28:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v7 WIP patch for the Solution1.\n\nThis patch still applies onto the v30 patch set [1] from other 2PC thread:\n[1] https://www.postgresql.org/message-id/CAFPTHDYA8yE6tEmQ2USYS68kNt%2BkM%3DSwKgj%3Djy4AvFD5e9-UTQ%40mail.gmail.com\n\n(I understand you would like this to be delivered as a separate patch\nindependent of v30. I will convert it ASAP)\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply\n\n* The v7 DropSubscription cleanup code has been rewritten since v6.\nThe subscription TAP tests have been executed many (7) times now\nwithout observing any of the race problems that I previously reported\nseeing when using the v6 patch.\n\nTODO / Known Issues:\n\n* Help / comments / cleanup\n\n* There is temporary \"!!>>\" excessive logging scattered around which I\nadded to help my testing during development\n\n* Address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 23 Dec 2020 17:19:19 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v8 WIP patch for the Solution1.\n\nThis has the same code changes as the v7 patch, but the v8 patch can\nbe applied to the current PG OSS master code base.\n\n====\n\nCoded / WIP:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worked now allowing multiple tx instead of single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply\n\n* The DropSubscription cleanup code was changed lots in v7. The\nsubscription TAP tests have been executed 6x now without observing any\nrace problems that were sometimes seen to happen in the v6 patch.\n\nTODO / Known Issues:\n\n* Help / comments\n\n* There is temporary \"!!>>\" excessive logging scattered around which I\nadded to help my testing during development\n\n* Address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 23 Dec 2020 20:38:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 11:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Amit.\n>\n> PSA my v7 WIP patch for the Solution1.\n>\n\nFew comments:\n================\n1.\n+ * Rarely, the DropSubscription may be issued when a tablesync still\n+ * is in SYNCDONE but not yet in READY state. If this happens then\n+ * the drop slot could fail because it is already dropped.\n+ * In this case suppress and drop slot error.\n+ *\n+ * FIXME - Is there a better way than this?\n+ */\n+ if (rstate->state != SUBREL_STATE_SYNCDONE)\n+ PG_RE_THROW();\n\nSo, does this situation happens when we try to drop subscription after\nthe state is changed to syncdone but not syncready. If so, then can't\nwe write a function GetSubscriptionNotDoneRelations similar to\nGetSubscriptionNotReadyRelations where we get a list of relations that\nare not in done stage. I think this should be safe because once we are\nhere we shouldn't be allowed to start a new worker and old workers are\nalready stopped by this function.\n\n2. Your changes in LogicalRepSyncTableStart() doesn't seem to be\nright. IIUC, you are copying the table in one transaction, then\nupdating the state to SUBREL_STATE_COPYDONE in another transaction,\nand after that doing replorigin_advance. Consider what happened if we\nerror out after the first txn is committed in which we have copied the\ntable. After the restart, it will again try to copy and lead to an\nerror. Similarly, consider if we error out after the second\ntransaction, we won't where to start decoding from. I think all these\nshould be done in a single transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Dec 2020 15:15:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 4:58 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Dec 21, 2020 at 11:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 21, 2020 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 21, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > Few other comments:\n> > > > ==================\n> > >\n> > > Thanks for your feedback.\n> > >\n> > > > 1.\n> > > > * FIXME 3 - Crashed tablesync workers may also have remaining slots\n> > > > because I don't think\n> > > > + * such workers are even iterated by this loop, and nobody else is\n> > > > removing them.\n> > > > + */\n> > > > + if (slotname)\n> > > > + {\n> > > >\n> > > > The above FIXME is not clear to me. Actually, the crashed workers\n> > > > should restart, finish their work, and drop the slots. So not sure\n> > > > what exactly this FIXME refers to?\n> > >\n> > > Yes, normally if the tablesync can complete it should behave like that.\n> > > But I think there are other scenarios where it may be unable to\n> > > clean-up after itself. For example:\n> > >\n> > > i) Maybe the crashed tablesync worker cannot finish. e.g. A row insert\n> > > handled by tablesync can give a PK violation which also will crash\n> > > again and again for each re-launched/replacement tablesync worker.\n> > > This can be reproduced in the debugger. If the DropSubscription\n> > > doesn't clean-up the tablesync's slot then nobody will.\n> > >\n> > > ii) Also DROP SUBSCRIPTION code has locking (see code commit) \"to\n> > > ensure that the launcher doesn't restart new worker during dropping\n> > > the subscription\".\n> > >\n> >\n> > Yeah, I have also read that comment but do you know how it is\n> > preventing relaunch? How does the subscription lock help?\n>\n> Hmmm. I did see there is a matching lock in get_subscription_list of\n> launcher.c, which may be what that code comment was referring to. 
But\n> I also am currently unsure how this lock prevents anybody (e.g.\n> process_syncing_tables_for_apply) from executing another\n> logicalrep_worker_launch.\n>\n\nprocess_syncing_tables_for_apply will be called by the apply worker\nand we are stopping the apply worker. So, after that launcher won't\nstart a new apply worker because of get_subscription_list() and if the\napply worker is not started then it won't be able to start tablesync\nworker. So, we need the handling of crashed tablesync workers here\nsuch that we need to drop any new sync slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Dec 2020 15:39:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v9 WIP patch for the Solution1 which addresses some recent\nreview comments, and other minor changes.\n\n====\n\nFeatures:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worker now allows multiple tx instead of a single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a relaunched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* The DropSubscription cleanup code was enhanced in v7 to take care of\ncrashed sync workers.\n\n* Minor updates to PG docs\n\nTODO / Known Issues:\n\n* Source includes temporary \"!!>>\" excessive logging which I added to\nhelp with testing during development\n\n* Address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 30 Dec 2020 17:08:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 9:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 22, 2020 at 4:58 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Dec 21, 2020 at 11:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 21, 2020 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Mon, Dec 21, 2020 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > Few other comments:\n> > > > > ==================\n> > > >\n> > > > Thanks for your feedback.\n> > > >\n> > > > > 1.\n> > > > > * FIXME 3 - Crashed tablesync workers may also have remaining slots\n> > > > > because I don't think\n> > > > > + * such workers are even iterated by this loop, and nobody else is\n> > > > > removing them.\n> > > > > + */\n> > > > > + if (slotname)\n> > > > > + {\n> > > > >\n> > > > > The above FIXME is not clear to me. Actually, the crashed workers\n> > > > > should restart, finish their work, and drop the slots. So not sure\n> > > > > what exactly this FIXME refers to?\n> > > >\n> > > > Yes, normally if the tablesync can complete it should behave like that.\n> > > > But I think there are other scenarios where it may be unable to\n> > > > clean-up after itself. For example:\n> > > >\n> > > > i) Maybe the crashed tablesync worker cannot finish. e.g. A row insert\n> > > > handled by tablesync can give a PK violation which also will crash\n> > > > again and again for each re-launched/replacement tablesync worker.\n> > > > This can be reproduced in the debugger. If the DropSubscription\n> > > > doesn't clean-up the tablesync's slot then nobody will.\n> > > >\n> > > > ii) Also DROP SUBSCRIPTION code has locking (see code commit) \"to\n> > > > ensure that the launcher doesn't restart new worker during dropping\n> > > > the subscription\".\n> > > >\n> > >\n> > > Yeah, I have also read that comment but do you know how it is\n> > > preventing relaunch? 
How does the subscription lock help?\n> >\n> > Hmmm. I did see there is a matching lock in get_subscription_list of\n> > launcher.c, which may be what that code comment was referring to. But\n> > I also am currently unsure how this lock prevents anybody (e.g.\n> > process_syncing_tables_for_apply) from executing another\n> > logicalrep_worker_launch.\n> >\n>\n> process_syncing_tables_for_apply will be called by the apply worker\n> and we are stopping the apply worker. So, after that launcher won't\n> start a new apply worker because of get_subscription_list() and if the\n> apply worker is not started then it won't be able to start tablesync\n> worker. So, we need the handling of crashed tablesync workers here\n> such that we need to drop any new sync slots.\n\nYes, in the v6 patch code this was a problem in need of handling. But\nsince the v7 patch the DropSubscription code is now using a separate\nGetSubscriptionNotReadyRelations loop to handle the cleanup of\npotentially leftover slots from crashed tablesync workers (i.e.\nworkers that never got to a READY state).\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 30 Dec 2020 17:15:17 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 8:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 1.\n> + * Rarely, the DropSubscription may be issued when a tablesync still\n> + * is in SYNCDONE but not yet in READY state. If this happens then\n> + * the drop slot could fail because it is already dropped.\n> + * In this case suppress and drop slot error.\n> + *\n> + * FIXME - Is there a better way than this?\n> + */\n> + if (rstate->state != SUBREL_STATE_SYNCDONE)\n> + PG_RE_THROW();\n>\n> So, does this situation happens when we try to drop subscription after\n> the state is changed to syncdone but not syncready. If so, then can't\n> we write a function GetSubscriptionNotDoneRelations similar to\n> GetSubscriptionNotReadyRelations where we get a list of relations that\n> are not in done stage. I think this should be safe because once we are\n> here we shouldn't be allowed to start a new worker and old workers are\n> already stopped by this function.\n\nYes, but I don't see how adding such a function is an improvement over\nthe existing code:\ne.g.1. GetSubscriptionNotDoneRelations will include the READY state\n(which we don't want) just like GetSubscriptionNotReadyRelations\nincludes the SYNCDONE state.\ne.g.2. Or, something like GetSubscriptionNotDoneAndNotReadyRelations\nwould be an unnecessary overkill replacement for the current simple\n\"if\".\n\nAFAIK the code is OK as-is. That \"FIXME\" comment was really meant only\nto highlight this for review, rather than to imply something needed to\nbe fixed. I have removed that \"FIXME\" comment in the latest patch\n[v9].\n\n>\n> 2. Your changes in LogicalRepSyncTableStart() doesn't seem to be\n> right. IIUC, you are copying the table in one transaction, then\n> updating the state to SUBREL_STATE_COPYDONE in another transaction,\n> and after that doing replorigin_advance. Consider what happened if we\n> error out after the first txn is committed in which we have copied the\n> table. 
After the restart, it will again try to copy and lead to an\n> error. Similarly, consider if we error out after the second\n> transaction, we won't where to start decoding from. I think all these\n> should be done in a single transaction.\n\nFixed as suggested. Please see latest patch [v9]\n\n---\n\n[v9] https://www.postgresql.org/message-id/CAHut%2BPv8ShLmrjCriVU%2Btprk_9b2kwBxYK2oGSn5Eb__kWVc7A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 30 Dec 2020 17:21:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 11:51 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Dec 23, 2020 at 8:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 1.\n> > + * Rarely, the DropSubscription may be issued when a tablesync still\n> > + * is in SYNCDONE but not yet in READY state. If this happens then\n> > + * the drop slot could fail because it is already dropped.\n> > + * In this case suppress and drop slot error.\n> > + *\n> > + * FIXME - Is there a better way than this?\n> > + */\n> > + if (rstate->state != SUBREL_STATE_SYNCDONE)\n> > + PG_RE_THROW();\n> >\n> > So, does this situation happens when we try to drop subscription after\n> > the state is changed to syncdone but not syncready. If so, then can't\n> > we write a function GetSubscriptionNotDoneRelations similar to\n> > GetSubscriptionNotReadyRelations where we get a list of relations that\n> > are not in done stage. I think this should be safe because once we are\n> > here we shouldn't be allowed to start a new worker and old workers are\n> > already stopped by this function.\n>\n> Yes, but I don't see how adding such a function is an improvement over\n> the existing code:\n>\n\nThe advantage is that we don't need to use try..catch to deal with\nsuch conditions which I don't think is a good way to deal with such\ncases. Also, even after using try...catch, still, we can leak the\nslots because the patch drops the slot after changing the state to\nsyncdone and if there is any error while dropping the slot, it simply\nskips it. So, it is possible that the rel state is syncdone but the\nslot still exists and we get an error due to some different reason,\nand then we will silently skip it again and allow the subscription to\nbe dropped.\n\nI think instead what we should do is to drop the slot before we change\nthe rel state to syncdone. Also, if the apply workers fail to drop the\nslot, it should try to again drop it after restart. 
In\nDropSubscription, we can then check if the rel state is not SYNC or\nREADY, we can drop the corresponding slots.\n\n> e.g.1. GetSubscriptionNotDoneRelations will include the READY state\n> (which we don't want) just like GetSubscriptionNotReadyRelations\n> includes the SYNCDONE state.\n> e.g.2. Or, something like GetSubscriptionNotDoneAndNotReadyRelations\n> would be an unnecessary overkill replacement for the current simple\n> \"if\".\n>\n\nor we can probably modify the function as\nGetSubscriptionRelationsNotInStates and pass it an array of states\nwhich we don't want.\n\n> AFAIK the code is OK as-is.\n>\n\nAs described above, there are still race conditions where we can leak\nslots and also this doesn't look clean.\n\nFew other comments:\n=================\n1.\n+ elog(LOG, \"!!>> DropSubscription: dropping the tablesync slot\n\\\"%s\\\".\", syncslotname);\n+ ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n+ elog(LOG, \"!!>> DropSubscription: dropped the tablesync slot\n\\\"%s\\\".\", syncslotname);\n\n...\n...\n\n+ elog(LOG, \"!!>> finish_sync_worker: dropping the tablesync slot\n\\\"%s\\\".\", syncslotname);\n+ ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n+ elog(LOG, \"!!>> finish_sync_worker: dropped the tablesync slot\n\\\"%s\\\".\", syncslotname);\n\nRemove these and other elogs added to aid debugging or testing. If you\nneed these for development purposes then move these to separate patch.\n\n2. Remove WIP from the commit message and patch name.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Jan 2021 14:38:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA my v10 patch for the Solution1.\n\nv10 is essentially the same as v9, except now all the temporary \"!!>>\"\nlogging has been isolated to a separate (optional) patch 0002.\n\n====\n\nFeatures:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worker now allows multiple tx instead of a single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a re-launched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* the DropSubscription cleanup code was enhanced (v7+) to take care of\ncrashed sync workers.\n\n* minor updates to PG docs\n\nTODO / Known Issues:\n\n* address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 4 Jan 2021 20:28:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 8:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Few other comments:\n> =================\n> 1.\n> + elog(LOG, \"!!>> DropSubscription: dropping the tablesync slot\n> \\\"%s\\\".\", syncslotname);\n> + ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n> + elog(LOG, \"!!>> DropSubscription: dropped the tablesync slot\n> \\\"%s\\\".\", syncslotname);\n>\n> ...\n> ...\n>\n> + elog(LOG, \"!!>> finish_sync_worker: dropping the tablesync slot\n> \\\"%s\\\".\", syncslotname);\n> + ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n> + elog(LOG, \"!!>> finish_sync_worker: dropped the tablesync slot\n> \\\"%s\\\".\", syncslotname);\n>\n> Remove these and other elogs added to aid debugging or testing. If you\n> need these for development purposes then move these to separate patch.\n\nFixed in latest patch (v10).\n\n>\n> 2. Remove WIP from the commit message and patch name.\n>\n> --\n\nFixed in latest patch (v10)\n\n---\nv10 = https://www.postgresql.org/message-id/CAHut%2BPuzPmFzk3p4oL9H3nkiY6utFryV9c5dW6kRhCe_RY%3DgnA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 4 Jan 2021 20:33:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few other comments:\n> =================\n>\n\nFew more comments on v9:\n======================\n1.\n+ /* Drop the tablesync slot. */\n+ {\n+ char *syncslotname = ReplicationSlotNameForTablesync(subid, relid);\n+\n+ /*\n+ * If the subscription slotname is NONE/NULL and the connection to publisher is\n+ * broken, but the DropSubscription should still be allowed to complete.\n+ * But without a connection it is not possible to drop any tablesync slots.\n+ */\n+ if (!wrconn)\n+ {\n+ /* FIXME - OK to just log a warning? */\n+ elog(WARNING, \"!!>> DropSubscription: no connection. Cannot drop\ntablesync slot \\\"%s\\\".\",\n+ syncslotname);\n+ }\n\nWhy is this not an ERROR? We don't want to keep the table slots\nlingering after DropSubscription. If there is any tablesync slot that\nneeds to be dropped and the publisher is not available then we should\nraise an error.\n\n2.\n+ /*\n+ * Tablesync resource cleanup (slots and origins).\n+ *\n+ * Any READY-state relations would already have dealt with clean-ups.\n+ */\n+ {\n\nThere is no need to start a separate block '{' here.\n\n3.\n+#define SUBREL_STATE_COPYDONE 'C' /* tablesync copy phase is completed */\n\nYou can mention in the comments that sublsn will be NULL for this\nstate as it is mentioned for other similar states. Can we think of\nusing any letter in lower case for this as all other states are in\nlower-case except for this which makes it a look bit odd? We can use\n'f' or 'e' and describe it as 'copy finished' or 'copy end'. I am fine\nif you have any better ideas.\n\n4.\nLogicalRepSyncTableStart()\n{\n..\n..\n+copy_table_done:\n+\n+ /* Setup replication origin tracking. 
*/\n+ {\n+ char originname[NAMEDATALEN];\n+ RepOriginId originid;\n+\n+ snprintf(originname, sizeof(originname), \"pg_%u_%u\",\nMySubscription->oid, MyLogicalRepWorker->relid);\n+ originid = replorigin_by_name(originname, true);\n+ if (!OidIsValid(originid))\n+ {\n+ /*\n+ * Origin tracking does not exist. Create it now, and advance to LSN\ngot from walrcv_create_slot.\n+ */\n+ elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_create\n\\\"%s\\\".\", originname);\n+ originid = replorigin_create(originname);\n+ elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_session_setup\n\\\"%s\\\".\", originname);\n+ replorigin_session_setup(originid);\n+ replorigin_session_origin = originid;\n+ elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_advance\n\\\"%s\\\".\", originname);\n+ replorigin_advance(originid, *origin_startpos, InvalidXLogRecPtr,\n+ true /* go backward */ , true /* WAL log */ );\n+ }\n+ else\n+ {\n+ /*\n+ * Origin tracking already exists.\n+ */\n+ elog(LOG, \"!!>> LogicalRepSyncTableStart: 2 replorigin_session_setup\n\\\"%s\\\".\", originname);\n+ replorigin_session_setup(originid);\n+ replorigin_session_origin = originid;\n+ elog(LOG, \"!!>> LogicalRepSyncTableStart: 2\nreplorigin_session_get_progress \\\"%s\\\".\", originname);\n+ *origin_startpos = replorigin_session_get_progress(false);\n+ }\n..\n..\n}\n\nI am not sure if this code is correct because, for the very first time\nwhen the copy is done, we don't expect replication origin to exist\nwhereas this code will silently use already existing replication\norigin which can lead to a wrong start position for the slot. In such\na case it should error out. I guess we should create the replication\norigin before making the state as copydone. I feel we should even have\na test case for this as it is not difficult to have a pre-existing\nreplication origin.\n\n5. 
Is it possible to write a testcase where we fail (say due to pk\nviolation or some other error) after the initial copy is done, then\nremove the conflicting row and allow a copy to be completed? If we\nalready have any such test then it is fine.\n\n6.\n+/*\n+ * Drop the replication slot at the publisher node\n+ * using the replication connection.\n+ */\n\nThis comment looks a bit odd. The first line appears to be too short.\nWe have a limit of 80 chars but this is much less than that.\n\n7.\n@@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,\n LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n\n /* Make sure it's not used by somebody else */\n- if (replication_state->acquired_by != 0)\n+ if (replication_state->acquired_by != 0 &&\nreplication_state->acquired_by != MyProcPid)\n {\n\nI think you won't need this change if you do replorigin_advance before\nreplorigin_session_setup in your patch.\n\n8.\n- * that ensures we won't loose knowledge about that after a crash if the\n+ * that ensures we won't lose knowledge about that after a crash if the\n\nIt is better to submit this as a separate patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Jan 2021 17:20:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 5:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> PSA my v9 WIP patch for the Solution1 which addresses some recent\n> review comments, and other minor changes.\n\nI did some tests using the test suite prepared by Erik Rijkers in [1]\nduring the initial design of tablesync.\n\nBack then, they had seen some errors while doing multiple commits in\ninitial tablesync. So I've rerun the test script on the v9 patch\napplied on HEAD and found no errors.\nThe script runs pgbench, creates a pub/sub on a standby server, and\nall of the pgbench tables are replicated to the standby. The contents\nof the tables are compared at\nthe end of each run to make sure they are identical.\nI have run it for around 12 hours, and it worked without any errors.\nAttaching the script I used.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n[1]- https://www.postgresql.org/message-id/93d02794068482f96d31b002e0eb248d%40xs4all.nl",
"msg_date": "Tue, 5 Jan 2021 14:02:35 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v11 patch for the Tablesync Solution1.\n\nDifference from v10:\n- Addresses several recent review comments.\n- pg_indent has been run\n\n====\n\nFeatures:\n\n* tablesync slot is now permanent instead of temporary. The tablesync\nslot name is no longer tied to the Subscription slot name.\n\n* the tablesync slot cleanup (drop) code is added for DropSubscription\nand for finish_sync_worker functions\n\n* tablesync worker now allows multiple tx instead of a single tx\n\n* a new state (SUBREL_STATE_COPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* if a re-launched tablesync finds the state is SUBREL_STATE_COPYDONE\nthen it will bypass the initial copy_table phase.\n\n* tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply worker). The\norigin is advanced when first created.\n\n* tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* the DropSubscription cleanup code was enhanced (v7+) to take care of\ncrashed sync workers.\n\n* minor updates to PG docs\n\nTODO / Known Issues:\n\n* address review comments\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 5 Jan 2021 20:52:28 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few more comments on v9:\n> ======================\n> 1.\n> + /* Drop the tablesync slot. */\n> + {\n> + char *syncslotname = ReplicationSlotNameForTablesync(subid, relid);\n> +\n> + /*\n> + * If the subscription slotname is NONE/NULL and the connection to publisher is\n> + * broken, but the DropSubscription should still be allowed to complete.\n> + * But without a connection it is not possible to drop any tablesync slots.\n> + */\n> + if (!wrconn)\n> + {\n> + /* FIXME - OK to just log a warning? */\n> + elog(WARNING, \"!!>> DropSubscription: no connection. Cannot drop\n> tablesync slot \\\"%s\\\".\",\n> + syncslotname);\n> + }\n>\n> Why is this not an ERROR? We don't want to keep the table slots\n> lingering after DropSubscription. If there is any tablesync slot that\n> needs to be dropped and the publisher is not available then we should\n> raise an error.\n\nPreviously there was only the subscription slot. If the connection was\nbroken and caused an error then it was still possible for the user to\ndisassociate the subscription from the slot using ALTER SUBSCRIPTION\n... SET (slot_name = NONE). And then (when the slotname is NULL) the\nDropSubscription could complete OK. I expect in that case the Admin\nstill had some slot clean-up they would need to do on the Publisher\nmachine.\n\nBut now we have the tablesync slots so if I caused them to give ERROR\nwhen the connection is broken then the subscription would become\nun-droppable. 
If you think that having ERROR and an undroppable\nsubscription is better than the current WARNING then please let me\nknow - there is no problem to change it.\n\n> 2.\n> + /*\n> + * Tablesync resource cleanup (slots and origins).\n> + *\n> + * Any READY-state relations would already have dealt with clean-ups.\n> + */\n> + {\n>\n> There is no need to start a separate block '{' here.\n\nWritten this way so I can declare variables only at the scope they are\nneeded. I didn’t see anything in the PG code conventions discouraging\ndoing this practice: https://www.postgresql.org/docs/devel/source.html\n\n> 3.\n> +#define SUBREL_STATE_COPYDONE 'C' /* tablesync copy phase is completed */\n>\n> You can mention in the comments that sublsn will be NULL for this\n> state as it is mentioned for other similar states. Can we think of\n> using any letter in lower case for this as all other states are in\n> lower-case except for this which makes it a look bit odd? We can use\n> 'f' or 'e' and describe it as 'copy finished' or 'copy end'. I am fine\n> if you have any better ideas.\n>\n\nFixed in latest patch [v11]\n\n> 4.\n> LogicalRepSyncTableStart()\n> {\n> ..\n> ..\n> +copy_table_done:\n> +\n> + /* Setup replication origin tracking. */\n> + {\n> + char originname[NAMEDATALEN];\n> + RepOriginId originid;\n> +\n> + snprintf(originname, sizeof(originname), \"pg_%u_%u\",\n> MySubscription->oid, MyLogicalRepWorker->relid);\n> + originid = replorigin_by_name(originname, true);\n> + if (!OidIsValid(originid))\n> + {\n> + /*\n> + * Origin tracking does not exist. 
Create it now, and advance to LSN\n> got from walrcv_create_slot.\n> + */\n> + elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_create\n> \\\"%s\\\".\", originname);\n> + originid = replorigin_create(originname);\n> + elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_session_setup\n> \\\"%s\\\".\", originname);\n> + replorigin_session_setup(originid);\n> + replorigin_session_origin = originid;\n> + elog(LOG, \"!!>> LogicalRepSyncTableStart: 1 replorigin_advance\n> \\\"%s\\\".\", originname);\n> + replorigin_advance(originid, *origin_startpos, InvalidXLogRecPtr,\n> + true /* go backward */ , true /* WAL log */ );\n> + }\n> + else\n> + {\n> + /*\n> + * Origin tracking already exists.\n> + */\n> + elog(LOG, \"!!>> LogicalRepSyncTableStart: 2 replorigin_session_setup\n> \\\"%s\\\".\", originname);\n> + replorigin_session_setup(originid);\n> + replorigin_session_origin = originid;\n> + elog(LOG, \"!!>> LogicalRepSyncTableStart: 2\n> replorigin_session_get_progress \\\"%s\\\".\", originname);\n> + *origin_startpos = replorigin_session_get_progress(false);\n> + }\n> ..\n> ..\n> }\n>\n> I am not sure if this code is correct because, for the very first time\n> when the copy is done, we don't expect replication origin to exist\n> whereas this code will silently use already existing replication\n> origin which can lead to a wrong start position for the slot. In such\n> a case it should error out. I guess we should create the replication\n> origin before making the state as copydone. I feel we should even have\n> a test case for this as it is not difficult to have a pre-existing\n> replication origin.\n>\n\nFixed as suggested in latest patch [v11]\n\n> 5. Is it possible to write a testcase where we fail (say due to pk\n> violation or some other error) after the initial copy is done, then\n> remove the conflicting row and allow a copy to be completed? 
If we\n> already have any such test then it is fine.\n>\n\nCausing a PK violation during the initial copy is not a problem to\ntest, but doing it after the initial copy is difficult. I have done\nexactly this test scenario before but I thought it cannot be\nautomated. E.g. to cause a PK violation error somewhere between\nCOPYDONE and SYNCDONE means that the offending insert (the one which\ntablesync will fail to replicate) has to be sent while the tablesync\nis in CATCHUP mode. But AFAIK that can only be achieved using the\ndebugger to get the timing right.\n\n> 6.\n> +/*\n> + * Drop the replication slot at the publisher node\n> + * using the replication connection.\n> + */\n>\n> This comment looks a bit odd. The first line appears to be too short.\n> We have limit of 80 chars but this is much lesser than that.\n>\n\nFixed in latest patch [v11]\n\n> 7.\n> @@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,\n> LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n>\n> /* Make sure it's not used by somebody else */\n> - if (replication_state->acquired_by != 0)\n> + if (replication_state->acquired_by != 0 &&\n> replication_state->acquired_by != MyProcPid)\n> {\n>\n\nTODO\n\n> I think you won't need this change if you do replorigin_advance before\n> replorigin_session_setup in your patch.\n>\n> 8.\n> - * that ensures we won't loose knowledge about that after a crash if the\n> + * that ensures we won't lose knowledge about that after a crash if the\n>\n> It is better to submit this as a separate patch.\n>\n\nDone. Please see CF entry. https://commitfest.postgresql.org/32/2926/\n\n----\n[v11] = https://www.postgresql.org/message-id/CAHut%2BPu0A6TUPgYC-L3BKYQfa_ScL31kOV_3RsB3ActdkL1iBQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 5 Jan 2021 21:02:25 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 3:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Few more comments on v9:\n> > ======================\n> > 1.\n> > + /* Drop the tablesync slot. */\n> > + {\n> > + char *syncslotname = ReplicationSlotNameForTablesync(subid, relid);\n> > +\n> > + /*\n> > + * If the subscription slotname is NONE/NULL and the connection to publisher is\n> > + * broken, but the DropSubscription should still be allowed to complete.\n> > + * But without a connection it is not possible to drop any tablesync slots.\n> > + */\n> > + if (!wrconn)\n> > + {\n> > + /* FIXME - OK to just log a warning? */\n> > + elog(WARNING, \"!!>> DropSubscription: no connection. Cannot drop\n> > tablesync slot \\\"%s\\\".\",\n> > + syncslotname);\n> > + }\n> >\n> > Why is this not an ERROR? We don't want to keep the table slots\n> > lingering after DropSubscription. If there is any tablesync slot that\n> > needs to be dropped and the publisher is not available then we should\n> > raise an error.\n>\n> Previously there was only the subscription slot. If the connection was\n> broken and caused an error then it was still possible for the user to\n> disassociate the subscription from the slot using ALTER SUBSCRIPTION\n> ... SET (slot_name = NONE). And then (when the slotname is NULL) the\n> DropSubscription could complete OK. 
I expect in that case the Admin\n> still had some slot clean-up they would need to do on the Publisher\n> machine.\n>\n\nI think such an option could probably be used for user-created slots\nbut it would be difficult even for an Admin to know about these\ninternally created slots associated with the particular subscription.\nI would say it is better to ERROR out.\n\n>\n> > 2.\n> > + /*\n> > + * Tablesync resource cleanup (slots and origins).\n> > + *\n> > + * Any READY-state relations would already have dealt with clean-ups.\n> > + */\n> > + {\n> >\n> > There is no need to start a separate block '{' here.\n>\n> Written this way so I can declare variables only at the scope they are\n> needed. I didn’t see anything in the PG code conventions discouraging\n> doing this practice: https://www.postgresql.org/docs/devel/source.html\n>\n\nBut do we encourage such a coding convention for declaring variables? I\nfind it difficult to read such code. I guess as a one-off we can do\nthis but I don't see a compelling need here.\n\n> > 3.\n> > +#define SUBREL_STATE_COPYDONE 'C' /* tablesync copy phase is completed */\n> >\n> > You can mention in the comments that sublsn will be NULL for this\n> > state as it is mentioned for other similar states. Can we think of\n> > using any letter in lower case for this as all other states are in\n> > lower-case except for this which makes it a look bit odd? We can use\n> > 'f' or 'e' and describe it as 'copy finished' or 'copy end'. I am fine\n> > if you have any better ideas.\n> >\n>\n> Fixed in latest patch [v11]\n>\n\n
See below:\n--- a/doc/src/sgml/catalogs.sgml\n+++ b/doc/src/sgml/catalogs.sgml\n@@ -7651,6 +7651,7 @@ SCRAM-SHA-256$<replaceable><iteration\ncount></replaceable>:<replaceable>&l\n State code:\n <literal>i</literal> = initialize,\n <literal>d</literal> = data is being copied,\n+ <literal>C</literal> = table data has been copied,\n <literal>s</literal> = synchronized,\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Jan 2021 17:13:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 10:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > 1.\n> > > + /* Drop the tablesync slot. */\n> > > + {\n> > > + char *syncslotname = ReplicationSlotNameForTablesync(subid, relid);\n> > > +\n> > > + /*\n> > > + * If the subscription slotname is NONE/NULL and the connection to publisher is\n> > > + * broken, but the DropSubscription should still be allowed to complete.\n> > > + * But without a connection it is not possible to drop any tablesync slots.\n> > > + */\n> > > + if (!wrconn)\n> > > + {\n> > > + /* FIXME - OK to just log a warning? */\n> > > + elog(WARNING, \"!!>> DropSubscription: no connection. Cannot drop\n> > > tablesync slot \\\"%s\\\".\",\n> > > + syncslotname);\n> > > + }\n> > >\n> > > Why is this not an ERROR? We don't want to keep the table slots\n> > > lingering after DropSubscription. If there is any tablesync slot that\n> > > needs to be dropped and the publisher is not available then we should\n> > > raise an error.\n> >\n> > Previously there was only the subscription slot. If the connection was\n> > broken and caused an error then it was still possible for the user to\n> > disassociate the subscription from the slot using ALTER SUBSCRIPTION\n> > ... SET (slot_name = NONE). And then (when the slotname is NULL) the\n> > DropSubscription could complete OK. I expect in that case the Admin\n> > still had some slot clean-up they would need to do on the Publisher\n> > machine.\n> >\n>\n> I think such an option could probably be used for user-created slots\n> but it would be difficult for even Admin to know about these\n> internally created slots associated with the particular subscription.\n> I would say it is better to ERROR out.\n\nI am having doubts that ERROR is the best choice here. 
There is a long\nnote in https://www.postgresql.org/docs/devel/sql-dropsubscription.html\nwhich describes this problem for the subscription slot and how to\ndisassociate the name to give a workaround “To proceed in this\nsituation”.\n\nOTOH if we make the tablesync slot unconditionally ERROR for a broken\nconnection then there is no way to proceed, and the whole (slot_name =\nNONE) workaround becomes ineffectual. Note - the current patch code is\nonly logging when the user has already disassociated the slot name; of\ncourse normally (when the slot name was not disassociated) table slots\nwill give ERROR for broken connections.\n\nIMO, if the user has disassociated the slot name then they have\nalready made their decision that they REALLY DO want to “proceed in\nthis situation”. So I thought we should let them proceed.\n\nWhat do you think?\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Wed, 6 Jan 2021 10:02:11 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
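[Editor's note: the "(slot_name = NONE)" workaround Peter refers to is the documented escape hatch for dropping a subscription when the publisher is unreachable. A minimal sketch of the sequence (the subscription and slot names here are hypothetical):]

```sql
-- On the subscriber: a subscription must be disabled before its slot
-- can be disassociated, and with slot_name = NONE the DROP no longer
-- needs to contact the (broken) publisher.
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
DROP SUBSCRIPTION mysub;

-- Later, on the publisher (once reachable again): clean up the
-- now-orphaned replication slot manually.
SELECT pg_drop_replication_slot('mysub');
```

[The thread's open question is what the equivalent manual cleanup looks like for the internally named tablesync slots, which the user never chose a name for.]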
{
"msg_contents": "On Wed, Jan 6, 2021 at 4:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Jan 5, 2021 at 10:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > 1.\n> > > > + /* Drop the tablesync slot. */\n> > > > + {\n> > > > + char *syncslotname = ReplicationSlotNameForTablesync(subid, relid);\n> > > > +\n> > > > + /*\n> > > > + * If the subscription slotname is NONE/NULL and the connection to publisher is\n> > > > + * broken, but the DropSubscription should still be allowed to complete.\n> > > > + * But without a connection it is not possible to drop any tablesync slots.\n> > > > + */\n> > > > + if (!wrconn)\n> > > > + {\n> > > > + /* FIXME - OK to just log a warning? */\n> > > > + elog(WARNING, \"!!>> DropSubscription: no connection. Cannot drop\n> > > > tablesync slot \\\"%s\\\".\",\n> > > > + syncslotname);\n> > > > + }\n> > > >\n> > > > Why is this not an ERROR? We don't want to keep the table slots\n> > > > lingering after DropSubscription. If there is any tablesync slot that\n> > > > needs to be dropped and the publisher is not available then we should\n> > > > raise an error.\n> > >\n> > > Previously there was only the subscription slot. If the connection was\n> > > broken and caused an error then it was still possible for the user to\n> > > disassociate the subscription from the slot using ALTER SUBSCRIPTION\n> > > ... SET (slot_name = NONE). And then (when the slotname is NULL) the\n> > > DropSubscription could complete OK. I expect in that case the Admin\n> > > still had some slot clean-up they would need to do on the Publisher\n> > > machine.\n> > >\n> >\n> > I think such an option could probably be used for user-created slots\n> > but it would be difficult for even Admin to know about these\n> > internally created slots associated with the particular subscription.\n> > I would say it is better to ERROR out.\n>\n> I am having doubts that ERROR is the best choice here. 
There is a long\n> note in https://www.postgresql.org/docs/devel/sql-dropsubscription.html\n> which describes this problem for the subscription slot and how to\n> disassociate the name to give a workaround “To proceed in this\n> situation”.\n>\n> OTOH if we make the tablesync slot unconditionally ERROR for a broken\n> connection then there is no way to proceed, and the whole (slot_name =\n> NONE) workaround becomes ineffectual. Note - the current patch code is\n> only logging when the user has already disassociated the slot name; of\n> course normally (when the slot name was not disassociated) table slots\n> will give ERROR for broken connections.\n>\n> IMO, if the user has disassociated the slot name then they have\n> already made their decision that they REALLY DO want to “proceed in\n> this situation”. So I thought we should let them proceed.\n>\n\nOkay, if we want to go that way then we should add some documentation\nabout it. Currently, the slot name used by apply worker is known to\nthe user because either it is specified by the user or the default is\nsubscription name, so the user can manually remove that slot later but\nthat is not true for tablesync slots. I think we need to update both\nthe Drop Subscription page [1] and logical-replication-subscription\npage [2] where we have mentioned temporary slots and in the end \"Here\nare some scenarios: ..\" to mention about these slots and probably how\ntheir names are generated so that in such special situations users can\ndrop them manually.\n\n[1] - https://www.postgresql.org/docs/devel/sql-dropsubscription.html\n[2] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Jan 2021 08:40:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
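[Editor's note: the manual cleanup Amit wants documented would look something like the following on the publisher. The `pg_%u_sync_%u` naming scheme is the one quoted later in this thread; the OIDs shown are hypothetical examples.]

```sql
-- On the publisher: list any leftover tablesync slots by their
-- generated pg_<suboid>_sync_<relid> names ...
SELECT slot_name
FROM pg_replication_slots
WHERE slot_name LIKE 'pg\_%\_sync\_%';

-- ... and drop each one individually, e.g.:
SELECT pg_drop_replication_slot('pg_16398_sync_16412');
```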
{
"msg_contents": "On Tue, Jan 5, 2021 at 3:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 5. Is it possible to write a testcase where we fail (say due to pk\n> > violation or some other error) after the initial copy is done, then\n> > remove the conflicting row and allow a copy to be completed? If we\n> > already have any such test then it is fine.\n> >\n>\n> Causing a PK violation during the initial copy is not a problem to\n> test, but doing it after the initial copy is difficult. I have done\n> exactly this test scenario before but I thought it cannot be\n> automated. E.g. To cause a PK violation error somewhere between\n> COPYDONE and SYNCDONE means that the offending insert (the one which\n> tablesync will fail to replicate) has to be sent while the tablesync\n> is in CATCHUP mode. But AFAIK that can only be achieved using the\n> debugger to get the timing right.\n>\n\nYeah, I am also not able to think of any way to automate such a test.\nI was thinking about what could go wrong if we error out in that\nstage. The only thing that could be problematic is if we somehow make\nthe slot and replication origin used during copy dangling. I think if\ntablesync is restarted after error then we will clean up those which\nwill be normally the case but what if the tablesync worker is not\nstarted again? I think the only possibility of tablesync worker not\nstarted again is if during Alter Subscription ... Refresh Publication,\nwe remove the corresponding subscription rel (see\nAlterSubscription_refresh, I guess it could happen if one has dropped\nthe relation from publication). I haven't tested this with your patch\nbut if such a possibility exists then we need to think of cleaning up\nslot and origin when we remove subscription rel. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Jan 2021 10:36:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 5, 2021 at 3:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > 5. Is it possible to write a testcase where we fail (say due to pk\n> > > violation or some other error) after the initial copy is done, then\n> > > remove the conflicting row and allow a copy to be completed? If we\n> > > already have any such test then it is fine.\n> > >\n> >\n> > Causing a PK violation during the initial copy is not a problem to\n> > test, but doing it after the initial copy is difficult. I have done\n> > exactly this test scenario before but I thought it cannot be\n> > automated. E.g. To cause an PK violation error somewhere between\n> > COPYDONE and SYNDONE means that the offending insert (the one which\n> > tablesync will fail to replicate) has to be sent while the tablesync\n> > is in CATCHUP mode. But AFAIK that can only be achieved using the\n> > debugger to get the timing right.\n> >\n>\n> Yeah, I am also not able to think of any way to automate such a test.\n> I was thinking about what could go wrong if we error out in that\n> stage. The only thing that could be problematic is if we somehow make\n> the slot and replication origin used during copy dangling. I think if\n> tablesync is restarted after error then we will clean up those which\n> will be normally the case but what if the tablesync worker is not\n> started again? I think the only possibility of tablesync worker not\n> started again is if during Alter Subscription ... Refresh Publication,\n> we remove the corresponding subscription rel (see\n> AlterSubscription_refresh, I guess it could happen if one has dropped\n> the relation from publication). 
I haven't tested this with your patch\n> but if such a possibility exists then we need to think of cleaning up\n> slot and origin when we remove subscription rel. What do you think?\n>\n\nI think it makes sense. If there can be a race between the tablesync\nre-launching (after error), and the AlterSubscription_refresh removing\nsome table’s relid from the subscription then there could be lurking\nslot/origin tablesync resources (of the removed table) which a\nsubsequent DROP SUBSCRIPTION cannot discover. I will think more about\nhow/if it is possible to make this happen. Anyway, I suppose I ought\nto refactor/isolate some of the tablesync cleanup code in case it\nneeds to be commonly called from DropSubscription and/or from\nAlterSubscription_refresh.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Wed, 6 Jan 2021 19:43:37 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 2:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I think it makes sense. If there can be a race between the tablesync\n> re-launching (after error), and the AlterSubscription_refresh removing\n> some table’s relid from the subscription then there could be lurking\n> slot/origin tablesync resources (of the removed table) which a\n> subsequent DROP SUBSCRIPTION cannot discover. I will think more about\n> how/if it is possible to make this happen. Anyway, I suppose I ought\n> to refactor/isolate some of the tablesync cleanup code in case it\n> needs to be commonly called from DropSubscription and/or from\n> AlterSubscription_refresh.\n>\n\nFair enough. BTW, I have analyzed whether we need any modifications to\npg_dump/restore for this patch as this changes the state of one of the\nfields in the system table and concluded that we don't need any\nchange. For subscriptions, we don't dump any of the information from\npg_subscription_rel, rather we just dump subscriptions with the\nconnect option as false which means users need to enable the\nsubscription and refresh publication after restore. I have checked\nthis in the code and tested it as well. The related information is\npresent in pg_dump doc page [1], see from \"When dumping logical\nreplication subscriptions ....\".\n\n[1] - https://www.postgresql.org/docs/devel/app-pgdump.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Jan 2021 15:39:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "> PSA the v11 patch for the Tablesync Solution1.\r\n> \r\n> Difference from v10:\r\n> - Addresses several recent review comments.\r\n> - pg_indent has been run\r\n> \r\nHi\r\n\r\nI took a look into the patch and have some comments.\r\n\r\n1.\r\n *\t So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\r\n- *\t CATCHUP -> SYNCDONE -> READY.\r\n+ *\t CATCHUP -> (sync worker TCOPYDONE) -> SYNCDONE -> READY.\r\n\r\nI noticed the new state TCOPYDONE is commented between CATCHUP and SYNCDONE,\r\nbut it seems the SUBREL_STATE_TCOPYDONE is actually set before SUBREL_STATE_SYNCWAIT[1].\r\nDid I miss something here?\r\n\r\n[1]-----------------\r\n+\tUpdateSubscriptionRelState(MyLogicalRepWorker->subid,\r\n+\t\t\t\t\t\t\t MyLogicalRepWorker->relid,\r\n+\t\t\t\t\t\t\t SUBREL_STATE_TCOPYDONE,\r\n+\t\t\t\t\t\t\t MyLogicalRepWorker->relstate_lsn);\r\n...\r\n\t/*\r\n\t * We are done with the initial data synchronization, update the state.\r\n\t */\r\n\tSpinLockAcquire(&MyLogicalRepWorker->relmutex);\r\n\tMyLogicalRepWorker->relstate = SUBREL_STATE_SYNCWAIT;\r\n------------------\r\n\r\n\r\n2.\r\n <literal>i</literal> = initialize,\r\n <literal>d</literal> = data is being copied,\r\n+ <literal>C</literal> = table data has been copied,\r\n <literal>s</literal> = synchronized,\r\n <literal>r</literal> = ready (normal replication)\r\n\r\n+#define SUBREL_STATE_TCOPYDONE\t't' /* tablesync copy phase is completed\r\n+\t\t\t\t\t\t\t\t\t * (sublsn NULL) */\r\nThe character representing 'data has been copied' in the catalog seems different from the macro definition.\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\n\n",
"msg_date": "Thu, 7 Jan 2021 01:45:15 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Thank you for the feedback.\n\nOn Thu, Jan 7, 2021 at 12:45 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > PSA the v11 patch for the Tablesync Solution1.\n> >\n> > Difference from v10:\n> > - Addresses several recent review comments.\n> > - pg_indent has been run\n> >\n> Hi\n>\n> I took a look into the patch and have some comments.\n>\n> 1.\n> * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\n> - * CATCHUP -> SYNCDONE -> READY.\n> + * CATCHUP -> (sync worker TCOPYDONE) -> SYNCDONE -> READY.\n>\n> I noticed the new state TCOPYDONE is commented between CATCHUP and SYNCDONE,\n> but it seems the SUBREL_STATE_TCOPYDONE is actually set before SUBREL_STATE_SYNCWAIT[1].\n> Did I miss something here?\n>\n> [1]-----------------\n> + UpdateSubscriptionRelState(MyLogicalRepWorker->subid,\n> + MyLogicalRepWorker->relid,\n> + SUBREL_STATE_TCOPYDONE,\n> + MyLogicalRepWorker->relstate_lsn);\n> ...\n> /*\n> * We are done with the initial data synchronization, update the state.\n> */\n> SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCWAIT;\n> ------------------\n>\n\nThanks for reporting this mistake. I will correct the comment for the\nnext patch (v12)\n\n>\n> 2.\n> <literal>i</literal> = initialize,\n> <literal>d</literal> = data is being copied,\n> + <literal>C</literal> = table data has been copied,\n> <literal>s</literal> = synchronized,\n> <literal>r</literal> = ready (normal replication)\n>\n> +#define SUBREL_STATE_TCOPYDONE 't' /* tablesync copy phase is completed\n> + * (sublsn NULL) */\n> The character representing 'data has been copied' in the catalog seems different from the macro definition.\n>\n\nYes, same was already previously reported [1]\n[1] https://www.postgresql.org/message-id/CAA4eK1Kyi037XZzyrLE71MS2KoMmNSqa6RrQLdSCeeL27gnL%2BA%40mail.gmail.com\nIt will be fixed in the next patch (v12)\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 7 Jan 2021 14:53:23 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 6, 2021 at 2:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > I think it makes sense. If there can be a race between the tablesync\n> > re-launching (after error), and the AlterSubscription_refresh removing\n> > some table’s relid from the subscription then there could be lurking\n> > slot/origin tablesync resources (of the removed table) which a\n> > subsequent DROP SUBSCRIPTION cannot discover. I will think more about\n> > how/if it is possible to make this happen. Anyway, I suppose I ought\n> > to refactor/isolate some of the tablesync cleanup code in case it\n> > needs to be commonly called from DropSubscription and/or from\n> > AlterSubscription_refresh.\n> >\n>\n> Fair enough.\n>\n\nI think before implementing, we should once try to reproduce this\ncase. I understand this is a timing issue and can be reproduced only\nwith the help of debugger but we should do that.\n\n> BTW, I have analyzed whether we need any modifications to\n> pg_dump/restore for this patch as this changes the state of one of the\n> fields in the system table and concluded that we don't need any\n> change. For subscriptions, we don't dump any of the information from\n> pg_subscription_rel, rather we just dump subscriptions with the\n> connect option as false which means users need to enable the\n> subscription and refresh publication after restore. I have checked\n> this in the code and tested it as well. 
The related information is\n> present in pg_dump doc page [1], see from \"When dumping logical\n> replication subscriptions ....\".\n>\n\nI have further analyzed that we don't need to do anything w.r.t\npg_upgrade as well because it uses pg_dump/pg_dumpall to dump the\nschema info of the old cluster and then restore it to the new cluster.\nAnd, we know that pg_dump ignores the info in pg_subscription_rel, so\nwe don't need to change anything as our changes are specific to the\nstate of one of the columns in pg_subscription_rel. I have not tested\nthis but we should test it by having some relations in not_ready state\nand then allow the old cluster (<=PG13) to be upgraded to new (pg14)\nboth with and without this patch and see if there is any change in\nbehavior.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Jan 2021 09:53:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v12 patch for the Tablesync Solution1.\n\nDifferences from v11:\n + Added PG docs to mention the tablesync slot\n + Refactored tablesync slot drop (done by\nDropSubscription/process_syncing_tables_for_sync)\n + Fixed PG docs mentioning wrong state code\n + Fixed wrong code comment describing TCOPYDONE state\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary. The\ntablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync slot cleanup (drop) code is added for DropSubscription\nand for process_syncing_tables_for_sync functions\n\n* The tablesync worker is now allowing multiple tx instead of single tx\n\n* A new state (SUBREL_STATE_TCOPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_TCOPYDONE then\nit will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* The DropSubscription cleanup code was enhanced (v7+) to take care of\nany crashed tablesync workers.\n\n* Updates to PG docs\n\nTODO / Known Issues:\n\n* Address review comments\n\n* Patch applies with whitespace warning\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 7 Jan 2021 18:52:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 8:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 30, 2020 at 11:51 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Dec 23, 2020 at 8:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 1.\n> > > + * Rarely, the DropSubscription may be issued when a tablesync still\n> > > + * is in SYNCDONE but not yet in READY state. If this happens then\n> > > + * the drop slot could fail because it is already dropped.\n> > > + * In this case suppress and drop slot error.\n> > > + *\n> > > + * FIXME - Is there a better way than this?\n> > > + */\n> > > + if (rstate->state != SUBREL_STATE_SYNCDONE)\n> > > + PG_RE_THROW();\n> > >\n> > > So, does this situation happens when we try to drop subscription after\n> > > the state is changed to syncdone but not syncready. If so, then can't\n> > > we write a function GetSubscriptionNotDoneRelations similar to\n> > > GetSubscriptionNotReadyRelations where we get a list of relations that\n> > > are not in done stage. I think this should be safe because once we are\n> > > here we shouldn't be allowed to start a new worker and old workers are\n> > > already stopped by this function.\n> >\n> > Yes, but I don't see how adding such a function is an improvement over\n> > the existing code:\n> >\n>\n> The advantage is that we don't need to use try..catch to deal with\n> such conditions which I don't think is a good way to deal with such\n> cases. Also, even after using try...catch, still, we can leak the\n> slots because the patch drops the slot after changing the state to\n> syncdone and if there is any error while dropping the slot, it simply\n> skips it. 
So, it is possible that the rel state is syncdone but the\n> slot still exists and we get an error due to some different reason,\n> and then we will silently skip it again and allow the subscription to\n> be dropped.\n>\n> I think instead what we should do is to drop the slot before we change\n> the rel state to syncdone. Also, if the apply workers fail to drop the\n> slot, it should try to again drop it after restart. In\n> DropSubscription, we can then check if the rel state is not SYNC or\n> READY, we can drop the corresponding slots.\n>\n\nFixed as suggested in latest patch [v12]\n\n----\n\n[v12] = https://www.postgresql.org/message-id/CAHut%2BPsonJzarxSBWkOM%3DMjoEpaq53ShBJoTT9LHJskwP3OvZA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 7 Jan 2021 19:05:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 10:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > 3.\n> > > +#define SUBREL_STATE_COPYDONE 'C' /* tablesync copy phase is completed */\n> > >\n> > > You can mention in the comments that sublsn will be NULL for this\n> > > state as it is mentioned for other similar states. Can we think of\n> > > using any letter in lower case for this as all other states are in\n> > > lower-case except for this which makes it a look bit odd? We can use\n> > > 'f' or 'e' and describe it as 'copy finished' or 'copy end'. I am fine\n> > > if you have any better ideas.\n> > >\n> >\n> > Fixed in latest patch [v11]\n> >\n>\n> It is still not reflected in the docs. See below:\n> --- a/doc/src/sgml/catalogs.sgml\n> +++ b/doc/src/sgml/catalogs.sgml\n> @@ -7651,6 +7651,7 @@ SCRAM-SHA-256$<replaceable><iteration\n> count></replaceable>:<replaceable>&l\n> State code:\n> <literal>i</literal> = initialize,\n> <literal>d</literal> = data is being copied,\n> + <literal>C</literal> = table data has been copied,\n> <literal>s</literal> = synchronized,\n>\n\nFixed in latest patch [v12]\n\n----\n[v12] = https://www.postgresql.org/message-id/CAHut%2BPsonJzarxSBWkOM%3DMjoEpaq53ShBJoTT9LHJskwP3OvZA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 7 Jan 2021 19:08:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 2:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n> Okay, if we want to go that way then we should add some documentation\n> about it. Currently, the slot name used by apply worker is known to\n> the user because either it is specified by the user or the default is\n> subscription name, so the user can manually remove that slot later but\n> that is not true for tablesync slots. I think we need to update both\n> the Drop Subscription page [1] and logical-replication-subscription\n> page [2] where we have mentioned temporary slots and in the end \"Here\n> are some scenarios: ..\" to mention about these slots and probably how\n> their names are generated so that in such special situations users can\n> drop them manually.\n>\n> [1] - https://www.postgresql.org/docs/devel/sql-dropsubscription.html\n> [2] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n>\n\nPG docs updated in latest patch [v12]\n\n----\n[v12] = https://www.postgresql.org/message-id/CAHut%2BPsonJzarxSBWkOM%3DMjoEpaq53ShBJoTT9LHJskwP3OvZA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 7 Jan 2021 19:11:45 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 3:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 6, 2021 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 6, 2021 at 2:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > I think it makes sense. If there can be a race between the tablesync\n> > > re-launching (after error), and the AlterSubscription_refresh removing\n> > > some table’s relid from the subscription then there could be lurking\n> > > slot/origin tablesync resources (of the removed table) which a\n> > > subsequent DROP SUBSCRIPTION cannot discover. I will think more about\n> > > how/if it is possible to make this happen. Anyway, I suppose I ought\n> > > to refactor/isolate some of the tablesync cleanup code in case it\n> > > needs to be commonly called from DropSubscription and/or from\n> > > AlterSubscription_refresh.\n> > >\n> >\n> > Fair enough.\n> >\n>\n> I think before implementing, we should once try to reproduce this\n> case. I understand this is a timing issue and can be reproduced only\n> with the help of debugger but we should do that.\n\nFYI, I was able to reproduce this case in debugger. PSA logs showing details.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 8 Jan 2021 12:43:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "> PSA the v12 patch for the Tablesync Solution1.\r\n> \r\n> Differences from v11:\r\n> + Added PG docs to mention the tablesync slot\r\n> + Refactored tablesync slot drop (done by\r\n> DropSubscription/process_syncing_tables_for_sync)\r\n> + Fixed PG docs mentioning wrong state code\r\n> + Fixed wrong code comment describing TCOPYDONE state\r\n> \r\nHi\r\n\r\nI looked into the new patch and have some comments.\r\n\r\n1.\r\n+\t/* Setup replication origin tracking. */\r\n①+\toriginid = replorigin_by_name(originname, true);\r\n+\tif (!OidIsValid(originid))\r\n+\t{\r\n\r\n②+\t\t\toriginid = replorigin_by_name(originname, true);\r\n+\t\t\tif (originid != InvalidRepOriginId)\r\n+\t\t\t{\r\n\r\nThere are two different code styles here that check whether originid is valid.\r\nBoth are fine, but do you think it’s better to use the same style here?\r\n\r\n\r\n2.\r\n *\t\t state to SYNCDONE. There might be zero changes applied between\r\n *\t\t CATCHUP and SYNCDONE, because the sync worker might be ahead of the\r\n *\t\t apply worker.\r\n+ *\t - The sync worker has a intermediary state TCOPYDONE which comes after\r\n+ *\t\tDATASYNC and before SYNCWAIT. This state indicates that the initial\r\n\r\nThis comment about TCOPYDONE would be better placed at [1]*, which is between DATASYNC and SYNCWAIT.\r\n\r\n *\t - Tablesync worker starts; changes table state from INIT to DATASYNC while\r\n *\t\t copying.\r\n [1]*\r\n *\t - Tablesync worker finishes the copy and sets table state to SYNCWAIT;\r\n *\t\t waits for state change.\r\n\r\n3.\r\n+\t/*\r\n+\t * To build a slot name for the sync work, we are limited to NAMEDATALEN -\r\n+\t * 1 characters.\r\n+\t *\r\n+\t * The name is calculated as pg_%u_sync_%u (3 + 10 + 6 + 10 + '\\0'). 
(It's\r\n+\t * actually the NAMEDATALEN on the remote that matters, but this scheme\r\n+\t * will also work reasonably if that is different.)\r\n+\t */\r\n+\tStaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\");\t/* for sanity */\r\n+\r\n+\tsyncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\r\n\r\nThe comment says syncslotname is limited to NAMEDATALEN - 1 characters.\r\nBut its actual size is (3 + 10 + 6 + 10 + '\\0') = 30, which is not NAMEDATALEN - 1.\r\nShould we change the comment here?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\n\n",
"msg_date": "Fri, 8 Jan 2021 02:02:40 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 7:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Jan 7, 2021 at 3:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 6, 2021 at 3:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 6, 2021 at 2:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > I think it makes sense. If there can be a race between the tablesync\n> > > > re-launching (after error), and the AlterSubscription_refresh removing\n> > > > some table’s relid from the subscription then there could be lurking\n> > > > slot/origin tablesync resources (of the removed table) which a\n> > > > subsequent DROP SUBSCRIPTION cannot discover. I will think more about\n> > > > how/if it is possible to make this happen. Anyway, I suppose I ought\n> > > > to refactor/isolate some of the tablesync cleanup code in case it\n> > > > needs to be commonly called from DropSubscription and/or from\n> > > > AlterSubscription_refresh.\n> > > >\n> > >\n> > > Fair enough.\n> > >\n> >\n> > I think before implementing, we should once try to reproduce this\n> > case. I understand this is a timing issue and can be reproduced only\n> > with the help of debugger but we should do that.\n>\n> FYI, I was able to reproduce this case in debugger. PSA logs showing details.\n>\n\nThanks for reproducing as I was worried about exactly this case. I\nhave one question related to logs:\n\n##\n## ALTER SUBSCRIPTION to REFRESH the publication\n\n## This blocks on some latch until the tablesync worker dies, then it continues\n##\n\nDid you check which exact latch or lock blocks this? It is important\nto retain this interlock as otherwise even if decide to drop slot (and\nor origin) the tablesync worker might continue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Jan 2021 08:20:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v13 patch for the Tablesync Solution1.\n\nDifferences from v12:\n+ Fixed whitespace errors of v12-0001\n+ Modify TCOPYDONE state comment (houzj feedback)\n+ WIP fix for AlterSubscripion_refresh (Amit feedback)\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary. The\ntablesync slot name is no longer tied to the Subscription slot na\n\n* The tablesync slot cleanup (drop) code is added for DropSubscription\nand for process_syncing_tables_for_sync functions\n\n* The tablesync worker is now allowing multiple tx instead of single tx\n\n* A new state (SUBREL_STATE_TCOPYDONE) is persisted after a successful\ncopy_table in LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_TCOPYDONE then\nit will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* The DropSubscription cleanup code was enhanced (v7+) to take care of\nany crashed tablesync workers.\n\n* Updates to PG docs\n\nTODO / Known Issues:\n\n* Address review comments\n\n* ALTER PUBLICATION DROP TABLE can mean knowledge of tablesyncs gets\nlost causing resource cleanup to be missed. There is a WIP fix for\nthis in the AlterSubscription_refresh, however it is not entirely\ncorrect; there are known race conditions. 
See FIXME comments.\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Thu, Jan 7, 2021 at 6:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Amit.\n>\n> PSA the v12 patch for the Tablesync Solution1.\n>\n> Differences from v11:\n> + Added PG docs to mention the tablesync slot\n> + Refactored tablesync slot drop (done by\n> DropSubscription/process_syncing_tables_for_sync)\n> + Fixed PG docs mentioning wrong state code\n> + Fixed wrong code comment describing TCOPYDONE state\n>\n> ====\n>\n> Features:\n>\n> * The tablesync slot is now permanent instead of temporary. The\n> tablesync slot name is no longer tied to the Subscription slot na\n>\n> * The tablesync slot cleanup (drop) code is added for DropSubscription\n> and for process_syncing_tables_for_sync functions\n>\n> * The tablesync worker is now allowing multiple tx instead of single tx\n>\n> * A new state (SUBREL_STATE_TCOPYDONE) is persisted after a successful\n> copy_table in LogicalRepSyncTableStart.\n>\n> * If a re-launched tablesync finds state SUBREL_STATE_TCOPYDONE then\n> it will bypass the initial copy_table phase.\n>\n> * Now tablesync sets up replication origin tracking in\n> LogicalRepSyncTableStart (similar as done for the apply worker). The\n> origin is advanced when first created.\n>\n> * The tablesync replication origin tracking is cleaned up during\n> DropSubscription and/or process_syncing_tables_for_apply.\n>\n> * The DropSubscription cleanup code was enhanced (v7+) to take care of\n> any crashed tablesync workers.\n>\n> * Updates to PG docs\n>\n> TODO / Known Issues:\n>\n> * Address review comments\n>\n> * Patch applies with whitespace warning\n>\n> ---\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia",
"msg_date": "Fri, 8 Jan 2021 20:11:48 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 1:02 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > PSA the v12 patch for the Tablesync Solution1.\n> >\n> > Differences from v11:\n> > + Added PG docs to mention the tablesync slot\n> > + Refactored tablesync slot drop (done by\n> > DropSubscription/process_syncing_tables_for_sync)\n> > + Fixed PG docs mentioning wrong state code\n> > + Fixed wrong code comment describing TCOPYDONE state\n> >\n> Hi\n>\n> I look into the new patch and have some comments.\n>\n> 1.\n> + /* Setup replication origin tracking. */\n> ①+ originid = replorigin_by_name(originname, true);\n> + if (!OidIsValid(originid))\n> + {\n>\n> ②+ originid = replorigin_by_name(originname, true);\n> + if (originid != InvalidRepOriginId)\n> + {\n>\n> There are two different style code which check whether originid is valid.\n> Both are fine, but do you think it’s better to have a same style here?\n\nYes. I think the 1st style is better, so I used the OidIsValid for all\nthe new code of the patch.\nBut the check in DropSubscription is an exception; there I used 2nd\nstyle but ONLY to be consistent with another originid check which\nalready existed in that same function.\n\n>\n>\n> 2.\n> * state to SYNCDONE. There might be zero changes applied between\n> * CATCHUP and SYNCDONE, because the sync worker might be ahead of the\n> * apply worker.\n> + * - The sync worker has a intermediary state TCOPYDONE which comes after\n> + * DATASYNC and before SYNCWAIT. This state indicates that the initial\n>\n> This comment about TCOPYDONE is better to be placed at [1]*, where is between DATASYNC and SYNCWAIT.\n>\n> * - Tablesync worker starts; changes table state from INIT to DATASYNC while\n> * copying.\n> [1]*\n> * - Tablesync worker finishes the copy and sets table state to SYNCWAIT;\n> * waits for state change.\n>\n\nAgreed. 
I have moved the comment per your suggestion (and I also\nre-worded it again).\nFixed in latest patch [v13]\n\n> 3.\n> + /*\n> + * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> + * 1 characters.\n> + *\n> + * The name is calculated as pg_%u_sync_%u (3 + 10 + 6 + 10 + '\\0'). (It's\n> + * actually the NAMEDATALEN on the remote that matters, but this scheme\n> + * will also work reasonably if that is different.)\n> + */\n> + StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> +\n> + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n>\n> The comments says syncslotname is limit to NAMEDATALEN - 1 characters.\n> But the actual size of it is (3 + 10 + 6 + 10 + '\\0') = 30,which seems not NAMEDATALEN - 1.\n> Should we change the comment here?\n>\n\nThe comment wording is a remnant from older code which had a\ndifferently formatted slot name.\nI think the comment is still valid, albeit maybe unnecessary since in\nthe current code the tablesync slot\nname length is fixed. But I left the older comment here as a safety reminder\nin case some future change would want to modify the slot name. What do\nyou think?\n\n----\n[v13] = https://www.postgresql.org/message-id/CAHut%2BPvby4zg6kM1RoGd_j-xs9OtPqZPPVhbiC53gCCRWdNSrw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 8 Jan 2021 20:25:15 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 2:55 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 1:02 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> >\n>\n> > 3.\n> > + /*\n> > + * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> > + * 1 characters.\n> > + *\n> > + * The name is calculated as pg_%u_sync_%u (3 + 10 + 6 + 10 + '\\0'). (It's\n> > + * actually the NAMEDATALEN on the remote that matters, but this scheme\n> > + * will also work reasonably if that is different.)\n> > + */\n> > + StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> > +\n> > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> >\n> > The comments says syncslotname is limit to NAMEDATALEN - 1 characters.\n> > But the actual size of it is (3 + 10 + 6 + 10 + '\\0') = 30,which seems not NAMEDATALEN - 1.\n> > Should we change the comment here?\n> >\n>\n> The comment wording is a remnant from older code which had a\n> differently format slot name.\n> I think the comment is still valid, albeit maybe unnecessary since in\n> the current code the tablesync slot\n> name length is fixed. But I left the older comment here as a safety reminder\n> in case some future change would want to modify the slot name. What do\n> you think?\n>\n\nI find it quite confusing. The comments should reflect the latest\ncode. You can probably say in some form that the length of slotname\nshouldn't exceed NAMEDATALEN because of remote node constraints on\nslot name length. 
Also, probably the StaticAssert on NAMEDATALEN is\nnot required.\n\n1.\n+ <para>\n+ Additional table synchronization slots are normally transient, created\n+ internally and dropped automatically when they are no longer needed.\n+ These table synchronization slots have generated names:\n+ <quote><literal>pg_%u_sync_%u</literal></quote> (parameters:\nSubscription <parameter>oid</parameter>, Table\n<parameter>relid</parameter>)\n+ </para>\n\nThe last line seems too long. I think we are not strict for 80 char\nlimit in docs but it is good to be close to that, however, this\nappears quite long.\n\n2.\n+ /*\n+ * Cleanup any remaining tablesync resources.\n+ */\n+ {\n+ char originname[NAMEDATALEN];\n+ RepOriginId originid;\n+ char state;\n+ XLogRecPtr statelsn;\n\nI have already mentioned previously that let's not use this new style\nof code (start using { to localize the scope of variables). I don't\nknow about others but I find it difficult to read such a code. You\nmight want to consider moving this whole block to a separate function.\n\n3.\n/*\n+ * XXX - Should optimize this to avoid multiple\n+ * connect/disconnect.\n+ */\n+ wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n\nI think it is better to avoid multiple connect/disconnect here. 
In\nthis same function, we have connected to the publisher, we should be\nable to use the same connection.\n\n4.\nprocess_syncing_tables_for_sync()\n{\n..\n+ /*\n+ * Cleanup the tablesync slot.\n+ */\n+ syncslotname = ReplicationSlotNameForTablesync(\n+ MySubscription->oid,\n+ MyLogicalRepWorker->relid);\n+ PG_TRY();\n+ {\n+ elog(DEBUG1, \"process_syncing_tables_for_sync: dropping the\ntablesync slot \\\"%s\\\".\", syncslotname);\n+ ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n+ }\n+ PG_FINALLY();\n+ {\n+ pfree(syncslotname);\n+ }\n+ PG_END_TRY();\n..\n}\n\nBoth here and in DropSubscription(), it seems we are using\nPG_TRY..PG_FINALLY just to free the memory even though\nReplicationSlotDropAtPubNode already has try..finally. Can we arrange\ncode to move allocation of syncslotname inside\nReplicationSlotDropAtPubNode to avoid an additional try..finally? BTW, if\nthe usage of try..finally here is only to free the memory, I am not\nsure if it is required because I think we will anyway Reset the memory\ncontext where this memory is allocated as part of error handling.\n\n5.\n #define SUBREL_STATE_DATASYNC 'd' /* data is being synchronized (sublsn\n * NULL) */\n+#define SUBREL_STATE_TCOPYDONE 't' /* tablesync copy phase is completed\n+ * (sublsn NULL) */\n #define SUBREL_STATE_SYNCDONE 's' /* synchronization finished in front of\n * apply (sublsn set) */\n\nI am not very happy with the new state name SUBREL_STATE_TCOPYDONE as\nit is quite different from other adjoining state names and somehow does not\ngo well with the code. How about SUBREL_STATE_ENDCOPY 'e' or\nSUBREL_STATE_FINISHEDCOPY 'f'?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 9 Jan 2021 12:16:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 7:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > FYI, I was able to reproduce this case in debugger. PSA logs showing details.\n> >\n>\n> Thanks for reproducing as I was worried about exactly this case. I\n> have one question related to logs:\n>\n> ##\n> ## ALTER SUBSCRIPTION to REFRESH the publication\n>\n> ## This blocks on some latch until the tablesync worker dies, then it continues\n> ##\n>\n> Did you check which exact latch or lock blocks this?\n>\n\nI have checked this myself and the command is waiting on the drop of\norigin till the tablesync worker is finished because replorigin_drop()\nrequires state->acquired_by to be 0 which will only be true once the\ntablesync worker exits. I think this is the reason you might have\nnoticed that the command can't be finished until the tablesync worker\ndied. So this can't be an interlock between ALTER SUBSCRIPTION ..\nREFRESH command and tablesync worker and to that end it seems you have\nbelow Fixme's in the patch:\n\n+ * FIXME - Usually this cleanup would be OK, but will not\n+ * always be OK because the logicalrep_worker_stop_at_commit\n+ * only \"flags\" the worker to be stopped in the near future\n+ * but meanwhile it may still be running. In this case there\n+ * could be a race between the tablesync worker and this code\n+ * to see who will succeed with the tablesync drop (and the\n+ * loser will ERROR).\n+ *\n+ * FIXME - Also, checking the state is also not guaranteed\n+ * correct because state might be TCOPYDONE when we checked\n+ * but has since progressed to SYNDONE\n+ */\n+\n+ if (state == SUBREL_STATE_TCOPYDONE)\n+ {\n\nI feel this was okay for an earlier code but now we need to stop the\ntablesync workers before trying to drop the slot as we do in\nDropSubscription. 
Now, if we do that then that would fix the race\nconditions mentioned in Fixme but still, there are a few more things I\nam worried about: (a) What if the launcher again starts the tablesync\nworker? One idea could be to acquire AccessExclusiveLock on\nSubscriptionRelationId as we do in DropSubscription which is not a\nvery good idea but I can't think of any other good way. (b) the patch\nis just checking SUBREL_STATE_TCOPYDONE before dropping the\nreplication slot but the slot could be created even before that (in\nSUBREL_STATE_DATASYNC state). One idea could be we can try to drop the\nslot and if we are not able to drop then we can simply continue\nassuming it didn't exist.\n\nOne minor comment:\n1.\n+ SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;\n MyLogicalRepWorker->relstate_lsn = current_lsn;\n-\n\nSpurious line removal.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Jan 2021 10:04:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 3:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > BTW, I have analyzed whether we need any modifications to\n> > pg_dump/restore for this patch as this changes the state of one of the\n> > fields in the system table and concluded that we don't need any\n> > change. For subscriptions, we don't dump any of the information from\n> > pg_subscription_rel, rather we just dump subscriptions with the\n> > connect option as false which means users need to enable the\n> > subscription and refresh publication after restore. I have checked\n> > this in the code and tested it as well. The related information is\n> > present in pg_dump doc page [1], see from \"When dumping logical\n> > replication subscriptions ....\".\n> >\n>\n> I have further analyzed that we don't need to do anything w.r.t\n> pg_upgrade as well because it uses pg_dump/pg_dumpall to dump the\n> schema info of the old cluster and then restore it to the new cluster.\n> And, we know that pg_dump ignores the info in pg_subscription_rel, so\n> we don't need to change anything as our changes are specific to the\n> state of one of the columns in pg_subscription_rel. 
I have not tested\n> this but we should test it by having some relations in not_ready state\n> and then allow the old cluster (<=PG13) to be upgraded to new (pg14)\n> both with and without this patch and see if there is any change in\n> behavior.\n\nI have tested this scenario, stopped a server running PG_13 when\nsubscription table sync was in progress.\nOne of the tables in pg_subscription_rel was still in 'd' state (DATASYNC)\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+------------\n 16424 | 16384 | d |\n 16424 | 16390 | r | 0/247A63D8\n 16424 | 16395 | r | 0/247A6410\n 16424 | 16387 | r | 0/247A6448\n(4 rows)\n\nthen initiated the pg_upgrade to PG_14 with the patch and without the patch:\nI see that the subscription exists but is not enabled:\n\npostgres=# select * from pg_subscription;\n oid | subdbid | subname | subowner | subenabled | subbinary |\nsubstream | subconninfo | subslotname |\nsubsynccommit | subpublications\n-------+---------+---------+----------+------------+-----------+-----------+------------------------------------------+-------------+---------------+-----------------\n 16407 | 16401 | tap_sub | 10 | f | f | f\n | host=localhost port=6972 dbname=postgres | tap_sub | off\n | {tap_pub}\n(1 row)\n\nand looking at the pg_subscription_rel:\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+----------\n(0 rows)\n\nAs can be seen, none of the data in the pg_subscription_rel has been\ncopied over. Same behaviour is seen with the patch and without the\npatch.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Jan 2021 21:23:36 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 3:53 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Thu, Jan 7, 2021 at 3:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > BTW, I have analyzed whether we need any modifications to\n> > > pg_dump/restore for this patch as this changes the state of one of the\n> > > fields in the system table and concluded that we don't need any\n> > > change. For subscriptions, we don't dump any of the information from\n> > > pg_subscription_rel, rather we just dump subscriptions with the\n> > > connect option as false which means users need to enable the\n> > > subscription and refresh publication after restore. I have checked\n> > > this in the code and tested it as well. The related information is\n> > > present in pg_dump doc page [1], see from \"When dumping logical\n> > > replication subscriptions ....\".\n> > >\n> >\n> > I have further analyzed that we don't need to do anything w.r.t\n> > pg_upgrade as well because it uses pg_dump/pg_dumpall to dump the\n> > schema info of the old cluster and then restore it to the new cluster.\n> > And, we know that pg_dump ignores the info in pg_subscription_rel, so\n> > we don't need to change anything as our changes are specific to the\n> > state of one of the columns in pg_subscription_rel. I have not tested\n> > this but we should test it by having some relations in not_ready state\n> > and then allow the old cluster (<=PG13) to be upgraded to new (pg14)\n> > both with and without this patch and see if there is any change in\n> > behavior.\n>\n> I have tested this scenario, stopped a server running PG_13 when\n> subscription table sync was in progress.\n>\n\nThanks for the test. This confirms my analysis and we don't need any\nchange in pg_dump or pg_upgrade for this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Jan 2021 16:30:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v14 patch for the Tablesync Solution1.\n\nMain differences from v13:\n+ Addresses all review comments 1-5, posted 9/Jan [ak9]\n+ Addresses review comment 1, posted 11/Jan [ak11]\n+ Modifications per suggestion [ak11] to handle race scenarios during\nDrop/AlterSubscription\n+ Changed LOG to WARNING if DropSubscription unable to drop tablesync slot\n\n[ak9] = https://www.postgresql.org/message-id/CAA4eK1%2BgUBxKcYWg%2BMCC6Qbw-My%2B2wKUct%2BiFtr-_HgundUUBQ%40mail.gmail.com\n[ak11] = https://www.postgresql.org/message-id/CAA4eK1KGUt86A7CfuQW6OeDvAhEbVk8VOBJmcoZjrYBn965kOA%40mail.gmail.com\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary.\n\n* The tablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync slot cleanup (drop) code is added for\nDropSubscription, AlterSubscription_refresh and for\nprocess_syncing_tables_for_sync functions. Drop/AlterSubscription will\nissue WARNING instead of ERROR in case the slot drop fails.\n\n* The tablesync worker is now allowing multiple tx instead of single tx\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). 
The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* The DropSubscription cleanup code was enhanced (v7+) to take care of\nany crashed tablesync workers.\n\n* The AlterSubscription_refresh (v14+) is now more similar to\nDropSubscription w.r.t to stopping workers for any \"removed\" tables.\n\n* Updates to PG docs.\n\nTODO / Known Issues:\n\n* Minor review comments\n\n===\n\nAlso PSA some detailed logging evidence of some test scenarios\ninvolving Drop/AlterSubscription:\n+ Test-20210112-AlterSubscriptionRefresh-ok.txt =\nAlterSubscription_refresh which successfully drops a tablesync slot\n+ Test-20210112-AlterSubscriptionRefresh-warning.txt =\nAlterSubscription_refresh gives WARNING that it cannot drop the\ntablesync slot (which no longer exists)\n+ Test-20210112-DropSubscription-warning.txt = DropSubscription with a\ndisassociated slot_name gives a WARNING that it cannot drop the\ntablesync slot (due to broken connection)\n\n---\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 12 Jan 2021 22:53:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 9, 2021 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 2:55 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 1:02 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > >\n> >\n> > > 3.\n> > > + /*\n> > > + * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> > > + * 1 characters.\n> > > + *\n> > > + * The name is calculated as pg_%u_sync_%u (3 + 10 + 6 + 10 + '\\0'). (It's\n> > > + * actually the NAMEDATALEN on the remote that matters, but this scheme\n> > > + * will also work reasonably if that is different.)\n> > > + */\n> > > + StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> > > +\n> > > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> > >\n> > > The comments says syncslotname is limit to NAMEDATALEN - 1 characters.\n> > > But the actual size of it is (3 + 10 + 6 + 10 + '\\0') = 30,which seems not NAMEDATALEN - 1.\n> > > Should we change the comment here?\n> > >\n> >\n> > The comment wording is a remnant from older code which had a\n> > differently format slot name.\n> > I think the comment is still valid, albeit maybe unnecessary since in\n> > the current code the tablesync slot\n> > name length is fixed. But I left the older comment here as a safety reminder\n> > in case some future change would want to modify the slot name. What do\n> > you think?\n> >\n>\n> I find it quite confusing. The comments should reflect the latest\n> code. You can probably say in some form that the length of slotname\n> shouldn't exceed NAMEDATALEN because of remote node constraints on\n> slot name length. 
Also, probably the StaticAssert on NAMEDATALEN is\n> not required.\n\nModified comment in latest patch [v14]\n\n>\n> 1.\n> + <para>\n> + Additional table synchronization slots are normally transient, created\n> + internally and dropped automatically when they are no longer needed.\n> + These table synchronization slots have generated names:\n> + <quote><literal>pg_%u_sync_%u</literal></quote> (parameters:\n> Subscription <parameter>oid</parameter>, Table\n> <parameter>relid</parameter>)\n> + </para>\n>\n> The last line seems too long. I think we are not strict for 80 char\n> limit in docs but it is good to be close to that, however, this\n> appears quite long.\n\nFixed in latest patch [v14]\n\n>\n> 2.\n> + /*\n> + * Cleanup any remaining tablesync resources.\n> + */\n> + {\n> + char originname[NAMEDATALEN];\n> + RepOriginId originid;\n> + char state;\n> + XLogRecPtr statelsn;\n>\n> I have already mentioned previously that let's not use this new style\n> of code (start using { to localize the scope of variables). I don't\n> know about others but I find it difficult to read such a code. You\n> might want to consider moving this whole block to a separate function.\n>\n\nRemoved extra code block in latest patch [v14]\n\n> 3.\n> /*\n> + * XXX - Should optimize this to avoid multiple\n> + * connect/disconnect.\n> + */\n> + wrconn = walrcv_connect(sub->conninfo, true, sub->name, &err);\n>\n> I think it is better to avoid multiple connect/disconnect here. 
In\n> this same function, we have connected to the publisher, we should be\n> able to use the same connection.\n>\n\nFixed in latest patch [v14]\n\n> 4.\n> process_syncing_tables_for_sync()\n> {\n> ..\n> + /*\n> + * Cleanup the tablesync slot.\n> + */\n> + syncslotname = ReplicationSlotNameForTablesync(\n> + MySubscription->oid,\n> + MyLogicalRepWorker->relid);\n> + PG_TRY();\n> + {\n> + elog(DEBUG1, \"process_syncing_tables_for_sync: dropping the\n> tablesync slot \\\"%s\\\".\", syncslotname);\n> + ReplicationSlotDropAtPubNode(wrconn, syncslotname);\n> + }\n> + PG_FINALLY();\n> + {\n> + pfree(syncslotname);\n> + }\n> + PG_END_TRY();\n> ..\n> }\n>\n> Both here and in DropSubscription(), it seems we are using\n> PG_TRY..PG_FINALLY just to free the memory even though\n> ReplicationSlotDropAtPubNode already has try..finally. Can we arrange\n> code to move allocation of syncslotname inside\n> ReplicationSlotDropAtPubNode to avoid additional try..finaly? BTW, if\n> the usage of try..finally here is only to free the memory, I am not\n> sure if it is required because I think we will anyway Reset the memory\n> context where this memory is allocated as part of error handling.\n>\n\nEliminated need for TRY/FINALLY to free syncslotname in latest patch [v14]\n\n> 5.\n> #define SUBREL_STATE_DATASYNC 'd' /* data is being synchronized (sublsn\n> * NULL) */\n> +#define SUBREL_STATE_TCOPYDONE 't' /* tablesync copy phase is completed\n> + * (sublsn NULL) */\n> #define SUBREL_STATE_SYNCDONE 's' /* synchronization finished in front of\n> * apply (sublsn set) */\n>\n> I am not very happy with the new state name SUBREL_STATE_TCOPYDONE as\n> it is quite different from other adjoining state names and somehow not\n> going well with the code. 
How about SUBREL_STATE_ENDCOPY 'e' or\n> SUBREL_STATE_FINISHEDCOPY 'f'?\n>\n\nUsing SUBREL_STATE_FINISHEDCOPY in latest patch [v14]\n\n---\n[v14] = https://www.postgresql.org/message-id/CAHut%2BPsPO2vOp%2BP7U2szROMy15PKKGanhUrCYQ0ffpy9zG1V1A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Jan 2021 23:13:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 7:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > FYI, I was able to reproduce this case in debugger. PSA logs showing details.\n> > >\n> >\n> > Thanks for reproducing as I was worried about exactly this case. I\n> > have one question related to logs:\n> >\n> > ##\n> > ## ALTER SUBSCRIPTION to REFRESH the publication\n> >\n> > ## This blocks on some latch until the tablesync worker dies, then it continues\n> > ##\n> >\n> > Did you check which exact latch or lock blocks this?\n> >\n>\n> I have checked this myself and the command is waiting on the drop of\n> origin till the tablesync worker is finished because replorigin_drop()\n> requires state->acquired_by to be 0 which will only be true once the\n> tablesync worker exits. I think this is the reason you might have\n> noticed that the command can't be finished until the tablesync worker\n> died. So this can't be an interlock between ALTER SUBSCRIPTION ..\n> REFRESH command and tablesync worker and to that end it seems you have\n> below Fixme's in the patch:\n\nI have also seen this same blocking reason before in the replorigin_drop().\nHowever, back when I first tested/reproduced the refresh issue\n[test-refresh] that\nAlterSubscription_refresh was still *original* unchanged code, so at\nthat time it did not\nhave any replorigin_drop() in at all. In any case in the latest code\n[v14] the AlterSubscription is\nimmediately stopping the workers so this question may be moot.\n\n>\n> + * FIXME - Usually this cleanup would be OK, but will not\n> + * always be OK because the logicalrep_worker_stop_at_commit\n> + * only \"flags\" the worker to be stopped in the near future\n> + * but meanwhile it may still be running. 
In this case there\n> + * could be a race between the tablesync worker and this code\n> + * to see who will succeed with the tablesync drop (and the\n> + * loser will ERROR).\n> + *\n> + * FIXME - Also, checking the state is also not guaranteed\n> + * correct because state might be TCOPYDONE when we checked\n> + * but has since progressed to SYNDONE\n> + */\n> +\n> + if (state == SUBREL_STATE_TCOPYDONE)\n> + {\n>\n> I feel this was okay for an earlier code but now we need to stop the\n> tablesync workers before trying to drop the slot as we do in\n> DropSubscription. Now, if we do that then that would fix the race\n> conditions mentioned in Fixme but still, there are few more things I\n> am worried about: (a) What if the launcher again starts the tablesync\n> worker? One idea could be to acquire AccessExclusiveLock on\n> SubscriptionRelationId as we do in DropSubscription which is not a\n> very good idea but I can't think of any other good way. (b) the patch\n> is just checking SUBREL_STATE_TCOPYDONE before dropping the\n> replication slot but the slot could be created even before that (in\n> SUBREL_STATE_DATASYNC state). One idea could be we can try to drop the\n> slot and if we are not able to drop then we can simply continue\n> assuming it didn't exist.\n\nThe code was modified in the latest patch [v14] something like as suggested.\n\nThe workers for removed tables are now immediately stopped (like\nDropSubscription does). Although I did include the AccessExclusiveLock\nas (a) suggested, AFAIK this was actually ineffective at preventing\nthe workers relaunching. Instead, I am using\nlogicalrep_worker_stop_at_commit to do this - testing shows it as\nworking ok. Please see the code and latest test logs [v14] for\ndetails.\n\nAlso, now the Drop/AlterSubscription will only give WARNING if unable\nto drop slots, a per suggestion (b). 
This is also tested [v14].\n\n>\n> One minor comment:\n> 1.\n> + SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCDONE;\n> MyLogicalRepWorker->relstate_lsn = current_lsn;\n> -\n>\n> Spurious line removal.\n\nFixed in latest patch [v14]\n\n----\n[v14] = https://www.postgresql.org/message-id/CAHut%2BPsPO2vOp%2BP7U2szROMy15PKKGanhUrCYQ0ffpy9zG1V1A%40mail.gmail.com\n[test-refresh] https://www.postgresql.org/message-id/CAHut%2BPv7YW7AyO_Q_nf9kzogcJcDFQNe8FBP6yXdzowMz3dY_Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Jan 2021 23:47:09 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "> Also PSA some detailed logging evidence of some test scenarios involving\r\n> Drop/AlterSubscription:\r\n> + Test-20210112-AlterSubscriptionRefresh-ok.txt =\r\n> AlterSubscription_refresh which successfully drops a tablesync slot\r\n> + Test-20210112-AlterSubscriptionRefresh-warning.txt =\r\n> AlterSubscription_refresh gives WARNING that it cannot drop the tablesync\r\n> slot (which no longer exists)\r\n> + Test-20210112-DropSubscription-warning.txt = DropSubscription with a\r\n> disassociated slot_name gives a WARNING that it cannot drop the tablesync\r\n> slot (due to broken connection)\r\n\r\nHi\r\n\r\n> * The AlterSubscription_refresh (v14+) is now more similar to DropSubscription w.r.t to stopping workers for any \"removed\" tables.\r\nI have a question about the above feature.\r\n\r\nWith the patch, it seems the worker is not stopped in the case of [1].\r\nI probably missed something; should we stop the worker in such a case?\r\n\r\n[1] https://www.postgresql.org/message-id/CALj2ACV%2B0UFpcZs5czYgBpujM9p0Hg1qdOZai_43OU7bqHU_xw%40mail.gmail.com\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n",
"msg_date": "Wed, 13 Jan 2021 02:06:58 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> 7.\n> @@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,\n> LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n>\n> /* Make sure it's not used by somebody else */\n> - if (replication_state->acquired_by != 0)\n> + if (replication_state->acquired_by != 0 &&\n> replication_state->acquired_by != MyProcPid)\n> {\n>\n> I think you won't need this change if you do replorigin_advance before\n> replorigin_session_setup in your patch.\n>\n\nAs you know the replorigin_session_setup sets the\nreplication_state->acquired_by to be the current PID. So without this\nchange the replorigin_advance rejects that same slot state thinking\nthat it is already active for a different process. The root problem is\nthat the same process/PID calling both functions would hang. So this\npatch change allows the replorigin_advance code to be called by the\nsame process.\n\nIIUC that acquired_by check condition is like a sanity check for the\noriginid passed. The patched code does just what the comment\nsays:\n\"/* Make sure it's not used by somebody else */\"\nDoesn't \"somebody else\" mean \"anyone but me\" (i.e. anyone but MyProcPid)?\n\nAlso, “setup” of a thing generally comes before usage of that thing,\nso won't it seem strange to do as the suggestion says and deliberately\ncall the \"setup\" function 2nd instead of 1st?\n\nCan you please explain why it is better to do it the suggested way\n(switch the calls around) than keep the patch code? Probably there is\na good reason but I am just not understanding it.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Jan 2021 16:47:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 1:07 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > Also PSA some detailed logging evidence of some test scenarios involving\n> > Drop/AlterSubscription:\n> > + Test-20210112-AlterSubscriptionRefresh-ok.txt =\n> > AlterSubscription_refresh which successfully drops a tablesync slot\n> > + Test-20210112-AlterSubscriptionRefresh-warning.txt =\n> > AlterSubscription_refresh gives WARNING that it cannot drop the tablesync\n> > slot (which no longer exists)\n> > + Test-20210112-DropSubscription-warning.txt = DropSubscription with a\n> > disassociated slot_name gives a WARNING that it cannot drop the tablesync\n> > slot (due to broken connection)\n>\n> Hi\n>\n> > * The AlterSubscription_refresh (v14+) is now more similar to DropSubscription w.r.t to stopping workers for any \"removed\" tables.\n> I have an issue about the above feature.\n>\n> With the patch, it seems does not stop the worker in the case of [1].\n> I probably missed something, should we stop the worker in such case ?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACV%2B0UFpcZs5czYgBpujM9p0Hg1qdOZai_43OU7bqHU_xw%40mail.gmail.com\n>\n\nI am not exactly sure of the concern. (If the extra info below does\nnot help can you please describe your concern with more details).\n\nThis [v14] patch code/feature is only referring to the immediate\nstopping of only the *** \"tablesync\" *** worker (if any) for any/each\ntable being removed from the subscription. It has nothing to say about\nthe \"apply\" worker of the subscription, which continues replicating as\nbefore.\n\nOTOH, I think the other mail problem is not really related to the\n\"tablesync\" workers. As you can see (e.g. 
steps 7,8,9,10 of [2]), that\nproblem is described as continuing over multiple transactions to\nreplicate unexpected rows - I think this could only be done by the\nsubscription \"apply\" worker, and is after the \"tablesync\" worker has\ngone away.\n\nSo AFAIK these are 2 quite unrelated problems, and would be solved\nindependently.\n\nIt just happens that they are both exposed using ALTER SUBSCRIPTION\n... REFRESH PUBLICATION;\n\n----\n[v14] = https://www.postgresql.org/message-id/CAHut%2BPsPO2vOp%2BP7U2szROMy15PKKGanhUrCYQ0ffpy9zG1V1A%40mail.gmail.com\n[2] = https://www.postgresql.org/message-id/CALj2ACV%2B0UFpcZs5czYgBpujM9p0Hg1qdOZai_43OU7bqHU_xw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Jan 2021 17:33:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "> I am not exactly sure of the concern. (If the extra info below does not\r\n> help can you please describe your concern with more details).\r\n> \r\n> This [v14] patch code/feature is only referring to the immediate stopping\r\n> of only the *** \"tablesync\" *** worker (if any) for any/each table being\r\n> removed from the subscription. It has nothing to say about the \"apply\" worker\r\n> of the subscription, which continues replicating as before.\r\n> \r\n> OTOH, I think the other mail problem is not really related to the \"tablesync\"\r\n> workers. As you can see (e.g. steps 7,8,9,10 of [2]), that problem is\r\n> described as continuing over multiple transactions to replicate unexpected\r\n> rows - I think this could only be done by the subscription \"apply\" worker,\r\n> and is after the \"tablesync\" worker has gone away.\r\n> \r\n> So AFAIK these are 2 quite unrelated problems, and would be solved\r\n> independently.\r\n> \r\n> It just happens that they are both exposed using ALTER SUBSCRIPTION ...\r\n> REFRESH PUBLICATION;\r\n\r\nSo sorry for the confusion, you are right that these are 2 quite unrelated problems.\r\nI misunderstood the 'stop the worker' here.\r\n\r\n\r\n+\t\t\t\t/* Immediately stop the worker. */\r\n+\t\t\t\tlogicalrep_worker_stop_at_commit(subid, relid); /* prevent re-launching */\r\n+\t\t\t\tlogicalrep_worker_stop(subid, relid); /* stop immediately */\r\n\r\nDo you think we can add some comments to describe what type of worker is stopped here? (the sync worker) \r\nAnd should we add some more comments about the reason for the \"immediately stop\" here? It may make the code easier to understand.\r\n\r\nBest regards,\r\nHouzj\r\n\n\n",
"msg_date": "Wed, 13 Jan 2021 08:00:07 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 1:30 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > I am not exactly sure of the concern. (If the extra info below does not\n> > help can you please describe your concern with more details).\n> >\n> > This [v14] patch code/feature is only referring to the immediate stopping\n> > of only the *** \"tablesync\" *** worker (if any) for any/each table being\n> > removed from the subscription. It has nothing to say about the \"apply\" worker\n> > of the subscription, which continues replicating as before.\n> >\n> > OTOH, I think the other mail problem is not really related to the \"tablesync\"\n> > workers. As you can see (e.g. steps 7,8,9,10 of [2]), that problem is\n> > described as continuing over multiple transactions to replicate unexpected\n> > rows - I think this could only be done by the subscription \"apply\" worker,\n> > and is after the \"tablesync\" worker has gone away.\n> >\n> > So AFAIK these are 2 quite unrelated problems, and would be solved\n> > independently.\n> >\n> > It just happens that they are both exposed using ALTER SUBSCRIPTION ...\n> > REFRESH PUBLICATION;\n>\n> So sorry for the confusion, you are right that these are 2 quite unrelated problems.\n> I misunderstood the 'stop the worker' here.\n>\n>\n> + /* Immediately stop the worker. */\n> + logicalrep_worker_stop_at_commit(subid, relid); /* prevent re-launching */\n> + logicalrep_worker_stop(subid, relid); /* stop immediately */\n>\n> Do you think we can add some comments to describe what type \"worker\" is stop here ? (sync worker here)\n> And should we add some more comments to talk about the reason of \" Immediately stop \" here ? it may looks easier to understand.\n>\n\nAnother thing related to this is why we need to call both\nlogicalrep_worker_stop_at_commit() and logicalrep_worker_stop()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 14:20:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 11:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > 7.\n> > @@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,\n> > LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n> >\n> > /* Make sure it's not used by somebody else */\n> > - if (replication_state->acquired_by != 0)\n> > + if (replication_state->acquired_by != 0 &&\n> > replication_state->acquired_by != MyProcPid)\n> > {\n> >\n> > I think you won't need this change if you do replorigin_advance before\n> > replorigin_session_setup in your patch.\n> >\n>\n> As you know the replorigin_session_setup sets the\n> replication_state->acquired_by to be the current PID. So without this\n> change the replorigin_advance rejects that same slot state thinking\n> that it is already active for a different process. Root problem is\n> that the same process/PID calling both functions would hang.\n>\n\nI think the hang happens only if we call unchanged replorigin_advance\nafter session_setup API, right?\n\n> So this\n> patch change allows replorigin_advance code to be called by self.\n>\n> IIUC that acquired_by check condition is like a sanity check for the\n> originid passed. The patched code only does just like what the comment\n> says:\n> \"/* Make sure it's not used by somebody else */\"\n> Doesn't \"somebody else\" means \"anyone but me\" (i.e. anyone but MyProcPid).\n>\n> Also, “setup” of a thing generally comes before usage of that thing,\n> so won't it seem strange to do (like the suggestion) and deliberately\n> call the \"setup\" function 2nd instead of 1st?\n>\n> Can you please explain why is it better to do it the suggested way\n> (switch the calls around) than keep the patch code? Probably there is\n> a good reason but I am just not understanding it.\n>\n\nBecause there is no requirement for origin_advance API to be called\nafter session setup. 
Session setup is required to mark the node as\nreplaying from a remote node, see [1] whereas origin_advance is used\nfor setting up the initial location or setting a new location, see [2]\n(pg_replication_origin_advance).\n\nNow here, after creating the origin, we need to set up the initial\nlocation and it seems fine to call origin_advance before\nsession_setup. In short, as such, I don't see any problem with your\nchange in replorigin_advance but OTOH, I don't see the need for the\nsame as well. So, let's try to avoid that change unless we can't do\nwithout it.\n\nAlso, another thing is we need to take RowExclusiveLock on\npg_replication_origin as written in comments atop replorigin_advance\nbefore calling it. See its usage in pg_replication_origin_advance.\nAlso, write comments on why we need to use replorigin_advance here\n(... something, like we need to WAL log this for the purpose of\nrecovery...).\n\n[1] - https://www.postgresql.org/docs/devel/replication-origins.html\n[2] - https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-REPLICATION\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 15:50:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Jan 12, 2021 at 6:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 11, 2021 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 8, 2021 at 7:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > FYI, I was able to reproduce this case in debugger. PSA logs showing details.\n> > > >\n> > >\n> > > Thanks for reproducing as I was worried about exactly this case. I\n> > > have one question related to logs:\n> > >\n> > > ##\n> > > ## ALTER SUBSCRIPTION to REFRESH the publication\n> > >\n> > > ## This blocks on some latch until the tablesync worker dies, then it continues\n> > > ##\n> > >\n> > > Did you check which exact latch or lock blocks this?\n> > >\n> >\n> > I have checked this myself and the command is waiting on the drop of\n> > origin till the tablesync worker is finished because replorigin_drop()\n> > requires state->acquired_by to be 0 which will only be true once the\n> > tablesync worker exits. I think this is the reason you might have\n> > noticed that the command can't be finished until the tablesync worker\n> > died. So this can't be an interlock between ALTER SUBSCRIPTION ..\n> > REFRESH command and tablesync worker and to that end it seems you have\n> > below Fixme's in the patch:\n>\n> I have also seen this same blocking reason before in the replorigin_drop().\n> However, back when I first tested/reproduced the refresh issue\n> [test-refresh] that\n> AlterSubscription_refresh was still *original* unchanged code, so at\n> that time it did not\n> have any replorigin_drop() in at all. 
In any case in the latest code\n> [v14] the AlterSubscription is\n> immediately stopping the workers so this question may be moot.\n>\n> >\n> > + * FIXME - Usually this cleanup would be OK, but will not\n> > + * always be OK because the logicalrep_worker_stop_at_commit\n> > + * only \"flags\" the worker to be stopped in the near future\n> > + * but meanwhile it may still be running. In this case there\n> > + * could be a race between the tablesync worker and this code\n> > + * to see who will succeed with the tablesync drop (and the\n> > + * loser will ERROR).\n> > + *\n> > + * FIXME - Also, checking the state is also not guaranteed\n> > + * correct because state might be TCOPYDONE when we checked\n> > + * but has since progressed to SYNDONE\n> > + */\n> > +\n> > + if (state == SUBREL_STATE_TCOPYDONE)\n> > + {\n> >\n> > I feel this was okay for an earlier code but now we need to stop the\n> > tablesync workers before trying to drop the slot as we do in\n> > DropSubscription. Now, if we do that then that would fix the race\n> > conditions mentioned in Fixme but still, there are few more things I\n> > am worried about: (a) What if the launcher again starts the tablesync\n> > worker? One idea could be to acquire AccessExclusiveLock on\n> > SubscriptionRelationId as we do in DropSubscription which is not a\n> > very good idea but I can't think of any other good way. (b) the patch\n> > is just checking SUBREL_STATE_TCOPYDONE before dropping the\n> > replication slot but the slot could be created even before that (in\n> > SUBREL_STATE_DATASYNC state). One idea could be we can try to drop the\n> > slot and if we are not able to drop then we can simply continue\n> > assuming it didn't exist.\n>\n> The code was modified in the latest patch [v14] something like as suggested.\n>\n> The workers for removed tables are now immediately stopped (like\n> DropSubscription does). 
Although I did include the AccessExclusiveLock\n> as (a) suggested, AFAIK this was actually ineffective at preventing\n> the workers relaunching.\n>\n\nThe reason why it was ineffective is that you are locking\nSubscriptionRelationId which is to protect relaunch of apply workers\nnot tablesync workers. But in current form even acquiring\nSubscriptionRelRelationId lock won't serve the purpose because\nprocess_syncing_tables_for_apply() doesn't always acquire it before\nrelaunching the tablesync workers. However, if we acquire\nSubscriptionRelRelationId in process_syncing_tables_for_apply() then\nit would prevent relaunch of workers but not sure if that is a good\nidea. Can you think of some other way?\n\n> Instead, I am using\n> logicalrep_worker_stop_at_commit to do this - testing shows it as\n> working ok. Please see the code and latest test logs [v14] for\n> details.\n>\n\nThere is still a window where it can relaunch. Basically, after you\nstop the worker in AlterSubscription_refresh and till the commit\nhappens apply worker can relaunch the tablesync workers. I don't see\ncode-wise how we can protect that. And if the tablesync workers are\nrestarted after we stopped them, the purpose won't be achieved because\nit can recreate or try to reuse the slot which we have dropped.\n\nThe other issue with the current code could be that after we drop the\nslot and origin what if the transaction (in which we are doing Alter\nSubscription) is rolledback? Basically, the workers will be relaunched\nand it would assume that slot should be there but the slot won't be\npresent. I have thought of dropping the slot at commit time after we\nstop the workers but again not sure if that is a good idea because at\nthat point we don't want to establish the connection with the\npublisher.\n\nI think this needs some more thought.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 17:07:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v15 patch for the Tablesync Solution1.\n\nMain differences from v14:\n+ Addresses review comments posted 13/Jan [ak13]\n\n[ak13] = https://www.postgresql.org/message-id/CAA4eK1KzNbudfwmJD-ureYigX6sNyCU6YgHscg29xWoZG6osvA%40mail.gmail.com\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary.\n\n* The tablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync slot cleanup (drop) code is added for\nDropSubscription, AlterSubscription_refresh and for\nprocess_syncing_tables_for_sync functions. Drop/AlterSubscription will\nissue WARNING instead of ERROR in case the slot drop fails.\n\n* The tablesync worker now allows multiple transactions instead of a\nsingle transaction.\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply\nworker). The origin is advanced when first created.\n\n* The tablesync replication origin tracking is cleaned up during\nDropSubscription and/or process_syncing_tables_for_apply.\n\n* The DropSubscription cleanup code was enhanced (v7+) to take care of\nany crashed tablesync workers.\n\n* The AlterSubscription_refresh (v14+) is now more similar to\nDropSubscription w.r.t. stopping tablesync workers for any \"removed\"\ntables.\n\n* Updates to PG docs.\n\nTODO / Known Issues:\n\n* The AlterSubscription_refresh tablesync cleanup code still has some\nproblems [1]\n[1] = https://www.postgresql.org/message-id/CAA4eK1JuwZF7FHM%2BEPjWdVh%3DXaz-7Eo-G0TByMjWeUU32Xue3w%40mail.gmail.com\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 14 Jan 2021 16:23:23 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 13, 2021 at 11:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Jan 4, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > 7.\n> > > @@ -905,7 +905,7 @@ replorigin_advance(RepOriginId node,\n> > > LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n> > >\n> > > /* Make sure it's not used by somebody else */\n> > > - if (replication_state->acquired_by != 0)\n> > > + if (replication_state->acquired_by != 0 &&\n> > > replication_state->acquired_by != MyProcPid)\n> > > {\n> > >\n> > > I think you won't need this change if you do replorigin_advance before\n> > > replorigin_session_setup in your patch.\n> > >\n> >\n> > As you know the replorigin_session_setup sets the\n> > replication_state->acquired_by to be the current PID. So without this\n> > change the replorigin_advance rejects that same slot state thinking\n> > that it is already active for a different process. Root problem is\n> > that the same process/PID calling both functions would hang.\n> >\n>\n> I think the hang happens only if we call unchanged replorigin_advance\n> after session_setup API, right?\n>\n> > So this\n> > patch change allows replorigin_advance code to be called by self.\n> >\n> > IIUC that acquired_by check condition is like a sanity check for the\n> > originid passed. The patched code only does just like what the comment\n> > says:\n> > \"/* Make sure it's not used by somebody else */\"\n> > Doesn't \"somebody else\" means \"anyone but me\" (i.e. anyone but MyProcPid).\n> >\n> > Also, “setup” of a thing generally comes before usage of that thing,\n> > so won't it seem strange to do (like the suggestion) and deliberately\n> > call the \"setup\" function 2nd instead of 1st?\n> >\n> > Can you please explain why is it better to do it the suggested way\n> > (switch the calls around) than keep the patch code? 
Probably there is\n> > a good reason but I am just not understanding it.\n> >\n>\n> Because there is no requirement for origin_advance API to be called\n> after session setup. Session setup is required to mark the node as\n> replaying from a remote node, see [1] whereas origin_advance is used\n> for setting up the initial location or setting a new location, see [2]\n> (pg_replication_origin_advance).\n>\n> Now here, after creating the origin, we need to set up the initial\n> location and it seems fine to call origin_advance before\n> session_setup. In short, as such, I don't see any problem with your\n> change in replorigin_advance but OTOH, I don't see the need for the\n> same as well. So, let's try to avoid that change unless we can't do\n> without it.\n>\n> Also, another thing is we need to take RowExclusiveLock on\n> pg_replication_origin as written in comments atop replorigin_advance\n> before calling it. See its usage in pg_replication_origin_advance.\n> Also, write comments on why we need to use replorigin_advance here\n> (... something, like we need to WAL log this for the purpose of\n> recovery...).\n>\n\nModified in latest patch [v15].\n\n----\n[v15] = https://www.postgresql.org/message-id/CAHut%2BPu3he2rOWjbXcNUO6z3aH2LYzW03KV%2BfiMWim49qW9etQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 14 Jan 2021 16:33:28 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 5:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 12, 2021 at 6:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Jan 11, 2021 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The workers for removed tables are now immediately stopped (like\n> > DropSubscription does). Although I did include the AccessExclusiveLock\n> > as (a) suggested, AFAIK this was actually ineffective at preventing\n> > the workers relaunching.\n> >\n>\n> The reason why it was ineffective is that you are locking\n> SubscriptionRelationId which is to protect relaunch of apply workers\n> not tablesync workers. But in current form even acquiring\n> SubscriptionRelRelationId lock won't serve the purpose because\n> process_syncing_tables_for_apply() doesn't always acquire it before\n> relaunching the tablesync workers. However, if we acquire\n> SubscriptionRelRelationId in process_syncing_tables_for_apply() then\n> it would prevent relaunch of workers but not sure if that is a good\n> idea. Can you think of some other way?\n>\n> > Instead, I am using\n> > logicalrep_worker_stop_at_commit to do this - testing shows it as\n> > working ok. Please see the code and latest test logs [v14] for\n> > details.\n> >\n>\n> There is still a window where it can relaunch. Basically, after you\n> stop the worker in AlterSubscription_refresh and till the commit\n> happens apply worker can relaunch the tablesync workers. I don't see\n> code-wise how we can protect that. And if the tablesync workers are\n> restarted after we stopped them, the purpose won't be achieved because\n> it can recreate or try to reuse the slot which we have dropped.\n>\n> The other issue with the current code could be that after we drop the\n> slot and origin what if the transaction (in which we are doing Alter\n> Subscription) is rolledback? 
Basically, the workers will be relaunched\n> and it would assume that slot should be there but the slot won't be\n> present. I have thought of dropping the slot at commit time after we\n> stop the workers but again not sure if that is a good idea because at\n> that point we don't want to establish the connection with the\n> publisher.\n>\n> I think this needs some more thought.\n>\n\nI have another idea to solve this problem. Instead of Alter\nSubscription dropping the slot/origin, we can let the tablesync worker\ndo it. Basically, we need to register SignalHandlerForShutdownRequest\nas the SIGTERM handler and then later check the ShutdownRequestPending\nflag in the tablesync worker. If the flag is set, then we can drop the\nslot/origin and allow the process to exit cleanly.\n\nThis will obviate the need to take the lock and all sorts of rollback\nproblems. If this works out well then I think we can use this for\nDropSubscription as well but that is a matter for a separate patch.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Jan 2021 12:10:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v16 patch for the Tablesync Solution1.\n\nMain differences from v15:\n+ Tablesync cleanups of DropSubscription/AlterSubscription_refresh are\nre-implemented as a ProcessInterrupts function\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary.\n\n* The tablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync worker now allows multiple transactions instead of a\nsingle transaction.\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply\nworker). The origin is advanced when first created.\n\n* Cleanup of tablesync resources:\n- The tablesync slot cleanup (drop) code is added for\nprocess_syncing_tables_for_sync functions.\n- The tablesync replication origin tracking is cleaned up in\nprocess_syncing_tables_for_apply.\n- A tablesync function to clean up its own slot/origin is called from\nProcessInterrupts. This is indirectly invoked by\nDropSubscription/AlterSubscription when they signal the tablesync\nworker to stop.\n\n* Updates to PG docs.\n\nTODO / Known Issues:\n\n* Race condition observed in \"make check\" may be related to this patch.\n\n* Add test cases.\n\n---\n\nPlease also see some test scenario logging which shows the new\ntablesync cleanup function getting called as a result of\nDrop/AlterSubscription.\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 18 Jan 2021 21:43:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v17 patch for the Tablesync Solution1.\n\nMain differences from v16:\n+ Small refactor for DropSubscription to correct the \"make check\" deadlock\n+ Added test case\n+ Some comment wording\n\n====\n\nFeatures:\n\n* The tablesync slot is now permanent instead of temporary.\n\n* The tablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync worker now allows multiple transactions instead of a\nsingle transaction.\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar to what is done for the apply\nworker). The origin is advanced when first created.\n\n* Cleanup of tablesync resources:\n- The tablesync slot cleanup (drop) code is added for\nprocess_syncing_tables_for_sync functions.\n- The tablesync replication origin tracking is cleaned up in\nprocess_syncing_tables_for_apply.\n- A tablesync function to clean up its own slot/origin is called from\nProcessInterrupts. This is indirectly invoked by\nDropSubscription/AlterSubscription when they signal the tablesync\nworker to stop.\n\n* Updates to PG docs.\n\n* New TAP test case\n\nTODO / Known Issues:\n\n* None known.\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 19 Jan 2021 20:01:48 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Jan 19, 2021 at 2:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Amit.\n>\n> PSA the v17 patch for the Tablesync Solution1.\n>\n\nThanks for the updated patch. Below are few comments:\n1. Why are we changing the scope of PG_TRY in DropSubscription()?\nAlso, it might be better to keep the replication slot drop part as it\nis.\n\n2.\n- * - Tablesync worker finishes the copy and sets table state to SYNCWAIT;\n- * waits for state change.\n+ * - Tablesync worker does initial table copy; there is a\nFINISHEDCOPY state to\n+ * indicate when the copy phase has completed, so if the worker crashes\n+ * before reaching SYNCDONE the copy will not be re-attempted.\n\nIn the last line, shouldn't the state be FINISHEDCOPY instead of SYNCDONE?\n\n3.\n+void\n+tablesync_cleanup_at_interrupt(void)\n+{\n+ bool drop_slot_needed;\n+ char originname[NAMEDATALEN] = {0};\n+ RepOriginId originid;\n+ TimeLineID tli;\n+ Oid subid = MySubscription->oid;\n+ Oid relid = MyLogicalRepWorker->relid;\n+\n+ elog(DEBUG1,\n+ \"tablesync_cleanup_at_interrupt for relid = %d\",\n+ MyLogicalRepWorker->relid);\n\nThe function name and message makes it sound like that we drop slot\nand origin at any interrupt. Isn't it better to name it as\ntablesync_cleanup_at_shutdown()?\n\n4.\n+ drop_slot_needed =\n+ wrconn != NULL &&\n+ MyLogicalRepWorker->relstate != SUBREL_STATE_SYNCDONE &&\n+ MyLogicalRepWorker->relstate != SUBREL_STATE_READY;\n+\n+ if (drop_slot_needed)\n+ {\n+ char syncslotname[NAMEDATALEN] = {0};\n+ bool missing_ok = true; /* no ERROR if slot is missing. */\n\nI think we can avoid using missing_ok and drop_slot_needed variables.\n\n5. Can we drop the origin along with the slot in\nprocess_syncing_tables_for_sync() instead of\nprocess_syncing_tables_for_apply()? I think this is possible because\nof the other changes you made in origin.c. 
Also, if possible, we can\ntry to use the same code to drop the slot and origin in\ntablesync_cleanup_at_interrupt and process_syncing_tables_for_sync.\n\n6.\n+ if (MyLogicalRepWorker->relstate == SUBREL_STATE_FINISHEDCOPY)\n+ {\n+ /*\n+ * The COPY phase was previously done, but tablesync then crashed/etc\n+ * before it was able to finish normally.\n+ */\n\nThere seems to be a typo (crashed/etc) in the above comment.\n\n7.\n+# check for occurrence of the expected error\n+poll_output_until(\"replication slot \\\"$slotname\\\" already exists\")\n+ or die \"no error stop for the pre-existing origin\";\n\nIn this test, isn't it better to check for datasync state like below?\n004_sync.pl has some other similar test.\nmy $started_query = \"SELECT srsubstate = 'd' FROM pg_subscription_rel;\";\n$node_subscriber->poll_query_until('postgres', $started_query)\n or die \"Timed out while waiting for subscriber to start sync\";\n\nIs there a reason why we can't use the existing way to check for\nfailure in this case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jan 2021 15:47:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 19, 2021 at 2:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Amit.\n> >\n> > PSA the v17 patch for the Tablesync Solution1.\n> >\n>\n> Thanks for the updated patch. Below are few comments:\n>\n\nOne more comment:\n\nIn LogicalRepSyncTableStart(), you are trying to remove the slot on\nthe failure of copy which won't work if the publisher is down. If that\nhappens on restart of tablesync worker, we will retry to create the\nslot with the same name and it will fail because the previous slot is\nstill not removed from the publisher. I think the same problem can\nhappen if, after an error in tablesync worker and we drop the\nsubscription before tablesync worker gets a chance to restart. So, to\navoid these problems can we use the TEMPORARY slot for tablesync\nworkers as previously? If I remember correctly, the main problem was\nwe don't know where to start decoding if we fail in catchup phase. But\nfor that origins should be sufficient because if we fail before copy\nthen anyway we have to create a new slot and origin but if we fail\nafter copy then we can use the start_decoding_position from the\norigin. So before copy, we still need to use CRS_USE_SNAPSHOT while\ncreating a temporary slot but if we are already in FINISHED COPY state\nat the start of tablesync worker then create a slot with\nCRS_NOEXPORT_SNAPSHOT option and then use origin's start_pos and\nproceed decoding changes from that point onwards similar to how\ncurrently the apply worker works.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Jan 2021 08:13:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v18 patch for the Tablesync Solution1.\n\nMain differences from v17:\n+ Design change to use TEMPORARY tablesync slots [ak0122] means lots\nof the v17 slot cleanup code became unnecessary.\n+ Small refactor in LogicalReplicationSyncTableStart to fix a deadlock scenario.\n+ Addressing some review comments [ak0121].\n\n[ak0121] https://www.postgresql.org/message-id/CAA4eK1LGxuB_RTfZ2HLJT76wv%3DFLV6UPqT%2BFWkiDg61rvQkkmQ%40mail.gmail.com\n[ak0122] https://www.postgresql.org/message-id/CAA4eK1LS0_mdVx2zG3cS%2BH88FJiwyS3kZi7zxijJ_gEuw2uQ2g%40mail.gmail.com\n\n====\n\nFeatures:\n\n* The tablesync slot name is no longer tied to the Subscription slot name.\n\n* The tablesync worker is now allowing multiple tx instead of single tx\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking record is cleaned up by:\n- process_syncing_tables_for_apply\n- DropSubscription\n- AlterSubscription_refresh\n\n* Updates to PG docs.\n\n* New TAP test case\n\nKnown Issues:\n\n* None.\n\n---\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Sat, 23 Jan 2021 10:25:19 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 1:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 19, 2021 at 2:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi Amit.\n> > >\n> > > PSA the v17 patch for the Tablesync Solution1.\n> > >\n> >\n> > Thanks for the updated patch. Below are few comments:\n> >\n>\n> One more comment:\n>\n> In LogicalRepSyncTableStart(), you are trying to remove the slot on\n> the failure of copy which won't work if the publisher is down. If that\n> happens on restart of tablesync worker, we will retry to create the\n> slot with the same name and it will fail because the previous slot is\n> still not removed from the publisher. I think the same problem can\n> happen if, after an error in tablesync worker and we drop the\n> subscription before tablesync worker gets a chance to restart. So, to\n> avoid these problems can we use the TEMPORARY slot for tablesync\n> workers as previously? If I remember correctly, the main problem was\n> we don't know where to start decoding if we fail in catchup phase. But\n> for that origins should be sufficient because if we fail before copy\n> then anyway we have to create a new slot and origin but if we fail\n> after copy then we can use the start_decoding_position from the\n> origin. So before copy, we still need to use CRS_USE_SNAPSHOT while\n> creating a temporary slot but if we are already in FINISHED COPY state\n> at the start of tablesync worker then create a slot with\n> CRS_NOEXPORT_SNAPSHOT option and then use origin's start_pos and\n> proceed decoding changes from that point onwards similar to how\n> currently the apply worker works.\n>\n\nOK. 
Code is modified as suggested in the latest patch [v18].\nNow that tablesync slots are temporary, quite a lot of cleanup code\nfrom the previous patch (v17) is no longer required and has been\nremoved.\n\n----\n[v18] = https://www.postgresql.org/message-id/CAHut%2BPvm0R%3DMn_uVN_JhK0scE54V6%2BEDGHJg1WYJx0Q8HX_mkQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 23 Jan 2021 11:16:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 9:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 19, 2021 at 2:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Amit.\n> >\n> > PSA the v17 patch for the Tablesync Solution1.\n> >\n>\n> Thanks for the updated patch. Below are few comments:\n> 1. Why are we changing the scope of PG_TRY in DropSubscription()?\n> Also, it might be better to keep the replication slot drop part as it\n> is.\n>\n\nThe latest patch [v18] was re-designed to make tablesync slots as\nTEMPORARY [ak0122], so this code in DropSubscription is modified a\nlot. This review comment is not applicable anymore.\n\n> 2.\n> - * - Tablesync worker finishes the copy and sets table state to SYNCWAIT;\n> - * waits for state change.\n> + * - Tablesync worker does initial table copy; there is a\n> FINISHEDCOPY state to\n> + * indicate when the copy phase has completed, so if the worker crashes\n> + * before reaching SYNCDONE the copy will not be re-attempted.\n>\n> In the last line, shouldn't the state be FINISHEDCOPY instead of SYNCDONE?\n>\n\nOK. The code comment was correct, but maybe confusing. I have reworded\nit in the latest patch [v18].\n\n> 3.\n> +void\n> +tablesync_cleanup_at_interrupt(void)\n> +{\n> + bool drop_slot_needed;\n> + char originname[NAMEDATALEN] = {0};\n> + RepOriginId originid;\n> + TimeLineID tli;\n> + Oid subid = MySubscription->oid;\n> + Oid relid = MyLogicalRepWorker->relid;\n> +\n> + elog(DEBUG1,\n> + \"tablesync_cleanup_at_interrupt for relid = %d\",\n> + MyLogicalRepWorker->relid);\n>\n> The function name and message makes it sound like that we drop slot\n> and origin at any interrupt. Isn't it better to name it as\n> tablesync_cleanup_at_shutdown()?\n>\n\nThe latest patch [v18] was re-designed to make tablesync slots as\nTEMPORARY [ak0122], so this cleanup function is removed. 
This review\ncomment is not applicable anymore.\n\n> 4.\n> + drop_slot_needed =\n> + wrconn != NULL &&\n> + MyLogicalRepWorker->relstate != SUBREL_STATE_SYNCDONE &&\n> + MyLogicalRepWorker->relstate != SUBREL_STATE_READY;\n> +\n> + if (drop_slot_needed)\n> + {\n> + char syncslotname[NAMEDATALEN] = {0};\n> + bool missing_ok = true; /* no ERROR if slot is missing. */\n>\n> I think we can avoid using missing_ok and drop_slot_needed variables.\n>\n\nThe latest patch [v18] was re-designed to make tablesync slots as\nTEMPORARY [ak0122], so this code no longer exists. This review comment\nis not applicable anymore.\n\n> 5. Can we drop the origin along with the slot in\n> process_syncing_tables_for_sync() instead of\n> process_syncing_tables_for_apply()? I think this is possible because\n> of the other changes you made in origin.c. Also, if possible, we can\n> try to use the same code to drop the slot and origin in\n> tablesync_cleanup_at_interrupt and process_syncing_tables_for_sync.\n>\n\nNo, the origin tracking cannot be dropped by the tablesync worker for\nthe normal use-case even with my modified origin.c; it would fail\nduring the commit TX because while trying to do\nreplorigin_session_advance it would find the asserted origin id was\nnot there anymore.\n\nAlso, the latest patch [v18] was re-designed to make tablesync slots\nas TEMPORARY [ak0122], so the tablesync_cleanup_at_interrupt function\nno longer exists (so the origin.c change of v17 has also been\nremoved).\n\n> 6.\n> + if (MyLogicalRepWorker->relstate == SUBREL_STATE_FINISHEDCOPY)\n> + {\n> + /*\n> + * The COPY phase was previously done, but tablesync then crashed/etc\n> + * before it was able to finish normally.\n> + */\n>\n> There seems to be a typo (crashed/etc) in the above comment.\n>\n\nOK. 
Fixed in latest patch [v18].\n\n----\n[ak0122] = https://www.postgresql.org/message-id/CAA4eK1LS0_mdVx2zG3cS%2BH88FJiwyS3kZi7zxijJ_gEuw2uQ2g%40mail.gmail.com\n[v18] = https://www.postgresql.org/message-id/CAHut%2BPvm0R%3DMn_uVN_JhK0scE54V6%2BEDGHJg1WYJx0Q8HX_mkQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 23 Jan 2021 11:25:11 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 9:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> 7.\n> +# check for occurrence of the expected error\n> +poll_output_until(\"replication slot \\\"$slotname\\\" already exists\")\n> + or die \"no error stop for the pre-existing origin\";\n>\n> In this test, isn't it better to check for datasync state like below?\n> 004_sync.pl has some other similar test.\n> my $started_query = \"SELECT srsubstate = 'd' FROM pg_subscription_rel;\";\n> $node_subscriber->poll_query_until('postgres', $started_query)\n> or die \"Timed out while waiting for subscriber to start sync\";\n>\n> Is there a reason why we can't use the existing way to check for\n> failure in this case?\n\nSince the new design now uses temporary slots, is this test case still\nrequired?. If required, I can change it accordingly.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 23 Jan 2021 14:07:24 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 8:37 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Thu, Jan 21, 2021 at 9:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > 7.\n> > +# check for occurrence of the expected error\n> > +poll_output_until(\"replication slot \\\"$slotname\\\" already exists\")\n> > + or die \"no error stop for the pre-existing origin\";\n> >\n> > In this test, isn't it better to check for datasync state like below?\n> > 004_sync.pl has some other similar test.\n> > my $started_query = \"SELECT srsubstate = 'd' FROM pg_subscription_rel;\";\n> > $node_subscriber->poll_query_until('postgres', $started_query)\n> > or die \"Timed out while waiting for subscriber to start sync\";\n> >\n> > Is there a reason why we can't use the existing way to check for\n> > failure in this case?\n>\n> Since the new design now uses temporary slots, is this test case still\n> required?\n>\n\nI think so. But do you have any reason to believe that it won't be\nrequired anymore?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 23 Jan 2021 09:45:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> I think so. But do you have any reason to believe that it won't be\n> required anymore?\n\nA temporary slot will not clash with a permanent slot of the same name,\n\nregards,\nAjin Cherian\nFujitsu\n\n\n",
"msg_date": "Sat, 23 Jan 2021 16:38:02 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA the v18 patch for the Tablesync Solution1.\n>\n\nFew comments:\n=============\n1.\n- * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\n- * CATCHUP -> SYNCDONE -> READY.\n+ * So the state progression is always: INIT -> DATASYNC ->\n+ * (sync worker FINISHEDCOPY) -> SYNCWAIT -> CATCHUP -> SYNCDONE -> READY.\n\nI don't think we need to be specific here that sync worker sets\nFINISHEDCOPY state.\n\n2.\n@@ -98,11 +102,16 @@\n #include \"miscadmin.h\"\n #include \"parser/parse_relation.h\"\n #include \"pgstat.h\"\n+#include \"postmaster/interrupt.h\"\n #include \"replication/logicallauncher.h\"\n #include \"replication/logicalrelation.h\"\n+#include \"replication/logicalworker.h\"\n #include \"replication/walreceiver.h\"\n #include \"replication/worker_internal.h\"\n+#include \"replication/slot.h\"\n\nI don't think the above includes are required. They seem to the\nremnant of the previous approach.\n\n3.\n process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n {\n- Assert(IsTransactionState());\n+ bool sync_done = false;\n\n SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n+ sync_done = MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&\n+ current_lsn >= MyLogicalRepWorker->relstate_lsn;\n+ SpinLockRelease(&MyLogicalRepWorker->relmutex);\n\n- if (MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&\n- current_lsn >= MyLogicalRepWorker->relstate_lsn)\n+ if (sync_done)\n {\n TimeLineID tli;\n\n+ /*\n+ * Change state to SYNCDONE.\n+ */\n+ SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n\nWhy do we need these changes? If you have done it for the\ncode-readability purpose then we can consider this as a separate patch\nbecause I don't see why these are required w.r.t this patch.\n\n4.\n- /*\n- * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n- * 1 characters. 
We cut the original slot name to NAMEDATALEN - 28 chars\n- * and append _%u_sync_%u (1 + 10 + 6 + 10 + '\\0'). (It's actually the\n- * NAMEDATALEN on the remote that matters, but this scheme will also work\n- * reasonably if that is different.)\n- */\n- StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n- slotname = psprintf(\"%.*s_%u_sync_%u\",\n- NAMEDATALEN - 28,\n- MySubscription->slotname,\n- MySubscription->oid,\n- MyLogicalRepWorker->relid);\n+ /* Calculate the name of the tablesync slot. */\n+ slotname = ReplicationSlotNameForTablesync(\n+ MySubscription->oid,\n+ MyLogicalRepWorker->relid);\n\nWhat is the reason for changing the slot name calculation? If there is\nany particular reason, then we can add a comment to indicate why we\ncan't include the subscription's slotname in this calculation.\n\n5.\nThis is WAL\n+ * logged for for the purpose of recovery. Locks are to prevent the\n+ * replication origin from vanishing while advancing.\n\n/for for/for\n\n6.\n+ /* Remove the tablesync's origin tracking if exists. */\n+ snprintf(originname, sizeof(originname), \"pg_%u_%u\", subid, relid);\n+ originid = replorigin_by_name(originname, true);\n+ if (originid != InvalidRepOriginId)\n+ {\n+ elog(DEBUG1, \"DropSubscription: dropping origin tracking for\n\\\"%s\\\"\", originname);\n\nI don't think we need this and the DEBUG1 message in\nAlterSubscription_refresh. It is fine to print this information for\nbackground workers like in apply-worker but not sure if we need it here.\nThe DropSubscription drops the origin of apply worker but it doesn't\nuse such a DEBUG message so I guess we don't need it for tablesync origins\nas well.\n\n7. Have you tested with the new patch the scenario where we crash\nafter FINISHEDCOPY and before SYNCDONE, is it able to pick up the\nreplication using the new temporary slot?
Here, we need to test the\ncase where, during the catchup phase, we have received a few commits and\nthen the tablesync worker crashes/errors out. Basically, check whether\nthe replication is continued from the same point. I understand that\nthis can only be tested by adding some logs and we might not be able\nto write a test for it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 23 Jan 2021 17:56:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "FYI - I have done some long-running testing using the current patch [v18].\n\n1. The src/test/subscription TAP tests:\n- Subscription TAP tests were executed in a loop X 150 iterations.\n- Duration 5 hrs.\n- All iterations report \"Result: PASS\"\n\n2. The postgres \"make check\" tests:\n- make check was executed in a loop X 150 iterations.\n- Duration 2 hrs.\n- All iterations report \"All 202 tests passed\"\n\n---\n[v18] https://www.postgresql.org/message-id/CAHut%2BPvm0R%3DMn_uVN_JhK0scE54V6%2BEDGHJg1WYJx0Q8HX_mkQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sun, 24 Jan 2021 10:40:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA the v18 patch for the Tablesync Solution1.\n> >\n>\n> Few comments:\n> =============\n> 1.\n> - * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\n> - * CATCHUP -> SYNCDONE -> READY.\n> + * So the state progression is always: INIT -> DATASYNC ->\n> + * (sync worker FINISHEDCOPY) -> SYNCWAIT -> CATCHUP -> SYNCDONE -> READY.\n>\n> I don't think we need to be specific here that sync worker sets\n> FINISHEDCOPY state.\n>\n\nThis was meant to indicate that *only* the sync worker knows about the\nFINISHEDCOPY state, whereas all the other states are either known\n(assigned and/or used) by *both* kinds of workers. But, I can remove\nit if you feel that distinction is not useful.\n\n> 4.\n> - /*\n> - * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> - * 1 characters. We cut the original slot name to NAMEDATALEN - 28 chars\n> - * and append _%u_sync_%u (1 + 10 + 6 + 10 + '\\0'). (It's actually the\n> - * NAMEDATALEN on the remote that matters, but this scheme will also work\n> - * reasonably if that is different.)\n> - */\n> - StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> - slotname = psprintf(\"%.*s_%u_sync_%u\",\n> - NAMEDATALEN - 28,\n> - MySubscription->slotname,\n> - MySubscription->oid,\n> - MyLogicalRepWorker->relid);\n> + /* Calculate the name of the tablesync slot. */\n> + slotname = ReplicationSlotNameForTablesync(\n> + MySubscription->oid,\n> + MyLogicalRepWorker->relid);\n>\n> What is the reason for changing the slot name calculation? If there is\n> any particular reasons, then we can add a comment to indicate why we\n> can't include the subscription's slotname in this calculation.\n>\n\nThe subscription slot name may be changed (e.g. 
ALTER SUBSCRIPTION)\nand so including the subscription slot name as part of the tablesync\nslot name was considered to be:\na) possibly risky/undefined, if the subscription slot_name = NONE\nb) confusing, if we end up using 2 different slot names for the same\ntablesync (e.g. if the subscription slot name is changed before a sync\nworker is re-launched).\nAnd since this subscription slot name part is not necessary for\nuniqueness anyway, it was removed from the tablesync slot name to\neliminate those concerns.\n\nAlso, the tablesync slot name calculation was encapsulated as a\nseparate function because previously (i.e. before v18) it was used by\nvarious other cleanup codes. I still like it better as a function, but\nnow it is only called from one place so we could put that code back\ninline if you prefer it the way it was.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sun, 24 Jan 2021 17:54:37 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sun, Jan 24, 2021 at 5:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > 4.\n> > - /*\n> > - * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> > - * 1 characters. We cut the original slot name to NAMEDATALEN - 28 chars\n> > - * and append _%u_sync_%u (1 + 10 + 6 + 10 + '\\0'). (It's actually the\n> > - * NAMEDATALEN on the remote that matters, but this scheme will also work\n> > - * reasonably if that is different.)\n> > - */\n> > - StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> > - slotname = psprintf(\"%.*s_%u_sync_%u\",\n> > - NAMEDATALEN - 28,\n> > - MySubscription->slotname,\n> > - MySubscription->oid,\n> > - MyLogicalRepWorker->relid);\n> > + /* Calculate the name of the tablesync slot. */\n> > + slotname = ReplicationSlotNameForTablesync(\n> > + MySubscription->oid,\n> > + MyLogicalRepWorker->relid);\n> >\n> > What is the reason for changing the slot name calculation? If there is\n> > any particular reasons, then we can add a comment to indicate why we\n> > can't include the subscription's slotname in this calculation.\n> >\n>\n> The subscription slot name may be changed (e.g. ALTER SUBSCRIPTION)\n> and so including the subscription slot name as part of the tablesync\n> slot name was considered to be:\n> a) possibly risky/undefined, if the subscription slot_name = NONE\n> b) confusing, if we end up using 2 different slot names for the same\n> tablesync (e.g. if the subscription slot name is changed before a sync\n> worker is re-launched).\n> And since this subscription slot name part is not necessary for\n> uniqueness anyway, it was removed from the tablesync slot name to\n> eliminate those concerns.\n>\n> Also, the tablesync slot name calculation was encapsulated as a\n> separate function because previously (i.e. before v18) it was used by\n> various other cleanup codes. 
I still like it better as a function, but\n> now it is only called from one place so we could put that code back\n> inline if you prefer it how it was..\n\nIt turns out those (a/b) concerns I wrote above are maybe unfounded,\nbecause it seems not possible to alter the slot_name = NONE unless the\nsubscription is first DISABLED.\nSo probably I can revert all this tablesync slot name calculation back\nto how it originally was in the OSS HEAD if you want.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 11:44:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi Amit.\n\nPSA the v19 patch for the Tablesync Solution1.\n\nMain differences from v18:\n+ Patch has been rebased off HEAD @ 24/Jan\n+ Addressing some review comments [ak0123]\n\n[ak0123] https://www.postgresql.org/message-id/CAA4eK1JhpuwujrV6ABMmZ3jXfW37ssZnJ3fikrY7rRdvoEmu_g%40mail.gmail.com\n\n====\n\nFeatures:\n\n* The tablesync worker is now allowing multiple tx instead of single tx.\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking record is cleaned up by:\n- process_syncing_tables_for_apply\n- DropSubscription\n- AlterSubscription_refresh\n\n* Updates to PG docs.\n\n* New TAP test case.\n\nKnown Issues:\n\n* None.\n\n---\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 25 Jan 2021 13:32:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 6:15 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, Jan 24, 2021 at 5:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > 4.\n> > > - /*\n> > > - * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> > > - * 1 characters. We cut the original slot name to NAMEDATALEN - 28 chars\n> > > - * and append _%u_sync_%u (1 + 10 + 6 + 10 + '\\0'). (It's actually the\n> > > - * NAMEDATALEN on the remote that matters, but this scheme will also work\n> > > - * reasonably if that is different.)\n> > > - */\n> > > - StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> > > - slotname = psprintf(\"%.*s_%u_sync_%u\",\n> > > - NAMEDATALEN - 28,\n> > > - MySubscription->slotname,\n> > > - MySubscription->oid,\n> > > - MyLogicalRepWorker->relid);\n> > > + /* Calculate the name of the tablesync slot. */\n> > > + slotname = ReplicationSlotNameForTablesync(\n> > > + MySubscription->oid,\n> > > + MyLogicalRepWorker->relid);\n> > >\n> > > What is the reason for changing the slot name calculation? If there is\n> > > any particular reasons, then we can add a comment to indicate why we\n> > > can't include the subscription's slotname in this calculation.\n> > >\n> >\n> > The subscription slot name may be changed (e.g. ALTER SUBSCRIPTION)\n> > and so including the subscription slot name as part of the tablesync\n> > slot name was considered to be:\n> > a) possibly risky/undefined, if the subscription slot_name = NONE\n> > b) confusing, if we end up using 2 different slot names for the same\n> > tablesync (e.g. if the subscription slot name is changed before a sync\n> > worker is re-launched).\n> > And since this subscription slot name part is not necessary for\n> > uniqueness anyway, it was removed from the tablesync slot name to\n> > eliminate those concerns.\n> >\n> > Also, the tablesync slot name calculation was encapsulated as a\n> > separate function because previously (i.e. 
before v18) it was used by\n> > various other cleanup codes. I still like it better as a function, but\n> > now it is only called from one place so we could put that code back\n> > inline if you prefer it how it was..\n>\n> It turns out those (a/b) concerns I wrote above are maybe unfounded,\n> because it seems not possible to alter the slot_name = NONE unless the\n> subscription is first DISABLED.\n>\n\nYeah, but I think the user can still change to some other predefined\nslot_name. However, I guess it doesn't matter unless it can lead to what\nyou have mentioned in (a). As that can't happen, it is probably better\nto take out that change from the patch. I see your point of moving\nthis calculation to a separate function but I am not sure if it is worth it\nunless we have to call it from multiple places or it simplifies the\nexisting code.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 08:18:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 2.\n> @@ -98,11 +102,16 @@\n> #include \"miscadmin.h\"\n> #include \"parser/parse_relation.h\"\n> #include \"pgstat.h\"\n> +#include \"postmaster/interrupt.h\"\n> #include \"replication/logicallauncher.h\"\n> #include \"replication/logicalrelation.h\"\n> +#include \"replication/logicalworker.h\"\n> #include \"replication/walreceiver.h\"\n> #include \"replication/worker_internal.h\"\n> +#include \"replication/slot.h\"\n>\n> I don't think the above includes are required. They seem to the\n> remnant of the previous approach.\n>\n\nOK. Fixed in the latest patch [v19].\n\n> 3.\n> process_syncing_tables_for_sync(XLogRecPtr current_lsn)\n> {\n> - Assert(IsTransactionState());\n> + bool sync_done = false;\n>\n> SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> + sync_done = MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&\n> + current_lsn >= MyLogicalRepWorker->relstate_lsn;\n> + SpinLockRelease(&MyLogicalRepWorker->relmutex);\n>\n> - if (MyLogicalRepWorker->relstate == SUBREL_STATE_CATCHUP &&\n> - current_lsn >= MyLogicalRepWorker->relstate_lsn)\n> + if (sync_done)\n> {\n> TimeLineID tli;\n>\n> + /*\n> + * Change state to SYNCDONE.\n> + */\n> + SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n>\n> Why do we need these changes? If you have done it for the\n> code-readability purpose then we can consider this as a separate patch\n> because I don't see why these are required w.r.t this patch.\n>\n\nYes it was for code readability in v17 when this function used to be\nmuch larger. But it is not very necessary anymore and has been\nreverted in the latest patch [v19].\n\n> 4.\n> - /*\n> - * To build a slot name for the sync work, we are limited to NAMEDATALEN -\n> - * 1 characters. We cut the original slot name to NAMEDATALEN - 28 chars\n> - * and append _%u_sync_%u (1 + 10 + 6 + 10 + '\\0'). 
(It's actually the\n> - * NAMEDATALEN on the remote that matters, but this scheme will also work\n> - * reasonably if that is different.)\n> - */\n> - StaticAssertStmt(NAMEDATALEN >= 32, \"NAMEDATALEN too small\"); /* for sanity */\n> - slotname = psprintf(\"%.*s_%u_sync_%u\",\n> - NAMEDATALEN - 28,\n> - MySubscription->slotname,\n> - MySubscription->oid,\n> - MyLogicalRepWorker->relid);\n> + /* Calculate the name of the tablesync slot. */\n> + slotname = ReplicationSlotNameForTablesync(\n> + MySubscription->oid,\n> + MyLogicalRepWorker->relid);\n>\n> What is the reason for changing the slot name calculation? If there is\n> any particular reasons, then we can add a comment to indicate why we\n> can't include the subscription's slotname in this calculation.\n>\n\nThe tablesync slot name changes were not strictly necessary, so the\ncode is all reverted to be the same as OSS HEAD now in the latest\npatch [v19].\n\n> 5.\n> This is WAL\n> + * logged for for the purpose of recovery. Locks are to prevent the\n> + * replication origin from vanishing while advancing.\n>\n> /for for/for\n>\n\nOK. Fixed in the latest patch [v19].\n\n> 6.\n> + /* Remove the tablesync's origin tracking if exists. */\n> + snprintf(originname, sizeof(originname), \"pg_%u_%u\", subid, relid);\n> + originid = replorigin_by_name(originname, true);\n> + if (originid != InvalidRepOriginId)\n> + {\n> + elog(DEBUG1, \"DropSubscription: dropping origin tracking for\n> \\\"%s\\\"\", originname);\n>\n> I don't think we need this and the DEBUG1 message in\n> AlterSubscription_refresh. IT is fine to print this information for\n> background workers like in apply-worker but not sure if need it here.\n> The DropSubscription drops the origin of apply worker but it doesn't\n> use such a DEBUG message so I guess we don't it for tablesync origins\n> as well.\n>\n\nOK. 
These DEBUG1 logs are removed in the latest patch [v19].\n\n----\n[v19] https://www.postgresql.org/message-id/CAHut%2BPsj7Xm8C1LbqeAbk-3duyS8xXJtL9TiGaeu3P8g272mAA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 13:53:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
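[Editorial note] The slot-name scheme quoted in comment 4 of the message above (cut the subscription's slot name to NAMEDATALEN - 28 characters, then append `_%u_sync_%u`) can be sketched as follows. This is an illustrative Python model of the arithmetic, not the server's actual C `psprintf` code; `NAMEDATALEN = 64` is PostgreSQL's usual compiled-in value.

```python
NAMEDATALEN = 64  # PostgreSQL's compiled-in identifier length limit

def tablesync_slot_name(base_slot_name: str, sub_oid: int, rel_oid: int) -> str:
    """Model of the quoted scheme: truncate the base slot name to
    NAMEDATALEN - 28 chars, then append _%u_sync_%u; the 28 reserved
    bytes are 1 + 10 + 6 + 10 + 1 (two '_', two 10-digit OIDs, NUL)."""
    truncated = base_slot_name[:NAMEDATALEN - 28]
    return f"{truncated}_{sub_oid}_sync_{rel_oid}"

# Even with a maximal base name and maximal 32-bit OIDs, the result
# still fits within NAMEDATALEN - 1 visible characters.
worst = tablesync_slot_name("s" * NAMEDATALEN, 4294967295, 4294967295)
```

As the quoted comment notes, it is really the remote side's NAMEDATALEN that matters, but the scheme degrades gracefully if the two differ.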
{
"msg_contents": "On Sun, Jan 24, 2021 at 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Few comments:\n> > =============\n> > 1.\n> > - * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\n> > - * CATCHUP -> SYNCDONE -> READY.\n> > + * So the state progression is always: INIT -> DATASYNC ->\n> > + * (sync worker FINISHEDCOPY) -> SYNCWAIT -> CATCHUP -> SYNCDONE -> READY.\n> >\n> > I don't think we need to be specific here that sync worker sets\n> > FINISHEDCOPY state.\n> >\n>\n> This was meant to indicate that *only* the sync worker knows about the\n> FINISHEDCOPY state, whereas all the other states are either known\n> (assigned and/or used) by *both* kinds of workers. But, I can remove\n> it if you feel that distinction is not useful.\n>\n\nOkay, but I feel you can mention that in the description you have\nadded for FINISHEDCOPY state. It looks a bit odd here and the message\nyou want to convey is also not that clear.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 08:28:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 11:08 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > I think so. But do you have any reason to believe that it won't be\n> > required anymore?\n>\n> A temporary slot will not clash with a permanent slot of the same name,\n>\n\nI have tried below and it seems to be clashing:\npostgres=# SELECT 'init' FROM\npg_create_logical_replication_slot('test_slot2', 'test_decoding');\n ?column?\n----------\n init\n(1 row)\n\npostgres=# SELECT 'init' FROM\npg_create_logical_replication_slot('test_slot2', 'test_decoding',\ntrue);\nERROR: replication slot \"test_slot2\" already exists\n\nNote that the third parameter in the second statement above indicates\nwhether it is a temporary slot or not. What am I missing?\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 08:45:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
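[Editorial note] The psql session above demonstrates that temporary and permanent replication slots share one namespace, so a temporary tablesync slot can still clash with an existing permanent slot of the same name. A toy model of that behaviour, including the temporary slot's disappearance at session end:

```python
class SlotManager:
    """Toy model of a shared replication-slot namespace: temporary and
    permanent slots live in the same table, so names clash either way."""

    def __init__(self):
        self.slots = {}  # name -> {"temporary": bool}

    def create(self, name: str, temporary: bool = False) -> None:
        if name in self.slots:
            raise ValueError(f'replication slot "{name}" already exists')
        self.slots[name] = {"temporary": temporary}

    def session_disconnect(self) -> None:
        # Temporary slots vanish when the owning session ends;
        # permanent ones survive.
        self.slots = {n: s for n, s in self.slots.items()
                      if not s["temporary"]}
```

This is only a sketch of the naming rule, not of the server's slot machinery.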
{
"msg_contents": "On Mon, Jan 25, 2021 at 8:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 2.\n> > @@ -98,11 +102,16 @@\n> > #include \"miscadmin.h\"\n> > #include \"parser/parse_relation.h\"\n> > #include \"pgstat.h\"\n> > +#include \"postmaster/interrupt.h\"\n> > #include \"replication/logicallauncher.h\"\n> > #include \"replication/logicalrelation.h\"\n> > +#include \"replication/logicalworker.h\"\n> > #include \"replication/walreceiver.h\"\n> > #include \"replication/worker_internal.h\"\n> > +#include \"replication/slot.h\"\n> >\n> > I don't think the above includes are required. They seem to the\n> > remnant of the previous approach.\n> >\n>\n> OK. Fixed in the latest patch [v19].\n>\n\nYou seem to forgot removing #include \"replication/slot.h\". Check, if\nit is not required then remove that as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 09:24:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 8:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Amit.\n>\n> PSA the v19 patch for the Tablesync Solution1.\n>\n\nI see one race condition in this patch where we try to drop the origin\nvia apply process and DropSubscription. I think it can lead to the\nerror \"cache lookup failed for replication origin with oid %u\". The\nsame problem can happen via exposed API pg_replication_origin_drop but\nprobably because this is not used concurrently so nobody faced this\nissue. I think for the matter of this patch we can try to suppress\nsuch an error either via try..catch, or by adding missing_ok argument\nto replorigin_drop API, or we can just add to comments that such a\nrace exists. Additionally, we should try to start a new thread for the\nexistence of this problem in pg_replication_origin_drop. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 11:18:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
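[Editorial note] One of the options mentioned above, adding a `missing_ok` argument to the origin-drop API so that a concurrent drop is not an error, can be sketched like this. It is an illustrative Python stand-in; the real change would live in C in the replication-origin code, and the function name here mirrors but does not reproduce the server API.

```python
origins = {}   # name -> oid; stands in for the shared origin catalog
next_oid = 1

def replorigin_create(name: str) -> int:
    global next_oid
    origins[name] = next_oid
    next_oid += 1
    return origins[name]

def replorigin_drop_by_name(name: str, missing_ok: bool) -> None:
    """With missing_ok=True, a race where another backend already
    dropped the origin becomes a no-op instead of an error like
    'cache lookup failed for replication origin'."""
    if name not in origins:
        if missing_ok:
            return
        raise LookupError(
            f'cache lookup failed for replication origin "{name}"')
    del origins[name]
```

With `missing_ok=True`, both the apply worker and DropSubscription can attempt the drop without one of them failing.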
{
"msg_contents": "Hi Amit.\n\nPSA the v20 patch for the Tablesync Solution1.\n\nMain differences from v19:\n+ Updated TAP test [ak0123-7]\n+ Fixed comment [ak0125-1]\n+ Removed redundant header [ak0125-2]\n+ Protection against race condition [ak0125-race]\n\n[ak0123-7] https://www.postgresql.org/message-id/CAA4eK1JhpuwujrV6ABMmZ3jXfW37ssZnJ3fikrY7rRdvoEmu_g%40mail.gmail.com\n[ak0125-1] https://www.postgresql.org/message-id/CAA4eK1JmP2VVpH2%3DO%3D5BBbuH7gyQtWn40aXp_Jyjn1%2BKggfq8A%40mail.gmail.com\n[ak0125-2] https://www.postgresql.org/message-id/CAA4eK1L1j5sfBgHb0-H-%2B2quBstsA3hMcDfP-4vLuU-UF43nXQ%40mail.gmail.com\n[ak0125-race] https://www.postgresql.org/message-id/CAA4eK1%2ByeLwBCkTvTdPM-hSk1fr6jT8KJc362CN8zrGztq_JqQ%40mail.gmail.com\n\n====\n\nFeatures:\n\n* The tablesync worker is now allowing multiple tx instead of single tx.\n\n* A new state (SUBREL_STATE_FINISHEDCOPY) is persisted after a\nsuccessful copy_table in tablesync's LogicalRepSyncTableStart.\n\n* If a re-launched tablesync finds state SUBREL_STATE_FINISHEDCOPY\nthen it will bypass the initial copy_table phase.\n\n* Now tablesync sets up replication origin tracking in\nLogicalRepSyncTableStart (similar as done for the apply worker). The\norigin is advanced when first created.\n\n* The tablesync replication origin tracking record is cleaned up by:\n- process_syncing_tables_for_apply\n- DropSubscription\n- AlterSubscription_refresh\n\n* Updates to PG docs.\n\n* New TAP test case.\n\nKnown Issues:\n\n* Some records arriving between FINISHEDCOPY and SYNCDONE state may be\nlost (currently under investigation).\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 25 Jan 2021 21:49:10 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
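[Editorial note] The state progression described in the v20 summary above, INIT -> DATASYNC -> FINISHEDCOPY -> SYNCWAIT -> CATCHUP -> SYNCDONE -> READY, with a re-launched tablesync worker bypassing the copy when it finds FINISHEDCOPY persisted, can be sketched as a small state machine. This is only a model of the documented progression, not the worker code.

```python
# Tablesync relstate progression as described above, with the new
# FINISHEDCOPY state persisted after a successful copy_table.
PROGRESSION = ["INIT", "DATASYNC", "FINISHEDCOPY", "SYNCWAIT",
               "CATCHUP", "SYNCDONE", "READY"]

def next_state(state: str) -> str:
    i = PROGRESSION.index(state)
    if i == len(PROGRESSION) - 1:
        raise ValueError("READY is terminal")
    return PROGRESSION[i + 1]

def relaunch_start_point(persisted_state: str) -> str:
    """A re-launched tablesync worker that finds FINISHEDCOPY (or a
    later state) skips the initial copy_table phase and proceeds
    straight to the catchup handshake."""
    if PROGRESSION.index(persisted_state) >= PROGRESSION.index("FINISHEDCOPY"):
        return "SYNCWAIT"
    return "DATASYNC"
```

Only the sync worker ever writes FINISHEDCOPY; the other states are shared with the apply worker.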
{
"msg_contents": "On Thu, Jan 21, 2021 at 9:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> 7.\n> +# check for occurrence of the expected error\n> +poll_output_until(\"replication slot \\\"$slotname\\\" already exists\")\n> + or die \"no error stop for the pre-existing origin\";\n>\n> In this test, isn't it better to check for datasync state like below?\n> 004_sync.pl has some other similar test.\n> my $started_query = \"SELECT srsubstate = 'd' FROM pg_subscription_rel;\";\n> $node_subscriber->poll_query_until('postgres', $started_query)\n> or die \"Timed out while waiting for subscriber to start sync\";\n>\n> Is there a reason why we can't use the existing way to check for\n> failure in this case?\n>\n\nThe TAP test is updated in the latest patch [v20].\n\n----\n[v20] https://www.postgresql.org/message-id/CAHut%2BPuNwSujoL_dwa%3DTtozJ_vF%3DCnJxjgQTCmNBkazd8J1m-A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:15:29 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 24, 2021 at 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Few comments:\n> > > =============\n> > > 1.\n> > > - * So the state progression is always: INIT -> DATASYNC -> SYNCWAIT ->\n> > > - * CATCHUP -> SYNCDONE -> READY.\n> > > + * So the state progression is always: INIT -> DATASYNC ->\n> > > + * (sync worker FINISHEDCOPY) -> SYNCWAIT -> CATCHUP -> SYNCDONE -> READY.\n> > >\n> > > I don't think we need to be specific here that sync worker sets\n> > > FINISHEDCOPY state.\n> > >\n> >\n> > This was meant to indicate that *only* the sync worker knows about the\n> > FINISHEDCOPY state, whereas all the other states are either known\n> > (assigned and/or used) by *both* kinds of workers. But, I can remove\n> > it if you feel that distinction is not useful.\n> >\n>\n> Okay, but I feel you can mention that in the description you have\n> added for FINISHEDCOPY state. It looks a bit odd here and the message\n> you want to convey is also not that clear.\n>\n\nThe comment is updated in the latest patch [v20].\n\n----\n[v20] https://www.postgresql.org/message-id/CAHut%2BPuNwSujoL_dwa%3DTtozJ_vF%3DCnJxjgQTCmNBkazd8J1m-A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:39:54 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 2:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 8:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sat, Jan 23, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 2.\n> > > @@ -98,11 +102,16 @@\n> > > #include \"miscadmin.h\"\n> > > #include \"parser/parse_relation.h\"\n> > > #include \"pgstat.h\"\n> > > +#include \"postmaster/interrupt.h\"\n> > > #include \"replication/logicallauncher.h\"\n> > > #include \"replication/logicalrelation.h\"\n> > > +#include \"replication/logicalworker.h\"\n> > > #include \"replication/walreceiver.h\"\n> > > #include \"replication/worker_internal.h\"\n> > > +#include \"replication/slot.h\"\n> > >\n> > > I don't think the above includes are required. They seem to the\n> > > remnant of the previous approach.\n> > >\n> >\n> > OK. Fixed in the latest patch [v19].\n> >\n>\n> You seem to forgot removing #include \"replication/slot.h\". Check, if\n> it is not required then remove that as well.\n>\n\nFixed in the latest patch [v20].\n\n----\n[v20] https://www.postgresql.org/message-id/CAHut%2BPuNwSujoL_dwa%3DTtozJ_vF%3DCnJxjgQTCmNBkazd8J1m-A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:41:46 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 4:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 8:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Amit.\n> >\n> > PSA the v19 patch for the Tablesync Solution1.\n> >\n>\n> I see one race condition in this patch where we try to drop the origin\n> via apply process and DropSubscription. I think it can lead to the\n> error \"cache lookup failed for replication origin with oid %u\". The\n> same problem can happen via exposed API pg_replication_origin_drop but\n> probably because this is not used concurrently so nobody faced this\n> issue. I think for the matter of this patch we can try to suppress\n> such an error either via try..catch, or by adding missing_ok argument\n> to replorigin_drop API, or we can just add to comments that such a\n> race exists.\n\nOK. This has been isolated to a common function called from 3 places.\nThe potential race ERROR is suppressed by TRY/CATCH.\nPlease see code of latest patch [v20]\n\n> Additionally, we should try to start a new thread for the\n> existence of this problem in pg_replication_origin_drop. What do you\n> think?\n\nOK. It is on my TODO list..\n\n----\n[v20] https://www.postgresql.org/message-id/CAHut%2BPuNwSujoL_dwa%3DTtozJ_vF%3DCnJxjgQTCmNBkazd8J1m-A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:47:47 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 4:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 25, 2021 at 8:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Amit.\n> >\n> > PSA the v19 patch for the Tablesync Solution1.\n> >\n>\n> I see one race condition in this patch where we try to drop the origin\n> via apply process and DropSubscription. I think it can lead to the\n> error \"cache lookup failed for replication origin with oid %u\". The\n> same problem can happen via exposed API pg_replication_origin_drop but\n> probably because this is not used concurrently so nobody faced this\n> issue. I think for the matter of this patch we can try to suppress\n> such an error either via try..catch, or by adding missing_ok argument\n> to replorigin_drop API, or we can just add to comments that such a\n> race exists. Additionally, we should try to start a new thread for the\n> existence of this problem in pg_replication_origin_drop. What do you\n> think?\n\nOK. A new thread [ps0127] for this problem was started\n\n---\n[ps0127] = https://www.postgresql.org/message-id/CAHut%2BPuW8DWV5fskkMWWMqzt-x7RPcNQOtJQBp6SdwyRghCk7A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 27 Jan 2021 11:39:25 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA the v18 patch for the Tablesync Solution1.\n>\n> 7. Have you tested with the new patch the scenario where we crash\n> after FINISHEDCOPY and before SYNCDONE, is it able to pick up the\n> replication using the new temporary slot? Here, we need to test the\n> case where during the catchup phase we have received few commits and\n> then the tablesync worker is crashed/errored out? Basically, check if\n> the replication is continued from the same point?\n>\n\nI have tested this and it didn't work, see the below example.\n\nPublisher-side\n================\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl1(somedata, text) VALUES (1, 2);\nCOMMIT;\n\nCREATE PUBLICATION mypublication FOR TABLE mytbl1;\n\nSubscriber-side\n================\n- Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\nworker stops.\n\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\n\nCREATE SUBSCRIPTION mysub\n CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypublication;\n\nDuring debug, stop after we mark FINISHEDCOPY state.\n\nPublisher-side\n================\nINSERT INTO mytbl1(somedata, text) VALUES (1, 3);\nINSERT INTO mytbl1(somedata, text) VALUES (1, 4);\n\n\nSubscriber-side\n================\n- Have a breakpoint in apply_dispatch\n- continue in debugger;\n- After we replay first commit (which will be for values(1,3), note\ndown the origin position in apply_handle_commit_internal and somehow\nerror out. 
I have forced the debugger to set to the last line in\napply_dispatch where the error is raised.\n- After the error, again the tablesync worker is restarted and it\nstarts from the position noted in the previous step\n- It exits without replaying the WAL for (1,4)\n\nSo, on the subscriber-side, you will see 3 records. Fourth is missing.\nNow, if you insert more records on the publisher, it will anyway\nreplay those but the fourth one got missing.\n\nThe temporary slots didn't seem to work because we created again the\nnew temporary slot after the crash and ask it to start decoding from\nthe point we noted in origin_lsn. The publisher didn’t hold the\nrequired WAL as our slot was temporary so it started sending from some\nlater point. We retain WAL based on the slots restart_lsn position and\nwal_keep_size. For our case, the positions of the slots will matter\nand as we have created temporary slots, there is no way for a\npublisher to save that WAL.\n\nIn this particular case, even if the WAL would have been there we only\npass the start_decoding_at position but didn’t pass restart_lsn, so it\npicked a random location (current insert position in WAL) which is\nahead of start_decoding_at point so it never sent the required fourth\nrecord. Now, I don’t think it will work even if somehow sent the\ncorrect restart_lsn because of what I wrote earlier that there is no\nguarantee that the earlier WAL would have been saved.\n\nAt this point, I can't think of any way to fix this problem except for\ngoing back to the previous approach of permanent slots but let me know\nif you have any ideas to salvage this approach?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Jan 2021 09:23:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
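[Editorial note] The failure mode analysed above, where a temporary slot's restart position is lost on crash so the publisher may have recycled exactly the WAL the restarted worker needs, can be modelled roughly as follows. This is a toy simulation of the retention rule (WAL older than every remaining slot's restart_lsn may be removed), not the server's logic.

```python
class Publisher:
    """Toy WAL retention: WAL before the minimum restart_lsn of the
    surviving slots may be recycled; temporary slots disappear with
    their session, permanent slots survive a subscriber crash."""

    def __init__(self):
        self.wal = []      # list of (lsn, record)
        self.slots = {}    # name -> {"restart_lsn": int, "temporary": bool}
        self.next_lsn = 1

    def insert(self, record):
        self.wal.append((self.next_lsn, record))
        self.next_lsn += 1

    def create_slot(self, name, temporary):
        self.slots[name] = {"restart_lsn": self.next_lsn,
                            "temporary": temporary}

    def crash_subscriber(self):
        self.slots = {n: s for n, s in self.slots.items()
                      if not s["temporary"]}

    def recycle_wal(self):
        keep_from = min((s["restart_lsn"] for s in self.slots.values()),
                        default=self.next_lsn)
        self.wal = [(lsn, r) for lsn, r in self.wal if lsn >= keep_from]

    def stream_from(self, lsn):
        return [r for l, r in self.wal if l >= lsn]
```

In the temporary-slot case the restarted worker asks to stream from its noted origin position and finds nothing; with a permanent slot the restart_lsn pins the WAL and the missed record is still there.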
{
"msg_contents": "Hi Amit.\n\nPSA the v21 patch for the Tablesync Solution1.\n\nMain differences from v20:\n+ Rebased to latest OSS HEAD @ 27/Jan\n+ v21 is a merging of patches [v17] and [v20], which was made\nnecessary when it was found [ak0127] that the v20 usage of TEMPORARY\ntablesync slots did not work correctly. v21 reverts to using PERMANENT\ntablesync slots same as implemented in v17, while retaining other\nreview comment fixes made for v18, v19, v20.\n\n----\n[v17] https://www.postgresql.org/message-id/CAHut%2BPt9%2Bg8qQR0kMC85nY-O4uDQxXboamZAYhHbvkebzC9fAQ%40mail.gmail.com\n[v20] https://www.postgresql.org/message-id/CAHut%2BPuNwSujoL_dwa%3DTtozJ_vF%3DCnJxjgQTCmNBkazd8J1m-A%40mail.gmail.com\n[ak0127] https://www.postgresql.org/message-id/CAA4eK1LDsj9kw4FbWAw3CMHyVsjafgDum03cYy-wpGmor%3D8-1w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 28 Jan 2021 17:18:35 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 2:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > PSA the v18 patch for the Tablesync Solution1.\n> >\n> > 7. Have you tested with the new patch the scenario where we crash\n> > after FINISHEDCOPY and before SYNCDONE, is it able to pick up the\n> > replication using the new temporary slot? Here, we need to test the\n> > case where during the catchup phase we have received few commits and\n> > then the tablesync worker is crashed/errored out? Basically, check if\n> > the replication is continued from the same point?\n> >\n>\n> I have tested this and it didn't work, see the below example.\n>\n> Publisher-side\n> ================\n> CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n>\n> BEGIN;\n> INSERT INTO mytbl1(somedata, text) VALUES (1, 1);\n> INSERT INTO mytbl1(somedata, text) VALUES (1, 2);\n> COMMIT;\n>\n> CREATE PUBLICATION mypublication FOR TABLE mytbl1;\n>\n> Subscriber-side\n> ================\n> - Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\n> worker stops.\n>\n> CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n>\n>\n> CREATE SUBSCRIPTION mysub\n> CONNECTION 'host=localhost port=5432 dbname=postgres'\n> PUBLICATION mypublication;\n>\n> During debug, stop after we mark FINISHEDCOPY state.\n>\n> Publisher-side\n> ================\n> INSERT INTO mytbl1(somedata, text) VALUES (1, 3);\n> INSERT INTO mytbl1(somedata, text) VALUES (1, 4);\n>\n>\n> Subscriber-side\n> ================\n> - Have a breakpoint in apply_dispatch\n> - continue in debugger;\n> - After we replay first commit (which will be for values(1,3), note\n> down the origin position in apply_handle_commit_internal and somehow\n> error out. 
I have forced the debugger to set to the last line in\n> apply_dispatch where the error is raised.\n> - After the error, again the tablesync worker is restarted and it\n> starts from the position noted in the previous step\n> - It exits without replaying the WAL for (1,4)\n>\n> So, on the subscriber-side, you will see 3 records. Fourth is missing.\n> Now, if you insert more records on the publisher, it will anyway\n> replay those but the fourth one got missing.\n>\n> The temporary slots didn't seem to work because we created again the\n> new temporary slot after the crash and ask it to start decoding from\n> the point we noted in origin_lsn. The publisher didn’t hold the\n> required WAL as our slot was temporary so it started sending from some\n> later point. We retain WAL based on the slots restart_lsn position and\n> wal_keep_size. For our case, the positions of the slots will matter\n> and as we have created temporary slots, there is no way for a\n> publisher to save that WAL.\n>\n> In this particular case, even if the WAL would have been there we only\n> pass the start_decoding_at position but didn’t pass restart_lsn, so it\n> picked a random location (current insert position in WAL) which is\n> ahead of start_decoding_at point so it never sent the required fourth\n> record. Now, I don’t think it will work even if somehow sent the\n> correct restart_lsn because of what I wrote earlier that there is no\n> guarantee that the earlier WAL would have been saved.\n>\n> At this point, I can't think of any way to fix this problem except for\n> going back to the previous approach of permanent slots but let me know\n> if you have any ideas to salvage this approach?\n>\n\nOK. The latest patch [v21] now restores the permanent slot (and slot\ncleanup) approach as it was implemented in an earlier version [v17].\nPlease note that this change also re-introduces some potential slot\ncleanup problems for some race scenarios. 
These will be addressed by\nfuture patches.\n\n----\n[v17] https://www.postgresql.org/message-id/CAHut%2BPt9%2Bg8qQR0kMC85nY-O4uDQxXboamZAYhHbvkebzC9fAQ%40mail.gmail.com\n[v21] https://www.postgresql.org/message-id/CAHut%2BPvzHRRA_5O0R8KZCb1tVe1mBVPxFtmttXJnmuOmAegoWA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 28 Jan 2021 18:01:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jan 27, 2021 at 2:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > PSA the v18 patch for the Tablesync Solution1.\n> > >\n> > > 7. Have you tested with the new patch the scenario where we crash\n> > > after FINISHEDCOPY and before SYNCDONE, is it able to pick up the\n> > > replication using the new temporary slot? Here, we need to test the\n> > > case where during the catchup phase we have received few commits and\n> > > then the tablesync worker is crashed/errored out? Basically, check if\n> > > the replication is continued from the same point?\n> > >\n> >\n> > I have tested this and it didn't work, see the below example.\n> >\n> > Publisher-side\n> > ================\n> > CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n> >\n> > BEGIN;\n> > INSERT INTO mytbl1(somedata, text) VALUES (1, 1);\n> > INSERT INTO mytbl1(somedata, text) VALUES (1, 2);\n> > COMMIT;\n> >\n> > CREATE PUBLICATION mypublication FOR TABLE mytbl1;\n> >\n> > Subscriber-side\n> > ================\n> > - Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\n> > worker stops.\n> >\n> > CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n> >\n> >\n> > CREATE SUBSCRIPTION mysub\n> > CONNECTION 'host=localhost port=5432 dbname=postgres'\n> > PUBLICATION mypublication;\n> >\n> > During debug, stop after we mark FINISHEDCOPY state.\n> >\n> > Publisher-side\n> > ================\n> > INSERT INTO mytbl1(somedata, text) VALUES (1, 3);\n> > INSERT INTO mytbl1(somedata, text) VALUES (1, 4);\n> >\n> >\n> > Subscriber-side\n> > ================\n> > - Have a breakpoint in apply_dispatch\n> > - continue in debugger;\n> > - After we replay first 
commit (which will be for values(1,3), note\n> > down the origin position in apply_handle_commit_internal and somehow\n> > error out. I have forced the debugger to set to the last line in\n> > apply_dispatch where the error is raised.\n> > - After the error, again the tablesync worker is restarted and it\n> > starts from the position noted in the previous step\n> > - It exits without replaying the WAL for (1,4)\n> >\n> > So, on the subscriber-side, you will see 3 records. Fourth is missing.\n> > Now, if you insert more records on the publisher, it will anyway\n> > replay those but the fourth one got missing.\n> >\n...\n> >\n> > At this point, I can't think of any way to fix this problem except for\n> > going back to the previous approach of permanent slots but let me know\n> > if you have any ideas to salvage this approach?\n> >\n>\n> OK. The latest patch [v21] now restores the permanent slot (and slot\n> cleanup) approach as it was implemented in an earlier version [v17].\n> Please note that this change also re-introduces some potential slot\n> cleanup problems for some race scenarios.\n>\n\nI am able to reproduce the race condition where slot/origin will\nremain on the publisher node even when the corresponding subscription\nis dropped. 
Basically, if we error out in the 'catchup' phase in\ntablesync worker then either it will restart and cleanup slot/origin\nor if in the meantime we have dropped the subscription and stopped\napply worker then probably the slot and origin will be dangling on the\npublisher.\n\nI have used exactly the same test procedure as was used to expose the\nproblem in the temporary slots with some minor changes as mentioned\nbelow:\nSubscriber-side\n================\n- Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\nworker stops.\n- Have a while(1) loop in wait_for_relation_state_change so that we\ncan control apply worker via debugger at the right time.\n\nSubscriber-side\n================\n- Have a breakpoint in apply_dispatch\n- continue in debugger;\n- After we replay first commit somehow error out. I have forced the\ndebugger to set to the last line in apply_dispatch where the error is\nraised.\n- Now, the table sync worker won't restart because the apply worker is\nlooping in wait_for_relation_state_change.\n- Execute DropSubscription;\n- We can allow apply worker to continue by skipping the while(1) and\nit will exit because DropSubscription would have sent a terminate\nsignal.\n\nAfter the above steps, check the publisher (select * from\npg_replication_slots) and you will find the dangling tablesync slot.\n\nI think to solve the above problem we should drop tablesync\nslot/origin at the Drop/Alter Subscription time and additionally we\nneed to ensure that apply worker doesn't let tablesync workers restart\n(or it must not do any work to access the slot because the slots are\ndropped) once we stopped them. To ensure that, I think we need to make\nthe following changes:\n\n1. Take AccessExclusivelock on subscription_rel during Alter (before\ncalling RemoveSubscriptionRel) and don't release it till transaction\nend (do table_close with NoLock) similar to DropSubscription.\n2. 
Take share lock (AccessShareLock) in GetSubscriptionRelState (it\ngets called from logicalrepsyncstartworker), we can release this lock\nat the end of that function. This will ensure that even if the\ntablesync worker is restarted, it will be blocked till the transaction\nperforming Alter will commit.\n3. Make Alter command to not run in xact block so that we don't keep\nlocks for a longer time and second for the slots related stuff similar\nto dropsubscription.\n\nFew comments on v21:\n===================\n1.\nDropSubscription()\n{\n..\n- /* Clean up dependencies */\n+ /* Clean up dependencies. */\n deleteSharedDependencyRecordsFor(SubscriptionRelationId, subid, 0);\n..\n}\n\nThe above change seems unnecessary w.r.t current patch.\n\n2.\nDropSubscription()\n{\n..\n /*\n- * If there is no slot associated with the subscription, we can finish\n- * here.\n+ * If there is a slot associated with the subscription, then drop the\n+ * replication slot at the publisher node using the replication\n+ * connection.\n */\n- if (!slotname)\n+ if (slotname)\n {\n- table_close(rel, NoLock);\n- return;\n..\n}\n\nWhat is the reason for this change? Can't we keep the check in its\nexisting form?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Jan 2021 16:07:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
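[Editorial note] The locking protocol proposed above, ALTER/DROP SUBSCRIPTION taking AccessExclusiveLock on the subscription-rel entries until commit while a (re)started tablesync worker first takes AccessShareLock, can be sketched with a minimal conflict check. The point is that the restarted tablesync worker blocks until the ALTER/DROP transaction finishes, so it never touches a slot that is about to be (or has been) dropped. Illustrative only; the lock level names follow the message, not real lock-manager code.

```python
# Minimal lock-conflict model for the two levels discussed above.
# AccessExclusiveLock conflicts with everything, including
# AccessShareLock; two AccessShareLocks coexist.
CONFLICTS = {
    ("AccessExclusiveLock", "AccessShareLock"): True,
    ("AccessShareLock", "AccessExclusiveLock"): True,
    ("AccessShareLock", "AccessShareLock"): False,
    ("AccessExclusiveLock", "AccessExclusiveLock"): True,
}

def must_wait(held: list, requested: str) -> bool:
    """Does a requester of `requested` block behind the `held` locks?"""
    return any(CONFLICTS[(h, requested)] for h in held)
```

So while ALTER SUBSCRIPTION holds its AccessExclusiveLock, the tablesync worker's AccessShareLock request in GetSubscriptionRelState waits, and by the time it is granted the slot/origin cleanup has already committed.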
{
"msg_contents": "Hi Amit.\n\nPSA the v22 patch for the Tablesync Solution1.\n\nDifferences from v21:\n+ Patch is rebased to latest OSS HEAD @ 29/Jan.\n+ Includes new code as suggested [ak0128] to ensure no dangling slots\nat Drop/AlterSubscription.\n+ Removes the slot/origin cleanup done by the process interrupt logic\n(cleanup_at_shutdown function).\n+ Addresses some minor review comments.\n\n----\n[ak0128] https://www.postgresql.org/message-id/CAA4eK1LMYXZY1SpzgW-WyFdy%2BFTMZ4BMz1dj0rT2rxGv-zLwFA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 29 Jan 2021 21:37:15 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 9:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 28, 2021 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Jan 27, 2021 at 2:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Jan 23, 2021 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Sat, Jan 23, 2021 at 4:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > >\n> > > > > PSA the v18 patch for the Tablesync Solution1.\n> > > >\n> > > > 7. Have you tested with the new patch the scenario where we crash\n> > > > after FINISHEDCOPY and before SYNCDONE, is it able to pick up the\n> > > > replication using the new temporary slot? Here, we need to test the\n> > > > case where during the catchup phase we have received few commits and\n> > > > then the tablesync worker is crashed/errored out? Basically, check if\n> > > > the replication is continued from the same point?\n> > > >\n> > >\n> > > I have tested this and it didn't work, see the below example.\n> > >\n> > > Publisher-side\n> > > ================\n> > > CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n> > >\n> > > BEGIN;\n> > > INSERT INTO mytbl1(somedata, text) VALUES (1, 1);\n> > > INSERT INTO mytbl1(somedata, text) VALUES (1, 2);\n> > > COMMIT;\n> > >\n> > > CREATE PUBLICATION mypublication FOR TABLE mytbl1;\n> > >\n> > > Subscriber-side\n> > > ================\n> > > - Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\n> > > worker stops.\n> > >\n> > > CREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n> > >\n> > >\n> > > CREATE SUBSCRIPTION mysub\n> > > CONNECTION 'host=localhost port=5432 dbname=postgres'\n> > > PUBLICATION mypublication;\n> > >\n> > > During debug, stop after we mark FINISHEDCOPY state.\n> > >\n> > > Publisher-side\n> > > ================\n> > > INSERT INTO mytbl1(somedata, text) VALUES (1, 3);\n> > > INSERT INTO 
mytbl1(somedata, text) VALUES (1, 4);\n> > >\n> > >\n> > > Subscriber-side\n> > > ================\n> > > - Have a breakpoint in apply_dispatch\n> > > - continue in debugger;\n> > > - After we replay first commit (which will be for values(1,3), note\n> > > down the origin position in apply_handle_commit_internal and somehow\n> > > error out. I have forced the debugger to set to the last line in\n> > > apply_dispatch where the error is raised.\n> > > - After the error, again the tablesync worker is restarted and it\n> > > starts from the position noted in the previous step\n> > > - It exits without replaying the WAL for (1,4)\n> > >\n> > > So, on the subscriber-side, you will see 3 records. Fourth is missing.\n> > > Now, if you insert more records on the publisher, it will anyway\n> > > replay those but the fourth one got missing.\n> > >\n> ...\n> > >\n> > > At this point, I can't think of any way to fix this problem except for\n> > > going back to the previous approach of permanent slots but let me know\n> > > if you have any ideas to salvage this approach?\n> > >\n> >\n> > OK. The latest patch [v21] now restores the permanent slot (and slot\n> > cleanup) approach as it was implemented in an earlier version [v17].\n> > Please note that this change also re-introduces some potential slot\n> > cleanup problems for some race scenarios.\n> >\n>\n> I am able to reproduce the race condition where slot/origin will\n> remain on the publisher node even when the corresponding subscription\n> is dropped. 
Basically, if we error out in the 'catchup' phase in\n> tablesync worker then either it will restart and cleanup slot/origin\n> or if in the meantime we have dropped the subscription and stopped\n> apply worker then probably the slot and origin will be dangling on the\n> publisher.\n>\n> I have used exactly the same test procedure as was used to expose the\n> problem in the temporary slots with some minor changes as mentioned\n> below:\n> Subscriber-side\n> ================\n> - Have a while(1) loop in LogicalRepSyncTableStart so that tablesync\n> worker stops.\n> - Have a while(1) loop in wait_for_relation_state_change so that we\n> can control apply worker via debugger at the right time.\n>\n> Subscriber-side\n> ================\n> - Have a breakpoint in apply_dispatch\n> - continue in debugger;\n> - After we replay first commit somehow error out. I have forced the\n> debugger to set to the last line in apply_dispatch where the error is\n> raised.\n> - Now, the table sync worker won't restart because the apply worker is\n> looping in wait_for_relation_state_change.\n> - Execute DropSubscription;\n> - We can allow apply worker to continue by skipping the while(1) and\n> it will exit because DropSubscription would have sent a terminate\n> signal.\n>\n> After the above steps, check the publisher (select * from\n> pg_replication_slots) and you will find the dangling tablesync slot.\n>\n> I think to solve the above problem we should drop tablesync\n> slot/origin at the Drop/Alter Subscription time and additionally we\n> need to ensure that apply worker doesn't let tablesync workers restart\n> (or it must not do any work to access the slot because the slots are\n> dropped) once we stopped them. To ensure that, I think we need to make\n> the following changes:\n>\n> 1. 
Take AccessExclusiveLock on subscription_rel during Alter (before\n> calling RemoveSubscriptionRel) and don't release it till transaction\n> end (do table_close with NoLock) similar to DropSubscription.\n> 2. Take share lock (AccessShareLock) in GetSubscriptionRelState (it\n> gets called from logicalrepsyncstartworker), we can release this lock\n> at the end of that function. This will ensure that even if the\n> tablesync worker is restarted, it will be blocked till the transaction\n> performing Alter will commit.\n> 3. Make Alter command to not run in xact block so that we don't keep\n> locks for a longer time and second for the slots related stuff similar\n> to dropsubscription.\n>\n\nOK. The latest patch [v22] changes the code as suggested above.\n\n> Few comments on v21:\n> ===================\n> 1.\n> DropSubscription()\n> {\n> ..\n> - /* Clean up dependencies */\n> + /* Clean up dependencies. */\n> deleteSharedDependencyRecordsFor(SubscriptionRelationId, subid, 0);\n> ..\n> }\n>\n> The above change seems unnecessary w.r.t current patch.\n>\n\nOK. Modified in patch [v22].\n\n> 2.\n> DropSubscription()\n> {\n> ..\n> /*\n> - * If there is no slot associated with the subscription, we can finish\n> - * here.\n> + * If there is a slot associated with the subscription, then drop the\n> + * replication slot at the publisher node using the replication\n> + * connection.\n> */\n> - if (!slotname)\n> + if (slotname)\n> {\n> - table_close(rel, NoLock);\n> - return;\n> ..\n> }\n>\n> What is the reason for this change? Can't we keep the check in its\n> existing form?\n>\n\nI think the above comment is no longer applicable in the latest patch [v22].\nEarly exit for null slotname is not desirable anymore; we still need\nto process all the tablesync slots/origins regardless.\n\n----\n[v22] https://www.postgresql.org/message-id/CAHut%2BPtrAVrtjc8srASTeUhbJtviw0Up-bzFSc14Ss%3DmAMxz9g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 29 Jan 2021 21:47:45 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Jan 29, 2021 at 4:07 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> Differences from v21:\n> + Patch is rebased to latest OSS HEAD @ 29/Jan.\n> + Includes new code as suggested [ak0128] to ensure no dangling slots\n> at Drop/AlterSubscription.\n> + Removes the slot/origin cleanup done by process interrupt logic\n> (cleanup_at_shutdown function).\n> + Addresses some minor review comments.\n>\n\nI have made the below changes in the patch. Let me know what you think\nabout these?\n1. It was a bit difficult to understand the code in DropSubscription\nso I have rearranged the code to match the way we are doing in HEAD\nwhere we drop the slots at the end after finishing all the other\ncleanup.\n2. In AlterSubscription_refresh(), we can't allow workers to be\nstopped at commit time as we have already dropped the slots because\nthe worker can access the dropped slot. We need to stop the workers\nbefore dropping slots. This makes all the code related to\nlogicalrep_worker_stop_at_commit redundant.\n3. In AlterSubscription_refresh(), we need to acquire the lock on\npg_subscription_rel only when we try to remove any subscription rel.\n4. Added/Changed quite a few comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 30 Jan 2021 18:49:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have made the below changes in the patch. Let me know what you think\n> about these?\n> 1. It was a bit difficult to understand the code in DropSubscription\n> so I have rearranged the code to match the way we are doing in HEAD\n> where we drop the slots at the end after finishing all the other\n> cleanup.\n\nThere was a reason why the v22 logic was different from HEAD.\n\nThe broken connection leaves dangling slots which is unavoidable. But,\nwhereas the user knows the name of the Subscription slot (they named\nit), there is no easy way for them to know the names of the remaining\ntablesync slots unless we log them.\n\nThat is why the v22 code was written to process the tablesync slots\neven for wrconn == NULL so their name could be logged:\nelog(WARNING, \"no connection; cannot drop tablesync slot \\\"%s\\\".\",\nsyncslotname);\n\nThe v23 patch removed this dangling slot name information, so it makes\nit difficult for the user to know what tablesync slots to cleanup.\n\n> 2. In AlterSubscription_refresh(), we can't allow workers to be\n> stopped at commit time as we have already dropped the slots because\n> the worker can access the dropped slot. We need to stop the workers\n> before dropping slots. This makes all the code related to\n> logicalrep_worker_stop_at_commit redundant.\n\nOK.\n\n> 3. In AlterSubscription_refresh(), we need to acquire the lock on\n> pg_subscription_rel only when we try to remove any subscription rel.\n\n+ if (!sub_rel_locked)\n+ {\n+ rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);\n+ sub_rel_locked = true;\n+ }\n\nOK. But the sub_rel_locked bool is not really necessary. Why not just\ncheck for rel == NULL? e.g.\nif (!rel)\n rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);\n\n> 4. 
Added/Changed quite a few comments.\n>\n\n@@ -1042,6 +1115,31 @@ DropSubscription(DropSubscriptionStmt *stmt,\nbool isTopLevel)\n }\n list_free(subworkers);\n\n+ /*\n+ * Tablesync resource cleanup (slots and origins).\n\nThe comment is misleading; this code is only dropping origins.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 1 Feb 2021 12:18:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> 2. In AlterSubscription_refresh(), we can't allow workers to be\n> stopped at commit time as we have already dropped the slots because\n> the worker can access the dropped slot. We need to stop the workers\n> before dropping slots. This makes all the code related to\n> logicalrep_worker_stop_at_commit redundant.\n\n@@ -73,20 +73,6 @@ typedef struct LogicalRepWorkerId\n Oid relid;\n } LogicalRepWorkerId;\n\n-typedef struct StopWorkersData\n-{\n- int nestDepth; /* Sub-transaction nest level */\n- List *workers; /* List of LogicalRepWorkerId */\n- struct StopWorkersData *parent; /* This need not be an immediate\n- * subtransaction parent */\n-} StopWorkersData;\n\nSince v23 removes that typedef from the code, don't you also have to\nremove it from src/tools/pgindent/typedefs.list?\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 1 Feb 2021 12:38:55 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 6:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have made the below changes in the patch. Let me know what you think\n> > about these?\n> > 1. It was a bit difficult to understand the code in DropSubscription\n> > so I have rearranged the code to match the way we are doing in HEAD\n> > where we drop the slots at the end after finishing all the other\n> > cleanup.\n>\n> There was a reason why the v22 logic was different from HEAD.\n>\n> The broken connection leaves dangling slots which is unavoidable.\n>\n\nI think this is true only when the user specifically requested it by\nthe use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\nOtherwise, we give an error on a broken connection. Also, if that is\ntrue then is there a reason to pass missing_ok as true while dropping\ntablesync slots?\n\n\n> But,\n> whereas the user knows the name of the Subscription slot (they named\n> it), there is no easy way for them to know the names of the remaining\n> tablesync slots unless we log them.\n>\n> That is why the v22 code was written to process the tablesync slots\n> even for wrconn == NULL so their name could be logged:\n> elog(WARNING, \"no connection; cannot drop tablesync slot \\\"%s\\\".\",\n> syncslotname);\n>\n> The v23 patch removed this dangling slot name information, so it makes\n> it difficult for the user to know what tablesync slots to cleanup.\n>\n\nOkay, then can we think of combining with the existing error of the\nreplication slot? I think that might produce a very long message, so\nanother idea could be to LOG a separate WARNING for each such slot\njust before giving the error.\n\n> > 2. In AlterSubscription_refresh(), we can't allow workers to be\n> > stopped at commit time as we have already dropped the slots because\n> > the worker can access the dropped slot. 
We need to stop the workers\n> > before dropping slots. This makes all the code related to\n> > logicalrep_worker_stop_at_commit redundant.\n>\n> OK.\n>\n> > 3. In AlterSubscription_refresh(), we need to acquire the lock on\n> > pg_subscription_rel only when we try to remove any subscription rel.\n>\n> + if (!sub_rel_locked)\n> + {\n> + rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);\n> + sub_rel_locked = true;\n> + }\n>\n> OK. But the sub_rel_locked bool is not really necessary. Why not just\n> check for rel == NULL? e.g.\n> if (!rel)\n> rel = table_open(SubscriptionRelRelationId, AccessExclusiveLock);\n>\n\nOkay, that seems to be better, will change accordingly.\n\n> > 4. Added/Changed quite a few comments.\n> >\n>\n> @@ -1042,6 +1115,31 @@ DropSubscription(DropSubscriptionStmt *stmt,\n> bool isTopLevel)\n> }\n> list_free(subworkers);\n>\n> + /*\n> + * Tablesync resource cleanup (slots and origins).\n>\n> The comment is misleading; this code is only dropping origins.\n>\n\nOkay, will change to something like: \"Cleanup of tablesync replication origins.\"\n\n> @@ -73,20 +73,6 @@ typedef struct LogicalRepWorkerId\n> Oid relid;\n> } LogicalRepWorkerId;\n>\n> -typedef struct StopWorkersData\n> -{\n> - int nestDepth; /* Sub-transaction nest level */\n> - List *workers; /* List of LogicalRepWorkerId */\n> - struct StopWorkersData *parent; /* This need not be an immediate\n> - * subtransaction parent */\n> -} StopWorkersData;\n>\n> Since v23 removes that typedef from the code, don't you also have to\n> remove it from src/tools/pgindent/typedefs.list?\n>\n\nYeah.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 08:24:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 6:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have made the below changes in the patch. Let me know what you think\n> > > about these?\n> > > 1. It was a bit difficult to understand the code in DropSubscription\n> > > so I have rearranged the code to match the way we are doing in HEAD\n> > > where we drop the slots at the end after finishing all the other\n> > > cleanup.\n> >\n> > There was a reason why the v22 logic was different from HEAD.\n> >\n> > The broken connection leaves dangling slots which is unavoidable.\n> >\n>\n> I think this is true only when the user specifically requested it by\n> the use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\n> Otherwise, we give an error on a broken connection. Also, if that is\n> true then is there a reason to pass missing_ok as true while dropping\n> tablesync slots?\n>\n\nAFAIK there is always a potential race with DropSubscription dropping\nslots. The DropSubscription might be running at exactly the same time\nthe apply worker has just dropped the very same tablesync slot. 
By\nsaying missing_ok = true it means DropSubscription would not give\nERROR in such a case, so at least the DROP SUBSCRIPTION would not fail\nwith an unexpected error.\n\n>\n> > But,\n> > whereas the user knows the name of the Subscription slot (they named\n> > it), there is no easy way for them to know the names of the remaining\n> > tablesync slots unless we log them.\n> >\n> > That is why the v22 code was written to process the tablesync slots\n> > even for wrconn == NULL so their name could be logged:\n> > elog(WARNING, \"no connection; cannot drop tablesync slot \\\"%s\\\".\",\n> > syncslotname);\n> >\n> > The v23 patch removed this dangling slot name information, so it makes\n> > it difficult for the user to know what tablesync slots to cleanup.\n> >\n>\n> Okay, then can we think of combining with the existing error of the\n> replication slot? I think that might produce a very long message, so\n> another idea could be to LOG a separate WARNING for each such slot\n> just before giving the error.\n\nThere may be many subscribed tables so I agree combining to one\nmessage might be too long. Yes, we can add another loop to output the\nnecessary information. But, isn’t logging each tablesync slot WARNING\nbefore the subscription slot ERROR exactly the behaviour which already\nexisted in v22. IIUC the DropSubscription refactoring in V23 was not\ndone for any functional change, but was intended only to make the code\nsimpler, but how is that goal achieved if v23 ends up needing 3 loops\nwhere v22 only needed 1 loop to do the same thing?\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 15:08:13 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 9:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 6:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I have made the below changes in the patch. Let me know what you think\n> > > > about these?\n> > > > 1. It was a bit difficult to understand the code in DropSubscription\n> > > > so I have rearranged the code to match the way we are doing in HEAD\n> > > > where we drop the slots at the end after finishing all the other\n> > > > cleanup.\n> > >\n> > > There was a reason why the v22 logic was different from HEAD.\n> > >\n> > > The broken connection leaves dangling slots which is unavoidable.\n> > >\n> >\n> > I think this is true only when the user specifically requested it by\n> > the use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\n> > Otherwise, we give an error on a broken connection. Also, if that is\n> > true then is there a reason to pass missing_ok as true while dropping\n> > tablesync slots?\n> >\n>\n> AFAIK there is always a potential race with DropSubscription dropping\n> slots. The DropSubscription might be running at exactly the same time\n> the apply worker has just dropped the very same tablesync slot.\n>\n\nWe stopped the workers before getting a list of NotReady relations and\nthen we try to drop the corresponding slots. So, how such a race\ncondition can happen? 
Note, because we have a lock on pg_subscription,\nthere is no chance that the workers can restart till the transaction\nend.\n\n> By\n> saying missing_ok = true it means DropSubscription would not give\n> ERROR in such a case, so at least the DROP SUBSCRIPTION would not fail\n> with an unexpected error.\n>\n> >\n> > > But,\n> > > whereas the user knows the name of the Subscription slot (they named\n> > > it), there is no easy way for them to know the names of the remaining\n> > > tablesync slots unless we log them.\n> > >\n> > > That is why the v22 code was written to process the tablesync slots\n> > > even for wrconn == NULL so their name could be logged:\n> > > elog(WARNING, \"no connection; cannot drop tablesync slot \\\"%s\\\".\",\n> > > syncslotname);\n> > >\n> > > The v23 patch removed this dangling slot name information, so it makes\n> > > it difficult for the user to know what tablesync slots to cleanup.\n> > >\n> >\n> > Okay, then can we think of combining with the existing error of the\n> > replication slot? I think that might produce a very long message, so\n> > another idea could be to LOG a separate WARNING for each such slot\n> > just before giving the error.\n>\n> There may be many subscribed tables so I agree combining to one\n> message might be too long. Yes, we can add another loop to output the\n> necessary information. But, isn’t logging each tablesync slot WARNING\n> before the subscription slot ERROR exactly the behaviour which already\n> existed in v22. IIUC the DropSubscription refactoring in V23 was not\n> done for any functional change, but was intended only to make the code\n> simpler, but how is that goal achieved if v23 ends up needing 3 loops\n> where v22 only needed 1 loop to do the same thing?\n>\n\nNo, there is a functionality change as well. 
The way we have code in\nv22 can easily lead to a problem when we have dropped the slots but\nget an error while removing origins or an entry from subscription rel.\nIn such cases, we won't be able to rollback the drop of slots but the\nother database operations will be rolled back. This is the reason we\nhave to drop the slots at the end. We need to ensure the same thing\nfor AlterSubscription_refresh. Does this make sense now?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 10:14:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 9:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Feb 1, 2021 at 6:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > I have made the below changes in the patch. Let me know what you think\n> > > > > about these?\n> > > > > 1. It was a bit difficult to understand the code in DropSubscription\n> > > > > so I have rearranged the code to match the way we are doing in HEAD\n> > > > > where we drop the slots at the end after finishing all the other\n> > > > > cleanup.\n> > > >\n> > > > There was a reason why the v22 logic was different from HEAD.\n> > > >\n> > > > The broken connection leaves dangling slots which is unavoidable.\n> > > >\n> > >\n> > > I think this is true only when the user specifically requested it by\n> > > the use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\n> > > Otherwise, we give an error on a broken connection. Also, if that is\n> > > true then is there a reason to pass missing_ok as true while dropping\n> > > tablesync slots?\n> > >\n> >\n> > AFAIK there is always a potential race with DropSubscription dropping\n> > slots. The DropSubscription might be running at exactly the same time\n> > the apply worker has just dropped the very same tablesync slot.\n> >\n>\n> We stopped the workers before getting a list of NotReady relations and\n> then we try to drop the corresponding slots. So, how such a race\n> condition can happen?\n>\n\nI think it is possible that the state is still not SYNCDONE but the\nslot is already dropped so here we should be ready with the missing\nslot.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:09:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 3:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 9:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Feb 1, 2021 at 6:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Sun, Jan 31, 2021 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > I have made the below changes in the patch. Let me know what you think\n> > > > > about these?\n> > > > > 1. It was a bit difficult to understand the code in DropSubscription\n> > > > > so I have rearranged the code to match the way we are doing in HEAD\n> > > > > where we drop the slots at the end after finishing all the other\n> > > > > cleanup.\n> > > >\n> > > > There was a reason why the v22 logic was different from HEAD.\n> > > >\n> > > > The broken connection leaves dangling slots which is unavoidable.\n> > > >\n> > >\n> > > I think this is true only when the user specifically requested it by\n> > > the use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\n> > > Otherwise, we give an error on a broken connection. Also, if that is\n> > > true then is there a reason to pass missing_ok as true while dropping\n> > > tablesync slots?\n> > >\n> >\n> > AFAIK there is always a potential race with DropSubscription dropping\n> > slots. The DropSubscription might be running at exactly the same time\n> > the apply worker has just dropped the very same tablesync slot.\n> >\n>\n> We stopped the workers before getting a list of NotReady relations and\n> then we try to drop the corresponding slots. So, how such a race\n> condition can happen? Note, because we have a lock on pg_subscrition,\n> there is no chance that the workers can restart till the transaction\n> end.\n\nOK. I think I was forgetting the logicalrep_worker_stop would also go\ninto a loop waiting for the worker process to die. 
So even if the\ntablesync worker does simultaneously drop its own slot, I think it\nwill certainly at least be in SYNCDONE state before DropSubscription\ndoes anything else with that worker.\n\n>\n> > By\n> > saying missing_ok = true it means DropSubscription would not give\n> > ERROR in such a case, so at least the DROP SUBSCRIPTION would not fail\n> > with an unexpected error.\n> >\n> > >\n> > > > But,\n> > > > whereas the user knows the name of the Subscription slot (they named\n> > > > it), there is no easy way for them to know the names of the remaining\n> > > > tablesync slots unless we log them.\n> > > >\n> > > > That is why the v22 code was written to process the tablesync slots\n> > > > even for wrconn == NULL so their name could be logged:\n> > > > elog(WARNING, \"no connection; cannot drop tablesync slot \\\"%s\\\".\",\n> > > > syncslotname);\n> > > >\n> > > > The v23 patch removed this dangling slot name information, so it makes\n> > > > it difficult for the user to know what tablesync slots to cleanup.\n> > > >\n> > >\n> > > Okay, then can we think of combining with the existing error of the\n> > > replication slot? I think that might produce a very long message, so\n> > > another idea could be to LOG a separate WARNING for each such slot\n> > > just before giving the error.\n> >\n> > There may be many subscribed tables so I agree combining to one\n> > message might be too long. Yes, we can add another loop to output the\n> > necessary information. But, isn’t logging each tablesync slot WARNING\n> > before the subscription slot ERROR exactly the behaviour which already\n> > existed in v22. IIUC the DropSubscription refactoring in V23 was not\n> > done for any functional change, but was intended only to make the code\n> > simpler, but how is that goal achieved if v23 ends up needing 3 loops\n> > where v22 only needed 1 loop to do the same thing?\n> >\n>\n> No, there is a functionality change as well. 
The way we have code in\n> v22 can easily lead to a problem when we have dropped the slots but\n> get an error while removing origins or an entry from subscription rel.\n> In such cases, we won't be able to rollback the drop of slots but the\n> other database operations will be rolled back. This is the reason we\n> have to drop the slots at the end. We need to ensure the same thing\n> for AlterSubscription_refresh. Does this make sense now?\n>\n\nOK.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 16:53:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 3:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 9:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > > I think this is true only when the user specifically requested it by\n> > > > the use of \"ALTER SUBSCRIPTION ... SET (slot_name = NONE)\", right?\n> > > > Otherwise, we give an error on a broken connection. Also, if that is\n> > > > true then is there a reason to pass missing_ok as true while dropping\n> > > > tablesync slots?\n> > > >\n> > >\n> > > AFAIK there is always a potential race with DropSubscription dropping\n> > > slots. The DropSubscription might be running at exactly the same time\n> > > the apply worker has just dropped the very same tablesync slot.\n> > >\n> >\n> > We stopped the workers before getting a list of NotReady relations and\n> > then we try to drop the corresponding slots. So, how such a race\n> > condition can happen? Note, because we have a lock on pg_subscrition,\n> > there is no chance that the workers can restart till the transaction\n> > end.\n>\n> OK. I think I was forgetting the logicalrep_worker_stop would also go\n> into a loop waiting for the worker process to die. So even if the\n> tablesync worker does simultaneously drop it's own slot, I think it\n> will certainly at least be in SYNCDONE state before DropSubscription\n> does anything else with that worker.\n>\n\nHow is that ensured? We don't have anything like HOLD_INTERRUPTS\nbetween the time we dropped the slot and updated the rel state as SYNCDONE.\nSo, isn't it possible that after we dropped the slot and before we\nupdate the state, the SIGTERM signal arrives and leads to worker exit?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:49:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 5:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n > > > AFAIK there is always a potential race with DropSubscription dropping\n> > > > slots. The DropSubscription might be running at exactly the same time\n> > > > the apply worker has just dropped the very same tablesync slot.\n> > > >\n> > >\n> > > We stopped the workers before getting a list of NotReady relations and\n> > > then we try to drop the corresponding slots. So, how such a race\n> > > condition can happen? Note, because we have a lock on pg_subscrition,\n> > > there is no chance that the workers can restart till the transaction\n> > > end.\n> >\n> > OK. I think I was forgetting the logicalrep_worker_stop would also go\n> > into a loop waiting for the worker process to die. So even if the\n> > tablesync worker does simultaneously drop it's own slot, I think it\n> > will certainly at least be in SYNCDONE state before DropSubscription\n> > does anything else with that worker.\n> >\n>\n> How is that ensured? We don't have anything like HOLD_INTERRUPTS\n> between the time dropped the slot and updated rel state as SYNCDONE.\n> So, isn't it possible that after we dropped the slot and before we\n> update the state, the SIGTERM signal arrives and led to worker exit?\n>\n\nThe worker has the SIGTERM handler of \"die\". IIUC the \"die\" function\ndoesn't normally do anything except set some flags to say please die\nat the next convenient opportunity. My understanding is that the\nworker process will not actually exit until it next executes\nCHECK_FOR_INTERRUPTS(), whereupon it will see the ProcDiePending flag\nand *really* die. So even if the SIGTERM signal arrives immediately\nafter the slot is dropped, the tablesync will still become SYNCDONE.\nIs this wrong understanding?\n\nBut your scenario could still be possible if \"die\" exited immediately\n(e.g. only in single user mode?).\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 1 Feb 2021 18:38:16 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 1:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 5:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > > AFAIK there is always a potential race with DropSubscription dropping\n> > > > > slots. The DropSubscription might be running at exactly the same time\n> > > > > the apply worker has just dropped the very same tablesync slot.\n> > > > >\n> > > >\n> > > > We stopped the workers before getting a list of NotReady relations and\n> > > > then we try to drop the corresponding slots. So, how such a race\n> > > > condition can happen? Note, because we have a lock on pg_subscrition,\n> > > > there is no chance that the workers can restart till the transaction\n> > > > end.\n> > >\n> > > OK. I think I was forgetting the logicalrep_worker_stop would also go\n> > > into a loop waiting for the worker process to die. So even if the\n> > > tablesync worker does simultaneously drop it's own slot, I think it\n> > > will certainly at least be in SYNCDONE state before DropSubscription\n> > > does anything else with that worker.\n> > >\n> >\n> > How is that ensured? We don't have anything like HOLD_INTERRUPTS\n> > between the time dropped the slot and updated rel state as SYNCDONE.\n> > So, isn't it possible that after we dropped the slot and before we\n> > update the state, the SIGTERM signal arrives and led to worker exit?\n> >\n>\n> The worker has the SIGTERM handler of \"die\". IIUC the \"die\" function\n> doesn't normally do anything except set some flags to say please die\n> at the next convenient opportunity. My understanding is that the\n> worker process will not actually exit until it next executes\n> CHECK_FOR_INTERRUPTS(), whereupon it will see the ProcDiePending flag\n> and *really* die. 
So even if the SIGTERM signal arrives immediately\n> after the slot is dropped, the tablesync will still become SYNCDONE.\n> Is this wrong understanding?\n>\n> But your scenario could still be possible if \"die\" exited immediately\n> (e.g. only in single user mode?).\n>\n\nI think it is possible without that as well. There are many calls\nin-between those two operations which can internally call\nCHECK_FOR_INTERRUPTS. One of the flows where such a possibility exists\nis UpdateSubscriptionRelState->SearchSysCacheCopy2->SearchSysCacheCopy->SearchSysCache->SearchCatCache->SearchCatCacheInternal->SearchCatCacheMiss->systable_getnext.\nThis can internally call heapgetpage where we have\nCHECK_FOR_INTERRUPTS. I think even if today there was no CFI call we\ncan't take a guarantee for the future as the calls used are quite\ncommon. So, probably we need missing_ok flag in DropSubscription.\n\nOne more point in the tablesync code you are calling\nReplicationSlotDropAtPubNode with missing_ok as false. What if we get\nan error after that and before we have marked the state as SYNCDONE? I\nguess it will always error from ReplicationSlotDropAtPubNode after\nthat because we had already dropped the slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Feb 2021 14:10:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 3:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 9:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > No, there is a functionality change as well. The way we have code in\n> > v22 can easily lead to a problem when we have dropped the slots but\n> > get an error while removing origins or an entry from subscription rel.\n> > In such cases, we won't be able to rollback the drop of slots but the\n> > other database operations will be rolled back. This is the reason we\n> > have to drop the slots at the end. We need to ensure the same thing\n> > for AlterSubscription_refresh. Does this make sense now?\n> >\n>\n> OK.\n>\n\nI have updated the patch to display WARNING for each of the tablesync\nslots during DropSubscription. As discussed, I have moved the drop\nslot related code towards the end in AlterSubscription_refresh. Apart\nfrom this, I have fixed one more issue in tablesync code where in\nafter catching the exception we were not clearing the transaction\nstate on the publisher, see changes in LogicalRepSyncTableStart. I\nhave also fixed other comments raised by you. Additionally, I have\nremoved the test because it was creating the same name slot as the\ntablesync worker and tablesync worker removed the same due to new\nlogic in LogicalRepSyncStart. Earlier, it was not failing because of\nthe bug in that code which I have fixed in the attached.\n\nI wonder whether we should restrict creating slots with prefix pg_\nbecause we are internally creating slots with those names? I think\nthis was a problem previously also. We already prohibit it for few\nother objects like origins, schema, etc., see the usage of\nIsReservedName.\n\n\n\n\n--\nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 1 Feb 2021 17:56:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> I have updated the patch to display WARNING for each of the tablesync\n> slots during DropSubscription. As discussed, I have moved the drop\n> slot related code towards the end in AlterSubscription_refresh. Apart\n> from this, I have fixed one more issue in tablesync code where in\n> after catching the exception we were not clearing the transaction\n> state on the publisher, see changes in LogicalRepSyncTableStart. I\n> have also fixed other comments raised by you.\n\nHere are some additional feedback comments about the v24 patch:\n\n~~\n\nReportSlotConnectionError:\n\n1,2,3,4.\n+ foreach(lc, rstates)\n+ {\n+ SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);\n+ Oid relid = rstate->relid;\n+\n+ /* Only cleanup resources of tablesync workers */\n+ if (!OidIsValid(relid))\n+ continue;\n+\n+ /*\n+ * Caller needs to ensure that we have appropriate locks so that\n+ * relstate doesn't change underneath us.\n+ */\n+ if (rstate->state != SUBREL_STATE_SYNCDONE)\n+ {\n+ char syncslotname[NAMEDATALEN] = { 0 };\n+\n+ ReplicationSlotNameForTablesync(subid, relid, syncslotname);\n+ elog(WARNING, \"could not drop tablesync replication slot \\\"%s\\\"\",\n+ syncslotname);\n+\n+ }\n+ }\n\n1. I wonder if \"rstates\" would be better named something like\n\"not_ready_rstates\", otherwise it is not apparent what states are in\nthis list\n\n2. The comment \"/* Only cleanup resources of tablesync workers */\" is\nnot quite correct because there is no cleanup happening here. Maybe\nchange to:\nif (!OidIsValid(relid))\ncontinue; /* not a tablesync worker */\n\n3. Maybe the \"appropriate locks\" comment can say what locks are the\n\"appropriate\" ones?\n\n4. Spurious blank line after the elog?\n\n~~\n\nAlterSubscription_refresh:\n\n5.\n+ /*\n+ * Drop the tablesync slot. 
This has to be at the end because\notherwise if there\n+ * is an error while doing the database operations we won't be able to rollback\n+ * dropped slot.\n+ */\n\nMaybe \"Drop the tablesync slot.\" should say \"Drop the tablesync slots\nassociated with removed tables.\"\n\n~~\n\nDropSubscription:\n\n6.\n+ /*\n+ * Cleanup of tablesync replication origins.\n+ *\n+ * Any READY-state relations would already have dealt with clean-ups.\n+ *\n+ * Note that the state can't change because we have already stopped both\n+ * the apply and tablesync workers and they can't restart because of\n+ * exclusive lock on the subscription.\n+ */\n+ rstates = GetSubscriptionNotReadyRelations(subid);\n+ foreach(lc, rstates)\n\nI wonder if \"rstates\" would be better named as \"not_ready_rstates\",\nbecause it is used in several places where not READY is assumed.\n\n7.\n+ {\n+ if (!slotname)\n+ {\n+ /* be tidy */\n+ list_free(rstates);\n+ return;\n+ }\n+ else\n+ {\n+ ReportSlotConnectionError(rstates, subid, slotname, err);\n+ }\n+\n+ }\n\nSpurious blank line above?\n\n8.\nThe new logic of calling the ReportSlotConnectionError seems to be\nexpecting that the user has encountered some connection error, and\n*after* that they have assigned slot_name = NONE as a workaround. In\nthis scenario the code looks ok since names of any dangling tablesync\nslots were being logged at the time of the error.\n\nBut I am wondering what about where the user might have set slot_name\n= NONE *before* the connection is broken. In this scenario, there is\nno ERROR, so if there are still (is it possible?) dangling tablesync\nslots, their names are never getting logged at all. So how can the\nuser know what to delete?\n\n~~\n\n> Additionally, I have\n> removed the test because it was creating the same name slot as the\n> tablesync worker and tablesync worker removed the same due to new\n> logic in LogicalRepSyncStart. 
Earlier, it was not failing because of\n> the bug in that code which I have fixed in the attached.\n\nWasn't the point of that test to cause a tablesync slot clash and see\nif it could recover? Why not just keep it, and modify the test to make\nit work again? Isn't it still valuable because at least it would\nexecute the code through the PG_CATCH which otherwise may not get\nexecuted by any other test?\n\n>\n> I wonder whether we should restrict creating slots with prefix pg_\n> because we are internally creating slots with those names? I think\n> this was a problem previously also. We already prohibit it for few\n> other objects like origins, schema, etc., see the usage of\n> IsReservedName.\n>\n\nYes, we could restrict the create slot API if you really wanted to.\nBut, IMO it is implausible that a user could \"accidentally\" clash with\nthe internal tablesync slot name, so in practice maybe this change\nwould not help much but it might make it more difficult to test some\nscenarios.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Feb 2021 13:59:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 8:29 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I have updated the patch to display WARNING for each of the tablesync\n> > slots during DropSubscription. As discussed, I have moved the drop\n> > slot related code towards the end in AlterSubscription_refresh. Apart\n> > from this, I have fixed one more issue in tablesync code where in\n> > after catching the exception we were not clearing the transaction\n> > state on the publisher, see changes in LogicalRepSyncTableStart. I\n> > have also fixed other comments raised by you.\n>\n> Here are some additional feedback comments about the v24 patch:\n>\n> ~~\n>\n> ReportSlotConnectionError:\n>\n> 1,2,3,4.\n> + foreach(lc, rstates)\n> + {\n> + SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);\n> + Oid relid = rstate->relid;\n> +\n> + /* Only cleanup resources of tablesync workers */\n> + if (!OidIsValid(relid))\n> + continue;\n> +\n> + /*\n> + * Caller needs to ensure that we have appropriate locks so that\n> + * relstate doesn't change underneath us.\n> + */\n> + if (rstate->state != SUBREL_STATE_SYNCDONE)\n> + {\n> + char syncslotname[NAMEDATALEN] = { 0 };\n> +\n> + ReplicationSlotNameForTablesync(subid, relid, syncslotname);\n> + elog(WARNING, \"could not drop tablesync replication slot \\\"%s\\\"\",\n> + syncslotname);\n> +\n> + }\n> + }\n>\n> 1. I wonder if \"rstates\" would be better named something like\n> \"not_ready_rstates\", otherwise it is not apparent what states are in\n> this list\n>\n\nI don't know if that would be better and it is used in the same way in\nthe existing code. I find the current naming succinct.\n\n> 2. The comment \"/* Only cleanup resources of tablesync workers */\" is\n> not quite correct because there is no cleanup happening here. 
Maybe\n> change to:\n> if (!OidIsValid(relid))\n> continue; /* not a tablesync worker */\n>\n\nAren't we trying to cleanup the tablesync slots here? So, I don't see\nthe comment as irrelevant.\n\n> 3. Maybe the \"appropriate locks\" comment can say what locks are the\n> \"appropriate\" ones?\n>\n> 4. Spurious blank line after the elog?\n>\n\nWill fix both the above.\n\n> ~~\n>\n> AlterSubscription_refresh:\n>\n> 5.\n> + /*\n> + * Drop the tablesync slot. This has to be at the end because\n> otherwise if there\n> + * is an error while doing the database operations we won't be able to rollback\n> + * dropped slot.\n> + */\n>\n> Maybe \"Drop the tablesync slot.\" should say \"Drop the tablesync slots\n> associated with removed tables.\"\n>\n\nmakes sense, will fix.\n\n> ~~\n>\n> DropSubscription:\n>\n> 6.\n> + /*\n> + * Cleanup of tablesync replication origins.\n> + *\n> + * Any READY-state relations would already have dealt with clean-ups.\n> + *\n> + * Note that the state can't change because we have already stopped both\n> + * the apply and tablesync workers and they can't restart because of\n> + * exclusive lock on the subscription.\n> + */\n> + rstates = GetSubscriptionNotReadyRelations(subid);\n> + foreach(lc, rstates)\n>\n> I wonder if \"rstates\" would be better named as \"not_ready_rstates\",\n> because it is used in several places where not READY is assumed.\n>\n\nSame response as above for similar comment.\n\n> 7.\n> + {\n> + if (!slotname)\n> + {\n> + /* be tidy */\n> + list_free(rstates);\n> + return;\n> + }\n> + else\n> + {\n> + ReportSlotConnectionError(rstates, subid, slotname, err);\n> + }\n> +\n> + }\n>\n> Spurious blank line above?\n>\n\nWill fix.\n\n> 8.\n> The new logic of calling the ReportSlotConnectionError seems to be\n> expecting that the user has encountered some connection error, and\n> *after* that they have assigned slot_name = NONE as a workaround. 
In\n> this scenario the code looks ok since names of any dangling tablesync\n> slots were being logged at the time of the error.\n>\n> But I am wondering what about where the user might have set slot_name\n> = NONE *before* the connection is broken. In this scenario, there is\n> no ERROR, so if there are still (is it possible?) dangling tablesync\n> slots, their names are never getting logged at all. So how can the\n> user know what to delete?\n>\n\nIt has been mentioned in docs that the user is responsible for\ncleaning that up manually in such a case. The patch has also described\nhow the names are generated so that can help user to remove those.\n+ These table synchronization slots have generated names:\n+ <quote><literal>pg_%u_sync_%u</literal></quote> (parameters: Subscription\n+ <parameter>oid</parameter>, Table <parameter>relid</parameter>)\n\nI think if the user changes slot_name associated with the\nsubscription, it would be his responsibility to clean up the\npreviously associated slot. This is currently the case with the main\nsubscription slot as well. I think it won't be advisable for the user\nto change slot_name unless under some rare cases where the system\nmight be stuck like the one for which we are giving WARNING and\nproviding a hint for setting the slot_name to NONE.\n\n\n> ~~\n>\n> > Additionally, I have\n> > removed the test because it was creating the same name slot as the\n> > tablesync worker and tablesync worker removed the same due to new\n> > logic in LogicalRepSyncStart. Earlier, it was not failing because of\n> > the bug in that code which I have fixed in the attached.\n>\n> Wasn't causing a tablesync slot clash and seeing if it could recover\n> the point of that test? 
Why not just keep, and modify the test to make\n> it work again?\n>\n\nWe can do that but my other worry was that we might want to reserve\nthe names for slots that start with pg_.\n\n> Isn't it still valuable because at least it would\n> execute the code through the PG_CATCH which otherwise may not get\n> executed by any other test?\n>\n\nIt is valuable but IIRC there was a test (in subscription/004_sync.pl)\nwhere PK violation happens during copy which will lead to the coverage\nof code in CATCH.\n\n> >\n> > I wonder whether we should restrict creating slots with prefix pg_\n> > because we are internally creating slots with those names? I think\n> > this was a problem previously also. We already prohibit it for few\n> > other objects like origins, schema, etc., see the usage of\n> > IsReservedName.\n> >\n>\n> Yes, we could restrict the create slot API if you really wanted to.\n> But, IMO it is implausible that a user could \"accidentally\" clash with\n> the internal tablesync slot name, so in practice maybe this change\n> would not help much but it might make it more difficult to test some\n> scenarios.\n>\n\nIsn't the same true for origins?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Feb 2021 10:33:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I have updated the patch to display WARNING for each of the tablesync\n> slots during DropSubscription. As discussed, I have moved the drop\n> slot related code towards the end in AlterSubscription_refresh. Apart\n> from this, I have fixed one more issue in tablesync code where in\n> after catching the exception we were not clearing the transaction\n> state on the publisher, see changes in LogicalRepSyncTableStart. I\n> have also fixed other comments raised by you. Additionally, I have\n> removed the test because it was creating the same name slot as the\n> tablesync worker and tablesync worker removed the same due to new\n> logic in LogicalRepSyncStart. Earlier, it was not failing because of\n> the bug in that code which I have fixed in the attached.\n>\n\nI was testing this patch. I had a table on the subscriber which had a\nrow that would cause a PK constraint\nviolation during the table copy. 
This is resulting in the subscriber\ntrying to rollback the table copy and failing.\n\n2021-02-01 23:28:16.041 EST [23738] LOG: logical replication apply\nworker for subscription \"tap_sub\" has started\n2021-02-01 23:28:16.051 EST [23740] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-01 23:28:21.118 EST [23740] ERROR: table copy could not\nrollback transaction on publisher\n2021-02-01 23:28:21.118 EST [23740] DETAIL: The error was: another\ncommand is already in progress\n2021-02-01 23:28:21.122 EST [8028] LOG: background worker \"logical\nreplication worker\" (PID 23740) exited with exit code 1\n2021-02-01 23:28:21.125 EST [23908] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-01 23:28:21.138 EST [23908] ERROR: could not create\nreplication slot \"pg_16398_sync_16384\": ERROR: replication slot\n\"pg_16398_sync_16384\" already exists\n2021-02-01 23:28:21.139 EST [8028] LOG: background worker \"logical\nreplication worker\" (PID 23908) exited with exit code 1\n2021-02-01 23:28:26.168 EST [24048] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-01 23:28:34.244 EST [24048] ERROR: table copy could not\nrollback transaction on publisher\n2021-02-01 23:28:34.244 EST [24048] DETAIL: The error was: another\ncommand is already in progress\n2021-02-01 23:28:34.251 EST [8028] LOG: background worker \"logical\nreplication worker\" (PID 24048) exited with exit code 1\n2021-02-01 23:28:34.254 EST [24337] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-01 23:28:34.263 EST [24337] ERROR: could not create\nreplication slot \"pg_16398_sync_16384\": ERROR: replication slot\n\"pg_16398_sync_16384\" already exists\n2021-02-01 23:28:34.264 EST [8028] LOG: background worker 
\"logical\nreplication worker\" (PID 24337) exited with exit code 1\n\nAnd one more thing I see is that now we error out in PG_CATCH() in\nLogicalRepSyncTableStart() with the above error and as a result, the\ntablesync slot is not dropped. Hence causing the slot create to fail\nin the next restart.\nI think this can be avoided. We could either attempt a rollback only\non specific failures and drop slot prior to erroring out.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Feb 2021 16:03:51 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Another failure I see in my testing\n\nOn publisher create a big enough table:\npublisher:\npostgres=# CREATE TABLE tab_rep (a int primary key);CREATE TABLE\npostgres=# INSERT INTO tab_rep SELECT generate_series(1,1000000);\nINSERT 0 1000000\npostgres=# CREATE PUBLICATION tap_pub FOR ALL TABLES;\nCREATE PUBLICATION\n\nSubscriber:\npostgres=# CREATE TABLE tab_rep (a int primary key);\nCREATE TABLE\npostgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\ndbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\n\nCreate the subscription but do not enable it:\nThe below two commands on the subscriber should be issued quickly with\nno delay between them.\n\npostgres=# ALTER SUBSCRIPTION tap_sub enable;\nALTER SUBSCRIPTION\npostgres=# ALTER SUBSCRIPTION tap_sub disable;\nALTER SUBSCRIPTION\n\nThis leaves the below state for the pg_subscription rel:\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+----------\n 16395 | 16384 | f |\n(1 row)\n\nThe rel is in the SUBREL_STATE_FINISHEDCOPY state.\n\nMeanwhile on the publisher, looking at the slots created:\n\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database |\ntemporary | active | active_pid | x\nmin | catalog_xmin | restart_lsn | confirmed_flush_lsn | wal_status |\nsafe_wal_size\n---------------------+----------+-----------+--------+----------+-----------+--------+------------+--\n----+--------------+-------------+---------------------+------------+---------------\n tap_sub | pgoutput | logical | 13859 | postgres | f\n | f | |\n | 517 | 0/9303660 | 0/9303698 | reserved |\n pg_16395_sync_16384 | pgoutput | logical | 13859 | postgres | f\n | f | |\n | 517 | 0/9303660 | 0/9303698 | reserved |\n(2 rows)\n\n\nThere are two slots, the main slot as well as the tablesync slot, drop\nthe table, re-enable the subscription and then drop the subscription\n\nNow on the 
subscriber:\npostgres=# drop table tab_rep;\nDROP TABLE\npostgres=# ALTER SUBSCRIPTION tap_sub enable;\nALTER SUBSCRIPTION\npostgres=# drop subscription tap_sub ;\nNOTICE: dropped replication slot \"tap_sub\" on publisher\nDROP SUBSCRIPTION\n\nWe see the tablesync slot dangling in the publisher:\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database |\ntemporary | active | active_pid | x\nmin | catalog_xmin | restart_lsn | confirmed_flush_lsn | wal_status |\nsafe_wal_size\n---------------------+----------+-----------+--------+----------+-----------+--------+------------+--\n----+--------------+-------------+---------------------+------------+---------------\n pg_16395_sync_16384 | pgoutput | logical | 13859 | postgres | f\n | f | |\n | 517 | 0/9303660 | 0/9303698 | reserved |\n(1 row)\n\nThe dropping of the table, meant that after the tablesync is\nrestarted, the worker has no idea about the old slot created as its\nname uses the relid of the dropped table.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Feb 2021 17:05:06 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have updated the patch to display WARNING for each of the tablesync\n> slots during DropSubscription. As discussed, I have moved the drop\n> slot related code towards the end in AlterSubscription_refresh. Apart\n> from this, I have fixed one more issue in tablesync code where in\n> after catching the exception we were not clearing the transaction\n> state on the publisher, see changes in LogicalRepSyncTableStart.\n...\n\nI know that in another email [ac0202] Ajin has reported some problem\nhe found related to this new (LogicalRepSyncTableStart PG_CATCH) code\nfor some different use-case, but for my test scenario of a \"broken\nconnection during a table copy\" the code did appear to be working\nproperly.\n\nPSA detailed logs which show the test steps and output for this\n\"\"broken connection during a table copy\" scenario.\n\n----\n[ac0202] https://www.postgresql.org/message-id/CAFPTHDaZw5o%2BwMbv3aveOzuLyz_rqZebXAj59rDKTJbwXFPYgw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 2 Feb 2021 17:51:35 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 10:34 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have updated the patch to display WARNING for each of the tablesync\n> > slots during DropSubscription. As discussed, I have moved the drop\n> > slot related code towards the end in AlterSubscription_refresh. Apart\n> > from this, I have fixed one more issue in tablesync code where in\n> > after catching the exception we were not clearing the transaction\n> > state on the publisher, see changes in LogicalRepSyncTableStart. I\n> > have also fixed other comments raised by you. Additionally, I have\n> > removed the test because it was creating the same name slot as the\n> > tablesync worker and tablesync worker removed the same due to new\n> > logic in LogicalRepSyncStart. Earlier, it was not failing because of\n> > the bug in that code which I have fixed in the attached.\n> >\n>\n> I was testing this patch. I had a table on the subscriber which had a\n> row that would cause a PK constraint\n> violation during the table copy. This is resulting in the subscriber\n> trying to rollback the table copy and failing.\n>\n\nI am not getting this error. 
I have tried the below test:\nPublisher\n===========\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl1(somedata, text) VALUES (1, 2);\nCOMMIT;\n\nCREATE PUBLICATION mypublication FOR TABLE mytbl1;\n\nSubscriber\n=============\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl1(somedata, text) VALUES (1, 2);\nCOMMIT;\n\nCREATE SUBSCRIPTION mysub\n CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypublication;\n\nIt generates the PK violation the first time and then I removed the\nconflicting rows in the subscriber and it passed. See logs below.\n\n2021-02-02 13:51:34.316 IST [20796] LOG: logical replication table\nsynchronization worker for subscription \"mysub\", table \"mytbl1\" has\nstarted\n2021-02-02 13:52:43.625 IST [20796] ERROR: duplicate key value\nviolates unique constraint \"mytbl1_pkey\"\n2021-02-02 13:52:43.625 IST [20796] DETAIL: Key (id)=(1) already exists.\n2021-02-02 13:52:43.625 IST [20796] CONTEXT: COPY mytbl1, line 1\n2021-02-02 13:52:43.695 IST [27840] LOG: background worker \"logical\nreplication worker\" (PID 20796) exited with exit code 1\n2021-02-02 13:52:43.884 IST [6260] LOG: logical replication table\nsynchronization worker for subscription \"mysub\", table \"mytbl1\" has\nstarted\n2021-02-02 13:53:54.680 IST [6260] LOG: logical replication table\nsynchronization worker for subscription \"mysub\", table \"mytbl1\" has\nfinished\n\nAlso, a similar test exists in 004_sync.pl, is that also failing for\nyou? Can you please provide detailed steps that led to this failure?\n\n>\n> And one more thing I see is that now we error out in PG_CATCH() in\n> LogicalRepSyncTableStart() with the above error and as a result, the\n> tablesync slot is not dropped, hence causing the slot create to fail\n> in the next restart.\n> I think this can be avoided. We could either attempt a rollback only\n> on specific failures, or drop the slot prior to erroring out.\n>\n\nHmm, we have to roll back first before attempting any other operation\nbecause the transaction on the publisher is in an errored state.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Feb 2021 14:10:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\ntried a simple test where I do a DROP TABLE with very bad timing for\nthe tablesync worker. It seems that doing this can cause the sync\nworker's MyLogicalRepWorker->relid to become invalid.\n\nIn my test this caused a stack trace within some logging, but I\nimagine other bad things can happen if the tablesync worker can be\nexecuted with an invalid relid.\n\nPossibly this is an existing PG bug which has just never been seen\nbefore; The ereport which has failed here is not new code.\n\nPSA the log for the test steps and the stack trace details.\n\n----\n[ac0202] https://www.postgresql.org/message-id/CAFPTHDYzjaNfzsFHpER9idAPB8v5j%3DSUbWL0AKj5iVy0BKbTpg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 2 Feb 2021 21:00:54 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 2, 2021 at 10:34 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Mon, Feb 1, 2021 at 11:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I have updated the patch to display WARNING for each of the tablesync\n> > > slots during DropSubscription. As discussed, I have moved the drop\n> > > slot related code towards the end in AlterSubscription_refresh. Apart\n> > > from this, I have fixed one more issue in tablesync code where in\n> > > after catching the exception we were not clearing the transaction\n> > > state on the publisher, see changes in LogicalRepSyncTableStart. I\n> > > have also fixed other comments raised by you. Additionally, I have\n> > > removed the test because it was creating the same name slot as the\n> > > tablesync worker and tablesync worker removed the same due to new\n> > > logic in LogicalRepSyncStart. Earlier, it was not failing because of\n> > > the bug in that code which I have fixed in the attached.\n> > >\n> >\n> > I was testing this patch. I had a table on the subscriber which had a\n> > row that would cause a PK constraint\n> > violation during the table copy. This is resulting in the subscriber\n> > trying to rollback the table copy and failing.\n> >\n>\n> I am not getting this error. I have tried by below test:\n\nI am sorry, my above steps were not correct. I think the reason for\nthe failure I was seeing were some other steps I did prior to this. I\nwill recreate this and update you with the appropriate steps.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Feb 2021 21:03:52 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 11:35 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Another failure I see in my testing\n>\n\nThe problem here is that we are allowing the table to be dropped while\ntable synchronization is still in progress, and then we don't have any\nway to know the corresponding slot or origin. I think we can try to\ndrop the slot and origin as well but that is not a good idea because\nslots once dropped won't be rolled back. So, I have added a fix to\ndisallow the drop of the table when table synchronization is still in\nprogress. Apart from that, I have fixed comments raised by Peter as\ndiscussed above and made some additional changes in comments, code\n(code changes are cosmetic), and docs.\n\nLet me know whether the reported issue is fixed or not.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 2 Feb 2021 18:54:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\n> tried a simple test where I do a DROP TABLE with very bad timing for\n> the tablesync worker. It seems that doing this can cause the sync\n> worker's MyLogicalRepWorker->relid to become invalid.\n>\n\nI think this should be fixed by the latest patch because I have\ndisallowed dropping a table while its synchronization is in progress.\nCan you check once and let me know whether the issue still exists?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Feb 2021 18:56:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 12:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 2, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\n> > tried a simple test where I do a DROP TABLE with very bad timing for\n> > the tablesync worker. It seems that doing this can cause the sync\n> > worker's MyLogicalRepWorker->relid to become invalid.\n> >\n>\n> I think this should be fixed by latest patch because I have disallowed\n> drop of a table when its synchronization is in progress. You can check\n> once and let me know if the issue still exists?\n>\n\nFYI - I confirmed that the problem scenario that I reported yesterday\nis no longer possible because now the V25 patch is disallowing the\nDROP TABLE while the tablesync is still running.\n\nPSA my test logs showing it is now working as expected.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 3 Feb 2021 12:08:07 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 12:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> The problem here is that we are allowing to drop the table when table\n> synchronization is still in progress and then we don't have any way to\n> know the corresponding slot or origin. I think we can try to drop the\n> slot and origin as well but that is not a good idea because slots once\n> dropped won't be rolled back. So, I have added a fix to disallow the\n> drop of the table when table synchronization is still in progress.\n> Apart from that, I have fixed comments raised by Peter as discussed\n> above and made some additional changes in comments, code (code changes\n> are cosmetic), and docs.\n>\n> Let me know if the issue reported is fixed or not?\n\nYes, the issue is fixed; now the table drop results in an error.\n\npostgres=# drop table tab_rep ;\nERROR: could not drop relation mapping for subscription \"tap_sub\"\nDETAIL: Table synchronization for relation \"tab_rep\" is in progress\nand is in state \"f\".\nHINT: Use ALTER SUBSCRIPTION ... ENABLE to enable subscription if not\nalready enabled or use DROP SUBSCRIPTION ... to drop the subscription.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 3 Feb 2021 12:28:33 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 6:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 12:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Feb 2, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\n> > > tried a simple test where I do a DROP TABLE with very bad timing for\n> > > the tablesync worker. It seems that doing this can cause the sync\n> > > worker's MyLogicalRepWorker->relid to become invalid.\n> > >\n> >\n> > I think this should be fixed by latest patch because I have disallowed\n> > drop of a table when its synchronization is in progress. You can check\n> > once and let me know if the issue still exists?\n> >\n>\n> FYI - I confirmed that the problem scenario that I reported yesterday\n> is no longer possible because now the V25 patch is disallowing the\n> DROP TABLE while the tablesync is still running.\n>\n\nThanks for the confirmation. BTW, can you please check if we can\nreproduce that problem without this patch? If so, we might want to\napply this fix irrespective of this patch. If not, why not?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Feb 2021 08:04:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 6:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 12:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Feb 2, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\n> > > > tried a simple test where I do a DROP TABLE with very bad timing for\n> > > > the tablesync worker. It seems that doing this can cause the sync\n> > > > worker's MyLogicalRepWorker->relid to become invalid.\n> > > >\n> > >\n> > > I think this should be fixed by latest patch because I have disallowed\n> > > drop of a table when its synchronization is in progress. You can check\n> > > once and let me know if the issue still exists?\n> > >\n> >\n> > FYI - I confirmed that the problem scenario that I reported yesterday\n> > is no longer possible because now the V25 patch is disallowing the\n> > DROP TABLE while the tablesync is still running.\n> >\n>\n> Thanks for the confirmation. BTW, can you please check if we can\n> reproduce that problem without this patch? If so, we might want to\n> apply this fix irrespective of this patch. If not, why not?\n>\n\nYes, this was an existing postgres bug. It is independent of the patch.\n\nI can reproduce exactly the same stacktrace using the HEAD src pulled @ 3/Feb.\n\nPSA my test logs showing the details.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 3 Feb 2021 14:51:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 2, 2021 at 9:03 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\n> I am sorry, my above steps were not correct. I think the reason for\n> the failure I was seeing were some other steps I did prior to this. I\n> will recreate this and update you with the appropriate steps.\n\nThe correct steps are as follows:\n\nPublisher:\n\npostgres=# CREATE TABLE tab_rep (a int primary key);\nCREATE TABLE\npostgres=# INSERT INTO tab_rep SELECT generate_series(1,1000000);\nINSERT 0 1000000\npostgres=# CREATE PUBLICATION tap_pub FOR ALL TABLES;\nCREATE PUBLICATION\n\nSubscriber:\npostgres=# CREATE TABLE tab_rep (a int primary key);\nCREATE TABLE\npostgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\ndbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\nNOTICE: created replication slot \"tap_sub\" on publisher\nCREATE SUBSCRIPTION\npostgres=# ALTER SUBSCRIPTION tap_sub enable;\nALTER SUBSCRIPTION\n\nAllow the tablesync to complete and then drop the subscription, the\ntable remains full and restarting the subscription should fail with a\nconstraint violation during tablesync but it does not.\n\n\nSubscriber:\npostgres=# drop subscription tap_sub ;\nNOTICE: dropped replication slot \"tap_sub\" on publisher\nDROP SUBSCRIPTION\npostgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\ndbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\nNOTICE: created replication slot \"tap_sub\" on publisher\nCREATE SUBSCRIPTION\npostgres=# ALTER SUBSCRIPTION tap_sub enable;\nALTER SUBSCRIPTION\n\nThis takes the subscriber into an error loop but no mention of what\nthe error was:\n\n2021-02-02 05:01:34.698 EST [1549] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-02 05:01:34.739 EST [1549] ERROR: table copy could not\nrollback transaction on publisher\n2021-02-02 05:01:34.739 EST [1549] DETAIL: The error was: another\ncommand is already in 
progress\n2021-02-02 05:01:34.740 EST [8028] LOG: background worker \"logical\nreplication worker\" (PID 1549) exited with exit code 1\n2021-02-02 05:01:40.107 EST [1711] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n2021-02-02 05:01:40.121 EST [1711] ERROR: could not create\nreplication slot \"pg_16479_sync_16435\": ERROR: replication slot\n\"pg_16479_sync_16435\" already exists\n2021-02-02 05:01:40.121 EST [8028] LOG: background worker \"logical\nreplication worker\" (PID 1711) exited with exit code 1\n2021-02-02 05:01:45.140 EST [1891] LOG: logical replication table\nsynchronization worker for subscription \"tap_sub\", table \"tab_rep\" has\nstarted\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 3 Feb 2021 18:58:00 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 2:51 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 6:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Wed, Feb 3, 2021 at 12:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Feb 2, 2021 at 3:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > >\n> > > > > After seeing Ajin's test [ac0202] which did a DROP TABLE, I have also\n> > > > > tried a simple test where I do a DROP TABLE with very bad timing for\n> > > > > the tablesync worker. It seems that doing this can cause the sync\n> > > > > worker's MyLogicalRepWorker->relid to become invalid.\n> > > > >\n> > > >\n> > > > I think this should be fixed by latest patch because I have disallowed\n> > > > drop of a table when its synchronization is in progress. You can check\n> > > > once and let me know if the issue still exists?\n> > > >\n> > >\n> > > FYI - I confirmed that the problem scenario that I reported yesterday\n> > > is no longer possible because now the V25 patch is disallowing the\n> > > DROP TABLE while the tablesync is still running.\n> > >\n> >\n> > Thanks for the confirmation. BTW, can you please check if we can\n> > reproduce that problem without this patch? If so, we might want to\n> > apply this fix irrespective of this patch. If not, why not?\n> >\n>\n> Yes, this was an existing postgres bug. It is independent of the patch.\n>\n> I can reproduce exactly the same stacktrace using the HEAD src pulled @ 3/Feb.\n>\n> PSA my test logs showing the details.\n>\n\nSince this is an existing PG bug independent of this patch, I spawned\na new thread [ps0202] to deal with this problem.\n\n----\n[ps0202] https://www.postgresql.org/message-id/CAHut%2BPu7Z4a%3Domo%2BTvK5Gub2hxcJ7-3%2BBu1FO_%2B%2BfpFTW0oQfQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 3 Feb 2021 21:09:57 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 1:28 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Tue, Feb 2, 2021 at 9:03 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> > I am sorry, my above steps were not correct. I think the reason for\n> > the failure I was seeing were some other steps I did prior to this. I\n> > will recreate this and update you with the appropriate steps.\n>\n> The correct steps are as follows:\n>\n> Publisher:\n>\n> postgres=# CREATE TABLE tab_rep (a int primary key);\n> CREATE TABLE\n> postgres=# INSERT INTO tab_rep SELECT generate_series(1,1000000);\n> INSERT 0 1000000\n> postgres=# CREATE PUBLICATION tap_pub FOR ALL TABLES;\n> CREATE PUBLICATION\n>\n> Subscriber:\n> postgres=# CREATE TABLE tab_rep (a int primary key);\n> CREATE TABLE\n> postgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\n> dbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\n> NOTICE: created replication slot \"tap_sub\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=# ALTER SUBSCRIPTION tap_sub enable;\n> ALTER SUBSCRIPTION\n>\n> Allow the tablesync to complete and then drop the subscription, the\n> table remains full and restarting the subscription should fail with a\n> constraint violation during tablesync but it does not.\n>\n>\n> Subscriber:\n> postgres=# drop subscription tap_sub ;\n> NOTICE: dropped replication slot \"tap_sub\" on publisher\n> DROP SUBSCRIPTION\n> postgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\n> dbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\n> NOTICE: created replication slot \"tap_sub\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=# ALTER SUBSCRIPTION tap_sub enable;\n> ALTER SUBSCRIPTION\n>\n> This takes the subscriber into an error loop but no mention of what\n> the error was:\n>\n\nThanks for the report. The problem here was that the error occurred\nwhen we were trying to copy the large data. 
Now, before fetching the\nentire data, we issued a rollback, which led to this problem. One\nalternative here could be to first fetch the entire data when the\nerror occurs and then issue the subsequent commands. Instead, I have\nmodified the patch to perform 'drop_replication_slot' at the beginning\nif the relstate is datasync. Do let me know if you can think of a\nbetter way to fix this?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 3 Feb 2021 18:08:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 3, 2021 at 11:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Thanks for the report. The problem here was that the error occurred\n> when we were trying to copy the large data. Now, before fetching the\n> entire data we issued a rollback that led to this problem. I think the\n> alternative here could be to first fetch the entire data when the\n> error occurred then issue the following commands. Instead, I have\n> modified the patch to perform 'drop_replication_slot' in the beginning\n> if the relstate is datasync. Do let me know if you can think of a\n> better way to fix this?\n\nI have verified that the problem is not seen after this patch. I also\nagree with the approach taken for the fix.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 4 Feb 2021 15:24:51 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 4, 2021 at 9:55 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 11:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Thanks for the report. The problem here was that the error occurred\n> > when we were trying to copy the large data. Now, before fetching the\n> > entire data we issued a rollback that led to this problem. I think the\n> > alternative here could be to first fetch the entire data when the\n> > error occurred then issue the following commands. Instead, I have\n> > modified the patch to perform 'drop_replication_slot' in the beginning\n> > if the relstate is datasync. Do let me know if you can think of a\n> > better way to fix this?\n>\n> I have verified that the problem is not seen after this patch. I also\n> agree with the approach taken for the fix,\n>\n\nThanks. I have fixed one of the issues reported by me earlier [1]\nwherein the tablesync worker can repeatedly fail if after dropping the\nslot there is an error while updating the SYNCDONE state in the\ndatabase. I have moved the drop of the slot just before commit of the\ntransaction where we are marking the state as SYNCDONE. Additionally,\nI have removed unnecessary includes in tablesync.c, updated the docs\nfor Alter Subscription, and updated the comments at various places in\nthe patch. I have also updated the commit message this time.\n\nI am still not very happy with the way we handle concurrent drop\norigins but probably that would be addressed by the other patch Peter\nis working on [2].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JdWv84nMyCpTboBURjj70y3BfO1xdy8SYPRqNxtH7TEA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAHut%2BPsW6%2B7Ucb1sxjSNBaSYPGAVzQFbAEg4y1KsYQiGCnyGeQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 4 Feb 2021 15:02:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 4, 2021 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n...\n\n> Thanks. I have fixed one of the issues reported by me earlier [1]\n> wherein the tablesync worker can repeatedly fail if after dropping the\n> slot there is an error while updating the SYNCDONE state in the\n> database. I have moved the drop of the slot just before commit of the\n> transaction where we are marking the state as SYNCDONE. Additionally,\n> I have removed unnecessary includes in tablesync.c, updated the docs\n> for Alter Subscription, and updated the comments at various places in\n> the patch. I have also updated the commit message this time.\n>\n\nBelow are my feedback comments for V17 (nothing functional)\n\n~~\n\n1.\nV27 Commit message:\nFor the initial table data synchronization in logical replication, we use\na single transaction to copy the entire table and then synchronizes the\nposition in the stream with the main apply worker.\n\nTypo:\n\"synchronizes\" -> \"synchronize\"\n\n~~\n\n2.\n@@ -48,6 +48,23 @@ ALTER SUBSCRIPTION <replaceable\nclass=\"parameter\">name</replaceable> RENAME TO <\n (Currently, all subscription owners must be superusers, so the owner checks\n will be bypassed in practice. But this might change in the future.)\n </para>\n+\n+ <para>\n+ When refreshing a publication we remove the relations that are no longer\n+ part of the publication and we also remove the tablesync slots if there are\n+ any. It is necessary to remove tablesync slots so that the resources\n+ allocated for the subscription on the remote host are released. If due to\n+ network breakdown or some other error, we are not able to remove the slots,\n+ we give WARNING and the user needs to manually remove such slots later as\n+ otherwise, they will continue to reserve WAL and might eventually cause\n+ the disk to fill up. 
See also <xref\nlinkend=\"logical-replication-subscription-slot\"/>.\n+ </para>\n\nI think the content is good, but the 1st-person wording seemed strange.\ne.g.\n\"we are not able to remove the slots, we give WARNING and the user needs...\"\nMaybe it should be like:\n\"... PostgreSQL is unable to remove the slots, so a WARNING is\nreported. The user needs... \"\n\n~~\n\n3.\n@@ -566,107 +569,197 @@ AlterSubscription_refresh(Subscription *sub,\nbool copy_data)\n...\n+ * XXX If there is a network break down while dropping the\n\n\"network break down\" -> \"network breakdown\"\n\n~~\n\n4.\n@@ -872,7 +970,48 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)\n (errmsg(\"could not connect to the publisher: %s\", err)));\n...\n+ * XXX We could also instead try to drop the slot, last time we failed\n+ * but for that, we might need to clean up the copy state as it might\n+ * be in the middle of fetching the rows. Also, if there is a network\n+ * break down then it wouldn't have succeeded so trying it next time\n+ * seems like a better bet.\n\n\"network break down\" -> \"network breakdown\"\n\n~~\n\n5.\n@@ -269,26 +313,47 @@ invalidate_syncing_table_states(Datum arg, int\ncacheid, uint32 hashvalue)\n...\n+\n+ /*\n+ * Cleanup the tablesync slot.\n+ *\n+ * This has to be done after updating the state because otherwise if\n+ * there is an error while doing the database operations we won't be\n+ * able to rollback dropped slot.\n+ */\n+ ReplicationSlotNameForTablesync(MyLogicalRepWorker->subid,\n+ MyLogicalRepWorker->relid,\n+ syncslotname);\n+\n+ ReplicationSlotDropAtPubNode(wrconn, syncslotname, false /* missing_ok */);\n+\n\nShould this comment also describe why the missing_ok is false for this case?\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 5 Feb 2021 12:39:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 5, 2021 at 7:09 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> ...\n>\n> > Thanks. I have fixed one of the issues reported by me earlier [1]\n> > wherein the tablesync worker can repeatedly fail if after dropping the\n> > slot there is an error while updating the SYNCDONE state in the\n> > database. I have moved the drop of the slot just before commit of the\n> > transaction where we are marking the state as SYNCDONE. Additionally,\n> > I have removed unnecessary includes in tablesync.c, updated the docs\n> > for Alter Subscription, and updated the comments at various places in\n> > the patch. I have also updated the commit message this time.\n> >\n>\n> Below are my feedback comments for V17 (nothing functional)\n>\n> ~~\n>\n> 1.\n> V27 Commit message:\n> For the initial table data synchronization in logical replication, we use\n> a single transaction to copy the entire table and then synchronizes the\n> position in the stream with the main apply worker.\n>\n> Typo:\n> \"synchronizes\" -> \"synchronize\"\n>\n\nFixed and added a note about Alter Sub .. Refresh .. command can't be\nexecuted in the transaction block.\n\n> ~~\n>\n> 2.\n> @@ -48,6 +48,23 @@ ALTER SUBSCRIPTION <replaceable\n> class=\"parameter\">name</replaceable> RENAME TO <\n> (Currently, all subscription owners must be superusers, so the owner checks\n> will be bypassed in practice. But this might change in the future.)\n> </para>\n> +\n> + <para>\n> + When refreshing a publication we remove the relations that are no longer\n> + part of the publication and we also remove the tablesync slots if there are\n> + any. It is necessary to remove tablesync slots so that the resources\n> + allocated for the subscription on the remote host are released. 
If due to\n> + network breakdown or some other error, we are not able to remove the slots,\n> + we give WARNING and the user needs to manually remove such slots later as\n> + otherwise, they will continue to reserve WAL and might eventually cause\n> + the disk to fill up. See also <xref\n> linkend=\"logical-replication-subscription-slot\"/>.\n> + </para>\n>\n> I think the content is good, but the 1st-person wording seemed strange.\n> e.g.\n> \"we are not able to remove the slots, we give WARNING and the user needs...\"\n> Maybe it should be like:\n> \"... PostgreSQL is unable to remove the slots, so a WARNING is\n> reported. The user needs... \"\n>\n\nChanged as per suggestion with a minor tweak.\n\n> ~~\n>\n> 3.\n> @@ -566,107 +569,197 @@ AlterSubscription_refresh(Subscription *sub,\n> bool copy_data)\n> ...\n> + * XXX If there is a network break down while dropping the\n>\n> \"network break down\" -> \"network breakdown\"\n>\n> ~~\n>\n> 4.\n> @@ -872,7 +970,48 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)\n> (errmsg(\"could not connect to the publisher: %s\", err)));\n> ...\n> + * XXX We could also instead try to drop the slot, last time we failed\n> + * but for that, we might need to clean up the copy state as it might\n> + * be in the middle of fetching the rows. 
Also, if there is a network\n> + * break down then it wouldn't have succeeded so trying it next time\n> + * seems like a better bet.\n>\n> \"network break down\" -> \"network breakdown\"\n>\n\nChanged as per suggestion.\n\n> ~~\n>\n> 5.\n> @@ -269,26 +313,47 @@ invalidate_syncing_table_states(Datum arg, int\n> cacheid, uint32 hashvalue)\n> ...\n> +\n> + /*\n> + * Cleanup the tablesync slot.\n> + *\n> + * This has to be done after updating the state because otherwise if\n> + * there is an error while doing the database operations we won't be\n> + * able to rollback dropped slot.\n> + */\n> + ReplicationSlotNameForTablesync(MyLogicalRepWorker->subid,\n> + MyLogicalRepWorker->relid,\n> + syncslotname);\n> +\n> + ReplicationSlotDropAtPubNode(wrconn, syncslotname, false /* missing_ok */);\n> +\n>\n> Should this comment also describe why the missing_ok is false for this case?\n>\n\nYeah that makes sense, so added a comment.\n\nAdditionally, I have changed the errorcode in RemoveSubscriptionRel,\nmoved the setup of origin before copy_table in\nLogicalRepSyncTableStart to avoid doing copy again due to an error in\nsetting up origin. I have made a few comment changes as well.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 5 Feb 2021 10:52:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hello\r\n\r\n\r\n\r\nOn Friday, February 5, 2021 2:23 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Fri, Feb 5, 2021 at 7:09 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > On Thu, Feb 4, 2021 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > ...\r\n> >\r\n> > > Thanks. I have fixed one of the issues reported by me earlier [1]\r\n> > > wherein the tablesync worker can repeatedly fail if after dropping\r\n> > > the slot there is an error while updating the SYNCDONE state in the\r\n> > > database. I have moved the drop of the slot just before commit of\r\n> > > the transaction where we are marking the state as SYNCDONE.\r\n> > > Additionally, I have removed unnecessary includes in tablesync.c,\r\n> > > updated the docs for Alter Subscription, and updated the comments at\r\n> > > various places in the patch. I have also updated the commit message this\r\n> time.\r\n> > >\r\n> >\r\n> > Below are my feedback comments for V17 (nothing functional)\r\n> >\r\n> > ~~\r\n> >\r\n> > 1.\r\n> > V27 Commit message:\r\n> > For the initial table data synchronization in logical replication, we\r\n> > use a single transaction to copy the entire table and then\r\n> > synchronizes the position in the stream with the main apply worker.\r\n> >\r\n> > Typo:\r\n> > \"synchronizes\" -> \"synchronize\"\r\n> >\r\n> \r\n> Fixed and added a note about Alter Sub .. Refresh .. command can't be\r\n> executed in the transaction block.\r\nThank you for the updates.\r\n\r\nWe need to add some tests to prove that the new checks of AlterSubscription() work. \r\nI chose TAP tests, as we need to set connect = true for the subscription.\r\nI hope they can contribute to the development; please utilize them.\r\nI checked my patch against v28, and it works as we expect.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 5 Feb 2021 07:06:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 5, 2021 at 12:36 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> We need to add some tests to prove the new checks of AlterSubscription() work.\n> I chose TAP tests as we need to set connect = true for the subscription.\n> When it can contribute to the development, please utilize this.\n> I used v28 to check my patch and works as we expect.\n>\n\nThanks for writing the tests, but I don't understand why you need to\nset connect = true for this test. I have tried the below with 'connect\n= false' and it seems to be working:\npostgres=# CREATE SUBSCRIPTION mysub\npostgres-# CONNECTION 'host=localhost port=5432 dbname=postgres'\npostgres-# PUBLICATION mypublication WITH (connect = false);\nWARNING: tables were not subscribed, you will have to run ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables\nCREATE SUBSCRIPTION\npostgres=# Begin;\nBEGIN\npostgres=*# Alter Subscription mysub Refresh Publication;\nERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions\n\nSo, if possible, let's write this test in src/test/regress/sql/subscription.sql.\n\nI have another idea for a test case: what if we write a test such that\nit fails with a PK violation on copy and then drops the subscription?\nThen check that there isn't any dangling slot left on the publisher.\nThis is similar to a test in subscription/t/004_sync.pl; we can use\nsome of that framework but have a separate test for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 Feb 2021 14:21:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "I did some basic cross-version testing: publisher on PG13 with\nsubscriber on PG14, and publisher on PG14 with subscriber on PG13.\nI did some basic operations on subscriptions (CREATE, ALTER, and STOP),\nand it all seemed to work fine, with no errors.\n\nregards,\nAjin Cherian\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 5 Feb 2021 21:01:04 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi,\n\nWe had a bit high-level discussion about this patches with Amit \noff-list, so I decided to also take a look at the actual code.\n\nMy main concern originally was the potential for left-over slots on \npublisher, but I think the state now is relatively okay, with couple of \ncorner cases that are documented and don't seem much worse than the main \nslot.\n\nI wonder if we should mention the max_slot_wal_keep_size GUC in the \ntable sync docs though.\n\nAnother thing that might need documentation is that the the visibility \nof changes done by table sync is not anymore isolated in that table \ncontents will show intermediate progress to other backends, rather than \nswitching from nothing to state consistent with rest of replication.\n\n\nSome minor comments about code:\n\n> +\t\telse if (res->status == WALRCV_ERROR && missing_ok)\n> +\t\t{\n> +\t\t\t/* WARNING. Error, but missing_ok = true. */\n> +\t\t\tereport(WARNING,\n\nI wonder if we need to add error code to the WalRcvExecResult and check \nfor the appropriate ones here. Because this can for example return error \nbecause of timeout, not because slot is missing. Not sure if it matters \nfor current callers though (but then maybe don't call the param \nmissign_ok?).\n\n\n> +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n> +{\n> +\tif (syncslotname)\n> +\t\tsprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n> +\telse\n> +\t\tsyncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> +\n> +\treturn syncslotname;\n> +}\n\nGiven that we are now explicitly dropping slots, what happens here if we \nhave 2 different downstreams that happen to get same suboid and reloid, \nwill one of the drop the slot of the other one? 
Previously, with the \ncleanup being left to the temp slot, we'd at most have gotten an error when creating \nit, but with the new logic in LogicalRepSyncTableStart it feels like we \ncould get into a situation where 2 downstreams are fighting over the slot, no?\n\n\n-- \nPetr\n\n\n",
"msg_date": "Fri, 5 Feb 2021 16:10:36 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> > +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n> > +{\n> > + if (syncslotname)\n> > + sprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n> > + else\n> > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> > +\n> > + return syncslotname;\n> > +}\n>\n> Given that we are now explicitly dropping slots, what happens here if we\n> have 2 different downstreams that happen to get same suboid and reloid,\n> will one of the drop the slot of the other one? Previously with the\n> cleanup being left to temp slot we'd at maximum got error when creating\n> it but with the new logic in LogicalRepSyncTableStart it feels like we\n> could get into situation where 2 downstreams are fighting over slot no?\n>\n\nThe PG docs [1] says \"there is only one copy of pg_subscription per\ncluster, not one per database\". IIUC that means it is not possible for\n2 different subscriptions to have the same suboid. And if the suboid\nis globally unique then syncslotname name is also unique. Is that\nunderstanding not correct?\n\n-----\n[1] https://www.postgresql.org/docs/devel/catalog-pg-subscription.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 6 Feb 2021 11:51:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Feb 6, 2021 at 6:22 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n> >\n> > > +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n> > > +{\n> > > + if (syncslotname)\n> > > + sprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n> > > + else\n> > > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> > > +\n> > > + return syncslotname;\n> > > +}\n> >\n> > Given that we are now explicitly dropping slots, what happens here if we\n> > have 2 different downstreams that happen to get same suboid and reloid,\n> > will one of the drop the slot of the other one? Previously with the\n> > cleanup being left to temp slot we'd at maximum got error when creating\n> > it but with the new logic in LogicalRepSyncTableStart it feels like we\n> > could get into situation where 2 downstreams are fighting over slot no?\n> >\n\nI think so. See, if the alternative suggested below works or if you\nhave any other suggestions for the same?\n\n>\n> The PG docs [1] says \"there is only one copy of pg_subscription per\n> cluster, not one per database\". IIUC that means it is not possible for\n> 2 different subscriptions to have the same suboid.\n>\n\nI think he is talking about two different clusters having separate\nsubscriptions but point to the same publisher. In different clusters,\nwe can get the same subid/relid. I think we need a cluster-wide unique\nidentifier to distinguish among different subscribers. How about using\nthe system_identifier stored in the control file (we can use\nGetSystemIdentifier to retrieve it). I think one concern could be\nthat adding that to slot name could exceed the max length of slot\n(NAMEDATALEN -1) but I don't think that is the case here\n(pg_%u_sync_%u_UINT64_FORMAT (3 + 10 + 6 + 10 + 20 + '\\0')). 
Note that the last\npart is the system_identifier in this scheme.\n\nDo you think that works, or let me know if you have any\nbetter idea. Petr, is there a reason why such an identifier was not\nconsidered originally? Is there any risk in it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 6 Feb 2021 10:37:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hi\r\n\r\n\r\nOn Friday, February 5, 2021 5:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Feb 5, 2021 at 12:36 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > We need to add some tests to prove the new checks of AlterSubscription()\r\n> work.\r\n> > I chose TAP tests as we need to set connect = true for the subscription.\r\n> > When it can contribute to the development, please utilize this.\r\n> > I used v28 to check my patch and works as we expect.\r\n> >\r\n> \r\n> Thanks for writing the tests but I don't understand why you need to set\r\n> connect = true for this test? I have tried below '... with connect = false' and it\r\n> seems to be working:\r\n> postgres=# CREATE SUBSCRIPTION mysub\r\n> postgres-# CONNECTION 'host=localhost port=5432\r\n> dbname=postgres'\r\n> postgres-# PUBLICATION mypublication WITH (connect = false);\r\n> WARNING: tables were not subscribed, you will have to run ALTER\r\n> SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables CREATE\r\n> SUBSCRIPTION postgres=# Begin; BEGIN postgres=*# Alter Subscription\r\n> mysub Refresh Publication;\r\n> ERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled\r\n> subscriptions\r\n> \r\n> So, if possible lets write this test in src/test/regress/sql/subscription.sql.\r\nOK. I changed the place to write the tests for those.\r\n\r\n \r\n> I have another idea for a test case: What if we write a test such that it fails PK\r\n> violation on copy and then drop the subscription. Then check there shouldn't\r\n> be any dangling slot on the publisher? This is similar to a test in\r\n> subscription/t/004_sync.pl, we can use some of that framework but have a\r\n> separate test for this.\r\nI've added this PK violation test to the attached tests.\r\nThe patch works with v28 and made no failure during regression tests.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sat, 6 Feb 2021 07:30:40 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "\nOn 06/02/2021 06:07, Amit Kapila wrote:\n> On Sat, Feb 6, 2021 at 6:22 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n>> <petr.jelinek@enterprisedb.com> wrote:\n>>>> +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n>>>> +{\n>>>> + if (syncslotname)\n>>>> + sprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n>>>> + else\n>>>> + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n>>>> +\n>>>> + return syncslotname;\n>>>> +}\n>>> Given that we are now explicitly dropping slots, what happens here if we\n>>> have 2 different downstreams that happen to get same suboid and reloid,\n>>> will one of the drop the slot of the other one? Previously with the\n>>> cleanup being left to temp slot we'd at maximum got error when creating\n>>> it but with the new logic in LogicalRepSyncTableStart it feels like we\n>>> could get into situation where 2 downstreams are fighting over slot no?\n>>>\n> I think so. See, if the alternative suggested below works or if you\n> have any other suggestions for the same?\n>\n>> The PG docs [1] says \"there is only one copy of pg_subscription per\n>> cluster, not one per database\". IIUC that means it is not possible for\n>> 2 different subscriptions to have the same suboid.\n>>\n> I think he is talking about two different clusters having separate\n> subscriptions but point to the same publisher. In different clusters,\n> we can get the same subid/relid. I think we need a cluster-wide unique\n> identifier to distinguish among different subscribers. How about using\n> the system_identifier stored in the control file (we can use\n> GetSystemIdentifier to retrieve it). I think one concern could be\n> that adding that to slot name could exceed the max length of slot\n> (NAMEDATALEN -1) but I don't think that is the case here\n> (pg_%u_sync_%u_UINT64_FORMAT (3 + 10 + 6 + 10 + 20 + '\\0')). 
Note last\n> is system_identifier in this scheme.\n\n\nYep, that's what I mean, and system_identifier seems like a good choice to me.\n\n\n> Do you guys think that works or let me know if you have any other\n> better idea? Petr, is there a reason why such an identifier is not\n> considered originally, is there any risk in it?\n\n\nOriginally it was likely not considered because it's all based on \npglogical/BDR work, where ids are hashes of stuff that's unique across a \ngroup of instances, not counter-based like Oids in PostgreSQL, and I \nsimply didn't realize it could be a problem until reading this patch :)\n\n\n-- \nPetr Jelinek\n\n\n\n",
"msg_date": "Sat, 6 Feb 2021 10:41:17 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Some minor comments about code:\n>\n> > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > + {\n> > + /* WARNING. Error, but missing_ok = true. */\n> > + ereport(WARNING,\n>\n> I wonder if we need to add error code to the WalRcvExecResult and check\n> for the appropriate ones here. Because this can for example return error\n> because of timeout, not because slot is missing. Not sure if it matters\n> for current callers though (but then maybe don't call the param\n> missign_ok?).\n\nYou are right. The way we are using this function has evolved beyond\nthe original intention.\nProbably renaming the param to something like \"error_ok\" would be more\nappropriate now.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sun, 7 Feb 2021 14:38:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sun, Feb 7, 2021 at 2:38 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > Some minor comments about code:\n> >\n> > > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > > + {\n> > > + /* WARNING. Error, but missing_ok = true. */\n> > > + ereport(WARNING,\n> >\n> > I wonder if we need to add error code to the WalRcvExecResult and check\n> > for the appropriate ones here. Because this can for example return error\n> > because of timeout, not because slot is missing. Not sure if it matters\n> > for current callers though (but then maybe don't call the param\n> > missign_ok?).\n>\n> You are right. The way we are using this function has evolved beyond\n> the original intention.\n> Probably renaming the param to something like \"error_ok\" would be more\n> appropriate now.\n>\n\nPSA a patch (apply on top of V28) to change the misleading param name.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 8 Feb 2021 11:42:29 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Sat, Feb 6, 2021 at 6:30 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi\n>\n>\n> On Friday, February 5, 2021 5:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Feb 5, 2021 at 12:36 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > We need to add some tests to prove the new checks of AlterSubscription()\n> > work.\n> > > I chose TAP tests as we need to set connect = true for the subscription.\n> > > When it can contribute to the development, please utilize this.\n> > > I used v28 to check my patch and works as we expect.\n> > >\n> >\n> > Thanks for writing the tests but I don't understand why you need to set\n> > connect = true for this test? I have tried below '... with connect = false' and it\n> > seems to be working:\n> > postgres=# CREATE SUBSCRIPTION mysub\n> > postgres-# CONNECTION 'host=localhost port=5432\n> > dbname=postgres'\n> > postgres-# PUBLICATION mypublication WITH (connect = false);\n> > WARNING: tables were not subscribed, you will have to run ALTER\n> > SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables CREATE\n> > SUBSCRIPTION postgres=# Begin; BEGIN postgres=*# Alter Subscription\n> > mysub Refresh Publication;\n> > ERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled\n> > subscriptions\n> >\n> > So, if possible lets write this test in src/test/regress/sql/subscription.sql.\n> OK. I changed the place to write the tests for those.\n>\n>\n> > I have another idea for a test case: What if we write a test such that it fails PK\n> > violation on copy and then drop the subscription. Then check there shouldn't\n> > be any dangling slot on the publisher? 
This is similar to a test in\n> > subscription/t/004_sync.pl, we can use some of that framework but have a\n> > separate test for this.\n> I've added this PK violation test to the attached tests.\n> The patch works with v28 and made no failure during regression tests.\n>\n\nI checked this patch. It applied cleanly on top of V28, and all tests passed OK.\n\nHere are two feedback comments.\n\n1. For the regression test there are 2 x SQL and 1 x function test. I\nthought that to cover all the combinations there should be another function\ntest. e.g.\nTests ALTER … REFRESH\nTests ALTER … (refresh = true)\nTests ALTER … (refresh = true) in a function\nTests ALTER … REFRESH in a function <== this combination is not being\ntested??\n\n2. For the 004 test case I know the test needs some PK constraint violation\n# Check if DROP SUBSCRIPTION cleans up slots on the publisher side\n# when the subscriber is stuck on data copy for constraint\n\nBut it is not clear to me what was the exact cause of that PK\nviolation. I think you must be relying on data that is leftover from\nsome previous test case but I am not sure which one. Can you make the\ncomment more detailed to say *how* the PK violation is happening - e.g.\nsomething to say which rows, in which table, and inserted by whom?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 8 Feb 2021 13:35:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 8, 2021 at 8:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Feb 6, 2021 at 6:30 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > > I have another idea for a test case: What if we write a test such that it fails PK\n> > > violation on copy and then drop the subscription. Then check there shouldn't\n> > > be any dangling slot on the publisher? This is similar to a test in\n> > > subscription/t/004_sync.pl, we can use some of that framework but have a\n> > > separate test for this.\n> > I've added this PK violation test to the attached tests.\n> > The patch works with v28 and made no failure during regression tests.\n> >\n>\n> I checked this patch. It applied cleanly on top of V28, and all tests passed OK.\n>\n> Here are two feedback comments.\n>\n> 1. For the regression test there is 2 x SQL and 1 x function test. I\n> thought to cover all the combinations there should be another function\n> test. e.g.\n> Tests ALTER … REFRESH\n> Tests ALTER …. (refresh = true)\n> Tests ALTER … (refresh = true) in a function\n> Tests ALTER … REFRESH in a function <== this combination is not being\n> testing ??\n>\n\nI am not sure whether there is much value in adding more to this set\nof negative test cases unless it really covers a different code path\nwhich I think won't happen if we add more tests here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 8 Feb 2021 09:10:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Hello\r\n\r\n\r\nOn Mon, Feb 8, 2021 12:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Feb 8, 2021 at 8:06 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Sat, Feb 6, 2021 at 6:30 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > > I have another idea for a test case: What if we write a test such\r\n> > > > that it fails PK violation on copy and then drop the subscription.\r\n> > > > Then check there shouldn't be any dangling slot on the publisher?\r\n> > > > This is similar to a test in subscription/t/004_sync.pl, we can\r\n> > > > use some of that framework but have a separate test for this.\r\n> > > I've added this PK violation test to the attached tests.\r\n> > > The patch works with v28 and made no failure during regression tests.\r\n> > >\r\n> >\r\n> > I checked this patch. It applied cleanly on top of V28, and all tests passed\r\n> OK.\r\n> >\r\n> > Here are two feedback comments.\r\n> >\r\n> > 1. For the regression test there is 2 x SQL and 1 x function test. I\r\n> > thought to cover all the combinations there should be another function\r\n> > test. e.g.\r\n> > Tests ALTER … REFRESH\r\n> > Tests ALTER …. (refresh = true)\r\n> > Tests ALTER … (refresh = true) in a function Tests ALTER … REFRESH in\r\n> > a function <== this combination is not being testing ??\r\n> >\r\n> \r\n> I am not sure whether there is much value in adding more to this set of\r\n> negative test cases unless it really covers a different code path which I think\r\n> won't happen if we add more tests here.\r\nYeah, I agree. Accordingly, I didn't fix that part.\r\n\r\n\r\nOn Mon, Feb 8, 2021 11:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> 2. 
For the 004 test case I know the test is needing some PK constraint\r\n> violation # Check if DROP SUBSCRIPTION cleans up slots on the publisher\r\n> side # when the subscriber is stuck on data copy for constraint\r\n> \r\n> But it is not clear to me what was the exact cause of that PK violation. I think\r\n> you must be relying on data that is leftover from some previous test case but\r\n> I am not sure which one. Can you make the comment more detailed to say\r\n> *how* the PK violation is happening - e.g something to say which rows, in\r\n> which table, and inserted by who?\r\nI added some comments to clarify how the PK violation happens.\r\nPlease have a look.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 8 Feb 2021 04:43:59 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Monday, February 8, 2021 1:44 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\r\n> On Mon, Feb 8, 2021 12:40 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Mon, Feb 8, 2021 at 8:06 AM Peter Smith <smithpb2250@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > On Sat, Feb 6, 2021 at 6:30 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > > I have another idea for a test case: What if we write a test\r\n> > > > > such that it fails PK violation on copy and then drop the subscription.\r\n> > > > > Then check there shouldn't be any dangling slot on the publisher?\r\n> > > > > This is similar to a test in subscription/t/004_sync.pl, we can\r\n> > > > > use some of that framework but have a separate test for this.\r\n> > > > I've added this PK violation test to the attached tests.\r\n> > > > The patch works with v28 and made no failure during regression tests.\r\n> > > >\r\n> > >\r\n> > > I checked this patch. It applied cleanly on top of V28, and all\r\n> > > tests passed\r\n> > OK.\r\n> > >\r\n> > > Here are two feedback comments.\r\n> > >\r\n> > > 1. For the regression test there is 2 x SQL and 1 x function test. I\r\n> > > thought to cover all the combinations there should be another\r\n> > > function test. e.g.\r\n> > > Tests ALTER … REFRESH\r\n> > > Tests ALTER …. (refresh = true)\r\n> > > Tests ALTER … (refresh = true) in a function Tests ALTER … REFRESH\r\n> > > in a function <== this combination is not being testing ??\r\n> > >\r\n> >\r\n> > I am not sure whether there is much value in adding more to this set\r\n> > of negative test cases unless it really covers a different code path\r\n> > which I think won't happen if we add more tests here.\r\n> Yeah, I agree. Accordingly, I didn't fix that part.\r\n> \r\n> \r\n> On Mon, Feb 8, 2021 11:36 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > 2. 
For the 004 test case I know the test is needing some PK constraint\r\n> > violation # Check if DROP SUBSCRIPTION cleans up slots on the\r\n> > publisher side # when the subscriber is stuck on data copy for\r\n> > constraint\r\n> >\r\n> > But it is not clear to me what was the exact cause of that PK\r\n> > violation. I think you must be relying on data that is leftover from\r\n> > some previous test case but I am not sure which one. Can you make the\r\n> > comment more detailed to say\r\n> > *how* the PK violation is happening - e.g something to say which rows,\r\n> > in which table, and inserted by who?\r\n> I added some comments to clarify how the PK violation happens.\r\n> Please have a look.\r\nSorry, I had one typo in the tests of subscription.sql in v2.\r\nI used 'foo' for the first test of \"ALTER SUBSCRIPTION mytest SET PUBLICATION foo WITH (refresh = true) in v02\",\r\nbut I should have used 'mypub' to make this test clearly independent from other previous tests.\r\nAttached is the fixed version.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 8 Feb 2021 06:52:18 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 5, 2021 at 8:40 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> We had a bit high-level discussion about this patches with Amit\n> off-list, so I decided to also take a look at the actual code.\n>\n\nThanks for the discussion and a follow-up review.\n\n> My main concern originally was the potential for left-over slots on\n> publisher, but I think the state now is relatively okay, with couple of\n> corner cases that are documented and don't seem much worse than the main\n> slot.\n>\n> I wonder if we should mention the max_slot_wal_keep_size GUC in the\n> table sync docs though.\n>\n\nI have added a reference to this in Alter Subscription where we\nmentioned the risk of leftover slots. Let me know if you have\nsomething else in mind.\n\n> Another thing that might need documentation is that the the visibility\n> of changes done by table sync is not anymore isolated in that table\n> contents will show intermediate progress to other backends, rather than\n> switching from nothing to state consistent with rest of replication.\n>\n\nAgreed and updated the docs accordingly.\n\n>\n> Some minor comments about code:\n>\n> > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > + {\n> > + /* WARNING. Error, but missing_ok = true. */\n> > + ereport(WARNING,\n>\n> I wonder if we need to add error code to the WalRcvExecResult and check\n> for the appropriate ones here. Because this can for example return error\n> because of timeout, not because slot is missing.\n>\n\nI think there are both pros and cons of distinguishing the error\n(\"slot does not exist\") from others. The benefit is that if there is a network\nglitch then the user can probably retry the commands Alter/Drop and it\nwill be successful next time. OTOH, say the network is broken for a\nlong time and the user wants to proceed but there won't be any way to\nproceed for Alter Subscription ... Refresh or Drop Command. 
So by\ngiving a WARNING, at least we can provide a way to proceed, and then they\ncan drop such slots later. We have mentioned this in docs as well. I\nthink we can go either way here, let me know which you think is the\nbetter way.\n\n> Not sure if it matters\n> for current callers though (but then maybe don't call the param\n> missign_ok?).\n>\n\nSure, if we decide not to change the behavior as suggested by you, then\nthis makes sense.\n\n>\n> > +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n> > +{\n> > + if (syncslotname)\n> > + sprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n> > + else\n> > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> > +\n> > + return syncslotname;\n> > +}\n>\n> Given that we are now explicitly dropping slots, what happens here if we\n> have 2 different downstreams that happen to get same suboid and reloid,\n> will one of the drop the slot of the other one? Previously with the\n> cleanup being left to temp slot we'd at maximum got error when creating\n> it but with the new logic in LogicalRepSyncTableStart it feels like we\n> could get into situation where 2 downstreams are fighting over slot no?\n>\n\nAs discussed, added system_identifier to distinguish subscriptions\nbetween different clusters.\n\nApart from fixing the above comment, I have integrated it with the new\nreplorigin_drop_by_name() API being discussed in the thread [1] and\nposted that patch just for ease. I have also integrated Osumi-San's\ntest case patch with minor modifications.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L7mLhY%3DwyCB0qsEGUpfzWfncDSS9_0a4Co%2BN0GUyNGNQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 8 Feb 2021 16:29:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 8, 2021 at 12:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, February 8, 2021 1:44 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>\n> > On Mon, Feb 8, 2021 11:36 AM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > > 2. For the 004 test case I know the test is needing some PK constraint\n> > > violation # Check if DROP SUBSCRIPTION cleans up slots on the\n> > > publisher side # when the subscriber is stuck on data copy for\n> > > constraint\n> > >\n> > > But it is not clear to me what was the exact cause of that PK\n> > > violation. I think you must be relying on data that is leftover from\n> > > some previous test case but I am not sure which one. Can you make the\n> > > comment more detailed to say\n> > > *how* the PK violation is happening - e.g something to say which rows,\n> > > in which table, and inserted by who?\n> > I added some comments to clarify how the PK violation happens.\n> > Please have a look.\n> Sorry, I had a one typo in the tests of subscription.sql in v2.\n> I used 'foo' for the first test of \"ALTER SUBSCRIPTION mytest SET PUBLICATION foo WITH (refresh = true) in v02\",\n> but I should have used 'mypub' to make this test clearly independent from other previous tests.\n> Attached the fixed version.\n>\n\nThanks. I have integrated this into the main patch with minor\nmodifications in the comments. The main change I have done is to\nremove the test that was testing that there are two slots remaining\nafter the initial sync failure. This is because on restart of\ntablesync worker we again try to drop the slot so we can't guarantee\nthat the tablesync slot would be remaining. I think this is a timing\nissue so it might not have occurred on your machine but I could\nreproduce that by repeated runs of the tests provided by you.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 8 Feb 2021 16:33:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 8, 2021 at 11:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, Feb 7, 2021 at 2:38 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n> > <petr.jelinek@enterprisedb.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Some minor comments about code:\n> > >\n> > > > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > > > + {\n> > > > + /* WARNING. Error, but missing_ok = true. */\n> > > > + ereport(WARNING,\n> > >\n> > > I wonder if we need to add error code to the WalRcvExecResult and check\n> > > for the appropriate ones here. Because this can for example return error\n> > > because of timeout, not because slot is missing. Not sure if it matters\n> > > for current callers though (but then maybe don't call the param\n> > > missign_ok?).\n> >\n> > You are right. The way we are using this function has evolved beyond\n> > the original intention.\n> > Probably renaming the param to something like \"error_ok\" would be more\n> > appropriate now.\n> >\n>\n> PSA a patch (apply on top of V28) to change the misleading param name.\n>\n\nPSA an alternative patch. This one adds a new member to\nWalRcvExecResult and so is able to detect the \"slot does not exist\"\nerror. This patch also applies on top of V28, if you want it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 9 Feb 2021 10:38:45 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Mon, Feb 8, 2021 8:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Feb 8, 2021 at 12:22 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > On Monday, February 8, 2021 1:44 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com>\r\n> > > On Mon, Feb 8, 2021 11:36 AM Peter Smith <smithpb2250@gmail.com>\r\n> > > wrote:\r\n> > > > 2. For the 004 test case I know the test is needing some PK\r\n> > > > constraint violation # Check if DROP SUBSCRIPTION cleans up slots\r\n> > > > on the publisher side # when the subscriber is stuck on data copy\r\n> > > > for constraint\r\n> > > >\r\n> > > > But it is not clear to me what was the exact cause of that PK\r\n> > > > violation. I think you must be relying on data that is leftover\r\n> > > > from some previous test case but I am not sure which one. Can you\r\n> > > > make the comment more detailed to say\r\n> > > > *how* the PK violation is happening - e.g something to say which\r\n> > > > rows, in which table, and inserted by who?\r\n> > > I added some comments to clarify how the PK violation happens.\r\n> > > Please have a look.\r\n> > Sorry, I had a one typo in the tests of subscription.sql in v2.\r\n> > I used 'foo' for the first test of \"ALTER SUBSCRIPTION mytest SET\r\n> > PUBLICATION foo WITH (refresh = true) in v02\", but I should have used\r\n> 'mypub' to make this test clearly independent from other previous tests.\r\n> > Attached the fixed version.\r\n> >\r\n> \r\n> Thanks. I have integrated this into the main patch with minor modifications in\r\n> the comments. The main change I have done is to remove the test that was\r\n> testing that there are two slots remaining after the initial sync failure. This is\r\n> because on restart of tablesync worker we again try to drop the slot so we\r\n> can't guarantee that the tablesync slot would be remaining. 
I think this is a\r\n> timing issue so it might not have occurred on your machine but I could\r\n> reproduce that by repeated runs of the tests provided by you.\r\nOK. I understand. Thank you so much for modifying it\r\nand integrating it into the main patch.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n",
"msg_date": "Tue, 9 Feb 2021 01:37:17 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Here are my feedback comments for the V29 patch.\n\n====\n\nFILE: logical-replication.sgml\n\n+ slots have generated names:\n<quote><literal>pg_%u_sync_%u_%llu</literal></quote>\n+ (parameters: Subscription <parameter>oid</parameter>,\n+ Table <parameter>relid</parameter>, system\nidentifier<parameter>sysid</parameter>)\n+ </para>\n\n1.\nThere is a missing space before the sysid parameter.\n\n=====\n\nFILE: subscriptioncmds.c\n\n+ * SUBREL_STATE_FINISHEDCOPY. The apply worker can also\n+ * concurrently try to drop the origin and by this time the\n+ * origin might be already removed. For these reasons,\n+ * passing missing_ok = true from here.\n+ */\n+ snprintf(originname, sizeof(originname), \"pg_%u_%u\", sub->oid, relid);\n+ replorigin_drop_by_name(originname, true, false);\n+ }\n\n2.\nDon't really need to say \"from here\".\n(same comment applies multiple places, in this file and in tablesync.c)\n\n3.\nPreviously the tablesync origin name format was encapsulated in a\ncommon function. IMO it was cleaner/safer how it was before, instead\nof the same \"pg_%u_%u\" cut/paste and scattered in many places.\n(same comment applies multiple places, in this file and in tablesync.c)\n\n4.\nCalls like replorigin_drop_by_name(originname, true, false); make it\nunnecessarily hard to read code when the boolean params are neither\nnamed as variables nor commented. I noticed on another thread [et0205]\nthere was an idea that having no name/comments is fine because anyway\nit is not difficult to figure out when using a \"modern IDE\", but since\nmy review tools are only \"vi\" and \"meld\" I beg to differ with that\njustification.\n(same comment applies multiple places, in this file and in tablesync.c)\n\n[et0205] https://www.postgresql.org/message-id/c1d9833f-eeeb-40d5-89ba-87674e1b7ba3%40www.fastmail.com\n\n=====\n\nFILE: tablesync.c\n\n5.\nPreviously there was a function tablesync_replorigin_drop which was\nencapsulating the tablesync origin name formatting. 
I thought that was\nbetter than the V29 code which now has the same formatting scattered\nover many places.\n(same comment applies for worker_internal.h)\n\n+ * Determine the tablesync slot name.\n+ *\n+ * The name must not exceed NAMEDATALEN - 1 because of remote node constraints\n+ * on slot name length. We do append system_identifier to avoid slot_name\n+ * collision with subscriptions in other clusters. With current scheme\n+ * pg_%u_sync_%u_UINT64_FORMAT (3 + 10 + 6 + 10 + 20 + '\\0'), the maximum\n+ * length of slot_name will be 50.\n+ *\n+ * The returned slot name is either:\n+ * - stored in the supplied buffer (syncslotname), or\n+ * - palloc'ed in current memory context (if syncslotname = NULL).\n+ *\n+ * Note: We don't use the subscription slot name as part of tablesync slot name\n+ * because we are responsible for cleaning up these slots and it could become\n+ * impossible to recalculate what name to cleanup if the subscription slot name\n+ * had changed.\n+ */\n+char *\n+ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char\nsyncslotname[NAMEDATALEN])\n+{\n+ if (syncslotname)\n+ sprintf(syncslotname, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid, relid,\n+ GetSystemIdentifier());\n+ else\n+ syncslotname = psprintf(\"pg_%u_sync_%u_\" UINT64_FORMAT, suboid, relid,\n+ GetSystemIdentifier());\n+\n+ return syncslotname;\n+}\n\n6.\n\"We do append\" --> \"We append\"\n\"With current scheme\" -> \"With the current scheme\"\n\n7.\nMaybe consider to just assign GetSystemIdentifier() to a static\ninstead of calling that function for every slot?\nstatic uint64 sysid = GetSystemIdentifier();\nIIUC the sysid value is never going to change for a process, right?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Mon, Feb 8, 2021 at 9:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 8:40 PM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > We had a bit high-level discussion about this patches with 
Amit\n> > off-list, so I decided to also take a look at the actual code.\n> >\n>\n> Thanks for the discussion and a follow-up review.\n>\n> > My main concern originally was the potential for left-over slots on\n> > publisher, but I think the state now is relatively okay, with couple of\n> > corner cases that are documented and don't seem much worse than the main\n> > slot.\n> >\n> > I wonder if we should mention the max_slot_wal_keep_size GUC in the\n> > table sync docs though.\n> >\n>\n> I have added the reference of this in Alter Subscription where we\n> mentioned the risk of leftover slots. Let me know if you have\n> something else in mind?\n>\n> > Another thing that might need documentation is that the the visibility\n> > of changes done by table sync is not anymore isolated in that table\n> > contents will show intermediate progress to other backends, rather than\n> > switching from nothing to state consistent with rest of replication.\n> >\n>\n> Agreed and updated the docs accordingly.\n>\n> >\n> > Some minor comments about code:\n> >\n> > > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > > + {\n> > > + /* WARNING. Error, but missing_ok = true. */\n> > > + ereport(WARNING,\n> >\n> > I wonder if we need to add error code to the WalRcvExecResult and check\n> > for the appropriate ones here. Because this can for example return error\n> > because of timeout, not because slot is missing.\n> >\n>\n> I think there are both pros and cons of distinguishing the error\n> (\"slot doesnot exist\" from others). The benefit is if there a network\n> glitch then the user can probably retry the commands Alter/Drop and it\n> will be successful next time. OTOH, say the network is broken for a\n> long time and the user wants to proceed but there won't be any way to\n> proceed for Alter Subscription ... Refresh or Drop Command. So by\n> giving WARNING at least we can provide a way to proceed and then they\n> can drop such slots later. 
We have mentioned this in docs as well. I\n> think we can go either way here, let me know what do you think is a\n> better way?\n>\n> > Not sure if it matters\n> > for current callers though (but then maybe don't call the param\n> > missign_ok?).\n> >\n>\n> Sure, if we decide not to change the behavior as suggested by you then\n> this makes sense.\n>\n> >\n> > > +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char syncslotname[NAMEDATALEN])\n> > > +{\n> > > + if (syncslotname)\n> > > + sprintf(syncslotname, \"pg_%u_sync_%u\", suboid, relid);\n> > > + else\n> > > + syncslotname = psprintf(\"pg_%u_sync_%u\", suboid, relid);\n> > > +\n> > > + return syncslotname;\n> > > +}\n> >\n> > Given that we are now explicitly dropping slots, what happens here if we\n> > have 2 different downstreams that happen to get same suboid and reloid,\n> > will one of the drop the slot of the other one? Previously with the\n> > cleanup being left to temp slot we'd at maximum got error when creating\n> > it but with the new logic in LogicalRepSyncTableStart it feels like we\n> > could get into situation where 2 downstreams are fighting over slot no?\n> >\n>\n> As discussed, added system_identifier to distinguish subscriptions\n> between different clusters.\n>\n> Apart from fixing the above comment, I have integrated it with the new\n> replorigin_drop_by_name() API being discussed in the thread [1] and\n> posted that patch just for ease. I have also integrated Osumi-San's\n> test case patch with minor modifications.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1L7mLhY%3DwyCB0qsEGUpfzWfncDSS9_0a4Co%2BN0GUyNGNQ%40mail.gmail.com\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n",
"msg_date": "Tue, 9 Feb 2021 17:32:00 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "More V29 Feedback\n\nFILE: alter_subscription.sgml\n\n8.\n+ <para>\n+ Commands <command>ALTER SUBSCRIPTION ... REFRESH ..</command> and\n+ <command>ALTER SUBSCRIPTION ... SET PUBLICATION ..</command> with refresh\n+ option as true cannot be executed inside a transaction block.\n+ </para>\n\nMy guess is those two lots of double dots (\"..\") were probably meant\nto be ellipsis (\"...\")\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 Feb 2021 18:37:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "Looking at the V29 style tablesync slot names now they appear like this:\n\nWARNING: could not drop tablesync replication slot\n\"pg_16397_sync_16389_6927117142022745645\"\nThat is in the order subid + relid + sysid\n\nNow that I see it in a message it seems a bit strange with the sysid\njust tacked onto the end like that.\n\nI am wondering if reordering from parent to child might be more natural.\ne.g. sysid + subid + relid gives a more intuitive name IMO.\n\nSo in this example it would be \"pg_sync_6927117142022745645_16397_16389\"\n\nThoughts?\n\n----\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 Feb 2021 19:07:34 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "When looking at the DropSubscription code I noticed that there is a\nsmall difference between the HEAD code and the V29 code when slot_name\n= NONE.\n\nHEAD does\n------\n if (!slotname)\n {\n table_close(rel, NoLock);\n return;\n }\n------\n\nV29 does\n------\n if (!slotname)\n {\n /* be tidy */\n list_free(rstates);\n return;\n }\n------\n\nIsn't the V29 code missing a table_close(rel, NoLock) call there?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 Feb 2021 19:33:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 12:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my feedback comments for the V29 patch.\n>\n\nThanks.\n\n>\n> 3.\n> Previously the tablesync origin name format was encapsulated in a\n> common function. IMO it was cleaner/safer how it was before, instead\n> of the same \"pg_%u_%u\" cut/paste and scattered in many places.\n> (same comment applies multiple places, in this file and in tablesync.c)\n>\n> 4.\n> Calls like replorigin_drop_by_name(originname, true, false); make it\n> unnecessarily hard to read code when the boolean params are neither\n> named as variables nor commented. I noticed on another thread [et0205]\n> there was an idea that having no name/comments is fine because anyway\n> it is not difficult to figure out when using a \"modern IDE\", but since\n> my review tools are only \"vi\" and \"meld\" I beg to differ with that\n> justification.\n> (same comment applies multiple places, in this file and in tablesync.c)\n>\n\nIt would be a bit convenient for you but for most others, I think it\nwould be noise. Personally, I find the code more readable without such\nname comments, it just breaks the flow of code unless you want to\nstudy in detail the value of each param.\n\n> [et0205] https://www.postgresql.org/message-id/c1d9833f-eeeb-40d5-89ba-87674e1b7ba3%40www.fastmail.com\n>\n> =====\n>\n> FILE: tablesync.c\n>\n> 5.\n> Previously there was a function tablesync_replorigin_drop which was\n> encapsulating the tablesync origin name formatting. 
I thought that was\n> better than the V29 code which now has the same formatting scattered\n> over many places.\n> (same comment applies for worker_internal.h)\n>\n\nIsn't this the same as what you want to say in point-3?\n\n>\n> 7.\n> Maybe consider to just assign GetSystemIdentifier() to a static\n> instead of calling that function for every slot?\n> static uint64 sysid = GetSystemIdentifier();\n> IIUC the sysid value is never going to change for a process, right?\n>\n\nThat's right but I am not sure if there is much value in saving one\ncall here by introducing extra variable.\n\nI'll fix other comments raised by you.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Feb 2021 15:01:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 1:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Looking at the V29 style tablesync slot names now they appear like this:\n>\n> WARNING: could not drop tablesync replication slot\n> \"pg_16397_sync_16389_6927117142022745645\"\n> That is in the order subid + relid + sysid\n>\n> Now that I see it in a message it seems a bit strange with the sysid\n> just tacked onto the end like that.\n>\n> I am wondering if reordering of parent to child might be more natural.\n> e.g sysid + subid + relid gives a more intuitive name IMO.\n>\n> So in this example it would be \"pg_sync_6927117142022745645_16397_16389\"\n>\n\nI have kept the order based on the importance of each parameter. Say\nwhen the user sees this message in the server log of the subscriber\neither for the purpose of tracking the origins progress or for errors,\nthe sysid parameter won't be of much use and they will be mostly\nlooking at subid and relid. OTOH, if due to some reason this parameter\nappears in the publisher logs then sysid might be helpful.\n\nPetr, anyone else, do you have any opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Feb 2021 15:08:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 12:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my feedback comments for the V29 patch.\n>\n> ====\n>\n> FILE: logical-replication.sgml\n>\n> + slots have generated names:\n> <quote><literal>pg_%u_sync_%u_%llu</literal></quote>\n> + (parameters: Subscription <parameter>oid</parameter>,\n> + Table <parameter>relid</parameter>, system\n> identifier<parameter>sysid</parameter>)\n> + </para>\n>\n> 1.\n> There is a missing space before the sysid parameter.\n>\n> =====\n>\n> FILE: subscriptioncmds.c\n>\n> + * SUBREL_STATE_FINISHEDCOPY. The apply worker can also\n> + * concurrently try to drop the origin and by this time the\n> + * origin might be already removed. For these reasons,\n> + * passing missing_ok = true from here.\n> + */\n> + snprintf(originname, sizeof(originname), \"pg_%u_%u\", sub->oid, relid);\n> + replorigin_drop_by_name(originname, true, false);\n> + }\n>\n> 2.\n> Don't really need to say \"from here\".\n> (same comment applies multiple places, in this file and in tablesync.c)\n>\n> 3.\n> Previously the tablesync origin name format was encapsulated in a\n> common function. IMO it was cleaner/safer how it was before, instead\n> of the same \"pg_%u_%u\" cut/paste and scattered in many places.\n> (same comment applies multiple places, in this file and in tablesync.c)\n>\n\nFixed all the three above comments.\n\n> 4.\n> Calls like replorigin_drop_by_name(originname, true, false); make it\n> unnecessarily hard to read code when the boolean params are neither\n> named as variables nor commented. I noticed on another thread [et0205]\n> there was an idea that having no name/comments is fine because anyway\n> it is not difficult to figure out when using a \"modern IDE\", but since\n> my review tools are only \"vi\" and \"meld\" I beg to differ with that\n> justification.\n> (same comment applies multiple places, in this file and in tablesync.c)\n>\n\nAlready responded to it separately. 
I went ahead and removed such\ncomments from other places in the patch.\n\n> [et0205] https://www.postgresql.org/message-id/c1d9833f-eeeb-40d5-89ba-87674e1b7ba3%40www.fastmail.com\n>\n> =====\n>\n> FILE: tablesync.c\n>\n> 5.\n> Previously there was a function tablesync_replorigin_drop which was\n> encapsulating the tablesync origin name formatting. I thought that was\n> better than the V29 code which now has the same formatting scattered\n> over many places.\n> (same comment applies for worker_internal.h)\n>\n\nI am not sure what different you are expecting here than point-3?\n\n> + * Determine the tablesync slot name.\n> + *\n> + * The name must not exceed NAMEDATALEN - 1 because of remote node constraints\n> + * on slot name length. We do append system_identifier to avoid slot_name\n> + * collision with subscriptions in other clusters. With current scheme\n> + * pg_%u_sync_%u_UINT64_FORMAT (3 + 10 + 6 + 10 + 20 + '\\0'), the maximum\n> + * length of slot_name will be 50.\n> + *\n> + * The returned slot name is either:\n> + * - stored in the supplied buffer (syncslotname), or\n> + * - palloc'ed in current memory context (if syncslotname = NULL).\n> + *\n> + * Note: We don't use the subscription slot name as part of tablesync slot name\n> + * because we are responsible for cleaning up these slots and it could become\n> + * impossible to recalculate what name to cleanup if the subscription slot name\n> + * had changed.\n> + */\n> +char *\n> +ReplicationSlotNameForTablesync(Oid suboid, Oid relid, char\n> syncslotname[NAMEDATALEN])\n> +{\n> + if (syncslotname)\n> + sprintf(syncslotname, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid, relid,\n> + GetSystemIdentifier());\n> + else\n> + syncslotname = psprintf(\"pg_%u_sync_%u_\" UINT64_FORMAT, suboid, relid,\n> + GetSystemIdentifier());\n> +\n> + return syncslotname;\n> +}\n>\n> 6.\n> \"We do append\" --> \"We append\"\n> \"With current scheme\" -> \"With the current scheme\"\n>\n\nFixed.\n\n> 7.\n> Maybe consider to just 
assign GetSystemIdentifier() to a static\n> instead of calling that function for every slot?\n> static uint64 sysid = GetSystemIdentifier();\n> IIUC the sysid value is never going to change for a process, right?\n>\n\nAlready responded.\n\n> FILE: alter_subscription.sgml\n>\n> 8.\n> + <para>\n> + Commands <command>ALTER SUBSCRIPTION ... REFRESH ..</command> and\n> + <command>ALTER SUBSCRIPTION ... SET PUBLICATION ..</command> with refresh\n> + option as true cannot be executed inside a transaction block.\n> + </para>\n>\n> My guess is those two lots of double dots (\"..\") were probably meant\n> to be ellipsis (\"...\")\n>\n\nFixed, for the first one I completed the command by adding PUBLICATION.\n\n>\n> When looking at the DropSubscription code I noticed that there is a\n> small difference between the HEAD code and the V29 code when slot_name\n> = NONE.\n>\n> HEAD does\n> ------\n> if (!slotname)\n> {\n> table_close(rel, NoLock);\n> return;\n> }\n> ------\n>\n> V29 does\n> ------\n> if (!slotname)\n> {\n> /* be tidy */\n> list_free(rstates);\n> return;\n> }\n> ------\n>\n> Isn't the V29 code missing doing a table_close(rel, NoLock) there?\n>\n\nYes, good catch. Fixed.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 9 Feb 2021 16:49:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 8:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 12:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my feedback comments for the V29 patch.\n> >\n>\n> Thanks.\n>\n> >\n> > 3.\n> > Previously the tablesync origin name format was encapsulated in a\n> > common function. IMO it was cleaner/safer how it was before, instead\n> > of the same \"pg_%u_%u\" cut/paste and scattered in many places.\n> > (same comment applies multiple places, in this file and in tablesync.c)\n\nOK. I confirmed it is fixed in V30.\n\nBut I noticed that the new function name is not quite consistent with the\nexisting function for the slot name, e.g.\nReplicationSlotNameForTablesync versus\nReplicationOriginNameForTableSync (see \"TableSync\" instead of\n\"Tablesync\")\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 10 Feb 2021 11:51:57 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 8, 2021 at 11:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sun, Feb 7, 2021 at 2:38 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Sat, Feb 6, 2021 at 2:10 AM Petr Jelinek\n> > > <petr.jelinek@enterprisedb.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > Some minor comments about code:\n> > > >\n> > > > > + else if (res->status == WALRCV_ERROR && missing_ok)\n> > > > > + {\n> > > > > + /* WARNING. Error, but missing_ok = true. */\n> > > > > + ereport(WARNING,\n> > > >\n> > > > I wonder if we need to add error code to the WalRcvExecResult and check\n> > > > for the appropriate ones here. Because this can for example return error\n> > > > because of timeout, not because slot is missing. Not sure if it matters\n> > > > for current callers though (but then maybe don't call the param\n> > > > missign_ok?).\n> > >\n> > > You are right. The way we are using this function has evolved beyond\n> > > the original intention.\n> > > Probably renaming the param to something like \"error_ok\" would be more\n> > > appropriate now.\n> > >\n> >\n> > PSA a patch (apply on top of V28) to change the misleading param name.\n> >\n>\n> PSA an alternative patch. This one adds a new member to\n> WalRcvExecResult and so is able to detect the \"slot does not exist\"\n> error. This patch also applies on top of V28, if you want it.\n>\n\nPSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\nsome PG doc updates).\nThis applies OK on top of v30 of the main patch.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 10 Feb 2021 13:11:42 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> PSA an alternative patch. This one adds a new member to\n> WalRcvExecResult and so is able to detect the \"slot does not exist\"\n> error. This patch also applies on top of V28, if you want it.\n\nDid some testing with this patch on top of v29. I could see that now,\nwhile dropping the subscription, if the tablesync slot does not exist\non the publisher, then it gives a warning\nbut the command does not fail.\n\npostgres=# CREATE SUBSCRIPTION tap_sub CONNECTION 'host=localhost\ndbname=postgres port=6972' PUBLICATION tap_pub WITH (enabled = false);\nNOTICE: created replication slot \"tap_sub\" on publisher\nCREATE SUBSCRIPTION\npostgres=# ALTER SUBSCRIPTION tap_sub enable;\nALTER SUBSCRIPTION\npostgres=# ALTER SUBSCRIPTION tap_sub disable;\nALTER SUBSCRIPTION\n=== here, the tablesync slot exists on the publisher but I go and\n=== manually drop it.\n\npostgres=# drop subscription tap_sub;\nWARNING: could not drop the replication slot\n\"pg_16401_sync_16389_6927117142022745645\" on publisher\nDETAIL: The error was: ERROR: replication slot\n\"pg_16401_sync_16389_6927117142022745645\" does not exist\nNOTICE: dropped replication slot \"tap_sub\" on publisher\nDROP SUBSCRIPTION\n\nI have a minor comment on the error message, the \"The error was:\"\nseems a bit redundant here. Maybe remove it? So that it looks like:\n\nWARNING: could not drop the replication slot\n\"pg_16401_sync_16389_6927117142022745645\" on publisher\nDETAIL: ERROR: replication slot\n\"pg_16401_sync_16389_6927117142022745645\" does not exist\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 10 Feb 2021 15:07:33 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n> some PG doc updates).\n> This applies OK on top of v30 of the main patch.\n>\n\nThanks, I have integrated these changes into the main patch and\nadditionally made some changes to comments and docs. I have also fixed\nthe function name inconsistency issue you reported and ran pgindent.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 10 Feb 2021 11:02:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "I have reviewed again the latest patch (V31)\n\nI found only a few minor nitpick issues not worth listing.\n\nThen I ran the subscription TAP tests 50x in a loop as a kind of\nstress test. That ran for 2.5hrs and the result was all 50x 'Result:\nPASS'.\n\nSo V31 looks good to me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 11 Feb 2021 13:16:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On 10 Feb 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> \n>> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>> \n>> \n>> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n>> some PG doc updates).\n>> This applies OK on top of v30 of the main patch.\n>> \n> \n> Thanks, I have integrated these changes into the main patch and\n> additionally made some changes to comments and docs. I have also fixed\n> the function name inconsistency issue you reported and ran pgindent.\n\nOne thing:\n\n> +\t\telse if (res->status == WALRCV_ERROR &&\n> +\t\t\t\t missing_ok &&\n> +\t\t\t\t res->sqlstate == ERRCODE_UNDEFINED_OBJECT)\n> +\t\t{\n> +\t\t\t/* WARNING. Error, but missing_ok = true. */\n> +\t\t\tereport(WARNING,\n> \t\t\t\t\t(errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n> \t\t\t\t\t\t\tslotname),\n> \t\t\t\t\t errdetail(\"The error was: %s\", res->err)));\n\nHmm, why is this WARNING, we mostly call it with missing_ok = true when the slot is not expected to be there, so it does not seem correct to report it as warning?\n\n--\nPetr\n\n",
"msg_date": "Thu, 11 Feb 2021 09:21:28 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 1:51 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> On 10 Feb 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >>\n> >> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >>>\n> >>\n> >> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n> >> some PG doc updates).\n> >> This applies OK on top of v30 of the main patch.\n> >>\n> >\n> > Thanks, I have integrated these changes into the main patch and\n> > additionally made some changes to comments and docs. I have also fixed\n> > the function name inconsistency issue you reported and ran pgindent.\n>\n> One thing:\n>\n> > + else if (res->status == WALRCV_ERROR &&\n> > + missing_ok &&\n> > + res->sqlstate == ERRCODE_UNDEFINED_OBJECT)\n> > + {\n> > + /* WARNING. Error, but missing_ok = true. */\n> > + ereport(WARNING,\n> > (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n> > slotname),\n> > errdetail(\"The error was: %s\", res->err)));\n>\n> Hmm, why is this WARNING, we mostly call it with missing_ok = true when the slot is not expected to be there, so it does not seem correct to report it as warning?\n>\n\nWARNING is for the cases where we don't always expect slots to exist\nand we don't want to stop the operation due to it. For example, in\nDropSubscription, for some of the rel states like (SUBREL_STATE_INIT\nand SUBREL_STATE_DATASYNC), the slot won't exist. Similarly, say if we\nfail (due to network error) after removing some of the slots, next\ntime, it will again try to drop already dropped slots and fail. For\nthese reasons, we need to use WARNING. 
Similarly for tablesync workers\nwhen we are trying to initially drop the slot there is no certainty\nthat it exists, so we can't throw ERROR and stop the operation there.\nThere are other cases like when the table sync worker has finished\nsyncing the table, there we will raise an ERROR if the slot doesn't\nexist. Does this make sense?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Feb 2021 15:12:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On 11 Feb 2021, at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Thu, Feb 11, 2021 at 1:51 PM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n>> \n>> On 10 Feb 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> \n>>> On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>>> \n>>>> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>>>> \n>>>> \n>>>> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n>>>> some PG doc updates).\n>>>> This applies OK on top of v30 of the main patch.\n>>>> \n>>> \n>>> Thanks, I have integrated these changes into the main patch and\n>>> additionally made some changes to comments and docs. I have also fixed\n>>> the function name inconsistency issue you reported and ran pgindent.\n>> \n>> One thing:\n>> \n>>> + else if (res->status == WALRCV_ERROR &&\n>>> + missing_ok &&\n>>> + res->sqlstate == ERRCODE_UNDEFINED_OBJECT)\n>>> + {\n>>> + /* WARNING. Error, but missing_ok = true. */\n>>> + ereport(WARNING,\n>>> (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n>>> slotname),\n>>> errdetail(\"The error was: %s\", res->err)));\n>> \n>> Hmm, why is this WARNING, we mostly call it with missing_ok = true when the slot is not expected to be there, so it does not seem correct to report it as warning?\n>> \n> \n> WARNING is for the cases where we don't always expect slots to exist\n> and we don't want to stop the operation due to it. For example, in\n> DropSubscription, for some of the rel states like (SUBREL_STATE_INIT\n> and SUBREL_STATE_DATASYNC), the slot won't exist. Similarly, say if we\n> fail (due to network error) after removing some of the slots, next\n> time, it will again try to drop already dropped slots and fail. For\n> these reasons, we need to use WARNING. 
Similarly for tablesync workers\n> when we are trying to initially drop the slot there is no certainty\n> that it exists, so we can't throw ERROR and stop the operation there.\n> There are other cases like when the table sync worker has finished\n> syncing the table, there we will raise an ERROR if the slot doesn't\n> exist. Does this make sense?\n\nWell, I was thinking it could be NOTICE or LOG to be honest, WARNING seems unnecessarily scary for those use cases to me.\n\n—\nPetr\n\n\n\n",
"msg_date": "Thu, 11 Feb 2021 10:50:56 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 3:20 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> On 11 Feb 2021, at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Feb 11, 2021 at 1:51 PM Petr Jelinek\n> > <petr.jelinek@enterprisedb.com> wrote:\n> >>\n> >> On 10 Feb 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>\n> >>> On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >>>>\n> >>>> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >>>>>\n> >>>>\n> >>>> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n> >>>> some PG doc updates).\n> >>>> This applies OK on top of v30 of the main patch.\n> >>>>\n> >>>\n> >>> Thanks, I have integrated these changes into the main patch and\n> >>> additionally made some changes to comments and docs. I have also fixed\n> >>> the function name inconsistency issue you reported and ran pgindent.\n> >>\n> >> One thing:\n> >>\n> >>> + else if (res->status == WALRCV_ERROR &&\n> >>> + missing_ok &&\n> >>> + res->sqlstate == ERRCODE_UNDEFINED_OBJECT)\n> >>> + {\n> >>> + /* WARNING. Error, but missing_ok = true. */\n> >>> + ereport(WARNING,\n> >>> (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n> >>> slotname),\n> >>> errdetail(\"The error was: %s\", res->err)));\n> >>\n> >> Hmm, why is this WARNING, we mostly call it with missing_ok = true when the slot is not expected to be there, so it does not seem correct to report it as warning?\n> >>\n> >\n> > WARNING is for the cases where we don't always expect slots to exist\n> > and we don't want to stop the operation due to it. For example, in\n> > DropSubscription, for some of the rel states like (SUBREL_STATE_INIT\n> > and SUBREL_STATE_DATASYNC), the slot won't exist. Similarly, say if we\n> > fail (due to network error) after removing some of the slots, next\n> > time, it will again try to drop already dropped slots and fail. 
For\n> > these reasons, we need to use WARNING. Similarly for tablesync workers\n> > when we are trying to initially drop the slot there is no certainty\n> > that it exists, so we can't throw ERROR and stop the operation there.\n> > There are other cases like when the table sync worker has finished\n> > syncing the table, there we will raise an ERROR if the slot doesn't\n> > exist. Does this make sense?\n>\n> Well, I was thinking it could be NOTICE or LOG to be honest, WARNING seems unnecessarily scary for those usecases to me.\n>\n\nI am fine with LOG and will make that change. Do you have any more\ncomments or want to spend more time on this patch before we call it\ngood?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Feb 2021 15:26:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On 11 Feb 2021, at 10:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Thu, Feb 11, 2021 at 3:20 PM Petr Jelinek\n> <petr.jelinek@enterprisedb.com> wrote:\n>> \n>> On 11 Feb 2021, at 10:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> \n>>> On Thu, Feb 11, 2021 at 1:51 PM Petr Jelinek\n>>> <petr.jelinek@enterprisedb.com> wrote:\n>>>> \n>>>> On 10 Feb 2021, at 06:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>> \n>>>>> On Wed, Feb 10, 2021 at 7:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>>>>> \n>>>>>> On Tue, Feb 9, 2021 at 10:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>>>>>> \n>>>>>> \n>>>>>> PSA v2 of this WalRcvExceResult patch (it is same as v1 but includes\n>>>>>> some PG doc updates).\n>>>>>> This applies OK on top of v30 of the main patch.\n>>>>>> \n>>>>> \n>>>>> Thanks, I have integrated these changes into the main patch and\n>>>>> additionally made some changes to comments and docs. I have also fixed\n>>>>> the function name inconsistency issue you reported and ran pgindent.\n>>>> \n>>>> One thing:\n>>>> \n>>>>> + else if (res->status == WALRCV_ERROR &&\n>>>>> + missing_ok &&\n>>>>> + res->sqlstate == ERRCODE_UNDEFINED_OBJECT)\n>>>>> + {\n>>>>> + /* WARNING. Error, but missing_ok = true. */\n>>>>> + ereport(WARNING,\n>>>>> (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n>>>>> slotname),\n>>>>> errdetail(\"The error was: %s\", res->err)));\n>>>> \n>>>> Hmm, why is this WARNING, we mostly call it with missing_ok = true when the slot is not expected to be there, so it does not seem correct to report it as warning?\n>>>> \n>>> \n>>> WARNING is for the cases where we don't always expect slots to exist\n>>> and we don't want to stop the operation due to it. For example, in\n>>> DropSubscription, for some of the rel states like (SUBREL_STATE_INIT\n>>> and SUBREL_STATE_DATASYNC), the slot won't exist. 
Similarly, say if we\n>>> fail (due to network error) after removing some of the slots, next\n>>> time, it will again try to drop already dropped slots and fail. For\n>>> these reasons, we need to use WARNING. Similarly for tablesync workers\n>>> when we are trying to initially drop the slot there is no certainty\n>>> that it exists, so we can't throw ERROR and stop the operation there.\n>>> There are other cases like when the table sync worker has finished\n>>> syncing the table, there we will raise an ERROR if the slot doesn't\n>>> exist. Does this make sense?\n>> \n>> Well, I was thinking it could be NOTICE or LOG to be honest, WARNING seems unnecessarily scary for those usecases to me.\n>> \n> \n> I am fine with LOG and will make that change. Do you have any more\n> comments or want to spend more time on this patch before we call it\n> good?\n\nI am good, thanks!\n\n—\nPetr\n\n",
"msg_date": "Thu, 11 Feb 2021 11:02:31 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 3:32 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> On 11 Feb 2021, at 10:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >> Well, I was thinking it could be NOTICE or LOG to be honest, WARNING seems unnecessarily scary for those usecases to me.\n> >>\n> >\n> > I am fine with LOG and will make that change. Do you have any more\n> > comments or want to spend more time on this patch before we call it\n> > good?\n>\n> I am good, thanks!\n>\n\nOkay, attached an updated patch with only that change.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 11 Feb 2021 17:08:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 10:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Okay, attached an updated patch with only that change.\n\nI ran Erik's test suite [1] on this patch overnight and found no\nerrors. No more comments from me. The patch looks good.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n[1]- https://www.postgresql.org/message-id/93d02794068482f96d31b002e0eb248d%40xs4all.nl\n\n\n",
"msg_date": "Fri, 12 Feb 2021 12:48:32 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 12, 2021 at 7:18 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Thu, Feb 11, 2021 at 10:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Okay, attached an updated patch with only that change.\n>\n> I ran Erik's test suite [1] on this patch overnight and found no\n> errors. No more comments from me. The patch looks good.\n>\n\nThanks, I have pushed the patch but getting one failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-02-12%2002%3A28%3A12\n\nThe reason seems to be that we are trying to connect and\nmax_wal_senders is set to zero. I think we can write this without\ntrying to connect. The attached patch fixes the problem for me. What\ndo you think?\n\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 12 Feb 2021 09:16:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 12, 2021 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Thanks, I have pushed the patch but getting one failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-02-12%2002%3A28%3A12\n>\n> The reason seems to be that we are trying to connect and\n> max_wal_senders is set to zero. I think we can write this without\n> trying to connect. The attached patch fixes the problem for me. What\n> do you think?\n\nVerified this with installcheck and modified configuration to have\nwal_level = minimal and max_wal_senders = 0.\nTests passed. The changes look good to me.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 12 Feb 2021 15:37:53 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 12, 2021 at 10:08 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, Feb 12, 2021 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > Thanks, I have pushed the patch but getting one failure:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-02-12%2002%3A28%3A12\n> >\n> > The reason seems to be that we are trying to connect and\n> > max_wal_senders is set to zero. I think we can write this without\n> > trying to connect. The attached patch fixes the problem for me. What\n> > do you think?\n>\n> Verified this with installcheck and modified configuration to have\n> wal_level = minimal and max_wal_senders = 0.\n> Tests passed. The changes look good to me.\n>\n\nThanks, I have pushed the fix and the latest run of 'thorntail' has passed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Feb 2021 11:18:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Fri, Feb 12, 2021 at 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 12, 2021 at 10:08 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Thanks, I have pushed the patch but getting one failure:\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2021-02-12%2002%3A28%3A12\n> > >\n> > > The reason seems to be that we are trying to connect and\n> > > max_wal_senders is set to zero. I think we can write this without\n> > > trying to connect. The attached patch fixes the problem for me. What\n> > > do you think?\n> >\n> > Verified this with installcheck and modified configuration to have\n> > wal_level = minimal and max_wal_senders = 0.\n> > Tests passed. The changes look good to me.\n> >\n>\n> Thanks, I have pushed the fix and the latest run of 'thorntail' has passed.\n\nI got the following WARNING message from a logical replication apply worker:\n\nWARNING: relcache reference leak: relation \"pg_subscription_rel\" not closed\n\nThe cause of this is that GetSubscriptionRelState() doesn't close the\nrelation in the SUBREL_STATE_UNKNOWN case. It seems that commit ce0fdbfe9\nforgot to close it. I've attached the patch to fix this issue.\n\nHere are the steps to reproduce it:\n\n1. On both publisher and subscriber:\ncreate table test (a int primary key);\n\n2. On publisher:\ncreate publication test_pub for table test;\n\n3. On subscriber:\ncreate subscription test_sub connection 'dbname=postgres' publication test_pub;\n-- wait until table sync finished\ndrop table test;\ncreate table test (a int primary key);\n\nFrom this point, you will get the WARNING message when doing\ninsert/update/delete/truncate on the 'test' table on the publisher.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 24 Feb 2021 16:16:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Feb 12, 2021 at 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Thanks, I have pushed the fix and the latest run of 'thorntail' has passed.\n>\n> I got the following WARNING message from a logical replication apply worker:\n>\n> WARNING: relcache reference leak: relation \"pg_subscription_rel\" not closed\n>\n> The cause of this is that GetSubscriptionRelState() doesn't close the\n> relation in SUBREL_STATE_UNKNOWN case. It seems that commit ce0fdbfe9\n> forgot to close it. I've attached the patch to fix this issue.\n>\n\nThanks for the report and fix. Your patch LGTM. I'll push it tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Feb 2021 17:55:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 5:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 24, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Thanks, I have pushed the fix and the latest run of 'thorntail' has passed.\n> >\n> > I got the following WARNING message from a logical replication apply worker:\n> >\n> > WARNING: relcache reference leak: relation \"pg_subscription_rel\" not closed\n> >\n> > The cause of this is that GetSubscriptionRelState() doesn't close the\n> > relation in SUBREL_STATE_UNKNOWN case. It seems that commit ce0fdbfe9\n> > forgot to close it. I've attached the patch to fix this issue.\n> >\n>\n> Thanks for the report and fix. Your patch LGTM. I'll push it tomorrow.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Feb 2021 10:22:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Single transaction in the tablesync worker?"
},
{
"msg_contents": "On Thu, Feb 25, 2021 at 1:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 24, 2021 at 5:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 24, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 12, 2021 at 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Thanks, I have pushed the fix and the latest run of 'thorntail' has passed.\n> > >\n> > > I got the following WARNING message from a logical replication apply worker:\n> > >\n> > > WARNING: relcache reference leak: relation \"pg_subscription_rel\" not closed\n> > >\n> > > The cause of this is that GetSubscriptionRelState() doesn't close the\n> > > relation in SUBREL_STATE_UNKNOWN case. It seems that commit ce0fdbfe9\n> > > forgot to close it. I've attached the patch to fix this issue.\n> > >\n> >\n> > Thanks for the report and fix. Your patch LGTM. I'll push it tomorrow.\n> >\n>\n> Pushed!\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 26 Feb 2021 09:46:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Single transaction in the tablesync worker?"
}
] |
[
{
"msg_contents": "Hello,\n\nNow that we have the infrastructure to track indexes that might be corrupted\ndue to changes in collation libraries, I think it would be a good idea to offer\nan easy way for users to reindex all indexes that might be corrupted.\n\nI'm attaching a POC patch as a discussion basis. It implements a new\n\"COLLATION\" option to reindex, with \"not_current\" being the only accepted\nvalue. Note that I didn't spend too much effort on the grammar part yet.\n\nSo for instance you can do:\n\nREINDEX (COLLATION 'not_current') DATABASE mydb;\n\nThe filter is also implemented so that you could cumulate multiple filters, so\nit could be easy to add more filtering, for instance:\n\nREINDEX (COLLATION 'libc', COLLATION 'not_current') DATABASE mydb;\n\nto only rebuild indexes depending on outdated libc collations, or\n\nREINDEX (COLLATION 'libc', VERSION 'X.Y') DATABASE mydb;\n\nto only rebuild indexes depending on a specific version of libc.",
"msg_date": "Thu, 3 Dec 2020 17:31:43 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "REINDEX backend filtering"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 05:31:43PM +0800, Julien Rouhaud wrote:\n> Now that we have the infrastructure to track indexes that might be corrupted\n> due to changes in collation libraries, I think it would be a good idea to offer\n> an easy way for users to reindex all indexes that might be corrupted.\n\nYes. It would be a good thing.\n\n> The filter is also implemented so that you could cumulate multiple filters, so\n> it could be easy to add more filtering, for instance:\n> \n> REINDEX (COLLATION 'libc', COLLATION 'not_current') DATABASE mydb;\n> \n> to only rebuild indexes depending on outdated libc collations, or\n> \n> REINDEX (COLLATION 'libc', VERSION 'X.Y') DATABASE mydb;\n> \n> to only rebuild indexes depending on a specific version of libc.\n\nDeciding on the grammar to use depends on the use cases we would like\nto satisfy. From what I heard on this topic, the goal is to reduce\nthe amount of time necessary to reindex a system so that REINDEX only\nworks on indexes whose dependent collation versions are not known or\nworks on indexes in need of a collation refresh (like a reindexdb\n--all --collation -j $jobs). What would be the benefit in having more\ncomplexity with library-dependent settings while we could take care\nof the use cases that matter the most with a simple grammar? Perhaps\n\"not_current\" is not the best match as a keyword, we could just use\n\"collation\" and handle that as a boolean. As long as we don't need\nnew operators in the grammar rules..\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 16:45:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 3:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Dec 03, 2020 at 05:31:43PM +0800, Julien Rouhaud wrote:\n> > Now that we have the infrastructure to track indexes that might be corrupted\n> > due to changes in collation libraries, I think it would be a good idea to offer\n> > an easy way for users to reindex all indexes that might be corrupted.\n>\n> Yes. It would be a good thing.\n>\n> > The filter is also implemented so that you could cumulate multiple filters, so\n> > it could be easy to add more filtering, for instance:\n> >\n> > REINDEX (COLLATION 'libc', COLLATION 'not_current') DATABASE mydb;\n> >\n> > to only rebuild indexes depending on outdated libc collations, or\n> >\n> > REINDEX (COLLATION 'libc', VERSION 'X.Y') DATABASE mydb;\n> >\n> > to only rebuild indexes depending on a specific version of libc.\n>\n> Deciding on the grammar to use depends on the use cases we would like\n> to satisfy. From what I heard on this topic, the goal is to reduce\n> the amount of time necessary to reindex a system so as REINDEX only\n> works on indexes whose dependent collation versions are not known or\n> works on indexes in need of a collation refresh (like a reindexdb\n> --all --collation -j $jobs). What would be the benefit in having more\n> complexity with library-dependent settings while we could take care\n> of the use cases that matter the most with a simple grammar? Perhaps\n> \"not_current\" is not the best match as a keyword, we could just use\n> \"collation\" and handle that as a boolean. As long as we don't need\n> new operators in the grammar rules..\n\nI'm not sure what the usual DBA pattern is here. If the reindexing\nruntime is really critical, I'm assuming that at least some people\nwill dig into library details to see what are the collations that\nactually broke in the last upgrade and will want to reindex only\nthose, and force the version for the rest of the indexes. And\nobviously, they probably won't wait to have multiple collation\nversion dependencies before taking care of that. In that case the\nfilters that would matter would be one to only keep indexes with an\noutdated collation version, and an additional one for a specific\ncollation name. Or we could have the COLLATION keyword without\nadditional argument mean all outdated collations, and COLLATION\n'collation_name' to specify a specific one. This is maybe a bit ugly,\nand would probably require a different approach for reindexdb.\n\n\n",
"msg_date": "Tue, 15 Dec 2020 19:21:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 12:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Dec 14, 2020 at 3:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Dec 03, 2020 at 05:31:43PM +0800, Julien Rouhaud wrote:\n> > > Now that we have the infrastructure to track indexes that might be corrupted\n> > > due to changes in collation libraries, I think it would be a good idea to offer\n> > > an easy way for users to reindex all indexes that might be corrupted.\n> >\n> > Yes. It would be a good thing.\n> >\n> > > The filter is also implemented so that you could cumulate multiple filters, so\n> > > it could be easy to add more filtering, for instance:\n> > >\n> > > REINDEX (COLLATION 'libc', COLLATION 'not_current') DATABASE mydb;\n> > >\n> > > to only rebuild indexes depending on outdated libc collations, or\n> > >\n> > > REINDEX (COLLATION 'libc', VERSION 'X.Y') DATABASE mydb;\n> > >\n> > > to only rebuild indexes depending on a specific version of libc.\n> >\n> > Deciding on the grammar to use depends on the use cases we would like\n> > to satisfy. From what I heard on this topic, the goal is to reduce\n> > the amount of time necessary to reindex a system so as REINDEX only\n> > works on indexes whose dependent collation versions are not known or\n> > works on indexes in need of a collation refresh (like a reindexdb\n> > --all --collation -j $jobs). What would be the benefit in having more\n> > complexity with library-dependent settings while we could take care\n> > of the use cases that matter the most with a simple grammar? Perhaps\n> > \"not_current\" is not the best match as a keyword, we could just use\n> > \"collation\" and handle that as a boolean. As long as we don't need\n> > new operators in the grammar rules..\n>\n> I'm not sure what the DBA usual pattern here. 
If the reindexing\n> runtime is really critical, I'm assuming that at least some people\n> will dig into library details to see what are the collations that\n> actually broke in the last upgrade and will want to reindex only\n> those, and force the version for the rest of the indexes. And\n> obviously, they probably won't wait to have multiple collation\n> versions dependencies before taking care of that. In that case the\n> filters that would matters would be one to only keep indexes with an\n> outdated collation version, and an additional one for a specific\n> collation name. Or we could have the COLLATION keyword without\n> additional argument mean all outdated collations, and COLLATION\n> 'collation_name' to specify a specific one. This is maybe a bit ugly,\n> and would probably require a different approach for reindexdb.\n\nIs this really a common enough operation that we need it in the main grammar?\n\nHaving the functionality, definitely, but what if it was \"just\" a\nfunction instead? So you'd do something like:\nSELECT 'reindex index ' || i FROM pg_blah(some, arguments, here)\n\\gexec\n\nOr even a function that returns the REINDEX commands directly (taking\na parameter to turn on/off concurrency for example).\n\nThat also seems like it would be easier to make flexible, and just as\neasy to plug into reindexdb?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 15 Dec 2020 18:34:16 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 06:34:16PM +0100, Magnus Hagander wrote:\n> Is this really a common enough operation that we need it in the main grammar?\n> \n> Having the functionality, definitely, but what if it was \"just\" a\n> function instead? So you'd do something like:\n> SELECT 'reindex index ' || i FROM pg_blah(some, arguments, here)\n> \\gexec\n> \n> Or even a function that returns the REINDEX commands directly (taking\n> a parameter to turn on/off concurrency for example).\n> \n> That also seems like it would be easier to make flexible, and just as\n> easy to plug into reindexdb?\n\nHaving control in the grammar to choose which index to reindex for a\ntable is very useful when it comes to parallel reindexing, because\nit is a no-brainer in terms of knowing which index to distribute to one\njob or another. In short, with this grammar you can just issue a set\nof REINDEX TABLE commands that we know will not conflict with each\nother. You cannot get that level of control with REINDEX INDEX as it\nmay be possible that two or more commands conflict if they work on an\nindex of the same relation, because it is required to also take a lock on\nthe parent table. Of course, we could decide to implement a\nredistribution logic in all frontend tools that need such things, like\nreindexdb, but that's not something I think we should let the client\ndecide on. Backend-side filtering is IMO much simpler, less code,\nand more elegant.\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 09:27:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 8:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 15, 2020 at 06:34:16PM +0100, Magnus Hagander wrote:\n> > Is this really a common enough operation that we need it in the main grammar?\n> >\n> > Having the functionality, definitely, but what if it was \"just\" a\n> > function instead? So you'd do something like:\n> > SELECT 'reindex index ' || i FROM pg_blah(some, arguments, here)\n> > \\gexec\n> >\n> > Or even a function that returns the REINDEX commands directly (taking\n> > a parameter to turn on/off concurrency for example).\n> >\n> > That also seems like it would be easier to make flexible, and just as\n> > easy to plug into reindexdb?\n>\n> Having control in the grammar to choose which index to reindex for a\n> table is very useful when it comes to parallel reindexing, because\n> it is no-brainer in terms of knowing which index to distribute to one\n> job or another. In short, with this grammar you can just issue a set\n> of REINDEX TABLE commands that we know will not conflict with each\n> other. You cannot get that level of control with REINDEX INDEX as it\n> may be possible that more or more commands conflict if they work on an\n> index of the same relation because it is required to take lock also on\n> the parent table. Of course, we could decide to implement a\n> redistribution logic in all frontend tools that need such things, like\n> reindexdb, but that's not something I think we should let the client\n> decide of. A backend-side filtering is IMO much simpler, less code,\n> and more elegant.\n\nMaybe additional filtering capabilities is not something that will be\nrequired frequently, but I'm pretty sure that reindexing only indexes\nthat might be corrupt is something that will be required often.. 
So I\nagree: having all that logic in the backend makes everything easier\nfor users, who keep the choice of which tool to use to issue the query\nwhile still having all features available.\n\nThere was a conflict with a3dc926009be8 (Refactor option handling of\nCLUSTER, REINDEX and VACUUM), so a rebased version is attached. No other\nchanges included yet.",
"msg_date": "Thu, 21 Jan 2021 11:12:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 11:12:56AM +0800, Julien Rouhaud wrote:\n> \n> There was a conflict with a3dc926009be8 (Refactor option handling of\n> CLUSTER, REINDEX and VACUUM), so rebased version attached. No other\n> changes included yet.\n\nNew conflict, v3 attached.",
"msg_date": "Sun, 7 Feb 2021 15:20:17 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hi,\nFor index_has_deprecated_collation(),\n\n+ object.objectSubId = 0;\n\nThe objectSubId field is not accessed\nby do_check_index_has_deprecated_collation(). Does it need to be assigned ?\n\nFor RelationGetIndexListFiltered(), it seems when (options &\nREINDEXOPT_COLL_NOT_CURRENT) == 0, the full_list would be returned.\nThis can be checked prior to entering the foreach loop.\n\nCheers\n\nOn Sat, Feb 6, 2021 at 11:20 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Jan 21, 2021 at 11:12:56AM +0800, Julien Rouhaud wrote:\n> >\n> > There was a conflict with a3dc926009be8 (Refactor option handling of\n> > CLUSTER, REINDEX and VACUUM), so rebased version attached. No other\n> > changes included yet.\n>\n> New conflict, v3 attached.\n>",
"msg_date": "Sun, 7 Feb 2021 08:16:44 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hi,\n\nThanks for the review!\n\nOn Mon, Feb 8, 2021 at 12:14 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> For index_has_deprecated_collation(),\n>\n> + object.objectSubId = 0;\n>\n> The objectSubId field is not accessed by do_check_index_has_deprecated_collation(). Does it need to be assigned ?\n\nIndeed it's not strictly necessary I think, but it makes things\ncleaner and future proof, and that's how things are already done\nnearby. So I think it's better to keep it this way.\n\n> For RelationGetIndexListFiltered(), it seems when (options & REINDEXOPT_COLL_NOT_CURRENT) == 0, the full_list would be returned.\n> This can be checked prior to entering the foreach loop.\n\nThat's already the case with this test:\n\n /* Fast exit if no filtering was asked, or if the list is empty. */\n if (!reindexHasFilter(options) || full_list == NIL)\n return full_list;\n\nknowing that\n\n#define reindexHasFilter(x) ((x & REINDEXOPT_COLL_NOT_CURRENT) != 0)\n\nThe code as-is is written to be extensible with possibly other filters\n(e.g. specific library or specific version). Feedback so far seems to\nindicate that it may be overkill and only filtering indexes with\ndeprecated collation is enough. I'll simplify this code in a future\nversion, getting rid of reindexHasFilter, unless someone thinks more\nfilters are a good idea.\n\nFor now I'm attaching a rebased version; there was a conflict with the\nrecent patch to add the missing_ok parameter to\nget_collation_version_for_oid().",
"msg_date": "Wed, 24 Feb 2021 20:21:41 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "From what I heard on this topic, the goal is to reduce\nthe amount of time necessary to reindex a system so as REINDEX only\nworks on indexes whose dependent collation versions are not known or\nworks on indexes in need of a collation refresh (like a reindexdb\n--all --collation -j $jobs). \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 24 Feb 2021 07:33:53 -0700 (MST)",
"msg_from": "mariakatosvich <loveneet.singh@redblink.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 25, 2021 at 12:11 AM mariakatosvich\n<loveneet.singh@redblink.com> wrote:\n>\n> From what I heard on this topic, the goal is to reduce\n> the amount of time necessary to reindex a system so as REINDEX only\n> works on indexes whose dependent collation versions are not known or\n> works on indexes in need of a collation refresh (like a reindexdb\n> --all --collation -j $jobs).\n\nThat's indeed the goal. The current patch only adds infrastructure\nfor the REINDEX command, which will make it easy to add the option for\nreindexdb. I'll add the reindexdb part too in the next version of the\npatch.\n\n\n",
"msg_date": "Thu, 25 Feb 2021 00:58:36 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hello,\n\nThe PostGIS project needed this from time to time. Would be great if\nreindex by opclass can be made possible.\n\nWe changed the semantics of btree at least twice (in 2.4 and 3.0), fixed\nsome ND mixed-dimension indexes semantics in 3.0, fixed hash index on 32\nbit arch in 3.0.\n\nOn Thu, Dec 3, 2020 at 12:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hello,\n>\n> Now that we have the infrastructure to track indexes that might be\n> corrupted\n> due to changes in collation libraries, I think it would be a good idea to\n> offer\n> an easy way for users to reindex all indexes that might be corrupted.\n>\n> I'm attaching a POC patch as a discussion basis. It implements a new\n> \"COLLATION\" option to reindex, with \"not_current\" being the only accepted\n> value. Note that I didn't spent too much efforts on the grammar part yet.\n>\n> So for instance you can do:\n>\n> REINDEX (COLLATION 'not_current') DATABASE mydb;\n>\n> The filter is also implemented so that you could cumulate multiple\n> filters, so\n> it could be easy to add more filtering, for instance:\n>\n> REINDEX (COLLATION 'libc', COLLATION 'not_current') DATABASE mydb;\n>\n> to only rebuild indexes depending on outdated libc collations, or\n>\n> REINDEX (COLLATION 'libc', VERSION 'X.Y') DATABASE mydb;\n>\n> to only rebuild indexes depending on a specific version of libc.\n>\n\n\n-- \nDarafei \"Komяpa\" Praliaskouski\nOSM BY Team - http://openstreetmap.by/",
"msg_date": "Wed, 24 Feb 2021 21:34:59 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Thu, Feb 25, 2021 at 1:22 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> #define reindexHasFilter(x) ((x & REINDEXOPT_COLL_NOT_CURRENT) != 0)\n\nIt's better to use \"(x) & ...\" in macros to avoid weird operator\nprecedence problems in future code.\n\nIt seems like there are several different names for similar things in\nthis patch: \"outdated\", \"not current\", \"deprecated\". Can we settle on\none, maybe \"outdated\"?\n\n> The code as-is written to be extensible with possibly other filters\n> (e.g. specific library or specific version). Feedback so far seems to\n> indicate that it may be overkill and only filtering indexes with\n> deprecated collation is enough. I'll simplify this code in a future\n> version, getting rid of reindexHasFilter, unless someone thinks more\n> filter is a good idea.\n\nHmm, yeah, I think it should probably be very general. Suppose someone\ninvents versioned dependencies for (say) functions or full text\nstemmers etc, then do we want to add more syntax here to rebuild\nindexes (assuming we don't use INVALID for such cases, IDK)? I don't\nthink we'd want to have more syntax just to be able to say \"hey,\nplease fix my collation problems but not my stemmer problems\". What\nabout just REINDEX (OUTDATED)? It's hard to find a single word that\nmeans \"depends on an object whose version has changed\".\n\n\n",
"msg_date": "Thu, 25 Feb 2021 07:36:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 24, 2021 at 09:34:59PM +0300, Darafei \"Komяpa\" Praliaskouski wrote:\n> Hello,\n> \n> The PostGIS project needed this from time to time. Would be great if\n> reindex by opclass can be made possible.\n> \n> We changed the semantics of btree at least twice (in 2.4 and 3.0), fixed\n> some ND mixed-dimension indexes semantics in 3.0, fixed hash index on 32\n> bit arch in 3.0.\n\nOh, I wasn't aware of that. Thanks for bringing this up!\n\nLooking at the last occurence (c00f9525a3c6c) I see that the NEWS item does\nmention the need to do a REINDEX. As far as I understand there wouldn't be any\nhard error if one doesn't do to a REINDEX, and you'd end up with some kind\nof \"logical\" corruption as the new lib version won't have the same semantics\ndepending on the number of dimensions, so more or less the same kind of\nproblems that would happen in case of breaking update of a collation library.\n\nIt seems to me that it's a legitimate use case, especially since GiST doesn't\nhave a metapage to store an index version. But I'm wondering if the right\nanswer is to allow REINDEX / reindexdb to look for specific opclass or to add\nan API to let third party code register a custom dependency. In this case\nit would be some kind of gist ABI versioning. We could then have a single\nREINDEX option, like REINDEX (OUTDATED) as Thomas suggested in\nhttps://www.postgresql.org/message-id/CA+hUKG+WWioP6xV5Xf1pPhiWNGD1B7hdBBCdQoKfp=zymJajBQ@mail.gmail.com\nfor both cases.\n\n\n",
"msg_date": "Fri, 26 Feb 2021 15:45:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Thu, Feb 25, 2021 at 07:36:02AM +1300, Thomas Munro wrote:\n> On Thu, Feb 25, 2021 at 1:22 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > #define reindexHasFilter(x) ((x & REINDEXOPT_COLL_NOT_CURRENT) != 0)\n> \n> It's better to use \"(x) & ...\" in macros to avoid weird operator\n> precedence problems in future code.\n\nAh indeed, thanks! I usually always protect the arguments wth parenthesis but\nI somehow missed this one. I'll send a new version of the patch shortly with\nthe rest of the problems you mentioned.\n\n> It seems like there are several different names for similar things in\n> this patch: \"outdated\", \"not current\", \"deprecated\". Can we settle on\n> one, maybe \"outdated\"?\n\nOops, I apparently missed a lot of places during the various rewrite of the\npatch. +1 for oudated.\n\n> \n> > The code as-is written to be extensible with possibly other filters\n> > (e.g. specific library or specific version). Feedback so far seems to\n> > indicate that it may be overkill and only filtering indexes with\n> > deprecated collation is enough. I'll simplify this code in a future\n> > version, getting rid of reindexHasFilter, unless someone thinks more\n> > filter is a good idea.\n> \n> Hmm, yeah, I think it should probably be very general. Suppose someone\n> invents versioned dependencies for (say) functions or full text\n> stemmers etc, then do we want to add more syntax here to rebuild\n> indexes (assuming we don't use INVALID for such cases, IDK)? I don't\n> think we'd want to have more syntax just to be able to say \"hey,\n> please fix my collation problems but not my stemmer problems\". What\n> about just REINDEX (OUTDATED)? It's hard to find a single word that\n> means \"depends on an object whose version has changed\".\n\nThat quite make sense. 
I agree that it would make the solution simpler and\nbetter.\n\nLooking at the other use case for PostGIS mentioned by Darafei, it seems that\nit would help to make the concept of index dependency extensible for third-party\ncode too (see\nhttps://www.postgresql.org/message-id/20210226074531.dhkfneao2czzqk6n%40nol).\nWould that make sense?\n\n\n",
"msg_date": "Fri, 26 Feb 2021 15:52:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 1:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 15, 2020 at 06:34:16PM +0100, Magnus Hagander wrote:\n> > Is this really a common enough operation that we need it in the main grammar?\n> >\n> > Having the functionality, definitely, but what if it was \"just\" a\n> > function instead? So you'd do something like:\n> > SELECT 'reindex index ' || i FROM pg_blah(some, arguments, here)\n> > \\gexec\n> >\n> > Or even a function that returns the REINDEX commands directly (taking\n> > a parameter to turn on/off concurrency for example).\n> >\n> > That also seems like it would be easier to make flexible, and just as\n> > easy to plug into reindexdb?\n>\n> Having control in the grammar to choose which index to reindex for a\n> table is very useful when it comes to parallel reindexing, because\n> it is no-brainer in terms of knowing which index to distribute to one\n> job or another. In short, with this grammar you can just issue a set\n> of REINDEX TABLE commands that we know will not conflict with each\n> other. You cannot get that level of control with REINDEX INDEX as it\n\n(oops, seems I forgot to reply to this one, sorry!)\n\nUh, isn't it almost exactly the opposite?\n\nIf you do it in the backend grammar you *cannot* parallelize it\nbetween indexes, because you can only run one at a time.\n\nWhereas if you do it in the frontend, you can. Either with something\nlike reindexdb that could do it automatically, or with psql as a\ncopy/paste job?\n\n\n> may be possible that more or more commands conflict if they work on an\n> index of the same relation because it is required to take lock also on\n> the parent table. Of course, we could decide to implement a\n> redistribution logic in all frontend tools that need such things, like\n> reindexdb, but that's not something I think we should let the client\n> decide of. 
A backend-side filtering is IMO much simpler, less code,\n> and more elegant.\n\nIt's simpler in the simple case, I agree with that. But ISTM it's also\na lot less flexible for anything except the simplest case...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 26 Feb 2021 10:47:38 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 4:13 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Dec 16, 2020 at 8:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Dec 15, 2020 at 06:34:16PM +0100, Magnus Hagander wrote:\n> > > Is this really a common enough operation that we need it in the main grammar?\n> > >\n> > > Having the functionality, definitely, but what if it was \"just\" a\n> > > function instead? So you'd do something like:\n> > > SELECT 'reindex index ' || i FROM pg_blah(some, arguments, here)\n> > > \\gexec\n> > >\n> > > Or even a function that returns the REINDEX commands directly (taking\n> > > a parameter to turn on/off concurrency for example).\n> > >\n> > > That also seems like it would be easier to make flexible, and just as\n> > > easy to plug into reindexdb?\n> >\n> > Having control in the grammar to choose which index to reindex for a\n> > table is very useful when it comes to parallel reindexing, because\n> > it is no-brainer in terms of knowing which index to distribute to one\n> > job or another. In short, with this grammar you can just issue a set\n> > of REINDEX TABLE commands that we know will not conflict with each\n> > other. You cannot get that level of control with REINDEX INDEX as it\n> > may be possible that more or more commands conflict if they work on an\n> > index of the same relation because it is required to take lock also on\n> > the parent table. Of course, we could decide to implement a\n> > redistribution logic in all frontend tools that need such things, like\n> > reindexdb, but that's not something I think we should let the client\n> > decide of. A backend-side filtering is IMO much simpler, less code,\n> > and more elegant.\n>\n> Maybe additional filtering capabilities is not something that will be\n> required frequently, but I'm pretty sure that reindexing only indexes\n> that might be corrupt is something that will be required often.. 
So I\n> agree, having all that logic in the backend makes everything easier\n> for users, having the choice of the tools they want to issue the query\n> while still having all features available.\n\nI agree with that scenario -- in that the most common case will be\nexactly that of reindexing only indexes that might be corrupt.\n\nI don't agree with the conclusion though.\n\nThe most common case of that will be in the case of an upgrade. In\nthat case I want to reindex all of those indexes as quickly as\npossible. So I'll want to parallelize it across multiple sessions\n(like reindexdb -j 4 or whatever). But doesn't putting the filter in\nthe grammar prevent me from doing exactly that? Each of those 4 (or\nwhatever) sessions would have to guess which would go where and then\nspeculatively run the command on that, instead of being able to\ndirectly distribute the workload?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 26 Feb 2021 10:50:25 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 5:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> I don't agree with the conclusion though.\n>\n> The most common case of that will be in the case of an upgrade. In\n> that case I want to reindex all of those indexes as quickly as\n> possible. So I'll want to parallelize it across multiple sessions\n> (like reindexdb -j 4 or whatever). But doesn't putting the filter in\n> the grammar prevent me from doing exactly that? Each of those 4 (or\n> whatever) sessions would have to guess which would go where and then\n> speculatively run the command on that, instead of being able to\n> directly distribute the worload?\n\nIt means that you'll have to distribute the work on a per-table basis\nrather than a per-index basis. The time spent to find out that a\ntable doesn't have any impacted index should be negligible compared to\nthe cost of running a reindex. This obviously won't help that much if\nyou have a lot of table but only one being gigantic.\n\nBut even if we put the logic in the client, this still won't help as\nreindexdb doesn't support multiple job with an index list:\n\n * Index-level REINDEX is not supported with multiple jobs as we\n * cannot control the concurrent processing of multiple indexes\n * depending on the same relation.\n */\n if (concurrentCons > 1 && indexes.head != NULL)\n {\n pg_log_error(\"cannot use multiple jobs to reindex indexes\");\n exit(1);\n }\n\n\n",
"msg_date": "Fri, 26 Feb 2021 18:07:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 11:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 5:50 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > I don't agree with the conclusion though.\n> >\n> > The most common case of that will be in the case of an upgrade. In\n> > that case I want to reindex all of those indexes as quickly as\n> > possible. So I'll want to parallelize it across multiple sessions\n> > (like reindexdb -j 4 or whatever). But doesn't putting the filter in\n> > the grammar prevent me from doing exactly that? Each of those 4 (or\n> > whatever) sessions would have to guess which would go where and then\n> > speculatively run the command on that, instead of being able to\n> > directly distribute the worload?\n>\n> It means that you'll have to distribute the work on a per-table basis\n> rather than a per-index basis. The time spent to find out that a\n> table doesn't have any impacted index should be negligible compared to\n> the cost of running a reindex. This obviously won't help that much if\n> you have a lot of table but only one being gigantic.\n\nYeah -- or at least a couple of large and many small, which I find to\nbe a very common scenario. Or the case of some tables having many\naffected indexes and some having few.\n\nYou'd basically want to order the operation by table on something like\n\"total size of the affected indexes on table x\" -- which may very well\nput a smaller table with many indexes earlier in the queue. 
But you\ncan't do that without having access to the filter....\n\n\n> But even if we put the logic in the client, this still won't help as\n> reindexdb doesn't support multiple jobs with an index list:\n>\n> * Index-level REINDEX is not supported with multiple jobs as we\n> * cannot control the concurrent processing of multiple indexes\n> * depending on the same relation.\n> */\n> if (concurrentCons > 1 && indexes.head != NULL)\n> {\n> pg_log_error(\"cannot use multiple jobs to reindex indexes\");\n> exit(1);\n> }\n\nThat sounds like it would be a fixable problem though, in principle.\nIt could/should probably still limit all indexes on the same table to\nbe processed in the same connection for the locking reasons of course,\nbut doing an order by the total size of the indexes like above, and\nensuring that they are grouped that way, doesn't sound *that* hard. I\ndoubt it's that important in the current use case of manually listing\nthe indexes, but it would be useful for something like this.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 26 Feb 2021 11:17:26 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 11:17:26AM +0100, Magnus Hagander wrote:\n> On Fri, Feb 26, 2021 at 11:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > It means that you'll have to distribute the work on a per-table basis\n> > rather than a per-index basis. The time spent to find out that a\n> > table doesn't have any impacted index should be negligible compared to\n> > the cost of running a reindex. This obviously won't help that much if\n> > you have a lot of table but only one being gigantic.\n> \n> Yeah -- or at least a couple of large and many small, which I find to\n> be a very common scenario. Or the case of some tables having many\n> affected indexes and some having few.\n> \n> You'd basically want to order the operation by table on something like\n> \"total size of the affected indexes on table x\" -- which may very well\n> put a smaller table with many indexes earlier in the queue. But you\n> can't do that without having access to the filter....\n\nSo, long running reindex due to some gigantic and/or numerous indexes on a\nsingle (or few) table is not something that we can solve, but inefficient\nreindex due to wrong table size / to-be-reindexed-indexes-size correlation can\nbe addressed.\n\nI would still prefer to go to backend implementation, so that all client tools\ncan benefit from it by default. 
We could simply export the current\nindex_has_outdated_collation(oid) function in SQL, and tweak reindexdb to order\ntables by the cumulated size of such indexes as you mentioned below, would\nthat work for you?\n\nAlso, given Thomas' proposal in a nearby email this function would be renamed to\nindex_has_outdated_dependencies(oid) or something like that.\n\n> > But even if we put the logic in the client, this still won't help as\n> > reindexdb doesn't support multiple jobs with an index list:\n> >\n> > * Index-level REINDEX is not supported with multiple jobs as we\n> > * cannot control the concurrent processing of multiple indexes\n> > * depending on the same relation.\n> > */\n> > if (concurrentCons > 1 && indexes.head != NULL)\n> > {\n> > pg_log_error(\"cannot use multiple jobs to reindex indexes\");\n> > exit(1);\n> > }\n> \n> That sounds like it would be a fixable problem though, in principle.\n> It could/should probably still limit all indexes on the same table to\n> be processed in the same connection for the locking reasons of course,\n> but doing an order by the total size of the indexes like above, and\n> ensuring that they are grouped that way, doesn't sound *that* hard. I\n> doubt it's that important in the current use case of manually listing\n> the indexes, but it would be useful for something like this.\n\nYeah, I don't think that in case of outdated dependencies the --index will be\nuseful, it's likely that there will be too many indexes to process. We can\nstill try to improve reindexdb to be able to process index lists with parallel\nconnections, but I would rather keep that separated from this patch.\n\n\n",
"msg_date": "Tue, 2 Mar 2021 12:01:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
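The ordering Julien describes could look roughly like this on the client side. A sketch only: pg_index_has_outdated_dependency() is the function proposed in this thread, not one that exists in released servers, and the join mirrors the " ORDER BY sum(ci.relpages)" query the review later quotes:

```sql
-- Sketch: list tables by the cumulated size of their outdated indexes,
-- so a client like reindexdb can dispatch the biggest jobs first.
-- pg_index_has_outdated_dependency() is hypothetical (from this patch).
SELECT ct.relname
FROM pg_catalog.pg_class ct
JOIN pg_catalog.pg_index i ON i.indrelid = ct.oid
JOIN pg_catalog.pg_class ci ON ci.oid = i.indexrelid
WHERE pg_index_has_outdated_dependency(ci.oid)
GROUP BY ct.oid, ct.relname
ORDER BY sum(ci.relpages) DESC;
```

Grouping by table rather than by index keeps all indexes of one relation in a single connection, which sidesteps the locking restriction reindexdb enforces for index lists.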
{
"msg_contents": "On Tue, Mar 02, 2021 at 12:01:55PM +0800, Julien Rouhaud wrote:\n> \n> So, long running reindex due to some gigantic and/or numerous indexes on a\n> single (or few) table is not something that we can solve, but inefficient\n> reindex due to wrong table size / to-be-reindexed-indexes-size correlation can\n> be addressed.\n> \n> I would still prefer to go to backend implementation, so that all client tools\n> can benefit from it by default. We could simply export the current\n> index_has_oudated_collation(oid) function in sql, and tweak pg_dump to order\n> tables by the cumulated size of such indexes as you mentioned below, would\n> that work for you?\n> \n> Also, given Thomas proposal in a nearby email this function would be renamed to\n> index_has_oudated_dependencies(oid) or something like that.\n\nPlease find attached v5 which address all previous comments:\n\n- consistently use \"outdated\"\n- use REINDEX (OUTDATED) grammar (with a new unreserved OUTDATED keyword)\n- new --outdated option to reindexdb\n- expose a new \"pg_index_has_outdated_dependency(regclass)\" SQL function\n- use that function in reindexdb --outdated to sort tables by total\n indexes-to-be-processed size",
"msg_date": "Wed, 3 Mar 2021 13:56:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 01:56:59PM +0800, Julien Rouhaud wrote:\n> \n> Please find attached v5 which address all previous comments:\n> \n> - consistently use \"outdated\"\n> - use REINDEX (OUTDATED) grammar (with a new unreserved OUTDATED keyword)\n> - new --outdated option to reindexdb\n> - expose a new \"pg_index_has_outdated_dependency(regclass)\" SQL function\n> - use that function in reindexdb --outdated to sort tables by total\n> indexes-to-be-processed size\n\nv6 attached, rebase only due to conflict with recent commit.",
"msg_date": "Sun, 14 Mar 2021 16:10:07 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 04:10:07PM +0800, Julien Rouhaud wrote:\n> v6 attached, rebase only due to conflict with recent commit.\n\nI have read through the patch.\n\n+ bool outdated_filter = false;\nWouldn't it be better to rename that \"outdated\" instead for\nconsistency with the other options?\n\nIn ReindexRelationConcurrently(), there is no filtering done for the\nindex themselves. The operation is only done on the list of indexes\nfetched from the parent relation. Why? This means that a REINDEX\n(OUTDATED) INDEX would actually rebuild an index even if this index\nhas no out-of-date collations, like a catalog. I think that's\nconfusing.\n\nSame comment for the non-concurrent case, as of the code paths of\nreindex_index().\n\nWould it be better to inform the user which indexes are getting\nskipped in the verbose output if REINDEXOPT_VERBOSE is set?\n\n+ <para>\n+ Check if the specified index has any outdated dependency. For now only\n+ dependency on collations are supported.\n+ </para></entry>\n[...]\n+ <term><literal>OUTDATED</literal></term>\n+ <listitem>\n+ <para>\n+ This option can be used to filter the list of indexes to rebuild and only\n+ process indexes that have outdated dependencies. Fow now, the only\n+ handle dependency is for the collation provider version.\n+ </para>\nDo we really need to be this specific in this part of the\ndocumentation with collations? The last sentence of this paragraph\nsounds weird. Don't you mean instead to write \"the only dependency\nhandled currently is the collation provider version\"?\n\n+\\set VERBOSITY terse \\\\ -- suppress machine-dependent details\n+-- no suitable index should be found\n+REINDEX (OUTDATED) TABLE reindex_coll;\nWhat are those details? 
And wouldn't it be more stable to just check\nafter the relfilenode of the indexes instead?\n\n\" ORDER BY sum(ci.relpages)\"\nSchema qualification here, twice.\n\n+ printf(_(\" --outdated only process indexes\nhaving outdated depencies\\n\"));\ns/depencies/dependencies/.\n\n+ rel = try_relation_open(indexOid, AccessShareLock);\n+\n+ if (rel == NULL)\n+ PG_RETURN_NULL();\nLet's introduce a try_index_open() here.\n\nWhat's the point in having both index_has_outdated_collation() and\nindex_has_outdated_collation()?\n\nIt seems to me that 0001 should be split into two patches:\n- One for the backend OUTDATED option.\n- One for pg_index_has_outdated_dependency(), which only makes sense\nin-core once reindexdb is introduced.\n--\nMichael",
"msg_date": "Sun, 14 Mar 2021 20:54:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 08:54:11PM +0900, Michael Paquier wrote:\n> On Sun, Mar 14, 2021 at 04:10:07PM +0800, Julien Rouhaud wrote:\n> \n> + bool outdated_filter = false;\n> Wouldn't it be better to rename that \"outdated\" instead for\n> consistency with the other options?\n\nI agree.\n\n> In ReindexRelationConcurrently(), there is no filtering done for the\n> index themselves. The operation is only done on the list of indexes\n> fetched from the parent relation. Why? This means that a REINDEX\n> (OUTDATED) INDEX would actually rebuild an index even if this index\n> has no out-of-date collations, like a catalog. I think that's\n> confusing.\n> \n> Same comment for the non-concurrent case, as of the code paths of\n> reindex_index().\n\nYes, I'm not sure what we should do in that case. I thought I put a comment\nabout that but it apparently disappeared during some rewrite.\n\nIs there really a use case for reindexing a specific index and at the same time\nasking for possibly ignoring it? I think we should just forbid REINDEX\n(OUTDATED) INDEX. What do you think?\n\n> Would it be better to inform the user which indexes are getting\n> skipped in the verbose output if REINDEXOPT_VERBOSE is set?\n\nI was thinking that users would be more interested in the list of indexes being\nprocessed rather than the full list of indexes and a mention of whether it was\nprocessed or not. I can change that if you prefer.\n\n> + <para>\n> + Check if the specified index has any outdated dependency. For now only\n> + dependency on collations are supported.\n> + </para></entry>\n> [...]\n> + <term><literal>OUTDATED</literal></term>\n> + <listitem>\n> + <para>\n> + This option can be used to filter the list of indexes to rebuild and only\n> + process indexes that have outdated dependencies. 
Fow now, the only\n> + handle dependency is for the collation provider version.\n> + </para>\n> Do we really need to be this specific in this part of the\n> documentation with collations?\n\nI think it's important to document what this option really does, and I don't\nsee a better place to document it.\n\n> The last sentence of this paragraph\n> sounds weird. Don't you mean instead to write \"the only dependency\n> handled currently is the collation provider version\"?\n\nIndeed, I'll fix!\n\n> +\\set VERBOSITY terse \\\\ -- suppress machine-dependent details\n> +-- no suitable index should be found\n> +REINDEX (OUTDATED) TABLE reindex_coll;\n> What are those details?\n\nThat's just the same comment as the previous occurrence in the file, I kept it for\nconsistency.\n\n> And wouldn't it be more stable to just check\n> after the relfilenode of the indexes instead?\n\nAgreed, I'll add additional tests for that.\n\n> \" ORDER BY sum(ci.relpages)\"\n> Schema qualification here, twice.\n\nWell, this isn't actually mandatory, per comment at the top:\n\n\t/*\n\t * The queries here are using a safe search_path, so there's no need to\n\t * fully qualify everything.\n\t */\n\nBut I think it's a better style to fully qualify objects, so I'll fix.\n\n> + rel = try_relation_open(indexOid, AccessShareLock);\n> +\n> + if (rel == NULL)\n> + PG_RETURN_NULL();\n> Let's introduce a try_index_open() here.\n\nGood idea!\n\n> What's the point in having both index_has_outdated_collation() and\n> index_has_outdated_collation()?\n\nDid you mean index_has_outdated_collation() and\nindex_has_outdated_dependency()? It's just to keep things separated, mostly\nfor future improvements on that infrastructure. 
I can get rid of that function\nand put back the code in index_has_outdated_dependency() if that's overkill.\n\n> It seems to me that 0001 should be split into two patches:\n> - One for the backend OUTDATED option.\n> - One for pg_index_has_outdated_dependency(), which only makes sense\n> in-core once reindexdb is introduced.\n\nI thought it would be better to add the backend part in a single commit, and\nthen build the client part on top of that in a different commit. I can\nrearrange things if you want, but in that case should\nindex_has_outdated_dependency() be in a different patch as you mention or\nsimply merged with 0002 (the --outdated option for reindexdb)?\n\n\n",
"msg_date": "Sun, 14 Mar 2021 22:57:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 10:57:37PM +0800, Julien Rouhaud wrote:\n> On Sun, Mar 14, 2021 at 08:54:11PM +0900, Michael Paquier wrote:\n>> In ReindexRelationConcurrently(), there is no filtering done for the\n>> index themselves. The operation is only done on the list of indexes\n>> fetched from the parent relation. Why? This means that a REINDEX\n>> (OUTDATED) INDEX would actually rebuild an index even if this index\n>> has no out-of-date collations, like a catalog. I think that's\n>> confusing.\n>> \n>> Same comment for the non-concurrent case, as of the code paths of\n>> reindex_index().\n> \n> Yes, I'm not sure what we should do in that case. I thought I put a comment\n> about that but it apparently disappeared during some rewrite.\n> \n> Is there really a use case for reindexing a specific index and at the same time\n> asking for possibly ignoring it? I think we should just forbid REINDEX\n> (OUTDATED) INDEX. What do you think?\n\nI think that there would be cases to be able to handle that, say if a\nuser wants to works on a specific set of indexes one-by-one. There is\nalso the argument of inconsistency with the other commands.\n\n> I was thinking that users would be more interested in the list of indexes being\n> processed rather than the full list of indexes and a mention of whether it was\n> processed or not. I can change that if you prefer.\n\nHow invasive do you think it would be to add a note in the verbose\noutput when indexes are skipped?\n\n> Did you mean index_has_outdated_collation() and\n> index_has_outdated_dependency()? It's just to keep things separated, mostly\n> for future improvements on that infrastructure. I can get rid of that function\n> and put back the code in index_has_outadted_dependency() if that's overkill.\n\nYes, sorry. I meant index_has_outdated_collation() and\nindex_has_outdated_dependency().\n--\nMichael",
"msg_date": "Mon, 15 Mar 2021 08:56:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 14, 2021, at 12:10 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> v6 attached, rebase only due to conflict with recent commit.\n\nHi Julien,\n\nI'm coming to this patch quite late, perhaps too late to change design decision in time for version 14.\n\n\n+\tif (outdated && PQserverVersion(conn) < 140000)\n+\t{\n+\t\tPQfinish(conn);\n+\t\tpg_log_error(\"cannot use the \\\"%s\\\" option on server versions older than PostgreSQL %s\",\n+\t\t\t\t\t \"outdated\", \"14\");\n+\t\texit(1);\n+\t}\n\nIf detection of outdated indexes were performed entirely in the frontend (reindexdb) rather than in the backend (reindex command), would reindexdb be able to connect to older servers? Looking quickly that the catalogs, it appears pg_index, pg_depend, pg_collation and a call to the SQL function pg_collation_actual_version() compared against pg_depend.refobjversion would be enough to construct a list of indexes in need of reindexing. Am I missing something here?\n\nI understand that wouldn't help somebody wanting to reindex from psql. Is that the whole reason you went a different direction with this feature?\n\n\n\n+\tprintf(_(\" --outdated only process indexes having outdated depencies\\n\")); \n\ntypo.\n\n\n\n+\tbool outdated;\t/* depends on at least on deprected collation? */\n\ntypo.\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 14 Mar 2021 17:01:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
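A client-side detection query along the lines Mark sketches might look like the following. It assumes the in-development pg_depend.refobjversion column from the collation-versioning work this patch builds on, plus the existing pg_collation_actual_version() function; it is a rough illustration, not the patch's actual query:

```sql
-- List indexes whose recorded collation version no longer matches the
-- version currently reported by the collation provider (sketch only).
SELECT DISTINCT d.objid::regclass AS index_name
FROM pg_depend d
JOIN pg_collation c ON c.oid = d.refobjid
WHERE d.classid = 'pg_class'::regclass
  AND d.refclassid = 'pg_collation'::regclass
  AND d.refobjversion IS NOT NULL
  AND d.refobjversion <> ''
  AND d.refobjversion <> pg_collation_actual_version(d.refobjid);
```

As the follow-up messages note, such a query only makes sense against servers that record refobjversion in the first place, which limits how much a frontend-only approach would buy for older releases.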
{
"msg_contents": "On Mon, Mar 15, 2021 at 08:56:00AM +0900, Michael Paquier wrote:\n> On Sun, Mar 14, 2021 at 10:57:37PM +0800, Julien Rouhaud wrote:\n> > \n> > Is there really a use case for reindexing a specific index and at the same time\n> > asking for possibly ignoring it? I think we should just forbid REINDEX\n> > (OUTDATED) INDEX. What do you think?\n> \n> I think that there would be cases to be able to handle that, say if a\n> user wants to works on a specific set of indexes one-by-one.\n\nIf a user want to work on a specific set of indexes one at a time, then the\nlist of indexes is probably already retrieved from some SQL query and there's\nalready all necessary infrastructure to properly filter the non oudated\nindexes.\n\n> There is\n> also the argument of inconsistency with the other commands.\n\nYes, but the other commands dynamically construct a list of indexes.\n\nThe only use case I see would be to process a partitioned index if some of the\nunderlying indexes have already been processed. IMO this is better addressed\nby REINDEX TABLE.\n\nAnyway I'll make REINDEX (OUTDATED) INDEX to maybe reindex the explicitely\nstated index name since you think it's a better behavior.\n\n> \n> > I was thinking that users would be more interested in the list of indexes being\n> > processed rather than the full list of indexes and a mention of whether it was\n> > processed or not. I can change that if you prefer.\n> \n> How invasive do you think it would be to add a note in the verbose\n> output when indexes are skipped?\n\nProbably not too invasive, but the verbose output is already inconsistent:\n\n# reindex (verbose) table tt;\nNOTICE: 00000: table \"tt\" has no indexes to reindex\n\nBut a REINDEX (VERBOSE) DATABASE won't emit such message. I'm assuming that\nit's because it doesn't make sense to warn in that case as the user didn't\nexplicitly specified the table name. We have the same behavior for now when\nusing the OUTDATED option if no indexes are processed. 
Should that be changed\ntoo?\n\n> > Did you mean index_has_outdated_collation() and\n> > index_has_outdated_dependency()? It's just to keep things separated, mostly\n> > for future improvements on that infrastructure. I can get rid of that function\n> > and put back the code in index_has_outadted_dependency() if that's overkill.\n> \n> Yes, sorry. I meant index_has_outdated_collation() and\n> index_has_outdated_dependency().\n\nAnd are you ok with this function?\n\n\n",
"msg_date": "Mon, 15 Mar 2021 08:35:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Hi Mark,\n\nOn Sun, Mar 14, 2021 at 05:01:20PM -0700, Mark Dilger wrote:\n> \n> > On Mar 14, 2021, at 12:10 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> I'm coming to this patch quite late, perhaps too late to change design decision in time for version 14.\n\nThanks for lookint at it!\n\n> +\tif (outdated && PQserverVersion(conn) < 140000)\n> +\t{\n> +\t\tPQfinish(conn);\n> +\t\tpg_log_error(\"cannot use the \\\"%s\\\" option on server versions older than PostgreSQL %s\",\n> +\t\t\t\t\t \"outdated\", \"14\");\n> +\t\texit(1);\n> +\t}\n> \n> If detection of outdated indexes were performed entirely in the frontend (reindexdb) rather than in the backend (reindex command), would reindexdb be able to connect to older servers? Looking quickly that the catalogs, it appears pg_index, pg_depend, pg_collation and a call to the SQL function pg_collation_actual_version() compared against pg_depend.refobjversion would be enough to construct a list of indexes in need of reindexing. Am I missing something here?\n\nThere won't be any need to connect on older servers if the patch is committed\nin this commitfest as refobjversion was also added in pg14.\n\n> I understand that wouldn't help somebody wanting to reindex from psql. Is that the whole reason you went a different direction with this feature?\n\nThis was already discussed with Magnus and Michael. 
The main reason for that\nare:\n\n- no need for a whole new infrastructure to be able to process a list of\n indexes in parallel which would be required if getting the list of indexes in\n the client\n\n- if done in the backend, then the ability is immediately available for all\n user scripts, compared to the burden of writing the needed query (with the\n usual caveats like quoting, qualifying all objects if the search_path isn't\n safe and such) and looping though all the results.\n\n> +\tprintf(_(\" --outdated only process indexes having outdated depencies\\n\")); \n> \n> typo.\n> \n> +\tbool outdated;\t/* depends on at least on deprected collation? */\n> \n> typo.\n\nThanks! I'll fix those.\n\n\n",
"msg_date": "Mon, 15 Mar 2021 08:46:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "Please find attached v7, with the following changes:\n\n- all typo reported by Michael and Mark are fixed\n- REINDEX (OUTDATED) INDEX will now ignore the index if it doesn't have any\n outdated dependency. Partitioned index are correctly handled.\n- REINDEX (OUTDATED, VERBOSE) will now inform caller of ignored indexes, with\n lines of the form:\n\nNOTICE: index \"index_name\" has no outdated dependency\n\n- updated regression tests to cover all those changes. I kept the current\n approach of using simple SQL test listing the ignored indexes. I also added\n some OUDATED option to collate.icu.utf8 tests so that we also check that both\n REINDEX and REINDEX(OUTDATED) work as expected.\n- move pg_index_has_outdated_dependency to 0002\n\nI didn't remove index_has_outdated_collation() for now.",
"msg_date": "Mon, 15 Mar 2021 11:33:20 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 14, 2021, at 8:33 PM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> <v7-0001-Add-a-new-OUTDATED-filtering-facility-for-REINDEX.patch><v7-0002-Add-a-outdated-option-to-reindexdb.patch>\n\n\nIn the docs, 0001, \"Fow now, the only dependency handled currently\",\n\n\"Fow now\" is misspelled, and \"For now\" seems redundant when used with \"currently\".\n\n\nIn the docs, 0002, \"For now only dependency on collations are supported.\"\n\n\"dependency\" is singular, \"are\" is conjugated for plural.\n\n\nIn the docs, 0002, you forgot to update doc/src/sgml/ref/reindexdb.sgml with the documentation for the --outdated switch.\n\n\nIn the tests, you check that REINDEX (OUTDATED) doesn't do anything crazy, but you are not really testing the functionality so far as I can see, as you don't have any tests which cause the collation to be outdated. Am I right about that? I wonder if you could modify DefineCollation. In addition to the providers \"icu\" and \"libc\" that it currently accepts, I wonder if it might accept \"test\" or similar, and then you could create a test in src/test/modules that compiles a \"test\" provider, creates a database with indexes dependent on something from that provider, stops the database, updates the test collation, ...? \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 09:30:43 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 09:30:43AM -0700, Mark Dilger wrote:\n> \n> In the docs, 0001, \"Fow now, the only dependency handled currently\",\n> \n> \"Fow now\" is misspelled, and \"For now\" seems redundant when used with \"currently\".\n> \n> \n> In the docs, 0002, \"For now only dependency on collations are supported.\"\n> \n> \"dependency\" is singular, \"are\" is conjugated for plural.\n> \n> \n> In the docs, 0002, you forgot to update doc/src/sgml/ref/reindexdb.sgml with the documentation for the --outdated switch.\n\nThanks, I'll fix those and do a full round a doc / comment proofreading.\n\n> In the tests, you check that REINDEX (OUTDATED) doesn't do anything crazy, but you are not really testing the functionality so far as I can see, as you don't have any tests which cause the collation to be outdated. Am I right about that? I wonder if you could modify DefineCollation. In addition to the providers \"icu\" and \"libc\" that it currently accepts, I wonder if it might accept \"test\" or similar, and then you could create a test in src/test/modules that compiles a \"test\" provider, creates a database with indexes dependent on something from that provider, stops the database, updates the test collation, ...? \n\n\nIndeed the tests in create_index.sql (and similarly in 090_reindexdb.pl) check\nthat REINDEX (OUTDATED) will ignore non outdated indexes as expected.\n\nBut there are also the tests in collate.icu.utf8.out which will fake outdated\ncollations (that's the original tests for the collation tracking patches) and\nthen check that outdated indexes are reindexed with both REINDEX and REINDEX\n(OUDATED).\n\nSo I think that all cases are covered. Do you want to have more test cases?\n\n\n",
"msg_date": "Tue, 16 Mar 2021 00:52:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 9:52 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> But there are also the tests in collate.icu.utf8.out which will fake outdated\n> collations (that's the original tests for the collation tracking patches) and\n> then check that outdated indexes are reindexed with both REINDEX and REINDEX\n> (OUDATED).\n> \n> So I think that all cases are covered. Do you want to have more test cases?\n\nI thought that just checked cases where a bogus 'not a version' was put into pg_catalog.pg_depend. I'm talking about having a collation provider who returns a different version string and has genuinely different collation rules between versions, thereby breaking the index until it is updated. Is that being tested?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:13:55 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:13:55AM -0700, Mark Dilger wrote:\n> \n> \n> > On Mar 15, 2021, at 9:52 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > \n> > But there are also the tests in collate.icu.utf8.out which will fake outdated\n> > collations (that's the original tests for the collation tracking patches) and\n> > then check that outdated indexes are reindexed with both REINDEX and REINDEX\n> > (OUDATED).\n> > \n> > So I think that all cases are covered. Do you want to have more test cases?\n> \n> I thought that just checked cases where a bogus 'not a version' was put into pg_catalog.pg_depend. I'm talking about having a collation provider who returns a different version string and has genuinely different collation rules between versions, thereby breaking the index until it is updated. Is that being tested?\n\nNo, we're only checking that the infrastructure works as intended.\n\nAre you saying that you want to implement a simplistic collation provider with\n\"tunable\" ordering, so that you can actually check that an ordering change will\nbe detected as a corrupted index, as in you'll get some error or incorrect\nresults?\n\nI don't think that this infrastructure is the right place to do that, and I'm\nnot sure what would be the benefit here. If a library was updated, the\nunderlying indexes may or may not be corrupted, and we only warn about the\ndiscrepancy with a low overhead.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 01:34:01 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 10:34 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> On Mon, Mar 15, 2021 at 10:13:55AM -0700, Mark Dilger wrote:\n>> \n>> \n>>> On Mar 15, 2021, at 9:52 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> \n>>> But there are also the tests in collate.icu.utf8.out which will fake outdated\n>>> collations (that's the original tests for the collation tracking patches) and\n>>> then check that outdated indexes are reindexed with both REINDEX and REINDEX\n>>> (OUDATED).\n>>> \n>>> So I think that all cases are covered. Do you want to have more test cases?\n>> \n>> I thought that just checked cases where a bogus 'not a version' was put into pg_catalog.pg_depend. I'm talking about having a collation provider who returns a different version string and has genuinely different collation rules between versions, thereby breaking the index until it is updated. Is that being tested?\n> \n> No, we're only checking that the infrastructure works as intended.\n> \n> Are you saying that you want to implement a simplistic collation provider with\n> \"tunable\" ordering, so that you can actually check that an ordering change will\n> be detected as a corrupted index, as in you'll get some error or incorrect\n> results?\n\nI'm saying that your patch seems to call down to get_collation_actual_version() via get_collation_version_for_oid() from your new function do_check_index_has_outdated_collation(), but I'm not seeing how that gets exercised. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:40:25 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:40:25AM -0700, Mark Dilger wrote:\n> I'm saying that your patch seems to call down to get_collation_actual_version() via get_collation_version_for_oid() from your new function do_check_index_has_outdated_collation(), but I'm not seeing how that gets exercised.\n\nIt's a little bit late here so sorry if I'm missing something.\n\ndo_check_index_has_outdated_collation() is called from\nindex_has_outdated_collation() which is called from\nindex_has_outdated_dependency() which is called from\nRelationGetIndexListFiltered(), and that function is called when passing the\nOUTDATED option to REINDEX (and reindexdb --outdated). So this is exercised\nwith added tests for both matching and non matching collation version.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 01:50:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 10:50 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> On Mon, Mar 15, 2021 at 10:40:25AM -0700, Mark Dilger wrote:\n>> I'm saying that your patch seems to call down to get_collation_actual_version() via get_collation_version_for_oid() from your new function do_check_index_has_outdated_collation(), but I'm not seeing how that gets exercised.\n> \n> It's a little bit late here so sorry if I'm missing something.\n> \n> do_check_index_has_outdated_collation() is called from\n> index_has_outdated_collation() which is called from\n> index_has_outdated_dependency() which is called from\n> RelationGetIndexListFiltered(), and that function is called when passing the\n> OUTDATED option to REINDEX (and reindexdb --outdated). So this is exercised\n> with added tests for both matching and non matching collation version.\n\nOk, fair enough. I was thinking about the case where the collation actually returns a different version number because it (the C library providing the collation) got updated, but I think you've answered already that you are not planning to test that case, only the case where pg_depend is modified to have a bogus version number.\n\nIt seems a bit odd to me that a feature intended to handle cases where collations are updated is not tested via having a collation be updated during the test. It leaves open the possibility that something differs between the test and reindexed run after real world collation updates. But that's for the committer who picks up your patch to decide, and perhaps it is unfair to make your patch depend on addressing that issue.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:56:50 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:56:50AM -0700, Mark Dilger wrote:\n> \n> \n> > On Mar 15, 2021, at 10:50 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > \n> > On Mon, Mar 15, 2021 at 10:40:25AM -0700, Mark Dilger wrote:\n> >> I'm saying that your patch seems to call down to get_collation_actual_version() via get_collation_version_for_oid() from your new function do_check_index_has_outdated_collation(), but I'm not seeing how that gets exercised.\n> > \n> > It's a little bit late here so sorry if I'm missing something.\n> > \n> > do_check_index_has_outdated_collation() is called from\n> > index_has_outdated_collation() which is called from\n> > index_has_outdated_dependency() which is called from\n> > RelationGetIndexListFiltered(), and that function is called when passing the\n> > OUTDATED option to REINDEX (and reindexdb --outdated). So this is exercised\n> > with added tests for both matching and non matching collation version.\n> \n> Ok, fair enough. I was thinking about the case where the collation actually returns a different version number because it (the C library providing the collation) got updated, but I think you've answered already that you are not planning to test that case, only the case where pg_depend is modified to have a bogus version number.\n\nThis infrastructure is supposed to detect that the collation library *used to*\nreturn a different version before it was updated. And that's exactly what\nwe're testing by manually updating the refobjversion.\n\n> It seems a bit odd to me that a feature intended to handle cases where collations are updated is not tested via having a collation be updated during the test. It leaves open the possibility that something differs between the test and reindexed run after real world collation updates. But that's for the committer who picks up your patch to decide, and perhaps it is unfair to make your patch depend on addressing that issue.\n\nWhy is that odd? 
We're testing that we're correctly storing the collation\nversion during index creating and correctly detecting a mismatch. Having a\nfake collation provider to return a fake version number won't add any more\ncoverage unless I'm missing something.\n\nIt's similar to how we test the various corruption scenario. AFAIK we're not\nproviding custom drivers to write corrupted data but we're simply simulating a\ncorruption overwriting some blocks.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 02:10:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
},
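The "manually updating the refobjversion" approach described above can be pictured like this. The index and table names are invented, OUTDATED is the patch's proposed option, and refobjversion comes from the in-development collation-versioning catalogs, so treat this purely as an illustration of the test technique:

```sql
-- Fake an outdated collation by overwriting the recorded version,
-- then check that the filtered REINDEX picks the index up.
UPDATE pg_depend SET refobjversion = 'not a version'
WHERE refclassid = 'pg_collation'::regclass
  AND objid = 'icu_idx'::regclass;

REINDEX (OUTDATED, VERBOSE) TABLE icu_tbl;  -- should now rebuild icu_idx
```

This simulates "the library used to report a different version" without needing an actual library upgrade, which is the crux of the disagreement in the following messages.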
{
"msg_contents": "\n\n> On Mar 15, 2021, at 11:10 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> On Mon, Mar 15, 2021 at 10:56:50AM -0700, Mark Dilger wrote:\n>> \n>> \n>>> On Mar 15, 2021, at 10:50 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> \n>>> On Mon, Mar 15, 2021 at 10:40:25AM -0700, Mark Dilger wrote:\n>>>> I'm saying that your patch seems to call down to get_collation_actual_version() via get_collation_version_for_oid() from your new function do_check_index_has_outdated_collation(), but I'm not seeing how that gets exercised.\n>>> \n>>> It's a little bit late here so sorry if I'm missing something.\n>>> \n>>> do_check_index_has_outdated_collation() is called from\n>>> index_has_outdated_collation() which is called from\n>>> index_has_outdated_dependency() which is called from\n>>> RelationGetIndexListFiltered(), and that function is called when passing the\n>>> OUTDATED option to REINDEX (and reindexdb --outdated). So this is exercised\n>>> with added tests for both matching and non matching collation version.\n>> \n>> Ok, fair enough. I was thinking about the case where the collation actually returns a different version number because it (the C library providing the collation) got updated, but I think you've answered already that you are not planning to test that case, only the case where pg_depend is modified to have a bogus version number.\n> \n> This infrastructure is supposed to detect that the collation library *used to*\n> return a different version before it was updated. And that's exactly what\n> we're testing by manually updating the refobjversion.\n> \n>> It seems a bit odd to me that a feature intended to handle cases where collations are updated is not tested via having a collation be updated during the test. It leaves open the possibility that something differs between the test and reindexed run after real world collation updates. 
But that's for the committer who picks up your patch to decide, and perhaps it is unfair to make your patch depend on addressing that issue.\n> \n> Why is that odd? We're testing that we're correctly storing the collation\n> version during index creating and correctly detecting a mismatch. Having a\n> fake collation provider to return a fake version number won't add any more\n> coverage unless I'm missing something.\n> \n> It's similar to how we test the various corruption scenario. AFAIK we're not\n> providing custom drivers to write corrupted data but we're simply simulating a\n> corruption overwriting some blocks.\n\nWe do test corrupt relations. We intentionally corrupt the pages within corrupted heap tables to check that they get reported as corrupt. (See src/bin/pg_amcheck/t/004_verify_heapam.pl) Admittedly, the corruptions used in the tests are not necessarily representative of corruptions that might occur in the wild, but that is a hard problem to solve, since we don't know the statistical distribution of corruptions in the wild.\n\nIf you had a real, not fake, collation provider which actually provided a collation with an actual version number, stopped the server, changed the behavior of the collation as well as its version number, started the server, and ran REINDEX (OUTDATED), I think that would be a more real-world test. I'm not demanding that you write such a test. I'm just saying that it is strange that we don't have coverage for this anywhere, and was asking if you think there is such coverage, because, you know, maybe I just didn't see where that test was lurking.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 11:32:41 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 11:32 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> If you had a real, not fake, collation provider which actually provided a collation with an actual version number, stopped the server, changed the behavior of the collation as well as its version number, started the server, and ran REINDEX (OUTDATED), I think that would be a more real-world test. I'm not demanding that you write such a test. I'm just saying that it is strange that we don't have coverage for this anywhere, and was asking if you think there is such coverage, because, you know, maybe I just didn't see where that test was lurking.\n\nI should add some context regarding why I mentioned this issue at all.\n\nNot long ago, if an upgrade of icu or libc broke your collations, you were sad. But postgres didn't claim to be competent to deal with this problem, so it was just a missing feature. Now, with REINDEX (OUTDATED), we're really implying, if not outright saying, that postgres knows how to deal with collation upgrades. I feel uncomfortable that v14 will make such a claim with not a single regression test confirming such a claim. I'm happy to discover that such a test is lurking somewhere and I just didn't see it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 11:58:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX backend filtering"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 2:32 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> We do test corrupt relations. We intentionally corrupt the pages within corrupted heap tables to check that they get reported as corrupt. (See src/bin/pg_amcheck/t/004_verify_heapam.pl)\n\nI disagree. You're testing a modified version of the pages in OS\ncache, which is very likely to be different from real world\ncorruption. Those usually end up with a discrepancy between storage\nand OS cache and this scenario isn't tested nor documented.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 03:37:03 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX backend filtering"
}
] |
[
{
"msg_contents": "While doing the proverbial other things, I noticed that the grammar \nsymbols publication_name_list and publication_name_item are pretty \nuseless. We already use name_list/name to refer to publications in most \nplaces, so getting rid of these makes things more consistent.\n\nThese appear to have been introduced by the original logical replication \npatch, so there probably wasn't that much scrutiny on this detail then.",
"msg_date": "Thu, 3 Dec 2020 10:50:50 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary grammar symbols"
},
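For context, the productions in question parse the publication name lists of the subscription commands; a hedged sketch of the affected syntax (subscription, connection string, and publication names are placeholders):

```sql
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=src dbname=postgres'
    PUBLICATION pub1, pub2;

ALTER SUBSCRIPTION mysub SET PUBLICATION pub1;
```

After the proposed cleanup these lists would go through the generic name_list/name productions used elsewhere in the grammar.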
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> While doing the proverbial other things, I noticed that the grammar \n> symbols publication_name_list and publication_name_item are pretty \n> useless. We already use name_list/name to refer to publications in most \n> places, so getting rid of these makes things more consistent.\n\n+1. Strictly speaking, this reduces the set of keywords that you\ncan use as names here (since name is ColId, versus ColLabel in\npublication_name_item). However, given the inconsistency with\nother commands, I don't see it as an advantage to be more forgiving\nin just one place. We might have problems preserving the laxer\ndefinition anyway, if the syntaxes of these commands ever get\nany more complicated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Dec 2020 10:07:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary grammar symbols"
}
] |
[
{
"msg_contents": "We start recording changes in ReorderBufferTXN even before we reach\nSNAPBUILD_CONSISTENT state so that if the commit is encountered after\nreaching that we should be able to send the changes of the entire\ntransaction. Now, while recording changes if the reorder buffer memory\nhas exceeded logical_decoding_work_mem then we can start streaming if\nit is allowed and we haven't yet streamed that data. However, we must\nnot allow streaming to start unless the snapshot has reached\nSNAPBUILD_CONSISTENT state.\n\nI have also improved the comments atop ReorderBufferResetTXN to\nmention the case when we need to continue streaming after getting an\nerror.\n\nAttached patch for the above changes.\n\nThoughts?",
"msg_date": "Thu, 3 Dec 2020 17:34:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove incorrect assertion in reorderbuffer.c."
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 5:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> We start recording changes in ReorderBufferTXN even before we reach\n> SNAPBUILD_CONSISTENT state so that if the commit is encountered after\n> reaching that we should be able to send the changes of the entire\n> transaction. Now, while recording changes if the reorder buffer memory\n> has exceeded logical_decoding_work_mem then we can start streaming if\n> it is allowed and we haven't yet streamed that data. However, we must\n> not allow streaming to start unless the snapshot has reached\n> SNAPBUILD_CONSISTENT state.\n>\n> I have also improved the comments atop ReorderBufferResetTXN to\n> mention the case when we need to continue streaming after getting an\n> error.\n>\n> Attached patch for the above changes.\n>\n> Thoughts?\n\nLGTM.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Dec 2020 11:18:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove incorrect assertion in reorderbuffer.c."
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 11:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Dec 3, 2020 at 5:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > We start recording changes in ReorderBufferTXN even before we reach\n> > SNAPBUILD_CONSISTENT state so that if the commit is encountered after\n> > reaching that we should be able to send the changes of the entire\n> > transaction. Now, while recording changes if the reorder buffer memory\n> > has exceeded logical_decoding_work_mem then we can start streaming if\n> > it is allowed and we haven't yet streamed that data. However, we must\n> > not allow streaming to start unless the snapshot has reached\n> > SNAPBUILD_CONSISTENT state.\n> >\n> > I have also improved the comments atop ReorderBufferResetTXN to\n> > mention the case when we need to continue streaming after getting an\n> > error.\n> >\n> > Attached patch for the above changes.\n> >\n> > Thoughts?\n>\n> LGTM.\n>\n\nThanks for the review, Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Dec 2020 14:35:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove incorrect assertion in reorderbuffer.c."
}
] |
[
{
"msg_contents": "https://www.postgresql.org/docs/current/sql-copy.html\n|. COPY FROM can be used with plain, foreign, or partitioned tables or with views that have INSTEAD OF INSERT triggers.\n|. COPY only deals with the specific table named; IT DOES NOT COPY DATA TO OR FROM CHILD TABLES. ...\n\nThat language in commit 854b5eb51 was never updated since partitioning was\nadded, so I propose this.\n\nI'm not sure, but maybe it should still say that \"COPY TO does not copy data to\nchild tables of inheritance hierarchies.\"\n\ndiff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml\nindex 369342b74d..0631dfe6b3 100644\n--- a/doc/src/sgml/ref/copy.sgml\n+++ b/doc/src/sgml/ref/copy.sgml\n@@ -414,9 +414,14 @@ COPY <replaceable class=\"parameter\">count</replaceable>\n \n <para>\n <command>COPY TO</command> can only be used with plain tables, not\n- with views. However, you can write <literal>COPY (SELECT * FROM\n- <replaceable class=\"parameter\">viewname</replaceable>) TO ...</literal>\n- to copy the current contents of a view.\n+ views, and does not copy data from child tables or partitions.\n+ Thus for example\n+ <literal>COPY <replaceable class=\"parameter\">table</replaceable> TO</literal>\n+ shows the same data as <literal>SELECT * FROM ONLY <replaceable\n+ class=\"parameter\">table</replaceable></literal>. But <literal>COPY\n+ (SELECT * FROM <replaceable class=\"parameter\">table</replaceable>) TO ...</literal>\n+ can be used to dump all of the data in an inheritance hierarchy,\n+ partitioned table, or view.\n </para>\n \n <para>\n@@ -425,16 +430,6 @@ COPY <replaceable class=\"parameter\">count</replaceable>\n <literal>INSTEAD OF INSERT</literal> triggers.\n </para>\n \n- <para>\n- <command>COPY</command> only deals with the specific table named;\n- it does not copy data to or from child tables. 
Thus for example\n- <literal>COPY <replaceable class=\"parameter\">table</replaceable> TO</literal>\n- shows the same data as <literal>SELECT * FROM ONLY <replaceable\n- class=\"parameter\">table</replaceable></literal>. But <literal>COPY\n- (SELECT * FROM <replaceable class=\"parameter\">table</replaceable>) TO ...</literal>\n- can be used to dump all of the data in an inheritance hierarchy.\n- </para>\n-\n <para>\n You must have select privilege on the table\n whose values are read by <command>COPY TO</command>, and\n\n\n",
"msg_date": "Thu, 3 Dec 2020 15:17:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "copy.sgml and partitioned tables"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 03:17:23PM -0600, Justin Pryzby wrote:\n> https://www.postgresql.org/docs/current/sql-copy.html\n> |. COPY FROM can be used with plain, foreign, or partitioned tables or with views that have INSTEAD OF INSERT triggers.\n> |. COPY only deals with the specific table named; IT DOES NOT COPY DATA TO OR FROM CHILD TABLES. ...\n> \n> That language in commit 854b5eb51 was never updated since partitioning was\n> added, so I propose this.\n> \n> I'm not sure, but maybe it should still say that \"COPY TO does not copy data to\n> child tables of inheritance hierarchies.\"\n\nI reworded it slightly, attached, and applied it back to PG 10, where we\nadded the partition syntax.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 15 Dec 2020 19:20:59 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: copy.sgml and partitioned tables"
}
] |
[
{
"msg_contents": "Hi all\n\nI was discussing problems of CDC with scientific community and they asked this simple question: \"So you have efficient WAL archive on a very cheap storage, why don't you have a logical archive too?\"\nThis seems like a wild idea. But really, we have a super expensive NVMe drives for OLTP workload. And use this devices to store buffer for data to be dumped into MapReduce\\YT analytical system.\nIf OLAP cannot consume data fast enough - we are out of space due to repl slot.\nIf we have a WAL HA switchover - OLAP has a hole in the stream and have to resync data from the scratch.\n\nIf we could just run archive command ```archive-tool wal-push 0000000900000F2C000000E1.logical``` with contents of logical replication - this would be super cool for OLAP. I'd prefer even avoid writing 0000000900000F2C000000E1.logical to disk, i.e. push data on stdio or something like that.\n\nWhat do you think?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 4 Dec 2020 12:33:44 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Logical archiving"
},
{
"msg_contents": "On Fri, 4 Dec 2020 at 04:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n>\n> I was discussing problems of CDC with scientific community and they asked\n> this simple question: \"So you have efficient WAL archive on a very cheap\n> storage, why don't you have a logical archive too?\"\n>\n\nWAL archive doesn't process data; it just copies from one location into\nanother one. However, \"logical archive\" must process data.\n\n\n> If we could just run archive command ```archive-tool wal-push\n> 0000000900000F2C000000E1.logical``` with contents of logical replication -\n> this would be super cool for OLAP. I'd prefer even avoid writing\n> 0000000900000F2C000000E1.logical to disk, i.e. push data on stdio or\n> something like that.\n>\n> The most time consuming process is logical decoding, mainly due to long\nrunning transactions. In order to minimize your issue, we should improve\nthe logical decoding mechanism. There was a discussion about allowing\nlogical decoding on the replica that would probably help your use case a\nlot.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 4 Dec 2020 at 04:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\nI was discussing problems of CDC with scientific community and they asked this simple question: \"So you have efficient WAL archive on a very cheap storage, why don't you have a logical archive too?\"\nWAL archive doesn't process data; it just copies from one location into another one. However, \"logical archive\" must process data. If we could just run archive command ```archive-tool wal-push 0000000900000F2C000000E1.logical``` with contents of logical replication - this would be super cool for OLAP. I'd prefer even avoid writing 0000000900000F2C000000E1.logical to disk, i.e. push data on stdio or something like that.\nThe most time consuming process is logical decoding, mainly due to long running transactions. 
In order to minimize your issue, we should improve the logical decoding mechanism. There was a discussion about allowing logical decoding on the replica that would probably help your use case a lot.-- Euler Taveira http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 4 Dec 2020 14:14:50 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "Hi Euler!\n\nThanks for your response.\n\n> 4 дек. 2020 г., в 22:14, Euler Taveira <euler.taveira@2ndquadrant.com> написал(а):\n> \n> On Fri, 4 Dec 2020 at 04:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I was discussing problems of CDC with scientific community and they asked this simple question: \"So you have efficient WAL archive on a very cheap storage, why don't you have a logical archive too?\"\n> \n> WAL archive doesn't process data; it just copies from one location into another one. However, \"logical archive\" must process data.\nWAL archiving processes data: it does compression, encryption and digesting. Only minimal impractical setup will copy data as is. However I agree, that all processing is done outside postgres.\n\n> If we could just run archive command ```archive-tool wal-push 0000000900000F2C000000E1.logical``` with contents of logical replication - this would be super cool for OLAP. I'd prefer even avoid writing 0000000900000F2C000000E1.logical to disk, i.e. push data on stdio or something like that.\n> \n> The most time consuming process is logical decoding, mainly due to long running transactions.\nCurrently I do not experience problem of high CPU utilisation.\n\n> In order to minimize your issue, we should improve the logical decoding mechanism.\nNo, the issue I'm facing comes from the fact that corner cases of failover are not solved properly for logical replication. Timelines, partial segments, archiving along with streaming, starting from arbitrary LSN (within available WAL), rewind, named restore points, cascade replication etc etc. All these nice things are there for WAL and are missing for LR. I'm just trying to find shortest path through this to make CDC(changed data capture) work.\n\n> There was a discussion about allowing logical decoding on the replica that would probably help your use case a lot.\nI will look there more closely, thanks! 
But it's only part of a solution.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 4 Dec 2020 22:36:25 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "On Fri, 4 Dec 2020 at 14:36, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> >\n> > The most time consuming process is logical decoding, mainly due to long\n> running transactions.\n> Currently I do not experience problem of high CPU utilisation.\n>\n> I'm wondering why the LSN isn't moving fast enough for your use case.\n\n\n> > In order to minimize your issue, we should improve the logical decoding\n> mechanism.\n> No, the issue I'm facing comes from the fact that corner cases of failover\n> are not solved properly for logical replication. Timelines, partial\n> segments, archiving along with streaming, starting from arbitrary LSN\n> (within available WAL), rewind, named restore points, cascade replication\n> etc etc. All these nice things are there for WAL and are missing for LR.\n> I'm just trying to find shortest path through this to make CDC(changed data\n> capture) work.\n>\n> Craig started a thread a few days ago [1] that described some of these\nissues and possible solutions [2]. The lack of HA with logical replication\nreduces the number of solutions that could possibly use this technology.\nSome of the facilities such as logical replication slots and replication\norigin on failover-candidate subscribers should encourage users to adopt\nsuch solutions.\n\n[1]\nhttps://www.postgresql.org/message-id/CAGRY4nx0-ZVnFJV5749QCqwmqBMkjQpcFkYY56a9U6Vf%2Bf7-7Q%40mail.gmail.com\n[2]\nhttps://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 4 Dec 2020 at 14:36, Andrey Borodin <x4mmm@yandex-team.ru> wrote:> \n> The most time consuming process is logical decoding, mainly due to long running transactions.\nCurrently I do not experience problem of high CPU utilisation.\nI'm wondering why the LSN isn't moving fast enough for your use case. 
\n> In order to minimize your issue, we should improve the logical decoding mechanism.\nNo, the issue I'm facing comes from the fact that corner cases of failover are not solved properly for logical replication. Timelines, partial segments, archiving along with streaming, starting from arbitrary LSN (within available WAL), rewind, named restore points, cascade replication etc etc. All these nice things are there for WAL and are missing for LR. I'm just trying to find shortest path through this to make CDC(changed data capture) work.\nCraig started a thread a few days ago [1] that described some of these issues and possible solutions [2]. The lack of HA with logical replication reduces the number of solutions that could possibly use this technology. Some of the facilities such as logical replication slots and replication origin on failover-candidate subscribers should encourage users to adopt such solutions. [1] https://www.postgresql.org/message-id/CAGRY4nx0-ZVnFJV5749QCqwmqBMkjQpcFkYY56a9U6Vf%2Bf7-7Q%40mail.gmail.com[2] https://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover-- Euler Taveira http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 4 Dec 2020 15:28:20 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "Reply follows inline. I addressed your last point first, so it's out of\norder.\n\nOn Fri, 4 Dec 2020 at 15:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote\n\n> If OLAP cannot consume data fast enough - we are out of space due to repl\nslot.\n\nThere is a much simpler solution to this than logical PITR.\n\nWhat we should be doing is teaching xlogreader how to invoke the\nrestore_command to fetch archived WALs for decoding.\n\nReplication slots already have a WAL retention limit, but right now when\nthat limit is reached the slot is invalidated and becomes useless, it's\neffectively dropped. Instead, if WAL archiving is enabled, we should leave\nthe slot as valid. If a consumer of the slot needs WAL that no longer\nexists in pg_wal, we should have the walsender invoke the restore_command\nto read the missing WAL segment, decode it, and remove it again.\n\nThis would not be a technically difficult patch, and it's IMO one of the\nmore important ones for improving logical replication.\n\n> I was discussing problems of CDC with scientific community and they asked\nthis simple question: \"So you have efficient WAL archive on a very cheap\nstorage, why don't you have a logical archive too?\"\n\nI've done work in this area, as has Petr (CC'd).\n\nIn short, logical archiving and PITR is very much desirable, but we're not\nnearly ready for it yet and we're missing a lot of the foundations needed\nto make it really useful.\n\nIMO the strongest pre-requisite is that we need integrated DDL capture and\nreplication in Pg. While this could be implemented in the\npublisher/subscriber logic for logical replication, it would make much more\nsense (IMO) to make it easier to feed DDL events into any logical\nreplication output plugin.\n\npglogical3 (the closed one) has quite comprehensive DDL replication\nsupport. 
Doing it is not simple though - there are plenty of complexities:\n\n* Reliably identifying the target objects and mapping them to replication\nset memberships for DML-replication\n* Capturing, replicating and managing the search_path and other DDL\nexecution context (DateStyle and much more) reliably\n\n - Each statement type needs specific logic to indicate whether it needs\n DDL replication (and often filter functions since we have lots of sub-types\n where some need replication and some don't)\n - Handling DDL affecting global objects in pg_global correctly, like\n those affecting roles, grants, database security labels etc. There's no one\n right answer for this, it depends on the deployment and requires the user\n to cooperate.\n - Correct handling of transactions that mix DDL and DML (mostly only an\n issue for multimaster).\n - Identifying statements that target a mix of replicated and\n non-replicated objects and handling them appropriately, including for\n CASCADEs\n - Gracefully handling DDL statements that mix TEMPORARY and persistent\n targets. We can do this ok for DROPs but it still requires care. Anything\n else gets messier.\n - Lack of hooks into table rewrite operations and the extremely clumsy\n and inefficient way logical decoding currently exposes decoding of the\n temp-table data during decoding of rewrites means handling table-rewriting\n DDL is difficult and impractical to do correctly. 
In pglogical we punt on\n it entirely and refuse to permit DDL that would rewrite a table except\n where we can prove it's reliant only on immutable inputs so we can discard\n the upstream rewrite and rely on statement replication.\n - As a consequence of the above, reliably determining whether a given\n statement will cause a table rewrite.\n - Handling re-entrant ProcessUtility_hook calls for ALTER TABLE etc.\n - Handling TRUNCATE's pseudo-DDL pseudo-DML halfway state, doing\n something sensible for truncate cascade.\n - Probably more I've forgotten\n\n\nIf we don't handle these, then any logical change-log archives will become\nlargely useless as soon as there's any schema change.\n\nSo we kind of have to solve DDL replication first IMO.\n\nSome consideration is also required for metadata management. Right now\nrelation and type metadata has session-lifetime, but you'd want to be able\nto discard old logical change-stream archives and have the later ones still\nbe usable. So we'd need to define some kind of restartpoint where we repeat\nthe metadata, or we'd have to support externalizing the metadata so it can\nbe retained when the main change archives get aged out.\n\nWe'd also need to separate the existing apply worker into a \"receiver\" and\n\"apply/writer\" part, so the wire-protocol handling isn't tightly coupled\nwith the actual change apply code, in order to make it possible to actually\nconsume those archives and apply them to the database. In pglogical3 we did\nthat by splitting them into two processes, connected by a shm_mq.\nOriginally the process split was optional and you could run a combined\nreceiver/writer process without the shm_mq if you wanted, but we quickly\nfound it difficult to reliably handle locking issues etc that way so the\nwriters all moved out-of-process.\n\nThat was done mainly to make it possible to support parallelism in logical\ndecoding apply. 
But we also have the intention of supporting an alternative\nreader process that can ingest \"logical archives\" and send them to the\nwriter to apply them, as if they'd been received from the on-wire stream.\nThat's not implemented at this time though. It'd be useful for a number of\nthings:\n\n* PITR-style logical replay and recovery\n* Ability to pre-decode a txn once on the upstream then send the buffered\nprotocol-stream to multiple subscribers, saving on logical decoding and\nreorder buffering overheads and write-multiplication costs\n* ability to ingest change-streams generated by non-postgres sources so we\ncould support streaming foreign-data ingestion, streaming OLAP and data\nwarehousing, etc\n\nTo make logical PITR more useful we'd also want to be a bit more tolerant\nof schema divergence, though that's not overly hard to do:\n\n - fill defaults for downstream columns if no value is present for the\n column in the upstream row and the downstream column is nullable or has a\n default (I think built-in logical rep does this one already)\n - ignore values for columns in upstream data if the downstream table\n lacks the column and the upstream value is null\n - optionally allow apply to be configured to ignore non-null data in\n upstream columns that're missing on downstream\n - optionally allow apply to be configured to drop rows on the floor if\n the downstream table is missing\n - policies for handling data conflicts like duplicate PKs\n\nand we'd probably want ways to filter the apply data-stream to apply\nchanges for only a subset of tables, rows, etc at least in a later version.\n\nNone of this is insurmountable. Most or all of the DDL replication support\nand divergence-tolerance stuff is already done in production deployments\nusing pglogical3 and bdr3.\n\nWhile I can't share the code, I am happy to share the experience I have\ngained from my part in working on these things. 
As you've probably recently\nseen with the wiki article I wrote on physical/logical failover interop.\n\nYou're free to take information like this and use it in wiki articles too.\n\nRight now I won't be able to launch into writing big patches for these\nthings, but I'll do my best to share what I can and review things.\n\n> This seems like a wild idea. But really, we have a super expensive NVMe\ndrives for OLTP workload. And use this devices to store buffer for data to\nbe dumped into MapReduce\\YT analytical system.\n\nIt's not a wild idea at all, as noted above.\n\nIn pglogical3 we already support streaming decoded WAL data to alternative\nwriter downstreams including RabbitMQ and Kafka via writer plugins.",
"msg_date": "Mon, 7 Dec 2020 11:05:12 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "Actually CC'd Petr this time.\n\nOn Mon, 7 Dec 2020 at 11:05, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n> Reply follows inline. I addressed your last point first, so it's out of\n> order.\n>\n> On Fri, 4 Dec 2020 at 15:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote\n>\n> > If OLAP cannot consume data fast enough - we are out of space due to\n> repl slot.\n>\n> There is a much simpler solution to this than logical PITR.\n>\n> What we should be doing is teaching xlogreader how to invoke the\n> restore_command to fetch archived WALs for decoding.\n>\n> Replication slots already have a WAL retention limit, but right now when\n> that limit is reached the slot is invalidated and becomes useless, it's\n> effectively dropped. Instead, if WAL archiving is enabled, we should leave\n> the slot as valid. If a consumer of the slot needs WAL that no longer\n> exists in pg_wal, we should have the walsender invoke the restore_command\n> to read the missing WAL segment, decode it, and remove it again.\n>\n> This would not be a technically difficult patch, and it's IMO one of the\n> more important ones for improving logical replication.\n>\n> > I was discussing problems of CDC with scientific community and they\n> asked this simple question: \"So you have efficient WAL archive on a very\n> cheap storage, why don't you have a logical archive too?\"\n>\n> I've done work in this area, as has Petr (CC'd).\n>\n> In short, logical archiving and PITR is very much desirable, but we're not\n> nearly ready for it yet and we're missing a lot of the foundations needed\n> to make it really useful.\n>\n> IMO the strongest pre-requisite is that we need integrated DDL capture and\n> replication in Pg. 
While this could be implemented in the\n> publisher/subscriber logic for logical replication, it would make much more\n> sense (IMO) to make it easier to feed DDL events into any logical\n> replication output plugin.\n>\n> pglogical3 (the closed one) has quite comprehensive DDL replication\n> support. Doing it is not simple though - there are plenty of complexities:\n>\n> * Reliably identifying the target objects and mapping them to replication\n> set memberships for DML-replication\n> * Capturing, replicating and managing the search_path and other DDL\n> execution context (DateStyle and much more) reliably\n>\n> - Each statement type needs specific logic to indicate whether it\n> needs DDL replication (and often filter functions since we have lots of\n> sub-types where some need replication and some don't)\n> - Handling DDL affecting global objects in pg_global correctly, like\n> those affecting roles, grants, database security labels etc. There's no one\n> right answer for this, it depends on the deployment and requires the user\n> to cooperate.\n> - Correct handling of transactions that mix DDL and DML (mostly only\n> an issue for multimaster).\n> - Identifying statements that target a mix of replicated and\n> non-replicated objects and handling them appropriately, including for\n> CASCADEs\n> - Gracefully handling DDL statements that mix TEMPORARY and persistent\n> targets. We can do this ok for DROPs but it still requires care. Anything\n> else gets messier.\n> - Lack of hooks into table rewrite operations and the extremely clumsy\n> and inefficient way logical decoding currently exposes decoding of the\n> temp-table data during decoding of rewrites means handling table-rewriting\n> DDL is difficult and impractical to do correctly. 
In pglogical we punt on\n> it entirely and refuse to permit DDL that would rewrite a table except\n> where we can prove it's reliant only on immutable inputs so we can discard\n> the upstream rewrite and rely on statement replication.\n> - As a consequence of the above, reliably determining whether a given\n> statement will cause a table rewrite.\n> - Handling re-entrant ProcessUtility_hook calls for ALTER TABLE etc.\n> - Handling TRUNCATE's pseudo-DDL pseudo-DML halfway state, doing\n> something sensible for truncate cascade.\n> - Probably more I've forgotten\n>\n>\n> If we don't handle these, then any logical change-log archives will become\n> largely useless as soon as there's any schema change.\n>\n> So we kind of have to solve DDL replication first IMO.\n>\n> Some consideration is also required for metadata management. Right now\n> relation and type metadata has session-lifetime, but you'd want to be able\n> to discard old logical change-stream archives and have the later ones still\n> be usable. So we'd need to define some kind of restartpoint where we repeat\n> the metadata, or we'd have to support externalizing the metadata so it can\n> be retained when the main change archives get aged out.\n>\n> We'd also need to separate the existing apply worker into a \"receiver\" and\n> \"apply/writer\" part, so the wire-protocol handling isn't tightly coupled\n> with the actual change apply code, in order to make it possible to actually\n> consume those archives and apply them to the database. In pglogical3 we did\n> that by splitting them into two processes, connected by a shm_mq.\n> Originally the process split was optional and you could run a combined\n> receiver/writer process without the shm_mq if you wanted, but we quickly\n> found it difficult to reliably handle locking issues etc that way so the\n> writers all moved out-of-process.\n>\n> That was done mainly to make it possible to support parallelism in logical\n> decoding apply. 
But we also have the intention of supporting an alternative\n> reader process that can ingest \"logical archives\" and send them to the\n> writer to apply them, as if they'd been received from the on-wire stream.\n> That's not implemented at this time though. It'd be useful for a number of\n> things:\n>\n> * PITR-style logical replay and recovery\n> * Ability to pre-decode a txn once on the upstream then send the buffered\n> protocol-stream to multiple subscribers, saving on logical decoding and\n> reorder buffering overheads and write-multiplication costs\n> * ability to ingest change-streams generated by non-postgres sources so we\n> could support streaming foreign-data ingestion, streaming OLAP and data\n> warehousing, etc\n>\n> To make logical PITR more useful we'd also want to be a bit more tolerant\n> of schema divergence, though that's not overly hard to do:\n>\n> - fill defaults for downstream columns if no value is present for the\n> column in the upstream row and the downstream column is nullable or has a\n> default (I think built-in logical rep does this one already)\n> - ignore values for columns in upstream data if the downstream table\n> lacks the column and the upstream value is null\n> - optionally allow apply to be configured to ignore non-null data in\n> upstream columns that're missing on downstream\n> - optionally allow apply to be configured to drop rows on the floor if\n> the downstream table is missing\n> - policies for handling data conflicts like duplicate PKs\n>\n> and we'd probably want ways to filter the apply data-stream to apply\n> changes for only a subset of tables, rows, etc at least in a later version.\n>\n> None of this is insurmountable. Most or all of the DDL replication support\n> and divergence-tolerance stuff is already done in production deployments\n> using pglogical3 and bdr3.\n>\n> While I can't share the code, I am happy to share the experience I have\n> gained from my part in working on these things. 
As you've probably recently\n> seen with the wiki article I wrote on physical/logical failover interop.\n>\n> You're free to take information like this and use it in wiki articles too.\n>\n> Right now I won't be able to launch into writing big patches for these\n> things, but I'll do my best to share what I can and review things.\n>\n> > This seems like a wild idea. But really, we have a super expensive NVMe\n> drives for OLTP workload. And use this devices to store buffer for data to\n> be dumped into MapReduce\\YT analytical system.\n>\n> It's not a wild idea at all, as noted above.\n>\n> In pglogical3 we already support streaming decoded WAL data to alternative\n> writer downstreams including RabbitMQ and Kafka via writer plugins.\n>\n\nActually CC'd Petr this time.",
"msg_date": "Mon, 7 Dec 2020 11:05:35 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 8:35 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Reply follows inline. I addressed your last point first, so it's out of order.\n>\n> On Fri, 4 Dec 2020 at 15:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote\n>\n> We'd also need to separate the existing apply worker into a \"receiver\" and \"apply/writer\" part, so the wire-protocol handling isn't tightly coupled with the actual change apply code, in order to make it possible to actually consume those archives and apply them to the database. In pglogical3 we did that by splitting them into two processes, connected by a shm_mq. Originally the process split was optional and you could run a combined receiver/writer process without the shm_mq if you wanted, but we quickly found it difficult to reliably handle locking issues etc that way so the writers all moved out-of-process.\n>\n> That was done mainly to make it possible to support parallelism in logical decoding apply. But we also have the intention of supporting an alternative reader process that can ingest \"logical archives\" and send them to the writer to apply them, as if they'd been received from the on-wire stream. That's not implemented at this time though. It'd be useful for a number of things:\n>\n> * PITR-style logical replay and recovery\n> * Ability to pre-decode a txn once on the upstream then send the buffered protocol-stream to multiple subscribers, saving on logical decoding and reorder buffering overheads and write-multiplication costs\n>\n\nI think doing parallel apply and ability to decode a txn once are\nreally good improvements independent of all the work you listed.\nThanks for sharing your knowledge.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 14:30:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "Thanks Craig!\nProbably, I should better ask in your nearby thread about logical replication, it just seemed to me that logical archiving is somewhat small independent piece of functionality...\n\n> 7 дек. 2020 г., в 08:05, Craig Ringer <craig.ringer@enterprisedb.com> написал(а):\n> \n> Reply follows inline. I addressed your last point first, so it's out of order.\n> \n> On Fri, 4 Dec 2020 at 15:33, Andrey Borodin <x4mmm@yandex-team.ru> wrote\n> \n> > If OLAP cannot consume data fast enough - we are out of space due to repl slot.\n> \n> There is a much simpler solution to this than logical PITR.\n> \n> What we should be doing is teaching xlogreader how to invoke the restore_command to fetch archived WALs for decoding.\n\n> Replication slots already have a WAL retention limit, but right now when that limit is reached the slot is invalidated and becomes useless, it's effectively dropped. Instead, if WAL archiving is enabled, we should leave the slot as valid. If a consumer of the slot needs WAL that no longer exists in pg_wal, we should have the walsender invoke the restore_command to read the missing WAL segment, decode it, and remove it again.\n> \n> This would not be a technically difficult patch, and it's IMO one of the more important ones for improving logical replication.\nCurrently we have restore_command in regular config, not in recovery.conf, so, probably, it should not be a very big deal to implement this.\n> \n> > I was discussing problems of CDC with scientific community and they asked this simple question: \"So you have efficient WAL archive on a very cheap storage, why don't you have a logical archive too?\"\n> \n> I've done work in this area, as has Petr (CC'd).\n> \n> In short, logical archiving and PITR is very much desirable, but we're not nearly ready for it yet and we're missing a lot of the foundations needed to make it really useful.\n> \n> IMO the strongest pre-requisite is that we need integrated DDL capture and replication 
in Pg. While this could be implemented in the publisher/subscriber logic for logical replication, it would make much more sense (IMO) to make it easier to feed DDL events into any logical replication output plugin.\n> \n> pglogical3 (the closed one) has quite comprehensive DDL replication support. Doing it is not simple though - there are plenty of complexities:\n> \n> * Reliably identifying the target objects and mapping them to replication set memberships for DML-replication\n> * Capturing, replicating and managing the search_path and other DDL execution context (DateStyle and much more) reliably\n> \t• Each statement type needs specific logic to indicate whether it needs DDL replication (and often filter functions since we have lots of sub-types where some need replication and some don't)\n> \t• Handling DDL affecting global objects in pg_global correctly, like those affecting roles, grants, database security labels etc. There's no one right answer for this, it depends on the deployment and requires the user to cooperate.\n> \t• Correct handling of transactions that mix DDL and DML (mostly only an issue for multimaster).\n> \t• Identifying statements that target a mix of replicated and non-replicated objects and handling them appropriately, including for CASCADEs\n> \t• Gracefully handling DDL statements that mix TEMPORARY and persistent targets. We can do this ok for DROPs but it still requires care. Anything else gets messier.\n> \t• Lack of hooks into table rewrite operations and the extremely clumsy and inefficient way logical decoding currently exposes decoding of the temp-table data during decoding of rewrites means handling table-rewriting DDL is difficult and impractical to do correctly. 
In pglogical we punt on it entirely and refuse to permit DDL that would rewrite a table except where we can prove it's reliant only on immutable inputs so we can discard the upstream rewrite and rely on statement replication.\n> \t• As a consequence of the above, reliably determining whether a given statement will cause a table rewrite.\n> \t• Handling re-entrant ProcessUtility_hook calls for ALTER TABLE etc.\n> \t• Handling TRUNCATE's pseudo-DDL pseudo-DML halfway state, doing something sensible for truncate cascade.\n> \t• Probably more I've forgotten\n> \n> If we don't handle these, then any logical change-log archives will become largely useless as soon as there's any schema change.\n> \n> So we kind of have to solve DDL replication first IMO.\n> \n> Some consideration is also required for metadata management. Right now relation and type metadata has session-lifetime, but you'd want to be able to discard old logical change-stream archives and have the later ones still be usable. So we'd need to define some kind of restartpoint where we repeat the metadata, or we'd have to support externalizing the metadata so it can be retained when the main change archives get aged out.\n> \n> We'd also need to separate the existing apply worker into a \"receiver\" and \"apply/writer\" part, so the wire-protocol handling isn't tightly coupled with the actual change apply code, in order to make it possible to actually consume those archives and apply them to the database. In pglogical3 we did that by splitting them into two processes, connected by a shm_mq. Originally the process split was optional and you could run a combined receiver/writer process without the shm_mq if you wanted, but we quickly found it difficult to reliably handle locking issues etc that way so the writers all moved out-of-process.\n> \n> That was done mainly to make it possible to support parallelism in logical decoding apply. 
But we also have the intention of supporting an alternative reader process that can ingest \"logical archives\" and send them to the writer to apply them, as if they'd been received from the on-wire stream. That's not implemented at this time though. It'd be useful for a number of things:\n> \n> * PITR-style logical replay and recovery\n> * Ability to pre-decode a txn once on the upstream then send the buffered protocol-stream to multiple subscribers, saving on logical decoding and reorder buffering overheads and write-multiplication costs\n> * ability to ingest change-streams generated by non-postgres sources so we could support streaming foreign-data ingestion, streaming OLAP and data warehousing, etc\n> \n> To make logical PITR more useful we'd also want to be a bit more tolerant of schema divergence, though that's not overly hard to do:\n> \t• fill defaults for downstream columns if no value is present for the column in the upstream row and the downstream column is nullable or has a default (I think built-in logical rep does this one already)\n> \t• ignore values for columns in upstream data if the downstream table lacks the column and the upstream value is null\n> \t• optionally allow apply to be configured to ignore non-null data in upstream columns that're missing on downstream\n> \t• optionally allow apply to be configured to drop rows on the floor if the downstream table is missing\n> \t• policies for handling data conflicts like duplicate PKs\n> and we'd probably want ways to filter the apply data-stream to apply changes for only a subset of tables, rows, etc at least in a later version.\n> \n> None of this is insurmountable. Most or all of the DDL replication support and divergence-tolerance stuff is already done in production deployments using pglogical3 and bdr3.\nI really like this wording for \"divergence-tolerance\" stuff, it captures problems I want to solve. 
I believe it's somewhat orthogonal to other issues.\n\n> \n> While I can't share the code, I am happy to share the experience I have gained from my part in working on these things. As you've probably recently seen with the wiki article I wrote on physical/logical failover interop.\n> \n> You're free to take information like this and use it in wiki articles too.\n> \n> Right now I won't be able to launch into writing big patches for these things, but I'll do my best to share what I can and review things.\n> \n> > This seems like a wild idea. But really, we have a super expensive NVMe drives for OLTP workload. And use this devices to store buffer for data to be dumped into MapReduce\\YT analytical system.\n> \n> It's not a wild idea at all, as noted above.\n> \n> In pglogical3 we already support streaming decoded WAL data to alternative writer downstreams including RabbitMQ and Kafka via writer plugins.\nYes, Yandex.Cloud Transfer Manager supports it too. But it has to be resynced after physical failover. And internal installations of YC have mandatory drills: a few times a month one datacenter is disconnected and failover happens for thousands of DBs.\n\nThank you for your input. Probably, I'll put some effort into loading missing WAL as a first step towards a bright future :)\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 8 Dec 2020 14:54:55 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Logical archiving"
},
{
"msg_contents": "On Tue, 8 Dec 2020 at 17:54, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n\n> > In pglogical3 we already support streaming decoded WAL data to\n> alternative writer downstreams including RabbitMQ and Kafka via writer\n> plugins.\n> Yes, Yandex.Cloud Transfer Manger supports it too. But it has to be\n> resynced after physical failover. And internal installation of YC have\n> mandatory drills: few times in a month one datacenter is disconnected and\n> failover happens for thousands a DBS.\n>\n\nYou'll want to look at\nhttps://wiki.postgresql.org/wiki/Logical_replication_and_physical_standby_failover#All-logical-replication_HA\nthen.",
"msg_date": "Wed, 9 Dec 2020 09:30:39 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical archiving"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nLet's say we want to strip the leading zero bytes from '\\x0000beefbabe00'::bytea.\n\nThis is currently not supported, since trim() for bytea values only support the BOTH mode:\n\nSELECT trim(LEADING '\\x00'::bytea FROM '\\x0000beefbabe00'::bytea);\nERROR: function pg_catalog.ltrim(bytea, bytea) does not exist\n\nThe attached patch adds LEADING | TRAILING support for the bytea version of trim():\n\nSELECT trim(LEADING '\\x00'::bytea FROM '\\x0000beefbabe00'::bytea);\n ltrim\n--------------\n\\xbeefbabe00\n\nSELECT trim(TRAILING '\\x00'::bytea FROM '\\x0000beefbabe00'::bytea);\n rtrim\n----------------\n\\x0000beefbabe\n\nBest regards,\n\nJoel Jacobson",
"msg_date": "Fri, 04 Dec 2020 17:30:43 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add support for leading/trailing bytea trim()ing"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> The attached patch adds LEADING | TRAILING support for the bytea version of trim():\n\nNo objection in principle, but you need to extend the code added by\ncommit 40c24bfef to know about these functions.\n\nThe grammar in the functions' descr strings seems a bit shaky too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 11:37:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for leading/trailing bytea trim()ing"
},
{
"msg_contents": "On Fri, Dec 4, 2020, at 17:37, Tom Lane wrote:\n>No objection in principle, but you need to extend the code added by\n>commit 40c24bfef to know about these functions.\n\nOh, I see, that's a very nice improvement.\n\nI've now added F_LTRIM_BYTEA_BYTEA and F_RTRIM_BYTEA_BYTEA to ruleutils.c accordingly,\nand also added regress tests to create_view.sql.\n\n>The grammar in the functions' descr strings seems a bit shaky too.\n\nNot sure what you mean? The grammar is unchanged, since it was already supported,\nbut the overloaded bytea functions were missing.\n\nI did however notice I forgot to update the description in func.sgml\nfor the bytea version of trim(). Maybe that's what you meant was shaky?\nI've changed the description to read:\n\n- <parameter>bytesremoved</parameter> from the start\n- and end of <parameter>bytes</parameter>.\n+ <parameter>bytesremoved</parameter> from the start,\n+ the end, or both ends of <parameter>bytes</parameter>.\n+ (<literal>BOTH</literal> is the default)\n\nNew patch attached.\n\n/Joel",
"msg_date": "Fri, 04 Dec 2020 19:44:37 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add support for leading/trailing bytea trim()ing"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Fri, Dec 4, 2020, at 17:37, Tom Lane wrote:\n>> The grammar in the functions' descr strings seems a bit shaky too.\n\n> Not sure what you mean?\n\n\"trim left ends\" (plural) seems wrong. A string only has one left end,\nat least in my universe.\n\n(Maybe the existing ltrim/rtrim descrs are also like this, but if so\nI'd change them too.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 16:02:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for leading/trailing bytea trim()ing"
},
{
"msg_contents": "On Fri, Dec 4, 2020, at 22:02, Tom Lane wrote:\n>\"trim left ends\" (plural) seems wrong. A string only has one left end,\n>at least in my universe.\n\nFixed, the extra \"s\" came from copying from btrim()'s description.\n\n>(Maybe the existing ltrim/rtrim descrs are also like this, but if so\n>I'd change them too.)\n\nThey weren't, but I think the description for the bytea functions\ncan be improved to have a more precise description\nif we take inspiration from the text functions.\n\nHere is an overview of all functions containing \"trim\" in the function name,\nto get the full picture of the trim description terminology:\n\nSELECT\n oid,\n pg_describe_object('pg_proc'::regclass,oid,0),\n pg_catalog.obj_description(oid, 'pg_proc')\nFROM pg_proc\nWHERE proname LIKE '%trim%'\nORDER BY oid;\n\noid | pg_describe_object | obj_description\n------+------------------------------+----------------------------------------------------------\n 875 | function ltrim(text,text) | trim selected characters from left end of string\n 876 | function rtrim(text,text) | trim selected characters from right end of string\n 881 | function ltrim(text) | trim spaces from left end of string\n 882 | function rtrim(text) | trim spaces from right end of string\n 884 | function btrim(text,text) | trim selected characters from both ends of string\n 885 | function btrim(text) | trim spaces from both ends of string\n2015 | function btrim(bytea,bytea) | trim both ends of string\n5043 | function trim_scale(numeric) | numeric with minimum scale needed to represent the value\n\nDo we want the two new functions to derive their description from the existing bytea function?\n\n9612 | function ltrim(bytea,bytea) | trim left end of string\n9613 | function rtrim(bytea,bytea) | trim right end of string\n\nPatch with this wording: leading-trailing-trim-bytea-left-right-end-of-string.patch\n\nOr would it be better to be inspired by the more precise descriptions for the two-parameter text 
functions,\nand to change the existing btrim() function's description as well?\n\n2015 | function btrim(bytea,bytea) | trim selected bytes from both ends of string\n9612 | function ltrim(bytea,bytea) | trim selected bytes from left end of string\n9613 | function rtrim(bytea,bytea) | trim selected bytes from right end of string\n\nPatch with this wording: leading-trailing-trim-bytea-selected-bytes.patch\n\nBest regards,\n\nJoel",
"msg_date": "Sat, 05 Dec 2020 08:22:13 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add support for leading/trailing bytea trim()ing"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Fri, Dec 4, 2020, at 22:02, Tom Lane wrote:\n>> (Maybe the existing ltrim/rtrim descrs are also like this, but if so\n>> I'd change them too.)\n\n> They weren't, but I think the description for the bytea functions\n> can be improved to have a more precise description\n> if we take inspiration from the the text functions.\n\nYeah, I agree with making the bytea descriptions look like the\ntext ones. Pushed with minor additional doc fixes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jan 2021 15:13:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for leading/trailing bytea trim()ing"
}
] |
[
{
"msg_contents": "\nHi pgsql-hackers,\n\nI have a relatively trivial proposal - this affects pg_dumpall exclusively. Primary use case is the ability to use pg_dumpall without SUPERUSER.\n\n* Add --no-alter-role flag to only use CREATE ROLE syntax instead of CREATE then ALTER.\n* Add --exclude-role flag similar to --exclude-database, semantically equivalent but applying to ROLEs.\n* Add --no-granted-by flag to explicitly omit GRANTED BY clauses.\n* Likely controversial - add --merge-credentials-file which loads ROLE/PASSWORD combinations from an ini file and adds to dump output if ROLE password not present. Implemented with an external library, inih.\n\nAll together, based against REL_12_STABLE:\nhttps://github.com/remingtonc/postgres/compare/REL_12_STABLE...remingtonc:REL_12_STABLE_DUMPALL_CLOUDSQL\n\nExample usage against GCP Cloud SQL:\npg_dumpall --host=$HOST --username=$USER --no-password \\\n --no-role-passwords --merge-credentials-file=$CREDENTIALS_PATH \\\n --quote-all-identifiers --no-comments --no-alter-role --no-granted-by \\\n --exclude-database=postgres\\* --exclude-database=template\\* --exclude-database=cloudsql\\* \\\n --exclude-role=cloudsql\\* --exclude-role=postgres\\* \\\n --file=$DUMP_PATH\n\nBefore I go to base against master and split into individual patches - does this seem reasonable?\n\nBest,\nRemington\n\n\n",
"msg_date": "Fri, 04 Dec 2020 20:57:03 GMT",
"msg_from": "code@remington.io",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_dumpall options proposal/patch"
},
{
"msg_contents": "code@remington.io writes:\n> I have a relatively trivial proposal - this affects pg_dumpall exclusively. Primary use case in ability to use pg_dumpall without SUPERUSER.\n\n> * Add --no-alter-role flag to only use CREATE ROLE syntax instead of CREATE then ALTER.\n\nWhat's the point of that?\n\n> * Likely controversial - add --merge-credentials-file which loads ROLE/PASSWORD combinations from an ini file and adds to dump output if ROLE password not present. Implemented with an external library, inih.\n\nIf it requires an external library, it's probably DOA, regardless of\nwhether there's a compelling use-case (which you didn't present anyway).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 16:43:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dumpall options proposal/patch"
}
] |
[
{
"msg_contents": "I think it's time for $SUBJECT. We added this GUC in 9.5, which\nwill be EOL by the time of our next major release, and it was never\nmeant as more than a transitional aid. Moreover, it's been buggy\nas heck (cf abb164655, 05104f693, 01e0cbc4f, 4cae471d1), and the\nfact that some of those errors went undetected for years shows that\nit's not really gotten much field usage.\n\nHence, I propose the attached. Comments?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 04 Dec 2020 16:39:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Removal of operator_precedence_warning"
},
{
"msg_contents": "On 2020-Dec-04, Tom Lane wrote:\n\n> I think it's time for $SUBJECT. We added this GUC in 9.5, which\n> will be EOL by the time of our next major release, and it was never\n> meant as more than a transitional aid. Moreover, it's been buggy\n> as heck (cf abb164655, 05104f693, 01e0cbc4f, 4cae471d1), and the\n> fact that some of those errors went undetected for years shows that\n> it's not really gotten much field usage.\n> \n> Hence, I propose the attached. Comments?\n\nI wonder if it'd be fruitful to ask the submitters of those bugs about\ntheir experiences with the feature. Did they find it useful in finding\nprecedence problems in their code? Did they experience other problems\nthat they didn't report?\n\nReading the reports mentioned in those commits, it doesn't look like any\nof them were actually using the feature -- they all seem to have come\nacross the problems by accidents of varying nature.\n\n\n\n",
"msg_date": "Fri, 4 Dec 2020 19:01:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Removal of operator_precedence_warning"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Reading the reports mentioned in those commits, it doesn't look like any\n> of them were actually using the feature -- they all seem to have come\n> across the problems by accidents of varying nature.\n\nThe two oldest reports look like the submitters had\noperator_precedence_warning turned on in normal use, which is reasonable\ngiven that was early 9.5.x days. The third one looks like it was a test\nsetup, while the latest bug sounds like it was found by code inspection\nnot by stumbling over the misbehavior. So people did use it, at least\nfor awhile. But anyone who's going directly from 9.4 or earlier to v14\nis going to have lots more compatibility issues to worry about besides\nprecedence.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 17:21:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removal of operator_precedence_warning"
}
] |
[
{
"msg_contents": "Hi, David:\nFor nodeResultCache.c :\n\n+#define SH_EQUAL(tb, a, b) ResultCacheHash_equal(tb, a, b) == 0\n\nI think it would be safer if the comparison is enclosed in parentheses (in\ncase the macro appears in composite condition).\n\n+ResultCacheHash_equal(struct resultcache_hash *tb, const ResultCacheKey\n*key1,\n+ const ResultCacheKey *key2)\n\nSince key2 is not used, maybe name it unused_key ?\n\n+ /* Make a guess at a good size when we're not given a valid size. */\n+ if (size == 0)\n+ size = 1024;\n\nShould the default size be logged ?\n\n+ /* Update the memory accounting */\n+ rcstate->mem_used -= freed_mem;\n\nMaybe add an assertion that mem_used is >= 0 after the decrement (there is\nan assertion in remove_cache_entry however, that assertion is after another\ndecrement).\n\n+ * 'specialkey', if not NULL, causes the function to return false if the\nentry\n+ * entry which the key belongs to is removed from the cache.\n\nduplicate entry (one at the end of first line and one at the beginning of\nsecond line).\n\nFor cache_lookup(), new key is allocated before checking\nwhether rcstate->mem_used > rcstate->mem_upperlimit. It seems new entries\nshould probably have the same size.\nCan we check whether upper limit is crossed (assuming the addition of new\nentry) before allocating new entry ?\n\n+ if (unlikely(!cache_reduce_memory(rcstate, key)))\n+ return NULL;\n\nDoes the new entry need to be released in the above case?\n\nCheers",
"msg_date": "Fri, 4 Dec 2020 17:09:06 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": "There are two blocks with almost identical code (second occurrence in\ncache_store_tuple):\n\n+ if (rcstate->mem_used > rcstate->mem_upperlimit)\n+ {\n\nIt would be nice if the code can be extracted to a method and shared.\n\n node->rc_status = RC_END_OF_SCAN;\n return NULL;\n }\n else\n\nThere are several places where the else keyword for else block can be\nomitted because the if block ends with return.\nThis would allow the code in else block to move leftward (for easier\nreading).\n\n if (!get_op_hash_functions(hashop, &left_hashfn, &right_hashfn))\n\nI noticed that right_hashfn isn't used. Would this cause some warning from\nthe compiler (for some compiler the warning would be treated as error) ?\nMaybe NULL can be passed as the last parameter. The return value\nof get_op_hash_functions would keep the current meaning (find both hash\nfn's).\n\n rcstate->mem_lowerlimit = rcstate->mem_upperlimit * 0.98;\n\nMaybe (in subsequent patch) GUC variable can be introduced for tuning the\nconstant 0.98.\n\nFor +paraminfo_get_equal_hashops :\n\n+ else\n+ Assert(false);\n\nAdd elog would be good for debugging.\n\nCheers\n\nOn Fri, Dec 4, 2020 at 5:09 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi, David:\n> For nodeResultCache.c :\n>\n> +#define SH_EQUAL(tb, a, b) ResultCacheHash_equal(tb, a, b) == 0\n>\n> I think it would be safer if the comparison is enclosed in parentheses (in\n> case the macro appears in composite condition).\n>\n> +ResultCacheHash_equal(struct resultcache_hash *tb, const ResultCacheKey\n> *key1,\n> + const ResultCacheKey *key2)\n>\n> Since key2 is not used, maybe name it unused_key ?\n>\n> + /* Make a guess at a good size when we're not given a valid size. 
*/\n> + if (size == 0)\n> + size = 1024;\n>\n> Should the default size be logged ?\n>\n> + /* Update the memory accounting */\n> + rcstate->mem_used -= freed_mem;\n>\n> Maybe add an assertion that mem_used is >= 0 after the decrement (there is\n> an assertion in remove_cache_entry however, that assertion is after another\n> decrement).\n>\n> + * 'specialkey', if not NULL, causes the function to return false if the\n> entry\n> + * entry which the key belongs to is removed from the cache.\n>\n> duplicate entry (one at the end of first line and one at the beginning of\n> second line).\n>\n> For cache_lookup(), new key is allocated before checking\n> whether rcstate->mem_used > rcstate->mem_upperlimit. It seems new entries\n> should probably have the same size.\n> Can we check whether upper limit is crossed (assuming the addition of new\n> entry) before allocating new entry ?\n>\n> + if (unlikely(!cache_reduce_memory(rcstate, key)))\n> + return NULL;\n>\n> Does the new entry need to be released in the above case?\n>\n> Cheers\n>",
"msg_date": "Fri, 4 Dec 2020 19:51:48 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Sat, 5 Dec 2020 at 14:08, Zhihong Yu <zyu@yugabyte.com> wrote:\n> +#define SH_EQUAL(tb, a, b) ResultCacheHash_equal(tb, a, b) == 0\n>\n> I think it would be safer if the comparison is enclosed in parentheses (in case the macro appears in composite condition).\n\nThat seems fair. Likely it might be nicer if simplehash.h played it\nsafe and put usages of the macro in additional parenthesis. I see a\nbit of a mix of other places where we #define SH_EQUAL. It looks like\nthe one in execGrouping.c and tidbitmap.c are also missing the\nadditional parenthesis.\n\n> +ResultCacheHash_equal(struct resultcache_hash *tb, const ResultCacheKey *key1,\n> + const ResultCacheKey *key2)\n>\n> Since key2 is not used, maybe name it unused_key ?\n\nI'm not so sure it's a great change. The only place where people see\nthat is in the same area that mentions \" 'key2' is never used\"\n\n> + /* Make a guess at a good size when we're not given a valid size. */\n> + if (size == 0)\n> + size = 1024;\n>\n> Should the default size be logged ?\n\nI'm not too sure if I know what you mean here. Should it be a power of\n2? It is. Or do you mean I shouldn't use a magic number?\n\n> + /* Update the memory accounting */\n> + rcstate->mem_used -= freed_mem;\n>\n> Maybe add an assertion that mem_used is >= 0 after the decrement (there is an assertion in remove_cache_entry however, that assertion is after another decrement).\n\nGood idea.\n\n> + * 'specialkey', if not NULL, causes the function to return false if the entry\n> + * entry which the key belongs to is removed from the cache.\n>\n> duplicate entry (one at the end of first line and one at the beginning of second line).\n\nWell spotted.\n\n> For cache_lookup(), new key is allocated before checking whether rcstate->mem_used > rcstate->mem_upperlimit. 
It seems new entries should probably have the same size.\n> Can we check whether upper limit is crossed (assuming the addition of new entry) before allocating new entry ?\n\nI'd like to leave this as is. If we were to check if we've gone over\nmemory budget before the resultcache_insert() then we're doing a\nmemory check even for cache hits. That's additional effort. I'd prefer\nonly to check if we've gone over the memory budget in cases where\nwe've actually allocated more memory.\n\nIn each case we can go one allocation over budget, so I don't see what\ndoing the check beforehand gives us.\n\n> + if (unlikely(!cache_reduce_memory(rcstate, key)))\n> + return NULL;\n>\n> Does the new entry need to be released in the above case?\n\nNo. cache_reduce_memory returning false will have removed \"key\" from the cache.\n\nI'll post an updated patch on the main thread once I've looked at your\nfollowup review.\n\nDavid\n\n\n",
"msg_date": "Mon, 7 Dec 2020 13:33:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": "On Sat, 5 Dec 2020 at 16:51, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> There are two blocks with almost identical code (second occurrence in cache_store_tuple):\n>\n> + if (rcstate->mem_used > rcstate->mem_upperlimit)\n> + {\n> It would be nice if the code can be extracted to a method and shared.\n\nIt's true, they're *almost* identical. I quite like the fact that one\nof the cases can have an unlikely() macro in there. It's pretty\nunlikely that we'd go into cache overflow just when adding a new cache\nentry. work_mem would likely have to be set to a couple of dozen bytes\nfor that to happen. 64k is the lowest it can be set. However, I\ndidn't really check to see if having that unlikely() macro increases\nperformance. I've changed things locally here to add a new function\nnamed cache_check_mem(). I'll keep that for now, but I'm not sure if\nthere's enough code there to warrant a function. The majority of the\nadditional lines are from the comment being duplicated.\n\n> node->rc_status = RC_END_OF_SCAN;\n> return NULL;\n> }\n> else\n>\n> There are several places where the else keyword for else block can be omitted because the if block ends with return.\n> This would allow the code in else block to move leftward (for easier reading).\n\nOK, I've removed the \"else\" in places where it can be removed.\n\n> if (!get_op_hash_functions(hashop, &left_hashfn, &right_hashfn))\n>\n> I noticed that right_hashfn isn't used. Would this cause some warning from the compiler (for some compiler the warning would be treated as error) ?\n> Maybe NULL can be passed as the last parameter. The return value of get_op_hash_functions would keep the current meaning (find both hash fn's).\n\nIt's fine not to use output parameters. It's not the only one in the\ncode base ignoring one from that very function. 
See\nexecTuplesHashPrepare().\n\n> rcstate->mem_lowerlimit = rcstate->mem_upperlimit * 0.98;\n>\n> Maybe (in subsequent patch) GUC variable can be introduced for tuning the constant 0.98.\n\nI don't think exposing such knobs for users to adjust is a good idea.\nCan you think of a case where you'd want to change it? Or do you think\n98% is not a good number?\n\n>\n> For +paraminfo_get_equal_hashops :\n>\n> + else\n> + Assert(false);\n\nI'm keen to leave it like it is. I don't see any need to bloat the\ncompiled code with an elog(ERROR).\n\nThere's a comment in RelOptInfo.lateral_vars that mentions:\n\n/* LATERAL Vars and PHVs referenced by rel */\n\nSo, if anyone, in the future, wants to add some other node type to\nthat list then they'll have a bit more work to do. Plus, I'm only\ndoing the same as what's done in create_lateral_join_info().\n\nI'll run the updated patch which includes the cache_check_mem()\nfunction for a bit and post an updated patch to the main thread a bit\nlater.\n\nThanks for having a look at this patch.\n\nDavid\n\n\n",
"msg_date": "Mon, 7 Dec 2020 14:15:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": "> + /* Make a guess at a good size when we're not given a valid size. */\n> + if (size == 0)\n> + size = 1024;\n>\n> Should the default size be logged ?\n\n> I'm not too sure if I know what you mean here. Should it be a power of\n> 2? It is. Or do you mean I shouldn't use a magic number?\n\nUsing 1024 seems to be fine. I meant logging the default value of 1024 so\nthat user / dev can have better idea the actual value used (without looking\nat the code).\n\n>> Or do you think 98% is not a good number?\n\nSince you have played with Result Cache, I would trust your judgment in\nchoosing the percentage.\nIt is fine not to expose this constant until the need arises.\n\nCheers\n\nOn Sun, Dec 6, 2020 at 5:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sat, 5 Dec 2020 at 16:51, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > There are two blocks with almost identical code (second occurrence in\n> cache_store_tuple):\n> >\n> > + if (rcstate->mem_used > rcstate->mem_upperlimit)\n> > + {\n> > It would be nice if the code can be extracted to a method and shared.\n>\n> It's true, they're *almost* identical. I quite like the fact that one\n> of the cases can have an unlikely() macro in there. It's pretty\n> unlikely that we'd go into cache overflow just when adding a new cache\n> entry. work_mem would likely have to be set to a couple of dozen bytes\n> for that to happen. 64k is the lowest it can be set. However, I\n> didn't really check to see if having that unlikely() macro increases\n> performance. I've changed things locally here to add a new function\n> named cache_check_mem(). I'll keep that for now, but I'm not sure if\n> there's enough code there to warrant a function. 
The majority of the\n> additional lines are from the comment being duplicated.\n>\n> > node->rc_status = RC_END_OF_SCAN;\n> > return NULL;\n> > }\n> > else\n> >\n> > There are several places where the else keyword for else block can be\n> omitted because the if block ends with return.\n> > This would allow the code in else block to move leftward (for easier\n> reading).\n>\n> OK, I've removed the \"else\" in places where it can be removed.\n>\n> > if (!get_op_hash_functions(hashop, &left_hashfn, &right_hashfn))\n> >\n> > I noticed that right_hashfn isn't used. Would this cause some warning\n> from the compiler (for some compiler the warning would be treated as error)\n> ?\n> > Maybe NULL can be passed as the last parameter. The return value of\n> get_op_hash_functions would keep the current meaning (find both hash fn's).\n>\n> It's fine not to use output parameters. It's not the only one in the\n> code base ignoring one from that very function. See\n> execTuplesHashPrepare().\n>\n> > rcstate->mem_lowerlimit = rcstate->mem_upperlimit * 0.98;\n> >\n> > Maybe (in subsequent patch) GUC variable can be introduced for tuning\n> the constant 0.98.\n>\n> I don't think exposing such knobs for users to adjust is a good idea.\n> Can you think of a case where you'd want to change it? Or do you think\n> 98% is not a good number?\n>\n> >\n> > For +paraminfo_get_equal_hashops :\n> >\n> > + else\n> > + Assert(false);\n>\n> I'm keen to leave it like it is. I don't see any need to bloat the\n> compiled code with an elog(ERROR).\n>\n> There's a comment in RelOptInfo.lateral_vars that mentions:\n>\n> /* LATERAL Vars and PHVs referenced by rel */\n>\n> So, if anyone, in the future, wants to add some other node type to\n> that list then they'll have a bit more work to do. 
Plus, I'm only\n> doing the same as what's done in create_lateral_join_info().\n>\n> I'll run the updated patch which includes the cache_check_mem()\n> function for a bit and post an updated patch to the main thread a bit\n> later.\n>\n> Thanks for having a look at this patch.\n>\n> David\n>",
"msg_date": "Sun, 6 Dec 2020 17:25:49 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": "On Mon, 7 Dec 2020 at 14:25, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> > + /* Make a guess at a good size when we're not given a valid size. */\n> > + if (size == 0)\n> > + size = 1024;\n> >\n> > Should the default size be logged ?\n>\n> > I'm not too sure if I know what you mean here. Should it be a power of\n> > 2? It is. Or do you mean I shouldn't use a magic number?\n>\n> Using 1024 seems to be fine. I meant logging the default value of 1024 so that user / dev can have better idea the actual value used (without looking at the code).\n\nOh, right. In EXPLAIN ANALYZE. Good point. I wonder if that's going\nto be interesting enough to show.\n\n> >> Or do you think 98% is not a good number?\n>\n> Since you have played with Result Cache, I would trust your judgment in choosing the percentage.\n> It is fine not to expose this constant until the need arises.\n\nI did some experimentation with different values on a workload that\nnever gets a cache hit. and just always evicts the oldest entry.\nThere's only very slight changes in performance between 90%, 98% and\n100% with 1MB work_mem.\n\ntimes in milliseconds measured over 60 seconds on each run.\n\n 90% 98% 100%\nrun1 2318 2339 2344\nrun2 2339 2333 2309\nrun3 2357 2339 2346\navg (ms) 2338 2337 2333\n\nPerhaps this is an argument for just removing the logic that has the\nsoft upper limit and just have it do cache evictions after each\nallocation after the cache first fills.\n\nSetup: same tables as [1]\nalter table hundredk alter column hundredk set (n_distinct = 10);\nanalyze hundredk;\nalter system set work_mem = '1MB';\nselect pg_reload_conf();\n\nQuery\nselect count(*) from hundredk hk inner join lookup l on hk.hundredk = l.a;\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n\n\n",
"msg_date": "Tue, 8 Dec 2020 14:27:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
},
{
"msg_contents": ">> just removing the logic that has the\nsoft upper limit and just have it do cache evictions after each\nallocation after the cache first fills\n\nYeah - having one fewer limit would simplify the code.\n\nCheers\n\nOn Mon, Dec 7, 2020 at 5:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 7 Dec 2020 at 14:25, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > > + /* Make a guess at a good size when we're not given a valid size.\n> */\n> > > + if (size == 0)\n> > > + size = 1024;\n> > >\n> > > Should the default size be logged ?\n> >\n> > > I'm not too sure if I know what you mean here. Should it be a power of\n> > > 2? It is. Or do you mean I shouldn't use a magic number?\n> >\n> > Using 1024 seems to be fine. I meant logging the default value of 1024\n> so that user / dev can have better idea the actual value used (without\n> looking at the code).\n>\n> Oh, right. In EXPLAIN ANALYZE. Good point. I wonder if that's going\n> to be interesting enough to show.\n>\n> > >> Or do you think 98% is not a good number?\n> >\n> > Since you have played with Result Cache, I would trust your judgment in\n> choosing the percentage.\n> > It is fine not to expose this constant until the need arises.\n>\n> I did some experimentation with different values on a workload that\n> never gets a cache hit. 
and just always evicts the oldest entry.\n> There's only very slight changes in performance between 90%, 98% and\n> 100% with 1MB work_mem.\n>\n> times in milliseconds measured over 60 seconds on each run.\n>\n> 90% 98% 100%\n> run1 2318 2339 2344\n> run2 2339 2333 2309\n> run3 2357 2339 2346\n> avg (ms) 2338 2337 2333\n>\n> Perhaps this is an argument for just removing the logic that has the\n> soft upper limit and just have it do cache evictions after each\n> allocation after the cache first fills.\n>\n> Setup: same tables as [1]\n> alter table hundredk alter column hundredk set (n_distinct = 10);\n> analyze hundredk;\n> alter system set work_mem = '1MB';\n> select pg_reload_conf();\n>\n> Query\n> select count(*) from hundredk hk inner join lookup l on hk.hundredk = l.a;\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n>",
"msg_date": "Mon, 7 Dec 2020 18:54:22 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
}
] |
[
{
"msg_contents": "The attached patch changes definitions like\n\n #define FOO 0x01\n #define BAR 0x02\n\nto\n\n #define FOO (1 << 0)\n #define BAR (1 << 1)\n\netc.\n\nBoth styles are currently in use, but the latter style seems more \nreadable and easier to update.\n\nThis change only addresses bitmaps used in memory (e.g., for parsing or \nspecific function APIs), where the actual bits don't really matter. \nBits that might go on disk weren't touched. There, defining the bits in \na more concrete way seems better.",
"msg_date": "Sat, 5 Dec 2020 16:30:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The attached patch changes definitions like\n> #define FOO 0x01\n> #define BAR 0x02\n> to\n> #define FOO (1 << 0)\n> #define BAR (1 << 1)\n> etc.\n\n> Both styles are currently in use, but the latter style seems more \n> readable and easier to update.\n\nFWIW, personally I'd vote for doing the exact opposite. When you are\ndebugging and examining the contents of a bitmask variable, it's easier to\ncorrelate a value like \"0x03\" with definitions made in the former style.\nOr at least I think so; maybe others see it differently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Dec 2020 13:03:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "On 2020-Dec-05, Tom Lane wrote:\n\n> FWIW, personally I'd vote for doing the exact opposite. When you are\n> debugging and examining the contents of a bitmask variable, it's easier to\n> correlate a value like \"0x03\" with definitions made in the former style.\n> Or at least I think so; maybe others see it differently.\n\nThe hexadecimal representation is more natural to me than bit-shifting,\nso I would prefer to use that style too. But maybe I'm trained to it\nbecause of looking at t_infomask symbols constantly.\n\n\n",
"msg_date": "Sat, 5 Dec 2020 22:31:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "On Sat, 2020-12-05 at 13:03 -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> \n> > The attached patch changes definitions like\n> > #define FOO 0x01\n> > #define BAR 0x02\n> > to\n> > #define FOO (1 << 0)\n> > #define BAR (1 << 1)\n> > etc.\n> \n> > Both styles are currently in use, but the latter style seems more \n> > readable and easier to update.\n> \n> FWIW, personally I'd vote for doing the exact opposite. When you are\n> debugging and examining the contents of a bitmask variable, it's easier to\n> correlate a value like \"0x03\" with definitions made in the former style.\n> Or at least I think so; maybe others see it differently.\n\n+1\n\nLaurenz Albe\n\n\n\n",
"msg_date": "Sun, 06 Dec 2020 06:22:27 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "On Sat, Dec 05, 2020 at 10:31:09PM -0300, Alvaro Herrera wrote:\n> The hexadecimal representation is more natural to me than bit-shifting,\n> so I would prefer to use that style too. But maybe I'm trained to it\n> because of looking at t_infomask symbols constantly.\n\nIf we are going to change all that, hexa style sounds good to me too.\nWould it be worth an addition to the docs, say in [1] to tell that\nthis is a preferred style? \n\n[1]: https://www.postgresql.org/docs/devel/source-conventions.html?\n--\nMichael",
"msg_date": "Sun, 6 Dec 2020 15:25:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "On Sun, Dec 6, 2020 at 1:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Dec 05, 2020 at 10:31:09PM -0300, Alvaro Herrera wrote:\n> > The hexadecimal representation is more natural to me than bit-shifting,\n> > so I would prefer to use that style too. But maybe I'm trained to it\n> > because of looking at t_infomask symbols constantly.\n>\n> If we are going to change all that, hexa style sounds good to me too.\n> Would it be worth an addition to the docs, say in [1] to tell that\n> this is a preferred style?\n>\n> [1]: https://www.postgresql.org/docs/devel/source-conventions.html?\n> --\n> Michael\n\n\n\nIn my view the bit shifting approach makes it more obvious a single bit is\nbeing set, but on the other hand the hex approach makes it easier to\ncompare in debugging.\n\nI’m not really sure which to prefer, though I think I would have leaned\nslightly towards the former.\n\nJames\n\n>\n\nOn Sun, Dec 6, 2020 at 1:25 AM Michael Paquier <michael@paquier.xyz> wrote:On Sat, Dec 05, 2020 at 10:31:09PM -0300, Alvaro Herrera wrote:\n> The hexadecimal representation is more natural to me than bit-shifting,\n> so I would prefer to use that style too. But maybe I'm trained to it\n> because of looking at t_infomask symbols constantly.\n\nIf we are going to change all that, hexa style sounds good to me too.\nWould it be worth an addition to the docs, say in [1] to tell that\nthis is a preferred style? \n\n[1]: https://www.postgresql.org/docs/devel/source-conventions.html?\n--\nMichaelIn my view the bit shifting approach makes it more obvious a single bit is being set, but on the other hand the hex approach makes it easier to compare in debugging. I’m not really sure which to prefer, though I think I would have leaned slightly towards the former. James",
"msg_date": "Sun, 6 Dec 2020 11:44:43 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
},
{
"msg_contents": "\nOn 12/6/20 11:44 AM, James Coleman wrote:\n> On Sun, Dec 6, 2020 at 1:25 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n>\n> On Sat, Dec 05, 2020 at 10:31:09PM -0300, Alvaro Herrera wrote:\n> > The hexadecimal representation is more natural to me than\n> bit-shifting,\n> > so I would prefer to use that style too. But maybe I'm trained\n> to it\n> > because of looking at t_infomask symbols constantly.\n>\n> If we are going to change all that, hexa style sounds good to me too.\n> Would it be worth an addition to the docs, say in [1] to tell that\n> this is a preferred style?\n>\n> [1]: https://www.postgresql.org/docs/devel/source-conventions.html\n> <https://www.postgresql.org/docs/devel/source-conventions.html>?\n> --\n> Michael\n>\n>\n>\n> In my view the bit shifting approach makes it more obvious a single\n> bit is being set, but on the other hand the hex approach makes it\n> easier to compare in debugging. \n>\n> I’m not really sure which to prefer, though I think I would have\n> leaned slightly towards the former. \n>\n>\n\nPerhaps we should put one style or the other in a comment. I take Tom's\npoint, but after the number of bits shifted gets above some number I\nhave trouble remembering which bit it is, and while of course I can work\nit out, it can be a very minor nuisance.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Sun, 6 Dec 2020 12:16:31 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Change definitions of bitmap flags to bit-shifting style"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nDue to the error in PG-ProEE we have added the following test to \npg_visibility:\n\ncreate table vacuum_test as select 42 i;\nvacuum vacuum_test;\nselect count(*) > 0 from pg_check_visible('vacuum_test');\ndrop table vacuum_test;\n\nSometime (very rarely) this test failed: pg_visibility reports \n\"corrupted\" tuples.\nThe same error can be reproduced not only with PG-Pro but also with \nvanilla REL_11_STABLE, REL_12_STABLE and REL_13_STABLE.\nIt is not reproduced with master after Andres snapshot optimization - \ncommit dc7420c2.\n\nIt is not so easy to reproduce this error: it is necessary to repeat \nthis tests many times.\nProbability increased with more aggressive autovacuum settings.\nBut even with such settings and thousands of iterations I was not able \nto reproduce this error at my notebook - only at virtual machine.\n\nThe error is reported here:\n\n /*\n * If we're checking whether the page is all-visible, we expect\n * the tuple to be all-visible.\n */\n if (check_visible &&\n !tuple_all_visible(&tuple, OldestXmin, buffer))\n {\n TransactionId RecomputedOldestXmin;\n\n /*\n * Time has passed since we computed OldestXmin, so it's\n * possible that this tuple is all-visible in reality even\n * though it doesn't appear so based on our\n * previously-computed value. Let's compute a new \nvalue so we\n * can be certain whether there is a problem.\n *\n * From a concurrency point of view, it sort of sucks to\n * retake ProcArrayLock here while we're holding the buffer\n * exclusively locked, but it should be safe against\n * deadlocks, because surely GetOldestXmin() should \nnever take\n * a buffer lock. 
And this shouldn't happen often, so it's\n * worth being careful so as to avoid false positives.\n */\n RecomputedOldestXmin = GetOldestXmin(NULL, \nPROCARRAY_FLAGS_VACUUM);\n\n if (!TransactionIdPrecedes(OldestXmin, \nRecomputedOldestXmin))\n record_corrupt_item(items, &tuple.t_self);\n\n\nI debugger I have checked that OldestXmin = RecomputedOldestXmin = \ntuple.t_data->xmin\nI wonder if this check in pg_visibility is really correct and it can not \nhappen that OldestXmin=tuple.t_data->xmin?\nPlease notice that tuple_all_visible returns false if \n!TransactionIdPrecedes(xmin, OldestXmin)\n\nThanks in advance,\nKonstantin\n\n\n\n\n",
"msg_date": "Sun, 6 Dec 2020 23:50:51 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Wrong check in pg_visibility?"
},
{
"msg_contents": "On 06.12.2020 23:50, Konstantin Knizhnik wrote:\n> Hi hackers!\n>\n> Due to the error in PG-ProEE we have added the following test to \n> pg_visibility:\n>\n> create table vacuum_test as select 42 i;\n> vacuum vacuum_test;\n> select count(*) > 0 from pg_check_visible('vacuum_test');\n> drop table vacuum_test;\n>\n> Sometime (very rarely) this test failed: pg_visibility reports \n> \"corrupted\" tuples.\n> The same error can be reproduced not only with PG-Pro but also with \n> vanilla REL_11_STABLE, REL_12_STABLE and REL_13_STABLE.\n> It is not reproduced with master after Andres snapshot optimization - \n> commit dc7420c2.\n>\n> It is not so easy to reproduce this error: it is necessary to repeat \n> this tests many times.\n> Probability increased with more aggressive autovacuum settings.\n> But even with such settings and thousands of iterations I was not able \n> to reproduce this error at my notebook - only at virtual machine.\n>\n> The error is reported here:\n>\n> /*\n> * If we're checking whether the page is all-visible, we \n> expect\n> * the tuple to be all-visible.\n> */\n> if (check_visible &&\n> !tuple_all_visible(&tuple, OldestXmin, buffer))\n> {\n> TransactionId RecomputedOldestXmin;\n>\n> /*\n> * Time has passed since we computed OldestXmin, so it's\n> * possible that this tuple is all-visible in reality \n> even\n> * though it doesn't appear so based on our\n> * previously-computed value. Let's compute a new \n> value so we\n> * can be certain whether there is a problem.\n> *\n> * From a concurrency point of view, it sort of sucks to\n> * retake ProcArrayLock here while we're holding the \n> buffer\n> * exclusively locked, but it should be safe against\n> * deadlocks, because surely GetOldestXmin() should \n> never take\n> * a buffer lock. 
And this shouldn't happen often, so \n> it's\n> * worth being careful so as to avoid false positives.\n> */\n> RecomputedOldestXmin = GetOldestXmin(NULL, \n> PROCARRAY_FLAGS_VACUUM);\n>\n> if (!TransactionIdPrecedes(OldestXmin, \n> RecomputedOldestXmin))\n> record_corrupt_item(items, &tuple.t_self);\n>\n>\n> I debugger I have checked that OldestXmin = RecomputedOldestXmin = \n> tuple.t_data->xmin\n> I wonder if this check in pg_visibility is really correct and it can \n> not happen that OldestXmin=tuple.t_data->xmin?\n> Please notice that tuple_all_visible returns false if \n> !TransactionIdPrecedes(xmin, OldestXmin)\n>\n\nI did more investigations and have to say that this check in \npg_visibility.c is really not correct.\nThe process which is keeping oldest xmin is autovacuum.\nIt should be ignored by GetOldestXmin because of PROCARRAY_FLAGS_VACUUM \nflags, but it is not actually skipped\nbecause PROC_IN_VACUUM flag is not set yet. There is yet another flag - \nPROC_IS_AUTOVACUUM\nwhich is always set in autovacuum, but it can not be passed to \nGetOldestXmin? because is cleared by PROCARRAY_PROC_FLAGS_MASK.\n\nIf we just repeat RecomputedOldestXmin = GetOldestXmin(NULL, \nPROCARRAY_FLAGS_VACUUM);\nseveral times, then finally we will get right xmin.\n\nI wonder if such check should be excluded from pg_visibility or made in \nmore correct way?\nBecause nothing in documentation of pg_check_visible says that it may \nreturn false positives:\n\n|pg_check_visible(relation regclass, t_ctid OUT tid) returns setof tid|\n\n Returns the TIDs of non-all-visible tuples stored in pages\n marked all-visible in the visibility map. If this function returns a\n non-empty set of TIDs, the visibility map is corrupt.\n\n||\nAnd comment to this function is even morefrightening:\n\n/*\n * Return the TIDs of not-all-visible tuples in pages marked all-visible\n * in the visibility map. 
We hope no one will ever find any, but there \ncould\n * be bugs, database corruption, etc.\n */\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 06.12.2020 23:50, Konstantin\n Knizhnik wrote:\n\nHi\n hackers!\n \n\n Due to the error in PG-ProEE we have added the following test to\n pg_visibility:\n \n\n create table vacuum_test as select 42 i;\n \n vacuum vacuum_test;\n \n select count(*) > 0 from pg_check_visible('vacuum_test');\n \n drop table vacuum_test;\n \n\n Sometime (very rarely) this test failed: pg_visibility reports\n \"corrupted\" tuples.\n \n The same error can be reproduced not only with PG-Pro but also\n with vanilla REL_11_STABLE, REL_12_STABLE and REL_13_STABLE.\n \n It is not reproduced with master after Andres snapshot\n optimization - commit dc7420c2.\n \n\n It is not so easy to reproduce this error: it is necessary to\n repeat this tests many times.\n \n Probability increased with more aggressive autovacuum settings.\n \n But even with such settings and thousands of iterations I was not\n able to reproduce this error at my notebook - only at virtual\n machine.\n \n\n The error is reported here:\n \n\n /*\n \n * If we're checking whether the page is all-visible,\n we expect\n \n * the tuple to be all-visible.\n \n */\n \n if (check_visible &&\n \n !tuple_all_visible(&tuple, OldestXmin,\n buffer))\n \n {\n \n TransactionId RecomputedOldestXmin;\n \n\n /*\n \n * Time has passed since we computed OldestXmin,\n so it's\n \n * possible that this tuple is all-visible in\n reality even\n \n * though it doesn't appear so based on our\n \n * previously-computed value. 
Let's compute a new\n value so we\n \n * can be certain whether there is a problem.\n \n *\n \n * From a concurrency point of view, it sort of\n sucks to\n \n * retake ProcArrayLock here while we're holding\n the buffer\n \n * exclusively locked, but it should be safe\n against\n \n * deadlocks, because surely GetOldestXmin()\n should never take\n \n * a buffer lock. And this shouldn't happen often,\n so it's\n \n * worth being careful so as to avoid false\n positives.\n \n */\n \n RecomputedOldestXmin = GetOldestXmin(NULL,\n PROCARRAY_FLAGS_VACUUM);\n \n\n if (!TransactionIdPrecedes(OldestXmin,\n RecomputedOldestXmin))\n \n record_corrupt_item(items, &tuple.t_self);\n \n\n\n I debugger I have checked that OldestXmin = RecomputedOldestXmin =\n tuple.t_data->xmin\n \n I wonder if this check in pg_visibility is really correct and it\n can not happen that OldestXmin=tuple.t_data->xmin?\n \n Please notice that tuple_all_visible returns false if\n !TransactionIdPrecedes(xmin, OldestXmin)\n \n\n\n\n I did more investigations and have to say that this check in\n pg_visibility.c is really not correct. \n The process which is keeping oldest xmin is autovacuum.\n It should be ignored by GetOldestXmin because of\n PROCARRAY_FLAGS_VACUUM flags, but it is not actually skipped\n because PROC_IN_VACUUM flag is not set yet. There is yet another\n flag - PROC_IS_AUTOVACUUM\n which is always set in autovacuum, but it can not be passed to\n GetOldestXmin? 
because is cleared by PROCARRAY_PROC_FLAGS_MASK.\n\n If we just repeat RecomputedOldestXmin = GetOldestXmin(NULL,\n PROCARRAY_FLAGS_VACUUM);\n several times, then finally we will get right xmin.\n\n I wonder if such check should be excluded from pg_visibility or made\n in more correct way?\n Because nothing in documentation of pg_check_visible says that it\n may return false positives:\n\npg_check_visible(relation\n regclass, t_ctid OUT tid) returns setof tid\n\n Returns the TIDs of non-all-visible tuples stored in pages\n marked all-visible in the visibility map. If this function\n returns a non-empty set of TIDs, the visibility map is corrupt.\n\n\n And comment to this function is even more frightening:\n\n/*\n * Return the TIDs of not-all-visible tuples in pages marked\n all-visible\n * in the visibility map. We hope no one will ever find any, but\n there could\n * be bugs, database corruption, etc.\n */\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 8 Dec 2020 12:59:25 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Wrong check in pg_visibility?"
}
] |
[
{
"msg_contents": "> Note that near the end of grouping planner we have a similar check:\n>\n> if (final_rel->consider_parallel && root->query_level > 1 &&\n> !limit_needed(parse))\n> \n> guarding copying the partial paths from the current rel to the final\n> rel. I haven't managed to come up with a test case that exposes that\n\nPlayed around with this a bit, here's a non-correlated subquery that gets us to that if statement\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo (bar int);\n\nINSERT INTO foo (bar)\nSELECT\n g\nFROM\n generate_series(1, 10000) AS g;\n\n\nSELECT\n (\n SELECT\n bar\n FROM\n foo\n LIMIT 1\n ) AS y\nFROM\n foo;\n\n\nI also was thinking about the LATERAL part.\n\nI couldn't think of any reason why the uncorrelated subquery's results would need to be shared and therefore the same, when we'll be \"looping\" over each row of the source table, running the subquery anew for each, conceptually.\n\nBut then I tried this...\n\ntest=# CREATE TABLE foo (bar int);\nCREATE TABLE\ntest=#\ntest=# INSERT INTO foo (bar)\ntest-# SELECT\ntest-# g\ntest-# FROM\ntest-# generate_series(1, 10) AS g;\nINSERT 0 10\ntest=#\ntest=#\ntest=# SELECT\ntest-# foo.bar,\ntest-# lat.bar\ntest-# FROM\ntest-# foo JOIN LATERAL (\ntest(# SELECT\ntest(# bar\ntest(# FROM\ntest(# foo AS foo2\ntest(# ORDER BY\ntest(# random()\ntest(# LIMIT 1\ntest(# ) AS lat ON true;\n bar | bar\n-----+-----\n 1 | 7\n 2 | 7\n 3 | 7\n 4 | 7\n 5 | 7\n 6 | 7\n 7 | 7\n 8 | 7\n 9 | 7\n 10 | 7\n(10 rows)\n\n\nAs you can see, random() is only called once. 
If postgres were supposed to be running the subquery for each source row, conceptually, it would be a mistake to cache the results of a volatile function like random().\n\nThe docs say: \"When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the FROM item providing the cross-referenced column(s), or set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is repeated for each row or set of rows from the column source table(s).\"\n\nThey don't say what happens with LATERAL when there aren't cross-references though. As we expect, adding one does show random() being called once for each source row.\n\ntest=# SELECT\ntest-# foo.bar,\ntest-# lat.bar\ntest-# FROM\ntest-# foo JOIN LATERAL (\ntest(# SELECT\ntest(# bar\ntest(# FROM\ntest(# foo AS foo2\ntest(# WHERE\ntest(# foo2.bar < foo.bar + 100000\ntest(# ORDER BY\ntest(# random()\ntest(# LIMIT 1\ntest(# ) AS lat ON true;\n bar | bar\n-----+-----\n 1 | 5\n 2 | 8\n 3 | 3\n 4 | 4\n 5 | 5\n 6 | 5\n 7 | 1\n 8 | 3\n 9 | 7\n 10 | 3\n(10 rows)\n\nIt seems like to keep the same behavior that exists today, results of LATERAL subqueries would need to be the same if they aren't correlated, and so you couldn't run them in parallel with a limit if the order wasn't guaranteed. But I'll be the first to admit that it's easy enough for me to miss a key piece of logic on something like this, so I could be way off base too.\n\n\n",
"msg_date": "Sun, 06 Dec 2020 18:33:30 -0600",
"msg_from": "\"Brian Davis\" <brian@brianlikespostgres.com>",
"msg_from_op": true,
"msg_subject": "Re: Consider parallel for lateral subqueries with limit"
}
] |
[
{
"msg_contents": "Hi folks\n\nTL;DR: Anyone object to a new bgworker flag that exempts background workers\n(such as logical apply workers) from the first round of fast shutdown\nsignals, and instead lets them to finish their currently in-progress txn\nand exit?\n\nThis is a change proposal raised for comment before patch submission so\nplease consider it. Explanation of why I think we need it comes first, then\nproposed implementation.\n\nRationale:\n\nCurrently a fast shutdown causes logical replication subscribers to abort\ntheir currently in-progress transaction and terminate along with user\nbackends. This means they cannot finish receiving and flushing the\ncurrently in-progress transaction, possibly wasting a very large amount of\nwork.\n\nAfter restart the subscriber must reconnect, decode and reorder buffer from\nthe restart_lsn up to the current confirmed_flush_lsn, receive the whole\ntxn on the wire all over again, and apply the whole txn again locally. We\ndon't currently spool received txn change-streams to disk on the subscriber\nand flush them so we can't repeat just the local apply part (see the\nrelated thread \"Logical archiving\" for relevant discussion there). This can\ncreate a lot of bloat, a lot of excess WAL, etc, if a big txn was in\nprogress at the time.\n\nI'd like to add a bgworker flag that tells the postmaster to treat the\nlogical apply bgworker (or extension equivalents) somewhat like a walsender\nfor the purpose of fast shutdown. Instead of immediately terminating it\nlike user backends on fast shutdown, the bgworker should be sent a\nProcSignal warning that shutdown is pending and instructing it to finish\nreceiving and applying its current transaction, then exit gracefully.\n\nIt's not quite the same as the walsender, since there we try to flush\nchanges to downstreams up to the end of the last commit before shutting\ndown. That doesn't make sense on a subscriber because the upstream is\nlikely still generating txns. 
We just want to avoid wasting our effort on\nany in-flight txn.\n\nAny objections?\n\nProposed implementation:\n\n* Add new bgworker flag like BGW_DELAYED_SHUTDOWN\n\n* Define new ProcSignal PROCSIG_SHUTDOWN_REQUESTED. On fast shutdown send\nthis instead of a SIGTERM to bgworker backends flagged\nBGW_DELAYED_SHUTDOWN. On smart shutdown send it to all backends when the\nshutdown request arrives, since that could be handy for other uses too.\n\n* Flagged bgworker is expected to finish its current txn and exit promptly.\nImpose a grace period after which they get SIGTERM'd anyway. Also send a\nSIGTERM if the postmaster receives a second fast shutdown request.\n\n* Defer sending PROCSIG_WALSND_INIT_STOPPING to walsenders until all\nBGW_DELAYED_SHUTDOWN flagged bgworkers have exited, so we can ensure that\ncascaded downstreams receive any txns applied from the upstream.\n\nThis doesn't look likely to be particularly complicated to implement.\n\nIt might be better to use a flag in PGPROC rather than the bgworker struct,\nin case we want to extend this to other backend types in future. Also to\nmake it easier for the postmaster to check the flag during shutdown. Could\njust claim a bit from statusFlags for the purpose. Thoughts?\n\nHi folksTL;DR: Anyone object to a new bgworker flag that exempts background workers (such as logical apply workers) from the first round of fast shutdown signals, and instead lets them to finish their currently in-progress txn and exit?This is a change proposal raised for comment before patch submission so please consider it. Explanation of why I think we need it comes first, then proposed implementation.Rationale:Currently a fast shutdown causes logical replication subscribers to abort their currently in-progress transaction and terminate along with user backends. 
This means they cannot finish receiving and flushing the currently in-progress transaction, possibly wasting a very large amount of work.After restart the subscriber must reconnect, decode and reorder buffer from the restart_lsn up to the current confirmed_flush_lsn, receive the whole txn on the wire all over again, and apply the whole txn again locally. We don't currently spool received txn change-streams to disk on the subscriber and flush them so we can't repeat just the local apply part (see the related thread \"Logical archiving\" for relevant discussion there). This can create a lot of bloat, a lot of excess WAL, etc, if a big txn was in progress at the time.I'd like to add a bgworker flag that tells the postmaster to treat the logical apply bgworker (or extension equivalents) somewhat like a walsender for the purpose of fast shutdown. Instead of immediately terminating it like user backends on fast shutdown, the bgworker should be sent a ProcSignal warning that shutdown is pending and instructing it to finish receiving and applying its current transaction, then exit gracefully.It's not quite the same as the walsender, since there we try to flush changes to downstreams up to the end of the last commit before shutting down. That doesn't make sense on a subscriber because the upstream is likely still generating txns. We just want to avoid wasting our effort on any in-flight txn.Any objections?Proposed implementation:* Add new bgworker flag like BGW_DELAYED_SHUTDOWN* Define new ProcSignal PROCSIG_SHUTDOWN_REQUESTED. On fast shutdown send this instead of a SIGTERM to bgworker backends flagged BGW_DELAYED_SHUTDOWN. On smart shutdown send it to all backends when the shutdown request arrives, since that could be handy for other uses too.* Flagged bgworker is expected to finish its current txn and exit promptly. Impose a grace period after which they get SIGTERM'd anyway. 
Also send a SIGTERM if the postmaster receives a second fast shutdown request.* Defer sending PROCSIG_WALSND_INIT_STOPPING to walsenders until all BGW_DELAYED_SHUTDOWN flagged bgworkers have exited, so we can ensure that cascaded downstreams receive any txns applied from the upstream.This doesn't look likely to be particularly complicated to implement.It might be better to use a flag in PGPROC rather than the bgworker struct, in case we want to extend this to other backend types in future. Also to make it easier for the postmaster to check the flag during shutdown. Could just claim a bit from statusFlags for the purpose. Thoughts?",
"msg_date": "Mon, 7 Dec 2020 11:33:57 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "RFC: Giving bgworkers walsender-like grace during shutdown (for\n logical replication)"
}
] |
[
{
"msg_contents": "Hi folks\n\nNow that we're well on track for streaming logical decoding, it's becoming\nmore practical to look at parallel logical apply.\n\nThe support for this in pglogical3 benefits from a deadlock detector hook.\nIt was added in the optional patched postgres pglogical can use to enable\nvarious extra features that weren't possible without core changes, but\nisn't present in community postgres yet.\n\nI'd like to add it.\n\nThe main benefit is that it lets the logical replication support tell the\ndeadlock detector that it should prefer to kill the victim whose txn has a\nhigher upstream commit lsn. That helps encourage parallel logical apply to\nmake progress in the face of deadlocks between concurrent txns.\n\nAny in-principle objections?\n\nHi folksNow that we're well on track for streaming logical decoding, it's becoming more practical to look at parallel logical apply.The support for this in pglogical3 benefits from a deadlock detector hook. It was added in the optional patched postgres pglogical can use to enable various extra features that weren't possible without core changes, but isn't present in community postgres yet.I'd like to add it.The main benefit is that it lets the logical replication support tell the deadlock detector that it should prefer to kill the victim whose txn has a higher upstream commit lsn. That helps encourage parallel logical apply to make progress in the face of deadlocks between concurrent txns.Any in-principle objections?",
"msg_date": "Mon, 7 Dec 2020 11:54:56 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "RFC: Deadlock detector hooks for victim selection and edge injection"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 9:25 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Hi folks\n>\n> Now that we're well on track for streaming logical decoding, it's becoming more practical to look at parallel logical apply.\n>\n> The support for this in pglogical3 benefits from a deadlock detector hook. It was added in the optional patched postgres pglogical can use to enable various extra features that weren't possible without core changes, but isn't present in community postgres yet.\n>\n> I'd like to add it.\n>\n> The main benefit is that it lets the logical replication support tell the deadlock detector that it should prefer to kill the victim whose txn has a higher upstream commit lsn. That helps encourage parallel logical apply to make progress in the face of deadlocks between concurrent txns.\n>\n> Any in-principle objections?\n>\n\nI think it will depend on your exact proposal of the hook but one\nthing we might want to consider is whether it is acceptable to invoke\nthird-party code after holding LWLocks. We acquire LWLocks in\nCheckDeadLock and then run the deadlock detector code.\n\nAlso, it might be better if you can expand the use case a bit more. It\nis not very clear from what you have written.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 15:42:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: Deadlock detector hooks for victim selection and edge\n injection"
}
] |
[
{
"msg_contents": "Hi all\n\nRelated to my other post about deadlock detector hooks for victim\nselection, I'd like to gauge opinions here about whether it's feasible to\ninject edges into the deadlock detector's waits-for graph.\n\nDoing so would help with detecting deadlocks relating to shm_mq waits, help\nwith implementing distributed deadlock detection solutions, make it\npossible to spot deadlocks relating to condition-variable waits, etc.\n\nI'm not sure quite how the implementation would look yet, this is an early\nRFC and sanity check so I don't invest any work into it if it has no hope\nof going anywhere.\n\nI expect we'd want to build the graph only when the detector is triggered,\nrather than proactively maintain such edges, so the code implementing the\nhook would be responsible for keeping track of whatever state it needs to\nin order to do so.\n\nWhen called, it'd append \"struct EDGE\" s to the deadlock detector's\nwaits-for list.\n\nWe'd need to support a node representation other than a LOCK* for struct\nEDGE, and to abstract edge sorting (TopoSort etc) to allow for other edge\ntypes. So it wouldn't be a trivial change to make, hence opening with this\nRFC.\n\nI expect it'd be fine to require each EDGE* to have a PGPROC and to require\nthe PGPROC for waits-for and waiting-for not be the same proc. Distributed\nsystems that use libpq connections to remote nodes, or anything else, would\nhave to register the local-side PGPROC as the involved waiter or waited-on\nobject, and handle any mapping between the remote object and local resource\nholder/acquirer themselves, probably using their own shmem state.\n\nBonus points if the callback could assign weights to the injected edges to\nbias victim selection more gently. 
Or a way to tag an waited-for node as\nnot a candidate victim for cancellation.\n\nGeneral thoughts?\n\nHi allRelated to my other post about deadlock detector hooks for victim selection, I'd like to gauge opinions here about whether it's feasible to inject edges into the deadlock detector's waits-for graph.Doing so would help with detecting deadlocks relating to shm_mq waits, help with implementing distributed deadlock detection solutions, make it possible to spot deadlocks relating to condition-variable waits, etc.I'm not sure quite how the implementation would look yet, this is an early RFC and sanity check so I don't invest any work into it if it has no hope of going anywhere. I expect we'd want to build the graph only when the detector is triggered, rather than proactively maintain such edges, so the code implementing the hook would be responsible for keeping track of whatever state it needs to in order to do so. When called, it'd append \"struct EDGE\" s to the deadlock detector's waits-for list.We'd need to support a node representation other than a LOCK* for struct EDGE, and to abstract edge sorting (TopoSort etc) to allow for other edge types. So it wouldn't be a trivial change to make, hence opening with this RFC.I expect it'd be fine to require each EDGE* to have a PGPROC and to require the PGPROC for waits-for and waiting-for not be the same proc. Distributed systems that use libpq connections to remote nodes, or anything else, would have to register the local-side PGPROC as the involved waiter or waited-on object, and handle any mapping between the remote object and local resource holder/acquirer themselves, probably using their own shmem state.Bonus points if the callback could assign weights to the injected edges to bias victim selection more gently. Or a way to tag an waited-for node as not a candidate victim for cancellation.General thoughts?",
"msg_date": "Mon, 7 Dec 2020 12:13:02 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "RFC: Deadlock detector hooks for edge injection"
}
] |
[
{
"msg_contents": "Hi,\n\nAdded missing copy related data structures to typedefs.list, these\ndata structures were added while copy files were split during the\nrecent commit. I found this while running pgindent for parallel copy\npatches.\nThe Attached patch has the changes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 7 Dec 2020 13:56:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Added missing copy related data structures to typedefs.list"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 01:56:50PM +0530, vignesh C wrote:\n> Hi,\n> \n> Added missing copy related data structures to typedefs.list, these\n> data structures were added while copy files were split during the\n> recent commit. I found this while running pgindent for parallel copy\n> patches.\n> The Attached patch has the changes for the same.\n> Thoughts?\n\nUh, we usually only update the typedefs file before we run pgindent on\nthe master branch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:58:54 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Added missing copy related data structures to typedefs.list"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 4:28 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Dec 7, 2020 at 01:56:50PM +0530, vignesh C wrote:\n> > Hi,\n> >\n> > Added missing copy related data structures to typedefs.list, these\n> > data structures were added while copy files were split during the\n> > recent commit. I found this while running pgindent for parallel copy\n> > patches.\n> > The Attached patch has the changes for the same.\n> > Thoughts?\n>\n> Uh, we usually only update the typedefs file before we run pgindent on\n> the master branch.\n>\n\nOk, Thanks for the clarification. I was not sure as in few of the\nenhancements it was included as part of the patches.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Dec 2020 21:15:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Added missing copy related data structures to typedefs.list"
},
{
"msg_contents": "On Sat, Dec 26, 2020 at 9:16 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Dec 17, 2020 at 4:28 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, Dec 7, 2020 at 01:56:50PM +0530, vignesh C wrote:\n> > > Hi,\n> > >\n> > > Added missing copy related data structures to typedefs.list, these\n> > > data structures were added while copy files were split during the\n> > > recent commit. I found this while running pgindent for parallel copy\n> > > patches.\n> > > The Attached patch has the changes for the same.\n> > > Thoughts?\n> >\n> > Uh, we usually only update the typedefs file before we run pgindent on\n> > the master branch.\n> >\n>\n> Ok, Thanks for the clarification. I was not sure as in few of the\n> enhancements it was included as part of the patches.\n>\n\nYeah, I do that while committing patches that require changes in\ntypedefs. It is not a norm and I am not sure how much value it adds to\ndo it separately for the missing ones unless you are making changes in\nthe same file they are used and you are facing unrelated diffs due to\nthose missing ones.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Dec 2020 19:12:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Added missing copy related data structures to typedefs.list"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Dec 26, 2020 at 9:16 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Dec 17, 2020 at 4:28 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Mon, Dec 7, 2020 at 01:56:50PM +0530, vignesh C wrote:\n> > > > Hi,\n> > > >\n> > > > Added missing copy related data structures to typedefs.list, these\n> > > > data structures were added while copy files were split during the\n> > > > recent commit. I found this while running pgindent for parallel copy\n> > > > patches.\n> > > > The Attached patch has the changes for the same.\n> > > > Thoughts?\n> > >\n> > > Uh, we usually only update the typedefs file before we run pgindent on\n> > > the master branch.\n> > >\n> >\n> > Ok, Thanks for the clarification. I was not sure as in few of the\n> > enhancements it was included as part of the patches.\n> >\n>\n> Yeah, I do that while committing patches that require changes in\n> typedefs. It is not a norm and I am not sure how much value it adds to\n> do it separately for the missing ones unless you are making changes in\n> the same file they are used and you are facing unrelated diffs due to\n> those missing ones.\n\nI found this while I was running pgindent for parallel copy patches. I\nwas not sure if this change was left out intentionally or by mistake.\nI'm fine if it is committed separately or together at a later point.\nIt is not a major problem for my patch since I know the change, I will\ndo the required adjustment when I make changes on top of it, if it is\nnot getting committed. But I felt we can commit this since it is a\nrecent change.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jan 2021 09:59:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Added missing copy related data structures to typedefs.list"
}
] |
[
{
"msg_contents": "get_constraint_index() does its work by going through pg_depend. It was \nadded before pg_constraint.conindid was added, and some callers are \nstill not changed. Are there reasons for that? Probably not. The \nattached patch changes get_constraint_index() to an lsyscache-style \nlookup instead.\n\nThe nearby get_index_constraint() should probably also be changed to \nscan pg_constraint instead of pg_depend, but that doesn't have a \nsyscache to use, so it would be a different approach, so I figured I'd \nask about get_constraint_index() first.",
"msg_date": "Mon, 7 Dec 2020 11:09:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "get_constraint_index() and conindid"
},
{
"msg_contents": "On Mon, 7 Dec 2020 at 11:09, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> get_constraint_index() does its work by going through pg_depend. It was\n> added before pg_constraint.conindid was added, and some callers are\n> still not changed. Are there reasons for that? Probably not. The\n> attached patch changes get_constraint_index() to an lsyscache-style\n> lookup instead.\n\nThis looks quite reasonable, and it passes \"make installcheck-world\".\n\nOnly thing I could think of is that it maybe could use a (small)\ncomment in the message on that/why get_constraint_index is moved to\nutils/lsyscache from catalog/dependency, as that took me some time to\nunderstand.\n\n\n",
"msg_date": "Tue, 8 Dec 2020 15:52:13 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: get_constraint_index() and conindid"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> On Mon, 7 Dec 2020 at 11:09, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> get_constraint_index() does its work by going through pg_depend. It was\n>> added before pg_constraint.conindid was added, and some callers are\n>> still not changed. Are there reasons for that? Probably not. The\n>> attached patch changes get_constraint_index() to an lsyscache-style\n>> lookup instead.\n\n> This looks quite reasonable, and it passes \"make installcheck-world\".\n\n+1, LGTM.\n\n> Only thing I could think of is that it maybe could use a (small)\n> comment in the message on that/why get_constraint_index is moved to\n> utils/lsyscache from catalog/dependency, as that took me some time to\n> understand.\n\ncommit message could reasonably say that maybe, but I don't think we\nneed to memorialize it in a comment. lsyscache.c *is* where one\nwould expect to find a simple catalog-field-fetch function like this.\nThe previous implementation was not that, so it didn't belong there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Dec 2020 13:28:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: get_constraint_index() and conindid"
},
{
"msg_contents": "On Tue, Dec 08, 2020 at 01:28:39PM -0500, Tom Lane wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n>> On Mon, 7 Dec 2020 at 11:09, Peter Eisentraut\n>> <peter.eisentraut@enterprisedb.com> wrote:\n>>> get_constraint_index() does its work by going through pg_depend. It was\n>>> added before pg_constraint.conindid was added, and some callers are\n>>> still not changed. Are there reasons for that? Probably not. The\n>>> attached patch changes get_constraint_index() to an lsyscache-style\n>>> lookup instead.\n> \n>> This looks quite reasonable, and it passes \"make installcheck-world\".\n> \n> +1, LGTM.\n\nNice cleanup!\n\n>> Only thing I could think of is that it maybe could use a (small)\n>> comment in the message on that/why get_constraint_index is moved to\n>> utils/lsyscache from catalog/dependency, as that took me some time to\n>> understand.\n> \n> commit message could reasonably say that maybe, but I don't think we\n> need to memorialize it in a comment. lsyscache.c *is* where one\n> would expect to find a simple catalog-field-fetch function like this.\n> The previous implementation was not that, so it didn't belong there.\n\nAgreed.\n--\nMichael",
"msg_date": "Wed, 9 Dec 2020 15:37:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: get_constraint_index() and conindid"
},
{
"msg_contents": "On 2020-12-09 07:37, Michael Paquier wrote:\n>>> Only thing I could think of is that it maybe could use a (small)\n>>> comment in the message on that/why get_constraint_index is moved to\n>>> utils/lsyscache from catalog/dependency, as that took me some time to\n>>> understand.\n>>\n>> commit message could reasonably say that maybe, but I don't think we\n>> need to memorialize it in a comment. lsyscache.c *is* where one\n>> would expect to find a simple catalog-field-fetch function like this.\n>> The previous implementation was not that, so it didn't belong there.\n> \n> Agreed.\n\nThanks, I committed it with an expanded commit message.\n\nAfter further inspection, I'm not going to do anything about the nearby \nget_index_constraint() at this item. The current implementation can use \nan index on pg_depend. A scan of pg_constraint has no index available.\n\n\n",
"msg_date": "Wed, 9 Dec 2020 15:50:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: get_constraint_index() and conindid"
}
] |
[
{
"msg_contents": "Hi:\n I see initscan calls RelationGetwNumberOfBlocks every time and rescan calls\n initscan as well. In my system, RelationGetNumberOfBlocks is expensive\n(the reason\n doesn't deserve a talk.. ), so in a nest loop + Bitmap heap scan case,\nthe\nimpact will be huge. The comments of initscan are below.\n\n/*\n* Determine the number of blocks we have to scan.\n*\n* It is sufficient to do this once at scan start, since any tuples added\n* while the scan is in progress will be invisible to my snapshot anyway.\n* (That is not true when using a non-MVCC snapshot. However, we couldn't\n* guarantee to return tuples added after scan start anyway, since they\n* might go into pages we already scanned. To guarantee consistent\n* results for a non-MVCC snapshot, the caller must hold some higher-level\n* lock that ensures the interesting tuple(s) won't change.)\n*/\n\nI still do not fully understand the comments. Looks we only need to call\nmulti times for non-MVCC snapshot, IIUC, does the following change\nreasonable?\n\n===\n\ndiff --git a/src/backend/access/heap/heapam.c\nb/src/backend/access/heap/heapam.c\nindex 1b2f70499e..8238eabd8b 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -211,6 +211,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\nkeep_startblock)\n ParallelBlockTableScanDesc bpscan = NULL;\n bool allow_strat;\n bool allow_sync;\n+ bool is_mvcc = scan->rs_base.rs_snapshot &&\nIsMVCCSnapshot(scan->rs_base.rs_snapshot);\n\n /*\n * Determine the number of blocks we have to scan.\n@@ -229,7 +230,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool\nkeep_startblock)\n scan->rs_nblocks = bpscan->phs_nblocks;\n }\n else\n- scan->rs_nblocks =\nRelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n+ if (scan->rs_nblocks == -1 || !is_mvcc)\n+ scan->rs_nblocks =\nRelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n\n /*\n * If the table is large relative to NBuffers, use a bulk-read\naccess\n@@ -1210,6 +1212,7 @@ 
heap_beginscan(Relation relation, Snapshot snapshot,\n else\n scan->rs_base.rs_key = NULL;\n\n+ scan->rs_nblocks = -1;\n initscan(scan, key, false);\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 7 Dec 2020 20:26:53 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "initscan for MVCC snapshot"
},
{
"msg_contents": "On Mon, Dec 7, 2020 at 8:26 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n> I see initscan calls RelationGetwNumberOfBlocks every time and rescan\n> calls\n> initscan as well. In my system, RelationGetNumberOfBlocks is expensive\n> (the reason\n> doesn't deserve a talk.. ), so in a nest loop + Bitmap heap scan case,\n> the\n> impact will be huge. The comments of initscan are below.\n>\n> /*\n> * Determine the number of blocks we have to scan.\n> *\n> * It is sufficient to do this once at scan start, since any tuples added\n> * while the scan is in progress will be invisible to my snapshot anyway.\n> * (That is not true when using a non-MVCC snapshot. However, we couldn't\n> * guarantee to return tuples added after scan start anyway, since they\n> * might go into pages we already scanned. To guarantee consistent\n> * results for a non-MVCC snapshot, the caller must hold some higher-level\n> * lock that ensures the interesting tuple(s) won't change.)\n> */\n>\n> I still do not fully understand the comments. 
Looks we only need to call\n> multi times for non-MVCC snapshot, IIUC, does the following change\n> reasonable?\n>\n> ===\n>\n> diff --git a/src/backend/access/heap/heapam.c\n> b/src/backend/access/heap/heapam.c\n> index 1b2f70499e..8238eabd8b 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -211,6 +211,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n> keep_startblock)\n> ParallelBlockTableScanDesc bpscan = NULL;\n> bool allow_strat;\n> bool allow_sync;\n> + bool is_mvcc = scan->rs_base.rs_snapshot &&\n> IsMVCCSnapshot(scan->rs_base.rs_snapshot);\n>\n> /*\n> * Determine the number of blocks we have to scan.\n> @@ -229,7 +230,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n> keep_startblock)\n> scan->rs_nblocks = bpscan->phs_nblocks;\n> }\n> else\n> - scan->rs_nblocks =\n> RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n> + if (scan->rs_nblocks == -1 || !is_mvcc)\n> + scan->rs_nblocks =\n> RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n>\n> /*\n> * If the table is large relative to NBuffers, use a bulk-read\n> access\n> @@ -1210,6 +1212,7 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n> else\n> scan->rs_base.rs_key = NULL;\n>\n> + scan->rs_nblocks = -1;\n> initscan(scan, key, false);\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\nI have tested this with an ext4 file system, and I can get a 7%+\nperformance improvement\nfor the given test case. 
Here are the steps:\n\ncreate table t(a int, b char(8000));\ninsert into t select i, i from generate_series(1, 1000000)i;\ncreate index on t(a);\ndelete from t where a <= 10000;\nvacuum t;\nalter system set enable_indexscan to off;\nselect pg_reload_conf();\n\ncat 1.sql\nselect * from generate_series(1, 10000)i, t where i = t.a;\n\nbin/pgbench -f 1.sql postgres -T 300 -c 10\n\nWithout this patch: latency average = 61.806 ms\nwith this patch: latency average = 57.484 ms\n\nI think the result is good and I think we can probably make this change.\nHowever, I'm not\nsure about it.\n\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 10 Dec 2020 19:31:15 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: initscan for MVCC snapshot"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 7:31 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Dec 7, 2020 at 8:26 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> Hi:\n>> I see initscan calls RelationGetwNumberOfBlocks every time and rescan\n>> calls\n>> initscan as well. In my system, RelationGetNumberOfBlocks is expensive\n>> (the reason\n>> doesn't deserve a talk.. ), so in a nest loop + Bitmap heap scan case,\n>> the\n>> impact will be huge. The comments of initscan are below.\n>>\n>> /*\n>> * Determine the number of blocks we have to scan.\n>> *\n>> * It is sufficient to do this once at scan start, since any tuples added\n>> * while the scan is in progress will be invisible to my snapshot anyway.\n>> * (That is not true when using a non-MVCC snapshot. However, we couldn't\n>> * guarantee to return tuples added after scan start anyway, since they\n>> * might go into pages we already scanned. To guarantee consistent\n>> * results for a non-MVCC snapshot, the caller must hold some higher-level\n>> * lock that ensures the interesting tuple(s) won't change.)\n>> */\n>>\n>> I still do not fully understand the comments. 
Looks we only need to call\n>> multi times for non-MVCC snapshot, IIUC, does the following change\n>> reasonable?\n>>\n>> ===\n>>\n>> diff --git a/src/backend/access/heap/heapam.c\n>> b/src/backend/access/heap/heapam.c\n>> index 1b2f70499e..8238eabd8b 100644\n>> --- a/src/backend/access/heap/heapam.c\n>> +++ b/src/backend/access/heap/heapam.c\n>> @@ -211,6 +211,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n>> keep_startblock)\n>> ParallelBlockTableScanDesc bpscan = NULL;\n>> bool allow_strat;\n>> bool allow_sync;\n>> + bool is_mvcc = scan->rs_base.rs_snapshot &&\n>> IsMVCCSnapshot(scan->rs_base.rs_snapshot);\n>>\n>> /*\n>> * Determine the number of blocks we have to scan.\n>> @@ -229,7 +230,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n>> keep_startblock)\n>> scan->rs_nblocks = bpscan->phs_nblocks;\n>> }\n>> else\n>> - scan->rs_nblocks =\n>> RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n>> + if (scan->rs_nblocks == -1 || !is_mvcc)\n>> + scan->rs_nblocks =\n>> RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n>>\n>> /*\n>> * If the table is large relative to NBuffers, use a bulk-read\n>> access\n>> @@ -1210,6 +1212,7 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n>> else\n>> scan->rs_base.rs_key = NULL;\n>>\n>> + scan->rs_nblocks = -1;\n>> initscan(scan, key, false);\n>>\n>> --\n>> Best Regards\n>> Andy Fan\n>>\n>\n> I have tested this with an ext4 file system, and I can get a 7%+\n> performance improvement\n> for the given test case. 
Here are the steps:\n>\n> create table t(a int, b char(8000));\n> insert into t select i, i from generate_series(1, 1000000)i;\n> create index on t(a);\n> delete from t where a <= 10000;\n> vacuum t;\n> alter system set enable_indexscan to off;\n> select pg_reload_conf();\n>\n> cat 1.sql\n> select * from generate_series(1, 10000)i, t where i = t.a;\n>\n> bin/pgbench -f 1.sql postgres -T 300 -c 10\n>\n> Without this patch: latency average = 61.806 ms\n> with this patch: latency average = 57.484 ms\n>\n> I think the result is good and I think we can probably make this change.\n> However, I'm not\n> sure about it.\n>\n>\nThe plan which was used is below, in the rescan of Bitmap Heap Scan,\nmdnblocks will\nbe called 10000 times in current implementation, Within my patch, it will\nbe only called\nonce.\n\npostgres=# explain (costs off) select * from generate_series(1, 10000)i, t\nwhere i = t.a;\n QUERY PLAN\n------------------------------------------\n Nested Loop\n -> Function Scan on generate_series i\n -> Bitmap Heap Scan on t\n Recheck Cond: (a = i.i)\n -> Bitmap Index Scan on t_a_idx\n Index Cond: (a = i.i)\n(6 rows)\n\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 10 Dec 2020 19:58:46 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: initscan for MVCC snapshot"
}
] |
[
{
"msg_contents": "Hello all!\n\nI suggest a refactoring of analyze AM API as it is too much heap specific at the moment. The problem was inspired by Greenplum’s analyze improvement for append-optimized row and column AM with variable size compressed blocks.\nCurrently we do analyze in two steps.\n\n1. Sample fix size blocks with algorithm S from Knuth (BlockSampler function)\n2. Collect tuples into reservoir with algorithm Z from Vitter.\n\nSo this doesn’t work for AMs using variable sized physical blocks for example. They need weight random sampling (WRS) algorithms like A-Chao or logical blocks to follow S-Knuth (and have a problem with RelationGetNumberOfBlocks() estimating a physical number of blocks). Another problem with columns - they are not passed to analyze begin scan and can’t benefit from column storage at ANALYZE TABLE (COL).\n\nThe suggestion is to replace table_scan_analyze_next_block() and table_scan_analyze_next_tuple() with a single function: table_acquire_sample_rows(). The AM implementation of table_acquire_sample_rows() can use the BlockSampler functions if it wants to, but if the AM is not block-oriented, it could do something else. This suggestion also passes VacAttrStats to table_acquire_sample_rows() for column-oriented AMs and removes PROGRESS_ANALYZE_BLOCKS_TOTAL and PROGRESS_ANALYZE_BLOCKS_DONE definitions as not all AMs can be block-oriented.\n\n\n\n\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia",
"msg_date": "Mon, 7 Dec 2020 23:23:42 +1000",
"msg_from": "=?utf-8?B?0KHQvNC40YDQvdC+0LIg0JTQtdC90LjRgQ==?= <sd@arenadata.io>",
"msg_from_op": true,
"msg_subject": "PoC Refactor AM analyse API "
},
{
"msg_contents": "It seems that my mailing client set wrong MIME types for attached patch and it was filtered by the web archive. So I attach the patch again for the web archive.\n\n\n\n\n\n> 7 дек. 2020 г., в 23:23, Смирнов Денис <sd@arenadata.io> написал(а):\n> \n> Hello all!\n> \n> I suggest a refactoring of analyze AM API as it is too much heap specific at the moment. The problem was inspired by Greenplum’s analyze improvement for append-optimized row and column AM with variable size compressed blocks.\n> Currently we do analyze in two steps.\n> \n> 1. Sample fix size blocks with algorithm S from Knuth (BlockSampler function)\n> 2. Collect tuples into reservoir with algorithm Z from Vitter.\n> \n> So this doesn’t work for AMs using variable sized physical blocks for example. They need weight random sampling (WRS) algorithms like A-Chao or logical blocks to follow S-Knuth (and have a problem with RelationGetNumberOfBlocks() estimating a physical number of blocks). Another problem with columns - they are not passed to analyze begin scan and can’t benefit from column storage at ANALYZE TABLE (COL).\n> \n> The suggestion is to replace table_scan_analyze_next_block() and table_scan_analyze_next_tuple() with a single function: table_acquire_sample_rows(). The AM implementation of table_acquire_sample_rows() can use the BlockSampler functions if it wants to, but if the AM is not block-oriented, it could do something else. This suggestion also passes VacAttrStats to table_acquire_sample_rows() for column-oriented AMs and removes PROGRESS_ANALYZE_BLOCKS_TOTAL and PROGRESS_ANALYZE_BLOCKS_DONE definitions as not all AMs can be block-oriented.\n> \n> <am-analyze-1.patch>\n> \n> \n> \n> Best regards,\n> Denis Smirnov | Developer\n> sd@arenadata.io \n> Arenadata | Godovikova 9-17, Moscow 129085 Russia\n>",
"msg_date": "Tue, 8 Dec 2020 10:53:41 +1000",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "Hi Denis!\n\n> On 7 Dec 2020, at 18:23, Denis Smirnov <sd@arenadata.io> wrote:\n> \n> I suggest a refactoring of analyze AM API as it is too much heap specific at the moment. The problem was inspired by Greenplum’s analyze improvement for append-optimized row and column AM with variable size compressed blocks.\n> Currently we do analyze in two steps.\n> \n> 1. Sample fix size blocks with algorithm S from Knuth (BlockSampler function)\n> 2. Collect tuples into reservoir with algorithm Z from Vitter.\n> \n> So this doesn’t work for AMs using variable sized physical blocks for example. They need weight random sampling (WRS) algorithms like A-Chao or logical blocks to follow S-Knuth (and have a problem with RelationGetNumberOfBlocks() estimating a physical number of blocks). Another problem with columns - they are not passed to analyze begin scan and can’t benefit from column storage at ANALYZE TABLE (COL).\n> \n> The suggestion is to replace table_scan_analyze_next_block() and table_scan_analyze_next_tuple() with a single function: table_acquire_sample_rows(). The AM implementation of table_acquire_sample_rows() can use the BlockSampler functions if it wants to, but if the AM is not block-oriented, it could do something else. This suggestion also passes VacAttrStats to table_acquire_sample_rows() for column-oriented AMs and removes PROGRESS_ANALYZE_BLOCKS_TOTAL and PROGRESS_ANALYZE_BLOCKS_DONE definitions as not all AMs can be block-oriented.\n\nJust few random notes about the idea.\nHeap pages are not of a fixed size, when measured in tuple count. And comment in the codes describes it.\n * Although every row has an equal chance of ending up in the final\n * sample, this sampling method is not perfect: not every possible\n * sample has an equal chance of being selected. For large relations\n * the number of different blocks represented by the sample tends to be\n * too small. We can live with that for now. Improvements are welcome.\n\nCurrent implementation provide framework with shared code. Though this framework is only suitable for block-of-tuples AMs. And have statistical downsides when count of tuples varies too much.\nMaybe can we just provide a richer API? To have both: avoid copying code and provide flexibility.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 8 Dec 2020 13:42:12 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "Andrey, thanks for your feedback!\n\nI agree that AMs with fix sized blocks can have much alike code in acquire_sample_rows() (though it is not a rule). But there are several points about current master sampling.\n\n* It is not perfect - AM developers may want to improve it with other sampling algorithms.\n* It is designed with a big influence of heap AM - for example, RelationGetNumberOfBlocks() returns uint32 while other AMs can have a bigger amount of blocks.\n* heapam_acquire_sample_rows() is a small function - I don't think it is not a big trouble to write something alike for any AM developer.\n* Some AMs may have a single level sampling (only algorithm Z from Vitter for example) - why not?\n\nAs a result we get a single and clear method to acquire rows for statistics. If we don’t modify but rather extend current API ( for example in a manner it is done for FDW) the code becomes more complicated and difficult to understand.\n\n> On 8 Dec 2020, at 18:42, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Hi Denis!\n> \n>> On 7 Dec 2020, at 18:23, Denis Smirnov <sd@arenadata.io> wrote:\n>> \n>> I suggest a refactoring of analyze AM API as it is too much heap specific at the moment. The problem was inspired by Greenplum’s analyze improvement for append-optimized row and column AM with variable size compressed blocks.\n>> Currently we do analyze in two steps.\n>> \n>> 1. Sample fix size blocks with algorithm S from Knuth (BlockSampler function)\n>> 2. Collect tuples into reservoir with algorithm Z from Vitter.\n>> \n>> So this doesn’t work for AMs using variable sized physical blocks for example. They need weight random sampling (WRS) algorithms like A-Chao or logical blocks to follow S-Knuth (and have a problem with RelationGetNumberOfBlocks() estimating a physical number of blocks). Another problem with columns - they are not passed to analyze begin scan and can’t benefit from column storage at ANALYZE TABLE (COL).\n>> \n>> The suggestion is to replace table_scan_analyze_next_block() and table_scan_analyze_next_tuple() with a single function: table_acquire_sample_rows(). The AM implementation of table_acquire_sample_rows() can use the BlockSampler functions if it wants to, but if the AM is not block-oriented, it could do something else. This suggestion also passes VacAttrStats to table_acquire_sample_rows() for column-oriented AMs and removes PROGRESS_ANALYZE_BLOCKS_TOTAL and PROGRESS_ANALYZE_BLOCKS_DONE definitions as not all AMs can be block-oriented.\n> \n> Just few random notes about the idea.\n> Heap pages are not of a fixed size, when measured in tuple count. And comment in the codes describes it.\n> * Although every row has an equal chance of ending up in the final\n> * sample, this sampling method is not perfect: not every possible\n> * sample has an equal chance of being selected. For large relations\n> * the number of different blocks represented by the sample tends to be\n> * too small. We can live with that for now. Improvements are welcome.\n> \n> Current implementation provide framework with shared code. Though this framework is only suitable for block-of-tuples AMs. And have statistical downsides when count of tuples varies too much.\n> Maybe can we just provide a richer API? To have both: avoid copying code and provide flexibility.\n> \n> Best regards, Andrey Borodin.\n> \n\n\n\n",
"msg_date": "Tue, 8 Dec 2020 21:44:39 +1000",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "\n\n> On 8 Dec 2020, at 16:44, Denis Smirnov <sd@arenadata.io> wrote:\n> \n> Andrey, thanks for your feedback!\n> \n> I agree that AMs with fix sized blocks can have much alike code in acquire_sample_rows() (though it is not a rule). But there are several points about current master sampling.\n> \n> * It is not perfect - AM developers may want to improve it with other sampling algorithms.\n> * It is designed with a big influence of heap AM - for example, RelationGetNumberOfBlocks() returns uint32 while other AMs can have a bigger amount of blocks.\n> * heapam_acquire_sample_rows() is a small function - I don't think it is not a big trouble to write something alike for any AM developer.\n> * Some AMs may have a single level sampling (only algorithm Z from Vitter for example) - why not?\n> \n> As a result we get a single and clear method to acquire rows for statistics. If we don’t modify but rather extend current API ( for example in a manner it is done for FDW) the code becomes more complicated and difficult to understand.\n\nThis makes sense. Purpose of the API is to provide flexible abstraction. Current table_scan_analyze_next_block()/table_scan_analyze_next_tuple() API assumes too much about AM implementation.\nBut why do you pass int natts and VacAttrStats **stats to acquire_sample_rows()? Is it of any use? It seems to break abstraction too.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 27 Dec 2020 22:11:10 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "\n> But why do you pass int natts and VacAttrStats **stats to acquire_sample_rows()? Is it of any use? It seems to break abstraction too.\n\nYes, it is really a kluge that should be discussed. The main problem is that we don’t pass projection information to analyze scan (analyze begin scan relies only on relation information during initialization). And as a result we can’t benefit from column AMs during «analyze t(col)» and consume data only from target columns. These parameters were added to solve this problem.\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia\n\n\n\n",
"msg_date": "Wed, 30 Dec 2020 19:12:22 +1000",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "On 30/12/2020 11:12, Denis Smirnov wrote:\n> \n>> But why do you pass int natts and VacAttrStats **stats to\n>> acquire_sample_rows()? Is it of any use? It seems to break\n>> abstraction too.\n> \n> Yes, it is really a kluge that should be discussed. The main problem\n> is that we don’t pass projection information to analyze scan (analyze\n> begin scan relies only on relation information during\n> initialization). And as a result we can’t benefit from column AMs\n> during «analyze t(col)» and consume data only from target columns.\n> These parameters were added to solve this problem.\n\nThe documentation needs to be updated accordingly, see \nAcquireSampleRowsFunc in fdwhandler.sgml.\n\nThis part of the patch, adding the list of columns being analyzed, seems \na bit unfinished. I'd suggest to leave that out for now, and add it as \npart of the \"Table AM modifications to accept column projection lists\" \npatch that's also in this commitfest [1]\n\n> This suggestion also ... removes PROGRESS_ANALYZE_BLOCKS_TOTAL and\n> PROGRESS_ANALYZE_BLOCKS_DONE definitions as not all AMs can be\n> block-oriented.\n\nWe shouldn't just remove it, a progress indicator is nice. It's true \nthat not all AMs are block-oriented, but those that are can still use \nthose. Perhaps we can add other PROGRESS_ANALYZE_* states for \nnon-block-oriented AMs, but that can wait until there is a concrete use \nfor it.\n\n> static int\n> acquire_sample_rows(Relation onerel, int elevel,\n> \t\t\t\t\tHeapTuple *rows, int targrows,\n> \t\t\t\t\tdouble *totalrows, double *totaldeadrows)\n> {\n> \tint\t\t\tnumrows = 0;\t/* # rows now in reservoir */\n> \tTableScanDesc scan;\n> \n> \tAssert(targrows > 0);\n> \n> \tscan = table_beginscan_analyze(onerel);\n> \n> \tnumrows = table_acquire_sample_rows(scan, elevel,\n> \t\t\t\t\t\t\t\t\t\tnatts, stats,\n> \t\t\t\t\t\t\t\t\t\tvac_strategy, rows,\n> \t\t\t\t\t\t\t\t\t\ttargrows, totalrows,\n> \t\t\t\t\t\t\t\t\t\ttotaldeadrows);\n> \n> \ttable_endscan(scan);\n> \n> \t/*\n> \t * If we didn't find as many tuples as we wanted then we're done. No sort\n> \t * is needed, since they're already in order.\n> \t *\n> \t * Otherwise we need to sort the collected tuples by position\n> \t * (itempointer). It's not worth worrying about corner cases where the\n> \t * tuples are already sorted.\n> \t */\n> \tif (numrows == targrows)\n> \t\tqsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);\n> \n> \treturn numrows;\n> }\n\nPerhaps better to move the qsort() into heapam_acquire_sample_rows(), \nand document that the acquire_sample_rows() AM function must return the \ntuples in 'ctid' order. In a generic API, it seems like a shady \nassumption that they must be in order if we didn't find as many rows as \nwe wanted. Or always call qsort(); if the tuples are already in order, \nthat should be pretty quick.\n\nThe table_beginscan_analyze() call seems a bit pointless now. Let's \nremove it, and pass the Relation to table_acquire_sample_rows directly.\n\n> \t/*\n> \t * This callback needs to fill reservour with sample rows during analyze\n> \t * scan.\n> \t */\n> \tint\t\t\t(*acquire_sample_rows) (TableScanDesc scan,\n\nThe \"reservoir\" is related to the block sampler, but that's just an \nimplementation detail of the function. Perhaps something like \"Acquire a \nsample of rows from the table, for ANALYZE\". And explain the arguments \nhere, or in table_acquire_sample_rows().\n\nOverall, I like where this patch is going.\n\n[1] https://commitfest.postgresql.org/31/2922/\n\n- Heikki\n\n\n",
"msg_date": "Fri, 22 Jan 2021 15:12:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
},
{
"msg_contents": "Thanks for your review, Heikki.\n\nI have made the changes you have requested.\n\n1. All modifications interconnected with column projection were reverted (they should be added in https://commitfest.postgresql.org/31/2922 if the current patch would be merged one day).\n2. I have returned PROGRESS_ANALYZE_* states.\n3. qsort() was moved into heapam_acquire_sample_rows(). Also a comment was added, that the acquire_sample_rows() AM function must return the tuples in a physical table order.\n4. table_beginscan_analyze() was removed as a redundant function.\n5. acquire_sample_rows() comment about reservoir was changed.\n\n\n\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia",
"msg_date": "Fri, 19 Feb 2021 12:06:12 +1000",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "Hi,\n\n+ *totalrows = floor((liverows / bs.m) * totalblocks + 0.5);\n\nIs the above equivalent to:\n\n+ *totalrows = ceil((liverows / bs.m) * totalblocks);\n\nFor compare_rows(), it seems the computation of oa and ob can be delayed to\nwhen ba == bb (after the first two if statements).\n\nCheers\n\nOn Thu, Feb 18, 2021 at 6:06 PM Denis Smirnov <sd@arenadata.io> wrote:\n\n> Thanks for your review, Heikki.\n>\n> I have made the changes you have requested.\n>\n> 1. All modifications interconnected with column projection were reverted\n> (they should be added in https://commitfest.postgresql.org/31/2922 if the\n> current patch would be merged one day).\n> 2. I have returned PROGRESS_ANALYZE_* states.\n> 3. qsort() was moved into heapam_acquire_sample_rows(). Also a comment was\n> added, that the acquire_sample_rows() AM function must return the tuples in\n> a physical table order.\n> 4. table_beginscan_analyze() was removed as a redundant function.\n> 5. acquire_sample_rows() comment about reservoir was changed.\n>\n>\n> Best regards,\n> Denis Smirnov | Developer\n> sd@arenadata.io\n> Arenadata | Godovikova 9-17, Moscow 129085 Russia\n>\n>",
"msg_date": "Thu, 18 Feb 2021 18:33:43 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
},
{
"msg_contents": "Hello, Zhihong.\n\nThanks for your comments.\n\n1. I am afraid that an equivalence of \"floor(val + 0.5)\" to \"ceil(val)\" is incorrect: \"floor(2.1 + 0.5)\" returns 2 while \"ceil(2.1)\" returns 3. We can’t use such replacement, as you have suggested.\n\n2. >> For compare_rows(), it seems the computation of oa and ob can be delayed to when ba == bb (after the first two if statements).\nI have checked some examples of ASM code generated by different compilers with -O2/O3 flags on https://gcc.godbolt.org and didn’t see any big benefit in result CPU instructions. You can check yourself an attached example below.\n\n\n\n\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Godovikova 9-17, Moscow 129085 Russia",
"msg_date": "Fri, 19 Feb 2021 17:59:25 +1000",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API "
},
{
"msg_contents": "Denis:\nThanks for considering my suggestion.\n\nFor #1, I didn't take your example into account. Thanks for pointing that\nout.\n\nCheers\n\nOn Thu, Feb 18, 2021 at 11:59 PM Denis Smirnov <sd@arenadata.io> wrote:\n\n> Hello, Zhihong.\n>\n> Thanks for your comments.\n>\n> 1. I am afraid that an equivalence of \"floor(val + 0.5)\" to \"cell(val)\" is\n> incorrect: \"floor(2.1 + 0.5)\" returns 2 while \"cell(2.1)\" returns 3. We\n> can’t use such replacement, as you have suggested.\n>\n> 2. >> For compare_rows(), it seems the computation of oa and ob can be\n> delayed to when ba == bb (after the first two if statements).\n> I have checked some examples of ASM code generated by different compilers\n> with -O2/O3 flags on https://gcc.godbolt.org and didn’t see any big\n> benefit in result CPU instructions. You can check yourself an attached\n> example below.\n>\n>\n>\n> Best regards,\n> Denis Smirnov | Developer\n> sd@arenadata.io\n> Arenadata | Godovikova 9-17, Moscow 129085 Russia\n>\n>",
"msg_date": "Fri, 19 Feb 2021 08:22:05 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
},
{
"msg_contents": "On Fri, Feb 19, 2021 at 12:06:12PM +1000, Denis Smirnov wrote:\n> Thanks for your review, Heikki.\n> \n> I have made the changes you have requested.\n> \n> 1. All modifications interconnected with column projection were reverted (they should be added in https://commitfest.postgresql.org/31/2922 if the current patch would be merged one day).\n> 2. I have returned PROGRESS_ANALYZE_* states.\n> 3. qsort() was moved into heapam_acquire_sample_rows(). Also a comment was added, that the acquire_sample_rows() AM function must return the tuples in a physical table order.\n> 4. table_beginscan_analyze() was removed as a redundant function.\n> 5. acquire_sample_rows() comment about reservoir was changed.\n> \n\nHi Denis,\n\nThis doesn't apply anymore because of c6fc50c, can you resubmit a new\npatch?\n\nPlease note that the patch must be submitted with a .patch extension\ninstead of .txt, that way the CI at http://commitfest.cputube.org/ will\nbe able to execute automatic tests on it.\n\nRegards,\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 8 Sep 2021 10:06:25 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
},
{
"msg_contents": "On Wed, Sep 08, 2021 at 10:06:25AM -0500, Jaime Casanova wrote:\n> This doesn't apply anymore because of c6fc50c, can you resubmit a new\n> patch?\n\nActivity has stalled here, so I have marked the entry as RwF in the CF\napp.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 15:57:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
},
{
"msg_contents": "I think this patch should be totally redesigned and removed from the upcoming CF. The problem is that vanilla PG has a single storage manager implementation, that works with fix size blocks. Current commit didn’t take this aspect into account. We should first decide whether PG needs an ability to implement custom storage managers with variable size blocks and custom block buffers (or without any for OLAP). And only after that we should move to the variable size block analyze.\n\nBest regards,\nDenis Smirnov | Developer\nsd@arenadata.io \nArenadata | Bldg. 3, Block 1 Skladochnaya St. Moscow, 127018",
"msg_date": "Fri, 1 Oct 2021 21:24:15 +0300",
"msg_from": "Denis Smirnov <sd@arenadata.io>",
"msg_from_op": false,
"msg_subject": "Re: PoC Refactor AM analyse API"
}
] |
[
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Sun, Dec 06, 2020 at 10:03:08AM -0500, Stephen Frost wrote:\n> > * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> >> You keep making this statement, and I don't necessarily disagree, but if\n> >> that is the case, please explain why don't we have\n> >> checkpoint_completion_target set to 0.9 by default? Should we change\n> >> that?\n> > \n> > Yes, I do think we should change that..\n> \n> Agreed. FWIW, no idea for others, but it is one of those parameters I\n> keep telling to update after a default installation.\n\nConcretely, attached is a patch which changes the default and updates\nthe documentation accordingly.\n\nPasses regression tests and doc build. Will register in the January\ncommitfest as Needs Review.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 7 Dec 2020 12:53:29 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 2020-12-07 18:53, Stephen Frost wrote:\n> * Michael Paquier (michael@paquier.xyz) wrote:\n>> On Sun, Dec 06, 2020 at 10:03:08AM -0500, Stephen Frost wrote:\n>>> * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n>>>> You keep making this statement, and I don't necessarily disagree, but if\n>>>> that is the case, please explain why don't we have\n>>>> checkpoint_completion_target set to 0.9 by default? Should we change\n>>>> that?\n>>>\n>>> Yes, I do think we should change that..\n>>\n>> Agreed. FWIW, no idea for others, but it is one of those parameters I\n>> keep telling to update after a default installation.\n> \n> Concretely, attached is a patch which changes the default and updates\n> the documentation accordingly.\n\nI agree with considering this change, but I wonder why the value 0.9. \nWhy not, say, 0.95, 0.99, or 1.0?\n\n\n",
"msg_date": "Mon, 7 Dec 2020 20:08:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 2020-12-07 18:53, Stephen Frost wrote:\n> >* Michael Paquier (michael@paquier.xyz) wrote:\n> >>On Sun, Dec 06, 2020 at 10:03:08AM -0500, Stephen Frost wrote:\n> >>>* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> >>>>You keep making this statement, and I don't necessarily disagree, but if\n> >>>>that is the case, please explain why don't we have\n> >>>>checkpoint_completion_target set to 0.9 by default? Should we change\n> >>>>that?\n> >>>\n> >>>Yes, I do think we should change that..\n> >>\n> >>Agreed. FWIW, no idea for others, but it is one of those parameters I\n> >>keep telling to update after a default installation.\n> >\n> >Concretely, attached is a patch which changes the default and updates\n> >the documentation accordingly.\n> \n> I agree with considering this change, but I wonder why the value 0.9. Why\n> not, say, 0.95, 0.99, or 1.0?\n\nThe documentation (which my patch updates to match the new default)\ncovers this pretty well here:\n\nhttps://www.postgresql.org/docs/current/wal-configuration.html\n\n\"Although checkpoint_completion_target can be set as high as 1.0, it is\nbest to keep it less than that (perhaps 0.9 at most) since checkpoints\ninclude some other activities besides writing dirty buffers. A setting\nof 1.0 is quite likely to result in checkpoints not being completed on\ntime, which would result in performance loss due to unexpected variation\nin the number of WAL segments needed.\"\n\nThanks,\n\nStephen",
"msg_date": "Mon, 7 Dec 2020 14:17:43 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 12/7/20, 9:53 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> Concretely, attached is a patch which changes the default and updates\r\n> the documentation accordingly.\r\n\r\n+1 to setting checkpoint_completion_target to 0.9 by default.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 8 Dec 2020 17:29:30 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 12/7/20, 9:53 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n>> Concretely, attached is a patch which changes the default and updates\n>> the documentation accordingly.\n\n> +1 to setting checkpoint_completion_target to 0.9 by default.\n\nFWIW, I kind of like the idea of getting rid of it completely.\nIs there really ever a good reason to set it to something different\nthan that? If not, well, we have too many GUCs already, and each\nof them carries nonzero performance, documentation, and maintenance\noverhead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Dec 2020 12:41:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> > On 12/7/20, 9:53 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> >> Concretely, attached is a patch which changes the default and updates\n> >> the documentation accordingly.\n>\n> > +1 to setting checkpoint_completion_target to 0.9 by default.\n>\n> FWIW, I kind of like the idea of getting rid of it completely.\n> Is there really ever a good reason to set it to something different\n> than that? If not, well, we have too many GUCs already, and each\n> of them carries nonzero performance, documentation, and maintenance\n> overhead.\n>\n\n+1.\n\nThere are plenty of cases I think where it doesn't really matter with the\nvalues, but when it does I'm not sure what it would be where something else\nwould actually be better.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 8 Dec 2020 18:47:34 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On Tue, 2020-12-08 at 17:29 +0000, Bossart, Nathan wrote:\n> +1 to setting checkpoint_completion_target to 0.9 by default.\n\n+1 for changing the default or getting rid of it, as Tom suggested.\n\nWhile we are at it, could we change the default of \"log_lock_waits\" to \"on\"?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 09 Dec 2020 10:41:54 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Tue, 2020-12-08 at 17:29 +0000, Bossart, Nathan wrote:\n> > +1 to setting checkpoint_completion_target to 0.9 by default.\n> \n> +1 for changing the default or getting rid of it, as Tom suggested.\n\nAttached is a patch to change it from a GUC to a compile-time #define\nwhich is set to 0.9, with accompanying documentation updates.\n\n> While we are at it, could we change the default of \"log_lock_waits\" to \"on\"?\n\nWhile I agree that it'd be good to change quite a few of the log_X items\nto be 'on' by default, I'm not planning to work on this.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 10 Dec 2020 12:16:02 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Howdy,\n\nOn 2020-Dec-10, Stephen Frost wrote:\n\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > On Tue, 2020-12-08 at 17:29 +0000, Bossart, Nathan wrote:\n> > > +1 to setting checkpoint_completion_target to 0.9 by default.\n> > \n> > +1 for changing the default or getting rid of it, as Tom suggested.\n> \n> Attached is a patch to change it from a GUC to a compile-time #define\n> which is set to 0.9, with accompanying documentation updates.\n\nI think we should leave a doc stub or at least an <indexterm>, to let\npeople know the GUC has been removed rather than just making it\ncompletely invisible. (Maybe piggyback on the stuff in [1]?)\n\n[1] https://postgr.es/m/CAGRY4nyA=jmBNa4LVwgGO1GyO-RnFmfkesddpT_uO+3=mot8DA@mail.gmail.com\n\n\n\n",
"msg_date": "Thu, 10 Dec 2020 14:21:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> On 2020-Dec-10, Stephen Frost wrote:\n> > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > > On Tue, 2020-12-08 at 17:29 +0000, Bossart, Nathan wrote:\n> > > > +1 to setting checkpoint_completion_target to 0.9 by default.\n> > > \n> > > +1 for changing the default or getting rid of it, as Tom suggested.\n> > \n> > Attached is a patch to change it from a GUC to a compile-time #define\n> > which is set to 0.9, with accompanying documentation updates.\n> \n> I think we should leave a doc stub or at least an <indexterm>, to let\n> people know the GUC has been removed rather than just making it\n> completely invisible. (Maybe piggyback on the stuff in [1]?)\n> \n> [1] https://postgr.es/m/CAGRY4nyA=jmBNa4LVwgGO1GyO-RnFmfkesddpT_uO+3=mot8DA@mail.gmail.com\n\nYes, I agree, and am involved in that thread as well- currently waiting\nfeedback from others about the proposed approach.\n\nGetting a few more people looking at that thread and commenting on it\nwould really help us be able to move forward.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 10 Dec 2020 12:22:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Alvaro Herrera (alvherre@alvh.no-ip.org) wrote:\n> > On 2020-Dec-10, Stephen Frost wrote:\n> > > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > > > On Tue, 2020-12-08 at 17:29 +0000, Bossart, Nathan wrote:\n> > > > > +1 to setting checkpoint_completion_target to 0.9 by default.\n> > > > \n> > > > +1 for changing the default or getting rid of it, as Tom suggested.\n> > > \n> > > Attached is a patch to change it from a GUC to a compile-time #define\n> > > which is set to 0.9, with accompanying documentation updates.\n> > \n> > I think we should leave a doc stub or at least an <indexterm>, to let\n> > people know the GUC has been removed rather than just making it\n> > completely invisible. (Maybe piggyback on the stuff in [1]?)\n> > \n> > [1] https://postgr.es/m/CAGRY4nyA=jmBNa4LVwgGO1GyO-RnFmfkesddpT_uO+3=mot8DA@mail.gmail.com\n> \n> Yes, I agree, and am involved in that thread as well- currently waiting\n> feedback from others about the proposed approach.\n\nI've tried to push that forward. I'm happy to update this patch once\nwe've got agreement to move forward on that, to wit, adding to an\n'obsolete' section in the documentation information about this\nparticular GUC and how it's been removed due to not being sensible or\nnecessary to continue to have.\n\n> Getting a few more people looking at that thread and commenting on it\n> would really help us be able to move forward.\n\nThis is still the case though..\n\nThanks!\n\nStephen",
"msg_date": "Wed, 13 Jan 2021 17:10:38 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 12:16:02PM -0500, Stephen Frost wrote:\n> Attached is a patch to change it from a GUC to a compile-time #define\n> which is set to 0.9, with accompanying documentation updates.\n\nAll the references to checkpoint_target_completion are removed (except\nfor bgwriter.h as per the patch).\n\n> This is because it performs a checkpoint, and the I/O\n> - required for the checkpoint will be spread out over a significant\n> - period of time, by default half your inter-checkpoint interval\n> - (see the configuration parameter\n> - <xref linkend=\"guc-checkpoint-completion-target\"/>). This is\n> + required for the checkpoint will be spread out over the inter-checkpoint\n> + interval (see the configuration parameter\n> + <xref linkend=\"guc-checkpoint-timeout\"/>). This is\n\nIt may be worth mentioning that this is spread across 90% of the last\ncheckpoint's duration instead.\n\n> - in about half the time before the next checkpoint starts. On a system\n> - that's very close to maximum I/O throughput during normal operation,\n> - you might want to increase <varname>checkpoint_completion_target</varname>\n> - to reduce the I/O load from checkpoints. The disadvantage of this is that\n> - prolonging checkpoints affects recovery time, because more WAL segments\n> - will need to be kept around for possible use in recovery. Although\n> - <varname>checkpoint_completion_target</varname> can be set as high as 1.0,\n> - it is best to keep it less than that (perhaps 0.9 at most) since\n> - checkpoints include some other activities besides writing dirty buffers.\n> - A setting of 1.0 is quite likely to result in checkpoints not being\n> - completed on time, which would result in performance loss due to\n> - unexpected variation in the number of WAL segments needed.\n> + This spreads out the I/O as much as possible to have the I/O load be consistent\n> + during the checkpoint and generally throughout the operation of the system. 
The\n> + disadvantage of this is that prolonging checkpoints affects recovery time,\n> + because more WAL segments will need to be kept around for possible use in recovery.\n> + A user concerned about the amount of time required to recover might wish to reduce\n> + <varname>checkpoint_timeout</varname>, causing checkpoints to happen more\n> + frequently.\n> </para>\n> \n> <para>\n\nAgain, this makes the description of the I/O spread more general,\nremoving the portion where half the time is used by default. Should\nthis stuff also mention the spread value of 90% instead?\n\n> * At a checkpoint, how many WAL segments to recycle as preallocated future\n> * XLOG segments? Returns the highest segment that should be preallocated.\n> @@ -8694,7 +8687,7 @@ UpdateCheckPointDistanceEstimate(uint64 nbytes)\n> *\tCHECKPOINT_IS_SHUTDOWN: checkpoint is for database shutdown.\n> *\tCHECKPOINT_END_OF_RECOVERY: checkpoint is for end of WAL recovery.\n> *\tCHECKPOINT_IMMEDIATE: finish the checkpoint ASAP,\n> - *\t\tignoring checkpoint_completion_target parameter.\n> + *\t\tignoring the CheckPointCompletionTarget.\n\ns/the//?\n\n> \t * be a large gap between a checkpoint's redo-pointer and the checkpoint\n> \t * record itself, and we only start the restartpoint after we've seen the\n> \t * checkpoint record. 
(The gap is typically up to CheckPointSegments *\n> -\t * checkpoint_completion_target where checkpoint_completion_target is the\n> +\t * CheckPointCompletionTarget where CheckPointCompletionTarget is the\n> \t * value that was in effect when the WAL was generated).\n\nThe last part of this sentence does not make sense.\nCheckPointCompletionTarget becomes a constant with this patch.\n\n> \tif (RecoveryInProgress())\n> @@ -903,7 +902,7 @@ CheckpointerShmemInit(void)\n> *\tCHECKPOINT_IS_SHUTDOWN: checkpoint is for database shutdown.\n> *\tCHECKPOINT_END_OF_RECOVERY: checkpoint is for end of WAL recovery.\n> *\tCHECKPOINT_IMMEDIATE: finish the checkpoint ASAP,\n> - *\t\tignoring checkpoint_completion_target parameter.\n> + *\t\tignoring the CheckPointCompletionTarget.\n\ns/the//?\n\n> + * CheckPointCompletionTarget used to be exposed as a GUC named\n> + * checkpoint_completion_target, but there's little evidence to suggest that\n> + * there's actually a case for it being a different value, so it's no longer\n> + * exposed as a GUC to be configured.\n\nI would just remove this paragraph.\n--\nMichael",
"msg_date": "Thu, 14 Jan 2021 14:48:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-08 12:41:35 -0500, Tom Lane wrote:\n> FWIW, I kind of like the idea of getting rid of it completely.\n> Is there really ever a good reason to set it to something different\n> than that? If not, well, we have too many GUCs already, and each\n> of them carries nonzero performance, documentation, and maintenance\n> overhead.\n\nI like the idea of getting rid of it too, but I think we should consider\nevaluating the concrete hard-coded value a bit more careful than just\ngoing for 0.9 based on some old recommendations in the docs. It not\nbeing changeable afterwards...\n\nI think it might be a good idea to immediately change the default to\n0.9, and concurrently try to evaluate whether it's really the best value\n(vs 0.95, 1 or ...).\n\nFWIW I have seen a few cases in the past where setting the target to\nsomething very small helped, but I think that was mostly because we\ndidn't yet tell the kernel to flush dirty data more aggressively.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jan 2021 13:51:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 1/15/21 10:51 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2020-12-08 12:41:35 -0500, Tom Lane wrote:\n>> FWIW, I kind of like the idea of getting rid of it completely.\n>> Is there really ever a good reason to set it to something different\n>> than that? If not, well, we have too many GUCs already, and each\n>> of them carries nonzero performance, documentation, and maintenance\n>> overhead.\n> \n> I like the idea of getting rid of it too, but I think we should consider\n> evaluating the concrete hard-coded value a bit more careful than just\n> going for 0.9 based on some old recommendations in the docs. It not\n> being changeable afterwards...\n> \n> I think it might be a good idea to immediately change the default to\n> 0.9, and concurrently try to evaluate whether it's really the best value\n> (vs 0.95, 1 or ...).\n> \n> FWIW I have seen a few cases in the past where setting the target to\n> something very small helped, but I think that was mostly because we\n> didn't yet tell the kernel to flush dirty data more aggressively.\n> \n\nYeah. The flushing probably makes that mostly unnecessary, but we still\nallow disabling that. I'm not really convinced replacing it with a\ncompile-time #define is a good idea, exactly because it can't be changed\nif needed.\n\nAs for the exact value, maybe the right solution is to make it dynamic.\nThe usual approach is to leave \"enough time\" for the kernel to flush\ndirty data, so we could say 60 seconds and calculate the exact target\ndepending on the checkpoint_timeout.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 Jan 2021 23:05:02 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Hi,\n\nOn 2021-01-15 23:05:02 +0100, Tomas Vondra wrote:\n> Yeah. The flushing probably makes that mostly unnecessary, but we still\n> allow disabling that. I'm not really convinced replacing it with a\n> compile-time #define is a good idea, exactly because it can't be changed\n> if needed.\n\nIt's also not available everywhere...\n\n\n> As for the exact value, maybe the right solution is to make it dynamic.\n> The usual approach is to leave \"enough time\" for the kernel to flush\n> dirty data, so we could say 60 seconds and calculate the exact target\n> depending on the checkpoint_timeout.\n\nIME the kernel flushing at some later time precisely is the problem,\nbecause of the latency spikes that happen when it decides to do so. That\ncommonly starts to happen well before the fsyncs. The reason that\nsetting a very small checkpoint_completion_target can help is that it\ncondenses the period of unrealiable performance into one short time,\nrather than spreading it over the whole checkpoint...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jan 2021 14:41:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 2021-01-13 23:10, Stephen Frost wrote:\n>> Yes, I agree, and am involved in that thread as well- currently waiting\n>> feedback from others about the proposed approach.\n> I've tried to push that forward. I'm happy to update this patch once\n> we've got agreement to move forward on that, to wit, adding to an\n> 'obsolete' section in the documentation information about this\n> particular GUC and how it's been removed due to not being sensible or\n> necessary to continue to have.\n\nSome discussion a few days ago was arguing that it was still necessary \nin some cases as a way to counteract the possible lack of tuning in the \nkernel flushing behavior. I think in light of that we should go with \nyour first patch that just changes the default, possibly with the \ndocumentation updated a bit.\n\n\n",
"msg_date": "Tue, 19 Jan 2021 09:21:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 2021-01-13 23:10, Stephen Frost wrote:\n> >>Yes, I agree, and am involved in that thread as well- currently waiting\n> >>feedback from others about the proposed approach.\n> >I've tried to push that forward. I'm happy to update this patch once\n> >we've got agreement to move forward on that, to wit, adding to an\n> >'obsolete' section in the documentation information about this\n> >particular GUC and how it's been removed due to not being sensible or\n> >necessary to continue to have.\n> \n> Some discussion a few days ago was arguing that it was still necessary in\n> some cases as a way to counteract the possible lack of tuning in the kernel\n> flushing behavior. I think in light of that we should go with your first\n> patch that just changes the default, possibly with the documentation updated\n> a bit.\n\nRebased and updated patch attached which moves back to just changing the\ndefault instead of removing the option, with a more explicit call-out of\nthe '90%', as suggested by Michael on the other patch.\n\nAny further comments or thoughts on this one?\n\nThanks,\n\nStephen",
"msg_date": "Tue, 19 Jan 2021 14:14:50 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Any further comments or thoughts on this one?\n\nThis:\n\n+ total time between checkpoints. The default is 0.9, which spreads the\n+ checkpoint across the entire checkpoint timeout period of time,\n\nis confusing because 0.9 is obviously not 1.0; people will wonder\nwhether the scale is something strange or the text is just wrong.\nThey will also wonder why not use 1.0 instead. So perhaps more like\n\n\t... The default is 0.9, which spreads the checkpoint across almost\n\tall the available interval, providing fairly consistent I/O load\n\twhile also leaving some slop for checkpoint completion overhead.\n\nThe other chunk of text seems accurate, but there's no reason to let\nthis one be misleading.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 14:30:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Any further comments or thoughts on this one?\n> \n> This:\n> \n> + total time between checkpoints. The default is 0.9, which spreads the\n> + checkpoint across the entire checkpoint timeout period of time,\n> \n> is confusing because 0.9 is obviously not 1.0; people will wonder\n> whether the scale is something strange or the text is just wrong.\n> They will also wonder why not use 1.0 instead. So perhaps more like\n> \n> \t... The default is 0.9, which spreads the checkpoint across almost\n> \tall the available interval, providing fairly consistent I/O load\n> \twhile also leaving some slop for checkpoint completion overhead.\n> \n> The other chunk of text seems accurate, but there's no reason to let\n> this one be misleading.\n\nGood point, updated along those lines.\n\nIn passing, I noticed that we have a lot of documentation like:\n\nThis parameter can only be set in the postgresql.conf file or on the\nserver command line.\n\n... which hasn't been true since the introduction of ALTER SYSTEM. I\ndon't really think it's this patch's job to clean that up but it doesn't\nseem quite right that we don't include ALTER SYSTEM in that list either.\nIf this was C code, maybe we could get away with just changing such\nreferences as we find them, but I don't think we'd want the\ndocumentation to be in an inconsistent state regarding that.\n\nAnyone want to opine about what to do with that? Should we consider\nchanging those to mention ALTER SYSTEM? Or perhaps have a way of saying\n\"at server start\" that then links to \"how to set options at server\nstart\", perhaps..\n\nThanks,\n\nStephen",
"msg_date": "Tue, 19 Jan 2021 14:47:48 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> In passing, I noticed that we have a lot of documentation like:\n\n> This parameter can only be set in the postgresql.conf file or on the\n> server command line.\n\n> ... which hasn't been true since the introduction of ALTER SYSTEM.\n\nWell, it's still true if you understand \"the postgresql.conf file\"\nto cover whatever's included by postgresql.conf, notably\npostgresql.auto.conf (and the include facility existed long before\nthat, too, so you needed the expanded interpretation even then).\nStill, I take your point that it's confusing.\n\nI like your suggestion of shortening all of these to be \"can only be set\nat server start\", or maybe better \"cannot be changed after server start\".\nI'm not sure whether or not we really need new text elsewhere; I think\nsection 20.1 is pretty long already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jan 2021 15:12:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "\nOn Wed, 20 Jan 2021 at 03:47, Stephen Frost <sfrost@snowman.net> wrote:\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>> > Any further comments or thoughts on this one?\n>> \n>> This:\n>> \n>> + total time between checkpoints. The default is 0.9, which spreads the\n>> + checkpoint across the entire checkpoint timeout period of time,\n>> \n>> is confusing because 0.9 is obviously not 1.0; people will wonder\n>> whether the scale is something strange or the text is just wrong.\n>> They will also wonder why not use 1.0 instead. So perhaps more like\n>> \n>> \t... The default is 0.9, which spreads the checkpoint across almost\n>> \tall the available interval, providing fairly consistent I/O load\n>> \twhile also leaving some slop for checkpoint completion overhead.\n>> \n>> The other chunk of text seems accurate, but there's no reason to let\n>> this one be misleading.\n>\n> Good point, updated along those lines.\n>\n> In passing, I noticed that we have a lot of documentation like:\n>\n> This parameter can only be set in the postgresql.conf file or on the\n> server command line.\n>\n> ... which hasn't been true since the introduction of ALTER SYSTEM. I\n> don't really think it's this patch's job to clean that up but it doesn't\n> seem quite right that we don't include ALTER SYSTEM in that list either.\n> If this was C code, maybe we could get away with just changing such\n> references as we find them, but I don't think we'd want the\n> documentation to be in an inconsistent state regarding that.\n>\n\nI have already mentioned this in [1], however it seems unattractive.\n\n[1] - https://www.postgresql.org/message-id/flat/199703E4-A795-4FB8-911C-D0DE9F51519C%40hotmail.com\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 20 Jan 2021 10:59:27 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 1/19/21 2:47 PM, Stephen Frost wrote:\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> Any further comments or thoughts on this one?\n>>\n>> This:\n>>\n>> + total time between checkpoints. The default is 0.9, which spreads the\n>> + checkpoint across the entire checkpoint timeout period of time,\n>>\n>> is confusing because 0.9 is obviously not 1.0; people will wonder\n>> whether the scale is something strange or the text is just wrong.\n>> They will also wonder why not use 1.0 instead. So perhaps more like\n>>\n>> \t... The default is 0.9, which spreads the checkpoint across almost\n>> \tall the available interval, providing fairly consistent I/O load\n>> \twhile also leaving some slop for checkpoint completion overhead.\n>>\n>> The other chunk of text seems accurate, but there's no reason to let\n>> this one be misleading.\n> \n> Good point, updated along those lines.\n\nI had a look at the patch and the change and new documentation seem \nsensible to me.\n\nI think this phrase may be a bit too idiomatic:\n\n+ consistent I/O load while also leaving some slop for checkpoint\n\nPerhaps just:\n\n+ consistent I/O load while also leaving some time for checkpoint\n\nIt seems to me that the discussion about changing the wording for GUCs \nnot changeable after server should be saved for another patch as long as \nthis patch follows the current convention.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 19 Mar 2021 12:09:05 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> I had a look at the patch and the change and new documentation seem sensible\n> to me.\n\nThanks!\n\n> I think this phrase may be a bit too idiomatic:\n> \n> + consistent I/O load while also leaving some slop for checkpoint\n> \n> Perhaps just:\n> \n> + consistent I/O load while also leaving some time for checkpoint\n\nYeah, good thought, updated.\n\n> It seems to me that the discussion about changing the wording for GUCs not\n> changeable after server should be saved for another patch as long as this\n> patch follows the current convention.\n\nAgreed.\n\nUnless there's anything further on this, I'll plan to commit it tomorrow\nor Wednesday.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 22 Mar 2021 13:11:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 01:11:00PM -0400, Stephen Frost wrote:\n> Unless there's anything further on this, I'll plan to commit it tomorrow\n> or Wednesday.\n\nCool, looks fine to me.\n\nThis version of the patch has forgotten to update one spot:\nsrc/backend/postmaster/checkpointer.c:double CheckPointCompletionTarget = 0.5;\n--\nMichael",
"msg_date": "Tue, 23 Mar 2021 14:31:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Mar 22, 2021 at 01:11:00PM -0400, Stephen Frost wrote:\n> > Unless there's anything further on this, I'll plan to commit it tomorrow\n> > or Wednesday.\n> \n> Cool, looks fine to me.\n> \n> This version of the patch has forgotten to update one spot:\n> src/backend/postmaster/checkpointer.c:double CheckPointCompletionTarget = 0.5;\n\nHah! Indeed!\n\nFixed in the attached.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 23 Mar 2021 12:24:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "LGTM. I just have a few small wording suggestions.\r\n\r\n+ completion overhead. Reducing this parameter is not recommended as that\r\n+ causes the I/O from the checkpoint to have to complete faster, resulting\r\n+ in a higher I/O rate, while then having a period of less I/O between the\r\n+ completion of the checkpoint and the start of the next scheduled\r\n+ checkpoint. This parameter can only be set in the\r\n\r\nReducing this parameter is not recommended because it forces the\r\ncheckpoint to complete faster. This results in a higher rate of I/O\r\nduring the checkpoint followed by a period of less I/O between\r\ncheckpoint completion and the next scheduled checkpoint.\r\n\r\n+ duration). This spreads out the I/O as much as possible to have the I/O load be\r\n+ consistent during the checkpoint. The disadvantage of this is that prolonging\r\n\r\nThis spreads out the I/O as much as possible so that the checkpoint\r\nI/O load is consistent throughout the checkpoint interval.\r\n\r\n+ around for possible use in recovery. A user concerned about the amount of time\r\n+ required to recover might wish to reduce <varname>checkpoint_timeout</varname>,\r\n+ causing checkpoints to happen more frequently while still spreading out the I/O\r\n+ from each checkpoint. 
Alternatively,\r\n\r\nA user concerned about the amount of time required to recover might\r\nwish to reduce checkpoint_timeout so that checkpoints occur more\r\nfrequently but still spread the I/O across the checkpoint interval.\r\n\r\n+ Although <varname>checkpoint_completion_target</varname> could be set as high as\r\n+ 1.0, it is best to keep it less than that (such as at the default of 0.9, at most)\r\n+ since checkpoints include some other activities besides writing dirty buffers.\r\n\r\nAlthough checkpoint_completion_target can be set as high at 1.0, it is\r\ntypically recommended to set it to no higher than 0.9 (the default)\r\nsince checkpoints include some other activities besides writing dirty\r\nbuffers.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 23 Mar 2021 18:24:07 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 06:24:07PM +0000, Bossart, Nathan wrote:\n> LGTM. I just have a few small wording suggestions.\n> \n> + completion overhead. Reducing this parameter is not recommended as that\n> + causes the I/O from the checkpoint to have to complete faster, resulting\n> + in a higher I/O rate, while then having a period of less I/O between the\n> + completion of the checkpoint and the start of the next scheduled\n> + checkpoint. This parameter can only be set in the\n> \n> Reducing this parameter is not recommended because it forces the\n> checkpoint to complete faster. This results in a higher rate of I/O\n> during the checkpoint followed by a period of less I/O between\n> checkpoint completion and the next scheduled checkpoint.\n\nFYI, I am very happy this issue is being addressed for PG 14. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:30:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> LGTM. I just have a few small wording suggestions.\n\nAgreed, those looked like good suggestions and so I've incorporated\nthem.\n\nUpdated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 23 Mar 2021 15:19:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "On 3/23/21, 12:19 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> * Bossart, Nathan (bossartn@amazon.com) wrote:\r\n> > LGTM. I just have a few small wording suggestions.\r\n>\r\n> Agreed, those looked like good suggestions and so I've incorporated\r\n> them.\r\n>\r\n> Updated patch attached.\r\n\r\nLooks good!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 23 Mar 2021 22:52:32 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Change default of checkpoint_completion_target"
},
{
"msg_contents": "Greetings,\n\n* Bossart, Nathan (bossartn@amazon.com) wrote:\n> On 3/23/21, 12:19 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n> > * Bossart, Nathan (bossartn@amazon.com) wrote:\n> > > LGTM. I just have a few small wording suggestions.\n> >\n> > Agreed, those looked like good suggestions and so I've incorporated\n> > them.\n> >\n> > Updated patch attached.\n> \n> Looks good!\n\nGreat, pushed! Thanks to everyone for your thoughts, comments,\nsuggestions, and improvments.\n\nStephen",
"msg_date": "Wed, 24 Mar 2021 13:09:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Change default of checkpoint_completion_target"
}
] |
[
{
"msg_contents": "There's a race condition between the checkpoint at promotion and \npg_rewind. When a server is promoted, the startup process writes an \nend-of-recovery checkpoint that includes the new TLI, and the server is \nimmediate opened for business. The startup process requests the \ncheckpointer process to perform a checkpoint, but it can take a few \nseconds or more to complete. If you run pg_rewind, using the just \npromoted server as the source, pg_rewind will think that the server is \nstill on the old timeline, because it only looks at TLI in the control \nfile's copy of the checkpoint record. That's not updated until the \ncheckpoint is finished.\n\nThis isn't a new issue. Stephen Frost first reported it back 2015 [1]. \nBack then, it was deemed just a small annoyance, and we just worked \naround it in the tests by issuing a checkpoint command after promotion, \nto wait for the checkpoint to finish. I just ran into it again today, \nwith the new pg_rewind test, and silenced it in the similar way.\n\nI think we should fix this properly. I'm not sure if it can lead to a \nbroken cluster, but at least it can cause pg_rewind to fail \nunnecessarily and in a user-unfriendly way. But this is actually pretty \nsimple to fix. pg_rewind looks at the control file to find out the \ntimeline the server is on. When promotion happens, the startup process \nupdates minRecoveryPoint and minRecoveryPointTLI fields in the control \nfile. We just need to read it from there. Patch attached.\n\nI think we should also backpatch this. Back in 2015, we decided that we \ncan live with this, but it's always been a bit bogus, and seems simple \nenough to fix.\n\nThoughts?\n\n[1] \nhttps://www.postgresql.org/message-id/20150428180253.GU30322%40tamriel.snowman.net\n\n- Heikki",
"msg_date": "Mon, 7 Dec 2020 20:13:25 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "pg_rewind race condition just after promotion"
},
{
"msg_contents": "At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> There's a race condition between the checkpoint at promotion and\n> pg_rewind. When a server is promoted, the startup process writes an\n> end-of-recovery checkpoint that includes the new TLI, and the server\n> is immediate opened for business. The startup process requests the\n> checkpointer process to perform a checkpoint, but it can take a few\n> seconds or more to complete. If you run pg_rewind, using the just\n> promoted server as the source, pg_rewind will think that the server is\n> still on the old timeline, because it only looks at TLI in the control\n> file's copy of the checkpoint record. That's not updated until the\n> checkpoint is finished.\n> \n> This isn't a new issue. Stephen Frost first reported it back 2015\n> [1]. Back then, it was deemed just a small annoyance, and we just\n> worked around it in the tests by issuing a checkpoint command after\n> promotion, to wait for the checkpoint to finish. I just ran into it\n> again today, with the new pg_rewind test, and silenced it in the\n> similar way.\n\nI (or we) faced that and avoided that by checking for history file on\nthe primary.\n\n> I think we should fix this properly. I'm not sure if it can lead to a\n> broken cluster, but at least it can cause pg_rewind to fail\n> unnecessarily and in a user-unfriendly way. But this is actually\n> pretty simple to fix. pg_rewind looks at the control file to find out\n> the timeline the server is on. When promotion happens, the startup\n> process updates minRecoveryPoint and minRecoveryPointTLI fields in the\n> control file. We just need to read it from there. Patch attached.\n\nLooks fine to me. 
A bit concerned about making sourceHistory\nneedlessly file-local but on the other hand unifying sourceHistory and\ntargetHistory looks better.\n\nFor the test part, that change doesn't necessarily catch the failure\nof the current version, but I *believe* the previous code is the result\nof an actual failure in the past, so the test probabilistically (or\ndepending on platforms?) hits the failure if it happened.\n\n> I think we should also backpatch this. Back in 2015, we decided that\n> we can live with this, but it's always been a bit bogus, and seems\n> simple enough to fix.\n\nI don't think this changes any successful behavior and it just saves\nthe failure case so +1 for back-patching.\n\n> Thoughts?\n> \n> [1]\n> https://www.postgresql.org/message-id/20150428180253.GU30322%40tamriel.snowman.net\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Dec 2020 13:45:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "On 08/12/2020 06:45, Kyotaro Horiguchi wrote:\n> At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n>> I think we should fix this properly. I'm not sure if it can lead to a\n>> broken cluster, but at least it can cause pg_rewind to fail\n>> unnecessarily and in a user-unfriendly way. But this is actually\n>> pretty simple to fix. pg_rewind looks at the control file to find out\n>> the timeline the server is on. When promotion happens, the startup\n>> process updates minRecoveryPoint and minRecoveryPointTLI fields in the\n>> control file. We just need to read it from there. Patch attached.\n> \n> Looks fine to me. A bit concerned about making sourceHistory\n> needlessly file-local but on the other hand unifying sourceHistory and\n> targetHistory looks better.\n\nLooking closer, findCommonAncestorTimeline() was freeing sourceHistory, \nwhich was pretty horrible when it's a file-local variable. I changed it \nso that both the source and target histories are passed to \nfindCommonAncestorTimeline() as arguments. That seems more clear.\n\n> For the test part, that change doesn't necessariry catch the failure\n> of the current version, but I *believe* the prevous code is the result\n> of an actual failure in the past so the test probablistically (or\n> dependently on platforms?) hits the failure if it happned.\n\nRight. I think the current test coverage is good enough. We've been \nbitten by this a few times by now, when we've forgotten to add the \nmanual checkpoint commands to new tests, and the buildfarm has caught it \npretty quickly.\n\n>> I think we should also backpatch this. Back in 2015, we decided that\n>> we can live with this, but it's always been a bit bogus, and seems\n>> simple enough to fix.\n> \n> I don't think this changes any successful behavior and it just saves\n> the failure case so +1 for back-patching.\n\nThanks for the review! New patch version attached.\n\n- Heikki",
"msg_date": "Wed, 9 Dec 2020 15:35:18 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 6:35 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 08/12/2020 06:45, Kyotaro Horiguchi wrote:\n> > At Mon, 7 Dec 2020 20:13:25 +0200, Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote in\n> >> I think we should fix this properly. I'm not sure if it can lead to a\n> >> broken cluster, but at least it can cause pg_rewind to fail\n> >> unnecessarily and in a user-unfriendly way. But this is actually\n> >> pretty simple to fix. pg_rewind looks at the control file to find out\n> >> the timeline the server is on. When promotion happens, the startup\n> >> process updates minRecoveryPoint and minRecoveryPointTLI fields in the\n> >> control file. We just need to read it from there. Patch attached.\n> >\n> > Looks fine to me. A bit concerned about making sourceHistory\n> > needlessly file-local but on the other hand unifying sourceHistory and\n> > targetHistory looks better.\n>\n> Looking closer, findCommonAncestorTimeline() was freeing sourceHistory,\n> which was pretty horrible when it's a file-local variable. I changed it\n> so that both the source and target histories are passed to\n> findCommonAncestorTimeline() as arguments. That seems more clear.\n>\n> > For the test part, that change doesn't necessariry catch the failure\n> > of the current version, but I *believe* the prevous code is the result\n> > of an actual failure in the past so the test probablistically (or\n> > dependently on platforms?) hits the failure if it happned.\n>\n> Right. I think the current test coverage is good enough. We've been\n> bitten by this a few times by now, when we've forgotten to add the\n> manual checkpoint commands to new tests, and the buildfarm has caught it\n> pretty quickly.\n>\n> >> I think we should also backpatch this. 
Back in 2015, we decided that\n> >> we can live with this, but it's always been a bit bogus, and seems\n> >> simple enough to fix.\n> >\n> > I don't think this changes any successful behavior and it just saves\n> > the failure case so +1 for back-patching.\n>\n> Thanks for the review! New patch version attached.\n>\n> - Heikki\n>\n\nThe patch does not apply successfully:\n\n http://cfbot.cputube.org/patch_32_2864.log\n 1 out of 10 hunks FAILED -- saving rejects to file\nsrc/bin/pg_rewind/pg_rewind.c.rej\n\nThere was a minor issue, so I have rebased the patch. Please take a look.\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Mon, 8 Mar 2021 18:06:48 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe v3 patch LGTM. I wonder if we should explicitly say in pg_rewind tests that\r\nthey _don't_ have to call `checkpoint`, or otherwise, we will lose the test\r\ncoverage for this scenario. But I don't have a strong opinion on this one.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 14 Jul 2021 12:03:22 +0000",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "> On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> The v3 patch LGTM. I wonder if we should explicitly say in pg_rewind tests that\n> they _don't_ have to call `checkpoint`, or otherwise, we will lose the test\n> coverage for this scenario. But I don't have a strong opinion on this one.\n> \n> The new status of this patch is: Ready for Committer\n\nHeikki, do you have plans to address this patch during this CF?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 9 Nov 2021 12:31:51 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "2021年11月9日(火) 20:31 Daniel Gustafsson <daniel@yesql.se>:\n>\n> > On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: tested, passed\n> >\n> > The v3 patch LGTM. I wonder if we should explicitly say in pg_rewind tests that\n> > they _don't_ have to call `checkpoint`, or otherwise, we will lose the test\n> > coverage for this scenario. But I don't have a strong opinion on this one.\n> >\n> > The new status of this patch is: Ready for Committer\n>\n> Heikki, do you have plans to address this patch during this CF?\n\nFriendly reminder ping one year on; I haven't looked at this patch in\ndetail but going by the thread contents it seems it should be marked\n\"Ready for Committer\"? Moved to the next CF anyway.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sun, 11 Dec 2022 09:01:05 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "On 11/12/2022 02:01, Ian Lawrence Barwick wrote:\n> 2021年11月9日(火) 20:31 Daniel Gustafsson <daniel@yesql.se>:\n>>\n>>> On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>>\n>>> The following review has been posted through the commitfest application:\n>>> make installcheck-world: tested, passed\n>>> Implements feature: tested, passed\n>>> Spec compliant: tested, passed\n>>> Documentation: tested, passed\n>>>\n>>> The v3 patch LGTM. I wonder if we should explicitly say in pg_rewind tests that\n>>> they _don't_ have to call `checkpoint`, or otherwise, we will lose the test\n>>> coverage for this scenario. But I don't have a strong opinion on this one.\n>>>\n>>> The new status of this patch is: Ready for Committer\n>>\n>> Heikki, do you have plans to address this patch during this CF?\n> \n> Friendly reminder ping one year on; I haven't looked at this patch in\n> detail but going by the thread contents it seems it should be marked\n> \"Ready for Committer\"? Moved to the next CF anyway.\n\nHere's an updated version of the patch.\n\nI renamed the arguments to findCommonAncestorTimeline() so that the \n'targetHistory' argument doesn't shadow the global 'targetHistory' \nvariable. No other changes, and this still looks good to me, so I'll \nwait for the cfbot to run on this and commit in the next few days.\n\n- Heikki",
"msg_date": "Wed, 22 Feb 2023 16:00:27 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind race condition just after promotion"
},
{
"msg_contents": "On 22/02/2023 16:00, Heikki Linnakangas wrote:\n> On 11/12/2022 02:01, Ian Lawrence Barwick wrote:\n>> 2021年11月9日(火) 20:31 Daniel Gustafsson <daniel@yesql.se>:\n>>>\n>>>> On 14 Jul 2021, at 14:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>>>\n>>>> The following review has been posted through the commitfest application:\n>>>> make installcheck-world: tested, passed\n>>>> Implements feature: tested, passed\n>>>> Spec compliant: tested, passed\n>>>> Documentation: tested, passed\n>>>>\n>>>> The v3 patch LGTM. I wonder if we should explicitly say in pg_rewind tests that\n>>>> they _don't_ have to call `checkpoint`, or otherwise, we will lose the test\n>>>> coverage for this scenario. But I don't have a strong opinion on this one.\n>>>>\n>>>> The new status of this patch is: Ready for Committer\n>>>\n>>> Heikki, do you have plans to address this patch during this CF?\n>>\n>> Friendly reminder ping one year on; I haven't looked at this patch in\n>> detail but going by the thread contents it seems it should be marked\n>> \"Ready for Committer\"? Moved to the next CF anyway.\n> \n> Here's an updated version of the patch.\n> \n> I renamed the arguments to findCommonAncestorTimeline() so that the\n> 'targetHistory' argument doesn't shadow the global 'targetHistory'\n> variable. No other changes, and this still looks good to me, so I'll\n> wait for the cfbot to run on this and commit in the next few days.\n\nPushed. I decided not to backpatch this, after all. We haven't really \nbeen treating this as a bug so far, and the patch didn't apply cleanly \nto v13 and before.\n\n- Heikki\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 15:43:02 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind race condition just after promotion"
}
] |
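The fix discussed in the thread above boils down to this: just after promotion, the control file's minRecoveryPointTLI already carries the new timeline while the control file's copy of the last checkpoint still shows the old one, so pg_rewind should consult the former rather than only the latter. Here is a minimal sketch of that decision logic in Python; the field names mirror PostgreSQL's ControlFileData but this is an illustration of the idea, not the actual pg_rewind code.

```python
from dataclasses import dataclass

# Simplified stand-in for the fields pg_rewind reads from pg_control.
# In PostgreSQL these live in ControlFileData; the names here are illustrative.
@dataclass
class ControlFile:
    checkpoint_tli: int        # TLI in the control file's checkpoint copy
    min_recovery_point: int    # 0 means "no recovery target recorded"
    min_recovery_point_tli: int

def effective_source_tli(cf: ControlFile) -> int:
    """Pick the timeline the source server is really on.

    Just after promotion, minRecoveryPointTLI already carries the new
    timeline while the checkpoint copy still shows the old one, so we
    take whichever is newer.
    """
    if cf.min_recovery_point != 0 and cf.min_recovery_point_tli > cf.checkpoint_tli:
        return cf.min_recovery_point_tli
    return cf.checkpoint_tli

# Mid-promotion: checkpoint record still on TLI 1, minRecoveryPointTLI on 2.
mid_promotion = ControlFile(checkpoint_tli=1,
                            min_recovery_point=0x1000000,
                            min_recovery_point_tli=2)
print(effective_source_tli(mid_promotion))  # -> 2

# After the end-of-recovery checkpoint completes, both fields agree.
settled = ControlFile(checkpoint_tli=2,
                      min_recovery_point=0,
                      min_recovery_point_tli=2)
print(effective_source_tli(settled))  # -> 2
```

This also shows why the old behavior raced: a reader looking only at `checkpoint_tli` during the `mid_promotion` window would conclude the server is still on timeline 1.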
[
{
"msg_contents": "We've had get_canonical_class() for a while as a backend-only function.\nThere is some ad-hoc code elsewhere that implements the same logic in a\ncouple places, so it makes sense for all sites to use this function\ninstead, as in the attached.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 7 Dec 2020 15:24:56 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "small cleanup in unicode_norm.c"
},
{
"msg_contents": "On Mon, Dec 07, 2020 at 03:24:56PM -0400, John Naylor wrote:\n> We've had get_canonical_class() for a while as a backend-only function.\n> There is some ad-hoc code elsewhere that implements the same logic in a\n> couple places, so it makes sense for all sites to use this function\n> instead, as in the attached.\n\nThanks John for caring about that. This is a nice simplification, and\nit looks fine to me.\n\n-static uint8\n-get_canonical_class(pg_wchar ch)\n-{\nTwo nits here. I would use \"code\" for the name of the argument for\nconsistency with get_code_entry(), and add a description at the top of \nthis helper routine (say a simple \"get the combining class of given\ncode\"). Anything else you can think of?\n--\nMichael",
"msg_date": "Tue, 8 Dec 2020 18:44:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: small cleanup in unicode_norm.c"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 5:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 07, 2020 at 03:24:56PM -0400, John Naylor wrote:\n> > We've had get_canonical_class() for a while as a backend-only function.\n> > There is some ad-hoc code elsewhere that implements the same logic in a\n> > couple places, so it makes sense for all sites to use this function\n> > instead, as in the attached.\n>\n> Thanks John for caring about that. This is a nice simplification, and\n> it looks fine to me.\n>\n> -static uint8\n> -get_canonical_class(pg_wchar ch)\n> -{\n> Two nits here. I would use \"code\" for the name of the argument for\n> consistency with get_code_entry(), and add a description at the top of\n> this helper routine (say a simple \"get the combining class of given\n> code\"). Anything else you can think of?\n\nThanks for taking a look. Sounds good, I've made those adjustments and\nwrote a commit message. I took another look and didn't see anything else to\naddress.\n\n--\nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 8 Dec 2020 14:25:43 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: small cleanup in unicode_norm.c"
},
{
"msg_contents": "On Tue, Dec 08, 2020 at 02:25:43PM -0400, John Naylor wrote:\n> Thanks for taking a look. Sounds good, I've made those adjustments and\n> wrote a commit message. I took another look and didn't see anything else to\n> address.\n\nLooks good to me, so applied.\n--\nMichael",
"msg_date": "Wed, 9 Dec 2020 13:36:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: small cleanup in unicode_norm.c"
}
] |
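For readers unfamiliar with what get_canonical_class() returns: it is the Unicode canonical combining class (CCC) of a code point, which normalization uses to order contiguous combining marks. Python's standard library exposes the same property, which is handy for sanity-checking values independently of the C implementation; this is just an illustration using unicodedata, not PostgreSQL code.

```python
import unicodedata

# Canonical combining class: 0 for starters (base characters),
# non-zero for combining marks that must be canonically ordered.
print(unicodedata.combining("A"))        # -> 0: a starter
print(unicodedata.combining("\u0301"))   # -> 230: COMBINING ACUTE ACCENT (above)
print(unicodedata.combining("\u0323"))   # -> 220: COMBINING DOT BELOW

# NFC/NFD normalization sorts runs of combining marks by combining class,
# so "a" + acute (230) + dot-below (220) is reordered canonically:
s = "a\u0301\u0323"
print([hex(ord(c)) for c in unicodedata.normalize("NFD", s)])
# -> ['0x61', '0x323', '0x301']  (dot-below now precedes acute)
```

The reordering step is exactly where the combining class lookup matters, which is why centralizing it in one helper, as the patch does, reduces the chance of the ad-hoc copies drifting apart.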
[
{
"msg_contents": "Hi, hackers!\n\nI want to share some stats and thoughts about CF.\n\n***\nThe first is a graph with the numbers of committed, moved, returned, and \nrejected CF patches over time - [cf_items_status.png]. Credits to Dmitry \nDolgov for sharing his scripts to gather this stat.\n\n***\nBesides, I noticed that we have a lot of long-living discussions. And I \nwas curious what is the chance to get something committed after several \nCFs. The graph is in [num_commitfests.png]. So, most entries make it to \nrelease after just one or two commitfests.\n\nI think that the issue here is that the commitfest application now \nserves two different purposes:\n\nFirstly, we use it to track patches that we want to see in the nearest \nreleases and concentrate our efforts on. And current CFM guideline [1] \nreflects this idea. It suggests, that after the commitfest closure date \nwe relentlessly throw to RWF patches that got at least some feedback. To \nbe honest, I was reluctant to return semi-ready patches, because it \nmeans that they will get lost somewhere in mailing lists. And it seems \nlike other CFMs did the same.\nOn the other hand, we use Commitfest to track important entries that we \nwant to remember at least once in a while. You can find many examples in \nthe 'Bug Fixes' group of patches. They are too serious to move them to \nTODO list, yet too complex and/or rare to move on. And such entries \nsimply move from one CF to another.\n\nI wonder if we can improve the workflow somehow? Todo list was recently \ncleaned up, so maybe we can use it? 
Or we could add a special 'Backlog' \nsection to the commitfest application.\n\nWhat do you think?\n\n\n***\nI am also planning to update the CommitFest Checklist:\n- remove references to pgsql-rrreviewers;\n- add info about cfbot;\n- remove the 'Sudden Death Overtime' chapter as it no longer reflects \nreality.\n\nThoughts?\n\n\n[1] \nhttps://wiki.postgresql.org/wiki/CommitFest_Checklist#Sudden_Death_Overtime \n<https://wiki.postgresql.org/wiki/CommitFest_Checklist#Sudden_Death_Overtime>\n\n-- \nAnastasia Lubennikova\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 8 Dec 2020 00:16:11 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Commitfest statistics"
},
{
"msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> I want to share some stats and thoughts about CF.\n\nFirst, thanks again for managing this CF!\n\n> The first is a graph with the numbers of committed, moved, returned, and \n> rejected CF patches over time - [cf_items_status.png]. Credits to Dmitry \n> Dolgov for sharing his scripts to gather this stat.\n\nYeah, that is a very interesting graph. It shows that our actual\nthroughput of resolving patches has held more or less steady, which\nisn't very surprising because the available person-power has not\nchanged much in the last couple of years. But it looks like the\nnumber of cans getting kicked down the road is progressively\nincreasing. That's not something we can sustain indefinitely.\n\n> Besides, I noticed that we have a lot of long-living discussions. And I \n> was curious what is the chance to get something committed after several \n> CFs. The graph is in [num_commitfests.png]. So, most entries make it to \n> release after just one or two commitfests.\n\nIt's hard to see anything in this graph about what happens after the\nfirst couple of CFs. Maybe if you re-did it with a log Y axis, the\ntail would be more readable?\n\n> I think that the issue here is that the commitfest application now \n> serves two different purposes:\n\n> Firstly, we use it to track patches that we want to see in the nearest \n> releases and concentrate our efforts on. And current CFM guideline [1] \n> reflects this idea. It suggests, that after the commitfest closure date \n> we relentlessly throw to RWF patches that got at least some feedback. To \n> be honest, I was reluctant to return semi-ready patches, because it \n> means that they will get lost somewhere in mailing lists. And it seems \n> like other CFMs did the same.\n> On the other hand, we use Commitfest to track important entries that we \n> want to remember at least once in a while. 
You can find many examples in \n> the 'Bug Fixes' group of patches. They are too serious to move them to \n> TODO list, yet too complex and/or rare to move on. And such entries \n> simply move from one CF to another.\n\nYeah, the aggressive policy suggested in \"Sudden Death Overtime\" is\ncertainly not what's been followed lately. I agree that that's\nprobably too draconic. On the other hand, if a patch sits in the\nqueue for several CFs without getting committed, that suggests that\nmaybe we ought to reject it on the grounds of \"apparently nobody but\nthe author cares about this\". That argument is easier to make for\nfeatures than bug fixes of course, so maybe the policy needs to\ndistinguish what kind of change is being considered.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Dec 2020 16:58:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest statistics"
},
{
"msg_contents": "On 2020-Dec-07, Tom Lane wrote:\n\n> Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n\n> > Firstly, we use it to track patches that we want to see in the nearest \n> > releases and concentrate our efforts on. And current CFM guideline [1] \n> > reflects this idea. It suggests, that after the commitfest closure date \n> > we relentlessly throw to RWF patches that got at least some feedback. To \n> > be honest, I was reluctant to return semi-ready patches, because it \n> > means that they will get lost somewhere in mailing lists. And it seems \n> > like other CFMs did the same.\n> \n> Yeah, the aggressive policy suggested in \"Sudden Death Overtime\" is\n> certainly not what's been followed lately. I agree that that's\n> probably too draconic. On the other hand, if a patch sits in the\n> queue for several CFs without getting committed, that suggests that\n> maybe we ought to reject it on the grounds of \"apparently nobody but\n> the author cares about this\". That argument is easier to make for\n> features than bug fixes of course, so maybe the policy needs to\n> distinguish what kind of change is being considered.\n\nNote that this checklist was written in 2013 and has never been updated\nsince then. I think there is nothing in that policy that we do use.\nI'm thinking that rather than try to fine-tune that document, we ought\nto rewrite one from scratch.\n\nFor one thing, \"a beer or three\" only at end of CF is surely not\nsufficient.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 20:31:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest statistics"
}
] |
[
{
"msg_contents": "Hi all\n\nA new kernel API called io_uring has recently come to my attention. I\nassume some of you (Andres?) have been following it for a while.\n\nio_uring appears to offer a way to make system calls including reads,\nwrites, fsync()s, and more in a non-blocking, batched and pipelined manner,\nwith or without O_DIRECT. Basically async I/O with usable buffered I/O and\nfsync support. It has ordering support which is really important for us.\n\nThis should be on our radar. The main barriers to benefiting from linux-aio\nbased async I/O in postgres in the past has been its reliance on direct\nI/O, the various kernel-version quirks, platform portability, and its\nmaybe-async-except-when-it's-randomly-not nature.\n\nThe kernel version and portability remain an issue with io_uring so it's\nnot like this is something we can pivot over to completely. But we should\nprobably take a closer look at it.\n\nPostgreSQL spends a huge amount of time waiting, doing nothing, for\nblocking I/O. If we can improve that then we could potentially realize some\nmajor increases in I/O utilization especially for bigger, less concurrent\nworkloads. The most obvious candidates to benefit would be redo, logical\napply, and bulk loading.\n\nBut I have no idea how to even begin to fit this into PostgreSQL's executor\npipeline. Almost all PostgreSQL's code is synchronous-blocking-imperative\nin nature, with a push/pull executor pipeline. It seems to have been\nrecognised for some time that this is increasingly hurting our performance\nand scalability as platforms become more and more parallel.\n\nTo benefit from AIO (be it POSIX, linux-aio, io_uring, Windows AIO, etc) we\nhave to be able to dispatch I/O and do something else while we wait for the\nresults. 
So we need the ability to pipeline the executor and pipeline redo.\n\nI thought I'd start the discussion on this and see where we can go with it.\nWhat incremental steps can be done to move us toward parallelisable I/O\nwithout having to redesign everything?\n\nI'm thinking that redo is probably a good first candidate. It doesn't\ndepend on the guts of the executor. It is much less sensitive to ordering\nbetween operations in shmem and on disk since it runs in the startup\nprocess. And it hurts REALLY BADLY from its single-threaded blocking\napproach to I/O - as shown by an extension written by 2ndQuadrant that can\ndouble redo performance by doing read-ahead on btree pages that will soon\nbe needed.\n\nThoughts anybody?",
"msg_date": "Tue, 8 Dec 2020 10:55:37 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "References to get things started:\n\n* https://lwn.net/Articles/810414/\n* https://unixism.net/loti/what_is_io_uring.html\n*\nhttps://blogs.oracle.com/linux/an-introduction-to-the-io_uring-asynchronous-io-framework\n*\nhttps://thenewstack.io/how-io_uring-and-ebpf-will-revolutionize-programming-in-linux/\n\nYou'll probably notice how this parallels my sporadic activities around\npipelining in other areas, and the PoC libpq pipelining patch I sent in a\nfew years ago.\n\nReferences to get things started:* https://lwn.net/Articles/810414/* https://unixism.net/loti/what_is_io_uring.html* https://blogs.oracle.com/linux/an-introduction-to-the-io_uring-asynchronous-io-framework* https://thenewstack.io/how-io_uring-and-ebpf-will-revolutionize-programming-in-linux/You'll probably notice how this parallels my sporadic activities around pipelining in other areas, and the PoC libpq pipelining patch I sent in a few years ago.",
"msg_date": "Tue, 8 Dec 2020 11:00:30 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 3:56 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> I thought I'd start the discussion on this and see where we can go with it. What incremental steps can be done to move us toward parallelisable I/O without having to redesign everything?\n>\n> I'm thinking that redo is probably a good first candidate. It doesn't depend on the guts of the executor. It is much less sensitive to ordering between operations in shmem and on disk since it runs in the startup process. And it hurts REALLY BADLY from its single-threaded blocking approach to I/O - as shown by an extension written by 2ndQuadrant that can double redo performance by doing read-ahead on btree pages that will soon be needed.\n\nAbout the redo suggestion: https://commitfest.postgresql.org/31/2410/\ndoes exactly that! It currently uses POSIX_FADV_WILLNEED because\nthat's what PrefetchSharedBuffer() does, but when combined with a\n\"real AIO\" patch set (see earlier threads and conference talks on this\nby Andres) and a few small tweaks to control batching of I/O\nsubmissions, it does exactly what you're describing. I tried to keep\nthe WAL prefetcher project entirely disentangled from the core AIO\nwork, though, hence the \"poor man's AIO\" for now.\n\n\n",
"msg_date": "Tue, 8 Dec 2020 16:27:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
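The "poor man's AIO" Thomas mentions — PrefetchSharedBuffer() issuing posix_fadvise(POSIX_FADV_WILLNEED) for blocks that will be needed soon — can be sketched in a few lines. The system call is Linux-specific and exposed through Python's os module here; this illustrates the call pattern only, not PostgreSQL's actual prefetch code.

```python
import os
import tempfile

BLCKSZ = 8192  # PostgreSQL's default block size

def prefetch_blocks(fd: int, block_numbers) -> None:
    """Hint the kernel to read the given blocks ahead of time.

    posix_fadvise(WILLNEED) returns immediately; the kernel starts
    readahead in the background, so a later read() is likely a cache hit.
    """
    for blkno in block_numbers:
        os.posix_fadvise(fd, blkno * BLCKSZ, BLCKSZ, os.POSIX_FADV_WILLNEED)

# Demo on a scratch file: prefetch a few blocks, then read one of them.
with tempfile.TemporaryFile() as f:
    f.write(b"\x00" * (BLCKSZ * 16))
    f.flush()
    prefetch_blocks(f.fileno(), [3, 7, 11])  # e.g. pages redo will soon touch
    f.seek(7 * BLCKSZ)
    page = f.read(BLCKSZ)
    print(len(page))  # -> 8192
```

Unlike real AIO, this gives no completion notification and no error reporting for the background read, which is why the thread treats it as a stopgap that a "real AIO" patch set can later replace.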
{
"msg_contents": "On 12/8/20 3:55 AM, Craig Ringer wrote:\n> A new kernel API called io_uring has recently come to my attention. I \n> assume some of you (Andres?) have been following it for a while.\n\nAndres did a talk on this at FOSDEM PGDay earlier this year. You can see \nhis slides below, but since they are from January things might have \nchanged since then.\n\nhttps://www.postgresql.eu/events/fosdem2020/schedule/session/2959-asynchronous-io-for-postgresql/\n\nAndreas\n\n\n",
"msg_date": "Tue, 8 Dec 2020 04:51:49 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-08 10:55:37 +0800, Craig Ringer wrote:\n> A new kernel API called io_uring has recently come to my attention. I\n> assume some of you (Andres?) have been following it for a while.\n\nYea, I've spent a *lot* of time working on AIO support, utilizing\nio_uring. Recently Thomas also joined in the fun. I've given two talks\nreferencing it (last pgcon, last pgday brussels), but otherwise I've not\nyet written much about. Things aren't *quite* right yet architecturally,\nbut I think we're getting there.\n\nThomas is working on making the AIO infrastructure portable (a worker\nbased fallback, posix AIO support for freebsd & OSX). Once that's done,\nand some of the architectural thins are resolved, I plan to write a long\nemail about what I think the right design is, and where I am at.\n\nThe current state is at https://github.com/anarazel/postgres/tree/aio\n(but it's not a very clean history at the moment).\n\nThere's currently no windows AIO support, but it shouldn't be too hard\nto add. My preliminary look indicates that we'd likely have to use\noverlapped IO with WaitForMultipleObjects(), not IOCP, since we need to\nbe able to handle latches etc, which seems harder with IOCP. But perhaps\nwe can do something using the signal handling emulation posting events\nonto IOCP instead.\n\n\n> io_uring appears to offer a way to make system calls including reads,\n> writes, fsync()s, and more in a non-blocking, batched and pipelined manner,\n> with or without O_DIRECT. Basically async I/O with usable buffered I/O and\n> fsync support. It has ordering support which is really important for us.\n\nMy results indicate that we really want to have have, optional & not\nenabled by default of course, O_DIRECT support. We just can't benefit\nfully of modern SSDs otherwise. Buffered is also important, of course.\n\n\n> But I have no idea how to even begin to fit this into PostgreSQL's executor\n> pipeline. 
Almost all PostgreSQL's code is synchronous-blocking-imperative\n> in nature, with a push/pull executor pipeline. It seems to have been\n> recognised for some time that this is increasingly hurting our performance\n> and scalability as platforms become more and more parallel.\n\n> To benefit from AIO (be it POSIX, linux-aio, io_uring, Windows AIO, etc) we\n> have to be able to dispatch I/O and do something else while we wait for the\n> results. So we need the ability to pipeline the executor and pipeline redo.\n\n> I thought I'd start the discussion on this and see where we can go with it.\n> What incremental steps can be done to move us toward parallelisable I/O\n> without having to redesign everything?\n\nI'm pretty sure that I've got the basics of this working pretty well. I\ndon't think the executor architecture is as big an issue as you seem to\nthink. There are further benefits that could be unlocked if we had a\nmore flexible executor model (imagine switching between different parts\nof the query whenever blocked on IO - can't do that due to the stack\nright now).\n\nThe way it currently works is that things like sequential scans, vacuum,\netc use a prefetching helper which will try to use AIO to read ahead of\nthe next needed block. That helper uses callbacks to determine the next\nneeded block, which e.g. vacuum uses to skip over all-visible/frozen\nblocks. There are plenty of other places that should use that helper, but we\nalready can get considerably higher throughput for seqscans, vacuum on\nboth very fast local storage, and high-latency cloud storage.\n\nSimilarly, for writes there's a small helper to manage a write-queue of\nconfigurable depth, which currently is used by checkpointer and\nbgwriter (but should be used in more places). Especially with direct IO\ncheckpointing can be a lot faster *and* less impactful on the \"regular\"\nload.\n\nI've got asynchronous writing of WAL mostly working, but need to\nredesign the locking a bit further. 
Right now it's a win in some cases,\nbut not others. The latter to a significant degree due to unnecessary\nblocking....\n\n\n> I'm thinking that redo is probably a good first candidate. It doesn't\n> depend on the guts of the executor. It is much less sensitive to\n> ordering between operations in shmem and on disk since it runs in the\n> startup process. And it hurts REALLY BADLY from its single-threaded\n> blocking approach to I/O - as shown by an extension written by\n> 2ndQuadrant that can double redo performance by doing read-ahead on\n> btree pages that will soon be needed.\n\nThomas has a patch for prefetching during WAL apply. It currently uses\nposix_fadvise(), but he took care that it'd be fairly easy to rebase it\nonto \"real\" AIO. Most of the changes necessary are pretty independent of\nposix_fadvise vs aio.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Dec 2020 20:02:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de>\n> Especially with direct IO\n> checkpointing can be a lot faster *and* less impactful on the \"regular\"\n> load.\n\nI'm looking forward to this async+direct I/O work, since the throughput of some write-heavy workloads decreased by half or more during checkpointing (due to fsync?). Would you mind sharing any preliminary results on this if you have something?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Tue, 8 Dec 2020 04:24:44 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "On Tue, 8 Dec 2020 at 12:02, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-12-08 10:55:37 +0800, Craig Ringer wrote:\n> > A new kernel API called io_uring has recently come to my attention. I\n> > assume some of you (Andres?) have been following it for a while.\n>\n> Yea, I've spent a *lot* of time working on AIO support, utilizing\n> io_uring. Recently Thomas also joined in the fun. I've given two talks\n> referencing it (last pgcon, last pgday brussels), but otherwise I've not\n> yet written much about. Things aren't *quite* right yet architecturally,\n> but I think we're getting there.\n>\n\nThat's wonderful. Thankyou.\n\nI'm badly behind on the conference circuit due to geographic isolation and\nsmall children. I'll hunt up your talks.\n\nThe current state is at https://github.com/anarazel/postgres/tree/aio\n> (but it's not a very clean history at the moment).\n>\n\nFantastic!\n\nHave you done much bpf / systemtap / perf based work on measurement and\ntracing of latencies etc? If not that's something I'd be keen to help with.\nI've mostly been using systemtap so far but I'm trying to pivot over to\nbpf.\n\nI hope to submit a big tracepoints patch set for PostgreSQL soon to better\nexpose our wait points and latencies, improve visibility of blocking, and\nhelp make activity traceable through all the stages of processing. I'll Cc\nyou when I do.\n\n\n> > io_uring appears to offer a way to make system calls including reads,\n> > writes, fsync()s, and more in a non-blocking, batched and pipelined\n> manner,\n> > with or without O_DIRECT. Basically async I/O with usable buffered I/O\n> and\n> > fsync support. It has ordering support which is really important for us.\n>\n> My results indicate that we really want to have have, optional & not\n> enabled by default of course, O_DIRECT support. We just can't benefit\n> fully of modern SSDs otherwise. 
Buffered is also important, of course.\n>\n\nEven more so for NVDRAM, Optane and all that, where zero-copy and low\ncontext switches becomes important too.\n\nWe're a long way from that being a priority but it's still not to be\ndismissed.\n\nI'm pretty sure that I've got the basics of this working pretty well. I\n> don't think the executor architecture is as big an issue as you seem to\n> think. There are further benefits that could be unlocked if we had a\n> more flexible executor model (imagine switching between different parts\n> of the query whenever blocked on IO - can't do that due to the stack\n> right now).\n>\n\nYep, that's what I'm talking about being an issue.\n\nBlocked on an index read? Move on to the next tuple and come back when the\nindex read is done.\n\nI really like what I see of the io_uring architecture so far. It's ideal\nfor callback-based event-driven flow control. But that doesn't fit postgres\nwell for the executor. It's better for redo etc.\n\n\n\n> The way it currently works is that things like sequential scans, vacuum,\n> etc use a prefetching helper which will try to use AIO to read ahead of\n> the next needed block. That helper uses callbacks to determine the next\n> needed block, which e.g. vacuum uses to skip over all-visible/frozen\n> blocks. There's plenty other places that should use that helper, but we\n> already can get considerably higher throughput for seqscans, vacuum on\n> both very fast local storage, and high-latency cloud storage.\n>\n> Similarly, for writes there's a small helper to manage a write-queue of\n> configurable depth, which currently is used to by checkpointer and\n> bgwriter (but should be used in more places). Especially with direct IO\n> checkpointing can be a lot faster *and* less impactful on the \"regular\"\n> load.\n>\n\nSure sounds like a useful interim step. That's great.\n\nI've got asynchronous writing of WAL mostly working, but need to\n> redesign the locking a bit further. 
Right now it's a win in some cases,\n> but not others. The latter to a significant degree due to unnecessary\n> blocking....\n>\n\nThat's where io_uring's I/O ordering operations looked interesting. But I\nhaven't looked closely enough to see if they're going to help us with I/O\nordering in a multiprocessing architecture like postgres.\n\nIn an ideal world we could tell the kernel about WAL-to-heap I/O\ndependencies and even let it apply WAL then heap changes out-of-order so\nlong as they didn't violate any ordering constraints we specify between\nparticular WAL records or between WAL writes and their corresponding heap\nblocks. But I don't know if the io_uring interface is that capable.\n\nI did some basic experiments a while ago with using write barriers between\nWAL records and heap writes instead of fsync()ing, but as you note, the\nincreased blocking and reduction in the kernel's ability to do I/O\nreordering is generally worse than the costs of the fsync()s we do now.\n\n> I'm thinking that redo is probably a good first candidate. It doesn't\n> > depend on the guts of the executor. It is much less sensitive to\n> > ordering between operations in shmem and on disk since it runs in the\n> > startup process. And it hurts REALLY BADLY from its single-threaded\n> > blocking approach to I/O - as shown by an extension written by\n> > 2ndQuadrant that can double redo performance by doing read-ahead on\n> > btree pages that will soon be needed.\n>\n> Thomas has a patch for prefetching during WAL apply. It currently uses\n> posix_fadvise(), but he took care that it'd be fairly easy to rebase it\n> onto \"real\" AIO. Most of the changes necessary are pretty independent of\n> posix_fadvise vs aio.\n>\n\nCool. 
You know we worked on something like that in 2ndQ too, with\nfast_redo, and it's pretty effective at reducing the I/O waits for b-tree\nindex maintenance.\n\nHow feasible do you think it'd be to take it a step further and structure\nredo as a pipelined queue, where redo calls enqueue I/O operations and\ncompletion handlers then return immediately? Everything still goes to disk\nin the order it's enqueued, and the callbacks will be invoked in order, so\nthey can update appropriate shmem state etc. Since there's no concurrency\nduring redo, it should be *much* simpler than normal user backend\noperations where we have all the tight coordination of buffer management,\nWAL write ordering, PGXACT and PGPROC, the clog, etc.\n\nSo far the main issue I see with it is that there are still way too many\nplaces we'd have to block because of logic that requires the result of a\nread in order to perform a subsequent write. We can't just turn those into\nevent driven continuations on the queue and keep going unless we can\nguarantee that the later WAL we apply while we're waiting is independent of\nany changes the earlier pending writes might make and that's hard,\nespecially with b-trees. And it's those read-then-write ordering points\nthat hurt our redo performance the most already.",
"msg_date": "Tue, 8 Dec 2020 13:01:38 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-08 13:01:38 +0800, Craig Ringer wrote:\n> Have you done much bpf / systemtap / perf based work on measurement and\n> tracing of latencies etc? If not that's something I'd be keen to help with.\n> I've mostly been using systemtap so far but I'm trying to pivot over to\n> bpf.\n\nNot much - there's still so many low hanging fruits and architectural\nthings to finish that it didn't yet seem pressing.\n\n\n\n\n> I've got asynchronous writing of WAL mostly working, but need to\n> > redesign the locking a bit further. Right now it's a win in some cases,\n> > but not others. The latter to a significant degree due to unnecessary\n> > blocking....\n\n> That's where io_uring's I/O ordering operations looked interesting. But I\n> haven't looked closely enough to see if they're going to help us with I/O\n> ordering in a multiprocessing architecture like postgres.\n\nThe ordering ops aren't quite powerful enough to be a huge boon\nperformance-wise (yet). They can cut down on syscall and intra-process\ncontext switch overhead to some degree, but otherwise it's not different\nthan userspace submitting another request upon receiving a completion.\n\n\n> In an ideal world we could tell the kernel about WAL-to-heap I/O\n> dependencies and even let it apply WAL then heap changes out-of-order so\n> long as they didn't violate any ordering constraints we specify between\n> particular WAL records or between WAL writes and their corresponding heap\n> blocks. But I don't know if the io_uring interface is that capable.\n\nIt's not. And that kind of dependency inference wouldn't be cheap on\nthe PG side either.\n\nI don't think it'd help that much for WAL apply anyway. You need\nread-ahead of the WAL to avoid unnecessary waits for a lot of records\nanyway. 
And the writes during WAL apply are mostly pretty asynchronous (mainly\nwriteback during buffer replacement).\n\nAn imo considerably more interesting case is avoiding blocking on a WAL\nflush when needing to write a page out in an OLTPish workload. But I can\nthink of more efficient ways there too.\n\n\n> How feasible do you think it'd be to take it a step further and structure\n> redo as a pipelined queue, where redo calls enqueue I/O operations and\n> completion handlers then return immediately? Everything still goes to disk\n> in the order it's enqueued, and the callbacks will be invoked in order, so\n> they can update appropriate shmem state etc. Since there's no concurrency\n> during redo, it should be *much* simpler than normal user backend\n> operations where we have all the tight coordination of buffer management,\n> WAL write ordering, PGXACT and PGPROC, the clog, etc.\n\nI think it'd be a fairly massive increase in complexity. And I don't see\na really large payoff: Once you have real readahead in the WAL there's\nreally not much synchronous IO left. What am I missing?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Dec 2020 22:23:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-08 04:24:44 +0000, tsunakawa.takay@fujitsu.com wrote:\n> I'm looking forward to this from the async+direct I/O, since the\n> throughput of some write-heavy workload decreased by half or more\n> during checkpointing (due to fsync?)\n\nDepends on why that is. The most common, I think, cause is that your WAL\nvolume increases drastically just after a checkpoint starts, because\ninitially all page modifications will trigger full-page writes. There's\na significant slowdown even if you prevent the checkpointer from doing\n*any* writes at that point. I got the WAL AIO stuff to the point that I\nsee a good bit of speedup at high WAL volumes, and I see it helping in\nthis scenario.\n\nThere's of course also the issue that checkpoint writes cause other IO\n(including WAL writes) to slow down and, importantly, cause a lot of\njitter leading to unpredictable latencies. I've seen some good and some\nbad results around this with the patch, but there's a bunch of TODOs to\nresolve before delving deeper really makes sense (the IO depth control\nis not good enough right now).\n\nA third issue is that sometimes checkpointer can't really keep up - and\nthat I think I've seen pretty clearly addressed by the patch. I have\nmanaged to get to ~80% of my NVMe disks' top write speed (> 2.5GB/s) by\nthe checkpointer, and I think I know what to do for the remainder.\n\n\n> Would you mind sharing any preliminary results on this if you have\n> something?\n\nI ran numbers at some point, but since then enough has changed\n(including many correctness issues fixed) that they don't seem really\nrelevant anymore. I'll try to include some in the post I'm planning to\ndo in a few weeks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Dec 2020 23:04:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "\n\nOn 2020/12/08 11:55, Craig Ringer wrote:\n> Hi all\n> \n> A new kernel API called io_uring has recently come to my attention. I assume some of you (Andres?) have been following it for a while.\n> \n> io_uring appears to offer a way to make system calls including reads, writes, fsync()s, and more in a non-blocking, batched and pipelined manner, with or without O_DIRECT. Basically async I/O with usable buffered I/O and fsync support. It has ordering support which is really important for us.\n> \n> This should be on our radar. The main barriers to benefiting from linux-aio based async I/O in postgres in the past has been its reliance on direct I/O, the various kernel-version quirks, platform portability, and its maybe-async-except-when-it's-randomly-not nature.\n> \n> The kernel version and portability remain an issue with io_uring so it's not like this is something we can pivot over to completely. But we should probably take a closer look at it.\n> \n> PostgreSQL spends a huge amount of time waiting, doing nothing, for blocking I/O. If we can improve that then we could potentially realize some major increases in I/O utilization especially for bigger, less concurrent workloads. The most obvious candidates to benefit would be redo, logical apply, and bulk loading.\n> \n> But I have no idea how to even begin to fit this into PostgreSQL's executor pipeline. Almost all PostgreSQL's code is synchronous-blocking-imperative in nature, with a push/pull executor pipeline. It seems to have been recognised for some time that this is increasingly hurting our performance and scalability as platforms become more and more parallel.\n> \n> To benefit from AIO (be it POSIX, linux-aio, io_uring, Windows AIO, etc) we have to be able to dispatch I/O and do something else while we wait for the results. So we need the ability to pipeline the executor and pipeline redo.\n> \n> I thought I'd start the discussion on this and see where we can go with it. 
What incremental steps can be done to move us toward parallelisable I/O without having to redesign everything?\n> \n> I'm thinking that redo is probably a good first candidate. It doesn't depend on the guts of the executor. It is much less sensitive to ordering between operations in shmem and on disk since it runs in the startup process. And it hurts REALLY BADLY from its single-threaded blocking approach to I/O - as shown by an extension written by 2ndQuadrant that can double redo performance by doing read-ahead on btree pages that will soon be needed.\n> \n> Thoughts anybody?\n\nI was wondering if async I/O might be helpful for the performance\nimprovement of walreceiver. In physical replication, walreceiver receives,\nwrites and fsyncs WAL data. Also it does tasks like keepalive. Since\nwalreceiver is a single process, for example, currently it cannot do other\ntasks while fsyncing WAL to the disk.\n\nOTOH, if walreceiver can do other tasks even while fsyncing WAL by\nusing async I/O, ISTM that it might improve the performance of walreceiver.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 8 Dec 2020 21:49:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
},
{
"msg_contents": "On Tue, 8 Dec 2020 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-12-08 04:24:44 +0000, tsunakawa.takay@fujitsu.com wrote:\n> > I'm looking forward to this from the async+direct I/O, since the\n> > throughput of some write-heavy workload decreased by half or more\n> > during checkpointing (due to fsync?)\n>\n> Depends on why that is. The most common, I think, cause is that your WAL\n> volume increases drastically just after a checkpoint starts, because\n> initially all page modification will trigger full-page writes. There's\n> a significant slowdown even if you prevent the checkpointer from doing\n> *any* writes at that point. I got the WAL AIO stuff to the point that I\n> see a good bit of speedup at high WAL volumes, and I see it helping in\n> this scenario.\n>\n> There's of course also the issue that checkpoint writes cause other IO\n> (including WAL writes) to slow down and, importantly, cause a lot of\n> jitter leading to unpredictable latencies. I've seen some good and some\n> bad results around this with the patch, but there's a bunch of TODOs to\n> resolve before delving deeper really makes sense (the IO depth control\n> is not good enough right now).\n>\n> A third issue is that sometimes checkpointer can't really keep up - and\n> that I think I've seen pretty clearly addressed by the patch. I have\n> managed to get to ~80% of my NVMe disks top write speed (> 2.5GB/s) by\n> the checkpointer, and I think I know what to do for the remainder.\n>\n>\nThanks for explaining this. I'm really glad you're looking into it. If I\nget the chance I'd like to try to apply some wait-analysis and blocking\nstats tooling to it. 
I'll report back if I make any progress there.",
"msg_date": "Wed, 9 Dec 2020 09:28:43 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Blocking I/O, async I/O and io_uring"
}
] |
[
{
"msg_contents": "Hi,\n\nI propose to add WAL write/fsync statistics to the pg_stat_wal view.\nIt's useful not only for developing/improving source code related to WAL\nbut also for users to detect workload changes, HW failure, and so on.\n\nI introduce a \"track_wal_io_timing\" parameter and provide the following\ninformation in the pg_stat_wal view.\nI separate the parameter from \"track_io_timing\" as \"track_wal_io_timing\"\nbecause IIUC, WAL I/O activity may have a greater impact on query\nperformance than database I/O activity.\n\n```\npostgres=# SELECT wal_write, wal_write_time, wal_sync, wal_sync_time\nFROM pg_stat_wal;\n-[ RECORD 1 ]--+----\nwal_write | 650 # Total number of times WAL data was written to disk\n\nwal_write_time | 43 # Total amount of time spent writing WAL data to disk,\n # if track_wal_io_timing is enabled, otherwise zero\n\nwal_sync | 78 # Total number of times WAL data was synced to disk\n\nwal_sync_time | 104 # Total amount of time spent syncing WAL data to disk,\n # if track_wal_io_timing is enabled, otherwise zero\n```\n\nWhat do you think?\nPlease let me know your comments.\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 08 Dec 2020 14:06:52 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Hi,\r\n\r\n> On Dec 8, 2020, at 1:06 PM, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> I propose to add wal write/fsync statistics to pg_stat_wal view.\r\n> It's useful not only for developing/improving source code related to WAL\r\n> but also for users to detect workload changes, HW failure, and so on.\r\n> \r\n> I introduce \"track_wal_io_timing\" parameter and provide the following information on pg_stat_wal view.\r\n> I separate the parameter from \"track_io_timing\" to \"track_wal_io_timing\"\r\n> because IIUC, WAL I/O activity may have a greater impact on query performance than database I/O activity.\r\n> \r\n> ```\r\n> postgres=# SELECT wal_write, wal_write_time, wal_sync, wal_sync_time FROM pg_stat_wal;\r\n> -[ RECORD 1 ]--+----\r\n> wal_write | 650 # Total number of times WAL data was written to the disk\r\n> \r\n> wal_write_time | 43 # Total amount of time that has been spent in the portion of WAL data was written to disk\r\n> # if track-wal-io-timing is enabled, otherwise zero\r\n> \r\n> wal_sync | 78 # Total number of times WAL data was synced to the disk\r\n> \r\n> wal_sync_time | 104 # Total amount of time that has been spent in the portion of WAL data was synced to disk\r\n> # if track-wal-io-timing is enabled, otherwise zero\r\n> ```\r\n> \r\n> What do you think?\r\n> Please let me know your comments.\r\n> \r\n> Regards\r\n> -- \r\n> Masahiro Ikeda\r\n> NTT DATA CORPORATION<0001_add_wal_io_activity_to_the_pg_stat_wal.patch>\r\n\r\nThere is a no previous prototype warning for ‘fsyncMethodCalled’, and it now only used in xlog.c,\r\nshould we declare with static? 
And this function wants a boolean as a return, should we use\r\ntrue/false rather than 0/1?\r\n\r\n+/*\r\n+ * Check if fsync mothod is called.\r\n+ */\r\n+bool\r\n+fsyncMethodCalled()\r\n+{\r\n+ if (!enableFsync)\r\n+ return 0;\r\n+\r\n+ switch (sync_method)\r\n+ {\r\n+ case SYNC_METHOD_FSYNC:\r\n+ case SYNC_METHOD_FSYNC_WRITETHROUGH:\r\n+ case SYNC_METHOD_FDATASYNC:\r\n+ return 1;\r\n+ default:\r\n+ /* others don't have a specific fsync method */\r\n+ return 0;\r\n+ }\r\n+}\r\n+\r\n\r\n--\r\nBest regards\r\nChengDu WenWu Information Technology Co.,Ltd.\r\nJapin Li\r\n\r\n",
"msg_date": "Tue, 8 Dec 2020 07:45:52 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
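Applying both review points, the helper would look roughly like this. This is a self-contained sketch, not the actual xlog.c code: the SyncMethod enum and the two mock globals stand in for the real sync_method and enableFsync, and the changes drawn from the review are static linkage, a (void) parameter list, and true/false instead of 0/1:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for xlog.c's settings. */
typedef enum
{
    SYNC_METHOD_FSYNC,
    SYNC_METHOD_FSYNC_WRITETHROUGH,
    SYNC_METHOD_FDATASYNC,
    SYNC_METHOD_OPEN,        /* O_SYNC: synced as part of the write */
    SYNC_METHOD_OPEN_DSYNC   /* O_DSYNC: synced as part of the write */
} SyncMethod;

static SyncMethod sync_method = SYNC_METHOD_FSYNC;
static bool enableFsync = true;

/*
 * Check whether a separate fsync method will be called, rewritten per
 * the review: static, (void) argument list, true/false returns.
 */
static bool
fsyncMethodCalled(void)
{
    if (!enableFsync)
        return false;

    switch (sync_method)
    {
        case SYNC_METHOD_FSYNC:
        case SYNC_METHOD_FSYNC_WRITETHROUGH:
        case SYNC_METHOD_FDATASYNC:
            return true;
        default:
            /* others don't have a separate fsync step */
            return false;
    }
}
```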
{
"msg_contents": "On 2020-12-08 16:45, Li Japin wrote:\n> Hi,\n> \n>> On Dec 8, 2020, at 1:06 PM, Masahiro Ikeda <ikedamsh@oss.nttdata.com> \n>> wrote:\n>> \n>> Hi,\n>> \n>> I propose to add wal write/fsync statistics to pg_stat_wal view.\n>> It's useful not only for developing/improving source code related to \n>> WAL\n>> but also for users to detect workload changes, HW failure, and so on.\n>> \n>> I introduce \"track_wal_io_timing\" parameter and provide the following \n>> information on pg_stat_wal view.\n>> I separate the parameter from \"track_io_timing\" to \n>> \"track_wal_io_timing\"\n>> because IIUC, WAL I/O activity may have a greater impact on query \n>> performance than database I/O activity.\n>> \n>> ```\n>> postgres=# SELECT wal_write, wal_write_time, wal_sync, wal_sync_time \n>> FROM pg_stat_wal;\n>> -[ RECORD 1 ]--+----\n>> wal_write | 650 # Total number of times WAL data was written to \n>> the disk\n>> \n>> wal_write_time | 43 # Total amount of time that has been spent in \n>> the portion of WAL data was written to disk\n>> # if track-wal-io-timing is enabled, otherwise \n>> zero\n>> \n>> wal_sync | 78 # Total number of times WAL data was synced to \n>> the disk\n>> \n>> wal_sync_time | 104 # Total amount of time that has been spent in \n>> the portion of WAL data was synced to disk\n>> # if track-wal-io-timing is enabled, otherwise \n>> zero\n>> ```\n>> \n>> What do you think?\n>> Please let me know your comments.\n>> \n>> Regards\n>> --\n>> Masahiro Ikeda\n>> NTT DATA \n>> CORPORATION<0001_add_wal_io_activity_to_the_pg_stat_wal.patch>\n> \n> There is a no previous prototype warning for ‘fsyncMethodCalled’, and\n> it now only used in xlog.c,\n> should we declare with static? 
And this function wants a boolean as a\n> return, should we use\n> true/false other than 0/1?\n> \n> +/*\n> + * Check if fsync mothod is called.\n> + */\n> +bool\n> +fsyncMethodCalled()\n> +{\n> + if (!enableFsync)\n> + return 0;\n> +\n> + switch (sync_method)\n> + {\n> + case SYNC_METHOD_FSYNC:\n> + case SYNC_METHOD_FSYNC_WRITETHROUGH:\n> + case SYNC_METHOD_FDATASYNC:\n> + return 1;\n> + default:\n> + /* others don't have a specific fsync method */\n> + return 0;\n> + }\n> +}\n> +\n\nThanks for your review.\nI agree with your comments. I fixed them.\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 08 Dec 2020 20:39:47 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Hi,\n\nI rebased the patch to the master branch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 25 Dec 2020 18:45:59 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Dear Ikeda-san,\n\nThis patch cannot be applied to the HEAD, but anyway I put a comment.\n\n```\n+\t/*\n+\t * Measure i/o timing to fsync WAL data.\n+\t *\n+\t * The wal receiver skip to collect it to avoid performance degradation of standy servers.\n+\t * If sync_method doesn't have its fsync method, to skip too.\n+\t */\n+\tif (!AmWalReceiverProcess() && track_wal_io_timing && fsyncMethodCalled())\n+\t\tINSTR_TIME_SET_CURRENT(start);\n```\n\nI think m_wal_sync_time should be collected even if the process is WalRecevier.\nBecause all wal_fsync should be recorded, and\nsome performance issues have been aleady occurred if track_wal_io_timing is turned on.\nI think it's strange only to take care of the walrecevier case.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 22 Jan 2021 02:54:17 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I rebased the patch to the master branch.\n\nThank you for working on this. I've read the latest patch. Here are comments:\n\n---\n+ if (track_wal_io_timing)\n+ {\n+ INSTR_TIME_SET_CURRENT(duration);\n+ INSTR_TIME_SUBTRACT(duration, start);\n+ WalStats.m_wal_write_time +=\nINSTR_TIME_GET_MILLISEC(duration);\n+ }\n\n* I think it should add the time in micro sec.\n\nAfter running pgbench with track_wal_io_timing = on for 30 sec,\npg_stat_wal showed the following on my environment:\n\npostgres(1:61569)=# select * from pg_stat_wal;\n-[ RECORD 1 ]----+-----------------------------\nwal_records | 285947\nwal_fpi | 53285\nwal_bytes | 442008213\nwal_buffers_full | 0\nwal_write | 25516\nwal_write_time | 0\nwal_sync | 25437\nwal_sync_time | 14490\nstats_reset | 2021-01-22 10:56:13.29464+09\n\nSince writes can complete less than a millisecond, wal_write_time\ndidn't increase. I think sync_time could also have the same problem.\n\n---\n+ /*\n+ * Measure i/o timing to fsync WAL data.\n+ *\n+ * The wal receiver skip to collect it to avoid performance\ndegradation of standy servers.\n+ * If sync_method doesn't have its fsync method, to skip too.\n+ */\n+ if (!AmWalReceiverProcess() && track_wal_io_timing && fsyncMethodCalled())\n+ INSTR_TIME_SET_CURRENT(start);\n\n* Why does only the wal receiver skip it even if track_wal_io_timinig\nis true? I think the performance degradation is also true for backend\nprocesses. 
If there is another reason for that, I think it's better to\nmention in both the doc and comment.\n\n* How about checking track_wal_io_timing first?\n\n* s/standy/standby/\n\n---\n+ /* increment the i/o timing and the number of times to fsync WAL data */\n+ if (fsyncMethodCalled())\n+ {\n+ if (!AmWalReceiverProcess() && track_wal_io_timing)\n+ {\n+ INSTR_TIME_SET_CURRENT(duration);\n+ INSTR_TIME_SUBTRACT(duration, start);\n+ WalStats.m_wal_sync_time += INSTR_TIME_GET_MILLISEC(duration);\n+ }\n+\n+ WalStats.m_wal_sync++;\n+ }\n\n* I'd avoid always calling fsyncMethodCalled() in this path. How about\nincrementing m_wal_sync after each sync operation?\n\n---\n+/*\n+ * Check if fsync mothod is called.\n+ */\n+static bool\n+fsyncMethodCalled()\n+{\n+ if (!enableFsync)\n+ return false;\n+\n+ switch (sync_method)\n+ {\n+ case SYNC_METHOD_FSYNC:\n+ case SYNC_METHOD_FSYNC_WRITETHROUGH:\n+ case SYNC_METHOD_FDATASYNC:\n+ return true;\n+ default:\n+ /* others don't have a specific fsync method */\n+ return false;\n+ }\n+}\n\n* I'm concerned that the function name could confuse the reader\nbecause it's called even before the fsync method is called. As I\ncommented above, calling to fsyncMethodCalled() can be eliminated.\nThat way, this function is called at only once. So do we really need\nthis function?\n\n* As far as I read the code, issue_xlog_fsync() seems to do fsync even\nif enableFsync is false. Why does the function return false in that\ncase? I might be missing something.\n\n* void is missing as argument?\n\n* s/mothod/method/\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 22 Jan 2021 14:50:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
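The zero wal_write_time observed above comes from integer truncation: each write finishes in well under a millisecond, so converting every per-call duration to a whole number of milliseconds before adding it to the counter contributes nothing, while accumulating in microseconds preserves the total. A minimal standalone illustration (accumulate() is a hypothetical helper, not patch code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Accumulate the same sequence of durations two ways: dividing each
 * per-call duration by `divisor` before summing.  With divisor 1000
 * (microseconds -> whole milliseconds) every sub-millisecond duration
 * truncates to zero; with divisor 1 the microsecond total survives.
 */
static uint64_t
accumulate(const uint64_t *durations_usec, int n, int divisor)
{
    uint64_t total = 0;

    for (int i = 0; i < n; i++)
        total += durations_usec[i] / divisor;
    return total;
}
```

This is why the review suggests accumulating the timing columns in microseconds.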
{
"msg_contents": "On 2021-01-22 11:54, kuroda.hayato@fujitsu.com wrote:\n> Dear Ikeda-san,\n> \n> This patch cannot be applied to the HEAD, but anyway I put a comment.\n> \n> ```\n> +\t/*\n> +\t * Measure i/o timing to fsync WAL data.\n> +\t *\n> +\t * The wal receiver skip to collect it to avoid performance\n> degradation of standy servers.\n> +\t * If sync_method doesn't have its fsync method, to skip too.\n> +\t */\n> +\tif (!AmWalReceiverProcess() && track_wal_io_timing && \n> fsyncMethodCalled())\n> +\t\tINSTR_TIME_SET_CURRENT(start);\n> ```\n> \n> I think m_wal_sync_time should be collected even if the process is \n> WalRecevier.\n> Because all wal_fsync should be recorded, and\n> some performance issues have been aleady occurred if\n> track_wal_io_timing is turned on.\n> I think it's strange only to take care of the walrecevier case.\n\nKuroda-san, Thanks for your comments.\n\nAlthough I thought that the performance impact may be bigger in standby \nservers\nbecause WALReceiver didn't use wal buffers, it's no need to be \nconsidered.\nI agreed that if track_wal_io_timing is turned on, the primary server's\nperformance degradation occurs too.\n\nI will make rebased and modified.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 22 Jan 2021 21:14:28 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-22 14:50, Masahiko Sawada wrote:\n> On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> Hi,\n>> \n>> I rebased the patch to the master branch.\n> \n> Thank you for working on this. I've read the latest patch. Here are \n> comments:\n> \n> ---\n> + if (track_wal_io_timing)\n> + {\n> + INSTR_TIME_SET_CURRENT(duration);\n> + INSTR_TIME_SUBTRACT(duration, start);\n> + WalStats.m_wal_write_time +=\n> INSTR_TIME_GET_MILLISEC(duration);\n> + }\n> \n> * I think it should add the time in micro sec.\n> After running pgbench with track_wal_io_timing = on for 30 sec,\n> pg_stat_wal showed the following on my environment:\n> \n> postgres(1:61569)=# select * from pg_stat_wal;\n> -[ RECORD 1 ]----+-----------------------------\n> wal_records | 285947\n> wal_fpi | 53285\n> wal_bytes | 442008213\n> wal_buffers_full | 0\n> wal_write | 25516\n> wal_write_time | 0\n> wal_sync | 25437\n> wal_sync_time | 14490\n> stats_reset | 2021-01-22 10:56:13.29464+09\n> \n> Since writes can complete less than a millisecond, wal_write_time\n> didn't increase. I think sync_time could also have the same problem.\n\nThanks for your comments. I didn't notice that.\nI changed the unit from milliseconds to microseconds.\n\n> ---\n> + /*\n> + * Measure i/o timing to fsync WAL data.\n> + *\n> + * The wal receiver skip to collect it to avoid performance\n> degradation of standy servers.\n> + * If sync_method doesn't have its fsync method, to skip too.\n> + */\n> + if (!AmWalReceiverProcess() && track_wal_io_timing && \n> fsyncMethodCalled())\n> + INSTR_TIME_SET_CURRENT(start);\n> \n> * Why does only the wal receiver skip it even if track_wal_io_timinig\n> is true? I think the performance degradation is also true for backend\n> processes. 
If there is another reason for that, I think it's better to\n> mention in both the doc and comment.\n> * How about checking track_wal_io_timing first?\n> * s/standy/standby/\n\nI fixed it.\nAs Kuroda-san also mentioned, the skip no longer needs to be considered.\n\n> ---\n> + /* increment the i/o timing and the number of times to fsync WAL \n> data */\n> + if (fsyncMethodCalled())\n> + {\n> + if (!AmWalReceiverProcess() && track_wal_io_timing)\n> + {\n> + INSTR_TIME_SET_CURRENT(duration);\n> + INSTR_TIME_SUBTRACT(duration, start);\n> + WalStats.m_wal_sync_time += \n> INSTR_TIME_GET_MILLISEC(duration);\n> + }\n> +\n> + WalStats.m_wal_sync++;\n> + }\n> \n> * I'd avoid always calling fsyncMethodCalled() in this path. How about\n> incrementing m_wal_sync after each sync operation?\n\nI think if syncing the disk does not occur, m_wal_sync should not be\nincremented.\nThat depends on enableFsync and sync_method.\n\nenableFsync is checked in each fsync method like\npg_fsync_no_writethrough(),\nso incrementing m_wal_sync after each sync operation would have to be\nimplemented in each fsync method, which leads to a lot of duplicated code.\n\nSo, why don't we change the function into a flag in issue_xlog_fsync()\nthat indicates whether a sync to disk will actually occur?\n\n\n> ---\n> +/*\n> + * Check if fsync mothod is called.\n> + */\n> +static bool\n> +fsyncMethodCalled()\n> +{\n> + if (!enableFsync)\n> + return false;\n> +\n> + switch (sync_method)\n> + {\n> + case SYNC_METHOD_FSYNC:\n> + case SYNC_METHOD_FSYNC_WRITETHROUGH:\n> + case SYNC_METHOD_FDATASYNC:\n> + return true;\n> + default:\n> + /* others don't have a specific fsync method */\n> + return false;\n> + }\n> +}\n> \n> * I'm concerned that the function name could confuse the reader\n> because it's called even before the fsync method is called. As I\n> commented above, calling to fsyncMethodCalled() can be eliminated.\n> That way, this function is called at only once. 
So do we really need\n> this function?\n\nThanks to your comments, I removed it.\n\n\n> * As far as I read the code, issue_xlog_fsync() seems to do fsync even\n> if enableFsync is false. Why does the function return false in that\n> case? I might be missing something.\n\nIIUC, the reason is that each fsync function like\npg_fsync_no_writethrough() checks enableFsync.\n\nWithout this check, m_wal_sync_time could be incremented\neven though some sync methods like SYNC_METHOD_OPEN don't actually sync\nany data to disk at that point.\n\n> * void is missing as argument?\n> \n> * s/mothod/method/\n\nI fixed them.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 22 Jan 2021 22:05:24 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\nHi, Masahiro\n\nThanks for you update the v4 patch. Here are some comments:\n\n(1)\n+ char *msg = NULL;\n+ bool sync_called; /* whether to sync data to the disk. */\n+ instr_time start;\n+ instr_time duration;\n+\n+ /* check whether to sync data to the disk is really occurred. */\n+ sync_called = false;\n\nMaybe we can initialize the \"sync_called\" variable when declare it.\n\n(2)\n+ if (sync_called)\n+ {\n+ /* increment the i/o timing and the number of times to fsync WAL data */\n+ if (track_wal_io_timing)\n+ {\n+ INSTR_TIME_SET_CURRENT(duration);\n+ INSTR_TIME_SUBTRACT(duration, start);\n+ WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n+ }\n+\n+ WalStats.m_wal_sync++;\n+ }\n\nThere is an extra space before INSTR_TIME_GET_MICROSEC(duration).\n\nIn the issue_xlog_fsync(), the comment says that if sync_method is\nSYNC_METHOD_OPEN or SYNC_METHOD_OPEN_DSYNC, it already write synced.\nDoes that mean it synced when write the WAL data? And for those cases, we\ncannot get accurate write/sync timing and number of write/sync times, right?\n\n case SYNC_METHOD_OPEN:\n case SYNC_METHOD_OPEN_DSYNC:\n /* write synced it already */\n break;\n\nOn Fri, 22 Jan 2021 at 21:05, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> On 2021-01-22 14:50, Masahiko Sawada wrote:\n>> On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda \n>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>> Hi,\n>>> \n>>> I rebased the patch to the master branch.\n>> \n>> Thank you for working on this. I've read the latest patch. 
Here are \n>> comments:\n>> \n>> ---\n>> + if (track_wal_io_timing)\n>> + {\n>> + INSTR_TIME_SET_CURRENT(duration);\n>> + INSTR_TIME_SUBTRACT(duration, start);\n>> + WalStats.m_wal_write_time +=\n>> INSTR_TIME_GET_MILLISEC(duration);\n>> + }\n>> \n>> * I think it should add the time in micro sec.\n>> After running pgbench with track_wal_io_timing = on for 30 sec,\n>> pg_stat_wal showed the following on my environment:\n>> \n>> postgres(1:61569)=# select * from pg_stat_wal;\n>> -[ RECORD 1 ]----+-----------------------------\n>> wal_records | 285947\n>> wal_fpi | 53285\n>> wal_bytes | 442008213\n>> wal_buffers_full | 0\n>> wal_write | 25516\n>> wal_write_time | 0\n>> wal_sync | 25437\n>> wal_sync_time | 14490\n>> stats_reset | 2021-01-22 10:56:13.29464+09\n>> \n>> Since writes can complete less than a millisecond, wal_write_time\n>> didn't increase. I think sync_time could also have the same problem.\n>\n> Thanks for your comments. I didn't notice that.\n> I changed the unit from milliseconds to microseconds.\n>\n>> ---\n>> + /*\n>> + * Measure i/o timing to fsync WAL data.\n>> + *\n>> + * The wal receiver skip to collect it to avoid performance\n>> degradation of standy servers.\n>> + * If sync_method doesn't have its fsync method, to skip too.\n>> + */\n>> + if (!AmWalReceiverProcess() && track_wal_io_timing && \n>> fsyncMethodCalled())\n>> + INSTR_TIME_SET_CURRENT(start);\n>> \n>> * Why does only the wal receiver skip it even if track_wal_io_timinig\n>> is true? I think the performance degradation is also true for backend\n>> processes. 
If there is another reason for that, I think it's better to\n>> mention in both the doc and comment.\n>> * How about checking track_wal_io_timing first?\n>> * s/standy/standby/\n>\n> I fixed it.\n> As kuroda-san mentioned too, the skip is no need to be considered.\n>\n>> ---\n>> + /* increment the i/o timing and the number of times to fsync WAL \n>> data */\n>> + if (fsyncMethodCalled())\n>> + {\n>> + if (!AmWalReceiverProcess() && track_wal_io_timing)\n>> + {\n>> + INSTR_TIME_SET_CURRENT(duration);\n>> + INSTR_TIME_SUBTRACT(duration, start);\n>> + WalStats.m_wal_sync_time += \n>> INSTR_TIME_GET_MILLISEC(duration);\n>> + }\n>> +\n>> + WalStats.m_wal_sync++;\n>> + }\n>> \n>> * I'd avoid always calling fsyncMethodCalled() in this path. How about\n>> incrementing m_wal_sync after each sync operation?\n>\n> I think if syncing the disk does not occur, m_wal_sync should not be \n> incremented.\n> It depends enableFsync and sync_method.\n>\n> enableFsync is checked in each fsync method like \n> pg_fsync_no_writethrough(),\n> so if incrementing m_wal_sync after each sync operation, it should be \n> implemented\n> in each fsync method. It leads to many duplicated codes.\n>\n> So, why don't you change the function to a flag whether to\n> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n>\n>\n>> ---\n>> +/*\n>> + * Check if fsync mothod is called.\n>> + */\n>> +static bool\n>> +fsyncMethodCalled()\n>> +{\n>> + if (!enableFsync)\n>> + return false;\n>> +\n>> + switch (sync_method)\n>> + {\n>> + case SYNC_METHOD_FSYNC:\n>> + case SYNC_METHOD_FSYNC_WRITETHROUGH:\n>> + case SYNC_METHOD_FDATASYNC:\n>> + return true;\n>> + default:\n>> + /* others don't have a specific fsync method */\n>> + return false;\n>> + }\n>> +}\n>> \n>> * I'm concerned that the function name could confuse the reader\n>> because it's called even before the fsync method is called. 
As I\n>> commented above, calling to fsyncMethodCalled() can be eliminated.\n>> That way, this function is called at only once. So do we really need\n>> this function?\n>\n> Thanks to your comments, I removed them.\n>\n>\n>> * As far as I read the code, issue_xlog_fsync() seems to do fsync even\n>> if enableFsync is false. Why does the function return false in that\n>> case? I might be missing something.\n>\n> IIUC, the reason is that I thought that each fsync functions like \n> pg_fsync_no_writethrough() check enableFsync.\n>\n> If this code doesn't check, m_wal_sync_time may be incremented\n> even though some sync methods like SYNC_METHOD_OPEN don't call to sync \n> some data to the disk at the time.\n>\n>> * void is missing as argument?\n>> \n>> * s/mothod/method/\n>\n> I removed them.\n>\n>\n> Regards,\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 23 Jan 2021 00:46:47 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Hi, Japin\n\nThanks for your comments.\n\nOn 2021-01-23 01:46, japin wrote:\n> Hi, Masahiro\n> \n> Thanks for you update the v4 patch. Here are some comments:\n> \n> (1)\n> + char *msg = NULL;\n> + bool sync_called; /* whether to sync\n> data to the disk. */\n> + instr_time start;\n> + instr_time duration;\n> +\n> + /* check whether to sync data to the disk is really occurred. \n> */\n> + sync_called = false;\n> \n> Maybe we can initialize the \"sync_called\" variable when declare it.\n\nYes, I fixed it.\n\n> (2)\n> + if (sync_called)\n> + {\n> + /* increment the i/o timing and the number of times to\n> fsync WAL data */\n> + if (track_wal_io_timing)\n> + {\n> + INSTR_TIME_SET_CURRENT(duration);\n> + INSTR_TIME_SUBTRACT(duration, start);\n> + WalStats.m_wal_sync_time =\n> INSTR_TIME_GET_MICROSEC(duration);\n> + }\n> +\n> + WalStats.m_wal_sync++;\n> + }\n> \n> There is an extra space before INSTR_TIME_GET_MICROSEC(duration).\n\nYes, I removed it.\n\n> In the issue_xlog_fsync(), the comment says that if sync_method is\n> SYNC_METHOD_OPEN or SYNC_METHOD_OPEN_DSYNC, it already write synced.\n> Does that mean it synced when write the WAL data? 
And for those cases, \n> we\n> cannot get accurate write/sync timing and number of write/sync times, \n> right?\n> \n> case SYNC_METHOD_OPEN:\n> case SYNC_METHOD_OPEN_DSYNC:\n> /* write synced it already */\n> break;\n\nYes, I add the following comments in the document.\n\n@@ -3515,6 +3515,9 @@ SELECT pid, wait_event_type, wait_event FROM \npg_stat_activity WHERE wait_event i\n </para>\n <para>\n Total number of times WAL data was synced to disk\n+ (if <xref linkend=\"guc-wal-sync-method\"/> is \n<literal>open_datasync</literal> or\n+ <literal>open_sync</literal>, this value is zero because WAL \ndata is synced\n+ when to write it).\n </para></entry>\n </row>\n\n@@ -3525,7 +3528,10 @@ SELECT pid, wait_event_type, wait_event FROM \npg_stat_activity WHERE wait_event i\n <para>\n Total amount of time that has been spent in the portion of\n WAL data was synced to disk, in milliseconds\n- (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled, \notherwise zero)\n+ (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled, \notherwise zero.\n+ if <xref linkend=\"guc-wal-sync-method\"/> is \n<literal>open_datasync</literal> or\n+ <literal>open_sync</literal>, this value is zero too because WAL \ndata is synced\n+ when to write it).\n </para></entry>\n </row>\n\n\nI attached a modified patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 25 Jan 2021 08:33:49 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Dear Ikeda-san,\n\nThank you for updating the patch. This can be applied to master, and\ncan be used on my RHEL7.\nwal_write_time and wal_sync_time increase normally :-).\n\n```\npostgres=# select * from pg_stat_wal;\n-[ RECORD 1 ]----+------------------------------\nwal_records | 121781\nwal_fpi | 2287\nwal_bytes | 36055146\nwal_buffers_full | 799\nwal_write | 12770\nwal_write_time | 4.469\nwal_sync | 11962\nwal_sync_time | 132.352\nstats_reset | 2021-01-25 00:51:40.674412+00\n```\n\nI put a further comment:\n\n```\n@@ -3485,7 +3485,53 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n <structfield>wal_buffers_full</structfield> <type>bigint</type>\n </para>\n <para>\n- Number of times WAL data was written to disk because WAL buffers became full\n+ Total number of times WAL data was written to disk because WAL buffers became full\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_write</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Total number of times WAL data was written to disk\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_write_time</structfield> <type>double precision</type>\n+ </para>\n+ <para>\n+ Total amount of time that has been spent in the portion of\n+ WAL data was written to disk, in milliseconds\n+ (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled, otherwise zero).\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_sync</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Total number of times WAL data was synced to disk\n+ (if <xref linkend=\"guc-wal-sync-method\"/> is <literal>open_datasync</literal> or \n+ <literal>open_sync</literal>, this value is zero because WAL data is synced \n+ when to write it).\n+ 
</para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_sync_time</structfield> <type>double precision</type>\n+ </para>\n+ <para>\n+ Total amount of time that has been spent in the portion of\n+ WAL data was synced to disk, in milliseconds\n+ (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled, otherwise zero.\n+ if <xref linkend=\"guc-wal-sync-method\"/> is <literal>open_datasync</literal> or \n+ <literal>open_sync</literal>, this value is zero too because WAL data is synced \n+ when to write it).\n </para></entry>\n </row>\n ```\n\nMaybe \"Total amount of time\" should be used, not \"Total number of time.\"\nOther views use \"amount.\"\n\nI have no comments anymore.\n\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Mon, 25 Jan 2021 01:34:53 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n<ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2021-01-22 14:50, Masahiko Sawada wrote:\n> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n> > <ikedamsh@oss.nttdata.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I rebased the patch to the master branch.\n> >\n> > Thank you for working on this. I've read the latest patch. Here are\n> > comments:\n> >\n> > ---\n> > + if (track_wal_io_timing)\n> > + {\n> > + INSTR_TIME_SET_CURRENT(duration);\n> > + INSTR_TIME_SUBTRACT(duration, start);\n> > + WalStats.m_wal_write_time +=\n> > INSTR_TIME_GET_MILLISEC(duration);\n> > + }\n> >\n> > * I think it should add the time in micro sec.\n> > After running pgbench with track_wal_io_timing = on for 30 sec,\n> > pg_stat_wal showed the following on my environment:\n> >\n> > postgres(1:61569)=# select * from pg_stat_wal;\n> > -[ RECORD 1 ]----+-----------------------------\n> > wal_records | 285947\n> > wal_fpi | 53285\n> > wal_bytes | 442008213\n> > wal_buffers_full | 0\n> > wal_write | 25516\n> > wal_write_time | 0\n> > wal_sync | 25437\n> > wal_sync_time | 14490\n> > stats_reset | 2021-01-22 10:56:13.29464+09\n> >\n> > Since writes can complete less than a millisecond, wal_write_time\n> > didn't increase. I think sync_time could also have the same problem.\n>\n> Thanks for your comments. I didn't notice that.\n> I changed the unit from milliseconds to microseconds.\n>\n> > ---\n> > + /*\n> > + * Measure i/o timing to fsync WAL data.\n> > + *\n> > + * The wal receiver skip to collect it to avoid performance\n> > degradation of standy servers.\n> > + * If sync_method doesn't have its fsync method, to skip too.\n> > + */\n> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n> > fsyncMethodCalled())\n> > + INSTR_TIME_SET_CURRENT(start);\n> >\n> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n> > is true? I think the performance degradation is also true for backend\n> > processes. 
If there is another reason for that, I think it's better to\n> > mention in both the doc and comment.\n> > * How about checking track_wal_io_timing first?\n> > * s/standy/standby/\n>\n> I fixed it.\n> As kuroda-san mentioned too, the skip is no need to be considered.\n\nI think you also removed the code to have the wal receiver report the\nstats. So with the latest patch, the wal receiver tracks those\nstatistics but doesn't report.\n\nAnd maybe XLogWalRcvWrite() also needs to track I/O?\n\n>\n> > ---\n> > + /* increment the i/o timing and the number of times to fsync WAL\n> > data */\n> > + if (fsyncMethodCalled())\n> > + {\n> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n> > + {\n> > + INSTR_TIME_SET_CURRENT(duration);\n> > + INSTR_TIME_SUBTRACT(duration, start);\n> > + WalStats.m_wal_sync_time +=\n> > INSTR_TIME_GET_MILLISEC(duration);\n> > + }\n> > +\n> > + WalStats.m_wal_sync++;\n> > + }\n> >\n> > * I'd avoid always calling fsyncMethodCalled() in this path. How about\n> > incrementing m_wal_sync after each sync operation?\n>\n> I think if syncing the disk does not occur, m_wal_sync should not be\n> incremented.\n> It depends enableFsync and sync_method.\n>\n> enableFsync is checked in each fsync method like\n> pg_fsync_no_writethrough(),\n> so if incrementing m_wal_sync after each sync operation, it should be\n> implemented\n> in each fsync method. It leads to many duplicated codes.\n\nRight. I missed that each fsync function checks enableFsync.\n\n> So, why don't you change the function to a flag whether to\n> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n\nLooks better. 
Since we don't necessarily need to increment m_wal_sync\nafter doing fsync we can write the code without an additional variable\nas follows:\n\n if (enableFsync)\n {\n switch (sync_method)\n {\n case SYNC_METHOD_FSYNC:\n#ifdef HAVE_FSYNC_WRITETHROUGH\n case SYNC_METHOD_FSYNC_WRITETHROUGH:\n#endif\n#ifdef HAVE_FDATASYNC\n case SYNC_METHOD_FDATASYNC:\n#endif\n WalStats.m_wal_sync++;\n if (track_wal_io_timing)\n INSTR_TIME_SET_CURRENT(start);\n break;\n default:\n break;\n }\n }\n\n (do fsync and error handling here)\n\n /* increment the i/o timing and the number of times to fsync WAL data */\n if (track_wal_io_timing)\n {\n INSTR_TIME_SET_CURRENT(duration);\n INSTR_TIME_SUBTRACT(duration, start);\n WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n }\n\nI think we can change the first switch-case to an if statement.\n\n>\n>\n> > * As far as I read the code, issue_xlog_fsync() seems to do fsync even\n> > if enableFsync is false. Why does the function return false in that\n> > case? I might be missing something.\n>\n> IIUC, the reason is that I thought that each fsync functions like\n> pg_fsync_no_writethrough() check enableFsync.\n>\n> If this code doesn't check, m_wal_sync_time may be incremented\n> even though some sync methods like SYNC_METHOD_OPEN don't call to sync\n> some data to the disk at the time.\n\nRight.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 25 Jan 2021 10:36:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
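As suggested, Sawada-san's restructuring can be condensed into a single if statement. One possible compilable shape, with mocked globals and a caller-supplied fake duration standing in for the INSTR_TIME machinery; note the accumulation here uses +=, since m_wal_sync_time is a running total across calls (the plain = in the inline sketch reads like a typo):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mocked settings and counters; the real ones live in xlog.c/pgstat. */
typedef enum { SM_FSYNC, SM_FDATASYNC, SM_OPEN_DSYNC } SyncMethod;

static bool enableFsync = true;
static bool track_wal_io_timing = true;
static SyncMethod sync_method = SM_FSYNC;
static uint64_t m_wal_sync = 0;
static uint64_t m_wal_sync_time = 0;   /* microseconds */

/*
 * Bump the sync counter (and, optionally, the sync time) only when a
 * separate fsync call will actually be issued, using one if statement
 * instead of a switch.
 */
static void
issue_xlog_fsync_sketch(uint64_t fake_duration_usec)
{
    bool counted = false;

    if (enableFsync &&
        (sync_method == SM_FSYNC || sync_method == SM_FDATASYNC))
    {
        m_wal_sync++;
        counted = true;
        /* here the real code would do INSTR_TIME_SET_CURRENT(start) */
    }

    /* ... the actual fsync call and error handling would go here ... */

    if (counted && track_wal_io_timing)
        m_wal_sync_time += fake_duration_usec;   /* accumulate with += */
}
```

For open_datasync-style methods the counter never moves, matching the "write synced it already" branch.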
{
"msg_contents": "\nOn Mon, 25 Jan 2021 at 09:36, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n>>\n>> On 2021-01-22 14:50, Masahiko Sawada wrote:\n>> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n>> > <ikedamsh@oss.nttdata.com> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> I rebased the patch to the master branch.\n>> >\n>> > Thank you for working on this. I've read the latest patch. Here are\n>> > comments:\n>> >\n>> > ---\n>> > + if (track_wal_io_timing)\n>> > + {\n>> > + INSTR_TIME_SET_CURRENT(duration);\n>> > + INSTR_TIME_SUBTRACT(duration, start);\n>> > + WalStats.m_wal_write_time +=\n>> > INSTR_TIME_GET_MILLISEC(duration);\n>> > + }\n>> >\n>> > * I think it should add the time in micro sec.\n>> > After running pgbench with track_wal_io_timing = on for 30 sec,\n>> > pg_stat_wal showed the following on my environment:\n>> >\n>> > postgres(1:61569)=# select * from pg_stat_wal;\n>> > -[ RECORD 1 ]----+-----------------------------\n>> > wal_records | 285947\n>> > wal_fpi | 53285\n>> > wal_bytes | 442008213\n>> > wal_buffers_full | 0\n>> > wal_write | 25516\n>> > wal_write_time | 0\n>> > wal_sync | 25437\n>> > wal_sync_time | 14490\n>> > stats_reset | 2021-01-22 10:56:13.29464+09\n>> >\n>> > Since writes can complete less than a millisecond, wal_write_time\n>> > didn't increase. I think sync_time could also have the same problem.\n>>\n>> Thanks for your comments. 
I didn't notice that.\n>> I changed the unit from milliseconds to microseconds.\n>>\n>> > ---\n>> > + /*\n>> > + * Measure i/o timing to fsync WAL data.\n>> > + *\n>> > + * The wal receiver skip to collect it to avoid performance\n>> > degradation of standy servers.\n>> > + * If sync_method doesn't have its fsync method, to skip too.\n>> > + */\n>> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n>> > fsyncMethodCalled())\n>> > + INSTR_TIME_SET_CURRENT(start);\n>> >\n>> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n>> > is true? I think the performance degradation is also true for backend\n>> > processes. If there is another reason for that, I think it's better to\n>> > mention in both the doc and comment.\n>> > * How about checking track_wal_io_timing first?\n>> > * s/standy/standby/\n>>\n>> I fixed it.\n>> As kuroda-san mentioned too, the skip is no need to be considered.\n>\n> I think you also removed the code to have the wal receiver report the\n> stats. So with the latest patch, the wal receiver tracks those\n> statistics but doesn't report.\n>\n> And maybe XLogWalRcvWrite() also needs to track I/O?\n>\n>>\n>> > ---\n>> > + /* increment the i/o timing and the number of times to fsync WAL\n>> > data */\n>> > + if (fsyncMethodCalled())\n>> > + {\n>> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n>> > + {\n>> > + INSTR_TIME_SET_CURRENT(duration);\n>> > + INSTR_TIME_SUBTRACT(duration, start);\n>> > + WalStats.m_wal_sync_time +=\n>> > INSTR_TIME_GET_MILLISEC(duration);\n>> > + }\n>> > +\n>> > + WalStats.m_wal_sync++;\n>> > + }\n>> >\n>> > * I'd avoid always calling fsyncMethodCalled() in this path. 
How about\n>> > incrementing m_wal_sync after each sync operation?\n>>\n>> I think if syncing the disk does not occur, m_wal_sync should not be\n>> incremented.\n>> It depends enableFsync and sync_method.\n>>\n>> enableFsync is checked in each fsync method like\n>> pg_fsync_no_writethrough(),\n>> so if incrementing m_wal_sync after each sync operation, it should be\n>> implemented\n>> in each fsync method. It leads to many duplicated codes.\n>\n> Right. I missed that each fsync function checks enableFsync.\n>\n>> So, why don't you change the function to a flag whether to\n>> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n>\n> Looks better. Since we don't necessarily need to increment m_wal_sync\n> after doing fsync we can write the code without an additional variable\n> as follows:\n>\n> if (enableFsync)\n> {\n> switch (sync_method)\n> {\n> case SYNC_METHOD_FSYNC:\n> #ifdef HAVE_FSYNC_WRITETHROUGH\n> case SYNC_METHOD_FSYNC_WRITETHROUGH:\n> #endif\n> #ifdef HAVE_FDATASYNC\n> case SYNC_METHOD_FDATASYNC:\n> #endif\n> WalStats.m_wal_sync++;\n> if (track_wal_io_timing)\n> INSTR_TIME_SET_CURRENT(start);\n> break;\n> default:\n> break;\n> }\n> }\n>\n> (do fsync and error handling here)\n>\n> /* increment the i/o timing and the number of times to fsync WAL data */\n> if (track_wal_io_timing)\n> {\n> INSTR_TIME_SET_CURRENT(duration);\n> INSTR_TIME_SUBTRACT(duration, start);\n> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> }\n>\n> I think we can change the first switch-case to an if statement.\n>\n\n+1. We can also narrow the scope of \"duration\" into \"if (track_wal_io_timing)\" branch.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 25 Jan 2021 10:47:21 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-25 10:34, kuroda.hayato@fujitsu.com wrote:\n> Dear Ikeda-san,\n> \n> Thank you for updating the patch. This can be applied to master, and\n> can be used on my RHEL7.\n> wal_write_time and wal_sync_time increase normally :-).\n> \n> ```\n> postgres=# select * from pg_stat_wal;\n> -[ RECORD 1 ]----+------------------------------\n> wal_records | 121781\n> wal_fpi | 2287\n> wal_bytes | 36055146\n> wal_buffers_full | 799\n> wal_write | 12770\n> wal_write_time | 4.469\n> wal_sync | 11962\n> wal_sync_time | 132.352\n> stats_reset | 2021-01-25 00:51:40.674412+00\n> ```\n\nThanks for checking.\n\n> I put a further comment:\n> \n> ```\n> @@ -3485,7 +3485,53 @@ SELECT pid, wait_event_type, wait_event FROM\n> pg_stat_activity WHERE wait_event i\n> <structfield>wal_buffers_full</structfield> <type>bigint</type>\n> </para>\n> <para>\n> - Number of times WAL data was written to disk because WAL\n> buffers became full\n> + Total number of times WAL data was written to disk because WAL\n> buffers became full\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_write</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Total number of times WAL data was written to disk\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_write_time</structfield> <type>double \n> precision</type>\n> + </para>\n> + <para>\n> + Total amount of time that has been spent in the portion of\n> + WAL data was written to disk, in milliseconds\n> + (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled,\n> otherwise zero).\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_sync</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Total number of times WAL data was 
synced to disk\n> + (if <xref linkend=\"guc-wal-sync-method\"/> is\n> <literal>open_datasync</literal> or\n> + <literal>open_sync</literal>, this value is zero because WAL\n> data is synced\n> + when to write it).\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_sync_time</structfield> <type>double \n> precision</type>\n> + </para>\n> + <para>\n> + Total amount of time that has been spent in the portion of\n> + WAL data was synced to disk, in milliseconds\n> + (if <xref linkend=\"guc-track-wal-io-timing\"/> is enabled,\n> otherwise zero.\n> + if <xref linkend=\"guc-wal-sync-method\"/> is\n> <literal>open_datasync</literal> or\n> + <literal>open_sync</literal>, this value is zero too because\n> WAL data is synced\n> + when to write it).\n> </para></entry>\n> </row>\n> ```\n> \n> Maybe \"Total amount of time\" should be used, not \"Total number of \n> time.\"\n> Other views use \"amount.\"\n\nThanks.\n\nI checked columns' descriptions of other views.\nThere are \"Number of xxx\", \"Total number of xxx\", \"Total amount of time \nthat xxx\" and \"Total time spent xxx\".\n\nSince the \"time\" is used for showing spending time, not count,\nI'll change it to \"Total number of WAL data written/synced to disk\".\nThought?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jan 2021 12:53:05 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Dear Ikeda-san,\n\n> I checked columns' descriptions of other views.\n> There are \"Number of xxx\", \"Total number of xxx\", \"Total amount of time \n> that xxx\" and \"Total time spent xxx\".\n\nRight.\n\n> Since the \"time\" is used for showing spending time, not count,\n> I'll change it to \"Total number of WAL data written/synced to disk\".\n> Thought?\n\nI misread your patch, sorry. I prefer your suggestion.\nPlease fix like that way with others.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 25 Jan 2021 04:09:44 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-25 10:36, Masahiko Sawada wrote:\n> On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> On 2021-01-22 14:50, Masahiko Sawada wrote:\n>> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n>> > <ikedamsh@oss.nttdata.com> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> I rebased the patch to the master branch.\n>> >\n>> > Thank you for working on this. I've read the latest patch. Here are\n>> > comments:\n>> >\n>> > ---\n>> > + if (track_wal_io_timing)\n>> > + {\n>> > + INSTR_TIME_SET_CURRENT(duration);\n>> > + INSTR_TIME_SUBTRACT(duration, start);\n>> > + WalStats.m_wal_write_time +=\n>> > INSTR_TIME_GET_MILLISEC(duration);\n>> > + }\n>> >\n>> > * I think it should add the time in micro sec.\n>> > After running pgbench with track_wal_io_timing = on for 30 sec,\n>> > pg_stat_wal showed the following on my environment:\n>> >\n>> > postgres(1:61569)=# select * from pg_stat_wal;\n>> > -[ RECORD 1 ]----+-----------------------------\n>> > wal_records | 285947\n>> > wal_fpi | 53285\n>> > wal_bytes | 442008213\n>> > wal_buffers_full | 0\n>> > wal_write | 25516\n>> > wal_write_time | 0\n>> > wal_sync | 25437\n>> > wal_sync_time | 14490\n>> > stats_reset | 2021-01-22 10:56:13.29464+09\n>> >\n>> > Since writes can complete less than a millisecond, wal_write_time\n>> > didn't increase. I think sync_time could also have the same problem.\n>> \n>> Thanks for your comments. 
I didn't notice that.\n>> I changed the unit from milliseconds to microseconds.\n>> \n>> > ---\n>> > + /*\n>> > + * Measure i/o timing to fsync WAL data.\n>> > + *\n>> > + * The wal receiver skip to collect it to avoid performance\n>> > degradation of standy servers.\n>> > + * If sync_method doesn't have its fsync method, to skip too.\n>> > + */\n>> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n>> > fsyncMethodCalled())\n>> > + INSTR_TIME_SET_CURRENT(start);\n>> >\n>> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n>> > is true? I think the performance degradation is also true for backend\n>> > processes. If there is another reason for that, I think it's better to\n>> > mention in both the doc and comment.\n>> > * How about checking track_wal_io_timing first?\n>> > * s/standy/standby/\n>> \n>> I fixed it.\n>> As kuroda-san mentioned too, the skip is no need to be considered.\n> \n> I think you also removed the code to have the wal receiver report the\n> stats. So with the latest patch, the wal receiver tracks those\n> statistics but doesn't report.\n> And maybe XLogWalRcvWrite() also needs to track I/O?\n\nThanks, I forgot to add them.\nI'll fix it.\n\n\n>> \n>> > ---\n>> > + /* increment the i/o timing and the number of times to fsync WAL\n>> > data */\n>> > + if (fsyncMethodCalled())\n>> > + {\n>> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n>> > + {\n>> > + INSTR_TIME_SET_CURRENT(duration);\n>> > + INSTR_TIME_SUBTRACT(duration, start);\n>> > + WalStats.m_wal_sync_time +=\n>> > INSTR_TIME_GET_MILLISEC(duration);\n>> > + }\n>> > +\n>> > + WalStats.m_wal_sync++;\n>> > + }\n>> >\n>> > * I'd avoid always calling fsyncMethodCalled() in this path. 
How about\n>> > incrementing m_wal_sync after each sync operation?\n>> \n>> I think if syncing the disk does not occur, m_wal_sync should not be\n>> incremented.\n>> It depends enableFsync and sync_method.\n>> \n>> enableFsync is checked in each fsync method like\n>> pg_fsync_no_writethrough(),\n>> so if incrementing m_wal_sync after each sync operation, it should be\n>> implemented\n>> in each fsync method. It leads to many duplicated codes.\n> \n> Right. I missed that each fsync function checks enableFsync.\n> \n>> So, why don't you change the function to a flag whether to\n>> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n> \n> Looks better. Since we don't necessarily need to increment m_wal_sync\n> after doing fsync we can write the code without an additional variable\n> as follows:\n> \n> if (enableFsync)\n> {\n> switch (sync_method)\n> {\n> case SYNC_METHOD_FSYNC:\n> #ifdef HAVE_FSYNC_WRITETHROUGH\n> case SYNC_METHOD_FSYNC_WRITETHROUGH:\n> #endif\n> #ifdef HAVE_FDATASYNC\n> case SYNC_METHOD_FDATASYNC:\n> #endif\n> WalStats.m_wal_sync++;\n> if (track_wal_io_timing)\n> INSTR_TIME_SET_CURRENT(start);\n> break;\n> default:\n> break;\n> }\n> }\n> \n> (do fsync and error handling here)\n> \n> /* increment the i/o timing and the number of times to fsync WAL \n> data */\n> if (track_wal_io_timing)\n> {\n> INSTR_TIME_SET_CURRENT(duration);\n> INSTR_TIME_SUBTRACT(duration, start);\n> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> }\n\nIIUC, I think we can't handle the following case.\n\nWhen \"sync_method\" is SYNC_METHOD_OPEN or SYNC_METHOD_OPEN_DSYNC and\n\"track_wal_io_timing\" is enabled, \"start\" doesn't be initialized.\n\nMy understanding is something wrong, isn't it?\n\n\n> I think we can change the first switch-case to an if statement.\n\nYes, I'll change it.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jan 2021 13:15:22 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-25 11:47, japin wrote:\n> On Mon, 25 Jan 2021 at 09:36, Masahiko Sawada <sawada.mshk@gmail.com> \n> wrote:\n>> On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>> On 2021-01-22 14:50, Masahiko Sawada wrote:\n>>> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n>>> > <ikedamsh@oss.nttdata.com> wrote:\n>>> >>\n>>> >> Hi,\n>>> >>\n>>> >> I rebased the patch to the master branch.\n>>> >\n>>> > Thank you for working on this. I've read the latest patch. Here are\n>>> > comments:\n>>> >\n>>> > ---\n>>> > + if (track_wal_io_timing)\n>>> > + {\n>>> > + INSTR_TIME_SET_CURRENT(duration);\n>>> > + INSTR_TIME_SUBTRACT(duration, start);\n>>> > + WalStats.m_wal_write_time +=\n>>> > INSTR_TIME_GET_MILLISEC(duration);\n>>> > + }\n>>> >\n>>> > * I think it should add the time in micro sec.\n>>> > After running pgbench with track_wal_io_timing = on for 30 sec,\n>>> > pg_stat_wal showed the following on my environment:\n>>> >\n>>> > postgres(1:61569)=# select * from pg_stat_wal;\n>>> > -[ RECORD 1 ]----+-----------------------------\n>>> > wal_records | 285947\n>>> > wal_fpi | 53285\n>>> > wal_bytes | 442008213\n>>> > wal_buffers_full | 0\n>>> > wal_write | 25516\n>>> > wal_write_time | 0\n>>> > wal_sync | 25437\n>>> > wal_sync_time | 14490\n>>> > stats_reset | 2021-01-22 10:56:13.29464+09\n>>> >\n>>> > Since writes can complete less than a millisecond, wal_write_time\n>>> > didn't increase. I think sync_time could also have the same problem.\n>>> \n>>> Thanks for your comments. 
I didn't notice that.\n>>> I changed the unit from milliseconds to microseconds.\n>>> \n>>> > ---\n>>> > + /*\n>>> > + * Measure i/o timing to fsync WAL data.\n>>> > + *\n>>> > + * The wal receiver skip to collect it to avoid performance\n>>> > degradation of standy servers.\n>>> > + * If sync_method doesn't have its fsync method, to skip too.\n>>> > + */\n>>> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n>>> > fsyncMethodCalled())\n>>> > + INSTR_TIME_SET_CURRENT(start);\n>>> >\n>>> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n>>> > is true? I think the performance degradation is also true for backend\n>>> > processes. If there is another reason for that, I think it's better to\n>>> > mention in both the doc and comment.\n>>> > * How about checking track_wal_io_timing first?\n>>> > * s/standy/standby/\n>>> \n>>> I fixed it.\n>>> As kuroda-san mentioned too, the skip is no need to be considered.\n>> \n>> I think you also removed the code to have the wal receiver report the\n>> stats. So with the latest patch, the wal receiver tracks those\n>> statistics but doesn't report.\n>> \n>> And maybe XLogWalRcvWrite() also needs to track I/O?\n>> \n>>> \n>>> > ---\n>>> > + /* increment the i/o timing and the number of times to fsync WAL\n>>> > data */\n>>> > + if (fsyncMethodCalled())\n>>> > + {\n>>> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n>>> > + {\n>>> > + INSTR_TIME_SET_CURRENT(duration);\n>>> > + INSTR_TIME_SUBTRACT(duration, start);\n>>> > + WalStats.m_wal_sync_time +=\n>>> > INSTR_TIME_GET_MILLISEC(duration);\n>>> > + }\n>>> > +\n>>> > + WalStats.m_wal_sync++;\n>>> > + }\n>>> >\n>>> > * I'd avoid always calling fsyncMethodCalled() in this path. 
How about\n>>> > incrementing m_wal_sync after each sync operation?\n>>> \n>>> I think if syncing the disk does not occur, m_wal_sync should not be\n>>> incremented.\n>>> It depends enableFsync and sync_method.\n>>> \n>>> enableFsync is checked in each fsync method like\n>>> pg_fsync_no_writethrough(),\n>>> so if incrementing m_wal_sync after each sync operation, it should be\n>>> implemented\n>>> in each fsync method. It leads to many duplicated codes.\n>> \n>> Right. I missed that each fsync function checks enableFsync.\n>> \n>>> So, why don't you change the function to a flag whether to\n>>> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n>> \n>> Looks better. Since we don't necessarily need to increment m_wal_sync\n>> after doing fsync we can write the code without an additional variable\n>> as follows:\n>> \n>> if (enableFsync)\n>> {\n>> switch (sync_method)\n>> {\n>> case SYNC_METHOD_FSYNC:\n>> #ifdef HAVE_FSYNC_WRITETHROUGH\n>> case SYNC_METHOD_FSYNC_WRITETHROUGH:\n>> #endif\n>> #ifdef HAVE_FDATASYNC\n>> case SYNC_METHOD_FDATASYNC:\n>> #endif\n>> WalStats.m_wal_sync++;\n>> if (track_wal_io_timing)\n>> INSTR_TIME_SET_CURRENT(start);\n>> break;\n>> default:\n>> break;\n>> }\n>> }\n>> \n>> (do fsync and error handling here)\n>> \n>> /* increment the i/o timing and the number of times to fsync WAL \n>> data */\n>> if (track_wal_io_timing)\n>> {\n>> INSTR_TIME_SET_CURRENT(duration);\n>> INSTR_TIME_SUBTRACT(duration, start);\n>> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n>> }\n>> \n>> I think we can change the first switch-case to an if statement.\n>> \n> \n> +1. We can also narrow the scope of \"duration\" into \"if\n> (track_wal_io_timing)\" branch.\n\nThanks, I'll change it.\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jan 2021 13:22:13 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-25 13:15, Masahiro Ikeda wrote:\n> On 2021-01-25 10:36, Masahiko Sawada wrote:\n>> On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>> On 2021-01-22 14:50, Masahiko Sawada wrote:\n>>> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n>>> > <ikedamsh@oss.nttdata.com> wrote:\n>>> >>\n>>> >> Hi,\n>>> >>\n>>> >> I rebased the patch to the master branch.\n>>> >\n>>> > Thank you for working on this. I've read the latest patch. Here are\n>>> > comments:\n>>> >\n>>> > ---\n>>> > + if (track_wal_io_timing)\n>>> > + {\n>>> > + INSTR_TIME_SET_CURRENT(duration);\n>>> > + INSTR_TIME_SUBTRACT(duration, start);\n>>> > + WalStats.m_wal_write_time +=\n>>> > INSTR_TIME_GET_MILLISEC(duration);\n>>> > + }\n>>> >\n>>> > * I think it should add the time in micro sec.\n>>> > After running pgbench with track_wal_io_timing = on for 30 sec,\n>>> > pg_stat_wal showed the following on my environment:\n>>> >\n>>> > postgres(1:61569)=# select * from pg_stat_wal;\n>>> > -[ RECORD 1 ]----+-----------------------------\n>>> > wal_records | 285947\n>>> > wal_fpi | 53285\n>>> > wal_bytes | 442008213\n>>> > wal_buffers_full | 0\n>>> > wal_write | 25516\n>>> > wal_write_time | 0\n>>> > wal_sync | 25437\n>>> > wal_sync_time | 14490\n>>> > stats_reset | 2021-01-22 10:56:13.29464+09\n>>> >\n>>> > Since writes can complete less than a millisecond, wal_write_time\n>>> > didn't increase. I think sync_time could also have the same problem.\n>>> \n>>> Thanks for your comments. 
I didn't notice that.\n>>> I changed the unit from milliseconds to microseconds.\n>>> \n>>> > ---\n>>> > + /*\n>>> > + * Measure i/o timing to fsync WAL data.\n>>> > + *\n>>> > + * The wal receiver skip to collect it to avoid performance\n>>> > degradation of standy servers.\n>>> > + * If sync_method doesn't have its fsync method, to skip too.\n>>> > + */\n>>> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n>>> > fsyncMethodCalled())\n>>> > + INSTR_TIME_SET_CURRENT(start);\n>>> >\n>>> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n>>> > is true? I think the performance degradation is also true for backend\n>>> > processes. If there is another reason for that, I think it's better to\n>>> > mention in both the doc and comment.\n>>> > * How about checking track_wal_io_timing first?\n>>> > * s/standy/standby/\n>>> \n>>> I fixed it.\n>>> As kuroda-san mentioned too, the skip is no need to be considered.\n>> \n>> I think you also removed the code to have the wal receiver report the\n>> stats. So with the latest patch, the wal receiver tracks those\n>> statistics but doesn't report.\n>> And maybe XLogWalRcvWrite() also needs to track I/O?\n> \n> Thanks, I forgot to add them.\n> I'll fix it.\n> \n> \n>>> \n>>> > ---\n>>> > + /* increment the i/o timing and the number of times to fsync WAL\n>>> > data */\n>>> > + if (fsyncMethodCalled())\n>>> > + {\n>>> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n>>> > + {\n>>> > + INSTR_TIME_SET_CURRENT(duration);\n>>> > + INSTR_TIME_SUBTRACT(duration, start);\n>>> > + WalStats.m_wal_sync_time +=\n>>> > INSTR_TIME_GET_MILLISEC(duration);\n>>> > + }\n>>> > +\n>>> > + WalStats.m_wal_sync++;\n>>> > + }\n>>> >\n>>> > * I'd avoid always calling fsyncMethodCalled() in this path. 
How about\n>>> > incrementing m_wal_sync after each sync operation?\n>>> \n>>> I think if syncing the disk does not occur, m_wal_sync should not be\n>>> incremented.\n>>> It depends enableFsync and sync_method.\n>>> \n>>> enableFsync is checked in each fsync method like\n>>> pg_fsync_no_writethrough(),\n>>> so if incrementing m_wal_sync after each sync operation, it should be\n>>> implemented\n>>> in each fsync method. It leads to many duplicated codes.\n>> \n>> Right. I missed that each fsync function checks enableFsync.\n>> \n>>> So, why don't you change the function to a flag whether to\n>>> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n>> \n>> Looks better. Since we don't necessarily need to increment m_wal_sync\n>> after doing fsync we can write the code without an additional variable\n>> as follows:\n>> \n>> if (enableFsync)\n>> {\n>> switch (sync_method)\n>> {\n>> case SYNC_METHOD_FSYNC:\n>> #ifdef HAVE_FSYNC_WRITETHROUGH\n>> case SYNC_METHOD_FSYNC_WRITETHROUGH:\n>> #endif\n>> #ifdef HAVE_FDATASYNC\n>> case SYNC_METHOD_FDATASYNC:\n>> #endif\n>> WalStats.m_wal_sync++;\n>> if (track_wal_io_timing)\n>> INSTR_TIME_SET_CURRENT(start);\n>> break;\n>> default:\n>> break;\n>> }\n>> }\n>> \n>> (do fsync and error handling here)\n>> \n>> /* increment the i/o timing and the number of times to fsync WAL \n>> data */\n>> if (track_wal_io_timing)\n>> {\n>> INSTR_TIME_SET_CURRENT(duration);\n>> INSTR_TIME_SUBTRACT(duration, start);\n>> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n>> }\n> \n> IIUC, I think we can't handle the following case.\n> \n> When \"sync_method\" is SYNC_METHOD_OPEN or SYNC_METHOD_OPEN_DSYNC and\n> \"track_wal_io_timing\" is enabled, \"start\" doesn't be initialized.\n> \n> My understanding is something wrong, isn't it?\n\nI thought the following is better.\n\n\n```\n\t/* Measure i/o timing to sync WAL data.*/\n\tif (track_wal_io_timing)\n\t\tINSTR_TIME_SET_CURRENT(start);\n\n (do fsync and error handling 
here)\n\n\t/* check whether to sync WAL data to the disk right now. */\n\tif (enableFsync)\n\t{\n\t\tif ((sync_method == SYNC_METHOD_FSYNC) ||\n\t\t\t(sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH) ||\n\t\t\t(sync_method == SYNC_METHOD_FDATASYNC))\n\t\t{\n\t\t\t/* increment the i/o timing and the number of times to fsync WAL data \n*/\n\t\t\tif (track_wal_io_timing)\n\t\t\t{\n\t\t\t\tinstr_time\tduration;\n\n\t\t\t\tINSTR_TIME_SET_CURRENT(duration);\n\t\t\t\tINSTR_TIME_SUBTRACT(duration, start);\n\t\t\t\tWalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n\t\t\t}\n\t\t\tWalStats.m_wal_sync++;\n\t\t}\n\t}\n```\n\nAlthough INSTR_TIME_SET_CURRENT(start) is called every time regardless\nof \"sync_method\" and \"enableFsync\", we don't introduce additional\nvariables.\nBut it's ok because \"track_wal_io_timing\" already causes performance\ndegradation.\n\nWhat do you think?\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jan 2021 13:28:20 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 1:28 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2021-01-25 13:15, Masahiro Ikeda wrote:\n> > On 2021-01-25 10:36, Masahiko Sawada wrote:\n> >> On Fri, Jan 22, 2021 at 10:05 PM Masahiro Ikeda\n> >> <ikedamsh@oss.nttdata.com> wrote:\n> >>>\n> >>> On 2021-01-22 14:50, Masahiko Sawada wrote:\n> >>> > On Fri, Dec 25, 2020 at 6:46 PM Masahiro Ikeda\n> >>> > <ikedamsh@oss.nttdata.com> wrote:\n> >>> >>\n> >>> >> Hi,\n> >>> >>\n> >>> >> I rebased the patch to the master branch.\n> >>> >\n> >>> > Thank you for working on this. I've read the latest patch. Here are\n> >>> > comments:\n> >>> >\n> >>> > ---\n> >>> > + if (track_wal_io_timing)\n> >>> > + {\n> >>> > + INSTR_TIME_SET_CURRENT(duration);\n> >>> > + INSTR_TIME_SUBTRACT(duration, start);\n> >>> > + WalStats.m_wal_write_time +=\n> >>> > INSTR_TIME_GET_MILLISEC(duration);\n> >>> > + }\n> >>> >\n> >>> > * I think it should add the time in micro sec.\n> >>> > After running pgbench with track_wal_io_timing = on for 30 sec,\n> >>> > pg_stat_wal showed the following on my environment:\n> >>> >\n> >>> > postgres(1:61569)=# select * from pg_stat_wal;\n> >>> > -[ RECORD 1 ]----+-----------------------------\n> >>> > wal_records | 285947\n> >>> > wal_fpi | 53285\n> >>> > wal_bytes | 442008213\n> >>> > wal_buffers_full | 0\n> >>> > wal_write | 25516\n> >>> > wal_write_time | 0\n> >>> > wal_sync | 25437\n> >>> > wal_sync_time | 14490\n> >>> > stats_reset | 2021-01-22 10:56:13.29464+09\n> >>> >\n> >>> > Since writes can complete less than a millisecond, wal_write_time\n> >>> > didn't increase. I think sync_time could also have the same problem.\n> >>>\n> >>> Thanks for your comments. 
I didn't notice that.\n> >>> I changed the unit from milliseconds to microseconds.\n> >>>\n> >>> > ---\n> >>> > + /*\n> >>> > + * Measure i/o timing to fsync WAL data.\n> >>> > + *\n> >>> > + * The wal receiver skip to collect it to avoid performance\n> >>> > degradation of standy servers.\n> >>> > + * If sync_method doesn't have its fsync method, to skip too.\n> >>> > + */\n> >>> > + if (!AmWalReceiverProcess() && track_wal_io_timing &&\n> >>> > fsyncMethodCalled())\n> >>> > + INSTR_TIME_SET_CURRENT(start);\n> >>> >\n> >>> > * Why does only the wal receiver skip it even if track_wal_io_timinig\n> >>> > is true? I think the performance degradation is also true for backend\n> >>> > processes. If there is another reason for that, I think it's better to\n> >>> > mention in both the doc and comment.\n> >>> > * How about checking track_wal_io_timing first?\n> >>> > * s/standy/standby/\n> >>>\n> >>> I fixed it.\n> >>> As kuroda-san mentioned too, the skip is no need to be considered.\n> >>\n> >> I think you also removed the code to have the wal receiver report the\n> >> stats. So with the latest patch, the wal receiver tracks those\n> >> statistics but doesn't report.\n> >> And maybe XLogWalRcvWrite() also needs to track I/O?\n> >\n> > Thanks, I forgot to add them.\n> > I'll fix it.\n> >\n> >\n> >>>\n> >>> > ---\n> >>> > + /* increment the i/o timing and the number of times to fsync WAL\n> >>> > data */\n> >>> > + if (fsyncMethodCalled())\n> >>> > + {\n> >>> > + if (!AmWalReceiverProcess() && track_wal_io_timing)\n> >>> > + {\n> >>> > + INSTR_TIME_SET_CURRENT(duration);\n> >>> > + INSTR_TIME_SUBTRACT(duration, start);\n> >>> > + WalStats.m_wal_sync_time +=\n> >>> > INSTR_TIME_GET_MILLISEC(duration);\n> >>> > + }\n> >>> > +\n> >>> > + WalStats.m_wal_sync++;\n> >>> > + }\n> >>> >\n> >>> > * I'd avoid always calling fsyncMethodCalled() in this path. 
How about\n> >>> > incrementing m_wal_sync after each sync operation?\n> >>>\n> >>> I think if syncing the disk does not occur, m_wal_sync should not be\n> >>> incremented.\n> >>> It depends enableFsync and sync_method.\n> >>>\n> >>> enableFsync is checked in each fsync method like\n> >>> pg_fsync_no_writethrough(),\n> >>> so if incrementing m_wal_sync after each sync operation, it should be\n> >>> implemented\n> >>> in each fsync method. It leads to many duplicated codes.\n> >>\n> >> Right. I missed that each fsync function checks enableFsync.\n> >>\n> >>> So, why don't you change the function to a flag whether to\n> >>> sync data to the disk will be occurred or not in issue_xlog_fsync()?\n> >>\n> >> Looks better. Since we don't necessarily need to increment m_wal_sync\n> >> after doing fsync we can write the code without an additional variable\n> >> as follows:\n> >>\n> >> if (enableFsync)\n> >> {\n> >> switch (sync_method)\n> >> {\n> >> case SYNC_METHOD_FSYNC:\n> >> #ifdef HAVE_FSYNC_WRITETHROUGH\n> >> case SYNC_METHOD_FSYNC_WRITETHROUGH:\n> >> #endif\n> >> #ifdef HAVE_FDATASYNC\n> >> case SYNC_METHOD_FDATASYNC:\n> >> #endif\n> >> WalStats.m_wal_sync++;\n> >> if (track_wal_io_timing)\n> >> INSTR_TIME_SET_CURRENT(start);\n> >> break;\n> >> default:\n> >> break;\n> >> }\n> >> }\n> >>\n> >> (do fsync and error handling here)\n> >>\n> >> /* increment the i/o timing and the number of times to fsync WAL\n> >> data */\n> >> if (track_wal_io_timing)\n> >> {\n> >> INSTR_TIME_SET_CURRENT(duration);\n> >> INSTR_TIME_SUBTRACT(duration, start);\n> >> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> >> }\n> >\n> > IIUC, I think we can't handle the following case.\n> >\n> > When \"sync_method\" is SYNC_METHOD_OPEN or SYNC_METHOD_OPEN_DSYNC and\n> > \"track_wal_io_timing\" is enabled, \"start\" doesn't be initialized.\n> >\n> > My understanding is something wrong, isn't it?\n\nYou're right. 
We might want to initialize 'start' with 0 in those two\ncases and check if INSTR_TIME_IS_ZERO() later when accumulating the\nI/O time.\n\n>\n> I thought the following is better.\n>\n>\n> ```\n> /* Measure i/o timing to sync WAL data.*/\n> if (track_wal_io_timing)\n> INSTR_TIME_SET_CURRENT(start);\n>\n> (do fsync and error handling here)\n>\n> /* check whether to sync WAL data to the disk right now. */\n> if (enableFsync)\n> {\n> if ((sync_method == SYNC_METHOD_FSYNC) ||\n> (sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH) ||\n> (sync_method == SYNC_METHOD_FDATASYNC))\n> {\n> /* increment the i/o timing and the number of times to fsync WAL data\n> */\n> if (track_wal_io_timing)\n> {\n> instr_time duration;\n>\n> INSTR_TIME_SET_CURRENT(duration);\n> INSTR_TIME_SUBTRACT(duration, start);\n> WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> }\n> WalStats.m_wal_sync++;\n> }\n> }\n> ```\n>\n> Although INSTR_TIME_SET_CURRENT(start) is called everytime regardless\n> of the \"sync_method\" and \"enableFsync\", we don't make additional\n> variables.\n> But it's ok because \"track_wal_io_timing\" leads already performance\n> degradation.\n>\n> What do you think?\n\nThat also fine with me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 25 Jan 2021 13:58:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Hi, thanks for the reviews.\n\nI updated the attached patch.\nThe summary of the changes is following.\n\n1. fix document\n\nI followed another view's comments.\n\n\n2. refactor issue_xlog_fsync()\n\nI removed \"sync_called\" variables, narrowed the \"duration\" scope and\nchange the switch statement to if statement.\n\n\n3. make wal-receiver report WAL statistics\n\nI add the code to collect the statistics for a written operation\nin XLogWalRcvWrite() and to report stats in WalReceiverMain().\n\nSince WalReceiverMain() can loop fast, to avoid loading stats collector,\nI add \"force\" argument to the pgstat_send_wal function. If \"force\" is\nfalse, it can skip reporting until at least 500 msec since it last \nreported. WalReceiverMain() almost calls pgstat_send_wal() with \"force\" \nas false.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 25 Jan 2021 16:51:31 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 4:51 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi, thanks for the reviews.\n>\n> I updated the attached patch.\n\nThank you for updating the patch!\n\n> The summary of the changes is following.\n>\n> 1. fix document\n>\n> I followed another view's comments.\n>\n>\n> 2. refactor issue_xlog_fsync()\n>\n> I removed \"sync_called\" variables, narrowed the \"duration\" scope and\n> change the switch statement to if statement.\n\nLooking at the code again, I think if we check if an fsync was really\ncalled when calculating the I/O time, it's better to check that before\nstarting the measurement.\n\n bool issue_fsync = false;\n\n if (enableFsync &&\n (sync_method == SYNC_METHOD_FSYNC ||\n sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n sync_method == SYNC_METHOD_FDATASYNC))\n {\n if (track_wal_io_timing)\n INSTR_TIME_SET_CURRENT(start);\n issue_fsync = true;\n }\n (snip)\n if (issue_fsync)\n {\n if (track_wal_io_timing)\n {\n instr_time duration;\n\n INSTR_TIME_SET_CURRENT(duration);\n INSTR_TIME_SUBTRACT(duration, start);\n WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n }\n WalStats.m_wal_sync++;\n }\n\nSo I prefer either the above which is a modified version of the\noriginal approach or my idea that doesn’t introduce a new local\nvariable I proposed before. But I'm not going to insist on that.\n\n>\n>\n> 3. make wal-receiver report WAL statistics\n>\n> I add the code to collect the statistics for a written operation\n> in XLogWalRcvWrite() and to report stats in WalReceiverMain().\n>\n> Since WalReceiverMain() can loop fast, to avoid loading stats collector,\n> I add \"force\" argument to the pgstat_send_wal function. If \"force\" is\n> false, it can skip reporting until at least 500 msec since it last\n> reported. 
WalReceiverMain() almost calls pgstat_send_wal() with \"force\"\n> as false.\n\n void\n-pgstat_send_wal(void)\n+pgstat_send_wal(bool force)\n {\n /* We assume this initializes to zeroes */\n static const PgStat_MsgWal all_zeroes;\n+ static TimestampTz last_report = 0;\n\n+ TimestampTz now;\n WalUsage walusage;\n\n+ /*\n+ * Don't send a message unless it's been at least PGSTAT_STAT_INTERVAL\n+ * msec since we last sent one or specified \"force\".\n+ */\n+ now = GetCurrentTimestamp();\n+ if (!force &&\n+ !TimestampDifferenceExceeds(last_report, now, PGSTAT_STAT_INTERVAL))\n+ return;\n+\n+ last_report = now;\n\nHmm, I don’t think it's good to use PGSTAT_STAT_INTERVAL for this\npurpose since it is used as a minimum time for stats file updates. If\nwe want an interval, I think we should define another one. Also, with\nthe patch, pgstat_send_wal() calls GetCurrentTimestamp() every time\neven if track_wal_io_timing is off, which is not good. On the other\nhand, I agree that your concern that the wal receiver should not send\nthe stats for whenever receiving wal records. So an idea could be to\nsend the wal stats when finishing the current WAL segment file and\nwhen timeout in the main loop. That way we can guarantee that the wal\nstats on a replica is updated at least every time finishing a WAL\nsegment file when actively receiving WAL records and every\nNAPTIME_PER_CYCLE in other cases.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 26 Jan 2021 00:03:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
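The interval-based throttling that the quoted pgstat_send_wal() diff adds can be modeled in isolation. This is a hedged sketch: WAL_STATS_MIN_INTERVAL_US and send_wal_stats() are invented names, and the current time is passed in by the caller rather than read with GetCurrentTimestamp(), which also sidesteps the objection above about hitting the clock on every invocation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical interval, analogous to (but distinct from) PGSTAT_STAT_INTERVAL. */
#define WAL_STATS_MIN_INTERVAL_US 500000 /* 500 msec */

static uint64_t last_report = 0;
static int reports_sent = 0;

/*
 * Sketch of a throttled pgstat_send_wal(): skip the report unless "force"
 * is set or enough time has passed since the last one.  "now" is a
 * microsecond timestamp supplied by the caller so the skip logic can be
 * exercised deterministically.
 */
static bool send_wal_stats(bool force, uint64_t now)
{
    if (!force && now - last_report < WAL_STATS_MIN_INTERVAL_US)
        return false;   /* throttled: nothing sent */
    last_report = now;
    reports_sent++;     /* stands in for the actual message to the collector */
    return true;
}
```

A forced call always reports (useful at exit or segment boundaries), while unforced calls inside the interval are dropped.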
{
"msg_contents": "On 2021-01-26 00:03, Masahiko Sawada wrote:\n> On Mon, Jan 25, 2021 at 4:51 PM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> Hi, thanks for the reviews.\n>> \n>> I updated the attached patch.\n> \n> Thank you for updating the patch!\n> \n>> The summary of the changes is following.\n>> \n>> 1. fix document\n>> \n>> I followed another view's comments.\n>> \n>> \n>> 2. refactor issue_xlog_fsync()\n>> \n>> I removed \"sync_called\" variables, narrowed the \"duration\" scope and\n>> change the switch statement to if statement.\n> \n> Looking at the code again, I think if we check if an fsync was really\n> called when calculating the I/O time, it's better to check that before\n> starting the measurement.\n> \n> bool issue_fsync = false;\n> \n> if (enableFsync &&\n> (sync_method == SYNC_METHOD_FSYNC ||\n> sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n> sync_method == SYNC_METHOD_FDATASYNC))\n> {\n> if (track_wal_io_timing)\n> INSTR_TIME_SET_CURRENT(start);\n> issue_fsync = true;\n> }\n> (snip)\n> if (issue_fsync)\n> {\n> if (track_wal_io_timing)\n> {\n> instr_time duration;\n> \n> INSTR_TIME_SET_CURRENT(duration);\n> INSTR_TIME_SUBTRACT(duration, start);\n> WalStats.m_wal_sync_time = \n> INSTR_TIME_GET_MICROSEC(duration);\n> }\n> WalStats.m_wal_sync++;\n> }\n> \n> So I prefer either the above which is a modified version of the\n> original approach or my idea that doesn’t introduce a new local\n> variable I proposed before. But I'm not going to insist on that.\n\nThanks for the comments.\nI change the code to the above.\n\n>> \n>> \n>> 3. make wal-receiver report WAL statistics\n>> \n>> I add the code to collect the statistics for a written operation\n>> in XLogWalRcvWrite() and to report stats in WalReceiverMain().\n>> \n>> Since WalReceiverMain() can loop fast, to avoid loading stats \n>> collector,\n>> I add \"force\" argument to the pgstat_send_wal function. 
If \"force\" is\n>> false, it can skip reporting until at least 500 msec since it last\n>> reported. WalReceiverMain() almost calls pgstat_send_wal() with \n>> \"force\"\n>> as false.\n> \n> void\n> -pgstat_send_wal(void)\n> +pgstat_send_wal(bool force)\n> {\n> /* We assume this initializes to zeroes */\n> static const PgStat_MsgWal all_zeroes;\n> + static TimestampTz last_report = 0;\n> \n> + TimestampTz now;\n> WalUsage walusage;\n> \n> + /*\n> + * Don't send a message unless it's been at least \n> PGSTAT_STAT_INTERVAL\n> + * msec since we last sent one or specified \"force\".\n> + */\n> + now = GetCurrentTimestamp();\n> + if (!force &&\n> + !TimestampDifferenceExceeds(last_report, now, \n> PGSTAT_STAT_INTERVAL))\n> + return;\n> +\n> + last_report = now;\n> \n> Hmm, I don’t think it's good to use PGSTAT_STAT_INTERVAL for this\n> purpose since it is used as a minimum time for stats file updates. If\n> we want an interval, I think we should define another one Also, with\n> the patch, pgstat_send_wal() calls GetCurrentTimestamp() every time\n> even if track_wal_io_timing is off, which is not good. On the other\n> hand, I agree that your concern that the wal receiver should not send\n> the stats for whenever receiving wal records. So an idea could be to\n> send the wal stats when finishing the current WAL segment file and\n> when timeout in the main loop. That way we can guarantee that the wal\n> stats on a replica is updated at least every time finishing a WAL\n> segment file when actively receiving WAL records and every\n> NAPTIME_PER_CYCLE in other cases.\n\nI agree with your comments. I think it should report when\nreaching the end of WAL too. I add the code to report the stats\nwhen finishing the current WAL segment file when timeout in the\nmain loop and when reaching the end of WAL.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 26 Jan 2021 08:37:36 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
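One way to detect "finished the current WAL segment file," the reporting trigger agreed on above, is to compare which segment the old and new write positions fall in. The helper below is a simplified stand-in assuming the default 16 MB segment size; the real walreceiver tracks segment boundaries differently.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define WAL_SEGMENT_SIZE (16u * 1024 * 1024) /* default 16 MB WAL segments */

/*
 * Return true when a write advances into a new WAL segment, mirroring the
 * "send stats when finishing the current WAL segment file" policy.
 * recptr values are byte positions in the WAL stream.
 */
static bool crossed_segment_boundary(uint64_t old_recptr, uint64_t new_recptr)
{
    return old_recptr / WAL_SEGMENT_SIZE != new_recptr / WAL_SEGMENT_SIZE;
}
```

The receiver would then report stats when this returns true, on main-loop timeout, and at end of WAL, matching the three cases listed in the message above.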
{
"msg_contents": "On Mon, Jan 25, 2021 at 8:03 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Jan 25, 2021 at 4:51 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n> wrote:\n> >\n> > Hi, thanks for the reviews.\n> >\n> > I updated the attached patch.\n>\n> Thank you for updating the patch!\n>\n\nYour original email with \"total number of times\" is more correct, removing\nthe \"of times\" and just writing \"total number of WAL\" is not good wording.\n\nSpecifically, this change is strictly worse than the original.\n\n- Number of times WAL data was written to disk because WAL buffers\nbecame full\n+ Total number of WAL data written to disk because WAL buffers became\nfull\n\nBoth have the flaw that they leave implied exactly what it means to \"write\nWAL to disk\". It is also unclear whether a counter, bytes, or both, would\nbe more useful here. I've incorporated this into my documentation\nsuggestions below:\n\n(wal_buffers_full)\n-- Revert - the original was better, though maybe add more detail similar\nto the below. I didn't research exactly how this works.\n\n(wal_write)\nThe number of times WAL buffers were written out to disk via XLogWrite\n\n-- Seems like this should have a bytes version too\n\n(wal_write_time)\nThe amount of time spent writing WAL buffers to disk, excluding sync time\nunless the wal_sync_method is either open_datasync or open_sync.\nUnits are in milliseconds with microsecond resolution. This is zero when\ntrack_wal_io_timing is disabled.\n\n(wal_sync)\nThe number of times WAL files were synced to disk while wal_sync_method was\nset to one of the \"sync at commit\" options (i.e., fdatasync, fsync,\nor fsync_writethrough).\n\n-- it is not going to be zero just because those settings are presently\ndisabled as they could have been enabled at some point since the last time\nthese statistics were reset.\n\n(wal_sync_time)\nThe amount of time spent syncing WAL files to disk, in milliseconds with\nmicrosecond resolution. 
This requires setting wal_sync_method to one of\nthe \"sync at commit\" options (i.e., fdatasync, fsync,\nor fsync_writethrough).\n\n\nAlso,\n\nI would suggest extracting the changes to postmaster/pgstat.c and\nreplication/walreceiver.c to a separate patch as you've fundamentally\nchanged how it behaves with regards to that function and how it interacts\nwith the WAL receiver. That seems an entirely separate topic warranting\nits own patch and discussion.\n\nDavid J.",
"msg_date": "Mon, 25 Jan 2021 16:48:09 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 4:37 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n\n>\n> I agree with your comments. I think it should report when\n> reaching the end of WAL too. I add the code to report the stats\n> when finishing the current WAL segment file when timeout in the\n> main loop and when reaching the end of WAL.\n>\n>\nThe following is not an improvement:\n\n- /* Send WAL statistics to the stats collector. */\n+ /* Send WAL statistics to stats collector */\n\nThe word \"the\" there makes it proper English. Your copy-pasting should\nhave kept the existing good wording in the other locations rather than\nreplace the existing location with the newly added incorrect wording.\n\nThis doesn't make sense:\n\n* current WAL segment file to avoid loading stats collector.\n\nMaybe \"overloading\" or \"overwhelming\"?\n\nI see you removed the pgstat_send_wal(force) change. The rest of my\ncomments on the v6 patch still stand I believe.\n\nDavid J.",
"msg_date": "Mon, 25 Jan 2021 16:52:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "Hi, David.\n\nThanks for your comments.\n\nOn 2021-01-26 08:48, David G. Johnston wrote:\n> On Mon, Jan 25, 2021 at 8:03 AM Masahiko Sawada\n> <sawada.mshk@gmail.com> wrote:\n> \n>> On Mon, Jan 25, 2021 at 4:51 PM Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>> Hi, thanks for the reviews.\n>>> \n>>> I updated the attached patch.\n>> \n>> Thank you for updating the patch!\n> \n> Your original email with \"total number of times\" is more correct,\n> removing the \"of times\" and just writing \"total number of WAL\" is not\n> good wording.\n> \n> Specifically, this change is strictly worse than the original.\n> \n> - Number of times WAL data was written to disk because WAL\n> buffers became full\n> + Total number of WAL data written to disk because WAL buffers\n> became full\n> \n> Both have the flaw that they leave implied exactly what it means to\n> \"write WAL to disk\". It is also unclear whether a counter, bytes, or\n> both, would be more useful here. I've incorporated this into my\n> documentation suggestions below:\n> (wal_buffers_full)\n> \n> -- Revert - the original was better, though maybe add more detail\n> similar to the below. I didn't research exactly how this works.\n\nOK, I understood.\nI reverted since this is a counter statistics.\n\n\n> (wal_write)\n> The number of times WAL buffers were written out to disk via XLogWrite\n> \n\nThanks.\n\nI thought it's better to omit \"The\" and \"XLogWrite\" because other views' \ndescription\nomits \"The\" and there is no description of \"XlogWrite\" in the documents. \nWhat do you think?\n\n> -- Seems like this should have a bytes version too\n\nDo you mean that we need to separate statistics for wal write?\n\n\n> (wal_write_time)\n> The amount of time spent writing WAL buffers to disk, excluding sync\n> time unless the wal_sync_method is either open_datasync or open_sync.\n> Units are in milliseconds with microsecond resolution. 
This is zero\n> when track_wal_io_timing is disabled.\n\nThanks, I'll fix it.\n\n\n> (wal_sync)\n> The number of times WAL files were synced to disk while\n> wal_sync_method was set to one of the \"sync at commit\" options (i.e.,\n> fdatasync, fsync, or fsync_writethrough).\n\nThanks, I'll fix it.\n\n\n> -- it is not going to be zero just because those settings are\n> presently disabled as they could have been enabled at some point since\n> the last time these statistics were reset.\n\nRight, your description is correct.\nThe \"track_wal_io_timing\" has the same limitation, doesn't it?\n\n\n> (wal_sync_time)\n> The amount of time spent syncing WAL files to disk, in milliseconds\n> with microsecond resolution. This requires setting wal_sync_method to\n> one of the \"sync at commit\" options (i.e., fdatasync, fsync, or\n> fsync_writethrough).\n\nThanks, I'll fix it.\nI will add the comments related to \"track_wal_io_timing\".\n\n\n> Also,\n> \n> I would suggest extracting the changes to postmaster/pgstat.c and\n> replication/walreceiver.c to a separate patch as you've fundamentally\n> changed how it behaves with regards to that function and how it\n> interacts with the WAL receiver. That seems an entirely separate\n> topic warranting its own patch and discussion.\n\nOK, I will separate two patches.\n\n\nOn 2021-01-26 08:52, David G. Johnston wrote:\n> On Mon, Jan 25, 2021 at 4:37 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n> \n>> I agree with your comments. I think it should report when\n>> reaching the end of WAL too. I add the code to report the stats\n>> when finishing the current WAL segment file when timeout in the\n>> main loop and when reaching the end of WAL.\n> \n> The following is not an improvement:\n> \n> - /* Send WAL statistics to the stats collector. */+ /* Send WAL\n> statistics to stats collector */\n> \n> The word \"the\" there makes it proper English. 
Your copy-pasting\n> should have kept the existing good wording in the other locations\n> rather than replace the existing location with the newly added\n> incorrect wording.\n\nThanks, I'll fix it.\n\n\n> This doesn't make sense:\n> \n> * current WAL segment file to avoid loading stats collector.\n> \n> Maybe \"overloading\" or \"overwhelming\"?\n> \n> I see you removed the pgstat_send_wal(force) change. The rest of my\n> comments on the v6 patch still stand I believe.\n\nYes, \"overloading\" is right. Thanks.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 26 Jan 2021 15:56:22 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 11:56 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n\n>\n> > (wal_write)\n> > The number of times WAL buffers were written out to disk via XLogWrite\n> >\n>\n> Thanks.\n>\n> I thought it's better to omit \"The\" and \"XLogWrite\" because other views'\n> description\n> omits \"The\" and there is no description of \"XlogWrite\" in the documents.\n> What do you think?\n>\n>\nThe documentation for WAL does get into the public API level of detail and\ndoing so here makes what this measures crystal clear. The potential\nabsence of sufficient detail elsewhere should be corrected instead of\nmaking this description more vague. Specifically, probably XLogWrite\nshould be added to the WAL overview as part of this update and probably\neven have the descriptive section of the documentation note that the number\nof times that said function is executed is exposed as a counter in the wal\nstatistics table - thus closing the loop.\n\nDavid J.",
"msg_date": "Tue, 26 Jan 2021 08:14:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-01-27 00:14, David G. Johnston wrote:\n> On Mon, Jan 25, 2021 at 11:56 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n> \n>>> (wal_write)\n>>> The number of times WAL buffers were written out to disk via\n>> XLogWrite\n>>> \n>> \n>> Thanks.\n>> \n>> I thought it's better to omit \"The\" and \"XLogWrite\" because other\n>> views'\n>> description\n>> omits \"The\" and there is no description of \"XlogWrite\" in the\n>> documents.\n>> What do you think?\n> \n> The documentation for WAL does get into the public API level of detail\n> and doing so here makes what this measures crystal clear. The\n> potential absence of sufficient detail elsewhere should be corrected\n> instead of making this description more vague. Specifically, probably\n> XLogWrite should be added to the WAL overview as part of this update\n> and probably even have the descriptive section of the documentation\n> note that the number of times that said function is executed is\n> exposed as a counter in the wal statistics table - thus closing the\n> loop.\n\nThanks for your comments.\n\nI added the descriptions in documents and separated the patch\ninto attached two patches. First is to add wal i/o activity statistics.\nSecond is to make the wal receiver report the wal statistics.\n\nPlease let me know if you have any comments.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 29 Jan 2021 17:49:00 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "I pgindented the patches.\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 05 Feb 2021 08:45:38 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/02/05 8:45, Masahiro Ikeda wrote:\n> I pgindented the patches.\n\nThanks for updating the patches!\n\n+ <function>XLogWrite</function>, which nomally called by an\n+ <function>issue_xlog_fsync</function>, which nomally called by an\n\nTypo: \"nomally\" should be \"normally\"?\n\n+ <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>)\n+ <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>),\n\nIsn't it better to add a space character just after \"request\"?\n\n+\t\t\t\t\tINSTR_TIME_SET_CURRENT(duration);\n+\t\t\t\t\tINSTR_TIME_SUBTRACT(duration, start);\n+\t\t\t\t\tWalStats.m_wal_write_time = INSTR_TIME_GET_MICROSEC(duration);\n\nIf several cycles happen in the do-while loop, m_wal_write_time should be\nupdated with the sum of \"duration\" in those cycles instead of \"duration\"\nin the last cycle? If yes, \"+=\" should be used instead of \"=\" when updating\nm_wal_write_time?\n\n+\t\t\tINSTR_TIME_SET_CURRENT(duration);\n+\t\t\tINSTR_TIME_SUBTRACT(duration, start);\n+\t\t\tWalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n\nAlso \"=\" should be \"+=\" in the above?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 8 Feb 2021 13:01:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
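Fujii's point about "+=" versus "=" is easy to demonstrate with a toy model of the XLogWrite do-while loop: assignment keeps only the final cycle's duration, while accumulation sums all of them. The function names here are illustrative, not from the patch.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the write loop: each cycle has a measured duration in
 * microseconds.  "accumulate" mirrors the corrected "+=", "overwrite"
 * the buggy "=" that drops every cycle but the last.
 */
static uint64_t accumulate(const uint64_t *dur, int n)
{
    uint64_t m_wal_write_time = 0;
    for (int i = 0; i < n; i++)
        m_wal_write_time += dur[i]; /* sum over all cycles */
    return m_wal_write_time;
}

static uint64_t overwrite(const uint64_t *dur, int n)
{
    uint64_t m_wal_write_time = 0;
    for (int i = 0; i < n; i++)
        m_wal_write_time = dur[i]; /* bug: earlier cycles are lost */
    return m_wal_write_time;
}
```

For three cycles of 120, 80, and 300 microseconds, accumulation reports 500 while assignment reports only 300, understating the write time whenever the loop runs more than once.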
{
"msg_contents": "\n\nOn 2021/02/08 13:01, Fujii Masao wrote:\n> \n> \n> On 2021/02/05 8:45, Masahiro Ikeda wrote:\n>> I pgindented the patches.\n> \n> Thanks for updating the patches!\n> \n> +       <function>XLogWrite</function>, which nomally called by an\n> +       <function>issue_xlog_fsync</function>, which nomally called by an\n> \n> Typo: \"nomally\" should be \"normally\"?\n> \n> +       <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>)\n> +       <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>),\n> \n> Isn't it better to add a space character just after \"request\"?\n> \n> +                    INSTR_TIME_SET_CURRENT(duration);\n> +                    INSTR_TIME_SUBTRACT(duration, start);\n> +                    WalStats.m_wal_write_time = INSTR_TIME_GET_MICROSEC(duration);\n> \n> If several cycles happen in the do-while loop, m_wal_write_time should be\n> updated with the sum of \"duration\" in those cycles instead of \"duration\"\n> in the last cycle? If yes, \"+=\" should be used instead of \"=\" when updating\n> m_wal_write_time?\n> \n> +            INSTR_TIME_SET_CURRENT(duration);\n> +            INSTR_TIME_SUBTRACT(duration, start);\n> +            WalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> \n> Also \"=\" should be \"+=\" in the above?\n\n+\t\t/* Send WAL statistics */\n+\t\tpgstat_send_wal();\n\nThis may cause overhead in WAL-writing by walwriter because it's called\nevery cycle even when walwriter needs to write more WAL next cycle\n(don't need to sleep on WaitLatch)? 
If this is right, pgstat_send_wal()\nshould be called only when WaitLatch() returns with WL_TIMEOUT?\n\n- <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>)\n+ <function>XLogFlush</function> request(see <xref linkend=\"wal-configuration\"/>),\n+ or WAL data written out to disk by WAL receiver.\n\nSo regarding walreceiver, only wal_write, wal_write_time, wal_sync, and\nwal_sync_time are updated even while the other values are not. Isn't this\nconfusing to users? If so, what about reporting those walreceiver stats in\npg_stat_wal_receiver?\n\n \t\t\t\tif (endofwal)\n+\t\t\t\t{\n+\t\t\t\t\t/* Send WAL statistics to the stats collector */\n+\t\t\t\t\tpgstat_send_wal();\n \t\t\t\t\tbreak;\n\nYou added pgstat_send_wal() so that it's called in some cases where\nwalreceiver exits. But ISTM that there are other walreceiver-exit cases.\nFor example, in the case where SIGTERM is received. Instead,\npgstat_send_wal() should be called in WalRcvDie() for those all cases?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 8 Feb 2021 14:26:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n\n> I pgindented the patches.\n>\n>\n... <function>XLogWrite</function>, which is invoked during an\n<function>XLogFlush</function> request (see ...). This is also incremented\nby the WAL receiver during replication.\n\n(\"which normally called\" should be \"which is normally called\" or \"which\nnormally is called\" if you want to keep true to the original)\n\nYou missed the adding the space before an opening parenthesis here and\nelsewhere (probably copy-paste)\n\nis ether -> is either\n\n\"This parameter is off by default as it will repeatedly query the operating\nsystem...\"\n\", because\" -> \"as\"\n\nwal_write_time and the sync items also need the note: \"This is also\nincremented by the WAL receiver during replication.\"\n\n\"The number of times it happened...\" -> \" (the tally of this event is\nreported in wal_buffers_full in....) This is undesirable because ...\"\n\nI notice that the patch for WAL receiver doesn't require explicitly\ncomputing the sync statistics but does require computing the write\nstatistics. This is because of the presence of issue_xlog_fsync but\nabsence of an equivalent pg_xlog_pwrite. Additionally, I observe that the\nXLogWrite code path calls pgstat_report_wait_*() while the WAL receiver\npath does not. It seems technically straight-forward to refactor here to\navoid the almost-duplicated logic in the two places, though I suspect there\nmay be a trade-off for not adding another function call to the stack given\nthe importance of WAL processing (though that seems marginalized compared\nto the cost of actually writing the WAL). Or, as Fujii noted, go the other\nway and don't have any shared code between the two but instead implement\nthe WAL receiver one to use pg_stat_wal_receiver instead. 
In either case,\nthis half-and-half implementation seems undesirable.\n\nDavid J.",
"msg_date": "Tue, 9 Feb 2021 08:51:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-02-08 13:01, Fujii Masao wrote:\n> On 2021/02/05 8:45, Masahiro Ikeda wrote:\n>> I pgindented the patches.\n> \n> Thanks for updating the patches!\n\nThanks for checking the patches.\n\n> + <function>XLogWrite</function>, which nomally called by an\n> + <function>issue_xlog_fsync</function>, which nomally called by \n> an\n> \n> Typo: \"nomally\" should be \"normally\"?\n\nYes, I'll fix it.\n\n> + <function>XLogFlush</function> request(see <xref\n> linkend=\"wal-configuration\"/>)\n> + <function>XLogFlush</function> request(see <xref\n> linkend=\"wal-configuration\"/>),\n> \n> Isn't it better to add a space character just after \"request\"?\n\nThanks, I'll fix it.\n\n> +\t\t\t\t\tINSTR_TIME_SET_CURRENT(duration);\n> +\t\t\t\t\tINSTR_TIME_SUBTRACT(duration, start);\n> +\t\t\t\t\tWalStats.m_wal_write_time = INSTR_TIME_GET_MICROSEC(duration);\n> \n> If several cycles happen in the do-while loop, m_wal_write_time should \n> be\n> updated with the sum of \"duration\" in those cycles instead of \n> \"duration\"\n> in the last cycle? If yes, \"+=\" should be used instead of \"=\" when \n> updating\n> m_wal_write_time?\n> +\t\t\tINSTR_TIME_SET_CURRENT(duration);\n> +\t\t\tINSTR_TIME_SUBTRACT(duration, start);\n> +\t\t\tWalStats.m_wal_sync_time = INSTR_TIME_GET_MICROSEC(duration);\n> \n> Also \"=\" should be \"+=\" in the above?\n\nYes, they are my mistake when changing the unit from milliseconds to \nmicroseconds.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Feb 2021 11:32:12 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-02-08 14:26, Fujii Masao wrote:\n> On 2021/02/08 13:01, Fujii Masao wrote:\n>> \n>> \n>> On 2021/02/05 8:45, Masahiro Ikeda wrote:\n>>> I pgindented the patches.\n>> \n>> Thanks for updating the patches!\n>> \n>> + <function>XLogWrite</function>, which nomally called by an\n>> + <function>issue_xlog_fsync</function>, which nomally called by \n>> an\n>> \n>> Typo: \"nomally\" should be \"normally\"?\n>> \n>> + <function>XLogFlush</function> request(see <xref \n>> linkend=\"wal-configuration\"/>)\n>> + <function>XLogFlush</function> request(see <xref \n>> linkend=\"wal-configuration\"/>),\n>> \n>> Isn't it better to add a space character just after \"request\"?\n>> \n>> + INSTR_TIME_SET_CURRENT(duration);\n>> + INSTR_TIME_SUBTRACT(duration, start);\n>> + WalStats.m_wal_write_time = \n>> INSTR_TIME_GET_MICROSEC(duration);\n>> \n>> If several cycles happen in the do-while loop, m_wal_write_time should \n>> be\n>> updated with the sum of \"duration\" in those cycles instead of \n>> \"duration\"\n>> in the last cycle? If yes, \"+=\" should be used instead of \"=\" when \n>> updating\n>> m_wal_write_time?\n>> \n>> + INSTR_TIME_SET_CURRENT(duration);\n>> + INSTR_TIME_SUBTRACT(duration, start);\n>> + WalStats.m_wal_sync_time = \n>> INSTR_TIME_GET_MICROSEC(duration);\n>> \n>> Also \"=\" should be \"+=\" in the above?\n> \n> +\t\t/* Send WAL statistics */\n> +\t\tpgstat_send_wal();\n> \n> This may cause overhead in WAL-writing by walwriter because it's called\n> every cycles even when walwriter needs to write more WAL next cycle\n> (don't need to sleep on WaitLatch)? 
If this is right, pgstat_send_wal()\n> should be called only when WaitLatch() returns with WL_TIMEOUT?\n\nThanks, I didn't notice that.\nI'll fix it.\n\n> - <function>XLogFlush</function> request(see <xref\n> linkend=\"wal-configuration\"/>)\n> + <function>XLogFlush</function> request(see <xref\n> linkend=\"wal-configuration\"/>),\n> + or WAL data written out to disk by WAL receiver.\n> \n> So regarding walreceiver, only wal_write, wal_write_time, wal_sync, and\n> wal_sync_time are updated even while the other values are not. Isn't \n> this\n> confusing to users? If so, what about reporting those walreceiver stats \n> in\n> pg_stat_wal_receiver?\n\nOK, I'll add new infrastructure code to interact with the wal receiver\nand stats collector and show the stats in pg_stat_wal_receiver.\n\n> \t\t\t\tif (endofwal)\n> +\t\t\t\t{\n> +\t\t\t\t\t/* Send WAL statistics to the stats collector */\n> +\t\t\t\t\tpgstat_send_wal();\n> \t\t\t\t\tbreak;\n> \n> You added pgstat_send_wal() so that it's called in some cases where\n> walreceiver exits. But ISTM that there are other walreceiver-exit \n> cases.\n> For example, in the case where SIGTERM is received. Instead,\n> pgstat_send_wal() should be called in WalRcvDie() for those all cases?\n\nThanks, I forgot that case.\nI'll fix it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Feb 2021 11:42:25 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-02-10 00:51, David G. Johnston wrote:\n> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n> \n>> I pgindented the patches.\n> \n> ... <function>XLogWrite</function>, which is invoked during an\n> <function>XLogFlush</function> request (see ...). This is also\n> incremented by the WAL receiver during replication.\n> \n> (\"which normally called\" should be \"which is normally called\" or\n> \"which normally is called\" if you want to keep true to the original)\n> You missed the adding the space before an opening parenthesis here and\n> elsewhere (probably copy-paste)\n> \n> is ether -> is either\n> \"This parameter is off by default as it will repeatedly query the\n> operating system...\"\n> \", because\" -> \"as\"\n\nThanks, I fixed them.\n\n> wal_write_time and the sync items also need the note: \"This is also\n> incremented by the WAL receiver during replication.\"\n\nI skipped changing it since I separated the stats for the WAL receiver\nin pg_stat_wal_receiver.\n\n> \"The number of times it happened...\" -> \" (the tally of this event is\n> reported in wal_buffers_full in....) This is undesirable because ...\"\n\nThanks, I fixed it.\n\n> I notice that the patch for WAL receiver doesn't require explicitly\n> computing the sync statistics but does require computing the write\n> statistics. This is because of the presence of issue_xlog_fsync but\n> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n> receiver path does not. It seems technically straight-forward to\n> refactor here to avoid the almost-duplicated logic in the two places,\n> though I suspect there may be a trade-off for not adding another\n> function call to the stack given the importance of WAL processing\n> (though that seems marginalized compared to the cost of actually\n> writing the WAL). 
Or, as Fujii noted, go the other way and don't have\n> any shared code between the two but instead implement the WAL receiver\n> one to use pg_stat_wal_receiver instead. In either case, this\n> half-and-half implementation seems undesirable.\n\nOK, as Fujii-san mentioned, I separated the WAL receiver stats.\n(v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\nI added the infrastructure code to communicate the WAL receiver stats \nmessages between the WAL receiver and the stats collector, and\nthe stats for WAL receiver is counted in pg_stat_wal_receiver.\nWhat do you think?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 15 Feb 2021 11:59:48 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/02/15 11:59, Masahiro Ikeda wrote:\n> On 2021-02-10 00:51, David G. Johnston wrote:\n>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote:\n>>\n>>> I pgindented the patches.\n>>\n>> ... <function>XLogWrite</function>, which is invoked during an\n>> <function>XLogFlush</function> request (see ...). This is also\n>> incremented by the WAL receiver during replication.\n>>\n>> (\"which normally called\" should be \"which is normally called\" or\n>> \"which normally is called\" if you want to keep true to the original)\n>> You missed the adding the space before an opening parenthesis here and\n>> elsewhere (probably copy-paste)\n>>\n>> is ether -> is either\n>> \"This parameter is off by default as it will repeatedly query the\n>> operating system...\"\n>> \", because\" -> \"as\"\n> \n> Thanks, I fixed them.\n> \n>> wal_write_time and the sync items also need the note: \"This is also\n>> incremented by the WAL receiver during replication.\"\n> \n> I skipped changing it since I separated the stats for the WAL receiver\n> in pg_stat_wal_receiver.\n> \n>> \"The number of times it happened...\" -> \" (the tally of this event is\n>> reported in wal_buffers_full in....) 
This is undesirable because ...\"\n> \n> Thanks, I fixed it.\n> \n>> I notice that the patch for WAL receiver doesn't require explicitly\n>> computing the sync statistics but does require computing the write\n>> statistics. This is because of the presence of issue_xlog_fsync but\n>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>> receiver path does not. It seems technically straight-forward to\n>> refactor here to avoid the almost-duplicated logic in the two places,\n>> though I suspect there may be a trade-off for not adding another\n>> function call to the stack given the importance of WAL processing\n>> (though that seems marginalized compared to the cost of actually\n>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>> any shared code between the two but instead implement the WAL receiver\n>> one to use pg_stat_wal_receiver instead. In either case, this\n>> half-and-half implementation seems undesirable.\n> \n> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\nThanks for updating the patches!\n\n\n> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n> What do you think?\n\nOn second thought, this idea seems not good. Because those stats are\ncollected between multiple walreceivers, but other values in\npg_stat_wal_receiver is only related to the walreceiver process running\nat that moment. IOW, it seems strange that some values show dynamic\nstats and the others show collected stats, even though they are in\nthe same view pg_stat_wal_receiver. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Feb 2021 16:14:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-02-24 16:14, Fujii Masao wrote:\n> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>> <ikedamsh@oss.nttdata.com> wrote:\n>>> \n>>>> I pgindented the patches.\n>>> \n>>> ... <function>XLogWrite</function>, which is invoked during an\n>>> <function>XLogFlush</function> request (see ...). This is also\n>>> incremented by the WAL receiver during replication.\n>>> \n>>> (\"which normally called\" should be \"which is normally called\" or\n>>> \"which normally is called\" if you want to keep true to the original)\n>>> You missed the adding the space before an opening parenthesis here \n>>> and\n>>> elsewhere (probably copy-paste)\n>>> \n>>> is ether -> is either\n>>> \"This parameter is off by default as it will repeatedly query the\n>>> operating system...\"\n>>> \", because\" -> \"as\"\n>> \n>> Thanks, I fixed them.\n>> \n>>> wal_write_time and the sync items also need the note: \"This is also\n>>> incremented by the WAL receiver during replication.\"\n>> \n>> I skipped changing it since I separated the stats for the WAL receiver\n>> in pg_stat_wal_receiver.\n>> \n>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>> \n>> Thanks, I fixed it.\n>> \n>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>> computing the sync statistics but does require computing the write\n>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe \n>>> that\n>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>> receiver path does not. 
It seems technically straight-forward to\n>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>> though I suspect there may be a trade-off for not adding another\n>>> function call to the stack given the importance of WAL processing\n>>> (though that seems marginalized compared to the cost of actually\n>>> writing the WAL). Or, as Fujii noted, go the other way and don't \n>>> have\n>>> any shared code between the two but instead implement the WAL \n>>> receiver\n>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>> half-and-half implementation seems undesirable.\n>> \n>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n> \n> Thanks for updating the patches!\n> \n> \n>> I added the infrastructure code to communicate the WAL receiver stats \n>> messages between the WAL receiver and the stats collector, and\n>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>> What do you think?\n> \n> On second thought, this idea seems not good. Because those stats are\n> collected between multiple walreceivers, but other values in\n> pg_stat_wal_receiver is only related to the walreceiver process running\n> at that moment. IOW, it seems strange that some values show dynamic\n> stats and the others show collected stats, even though they are in\n> the same view pg_stat_wal_receiver. Thought?\n\nOK, I fixed it.\nThe stats collected in the WAL receiver is exposed in pg_stat_wal view \nin v11 patch.\n\n> I notice that the patch for WAL receiver doesn't require explicitly \n> computing the sync statistics but does require computing the write \n> statistics. This is because of the presence of issue_xlog_fsync but \n> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that \n> the XLogWrite code path calls pgstat_report_wait_*() while the WAL \n> receiver path does not. 
It seems technically straight-forward to \n> refactor here to avoid the almost-duplicated logic in the two places, \n> though I suspect there may be a trade-off for not adding another \n> function call to the stack given the importance of WAL processing \n> (though that seems marginalized compared to the cost of actually \n> writing the WAL). Or, as Fujii noted, go the other way and don't have \n> any shared code between the two but instead implement the WAL receiver \n> one to use pg_stat_wal_receiver instead. In either case, this \n> half-and-half implementation seems undesirable.\n\nI refactored the logic to write xlog file to unify collecting the write \nstats.\nAs David said, although pgstat_report_wait_start(WAIT_EVENT_WAL_WRITE) \nis not called in the WAL receiver's path,\nI agreed that the cost to write the WAL is much bigger.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 03 Mar 2021 14:33:03 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/03 14:33, Masahiro Ikeda wrote:\n> On 2021-02-24 16:14, Fujii Masao wrote:\n>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>\n>>>>> I pgindented the patches.\n>>>>\n>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>> incremented by the WAL receiver during replication.\n>>>>\n>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>> \"which normally is called\" if you want to keep true to the original)\n>>>> You missed the adding the space before an opening parenthesis here and\n>>>> elsewhere (probably copy-paste)\n>>>>\n>>>> is ether -> is either\n>>>> \"This parameter is off by default as it will repeatedly query the\n>>>> operating system...\"\n>>>> \", because\" -> \"as\"\n>>>\n>>> Thanks, I fixed them.\n>>>\n>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>> incremented by the WAL receiver during replication.\"\n>>>\n>>> I skipped changing it since I separated the stats for the WAL receiver\n>>> in pg_stat_wal_receiver.\n>>>\n>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>\n>>> Thanks, I fixed it.\n>>>\n>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>> computing the sync statistics but does require computing the write\n>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>> receiver path does not. 
It seems technically straight-forward to\n>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>> though I suspect there may be a trade-off for not adding another\n>>>> function call to the stack given the importance of WAL processing\n>>>> (though that seems marginalized compared to the cost of actually\n>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>> any shared code between the two but instead implement the WAL receiver\n>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>> half-and-half implementation seems undesirable.\n>>>\n>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>\n>> Thanks for updating the patches!\n>>\n>>\n>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>> What do you think?\n>>\n>> On second thought, this idea seems not good. Because those stats are\n>> collected between multiple walreceivers, but other values in\n>> pg_stat_wal_receiver is only related to the walreceiver process running\n>> at that moment. IOW, it seems strange that some values show dynamic\n>> stats and the others show collected stats, even though they are in\n>> the same view pg_stat_wal_receiver. Thought?\n> \n> OK, I fixed it.\n> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n\nThanks for updating the patches! 
I'm now reading 001 patch.\n\n+\t/* Check whether the WAL file was synced to disk right now */\n+\tif (enableFsync &&\n+\t\t(sync_method == SYNC_METHOD_FSYNC ||\n+\t\t sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n+\t\t sync_method == SYNC_METHOD_FDATASYNC))\n+\t{\n\nIsn't it better to make issue_xlog_fsync() return immediately\nif enableFsync is off, sync_method is open_sync or open_data_sync,\nto simplify the code more?\n\n\n+\t\t/*\n+\t\t * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n+\t\t * the overhead in WAL-writing.\n+\t\t */\n+\t\tif (rc & WL_TIMEOUT)\n+\t\t\tpgstat_send_wal();\n\nOn second thought, this change means that it always takes wal_writer_delay\nbefore walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\nFor example, if wal_writer_delay is set to several seconds, some values in\npg_stat_wal would be not up-to-date meaninglessly for those seconds.\nSo I'm thinking to withdraw my previous comment and it's ok to send\nthe stats every after XLogBackgroundFlush() is called. Thought?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Mar 2021 16:30:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-03 16:30, Fujii Masao wrote:\n> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>> \n>>>>>> I pgindented the patches.\n>>>>> \n>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>> incremented by the WAL receiver during replication.\n>>>>> \n>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>> \"which normally is called\" if you want to keep true to the \n>>>>> original)\n>>>>> You missed the adding the space before an opening parenthesis here \n>>>>> and\n>>>>> elsewhere (probably copy-paste)\n>>>>> \n>>>>> is ether -> is either\n>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>> operating system...\"\n>>>>> \", because\" -> \"as\"\n>>>> \n>>>> Thanks, I fixed them.\n>>>> \n>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>> incremented by the WAL receiver during replication.\"\n>>>> \n>>>> I skipped changing it since I separated the stats for the WAL \n>>>> receiver\n>>>> in pg_stat_wal_receiver.\n>>>> \n>>>>> \"The number of times it happened...\" -> \" (the tally of this event \n>>>>> is\n>>>>> reported in wal_buffers_full in....) This is undesirable because \n>>>>> ...\"\n>>>> \n>>>> Thanks, I fixed it.\n>>>> \n>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>> computing the sync statistics but does require computing the write\n>>>>> statistics. This is because of the presence of issue_xlog_fsync \n>>>>> but\n>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe \n>>>>> that\n>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>> receiver path does not. 
It seems technically straight-forward to\n>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>> places,\n>>>>> though I suspect there may be a trade-off for not adding another\n>>>>> function call to the stack given the importance of WAL processing\n>>>>> (though that seems marginalized compared to the cost of actually\n>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't \n>>>>> have\n>>>>> any shared code between the two but instead implement the WAL \n>>>>> receiver\n>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>> half-and-half implementation seems undesirable.\n>>>> \n>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>> \n>>> Thanks for updating the patches!\n>>> \n>>> \n>>>> I added the infrastructure code to communicate the WAL receiver \n>>>> stats messages between the WAL receiver and the stats collector, and\n>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>> What do you think?\n>>> \n>>> On second thought, this idea seems not good. Because those stats are\n>>> collected between multiple walreceivers, but other values in\n>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>> running\n>>> at that moment. IOW, it seems strange that some values show dynamic\n>>> stats and the others show collected stats, even though they are in\n>>> the same view pg_stat_wal_receiver. Thought?\n>> \n>> OK, I fixed it.\n>> The stats collected in the WAL receiver is exposed in pg_stat_wal view \n>> in v11 patch.\n> \n> Thanks for updating the patches! 
I'm now reading 001 patch.\n> \n> +\t/* Check whether the WAL file was synced to disk right now */\n> +\tif (enableFsync &&\n> +\t\t(sync_method == SYNC_METHOD_FSYNC ||\n> +\t\t sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n> +\t\t sync_method == SYNC_METHOD_FDATASYNC))\n> +\t{\n> \n> Isn't it better to make issue_xlog_fsync() return immediately\n> if enableFsync is off, sync_method is open_sync or open_data_sync,\n> to simplify the code more?\n\nThanks for the comments.\nI added the above code in v12 patch.\n\n> \n> +\t\t/*\n> +\t\t * Send WAL statistics only if WalWriterDelay has elapsed to \n> minimize\n> +\t\t * the overhead in WAL-writing.\n> +\t\t */\n> +\t\tif (rc & WL_TIMEOUT)\n> +\t\t\tpgstat_send_wal();\n> \n> On second thought, this change means that it always takes \n> wal_writer_delay\n> before walwriter's WAL stats is sent after XLogBackgroundFlush() is \n> called.\n> For example, if wal_writer_delay is set to several seconds, some values \n> in\n> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n> So I'm thinking to withdraw my previous comment and it's ok to send\n> the stats every after XLogBackgroundFlush() is called. Thought?\n\nThanks, I didn't notice that.\n\nAlthough PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\ndefault value is 200msec and it may be set shorter time.\n\nWhy don't to make another way to check the timestamp?\n\n+ /*\n+ * Don't send a message unless it's been at least \nPGSTAT_STAT_INTERVAL\n+ * msec since we last sent one\n+ */\n+ now = GetCurrentTimestamp();\n+ if (TimestampDifferenceExceeds(last_report, now, \nPGSTAT_STAT_INTERVAL))\n+ {\n+ pgstat_send_wal();\n+ last_report = now;\n+ }\n+\n\nAlthough I worried that it's better to add the check code in \npgstat_send_wal(),\nI didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\npgstat_send_wal() is invoked pg_report_stat() and it already checks the\nPGSTAT_STAT_INTERVAL.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 03 Mar 2021 20:27:29 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-03 20:27, Masahiro Ikeda wrote:\n> On 2021-03-03 16:30, Fujii Masao wrote:\n>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>> \n>>>>>>> I pgindented the patches.\n>>>>>> \n>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>> incremented by the WAL receiver during replication.\n>>>>>> \n>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>> original)\n>>>>>> You missed the adding the space before an opening parenthesis here \n>>>>>> and\n>>>>>> elsewhere (probably copy-paste)\n>>>>>> \n>>>>>> is ether -> is either\n>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>> operating system...\"\n>>>>>> \", because\" -> \"as\"\n>>>>> \n>>>>> Thanks, I fixed them.\n>>>>> \n>>>>>> wal_write_time and the sync items also need the note: \"This is \n>>>>>> also\n>>>>>> incremented by the WAL receiver during replication.\"\n>>>>> \n>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>> receiver\n>>>>> in pg_stat_wal_receiver.\n>>>>> \n>>>>>> \"The number of times it happened...\" -> \" (the tally of this event \n>>>>>> is\n>>>>>> reported in wal_buffers_full in....) This is undesirable because \n>>>>>> ...\"\n>>>>> \n>>>>> Thanks, I fixed it.\n>>>>> \n>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>> explicitly\n>>>>>> computing the sync statistics but does require computing the write\n>>>>>> statistics. This is because of the presence of issue_xlog_fsync \n>>>>>> but\n>>>>>> absence of an equivalent pg_xlog_pwrite. 
Additionally, I observe \n>>>>>> that\n>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>>> places,\n>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>> function call to the stack given the importance of WAL processing\n>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't \n>>>>>> have\n>>>>>> any shared code between the two but instead implement the WAL \n>>>>>> receiver\n>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>> half-and-half implementation seems undesirable.\n>>>>> \n>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>> \n>>>> Thanks for updating the patches!\n>>>> \n>>>> \n>>>>> I added the infrastructure code to communicate the WAL receiver \n>>>>> stats messages between the WAL receiver and the stats collector, \n>>>>> and\n>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>> What do you think?\n>>>> \n>>>> On second thought, this idea seems not good. Because those stats are\n>>>> collected between multiple walreceivers, but other values in\n>>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>>> running\n>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>> stats and the others show collected stats, even though they are in\n>>>> the same view pg_stat_wal_receiver. Thought?\n>>> \n>>> OK, I fixed it.\n>>> The stats collected in the WAL receiver is exposed in pg_stat_wal \n>>> view in v11 patch.\n>> \n>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>> \n>> +\t/* Check whether the WAL file was synced to disk right now */\n>> +\tif (enableFsync &&\n>> +\t\t(sync_method == SYNC_METHOD_FSYNC ||\n>> +\t\t sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>> +\t\t sync_method == SYNC_METHOD_FDATASYNC))\n>> +\t{\n>> \n>> Isn't it better to make issue_xlog_fsync() return immediately\n>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>> to simplify the code more?\n> \n> Thanks for the comments.\n> I added the above code in v12 patch.\n> \n>> \n>> +\t\t/*\n>> +\t\t * Send WAL statistics only if WalWriterDelay has elapsed to \n>> minimize\n>> +\t\t * the overhead in WAL-writing.\n>> +\t\t */\n>> +\t\tif (rc & WL_TIMEOUT)\n>> +\t\t\tpgstat_send_wal();\n>> \n>> On second thought, this change means that it always takes \n>> wal_writer_delay\n>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is \n>> called.\n>> For example, if wal_writer_delay is set to several seconds, some \n>> values in\n>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>> So I'm thinking to withdraw my previous comment and it's ok to send\n>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n> \n> Thanks, I didn't notice that.\n> \n> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n> default value is 200msec and it may be set shorter time.\n> \n> Why don't to make another way to check the timestamp?\n> \n> + /*\n> + * Don't send a message unless it's been at least\n> PGSTAT_STAT_INTERVAL\n> + * msec since we last sent one\n> + */\n> + now = GetCurrentTimestamp();\n> + if (TimestampDifferenceExceeds(last_report, now,\n> PGSTAT_STAT_INTERVAL))\n> + {\n> + pgstat_send_wal();\n> + last_report = now;\n> + }\n> +\n> \n> Although I worried that it's better to add the check code in \n> pgstat_send_wal(),\n> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n> PGSTAT_STAT_INTERVAL.\n\nI forgot to remove an unused variable.\nThe attached v13 patch is fixed.\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 04 Mar 2021 16:14:42 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
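The throttling discussed in the message above — sending walwriter's WAL stats only when at least PGSTAT_STAT_INTERVAL msec have elapsed since the last report — can be sketched as a standalone simulation. This is an illustrative sketch only, not the actual PostgreSQL code: `maybe_send_wal`, `pgstat_send_wal_stub`, and the plain `long` millisecond clock are hypothetical stand-ins for the real `pgstat_send_wal()`, `GetCurrentTimestamp()`, and `TimestampDifferenceExceeds()` machinery.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch only: stand-ins for the real pgstat machinery. */
#define PGSTAT_STAT_INTERVAL 500        /* msec, matching pgstat.h */

static long last_report;                /* msec timestamp of last send */
static int  reports_sent;               /* how many reports went out */

static void
pgstat_send_wal_stub(void)
{
    reports_sent++;                     /* real code sends a stats message */
}

/*
 * Equivalent of wrapping pgstat_send_wal() in a
 * TimestampDifferenceExceeds() check: report only when the interval
 * has elapsed.  Returns true if a report was sent at time "now" (msec).
 */
static bool
maybe_send_wal(long now)
{
    if (now - last_report < PGSTAT_STAT_INTERVAL)
        return false;                   /* too soon since last report */
    pgstat_send_wal_stub();
    last_report = now;
    return true;
}
```

With this shape, even a walwriter whose wal_writer_delay is far below 500 msec sends at most one stats message per interval from its WL_TIMEOUT wakeups, which is the behavior the quoted patch hunk aims for.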
{
"msg_contents": "On Thu, Mar 4, 2021 at 12:14 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n\n> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n> > On 2021-03-03 16:30, Fujii Masao wrote:\n> >> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n> >>> On 2021-02-24 16:14, Fujii Masao wrote:\n> >>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n> >>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n> >>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n> >>>>>> <ikedamsh@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>> I pgindented the patches.\n> >>>>>>\n> >>>>>> ... <function>XLogWrite</function>, which is invoked during an\n> >>>>>> <function>XLogFlush</function> request (see ...). This is also\n> >>>>>> incremented by the WAL receiver during replication.\n> >>>>>>\n> >>>>>> (\"which normally called\" should be \"which is normally called\" or\n> >>>>>> \"which normally is called\" if you want to keep true to the\n> >>>>>> original)\n> >>>>>> You missed the adding the space before an opening parenthesis here\n> >>>>>> and\n> >>>>>> elsewhere (probably copy-paste)\n> >>>>>>\n> >>>>>> is ether -> is either\n> >>>>>> \"This parameter is off by default as it will repeatedly query the\n> >>>>>> operating system...\"\n> >>>>>> \", because\" -> \"as\"\n> >>>>>\n> >>>>> Thanks, I fixed them.\n> >>>>>\n> >>>>>> wal_write_time and the sync items also need the note: \"This is\n> >>>>>> also\n> >>>>>> incremented by the WAL receiver during replication.\"\n> >>>>>\n> >>>>> I skipped changing it since I separated the stats for the WAL\n> >>>>> receiver\n> >>>>> in pg_stat_wal_receiver.\n> >>>>>\n> >>>>>> \"The number of times it happened...\" -> \" (the tally of this event\n> >>>>>> is\n> >>>>>> reported in wal_buffers_full in....) 
This is undesirable because\n> >>>>>> ...\"\n> >>>>>\n> >>>>> Thanks, I fixed it.\n> >>>>>\n> >>>>>> I notice that the patch for WAL receiver doesn't require\n> >>>>>> explicitly\n> >>>>>> computing the sync statistics but does require computing the write\n> >>>>>> statistics. This is because of the presence of issue_xlog_fsync\n> >>>>>> but\n> >>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe\n> >>>>>> that\n> >>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n> >>>>>> receiver path does not. It seems technically straight-forward to\n> >>>>>> refactor here to avoid the almost-duplicated logic in the two\n> >>>>>> places,\n> >>>>>> though I suspect there may be a trade-off for not adding another\n> >>>>>> function call to the stack given the importance of WAL processing\n> >>>>>> (though that seems marginalized compared to the cost of actually\n> >>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't\n> >>>>>> have\n> >>>>>> any shared code between the two but instead implement the WAL\n> >>>>>> receiver\n> >>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n> >>>>>> half-and-half implementation seems undesirable.\n> >>>>>\n> >>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n> >>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n> >>>>\n> >>>> Thanks for updating the patches!\n> >>>>\n> >>>>\n> >>>>> I added the infrastructure code to communicate the WAL receiver\n> >>>>> stats messages between the WAL receiver and the stats collector,\n> >>>>> and\n> >>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n> >>>>> What do you think?\n> >>>>\n> >>>> On second thought, this idea seems not good. Because those stats are\n> >>>> collected between multiple walreceivers, but other values in\n> >>>> pg_stat_wal_receiver is only related to the walreceiver process\n> >>>> running\n> >>>> at that moment. 
IOW, it seems strange that some values show dynamic\n> >>>> stats and the others show collected stats, even though they are in\n> >>>> the same view pg_stat_wal_receiver. Thought?\n> >>>\n> >>> OK, I fixed it.\n> >>> The stats collected in the WAL receiver is exposed in pg_stat_wal\n> >>> view in v11 patch.\n> >>\n> >> Thanks for updating the patches! I'm now reading 001 patch.\n> >>\n> >> + /* Check whether the WAL file was synced to disk right now */\n> >> + if (enableFsync &&\n> >> + (sync_method == SYNC_METHOD_FSYNC ||\n> >> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n> >> + sync_method == SYNC_METHOD_FDATASYNC))\n> >> + {\n> >>\n> >> Isn't it better to make issue_xlog_fsync() return immediately\n> >> if enableFsync is off, sync_method is open_sync or open_data_sync,\n> >> to simplify the code more?\n> >\n> > Thanks for the comments.\n> > I added the above code in v12 patch.\n> >\n> >>\n> >> + /*\n> >> + * Send WAL statistics only if WalWriterDelay has elapsed\n> to\n> >> minimize\n> >> + * the overhead in WAL-writing.\n> >> + */\n> >> + if (rc & WL_TIMEOUT)\n> >> + pgstat_send_wal();\n> >>\n> >> On second thought, this change means that it always takes\n> >> wal_writer_delay\n> >> before walwriter's WAL stats is sent after XLogBackgroundFlush() is\n> >> called.\n> >> For example, if wal_writer_delay is set to several seconds, some\n> >> values in\n> >> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n> >> So I'm thinking to withdraw my previous comment and it's ok to send\n> >> the stats every after XLogBackgroundFlush() is called. 
Thought?\n> >\n> > Thanks, I didn't notice that.\n> >\n> > Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n> > default value is 200msec and it may be set shorter time.\n> >\n> > Why don't to make another way to check the timestamp?\n> >\n> > + /*\n> > + * Don't send a message unless it's been at least\n> > PGSTAT_STAT_INTERVAL\n> > + * msec since we last sent one\n> > + */\n> > + now = GetCurrentTimestamp();\n> > + if (TimestampDifferenceExceeds(last_report, now,\n> > PGSTAT_STAT_INTERVAL))\n> > + {\n> > + pgstat_send_wal();\n> > + last_report = now;\n> > + }\n> > +\n> >\n> > Although I worried that it's better to add the check code in\n> > pgstat_send_wal(),\n> > I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n> > pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n> > PGSTAT_STAT_INTERVAL.\n>\n> I forgot to remove an unused variable.\n> The attached v13 patch is fixed.\n>\n> Regards\n> --\n> Masahiro Ikeda\n> NTT DATA CORPORATION\n\n\nThis patch set no longer applies\nhttp://cfbot.cputube.org/patch_32_2859.log\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n\n\n\n-- \nIbrar Ahmed\n\nOn Thu, Mar 4, 2021 at 12:14 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:On 2021-03-03 20:27, Masahiro Ikeda wrote:\n> On 2021-03-03 16:30, Fujii Masao wrote:\n>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>> \n>>>>>>> I pgindented the patches.\n>>>>>> \n>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>> <function>XLogFlush</function> request (see ...). 
This is also\n>>>>>> incremented by the WAL receiver during replication.\n>>>>>> \n>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>> original)\n>>>>>> You missed the adding the space before an opening parenthesis here \n>>>>>> and\n>>>>>> elsewhere (probably copy-paste)\n>>>>>> \n>>>>>> is ether -> is either\n>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>> operating system...\"\n>>>>>> \", because\" -> \"as\"\n>>>>> \n>>>>> Thanks, I fixed them.\n>>>>> \n>>>>>> wal_write_time and the sync items also need the note: \"This is \n>>>>>> also\n>>>>>> incremented by the WAL receiver during replication.\"\n>>>>> \n>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>> receiver\n>>>>> in pg_stat_wal_receiver.\n>>>>> \n>>>>>> \"The number of times it happened...\" -> \" (the tally of this event \n>>>>>> is\n>>>>>> reported in wal_buffers_full in....) This is undesirable because \n>>>>>> ...\"\n>>>>> \n>>>>> Thanks, I fixed it.\n>>>>> \n>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>> explicitly\n>>>>>> computing the sync statistics but does require computing the write\n>>>>>> statistics. This is because of the presence of issue_xlog_fsync \n>>>>>> but\n>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe \n>>>>>> that\n>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>>> places,\n>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>> function call to the stack given the importance of WAL processing\n>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>> writing the WAL). 
Or, as Fujii noted, go the other way and don't \n>>>>>> have\n>>>>>> any shared code between the two but instead implement the WAL \n>>>>>> receiver\n>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>> half-and-half implementation seems undesirable.\n>>>>> \n>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>> \n>>>> Thanks for updating the patches!\n>>>> \n>>>> \n>>>>> I added the infrastructure code to communicate the WAL receiver \n>>>>> stats messages between the WAL receiver and the stats collector, \n>>>>> and\n>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>> What do you think?\n>>>> \n>>>> On second thought, this idea seems not good. Because those stats are\n>>>> collected between multiple walreceivers, but other values in\n>>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>>> running\n>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>> stats and the others show collected stats, even though they are in\n>>>> the same view pg_stat_wal_receiver. Thought?\n>>> \n>>> OK, I fixed it.\n>>> The stats collected in the WAL receiver is exposed in pg_stat_wal \n>>> view in v11 patch.\n>> \n>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>> \n>> + /* Check whether the WAL file was synced to disk right now */\n>> + if (enableFsync &&\n>> + (sync_method == SYNC_METHOD_FSYNC ||\n>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>> + sync_method == SYNC_METHOD_FDATASYNC))\n>> + {\n>> \n>> Isn't it better to make issue_xlog_fsync() return immediately\n>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>> to simplify the code more?\n> \n> Thanks for the comments.\n> I added the above code in v12 patch.\n> \n>> \n>> + /*\n>> + * Send WAL statistics only if WalWriterDelay has elapsed to \n>> minimize\n>> + * the overhead in WAL-writing.\n>> + */\n>> + if (rc & WL_TIMEOUT)\n>> + pgstat_send_wal();\n>> \n>> On second thought, this change means that it always takes \n>> wal_writer_delay\n>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is \n>> called.\n>> For example, if wal_writer_delay is set to several seconds, some \n>> values in\n>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>> So I'm thinking to withdraw my previous comment and it's ok to send\n>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n> \n> Thanks, I didn't notice that.\n> \n> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n> default value is 200msec and it may be set shorter time.\n> \n> Why don't to make another way to check the timestamp?\n> \n> + /*\n> + * Don't send a message unless it's been at least\n> PGSTAT_STAT_INTERVAL\n> + * msec since we last sent one\n> + */\n> + now = GetCurrentTimestamp();\n> + if (TimestampDifferenceExceeds(last_report, now,\n> PGSTAT_STAT_INTERVAL))\n> + {\n> + pgstat_send_wal();\n> + last_report = now;\n> + }\n> +\n> \n> Although I worried that it's better to add the check code in \n> pgstat_send_wal(),\n> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n> PGSTAT_STAT_INTERVAL.\n\nI forgot to remove an unused variable.\nThe attached v13 patch is fixed.\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATIONThis patch set no longer applieshttp://cfbot.cputube.org/patch_32_2859.logCan we get a rebase? I am marking the patch \"Waiting on Author\"-- Ibrar Ahmed",
"msg_date": "Thu, 4 Mar 2021 16:25:06 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/04 16:14, Masahiro Ikeda wrote:\n> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>> I pgindented the patches.\n>>>>>>>\n>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>\n>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>\n>>>>>>> is ether -> is either\n>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>> operating system...\"\n>>>>>>> \", because\" -> \"as\"\n>>>>>>\n>>>>>> Thanks, I fixed them.\n>>>>>>\n>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>\n>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>> in pg_stat_wal_receiver.\n>>>>>>\n>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>\n>>>>>> Thanks, I fixed it.\n>>>>>>\n>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>> absence of an equivalent pg_xlog_pwrite. 
Additionally, I observe that\n>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>\n>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>\n>>>>> Thanks for updating the patches!\n>>>>>\n>>>>>\n>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>> What do you think?\n>>>>>\n>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>> collected between multiple walreceivers, but other values in\n>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>> stats and the others show collected stats, even though they are in\n>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>\n>>>> OK, I fixed it.\n>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>\n>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>\n>>> + /* Check whether the WAL file was synced to disk right now */\n>>> + if (enableFsync &&\n>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>> + {\n>>>\n>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>> to simplify the code more?\n>>\n>> Thanks for the comments.\n>> I added the above code in v12 patch.\n>>\n>>>\n>>> + /*\n>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>> + * the overhead in WAL-writing.\n>>> + */\n>>> + if (rc & WL_TIMEOUT)\n>>> + pgstat_send_wal();\n>>>\n>>> On second thought, this change means that it always takes wal_writer_delay\n>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>\n>> Thanks, I didn't notice that.\n>>\n>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>> default value is 200msec and it may be set shorter time.\n\nYeah, if wal_writer_delay is set to very small value, there is a risk\nthat the WAL stats are sent too frequently. 
I agree that's a problem.\n\n>>\n>> Why don't to make another way to check the timestamp?\n>>\n>> + /*\n>> + * Don't send a message unless it's been at least\n>> PGSTAT_STAT_INTERVAL\n>> + * msec since we last sent one\n>> + */\n>> + now = GetCurrentTimestamp();\n>> + if (TimestampDifferenceExceeds(last_report, now,\n>> PGSTAT_STAT_INTERVAL))\n>> + {\n>> + pgstat_send_wal();\n>> + last_report = now;\n>> + }\n>> +\n>>\n>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n\nAgreed.\n\n>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>> PGSTAT_STAT_INTERVAL.\n\nI think that we can do that. What about the attached patch?\n\n> I forgot to remove an unused variable.\n> The attached v13 patch is fixed.\n\nThanks for updating the patch!\n\n+ w.wal_write,\n+ w.wal_write_time,\n+ w.wal_sync,\n+ w.wal_sync_time,\n\nIt's more natural to put wal_write_time and wal_sync_time next to\neach other? That is, what about the following order of columns?\n\nwal_write\nwal_sync\nwal_write_time\nwal_sync_time\n\n\n-\t\tcase SYNC_METHOD_OPEN:\n-\t\tcase SYNC_METHOD_OPEN_DSYNC:\n-\t\t\t/* write synced it already */\n-\t\t\tbreak;\n\nIMO it's better to add Assert(false) here to ensure that we never reach\nhere, as follows. Thought?\n\n+\t\tcase SYNC_METHOD_OPEN:\n+\t\tcase SYNC_METHOD_OPEN_DSYNC:\n+\t\t\t/* not reachable */\n+\t\t\tAssert(false);\n\n\nEven when a backend exits, it sends the stats via pgstat_beshutdown_hook().\nOn the other hand, walwriter doesn't do that. Walwriter also should send\nthe stats even at its exit? 
Otherwise some stats can fail to be collected.\nBut ISTM that this issue existed from before, for example checkpointer\ndoesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\nthis issue in this patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 5 Mar 2021 01:02:25 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
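Fujii's two suggestions in the message above — have issue_xlog_fsync() return early when there is no explicit sync call to make, and turn the then-unreachable open_sync/open_datasync cases into an assertion — can be sketched in isolation. The enum values echo the sync_method settings in xlog.h, but `needs_explicit_sync()` is a hypothetical helper for illustration (using plain `assert()` in place of PostgreSQL's `Assert()`), not the actual function from the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Values echo the wal_sync_method settings in xlog.h. */
typedef enum SyncMethod
{
    SYNC_METHOD_FSYNC,
    SYNC_METHOD_FSYNC_WRITETHROUGH,
    SYNC_METHOD_FDATASYNC,
    SYNC_METHOD_OPEN,           /* O_SYNC: write() already synced */
    SYNC_METHOD_OPEN_DSYNC      /* O_DSYNC: write() already synced */
} SyncMethod;

/*
 * Hypothetical helper: decide whether an explicit sync call (and hence
 * the sync-time accounting) is needed at all.
 */
static bool
needs_explicit_sync(bool enable_fsync, SyncMethod method)
{
    /* Early return, as suggested: fsync disabled, or the write itself
     * already synced the data, so there is nothing to do or to time. */
    if (!enable_fsync ||
        method == SYNC_METHOD_OPEN ||
        method == SYNC_METHOD_OPEN_DSYNC)
        return false;

    switch (method)
    {
        case SYNC_METHOD_FSYNC:
        case SYNC_METHOD_FSYNC_WRITETHROUGH:
        case SYNC_METHOD_FDATASYNC:
            return true;
        default:
            assert(false);      /* open_sync cases filtered out above */
            return false;
    }
}
```

The early return keeps the stats-gathering call sites simple, and the assertion documents that the open_sync methods can no longer fall through to the explicit-sync path.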
{
"msg_contents": "On 2021-03-05 01:02, Fujii Masao wrote:\n> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>> \n>>>>>>>>> I pgindented the patches.\n>>>>>>>> \n>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>> \n>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>> original)\n>>>>>>>> You missed the adding the space before an opening parenthesis \n>>>>>>>> here and\n>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>> \n>>>>>>>> is ether -> is either\n>>>>>>>> \"This parameter is off by default as it will repeatedly query \n>>>>>>>> the\n>>>>>>>> operating system...\"\n>>>>>>>> \", because\" -> \"as\"\n>>>>>>> \n>>>>>>> Thanks, I fixed them.\n>>>>>>> \n>>>>>>>> wal_write_time and the sync items also need the note: \"This is \n>>>>>>>> also\n>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>> \n>>>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>>>> receiver\n>>>>>>> in pg_stat_wal_receiver.\n>>>>>>> \n>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this \n>>>>>>>> event is\n>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable because \n>>>>>>>> ...\"\n>>>>>>> \n>>>>>>> Thanks, I fixed it.\n>>>>>>> \n>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>> explicitly\n>>>>>>>> computing the sync statistics but does require computing the \n>>>>>>>> write\n>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync \n>>>>>>>> but\n>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>> observe that\n>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the \n>>>>>>>> WAL\n>>>>>>>> receiver path does not. It seems technically straight-forward \n>>>>>>>> to\n>>>>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>>>>> places,\n>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>> processing\n>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and \n>>>>>>>> don't have\n>>>>>>>> any shared code between the two but instead implement the WAL \n>>>>>>>> receiver\n>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>> \n>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>> \n>>>>>> Thanks for updating the patches!\n>>>>>> \n>>>>>> \n>>>>>>> I added the infrastructure code to communicate the WAL receiver \n>>>>>>> stats messages between the WAL receiver and the stats collector, \n>>>>>>> and\n>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>> What do you think?\n>>>>>> \n>>>>>> On second thought, this idea seems not good. 
Because those stats \n>>>>>> are\n>>>>>> collected between multiple walreceivers, but other values in\n>>>>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>>>>> running\n>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>> dynamic\n>>>>>> stats and the others show collected stats, even though they are in\n>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>> \n>>>>> OK, I fixed it.\n>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal \n>>>>> view in v11 patch.\n>>>> \n>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>> \n>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>> + if (enableFsync &&\n>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>> + {\n>>>> \n>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>> to simplify the code more?\n>>> \n>>> Thanks for the comments.\n>>> I added the above code in v12 patch.\n>>> \n>>>> \n>>>> + /*\n>>>> + * Send WAL statistics only if WalWriterDelay has elapsed \n>>>> to minimize\n>>>> + * the overhead in WAL-writing.\n>>>> + */\n>>>> + if (rc & WL_TIMEOUT)\n>>>> + pgstat_send_wal();\n>>>> \n>>>> On second thought, this change means that it always takes \n>>>> wal_writer_delay\n>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is \n>>>> called.\n>>>> For example, if wal_writer_delay is set to several seconds, some \n>>>> values in\n>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n>>> \n>>> Thanks, I didn't notice that.\n>>> \n>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>> default value is 200msec and it may be set shorter time.\n> \n> Yeah, if wal_writer_delay is set to very small value, there is a risk\n> that the WAL stats are sent too frequently. I agree that's a problem.\n> \n>>> \n>>> Why don't to make another way to check the timestamp?\n>>> \n>>> + /*\n>>> + * Don't send a message unless it's been at least\n>>> PGSTAT_STAT_INTERVAL\n>>> + * msec since we last sent one\n>>> + */\n>>> + now = GetCurrentTimestamp();\n>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>> PGSTAT_STAT_INTERVAL))\n>>> + {\n>>> + pgstat_send_wal();\n>>> + last_report = now;\n>>> + }\n>>> +\n>>> \n>>> Although I worried that it's better to add the check code in \n>>> pgstat_send_wal(),\n> \n> Agreed.\n> \n>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks \n>>> the\n>>> PGSTAT_STAT_INTERVAL.\n> \n> I think that we can do that. What about the attached patch?\n\nThanks, I thought it's better.\n\n\n>> I forgot to remove an unused variable.\n>> The attached v13 patch is fixed.\n> \n> Thanks for updating the patch!\n> \n> + w.wal_write,\n> + w.wal_write_time,\n> + w.wal_sync,\n> + w.wal_sync_time,\n> \n> It's more natural to put wal_write_time and wal_sync_time next to\n> each other? That is, what about the following order of columns?\n> \n> wal_write\n> wal_sync\n> wal_write_time\n> wal_sync_time\n\nYes, I fixed it.\n\n> -\t\tcase SYNC_METHOD_OPEN:\n> -\t\tcase SYNC_METHOD_OPEN_DSYNC:\n> -\t\t\t/* write synced it already */\n> -\t\t\tbreak;\n> \n> IMO it's better to add Assert(false) here to ensure that we never reach\n> here, as follows. 
Thought?\n> \n> +\t\tcase SYNC_METHOD_OPEN:\n> +\t\tcase SYNC_METHOD_OPEN_DSYNC:\n> +\t\t\t/* not reachable */\n> +\t\t\tAssert(false);\n\nI agree.\n\n\n> Even when a backend exits, it sends the stats via \n> pgstat_beshutdown_hook().\n> On the other hand, walwriter doesn't do that. Walwriter also should \n> send\n> the stats even at its exit? Otherwise some stats can fail to be \n> collected.\n> But ISTM that this issue existed from before, for example checkpointer\n> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to \n> fix\n> this issue in this patch?\n\nThanks, I thought it's better to do so.\nI added the shutdown hook for the walwriter and the checkpointer in \nv14-0003 patch.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 05 Mar 2021 08:38:20 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/05 8:38, Masahiro Ikeda wrote:\n> On 2021-03-05 01:02, Fujii Masao wrote:\n>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>\n>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>\n>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>\n>>>>>>>>> is ether -> is either\n>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>> operating system...\"\n>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>\n>>>>>>>> Thanks, I fixed them.\n>>>>>>>>\n>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>\n>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>\n>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable because ...\"\n>>>>>>>>\n>>>>>>>> Thanks, I fixed it.\n>>>>>>>>\n>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>\n>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>\n>>>>>>> Thanks for updating the patches!\n>>>>>>>\n>>>>>>>\n>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>> What do you think?\n>>>>>>>\n>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>> at that moment. 
IOW, it seems strange that some values show dynamic\n>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>\n>>>>>> OK, I fixed it.\n>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>\n>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>\n>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>> + if (enableFsync &&\n>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>> + {\n>>>>>\n>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>> to simplify the code more?\n>>>>\n>>>> Thanks for the comments.\n>>>> I added the above code in v12 patch.\n>>>>\n>>>>>\n>>>>> + /*\n>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>> + * the overhead in WAL-writing.\n>>>>> + */\n>>>>> + if (rc & WL_TIMEOUT)\n>>>>> + pgstat_send_wal();\n>>>>>\n>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>\n>>>> Thanks, I didn't notice that.\n>>>>\n>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>> default value is 200msec and it may be set shorter time.\n>>\n>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>\n>>>>\n>>>> Why don't to make another way to check the timestamp?\n>>>>\n>>>> + /*\n>>>> + * Don't send a message unless it's been at least\n>>>> PGSTAT_STAT_INTERVAL\n>>>> + * msec since we last sent one\n>>>> + */\n>>>> + now = GetCurrentTimestamp();\n>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>> PGSTAT_STAT_INTERVAL))\n>>>> + {\n>>>> + pgstat_send_wal();\n>>>> + last_report = now;\n>>>> + }\n>>>> +\n>>>>\n>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>\n>> Agreed.\n>>\n>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>> PGSTAT_STAT_INTERVAL.\n>>\n>> I think that we can do that. What about the attached patch?\n> \n> Thanks, I thought it's better.\n> \n> \n>>> I forgot to remove an unused variable.\n>>> The attached v13 patch is fixed.\n>>\n>> Thanks for updating the patch!\n>>\n>> + w.wal_write,\n>> + w.wal_write_time,\n>> + w.wal_sync,\n>> + w.wal_sync_time,\n>>\n>> It's more natural to put wal_write_time and wal_sync_time next to\n>> each other? That is, what about the following order of columns?\n>>\n>> wal_write\n>> wal_sync\n>> wal_write_time\n>> wal_sync_time\n> \n> Yes, I fixed it.\n> \n>> - case SYNC_METHOD_OPEN:\n>> - case SYNC_METHOD_OPEN_DSYNC:\n>> - /* write synced it already */\n>> - break;\n>>\n>> IMO it's better to add Assert(false) here to ensure that we never reach\n>> here, as follows. Thought?\n>>\n>> + case SYNC_METHOD_OPEN:\n>> + case SYNC_METHOD_OPEN_DSYNC:\n>> + /* not reachable */\n>> + Assert(false);\n> \n> I agree.\n> \n> \n>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>> the stats even at its exit? 
Otherwise some stats can fail to be collected.\n>> But ISTM that this issue existed from before, for example checkpointer\n>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>> this issue in this patch?\n> \n> Thanks, I thought it's better to do so.\n> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n\nThanks!\n\nSeems you forgot to include the changes of expected/rules.out in the 0001 patch,\nwhich caused the regression test to fail. Attached is the updated version\nof the patch. I included expected/rules.out in it.\n\n+\tPgStat_Counter m_wal_write_time;\t/* time spend writing wal records in\n+\t\t\t\t\t\t\t\t\t\t * micro seconds */\n+\tPgStat_Counter m_wal_sync_time; /* time spend syncing wal records in micro\n+\t\t\t\t\t\t\t\t\t * seconds */\n\nIMO \"spend\" should be \"spent\". Also \"micro seconds\" should be \"microseconds\"\nfor the sake of consistency with other comments in pgstat.h. I fixed them.\n\nRegarding pgstat_report_wal() and pgstat_send_wal(), I found one bug. Even\nwhen pgstat_send_wal() returned without sending any message,\npgstat_report_wal() saved current pgWalUsage and that counter was used for\nthe subsequent calculation of WAL usage. This caused some counters not to\nbe sent to the collector. This is a bug that I introduced. I fixed it.\n\n+\twalStats.wal_write += msg->m_wal_write;\n+\twalStats.wal_write_time += msg->m_wal_write_time;\n+\twalStats.wal_sync += msg->m_wal_sync;\n+\twalStats.wal_sync_time += msg->m_wal_sync_time;\n\nI changed the order of the above in pgstat.c so that wal_write_time and\nwal_sync_time are placed next to each other.\n\nThe following are comments on the docs part. 
I've not updated this\nin the patch yet because I'm not sure how to change them for now.\n\n+ Number of times WAL buffers were written out to disk via\n+ <function>XLogWrite</function>, which is invoked during an\n+ <function>XLogFlush</function> request (see <xref linkend=\"wal-configuration\"/>)\n+ </para></entry>\n\nXLogWrite() can be invoked by functions other than XLogFlush(),\nfor example, XLogBackgroundFlush(). So the above description might be\nconfusing?\n\n+ Number of times WAL files were synced to disk via\n+ <function>issue_xlog_fsync</function>, which is invoked during an\n+ <function>XLogFlush</function> request (see <xref linkend=\"wal-configuration\"/>)\n\nSame as above.\n\n+ while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n+ \"sync at commit\" options (i.e., <literal>fdatasync</literal>,\n+ <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n\nEven open_sync and open_datasync sync at commit. No? I'm not sure\nif \"sync at commit\" is the right term to indicate fdatasync, fsync and\nfsync_writethrough.\n\n+ <literal>open_sync</literal>. Units are in milliseconds with microsecond resolution.\n\nIs the \"with microsecond resolution\" part really necessary?\n\n+ transaction records are flushed to permanent storage.\n+ <function>XLogFlush</function> calls <function>XLogWrite</function> to write\n+ and <function>issue_xlog_fsync</function> to flush them, which are counted as\n+ <literal>wal_write</literal> and <literal>wal_sync</literal> in\n+ <xref linkend=\"pg-stat-wal-view\"/>. On systems with high log output,\n\nThis description might cause users to misread that XLogFlush() calls\nissue_xlog_fsync(). 
Since issue_xlog_fsync() is called by XLogWrite(),\nISTM that this description needs to be updated.\n\nEach line in the above seems to end with a space character.\nThis space character should be removed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 5 Mar 2021 12:47:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-05 12:47, Fujii Masao wrote:\n> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>> \n>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>> \n>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is \n>>>>>>>>>> also\n>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>> \n>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" \n>>>>>>>>>> or\n>>>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>>>> original)\n>>>>>>>>>> You missed the adding the space before an opening parenthesis \n>>>>>>>>>> here and\n>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>> \n>>>>>>>>>> is ether -> is either\n>>>>>>>>>> \"This parameter is off by default as it will repeatedly query \n>>>>>>>>>> the\n>>>>>>>>>> operating system...\"\n>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>> \n>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>> \n>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is \n>>>>>>>>>> also\n>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>> \n>>>>>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>>>>>> receiver\n>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>> \n>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this \n>>>>>>>>>> event is\n>>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable \n>>>>>>>>>> because ...\"\n>>>>>>>>> \n>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>> \n>>>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>>>> explicitly\n>>>>>>>>>> computing the sync statistics but does require computing the \n>>>>>>>>>> write\n>>>>>>>>>> statistics. This is because of the presence of \n>>>>>>>>>> issue_xlog_fsync but\n>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>>>> observe that\n>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the \n>>>>>>>>>> WAL\n>>>>>>>>>> receiver path does not. It seems technically straight-forward \n>>>>>>>>>> to\n>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>>>>>>> places,\n>>>>>>>>>> though I suspect there may be a trade-off for not adding \n>>>>>>>>>> another\n>>>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>>>> processing\n>>>>>>>>>> (though that seems marginalized compared to the cost of \n>>>>>>>>>> actually\n>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and \n>>>>>>>>>> don't have\n>>>>>>>>>> any shared code between the two but instead implement the WAL \n>>>>>>>>>> receiver\n>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>> \n>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>> \n>>>>>>>> Thanks for updating the patches!\n>>>>>>>> \n>>>>>>>> \n>>>>>>>>> I added the infrastructure code to communicate the WAL receiver \n>>>>>>>>> stats messages between the WAL receiver and the stats \n>>>>>>>>> collector, and\n>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>> What do you think?\n>>>>>>>> \n>>>>>>>> On second thought, this idea seems not good. 
Because those stats \n>>>>>>>> are\n>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>>>>>>> running\n>>>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>>>> dynamic\n>>>>>>>> stats and the others show collected stats, even though they are \n>>>>>>>> in\n>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>> \n>>>>>>> OK, I fixed it.\n>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal \n>>>>>>> view in v11 patch.\n>>>>>> \n>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>> \n>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>> + if (enableFsync &&\n>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>> + {\n>>>>>> \n>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>> to simplify the code more?\n>>>>> \n>>>>> Thanks for the comments.\n>>>>> I added the above code in v12 patch.\n>>>>> \n>>>>>> \n>>>>>> + /*\n>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed \n>>>>>> to minimize\n>>>>>> + * the overhead in WAL-writing.\n>>>>>> + */\n>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>> + pgstat_send_wal();\n>>>>>> \n>>>>>> On second thought, this change means that it always takes \n>>>>>> wal_writer_delay\n>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() \n>>>>>> is called.\n>>>>>> For example, if wal_writer_delay is set to several seconds, some \n>>>>>> values in\n>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those \n>>>>>> seconds.\n>>>>>> So I'm thinking to withdraw my previous comment and it's ok to \n>>>>>> send\n>>>>>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n>>>>> \n>>>>> Thanks, I didn't notice that.\n>>>>> \n>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>> default value is 200msec and it may be set shorter time.\n>>> \n>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>> that the WAL stats are sent too frequently. I agree that's a problem.\n>>> \n>>>>> \n>>>>> Why don't to make another way to check the timestamp?\n>>>>> \n>>>>> + /*\n>>>>> + * Don't send a message unless it's been at least\n>>>>> PGSTAT_STAT_INTERVAL\n>>>>> + * msec since we last sent one\n>>>>> + */\n>>>>> + now = GetCurrentTimestamp();\n>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>> PGSTAT_STAT_INTERVAL))\n>>>>> + {\n>>>>> + pgstat_send_wal();\n>>>>> + last_report = now;\n>>>>> + }\n>>>>> +\n>>>>> \n>>>>> Although I worried that it's better to add the check code in \n>>>>> pgstat_send_wal(),\n>>> \n>>> Agreed.\n>>> \n>>>>> I didn't do so because to avoid to double check \n>>>>> PGSTAT_STAT_INTERVAL.\n>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks \n>>>>> the\n>>>>> PGSTAT_STAT_INTERVAL.\n>>> \n>>> I think that we can do that. What about the attached patch?\n>> \n>> Thanks, I thought it's better.\n>> \n>> \n>>>> I forgot to remove an unused variable.\n>>>> The attached v13 patch is fixed.\n>>> \n>>> Thanks for updating the patch!\n>>> \n>>> + w.wal_write,\n>>> + w.wal_write_time,\n>>> + w.wal_sync,\n>>> + w.wal_sync_time,\n>>> \n>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>> each other? That is, what about the following order of columns?\n>>> \n>>> wal_write\n>>> wal_sync\n>>> wal_write_time\n>>> wal_sync_time\n>> \n>> Yes, I fixed it.\n>> \n>>> - case SYNC_METHOD_OPEN:\n>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>> - /* write synced it already */\n>>> - break;\n>>> \n>>> IMO it's better to add Assert(false) here to ensure that we never \n>>> reach\n>>> here, as follows. 
Thought?\n>>> \n>>> + case SYNC_METHOD_OPEN:\n>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>> + /* not reachable */\n>>> + Assert(false);\n>> \n>> I agree.\n>> \n>> \n>>> Even when a backend exits, it sends the stats via \n>>> pgstat_beshutdown_hook().\n>>> On the other hand, walwriter doesn't do that. Walwriter also should \n>>> send\n>>> the stats even at its exit? Otherwise some stats can fail to be \n>>> collected.\n>>> But ISTM that this issue existed from before, for example \n>>> checkpointer\n>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to \n>>> fix\n>>> this issue in this patch?\n>> \n>> Thanks, I thought it's better to do so.\n>> I added the shutdown hook for the walwriter and the checkpointer in \n>> v14-0003 patch.\n> \n> Thanks!\n> \n> Seems you forgot to include the changes of expected/rules.out in 0001 \n> patch,\n> and which caused the regression test to fail. Attached is the updated \n> version\n> of the patch. I included expected/rules.out in it.\n\nSorry.\n\n> +\tPgStat_Counter m_wal_write_time;\t/* time spend writing wal records in\n> +\t\t\t\t\t\t\t\t\t\t * micro seconds */\n> +\tPgStat_Counter m_wal_sync_time; /* time spend syncing wal records in \n> micro\n> +\t\t\t\t\t\t\t\t\t * seconds */\n> \n> IMO \"spend\" should be \"spent\". Also \"micro seconds\" should be \n> \"microseconds\"\n> in sake of consistent with other comments in pgstat.h. I fixed them.\n\nThanks.\n\n> Regarding pgstat_report_wal() and pgstat_send_wal(), I found one bug. \n> Even\n> when pgstat_send_wal() returned without sending any message,\n> pgstat_report_wal() saved current pgWalUsage and that counter was used \n> for\n> the subsequent calculation of WAL usage. This caused some counters not \n> to\n> be sent to the collector. This is a bug that I added. 
I fixed it.\n\nThanks.\n\n\n> +\twalStats.wal_write += msg->m_wal_write;\n> +\twalStats.wal_write_time += msg->m_wal_write_time;\n> +\twalStats.wal_sync += msg->m_wal_sync;\n> +\twalStats.wal_sync_time += msg->m_wal_sync_time;\n> \n> I changed the order of the above in pgstat.c so that wal_write_time and\n> wal_sync_time are placed in next to each other.\n\nI forgot to fix them, thanks.\n\n\n> The followings are the comments for the docs part. I've not updated \n> this\n> in the patch yet because I'm not sure how to change them for now.\n> + Number of times WAL buffers were written out to disk via\n> + <function>XLogWrite</function>, which is invoked during an\n> + <function>XLogFlush</function> request (see <xref\n> linkend=\"wal-configuration\"/>)\n> + </para></entry>\n> \n> XLogWrite() can be invoked during the functions other than XLogFlush().\n> For example, XLogBackgroundFlush(). So the above description might be\n> confusing?\n> \n> + Number of times WAL files were synced to disk via\n> + <function>issue_xlog_fsync</function>, which is invoked during \n> an\n> + <function>XLogFlush</function> request (see <xref\n> linkend=\"wal-configuration\"/>)\n> \n> Same as above.\n\nYes, why don't we remove \"XLogFlush\" from the above comments,\nsince the XLogWrite() description is covered in wal.sgml?\n\nBut since it's now mentioned only for the backend,\nI added comments for the WAL writer in the attached patch.\n\n\n> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of \n> the\n> + \"sync at commit\" options (i.e., <literal>fdatasync</literal>,\n> + <literal>fsync</literal>, or \n> <literal>fsync_writethrough</literal>).\n> \n> Even open_sync and open_datasync do the sync at commit. No? 
I'm not \n> sure\n> if \"sync at commit\" is right term to indicate fdatasync, fsync and\n> fsync_writethrough.\n\nYes, why don't we change it to the following?\n\n```\n while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n options for which a specific fsync method is called (i.e., \n<literal>fdatasync</literal>,\n <literal>fsync</literal>, or \n<literal>fsync_writethrough</literal>)\n```\n\n> + <literal>open_sync</literal>. Units are in milliseconds with\n> microsecond resolution.\n> \n> \"with microsecond resolution\" part is really necessary?\n\nI removed it, because blk_read_time in pg_stat_database is the same,\nbut its description doesn't mention it.\n\n\n> + transaction records are flushed to permanent storage.\n> + <function>XLogFlush</function> calls <function>XLogWrite</function> \n> to write\n> + and <function>issue_xlog_fsync</function> to flush them, which are\n> counted as\n> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n> + <xref linkend=\"pg-stat-wal-view\"/>. On systems with high log \n> output,\n> \n> This description might cause users to misread that XLogFlush() calls\n> issue_xlog_fsync(). Since issue_xlog_fsync() is called by XLogWrite(),\n> ISTM that this description needs to be updated.\n\nUnderstood. I fixed the description to mention that XLogWrite()\ncalls issue_xlog_fsync().\n\n\n> Each line in the above seems to end with a space character.\n> This space character should be removed.\n\nSorry for that. I removed it.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 05 Mar 2021 19:54:23 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/05 19:54, Masahiro Ikeda wrote:\n> On 2021-03-05 12:47, Fujii Masao wrote:\n>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>\n>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>\n>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>\n>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>> operating system...\"\n>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>\n>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>\n>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>\n>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable because ...\"\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>\n>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>\n>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>> What do you think?\n>>>>>>>>>\n>>>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>> at that moment. 
IOW, it seems strange that some values show dynamic\n>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>\n>>>>>>>> OK, I fixed it.\n>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>\n>>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>>>\n>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>> + if (enableFsync &&\n>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>> + {\n>>>>>>>\n>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>> to simplify the code more?\n>>>>>>\n>>>>>> Thanks for the comments.\n>>>>>> I added the above code in v12 patch.\n>>>>>>\n>>>>>>>\n>>>>>>> + /*\n>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>> + * the overhead in WAL-writing.\n>>>>>>> + */\n>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>> + pgstat_send_wal();\n>>>>>>>\n>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>\n>>>>>> Thanks, I didn't notice that.\n>>>>>>\n>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>> default value is 200msec and it may be set shorter time.\n>>>>\n>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>>>\n>>>>>>\n>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>\n>>>>>> + /*\n>>>>>> + * Don't send a message unless it's been at least\n>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>> + * msec since we last sent one\n>>>>>> + */\n>>>>>> + now = GetCurrentTimestamp();\n>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>> + {\n>>>>>> + pgstat_send_wal();\n>>>>>> + last_report = now;\n>>>>>> + }\n>>>>>> +\n>>>>>>\n>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>\n>>>> Agreed.\n>>>>\n>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>\n>>>> I think that we can do that. What about the attached patch?\n>>>\n>>> Thanks, I thought it's better.\n>>>\n>>>\n>>>>> I forgot to remove an unused variable.\n>>>>> The attached v13 patch is fixed.\n>>>>\n>>>> Thanks for updating the patch!\n>>>>\n>>>> + w.wal_write,\n>>>> + w.wal_write_time,\n>>>> + w.wal_sync,\n>>>> + w.wal_sync_time,\n>>>>\n>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>> each other? That is, what about the following order of columns?\n>>>>\n>>>> wal_write\n>>>> wal_sync\n>>>> wal_write_time\n>>>> wal_sync_time\n>>>\n>>> Yes, I fixed it.\n>>>\n>>>> - case SYNC_METHOD_OPEN:\n>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>> - /* write synced it already */\n>>>> - break;\n>>>>\n>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>> here, as follows. Thought?\n>>>>\n>>>> + case SYNC_METHOD_OPEN:\n>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>> + /* not reachable */\n>>>> + Assert(false);\n>>>\n>>> I agree.\n>>>\n>>>\n>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>> the stats even at its exit? 
Otherwise some stats can fail to be collected.\n>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>> this issue in this patch?\n>>>\n>>> Thanks, I thought it's better to do so.\n>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>\n>> Thanks!\n>>\n>> Seems you forgot to include the changes of expected/rules.out in 0001 patch,\n>> and which caused the regression test to fail. Attached is the updated version\n>> of the patch. I included expected/rules.out in it.\n> \n> Sorry.\n> \n>> + PgStat_Counter m_wal_write_time; /* time spend writing wal records in\n>> + * micro seconds */\n>> + PgStat_Counter m_wal_sync_time; /* time spend syncing wal records in micro\n>> + * seconds */\n>>\n>> IMO \"spend\" should be \"spent\". Also \"micro seconds\" should be \"microseconds\"\n>> in sake of consistent with other comments in pgstat.h. I fixed them.\n> \n> Thanks.\n> \n>> Regarding pgstat_report_wal() and pgstat_send_wal(), I found one bug. Even\n>> when pgstat_send_wal() returned without sending any message,\n>> pgstat_report_wal() saved current pgWalUsage and that counter was used for\n>> the subsequent calculation of WAL usage. This caused some counters not to\n>> be sent to the collector. This is a bug that I added. I fixed this bug.\n> \n> Thanks.\n> \n> \n>> + walStats.wal_write += msg->m_wal_write;\n>> + walStats.wal_write_time += msg->m_wal_write_time;\n>> + walStats.wal_sync += msg->m_wal_sync;\n>> + walStats.wal_sync_time += msg->m_wal_sync_time;\n>>\n>> I changed the order of the above in pgstat.c so that wal_write_time and\n>> wal_sync_time are placed in next to each other.\n> \n> I forgot to fix them, thanks.\n> \n> \n>> The followings are the comments for the docs part. 
I've not updated this\n>> in the patch yet because I'm not sure how to change them for now.\n>> + Number of times WAL buffers were written out to disk via\n>> + <function>XLogWrite</function>, which is invoked during an\n>> + <function>XLogFlush</function> request (see <xref\n>> linkend=\"wal-configuration\"/>)\n>> + </para></entry>\n>>\n>> XLogWrite() can be invoked during the functions other than XLogFlush().\n>> For example, XLogBackgroundFlush(). So the above description might be\n>> confusing?\n>>\n>> + Number of times WAL files were synced to disk via\n>> + <function>issue_xlog_fsync</function>, which is invoked during an\n>> + <function>XLogFlush</function> request (see <xref\n>> linkend=\"wal-configuration\"/>)\n>>\n>> Same as above.\n> \n> Yes, why don't you remove \"XLogFlush\" in the above comments\n> because XLogWrite() description is covered in wal.sgml?\n> \n> But, now it's mentioned only for backend,\n> I added the comments for the wal writer in the attached patch.\n> \n> \n>> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n>> + \"sync at commit\" options (i.e., <literal>fdatasync</literal>,\n>> + <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n>>\n>> Even open_sync and open_datasync do the sync at commit. No? I'm not sure\n>> if \"sync at commit\" is right term to indicate fdatasync, fsync and\n>> fsync_writethrough.\n> \n> Yes, why don't you change to the following comments?\n> \n> ```\n> while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n> options which specific fsync method is called (i.e., <literal>fdatasync</literal>,\n> <literal>fsync</literal>, or <literal>fsync_writethrough</literal>)\n> ```\n> \n>> + <literal>open_sync</literal>. 
Units are in milliseconds with\n>> microsecond resolution.\n>>\n>> \"with microsecond resolution\" part is really necessary?\n> \n> I removed it because blk_read_time in pg_stat_database is the same above,\n> but it doesn't mention it.\n> \n> \n>> + transaction records are flushed to permanent storage.\n>> + <function>XLogFlush</function> calls <function>XLogWrite</function> to write\n>> + and <function>issue_xlog_fsync</function> to flush them, which are\n>> counted as\n>> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n>> + <xref linkend=\"pg-stat-wal-view\"/>. On systems with high log output,\n>>\n>> This description might cause users to misread that XLogFlush() calls\n>> issue_xlog_fsync(). Since issue_xlog_fsync() is called by XLogWrite(),\n>> ISTM that this description needs to be updated.\n> \n> I understood. I fixed to mention that XLogWrite()\n> calls issue_xlog_fsync().\n> \n> \n>> Each line in the above seems to end with a space character.\n>> This space character should be removed.\n> \n> Sorry for that. I removed it.\n\nThanks for updating the patch! I think it's getting good shape!\n\n- pid | wait_event_type | wait_event\n+ pid | wait_event_type | wait_event\n\nThis change is not necessary?\n\n- every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds.\n+ every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds, which calls\n+ <function>XLogWrite</function> to write and <function>XLogWrite</function>\n+ <function>issue_xlog_fsync</function> to flush them. They are counted as\n+ <literal>wal_write</literal> and <literal>wal_sync</literal> in\n+ <xref linkend=\"pg-stat-wal-view\"/>.\n\nIsn't it better to avoid using the terms like XLogWrite or issue_xlog_fsync\nbefore explaining what they are? They are explained later. At least for me\nI'm ok without this change.\n\n- to write (move to kernel cache) a few filled <acronym>WAL</acronym>\n- buffers. 
This is undesirable because <function>XLogInsertRecord</function>\n+ to call <function>XLogWrite</function> to write (move to kernel cache) a\n+ few filled <acronym>WAL</acronym> buffers (the tally of this event is reported in\n+ <literal>wal_buffers_full</literal> in <xref linkend=\"pg-stat-wal-view\"/>).\n+ This is undesirable because <function>XLogInsertRecord</function>\n\nThis paragraph explains the relationship between WAL writes and WAL buffers. I don't think it's good to add different context to this paragraph. Instead, what about adding a new paragraph like the following?\n\n----------------------------------\nWhen track_wal_io_timing is enabled, the total amounts of time XLogWrite writes and issue_xlog_fsync syncs WAL data to disk are counted as wal_write_time and wal_sync_time in pg_stat_wal view, respectively. XLogWrite is normally called by XLogInsertRecord (when there is no space for the new record in WAL buffers), XLogFlush and the WAL writer, to write WAL buffers to disk and call issue_xlog_fsync. If wal_sync_method is either open_datasync or open_sync, a write operation in XLogWrite guarantees to sync written WAL data to disk and issue_xlog_fsync does nothing. If wal_sync_method is either fdatasync, fsync, or fsync_writethrough, the write operation moves WAL buffer to kernel cache and issue_xlog_fsync syncs WAL files to disk. 
Regardless of the setting of track_wal_io_timing, the numbers of times XLogWrite writes and issue_xlog_fsync syncs WAL data to disk are also counted as wal_write and wal_sync in pg_stat_wal, respectively.\n----------------------------------\n\n+ <function>issue_xlog_fsync</function> (see <xref linkend=\"wal-configuration\"/>)\n\n\n\"request\" should be placed just before \"(see\"?\n\n+ Number of times WAL files were synced to disk via\n+ <function>issue_xlog_fsync</function> (see <xref linkend=\"wal-configuration\"/>)\n+ while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n+ options which specific fsync method is called (i.e., <literal>fdatasync</literal>,\n+ <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n\nIsn't it better to mention the case of fsync=off? What about the following?\n\n----------------------------------\nNumber of times WAL files were synced to disk via issue_xlog_fsync (see ...). This is zero when fsync is off or wal_sync_method is either open_datasync or open_sync.\n----------------------------------\n\n+ Total amount of time spent writing WAL buffers were written out to disk via\n\n\"were written out\" is not necessary?\n\n+ Total amount of time spent syncing WAL files to disk via\n+ <function>issue_xlog_fsync</function> request (see <xref linkend=\"wal-configuration\"/>)\n+ while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n+ options which specific fsync method is called (i.e., <literal>fdatasync</literal>,\n+ <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n+ Units are in milliseconds.\n+ This is zero when <xref linkend=\"guc-track-wal-io-timing\"/> is disabled.\n\nIsn't it better to explain the case where this counter is zero a bit more clearly as follows?\n\n---------------------\nThis is zero when track_wal_io_timing is disabled, fsync is off, or\nwal_sync_method is either open_datasync or open_sync.\n---------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced 
Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:44:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-08 13:44, Fujii Masao wrote:\n> On 2021/03/05 19:54, Masahiro Ikeda wrote:\n>> On 2021-03-05 12:47, Fujii Masao wrote:\n>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>> \n>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>> \n>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during \n>>>>>>>>>>>> an\n>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is \n>>>>>>>>>>>> also\n>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>> \n>>>>>>>>>>>> (\"which normally called\" should be \"which is normally \n>>>>>>>>>>>> called\" or\n>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>>>>>> original)\n>>>>>>>>>>>> You missed the adding the space before an opening \n>>>>>>>>>>>> parenthesis here and\n>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>> \n>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly \n>>>>>>>>>>>> query the\n>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>> \n>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>> \n>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This \n>>>>>>>>>>>> is also\n>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>> \n>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>>>>>>>> receiver\n>>>>>>>>>>> in 
pg_stat_wal_receiver.\n>>>>>>>>>>> \n>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this \n>>>>>>>>>>>> event is\n>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable \n>>>>>>>>>>>> because ...\"\n>>>>>>>>>>> \n>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>> \n>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>>>>>> explicitly\n>>>>>>>>>>>> computing the sync statistics but does require computing the \n>>>>>>>>>>>> write\n>>>>>>>>>>>> statistics. This is because of the presence of \n>>>>>>>>>>>> issue_xlog_fsync but\n>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>>>>>> observe that\n>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while \n>>>>>>>>>>>> the WAL\n>>>>>>>>>>>> receiver path does not. It seems technically \n>>>>>>>>>>>> straight-forward to\n>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the \n>>>>>>>>>>>> two places,\n>>>>>>>>>>>> though I suspect there may be a trade-off for not adding \n>>>>>>>>>>>> another\n>>>>>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>>>>>> processing\n>>>>>>>>>>>> (though that seems marginalized compared to the cost of \n>>>>>>>>>>>> actually\n>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and \n>>>>>>>>>>>> don't have\n>>>>>>>>>>>> any shared code between the two but instead implement the \n>>>>>>>>>>>> WAL receiver\n>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. 
In either case, \n>>>>>>>>>>>> this\n>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>> \n>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver \n>>>>>>>>>>> stats.\n>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>> \n>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>> \n>>>>>>>>>> \n>>>>>>>>>>> I added the infrastructure code to communicate the WAL \n>>>>>>>>>>> receiver stats messages between the WAL receiver and the \n>>>>>>>>>>> stats collector, and\n>>>>>>>>>>> the stats for WAL receiver is counted in \n>>>>>>>>>>> pg_stat_wal_receiver.\n>>>>>>>>>>> What do you think?\n>>>>>>>>>> \n>>>>>>>>>> On second thought, this idea seems not good. Because those \n>>>>>>>>>> stats are\n>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver \n>>>>>>>>>> process running\n>>>>>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>>>>>> dynamic\n>>>>>>>>>> stats and the others show collected stats, even though they \n>>>>>>>>>> are in\n>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>> \n>>>>>>>>> OK, I fixed it.\n>>>>>>>>> The stats collected in the WAL receiver is exposed in \n>>>>>>>>> pg_stat_wal view in v11 patch.\n>>>>>>>> \n>>>>>>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>>>>>> \n>>>>>>>> + /* Check whether the WAL file was synced to disk right now \n>>>>>>>> */\n>>>>>>>> + if (enableFsync &&\n>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>> + {\n>>>>>>>> \n>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>> if enableFsync is off, sync_method is open_sync or \n>>>>>>>> open_data_sync,\n>>>>>>>> to simplify the code more?\n>>>>>>> \n>>>>>>> Thanks for the comments.\n>>>>>>> I added the above code in v12 patch.\n>>>>>>> \n>>>>>>>> \n>>>>>>>> + /*\n>>>>>>>> + * Send WAL statistics only if WalWriterDelay has \n>>>>>>>> elapsed to minimize\n>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>> + */\n>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>> + pgstat_send_wal();\n>>>>>>>> \n>>>>>>>> On second thought, this change means that it always takes \n>>>>>>>> wal_writer_delay\n>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() \n>>>>>>>> is called.\n>>>>>>>> For example, if wal_writer_delay is set to several seconds, some \n>>>>>>>> values in\n>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those \n>>>>>>>> seconds.\n>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to \n>>>>>>>> send\n>>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>> \n>>>>>>> Thanks, I didn't notice that.\n>>>>>>> \n>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>> \n>>>>> Yeah, if wal_writer_delay is set to very small value, there is a \n>>>>> risk\n>>>>> that the WAL stats are sent too frequently. 
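[Editor's note: the send-frequency risk just mentioned is what the PGSTAT_STAT_INTERVAL check addresses. A minimal sketch of that throttle follows, using plain millisecond integers instead of TimestampTz/TimestampDifferenceExceeds(); the names are illustrative, not the real pgstat.c code.]

```c
#include <assert.h>

#define PGSTAT_STAT_INTERVAL_MS 500    /* msec, mirroring PGSTAT_STAT_INTERVAL */

static long last_report_ms = 0;        /* time of the last message sent */
static int  wal_msgs_sent  = 0;        /* how many stats messages went out */

/* Sketch of the interval check folded into the send routine: no matter
 * how often the walwriter loop calls this (e.g. with a very small
 * wal_writer_delay), at most one WAL stats message goes out per
 * PGSTAT_STAT_INTERVAL_MS. */
static void send_wal_stats_sketch(long now_ms)
{
    if (now_ms - last_report_ms < PGSTAT_STAT_INTERVAL_MS)
        return;                    /* too soon since the last message */

    wal_msgs_sent++;               /* ... build and send the stats message ... */
    last_report_ms = now_ms;
}
```

Putting the check inside the send routine itself, as discussed above, avoids duplicating the interval test at every call site.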
I agree that's a \n>>>>> problem.\n>>>>> \n>>>>>>> \n>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>> \n>>>>>>> + /*\n>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>> + * msec since we last sent one\n>>>>>>> + */\n>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>> + {\n>>>>>>> + pgstat_send_wal();\n>>>>>>> + last_report = now;\n>>>>>>> + }\n>>>>>>> +\n>>>>>>> \n>>>>>>> Although I worried that it's better to add the check code in \n>>>>>>> pgstat_send_wal(),\n>>>>> \n>>>>> Agreed.\n>>>>> \n>>>>>>> I didn't do so because to avoid to double check \n>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already \n>>>>>>> checks the\n>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>> \n>>>>> I think that we can do that. What about the attached patch?\n>>>> \n>>>> Thanks, I thought it's better.\n>>>> \n>>>> \n>>>>>> I forgot to remove an unused variable.\n>>>>>> The attached v13 patch is fixed.\n>>>>> \n>>>>> Thanks for updating the patch!\n>>>>> \n>>>>> + w.wal_write,\n>>>>> + w.wal_write_time,\n>>>>> + w.wal_sync,\n>>>>> + w.wal_sync_time,\n>>>>> \n>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>> each other? That is, what about the following order of columns?\n>>>>> \n>>>>> wal_write\n>>>>> wal_sync\n>>>>> wal_write_time\n>>>>> wal_sync_time\n>>>> \n>>>> Yes, I fixed it.\n>>>> \n>>>>> - case SYNC_METHOD_OPEN:\n>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>> - /* write synced it already */\n>>>>> - break;\n>>>>> \n>>>>> IMO it's better to add Assert(false) here to ensure that we never \n>>>>> reach\n>>>>> here, as follows. 
Thought?\n>>>>> \n>>>>> + case SYNC_METHOD_OPEN:\n>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>> + /* not reachable */\n>>>>> + Assert(false);\n>>>> \n>>>> I agree.\n>>>> \n>>>> \n>>>>> Even when a backend exits, it sends the stats via \n>>>>> pgstat_beshutdown_hook().\n>>>>> On the other hand, walwriter doesn't do that. Walwriter also should \n>>>>> send\n>>>>> the stats even at its exit? Otherwise some stats can fail to be \n>>>>> collected.\n>>>>> But ISTM that this issue existed from before, for example \n>>>>> checkpointer\n>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill \n>>>>> to fix\n>>>>> this issue in this patch?\n>>>> \n>>>> Thanks, I thought it's better to do so.\n>>>> I added the shutdown hook for the walwriter and the checkpointer in \n>>>> v14-0003 patch.\n>>> \n>>> Thanks!\n>>> \n>>> Seems you forgot to include the changes of expected/rules.out in 0001 \n>>> patch,\n>>> and which caused the regression test to fail. Attached is the updated \n>>> version\n>>> of the patch. I included expected/rules.out in it.\n>> \n>> Sorry.\n>> \n>>> + PgStat_Counter m_wal_write_time; /* time spend writing wal \n>>> records in\n>>> + * micro seconds */\n>>> + PgStat_Counter m_wal_sync_time; /* time spend syncing wal \n>>> records in micro\n>>> + * seconds */\n>>> \n>>> IMO \"spend\" should be \"spent\". Also \"micro seconds\" should be \n>>> \"microseconds\"\n>>> in sake of consistent with other comments in pgstat.h. I fixed them.\n>> \n>> Thanks.\n>> \n>>> Regarding pgstat_report_wal() and pgstat_send_wal(), I found one bug. \n>>> Even\n>>> when pgstat_send_wal() returned without sending any message,\n>>> pgstat_report_wal() saved current pgWalUsage and that counter was \n>>> used for\n>>> the subsequent calculation of WAL usage. This caused some counters \n>>> not to\n>>> be sent to the collector. This is a bug that I added. 
I fixed this \n>>> bug.\n>> \n>> Thanks.\n>> \n>> \n>>> + walStats.wal_write += msg->m_wal_write;\n>>> + walStats.wal_write_time += msg->m_wal_write_time;\n>>> + walStats.wal_sync += msg->m_wal_sync;\n>>> + walStats.wal_sync_time += msg->m_wal_sync_time;\n>>> \n>>> I changed the order of the above in pgstat.c so that wal_write_time \n>>> and\n>>> wal_sync_time are placed in next to each other.\n>> \n>> I forgot to fix them, thanks.\n>> \n>> \n>>> The followings are the comments for the docs part. I've not updated \n>>> this\n>>> in the patch yet because I'm not sure how to change them for now.\n>>> + Number of times WAL buffers were written out to disk via\n>>> + <function>XLogWrite</function>, which is invoked during an\n>>> + <function>XLogFlush</function> request (see <xref\n>>> linkend=\"wal-configuration\"/>)\n>>> + </para></entry>\n>>> \n>>> XLogWrite() can be invoked during the functions other than \n>>> XLogFlush().\n>>> For example, XLogBackgroundFlush(). So the above description might be\n>>> confusing?\n>>> \n>>> + Number of times WAL files were synced to disk via\n>>> + <function>issue_xlog_fsync</function>, which is invoked \n>>> during an\n>>> + <function>XLogFlush</function> request (see <xref\n>>> linkend=\"wal-configuration\"/>)\n>>> \n>>> Same as above.\n>> \n>> Yes, why don't you remove \"XLogFlush\" in the above comments\n>> because XLogWrite() description is covered in wal.sgml?\n>> \n>> But, now it's mentioned only for backend,\n>> I added the comments for the wal writer in the attached patch.\n>> \n>> \n>>> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of \n>>> the\n>>> + \"sync at commit\" options (i.e., <literal>fdatasync</literal>,\n>>> + <literal>fsync</literal>, or \n>>> <literal>fsync_writethrough</literal>).\n>>> \n>>> Even open_sync and open_datasync do the sync at commit. No? 
I'm not \n>>> sure\n>>> if \"sync at commit\" is right term to indicate fdatasync, fsync and\n>>> fsync_writethrough.\n>> \n>> Yes, why don't you change to the following comments?\n>> \n>> ```\n>> while <xref linkend=\"guc-wal-sync-method\"/> was set to one of \n>> the\n>> options which specific fsync method is called (i.e., \n>> <literal>fdatasync</literal>,\n>> <literal>fsync</literal>, or \n>> <literal>fsync_writethrough</literal>)\n>> ```\n>> \n>>> + <literal>open_sync</literal>. Units are in milliseconds with\n>>> microsecond resolution.\n>>> \n>>> \"with microsecond resolution\" part is really necessary?\n>> \n>> I removed it because blk_read_time in pg_stat_database is the same \n>> above,\n>> but it doesn't mention it.\n>> \n>> \n>>> + transaction records are flushed to permanent storage.\n>>> + <function>XLogFlush</function> calls \n>>> <function>XLogWrite</function> to write\n>>> + and <function>issue_xlog_fsync</function> to flush them, which \n>>> are\n>>> counted as\n>>> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n>>> + <xref linkend=\"pg-stat-wal-view\"/>. On systems with high log \n>>> output,\n>>> \n>>> This description might cause users to misread that XLogFlush() calls\n>>> issue_xlog_fsync(). Since issue_xlog_fsync() is called by \n>>> XLogWrite(),\n>>> ISTM that this description needs to be updated.\n>> \n>> I understood. I fixed to mention that XLogWrite()\n>> calls issue_xlog_fsync().\n>> \n>> \n>>> Each line in the above seems to end with a space character.\n>>> This space character should be removed.\n>> \n>> Sorry for that. I removed it.\n> \n> Thanks for updating the patch! 
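[Editor's note: the early-return shape of issue_xlog_fsync() suggested earlier in this thread (return immediately when fsync is off or the sync method syncs inside write()) can be sketched as below. The enum and counter are illustrative stand-ins, not the real xlog.c definitions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the wal_sync_method GUC values. */
typedef enum SyncMethodSketch {
    SM_FSYNC,
    SM_FDATASYNC,
    SM_FSYNC_WRITETHROUGH,
    SM_OPEN_SYNC,          /* O_SYNC: the write() itself syncs */
    SM_OPEN_DATASYNC       /* O_DSYNC: the write() itself syncs */
} SyncMethodSketch;

static bool enable_fsync   = true;
static int  wal_sync_calls = 0;    /* stands in for the wal_sync counter */

/* Sketch of the suggested early return: bail out up front when no
 * explicit sync call will happen, so the counting (and, with
 * track_wal_io_timing, the timing) below the guard needs no
 * per-method conditions. */
static void issue_xlog_fsync_sketch(SyncMethodSketch m)
{
    if (!enable_fsync || m == SM_OPEN_SYNC || m == SM_OPEN_DATASYNC)
        return;                    /* already synced by write(), or fsync off */

    /* ... pg_fsync() / pg_fdatasync() would run here ... */
    wal_sync_calls++;
}
```

This is also why the documentation text settled on below can promise that wal_sync stays zero for fsync=off, open_datasync, and open_sync.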
I think it's getting good shape!\n> - pid | wait_event_type | wait_event\n> + pid | wait_event_type | wait_event\n> \n> This change is not necessary?\n\nNo, sorry.\nI removed it by mistake when I remove trailing space characters.\n\n\n> - every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds.\n> + every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds, which \n> calls\n> + <function>XLogWrite</function> to write and \n> <function>XLogWrite</function>\n> + <function>issue_xlog_fsync</function> to flush them. They are \n> counted as\n> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n> + <xref linkend=\"pg-stat-wal-view\"/>.\n> \n> Isn't it better to avoid using the terms like XLogWrite or \n> issue_xlog_fsync\n> before explaining what they are? They are explained later. At least for \n> me\n> I'm ok without this change.\n\nOK. I removed them and add a new paragraph.\n\n\n> - to write (move to kernel cache) a few filled <acronym>WAL</acronym>\n> - buffers. This is undesirable because \n> <function>XLogInsertRecord</function>\n> + to call <function>XLogWrite</function> to write (move to kernel \n> cache) a\n> + few filled <acronym>WAL</acronym> buffers (the tally of this event\n> is reported in\n> + <literal>wal_buffers_full</literal> in <xref \n> linkend=\"pg-stat-wal-view\"/>).\n> + This is undesirable because <function>XLogInsertRecord</function>\n> \n> This paragraph explains the relationshp between WAL writes and WAL\n> buffers. I don't think it's good to add different context to this\n> paragraph. Instead, what about adding new paragraph like the follwing?\n> \n> ----------------------------------\n> When track_wal_io_timing is enabled, the total amounts of time\n> XLogWrite writes and issue_xlog_fsync syncs WAL data to disk are\n> counted as wal_write_time and wal_sync_time in pg_stat_wal view,\n> respectively. 
XLogWrite is normally called by XLogInsertRecord (when\n> there is no space for the new record in WAL buffers), XLogFlush and\n> the WAL writer, to write WAL buffers to disk and call\n> issue_xlog_fsync. If wal_sync_method is either open_datasync or\n> open_sync, a write operation in XLogWrite guarantees to sync written\n> WAL data to disk and issue_xlog_fsync does nothing. If wal_sync_method\n> is either fdatasync, fsync, or fsync_writethrough, the write operation\n> moves WAL buffer to kernel cache and issue_xlog_fsync syncs WAL files\n> to disk. Regardless of the setting of track_wal_io_timing, the numbers\n> of times XLogWrite writes and issue_xlog_fsync syncs WAL data to disk\n> are also counted as wal_write and wal_sync in pg_stat_wal,\n> respectively.\n> ----------------------------------\n\nThanks, I agree it's better.\n\n\n> + <function>issue_xlog_fsync</function> (see <xref\n> linkend=\"wal-configuration\"/>)\n> \n> \"request\" should be place just before \"(see\"?\n\nYes, thanks.\n\n\n\n> + Number of times WAL files were synced to disk via\n> + <function>issue_xlog_fsync</function> (see <xref\n> linkend=\"wal-configuration\"/>)\n> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of \n> the\n> + options which specific fsync method is called (i.e.,\n> <literal>fdatasync</literal>,\n> + <literal>fsync</literal>, or \n> <literal>fsync_writethrough</literal>).\n> \n> Isn't it better to mention the case of fsync=off? What about the \n> following?\n> \n> ----------------------------------\n> Number of times WAL files were synced to disk via issue_xlog_fsync\n> (see ...). 
This is zero when fsync is off or wal_sync_method is either\n> open_datasync or open_sync.\n> ----------------------------------\n\nYes.\n\n\n> + Total amount of time spent writing WAL buffers were written\n> out to disk via\n> \n> \"were written out\" is not necessary?\n\nYes, removed it.\n\n> + Total amount of time spent syncing WAL files to disk via\n> + <function>issue_xlog_fsync</function> request (see <xref\n> linkend=\"wal-configuration\"/>)\n> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of \n> the\n> + options which specific fsync method is called (i.e.,\n> <literal>fdatasync</literal>,\n> + <literal>fsync</literal>, or \n> <literal>fsync_writethrough</literal>).\n> + Units are in milliseconds.\n> + This is zero when <xref linkend=\"guc-track-wal-io-timing\"/> is \n> disabled.\n> \n> Isn't it better to explain the case where this counter is zero a bit\n> more clearly as follows?\n> \n> ---------------------\n> This is zero when track_wal_io_timing is disabled, fsync is off, or\n> wal_sync_method is either open_datasync or open_sync.\n> ---------------------\n\nYes, thanks.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
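[Editor's note: the units discussed above — counters accumulated in microseconds in the stats message, exposed in milliseconds in the view — can be sketched as follows. Names are illustrative, not the real pgstat symbols, and the instr_time bracketing around the actual write() is elided.]

```c
#include <assert.h>
#include <stdbool.h>

/* Accumulated in microseconds, matching the PgStat_MsgWal comments
 * ("time spent writing wal records in microseconds") discussed above. */
static long wal_write_time_us;

/* Record one timed write; the caller would bracket the actual write()
 * with timestamps only when track_wal_io_timing is enabled, which is
 * why the counter stays zero with the GUC off. */
static void record_wal_write_sketch(bool track_wal_io_timing, long elapsed_us)
{
    if (track_wal_io_timing)
        wal_write_time_us += elapsed_us;
}

/* What the pg_stat_wal view would show: milliseconds, as the docs say
 * ("Units are in milliseconds"). */
static double wal_write_time_ms_sketch(void)
{
    return wal_write_time_us / 1000.0;
}
```

The same pattern applies to wal_sync_time, with the additional guard that no timing happens at all when the early return in issue_xlog_fsync() fires.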
"msg_date": "Mon, 08 Mar 2021 19:42:37 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/08 19:42, Masahiro Ikeda wrote:\n> On 2021-03-08 13:44, Fujii Masao wrote:\n>> On 2021/03/05 19:54, Masahiro Ikeda wrote:\n>>> On 2021-03-05 12:47, Fujii Masao wrote:\n>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> \"The number of 
times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>\n>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>\n>>>>>>>>>>> On second thought, this idea seems not good. 
Because those stats are\n>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>>>\n>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>>>>>\n>>>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>>>> + if (enableFsync &&\n>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>> + {\n>>>>>>>>>\n>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>>>> to simplify the code more?\n>>>>>>>>\n>>>>>>>> Thanks for the comments.\n>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> + /*\n>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>> + */\n>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>\n>>>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>>>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n>>>>>>>>\n>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>\n>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>\n>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>>>> that the WAL stats are sent too frequently. I agree that's a problem.\n>>>>>>\n>>>>>>>>\n>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>\n>>>>>>>> + /*\n>>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>> + * msec since we last sent one\n>>>>>>>> + */\n>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>> + {\n>>>>>>>> + pgstat_send_wal();\n>>>>>>>> + last_report = now;\n>>>>>>>> + }\n>>>>>>>> +\n>>>>>>>>\n>>>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>>>\n>>>>>> Agreed.\n>>>>>>\n>>>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>\n>>>>>> I think that we can do that. What about the attached patch?\n>>>>>\n>>>>> Thanks, I thought it's better.\n>>>>>\n>>>>>\n>>>>>>> I forgot to remove an unused variable.\n>>>>>>> The attached v13 patch is fixed.\n>>>>>>\n>>>>>> Thanks for updating the patch!\n>>>>>>\n>>>>>> + w.wal_write,\n>>>>>> + w.wal_write_time,\n>>>>>> + w.wal_sync,\n>>>>>> + w.wal_sync_time,\n>>>>>>\n>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>> each other? 
That is, what about the following order of columns?\n>>>>>>\n>>>>>> wal_write\n>>>>>> wal_sync\n>>>>>> wal_write_time\n>>>>>> wal_sync_time\n>>>>>\n>>>>> Yes, I fixed it.\n>>>>>\n>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>> - /* write synced it already */\n>>>>>> - break;\n>>>>>>\n>>>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>>>> here, as follows. Thought?\n>>>>>>\n>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>> + /* not reachable */\n>>>>>> + Assert(false);\n>>>>>\n>>>>> I agree.\n>>>>>\n>>>>>\n>>>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>>>> the stats even at its exit? Otherwise some stats can fail to be collected.\n>>>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>>>> this issue in this patch?\n>>>>>\n>>>>> Thanks, I thought it's better to do so.\n>>>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>>>\n>>>> Thanks!\n>>>>\n>>>> Seems you forgot to include the changes of expected/rules.out in 0001 patch,\n>>>> and which caused the regression test to fail. Attached is the updated version\n>>>> of the patch. I included expected/rules.out in it.\n>>>\n>>> Sorry.\n>>>\n>>>> + PgStat_Counter m_wal_write_time; /* time spend writing wal records in\n>>>> + * micro seconds */\n>>>> + PgStat_Counter m_wal_sync_time; /* time spend syncing wal records in micro\n>>>> + * seconds */\n>>>>\n>>>> IMO \"spend\" should be \"spent\". Also \"micro seconds\" should be \"microseconds\"\n>>>> in sake of consistent with other comments in pgstat.h. I fixed them.\n>>>\n>>> Thanks.\n>>>\n>>>> Regarding pgstat_report_wal() and pgstat_send_wal(), I found one bug. 
Even\n>>>> when pgstat_send_wal() returned without sending any message,\n>>>> pgstat_report_wal() saved current pgWalUsage and that counter was used for\n>>>> the subsequent calculation of WAL usage. This caused some counters not to\n>>>> be sent to the collector. This is a bug that I added. I fixed this bug.\n>>>\n>>> Thanks.\n>>>\n>>>\n>>>> + walStats.wal_write += msg->m_wal_write;\n>>>> + walStats.wal_write_time += msg->m_wal_write_time;\n>>>> + walStats.wal_sync += msg->m_wal_sync;\n>>>> + walStats.wal_sync_time += msg->m_wal_sync_time;\n>>>>\n>>>> I changed the order of the above in pgstat.c so that wal_write_time and\n>>>> wal_sync_time are placed in next to each other.\n>>>\n>>> I forgot to fix them, thanks.\n>>>\n>>>\n>>>> The followings are the comments for the docs part. I've not updated this\n>>>> in the patch yet because I'm not sure how to change them for now.\n>>>> + Number of times WAL buffers were written out to disk via\n>>>> + <function>XLogWrite</function>, which is invoked during an\n>>>> + <function>XLogFlush</function> request (see <xref\n>>>> linkend=\"wal-configuration\"/>)\n>>>> + </para></entry>\n>>>>\n>>>> XLogWrite() can be invoked during the functions other than XLogFlush().\n>>>> For example, XLogBackgroundFlush(). 
So the above description might be\n>>>> confusing?\n>>>>\n>>>> + Number of times WAL files were synced to disk via\n>>>> + <function>issue_xlog_fsync</function>, which is invoked during an\n>>>> + <function>XLogFlush</function> request (see <xref\n>>>> linkend=\"wal-configuration\"/>)\n>>>>\n>>>> Same as above.\n>>>\n>>> Yes, why don't you remove \"XLogFlush\" in the above comments\n>>> because XLogWrite() description is covered in wal.sgml?\n>>>\n>>> But, now it's mentioned only for backend,\n>>> I added the comments for the wal writer in the attached patch.\n>>>\n>>>\n>>>> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n>>>> + \"sync at commit\" options (i.e., <literal>fdatasync</literal>,\n>>>> + <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n>>>>\n>>>> Even open_sync and open_datasync do the sync at commit. No? I'm not sure\n>>>> if \"sync at commit\" is right term to indicate fdatasync, fsync and\n>>>> fsync_writethrough.\n>>>\n>>> Yes, why don't you change to the following comments?\n>>>\n>>> ```\n>>> while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n>>> options which specific fsync method is called (i.e., <literal>fdatasync</literal>,\n>>> <literal>fsync</literal>, or <literal>fsync_writethrough</literal>)\n>>> ```\n>>>\n>>>> + <literal>open_sync</literal>. Units are in milliseconds with\n>>>> microsecond resolution.\n>>>>\n>>>> \"with microsecond resolution\" part is really necessary?\n>>>\n>>> I removed it because blk_read_time in pg_stat_database is the same above,\n>>> but it doesn't mention it.\n>>>\n>>>\n>>>> + transaction records are flushed to permanent storage.\n>>>> + <function>XLogFlush</function> calls <function>XLogWrite</function> to write\n>>>> + and <function>issue_xlog_fsync</function> to flush them, which are\n>>>> counted as\n>>>> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n>>>> + <xref linkend=\"pg-stat-wal-view\"/>. 
On systems with high log output,\n>>>>\n>>>> This description might cause users to misread that XLogFlush() calls\n>>>> issue_xlog_fsync(). Since issue_xlog_fsync() is called by XLogWrite(),\n>>>> ISTM that this description needs to be updated.\n>>>\n>>> I understood. I fixed to mention that XLogWrite()\n>>> calls issue_xlog_fsync().\n>>>\n>>>\n>>>> Each line in the above seems to end with a space character.\n>>>> This space character should be removed.\n>>>\n>>> Sorry for that. I removed it.\n>>\n>> Thanks for updating the patch! I think it's getting good shape!\n>> - pid | wait_event_type | wait_event\n>> + pid | wait_event_type | wait_event\n>>\n>> This change is not necessary?\n> \n> No, sorry.\n> I removed it by mistake when I remove trailing space characters.\n> \n> \n>> - every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds.\n>> + every <xref linkend=\"guc-wal-writer-delay\"/> milliseconds, which calls\n>> + <function>XLogWrite</function> to write and <function>XLogWrite</function>\n>> + <function>issue_xlog_fsync</function> to flush them. They are counted as\n>> + <literal>wal_write</literal> and <literal>wal_sync</literal> in\n>> + <xref linkend=\"pg-stat-wal-view\"/>.\n>>\n>> Isn't it better to avoid using the terms like XLogWrite or issue_xlog_fsync\n>> before explaining what they are? They are explained later. At least for me\n>> I'm ok without this change.\n> \n> OK. I removed them and add a new paragraph.\n> \n> \n>> - to write (move to kernel cache) a few filled <acronym>WAL</acronym>\n>> - buffers. 
This is undesirable because <function>XLogInsertRecord</function>\n>> + to call <function>XLogWrite</function> to write (move to kernel cache) a\n>> + few filled <acronym>WAL</acronym> buffers (the tally of this event\n>> is reported in\n>> + <literal>wal_buffers_full</literal> in <xref linkend=\"pg-stat-wal-view\"/>).\n>> + This is undesirable because <function>XLogInsertRecord</function>\n>>\n>> This paragraph explains the relationshp between WAL writes and WAL\n>> buffers. I don't think it's good to add different context to this\n>> paragraph. Instead, what about adding new paragraph like the follwing?\n>>\n>> ----------------------------------\n>> When track_wal_io_timing is enabled, the total amounts of time\n>> XLogWrite writes and issue_xlog_fsync syncs WAL data to disk are\n>> counted as wal_write_time and wal_sync_time in pg_stat_wal view,\n>> respectively. XLogWrite is normally called by XLogInsertRecord (when\n>> there is no space for the new record in WAL buffers), XLogFlush and\n>> the WAL writer, to write WAL buffers to disk and call\n>> issue_xlog_fsync. If wal_sync_method is either open_datasync or\n>> open_sync, a write operation in XLogWrite guarantees to sync written\n>> WAL data to disk and issue_xlog_fsync does nothing. If wal_sync_method\n>> is either fdatasync, fsync, or fsync_writethrough, the write operation\n>> moves WAL buffer to kernel cache and issue_xlog_fsync syncs WAL files\n>> to disk. 
Regardless of the setting of track_wal_io_timing, the numbers\n>> of times XLogWrite writes and issue_xlog_fsync syncs WAL data to disk\n>> are also counted as wal_write and wal_sync in pg_stat_wal,\n>> respectively.\n>> ----------------------------------\n> \n> Thanks, I agree it's better.\n> \n> \n>> + <function>issue_xlog_fsync</function> (see <xref\n>> linkend=\"wal-configuration\"/>)\n>>\n>> \"request\" should be place just before \"(see\"?\n> \n> Yes, thanks.\n> \n> \n> \n>> + Number of times WAL files were synced to disk via\n>> + <function>issue_xlog_fsync</function> (see <xref\n>> linkend=\"wal-configuration\"/>)\n>> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n>> + options which specific fsync method is called (i.e.,\n>> <literal>fdatasync</literal>,\n>> + <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n>>\n>> Isn't it better to mention the case of fsync=off? What about the following?\n>>\n>> ----------------------------------\n>> Number of times WAL files were synced to disk via issue_xlog_fsync\n>> (see ...). 
This is zero when fsync is off or wal_sync_method is either\n>> open_datasync or open_sync.\n>> ----------------------------------\n> \n> Yes.\n> \n> \n>> + Total amount of time spent writing WAL buffers were written\n>> out to disk via\n>>\n>> \"were written out\" is not necessary?\n> \n> Yes, removed it.\n> \n>> + Total amount of time spent syncing WAL files to disk via\n>> + <function>issue_xlog_fsync</function> request (see <xref\n>> linkend=\"wal-configuration\"/>)\n>> + while <xref linkend=\"guc-wal-sync-method\"/> was set to one of the\n>> + options which specific fsync method is called (i.e.,\n>> <literal>fdatasync</literal>,\n>> + <literal>fsync</literal>, or <literal>fsync_writethrough</literal>).\n>> + Units are in milliseconds.\n>> + This is zero when <xref linkend=\"guc-track-wal-io-timing\"/> is disabled.\n>>\n>> Isn't it better to explain the case where this counter is zero a bit\n>> more clearly as follows?\n>>\n>> ---------------------\n>> This is zero when track_wal_io_timing is disabled, fsync is off, or\n>> wal_sync_method is either open_datasync or open_sync.\n>> ---------------------\n> \n> Yes, thanks.\n\nThanks for updating the patch! I applied cosmetic changes to that.\nPatch attached. Barring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 9 Mar 2021 00:48:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
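The PGSTAT_STAT_INTERVAL throttling discussed in the message above (moving the interval check into pgstat_send_wal() so that a short wal_writer_delay cannot flood the stats collector) can be sketched as standalone C. All names below are illustrative stand-ins for the PostgreSQL symbols, not the real code:

```c
#include <stdbool.h>

#define STAT_INTERVAL_MS 500    /* stand-in for PGSTAT_STAT_INTERVAL */

typedef long long TimestampMs;

static TimestampMs last_report = 0;

/* Send stats only if the interval has elapsed; returns true when "sent". */
static bool
maybe_send_wal_stats(TimestampMs now)
{
    if (now - last_report < STAT_INTERVAL_MS)
        return false;           /* too soon: skip to limit messaging overhead */
    last_report = now;
    /* ... the real pgstat_send_wal() would send its message here ... */
    return true;
}

/* Simulate walwriter wakeups every 200 ms (the default wal_writer_delay)
 * over two seconds and count how many reports actually go out. */
static int
simulate_wakeups(void)
{
    int sent = 0;
    TimestampMs t;

    last_report = 0;
    for (t = 0; t <= 2000; t += 200)
        if (maybe_send_wal_stats(t))
            sent++;
    return sent;                /* 3 with the constants above */
}
```

With these constants the eleven simulated wakeups produce only three reports, which is the behavior the patch aims for: the report rate is bounded by the stats interval, not by wal_writer_delay.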
{
"msg_contents": "On Mon, Mar 8, 2021 at 8:48 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n> Thanks for updating the patch! I applied cosmetic changes to that.\n> Patch attached. Barring any objection, I will commit this version.\n>\n\nRead over the patch and it looks good.\n\nOne minor \"the\" omission (in a couple of places, copy-paste style):\n\n+ See <xref linkend=\"wal-configuration\"/> for more information about\n+ internal WAL function <function>XLogWrite</function>.\n\n\"about *the* internal WAL function\"\n\nAlso, I'm not sure why you find omitting documentation that the millisecond\nfield has a fractional part out to microseconds to be helpful.\n\nDavid J.",
"msg_date": "Mon, 8 Mar 2021 12:47:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/09 4:47, David G. Johnston wrote:\n> On Mon, Mar 8, 2021 at 8:48 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> Thanks for updating the patch! I applied cosmetic changes to that.\n> Patch attached. Barring any objection, I will commit this version.\n> \n> \n> Read over the patch and it looks good.\n\nThanks for the review! I committed the patch.\n\n\n> \n> One minor \"the\" omission (in a couple of places, copy-paste style):\n> \n> + See <xref linkend=\"wal-configuration\"/> for more information about\n> + internal WAL function <function>XLogWrite</function>.\n> \n> \"about *the* internal WAL function\"\n\nI added \"the\" in such two places. Thanks!\n\n\n> \n> Also, I'm not sure why you find omitting documentation that the millisecond field has a fractional part out to microseconds to be helpful.\n\nIf this information should be documented, we should do that for\nnot only wal_write/sync_time but also other several columns,\nfor example, pg_stat_database.blk_write_time?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Mar 2021 17:02:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/05 8:38, Masahiro Ikeda wrote:\n> On 2021-03-05 01:02, Fujii Masao wrote:\n>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>\n>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>\n>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>\n>>>>>>>>> is ether -> is either\n>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>> operating system...\"\n>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>\n>>>>>>>> Thanks, I fixed them.\n>>>>>>>>\n>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>\n>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>\n>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable because ...\"\n>>>>>>>>\n>>>>>>>> Thanks, I fixed it.\n>>>>>>>>\n>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>\n>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>\n>>>>>>> Thanks for updating the patches!\n>>>>>>>\n>>>>>>>\n>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>> What do you think?\n>>>>>>>\n>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>> at that moment. 
IOW, it seems strange that some values show dynamic\n>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>\n>>>>>> OK, I fixed it.\n>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>\n>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>\n>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>> + if (enableFsync &&\n>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>> + {\n>>>>>\n>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>> to simplify the code more?\n>>>>\n>>>> Thanks for the comments.\n>>>> I added the above code in v12 patch.\n>>>>\n>>>>>\n>>>>> + /*\n>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>> + * the overhead in WAL-writing.\n>>>>> + */\n>>>>> + if (rc & WL_TIMEOUT)\n>>>>> + pgstat_send_wal();\n>>>>>\n>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>\n>>>> Thanks, I didn't notice that.\n>>>>\n>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>> default value is 200msec and it may be set shorter time.\n>>\n>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>\n>>>>\n>>>> Why don't to make another way to check the timestamp?\n>>>>\n>>>> + /*\n>>>> + * Don't send a message unless it's been at least\n>>>> PGSTAT_STAT_INTERVAL\n>>>> + * msec since we last sent one\n>>>> + */\n>>>> + now = GetCurrentTimestamp();\n>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>> PGSTAT_STAT_INTERVAL))\n>>>> + {\n>>>> + pgstat_send_wal();\n>>>> + last_report = now;\n>>>> + }\n>>>> +\n>>>>\n>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>\n>> Agreed.\n>>\n>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>> PGSTAT_STAT_INTERVAL.\n>>\n>> I think that we can do that. What about the attached patch?\n> \n> Thanks, I thought it's better.\n> \n> \n>>> I forgot to remove an unused variable.\n>>> The attached v13 patch is fixed.\n>>\n>> Thanks for updating the patch!\n>>\n>> + w.wal_write,\n>> + w.wal_write_time,\n>> + w.wal_sync,\n>> + w.wal_sync_time,\n>>\n>> It's more natural to put wal_write_time and wal_sync_time next to\n>> each other? That is, what about the following order of columns?\n>>\n>> wal_write\n>> wal_sync\n>> wal_write_time\n>> wal_sync_time\n> \n> Yes, I fixed it.\n> \n>> - case SYNC_METHOD_OPEN:\n>> - case SYNC_METHOD_OPEN_DSYNC:\n>> - /* write synced it already */\n>> - break;\n>>\n>> IMO it's better to add Assert(false) here to ensure that we never reach\n>> here, as follows. Thought?\n>>\n>> + case SYNC_METHOD_OPEN:\n>> + case SYNC_METHOD_OPEN_DSYNC:\n>> + /* not reachable */\n>> + Assert(false);\n> \n> I agree.\n> \n> \n>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>> the stats even at its exit? 
Otherwise some stats can fail to be collected.\n>> But ISTM that this issue existed from before, for example checkpointer\n>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>> this issue in this patch?\n> \n> Thanks, I thought it's better to do so.\n> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n\nThanks for 0003 patch!\n\nIsn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\njust send the stats only when ShutdownRequestPending is true in the walwriter\nmain loop (maybe just before calling HandleMainLoopInterrupts()).\nIf we do this, we cannot send the stats when walwriter throws FATAL error.\nBut that's ok because FATAL error on walwriter causes the server to crash.\nThought?\n\nAlso ISTM that we don't need to use the callback for that purpose in\ncheckpointer because of the same reason. That is, we can send the stats\njust after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\nThought?\n\nI'm now not sure how much useful these changes are. As far as I read pgstat.c,\nwhen shutdown is requested, the stats collector seems to exit even when\nthere are outstanding stats messages. So if checkpointer and walwriter send\nthe stats in their last cycles, those stats might not be collected.\n\nOn the other hand, I can think that sending the stats in the last cycles would\nimprove the situation a bit than now. So I'm inclined to apply those changes...\n\nOf course, there is another direction; we can improve the stats collector so\nthat it guarantees to collect all the sent stats messages. But I'm afraid\nthis change might be big.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Mar 2021 17:51:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
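The early-return idea quoted in the thread above (have issue_xlog_fsync() do nothing when enableFsync is off, or when wal_sync_method is open_sync or open_datasync, since those open flags already sync at write time) can be sketched as follows. The enum and counter are simplified stand-ins for the PostgreSQL definitions, not the actual implementation:

```c
#include <stdbool.h>

typedef enum
{
    SYNC_METHOD_FSYNC,
    SYNC_METHOD_FDATASYNC,
    SYNC_METHOD_FSYNC_WRITETHROUGH,
    SYNC_METHOD_OPEN,           /* O_SYNC: the write already synced it */
    SYNC_METHOD_OPEN_DSYNC      /* O_DSYNC: the write already synced it */
} SyncMethod;

static long wal_sync_calls = 0;

/* Returns true when a sync was actually issued (and counted). */
static bool
issue_xlog_fsync_sketch(bool enable_fsync, SyncMethod method)
{
    /* Nothing to do: either fsync is off, or the open flags
     * guarantee the preceding write reached stable storage. */
    if (!enable_fsync ||
        method == SYNC_METHOD_OPEN ||
        method == SYNC_METHOD_OPEN_DSYNC)
        return false;

    /* ... pg_fsync() / pg_fdatasync() would run here ... */
    wal_sync_calls++;           /* counted as wal_sync in pg_stat_wal */
    return true;
}
```

This mirrors why wal_sync (and wal_sync_time) stay zero when fsync is off or wal_sync_method is open_datasync or open_sync: the instrumented path is skipped entirely rather than recorded as a zero-duration sync.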
{
"msg_contents": "On 2021-03-09 17:51, Fujii Masao wrote:\n> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>> \n>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>> \n>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is \n>>>>>>>>>> also\n>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>> \n>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" \n>>>>>>>>>> or\n>>>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>>>> original)\n>>>>>>>>>> You missed the adding the space before an opening parenthesis \n>>>>>>>>>> here and\n>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>> \n>>>>>>>>>> is ether -> is either\n>>>>>>>>>> \"This parameter is off by default as it will repeatedly query \n>>>>>>>>>> the\n>>>>>>>>>> operating system...\"\n>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>> \n>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>> \n>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is \n>>>>>>>>>> also\n>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>> \n>>>>>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>>>>>> receiver\n>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>> \n>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this \n>>>>>>>>>> event is\n>>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable \n>>>>>>>>>> because ...\"\n>>>>>>>>> \n>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>> \n>>>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>>>> explicitly\n>>>>>>>>>> computing the sync statistics but does require computing the \n>>>>>>>>>> write\n>>>>>>>>>> statistics. This is because of the presence of \n>>>>>>>>>> issue_xlog_fsync but\n>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>>>> observe that\n>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the \n>>>>>>>>>> WAL\n>>>>>>>>>> receiver path does not. It seems technically straight-forward \n>>>>>>>>>> to\n>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two \n>>>>>>>>>> places,\n>>>>>>>>>> though I suspect there may be a trade-off for not adding \n>>>>>>>>>> another\n>>>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>>>> processing\n>>>>>>>>>> (though that seems marginalized compared to the cost of \n>>>>>>>>>> actually\n>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and \n>>>>>>>>>> don't have\n>>>>>>>>>> any shared code between the two but instead implement the WAL \n>>>>>>>>>> receiver\n>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>> \n>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>> \n>>>>>>>> Thanks for updating the patches!\n>>>>>>>> \n>>>>>>>> \n>>>>>>>>> I added the infrastructure code to communicate the WAL receiver \n>>>>>>>>> stats messages between the WAL receiver and the stats \n>>>>>>>>> collector, and\n>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>> What do you think?\n>>>>>>>> \n>>>>>>>> On second thought, this idea seems not good. 
Because those stats \n>>>>>>>> are\n>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process \n>>>>>>>> running\n>>>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>>>> dynamic\n>>>>>>>> stats and the others show collected stats, even though they are \n>>>>>>>> in\n>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>> \n>>>>>>> OK, I fixed it.\n>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal \n>>>>>>> view in v11 patch.\n>>>>>> \n>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>> \n>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>> + if (enableFsync &&\n>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>> + {\n>>>>>> \n>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>> to simplify the code more?\n>>>>> \n>>>>> Thanks for the comments.\n>>>>> I added the above code in v12 patch.\n>>>>> \n>>>>>> \n>>>>>> + /*\n>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed \n>>>>>> to minimize\n>>>>>> + * the overhead in WAL-writing.\n>>>>>> + */\n>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>> + pgstat_send_wal();\n>>>>>> \n>>>>>> On second thought, this change means that it always takes \n>>>>>> wal_writer_delay\n>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() \n>>>>>> is called.\n>>>>>> For example, if wal_writer_delay is set to several seconds, some \n>>>>>> values in\n>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those \n>>>>>> seconds.\n>>>>>> So I'm thinking to withdraw my previous comment and it's ok to \n>>>>>> send\n>>>>>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n>>>>> \n>>>>> Thanks, I didn't notice that.\n>>>>> \n>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>> default value is 200msec and it may be set shorter time.\n>>> \n>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>> that the WAL stats are sent too frequently. I agree that's a problem.\n>>> \n>>>>> \n>>>>> Why don't to make another way to check the timestamp?\n>>>>> \n>>>>> + /*\n>>>>> + * Don't send a message unless it's been at least\n>>>>> PGSTAT_STAT_INTERVAL\n>>>>> + * msec since we last sent one\n>>>>> + */\n>>>>> + now = GetCurrentTimestamp();\n>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>> PGSTAT_STAT_INTERVAL))\n>>>>> + {\n>>>>> + pgstat_send_wal();\n>>>>> + last_report = now;\n>>>>> + }\n>>>>> +\n>>>>> \n>>>>> Although I worried that it's better to add the check code in \n>>>>> pgstat_send_wal(),\n>>> \n>>> Agreed.\n>>> \n>>>>> I didn't do so because to avoid to double check \n>>>>> PGSTAT_STAT_INTERVAL.\n>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks \n>>>>> the\n>>>>> PGSTAT_STAT_INTERVAL.\n>>> \n>>> I think that we can do that. What about the attached patch?\n>> \n>> Thanks, I thought it's better.\n>> \n>> \n>>>> I forgot to remove an unused variable.\n>>>> The attached v13 patch is fixed.\n>>> \n>>> Thanks for updating the patch!\n>>> \n>>> + w.wal_write,\n>>> + w.wal_write_time,\n>>> + w.wal_sync,\n>>> + w.wal_sync_time,\n>>> \n>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>> each other? That is, what about the following order of columns?\n>>> \n>>> wal_write\n>>> wal_sync\n>>> wal_write_time\n>>> wal_sync_time\n>> \n>> Yes, I fixed it.\n>> \n>>> - case SYNC_METHOD_OPEN:\n>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>> - /* write synced it already */\n>>> - break;\n>>> \n>>> IMO it's better to add Assert(false) here to ensure that we never \n>>> reach\n>>> here, as follows. 
Thought?\n>>> \n>>> + case SYNC_METHOD_OPEN:\n>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>> + /* not reachable */\n>>> + Assert(false);\n>> \n>> I agree.\n>> \n>> \n>>> Even when a backend exits, it sends the stats via \n>>> pgstat_beshutdown_hook().\n>>> On the other hand, walwriter doesn't do that. Walwriter also should \n>>> send\n>>> the stats even at its exit? Otherwise some stats can fail to be \n>>> collected.\n>>> But ISTM that this issue existed from before, for example \n>>> checkpointer\n>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to \n>>> fix\n>>> this issue in this patch?\n>> \n>> Thanks, I thought it's better to do so.\n>> I added the shutdown hook for the walwriter and the checkpointer in \n>> v14-0003 patch.\n> \n> Thanks for 0003 patch!\n> \n> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO \n> we can\n> just send the stats only when ShutdownRequestPending is true in the \n> walwriter\n> main loop (maybe just before calling HandleMainLoopInterrupts()).\n> If we do this, we cannot send the stats when walwriter throws FATAL \n> error.\n> But that's ok because FATAL error on walwriter causes the server to \n> crash.\n> Thought?\n\nThanks for your comments!\nYes, I agree.\n\n\n> Also ISTM that we don't need to use the callback for that purpose in\n> checkpointer because of the same reason. That is, we can send the stats\n> just after calling ShutdownXLOG(0, 0) in \n> HandleCheckpointerInterrupts().\n> Thought?\n\nYes, I think so too.\n\nSince ShutdownXLOG() may create restartpoint or checkpoint,\nit might generate WAL records.\n\n\n> I'm now not sure how much useful these changes are. As far as I read \n> pgstat.c,\n> when shutdown is requested, the stats collector seems to exit even when\n> there are outstanding stats messages. 
So if checkpointer and walwriter \n> send\n> the stats in their last cycles, those stats might not be collected.\n> \n> On the other hand, I can think that sending the stats in the last \n> cycles would\n> improve the situation a bit than now. So I'm inclined to apply those \n> changes...\n\nI didn't notice that. I agree this is an important aspect.\nI understand there is a case where the stats collector exits before the \ncheckpointer\nor the walwriter exits, so some stats might not be collected.\n\n\n> Of course, there is another direction; we can improve the stats \n> collector so\n> that it guarantees to collect all the sent stats messages. But I'm \n> afraid\n> this change might be big.\n\nFor example, manage the background process status in shared \nmemory so that\nthe stats collector keeps collecting stats until every other background process \nhas exited?\n\nIn my understanding, the statistics don't require high accuracy,\nso it's ok to ignore some of them if the impact is not big.\n\nIf we were to guarantee high accuracy, another background process like the \nautovacuum launcher\nwould also have to send WAL stats, because it accesses the system catalog and might \ngenerate\nWAL records due to HOT updates, even though the possibility is low.\n\nI think the impact is small because the period during which uncollected stats are \ngenerated is\nshort compared to the time since startup. So, it's ok to ignore the \nremaining stats\nwhen the process exits.\n\nBTW, I found BgWriterStats.m_timed_checkpoints is not counted in \nShutdownXLOG()\nand we need to count it if we want to collect the stats before the process exits.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 10 Mar 2021 14:11:49 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/10 14:11, Masahiro Ikeda wrote:\n> On 2021-03-09 17:51, Fujii Masao wrote:\n>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>\n>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>\n>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>\n>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>\n>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>> operating system...\"\n>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>\n>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>\n>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>\n>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>> reported in wal_buffers_full in....) 
This is undesirable because ...\"\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>\n>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>\n>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>> What do you think?\n>>>>>>>>>\n>>>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>> at that moment. 
IOW, it seems strange that some values show dynamic\n>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>\n>>>>>>>> OK, I fixed it.\n>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>\n>>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>>>\n>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>> + if (enableFsync &&\n>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>> + {\n>>>>>>>\n>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>> to simplify the code more?\n>>>>>>\n>>>>>> Thanks for the comments.\n>>>>>> I added the above code in v12 patch.\n>>>>>>\n>>>>>>>\n>>>>>>> + /*\n>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>> + * the overhead in WAL-writing.\n>>>>>>> + */\n>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>> + pgstat_send_wal();\n>>>>>>>\n>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>\n>>>>>> Thanks, I didn't notice that.\n>>>>>>\n>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>> default value is 200msec and it may be set shorter time.\n>>>>\n>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>>>\n>>>>>>\n>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>\n>>>>>> + /*\n>>>>>> + * Don't send a message unless it's been at least\n>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>> + * msec since we last sent one\n>>>>>> + */\n>>>>>> + now = GetCurrentTimestamp();\n>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>> + {\n>>>>>> + pgstat_send_wal();\n>>>>>> + last_report = now;\n>>>>>> + }\n>>>>>> +\n>>>>>>\n>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>\n>>>> Agreed.\n>>>>\n>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>\n>>>> I think that we can do that. What about the attached patch?\n>>>\n>>> Thanks, I thought it's better.\n>>>\n>>>\n>>>>> I forgot to remove an unused variable.\n>>>>> The attached v13 patch is fixed.\n>>>>\n>>>> Thanks for updating the patch!\n>>>>\n>>>> + w.wal_write,\n>>>> + w.wal_write_time,\n>>>> + w.wal_sync,\n>>>> + w.wal_sync_time,\n>>>>\n>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>> each other? That is, what about the following order of columns?\n>>>>\n>>>> wal_write\n>>>> wal_sync\n>>>> wal_write_time\n>>>> wal_sync_time\n>>>\n>>> Yes, I fixed it.\n>>>\n>>>> - case SYNC_METHOD_OPEN:\n>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>> - /* write synced it already */\n>>>> - break;\n>>>>\n>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>> here, as follows. Thought?\n>>>>\n>>>> + case SYNC_METHOD_OPEN:\n>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>> + /* not reachable */\n>>>> + Assert(false);\n>>>\n>>> I agree.\n>>>\n>>>\n>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>> the stats even at its exit? 
Otherwise some stats can fail to be collected.\n>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>> this issue in this patch?\n>>>\n>>> Thanks, I thought it's better to do so.\n>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>\n>> Thanks for 0003 patch!\n>>\n>> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\n>> just send the stats only when ShutdownRequestPending is true in the walwriter\n>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>> If we do this, we cannot send the stats when walwriter throws FATAL error.\n>> But that's ok because FATAL error on walwriter causes the server to crash.\n>> Thought?\n> \n> Thanks for your comments!\n> Yes, I agree.\n> \n> \n>> Also ISTM that we don't need to use the callback for that purpose in\n>> checkpointer because of the same reason. That is, we can send the stats\n>> just after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\n>> Thought?\n> \n> Yes, I think so too.\n> \n> Since ShutdownXLOG() may create restartpoint or checkpoint,\n> it might generate WAL records.\n> \n> \n>> I'm now not sure how much useful these changes are. As far as I read pgstat.c,\n>> when shutdown is requested, the stats collector seems to exit even when\n>> there are outstanding stats messages. So if checkpointer and walwriter send\n>> the stats in their last cycles, those stats might not be collected.\n>>\n>> On the other hand, I can think that sending the stats in the last cycles would\n>> improve the situation a bit than now. So I'm inclined to apply those changes...\n> \n> I didn't notice that. 
I agree this is an important aspect.\n> I understand there is a case where the stats collector exits before the checkpointer\n> or the walwriter exits, so some stats might not be collected.\n\nIIUC the stats collector basically exits after checkpointer and walwriter exit.\nBut there seems to be no guarantee that the stats collector processes\nall the messages that other processes have sent during the shutdown of\nthe server.\n\n\n> \n> \n>> Of course, there is another direction; we can improve the stats collector so\n>> that it guarantees to collect all the sent stats messages. But I'm afraid\n>> this change might be big.\n> \n> For example, manage the background process status in shared memory so that\n> the stats collector keeps collecting the stats until every other background process exits?\n> \n> In my understanding, the statistics don't require high accuracy,\n> so it's ok to ignore some of them if the impact is not big.\n> \n> If we want to guarantee high accuracy, other background processes like the autovacuum launcher\n> must also send the WAL stats because they access the system catalog and might generate\n> WAL records due to HOT updates even though the possibility is low.\n> \n> I thought the impact is small because the window in which uncollected stats are generated is\n> short compared to the total uptime. So, it's ok to ignore the remaining stats\n> when the process exits.\n\nI agree that it's not worth changing lots of code to collect such stats.\nBut if we can implement that very simply, isn't it more worthwhile\nthan the current situation because we may be able to collect more\naccurate stats?\n\n\n> BTW, I found BgWriterStats.m_timed_checkpoints is not counted in ShutdownXLOG()\n> and we need to count it if we collect the stats before it exits.\n\nMaybe m_requested_checkpoints should be incremented in that case?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 10 Mar 2021 17:08:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-10 17:08, Fujii Masao wrote:\n> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>> \n>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>> \n>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during \n>>>>>>>>>>>> an\n>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is \n>>>>>>>>>>>> also\n>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>> \n>>>>>>>>>>>> (\"which normally called\" should be \"which is normally \n>>>>>>>>>>>> called\" or\n>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>>>>>> original)\n>>>>>>>>>>>> You missed the adding the space before an opening \n>>>>>>>>>>>> parenthesis here and\n>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>> \n>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly \n>>>>>>>>>>>> query the\n>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>> \n>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>> \n>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This \n>>>>>>>>>>>> is also\n>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>> \n>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL \n>>>>>>>>>>> receiver\n>>>>>>>>>>> in 
pg_stat_wal_receiver.\n>>>>>>>>>>> \n>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this \n>>>>>>>>>>>> event is\n>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable \n>>>>>>>>>>>> because ...\"\n>>>>>>>>>>> \n>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>> \n>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>>>>>> explicitly\n>>>>>>>>>>>> computing the sync statistics but does require computing the \n>>>>>>>>>>>> write\n>>>>>>>>>>>> statistics. This is because of the presence of \n>>>>>>>>>>>> issue_xlog_fsync but\n>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>>>>>> observe that\n>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while \n>>>>>>>>>>>> the WAL\n>>>>>>>>>>>> receiver path does not. It seems technically \n>>>>>>>>>>>> straight-forward to\n>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the \n>>>>>>>>>>>> two places,\n>>>>>>>>>>>> though I suspect there may be a trade-off for not adding \n>>>>>>>>>>>> another\n>>>>>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>>>>>> processing\n>>>>>>>>>>>> (though that seems marginalized compared to the cost of \n>>>>>>>>>>>> actually\n>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and \n>>>>>>>>>>>> don't have\n>>>>>>>>>>>> any shared code between the two but instead implement the \n>>>>>>>>>>>> WAL receiver\n>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. 
In either case, \n>>>>>>>>>>>> this\n>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>> \n>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver \n>>>>>>>>>>> stats.\n>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>> \n>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>> \n>>>>>>>>>> \n>>>>>>>>>>> I added the infrastructure code to communicate the WAL \n>>>>>>>>>>> receiver stats messages between the WAL receiver and the \n>>>>>>>>>>> stats collector, and\n>>>>>>>>>>> the stats for WAL receiver is counted in \n>>>>>>>>>>> pg_stat_wal_receiver.\n>>>>>>>>>>> What do you think?\n>>>>>>>>>> \n>>>>>>>>>> On second thought, this idea seems not good. Because those \n>>>>>>>>>> stats are\n>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver \n>>>>>>>>>> process running\n>>>>>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>>>>>> dynamic\n>>>>>>>>>> stats and the others show collected stats, even though they \n>>>>>>>>>> are in\n>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>> \n>>>>>>>>> OK, I fixed it.\n>>>>>>>>> The stats collected in the WAL receiver is exposed in \n>>>>>>>>> pg_stat_wal view in v11 patch.\n>>>>>>>> \n>>>>>>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>>>>>> \n>>>>>>>> + /* Check whether the WAL file was synced to disk right now \n>>>>>>>> */\n>>>>>>>> + if (enableFsync &&\n>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>> + {\n>>>>>>>> \n>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>> if enableFsync is off, sync_method is open_sync or \n>>>>>>>> open_data_sync,\n>>>>>>>> to simplify the code more?\n>>>>>>> \n>>>>>>> Thanks for the comments.\n>>>>>>> I added the above code in v12 patch.\n>>>>>>> \n>>>>>>>> \n>>>>>>>> + /*\n>>>>>>>> + * Send WAL statistics only if WalWriterDelay has \n>>>>>>>> elapsed to minimize\n>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>> + */\n>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>> + pgstat_send_wal();\n>>>>>>>> \n>>>>>>>> On second thought, this change means that it always takes \n>>>>>>>> wal_writer_delay\n>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() \n>>>>>>>> is called.\n>>>>>>>> For example, if wal_writer_delay is set to several seconds, some \n>>>>>>>> values in\n>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those \n>>>>>>>> seconds.\n>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to \n>>>>>>>> send\n>>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>> \n>>>>>>> Thanks, I didn't notice that.\n>>>>>>> \n>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>> \n>>>>> Yeah, if wal_writer_delay is set to very small value, there is a \n>>>>> risk\n>>>>> that the WAL stats are sent too frequently. 
I agree that's a \n>>>>> problem.\n>>>>> \n>>>>>>> \n>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>> \n>>>>>>> + /*\n>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>> + * msec since we last sent one\n>>>>>>> + */\n>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>> + {\n>>>>>>> + pgstat_send_wal();\n>>>>>>> + last_report = now;\n>>>>>>> + }\n>>>>>>> +\n>>>>>>> \n>>>>>>> Although I worried that it's better to add the check code in \n>>>>>>> pgstat_send_wal(),\n>>>>> \n>>>>> Agreed.\n>>>>> \n>>>>>>> I didn't do so because to avoid to double check \n>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already \n>>>>>>> checks the\n>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>> \n>>>>> I think that we can do that. What about the attached patch?\n>>>> \n>>>> Thanks, I thought it's better.\n>>>> \n>>>> \n>>>>>> I forgot to remove an unused variable.\n>>>>>> The attached v13 patch is fixed.\n>>>>> \n>>>>> Thanks for updating the patch!\n>>>>> \n>>>>> + w.wal_write,\n>>>>> + w.wal_write_time,\n>>>>> + w.wal_sync,\n>>>>> + w.wal_sync_time,\n>>>>> \n>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>> each other? That is, what about the following order of columns?\n>>>>> \n>>>>> wal_write\n>>>>> wal_sync\n>>>>> wal_write_time\n>>>>> wal_sync_time\n>>>> \n>>>> Yes, I fixed it.\n>>>> \n>>>>> - case SYNC_METHOD_OPEN:\n>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>> - /* write synced it already */\n>>>>> - break;\n>>>>> \n>>>>> IMO it's better to add Assert(false) here to ensure that we never \n>>>>> reach\n>>>>> here, as follows. 
Thought?\n>>>>> \n>>>>> + case SYNC_METHOD_OPEN:\n>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>> + /* not reachable */\n>>>>> + Assert(false);\n>>>> \n>>>> I agree.\n>>>> \n>>>> \n>>>>> Even when a backend exits, it sends the stats via \n>>>>> pgstat_beshutdown_hook().\n>>>>> On the other hand, walwriter doesn't do that. Walwriter also should \n>>>>> send\n>>>>> the stats even at its exit? Otherwise some stats can fail to be \n>>>>> collected.\n>>>>> But ISTM that this issue existed from before, for example \n>>>>> checkpointer\n>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill \n>>>>> to fix\n>>>>> this issue in this patch?\n>>>> \n>>>> Thanks, I thought it's better to do so.\n>>>> I added the shutdown hook for the walwriter and the checkpointer in \n>>>> v14-0003 patch.\n>>> \n>>> Thanks for 0003 patch!\n>>> \n>>> Isn't it overkill to send the stats in the walwriter-exit-callback? \n>>> IMO we can\n>>> just send the stats only when ShutdownRequestPending is true in the \n>>> walwriter\n>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>> If we do this, we cannot send the stats when walwriter throws FATAL \n>>> error.\n>>> But that's ok because FATAL error on walwriter causes the server to \n>>> crash.\n>>> Thought?\n>> \n>> Thanks for your comments!\n>> Yes, I agree.\n>> \n>> \n>>> Also ISTM that we don't need to use the callback for that purpose in\n>>> checkpointer because of the same reason. That is, we can send the \n>>> stats\n>>> just after calling ShutdownXLOG(0, 0) in \n>>> HandleCheckpointerInterrupts().\n>>> Thought?\n>> \n>> Yes, I think so too.\n>> \n>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>> it might generate WAL records.\n>> \n>> \n>>> I'm now not sure how much useful these changes are. As far as I read \n>>> pgstat.c,\n>>> when shutdown is requested, the stats collector seems to exit even \n>>> when\n>>> there are outstanding stats messages. 
So if checkpointer and \n>>> walwriter send\n>>> the stats in their last cycles, those stats might not be collected.\n>>> \n>>> On the other hand, I can think that sending the stats in the last \n>>> cycles would\n>>> improve the situation a bit than now. So I'm inclined to apply those \n>>> changes...\n>> \n>> I didn't notice that. I agree this is an important aspect.\n>> I understood there is a case that the stats collector exits before the \n>> checkpointer\n>> or the walwriter exits and some stats might not be collected.\n> \n> IIUC the stats collector basically exits after checkpointer and \n> walwriter exit.\n> But there seems no guarantee that the stats collector processes\n> all the messages that other processes have sent during the shutdown of\n> the server.\n\nThanks, I understood the above postmaster behaviors.\n\nPMState manages the status and after checkpointer is exited, the \npostmaster sends\nSIGQUIT signal to the stats collector if the shutdown mode is smart or \nfast.\n(IIUC, although the postmaster kill the walsender, the archiver and\nthe stats collector at the same time, it's ok because the walsender\nand the archiver doesn't send stats to the stats collector now.)\n\nBut, there might be a corner case to lose stats sent by background \nworkers like\nthe checkpointer before they exit (although this is not implemented \nyet.)\n\nFor example,\n\n1. checkpointer send the stats before it exit\n2. stats collector receive the signal and break before processing\n the stats message from checkpointer. In this case, 1's message is \nlost.\n3. 
stats collector writes the stats in the statsfiles and exit\n\nWhy don't you recheck the coming message is zero just before the 2th \nprocedure?\n(v17-0004-guarantee-to-collect-last-stats-messages.patch)\n\n\nI measured the timing of the above in my linux laptop using \nv17-measure-timing.patch.\nI don't have any strong opinion to handle this case since this result \nshows to receive and processes\nthe messages takes too short time (less than 1ms) although the stats \ncollector receives the shutdown\nsignal in 5msec(099->104) after the checkpointer process exits.\n\n```\n1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to make \n # exit and send the messages\n1615421208.099 [stats collector] DEBUG: process BGWRITER stats message \n # receive and process the messages\n1615421208.099 [stats collector] DEBUG: process WAL stats message\n1615421208.104 [postmaster] DEBUG: reaping dead processes\n1615421208.104 [stats collector] DEBUG: received shutdown request \nsignal # receive shutdown request from the postmaster\n```\n\n>>> Of course, there is another direction; we can improve the stats \n>>> collector so\n>>> that it guarantees to collect all the sent stats messages. 
But I'm \n>>> afraid\n>>> this change might be big.\n>> \n>> For example, manage the background process status in shared \n>> memory so that\n>> the stats collector keeps collecting the stats until every other background \n>> process exits?\n>> \n>> In my understanding, the statistics don't require high accuracy,\n>> so it's ok to ignore some of them if the impact is not big.\n>> \n>> If we want to guarantee high accuracy, other background processes like the \n>> autovacuum launcher\n>> must also send the WAL stats because they access the system catalog and \n>> might generate\n>> WAL records due to HOT updates even though the possibility is low.\n>> \n>> I thought the impact is small because the window in which uncollected stats \n>> are generated is\n>> short compared to the total uptime. So, it's ok to ignore the \n>> remaining stats\n>> when the process exits.\n> \n> I agree that it's not worth changing lots of code to collect such \n> stats.\n> But if we can implement that very simply, isn't it more worthwhile\n> than the current situation because we may be able to collect more\n> accurate stats?\n\nYes, I agree.\nI attached the patches to send the stats before the walwriter and the \ncheckpointer exit.\n(v17-0001-send-stats-for-walwriter-when-shutdown.patch, \nv17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n\n\n>> BTW, I found BgWriterStats.m_timed_checkpoints is not counted in \n>> ShutdownXLOG()\n>> and we need to count it if we collect the stats before it exits.\n> \n> Maybe m_requested_checkpoints should be incremented in that case?\n\nI thought m_requested_checkpoints should be incremented\nbecause ShutdownXLOG() invokes these methods with CHECKPOINT_IS_SHUTDOWN.\n\n```ShutdownXLOG()\n CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n CreateCheckPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n```\n\nI fixed this in v17-0002-send-stats-for-checkpointer-when-shutdown.patch.\n\n\nIn addition, I rebased the patch for WAL 
receiver.\n(v17-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 11 Mar 2021 09:38:43 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/11 9:38, Masahiro Ikeda wrote:\n> On 2021-03-10 17:08, Fujii Masao wrote:\n>> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). This is also\n>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> \"The number of 
times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>\n>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>>>> receiver path does not. It seems technically straight-forward to\n>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>\n>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>\n>>>>>>>>>>> On second thought, this idea seems not good. 
Because those stats are\n>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>>>\n>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>>>>>\n>>>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>>>> + if (enableFsync &&\n>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>> + {\n>>>>>>>>>\n>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>>>> to simplify the code more?\n>>>>>>>>\n>>>>>>>> Thanks for the comments.\n>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> + /*\n>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>> + */\n>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>\n>>>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>>>> the stats every after XLogBackgroundFlush() is called. 
Thought?\n>>>>>>>>\n>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>\n>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>\n>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>>>> that the WAL stats are sent too frequently. I agree that's a problem.\n>>>>>>\n>>>>>>>>\n>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>\n>>>>>>>> + /*\n>>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>> + * msec since we last sent one\n>>>>>>>> + */\n>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>> + {\n>>>>>>>> + pgstat_send_wal();\n>>>>>>>> + last_report = now;\n>>>>>>>> + }\n>>>>>>>> +\n>>>>>>>>\n>>>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>>>\n>>>>>> Agreed.\n>>>>>>\n>>>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>\n>>>>>> I think that we can do that. What about the attached patch?\n>>>>>\n>>>>> Thanks, I thought it's better.\n>>>>>\n>>>>>\n>>>>>>> I forgot to remove an unused variable.\n>>>>>>> The attached v13 patch is fixed.\n>>>>>>\n>>>>>> Thanks for updating the patch!\n>>>>>>\n>>>>>> + w.wal_write,\n>>>>>> + w.wal_write_time,\n>>>>>> + w.wal_sync,\n>>>>>> + w.wal_sync_time,\n>>>>>>\n>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>> each other? 
That is, what about the following order of columns?\n>>>>>>\n>>>>>> wal_write\n>>>>>> wal_sync\n>>>>>> wal_write_time\n>>>>>> wal_sync_time\n>>>>>\n>>>>> Yes, I fixed it.\n>>>>>\n>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>> - /* write synced it already */\n>>>>>> - break;\n>>>>>>\n>>>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>>>> here, as follows. Thought?\n>>>>>>\n>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>> + /* not reachable */\n>>>>>> + Assert(false);\n>>>>>\n>>>>> I agree.\n>>>>>\n>>>>>\n>>>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>>>> the stats even at its exit? Otherwise some stats can fail to be collected.\n>>>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>>>> this issue in this patch?\n>>>>>\n>>>>> Thanks, I thought it's better to do so.\n>>>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>>>\n>>>> Thanks for 0003 patch!\n>>>>\n>>>> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\n>>>> just send the stats only when ShutdownRequestPending is true in the walwriter\n>>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>>> If we do this, we cannot send the stats when walwriter throws FATAL error.\n>>>> But that's ok because FATAL error on walwriter causes the server to crash.\n>>>> Thought?\n>>>\n>>> Thanks for your comments!\n>>> Yes, I agree.\n>>>\n>>>\n>>>> Also ISTM that we don't need to use the callback for that purpose in\n>>>> checkpointer because of the same reason. 
That is, we can send the stats\n>>>> just after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\n>>>> Thought?\n>>>\n>>> Yes, I think so too.\n>>>\n>>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>>> it might generate WAL records.\n>>>\n>>>\n>>>> I'm now not sure how much useful these changes are. As far as I read pgstat.c,\n>>>> when shutdown is requested, the stats collector seems to exit even when\n>>>> there are outstanding stats messages. So if checkpointer and walwriter send\n>>>> the stats in their last cycles, those stats might not be collected.\n>>>>\n>>>> On the other hand, I can think that sending the stats in the last cycles would\n>>>> improve the situation a bit than now. So I'm inclined to apply those changes...\n>>>\n>>> I didn't notice that. I agree this is an important aspect.\n>>> I understood there is a case that the stats collector exits before the checkpointer\n>>> or the walwriter exits and some stats might not be collected.\n>>\n>> IIUC the stats collector basically exits after checkpointer and walwriter exit.\n>> But there seems no guarantee that the stats collector processes\n>> all the messages that other processes have sent during the shutdown of\n>> the server.\n> \n> Thanks, I understood the above postmaster behaviors.\n> \n> PMState manages the status and after checkpointer is exited, the postmaster sends\n> SIGQUIT signal to the stats collector if the shutdown mode is smart or fast.\n> (IIUC, although the postmaster kill the walsender, the archiver and\n> the stats collector at the same time, it's ok because the walsender\n> and the archiver doesn't send stats to the stats collector now.)\n> \n> But, there might be a corner case to lose stats sent by background workers like\n> the checkpointer before they exit (although this is not implemented yet.)\n> \n> For example,\n> \n> 1. checkpointer send the stats before it exit\n> 2. 
stats collector receive the signal and break before processing\n> the stats message from checkpointer. In this case, 1's message is lost.\n> 3. stats collector writes the stats in the statsfiles and exit\n> \n> Why don't you recheck the coming message is zero just before the 2th procedure?\n> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n\nYes, I was thinking the same. This is the straightforward fix for this issue.\nThe stats collector should process all the outstanding messages when\nnormal shutdown is requested, as the patch does. On the other hand,\nif immediate shutdown is requested or emergency bailout (by postmaster death)\nis requested, maybe the stats collector should skip that processing\nand exit immediately.\n\nBut if we implement that, we would need to teach the stats collector\nthe shutdown type (i.e., normal shutdown or immediate one). Because\ncurrently SIGQUIT is sent to the collector whichever type of shutdown is requested,\nand so the collector cannot distinguish the shutdown type. I'm afraid that\nchange is a bit overkill for now.\n\nBTW, I found that the collector calls pgstat_write_statsfiles() even in the\nemergency bailout case, before exiting. It's not necessary to save\nthe stats to the file in that case because subsequent server startup does\ncrash recovery and clears that stats file. So it's better to make\nthe collector exit immediately without calling pgstat_write_statsfiles()\nin the emergency bailout case? 
Probably this should be discussed in another\nthread because it's a different topic from the feature we're discussing here,\nthough.\n\n> \n> \n> I measured the timing of the above in my linux laptop using v17-measure-timing.patch.\n> I don't have any strong opinion to handle this case since this result shows to receive and processes\n> the messages takes too short time (less than 1ms) although the stats collector receives the shutdown\n> signal in 5msec(099->104) after the checkpointer process exits.\n\nAgreed.\n\n> \n> ```\n> 1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to make # exit and send the messages\n> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats message # receive and process the messages\n> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n> 1615421208.104 [postmaster] DEBUG: reaping dead processes\n> 1615421208.104 [stats collector] DEBUG: received shutdown request signal # receive shutdown request from the postmaster\n> ```\n> \n>>>> Of course, there is another direction; we can improve the stats collector so\n>>>> that it guarantees to collect all the sent stats messages. But I'm afraid\n>>>> this change might be big.\n>>>\n>>> For example, implement to manage background process status in shared memory and\n>>> the stats collector collects the stats until another background process exits?\n>>>\n>>> In my understanding, the statistics are not required high accuracy,\n>>> it's ok to ignore them if the impact is not big.\n>>>\n>>> If we guarantee high accuracy, another background process like autovacuum launcher\n>>> must send the WAL stats because it accesses the system catalog and might generate\n>>> WAL records due to HOT update even though the possibility is low.\n>>>\n>>> I thought the impact is small because the time uncollected stats are generated is\n>>> short compared to the time from startup. 
So, it's ok to ignore the remaining stats\n>>> when the process exists.\n>>\n>> I agree that it's not worth changing lots of code to collect such stats.\n>> But if we can implement that very simply, isn't it more worth doing\n>> that than current situation because we may be able to collect more\n>> accurate stats.\n> \n> Yes, I agree.\n> I attached the patch to send the stats before the wal writer and the checkpointer exit.\n> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n\nThanks for making those patches! First, I'm reading the 0001 and 0002 patches.\n\nHere are the review comments for the 0001 patch.\n\n+/* Prototypes for private functions */\n+static void HandleWalWriterInterrupts(void);\n\nHandleWalWriterInterrupts() and HandleMainLoopInterrupts() are almost the same.\nSo I don't think that we need to introduce HandleWalWriterInterrupts(). Instead,\nwe can just call pgstat_send_wal(true) before HandleMainLoopInterrupts()\nif ShutdownRequestPending is true in the main loop. Attached is the patch\nI implemented that way. Thought?\n\n\nHere are the review comments for the 0002 patch.\n\n+static void pgstat_send_checkpointer(void);\n\nI'm inclined to avoid adding the function with the prefix \"pgstat_\" outside\npgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() and\npgstat_report_wal() directly after ShutdownXLOG(). Thought? 
Patch attached.\n\n\n> \n> \n>>> BTW, I found BgWriterStats.m_timed_checkpoints is not counted in ShutdownLOG()\n>>> and we need to count it if to collect stats before it exits.\n>>\n>> Maybe m_requested_checkpoints should be incremented in that case?\n> \n> I thought this should be incremented\n> because it invokes the methods with CHECKPOINT_IS_SHUTDOWN.\n\nYes.\n\n> \n> ```ShutdownXLOG()\n> CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n> CreateCheckPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n> ```\n> \n> I fixed in v17-0002-send-stats-for-checkpointer-when-shutdown.patch.\n> \n> \n> In addition, I rebased the patch for WAL receiver.\n> (v17-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\nThanks! Will review this later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 11 Mar 2021 11:52:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
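The throttling idea discussed in the message above — send WAL stats only when PGSTAT_STAT_INTERVAL has elapsed since the last report, with a force path for shutdown — can be sketched outside PostgreSQL roughly as follows. This is a minimal stand-alone illustration, not the actual pgstat.c code: `maybe_send_wal_stats`, `STAT_INTERVAL_MS`, `last_report` and `reports_sent` are hypothetical stand-ins for `pgstat_send_wal()`, `PGSTAT_STAT_INTERVAL` and the collector message send, and the timestamp is passed in explicitly instead of calling `GetCurrentTimestamp()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for PGSTAT_STAT_INTERVAL, in milliseconds */
#define STAT_INTERVAL_MS 500

int64_t last_report = 0;    /* timestamp of the last report, in ms */
int     reports_sent = 0;   /* how many reports actually went out */

/*
 * Send WAL stats only if at least STAT_INTERVAL_MS has elapsed since the
 * last report, unless "force" is set (e.g. at process shutdown, where we
 * must not lose the final counters).  Returns true if a report was sent.
 */
bool
maybe_send_wal_stats(int64_t now_ms, bool force)
{
    if (!force && now_ms - last_report < STAT_INTERVAL_MS)
        return false;           /* throttled: too soon since last report */

    last_report = now_ms;
    reports_sent++;             /* stands in for the real message send */
    return true;
}
```

With this shape, the caller (the walwriter main loop in the real code) no longer needs its own timestamp bookkeeping; the interval check lives in one place, which is the direction the thread settles on.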
{
"msg_contents": "On 2021-03-11 11:52, Fujii Masao wrote:\n> On 2021/03/11 9:38, Masahiro Ikeda wrote:\n>> On 2021-03-10 17:08, Fujii Masao wrote:\n>>> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>>>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>> \n>>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>> \n>>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked \n>>>>>>>>>>>>>> during an\n>>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). 
This is \n>>>>>>>>>>>>>> also\n>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>> \n>>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally \n>>>>>>>>>>>>>> called\" or\n>>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the \n>>>>>>>>>>>>>> original)\n>>>>>>>>>>>>>> You missed the adding the space before an opening \n>>>>>>>>>>>>>> parenthesis here and\n>>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>> \n>>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly \n>>>>>>>>>>>>>> query the\n>>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>> \n>>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>> \n>>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \n>>>>>>>>>>>>>> \"This is also\n>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>> \n>>>>>>>>>>>>> I skipped changing it since I separated the stats for the \n>>>>>>>>>>>>> WAL receiver\n>>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>> \n>>>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of \n>>>>>>>>>>>>>> this event is\n>>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable \n>>>>>>>>>>>>>> because ...\"\n>>>>>>>>>>>>> \n>>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>> \n>>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require \n>>>>>>>>>>>>>> explicitly\n>>>>>>>>>>>>>> computing the sync statistics but does require computing \n>>>>>>>>>>>>>> the write\n>>>>>>>>>>>>>> statistics. This is because of the presence of \n>>>>>>>>>>>>>> issue_xlog_fsync but\n>>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I \n>>>>>>>>>>>>>> observe that\n>>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while \n>>>>>>>>>>>>>> the WAL\n>>>>>>>>>>>>>> receiver path does not. 
It seems technically \n>>>>>>>>>>>>>> straight-forward to\n>>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the \n>>>>>>>>>>>>>> two places,\n>>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding \n>>>>>>>>>>>>>> another\n>>>>>>>>>>>>>> function call to the stack given the importance of WAL \n>>>>>>>>>>>>>> processing\n>>>>>>>>>>>>>> (though that seems marginalized compared to the cost of \n>>>>>>>>>>>>>> actually\n>>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way \n>>>>>>>>>>>>>> and don't have\n>>>>>>>>>>>>>> any shared code between the two but instead implement the \n>>>>>>>>>>>>>> WAL receiver\n>>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, \n>>>>>>>>>>>>>> this\n>>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>> \n>>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver \n>>>>>>>>>>>>> stats.\n>>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>> \n>>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>> \n>>>>>>>>>>>> \n>>>>>>>>>>>>> I added the infrastructure code to communicate the WAL \n>>>>>>>>>>>>> receiver stats messages between the WAL receiver and the \n>>>>>>>>>>>>> stats collector, and\n>>>>>>>>>>>>> the stats for WAL receiver is counted in \n>>>>>>>>>>>>> pg_stat_wal_receiver.\n>>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>> \n>>>>>>>>>>>> On second thought, this idea seems not good. Because those \n>>>>>>>>>>>> stats are\n>>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver \n>>>>>>>>>>>> process running\n>>>>>>>>>>>> at that moment. IOW, it seems strange that some values show \n>>>>>>>>>>>> dynamic\n>>>>>>>>>>>> stats and the others show collected stats, even though they \n>>>>>>>>>>>> are in\n>>>>>>>>>>>> the same view pg_stat_wal_receiver. 
Thought?\n>>>>>>>>>>> \n>>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>>> The stats collected in the WAL receiver is exposed in \n>>>>>>>>>>> pg_stat_wal view in v11 patch.\n>>>>>>>>>> \n>>>>>>>>>> Thanks for updating the patches! I'm now reading 001 patch.\n>>>>>>>>>> \n>>>>>>>>>> + /* Check whether the WAL file was synced to disk right \n>>>>>>>>>> now */\n>>>>>>>>>> + if (enableFsync &&\n>>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>>> + {\n>>>>>>>>>> \n>>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>>> if enableFsync is off, sync_method is open_sync or \n>>>>>>>>>> open_data_sync,\n>>>>>>>>>> to simplify the code more?\n>>>>>>>>> \n>>>>>>>>> Thanks for the comments.\n>>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>> \n>>>>>>>>>> \n>>>>>>>>>> + /*\n>>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has \n>>>>>>>>>> elapsed to minimize\n>>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>>> + */\n>>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>> \n>>>>>>>>>> On second thought, this change means that it always takes \n>>>>>>>>>> wal_writer_delay\n>>>>>>>>>> before walwriter's WAL stats is sent after \n>>>>>>>>>> XLogBackgroundFlush() is called.\n>>>>>>>>>> For example, if wal_writer_delay is set to several seconds, \n>>>>>>>>>> some values in\n>>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those \n>>>>>>>>>> seconds.\n>>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to \n>>>>>>>>>> send\n>>>>>>>>>> the stats every after XLogBackgroundFlush() is called. 
\n>>>>>>>>>> Thought?\n>>>>>>>>> \n>>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>> \n>>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>> \n>>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a \n>>>>>>> risk\n>>>>>>> that the WAL stats are sent too frequently. I agree that's a \n>>>>>>> problem.\n>>>>>>> \n>>>>>>>>> \n>>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>> \n>>>>>>>>> + /*\n>>>>>>>>> + * Don't send a message unless it's been at \n>>>>>>>>> least\n>>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>>> + * msec since we last sent one\n>>>>>>>>> + */\n>>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>>> + if (TimestampDifferenceExceeds(last_report, \n>>>>>>>>> now,\n>>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>>> + {\n>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>> + last_report = now;\n>>>>>>>>> + }\n>>>>>>>>> +\n>>>>>>>>> \n>>>>>>>>> Although I worried that it's better to add the check code in \n>>>>>>>>> pgstat_send_wal(),\n>>>>>>> \n>>>>>>> Agreed.\n>>>>>>> \n>>>>>>>>> I didn't do so because to avoid to double check \n>>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already \n>>>>>>>>> checks the\n>>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>> \n>>>>>>> I think that we can do that. What about the attached patch?\n>>>>>> \n>>>>>> Thanks, I thought it's better.\n>>>>>> \n>>>>>> \n>>>>>>>> I forgot to remove an unused variable.\n>>>>>>>> The attached v13 patch is fixed.\n>>>>>>> \n>>>>>>> Thanks for updating the patch!\n>>>>>>> \n>>>>>>> + w.wal_write,\n>>>>>>> + w.wal_write_time,\n>>>>>>> + w.wal_sync,\n>>>>>>> + w.wal_sync_time,\n>>>>>>> \n>>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>>> each other? 
That is, what about the following order of columns?\n>>>>>>> \n>>>>>>> wal_write\n>>>>>>> wal_sync\n>>>>>>> wal_write_time\n>>>>>>> wal_sync_time\n>>>>>> \n>>>>>> Yes, I fixed it.\n>>>>>> \n>>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>> - /* write synced it already */\n>>>>>>> - break;\n>>>>>>> \n>>>>>>> IMO it's better to add Assert(false) here to ensure that we never \n>>>>>>> reach\n>>>>>>> here, as follows. Thought?\n>>>>>>> \n>>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>> + /* not reachable */\n>>>>>>> + Assert(false);\n>>>>>> \n>>>>>> I agree.\n>>>>>> \n>>>>>> \n>>>>>>> Even when a backend exits, it sends the stats via \n>>>>>>> pgstat_beshutdown_hook().\n>>>>>>> On the other hand, walwriter doesn't do that. Walwriter also \n>>>>>>> should send\n>>>>>>> the stats even at its exit? Otherwise some stats can fail to be \n>>>>>>> collected.\n>>>>>>> But ISTM that this issue existed from before, for example \n>>>>>>> checkpointer\n>>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill \n>>>>>>> to fix\n>>>>>>> this issue in this patch?\n>>>>>> \n>>>>>> Thanks, I thought it's better to do so.\n>>>>>> I added the shutdown hook for the walwriter and the checkpointer \n>>>>>> in v14-0003 patch.\n>>>>> \n>>>>> Thanks for 0003 patch!\n>>>>> \n>>>>> Isn't it overkill to send the stats in the walwriter-exit-callback? 
\n>>>>> IMO we can\n>>>>> just send the stats only when ShutdownRequestPending is true in the \n>>>>> walwriter\n>>>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>>>> If we do this, we cannot send the stats when walwriter throws FATAL \n>>>>> error.\n>>>>> But that's ok because FATAL error on walwriter causes the server to \n>>>>> crash.\n>>>>> Thought?\n>>>> \n>>>> Thanks for your comments!\n>>>> Yes, I agree.\n>>>> \n>>>> \n>>>>> Also ISTM that we don't need to use the callback for that purpose \n>>>>> in\n>>>>> checkpointer because of the same reason. That is, we can send the \n>>>>> stats\n>>>>> just after calling ShutdownXLOG(0, 0) in \n>>>>> HandleCheckpointerInterrupts().\n>>>>> Thought?\n>>>> \n>>>> Yes, I think so too.\n>>>> \n>>>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>>>> it might generate WAL records.\n>>>> \n>>>> \n>>>>> I'm now not sure how much useful these changes are. As far as I \n>>>>> read pgstat.c,\n>>>>> when shutdown is requested, the stats collector seems to exit even \n>>>>> when\n>>>>> there are outstanding stats messages. So if checkpointer and \n>>>>> walwriter send\n>>>>> the stats in their last cycles, those stats might not be collected.\n>>>>> \n>>>>> On the other hand, I can think that sending the stats in the last \n>>>>> cycles would\n>>>>> improve the situation a bit than now. So I'm inclined to apply \n>>>>> those changes...\n>>>> \n>>>> I didn't notice that. 
I agree this is an important aspect.\n>>>> I understood there is a case that the stats collector exits before \n>>>> the checkpointer\n>>>> or the walwriter exits and some stats might not be collected.\n>>> \n>>> IIUC the stats collector basically exits after checkpointer and \n>>> walwriter exit.\n>>> But there seems no guarantee that the stats collector processes\n>>> all the messages that other processes have sent during the shutdown \n>>> of\n>>> the server.\n>> \n>> Thanks, I understood the above postmaster behaviors.\n>> \n>> PMState manages the status and after checkpointer is exited, the \n>> postmaster sends\n>> SIGQUIT signal to the stats collector if the shutdown mode is smart or \n>> fast.\n>> (IIUC, although the postmaster kill the walsender, the archiver and\n>> the stats collector at the same time, it's ok because the walsender\n>> and the archiver doesn't send stats to the stats collector now.)\n>> \n>> But, there might be a corner case to lose stats sent by background \n>> workers like\n>> the checkpointer before they exit (although this is not implemented \n>> yet.)\n>> \n>> For example,\n>> \n>> 1. checkpointer send the stats before it exit\n>> 2. stats collector receive the signal and break before processing\n>> the stats message from checkpointer. In this case, 1's message is \n>> lost.\n>> 3. stats collector writes the stats in the statsfiles and exit\n>> \n>> Why don't you recheck the coming message is zero just before the 2th \n>> procedure?\n>> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n> \n> Yes, I was thinking the same. This is the straight-forward fix for this \n> issue.\n> The stats collector should process all the outstanding messages when\n> normal shutdown is requested, as the patch does. 
On the other hand,\n> if immediate shutdown is requested or emergency bailout (by postmaster \n> death)\n> is requested, maybe the stats collector should skip those processings\n> and exit immediately.\n> \n> But if we implement that, we would need to teach the stats collector\n> the shutdown type (i.e., normal shutdown or immediate one). Because\n> currently SIGQUIT is sent to the collector whichever shutdown is \n> requested,\n> and so the collector cannot distinguish the shutdown type. I'm afraid \n> that\n> change is a bit overkill for now.\n> \n> BTW, I found that the collector calls pgstat_write_statsfiles() even at\n> emergency bailout case, before exiting. It's not necessary to save\n> the stats to the file in that case because subsequent server startup \n> does\n> crash recovery and clears that stats file. So it's better to make\n> the collector exit immediately without calling \n> pgstat_write_statsfiles()\n> at emergency bailout case? Probably this should be discussed in other\n> thread because it's different topic from the feature we're discussing \n> here,\n> though.\n\nIIUC, only the stats collector has its own handler for SIGQUIT, while\nother background processes have a common handler for it and just call \n_exit(2).\nI thought we could guarantee that, when TerminateChildren(SIGTERM) is \ninvoked, the stats\ncollector does not shut down before the other background processes have shut down.\n\nI will start another thread to discuss whether the stats collector should \nknow the shutdown type or not.\nIf it should, it's better to make the stats collector exit as soon as \npossible if the shutdown type\nis immediate, and to avoid losing the remaining stats if it's normal.\n\n\n\n>> I measured the timing of the above in my linux laptop using \n>> v17-measure-timing.patch.\n>> I don't have any strong opinion to handle this case since this result \n>> shows to receive and processes\n>> the messages takes too short time (less than 1ms) although the stats \n>> collector receives 
the shutdown\n>> signal in 5msec(099->104) after the checkpointer process exits.\n> \n> Agreed.\n> \n>> \n>> ```\n>> 1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n>> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to \n>> make # exit and send the messages\n>> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats \n>> message # receive and process the messages\n>> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n>> 1615421208.104 [postmaster] DEBUG: reaping dead processes\n>> 1615421208.104 [stats collector] DEBUG: received shutdown request \n>> signal # receive shutdown request from the postmaster\n>> ```\n>> \n>>>>> Of course, there is another direction; we can improve the stats \n>>>>> collector so\n>>>>> that it guarantees to collect all the sent stats messages. But I'm \n>>>>> afraid\n>>>>> this change might be big.\n>>>> \n>>>> For example, implement to manage background process status in shared \n>>>> memory and\n>>>> the stats collector collects the stats until another background \n>>>> process exits?\n>>>> \n>>>> In my understanding, the statistics are not required high accuracy,\n>>>> it's ok to ignore them if the impact is not big.\n>>>> \n>>>> If we guarantee high accuracy, another background process like \n>>>> autovacuum launcher\n>>>> must send the WAL stats because it accesses the system catalog and \n>>>> might generate\n>>>> WAL records due to HOT update even though the possibility is low.\n>>>> \n>>>> I thought the impact is small because the time uncollected stats are \n>>>> generated is\n>>>> short compared to the time from startup. 
So, it's ok to ignore the \n>>>> remaining stats\n>>>> when the process exists.\n>>> \n>>> I agree that it's not worth changing lots of code to collect such \n>>> stats.\n>>> But if we can implement that very simply, isn't it more worth doing\n>>> that than current situation because we may be able to collect more\n>>> accurate stats.\n>> \n>> Yes, I agree.\n>> I attached the patch to send the stats before the wal writer and the \n>> checkpointer exit.\n>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, \n>> v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n> \n> Thanks for making those patches! Firstly I'm reading 0001 and 0002 \n> patches.\n\nThanks for your comments and for making patches.\n\n\n> Here is the review comments for 0001 patch.\n> \n> +/* Prototypes for private functions */\n> +static void HandleWalWriterInterrupts(void);\n> \n> HandleWalWriterInterrupts() and HandleMainLoopInterrupts() are almost \n> the same.\n> So I don't think that we need to introduce HandleWalWriterInterrupts(). \n> Instead,\n> we can just call pgstat_send_wal(true) before \n> HandleMainLoopInterrupts()\n> if ShutdownRequestPending is true in the main loop. Attached is the \n> patch\n> I implemented that way. Thought?\n\nI thought there is a corner case in which the stats can't be sent, like:\n\n```\n// First, ShutdownRequestPending = false\n\n    if (ShutdownRequestPending) // don't send the stats\n        pgstat_send_wal(true);\n\n// receive signal and ShutdownRequestPending became true\n\n    HandleMainLoopInterrupts(); // proc exit without sending the stats\n\n```\n\nIs it ok because it almost never occurs?\n\n\n> Here is the review comments for 0002 patch.\n> \n> +static void pgstat_send_checkpointer(void);\n> \n> I'm inclined to avoid adding the function with the prefix \"pgstat_\" \n> outside\n> pgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() and\n> pgstat_report_wal() directly after ShutdownXLOG(). Thought? Patch \n> attached.\n\nThanks. 
I agree.\n\n\n>>>> BTW, I found BgWriterStats.m_timed_checkpoints is not counted in \n>>>> ShutdownLOG()\n>>>> and we need to count it if to collect stats before it exits.\n>>> \n>>> Maybe m_requested_checkpoints should be incremented in that case?\n>> \n>> I thought this should be incremented\n>> because it invokes the methods with CHECKPOINT_IS_SHUTDOWN.\n> \n> Yes.\n\nOK, thanks.\n\n\n>> \n>> ```ShutdownXLOG()\n>> CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n>> CreateCheckPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n>> ```\n>> \n>> I fixed in v17-0002-send-stats-for-checkpointer-when-shutdown.patch.\n>> \n>> \n>> In addition, I rebased the patch for WAL receiver.\n>> (v17-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n> \n> Thanks! Will review this later.\n\nThanks a lot!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 11 Mar 2021 21:29:38 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
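The shutdown race debated in the message above — the checkpointer's last stats message arriving just as the collector sees SIGQUIT — is what the recheck in v17-0004 addresses: on a normal shutdown request, drain every message already queued before writing the stats file. Here is a rough stand-alone sketch of that idea, not the real pgstat.c code: the in-memory FIFO is a hypothetical stand-in for the collector's socket, and `processed` / `stats_written` stand in for the `pgstat_recv_*()` dispatch and `pgstat_write_statsfiles()`.

```c
#include <assert.h>
#include <stdbool.h>

#define QUEUE_CAP 16

/* Hypothetical mock of the collector's incoming message queue: a FIFO. */
int  queue[QUEUE_CAP];
int  queue_len = 0;
int  processed = 0;            /* messages handled before exit */
bool stats_written = false;    /* stats file written at exit? */

void
enqueue(int msg)
{
    queue[queue_len++] = msg;  /* a background process "sends" a message */
}

bool
dequeue(int *msg)
{
    if (queue_len == 0)
        return false;
    *msg = queue[0];
    for (int i = 1; i < queue_len; i++)
        queue[i - 1] = queue[i];
    queue_len--;
    return true;
}

/*
 * On a normal shutdown request, drain everything that is already queued
 * before writing the stats file, so that stats sent by the checkpointer
 * or walwriter in their last cycle are not lost.
 */
void
collector_shutdown(void)
{
    int msg;

    while (dequeue(&msg))      /* recheck until no message remains */
        processed++;           /* stands in for pgstat_recv_*() dispatch */
    stats_written = true;      /* stands in for pgstat_write_statsfiles() */
}
```

The point of the sketch is ordering only: the drain loop runs after the shutdown request but before the final write, which is exactly the window the thread's timing measurements show to be under 1 ms in practice.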
{
"msg_contents": "\n\nOn 2021/03/11 21:29, Masahiro Ikeda wrote:\n> On 2021-03-11 11:52, Fujii Masao wrote:\n>> On 2021/03/11 9:38, Masahiro Ikeda wrote:\n>>> On 2021-03-10 17:08, Fujii Masao wrote:\n>>>> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>>>>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). 
This is also\n>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>>>>>> receiver path does not. 
It seems technically straight-forward to\n>>>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>>>>>\n>>>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>>>>>>>>>\n>>>>>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>>>>>> + if (enableFsync &&\n>>>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>>>> + {\n>>>>>>>>>>>\n>>>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>>>>>> to simplify the code more?\n>>>>>>>>>>\n>>>>>>>>>> Thanks for the comments.\n>>>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> + /*\n>>>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>>>> + */\n>>>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>>>\n>>>>>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>>>\n>>>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>>>\n>>>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>>>>>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>>>\n>>>>>>>>>> + /*\n>>>>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>>>> + * msec since we last sent one\n>>>>>>>>>> + */\n>>>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>>>> + {\n>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>> + last_report = now;\n>>>>>>>>>> + }\n>>>>>>>>>> +\n>>>>>>>>>>\n>>>>>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>>>>>\n>>>>>>>> Agreed.\n>>>>>>>>\n>>>>>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>>>\n>>>>>>>> I think that we can do that. What about the attached patch?\n>>>>>>>\n>>>>>>> Thanks, I thought it's better.\n>>>>>>>\n>>>>>>>\n>>>>>>>>> I forgot to remove an unused variable.\n>>>>>>>>> The attached v13 patch is fixed.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patch!\n>>>>>>>>\n>>>>>>>> + w.wal_write,\n>>>>>>>> + w.wal_write_time,\n>>>>>>>> + w.wal_sync,\n>>>>>>>> + w.wal_sync_time,\n>>>>>>>>\n>>>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>>>> each other? That is, what about the following order of columns?\n>>>>>>>>\n>>>>>>>> wal_write\n>>>>>>>> wal_sync\n>>>>>>>> wal_write_time\n>>>>>>>> wal_sync_time\n>>>>>>>\n>>>>>>> Yes, I fixed it.\n>>>>>>>\n>>>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>> - /* write synced it already */\n>>>>>>>> - break;\n>>>>>>>>\n>>>>>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>>>>>> here, as follows. 
Thought?\n>>>>>>>>\n>>>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>> + /* not reachable */\n>>>>>>>> + Assert(false);\n>>>>>>>\n>>>>>>> I agree.\n>>>>>>>\n>>>>>>>\n>>>>>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>>>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>>>>>> the stats even at its exit? Otherwise some stats can fail to be collected.\n>>>>>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>>>>>> this issue in this patch?\n>>>>>>>\n>>>>>>> Thanks, I thought it's better to do so.\n>>>>>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>>>>>\n>>>>>> Thanks for 0003 patch!\n>>>>>>\n>>>>>> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\n>>>>>> just send the stats only when ShutdownRequestPending is true in the walwriter\n>>>>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>>>>> If we do this, we cannot send the stats when walwriter throws FATAL error.\n>>>>>> But that's ok because FATAL error on walwriter causes the server to crash.\n>>>>>> Thought?\n>>>>>\n>>>>> Thanks for your comments!\n>>>>> Yes, I agree.\n>>>>>\n>>>>>\n>>>>>> Also ISTM that we don't need to use the callback for that purpose in\n>>>>>> checkpointer because of the same reason. That is, we can send the stats\n>>>>>> just after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\n>>>>>> Thought?\n>>>>>\n>>>>> Yes, I think so too.\n>>>>>\n>>>>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>>>>> it might generate WAL records.\n>>>>>\n>>>>>\n>>>>>> I'm now not sure how much useful these changes are. As far as I read pgstat.c,\n>>>>>> when shutdown is requested, the stats collector seems to exit even when\n>>>>>> there are outstanding stats messages. 
So if checkpointer and walwriter send\n>>>>>> the stats in their last cycles, those stats might not be collected.\n>>>>>>\n>>>>>> On the other hand, I think that sending the stats in the last cycles would\n>>>>>> improve the situation a bit compared to now. So I'm inclined to apply those changes...\n>>>>>\n>>>>> I didn't notice that. I agree this is an important aspect.\n>>>>> I understood there is a case that the stats collector exits before the checkpointer\n>>>>> or the walwriter exits and some stats might not be collected.\n>>>>\n>>>> IIUC the stats collector basically exits after checkpointer and walwriter exit.\n>>>> But there seems no guarantee that the stats collector processes\n>>>> all the messages that other processes have sent during the shutdown of\n>>>> the server.\n>>>\n>>> Thanks, I understood the above postmaster behaviors.\n>>>\n>>> PMState manages the status, and after the checkpointer has exited, the postmaster sends\n>>> the SIGQUIT signal to the stats collector if the shutdown mode is smart or fast.\n>>> (IIUC, although the postmaster kills the walsender, the archiver and\n>>> the stats collector at the same time, it's ok because the walsender\n>>> and the archiver don't send stats to the stats collector now.)\n>>>\n>>> But, there might be a corner case that loses stats sent by background workers like\n>>> the checkpointer before they exit (although this is not implemented yet.)\n>>>\n>>> For example,\n>>>\n>>> 1. checkpointer sends the stats before it exits\n>>> 2. stats collector receives the signal and breaks before processing\n>>> the stats message from checkpointer. In this case, 1's message is lost.\n>>> 3. stats collector writes the stats in the statsfiles and exits\n>>>\n>>> Why don't you recheck that no incoming message remains just before the 2nd step?\n>>> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n>>\n>> Yes, I was thinking the same. 
This is the straight-forward fix for this issue.\n>> The stats collector should process all the outstanding messages when\n>> normal shutdown is requested, as the patch does. On the other hand,\n>> if immediate shutdown is requested or emergency bailout (by postmaster death)\n>> is requested, maybe the stats collector should skip that processing\n>> and exit immediately.\n>>\n>> But if we implement that, we would need to teach the stats collector\n>> the shutdown type (i.e., normal shutdown or immediate one). Because\n>> currently SIGQUIT is sent to the collector whichever shutdown is requested,\n>> the collector cannot distinguish the shutdown type. I'm afraid that\n>> change is a bit overkill for now.\n>>\n>> BTW, I found that the collector calls pgstat_write_statsfiles() even in the\n>> emergency bailout case, before exiting. It's not necessary to save\n>> the stats to the file in that case because subsequent server startup does\n>> crash recovery and clears that stats file. So it's better to make\n>> the collector exit immediately without calling pgstat_write_statsfiles()\n>> in the emergency bailout case? 
Probably this should be discussed in another\n>> thread because it's a different topic from the feature we're discussing here,\n>> though.\n> \n> IIUC, only the stats collector has another handler for SIGQUIT although\n> other background processes have a common handler for it and just call _exit(2).\n> I thought we should guarantee that, when TerminateChildren(SIGTERM) is invoked, the stats\n> collector does not shut down before the other background processes have shut down.\n> \n> I will make another thread to discuss whether the stats collector should know the shutdown type or not.\n> If it should, it's better to make the stats collector exit as soon as possible if the shutdown type\n> is immediate, and avoid losing the remaining stats if it's normal.\n\n+1\n\n\n> \n> \n> \n>>> I measured the timing of the above on my linux laptop using v17-measure-timing.patch.\n>>> I don't have any strong opinion on handling this case since this result shows that receiving and processing\n>>> the messages takes very little time (less than 1ms) although the stats collector receives the shutdown\n>>> signal 5msec (099->104) after the checkpointer process exits.\n>>\n>> Agreed.\n>>\n>>>\n>>> ```\n>>> 1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n>>> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to make # exit and send the messages\n>>> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats message # receive and process the messages\n>>> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n>>> 1615421208.104 [postmaster] DEBUG: reaping dead processes\n>>> 1615421208.104 [stats collector] DEBUG: received shutdown request signal # receive shutdown request from the postmaster\n>>> ```\n>>>\n>>>>>> Of course, there is another direction; we can improve the stats collector so\n>>>>>> that it guarantees to collect all the sent stats messages. 
But I'm afraid\n>>>>>> this change might be big.\n>>>>>\n>>>>> For example, implement managing background process status in shared memory so that\n>>>>> the stats collector collects the stats until the other background processes exit?\n>>>>>\n>>>>> In my understanding, the statistics don't require high accuracy,\n>>>>> so it's ok to ignore them if the impact is not big.\n>>>>>\n>>>>> If we guarantee high accuracy, another background process like the autovacuum launcher\n>>>>> must send the WAL stats because it accesses the system catalog and might generate\n>>>>> WAL records due to HOT update even though the possibility is low.\n>>>>>\n>>>>> I thought the impact is small because the time in which uncollected stats are generated is\n>>>>> short compared to the time from startup. So, it's ok to ignore the remaining stats\n>>>>> when the process exits.\n>>>>\n>>>> I agree that it's not worth changing lots of code to collect such stats.\n>>>> But if we can implement that very simply, isn't it more worthwhile\n>>>> than the current situation because we may be able to collect more\n>>>> accurate stats?\n>>>\n>>> Yes, I agree.\n>>> I attached the patch to send the stats before the wal writer and the checkpointer exit.\n>>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n>>\n>> Thanks for making those patches! First I'm reading the 0001 and 0002 patches.\n> \n> Thanks for your comments and for making patches.\n> \n> \n>> Here are the review comments for the 0001 patch.\n>>\n>> +/* Prototypes for private functions */\n>> +static void HandleWalWriterInterrupts(void);\n>>\n>> HandleWalWriterInterrupts() and HandleMainLoopInterrupts() are almost the same.\n>> So I don't think that we need to introduce HandleWalWriterInterrupts(). Instead,\n>> we can just call pgstat_send_wal(true) before HandleMainLoopInterrupts()\n>> if ShutdownRequestPending is true in the main loop. Attached is the patch\n>> I implemented that way. 
Thought?\n> \n> I thought there is a corner case that can't send the stats, like\n\nYou're right! So IMO your patch (v17-0001-send-stats-for-walwriter-when-shutdown.patch) is better.\n\n\n> \n> ```\n> // First, ShutdownRequestPending = false\n> \n> if (ShutdownRequestPending) // don't send the stats\n> pgstat_send_wal(true);\n> \n> // receive signal and ShutdownRequestPending became true\n> \n> HandleMainLoopInterrupts(); // proc exit without sending the stats\n> \n> ```\n> \n> Is it ok because it almost never occurs?\n> \n> \n>> Here are the review comments for the 0002 patch.\n>>\n>> +static void pgstat_send_checkpointer(void);\n>>\n>> I'm inclined to avoid adding a function with the prefix \"pgstat_\" outside\n>> pgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() and\n>> pgstat_report_wal() directly after ShutdownXLOG(). Thought? Patch attached.\n> \n> Thanks. I agree.\n\nThanks for the review!\n\n\nSo, barring any objection, I will commit the changes for\nwalwriter and checkpointer. That is,\n\nv17-0001-send-stats-for-walwriter-when-shutdown.patch\nv17-0002-send-stats-for-checkpointer-when-shutdown_fujii.patch\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 11 Mar 2021 23:33:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/11 21:29, Masahiro Ikeda wrote:\n> On 2021-03-11 11:52, Fujii Masao wrote:\n>> On 2021/03/11 9:38, Masahiro Ikeda wrote:\n>>> On 2021-03-10 17:08, Fujii Masao wrote:\n>>>> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>>>>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). 
This is also\n>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>>>>>> receiver path does not. 
It seems technically straight-forward to\n>>>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>\n>>>>>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>>>>>\n>>>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>>>>>>>>>\n>>>>>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>>>>>> + if (enableFsync &&\n>>>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>>>> + {\n>>>>>>>>>>>\n>>>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>>>>>> to simplify the code more?\n>>>>>>>>>>\n>>>>>>>>>> Thanks for the comments.\n>>>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> + /*\n>>>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>>>> + */\n>>>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>>>\n>>>>>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>>>>>\n>>>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>>>\n>>>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>>>\n>>>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>>>>>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>>>\n>>>>>>>>>> + /*\n>>>>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>>>> + * msec since we last sent one\n>>>>>>>>>> + */\n>>>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>>>> + {\n>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>> + last_report = now;\n>>>>>>>>>> + }\n>>>>>>>>>> +\n>>>>>>>>>>\n>>>>>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>>>>>\n>>>>>>>> Agreed.\n>>>>>>>>\n>>>>>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>>>\n>>>>>>>> I think that we can do that. What about the attached patch?\n>>>>>>>\n>>>>>>> Thanks, I thought it's better.\n>>>>>>>\n>>>>>>>\n>>>>>>>>> I forgot to remove an unused variable.\n>>>>>>>>> The attached v13 patch is fixed.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patch!\n>>>>>>>>\n>>>>>>>> + w.wal_write,\n>>>>>>>> + w.wal_write_time,\n>>>>>>>> + w.wal_sync,\n>>>>>>>> + w.wal_sync_time,\n>>>>>>>>\n>>>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>>>> each other? That is, what about the following order of columns?\n>>>>>>>>\n>>>>>>>> wal_write\n>>>>>>>> wal_sync\n>>>>>>>> wal_write_time\n>>>>>>>> wal_sync_time\n>>>>>>>\n>>>>>>> Yes, I fixed it.\n>>>>>>>\n>>>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>> - /* write synced it already */\n>>>>>>>> - break;\n>>>>>>>>\n>>>>>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>>>>>> here, as follows. 
Thought?\n>>>>>>>>\n>>>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>> + /* not reachable */\n>>>>>>>> + Assert(false);\n>>>>>>>\n>>>>>>> I agree.\n>>>>>>>\n>>>>>>>\n>>>>>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>>>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>>>>>> the stats even at its exit? Otherwise some stats can fail to be collected.\n>>>>>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>>>>>> this issue in this patch?\n>>>>>>>\n>>>>>>> Thanks, I thought it's better to do so.\n>>>>>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>>>>>\n>>>>>> Thanks for 0003 patch!\n>>>>>>\n>>>>>> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\n>>>>>> just send the stats only when ShutdownRequestPending is true in the walwriter\n>>>>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>>>>> If we do this, we cannot send the stats when walwriter throws FATAL error.\n>>>>>> But that's ok because FATAL error on walwriter causes the server to crash.\n>>>>>> Thought?\n>>>>>\n>>>>> Thanks for your comments!\n>>>>> Yes, I agree.\n>>>>>\n>>>>>\n>>>>>> Also ISTM that we don't need to use the callback for that purpose in\n>>>>>> checkpointer because of the same reason. That is, we can send the stats\n>>>>>> just after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\n>>>>>> Thought?\n>>>>>\n>>>>> Yes, I think so too.\n>>>>>\n>>>>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>>>>> it might generate WAL records.\n>>>>>\n>>>>>\n>>>>>> I'm now not sure how much useful these changes are. As far as I read pgstat.c,\n>>>>>> when shutdown is requested, the stats collector seems to exit even when\n>>>>>> there are outstanding stats messages. 
So if checkpointer and walwriter send\n>>>>>> the stats in their last cycles, those stats might not be collected.\n>>>>>>\n>>>>>> On the other hand, I think that sending the stats in the last cycles would\n>>>>>> improve the situation a bit compared to now. So I'm inclined to apply those changes...\n>>>>>\n>>>>> I didn't notice that. I agree this is an important aspect.\n>>>>> I understood there is a case that the stats collector exits before the checkpointer\n>>>>> or the walwriter exits and some stats might not be collected.\n>>>>\n>>>> IIUC the stats collector basically exits after checkpointer and walwriter exit.\n>>>> But there seems no guarantee that the stats collector processes\n>>>> all the messages that other processes have sent during the shutdown of\n>>>> the server.\n>>>\n>>> Thanks, I understood the above postmaster behaviors.\n>>>\n>>> PMState manages the status, and after the checkpointer has exited, the postmaster sends\n>>> the SIGQUIT signal to the stats collector if the shutdown mode is smart or fast.\n>>> (IIUC, although the postmaster kills the walsender, the archiver and\n>>> the stats collector at the same time, it's ok because the walsender\n>>> and the archiver don't send stats to the stats collector now.)\n>>>\n>>> But, there might be a corner case that loses stats sent by background workers like\n>>> the checkpointer before they exit (although this is not implemented yet.)\n>>>\n>>> For example,\n>>>\n>>> 1. checkpointer sends the stats before it exits\n>>> 2. stats collector receives the signal and breaks before processing\n>>> the stats message from checkpointer. In this case, 1's message is lost.\n>>> 3. stats collector writes the stats in the statsfiles and exits\n>>>\n>>> Why don't you recheck that no incoming message remains just before the 2nd step?\n>>> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n>>\n>> Yes, I was thinking the same. 
This is the straight-forward fix for this issue.\n>> The stats collector should process all the outstanding messages when\n>> normal shutdown is requested, as the patch does. On the other hand,\n>> if immediate shutdown is requested or emergency bailout (by postmaster death)\n>> is requested, maybe the stats collector should skip that processing\n>> and exit immediately.\n>>\n>> But if we implement that, we would need to teach the stats collector\n>> the shutdown type (i.e., normal shutdown or immediate one). Because\n>> currently SIGQUIT is sent to the collector whichever shutdown is requested,\n>> the collector cannot distinguish the shutdown type. I'm afraid that\n>> change is a bit overkill for now.\n>>\n>> BTW, I found that the collector calls pgstat_write_statsfiles() even in the\n>> emergency bailout case, before exiting. It's not necessary to save\n>> the stats to the file in that case because subsequent server startup does\n>> crash recovery and clears that stats file. So it's better to make\n>> the collector exit immediately without calling pgstat_write_statsfiles()\n>> in the emergency bailout case? 
Probably this should be discussed in another\n>> thread because it's a different topic from the feature we're discussing here,\n>> though.\n> \n> IIUC, only the stats collector has another handler for SIGQUIT although\n> other background processes have a common handler for it and just call _exit(2).\n> I thought we should guarantee that, when TerminateChildren(SIGTERM) is invoked, the stats\n> collector does not shut down before the other background processes have shut down.\n> \n> I will make another thread to discuss whether the stats collector should know the shutdown type or not.\n> If it should, it's better to make the stats collector exit as soon as possible if the shutdown type\n> is immediate, and avoid losing the remaining stats if it's normal.\n> \n> \n> \n>>> I measured the timing of the above on my linux laptop using v17-measure-timing.patch.\n>>> I don't have any strong opinion on handling this case since this result shows that receiving and processing\n>>> the messages takes very little time (less than 1ms) although the stats collector receives the shutdown\n>>> signal 5msec (099->104) after the checkpointer process exits.\n>>\n>> Agreed.\n>>\n>>>\n>>> ```\n>>> 1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n>>> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to make # exit and send the messages\n>>> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats message # receive and process the messages\n>>> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n>>> 1615421208.104 [postmaster] DEBUG: reaping dead processes\n>>> 1615421208.104 [stats collector] DEBUG: received shutdown request signal # receive shutdown request from the postmaster\n>>> ```\n>>>\n>>>>>> Of course, there is another direction; we can improve the stats collector so\n>>>>>> that it guarantees to collect all the sent stats messages. 
But I'm afraid\n>>>>>> this change might be big.\n>>>>>\n>>>>> For example, implement to manage background process status in shared memory and\n>>>>> the stats collector collects the stats until another background process exits?\n>>>>>\n>>>>> In my understanding, the statistics are not required high accuracy,\n>>>>> it's ok to ignore them if the impact is not big.\n>>>>>\n>>>>> If we guarantee high accuracy, another background process like autovacuum launcher\n>>>>> must send the WAL stats because it accesses the system catalog and might generate\n>>>>> WAL records due to HOT update even though the possibility is low.\n>>>>>\n>>>>> I thought the impact is small because the time uncollected stats are generated is\n>>>>> short compared to the time from startup. So, it's ok to ignore the remaining stats\n>>>>> when the process exists.\n>>>>\n>>>> I agree that it's not worth changing lots of code to collect such stats.\n>>>> But if we can implement that very simply, isn't it more worth doing\n>>>> that than current situation because we may be able to collect more\n>>>> accurate stats.\n>>>\n>>> Yes, I agree.\n>>> I attached the patch to send the stats before the wal writer and the checkpointer exit.\n>>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n>>\n>> Thanks for making those patches! Firstly I'm reading 0001 and 0002 patches.\n> \n> Thanks for your comments and for making patches.\n> \n> \n>> Here is the review comments for 0001 patch.\n>>\n>> +/* Prototypes for private functions */\n>> +static void HandleWalWriterInterrupts(void);\n>>\n>> HandleWalWriterInterrupts() and HandleMainLoopInterrupts() are almost the same.\n>> So I don't think that we need to introduce HandleWalWriterInterrupts(). Instead,\n>> we can just call pgstat_send_wal(true) before HandleMainLoopInterrupts()\n>> if ShutdownRequestPending is true in the main loop. Attached is the patch\n>> I implemented that way. 
Thought?\n> \n> I thought there is a corner case that can't send the stats like\n> \n> ```\n> // First, ShutdownRequstPending = false\n> \n> if (ShutdownRequestPending) // don't send the stats\n> pgstat_send_wal(true);\n> \n> // receive signal and ShutdownRequestPending became true\n> \n> HandleMainLoopInterrupts(); // proc exit without sending the stats\n> \n> ```\n> \n> Is it ok because it almost never occurs?\n> \n> \n>> Here is the review comments for 0002 patch.\n>>\n>> +static void pgstat_send_checkpointer(void);\n>>\n>> I'm inclined to avoid adding the function with the prefix \"pgstat_\" outside\n>> pgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() and\n>> pgstat_report_wal() directly after ShutdownXLOG(). Thought? Patch attached.\n> \n> Thanks. I agree.\n> \n> \n>>>>> BTW, I found BgWriterStats.m_timed_checkpoints is not counted in ShutdownLOG()\n>>>>> and we need to count it if to collect stats before it exits.\n>>>>\n>>>> Maybe m_requested_checkpoints should be incremented in that case?\n>>>\n>>> I thought this should be incremented\n>>> because it invokes the methods with CHECKPOINT_IS_SHUTDOWN.\n>>\n>> Yes.\n> \n> OK, thanks.\n> \n> \n>>>\n>>> ```ShutdownXLOG()\n>>> CreateRestartPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n>>> CreateCheckPoint(CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_IMMEDIATE);\n>>> ```\n>>>\n>>> I fixed in v17-0002-send-stats-for-checkpointer-when-shutdown.patch.\n>>>\n>>>\n>>> In addition, I rebased the patch for WAL receiver.\n>>> (v17-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>\n>> Thanks! Will review this later.\n> \n> Thanks a lot!\n\nI read through the 0003 patch. Here are some comments for that.\n\nWith the patch, walreceiver's stats are counted as wal_write, wal_sync, wal_write_time and wal_sync_time in pg_stat_wal. But they should be counted as different columns because WAL IO is different between walreceiver and other processes like a backend? 
For example, when open_sync or open_datasync is chosen as wal_sync_method, those other processes use the O_DIRECT flag to open WAL files, but walreceiver does not. For example, those other processes write WAL data in block units, but walreceiver does not. So I'm concerned that mixing different WAL IO stats in the same columns would confuse the users. Thought? I'd like to hear more opinions about how to expose walreceiver's stats to users.\n\n+int\n+XLogWriteFile(int fd, const void *buf, size_t nbyte, off_t offset)\n\nThis common function writes WAL data and measures IO timing. IMO we can refactor the code further by making this function handle the case where pg_pwrite() reports an error. In other words, I think that the function should do what the do-while loop block in XLogWrite() does. Thought?\n\nBTW, currently XLogWrite() increments IO timing even when pg_pwrite() reports an error. But this is useless. Probably IO timing should be incremented only after the return code of pg_pwrite() is checked, instead?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 12 Mar 2021 12:39:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/11 23:33, Fujii Masao wrote:\n> \n> \n> On 2021/03/11 21:29, Masahiro Ikeda wrote:\n>> On 2021-03-11 11:52, Fujii Masao wrote:\n>>> On 2021/03/11 9:38, Masahiro Ikeda wrote:\n>>>> On 2021-03-10 17:08, Fujii Masao wrote:\n>>>>> On 2021/03/10 14:11, Masahiro Ikeda wrote:\n>>>>>> On 2021-03-09 17:51, Fujii Masao wrote:\n>>>>>>> On 2021/03/05 8:38, Masahiro Ikeda wrote:\n>>>>>>>> On 2021-03-05 01:02, Fujii Masao wrote:\n>>>>>>>>> On 2021/03/04 16:14, Masahiro Ikeda wrote:\n>>>>>>>>>> On 2021-03-03 20:27, Masahiro Ikeda wrote:\n>>>>>>>>>>> On 2021-03-03 16:30, Fujii Masao wrote:\n>>>>>>>>>>>> On 2021/03/03 14:33, Masahiro Ikeda wrote:\n>>>>>>>>>>>>> On 2021-02-24 16:14, Fujii Masao wrote:\n>>>>>>>>>>>>>> On 2021/02/15 11:59, Masahiro Ikeda wrote:\n>>>>>>>>>>>>>>> On 2021-02-10 00:51, David G. Johnston wrote:\n>>>>>>>>>>>>>>>> On Thu, Feb 4, 2021 at 4:45 PM Masahiro Ikeda\n>>>>>>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>>> I pgindented the patches.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> ... <function>XLogWrite</function>, which is invoked during an\n>>>>>>>>>>>>>>>> <function>XLogFlush</function> request (see ...). 
This is also\n>>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> (\"which normally called\" should be \"which is normally called\" or\n>>>>>>>>>>>>>>>> \"which normally is called\" if you want to keep true to the original)\n>>>>>>>>>>>>>>>> You missed the adding the space before an opening parenthesis here and\n>>>>>>>>>>>>>>>> elsewhere (probably copy-paste)\n>>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> is ether -> is either\n>>>>>>>>>>>>>>>> \"This parameter is off by default as it will repeatedly query the\n>>>>>>>>>>>>>>>> operating system...\"\n>>>>>>>>>>>>>>>> \", because\" -> \"as\"\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> Thanks, I fixed them.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> wal_write_time and the sync items also need the note: \"This is also\n>>>>>>>>>>>>>>>> incremented by the WAL receiver during replication.\"\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> I skipped changing it since I separated the stats for the WAL receiver\n>>>>>>>>>>>>>>> in pg_stat_wal_receiver.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> \"The number of times it happened...\" -> \" (the tally of this event is\n>>>>>>>>>>>>>>>> reported in wal_buffers_full in....) This is undesirable because ...\"\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> Thanks, I fixed it.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>>> I notice that the patch for WAL receiver doesn't require explicitly\n>>>>>>>>>>>>>>>> computing the sync statistics but does require computing the write\n>>>>>>>>>>>>>>>> statistics. This is because of the presence of issue_xlog_fsync but\n>>>>>>>>>>>>>>>> absence of an equivalent pg_xlog_pwrite. Additionally, I observe that\n>>>>>>>>>>>>>>>> the XLogWrite code path calls pgstat_report_wait_*() while the WAL\n>>>>>>>>>>>>>>>> receiver path does not. 
It seems technically straight-forward to\n>>>>>>>>>>>>>>>> refactor here to avoid the almost-duplicated logic in the two places,\n>>>>>>>>>>>>>>>> though I suspect there may be a trade-off for not adding another\n>>>>>>>>>>>>>>>> function call to the stack given the importance of WAL processing\n>>>>>>>>>>>>>>>> (though that seems marginalized compared to the cost of actually\n>>>>>>>>>>>>>>>> writing the WAL). Or, as Fujii noted, go the other way and don't have\n>>>>>>>>>>>>>>>> any shared code between the two but instead implement the WAL receiver\n>>>>>>>>>>>>>>>> one to use pg_stat_wal_receiver instead. In either case, this\n>>>>>>>>>>>>>>>> half-and-half implementation seems undesirable.\n>>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> OK, as Fujii-san mentioned, I separated the WAL receiver stats.\n>>>>>>>>>>>>>>> (v10-0002-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> Thanks for updating the patches!\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>>> I added the infrastructure code to communicate the WAL receiver stats messages between the WAL receiver and the stats collector, and\n>>>>>>>>>>>>>>> the stats for WAL receiver is counted in pg_stat_wal_receiver.\n>>>>>>>>>>>>>>> What do you think?\n>>>>>>>>>>>>>>\n>>>>>>>>>>>>>> On second thought, this idea seems not good. Because those stats are\n>>>>>>>>>>>>>> collected between multiple walreceivers, but other values in\n>>>>>>>>>>>>>> pg_stat_wal_receiver is only related to the walreceiver process running\n>>>>>>>>>>>>>> at that moment. IOW, it seems strange that some values show dynamic\n>>>>>>>>>>>>>> stats and the others show collected stats, even though they are in\n>>>>>>>>>>>>>> the same view pg_stat_wal_receiver. Thought?\n>>>>>>>>>>>>>\n>>>>>>>>>>>>> OK, I fixed it.\n>>>>>>>>>>>>> The stats collected in the WAL receiver is exposed in pg_stat_wal view in v11 patch.\n>>>>>>>>>>>>\n>>>>>>>>>>>> Thanks for updating the patches! 
I'm now reading 001 patch.\n>>>>>>>>>>>>\n>>>>>>>>>>>> + /* Check whether the WAL file was synced to disk right now */\n>>>>>>>>>>>> + if (enableFsync &&\n>>>>>>>>>>>> + (sync_method == SYNC_METHOD_FSYNC ||\n>>>>>>>>>>>> + sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH ||\n>>>>>>>>>>>> + sync_method == SYNC_METHOD_FDATASYNC))\n>>>>>>>>>>>> + {\n>>>>>>>>>>>>\n>>>>>>>>>>>> Isn't it better to make issue_xlog_fsync() return immediately\n>>>>>>>>>>>> if enableFsync is off, sync_method is open_sync or open_data_sync,\n>>>>>>>>>>>> to simplify the code more?\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for the comments.\n>>>>>>>>>>> I added the above code in v12 patch.\n>>>>>>>>>>>\n>>>>>>>>>>>>\n>>>>>>>>>>>> + /*\n>>>>>>>>>>>> + * Send WAL statistics only if WalWriterDelay has elapsed to minimize\n>>>>>>>>>>>> + * the overhead in WAL-writing.\n>>>>>>>>>>>> + */\n>>>>>>>>>>>> + if (rc & WL_TIMEOUT)\n>>>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>>>>\n>>>>>>>>>>>> On second thought, this change means that it always takes wal_writer_delay\n>>>>>>>>>>>> before walwriter's WAL stats is sent after XLogBackgroundFlush() is called.\n>>>>>>>>>>>> For example, if wal_writer_delay is set to several seconds, some values in\n>>>>>>>>>>>> pg_stat_wal would be not up-to-date meaninglessly for those seconds.\n>>>>>>>>>>>> So I'm thinking to withdraw my previous comment and it's ok to send\n>>>>>>>>>>>> the stats every after XLogBackgroundFlush() is called. Thought?\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks, I didn't notice that.\n>>>>>>>>>>>\n>>>>>>>>>>> Although PGSTAT_STAT_INTERVAL is 500msec, wal_writer_delay's\n>>>>>>>>>>> default value is 200msec and it may be set shorter time.\n>>>>>>>>>\n>>>>>>>>> Yeah, if wal_writer_delay is set to very small value, there is a risk\n>>>>>>>>> that the WAL stats are sent too frequently. 
I agree that's a problem.\n>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> Why don't to make another way to check the timestamp?\n>>>>>>>>>>>\n>>>>>>>>>>> + /*\n>>>>>>>>>>> + * Don't send a message unless it's been at least\n>>>>>>>>>>> PGSTAT_STAT_INTERVAL\n>>>>>>>>>>> + * msec since we last sent one\n>>>>>>>>>>> + */\n>>>>>>>>>>> + now = GetCurrentTimestamp();\n>>>>>>>>>>> + if (TimestampDifferenceExceeds(last_report, now,\n>>>>>>>>>>> PGSTAT_STAT_INTERVAL))\n>>>>>>>>>>> + {\n>>>>>>>>>>> + pgstat_send_wal();\n>>>>>>>>>>> + last_report = now;\n>>>>>>>>>>> + }\n>>>>>>>>>>> +\n>>>>>>>>>>>\n>>>>>>>>>>> Although I worried that it's better to add the check code in pgstat_send_wal(),\n>>>>>>>>>\n>>>>>>>>> Agreed.\n>>>>>>>>>\n>>>>>>>>>>> I didn't do so because to avoid to double check PGSTAT_STAT_INTERVAL.\n>>>>>>>>>>> pgstat_send_wal() is invoked pg_report_stat() and it already checks the\n>>>>>>>>>>> PGSTAT_STAT_INTERVAL.\n>>>>>>>>>\n>>>>>>>>> I think that we can do that. What about the attached patch?\n>>>>>>>>\n>>>>>>>> Thanks, I thought it's better.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>>> I forgot to remove an unused variable.\n>>>>>>>>>> The attached v13 patch is fixed.\n>>>>>>>>>\n>>>>>>>>> Thanks for updating the patch!\n>>>>>>>>>\n>>>>>>>>> + w.wal_write,\n>>>>>>>>> + w.wal_write_time,\n>>>>>>>>> + w.wal_sync,\n>>>>>>>>> + w.wal_sync_time,\n>>>>>>>>>\n>>>>>>>>> It's more natural to put wal_write_time and wal_sync_time next to\n>>>>>>>>> each other? That is, what about the following order of columns?\n>>>>>>>>>\n>>>>>>>>> wal_write\n>>>>>>>>> wal_sync\n>>>>>>>>> wal_write_time\n>>>>>>>>> wal_sync_time\n>>>>>>>>\n>>>>>>>> Yes, I fixed it.\n>>>>>>>>\n>>>>>>>>> - case SYNC_METHOD_OPEN:\n>>>>>>>>> - case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>>> - /* write synced it already */\n>>>>>>>>> - break;\n>>>>>>>>>\n>>>>>>>>> IMO it's better to add Assert(false) here to ensure that we never reach\n>>>>>>>>> here, as follows. 
Thought?\n>>>>>>>>>\n>>>>>>>>> + case SYNC_METHOD_OPEN:\n>>>>>>>>> + case SYNC_METHOD_OPEN_DSYNC:\n>>>>>>>>> + /* not reachable */\n>>>>>>>>> + Assert(false);\n>>>>>>>>\n>>>>>>>> I agree.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>> Even when a backend exits, it sends the stats via pgstat_beshutdown_hook().\n>>>>>>>>> On the other hand, walwriter doesn't do that. Walwriter also should send\n>>>>>>>>> the stats even at its exit? Otherwise some stats can fail to be collected.\n>>>>>>>>> But ISTM that this issue existed from before, for example checkpointer\n>>>>>>>>> doesn't call pgstat_send_bgwriter() at its exit, so it's overkill to fix\n>>>>>>>>> this issue in this patch?\n>>>>>>>>\n>>>>>>>> Thanks, I thought it's better to do so.\n>>>>>>>> I added the shutdown hook for the walwriter and the checkpointer in v14-0003 patch.\n>>>>>>>\n>>>>>>> Thanks for 0003 patch!\n>>>>>>>\n>>>>>>> Isn't it overkill to send the stats in the walwriter-exit-callback? IMO we can\n>>>>>>> just send the stats only when ShutdownRequestPending is true in the walwriter\n>>>>>>> main loop (maybe just before calling HandleMainLoopInterrupts()).\n>>>>>>> If we do this, we cannot send the stats when walwriter throws FATAL error.\n>>>>>>> But that's ok because FATAL error on walwriter causes the server to crash.\n>>>>>>> Thought?\n>>>>>>\n>>>>>> Thanks for your comments!\n>>>>>> Yes, I agree.\n>>>>>>\n>>>>>>\n>>>>>>> Also ISTM that we don't need to use the callback for that purpose in\n>>>>>>> checkpointer because of the same reason. That is, we can send the stats\n>>>>>>> just after calling ShutdownXLOG(0, 0) in HandleCheckpointerInterrupts().\n>>>>>>> Thought?\n>>>>>>\n>>>>>> Yes, I think so too.\n>>>>>>\n>>>>>> Since ShutdownXLOG() may create restartpoint or checkpoint,\n>>>>>> it might generate WAL records.\n>>>>>>\n>>>>>>\n>>>>>>> I'm now not sure how much useful these changes are. 
As far as I read pgstat.c,\n>>>>>>> when shutdown is requested, the stats collector seems to exit even when\n>>>>>>> there are outstanding stats messages. So if checkpointer and walwriter send\n>>>>>>> the stats in their last cycles, those stats might not be collected.\n>>>>>>>\n>>>>>>> On the other hand, I can think that sending the stats in the last cycles would\n>>>>>>> improve the situation a bit than now. So I'm inclined to apply those changes...\n>>>>>>\n>>>>>> I didn't notice that. I agree this is an important aspect.\n>>>>>> I understood there is a case that the stats collector exits before the checkpointer\n>>>>>> or the walwriter exits and some stats might not be collected.\n>>>>>\n>>>>> IIUC the stats collector basically exits after checkpointer and walwriter exit.\n>>>>> But there seems no guarantee that the stats collector processes\n>>>>> all the messages that other processes have sent during the shutdown of\n>>>>> the server.\n>>>>\n>>>> Thanks, I understood the above postmaster behaviors.\n>>>>\n>>>> PMState manages the status and after checkpointer is exited, the postmaster sends\n>>>> SIGQUIT signal to the stats collector if the shutdown mode is smart or fast.\n>>>> (IIUC, although the postmaster kill the walsender, the archiver and\n>>>> the stats collector at the same time, it's ok because the walsender\n>>>> and the archiver doesn't send stats to the stats collector now.)\n>>>>\n>>>> But, there might be a corner case to lose stats sent by background workers like\n>>>> the checkpointer before they exit (although this is not implemented yet.)\n>>>>\n>>>> For example,\n>>>>\n>>>> 1. checkpointer send the stats before it exit\n>>>> 2. stats collector receive the signal and break before processing\n>>>> the stats message from checkpointer. In this case, 1's message is lost.\n>>>> 3. 
stats collector writes the stats in the statsfiles and exit\n>>>>\n>>>> Why don't you recheck the coming message is zero just before the 2th procedure?\n>>>> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n>>>\n>>> Yes, I was thinking the same. This is the straight-forward fix for this issue.\n>>> The stats collector should process all the outstanding messages when\n>>> normal shutdown is requested, as the patch does. On the other hand,\n>>> if immediate shutdown is requested or emergency bailout (by postmaster death)\n>>> is requested, maybe the stats collector should skip those processings\n>>> and exit immediately.\n>>>\n>>> But if we implement that, we would need to teach the stats collector\n>>> the shutdown type (i.e., normal shutdown or immediate one). Because\n>>> currently SIGQUIT is sent to the collector whichever shutdown is requested,\n>>> and so the collector cannot distinguish the shutdown type. I'm afraid that\n>>> change is a bit overkill for now.\n>>>\n>>> BTW, I found that the collector calls pgstat_write_statsfiles() even at\n>>> emergency bailout case, before exiting. It's not necessary to save\n>>> the stats to the file in that case because subsequent server startup does\n>>> crash recovery and clears that stats file. So it's better to make\n>>> the collector exit immediately without calling pgstat_write_statsfiles()\n>>> at emergency bailout case? 
Probably this should be discussed in other\n>>> thread because it's different topic from the feature we're discussing here,\n>>> though.\n>>\n>> IIUC, only the stats collector has another hander for SIGQUIT although\n>> other background processes have a common hander for it and just call _exit(2).\n>> I thought to guarantee when TerminateChildren(SIGTERM) is invoked, don't make stats\n>> collector shutdown before other background processes are shutdown.\n>>\n>> I will make another thread to discuss that the stats collector should know the shutdown type or not.\n>> If it should be, it's better to make the stats collector exit as soon as possible if the shutdown type\n>> is an immediate, and avoid losing the remaining stats if it's normal.\n> \n> +1\n> \n> \n>>\n>>\n>>\n>>>> I measured the timing of the above in my linux laptop using v17-measure-timing.patch.\n>>>> I don't have any strong opinion to handle this case since this result shows to receive and processes\n>>>> the messages takes too short time (less than 1ms) although the stats collector receives the shutdown\n>>>> signal in 5msec(099->104) after the checkpointer process exits.\n>>>\n>>> Agreed.\n>>>\n>>>>\n>>>> ```\n>>>> 1615421204.556 [checkpointer] DEBUG: received shutdown request signal\n>>>> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to make # exit and send the messages\n>>>> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats message # receive and process the messages\n>>>> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n>>>> 1615421208.104 [postmaster] DEBUG: reaping dead processes\n>>>> 1615421208.104 [stats collector] DEBUG: received shutdown request signal # receive shutdown request from the postmaster\n>>>> ```\n>>>>\n>>>>>>> Of course, there is another direction; we can improve the stats collector so\n>>>>>>> that it guarantees to collect all the sent stats messages. 
But I'm afraid\n>>>>>>> this change might be big.\n>>>>>>\n>>>>>> For example, implement to manage background process status in shared memory and\n>>>>>> the stats collector collects the stats until another background process exits?\n>>>>>>\n>>>>>> In my understanding, the statistics are not required high accuracy,\n>>>>>> it's ok to ignore them if the impact is not big.\n>>>>>>\n>>>>>> If we guarantee high accuracy, another background process like autovacuum launcher\n>>>>>> must send the WAL stats because it accesses the system catalog and might generate\n>>>>>> WAL records due to HOT update even though the possibility is low.\n>>>>>>\n>>>>>> I thought the impact is small because the time uncollected stats are generated is\n>>>>>> short compared to the time from startup. So, it's ok to ignore the remaining stats\n>>>>>> when the process exists.\n>>>>>\n>>>>> I agree that it's not worth changing lots of code to collect such stats.\n>>>>> But if we can implement that very simply, isn't it more worth doing\n>>>>> that than current situation because we may be able to collect more\n>>>>> accurate stats.\n>>>>\n>>>> Yes, I agree.\n>>>> I attached the patch to send the stats before the wal writer and the checkpointer exit.\n>>>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n>>>\n>>> Thanks for making those patches! Firstly I'm reading 0001 and 0002 patches.\n>>\n>> Thanks for your comments and for making patches.\n>>\n>>\n>>> Here is the review comments for 0001 patch.\n>>>\n>>> +/* Prototypes for private functions */\n>>> +static void HandleWalWriterInterrupts(void);\n>>>\n>>> HandleWalWriterInterrupts() and HandleMainLoopInterrupts() are almost the same.\n>>> So I don't think that we need to introduce HandleWalWriterInterrupts(). Instead,\n>>> we can just call pgstat_send_wal(true) before HandleMainLoopInterrupts()\n>>> if ShutdownRequestPending is true in the main loop. 
Attached is the patch\n>>> I implemented that way. Thought?\n>>\n>> I thought there is a corner case that can't send the stats like\n> \n> You're right! So IMO your patch (v17-0001-send-stats-for-walwriter-when-shutdown.patch) is better.\n> \n> \n>>\n>> ```\n>> // First, ShutdownRequstPending = false\n>>\n>> if (ShutdownRequestPending) // don't send the stats\n>> pgstat_send_wal(true);\n>>\n>> // receive signal and ShutdownRequestPending became true\n>>\n>> HandleMainLoopInterrupts(); // proc exit without sending the stats\n>>\n>> ```\n>>\n>> Is it ok because it almost never occurs?\n>>\n>>\n>>> Here is the review comments for 0002 patch.\n>>>\n>>> +static void pgstat_send_checkpointer(void);\n>>>\n>>> I'm inclined to avoid adding the function with the prefix \"pgstat_\" outside\n>>> pgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() and\n>>> pgstat_report_wal() directly after ShutdownXLOG(). Thought? Patch attached.\n>>\n>> Thanks. I agree.\n> \n> Thanks for the review!\n> \n> \n> So, barring any objection, I will commit the changes for\n> walwriter and checkpointer. That is,\n> \n> v17-0001-send-stats-for-walwriter-when-shutdown.patch\n> v17-0002-send-stats-for-checkpointer-when-shutdown_fujii.patch\n\nI pushed these two patches.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 12 Mar 2021 14:25:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-12 12:39, Fujii Masao wrote:\n> On 2021/03/11 21:29, Masahiro Ikeda wrote:\n>>>> In addition, I rebased the patch for WAL receiver.\n>>>> (v17-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>> \n>>> Thanks! Will review this later.\n>> \n>> Thanks a lot!\n> \n> I read through the 0003 patch. Here are some comments for that.\n> \n> With the patch, walreceiver's stats are counted as wal_write,\n> wal_sync, wal_write_time and wal_sync_time in pg_stat_wal. But they\n> should be counted as different columns because WAL IO is different\n> between walreceiver and other processes like a backend? For example,\n> open_sync or open_datasync is chosen as wal_sync_method, those other\n> processes use O_DIRECT flag to open WAL files, but walreceiver does\n> not. For example, those other procesess write WAL data in block units,\n> but walreceiver does not. So I'm concerned that mixing different WAL\n> IO stats in the same columns would confuse the users. Thought? I'd\n> like to hear more opinions about how to expose walreceiver's stats to\n> users.\n\nThanks, I understood that get_sync_bit() checks the sync flags, and that\nthe write unit differs between generated WAL data and replicated WAL data.\n(It's an interesting optimization whether to use the kernel cache or not.)\n\nOK. Although I agree to separate the stats for the walreceiver,\nI want to hear opinions from other people too. I didn't change the \npatch.\n\nPlease feel free to comment.\n\n\n\n\n> +int\n> +XLogWriteFile(int fd, const void *buf, size_t nbyte, off_t offset)\n> \n> This common function writes WAL data and measures IO timing. IMO we\n> can refactor the code furthermore by making this function handle the\n> case where pg_write() reports an error. In other words, I think that\n> the function should do what do-while loop block in XLogWrite() does.\n> Thought?\n\nOK. 
I agree.\n\nI wonder whether we should change the error-checking behavior depending on\nwho calls this function.\nNow, only the walreceiver checks (1) errno==0 and doesn't check\n(2) errno==EINTR.\nOther processes are the opposite.\n\nIIUC, it's appropriate that every process checks both (1) and (2).\nPlease let me know if my understanding is wrong.\n\n\n\n> BTW, currently XLogWrite() increments IO timing even when pg_pwrite()\n> reports an error. But this is useless. Probably IO timing should be\n> incremented after the return code of pg_pwrite() is checked, instead?\n\nYes, I agree. I fixed it.\n(v18-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\n\n\nBTW, thanks for your comments in person that the bgwriter process will\ngenerate WAL data.\nI checked that it generates WAL to take a snapshot via\nLogStandbySnapshot().\nI attached the patch for the bgwriter to send the WAL stats.\n(v18-0005-send-stats-for-bgwriter-when-shutdown.patch)\n\nThis patch includes the following changes.\n\n(1) introduce in pgstat_send_bgwriter() the mechanism of sending the stats\n only if PGSTAT_STAT_INTERVAL msec has passed, like pgstat_send_wal(),\n to avoid overloading the stats collector because \"bgwriter_delay\"\n can be set to as little as 10msec.\n\n(2) remove pgstat_report_wal() and integrate it with pgstat_send_wal()\n because the bgwriter sends WalStats.m_wal_records, and to avoid\n overloading (see (1)).\n IIUC, although the benefit of keeping them separate is to reduce the\n calculation cost of WalUsageAccumDiff(), the impact is limited.\n\n(3) make a new signal handler for the bgwriter to force sending the\n remaining stats during shutdown because of (1), and remove\n HandleMainLoopInterrupts() because no processes use it anymore.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
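[Editor's sketch] The throttling described in change (1) — drop a stats message unless PGSTAT_STAT_INTERVAL has elapsed, but always send when forced at shutdown so the final stats are not lost — can be sketched as below. This is not code from the patch: send_stats_throttled() and the millisecond timestamp parameter are illustrative (in PostgreSQL the timestamp would come from GetCurrentTimestamp() and the comparison from TimestampDifferenceExceeds()).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PGSTAT_STAT_INTERVAL_MS 500	/* minimum ms between sends, matching pgstat.c's interval */

static uint64_t last_report_ms;		/* when a stats message was last sent */

/*
 * Decide whether to send a stats message now.  Unless "force" is set
 * (e.g. the process is shutting down and must not lose its last stats),
 * sends closer together than PGSTAT_STAT_INTERVAL_MS are skipped, so a
 * small bgwriter_delay cannot flood the stats collector with messages.
 * Returns true if the message would actually be sent.
 */
static bool
send_stats_throttled(bool force, uint64_t now_ms)
{
	if (!force && now_ms - last_report_ms < PGSTAT_STAT_INTERVAL_MS)
		return false;			/* too soon since the last send: skip */

	last_report_ms = now_ms;
	/* ... fill in and send the actual stats message here ... */
	return true;
}
```

For example, a first call at t=1000 ms sends; a second call at t=1100 ms is throttled; a forced call at t=1100 ms (shutdown) still sends.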
"msg_date": "Mon, 15 Mar 2021 10:39:06 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": ">> On 2021/03/11 21:29, Masahiro Ikeda wrote:\n>>> On 2021-03-11 11:52, Fujii Masao wrote:\n>>>> On 2021/03/11 9:38, Masahiro Ikeda wrote:\n>>>>> On 2021-03-10 17:08, Fujii Masao wrote:\n>>>>>> IIUC the stats collector basically exits after checkpointer and \n>>>>>> walwriter exit.\n>>>>>> But there seems no guarantee that the stats collector processes\n>>>>>> all the messages that other processes have sent during the \n>>>>>> shutdown of\n>>>>>> the server.\n>>>>> \n>>>>> Thanks, I understood the above postmaster behaviors.\n>>>>> \n>>>>> PMState manages the status and after checkpointer is exited, the \n>>>>> postmaster sends\n>>>>> SIGQUIT signal to the stats collector if the shutdown mode is smart \n>>>>> or fast.\n>>>>> (IIUC, although the postmaster kill the walsender, the archiver and\n>>>>> the stats collector at the same time, it's ok because the walsender\n>>>>> and the archiver doesn't send stats to the stats collector now.)\n>>>>> \n>>>>> But, there might be a corner case to lose stats sent by background \n>>>>> workers like\n>>>>> the checkpointer before they exit (although this is not implemented \n>>>>> yet.)\n>>>>> \n>>>>> For example,\n>>>>> \n>>>>> 1. checkpointer send the stats before it exit\n>>>>> 2. stats collector receive the signal and break before processing\n>>>>> the stats message from checkpointer. In this case, 1's message \n>>>>> is lost.\n>>>>> 3. stats collector writes the stats in the statsfiles and exit\n>>>>> \n>>>>> Why don't you recheck the coming message is zero just before the \n>>>>> 2th procedure?\n>>>>> (v17-0004-guarantee-to-collect-last-stats-messages.patch)\n>>>> \n>>>> Yes, I was thinking the same. This is the straight-forward fix for \n>>>> this issue.\n>>>> The stats collector should process all the outstanding messages when\n>>>> normal shutdown is requested, as the patch does. 
On the other hand,\n>>>> if immediate shutdown is requested or emergency bailout (by \n>>>> postmaster death)\n>>>> is requested, maybe the stats collector should skip those \n>>>> processings\n>>>> and exit immediately.\n>>>> \n>>>> But if we implement that, we would need to teach the stats collector\n>>>> the shutdown type (i.e., normal shutdown or immediate one). Because\n>>>> currently SIGQUIT is sent to the collector whichever shutdown is \n>>>> requested,\n>>>> and so the collector cannot distinguish the shutdown type. I'm \n>>>> afraid that\n>>>> change is a bit overkill for now.\n>>>> \n>>>> BTW, I found that the collector calls pgstat_write_statsfiles() even \n>>>> at\n>>>> emergency bailout case, before exiting. It's not necessary to save\n>>>> the stats to the file in that case because subsequent server startup \n>>>> does\n>>>> crash recovery and clears that stats file. So it's better to make\n>>>> the collector exit immediately without calling \n>>>> pgstat_write_statsfiles()\n>>>> at emergency bailout case? Probably this should be discussed in \n>>>> other\n>>>> thread because it's different topic from the feature we're \n>>>> discussing here,\n>>>> though.\n>>> \n>>> IIUC, only the stats collector has another hander for SIGQUIT \n>>> although\n>>> other background processes have a common hander for it and just call \n>>> _exit(2).\n>>> I thought to guarantee when TerminateChildren(SIGTERM) is invoked, \n>>> don't make stats\n>>> collector shutdown before other background processes are shutdown.\n>>> \n>>> I will make another thread to discuss that the stats collector should \n>>> know the shutdown type or not.\n>>> If it should be, it's better to make the stats collector exit as soon \n>>> as possible if the shutdown type\n>>> is an immediate, and avoid losing the remaining stats if it's normal.\n>> \n>> +1\n\nI researched the past discussion related to writing the stats files when \nthe immediate\nshutdown is requested. 
And I found the following thread([1]) although \nthe discussion is\nstopped on 12/1/2016.\n\nIIUC, the thread's consensus are\n\n(1) To kill the stats collector soon before writing the stats file is \nneeded in some case\n because there is a possibility that it takes a long time until the \nfailover happens.\n The possible reasons are that disk write speed is slow, stats files \nare big, and so on.\n\n(2) It needs to change the behavior from removing all stats files when \nthe startup does\n crash recovery because autovacuum uses the stats.\n\n(3) It's ok that the stats collector exit without calling \npgstat_write_statsfiles() if\n the stats file is written every X minutes (using wal or another \nmechanism) and startup\n process can restore the stats with slightly low freshness.\n\n(4) It needs to find the way how to handle the (2)'s stats file when \ndeleting on PITR\n rewind or stats collector crash happens.\n\nSo, I need to ping the threads. But I don't have any idea to handle (4) \nyet...\n\n[1] \nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F5EF25A%40G01JPEXMBYT05\n\n\n>> \n>>> \n>>> \n>>> \n>>>>> I measured the timing of the above in my linux laptop using \n>>>>> v17-measure-timing.patch.\n>>>>> I don't have any strong opinion to handle this case since this \n>>>>> result shows to receive and processes\n>>>>> the messages takes too short time (less than 1ms) although the \n>>>>> stats collector receives the shutdown\n>>>>> signal in 5msec(099->104) after the checkpointer process exits.\n>>>> \n>>>> Agreed.\n>>>> \n>>>>> \n>>>>> ```\n>>>>> 1615421204.556 [checkpointer] DEBUG: received shutdown request \n>>>>> signal\n>>>>> 1615421208.099 [checkpointer] DEBUG: proc_exit(-1): 0 callbacks to \n>>>>> make # exit and send the messages\n>>>>> 1615421208.099 [stats collector] DEBUG: process BGWRITER stats \n>>>>> message # receive and process the messages\n>>>>> 1615421208.099 [stats collector] DEBUG: process WAL stats message\n>>>>> 
1615421208.104 [postmaster] DEBUG: reaping dead processes\n>>>>> 1615421208.104 [stats collector] DEBUG: received shutdown request \n>>>>> signal # receive shutdown request from the postmaster\n>>>>> ```\n>>>>> \n>>>>>>>> Of course, there is another direction; we can improve the stats \n>>>>>>>> collector so\n>>>>>>>> that it guarantees to collect all the sent stats messages. But \n>>>>>>>> I'm afraid\n>>>>>>>> this change might be big.\n>>>>>>> \n>>>>>>> For example, implement to manage background process status in \n>>>>>>> shared memory and\n>>>>>>> the stats collector collects the stats until another background \n>>>>>>> process exits?\n>>>>>>> \n>>>>>>> In my understanding, the statistics are not required high \n>>>>>>> accuracy,\n>>>>>>> it's ok to ignore them if the impact is not big.\n>>>>>>> \n>>>>>>> If we guarantee high accuracy, another background process like \n>>>>>>> autovacuum launcher\n>>>>>>> must send the WAL stats because it accesses the system catalog \n>>>>>>> and might generate\n>>>>>>> WAL records due to HOT update even though the possibility is low.\n>>>>>>> \n>>>>>>> I thought the impact is small because the time uncollected stats \n>>>>>>> are generated is\n>>>>>>> short compared to the time from startup. So, it's ok to ignore \n>>>>>>> the remaining stats\n>>>>>>> when the process exists.\n>>>>>> \n>>>>>> I agree that it's not worth changing lots of code to collect such \n>>>>>> stats.\n>>>>>> But if we can implement that very simply, isn't it more worth \n>>>>>> doing\n>>>>>> that than current situation because we may be able to collect more\n>>>>>> accurate stats.\n>>>>> \n>>>>> Yes, I agree.\n>>>>> I attached the patch to send the stats before the wal writer and \n>>>>> the checkpointer exit.\n>>>>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch, \n>>>>> v17-0002-send-stats-for-checkpointer-when-shutdown.patch)\n>>>> \n>>>> Thanks for making those patches! 
Firstly I'm reading 0001 and 0002 \n>>>> patches.\n>>> \n>>> Thanks for your comments and for making patches.\n>>> \n>>> \n>>>> Here is the review comments for 0001 patch.\n>>>> \n>>>> +/* Prototypes for private functions */\n>>>> +static void HandleWalWriterInterrupts(void);\n>>>> \n>>>> HandleWalWriterInterrupts() and HandleMainLoopInterrupts() are \n>>>> almost the same.\n>>>> So I don't think that we need to introduce \n>>>> HandleWalWriterInterrupts(). Instead,\n>>>> we can just call pgstat_send_wal(true) before \n>>>> HandleMainLoopInterrupts()\n>>>> if ShutdownRequestPending is true in the main loop. Attached is the \n>>>> patch\n>>>> I implemented that way. Thought?\n>>> \n>>> I thought there is a corner case that can't send the stats like\n>> \n>> You're right! So IMO your patch \n>> (v17-0001-send-stats-for-walwriter-when-shutdown.patch) is better.\n>>> \n>>> ```\n>>> // First, ShutdownRequstPending = false\n>>> \n>>> if (ShutdownRequestPending) // don't send the stats\n>>> pgstat_send_wal(true);\n>>> \n>>> // receive signal and ShutdownRequestPending became true\n>>> \n>>> HandleMainLoopInterrupts(); // proc exit without sending the \n>>> stats\n>>> \n>>> ```\n>>> \n>>> Is it ok because it almost never occurs?\n>>> \n>>> \n>>>> Here is the review comments for 0002 patch.\n>>>> \n>>>> +static void pgstat_send_checkpointer(void);\n>>>> \n>>>> I'm inclined to avoid adding the function with the prefix \"pgstat_\" \n>>>> outside\n>>>> pgstat.c. Instead, I'm ok to just call both pgstat_send_bgwriter() \n>>>> and\n>>>> pgstat_report_wal() directly after ShutdownXLOG(). Thought? Patch \n>>>> attached.\n>>> \n>>> Thanks. I agree.\n>> \n>> Thanks for the review!\n>> \n>> \n>> So, barring any objection, I will commit the changes for\n>> walwriter and checkpointer. 
That is,\n>> \n>> v17-0001-send-stats-for-walwriter-when-shutdown.patch\n>> v17-0002-send-stats-for-checkpointer-when-shutdown_fujii.patch\n> \n> I pushed these two patches.\n\nThanks a lot!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:54:01 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/15 10:39, Masahiro Ikeda wrote:\n> Thanks, I understood get_sync_bit() checks the sync flags and\n> the write unit of generated wal data and replicated wal data is different.\n> (It's interesting optimization whether to use kernel cache or not.)\n> \n> OK. Although I agree to separate the stats for the walrecever,\n> I want to hear opinions from other people too. I didn't change the patch.\n> \n> Please feel to your comments.\n\nWhat about applying the patch for common WAL write function like\nXLogWriteFile(), separately from the patch for walreceiver's stats?\nSeems the former reaches the consensus, so we can commit it firstly.\nAlso even only the former change is useful because which allows\nwalreceiver to report WALWrite wait event.\n\n> OK. I agree.\n> \n> I wonder to change the error check ways depending on who calls this function?\n> Now, only the walreceiver checks (1)errno==0 and doesn't check (2)errno==ENITR.\n> Other processes are the opposite.\n> \n> IIUC, it's appropriate that every process checks (1)(2).\n> Please let me know my understanding is wrong.\n\nI'm thinking the same. Regarding (2), commit 79ce29c734 introduced\nthat code. According to the following commit log, it seems harmless\nto retry on EINTR even walreceiver.\n\n Also retry on EINTR. All signals used in the backend are flagged SA_RESTART\n nowadays, so it shouldn't happen, but better to be defensive.\n\n>> BTW, currently XLogWrite() increments IO timing even when pg_pwrite()\n>> reports an error. But this is useless. Probably IO timing should be\n>> incremented after the return code of pg_pwrite() is checked, instead?\n> \n> Yes, I agree. 
I fixed it.\n> (v18-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\nThanks for the patch!\n\n \t\t\tnleft = nbytes;\n \t\t\tdo\n \t\t\t{\n-\t\t\t\terrno = 0;\n+\t\t\t\twritten = XLogWriteFile(openLogFile, from, nleft, (off_t) startoffset,\n+\t\t\t\t\t\t\t\t\t\tThisTimeLineID, openLogSegNo, wal_segment_size);\n\nCan we merge this do-while loop in XLogWrite() into the loop\nin XLogWriteFile()?\n\nIf we do that, ISTM that the following codes are not necessary in XLogWrite().\n\n \t\t\t\tnleft -= written;\n \t\t\t\tfrom += written;\n\n+ * 'segsize' is a segment size of WAL segment file.\n\nSince segsize is always wal_segment_size, segsize argument seems\nnot necessary in XLogWriteFile().\n\n+XLogWriteFile(int fd, const void *buf, size_t nbyte, off_t offset,\n+\t\t\t TimeLineID timelineid, XLogSegNo segno, int segsize)\n\nWhy did you use \"const void *\" instead of \"char *\" for *buf?\n\nRegarding 0005 patch, I will review it later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 16:30:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021-03-19 16:30, Fujii Masao wrote:\n> On 2021/03/15 10:39, Masahiro Ikeda wrote:\n>> Thanks, I understood get_sync_bit() checks the sync flags and\n>> the write unit of generated wal data and replicated wal data is \n>> different.\n>> (It's interesting optimization whether to use kernel cache or not.)\n>> \n>> OK. Although I agree to separate the stats for the walrecever,\n>> I want to hear opinions from other people too. I didn't change the \n>> patch.\n>> \n>> Please feel to your comments.\n> \n> What about applying the patch for common WAL write function like\n> XLogWriteFile(), separately from the patch for walreceiver's stats?\n> Seems the former reaches the consensus, so we can commit it firstly.\n> Also even only the former change is useful because which allows\n> walreceiver to report WALWrite wait event.\n\nAgreed. I separated the patches.\n\nIf only the former is committed, my trivial concern is that there may be\na disadvantage, but no advantage for the standby server. It may lead to\nperformance degradation to the wal receiver by calling\nINSTR_TIME_SET_CURRENT(), but the stats can't visible for users until the\nlatter patch is committed.\n\nI think it's ok because this not happening in the case to disable the\n\"track_wal_io_timing\" in the standby server. Although some users may start the\nstandby server using the backup which \"track_wal_io_timing\" is enabled in the\nprimary server, they will say it's ok since the users already accept the\nperformance degradation in the primary server.\n\n>> OK. I agree.\n>> \n>> I wonder to change the error check ways depending on who calls this \n>> function?\n>> Now, only the walreceiver checks (1)errno==0 and doesn't check \n>> (2)errno==ENITR.\n>> Other processes are the opposite.\n>> \n>> IIUC, it's appropriate that every process checks (1)(2).\n>> Please let me know my understanding is wrong.\n> \n> I'm thinking the same. Regarding (2), commit 79ce29c734 introduced\n> that code. 
According to the following commit log, it seems harmless\n> to retry on EINTR even walreceiver.\n> \n> Also retry on EINTR. All signals used in the backend are flagged \n> SA_RESTART\n> nowadays, so it shouldn't happen, but better to be defensive.\n\nThanks, I understood.\n\n\n>>> BTW, currently XLogWrite() increments IO timing even when pg_pwrite()\n>>> reports an error. But this is useless. Probably IO timing should be\n>>> incremented after the return code of pg_pwrite() is checked, instead?\n>> \n>> Yes, I agree. I fixed it.\n>> (v18-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n> \n> Thanks for the patch!\n> \n> \t\t\tnleft = nbytes;\n> \t\t\tdo\n> \t\t\t{\n> -\t\t\t\terrno = 0;\n> +\t\t\t\twritten = XLogWriteFile(openLogFile, from, nleft, (off_t) \n> startoffset,\n> +\t\t\t\t\t\t\t\t\t\tThisTimeLineID, openLogSegNo, wal_segment_size);\n> \n> Can we merge this do-while loop in XLogWrite() into the loop\n> in XLogWriteFile()?\n> If we do that, ISTM that the following codes are not necessary in \n> XLogWrite().\n> \n> \t\t\t\tnleft -= written;\n> \t\t\t\tfrom += written;\n\nOK, I fixed it.\n\n\n> + * 'segsize' is a segment size of WAL segment file.\n> \n> Since segsize is always wal_segment_size, segsize argument seems\n> not necessary in XLogWriteFile().\n\nRight. I fixed it.\n\n\n> +XLogWriteFile(int fd, const void *buf, size_t nbyte, off_t offset,\n> +\t\t\t TimeLineID timelineid, XLogSegNo segno, int segsize)\n> \n> Why did you use \"const void *\" instead of \"char *\" for *buf?\n\nI followed the argument of pg_pwrite().\nBut, I think \"char *\" is better, so fixed it.\n\n\n> Regarding 0005 patch, I will review it later.\n\nThanks.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 22 Mar 2021 09:50:45 +0900",
"msg_from": "ikedamsh <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/22 9:50, ikedamsh wrote:\n> Agreed. I separated the patches.\n> \n> If only the former is committed, my trivial concern is that there may be\n> a disadvantage, but no advantage for the standby server. It may lead to\n> performance degradation to the wal receiver by calling\n> INSTR_TIME_SET_CURRENT(), but the stats can't visible for users until the\n> latter patch is committed.\n\nYour concern is valid, so let's polish and commit also the 0003 patch to v14.\nI'm still thinking that it's better to separate wal_xxx columns into\nwalreceiver's and the others. But if we count even walreceiver activity on\nthe existing columns, regarding 0003 patch, we need to update the document?\nFor example, \"Number of times WAL buffers were written out to disk via\nXLogWrite request.\" should be \"Number of times WAL buffers were written\nout to disk via XLogWrite request and by WAL receiver process.\"? Maybe\nwe need to append some descriptions about this into \"WAL configuration\"\nsection?\n\n\n> I followed the argument of pg_pwrite().\n> But, I think \"char *\" is better, so fixed it.\n\nThanks for updating the patch!\n\n+extern int\tXLogWriteFile(int fd, char *buf,\n+\t\t\t\t\t\t size_t nbyte, off_t offset,\n+\t\t\t\t\t\t TimeLineID timelineid, XLogSegNo segno,\n+\t\t\t\t\t\t bool write_all);\n\nwrite_all seems not to be necessary. You added this flag for walreceiver,\nI guess. But even without the argument, walreceiver seems to work expectedly.\nSo, what about the attached patch? I applied some cosmetic changes to the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 22 Mar 2021 16:50:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "On 2021/03/22 16:50, Fujii Masao wrote:\n> \n> \n> On 2021/03/22 9:50, ikedamsh wrote:\n>> Agreed. I separated the patches.\n>>\n>> If only the former is committed, my trivial concern is that there may be\n>> a disadvantage, but no advantage for the standby server. It may lead to\n>> performance degradation to the wal receiver by calling\n>> INSTR_TIME_SET_CURRENT(), but the stats can't visible for users until the\n>> latter patch is committed.\n> \n> Your concern is valid, so let's polish and commit also the 0003 patch to v14.\n> I'm still thinking that it's better to separate wal_xxx columns into\n> walreceiver's and the others. But if we count even walreceiver activity on\n> the existing columns, regarding 0003 patch, we need to update the document?\n> For example, \"Number of times WAL buffers were written out to disk via\n> XLogWrite request.\" should be \"Number of times WAL buffers were written\n> out to disk via XLogWrite request and by WAL receiver process.\"? Maybe\n> we need to append some descriptions about this into \"WAL configuration\"\n> section?\n\nAgreed. Users can know whether the stats is for walreceiver or not. The\npg_stat_wal view in standby server shows for the walreceiver, and in primary\nserver it shows for the others. So, I updated the document.\n(v20-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\n>> I followed the argument of pg_pwrite().\n>> But, I think \"char *\" is better, so fixed it.\n> \n> Thanks for updating the patch!\n> \n> +extern int��� XLogWriteFile(int fd, char *buf,\n> +������������������������� size_t nbyte, off_t offset,\n> +������������������������� TimeLineID timelineid, XLogSegNo segno,\n> +������������������������� bool write_all);\n> \n> write_all seems not to be necessary. You added this flag for walreceiver,\n> I guess. But even without the argument, walreceiver seems to work expectedly.\n> So, what about the attached patch? 
I applied some cosmetic changes to the patch.\n\nThanks a lot. Yes, \"write_all\" is unnecessary.\nYour patch is looks good to me.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 22 Mar 2021 20:25:45 +0900",
"msg_from": "ikedamsh <ikedamsh@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/22 20:25, ikedamsh wrote:\n> Agreed. Users can know whether the stats is for walreceiver or not. The\n> pg_stat_wal view in standby server shows for the walreceiver, and in primary\n> server it shows for the others. So, I updated the document.\n> (v20-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n\nThanks for updating the docs!\n\nThere was the discussion about when the stats collector is invoked, at [1].\nCurrently during archive recovery or standby, the stats collector is\ninvoked when the startup process reaches the consistent state, sends\nPMSIGNAL_BEGIN_HOT_STANDBY, and then the system is starting accepting\nread-only connections. But walreceiver can be invoked at earlier stage.\nThis can cause walreceiver to generate and send the statistics about WAL\nwriting even though the stats collector has not been running yet. This might\nbe problematic? If so, maybe we need to ensure that the stats collector is\ninvoked before walreceiver?\n\nDuring recovery, the stats collector is not invoked if hot standby mode is\ndisabled. But walreceiver can be running in this case. So probably we should\nchange walreceiver so that it's invoked even when hot standby is disabled?\nOtherwise we cannnot collect the statistics about WAL writing by walreceiver\nin that case.\n\n[1]\nhttps://postgr.es/m/e5a982a5-8bb4-5a10-cf9a-40dd1921bdb5@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 23 Mar 2021 16:10:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/23 16:10, Fujii Masao wrote:\n> \n> \n> On 2021/03/22 20:25, ikedamsh wrote:\n>> Agreed. Users can know whether the stats is for walreceiver or not. The\n>> pg_stat_wal view in standby server shows for the walreceiver, and in primary\n>> server it shows for the others. So, I updated the document.\n>> (v20-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n> \n> Thanks for updating the docs!\n> \n> There was the discussion about when the stats collector is invoked, at [1].\n> Currently during archive recovery or standby, the stats collector is\n> invoked when the startup process reaches the consistent state, sends\n> PMSIGNAL_BEGIN_HOT_STANDBY, and then the system is starting accepting\n> read-only connections. But walreceiver can be invoked at earlier stage.\n> This can cause walreceiver to generate and send the statistics about WAL\n> writing even though the stats collector has not been running yet. This might\n> be problematic? If so, maybe we need to ensure that the stats collector is\n> invoked before walreceiver?\n> \n> During recovery, the stats collector is not invoked if hot standby mode is\n> disabled. But walreceiver can be running in this case. So probably we should\n> change walreceiver so that it's invoked even when hot standby is disabled?\n> Otherwise we cannnot collect the statistics about WAL writing by walreceiver\n> in that case.\n> \n> [1]\n> https://postgr.es/m/e5a982a5-8bb4-5a10-cf9a-40dd1921bdb5@oss.nttdata.com\n\nThanks for comments! I didn't notice that.\nAs I mentioned[1], if my understanding is right, this issue seem to be not for\nonly the wal receiver.\n\nSince the shared memory thread already handles these issues, does this patch,\nwhich to collect the stats for the wal receiver and make a common function for\nwriting wal files, have to be committed after the patches for share memory\nstats are committed? 
Or to handle them in this thread because we don't know\nwhen the shared memory stats patches will be committed.\n\nI think the former is better because to collect stats in shared memory is very\nuseful feature for users and it make a big change in design. So, I think it's\nbeneficial to make an effort to move the shared memory stats thread forward\n(by reviewing or testing) instead of handling the issues in this thread.\n\n[1]\nhttps://www.postgresql.org/message-id/9f4e19ad-518d-b91a-e500-25a666471c42%40oss.nttdata.com\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 25 Mar 2021 11:50:12 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
},
{
"msg_contents": "\n\nOn 2021/03/25 11:50, Masahiro Ikeda wrote:\n> \n> \n> On 2021/03/23 16:10, Fujii Masao wrote:\n>>\n>>\n>> On 2021/03/22 20:25, ikedamsh wrote:\n>>> Agreed. Users can know whether the stats is for walreceiver or not. The\n>>> pg_stat_wal view in standby server shows for the walreceiver, and in primary\n>>> server it shows for the others. So, I updated the document.\n>>> (v20-0003-Makes-the-wal-receiver-report-WAL-statistics.patch)\n>>\n>> Thanks for updating the docs!\n>>\n>> There was the discussion about when the stats collector is invoked, at [1].\n>> Currently during archive recovery or standby, the stats collector is\n>> invoked when the startup process reaches the consistent state, sends\n>> PMSIGNAL_BEGIN_HOT_STANDBY, and then the system is starting accepting\n>> read-only connections. But walreceiver can be invoked at earlier stage.\n>> This can cause walreceiver to generate and send the statistics about WAL\n>> writing even though the stats collector has not been running yet. This might\n>> be problematic? If so, maybe we need to ensure that the stats collector is\n>> invoked before walreceiver?\n>>\n>> During recovery, the stats collector is not invoked if hot standby mode is\n>> disabled. But walreceiver can be running in this case. So probably we should\n>> change walreceiver so that it's invoked even when hot standby is disabled?\n>> Otherwise we cannnot collect the statistics about WAL writing by walreceiver\n>> in that case.\n>>\n>> [1]\n>> https://postgr.es/m/e5a982a5-8bb4-5a10-cf9a-40dd1921bdb5@oss.nttdata.com\n> \n> Thanks for comments! I didn't notice that.\n> As I mentioned[1], if my understanding is right, this issue seem to be not for\n> only the wal receiver.\n> \n> Since the shared memory thread already handles these issues, does this patch,\n> which to collect the stats for the wal receiver and make a common function for\n> writing wal files, have to be committed after the patches for share memory\n> stats are committed? 
Or to handle them in this thread because we don't know\n> when the shared memory stats patches will be committed.\n> \n> I think the former is better because to collect stats in shared memory is very\n> useful feature for users and it make a big change in design. So, I think it's\n> beneficial to make an effort to move the shared memory stats thread forward\n> (by reviewing or testing) instead of handling the issues in this thread.\n\nSounds reasonable. Agreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 25 Mar 2021 22:06:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: About to add WAL write/fsync statistics to pg_stat_wal view"
}
] |
[
{
"msg_contents": "Hello,\n\nA description of what you are trying to achieve and what results you expect.:\nI am a student and I am new in PSQL. I am working on a research\nproject and an initial step is\n\nto trace the page request of the buffer manager. I need to know which\npage was evicted from the buffer and\nwhich page replaced it. For each request, I want to know the relation\noid and the blocknumber. In the end, I want to feed this\n\ninformation to a Machine learning model, that is why I it is essential\nfor me to create an easy-to-process trace.\n\nInitially, I modified the code within the BufferAlloc method in the\nbufmgr.c file,\n\nto log the pages that were requested and were already in the cache,\nthe pages that were evicted and the pages that\n\nreplaced them. However, I feel that this might not be the most optimal\nway, as the log file is a mess and\n\nit is hard to analyze. I was wondering if there is a more optimal way\nto achieve this.\n\nPostgreSQL version number you are running:\nPostgreSQL 12.4, compiled by Visual C++ build 1927, 64-bit\n\nHow you installed PostgreSQL:\nDownloaded the Source code from the github repository, build using\nVisual Studio 2019.\n\nChanges made to the settings in the postgresql.conf file*: see Server\nConfiguration <https://wiki.postgresql.org/wiki/Server_Configuration>\nfor a quick way to list them all.*\nlogging_collector = on\nlog_rotation_age = 0\nlog_min_error_statement = panic\nlog_error_verbosity = terse\nlog_statement = 'all'\n\n\nOperating system and version:\nWindows 10 Version 1909\n\n\nI apologise if this is the wrong list to post. Please direct me to an\nappropriate one if you feel this question is irrelevant.\n\n\nThank you for your time and help.\n\nHello,\nA description of what you are trying to achieve and what results you expect.:\nI am a student and I am new in PSQL. I am working on a research project and an initial step isto trace the page request of the buffer manager. 
I need to know which page was evicted from the buffer andwhich page replaced it. For each request, I want to know the relation oid and the blocknumber. In the end, I want to feed thisinformation to a Machine learning model, that is why I it is essential for me to create an easy-to-process trace.Initially, I modified the code within the BufferAlloc method in the bufmgr.c file,to log the pages that were requested and were already in the cache, the pages that were evicted and the pages that replaced them. However, I feel that this might not be the most optimal way, as the log file is a mess andit is hard to analyze. I was wondering if there is a more optimal way to achieve this.\nPostgreSQL version number you are running:\nPostgreSQL 12.4, compiled by Visual C++ build 1927, 64-bit\nHow you installed PostgreSQL:Downloaded the Source code from the github repository, build using Visual Studio 2019.\n\nChanges made to the settings in the postgresql.conf file: see Server Configuration for a quick way to list them all.\nlogging_collector = onlog_rotation_age = 0log_min_error_statement = paniclog_error_verbosity = terselog_statement = 'all'\nOperating system and version:Windows 10 Version 1909\nI apologise if this is the wrong list to post. Please direct me to an appropriate one if you feel this question is irrelevant.\n\n\nThank you for your time and help.",
"msg_date": "Tue, 8 Dec 2020 09:43:42 +0200",
"msg_from": "Irodotos Terpizis <irodotosterpizis@gmail.com>",
"msg_from_op": true,
"msg_subject": "Printing page request trace from buffer manager"
},
{
"msg_contents": "On 2020-Dec-08, Irodotos Terpizis wrote:\n\n> Initially, I modified the code within the BufferAlloc method in the\n> bufmgr.c file,\n> to log the pages that were requested and were already in the cache,\n> the pages that were evicted and the pages that\n> replaced them. However, I feel that this might not be the most optimal\n> way, as the log file is a mess and\n> it is hard to analyze. I was wondering if there is a more optimal way\n> to achieve this.\n\nHi Irodotos,\n\nDid you find an answer to this question? Can you explain in what way\nthe log is a mess?\n\nMaybe you need to start by defining how would you like the log file to\nlook.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"La conclusi�n que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusi�n de ellos\" (Tanenbaum)\n\n\n",
"msg_date": "Sat, 20 Feb 2021 19:20:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Printing page request trace from buffer manager"
}
] |
[
{
"msg_contents": "Hi,\n\nBased on a PoC reported in a previous thread [1] I'd like to propose new\nhooks around transaction commands. The objective of this patch is to\nallow PostgreSQL extension to act at start and end (including abort) of\na SQL statement in a transaction.\n\nThe idea for these hooks is born from the no go given to Takayuki\nTsunakawa's patch[2] proposing an in core implementation of\nstatement-level rollback transaction and the pg_statement_rollback\nextension[3] that we have developed at LzLabs. The extension\npg_statement_rollback has two limitation, the first one is that the\nclient still have to call the ROLLBACK TO SAVEPOINT when an error is\nencountered and the second is that it generates a crash when PostgreSQL\nis compiled with assert that can not be fixed at the extension level.\n\nAlthough that I have not though about other uses for these hooks, they\nwill allow a full server side statement-level rollback feature like in\ncommercial DBMSs like DB2 and Oracle. This feature is very often\nrequested by users that want to migrate to PostgreSQL.\n\n\nSPECIFICATION\n==================================================\n\n\nThere is no additional syntax or GUC, the patch just adds three new hooks:\n\n\n* start_xact_command_hook called at end of the start_xact_command()\nfunction.\n* finish_xact_command called in finish_xact_command() just before\nCommitTransactionCommand().\n* abort_current_transaction_hook called after an error is encountered at\nend of AbortCurrentTransaction().\n\nThese hooks allow an external plugins to execute code related to the SQL\nstatements executed in a transaction.\n\n\nDESIGN\n==================================================\n\n\nNothing more to add here.\n\n\nCONSIDERATIONS AND REQUESTS\n==================================================\n\n\nAn extension using these hooks that implements the server side rollback\nat statement level feature is attached to demonstrate the interest of\nthese hooks. 
If we want to support this feature the extension could be\nadded under the contrib/ directory.\n\nHere is an example of use of these hooks through the\npg_statement_rollbackv2 extension:\n\n LOAD 'pg_statement_rollbackv2.so';\n LOAD\n SET pg_statement_rollback.enabled TO on;\n SET\n CREATE SCHEMA testrsl;\n CREATE SCHEMA\n SET search_path TO testrsl,public;\n SET\n BEGIN;\n BEGIN\n CREATE TABLE tbl_rsl(id integer, val varchar(256));\n CREATE TABLE\n INSERT INTO tbl_rsl VALUES (1, 'one');\n INSERT 0 1\n WITH write AS (INSERT INTO tbl_rsl VALUES (2, 'two') RETURNING id,\nval) SELECT * FROM write;\n id | val\n ----+-----\n 2 | two\n (1 row)\n\n UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1; -- >>>>> will fail\n psql:simple.sql:14: ERROR: invalid input syntax for type integer: \"two\"\n LINE 1: UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1;\n ^\n SELECT * FROM tbl_rsl; -- Should show records id 1 + 2\n id | val\n ----+-----\n 1 | one\n 2 | two\n (2 rows)\n\n COMMIT;\n COMMIT\n\nAs you can see the failing UPDATE statement has been rolled back and we\nrecover the state of the transaction just before the statement without\nany client savepoint and rollback to savepoint action.\n\n\nI'll add this patch to Commitfest 2021-01.\n\n\nBest regards\n\n\n[1]\nhttps://www.postgresql-archive.org/Issue-with-server-side-statement-level-rollback-td6162387.html\n[2]\nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F6A9286%40G01JPEXMBYT05\n[3] https://github.com/darold/pg_statement_rollbackv2\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Tue, 8 Dec 2020 11:15:12 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi Julien,\n\nOn 12/8/20 5:15 AM, Gilles Darold wrote:\n> \n> Based on a PoC reported in a previous thread [1] I'd like to propose new\n> hooks around transaction commands. The objective of this patch is to\n> allow PostgreSQL extension to act at start and end (including abort) of\n> a SQL statement in a transaction.\n\nYou have signed up to review this patch. Do you know when you will have \na chance to do that?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 11 Mar 2021 07:41:35 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi David,\n\nOn Thu, Mar 11, 2021 at 07:41:35AM -0500, David Steele wrote:\n> Hi Julien,\n> \n> On 12/8/20 5:15 AM, Gilles Darold wrote:\n> > \n> > Based on a PoC reported in a previous thread [1] I'd like to propose new\n> > hooks around transaction commands. The objective of this patch is to\n> > allow PostgreSQL extension to act at start and end (including abort) of\n> > a SQL statement in a transaction.\n> \n> You have signed up to review this patch. Do you know when you will have a\n> chance to do that?\n\nThanks for the reminder! And sorry about that, I've unfortunately been quite\nbusy with other work duties recently. I already started to look at it (and\npg_statement_rollbackv2) so I should be able to post a review within a few\ndays!\n\n\n",
"msg_date": "Thu, 11 Mar 2021 21:01:04 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 08, 2020 at 11:15:12AM +0100, Gilles Darold wrote:\n> \n> Based on a PoC reported in a previous thread [1] I'd like to propose new\n> hooks around transaction commands. The objective of this patch is to\n> allow PostgreSQL extension to act at start and end (including abort) of\n> a SQL statement in a transaction.\n> \n> The idea for these hooks is born from the no go given to Takayuki\n> Tsunakawa's patch[2] proposing an in core implementation of\n> statement-level rollback transaction and the pg_statement_rollback\n> extension[3] that we have developed at LzLabs. The extension\n> pg_statement_rollback has two limitation, the first one is that the\n> client still have to call the ROLLBACK TO SAVEPOINT when an error is\n> encountered and the second is that it generates a crash when PostgreSQL\n> is compiled with assert that can not be fixed at the extension level.\n\nThis topic came up quite often on the mailing list, the last being from �lvaro\nat [1]. I think there's a general agreement that customers want that feature,\nwon't stop asking for it, and many if not all forks ended up implementing it.\n\nI would still prefer if he had a way to support if in vanilla postgres, with of\ncourse all possible safeguards to avoid an epic fiasco.\n\nI personally think that �lvaro's previous approach, giving the ability to\nspecify the rollback behavior in the TransactionStmt grammar, would be enough\n(I mean without the GUC part) to cover realistic and sensible usecases, which\nis where the client fully decides whether there's a statement level rollback or\nnot. One could probably build a custom module on top of that to introduce some\nkind of GUC to change the behavior more globally if it wants to take that risk.\n\nIf such an approach is still not wanted for core inclusion, then I'm in favor\nof adding those hooks. 
There's already a published extension that tries to\nimplement that (for complete fairness I'm one of the people to blame), but as\nGilles previously mentioned this is very hackish and the currently available\nhooks make it very hard if not impossible to have a perfect implementation.\nIt's clear that people will never stop trying to do it, so at least let's make\nit possible using a custom module.\n\nIt's also probably worthwhile to mention that the extension implementing\nserver side statement level rollback wasn't written because a client-side\nsolution wasn't doable, but because the client side implementation was\ncausing a really big overhead due to the need to send the extra commands,\nand putting it on the server side led to a really significant performance\nimprovement.\n\n> Although that I have not though about other uses for these hooks, they\n> will allow a full server side statement-level rollback feature like in\n> commercial DBMSs like DB2 and Oracle. This feature is very often\n> requested by users that want to migrate to PostgreSQL.\n\nI also thought about it, and I don't really see other possible usage for those\nhooks.\n\n> There is no additional syntax or GUC, the patch just adds three new hooks:\n> \n> \n> * start_xact_command_hook called at end of the start_xact_command()\n> function.\n> * finish_xact_command called in finish_xact_command() just before\n> CommitTransactionCommand().\n> * abort_current_transaction_hook called after an error is encountered at\n> end of AbortCurrentTransaction().\n> \n> These hooks allow an external plugins to execute code related to the SQL\n> statements executed in a transaction.\n\nThe only comment I have for those hooks is for the\nabort_current_transaction_hook. 
AbortCurrentTransaction() can be called\nrecursively, so should the hook provide some more information about the\nCurrentTransactionState, like the blockState, or is\nGetCurrentTransactionNestLevel() enough to act only for the wanted calls?\n\n\n[1] https://www.postgresql.org/message-id/20181207192006.rf4tkfl25oc6pqmv@alvherre.pgsql\n\n\n",
"msg_date": "Fri, 12 Mar 2021 13:55:46 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 12/03/2021 à 06:55, Julien Rouhaud a écrit :\n> Hi,\n>\n> On Tue, Dec 08, 2020 at 11:15:12AM +0100, Gilles Darold wrote:\n>> Based on a PoC reported in a previous thread [1] I'd like to propose new\n>> hooks around transaction commands. The objective of this patch is to\n>> allow PostgreSQL extension to act at start and end (including abort) of\n>> a SQL statement in a transaction.\n>>\n>> The idea for these hooks is born from the no go given to Takayuki\n>> Tsunakawa's patch[2] proposing an in core implementation of\n>> statement-level rollback transaction and the pg_statement_rollback\n>> extension[3] that we have developed at LzLabs. The extension\n>> pg_statement_rollback has two limitation, the first one is that the\n>> client still have to call the ROLLBACK TO SAVEPOINT when an error is\n>> encountered and the second is that it generates a crash when PostgreSQL\n>> is compiled with assert that can not be fixed at the extension level.\n> This topic came up quite often on the mailing list, the last being from Álvaro\n> at [1]. I think there's a general agreement that customers want that feature,\n> won't stop asking for it, and many if not all forks ended up implementing it.\n>\n> I would still prefer if he had a way to support if in vanilla postgres, with of\n> course all possible safeguards to avoid an epic fiasco.\n\n\nI have added Alvarro and Takayuki to the thread, this patch is inspired\nfrom their proposals. I wrote this patch after reading the thread and\nconcluding that a core implementation doesn't seems to make the\nconsensus and that this feature could be available to users through an\nextension.\n\n\n> I personally think that Álvaro's previous approach, giving the ability to\n> specify the rollback behavior in the TransactionStmt grammar, would be enough\n> (I mean without the GUC part) to cover realistic and sensible usecases, which\n> is where the client fully decides whether there's a statement level rollback or\n> not. 
One could probably build a custom module on top of that to introduce some\n> kind of GUC to change the behavior more globally if it wants to take that risk.\n\n\nYes probably; with this patch I just want to propose an external\nimplementation of the feature. The extension implementation \"just\"\nrequires these three hooks to provide the same feature as if it were\nimplemented in vanilla postgres. The feature can be simply enabled or\ndisabled by a custom user-defined variable before a transaction is\nstarted, or globally for all transactions.\n\n\n> If such an approach is still not wanted for core inclusion, then I'm in favor\n> of adding those hooks. There's already a published extension that tries to\n> implement that (for complete fairness I'm one of the people to blame), but as\n> Gilles previously mentioned this is very hackish and the currently available\n> hooks makes it very hard if not impossible to have a perfect implementation.\n> It's clear that people will never stop to try doing it, so at least let's make\n> it possible using a custom module.\n>\n> It's also probably worthwhile to mention that the custom extension implementing\n> server side statement level rollback wasn't implemented because it wasn't\n> doable in the client side, but because the client side implementation was\n> causing a really big overhead due to the need of sending the extra commands,\n> and putting it on the server side lead to really significant performance\n> improvement.\n\nRight, the closest extension to this feature is the one we\ndeveloped at LzLabs [2], but it still requires a rollback to savepoint at\nthe client side in case of error. The extension [3] using these hooks\ndoesn't have this limitation; everything is handled server side.\n\n\n>> Although that I have not though about other uses for these hooks, they\n>> will allow a full server side statement-level rollback feature like in\n>> commercial DBMSs like DB2 and Oracle. 
This feature is very often\n>> requested by users that want to migrate to PostgreSQL.\n> I also thought about it, and I don't really see other possible usage for those\n> hooks.\n\n\nYes, I don't have a lot of imagination either for possible other uses for\nthese hooks, but I hope that in itself this feature can justify them. I just\nthought that if we expose the query_string at the command_start hook we could\nallow its modification by external modules, but this is surely the worst\nidea I can produce.\n\n\n>> There is no additional syntax or GUC, the patch just adds three new hooks:\n>>\n>>\n>> * start_xact_command_hook called at end of the start_xact_command()\n>> function.\n>> * finish_xact_command called in finish_xact_command() just before\n>> CommitTransactionCommand().\n>> * abort_current_transaction_hook called after an error is encountered at\n>> end of AbortCurrentTransaction().\n>>\n>> These hooks allow an external plugins to execute code related to the SQL\n>> statements executed in a transaction.\n> The only comment I have for those hooks is for the\n> abort_current_transaction_hook. AbortCurrentTransaction() can be called\n> recursively, so should the hook provide some more information about the\n> CurrentTransactionState, like the blockState, or is\n> GetCurrentTransactionNestLevel() enough to act only for the wanted calls?\n\n\nI don't think we need to pass any information, at least for the rollback\nat statement level extension. All the information needed is accessible, and\nat abort_current_transaction_hook we actually only toggle a boolean to\nfire the rollback.\n\n\nI have rebased the patch.\n\n\nThanks for the review.\n\n\n[1]\nhttps://www.postgresql.org/message-id/20181207192006.rf4tkfl25oc6pqmv@alvherre.pgsql\n\n[2] https://github.com/lzlabs/pg_statement_rollback/\n\n[3] https://github.com/darold/pg_statement_rollbackv2\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/",
"msg_date": "Fri, 19 Mar 2021 23:02:29 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 11:02:29PM +0100, Gilles Darold wrote:\n> Le 12/03/2021 � 06:55, Julien Rouhaud a �crit�:\n> >\n> \n> I don't think we need to pass any information at least for the rollback\n> at statement level extension. All information needed are accessible and\n> actually at abort_current_transaction_hook we only toggle a boolean to\n> fire the rollback.\n\nThat's what I thought but I wanted to be sure.\n\nSo, I have nothing more to say about the patch itself. At that point, I guess\nthat we can't keep postponing that topic, and we should either:\n\n- commit this patch, or �lvaro's one based on a new grammar keyword for BEGIN\n (maybe without the GUC if that's the only hard blocker), assuming that there\n aren't any technical issue with those\n\n- reject this patch, and I guess set in stone that vanilla postgres will\n never allow that.\n\nGiven the situation I'm not sure if I should mark the patch as Ready for\nCommitter or not. I'll leave it as-is for now as �lvaro is already in Cc.\n\n> I have rebased the patch.\n\nThanks!\n\n\n",
"msg_date": "Sat, 20 Mar 2021 18:33:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 06:33:24PM +0800, Julien Rouhaud wrote:\n> \n> So, I have nothing more to say about the patch itself. At that point, I guess\n> that we can't keep postponing that topic, and we should either:\n> \n> - commit this patch, or �lvaro's one based on a new grammar keyword for BEGIN\n> (maybe without the GUC if that's the only hard blocker), assuming that there\n> aren't any technical issue with those\n> \n> - reject this patch, and I guess set in stone that vanilla postgres will\n> never allow that.\n> \n> Given the situation I'm not sure if I should mark the patch as Ready for\n> Committer or not. I'll leave it as-is for now as �lvaro is already in Cc.\n\nI just switched the patch to Ready for Committer.\n\n\n",
"msg_date": "Sun, 28 Mar 2021 21:22:07 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hello,\r\n\r\nAs far as I am concerned, I am totally awaiting for this kind of feature exposed here, for one single reason at this time : the extension pg_statement_rollback will be much more valuable with the ability of processing \"rollback to savepoint\" without the need for explicit instruction from client side (and this patch is giving this option).\r\nThe way the improvement is suggested here seems to be clever enough to allow many interesting behaviours from differents kinds of extensions.\r\n\r\nThank you,",
"msg_date": "Wed, 23 Jun 2021 06:30:09 +0000",
"msg_from": "Nicolas CHAHWEKILIAN <leptitstagiaire@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Nicolas CHAHWEKILIAN <leptitstagiaire@gmail.com> writes:\n> As far as I am concerned, I am totally awaiting for this kind of feature\n> exposed here, for one single reason at this time : the extension\n> pg_statement_rollback will be much more valuable with the ability of\n> processing \"rollback to savepoint\" without the need for explicit\n> instruction from client side (and this patch is giving this option).\n\nWhat exactly do these hooks do that isn't done as well or better\nby the RegisterXactCallback and RegisterSubXactCallback mechanisms?\nPerhaps we need to define some additional event types for those?\nOr pass more data to the callback functions?\n\nI quite dislike inventing a hook that's defined as \"run during\nstart_xact_command\", because there is basically nothing that's\nnot ad-hoc about that function: it's internal to postgres.c\nand both its responsibilities and its call sites have changed\nover time. I think anyone hooking into that will be displeased\nby the stability of their results.\n\nBTW, per the cfbot the patch doesn't even apply right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Jul 2021 12:47:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 01/07/2021 à 18:47, Tom Lane a écrit :\n> Nicolas CHAHWEKILIAN <leptitstagiaire@gmail.com> writes:\n>> As far as I am concerned, I am totally awaiting for this kind of feature\n>> exposed here, for one single reason at this time : the extension\n>> pg_statement_rollback will be much more valuable with the ability of\n>> processing \"rollback to savepoint\" without the need for explicit\n>> instruction from client side (and this patch is giving this option).\n> What exactly do these hooks do that isn't done as well or better\n> by the RegisterXactCallback and RegisterSubXactCallback mechanisms?\n> Perhaps we need to define some additional event types for those?\n> Or pass more data to the callback functions?\n\n\nSorry it take me time to recall the reason of the hooks. Actually the\nproblem is that the callbacks are not called when a statement is\nexecuted after an error so that we fall back to error:\n\n ERROR: current transaction is aborted, commands ignored until end\nof transaction block\n\nFor example with the rollback at statement level extension:\n\n\n BEGIN;\n UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1; -- >>>>> will fail\n LOG: statement: UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1;\n ERROR: invalid input syntax for type integer: \"two\"\n LINE 1: UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1;\n ^\n UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1; -- >>>>> will\n fail again\n LOG: statement: UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1;\n ERROR: current transaction is aborted, commands ignored until end\n of transaction block\n SELECT * FROM tbl_rsl; -- Should show records id 1 + 2\n LOG: statement: SELECT * FROM tbl_rsl;\n ERROR: current transaction is aborted, commands ignored until end\n of transaction block\n\n\nWith the exention and the hook on start_xact_command() we can continue\nand execute all the following statements.\n\n\nI have updated the patch to only keep the hook on 
start_xact_command(), since,\nas you've suggested, the other hooks can be replaced by the use of the\nxact callback. The extension has also been updated for testing the\nfeature, available here https://github.com/darold/pg_statement_rollbackv2\n\n\n> I quite dislike inventing a hook that's defined as \"run during\n> start_xact_command\", because there is basically nothing that's\n> not ad-hoc about that function: it's internal to postgres.c\n> and both its responsibilities and its call sites have changed\n> over time. I think anyone hooking into that will be displeased\n> by the stability of their results.\n\nUnfortunately I have not found a better solution, but I just tried\nplacing the hook in the function BeginCommand() in src/backend/tcop/dest.c\nand the extension is working as expected. Do you think it would be a\nbetter place? If so, I can update the patch. For this feature we\nneed a hook that is executed before any command, even if the transaction\nis in an aborted state, to be able to inject the rollback to savepoint;\nmaybe I'm not looking at the right place to do that.\n\n\nThanks\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Sat, 3 Jul 2021 17:46:10 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 01/07/2021 à 18:47, Tom Lane a écrit :\n> Nicolas CHAHWEKILIAN <leptitstagiaire@gmail.com> writes:\n>> As far as I am concerned, I am totally awaiting for this kind of feature\n>> exposed here, for one single reason at this time : the extension\n>> pg_statement_rollback will be much more valuable with the ability of\n>> processing \"rollback to savepoint\" without the need for explicit\n>> instruction from client side (and this patch is giving this option).\n> What exactly do these hooks do that isn't done as well or better\n> by the RegisterXactCallback and RegisterSubXactCallback mechanisms?\n> Perhaps we need to define some additional event types for those?\n> Or pass more data to the callback functions?\n>\n> I quite dislike inventing a hook that's defined as \"run during\n> start_xact_command\", because there is basically nothing that's\n> not ad-hoc about that function: it's internal to postgres.c\n> and both its responsibilities and its call sites have changed\n> over time. I think anyone hooking into that will be displeased\n> by the stability of their results.\n\n\nSorry Tom, it seems that I have totally misinterpreted your comments, \ngoogle translate was not a great help for my understanding but Julien \nwas. Thanks Julien.\n\n\nI'm joining a new patch v4 that removes the need of any hook and adds a \nnew events XACT_EVENT_COMMAND_START and SUBXACT_EVENT_COMMAND_START that \ncan be cautch in the xact callbacks when a new command is to be executed.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Mon, 5 Jul 2021 12:48:01 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi,\n\n\nI have renamed the patch and the title of this proposal registered in \nthe commitfest \"Xact/SubXact event callback at command start\" to reflect \nthe last changes that do not include new hooks anymore.\n\n\nHere is the new description corresponding to the current patch.\n\n\nThis patch allow to execute user-defined code for the start of any \ncommand through a xact registered callback. It introduce two new events \nin XactEvent and SubXactEvent enum called respectively \nXACT_EVENT_COMMAND_START and SUBXACT_EVENT_COMMAND_START. The callback \nis not called if a transaction is not started.\n\n\nThe objective of this new callback is to be able to call user-defined \ncode before any new statement is executed. For example it can call a \nrollback to savepoint if there was an error in the previous transaction \nstatement, which allow to implements Rollback at Statement Level at \nserver side using a PostgreSQL extension, see [1] .\n\n\nThe patch compile and regressions tests with assert enabled passed \nsuccessfully.\n\nThere is no regression test for this feature but extension at [1] has \nbeen used for validation of the new callback.\n\n\nThe patch adds insignificant overhead by looking at an existing callback \ndefinition but clearly it is the responsibility to the developer to \nevaluate the performances impact of its user-defined code for this \ncallback as it will be called before each statement. Here is a very \nsimple test using pgbench -c 20 -j 8 -T 30\n\n tps = 669.930274 (without user-defined code)\n tps = 640.718361 (with user-defined code from extension [1])\n\nthe overhead for this extension is around 4.5% which I think is not so \nbad good for such feature (internally it adds calls to RELEASE + \nSAVEPOINT before each write statement execution and in case of error a \nROLLBACK TO savepoint).\n\n\n[1] https://github.com/darold/pg_statement_rollbackv2\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Wed, 14 Jul 2021 15:48:43 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> I have renamed the patch and the title of this proposal registered in \n> the commitfest \"Xact/SubXact event callback at command start\" to reflect \n> the last changes that do not include new hooks anymore.\n\nHmm, it doesn't seem like this has addressed my concern at all.\nThe callbacks are still effectively defined as \"run during\nstart_xact_command\", so they're not any less squishy semantically\nthan they were before. The point of my criticism was that you\nshould move the call site to someplace that's more organically\nconnected to execution of commands.\n\nAnother thing I'm not too pleased with in this formulation is that it's\nvery unclear what the distinction is between XACT_EVENT_COMMAND_START\nand SUBXACT_EVENT_COMMAND_START. AFAICS, *every* potential use-case\nfor this would have to hook into both callback chains, and most likely\nwould treat the two events alike. Plus, as you note, the goalposts\nhave suddenly been moved for the amount of overhead it's okay to have\nin an XactCallback or SubXactCallback function. So that might cause\nproblems for unrelated code. It's probably better to not try to\nre-use that infrastructure.\n\n<digression>\n\n> The objective of this new callback is to be able to call user-defined \n> code before any new statement is executed. For example it can call a \n> rollback to savepoint if there was an error in the previous transaction \n> statement, which allow to implements Rollback at Statement Level at \n> server side using a PostgreSQL extension, see [1] .\n\nUrgh. Isn't this re-making the same mistake we made years ago, namely\ntrying to implement autocommit on the server side? 
I fear this will\nbe a disaster even larger than that was, because if it's an extension\nthen pretty much no applications will be prepared for the new semantics.\nI strongly urge you to read the discussions that led up to f85f43dfb,\nand to search the commit history before that for mentions of\n\"autocommit\", to see just how extensive the mess was.\n\nI spent a little time trying to locate said discussions; it's harder\nthan it should be because we didn't have the practice of citing email\nthreads in the commit log at the time. I did find\n\nhttps://www.postgresql.org/message-id/flat/Pine.LNX.4.44.0303172059170.1975-100000%40peter.localdomain#7ae26ed5c1bfbf9b22a420dfd8b8e69f\n\nwhich seems to have been the proximate decision, and here are\na few threads talking about all the messes that were created\nfor JDBC etc:\n\nhttps://www.postgresql.org/message-id/flat/3D793A93.7030000%40xythos.com#4a2e2d9bdf2967906a6e0a75815d6636\nhttps://www.postgresql.org/message-id/flat/3383060E-272E-11D7-BA14-000502E740BA%40wellsgaming.com\nhttps://www.postgresql.org/message-id/flat/Law14-F37PIje6n0ssr00000bc1%40hotmail.com\n\nBasically, changing transactional semantics underneath clients is\na horrid idea. Having such behavior in an extension rather than\nthe core doesn't make it less horrid. If we'd designed it to act\nthat way from day one, maybe it'd have been fine. But as things\nstand, we are quite locked into the position that this has to be\nmanaged on the client side.\n\n</digression>\n\nThat point doesn't necessarily invalidate the value of having\nsome sort of hook in this general area. But I would kind of like\nto see another use-case, because I don't believe in this one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jul 2021 15:26:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 14/07/2021 à 21:26, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> I have renamed the patch and the title of this proposal registered in\n>> the commitfest \"Xact/SubXact event callback at command start\" to reflect\n>> the last changes that do not include new hooks anymore.\n> Hmm, it doesn't seem like this has addressed my concern at all.\n> The callbacks are still effectively defined as \"run during\n> start_xact_command\", so they're not any less squishy semantically\n> than they were before. The point of my criticism was that you\n> should move the call site to someplace that's more organically\n> connected to execution of commands.\n\nI would like to move it closer to the command execution but the only \nplace I see would be in BeginCommand() which actually is waiting for \ncode to execute, for the moment this function does nothing. I don't see \nanother possible place after start_xact_command() call. All my attempt \nto inject the callback after start_xact_command() result in a failure. \nIf you see an other place I will be please to give it a test.\n\n\n> Another thing I'm not too pleased with in this formulation is that it's\n> very unclear what the distinction is between XACT_EVENT_COMMAND_START\n> and SUBXACT_EVENT_COMMAND_START. AFAICS, *every* potential use-case\n> for this would have to hook into both callback chains, and most likely\n> would treat the two events alike. Plus, as you note, the goalposts\n> have suddenly been moved for the amount of overhead it's okay to have\n> in an XactCallback or SubXactCallback function. So that might cause\n> problems for unrelated code. It's probably better to not try to\n> re-use that infrastructure.\n\nActually XACT_EVENT_COMMAND_START occurs only after the call BEGIN, when \na transaction starts, whereas SUBXACT_EVENT_COMMAND_START occurs in all \nsubsequent statement execution of this transaction. This helps to \nperform different actions following the event. 
In the example extension \nonly SUBXACT_EVENT_COMMAND_START is used but for example I could use \nevent XACT_EVENT_COMMAND_START to not send a RELEASE savepoint as there \nis none. I detect this case differently but this could be an improvement \nin the extension.\n\n\n>\n> <digression>\n>\n>> The objective of this new callback is to be able to call user-defined\n>> code before any new statement is executed. For example it can call a\n>> rollback to savepoint if there was an error in the previous transaction\n>> statement, which allow to implements Rollback at Statement Level at\n>> server side using a PostgreSQL extension, see [1] .\n> Urgh. Isn't this re-making the same mistake we made years ago, namely\n> trying to implement autocommit on the server side? I fear this will\n> be a disaster even larger than that was, because if it's an extension\n> then pretty much no applications will be prepared for the new semantics.\n> I strongly urge you to read the discussions that led up to f85f43dfb,\n> and to search the commit history before that for mentions of\n> \"autocommit\", to see just how extensive the mess was.\n>\n> I spent a little time trying to locate said discussions; it's harder\n> than it should be because we didn't have the practice of citing email\n> threads in the commit log at the time. 
I did find\n>\n> https://www.postgresql.org/message-id/flat/Pine.LNX.4.44.0303172059170.1975-100000%40peter.localdomain#7ae26ed5c1bfbf9b22a420dfd8b8e69f\n>\n> which seems to have been the proximate decision, and here are\n> a few threads talking about all the messes that were created\n> for JDBC etc:\n>\n> https://www.postgresql.org/message-id/flat/3D793A93.7030000%40xythos.com#4a2e2d9bdf2967906a6e0a75815d6636\n> https://www.postgresql.org/message-id/flat/3383060E-272E-11D7-BA14-000502E740BA%40wellsgaming.com\n> https://www.postgresql.org/message-id/flat/Law14-F37PIje6n0ssr00000bc1%40hotmail.com\n>\n> Basically, changing transactional semantics underneath clients is\n> a horrid idea. Having such behavior in an extension rather than\n> the core doesn't make it less horrid. If we'd designed it to act\n> that way from day one, maybe it'd have been fine. But as things\n> stand, we are quite locked into the position that this has to be\n> managed on the client side.\n\n\nYes, I have suffered from this implementation of server-side autocommit; \nit was reverted in PG 7.4 if I remember correctly. I'm old enough to \nremember that :-). I'm also against restoring this feature inside PG \ncore, but the fact that the subject comes up again almost every 2 years \nmeans that there is a need for this feature. This is why I'm proposing \nto make it possible through an extension for those who really need the \nfeature, with all the associated warnings.\n\n\nFor example, in my case the first time I needed this feature was to \nemulate the behavior of DB2, which allows rollback at statement level. \nThis is not exactly autocommit because the transaction still needs to be \ncommitted or rolled back at the end; it is just that an error will not \ninvalidate the full transaction but only the failing statement. I think \nthat this is different. 
Actually I have an extension that does that \nfor most of the work, but we still have to send the ROLLBACK TO savepoint \nfrom the client side, which is really a performance killer and especially \npainful to implement with JDBC exception blocks.\n\n\nRecently I was working on an Oracle to PostgreSQL migration and wanted to \nimplement another Oracle feature that is heavily used when \nimporting data from different sources into a data warehouse. It's very \ncommon in the Oracle world to batch data import inside a transaction and \nsilently log the errors into a dedicated table to be processed later. \n\"Whatever\" (this concerns only certain errors) happens, you continue to \nimport the data, and DBAs will check what to fix and will re-import the \nrecords in error. Again, I have an extension that does that, but we \nstill have to generate the ROLLBACK TO at client side. This can be \navoided with this proposal and will greatly simplify the code at client \nside.\n\n\nWe all know the problems of such a server-side implementation, but once you \nhave implemented it at client side and you are looking for better \nperformance, it's obvious that this kind of extension could help. The \nother solution is to move to a proprietary PostgreSQL fork, which is \nsurely not what we want.\n\n\n> </digression>\n>\n> That point doesn't necessarily invalidate the value of having\n> some sort of hook in this general area. But I would kind of like\n> to see another use-case, because I don't believe in this one.\n\n\nI have cited two use cases; they are both based on the rollback at \nstatement level feature. I'm pretty sure that there are several other \nuse cases that escape my poor imagination. 
IMHO the possibility to offer \nthe rollback at statement level feature through an extension should be \nenough, but if anyone has another use case I will be pleased to create an \nextension to test it :-)\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Thu, 15 Jul 2021 09:44:13 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
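The rollback-at-statement-level semantics discussed in the message above can be illustrated with a toy model: every statement runs under an implicit savepoint, so a failing statement is undone without aborting the open transaction. This is a minimal Python sketch of the behavior, not the extension's actual C code; all names here are illustrative.

```python
class StatementLevelRollbackTxn:
    """Toy model of rollback-at-statement-level semantics: every statement
    runs under an implicit savepoint, so an error undoes only that
    statement instead of invalidating the whole transaction."""

    def __init__(self):
        self.rows = []  # state accumulated so far inside the open transaction

    def execute(self, stmt):
        savepoint = list(self.rows)   # SAVEPOINT: snapshot the current state
        try:
            stmt(self.rows)           # run the statement
        except Exception:
            self.rows = savepoint     # ROLLBACK TO SAVEPOINT: undo this one only
        # on success: RELEASE SAVEPOINT (a no-op in this toy model)

def insert(value):
    """Statement factory: raises a duplicate-key error if value exists."""
    def stmt(rows):
        if value in rows:
            raise ValueError("duplicate key")
        rows.append(value)
    return stmt

txn = StatementLevelRollbackTxn()
for i in range(10):
    txn.execute(insert(i - (i % 2)))  # every odd i repeats the previous key
print(txn.rows)  # [0, 2, 4, 6, 8] -- the 5 failures did not abort anything
```

This mirrors the 50%-failure benchmark pattern used later in the thread: half the inserts hit a duplicate key, yet the transaction survives and keeps the successful rows.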
{
"msg_contents": "Le 15/07/2021 à 09:44, Gilles Darold a écrit :\n> Le 14/07/2021 à 21:26, Tom Lane a écrit :\n>> Gilles Darold<gilles@darold.net> writes:\n>>> I have renamed the patch and the title of this proposal registered in\n>>> the commitfest \"Xact/SubXact event callback at command start\" to reflect\n>>> the last changes that do not include new hooks anymore.\n>> Hmm, it doesn't seem like this has addressed my concern at all.\n>> The callbacks are still effectively defined as \"run during\n>> start_xact_command\", so they're not any less squishy semantically\n>> than they were before. The point of my criticism was that you\n>> should move the call site to someplace that's more organically\n>> connected to execution of commands.\n>\n> I would like to move it closer to the command execution but the only \n> place I see would be in BeginCommand() which actually is waiting for \n> code to execute, for the moment this function does nothing. I don't \n> see another possible place after start_xact_command() call. All my \n> attempt to inject the callback after start_xact_command() result in a \n> failure. If you see an other place I will be please to give it a test.\n>\n\nLooks like I have misunderstood again; maybe you want me to move \nthe callback to just after start_xact_command() so that it is not \n\"hidden\" in the \"run during start_xact_command\" behavior. Ok, I will do that.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Thu, 15 Jul 2021 10:59:07 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 14/07/2021 à 21:26, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> I have renamed the patch and the title of this proposal registered in\n>> the commitfest \"Xact/SubXact event callback at command start\" to reflect\n>> the last changes that do not include new hooks anymore.\n> Hmm, it doesn't seem like this has addressed my concern at all.\n> The callbacks are still effectively defined as \"run during\n> start_xact_command\", so they're not any less squishy semantically\n> than they were before. The point of my criticism was that you\n> should move the call site to someplace that's more organically\n> connected to execution of commands.\n>\n> Another thing I'm not too pleased with in this formulation is that it's\n> very unclear what the distinction is between XACT_EVENT_COMMAND_START\n> and SUBXACT_EVENT_COMMAND_START. AFAICS, *every* potential use-case\n> for this would have to hook into both callback chains, and most likely\n> would treat the two events alike.\n\nPlease find attached the new version v2 of the patch; I hope this \ntime I have understood your advice correctly. My apologies for this waste of \ntime.\n\n\nI have moved the call to the callback out of start_xact_command() and \nlimited its call to exec_simple_query() and exec_parse_message(). There \nare other calls to start_xact_command() elsewhere, but actually these two \nplaces are enough for what I'm doing with the extensions. I have updated \nthe extension test cases to check the behavior when autocommit is on or \noff, an error in the execution of a prepared statement, and an error in \nUPDATE ... WHERE CURRENT OF cursor. But there is certainly a case that I \nhave missed.\n\n\nOther calls of start_xact_command() are in exec_bind_message(), \nexec_execute_message(), exec_describe_statement_message(), \nexec_describe_portal_message() and PostgresMain(). 
In my tests they are \neither not called or generate duplicate calls to the callback with \nexec_simple_query() and exec_parse_message().\n\n\nAlso, CallXactStartCommand() will only use one event, \nXACT_EVENT_COMMAND_START, and only do a single call:\n\nCallXactCallbacks(XACT_EVENT_COMMAND_START);\n\n\n> Plus, as you note, the goalposts have suddenly been moved for the\n> amount of overhead it's okay to have in an XactCallback or SubXactCallback\n> function. So that might cause problems for unrelated code. It's probably\n> better to not try to re-use that infrastructure.\n\n\nAbout this, maybe I was not clear in my benchmark: the overhead is not \nintroduced by the patch or the callback, there is no overhead there, but by \nthe rollback at statement level extension. In case this was clear but \nyou think that we must not reuse this callback infrastructure, do you \nmean that I should fall back to a hook?\n\n\nBest regards,\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Fri, 16 Jul 2021 11:48:24 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
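The callback chain described in the v2-patch message above can be sketched as a toy model, loosely shaped after PostgreSQL's `RegisterXactCallback()`/`CallXactCallbacks()` C API, with a single `XACT_EVENT_COMMAND_START` event fired once per incoming command. All names and signatures below are illustrative Python, not the server's real code.

```python
# Toy model of an Xact event callback chain (illustrative only).
XACT_EVENT_COMMAND_START = "command_start"

_xact_callbacks = []

def register_xact_callback(callback, arg=None):
    """Add a callback to the chain (mirrors the shape of RegisterXactCallback)."""
    _xact_callbacks.append((callback, arg))

def call_xact_callbacks(event):
    """Invoke every registered callback for the given event."""
    for callback, arg in _xact_callbacks:
        callback(event, arg)

events_seen = []
register_xact_callback(lambda event, arg: events_seen.append(event))

# In the patch, exec_simple_query() and exec_parse_message() would fire this
# once per incoming command, right after start_xact_command():
for _ in range(3):
    call_xact_callbacks(XACT_EVENT_COMMAND_START)

print(events_seen)  # ['command_start', 'command_start', 'command_start']
```

An extension implementing statement-level rollback would register one callback and, on `XACT_EVENT_COMMAND_START`, decide whether to roll back to or release the savepoint it set for the previous statement.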
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> [ 00001-startcommand_xact_callback-v2.diff ]\n\nI've not read this version of the patch, but I see from the cfbot's\nresults that it's broken postgres_fdw. I recall that postgres_fdw\nuses the XactCallback and SubXactCallback mechanisms, so I'm betting\nthis means that you've changed the semantics of those callbacks in\nan incompatible way. That's probably not a great idea. We could\nfix postgres_fdw, but there are more than likely some external\nmodules that would also get broken, and that is supposed to be a\nreasonably stable API.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 13:58:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-30 13:58:51 -0400, Tom Lane wrote:\n> Gilles Darold <gilles@darold.net> writes:\n> > [ 00001-startcommand_xact_callback-v2.diff ]\n>\n> I've not read this version of the patch, but I see from the cfbot's\n> results that it's broken postgres_fdw. I recall that postgres_fdw\n> uses the XactCallback and SubXactCallback mechanisms, so I'm betting\n> this means that you've changed the semantics of those callbacks in\n> an incompatible way. That's probably not a great idea. We could\n> fix postgres_fdw, but there are more than likely some external\n> modules that would also get broken, and that is supposed to be a\n> reasonably stable API.\n\nI think this may partially be an issue with the way that postgres_fdw\nuses the callback than with the patch. It disconnects from the server\n*regardless* of the XactEvent passed to the callback. That makes it\nreally hard to extend the callback mechanism to further events...\n\nNow, I'm also *quite* unconvinced that the placement of the\nnew CallXactStartCommand() in postgres.c is right.\n\n\nOn 2021-07-16 11:48:24 +0200, Gilles Darold wrote:\n> Other calls of start_xact_command() are in exec_bind_message(),\n> exec_execute_message(), exec_describe_statement_message(),\n> exec_describe_portal_message and PostgresMain. In my tests they are either\n> not called or generate duplicate calls to the callback with\n> exec_simple_query() and exec_parse_message().\n\nThat seems like an issue with your test then. Prepared statements can be\nparsed in one transaction and bind+exec'ed in another. And you even can\nexecute transaction control statements this way.\n\nIMO this'd need tests somewhere that allow us to verify the hook\nplacements do something sensible.\n\n\nIt does not seem great to add a bunch of external function calls\ninto all these routines. 
For simple queries postgres.c's exec_*\nfunctions show up in profiles - doing yet another function call that\nthen also needs to look at various memory locations plausibly will show\nup. Particularly when used with pipelined queries.\n\n\nI'm *very* unconvinced it makes sense to implement a feature like this\nin an extension / that we should expose API for that purpose. To me the\ntop-level transaction state is way too tied to our internals for it to\nbe reasonably dealt with in an extension. And I think an in-core version\nwould need to tackle the overhead and internal query execution issues\nthis feature has.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jul 2021 14:14:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-07-30 13:58:51 -0400, Tom Lane wrote:\n>> I've not read this version of the patch, but I see from the cfbot's\n>> results that it's broken postgres_fdw.\n\n> I think this may partially be an issue with the way that postgres_fdw\n> uses the callback than with the patch. It disconnects from the server\n> *regardless* of the XactEvent passed to the callback. That makes it\n> really hard to extend the callback mechanism to further events...\n\nPerhaps. Nonetheless, I thought upthread that adding these events\nas Xact/SubXactCallback events was the wrong design, and I still\nthink that. A new hook would be a more sensible way.\n\n> I'm *very* unconvinced it makes sense to implement a feature like this\n> in an extension / that we should expose API for that purpose. To me the\n> top-level transaction state is way too tied to our internals for it to\n> be reasonably dealt with in an extension.\n\nYeah, that's the other major problem --- the use-case doesn't seem\nvery convincing. I'm not even sold on the goal, let alone on trying\nto implement it by hooking into these particular places. I think\nthat'll end up being buggy and fragile as well as not very performant.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 17:49:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-30 17:49:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-07-30 13:58:51 -0400, Tom Lane wrote:\n> >> I've not read this version of the patch, but I see from the cfbot's\n> >> results that it's broken postgres_fdw.\n> \n> > I think this may partially be an issue with the way that postgres_fdw\n> > uses the callback than with the patch. It disconnects from the server\n> > *regardless* of the XactEvent passed to the callback. That makes it\n> > really hard to extend the callback mechanism to further events...\n> \n> Perhaps. Nonetheless, I thought upthread that adding these events\n> as Xact/SubXactCallback events was the wrong design, and I still\n> think that. A new hook would be a more sensible way.\n\nI know I've wanted additional events in XactEvent before that'd also be\nproblematic for pg_fdw, but not make sense as a separate event. E.g. an\nevent when an xid is assigned.\n\n\n> > I'm *very* unconvinced it makes sense to implement a feature like this\n> > in an extension / that we should expose API for that purpose. To me the\n> > top-level transaction state is way too tied to our internals for it to\n> > be reasonably dealt with in an extension.\n> \n> Yeah, that's the other major problem --- the use-case doesn't seem\n> very convincing. I'm not even sold on the goal, let alone on trying\n> to implement it by hooking into these particular places. I think\n> that'll end up being buggy and fragile as well as not very performant.\n\nI'm more favorable than you on the overall goal. Migrations to PG are a\nfrequent and good thing and as discussed before, lots of PG forks ended\nup implementing a version of this. Clearly there's demand.\n\nHowever, I think a proper implementation would require a substantial\namount of effort. Including things like optimizing the subtransaction\nlogic so that switching the feature on doesn't lead to xid wraparound\nissues. 
Adding odd hooks doesn't move us towards a real solution imo.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jul 2021 16:28:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 31/07/2021 à 01:28, Andres Freund a écrit :\n>\n>>> I'm *very* unconvinced it makes sense to implement a feature like this\n>>> in an extension / that we should expose API for that purpose. To me the\n>>> top-level transaction state is way too tied to our internals for it to\n>>> be reasonably dealt with in an extension.\n>> Yeah, that's the other major problem --- the use-case doesn't seem\n>> very convincing. I'm not even sold on the goal, let alone on trying\n>> to implement it by hooking into these particular places. I think\n>> that'll end up being buggy and fragile as well as not very performant.\n> I'm more favorable than you on the overall goal. Migrations to PG are a\n> frequent and good thing and as discussed before, lots of PG forks ended\n> up implementing a version of this. Clearly there's demand.\n\n\nSorry for the response delay. I have thought about adding this odd hook \nto be able to implement this feature through an extension, because I \ndon't think this is something that should be implemented in core. There \nwere also patch proposals, which were all rejected.\n\nWe usually implement the feature at client side, which is imo enough for \nthe use cases. But the problem is that this is a catastrophe in terms of \nperformance. I have done a small benchmark to illustrate the problem. \nThis is a single-process client on the same host as the PG backend.\n\nFor 10,000 tuples inserted with 50% of failures and rollback at \nstatement level handled at client side:\n\n Expected: 5001, Count: 5001\n DML insert took: 13 wallclock secs ( 0.53 usr + 0.94 sys = \n1.47 CPU)\n\nNow with rollback at statement level handled at server side using the \nhook and the extension:\n\n Expected: 5001, Count: 5001\n DML insert took: 4 wallclock secs ( 0.27 usr + 0.32 sys = \n0.59 CPU)\n\n\nAnd with 100,000 tuples this is worse. 
Without the extension:\n\n Expected: 50001, Count: 50001\n DML insert took: 1796 wallclock secs (14.95 usr + 20.29 sys = \n35.24 CPU)\n\nwith server side Rollback at statement level:\n\n Expected: 50001, Count: 50001\n DML insert took: 372 wallclock secs ( 4.85 usr + 5.45 sys = \n10.30 CPU)\n\n\nI think these are not so uncommon use cases and that this could show the \ninterest of such an extension.\n\n\n> However, I think a proper implementation would require a substantial\n> amount of effort. Including things like optimizing the subtransaction\n> logic so that switching the feature on doesn't lead to xid wraparound\n> issues. Adding odd hooks doesn't move us towards a real solution imo.\n\nI would like to help on this part but unfortunately I have no idea on \nhow we can improve that.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Tue, 10 Aug 2021 10:12:26 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Le 30/07/2021 à 23:49, Tom Lane a écrit :\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2021-07-30 13:58:51 -0400, Tom Lane wrote:\n>>> I've not read this version of the patch, but I see from the cfbot's\n>>> results that it's broken postgres_fdw.\n>> I think this may partially be an issue with the way that postgres_fdw\n>> uses the callback than with the patch. It disconnects from the server\n>> *regardless* of the XactEvent passed to the callback. That makes it\n>> really hard to extend the callback mechanism to further events...\n> Perhaps. Nonetheless, I thought upthread that adding these events\n> as Xact/SubXactCallback events was the wrong design, and I still\n> think that. A new hook would be a more sensible way.\n>\n>> I'm *very* unconvinced it makes sense to implement a feature like this\n>> in an extension / that we should expose API for that purpose. To me the\n>> top-level transaction state is way too tied to our internals for it to\n>> be reasonably dealt with in an extension.\n> Yeah, that's the other major problem --- the use-case doesn't seem\n> very convincing. I'm not even sold on the goal, let alone on trying\n> to implement it by hooking into these particular places. I think\n> that'll end up being buggy and fragile as well as not very performant.\n\n\nI've attached the new version v5 of the patch, which uses a hook instead of \nan xact callback. Compared to the first implementation, the calls \nto the hook have been extracted from the start_xact_command() function. \nThe test extension has also been updated.\n\n\nIf I understand the last discussions well, there is no chance of having \nthis hook included. If there is no contrary opinion I will withdraw the \npatch from the commitfest. However, thank you so much for having taken the \ntime to review this proposal.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Tue, 10 Aug 2021 10:41:20 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-10 10:12:26 +0200, Gilles Darold wrote:\n> Sorry for the response delay. I have thought about adding this odd hook to be\n> able to implement this feature through an extension because I don't think\n> this is something that should be implemented in core. There were also\n> patch proposals which were all rejected.\n>\n> We usually implement the feature at client side which is imo enough for the\n> use cases. But the problem is that this is a catastrophe in terms of\n> performance. I have done a small benchmark to illustrate the problem. This\n> is a single-process client on the same host as the PG backend.\n>\n> For 10,000 tuples inserted with 50% of failures and rollback at statement\n> level handled at client side:\n>\n>     Expected: 5001, Count: 5001\n>     DML insert took: 13 wallclock secs ( 0.53 usr + 0.94 sys = 1.47\n> CPU)\n\nSomething seems off here. This suggests every insert took 2.6ms. That\nseems awfully long, unless your network latency is substantial. I did a\nquick test implementing this in the naive-most way in pgbench, and I get\nbetter times - and there's *lots* of room for improvement.\n\nI used a pgbench script that sent the following:\nBEGIN;\nSAVEPOINT insert_fail;\nINSERT INTO testinsert(data) VALUES (1);\nROLLBACK TO SAVEPOINT insert_fail;\nSAVEPOINT insert_success;\nINSERT INTO testinsert(data) VALUES (1);\nRELEASE SAVEPOINT insert_success;\n{repeat 5 times}\nCOMMIT;\n\nI.e. 5 failing and 5 succeeding insertions wrapped in one transaction. I\nget >2500 tps, i.e. > 25k rows/sec. And it's not hard to optimize that\nfurther - the {ROLLBACK TO,RELEASE} SAVEPOINT; SAVEPOINT; INSERT can be\nsent in one roundtrip. 
That gets me to somewhere around 40k rows/sec.\n\n\nBEGIN;\n\n\\startpipeline\nSAVEPOINT insert_fail;\nINSERT INTO testinsert(data) VALUES (1);\n\\endpipeline\n\n\\startpipeline\nROLLBACK TO SAVEPOINT insert_fail;\nSAVEPOINT insert_success;\nINSERT INTO testinsert(data) VALUES (1);\n\\endpipeline\n\n\\startpipeline\nRELEASE SAVEPOINT insert_success;\nSAVEPOINT insert_fail;\nINSERT INTO testinsert(data) VALUES (1);\n\\endpipeline\n\n\\startpipeline\nROLLBACK TO SAVEPOINT insert_fail;\nSAVEPOINT insert_success;\nINSERT INTO testinsert(data) VALUES (1);\n\\endpipeline\n\n{repeat last two blocks three times}\nCOMMIT;\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Aug 2021 02:58:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
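The round-trip savings Andres describes above can be put in back-of-envelope numbers: in the naive client-side approach each SAVEPOINT, INSERT, and RELEASE/ROLLBACK TO is its own client-server round trip, while the pipelined script folds the previous savepoint's close in with the next savepoint and insert. This sketch only counts round trips under those assumptions; the counts are illustrative, not measurements.

```python
def round_trips_unbatched(n_rows):
    # BEGIN + per row: SAVEPOINT, INSERT, then RELEASE (or ROLLBACK TO),
    # each sent and waited on separately + COMMIT
    return 1 + 3 * n_rows + 1

def round_trips_pipelined(n_rows):
    # BEGIN + one pipelined message per row (the previous RELEASE/ROLLBACK TO,
    # the next SAVEPOINT, and the INSERT travel together) + closing the last
    # savepoint + COMMIT
    return 1 + n_rows + 2

for n in (10, 10_000):
    print(n, round_trips_unbatched(n), round_trips_pipelined(n))
```

Under this model the pipelined variant needs roughly a third of the round trips, which is consistent with the rows/sec improvement reported in the pgbench experiment above.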
{
"msg_contents": "Le 13/08/2021 à 11:58, Andres Freund a écrit :\n> Hi,\n>\n> On 2021-08-10 10:12:26 +0200, Gilles Darold wrote:\n>> Sorry for the response delay. I have though about adding this odd hook to be\n>> able to implement this feature through an extension because I don't think\n>> this is something that should be implemented in core. There were also\n>> patches proposals which were all rejected.\n>>\n>> We usually implement the feature at client side which is imo enough for the\n>> use cases. But the problem is that this a catastrophe in term of\n>> performances. I have done a small benchmark to illustrate the problem. This\n>> is a single process client on the same host than the PG backend.\n>>\n>> For 10,000 tuples inserted with 50% of failures and rollback at statement\n>> level handled at client side:\n>>\n>> Expected: 5001, Count: 5001\n>> DML insert took: 13 wallclock secs ( 0.53 usr + 0.94 sys = 1.47\n>> CPU)\n> Something seems off here. This suggests every insert took 2.6ms. That\n> seems awfully long, unless your network latency is substantial. I did a\n> quick test implementing this in the naive-most way in pgbench, and I get\n> better times - and there's *lots* of room for improvement.\n>\n> I used a pgbench script that sent the following:\n> BEGIN;\n> SAVEPOINT insert_fail;\n> INSERT INTO testinsert(data) VALUES (1);\n> ROLLBACK TO SAVEPOINT insert_fail;\n> SAVEPOINT insert_success;\n> INSERT INTO testinsert(data) VALUES (1);\n> RELEASE SAVEPOINT insert_success;\n> {repeat 5 times}\n> COMMIT;\n>\n> I.e. 5 failing and 5 succeeding insertions wrapped in one transaction. I\n> get >2500 tps, i.e. > 25k rows/sec. And it's not hard to optimize that\n> further - the {ROLLBACK TO,RELEASE} SAVEPOINT; SAVEPOINT; INSERT can be\n> sent in one roundtrip. 
That gets me to somewhere around 40k rows/sec.\n>\n>\n> BEGIN;\n>\n> \\startpipeline\n> SAVEPOINT insert_fail;\n> INSERT INTO testinsert(data) VALUES (1);\n> \\endpipeline\n>\n> \\startpipeline\n> ROLLBACK TO SAVEPOINT insert_fail;\n> SAVEPOINT insert_success;\n> INSERT INTO testinsert(data) VALUES (1);\n> \\endpipeline\n>\n> \\startpipeline\n> RELEASE SAVEPOINT insert_success;\n> SAVEPOINT insert_fail;\n> INSERT INTO testinsert(data) VALUES (1);\n> \\endpipeline\n>\n> \\startpipeline\n> ROLLBACK TO SAVEPOINT insert_fail;\n> SAVEPOINT insert_success;\n> INSERT INTO testinsert(data) VALUES (1);\n> \\endpipeline\n>\n> {repeat last two blocks three times}\n> COMMIT;\n>\n> Greetings,\n>\n> Andres Freund\n\n\nI have written a Perl script to mimic what I have found in an Oracle\nbatch script to import data in a table. I had this use case in a recent\nmigration the only difference is that the batch was written in Java.\n\n\n$dbh->do(\"BEGIN\") or die \"FATAL: \" . $dbh->errstr . \"\\n\";\nmy $start = new Benchmark;\nmy $sth = $dbh->prepare(\"INSERT INTO t1 VALUES (?, ?)\");\nexit 1 if (not defined $sth);\nfor (my $i = 0; $i <= 10000; $i++)\n{\n $dbh->do(\"SAVEPOINT foo\") or die \"FATAL: \" . $dbh->errstr . \"\\n\";\n # Generate a duplicate key each two row inserted\n my $val = $i;\n $val = $i-1 if ($i % 2 != 0);\n unless ($sth->execute($val, 'insert '.$i)) {\n $dbh->do(\"ROLLBACK TO foo\") or die \"FATAL: \" .\n$dbh->errstr . \"\\n\";\n } else {\n $dbh->do(\"RELEASE foo\") or die \"FATAL: \" . $dbh->errstr\n. \"\\n\";\n }\n}\n$sth->finish();\nmy $end = new Benchmark;\n\n$dbh->do(\"COMMIT;\");\n\nmy $td = timediff($end, $start);\nprint \"DML insert took: \" . timestr($td) . \"\\n\";\n\n\nThe timing reported are from my personal computer, there is no network\nlatency, it uses localhost. Anyway, the objective was not to bench the\nDML throughput but the overhead of the rollback at statement level made\nat client side versus server side. 
I guess that you might see the same\nspeed gain, around x3 to x5 or more depending on the number of tuples?\n\n\nThe full script can be found here:\nhttps://github.com/darold/pg_statement_rollbackv2/blob/main/test/batch_script_example.pl\n\n\nCheers,\n\n-- \nGilles Darold\n\n\n\n\n",
"msg_date": "Fri, 13 Aug 2021 14:43:01 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
},
{
"msg_contents": "I have changed the status of this proposal to rejected.\n\n\nTo summarize the final state of this proposal: there is no consensus on the \ninterest of adding a hook on start-of-xact commands. Also, the only useful case \nfor this hook was to be able to have server-side automatic rollback at \nstatement level. That is regrettable, because I don't think that \nPostgreSQL will have such a feature for a long time (that's probably \nbetter), and a way to implement it externally through an extension would \nbe helpful for migrations from other RDBMS like DB2 or Oracle. The only \nway to have this feature is to handle the rollback at client side using \nsavepoints, which is at least 3 times slower than a server-side \nimplementation, or to not use such an implementation at all. Besides not being \nperformant, it doesn't scale due to txid wraparound. And the last way is \nto use a proprietary fork of PostgreSQL; some of them offer this feature.\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n\n",
"msg_date": "Sat, 4 Sep 2021 13:00:58 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Hooks at XactCommand level"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, for any component (such as COPY, CTAS[1], CREATE/REFRESH\nMat View[1], INSERT INTO SELECTs[2]) multi insert logic such as buffer\nslots allocation, maintenance, decision to flush and clean up, need to\nbe implemented outside the table_multi_insert() API. The main problem\nis that it fails to take into consideration the underlying storage\nengine capabilities, for more details of this point refer to a\ndiscussion in multi inserts in CTAS thread[1]. This also creates a lot\nof duplicate code which is more error prone and not maintainable.\n\nMore importantly, in another thread [3] @Andres Freund suggested to\nhave table insert APIs in such a way that they look more like 'scan'\nAPIs i.e. insert_begin, insert, insert_end. The main advantages doing\nthis are(quoting from his statement in [3]) - \"more importantly it'd\nallow an AM to optimize operations across multiple inserts, which is\nimportant for column stores.\"\n\nI propose to introduce new table access methods for both multi and\nsingle inserts based on the prototype suggested by Andres in [3]. Main\ndesign goal of these new APIs is to give flexibility to tableam\ndevelopers in implementing multi insert logic dependent on the\nunderlying storage engine.\n\nBelow are the APIs. I suggest to have a look at\nv1-0001-New-Table-AMs-for-Multi-and-Single-Inserts.patch for details\nof the new data structure and the API functionality. Note that\ntemporarily I used XX_v2, we can change it later.\n\nTableInsertState* table_insert_begin(initial_args);\nvoid table_insert_v2(TableInsertState *state, TupleTableSlot *slot);\nvoid table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot);\nvoid table_multi_insert_flush(TableInsertState *state);\nvoid table_insert_end(TableInsertState *state);\n\nI'm attaching a few patches(just to show that these APIs work, avoids\na lot of duplicate code and makes life easier). Better commenting can\nbe added later. 
If these APIs and patches look okay, we can even\nconsider replacing them in other places such as nodeModifyTable.c and\nso on.\n\nv1-0001-New-Table-AMs-for-Multi-and-Single-Inserts.patch --->\nintroduces new table access methods for multi and single inserts. Also\nimplements/rearranges the outside code for heap am into these new\nAPIs.\nv1-0002-CTAS-and-REFRESH-Mat-View-With-New-Multi-Insert-Table-AM.patch\n---> adds new multi insert table access methods to CREATE TABLE AS,\nCREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW.\nv1-0003-ATRewriteTable-With-New-Single-Insert-Table-AM.patch ---> adds\nnew single insert table access method to ALTER TABLE rewrite table\ncode.\nv1-0004-COPY-With-New-Multi-and-Single-Insert-Table-AM.patch ---> adds\nnew single and multi insert table access method to COPY code.\n\nThoughts?\n\nMany thanks to Robert, Vignesh and Dilip for offlist discussion.\n\n[1] - https://www.postgresql.org/message-id/4eee0730-f6ec-e72d-3477-561643f4b327%40swarm64.com\n[2] - https://www.postgresql.org/message-id/20201124020020.GK24052%40telsasoft.com\n[3] - https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Dec 2020 18:27:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 6:27 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n> Hi,\n>\n> Currently, for any component (such as COPY, CTAS[1], CREATE/REFRESH\n> Mat View[1], INSERT INTO SELECTs[2]) multi insert logic such as buffer\n> slots allocation, maintenance, decision to flush and clean up, need to\n> be implemented outside the table_multi_insert() API. The main problem\n> is that it fails to take into consideration the underlying storage\n> engine capabilities, for more details of this point refer to a\n> discussion in multi inserts in CTAS thread[1]. This also creates a lot\n> of duplicate code which is more error prone and not maintainable.\n>\n> More importantly, in another thread [3] @Andres Freund suggested to\n> have table insert APIs in such a way that they look more like 'scan'\n> APIs i.e. insert_begin, insert, insert_end. The main advantages doing\n> this are(quoting from his statement in [3]) - \"more importantly it'd\n> allow an AM to optimize operations across multiple inserts, which is\n> important for column stores.\"\n>\n> I propose to introduce new table access methods for both multi and\n> single inserts based on the prototype suggested by Andres in [3]. Main\n> design goal of these new APIs is to give flexibility to tableam\n> developers in implementing multi insert logic dependent on the\n> underlying storage engine.\n>\n> Below are the APIs. I suggest to have a look at\n> v1-0001-New-Table-AMs-for-Multi-and-Single-Inserts.patch for details\n> of the new data structure and the API functionality. 
Note that\n> temporarily I used XX_v2, we can change it later.\n>\n> TableInsertState* table_insert_begin(initial_args);\n> void table_insert_v2(TableInsertState *state, TupleTableSlot *slot);\n> void table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot);\n> void table_multi_insert_flush(TableInsertState *state);\n> void table_insert_end(TableInsertState *state);\n>\n> I'm attaching a few patches(just to show that these APIs work, avoids\n> a lot of duplicate code and makes life easier). Better commenting can\n> be added later. If these APIs and patches look okay, we can even\n> consider replacing them in other places such as nodeModifyTable.c and\n> so on.\n>\n> v1-0001-New-Table-AMs-for-Multi-and-Single-Inserts.patch --->\n> introduces new table access methods for multi and single inserts. Also\n> implements/rearranges the outside code for heap am into these new\n> APIs.\n> v1-0002-CTAS-and-REFRESH-Mat-View-With-New-Multi-Insert-Table-AM.patch\n> ---> adds new multi insert table access methods to CREATE TABLE AS,\n> CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW.\n> v1-0003-ATRewriteTable-With-New-Single-Insert-Table-AM.patch ---> adds\n> new single insert table access method to ALTER TABLE rewrite table\n> code.\n> v1-0004-COPY-With-New-Multi-and-Single-Insert-Table-AM.patch ---> adds\n> new single and multi insert table access method to COPY code.\n>\n> Thoughts?\n>\n> [1] -\nhttps://www.postgresql.org/message-id/4eee0730-f6ec-e72d-3477-561643f4b327%40swarm64.com\n> [2] -\nhttps://www.postgresql.org/message-id/20201124020020.GK24052%40telsasoft.com\n> [3] -\nhttps://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n\nAdded this to commitfest to get it reviewed further.\n\nhttps://commitfest.postgresql.org/31/2871/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 11 Dec 2020 19:17:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "Typos:\n\n+ * 1) Specify is_multi as true, then multi insert state is allcoated.\n=> allocated\n+ * dropped, short-lived memory context is delted and mistate is freed up.\n=> deleted\n+ * 2) Currently, GetTupleSize() handles the existing heap, buffer, minmal and\n=> minimal\n+ /* Mulit insert state if requested, otherwise NULL. */\n=> multi\n+ * Buffer the input slots and insert the tuples from the buffered slots at a\n=> *one* at a time ?\n+ * Compute the size of the tuple only if mi_max_size i.e. the total tuple size\n=> I guess you mean max_size\n\nThis variable could use a better name:\n+CopyMulitInsertFlushBuffers(List **mirri, ..\nmirri is fine for a local variable like an element of a structure/array, or a\nloop variable, but not for a function parameter which is a \"List\" of arbitrary\npointers.\n\nI think this comment needs to be updated (again) for the removal of the Info\nstructure.\n- * CopyMultiInsertBuffer items stored in CopyMultiInsertInfo's\n+ * multi insert buffer items stored in CopyMultiInsertInfo's\n\nI think the COPY patch should be 0002 (or maybe merged into 0001).\nThere's some superfluous whitespace (and other) changes there which make the\npatch unnecessarily long.\n\nYou made the v2 insert interface a requirement for all table AMs.\nShould it be optional, and fall back to simple inserts if not implemented ?\n\nFor CTAS, I think we need to consider Paul's idea here.\nhttps://www.postgresql.org/message-id/26C14A63-CCE5-4B46-975A-57C1784B3690%40vmware.com\nConceivably, tableam should support something like that for arbitrary AMs\n(\"insert into a new table for which we have exclusive lock\"). I think that AM\nmethod should also be optional. It should be possible to implement a minimal\nAM without implementing every available optimization, which may not apply to\nall AMs, anyway.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 16 Dec 2020 23:05:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "Thanks a lot for taking a look at the patches.\n\nOn Thu, Dec 17, 2020 at 10:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Typos:\n>\n> + * 1) Specify is_multi as true, then multi insert state is allcoated.\n> => allocated\n> + * dropped, short-lived memory context is delted and mistate is freed up.\n> => deleted\n> + * 2) Currently, GetTupleSize() handles the existing heap, buffer, minmal and\n> => minimal\n> + /* Mulit insert state if requested, otherwise NULL. */\n> => multi\n> + * Buffer the input slots and insert the tuples from the buffered slots at a\n> => *one* at a time ?\n> + * Compute the size of the tuple only if mi_max_size i.e. the total tuple size\n> => I guess you mean max_size\n>\n> This variable could use a better name:\n> +CopyMulitInsertFlushBuffers(List **mirri, ..\n> mirri is fine for a local variable like an element of a struture/array, or a\n> loop variable, but not for a function parameter which is an \"List\" of arbitrary\n> pointers.\n>\n> I think this comment needs to be updated (again) for the removal of the Info\n> structure.\n> - * CopyMultiInsertBuffer items stored in CopyMultiInsertInfo's\n> + * multi insert buffer items stored in CopyMultiInsertInfo's\n>\n> There's some superfluous whitespace (and other) changes there which make the\n> patch unnecessarily long.\n\nI will correct them and post the next version of the patch set. Before\nthat, I would like to have the discussion and thoughts on the APIs and\ntheir usefulness.\n\n> I think the COPY patch should be 0002 (or maybe merged into 0001).\n\nI can make it as a 0002 patch.\n\n> You made the v2 insert interface a requirement for all table AMs.\n> Should it be optional, and fall back to simple inserts if not implemented ?\n\nI tried to implement the APIs mentioned by Andreas here in [1]. I just\nused v2 table am APIs in existing table_insert places to show that it\nworks. 
Having said that, if you notice, I moved the bulk insert\nallocation and deallocation to the new APIs table_insert_begin() and\ntable_insert_end() respectively, which make them tableam specific.\nCurrently, the bulk insert state is outside and independent of\ntableam. I think we should not make bulk insert state allocation and\ndeallocation tableam specific. Thoughts?\n\n[1] - https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n\n> For CTAS, I think we need to consider Paul's idea here.\n> https://www.postgresql.org/message-id/26C14A63-CCE5-4B46-975A-57C1784B3690%40vmware.com\n\nIMO, if we were to allow those raw insert APIs to perform parallel\ninserts, then we would be reimplementing the existing table_insert or\ntable_mulit_insert API by having some sort of shared memory for\ncoordinating among workers and so on, may be in some other way. Yes,\nwe could avoid all the existing locking and shared buffers with those\nraw insert APIs, I also feel that we can now do that with the existing\ninsert APIs for unlogged tables and bulk insert state. To me, the raw\ninsert APIs after implementing them for the parallel inserts, they\nwould look like the existing insert APIs for unlogged tables and with\nbulk insert state. Thoughts?\n\nPlease have a look at [1] for detailed comment.\n\n[1] https://www.postgresql.org/message-id/CALj2ACX0u%3DQvB7GHLEqeVYwvs2eQS7%3D-cEuem7ZaF%3Dp%2BqZ0ikA%40mail.gmail.com\n\n> Conceivably, tableam should support something like that for arbitrary AMs\n> (\"insert into a new table for which we have exclusive lock\"). I think that AM\n> method should also be optional. It should be possible to implement a minimal\n> AM without implementing every available optimization, which may not apply to\n> all AMs, anyway.\n\nI could not understand this point well. Maybe more thoughts help me here.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Dec 2020 16:35:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 04:35:33PM +0530, Bharath Rupireddy wrote:\n> > You made the v2 insert interface a requirement for all table AMs.\n> > Should it be optional, and fall back to simple inserts if not implemented ?\n> \n> I tried to implement the APIs mentioned by Andreas here in [1]. I just\n> used v2 table am APIs in existing table_insert places to show that it\n> works. Having said that, if you notice, I moved the bulk insert\n> allocation and deallocation to the new APIs table_insert_begin() and\n> table_insert_end() respectively, which make them tableam specific.\n\nI mean I think it should be optional for a tableam to support the optimized\ninsert routines. Here, you've made it a requirement.\n\n+ Assert(routine->tuple_insert_begin != NULL);\n+ Assert(routine->tuple_insert_v2 != NULL);\n+ Assert(routine->multi_insert_v2 != NULL);\n+ Assert(routine->multi_insert_flush != NULL);\n+ Assert(routine->tuple_insert_end != NULL);\n\n+static inline void\n+table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n+{\n+ state->rel->rd_tableam->multi_insert_v2(state, slot);\n+}\n\nIf multi_insert_v2 == NULL, I think table_multi_insert_v2() would just call\ntable_insert_v2(), and begin/flush/end would do nothing. If\ntable_multi_insert_v2!=NULL, then you should assert that the other routines are\nprovided.\n\nAre you thinking that TableInsertState would eventually have additional\nattributes which would apply to other tableams, but not to heap ? Is\nheap_insert_begin() really specific to heap ? It's allocating and populating a\nstructure based on its arguments, but those same arguments would be passed to\nevery other AM's insert_begin routine, too. Do you need a more flexible data\nstructure, something that would also accomodate extensions? I'm thinking of\nreloptions as a loose analogy.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 17 Dec 2020 14:44:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 2:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Thu, Dec 17, 2020 at 04:35:33PM +0530, Bharath Rupireddy wrote:\n> > > You made the v2 insert interface a requirement for all table AMs.\n> > > Should it be optional, and fall back to simple inserts if not implemented ?\n> >\n> > I tried to implement the APIs mentioned by Andreas here in [1]. I just\n> > used v2 table am APIs in existing table_insert places to show that it\n> > works. Having said that, if you notice, I moved the bulk insert\n> > allocation and deallocation to the new APIs table_insert_begin() and\n> > table_insert_end() respectively, which make them tableam specific.\n>\n> I mean I think it should be optional for a tableam to support the optimized\n> insert routines. Here, you've made it a requirement.\n>\n> + Assert(routine->tuple_insert_begin != NULL);\n> + Assert(routine->tuple_insert_v2 != NULL);\n> + Assert(routine->multi_insert_v2 != NULL);\n> + Assert(routine->multi_insert_flush != NULL);\n> + Assert(routine->tuple_insert_end != NULL);\n\n+1 to make them optional. I will change.\n\n> +static inline void\n> +table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n> +{\n> + state->rel->rd_tableam->multi_insert_v2(state, slot);\n> +}\n>\n> If multi_insert_v2 == NULL, I think table_multi_insert_v2() would just call\n> table_insert_v2(), and begin/flush/end would do nothing. If\n> table_multi_insert_v2!=NULL, then you should assert that the other routines are\n> provided.\n\nWhat should happen if both multi_insert_v2 and insert_v2 are NULL?\nShould we error out from table_insert_v2()?\n\n> Are you thinking that TableInsertState would eventually have additional\n> attributes which would apply to other tableams, but not to heap ? Is\n> heap_insert_begin() really specific to heap ? 
It's allocating and populating a\n> structure based on its arguments, but those same arguments would be passed to\n> every other AM's insert_begin routine, too. Do you need a more flexible data\n> structure, something that would also accomodate extensions? I'm thinking of\n> reloptions as a loose analogy.\n\nI could not think of other tableam attributes now. But +1 to have that\nkind of flexible structure for TableInsertState. So, it can have\ntableam type and attributes within the union for each type.\n\n> I moved the bulk insert allocation and deallocation to the new APIs table_insert_begin()\n> and table_insert_end() respectively, which make them tableam specific.\n> Currently, the bulk insert state is outside and independent of\n> tableam. I think we should not make bulk insert state allocation and\n> deallocation tableam specific.\n\nAny thoughts on the above point?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Dec 2020 07:39:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 07:39:14AM +0530, Bharath Rupireddy wrote:\n> On Fri, Dec 18, 2020 at 2:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Are you thinking that TableInsertState would eventually have additional\n> > attributes which would apply to other tableams, but not to heap ? Is\n> > heap_insert_begin() really specific to heap ? It's allocating and populating a\n> > structure based on its arguments, but those same arguments would be passed to\n> > every other AM's insert_begin routine, too. Do you need a more flexible data\n> > structure, something that would also accomodate extensions? I'm thinking of\n> > reloptions as a loose analogy.\n> \n> I could not think of other tableam attributes now. But +1 to have that\n> kind of flexible structure for TableInsertState. So, it can have\n> tableam type and attributes within the union for each type.\n\nRight now you have heap_insert_begin(), and I asked if it was really\nheap-specific. Right now, it populates a struct based on a static list of\narguments, which are what heap uses. \n\nIf you were to implement a burp_insert_begin(), how would it differ from\nheap's? With the current API, they'd (have to) be the same, which means either\nthat it should apply to all AMs (or have a \"default\" implementation that can be\noverridden by an AM), or that this API assumes that other AMs will want to do\nexactly what heap does, and fails to allow other AMs to implement optimizations\nfor bulk inserts as claimed.\n\nI don't think using a \"union\" solves the problem, since it can only accommodate\ncore AMs, and not extensions, so I suggested something like reloptions, which\nhave a \"namespace\" prefix (and core has toast.*, like ALTER TABLE t SET\ntoast.autovacuum_enabled).\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 18 Dec 2020 11:54:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Dec 18, 2020 at 07:39:14AM +0530, Bharath Rupireddy wrote:\n> > On Fri, Dec 18, 2020 at 2:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Are you thinking that TableInsertState would eventually have additional\n> > > attributes which would apply to other tableams, but not to heap ? Is\n> > > heap_insert_begin() really specific to heap ? It's allocating and populating a\n> > > structure based on its arguments, but those same arguments would be passed to\n> > > every other AM's insert_begin routine, too. Do you need a more flexible data\n> > > structure, something that would also accomodate extensions? I'm thinking of\n> > > reloptions as a loose analogy.\n> >\n> > I could not think of other tableam attributes now. But +1 to have that\n> > kind of flexible structure for TableInsertState. So, it can have\n> > tableam type and attributes within the union for each type.\n>\n> Right now you have heap_insert_begin(), and I asked if it was really\n> heap-specific. Right now, it populates a struct based on a static list of\n> arguments, which are what heap uses.\n>\n> If you were to implement a burp_insert_begin(), how would it differ from\n> heap's? 
With the current API, they'd (have to) be the same, which means either\n> that it should apply to all AMs (or have a \"default\" implementation that can be\n> overridden by an AM), or that this API assumes that other AMs will want to do\n> exactly what heap does, and fails to allow other AMs to implement optimizations\n> for bulk inserts as claimed.\n>\n> I don't think using a \"union\" solves the problem, since it can only accommodate\n> core AMs, and not extensions, so I suggested something like reloptions, which\n> have a \"namespace\" prefix (and core has toast.*, like ALTER TABLE t SET\n> toast.autovacuum_enabled).\n\nIIUC, your suggestion is to make the heap options such as\nalloc_bistate(bulk insert state is required or not), mi_max_slots\n(number of maximum buffered slots/tuples) and mi_max_size (the maximum\ntuple size of the buffered slots) as reloptions with some default\nvalues in reloptions.c under RELOPT_KIND_HEAP category so that they\ncan be modified by users on a per table basis. And likewise other\ntableam options can be added by the tableam developers. This way, the\nAPIs will become more generic. The tableam developers need to add\nreloptions of their choice and use them in the new API\nimplementations.\n\nLet me know if I am missing anything from what you have in your mind.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Dec 2020 13:12:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:54:39AM -0600, Justin Pryzby wrote:\n> On Fri, Dec 18, 2020 at 07:39:14AM +0530, Bharath Rupireddy wrote:\n> > On Fri, Dec 18, 2020 at 2:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Are you thinking that TableInsertState would eventually have additional\n> > > attributes which would apply to other tableams, but not to heap ? Is\n> > > heap_insert_begin() really specific to heap ? It's allocating and populating a\n> > > structure based on its arguments, but those same arguments would be passed to\n> > > every other AM's insert_begin routine, too. Do you need a more flexible data\n> > > structure, something that would also accomodate extensions? I'm thinking of\n> > > reloptions as a loose analogy.\n> > \n> > I could not think of other tableam attributes now. But +1 to have that\n> > kind of flexible structure for TableInsertState. So, it can have\n> > tableam type and attributes within the union for each type.\n> \n> Right now you have heap_insert_begin(), and I asked if it was really\n> heap-specific. Right now, it populates a struct based on a static list of\n> arguments, which are what heap uses. \n> \n> If you were to implement a burp_insert_begin(), how would it differ from\n> heap's? 
With the current API, they'd (have to) be the same, which means either\n> that it should apply to all AMs (or have a \"default\" implementation that can be\n> overridden by an AM), or that this API assumes that other AMs will want to do\n> exactly what heap does, and fails to allow other AMs to implement optimizations\n> for bulk inserts as claimed.\n> \n> I don't think using a \"union\" solves the problem, since it can only accommodate\n> core AMs, and not extensions, so I suggested something like reloptions, which\n> have a \"namespace\" prefix (and core has toast.*, like ALTER TABLE t SET\n> toast.autovacuum_enabled).\n\nI think you'd want to handle things like:\n\n - a compressed AM wants to specify a threshold for a tuple's *compressed* size\n (maybe in addition to the uncompressed size);\n - a \"columnar\" AM wants to specify a threshold size for a column, rather\n than for each tuple;\n\nI'm not proposing to handle those specific parameters, but rather pointing out\nthat your implementation doesn't allow handling AM-specific considerations,\nwhich I think was the goal.\n\nThe TableInsertState structure would need to store those, and then the AM's\nmulti_insert_v2 routine would need to make use of them.\n\nIt feels a bit like we'd introduce the idea of an \"AM option\", except that it\nwouldn't be user-facing (or maybe some of them would be?). Maybe I've\nmisunderstood though, so other opinions are welcome.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 21 Dec 2020 01:47:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 1:17 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Dec 18, 2020 at 11:54:39AM -0600, Justin Pryzby wrote:\n> > On Fri, Dec 18, 2020 at 07:39:14AM +0530, Bharath Rupireddy wrote:\n> > > On Fri, Dec 18, 2020 at 2:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Are you thinking that TableInsertState would eventually have additional\n> > > > attributes which would apply to other tableams, but not to heap ? Is\n> > > > heap_insert_begin() really specific to heap ? It's allocating and populating a\n> > > > structure based on its arguments, but those same arguments would be passed to\n> > > > every other AM's insert_begin routine, too. Do you need a more flexible data\n> > > > structure, something that would also accomodate extensions? I'm thinking of\n> > > > reloptions as a loose analogy.\n> > >\n> > > I could not think of other tableam attributes now. But +1 to have that\n> > > kind of flexible structure for TableInsertState. So, it can have\n> > > tableam type and attributes within the union for each type.\n> >\n> > Right now you have heap_insert_begin(), and I asked if it was really\n> > heap-specific. Right now, it populates a struct based on a static list of\n> > arguments, which are what heap uses.\n> >\n> > If you were to implement a burp_insert_begin(), how would it differ from\n> > heap's? 
With the current API, they'd (have to) be the same, which means either\n> > that it should apply to all AMs (or have a \"default\" implementation that can be\n> > overridden by an AM), or that this API assumes that other AMs will want to do\n> > exactly what heap does, and fails to allow other AMs to implement optimizations\n> > for bulk inserts as claimed.\n> >\n> > I don't think using a \"union\" solves the problem, since it can only accommodate\n> > core AMs, and not extensions, so I suggested something like reloptions, which\n> > have a \"namespace\" prefix (and core has toast.*, like ALTER TABLE t SET\n> > toast.autovacuum_enabled).\n>\n> I think you'd want to handle things like:\n>\n> - a compressed AM wants to specify a threshold for a tuple's *compressed* size\n> (maybe in addition to the uncompressed size);\n> - a \"columnar\" AM wants to specify a threshold size for a column, rather\n> than for each tuple;\n>\n> I'm not proposing to handle those specific parameters, but rather pointing out\n> that your implementation doesn't allow handling AM-specific considerations,\n> which I think was the goal.\n>\n> The TableInsertState structure would need to store those, and then the AM's\n> multi_insert_v2 routine would need to make use of them.\n>\n> It feels a bit like we'd introduce the idea of an \"AM option\", except that it\n> wouldn't be user-facing (or maybe some of them would be?). Maybe I've\n> misunderstood though, so other opinions are welcome.\n\nAttaching a v2 patch for the new table AMs.\n\nThis patch has following changes:\n\n1) Made the TableInsertState structure generic by having a void\npointer for multi insert state and defined the heap specific multi\ninsert state information in heapam.h. 
This way each AM can have its\nown multi insert state structure and dereference the void pointer\nusing that structure inside the respective AM implementations.\n2) Earlier in the v1 patch, the bulk insert state\nallocation/deallocation was moved to the AM level, but I see that there's\nnothing AM-specific in doing so and I think it should be independent of\nthe AM. So I'm doing that in table_insert_begin() and table_insert_end().\nBecause of this, I had to move the BulkInsert function declarations\nfrom heapam.h to tableam.h.\n3) Corrected the typos and tried to adjust the indentation of the code.\n\nNote that I have not yet made the multi_insert_v2 API optional as\nsuggested earlier. I will think more on this and update.\n\nI'm not posting the updated 0002 to 0004 patches; I plan to do so\nafter a couple of reviews happen on the design of the APIs in 0001.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 24 Dec 2020 05:48:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 05:48:42AM +0530, Bharath Rupireddy wrote:\n> I'm not posting the updated 0002 to 0004 patches, I plan to do so\n> after a couple of reviews happen on the design of the APIs in 0001.\n> \n> Thoughts?\n\nAre you familiar with this work ?\n\nhttps://commitfest.postgresql.org/31/2717/\nReloptions for table access methods\n\nIt seems like that can be relevant for your patch, and I think some of what\nyour patch needs might be provided by AM opts. \n\nIt's difficult to generalize AMs when we have only one, but your use-case might\nbe a concrete example which would help to answer some questions on the other\nthread.\n\n@Jeff: https://commitfest.postgresql.org/31/2871/\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 24 Dec 2020 20:40:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 8:10 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Thu, Dec 24, 2020 at 05:48:42AM +0530, Bharath Rupireddy wrote:\n> > I'm not posting the updated 0002 to 0004 patches, I plan to do so\n> > after a couple of reviews happen on the design of the APIs in 0001.\n> >\n> > Thoughts?\n>\n> Are you familiar with this work ?\n>\n> https://commitfest.postgresql.org/31/2717/\n> Reloptions for table access methods\n>\n> It seems like that can be relevant for your patch, and I think some of what\n> your patch needs might be provided by AM opts.\n>\n> It's difficult to generalize AMs when we have only one, but your use-case might\n> be a concrete example which would help to answer some questions on the other\n> thread.\n>\n> @Jeff: https://commitfest.postgresql.org/31/2871/\n\nNote that I have not gone through the entire thread at [1]. From some\ninitial study, that patch proposes to allow different table AMs to\nhave custom rel options.\n\nThe v2 patch that I sent upthread [2] for the new table AMs has the heap AM\nmulti insert code moved inside the new heap AM implementation, and I\ndon't see any need for rel options there. In case any other AMs want\ncontrol over their multi insert API implementation via rel\noptions, I think the proposal at [1] can be useful.\n\nIIUC, there's no dependency as such between the new table AM\npatch and the rel options thread [1]. If I'm right, can this new\ntable AM patch [2] be reviewed further?\n\nThoughts?\n\n[1] - https://commitfest.postgresql.org/31/2717/\n[2] - https://www.postgresql.org/message-id/CALj2ACWMnZZCu%3DG0PJkEeYYicKeuJ-X%3DSU19i6vQ1%2B%3DuXz8u0Q%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Dec 2020 18:18:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On 28-12-2020 13:48, Bharath Rupireddy wrote:\n> On Fri, Dec 25, 2020 at 8:10 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> On Thu, Dec 24, 2020 at 05:48:42AM +0530, Bharath Rupireddy wrote:\n>>> I'm not posting the updated 0002 to 0004 patches, I plan to do so\n>>> after a couple of reviews happen on the design of the APIs in 0001.\n>>>\n>>> Thoughts?\n>>\n>> Are you familiar with this work ?\n>>\n>> https://commitfest.postgresql.org/31/2717/\n>> Reloptions for table access methods\n>>\n>> It seems like that can be relevant for your patch, and I think some of what\n>> your patch needs might be provided by AM opts.\n>>\n>> It's difficult to generalize AMs when we have only one, but your use-case might\n>> be a concrete example which would help to answer some questions on the other\n>> thread.\n>>\n>> @Jeff: https://commitfest.postgresql.org/31/2871/\n> \n> Note that I have not gone through the entire thread at [1]. On some\n> initial study, that patch is proposing to allow different table AMs to\n> have custom rel options.\n> \n> In the v2 patch that I sent upthread [2] for new table AMs has heap AM\n> multi insert code moved inside the new heap AM implementation and I\n> don't see any need of having rel options. In case, any other AMs want\n> to have the control for their multi insert API implementation via rel\n> options, I think the proposal at [1] can be useful.\n> \n> \n> Thoughts?\n> \n> [1] - https://commitfest.postgresql.org/31/2717/\n> [2] - https://www.postgresql.org/message-id/CALj2ACWMnZZCu%3DG0PJkEeYYicKeuJ-X%3DSU19i6vQ1%2B%3DuXz8u0Q%40mail.gmail.com\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \nHi,\n\n > IIUC, there's no dependency or anything as such for the new table AM\n > patch with the rel options thread [1]. If I'm right, can this new\n > table AM patch [2] be reviewed further?\n\nTo me this seems good enough. 
Reason is that I anticipate that there \nwould not necessarily be per-table options for now but rather global \noptions, if any. Moreover, if we want to make these kinds of tradeoffs \nuser-controllable, I would argue this should be done in a different \npatch-set either way. Reason is that there are parameters in heap \nalready that are computed / hardcoded as well (see e.g. \nRelationAddExtraBlocks).\n\n===\n\nAs to the patches themselves:\n\nI think the API is a huge step forward! I assume that we want to have a \nsingle-insert API like heap_insert_v2 so that we can encode the \nknowledge that there will just be a single insert coming and likely a \ncommit afterwards?\n\nReason I'm asking is that I quite liked the heap_insert_begin parameter \nis_multi, which could even be turned into an \"expected_rowcount\" of the \nnumber of rows expected to be committed in the transaction (e.g. single, \nseveral, thousands/stream).\nIf we were to make the API based on expected rowcounts, the whole \nheap_insert_v2, heap_insert and heap_multi_insert could be turned into a \nsingle function heap_insert, as the knowledge about buffering of the \nslots is then already stored in the TableInsertState, creating an API like:\n\n// expectedRows: -1 = streaming, otherwise expected rowcount.\nTableInsertState* heap_insert_begin(Relation rel, CommandId cid, int \noptions, int expectedRows);\nheap_insert(TableInsertState *state, TupleTableSlot *slot);\n\nDo you think that's a good idea?\n\nTwo smaller things I'm wondering:\n- the clear_mi_slots; why is this not in the HeapMultiInsertState? the \nslots themselves are declared there? Also, the boolean itself is \nsomewhat problematic I think, because it would only work if you specified \nis_multi=true, which would depend on the actual tableam implementing this \nin a way that copy/ctas/etc can also use the slot properly, which I \nthink would severely limit their freedom to store the slots more \nefficiently? 
Also, why do we want to do ExecClearTuple() anyway? Isn't \nit good enough that the next call to ExecCopySlot will effectively clear \nit out?\n- flushed -> why is this a stored boolean? isn't this indirectly encoded \nby cur_slots/cur_size == 0?\n\nFor patches 02-04 I quickly skimmed through them as I assume we first \nwant the API agreed upon. Generally they look nice and like a big step \nforward. What I'm just wondering about is the usage of the \nimplementation details like mistate->slots[X]. It makes a lot of sense \nto do so but also makes for a difficult compromise, because now the \ntableam has to guarantee a copy of the slot, and hopefully even one in a \nsomewhat efficient form.\n\nKind regards,\nLuc\n\n\n",
"msg_date": "Mon, 4 Jan 2021 08:59:01 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Jan 4, 2021 at 1:29 PM Luc Vlaming <luc@swarm64.com> wrote:\n> > table AM patch [2] be reviewed further?\n> As to the patches themselves:\n>\n> I think the API is a huge step forward! I assume that we want to have a\n> single-insert API like heap_insert_v2 so that we can encode the\n> knowledge that there will just be a single insert coming and likely a\n> commit afterwards?\n>\n> Reason I'm asking is that I quite liked the heap_insert_begin parameter\n> is_multi, which could even be turned into a \"expected_rowcount\" of the\n> amount of rows expected to be commited in the transaction (e.g. single,\n> several, thousands/stream).\n> If we were to make the API based on expected rowcounts, the whole\n> heap_insert_v2, heap_insert and heap_multi_insert could be turned into a\n> single function heap_insert, as the knowledge about buffering of the\n> slots is then already stored in the TableInsertState, creating an API\nlike:\n>\n> // expectedRows: -1 = streaming, otherwise expected rowcount.\n> TableInsertState* heap_insert_begin(Relation rel, CommandId cid, int\n> options, int expectedRows);\n> heap_insert(TableInsertState *state, TupleTableSlot *slot);\n>\n> Do you think that's a good idea?\n\nIIUC, your suggestion is to use expectedRows and move the multi insert\nimplementation heap_multi_insert_v2 to heap_insert_v2. If that's correct,\nheap_insert_v2 will look something like this:\n\nheap_insert_v2()\n{\n    if (single_insert)\n        // do single insertion work; the code in the existing heap_insert_v2 comes here\n    else\n        // do multi insertion work; the code in the existing heap_multi_insert_v2 comes here\n}\n\nI don't see any problem in combining the single and multi insert APIs into one.\nHaving said that, will the APIs be cleaner then? Isn't it going to be\nconfusing if a single heap_insert_v2 API does both kinds of work? 
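To make the tradeoff concrete, here is a tiny standalone sketch (illustrative only; the names, thresholds, and tuple sizes here are made up and are not the patch's actual code) of the buffer-and-flush behaviour that the multi insert path performs, and which a combined heap_insert_v2 would have to hide behind a runtime check:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical thresholds; the real patch derives these per AM. */
#define MAX_SLOTS 1000
#define MAX_SIZE  (64 * 1024)

typedef struct MultiInsertStateSketch
{
    int     cur_slots;  /* number of buffered tuples */
    size_t  cur_size;   /* bytes buffered so far */
    int     flushes;    /* how many times we flushed (for illustration) */
} MultiInsertStateSketch;

/* Analogue of heap_multi_insert_flush(): write the buffer out and reset it. */
void
sketch_flush(MultiInsertStateSketch *state)
{
    if (state->cur_slots == 0)
        return;
    /* heap_multi_insert(rel, slots, cur_slots, ...) would go here */
    state->flushes++;
    state->cur_slots = 0;
    state->cur_size = 0;
}

/* Analogue of heap_multi_insert_v2(): buffer one tuple, flush on threshold. */
void
sketch_insert(MultiInsertStateSketch *state, size_t tuple_size)
{
    state->cur_slots++;
    state->cur_size += tuple_size;
    if (state->cur_slots >= MAX_SLOTS || state->cur_size >= MAX_SIZE)
        sketch_flush(state);
}
```

The caller-visible sequence stays begin, multi_insert_v2 (N times), flush, end; the question is only whether the buffering decision above is selected by the caller's choice of API or made inside the AM.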
With the\nexisting separate APIs, for single insertion, the sequence of the API can\nbe like begin, insert_v2, end and for multi inserts it's like begin,\nmulti_insert_v2, flush, end. I prefer to have a separate multi insert API\nso that it will make the code look readable.\n\nThoughts?\n\n> Two smaller things I'm wondering:\n> - the clear_mi_slots; why is this not in the HeapMultiInsertState? the\n> slots themselves are declared there?\n\nFirstly, we sometimes need to have the buffered slots (please have a look at\nthe comments in the TableInsertState structure) outside the multi_insert API.\nAnd we need to have cleared the previously flushed slots before we start\nbuffering in heap_multi_insert_v2(). I can remove the clear_mi_slots flag\naltogether and do as follows: I will not set mistate->cur_slots to 0 in\nheap_multi_insert_flush after the flush, I will only set state->flushed to\ntrue. In heap_multi_insert_v2,\n\nvoid\nheap_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n{\n    TupleTableSlot *batchslot;\n    HeapMultiInsertState *mistate = (HeapMultiInsertState *) state->mistate;\n    Size sz;\n\n    Assert(mistate && mistate->slots);\n\n    /* If the slots were flushed previously, clear them before using them again. */\n    if (state->flushed)\n    {\n        int i;\n\n        for (i = 0; i < mistate->cur_slots; i++)\n            ExecClearTuple(mistate->slots[i]);\n\n        mistate->cur_slots = 0;\n        state->flushed = false;\n    }\n\n    if (mistate->slots[mistate->cur_slots] == NULL)\n        mistate->slots[mistate->cur_slots] =\n            table_slot_create(state->rel, NULL);\n\n    batchslot = mistate->slots[mistate->cur_slots];\n\n    ExecCopySlot(batchslot, slot);\n\nThoughts?\n\n> Also, why do we want to do ExecClearTuple() anyway? Isn't\n> it good enough that the next call to ExecCopySlot will effectively clear\n> it out?\n\nFor virtual, heap, and minimal tuple slots, yes, ExecCopySlot clears the\nslot before copying. 
But, for buffer heap slots, the\ntts_buffer_heap_copyslot does not always clear the destination slot, see\nbelow. If we fall into the else condition, we might get some issues. And also\nnote that, once the slot is cleared in ExecClearTuple, it will not be\ncleared again in ExecCopySlot because TTS_SHOULDFREE(slot) will be false.\nThat is why, let's have ExecClearTuple as is.\n\n    /*\n     * If the source slot is of a different kind, or is a buffer slot that has\n     * been materialized / is virtual, make a new copy of the tuple. Otherwise\n     * make a new reference to the in-buffer tuple.\n     */\n    if (dstslot->tts_ops != srcslot->tts_ops ||\n        TTS_SHOULDFREE(srcslot) ||\n        !bsrcslot->base.tuple)\n    {\n        MemoryContext oldContext;\n\n        ExecClearTuple(dstslot);\n    }\n    else\n    {\n        Assert(BufferIsValid(bsrcslot->buffer));\n\n        tts_buffer_heap_store_tuple(dstslot, bsrcslot->base.tuple,\n                                    bsrcslot->buffer, false);\n\n> - flushed -> why is this a stored boolean? isn't this indirectly encoded\n> by cur_slots/cur_size == 0?\n\nNote that cur_slots is in HeapMultiInsertState and outside of the new APIs\ni.e. in TableInsertState, mistate is a void pointer, and we can't really\naccess the cur_slots. I mean, we can access but we need to be dereferencing\nusing the tableam kind. Instead of doing all of that, to keep the API\ncleaner, I chose to have a boolean in the TableInsertState which we can see\nand use outside of the new APIs. Hope that's fine.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Jan 2021 15:36:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, 2021-01-04 at 08:59 +0100, Luc Vlaming wrote:\n> Reason I'm asking is that I quite liked the heap_insert_begin\n> parameter \n> is_multi, which could even be turned into a \"expected_rowcount\" of\n> the \n> amount of rows expected to be commited in the transaction (e.g.\n> single, \n> several, thousands/stream).\n\nDo you mean \"written by the statement\" instead of \"committed in the\ntransaction\"? It doesn't look like the TableInsertState state will\nsurvive across statement boundaries.\n\nThough that is an important question to consider. If the premise is\nthat a given custom AM may be much more efficient at bulk inserts than\nretail inserts (which is reasonable), then it makes sense to handle the\ncase of a transaction with many single-tuple inserts. But keeping\ninsert state across statement boundaries also raises a few potential\nproblems.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 05 Jan 2021 13:28:59 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On 05-01-2021 22:28, Jeff Davis wrote:\n> On Mon, 2021-01-04 at 08:59 +0100, Luc Vlaming wrote:\n>> Reason I'm asking is that I quite liked the heap_insert_begin\n>> parameter\n>> is_multi, which could even be turned into a \"expected_rowcount\" of\n>> the\n>> amount of rows expected to be commited in the transaction (e.g.\n>> single,\n>> several, thousands/stream).\n> \n> Do you mean \"written by the statement\" instead of \"committed in the\n> transaction\"? It doesn't look like the TableInsertState state will\n> survive across statement boundaries.\n> \n> Though that is an important question to consider. If the premise is\n> that a given custom AM may be much more efficient at bulk inserts than\n> retail inserts (which is reasonable), then it makes sense to handle the\n> case of a transaction with many single-tuple inserts. But keeping\n> insert state across statement boundaries also raises a few potential\n> problems.\n> \n> Regards,\n> \tJeff Davis\n> \n> \n\nI did actually mean until the end of the transaction. I know this is \nnot possible with the current design, but I think it would be \ncool to start going that way (even if slightly). Creating some more \nfreedom in how a tableam optimizes inserts, when one syncs to disk, etc., \nwould be good imo. It would allow one to create e.g. a tableam that \nwould not have as high an overhead when doing single-statement inserts.\n\nKind regards,\nLuc\n\n\n",
"msg_date": "Wed, 6 Jan 2021 08:00:55 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On 05-01-2021 11:06, Bharath Rupireddy wrote:\n> On Mon, Jan 4, 2021 at 1:29 PM Luc Vlaming <luc@swarm64.com \n> <mailto:luc@swarm64.com>> wrote:\n> > > table AM patch [2] be reviewed further?\n> > As to the patches themselves:\n> >\n> > I think the API is a huge step forward! I assume that we want to have a\n> > single-insert API like heap_insert_v2 so that we can encode the\n> > knowledge that there will just be a single insert coming and likely a\n> > commit afterwards?\n> >\n> > Reason I'm asking is that I quite liked the heap_insert_begin parameter\n> > is_multi, which could even be turned into a \"expected_rowcount\" of the\n> > amount of rows expected to be commited in the transaction (e.g. single,\n> > several, thousands/stream).\n> > If we were to make the API based on expected rowcounts, the whole\n> > heap_insert_v2, heap_insert and heap_multi_insert could be turned into a\n> > single function heap_insert, as the knowledge about buffering of the\n> > slots is then already stored in the TableInsertState, creating an API \n> like:\n> >\n> > // expectedRows: -1 = streaming, otherwise expected rowcount.\n> > TableInsertState* heap_insert_begin(Relation rel, CommandId cid, int\n> > options, int expectedRows);\n> > heap_insert(TableInsertState *state, TupleTableSlot *slot);\n> >\n> > Do you think that's a good idea?\n> \n> IIUC, your suggestion is to use expectedRows and move the multi insert \n> implementation heap_multi_insert_v2 to heap_insert_v2. If that's \n> correct, so heap_insert_v2 will look something like this:\n> \n> heap_insert_v2()\n> {\n> if (single_insert)\n> //do single insertion work, the code in existing heap_insert_v2 \n> comes here\n> else\n> //do multi insertion work, the code in existing \n> heap_multi_insert_v2 comes here\n> }\n> \n> I don't see any problem in combining single and multi insert APIs into \n> one. Having said that, will the APIs be cleaner then? 
Isn't it going to \n> be confusing if a single heap_insert_v2 API does both the works? With \n> the existing separate APIs, for single insertion, the sequence of the \n> API can be like begin, insert_v2, end and for multi inserts it's like \n> begin, multi_insert_v2, flush, end. I prefer to have a separate multi \n> insert API so that it will make the code look readable.\n> \n> Thoughts?\n\nThe main reason for me for wanting a single API is that I would like the \ndecision of using single or multi inserts to move to inside the tableam.\nFor e.g. a heap insert we might want to put the threshold at e.g. 100 \nrows so that the overhead of buffering the tuples is actually \ncompensated. For other tableam this logic might also be quite different, \nand I think therefore that it shouldn't be e.g. COPY or CTAS deciding \nwhether or not multi inserts should be used. Because otherwise the thing \nwe'll get is that there will be tableams that will ignore this flag and \ndo their own thing anyway. I'd rather have an API that gives all \nnecessary information to the tableam and then make the tableam do \"the \nright thing\".\n\nAnother reason I'm suggesting this API is that I would expect that the \nbegin is called in a different place in the code for the (multiple) \ninserts than the actual insert statement.\nTo me conceptually the begin and end are like e.g. the executor begin \nand end: you prepare the inserts with the knowledge you have at that \npoint. I assumed (wrongly?) that during the start of the statement one \nknows best how many rows are coming; and then the actual insertion of \nthe row doesn't have to deal anymore with multi/single inserts, choosing \nwhen to buffer or not, because that information has already been given \nduring the initial phase. One of the reasons this is appealing to me is \nthat e.g. 
in [1] there was discussion on when to switch to a multi \ninsert state, and imo this should be up to the tableam.\n\n> \n> > Two smaller things I'm wondering:\n> > - the clear_mi_slots; why is this not in the HeapMultiInsertState? the\n> > slots themselves are declared there?\n> \n> Firstly, we need to have the buffered slots sometimes(please have a look \n> at the comments in TableInsertState structure) outside the multi_insert \n> API. And we need to have cleared the previously flushed slots before we \n> start buffering in heap_multi_insert_v2(). I can remove the \n> clear_mi_slots flag altogether and do as follows: I will not set \n> mistate->cur_slots to 0 in heap_multi_insert_flush after the flush, I \n> will only set state->flushed to true. In heap_multi_insert_v2,\n> \n> void\n> heap_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n> {\n> TupleTableSlot *batchslot;\n> HeapMultiInsertState *mistate = (HeapMultiInsertState *)state->mistate;\n> Size sz;\n> \n> Assert(mistate && mistate->slots);\n> \n> * /* if the slots are flushed previously then clear them off before \n> using them again. */\n> if (state->flushed)\n> {\n> int i;\n> \n> for (i = 0; i < mistate->cur_slots; i++)\n> ExecClearTuple(mistate->slots[i]);\n> \n> mistate->cur_slots = 0;\n> state->flushed = false\n> }*\n> \n> if (mistate->slots[mistate->cur_slots] == NULL)\n> mistate->slots[mistate->cur_slots] =\n> table_slot_create(state->rel, NULL);\n> \n> batchslot = mistate->slots[mistate->cur_slots];\n> \n> ExecCopySlot(batchslot, slot);\n> \n> Thoughts?\n\n From what I can see you can just keep the v2-0001 patch and:\n- remove the flushed variable alltogether. 
mistate->cur_slots == 0 \nencodes this already and the variable is never actually checked on.\n- call ExecClearTuple just before ExecCopySlot()\n\nWhich would make the code something like:\n\nvoid\nheap_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n{\n\tTupleTableSlot *batchslot;\n\tHeapMultiInsertState *mistate = (HeapMultiInsertState *)state->mistate;\n\tSize sz;\n\n\tAssert(mistate && mistate->slots);\n\n\tif (mistate->slots[mistate->cur_slots] == NULL)\n\t\tmistate->slots[mistate->cur_slots] =\n\t\t\t\t\t\t\t\t\ttable_slot_create(state->rel, NULL);\n\n\tbatchslot = mistate->slots[mistate->cur_slots];\n\n\tExecClearTuple(batchslot);\n\tExecCopySlot(batchslot, slot);\n\n\t/*\n\t * Calculate the tuple size after the original slot is copied, because the\n\t * copied slot type and the tuple size may change.\n\t */\n\tsz = GetTupleSize(batchslot, mistate->max_size);\n\n\tAssert(sz > 0);\n\n\tmistate->cur_slots++;\n\tmistate->cur_size += sz;\n\n\tif (mistate->cur_slots >= mistate->max_slots ||\n\t\tmistate->cur_size >= mistate->max_size)\n\t\theap_multi_insert_flush(state);\n}\n\nvoid\nheap_multi_insert_flush(TableInsertState *state)\n{\n\tHeapMultiInsertState *mistate = (HeapMultiInsertState *)state->mistate;\n\tMemoryContext oldcontext;\n\n\tAssert(mistate && mistate->slots && mistate->cur_slots >= 0 &&\n\t\t mistate->context);\n\n\tif (mistate->cur_slots == 0)\n\t\treturn;\n\n\toldcontext = MemoryContextSwitchTo(mistate->context);\n\n\theap_multi_insert(state->rel, mistate->slots, mistate->cur_slots,\n\t\t\t\t\t state->cid, state->options, state->bistate);\n\n\tMemoryContextReset(mistate->context);\n\tMemoryContextSwitchTo(oldcontext);\n\n\t/*\n\t * Do not clear the slots always. 
Sometimes callers may want the slots for\n\t * index insertions or after row trigger executions in which case they have\n\t * to clear the tuples before using for the next insert batch.\n\t */\n\tif (state->clear_mi_slots)\n\t{\n\t\tint i;\n\n\t\tfor (i = 0; i < mistate->cur_slots; i++)\n\t\t\tExecClearTuple(mistate->slots[i]);\n\t}\n\n\tmistate->cur_slots = 0;\n\tmistate->cur_size = 0;\n}\n\n\n> \n> > Also, why do we want to do ExecClearTuple() anyway? Isn't\n> > it good enough that the next call to ExecCopySlot will effectively clear\n> > it out?\n> \n> For virtual, heap, minimal tuple slots, yes ExecCopySlot slot clears the \n> slot before copying. But, for buffer heap slots, the \n> tts_buffer_heap_copyslot does not always clear the destination slot, see \n> below. If we fall into else condition, we might get some issues. And \n> also note that, once the slot is cleared in ExecClearTuple, it will not \n> be cleared again in ExecCopySlot because TTS_SHOULDFREE(slot) will be \n> false. That is why, let's have ExecClearTuple as is.\n> \nI had no idea the buffer heap slot doesn't unconditionally clear out the \nslot :( So yes lets call it unconditionally ourselves. See also \nsuggestion above.\n\n> /*\n> * If the source slot is of a different kind, or is a buffer slot \n> that has\n> * been materialized / is virtual, make a new copy of the tuple. \n> Otherwise\n> * make a new reference to the in-buffer tuple.\n> */\n> if (dstslot->tts_ops != srcslot->tts_ops ||\n> TTS_SHOULDFREE(srcslot) ||\n> !bsrcslot->base.tuple)\n> {\n> MemoryContext oldContext;\n> \n> ExecClearTuple(dstslot);\n> }\n> else\n> {\n> Assert(BufferIsValid(bsrcslot->buffer));\n> \n> tts_buffer_heap_store_tuple(dstslot, bsrcslot->base.tuple,\n> bsrcslot->buffer, false);\n> \n> > - flushed -> why is this a stored boolean? isn't this indirectly encoded\n> > by cur_slots/cur_size == 0?\n> \n> Note that cur_slots is in HeapMultiInsertState and outside of the new \n> APIs i.e. 
in TableInsertState, mistate is a void pointer, and we can't \n> really access the cur_slots. I mean, we can access but we need to be \n> dereferencing using the tableam kind. Instead of doing all of that, to \n> keep the API cleaner, I chose to have a boolean in the TableInsertState \n> which we can see and use outside of the new APIs. Hope that's fine.\n> \nSo you mean the flushed variable is actually there to tell the user of \nthe API that they are supposed to call flush before end? Why can't the \nend call flush itself then? I guess I completely misunderstood the \npurpose of table_multi_insert_flush being public. I had assumed it is \nthere to from the usage site indicate that now would be a good time to \nflush, e.g. because of a statement ending or something. I had not \nunderstood this is a requirement that its always required to do \ntable_multi_insert_flush + table_insert_end.\nIMHO I would hide this from the callee, given that you would only really \ncall flush yourself when you immediately after would call end, or are \nthere other cases where one would be required to explicitly call flush?\n\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\nKind regards,\nLuc\n\n\n",
"msg_date": "Wed, 6 Jan 2021 08:26:38 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
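The buffer-and-flush scheme in Luc's suggested heap_multi_insert_v2()/heap_multi_insert_flush() pair can be modelled in miniature as follows. This is a standalone sketch only: the struct, function names, and thresholds are illustrative stand-ins for the patch's heap AM code (which buffers TupleTableSlots), not its actual API.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the proposed multi-insert buffering: a batch is
 * flushed when either the slot-count or the byte-size limit is hit. */
typedef struct MultiInsertState
{
    int     cur_slots;      /* buffered tuples */
    int     max_slots;      /* stand-in for a MAX_BUFFERED_TUPLES limit */
    size_t  cur_size;       /* buffered bytes */
    size_t  max_size;       /* stand-in for a MAX_BUFFERED_BYTES limit */
    int     nflushes;       /* how many times we flushed */
} MultiInsertState;

static void
multi_insert_flush(MultiInsertState *state)
{
    if (state->cur_slots == 0)
        return;             /* nothing buffered, nothing to do */
    /* the real code would call heap_multi_insert() on the slots here */
    state->nflushes++;
    state->cur_slots = 0;
    state->cur_size = 0;
}

static void
multi_insert(MultiInsertState *state, size_t tuple_size)
{
    /* the real code clears the reused slot (ExecClearTuple) and copies
     * the incoming tuple (ExecCopySlot) before accounting for its size */
    state->cur_slots++;
    state->cur_size += tuple_size;
    if (state->cur_slots >= state->max_slots ||
        state->cur_size >= state->max_size)
        multi_insert_flush(state);
}

int
run_batch_demo(void)
{
    MultiInsertState s = {0, 4, 0, 1000, 0};
    for (int i = 0; i < 10; i++)
        multi_insert(&s, 100);  /* 10 tuples of 100 bytes, batches of 4 */
    multi_insert_flush(&s);     /* final partial batch of 2 */
    return s.nflushes;
}
```

A batch goes out as soon as either limit is reached, mirroring the cur_slots/cur_size checks at the end of heap_multi_insert_v2() above.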
{
"msg_contents": "On Wed, Jan 6, 2021 at 12:56 PM Luc Vlaming <luc@swarm64.com> wrote:\n> The main reason for me for wanting a single API is that I would like the\n> decision of using single or multi inserts to move to inside the tableam.\n> For e.g. a heap insert we might want to put the threshold at e.g. 100\n> rows so that the overhead of buffering the tuples is actually\n> compensated. For other tableam this logic might also be quite different,\n> and I think therefore that it shouldn't be e.g. COPY or CTAS deciding\n> whether or not multi inserts should be used. Because otherwise the thing\n> we'll get is that there will be tableams that will ignore this flag and\n> do their own thing anyway. I'd rather have an API that gives all\n> necessary information to the tableam and then make the tableam do \"the\n> right thing\".\n>\n> Another reason I'm suggesting this API is that I would expect that the\n> begin is called in a different place in the code for the (multiple)\n> inserts than the actual insert statement.\n> To me conceptually the begin and end are like e.g. the executor begin\n> and end: you prepare the inserts with the knowledge you have at that\n> point. I assumed (wrongly?) that during the start of the statement one\n> knows best how many rows are coming; and then the actual insertion of\n> the row doesn't have to deal anymore with multi/single inserts, choosing\n> when to buffer or not, because that information has already been given\n> during the initial phase. One of the reasons this is appealing to me is\n> that e.g. in [1] there was discussion on when to switch to a multi\n> insert state, and imo this should be up to the tableam.\n\nAgree that whether to go with the multi or single inserts should be\ncompletely left to tableam implementation, we, as callers of those API\njust need to inform whether we expect single or multiple rows, and it\nshould be left to tableam implementation whether to actually go with\nbuffering or single inserts. 
ISTM that it's an elegant way of making\nthe API generic and abstracting everything from the callers. What I\nwonder is how can we know in advance the expected row count that we\nneed to pass in to heap_insert_begin()? IIUC, we can not estimate the\nupcoming rows in COPY, Insert Into Select, or Refresh Mat View or some\nother insert queries? Of course, we can look at the planner's\nestimated row count for the selects in COPY, Insert Into Select or\nRefresh Mat View after the planning, but to me that's not something we\ncan depend on and pass in the row count to the insert APIs.\n\nWhen we don't know the expected row count, why can't we (as callers of\nthe APIs) tell the APIs something like, \"I'm intending to perform\nmulti inserts, so if possible and if you have a mechanism to buffer\nthe slots, do it, otherwise insert the tuples one by one, or else do\nwhatever you want to do with the tuples I give you\". So, in case of\nCOPY we can ask the API for multi inserts and call heap_insert_begin()\nand heap_insert_v2().\n\nGiven the above explanation, I still feel bool is_multi would suffice.\n\nThoughts?\n\nOn dynamically switching from single to multi inserts, this can be\ndone by heap_insert_v2 itself. 
The way I think it's possible is that,\nsay we have some threshold row count 1000(can be a macro) after\ninserting those many tuples, heap_insert_v2 can switch to buffering\nmode.\n\nThoughts?\n\n> Which would make the code something like:\n>\n> void\n> heap_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n> {\n> TupleTableSlot *batchslot;\n> HeapMultiInsertState *mistate = (HeapMultiInsertState *)state->mistate;\n> Size sz;\n>\n> Assert(mistate && mistate->slots);\n>\n> if (mistate->slots[mistate->cur_slots] == NULL)\n> mistate->slots[mistate->cur_slots] =\n> table_slot_create(state->rel, NULL);\n>\n> batchslot = mistate->slots[mistate->cur_slots];\n>\n> ExecClearTuple(batchslot);\n> ExecCopySlot(batchslot, slot);\n>\n> /*\n> * Calculate the tuple size after the original slot is copied, because the\n> * copied slot type and the tuple size may change.\n> */\n> sz = GetTupleSize(batchslot, mistate->max_size);\n>\n> Assert(sz > 0);\n>\n> mistate->cur_slots++;\n> mistate->cur_size += sz;\n>\n> if (mistate->cur_slots >= mistate->max_slots ||\n> mistate->cur_size >= mistate->max_size)\n> heap_multi_insert_flush(state);\n> }\n\nI think clearing tuples before copying the slot as you suggested may\nwork without the need of clear_slots flag.\n\n>\n> > > Also, why do we want to do ExecClearTuple() anyway? Isn't\n> > > it good enough that the next call to ExecCopySlot will effectively clear\n> > > it out?\n> >\n> > For virtual, heap, minimal tuple slots, yes ExecCopySlot slot clears the\n> > slot before copying. But, for buffer heap slots, the\n> > tts_buffer_heap_copyslot does not always clear the destination slot, see\n> > below. If we fall into else condition, we might get some issues. And\n> > also note that, once the slot is cleared in ExecClearTuple, it will not\n> > be cleared again in ExecCopySlot because TTS_SHOULDFREE(slot) will be\n> > false. 
That is why, let's have ExecClearTuple as is.\n> >\n> I had no idea the buffer heap slot doesn't unconditionally clear out the\n> slot :( So yes lets call it unconditionally ourselves. See also\n> suggestion above.\n\nYeah, we will clear the tuple slot before copy to be on the safer side.\n\n> > /*\n> > * If the source slot is of a different kind, or is a buffer slot\n> > that has\n> > * been materialized / is virtual, make a new copy of the tuple.\n> > Otherwise\n> > * make a new reference to the in-buffer tuple.\n> > */\n> > if (dstslot->tts_ops != srcslot->tts_ops ||\n> > TTS_SHOULDFREE(srcslot) ||\n> > !bsrcslot->base.tuple)\n> > {\n> > MemoryContext oldContext;\n> >\n> > ExecClearTuple(dstslot);\n> > }\n> > else\n> > {\n> > Assert(BufferIsValid(bsrcslot->buffer));\n> >\n> > tts_buffer_heap_store_tuple(dstslot, bsrcslot->base.tuple,\n> > bsrcslot->buffer, false);\n> >\n> > > - flushed -> why is this a stored boolean? isn't this indirectly encoded\n> > > by cur_slots/cur_size == 0?\n> >\n> > Note that cur_slots is in HeapMultiInsertState and outside of the new\n> > APIs i.e. in TableInsertState, mistate is a void pointer, and we can't\n> > really access the cur_slots. I mean, we can access but we need to be\n> > dereferencing using the tableam kind. Instead of doing all of that, to\n> > keep the API cleaner, I chose to have a boolean in the TableInsertState\n> > which we can see and use outside of the new APIs. Hope that's fine.\n> >\n> So you mean the flushed variable is actually there to tell the user of\n> the API that they are supposed to call flush before end? Why can't the\n> end call flush itself then? I guess I completely misunderstood the\n> purpose of table_multi_insert_flush being public. I had assumed it is\n> there to from the usage site indicate that now would be a good time to\n> flush, e.g. because of a statement ending or something. 
I had not\n> understood this is a requirement that its always required to do\n> table_multi_insert_flush + table_insert_end.\n> IMHO I would hide this from the callee, given that you would only really\n> call flush yourself when you immediately after would call end, or are\n> there other cases where one would be required to explicitly call flush?\n\nWe need to know outside the multi_insert API whether the buffered\nslots in case of multi inserts are flushed. Reason is that if we have\nindexes or after row triggers, currently we call ExecInsertIndexTuples\nor ExecARInsertTriggers on the buffered slots outside the API in a\nloop after the flush.\n\nIf we agree on removing heap_multi_insert_v2 API and embed that logic\ninside heap_insert_v2, then we can do this - pass the required\ninformation and the functions ExecInsertIndexTuples and\nExecARInsertTriggers as callbacks so that, whether or not\nheap_insert_v2 chooses single or multi inserts, it can call back these\nfunctions with the required information passed after the flush. We can\nadd the callback and required information into TableInsertState. But\nI'm not quite sure we would want to turn ExecInsertIndexTuples and\nExecARInsertTriggers into callbacks.\n\nIf we don't want to go with the callback way, then at least we need to\nknow whether or not heap_insert_v2 has chosen multi inserts, if yes,\nthe buffered slots array, and the number of current buffered slots,\nwhether they are flushed or not in the TableInsertState. Then,\neventually, we might need all the HeapMultiInsertState info in the\nTableInsertState.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Jan 2021 18:36:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
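Bharath's idea above, of heap_insert_v2() switching to buffering mode on its own once a threshold row count is crossed, could look roughly like this. A hedged sketch only: the threshold value, struct, and counters are made up for illustration; the real function would route rows to heap_insert() or to the slot buffer rather than just counting them.

```c
#include <assert.h>

#define SWITCH_THRESHOLD 1000   /* illustrative, not from the patch */

typedef struct InsertState
{
    long    rows_seen;
    int     buffering;          /* 0 = single-insert mode, 1 = multi mode */
    long    single_inserts;
    long    buffered_inserts;
} InsertState;

static void
insert_v2(InsertState *state)
{
    state->rows_seen++;
    if (!state->buffering && state->rows_seen > SWITCH_THRESHOLD)
        state->buffering = 1;       /* enough rows seen: buffering now pays off */
    if (state->buffering)
        state->buffered_inserts++;  /* would go through the slot buffer */
    else
        state->single_inserts++;    /* would call heap_insert() directly */
}

long
run_switch_demo(long nrows)
{
    InsertState s = {0, 0, 0, 0};
    for (long i = 0; i < nrows; i++)
        insert_v2(&s);
    return s.single_inserts;        /* rows inserted before the switch */
}
```

The caller never sees the switch; short inserts stay on the cheap single-insert path, and only long streams pay the buffering setup cost.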
{
"msg_contents": "On 06-01-2021 14:06, Bharath Rupireddy wrote:\n> On Wed, Jan 6, 2021 at 12:56 PM Luc Vlaming <luc@swarm64.com> wrote:\n>> The main reason for me for wanting a single API is that I would like the\n>> decision of using single or multi inserts to move to inside the tableam.\n>> For e.g. a heap insert we might want to put the threshold at e.g. 100\n>> rows so that the overhead of buffering the tuples is actually\n>> compensated. For other tableam this logic might also be quite different,\n>> and I think therefore that it shouldn't be e.g. COPY or CTAS deciding\n>> whether or not multi inserts should be used. Because otherwise the thing\n>> we'll get is that there will be tableams that will ignore this flag and\n>> do their own thing anyway. I'd rather have an API that gives all\n>> necessary information to the tableam and then make the tableam do \"the\n>> right thing\".\n>>\n>> Another reason I'm suggesting this API is that I would expect that the\n>> begin is called in a different place in the code for the (multiple)\n>> inserts than the actual insert statement.\n>> To me conceptually the begin and end are like e.g. the executor begin\n>> and end: you prepare the inserts with the knowledge you have at that\n>> point. I assumed (wrongly?) that during the start of the statement one\n>> knows best how many rows are coming; and then the actual insertion of\n>> the row doesn't have to deal anymore with multi/single inserts, choosing\n>> when to buffer or not, because that information has already been given\n>> during the initial phase. One of the reasons this is appealing to me is\n>> that e.g. 
in [1] there was discussion on when to switch to a multi\n>> insert state, and imo this should be up to the tableam.\n> \n> Agree that whether to go with the multi or single inserts should be\n> completely left to tableam implementation, we, as callers of those API\n> just need to inform whether we expect single or multiple rows, and it\n> should be left to tableam implementation whether to actually go with\n> buffering or single inserts. ISTM that it's an elegant way of making\n> the API generic and abstracting everything from the callers. What I\n> wonder is how can we know in advance the expected row count that we\n> need to pass in to heap_insert_begin()? IIUC, we can not estimate the\n> upcoming rows in COPY, Insert Into Select, or Refresh Mat View or some\n> other insert queries? Of course, we can look at the planner's\n> estimated row count for the selects in COPY, Insert Into Select or\n> Refresh Mat View after the planning, but to me that's not something we\n> can depend on and pass in the row count to the insert APIs.\n> \n> When we don't know the expected row count, why can't we(as callers of\n> the APIs) tell the APIs something like, \"I'm intending to perform\n> multi inserts, so if possible and if you have a mechanism to buffer\n> the slots, do it, otherwise insert the tuples one by one, or else do\n> whatever you want to do with the tuples I give it you\". So, in case of\n> COPY we can ask the API for multi inserts and call heap_insert_begin()\n> and heap_insert_v2().\n> \n\nI thought that when it is available (because of planning) it would be \nnice to pass it in. If you don't know you could pass in a 1 for doing \nsingle inserts, and e.g. -1 or max-int for streaming. The reason I \nproposed it is so that tableam's have as much knowledge as posisble to \ndo the right thing. 
is_multi does also work of course but is just \nsomewhat less informative.\n\nWhat to me seemed somewhat counterintuitive is that with the proposed \nAPI it is possible to say is_multi=true and then still call \nheap_insert_v2 to do a single insert.\n\n> Given the above explanation, I still feel bool is_multi would suffice.\n> \n> Thoughts?\n> \n> On dynamically, switching from single to multi inserts, this can be\n> done by heap_insert_v2 itself. The way I think it's possible is that,\n> say we have some threshold row count 1000(can be a macro) after\n> inserting those many tuples, heap_insert_v2 can switch to buffering\n> mode.\n\nFor that I thought it'd be good to use the expected row count, but yeah \ndynamically switching also works and might work better if the expected \nrow counts are usually off.\n\n> \n> Thoughts?\n> \n>> Which would make the code something like:\n>>\n>> void\n>> heap_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n>> {\n>> TupleTableSlot *batchslot;\n>> HeapMultiInsertState *mistate = (HeapMultiInsertState *)state->mistate;\n>> Size sz;\n>>\n>> Assert(mistate && mistate->slots);\n>>\n>> if (mistate->slots[mistate->cur_slots] == NULL)\n>> mistate->slots[mistate->cur_slots] =\n>> table_slot_create(state->rel, NULL);\n>>\n>> batchslot = mistate->slots[mistate->cur_slots];\n>>\n>> ExecClearTuple(batchslot);\n>> ExecCopySlot(batchslot, slot);\n>>\n>> /*\n>> * Calculate the tuple size after the original slot is copied, because the\n>> * copied slot type and the tuple size may change.\n>> */\n>> sz = GetTupleSize(batchslot, mistate->max_size);\n>>\n>> Assert(sz > 0);\n>>\n>> mistate->cur_slots++;\n>> mistate->cur_size += sz;\n>>\n>> if (mistate->cur_slots >= mistate->max_slots ||\n>> mistate->cur_size >= mistate->max_size)\n>> heap_multi_insert_flush(state);\n>> }\n> \n> I think clearing tuples before copying the slot as you suggested may\n> work without the need of clear_slots flag.\n\nok, cool :)\n\n> \n>>\n>>> > Also, why do 
we want to do ExecClearTuple() anyway? Isn't\n>>> > it good enough that the next call to ExecCopySlot will effectively clear\n>>> > it out?\n>>>\n>>> For virtual, heap, minimal tuple slots, yes ExecCopySlot slot clears the\n>>> slot before copying. But, for buffer heap slots, the\n>>> tts_buffer_heap_copyslot does not always clear the destination slot, see\n>>> below. If we fall into else condition, we might get some issues. And\n>>> also note that, once the slot is cleared in ExecClearTuple, it will not\n>>> be cleared again in ExecCopySlot because TTS_SHOULDFREE(slot) will be\n>>> false. That is why, let's have ExecClearTuple as is.\n>>>\n>> I had no idea the buffer heap slot doesn't unconditionally clear out the\n>> slot :( So yes lets call it unconditionally ourselves. See also\n>> suggestion above.\n> \n> Yeah, we will clear the tuple slot before copy to be on the safer side.\n> \n\nok\n\n>>> /*\n>>> * If the source slot is of a different kind, or is a buffer slot\n>>> that has\n>>> * been materialized / is virtual, make a new copy of the tuple.\n>>> Otherwise\n>>> * make a new reference to the in-buffer tuple.\n>>> */\n>>> if (dstslot->tts_ops != srcslot->tts_ops ||\n>>> TTS_SHOULDFREE(srcslot) ||\n>>> !bsrcslot->base.tuple)\n>>> {\n>>> MemoryContext oldContext;\n>>>\n>>> ExecClearTuple(dstslot);\n>>> }\n>>> else\n>>> {\n>>> Assert(BufferIsValid(bsrcslot->buffer));\n>>>\n>>> tts_buffer_heap_store_tuple(dstslot, bsrcslot->base.tuple,\n>>> bsrcslot->buffer, false);\n>>>\n>>> > - flushed -> why is this a stored boolean? isn't this indirectly encoded\n>>> > by cur_slots/cur_size == 0?\n>>>\n>>> Note that cur_slots is in HeapMultiInsertState and outside of the new\n>>> APIs i.e. in TableInsertState, mistate is a void pointer, and we can't\n>>> really access the cur_slots. I mean, we can access but we need to be\n>>> dereferencing using the tableam kind. 
Instead of doing all of that, to\n>>> keep the API cleaner, I chose to have a boolean in the TableInsertState\n>>> which we can see and use outside of the new APIs. Hope that's fine.\n>>>\n>> So you mean the flushed variable is actually there to tell the user of\n>> the API that they are supposed to call flush before end? Why can't the\n>> end call flush itself then? I guess I completely misunderstood the\n>> purpose of table_multi_insert_flush being public. I had assumed it is\n>> there to from the usage site indicate that now would be a good time to\n>> flush, e.g. because of a statement ending or something. I had not\n>> understood this is a requirement that its always required to do\n>> table_multi_insert_flush + table_insert_end.\n>> IMHO I would hide this from the callee, given that you would only really\n>> call flush yourself when you immediately after would call end, or are\n>> there other cases where one would be required to explicitly call flush?\n> \n> We need to know outside the multi_insert API whether the buffered\n> slots in case of multi inserts are flushed. Reason is that if we have\n> indexes or after row triggers, currently we call ExecInsertIndexTuples\n> or ExecARInsertTriggers on the buffered slots outside the API in a\n> loop after the flush.\n> \n> If we agree on removing heap_multi_insert_v2 API and embed that logic\n> inside heap_insert_v2, then we can do this - pass the required\n> information and the functions ExecInsertIndexTuples and\n> ExecARInsertTriggers as callbacks so that, whether or not\n> heap_insert_v2 choses single or multi inserts, it can callback these\n> functions with the required information passed after the flush. We can\n> add the callback and required information into TableInsertState. But,\n> I'm not quite sure, we would make ExecInsertIndexTuples and\n> ExecARInsertTriggers. 
And in\n> \n> If we don't want to go with callback way, then at least we need to\n> know whether or not heap_insert_v2 has chosen multi inserts, if yes,\n> the buffered slots array, and the number of current buffered slots,\n> whether they are flushed or not in the TableInsertState. Then,\n> eventually, we might need all the HeapMultiInsertState info in the\n> TableInsertState.\n> \n\nTo me the callback API seems cleaner, that on heap_insert_begin we can \npass in a callback that is called on every flushed slot, or only on \nmulti-insert flushes. Is there a reason it would only be done for \nmulti-insert flushes or can it be generic?\n\n> Thoughts?\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nHi,\n\nReplied inline.\n\nKind regards,\nLuc\n\n\n",
"msg_date": "Tue, 12 Jan 2021 09:03:33 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
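The callback approach discussed here, registering the index/trigger work at begin time and letting the table AM invoke it at flush time, can be sketched as below. All names are hypothetical; in the real patch the callback would stand in for ExecInsertIndexTuples/ExecARInsertTriggers and would receive the flushed TupleTableSlots.

```c
#include <assert.h>

/* a per-tuple callback invoked for every buffered tuple at flush time */
typedef void (*flush_callback)(void *arg, int slotno);

typedef struct CbInsertState
{
    int             cur_slots;
    int             max_slots;
    flush_callback  on_flush;   /* may be NULL if no indexes/triggers */
    void           *cb_arg;
} CbInsertState;

static void
cb_flush(CbInsertState *state)
{
    for (int i = 0; i < state->cur_slots; i++)
        if (state->on_flush)
            state->on_flush(state->cb_arg, i);  /* index/trigger work */
    state->cur_slots = 0;
}

static void
cb_insert(CbInsertState *state)
{
    state->cur_slots++;
    if (state->cur_slots >= state->max_slots)
        cb_flush(state);
}

static void
count_cb(void *arg, int slotno)
{
    (void) slotno;
    (*(int *) arg)++;           /* count callback invocations */
}

int
run_callback_demo(void)
{
    int ncalls = 0;
    CbInsertState s = {0, 3, count_cb, &ncalls};
    for (int i = 0; i < 7; i++)
        cb_insert(&s);
    cb_flush(&s);               /* table_insert_end() would do this */
    return ncalls;              /* one call per inserted tuple */
}
```

With this shape the caller never needs a `flushed` flag at all: whether the AM chose single or multi inserts, every tuple's post-insert work happens through the registered callback.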
{
"msg_contents": "\n> If we agree on removing heap_multi_insert_v2 API and embed that logic\n> inside heap_insert_v2, then we can do this - pass the required\n> information and the functions ExecInsertIndexTuples and\n> ExecARInsertTriggers as callbacks so that, whether or not\n> heap_insert_v2 choses single or multi inserts, it can callback these\n> functions with the required information passed after the flush. We\n> can\n> add the callback and required information into TableInsertState. But,\n> I'm not quite sure, we would make ExecInsertIndexTuples and\n> ExecARInsertTriggers.\n\nHow should the API interact with INSERT INTO ... SELECT? Right now it\ndoesn't appear to be integrated at all, but that seems like a fairly\nimportant path for bulk inserts.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sat, 16 Jan 2021 15:04:16 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On 17-01-2021 00:04, Jeff Davis wrote:\n> \n>> If we agree on removing heap_multi_insert_v2 API and embed that logic\n>> inside heap_insert_v2, then we can do this - pass the required\n>> information and the functions ExecInsertIndexTuples and\n>> ExecARInsertTriggers as callbacks so that, whether or not\n>> heap_insert_v2 choses single or multi inserts, it can callback these\n>> functions with the required information passed after the flush. We\n>> can\n>> add the callback and required information into TableInsertState. But,\n>> I'm not quite sure, we would make ExecInsertIndexTuples and\n>> ExecARInsertTriggers.\n> \n> How should the API interact with INSERT INTO ... SELECT? Right now it\n> doesn't appear to be integrated at all, but that seems like a fairly\n> important path for bulk inserts.\n> \n> Regards,\n> \tJeff Davis\n> \n> \n\nHi,\n\nYou mean how it could because of that the table modification API uses \nthe table_tuple_insert_speculative ? Just wondering if you think if it \ngenerally cannot work or would like to see that path / more paths \nintegrated in to the patch.\n\nKind regards,\nLuc\n\n\n",
"msg_date": "Mon, 18 Jan 2021 08:58:08 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, 2021-01-18 at 08:58 +0100, Luc Vlaming wrote:\n> You mean how it could because of that the table modification API\n> uses \n> the table_tuple_insert_speculative ? Just wondering if you think if\n> it \n> generally cannot work or would like to see that path / more paths \n> integrated in to the patch.\n\nI think the patch should support INSERT INTO ... SELECT, and it will be\neasier to tell if we have the right API when that's integrated.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 19 Jan 2021 09:33:09 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "Hi,\n\nI addressed the following review comments and attaching v3 patch set.\n\n1) ExecClearTuple happens before ExecCopySlot in heap_multi_insert_v2\nand this allowed us to remove clear_mi_slots flag from\nTableInsertState.\n2) I retained the flushed variable inside TableInsertState so that the\ncallers can know whether the buffered slots have been flushed. If yes,\nthe callers can execute after insert row triggers or perform index\ninsertions. This is easier than passing the after insert row triggers\ninfo and index info to new multi insert table am and let it do. This\nway the functionalities can be kept separate i.e. multi insert ams do\nonly buffering, decisions on when to flush, insertions and the callers\nwill execute triggers or index insertions. And also none of the\nexisting table ams are performing these operations within them, so\nthis is inline with the current design of the table ams.\n3) I have kept the single and multi insert API separate. The previous\nsuggestion was to have only a single insert API and let the callers\nprovide initially whether they want multi or single inserts. One\nproblem with that approach is that we have to allow table ams to\nexecute the after row triggers or index insertions. That is something\nI personally don't like.\n\n0001 - new table ams implementation\n0002 - the new multi table ams used in CREATE TABLE AS and REFRESH\nMATERIALIZED VIEW\n0003 - the new multi table ams used in COPY\n\nPlease review the v3 patch set further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 17 Feb 2021 12:46:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
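The v3 division of labour described in this message (the AM only buffers and flushes, reporting via the `flushed` flag; the caller then walks the just-flushed slots itself to run index insertions or after-row triggers) can be sketched as follows. Field and function names here are illustrative, not the patch's actual TableInsertState layout.

```c
#include <assert.h>

typedef struct V3State
{
    int cur_slots;      /* tuples currently buffered by the AM */
    int max_slots;
    int flushed;        /* set by the AM when a flush just happened */
    int last_flush_n;   /* how many slots that flush wrote */
} V3State;

static void
v3_insert(V3State *state)
{
    state->flushed = 0;
    state->cur_slots++;
    if (state->cur_slots >= state->max_slots)
    {
        state->last_flush_n = state->cur_slots; /* heap_multi_insert() here */
        state->cur_slots = 0;
        state->flushed = 1;                     /* tell the caller */
    }
}

int
run_caller_demo(void)
{
    V3State s = {0, 4, 0, 0};
    int trigger_work = 0;       /* stands in for per-tuple index/trigger work */
    for (int i = 0; i < 9; i++)
    {
        v3_insert(&s);
        if (s.flushed)          /* caller-side check of the flushed flag */
            trigger_work += s.last_flush_n; /* ExecARInsertTriggers etc. */
    }
    if (s.cur_slots > 0)        /* caller handles the final partial batch too */
        trigger_work += s.cur_slots;
    return trigger_work;        /* every inserted tuple gets post-insert work */
}
```

This keeps buffering decisions inside the AM while triggers and index insertions stay with the caller, as the message argues.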
{
"msg_contents": "On Wed, Feb 17, 2021 at 12:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Hi,\n>\n> I addressed the following review comments and attaching v3 patch set.\n>\n> 1) ExecClearTuple happens before ExecCopySlot in heap_multi_insert_v2\n> and this allowed us to remove clear_mi_slots flag from\n> TableInsertState.\n> 2) I retained the flushed variable inside TableInsertState so that the\n> callers can know whether the buffered slots have been flushed. If yes,\n> the callers can execute after insert row triggers or perform index\n> insertions. This is easier than passing the after insert row triggers\n> info and index info to new multi insert table am and let it do. This\n> way the functionalities can be kept separate i.e. multi insert ams do\n> only buffering, decisions on when to flush, insertions and the callers\n> will execute triggers or index insertions. And also none of the\n> existing table ams are performing these operations within them, so\n> this is inline with the current design of the table ams.\n> 3) I have kept the single and multi insert API separate. The previous\n> suggestion was to have only a single insert API and let the callers\n> provide initially whether they want multi or single inserts. One\n> problem with that approach is that we have to allow table ams to\n> execute the after row triggers or index insertions. 
That is something\n> I personally don't like.\n>\n> 0001 - new table ams implementation\n> 0002 - the new multi table ams used in CREATE TABLE AS and REFRESH\n> MATERIALIZED VIEW\n> 0003 - the new multi table ams used in COPY\n>\n> Please review the v3 patch set further.\n\nBelow is the performance gain measured for CREATE TABLE AS with the\nnew multi insert am proposed in this thread:\n\ncase 1 - 2 integer(of 4 bytes each) columns, 3 varchar(8), tuple size\n59 bytes, 100mn tuples\non master - 185sec\non master with multi inserts - 121sec, gain - 1.52X\n\ncase 2 - 2 bigint(of 8 bytes each) columns, 3 name(of 64 bytes each)\ncolumns, 1 varchar(8), tuple size 241 bytes, 100mn tuples\non master - 367sec\non master with multi inserts - 291sec, gain - 1.26X\n\ncase 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes, 100mn tuples\non master - 130sec\non master with multi inserts - 105sec, gain - 1.23X\n\ncase 4 - 2 bigint(of 8 bytes each) columns, 16 name(of 64 bytes each)\ncolumns, tuple size 1064 bytes, 10mn tuples\non master - 120sec\non master with multi inserts - 115sec, gain - 1.04X\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Feb 2021 11:15:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
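For reference, the quoted gains are simply elapsed-time ratios truncated to two decimal places (e.g. 185/121 ≈ 1.52). A trivial sketch of that arithmetic:

```c
#include <assert.h>

/* gain scaled by 100 and truncated, as in "gain - 1.52X" above */
int
gain_x100(int master_sec, int multi_insert_sec)
{
    return (master_sec * 100) / multi_insert_sec;   /* integer division truncates */
}
```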
{
"msg_contents": "Hi,\nbq. case 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes\n\nIs there some other column(s) per row apart from the integer columns ?\nSince the 2 integer columns only occupy 8 bytes. I wonder where the other\n32-8=24 bytes come from.\n\nThanks\n\nOn Fri, Feb 19, 2021 at 9:45 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Feb 17, 2021 at 12:46 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Hi,\n> >\n> > I addressed the following review comments and attaching v3 patch set.\n> >\n> > 1) ExecClearTuple happens before ExecCopySlot in heap_multi_insert_v2\n> > and this allowed us to remove clear_mi_slots flag from\n> > TableInsertState.\n> > 2) I retained the flushed variable inside TableInsertState so that the\n> > callers can know whether the buffered slots have been flushed. If yes,\n> > the callers can execute after insert row triggers or perform index\n> > insertions. This is easier than passing the after insert row triggers\n> > info and index info to new multi insert table am and let it do. This\n> > way the functionalities can be kept separate i.e. multi insert ams do\n> > only buffering, decisions on when to flush, insertions and the callers\n> > will execute triggers or index insertions. And also none of the\n> > existing table ams are performing these operations within them, so\n> > this is inline with the current design of the table ams.\n> > 3) I have kept the single and multi insert API separate. The previous\n> > suggestion was to have only a single insert API and let the callers\n> > provide initially whether they want multi or single inserts. One\n> > problem with that approach is that we have to allow table ams to\n> > execute the after row triggers or index insertions. 
That is something\n> > I personally don't like.\n> >\n> > 0001 - new table ams implementation\n> > 0002 - the new multi table ams used in CREATE TABLE AS and REFRESH\n> > MATERIALIZED VIEW\n> > 0003 - the new multi table ams used in COPY\n> >\n> > Please review the v3 patch set further.\n>\n> Below is the performance gain measured for CREATE TABLE AS with the\n> new multi insert am propsed in this thread:\n>\n> case 1 - 2 integer(of 4 bytes each) columns, 3 varchar(8), tuple size\n> 59 bytes, 100mn tuples\n> on master - 185sec\n> on master with multi inserts - 121sec, gain - 1.52X\n>\n> case 2 - 2 bigint(of 8 bytes each) columns, 3 name(of 64 bytes each)\n> columns, 1 varchar(8), tuple size 241 bytes, 100mn tuples\n> on master - 367sec\n> on master with multi inserts - 291sec, gain - 1.26X\n>\n> case 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes, 100mn\n> tuples\n> on master - 130sec\n> on master with multi inserts - 105sec, gain - 1.23X\n>\n> case 4 - 2 bigint(of 8 bytes each) columns, 16 name(of 64 bytes each)\n> columns, tuple size 1064 bytes, 10mn tuples\n> on master - 120sec\n> on master with multi inserts - 115sec, gain - 1.04X\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n",
"msg_date": "Fri, 19 Feb 2021 23:25:51 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sat, Feb 20, 2021 at 12:53 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> bq. case 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes\n>\n> Is there some other column(s) per row apart from the integer columns ? Since the 2 integer columns only occupy 8 bytes. I wonder where the other 32-8=24 bytes come from.\n\nThere are no other columns in the test case. Those 24 bytes are the\ntuple header (23 bytes) plus 1 byte of alignment padding. See\n\"Table Row Layout\" from\nhttps://www.postgresql.org/docs/devel/storage-page-layout.html.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 20 Feb 2021 13:20:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
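The 32-byte figure discussed in the message above can be reproduced with a quick back-of-envelope calculation. This is an illustrative sketch, not PostgreSQL source code; the 23-byte header size and 8-byte MAXALIGN are the stock values for 64-bit builds:

```python
# Rough heap tuple size estimate (hypothetical helper, not PostgreSQL code).
# Assumes 64-bit defaults: 23-byte HeapTupleHeaderData, MAXALIGN of 8.
HEAP_TUPLE_HEADER = 23
MAXALIGN = 8

def maxalign(n):
    # Round n up to the next multiple of MAXALIGN.
    return (n + MAXALIGN - 1) // MAXALIGN * MAXALIGN

def heap_tuple_size(user_data_bytes):
    # The header is padded to a MAXALIGN boundary before user data starts,
    # and the whole tuple is padded again at the end.
    return maxalign(HEAP_TUPLE_HEADER) + maxalign(user_data_bytes)

# case 3 above: two 4-byte integer columns -> 8 bytes of user data
print(heap_tuple_size(8))  # -> 32
```

So the "extra" 24 bytes are maxalign(23) = 24 bytes of header plus padding, with no per-column overhead involved.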
{
"msg_contents": "On Sat, Feb 20, 2021 at 11:15 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> > Please review the v3 patch set further.\n>\n> Below is the performance gain measured for CREATE TABLE AS with the\n> new multi insert am propsed in this thread:\n>\n> case 1 - 2 integer(of 4 bytes each) columns, 3 varchar(8), tuple size\n> 59 bytes, 100mn tuples\n> on master - 185sec\n> on master with multi inserts - 121sec, gain - 1.52X\n>\n> case 2 - 2 bigint(of 8 bytes each) columns, 3 name(of 64 bytes each)\n> columns, 1 varchar(8), tuple size 241 bytes, 100mn tuples\n> on master - 367sec\n> on master with multi inserts - 291sec, gain - 1.26X\n>\n> case 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes, 100mn tuples\n> on master - 130sec\n> on master with multi inserts - 105sec, gain - 1.23X\n>\n> case 4 - 2 bigint(of 8 bytes each) columns, 16 name(of 64 bytes each)\n> columns, tuple size 1064 bytes, 10mn tuples\n> on master - 120sec\n> on master with multi inserts - 115sec, gain - 1.04X\n\nPerformance numbers look good, especially with the smaller tuple size.\nI was looking into the patch and I have a question.\n\n+static inline void\n+table_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n+{\n+ state->rel->rd_tableam->tuple_insert_v2(state, slot);\n+}\n+\n+static inline void\n+table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n+{\n+ state->rel->rd_tableam->multi_insert_v2(state, slot);\n+}\n\nWhy do we need to invent a new version table_insert_v2? And also why\nit is named table_insert* instead of table_tuple_insert*?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Mar 2021 18:37:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 6:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Feb 20, 2021 at 11:15 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > > Please review the v3 patch set further.\n> >\n> > Below is the performance gain measured for CREATE TABLE AS with the\n> > new multi insert am propsed in this thread:\n> >\n> > case 1 - 2 integer(of 4 bytes each) columns, 3 varchar(8), tuple size\n> > 59 bytes, 100mn tuples\n> > on master - 185sec\n> > on master with multi inserts - 121sec, gain - 1.52X\n> >\n> > case 2 - 2 bigint(of 8 bytes each) columns, 3 name(of 64 bytes each)\n> > columns, 1 varchar(8), tuple size 241 bytes, 100mn tuples\n> > on master - 367sec\n> > on master with multi inserts - 291sec, gain - 1.26X\n> >\n> > case 3 - 2 integer(of 4 bytes each) columns, tuple size 32 bytes, 100mn tuples\n> > on master - 130sec\n> > on master with multi inserts - 105sec, gain - 1.23X\n> >\n> > case 4 - 2 bigint(of 8 bytes each) columns, 16 name(of 64 bytes each)\n> > columns, tuple size 1064 bytes, 10mn tuples\n> > on master - 120sec\n> > on master with multi inserts - 115sec, gain - 1.04X\n>\n> Performance numbers look good, especially with the smaller tuple size.\n\nThanks.\n\n> I was looking into the patch and I have a question.\n>\n> +static inline void\n> +table_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n> +{\n> + state->rel->rd_tableam->tuple_insert_v2(state, slot);\n> +}\n> +\n> +static inline void\n> +table_multi_insert_v2(TableInsertState *state, TupleTableSlot *slot)\n> +{\n> + state->rel->rd_tableam->multi_insert_v2(state, slot);\n> +}\n>\n> Why do we need to invent a new version table_insert_v2? And also why\n> it is named table_insert* instead of table_tuple_insert*?\n\nNew version, because we changed the input parameters, now passing the\nparams via TableInsertState but existing table_tuple_insert doesn't do\nthat. 
If okay, I can change table_insert_v2 to table_tuple_insert_v2?\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:45:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:45 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Mar 8, 2021 at 6:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n> > Why do we need to invent a new version table_insert_v2? And also why\n> > it is named table_insert* instead of table_tuple_insert*?\n>\n> New version, because we changed the input parameters, now passing the\n> params via TableInsertState but existing table_tuple_insert doesn't do\n> that. If okay, I can change table_insert_v2 to table_tuple_insert_v2?\n> Thoughts?\n\nChanged table_insert_v2 to table_tuple_insert_v2, and also rebased\nthe patches onto the latest master.\n\nAttaching the v4 patch set. Please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Mar 2021 10:21:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 10:21 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Attaching the v4 patch set. Please review it further.\n\nAttaching v5 patch set after rebasing onto the latest master. Please\nreview it further.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Apr 2021 09:49:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 9:49 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 10, 2021 at 10:21 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Attaching the v4 patch set. Please review it further.\n>\n> Attaching v5 patch set after rebasing onto the latest master.\n\nAnother rebase due to conflicts in 0003. Attaching v6 for review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Apr 2021 10:21:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, 19 Apr 2021 at 06:52, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 9:49 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Mar 10, 2021 at 10:21 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Attaching the v4 patch set. Please review it further.\n> >\n> > Attaching v5 patch set after rebasing onto the latest master.\n>\n> Another rebase due to conflicts in 0003. Attaching v6 for review.\n\nI recently touched the topic of multi_insert, and I remembered this\npatch. I had to dig a bit to find it, but as it's still open I've\nadded some comments:\n\n> diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> +#define MAX_BUFFERED_TUPLES 1000\n> +#define MAX_BUFFERED_BYTES 65535\n\nIt looks like these values were copied over from copyfrom.c, but are\nnow expressed in the context of the heapam.\nAs these values are now heap-specific (as opposed to the\nTableAM-independent COPY infrastructure), shouldn't we instead\noptimize for heap page insertions? That is, I suggest using a multiple\n(1 or more) of MaxHeapTuplesPerPage for _TUPLES, and that same / a\nsimilar multiple of BLCKSZ for MAX_BUFFERED_BYTES.\n\n> TableInsertState->flushed\n> TableInsertState->mi_slots\n\nI don't quite like the current storage-and-feedback mechanism for\nflushed tuples. The current assumptions in this mechanism seem to be\nthat\n1.) access methods always want to flush all available tuples at once,\n2.) access methods want to maintain the TupleTableSlots for all\ninserted tuples that have not yet had all triggers handled, and\n3.) 
we need access to the not-yet-flushed TupleTableSlots.\n\nI think that that is not a correct set of assumptions; I think that\nonly flushed tuples need to be visible to the tableam-agnostic code;\nand that tableams should be allowed to choose which tuples to flush at\nwhich point; as long as they're all flushed after a final\nmulti_insert_flush.\n\nExamples:\nA heap-based access method might want bin-pack tuples and write out\nfull pages only; and thus keep some tuples in the buffers as they\ndidn't fill a page; thus having flushed only a subset of the current\nbuffered tuples.\nA columnstore-based access method might not want to maintain the\nTupleTableSlots of original tuples, but instead use temporary columnar\nstorage: TupleTableSlots are quite large when working with huge\namounts of tuples; and keeping lots of tuple data in memory is\nexpensive.\n\nAs such, I think that this should be replaced with a\nTableInsertState->mi_flushed_slots + TableInsertState->mi_flushed_len,\nmanaged by the tableAM, in which only the flushed tuples are visible\nto the AM-agnostic code. This is not much different from how the\nimplementation currently works; except that ->mi_slots now does not\nexpose unflushed tuples; and that ->flushed is replaced by an integer\nvalue of number of flushed tuples.\n\nA further improvement (in my opinion) would be the change from a\nsingle multi_insert_flush() to a signalling-based multi_insert_flush:\nIt is not unreasonable for e.g. a columnstore to buffer tens of\nthousands of inserts; but doing so in TupleTableSlots would introduce\na high memory usage. Allowing for batched retrieval of flushed tuples\nwould help in memory usage; which is why multiple calls to\nmulti_insert_flush() could be useful. 
To handle this gracefully, we'd\nprobably add TIS->mi_flush_remaining, where each insert adds one to\nmi_flush_remaining; and each time mi_flushed_slots has been handled\nmi_flush_remaining is decreased by mi_flushed_len by the handler code.\nOnce we're done inserting into the table, we keep calling\nmulti_insert_flush until no more tuples are being flushed (and error\nout if we're still waiting for flushes but no new flushed tuples are\nreturned).\n\n- Matthias\n\n\n",
"msg_date": "Fri, 4 Mar 2022 15:37:32 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
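Matthias' sizing suggestion above can be made concrete. Under the default 8 kB block size, MaxHeapTuplesPerPage works out to 291, so heap-specific buffer limits could be expressed as multiples of that and of BLCKSZ. The arithmetic below is illustrative only; it mirrors the formula of the MaxHeapTuplesPerPage macro under default 64-bit compile options:

```python
# Illustrative derivation of MaxHeapTuplesPerPage for default builds
# (assumes BLCKSZ = 8192, a 24-byte page header, the 23-byte tuple header
# MAXALIGNed to 24, and a 4-byte line pointer per tuple).
BLCKSZ = 8192
SIZE_OF_PAGE_HEADER = 24
TUPLE_HEADER_ALIGNED = 24  # MAXALIGN(23)
ITEM_ID_SIZE = 4

max_heap_tuples_per_page = ((BLCKSZ - SIZE_OF_PAGE_HEADER)
                            // (TUPLE_HEADER_ALIGNED + ITEM_ID_SIZE))
print(max_heap_tuples_per_page)  # -> 291

# A heap-oriented multi-insert buffer could then be sized as, say:
MAX_BUFFERED_TUPLES = 2 * max_heap_tuples_per_page  # ~2 pages of tuples
MAX_BUFFERED_BYTES = 8 * BLCKSZ                     # ~8 pages of data
```

The multipliers (2 and 8 here) are hypothetical; the point is that page-geometry-derived limits track heap behavior better than the COPY-era constants 1000 and 65535.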
{
"msg_contents": "On Fri, Mar 4, 2022 at 8:07 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Another rebase due to conflicts in 0003. Attaching v6 for review.\n>\n> I recently touched the topic of multi_insert, and I remembered this\n> patch. I had to dig a bit to find it, but as it's still open I've\n> added some comments:\n\nThanks for reviving the thread. I almost lost hope in it. In fact, it\ntook me a while to recollect the work and respond to your comments.\nI'm now happy to answer or continue working on this patch if you or\nsomeone is really interested to review it and take it to the end.\n\n> > diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> > +#define MAX_BUFFERED_TUPLES 1000\n> > +#define MAX_BUFFERED_BYTES 65535\n>\n> It looks like these values were copied over from copyfrom.c, but are\n> now expressed in the context of the heapam.\n> As these values are now heap-specific (as opposed to the\n> TableAM-independent COPY infrastructure), shouldn't we instead\n> optimize for heap page insertions? That is, I suggest using a multiple\n> (1 or more) of MaxHeapTuplesPerPage for _TUPLES, and that same / a\n> similar multiple of BLCKSZ for MAX_BUFFERED_BYTES.\n\nWe can do that. In fact, it is a good idea to let callers pass in as\nan input to tuple_insert_begin and have it as part of\nTableInsertState. If okay, I will do that in the next patch.\n\n> > TableInsertState->flushed\n> > TableInsertState->mi_slots\n>\n> I don't quite like the current storage-and-feedback mechanism for\n> flushed tuples. The current assumptions in this mechanism seem to be\n> that\n> 1.) access methods always want to flush all available tuples at once,\n> 2.) access methods want to maintain the TupleTableSlots for all\n> inserted tuples that have not yet had all triggers handled, and\n> 3.) 
we need access to the not-yet-flushed TupleTableSlots.\n>\n> I think that that is not a correct set of assumptions; I think that\n> only flushed tuples need to be visible to the tableam-agnostic code;\n> and that tableams should be allowed to choose which tuples to flush at\n> which point; as long as they're all flushed after a final\n> multi_insert_flush.\n>\n> Examples:\n> A heap-based access method might want bin-pack tuples and write out\n> full pages only; and thus keep some tuples in the buffers as they\n> didn't fill a page; thus having flushed only a subset of the current\n> buffered tuples.\n> A columnstore-based access method might not want to maintain the\n> TupleTableSlots of original tuples, but instead use temporary columnar\n> storage: TupleTableSlots are quite large when working with huge\n> amounts of tuples; and keeping lots of tuple data in memory is\n> expensive.\n>\n> As such, I think that this should be replaced with a\n> TableInsertState->mi_flushed_slots + TableInsertState->mi_flushed_len,\n> managed by the tableAM, in which only the flushed tuples are visible\n> to the AM-agnostic code. This is not much different from how the\n> implementation currently works; except that ->mi_slots now does not\n> expose unflushed tuples; and that ->flushed is replaced by an integer\n> value of number of flushed tuples.\n\nYeah, that makes sense. Let's let table AMs expose the flushed tuples\nso that the callers can handle the after-insert row triggers\non them.\n\nIIUC, TableInsertState needs to have a few other variables:\n\n /* Below members are only used for multi inserts. */\n /* Array of buffered slots. */\n TupleTableSlot **mi_slots;\n /* Number of slots that are currently buffered. */\n int32 mi_cur_slots;\n /* Array of flushed slots that will be used by callers to handle\nafter-insert row triggers or similar events outside */\n TupleTableSlot **mi_flushed_slots;\n /* Number of slots that have been flushed. 
*/\n int32 mi_no_of_flushed_slots;\n\nThe implementation of heap_multi_insert_flush will just set the\nmi_slots to mi_flushed_slots.\n\n> A further improvement (in my opinion) would be the change from a\n> single multi_insert_flush() to a signalling-based multi_insert_flush:\n> It is not unreasonable for e.g. a columnstore to buffer tens of\n> thousands of inserts; but doing so in TupleTableSlots would introduce\n> a high memory usage. Allowing for batched retrieval of flushed tuples\n> would help in memory usage; which is why multiple calls to\n> multi_insert_flush() could be useful. To handle this gracefully, we'd\n> probably add TIS->mi_flush_remaining, where each insert adds one to\n> mi_flush_remaining; and each time mi_flushed_slots has been handled\n> mi_flush_remaining is decreased by mi_flushed_len by the handler code.\n> Once we're done inserting into the table, we keep calling\n> multi_insert_flush until no more tuples are being flushed (and error\n> out if we're still waiting for flushes but no new flushed tuples are\n> returned).\n\nThe current approach is signalling-based right?\nheap_multi_insert_v2\n if (state->mi_cur_slots >= mistate->max_slots ||\n mistate->cur_size >= mistate->max_size)\n heap_multi_insert_flush(state);\n\nThe table_multi_insert_v2 am implementers will have to carefully\nchoose buffering strategy i.e. number of tuples, size to buffer and\ndecide rightly without hitting memory usages.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sun, 6 Mar 2022 16:41:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sun, 6 Mar 2022 at 12:12, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Mar 4, 2022 at 8:07 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > Another rebase due to conflicts in 0003. Attaching v6 for review.\n> >\n> > I recently touched the topic of multi_insert, and I remembered this\n> > patch. I had to dig a bit to find it, but as it's still open I've\n> > added some comments:\n>\n> Thanks for reviving the thread. I almost lost hope in it. In fact, it\n> took me a while to recollect the work and respond to your comments.\n> I'm now happy to answer or continue working on this patch if you or\n> someone is really interested to review it and take it to the end.\n>\n> > > diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> > > +#define MAX_BUFFERED_TUPLES 1000\n> > > +#define MAX_BUFFERED_BYTES 65535\n> >\n> > It looks like these values were copied over from copyfrom.c, but are\n> > now expressed in the context of the heapam.\n> > As these values are now heap-specific (as opposed to the\n> > TableAM-independent COPY infrastructure), shouldn't we instead\n> > optimize for heap page insertions? That is, I suggest using a multiple\n> > (1 or more) of MaxHeapTuplesPerPage for _TUPLES, and that same / a\n> > similar multiple of BLCKSZ for MAX_BUFFERED_BYTES.\n>\n> We can do that. In fact, it is a good idea to let callers pass in as\n> an input to tuple_insert_begin and have it as part of\n> TableInsertState. If okay, I will do that in the next patch.\n>\n> > > TableInsertState->flushed\n> > > TableInsertState->mi_slots\n> >\n> > I don't quite like the current storage-and-feedback mechanism for\n> > flushed tuples. The current assumptions in this mechanism seem to be\n> > that\n> > 1.) access methods always want to flush all available tuples at once,\n> > 2.) 
access methods want to maintain the TupleTableSlots for all\n> > inserted tuples that have not yet had all triggers handled, and\n> > 3.) we need access to the not-yet-flushed TupleTableSlots.\n> >\n> > I think that that is not a correct set of assumptions; I think that\n> > only flushed tuples need to be visible to the tableam-agnostic code;\n> > and that tableams should be allowed to choose which tuples to flush at\n> > which point; as long as they're all flushed after a final\n> > multi_insert_flush.\n> >\n> > Examples:\n> > A heap-based access method might want bin-pack tuples and write out\n> > full pages only; and thus keep some tuples in the buffers as they\n> > didn't fill a page; thus having flushed only a subset of the current\n> > buffered tuples.\n> > A columnstore-based access method might not want to maintain the\n> > TupleTableSlots of original tuples, but instead use temporary columnar\n> > storage: TupleTableSlots are quite large when working with huge\n> > amounts of tuples; and keeping lots of tuple data in memory is\n> > expensive.\n> >\n> > As such, I think that this should be replaced with a\n> > TableInsertState->mi_flushed_slots + TableInsertState->mi_flushed_len,\n> > managed by the tableAM, in which only the flushed tuples are visible\n> > to the AM-agnostic code. This is not much different from how the\n> > implementation currently works; except that ->mi_slots now does not\n> > expose unflushed tuples; and that ->flushed is replaced by an integer\n> > value of number of flushed tuples.\n>\n> Yeah, that makes sense. Let's table AMs expose the flushed tuples\n> outside on which the callers can handle the after-insert row triggers\n> upon them.\n>\n> IIUC, TableInsertState needs to have few other variables:\n>\n> /* Below members are only used for multi inserts. */\n> /* Array of buffered slots. 
*/\n> TupleTableSlot **mi_slots;\n\nNot quite: there's no need for TupleTableSlot **mi_slots in the\nTableInsertState; as the buffer used by the tableAM to buffer\nunflushed tuples shouldn't be publicly visible. I suspect that moving\nthat field to HeapMultiInsertState instead would be the prudent thing\nto do; limiting the external access of AM-specific buffers.\n\n> /* Number of slots that are currently buffered. */\n> int32 mi_cur_slots;\n> /* Array of flushed slots that will be used by callers to handle\n> after-insert row triggers or similar events outside */\n> TupleTableSlot **mi_flushed_slots ;\n> /* Number of slots that are currently buffered. */\n> int32 mi_no_of_flushed_slots;\n>\n> The implementation of heap_multi_insert_flush will just set the\n> mi_slots to mi_flushed_slots.\n\nYes.\n\n> > A further improvement (in my opinion) would be the change from a\n> > single multi_insert_flush() to a signalling-based multi_insert_flush:\n> > It is not unreasonable for e.g. a columnstore to buffer tens of\n> > thousands of inserts; but doing so in TupleTableSlots would introduce\n> > a high memory usage. Allowing for batched retrieval of flushed tuples\n> > would help in memory usage; which is why multiple calls to\n> > multi_insert_flush() could be useful. 
To handle this gracefully, we'd\n> > probably add TIS->mi_flush_remaining, where each insert adds one to\n> > mi_flush_remaining; and each time mi_flushed_slots has been handled\n> > mi_flush_remaining is decreased by mi_flushed_len by the handler code.\n> > Once we're done inserting into the table, we keep calling\n> > multi_insert_flush until no more tuples are being flushed (and error\n> > out if we're still waiting for flushes but no new flushed tuples are\n> > returned).\n>\n> The current approach is signalling-based right?\n> heap_multi_insert_v2\n> if (state->mi_cur_slots >= mistate->max_slots ||\n> mistate->cur_size >= mistate->max_size)\n> heap_multi_insert_flush(state);\n\nThat's for the AM-internal flushing; yes. I was thinking about the AM\napi for flushing that's used when finalizing the batched insert; i.e.\ntable_multi_insert_flush.\n\nCurrently it assumes that all buffered tuples will be flushed after\none call (which is correct for heap), but putting those unflushed\ntuples all at once back in memory might not be desirable or possible\n(for e.g. columnar); so we might need to call table_multi_insert_flush\nuntil there's no more buffered tuples.\n\n> The table_multi_insert_v2 am implementers will have to carefully\n> choose buffering strategy i.e. number of tuples, size to buffer and\n> decide rightly without hitting memory usages.\n\nAgreed\n\n-Matthias\n\n\n",
"msg_date": "Mon, 7 Mar 2022 17:09:23 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
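The repeated-flush protocol discussed in the two messages above can be illustrated with a toy, AM-agnostic driver. This is plain Python standing in for the proposed C API, and every name in it is hypothetical; it only shows the control flow in which the AM buffers tuples privately, each flush call surfaces one batch of now-flushed tuples, and the caller keeps flushing until nothing remains:

```python
# Toy model of the proposed multi-insert flush protocol (hypothetical names,
# not the actual tableam API). The AM buffers tuples internally; each flush
# returns at most one batch of flushed tuples, so the finalizing caller loops
# until the AM reports an empty batch.
class ToyBufferedAM:
    def __init__(self, flush_batch_size=3):
        self.buffered = []                 # AM-private, invisible to callers
        self.flush_batch_size = flush_batch_size

    def multi_insert(self, tup):
        self.buffered.append(tup)

    def multi_insert_flush(self):
        # Hand back (and drop) at most one batch of flushed tuples;
        # this plays the role of mi_flushed_slots / mi_flushed_len.
        batch = self.buffered[:self.flush_batch_size]
        self.buffered = self.buffered[self.flush_batch_size:]
        return batch

def insert_all(am, tuples, on_flushed):
    for t in tuples:
        am.multi_insert(t)
    # Finalize: keep flushing until the AM has surfaced every buffered tuple.
    while True:
        flushed = am.multi_insert_flush()
        if not flushed:
            break
        on_flushed(flushed)                # e.g. fire after-insert row triggers

seen = []
insert_all(ToyBufferedAM(), list(range(7)), seen.extend)
print(seen)  # -> [0, 1, 2, 3, 4, 5, 6]
```

For heap the loop would terminate after one iteration (all buffered tuples flushed at once), while a columnar AM could return many small batches and so keep the caller's trigger-handling memory bounded.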
{
"msg_contents": "On Mon, Mar 07, 2022 at 05:09:23PM +0100, Matthias van de Meent wrote:\n> That's for the AM-internal flushing; yes. I was thinking about the AM\n> api for flushing that's used when finalizing the batched insert; i.e.\n> table_multi_insert_flush.\n> \n> Currently it assumes that all buffered tuples will be flushed after\n> one call (which is correct for heap), but putting those unflushed\n> tuples all at once back in memory might not be desirable or possible\n> (for e.g. columnar); so we might need to call table_multi_insert_flush\n> until there's no more buffered tuples.\n\nThis thread has been idle for 6 months now, so I have marked it as\nreturned with feedback as of what looks like a lack of activity. I\nhave looked at what's been proposed, and I am not really sure if the\ndirection taken is correct, though there may be a potential gain in\nconsolidating the multi-insert path within the table AM set of\ncallbacks.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:30:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 11:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> This thread has been idle for 6 months now, so I have marked it as\n> returned with feedback as of what looks like a lack of activity. I\n> have looked at what's been proposed, and I am not really sure if the\n> direction taken is correct, though there may be a potential gain in\n> consolidating the multi-insert path within the table AM set of\n> callbacks.\n\nThanks. Unfortunately, I'm not finding enough cycles to work on this\nfeature. I'm happy to help if others have any further thoughts and\ntake it from here.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Oct 2022 11:05:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "Hi,\n\nThis patch was referenced in a discussion at pgcon, so I thought I'd give it a\nlook, even though Bharat said that he won't have time to drive it forward...\n\n\nOn 2021-04-19 10:21:36 +0530, Bharath Rupireddy wrote:\n> diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c\n> index bd5faf0c1f..655de8e6b7 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -2558,6 +2558,11 @@ static const TableAmRoutine heapam_methods = {\n> \t.tuple_insert_speculative = heapam_tuple_insert_speculative,\n> \t.tuple_complete_speculative = heapam_tuple_complete_speculative,\n> \t.multi_insert = heap_multi_insert,\n> +\t.tuple_insert_begin = heap_insert_begin,\n> +\t.tuple_insert_v2 = heap_insert_v2,\n> +\t.multi_insert_v2 = heap_multi_insert_v2,\n> +\t.multi_insert_flush = heap_multi_insert_flush,\n> +\t.tuple_insert_end = heap_insert_end,\n> \t.tuple_delete = heapam_tuple_delete,\n> \t.tuple_update = heapam_tuple_update,\n> \t.tuple_lock = heapam_tuple_lock,\n\nI don't think we should have multiple callback for the insertion APIs in\ntableam.h. I think it'd be good to continue supporting the old table_*()\nfunctions, but supporting multiple insert APIs in each AM doesn't make much\nsense to me.\n\n\n> +/*\n> + * GetTupleSize - Compute the tuple size given a table slot.\n> + *\n> + * For heap tuple, buffer tuple and minimal tuple slot types return the actual\n> + * tuple size that exists. For virtual tuple, the size is calculated as the\n> + * slot does not have the tuple size. 
If the computed size exceeds the given\n> + * maxsize for the virtual tuple, this function exits, not investing time in\n> + * further unnecessary calculation.\n> + *\n> + * Important Notes:\n> + * 1) Size calculation code for virtual slots is being used from\n> + * \t tts_virtual_materialize(), hence ensure to have the same changes or fixes\n> + * \t here and also there.\n> + * 2) Currently, GetTupleSize() handles the existing heap, buffer, minimal and\n> + * \t virtual slots. Ensure to add related code in case any new slot type is\n> + * introduced.\n> + */\n> +inline Size\n> +GetTupleSize(TupleTableSlot *slot, Size maxsize)\n> +{\n> +\tSize sz = 0;\n> +\tHeapTuple tuple = NULL;\n> +\n> +\tif (TTS_IS_HEAPTUPLE(slot))\n> +\t\ttuple = ((HeapTupleTableSlot *) slot)->tuple;\n> +\telse if(TTS_IS_BUFFERTUPLE(slot))\n> +\t\ttuple = ((BufferHeapTupleTableSlot *) slot)->base.tuple;\n> +\telse if(TTS_IS_MINIMALTUPLE(slot))\n> +\t\ttuple = ((MinimalTupleTableSlot *) slot)->tuple;\n> +\telse if(TTS_IS_VIRTUAL(slot))\n\nI think this embeds too much knowledge of the set of slot types in core\ncode. I don't see why it's needed either?\n\n\n> diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h\n> index 414b6b4d57..2a1470a7b6 100644\n> --- a/src/include/access/tableam.h\n> +++ b/src/include/access/tableam.h\n> @@ -229,6 +229,32 @@ typedef struct TM_IndexDeleteOp\n> \tTM_IndexStatus *status;\n> } TM_IndexDeleteOp;\n>\n> +/* Holds table insert state. */\n> +typedef struct TableInsertState\n\nI suspect we should design it to be usable for updates and deletes in the\nfuture, and thus name it TableModifyState.\n\n\n\n> +{\n> +\tRelation\trel;\n> +\t/* Bulk insert state if requested, otherwise NULL. */\n> +\tstruct BulkInsertStateData\t*bistate;\n> +\tCommandId\tcid;\n\nHm - I'm not sure it's a good idea to force the cid to be the same for all\ninserts done via one TableInsertState.\n\n\n\n> +\tint\toptions;\n> +\t/* Below members are only used for multi inserts. 
*/\n> +\t/* Array of buffered slots. */\n> +\tTupleTableSlot\t**mi_slots;\n> +\t/* Number of slots that are currently buffered. */\n> +\tint32\tmi_cur_slots;\n\n> +\t/*\n> +\t * Access method specific information such as parameters that are needed\n> +\t * for buffering and flushing decisions can go here.\n> +\t */\n> +\tvoid\t*mistate;\n\nI think we should instead have a generic TableModifyState, which each AM then\nembeds into an AM specific AM state. Forcing two very related structs to be\nallocated separately doesn't seem wise in this case.\n\n\n\n> @@ -1430,6 +1473,50 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,\n> \t\t\t\t\t\t\t\t cid, options, bistate);\n> }\n>\n> +static inline TableInsertState*\n> +table_insert_begin(Relation rel, CommandId cid, int options,\n> +\t\t\t\t bool alloc_bistate, bool is_multi)\n\nWhy have alloc_bistate and options?\n\n\n> +static inline void\n> +table_insert_end(TableInsertState *state)\n> +{\n> +\t/* Deallocate bulk insert state here, since it's AM independent. */\n> +\tif (state->bistate)\n> +\t\tFreeBulkInsertState(state->bistate);\n> +\n> +\tstate->rel->rd_tableam->tuple_insert_end(state);\n> +}\n\nSeems like the order in here should be swapped?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 3 Jun 2023 15:38:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sun, Jun 4, 2023 at 4:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> This patch was referenced in a discussion at pgcon, so I thought I'd give it a\n> look, even though Bharat said that he won't have time to drive it forward...\n\nThanks. I'm glad to know that the feature was discussed at PGCon.\n\nIf there's an interest, I'm happy to spend time again on it.\n\nI'll look into the review comments and respond soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 5 Jun 2023 08:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sun, Jun 4, 2023 at 4:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> This patch was referenced in a discussion at pgcon, so I thought I'd give it a\n> look, even though Bharat said that he won't have time to drive it forward...\n\nThanks. Finally, I started to spend time on this. Just curious - may\nI know the discussion in/for which this patch is referenced? What was\nthe motive? Is it captured somewhere?\n\n> On 2021-04-19 10:21:36 +0530, Bharath Rupireddy wrote:\n> > + .tuple_insert_begin = heap_insert_begin,\n> > + .tuple_insert_v2 = heap_insert_v2,\n> > + .multi_insert_v2 = heap_multi_insert_v2,\n> > + .multi_insert_flush = heap_multi_insert_flush,\n> > + .tuple_insert_end = heap_insert_end,\n>\n> I don't think we should have multiple callback for the insertion APIs in\n> tableam.h. I think it'd be good to continue supporting the old table_*()\n> functions, but supporting multiple insert APIs in each AM doesn't make much\n> sense to me.\n\nI named these new functions XXX_v2 for compatibility reasons. Because,\nit's quite possible for external modules to use existing\ntable_tuple_insert, table_multi_insert functions. If we were to change\nthe existing insert tableams, all the external modules using them\nwould have to change their code, is that okay?\n\n> > +/*\n> > + * GetTupleSize - Compute the tuple size given a table slot.\n> > +inline Size\n>\n> I think this embeds too much knowledge of the set of slot types in core\n> code. I don't see why it's needed either?\n\nThe heapam multi-insert implementation needs to know the tuple size\nfrom the slot to decide whether or not to flush the tuples from the\nbuffers. I couldn't find a direct way then to know the tuple size from\nthe slot, so added that helper function. With a better understanding\nnow, I think we can rely on the memory allocated for TupleTableSlot's\ntts_mcxt. 
While this works for the materialized slots passed in to the\ninsert functions, for non-materialized slots the flushing decision can\nbe solely on the number of tuples stored in the buffers. Another way\nis to add a get_tuple_size callback to TupleTableSlotOps and let the\ntuple slot providers give us the tuple size.\n\n> > diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h\n> > index 414b6b4d57..2a1470a7b6 100644\n> > --- a/src/include/access/tableam.h\n> > +++ b/src/include/access/tableam.h\n> > @@ -229,6 +229,32 @@ typedef struct TM_IndexDeleteOp\n> > TM_IndexStatus *status;\n> > } TM_IndexDeleteOp;\n> >\n> > +/* Holds table insert state. */\n> > +typedef struct TableInsertState\n>\n> I suspect we should design it to be usable for updates and deletes in the\n> future, and thus name it TableModifyState.\n\nThere are different parameters that insert/update/delete would want to\npass across in the state. So, having Table{Insert/Update/Delete}State\nmay be a better idea than having the unneeded variables lying around\nor having a union and state_type as INSERT/UPDATE/DELETE, no? Do you\nhave a different thought here?\n\n> I think we should instead have a generic TableModifyState, which each AM then\n> embeds into an AM specific AM state. Forcing two very related structs to be\n> allocated separately doesn't seem wise in this case.\n\nThe v7 patches have largely changed the way these options and\nparameters are passed, please have a look.\n\n> > +{\n> > + Relation rel;\n> > + /* Bulk insert state if requested, otherwise NULL. */\n> > + struct BulkInsertStateData *bistate;\n> > + CommandId cid;\n>\n> Hm - I'm not sure it's a good idea to force the cid to be the same for all\n> inserts done via one TableInsertState.\n\nIf required, someone can always pass a new CID before every\ntuple_insert_v2/tuple_multi_insert_v2 call via TableInsertState. 
Isn't\nit sufficient?\n\n> > @@ -1430,6 +1473,50 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,\n> > cid, options, bistate);\n> > }\n> >\n> > +static inline TableInsertState*\n> > +table_insert_begin(Relation rel, CommandId cid, int options,\n> > + bool alloc_bistate, bool is_multi)\n>\n> Why have alloc_bistate and options?\n\n\"alloc_bistate\" is for the caller to specify whether they need a bulk\ninsert state. \"options\" is for the caller to specify if they\nneed table_tuple_insert performance options such as\nTABLE_INSERT_SKIP_FSM, TABLE_INSERT_FROZEN, TABLE_INSERT_NO_LOGICAL.\nThe v7 patches have changed the way these options and parameters are\npassed; please have a look.\n\n> > +static inline void\n> > +table_insert_end(TableInsertState *state)\n> > +{\n> > + /* Deallocate bulk insert state here, since it's AM independent. */\n> > + if (state->bistate)\n> > + FreeBulkInsertState(state->bistate);\n> > +\n> > + state->rel->rd_tableam->tuple_insert_end(state);\n> > +}\n>\n> Seems like the order in here should be swapped?\n\nRight. It looks like BulkInsertState is specific to heapam; it really\ndoesn't have to be in the table_XXX functions, hence I moved it all the\nway down to the heap_insert_XXX functions.\n\nI'm attaching the v7 patch set with the above review comments\naddressed. My initial idea behind these new insert APIs was the\nability to re-use COPY's multi insert code for CTAS and REFRESH\nMATERIALIZED VIEW. 
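In caller terms, the lifecycle these APIs aim for is begin, insert a tuple N times, end, with all buffering and flushing owned by the AM. Here is a toy, postgres-independent sketch of that shape (every name in it is an illustrative mock, not the patch's actual definitions):\n\n

```c
#include <assert.h>

/*
 * Toy stand-in for the proposed insert state: the AM owns the buffer
 * and the flush decision; the caller only sees begin/insert/end.
 */
#define MOCK_BUFFER_SIZE 4

typedef struct MockModifyState
{
	int		buffered[MOCK_BUFFER_SIZE]; /* stand-in for buffered slots */
	int		nbuffered;		/* tuples currently buffered */
	long	nflushed;		/* tuples written out so far */
} MockModifyState;

static void
mock_flush(MockModifyState *state)
{
	/* a real AM would write the buffered tuples to the relation here */
	state->nflushed += state->nbuffered;
	state->nbuffered = 0;
}

static void
mock_modify_begin(MockModifyState *state)
{
	state->nbuffered = 0;
	state->nflushed = 0;
}

static void
mock_insert(MockModifyState *state, int tuple)
{
	state->buffered[state->nbuffered++] = tuple;
	if (state->nbuffered == MOCK_BUFFER_SIZE)
		mock_flush(state);	/* AM-internal flush decision */
}

static long
mock_modify_end(MockModifyState *state)
{
	if (state->nbuffered > 0)
		mock_flush(state);	/* flush the final partial batch */
	return state->nflushed;
}

/* caller's view: begin, insert N tuples, end -- no buffer management */
static long
run_demo(int ntuples)
{
	MockModifyState state;
	int		i;

	mock_modify_begin(&state);
	for (i = 0; i < ntuples; i++)
		mock_insert(&state, i);
	return mock_modify_end(&state);
}
```

\nA real AM would of course buffer TupleTableSlots and size its batches on tuple width and memory, but the caller-facing shape is the same.\n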
I'm open to more thoughts here.\n\nThe v7 patches have largely changed the way state structure (heapam\nspecific things are moved all the way down to heapam.c) is defined,\nthe parameters are passed, and simplified the multi insert logic a\nlot.\n\n0001 - introduces new single and multi insert table AM and heapam\nimplementation of the new AM.\n0002 - optimizes CREATE TABLE AS to use the new multi inserts table AM\nmaking it faster by 2.13X or 53%.\n0003 - optimizes REFRESH MATERIALIZED VIEW to use the new multi\ninserts table AM making it faster by 1.52X or 34%.\n0004 - uses the new multi inserts table AM for COPY FROM - I'm yet to\nspend time on this, I'll share the patch when ready.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 1 Aug 2023 22:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 9:31 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. Finally, I started to spend time on this. Just curious - may\n> I know the discussion in/for which this patch is referenced? What was\n> the motive? Is it captured somewhere?\n\nIt may not have been the only place, but we at least touched on it\nduring the unconference:\n\n https://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference#Table_AMs\n\nWe discussed two related-but-separate ideas:\n1) bulk/batch operations and\n2) maintenance of TAM state across multiple related operations.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 1 Aug 2023 10:02:12 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 10:32 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Tue, Aug 1, 2023 at 9:31 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks. Finally, I started to spend time on this. Just curious - may\n> > I know the discussion in/for which this patch is referenced? What was\n> > the motive? Is it captured somewhere?\n>\n> It may not have been the only place, but we at least touched on it\n> during the unconference:\n>\n> https://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference#Table_AMs\n>\n> We discussed two related-but-separate ideas:\n> 1) bulk/batch operations and\n> 2) maintenance of TAM state across multiple related operations.\n\nThank you. I'm attaching v8 patch-set here which includes use of new\ninsert TAMs for COPY FROM. With this, postgres not only will have the\nnew TAM for inserts, but they also can make the following commands\nfaster - CREATE TABLE AS, SELECT INTO, CREATE MATERIALIZED VIEW,\nREFRESH MATERIALIZED VIEW and COPY FROM. I'll perform some testing in\nthe coming days and post the results here, until then I appreciate any\nfeedback on the patches.\n\nI've also added this proposal to CF -\nhttps://commitfest.postgresql.org/47/4777/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 17 Jan 2024 22:57:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 10:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thank you. I'm attaching v8 patch-set here which includes use of new\n> insert TAMs for COPY FROM. With this, postgres not only will have the\n> new TAM for inserts, but they also can make the following commands\n> faster - CREATE TABLE AS, SELECT INTO, CREATE MATERIALIZED VIEW,\n> REFRESH MATERIALIZED VIEW and COPY FROM. I'll perform some testing in\n> the coming days and post the results here, until then I appreciate any\n> feedback on the patches.\n>\n> I've also added this proposal to CF -\n> https://commitfest.postgresql.org/47/4777/.\n\nSome of the tests related to Incremental Sort added by a recent commit\n0452b461bc4 in aggregates.sql are failing when the multi inserts\nfeature is used for CTAS (like done in 0002 patch). I'm not so sure if\nit's because of the reduction in the CTAS execution times. Execution\ntime for table 'btg' created with CREATE TABLE AS added by commit\n0452b461bc4 with single inserts is 25.3 msec, with multi inserts is\n17.7 msec. This means that the multi inserts are about 1.43 times or\n30.04% faster than the single inserts. Couple of ways to make these\ntests pick Incremental Sort as expected - 1) CLUSTER btg USING abc; or\n2) increase the number of rows in table btg to 100K from 10K. FWIW, if\nI reduce the number of rows in the table from 10K to 1K, the\nIncremental Sort won't get picked on HEAD with CTAS using single\ninserts. 
Hence, I chose option (2) to fix the issue.\n\nPlease find the attached v9 patch set.\n\n[1]\n -- Engage incremental sort\n explain (COSTS OFF) SELECT x,y FROM btg GROUP BY x,y,z,w;\n- QUERY PLAN\n--------------------------------------------------\n+ QUERY PLAN\n+------------------------------\n Group\n Group Key: x, y, z, w\n- -> Incremental Sort\n+ -> Sort\n Sort Key: x, y, z, w\n- Presorted Key: x, y\n- -> Index Scan using btg_x_y_idx on btg\n-(6 rows)\n+ -> Seq Scan on btg\n+(5 rows)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Jan 2024 12:57:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 12:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 17, 2024 at 10:57 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thank you. I'm attaching v8 patch-set here which includes use of new\n> > insert TAMs for COPY FROM. With this, postgres not only will have the\n> > new TAM for inserts, but they also can make the following commands\n> > faster - CREATE TABLE AS, SELECT INTO, CREATE MATERIALIZED VIEW,\n> > REFRESH MATERIALIZED VIEW and COPY FROM. I'll perform some testing in\n> > the coming days and post the results here, until then I appreciate any\n> > feedback on the patches.\n> >\n> > I've also added this proposal to CF -\n> > https://commitfest.postgresql.org/47/4777/.\n>\n> Some of the tests related to Incremental Sort added by a recent commit\n> 0452b461bc4 in aggregates.sql are failing when the multi inserts\n> feature is used for CTAS (like done in 0002 patch). I'm not so sure if\n> it's because of the reduction in the CTAS execution times. Execution\n> time for table 'btg' created with CREATE TABLE AS added by commit\n> 0452b461bc4 with single inserts is 25.3 msec, with multi inserts is\n> 17.7 msec. This means that the multi inserts are about 1.43 times or\n> 30.04% faster than the single inserts. Couple of ways to make these\n> tests pick Incremental Sort as expected - 1) CLUSTER btg USING abc; or\n> 2) increase the number of rows in table btg to 100K from 10K. FWIW, if\n> I reduce the number of rows in the table from 10K to 1K, the\n> Incremental Sort won't get picked on HEAD with CTAS using single\n> inserts. 
Hence, I chose option (2) to fix the issue.\n>\n> Please find the attached v9 patch set.\n>\n> [1]\n> -- Engage incremental sort\n> explain (COSTS OFF) SELECT x,y FROM btg GROUP BY x,y,z,w;\n> - QUERY PLAN\n> --------------------------------------------------\n> + QUERY PLAN\n> +------------------------------\n> Group\n> Group Key: x, y, z, w\n> - -> Incremental Sort\n> + -> Sort\n> Sort Key: x, y, z, w\n> - Presorted Key: x, y\n> - -> Index Scan using btg_x_y_idx on btg\n> -(6 rows)\n> + -> Seq Scan on btg\n> +(5 rows)\n\nCF bot machine with Windows isn't happy with the compilation [1], so\nfixed those warnings and attached v10 patch set.\n\n[1]\n[07:35:25.458] [632/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/commands_copyfrom.c.obj\n[07:35:25.458] c:\\cirrus\\src\\include\\access\\tableam.h(1574) : warning\nC4715: 'table_multi_insert_slots': not all control paths return a\nvalue\n[07:35:25.458] c:\\cirrus\\src\\include\\access\\tableam.h(1522) : warning\nC4715: 'table_insert_begin': not all control paths return a value\n[07:35:25.680] c:\\cirrus\\src\\include\\access\\tableam.h(1561) : warning\nC4715: 'table_multi_insert_next_free_slot': not all control paths\nreturn a value\n[07:35:25.680] [633/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/commands_createas.c.obj\n[07:35:25.680] c:\\cirrus\\src\\include\\access\\tableam.h(1522) : warning\nC4715: 'table_insert_begin': not all control paths return a value\n[07:35:26.310] [646/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/commands_matview.c.obj\n[07:35:26.310] c:\\cirrus\\src\\include\\access\\tableam.h(1522) : warning\nC4715: 'table_insert_begin': not all control paths return a value\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Jan 2024 17:16:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 5:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > Please find the attached v9 patch set.\n\nI've had to rebase the patches due to commit 874d817, please find the\nattached v11 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 2 Mar 2024 12:02:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sat, Mar 2, 2024 at 12:02 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jan 29, 2024 at 5:16 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > Please find the attached v9 patch set.\n>\n> I've had to rebase the patches due to commit 874d817, please find the\n> attached v11 patch set.\n\nRebase needed. Please see the v12 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 8 Mar 2024 16:06:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 8, 2024 at 7:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Mar 2, 2024 at 12:02 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Jan 29, 2024 at 5:16 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > > Please find the attached v9 patch set.\n> >\n> > I've had to rebase the patches due to commit 874d817, please find the\n> > attached v11 patch set.\n>\n> Rebase needed. Please see the v12 patch set.\n>\n\nI've not reviewed the patches in depth yet, but I ran performance tests\nfor CREATE MATERIALIZED VIEW. The test scenario is:\n\n-- setup\ncreate unlogged table test (c int);\ninsert into test select generate_series(1, 10000000);\n\n-- run\ncreate materialized view test_mv as select * from test;\n\nHere are the results:\n\n* HEAD\n3775.221 ms\n3744.039 ms\n3723.228 ms\n\n* v12 patch\n6289.972 ms\n5880.674 ms\n7663.509 ms\n\nI can see a performance regression, and the perf report says that the\nCPU spent most of its time on extending the ResourceOwner's array while\ncopying the buffer-heap tuple:\n\n- 52.26% 0.18% postgres postgres [.] intorel_receive\n 52.08% intorel_receive\n table_multi_insert_v2 (inlined)\n - heap_multi_insert_v2\n - 51.53% ExecCopySlot (inlined)\n tts_buffer_heap_copyslot\n tts_buffer_heap_store_tuple (inlined)\n - IncrBufferRefCount\n - ResourceOwnerEnlarge\n ResourceOwnerAddToHash (inlined)\n\nIs there any reason why we copy a buffer-heap tuple to another\nbuffer-heap tuple? That results in incrementing the buffer refcount and\nregistering it with the ResourceOwner for every tuple. I guess that the\ndestination tuple slot is not necessarily a buffer-heap slot, and we\ncould use VirtualTupleTableSlot instead. It would in turn require\ncopying a heap tuple. I might be missing something, but it improved the\nperformance at least in my env. 
The change I made was:\n\n- dstslot = table_slot_create(state->rel, NULL);\n+ //dstslot = table_slot_create(state->rel, NULL);\n+ dstslot = MakeTupleTableSlot(RelationGetDescr(state->rel),\n+ &TTSOpsVirtual);\n+\n\nAnd the execution times are:\n1588.984 ms\n1591.618 ms\n1582.519 ms\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 14:09:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 10:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've not reviewed the patches in depth yet, but run performance tests\n> for CREATE MATERIALIZED VIEW. The test scenarios is:\n\nThanks for looking into this.\n\n> Is there any reason why we copy a buffer-heap tuple to another\n> buffer-heap tuple? Which results in that we increments the buffer\n> refcount and register it to ResourceOwner for every tuples. I guess\n> that the destination tuple slot is not necessarily a buffer-heap, and\n> we could use VirtualTupleTableSlot instead. It would in turn require\n> copying a heap tuple. I might be missing something but it improved the\n> performance at least in my env. The change I made was:\n>\n> - dstslot = table_slot_create(state->rel, NULL);\n> + //dstslot = table_slot_create(state->rel, NULL);\n> + dstslot = MakeTupleTableSlot(RelationGetDescr(state->rel),\n> + &TTSOpsVirtual);\n> +\n>\n> And the execution times are:\n> 1588.984 ms\n> 1591.618 ms\n> 1582.519 ms\n\nYes, using VirtualTupleTableSlot helps improve the performance a lot.\nBelow are results from my testing. Note that CMV, RMV, CTAS stand for\nCREATE MATERIALIZED VIEW, REFRESH MATERIALIZED VIEW, CREATE TABLE AS\nrespectively. These commands got faster by 62.54%, 68.87%, 74.31% or\n2.67, 3.21, 3.89 times respectively. I've used the test case specified\nat [1].\n\nHEAD:\nCMV:\nTime: 6276.468 ms (00:06.276)\nCTAS:\nTime: 8141.632 ms (00:08.142)\nRMV:\nTime: 14747.139 ms (00:14.747)\n\nPATCHED:\nCMV:\nTime: 2350.282 ms (00:02.350)\nCTAS:\nTime: 2091.427 ms (00:02.091)\nRMV:\nTime: 4590.180 ms (00:04.590)\n\nI quickly looked at the description of what a \"virtual\" tuple is from\nsrc/include/executor/tuptable.h [2]. IIUC, it is invented for\nminimizing data copying, but it also says that it's the responsibility\nof the generating plan node to be sure these resources are not\nreleased for as long as the virtual tuple needs to be valid or is\nmaterialized. 
While it says this, as far as this patch is concerned,\nthe virtual slot gets materialized when we copy the tuples from source\nslot (can be any type of slot) to destination slot (which is virtual\nslot). See ExecCopySlot->\ntts_virtual_copyslot->tts_virtual_materialize. This way,\ntts_virtual_copyslot ensures the tuples storage doesn't depend on\nexternal memory because all the datums that aren't passed by value are\ncopied into the slot's memory context.\n\nWith the above understanding, it looks safe to use virtual slots for\nthe multi insert buffered slots. I'm not so sure if I'm missing\nanything here.\n\n[1]\ncd $PWD/pg17/bin\nrm -rf data logfile\n./initdb -D data\n./pg_ctl -D data -l logfile start\n\n./psql -d postgres\n\\timing\ndrop table test cascade;\ncreate unlogged table test (c int);\ninsert into test select generate_series(1, 10000000);\ncreate materialized view test_mv as select * from test;\ncreate table test_copy as select * from test;\ninsert into test select generate_series(1, 10000000);\nrefresh materialized view test_mv;\n\n[2]\n * A \"virtual\" tuple is an optimization used to minimize physical data copying\n * in a nest of plan nodes. Until materialized pass-by-reference Datums in\n * the slot point to storage that is not directly associated with the\n * TupleTableSlot; generally they will point to part of a tuple stored in a\n * lower plan node's output TupleTableSlot, or to a function result\n * constructed in a plan node's per-tuple econtext. It is the responsibility\n * of the generating plan node to be sure these resources are not released for\n * as long as the virtual tuple needs to be valid or is materialized. Note\n * also that a virtual tuple does not have any \"system columns\".\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Mar 2024 09:44:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 9:44 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Yes, usingVirtualTupleTableSlot helps improve the performance a lot.\n> Below are results from my testing. Note that CMV, RMV, CTAS stand for\n> CREATE MATERIALIZED VIEW, REFRESH MATERIALIZED VIEW, CREATE TABLE AS\n> respectively. These commands got faster by 62.54%, 68.87%, 74.31% or\n> 2.67, 3.21, 3.89 times respectively. I've used the test case specified\n> at [1].\n>\n> HEAD:\n> CMV:\n> Time: 6276.468 ms (00:06.276)\n> CTAS:\n> Time: 8141.632 ms (00:08.142)\n> RMV:\n> Time: 14747.139 ms (00:14.747)\n>\n> PATCHED:\n> CMV:\n> Time: 2350.282 ms (00:02.350)\n> CTAS:\n> Time: 2091.427 ms (00:02.091)\n> RMV:\n> Time: 4590.180 ms (00:04.590)\n>\n> I quickly looked at the description of what a \"virtual\" tuple is from\n> src/include/executor/tuptable.h [2]. IIUC, it is invented for\n> minimizing data copying, but it also says that it's the responsibility\n> of the generating plan node to be sure these resources are not\n> released for as long as the virtual tuple needs to be valid or is\n> materialized. While it says this, as far as this patch is concerned,\n> the virtual slot gets materialized when we copy the tuples from source\n> slot (can be any type of slot) to destination slot (which is virtual\n> slot). See ExecCopySlot->\n> tts_virtual_copyslot->tts_virtual_materialize. This way,\n> tts_virtual_copyslot ensures the tuples storage doesn't depend on\n> external memory because all the datums that aren't passed by value are\n> copied into the slot's memory context.\n>\n> With the above understanding, it looks safe to use virtual slots for\n> the multi insert buffered slots. 
I'm not so sure if I'm missing\n> anything here.\n\nI'm attaching the v13 patches using virtual tuple slots for buffered\ntuples for multi inserts.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 21 Mar 2024 13:10:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Thu, 2024-03-21 at 13:10 +0530, Bharath Rupireddy wrote:\n> I'm attaching the v13 patches using virtual tuple slots for buffered\n> tuples for multi inserts.\n\nComments:\n\n* Do I understand correctly that CMV, RMV, and CTAS experience a\nperformance benefit, but COPY FROM does not? And is that because COPY\nalready used table_multi_insert, whereas CMV and RMV did not?\n\n* In the COPY FROM code, it looks like it's deciding whether to flush\nbased on MAX_BUFFERED_TUPLES, but the slot array is allocated with\nMAX_BUFFERED_SLOTS (they happen to be the same for heap, but perhaps\nnot for other AMs). The copy code shouldn't be using internal knowledge\nof the multi-insert code; it should know somehow from the API when the\nright time is to flush.\n\n* How is the memory management expected to work? It looks like COPY\nFROM is using the ExprContext when running the input functions, but we\nreally want to be using a memory context owned by the table AM, right?\n\n* What's the point of the table_multi_insert_slots() \"num_slots\"\nargument? The only caller simply discards it.\n\n* table_tuple_insert_v2 isn't called anywhere, what's it for?\n\n* the \"v2\" naming is inconsistent -- it seems you only added it in\nplaces where there's a name conflict, which makes it hard to tell which\nAPI methods go together. I'm not sure how widely table_multi_insert* is\nused outside of core, so it's possible that we may even be able to just\nchange those APIs and the few extensions that call it can be updated.\n\n* Memory usage tracking should be done in the AM by allocating\neverything in a single context so it's easy to check the size. Don't\nmanually add up memory.\n\n* I don't understand: \"Caller may have got the slot using\nheap_multi_insert_next_free_slot, filled it and passed. 
So, skip\ncopying in such a case.\" If the COPY FROM had a WHERE clause and\nskipped a tuple after filling the slot, doesn't that mean the slot has\nbogus data from the last tuple?\n\n* We'd like this to work for insert-into-select (IIS) and logical\nreplication, too. Do you see any problem there, or is it just a matter\nof code?\n\n* Andres had some comments[1] that don't seem entirely addressed.\n - You are still allocating the AM-specific part of TableModifyState\nas a separately-allocated chunk.\n - It's still called TableInsertState rather than TableModifyState as\nhe suggested. If you change that, you should also change to\ntable_modify_begin/end.\n - CID: I suppose Andres is considering the use case of \"BEGIN; ...\nten thousand inserts ... COMMIT;\". I don't think this problem is really\nsolvable (discussed below) but we should have some response/consensus\non that point.\n - He mentioned that we only need one new method implemented by the\nAM. I don't know if one is enough, but 7 does seem excessive. I have\nsome simplification ideas below.\n\nOverall:\n\nIf I understand correctly, there are two ways to use the API:\n\n1. used by CTAS, MV:\n\n tistate = table_insert_begin(...);\n table_multi_insert_v2(tistate, tup1);\n ...\n table_multi_insert_v2(tistate, tupN);\n table_insert_end(tistate);\n\n2. used by COPY ... FROM:\n\n tistate = table_insert_begin(..., SKIP_FLUSH);\n if (multi_insert_slot_array_is_full())\n table_multi_insert_flush(tistate);\n slot = table_insert_next_free_slot(tistate);\n ... fill slot with tup1\n table_multi_insert_v2(tistate, tup1);\n ...\n slot = table_insert_next_free_slot(tistate);\n ... fill slot with tupN\n table_multi_insert_v2(tistate, tupN);\n table_insert_end(tistate);\n\nThose two uses need comments explaining what's going on. It appears the\nSKIP_FLUSH flag is used to indicate which use the caller intends.\n\nUse #2 is not enforced well by either the API or runtime checks. 
If the\ncaller neglects to check for a full buffer, it appears that it will\njust overrun the slots array.\n\nAlso, for use #2, table_multi_insert_v2() doesn't do much other than\nincrementing the memory used. The slot will never be NULL because it\nwas obtained with table_multi_insert_next_free_slot(), and the other\ntwo branches don't happen when SKIP_FLUSH is true.\n\nThe real benefit to COPY of your new API is that the AM can manage\nslots for itself, and how many tuples may be tracked (which might be a\nlot higher for non-heap AMs).\n\nI agree with Luc Vlaming's comment[2] that more should be left to the\ntable AM. Your patch tries too hard to work with the copyfrom.c slot\narray, somehow sharing it with the table AM. That adds complexity to\nthe API and feels like a layering violation.\n\nWe also shouldn't mandate a slot array in the API. Each slot is 64\nbytes -- a lot of overhead for small tuples. For a non-heap AM, it's\nmuch better to store the tuple data in a big contiguous chunk with\nminimal overhead.\n\nLet's just have a simple API like:\n\n tmstate = table_modify_begin(...);\n table_modify_save_insert(tmstate, tup1);\n ...\n table_modify_save_insert(tmstate, tupN);\n table_modify_end(tmstate);\n\nand leave it up to the AM to do all the buffering and flushing work (as\nLuc Vlaming suggested[2]).\n\nThat leaves one problem, which is: how do we update the indexes and\ncall AR triggers while flushing? I think the best way is to just have a\ncallback in the TableModifyState that is called during flush. (I don't\nthink that would affect performance, but worth double-checking.)\n\nWe have to disable this whole multi-insert mechanism if there are\nvolatile BR/AR triggers, because those are supposed to see already-\ninserted tuples. That's not a problem with your patch but it is a bit\nunfortunate -- triggers can be costly already, but this increases the\npenalty. 
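To make the save-insert-with-flush-callback shape concrete, here is a toy, postgres-independent sketch (every name here is illustrative; the real state, slots, and callback signature would be up to the AM and the patch):\n\n

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy sketch of the "callback during flush" idea: the table AM buffers
 * tuples internally and invokes a caller-supplied callback for each
 * tuple as it is flushed, which is where index insertion and AR trigger
 * work could run.
 */
#define SKETCH_BUFFER_SIZE 3

typedef void (*flush_callback) (int tuple, void *arg);

typedef struct SketchModifyState
{
	int				buffered[SKETCH_BUFFER_SIZE];
	int				nbuffered;
	flush_callback	on_flush;	/* e.g. update indexes, fire AR triggers */
	void		   *cb_arg;
} SketchModifyState;

static void
sketch_flush(SketchModifyState *state)
{
	int		i;

	for (i = 0; i < state->nbuffered; i++)
	{
		/* a real AM: write the tuple to the relation, then notify caller */
		if (state->on_flush)
			state->on_flush(state->buffered[i], state->cb_arg);
	}
	state->nbuffered = 0;
}

static void
sketch_save_insert(SketchModifyState *state, int tuple)
{
	state->buffered[state->nbuffered++] = tuple;
	if (state->nbuffered == SKETCH_BUFFER_SIZE)
		sketch_flush(state);	/* AM-internal flush decision */
}

static void
sketch_end(SketchModifyState *state)
{
	sketch_flush(state);		/* flush the final partial batch */
}

/* count callback invocations to show every tuple passes through it */
static void
count_cb(int tuple, void *arg)
{
	(void) tuple;
	(*(int *) arg)++;
}

static int
run_flush_demo(int ntuples)
{
	SketchModifyState state = {{0}, 0, count_cb, NULL};
	int		ncalled = 0;
	int		i;

	state.cb_arg = &ncalled;
	for (i = 0; i < ntuples; i++)
		sketch_save_insert(&state, i);
	sketch_end(&state);
	return ncalled;
}
```

\nThe point is that index maintenance and AR trigger work can be deferred to flush time without the caller ever touching the AM's buffer.\n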
There may be some theoretical ways to avoid this problem, like\nreading tuples out of the unflushed buffer during a SELECT, which\nsounds a little too clever (though perhaps not completely crazy if the\nAM is in control of both?).\n\nFor potentially working with multi-updates/deletes, it might be as\nsimple as tracking the old TIDs along with the slots and having new\n_save_update and _save_delete methods. I haven't thought deeply about\nthat, and I'm not sure we have a good example AM to work with, but it\nseems plausible that we could make something useful here.\n\nTo batch multiple different INSERT statements within a transaction just\nseems like a really hard problem. That could mean different CIDs, but\nalso different subtransaction IDs. Constraint violation errors will\nhappen at the time of flushing, which could be many commands later from\nthe one that actually violates the constraint. And what if someone\nissues a SELECT in the middle of the transaction, how does it see the\nalready-inserted-but-not-flushed tuples? If that's not hard enough\nalready, then you would also need to extend low-level APIs to accept\narbitrary CIDs and subxact IDs when storing tuples during a flush. The\nonly way I could imagine solving all of these problems is declaring\nsomehow that your transaction won't do any of these complicated things,\nand that you don't mind getting constraint violations at the wrong\ntime. So I recommend that you punt on this problem.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/20230603223824.o7iyochli2dwwi7k%40alap3.anarazel.de\n[2]\nhttps://www.postgresql.org/message-id/508af801-6356-d36b-1867-011ac6df8f55%40swarm64.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 17:17:12 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sat, Mar 23, 2024 at 5:47 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Comments:\n\nThanks for looking into it.\n\n> * Do I understand correctly that CMV, RMV, and CTAS experience a\n> performance benefit, but COPY FROM does not? And is that because COPY\n> already used table_multi_insert, whereas CMV and RMV did not?\n\nYes, that's right. COPY FROM is already optimized with multi inserts.\n\nI now have a feeling that I need to simplify the patches. I'm thinking\nof dropping the COPY FROM patch using the new multi insert API for the\nfollowing reasons:\n1. We can now remove some of the new APIs (table_multi_insert_slots\nand table_multi_insert_next_free_slot) that were just invented for\nCOPY FROM.\n2. COPY FROM is already optimized with multi inserts, so no real gain\nis expected with the new multi insert API.\n3. As we are inching towards feature freeze, simplifying the patches\nby having only the necessary things increases the probability of\ngetting this in.\n4. The real benefit of this whole new multi insert API is seen if used\nfor the commands CMV, RMV, CTAS. These commands got faster by 62.54%,\n68.87%, 74.31% or 2.67, 3.21, 3.89 times respectively.\n5. This leaves with really simple APIs. No need for callback stuff for\ndealing with indexes, triggers etc. as CMV, RMV, CTAS cannot have any\nof them.\n\nThe new APIs are more extensible, memory management is taken care of\nby AM, and with TableModifyState as the structure name and more\nmeaningful API names. The callback for triggers/indexes etc. aren't\ntaken care of as I'm now only focusing on CTAS, CMV, RMV\noptimizations.\n\nPlease see the attached v14 patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 26 Mar 2024 01:28:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, 2024-03-26 at 01:28 +0530, Bharath Rupireddy wrote:\n> I'm thinking\n> of dropping the COPY FROM patch using the new multi insert API for\n> the\n> following reasons: ...\n\nI agree with all of this. We do want COPY ... FROM support, but there\nare some details to work out and we don't want to make a big code\nchange at this point in the cycle.\n\n> The new APIs are more extensible, memory management is taken care of\n> by AM, and with TableModifyState as the structure name and more\n> meaningful API names. The callback for triggers/indexes etc. aren't\n> taken care of as I'm now only focusing on CTAS, CMV, RMV\n> optimizations.\n> \n> Please see the attached v14 patches.\n\n* No need for a 'kind' field in TableModifyState. The state should be\naware of the kinds of changes that it has received and that may need to\nbe flushed later -- for now, only inserts, but possibly updates/deletes\nin the future.\n\n* If the AM doesn't support the bulk methods, fall back to retail\ninserts instead of throwing an error.\n\n* It seems like this API will eventually replace table_multi_insert and\ntable_finish_bulk_insert completely. Do those APIs have any advantage\nremaining over the new one proposed here?\n\n* Right now I don't any important use of the flush method. It seems\nthat could be accomplished in the finish method, and flush could just\nbe an internal detail when the memory is exhausted. If we find a use\nfor it later, we can always add it, but right now it seems unnecessary.\n\n* We need to be careful about cases where the command can be successful\nbut the writes are not flushed. I don't tihnk that's a problem with the\ncurrent patch, but we will need to do something here when we expand to\nINSERT INTO ... SELECT.\n\nAndres, is this patch overall closer to what you had in mind in the\nemail here:\n\nhttps://www.postgresql.org/message-id/20230603223824.o7iyochli2dwwi7k@alap3.anarazel.de\n\n?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 26 Mar 2024 08:37:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 9:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, 2024-03-26 at 01:28 +0530, Bharath Rupireddy wrote:\n> > I'm thinking\n> > of dropping the COPY FROM patch using the new multi insert API for\n> > the\n> > following reasons: ...\n>\n> I agree with all of this. We do want COPY ... FROM support, but there\n> are some details to work out and we don't want to make a big code\n> change at this point in the cycle.\n\nRight.\n\n> > Please see the attached v14 patches.\n>\n> * No need for a 'kind' field in TableModifyState. The state should be\n> aware of the kinds of changes that it has received and that may need to\n> be flushed later -- for now, only inserts, but possibly updates/deletes\n> in the future.\n\nRemoved 'kind' field with lazy initialization of required AM specific\nmodify (insert in this case) state. Since we don't have 'kind', I\nchose the callback approach to cleanup the modify (insert in this\ncase) specific state at the end.\n\n> * If the AM doesn't support the bulk methods, fall back to retail\n> inserts instead of throwing an error.\n\nFor instance, CREATE MATERIALIZED VIEW foo_mv AS SELECT * FROM foo\nUSING bar_tam; doesn't work if bar_tam doesn't have the\ntable_tuple_insert implemented.\n\nSimilarly, with this new AM, the onus lies on the table AM\nimplementers to provide an implementation for these new AMs even if\nthey just do single inserts. But, I do agree that we must catch this\nahead during parse analysis itself, so I've added assertions in\nGetTableAmRoutine().\n\n> * It seems like this API will eventually replace table_multi_insert and\n> table_finish_bulk_insert completely. Do those APIs have any advantage\n> remaining over the new one proposed here?\n\ntable_multi_insert needs to be there for sure as COPY ... FROM uses\nit. Not sure if we need to remove the optional callback\ntable_finish_bulk_insert though. Heap AM doesn't implement one, but\nsome other AM might. 
Having said that, with this new AM, whatever logic\nused to be there in table_finish_bulk_insert, table AM implementers\nwill have to move to table_modify_end.\n\nFWIW, I can try writing a test table AM that uses this new AM but just\ndoes single inserts, IOW, equivalent to table_tuple_insert().\nThoughts?\n\n> * Right now I don't see any important use of the flush method. It seems\n> that could be accomplished in the finish method, and flush could just\n> be an internal detail when the memory is exhausted. If we find a use\n> for it later, we can always add it, but right now it seems unnecessary.\n\nFirstly, we are not storing CommandId and options in TableModifyState,\nbecause we expect CommandId to be changing (per Andres' comment).\nSecondly, we don't want to pass just the CommandId and options to\ntable_modify_end(). Thirdly, one just has to call\ntable_modify_buffer_flush before table_modify_end. Do you have any\nother thoughts here?\n\n> * We need to be careful about cases where the command can be successful\n> but the writes are not flushed. I don't think that's a problem with the\n> current patch, but we will need to do something here when we expand to\n> INSERT INTO ... SELECT.\n\nYou mean, writes are not flushed to the disk? Can you please elaborate\nwhy it's different for INSERT INTO ... SELECT and not others? Can't\nthe new flush AM be helpful here to implement any flush related\nthings?\n\nPlease find the attached v15 patches with the above review comments addressed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 27 Mar 2024 01:19:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, 2024-03-27 at 01:19 +0530, Bharath Rupireddy wrote:\n> \n> Similarly, with this new AM, the onus lies on the table AM\n> implementers to provide an implementation for these new AMs even if\n> they just do single inserts.\n\nWhy not fall back to using the plain tuple_insert? Surely some table\nAMs might be simple and limited, and we shouldn't break them just\nbecause they don't implement the new APIs.\n\n> \n> table_multi_insert needs to be there for sure as COPY ... FROM uses\n> it.\n\nAfter we have these new APIs fully in place and used by COPY, what will\nhappen to those other APIs? Will they be deprecated or will there be a\nreason to keep them?\n\n> FWIW, I can try writing a test table AM that uses this new AM but\n> just\n> does single inserts, IOW, equivalent to table_tuple_insert().\n> Thoughts?\n\nMore table AMs to test against would be great, but I also know that can\nbe a lot of work.\n\n> \n> Firstly, we are not storing CommandId and options in\n> TableModifyState,\n> because we expect CommandId to be changing (per Andres comment).\n\nTrying to make this feature work across multiple commands poses a lot\nof challenges: what happens when there are SELECTs and subtransactions\nand non-deferrable constraints?\n\nRegardless, if we care about multiple CIDs, they should be stored along\nwith the tuples, not supplied at the time of flushing.\n\n> You mean, writes are not flushed to the disk? Can you please\n> elaborate\n> why it's different for INSERT INTO ... SELECT and not others? Can't\n> the new flush AM be helpful here to implement any flush related\n> things?\n\nNot a major problem. We can discuss while working on IIS support.\n\n\nI am concnerned that the flush callback is not a part of the API. We\nwill clearly need that to support index insertions for COPY/IIS, so as-\nis the API feels incomplete. Thoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 27 Mar 2024 01:12:19 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 1:42 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2024-03-27 at 01:19 +0530, Bharath Rupireddy wrote:\n> >\n> > Similarly, with this new AM, the onus lies on the table AM\n> > implementers to provide an implementation for these new AMs even if\n> > they just do single inserts.\n>\n> Why not fall back to using the plain tuple_insert? Surely some table\n> AMs might be simple and limited, and we shouldn't break them just\n> because they don't implement the new APIs.\n\nHm. That might complicate table_modify_begin,\ntable_modify_buffer_insert and table_modify_end a bit. What do we put\nin TableModifyState then? Do we create the bulk insert state\n(BulkInsertStateData) outside? I think to give a better interface, can\nwe let TAM implementers support these new APIs in their own way? If\nthis sounds rather intrusive, we can just implement the fallback to\ntuple_insert if these new API are not supported in the caller, for\nexample, do something like below in createas.c and matview.c.\nThoughts?\n\nif (table_modify_buffer_insert() is defined)\n table_modify_buffer_insert(...);\nelse\n{\n myState->bistate = GetBulkInsertState();\n table_tuple_insert(...);\n}\n\n> > table_multi_insert needs to be there for sure as COPY ... FROM uses\n> > it.\n>\n> After we have these new APIs fully in place and used by COPY, what will\n> happen to those other APIs? Will they be deprecated or will there be a\n> reason to keep them?\n\nDeprecated perhaps?\n\nPlease find the attached v16 patches for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 31 Mar 2024 21:18:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Sun, 2024-03-31 at 21:18 +0530, Bharath Rupireddy wrote:\n> if (table_modify_buffer_insert() is defined)\n> table_modify_buffer_insert(...);\n> else\n> {\n> myState->bistate = GetBulkInsertState();\n> table_tuple_insert(...);\n> }\n\nWe can't alloc/free the bulk insert state for every insert call. I see\ntwo options:\n\n* Each caller needs to support two code paths: if the buffered insert\nAPIs are defined, then use those; otherwise the caller needs to manage\nthe bulk insert state itself and call the plain insert API.\n\n* Have default implementation for the new API methods, so that the\ndefault for the begin method would allocate the bulk insert state, and\nthe default for the buffered insert method would be to call plain\ninsert using the bulk insert state.\n\nI'd prefer the latter, at least in the long term. But I haven't really\nthought through the details, so perhaps we'd need to use the former.\n\n> > \n> > After we have these new APIs fully in place and used by COPY, what\n> > will\n> > happen to those other APIs? Will they be deprecated or will there\n> > be a\n> > reason to keep them?\n> \n> Deprecated perhaps?\n\nIncluding Alexander on this thread, because he's making changes to the\nmulti-insert API. We need some consensus on where we are going with\nthese APIs before we make more changes, and what incremental steps make\nsense in v17.\n\nHere's where I think this API should go:\n\n1. Have table_modify_begin/end and table_modify_buffer_insert, like\nthose that are implemented in your patch.\n\n2. Add some kind of flush callback that will be called either while the\ntuples are being flushed or after the tuples are flushed (but before\nthey are freed by the AM). (Aside: do we need to call it while the\ntuples are being flushed to get the right visibility semantics for\nafter-row triggers?)\n\n3. Add table_modify_buffer_{update|delete} APIs.\n\n4. 
Some kind of API tweaks to help manage memory when modifying\npartitioned tables, so that the buffering doesn't get out of control.\nPerhaps just reporting memory usage and allowing the caller to force\nflushes would be enough.\n\n5. Use these new methods for CREATE/REFRESH MATERIALIZED VIEW. This is\nfairly straightforward, I believe, and handled by your patch. Indexes\nare (re)built afterward, and no triggers are possible.\n\n6. Use these new methods for CREATE TABLE ... AS. This is fairly\nstraightforward, I believe, and handled by your patch. No indexes or\ntriggers are possible.\n\n7. Use these new methods for COPY. We have to be careful to avoid\nregressions for the heap method, because it's already managing its own\nbuffers. If the AM manages the buffering, then it may require\nadditional copying of slots, which could be a disadvantage. To solve\nthis, we may need some minor API tweaks to avoid copying when the\ncaller guarantees that the memory will not be freed too early, or\nperhaps expose the AM's memory context to copyfrom.c. Another thing to\nconsider is that the buffering in copyfrom.c is also used for FDWs, so\nthat buffering code path needs to be preserved in copyfrom.c even if\nnot used for AMs.\n\n8. Use these new methods for INSERT INTO ... SELECT. One potential\nchallenge here is that execution nodes are not always run to\ncompletion, so we need to be sure that the flush isn't forgotten in\nthat case.\n\n9. Use these new methods for DELETE, UPDATE, and MERGE. MERGE can use\nthe buffer_insert/update/delete APIs; we don't need a separate merge\nmethod. This probably requires that the AM maintain 3 separate buffers\nto distinguish different kinds of changes at flush time (obviously\nthese can be initialized lazily to avoid overhead when not being used).\n\n10. Use these new methods for logical apply.\n\n11. Deprecate the multi_insert API.\n\nThoughts on this plan? 
Does your patch make sense in v17 as a stepping\nstone, or should we try to make all of these API changes together in\nv18?\n\nAlso, a sample AM code would be a huge benefit here. Writing a real AM\nis hard, but perhaps we can at least have an example one to demonstrate\nhow to use these APIs?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 12:40:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 1:10 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sun, 2024-03-31 at 21:18 +0530, Bharath Rupireddy wrote:\n> > if (table_modify_buffer_insert() is defined)\n> > table_modify_buffer_insert(...);\n> > else\n> > {\n> > myState->bistate = GetBulkInsertState();\n> > table_tuple_insert(...);\n> > }\n>\n> We can't alloc/free the bulk insert state for every insert call. I see\n> two options:\n>\n> * Each caller needs to support two code paths: if the buffered insert\n> APIs are defined, then use those; otherwise the caller needs to manage\n> the bulk insert state itself and call the plain insert API.\n>\n> * Have default implementation for the new API methods, so that the\n> default for the begin method would allocate the bulk insert state, and\n> the default for the buffered insert method would be to call plain\n> insert using the bulk insert state.\n>\n> I'd prefer the latter, at least in the long term. But I haven't really\n> thought through the details, so perhaps we'd need to use the former.\n\nI too prefer the latter so that the caller doesn't have to have two\npaths. The new API can just transparently fallback to single inserts.\nI've implemented that in the attached v17 patch. I also tested the\ndefault APIs manually, but I'll see if I can add some tests to it the\ndefault API.\n\n> > > After we have these new APIs fully in place and used by COPY, what\n> > > will\n> > > happen to those other APIs? Will they be deprecated or will there\n> > > be a\n> > > reason to keep them?\n> >\n> > Deprecated perhaps?\n>\n> Including Alexander on this thread, because he's making changes to the\n> multi-insert API. We need some consensus on where we are going with\n> these APIs before we make more changes, and what incremental steps make\n> sense in v17.\n>\n> Here's where I think this API should go:\n>\n> 1. Have table_modify_begin/end and table_modify_buffer_insert, like\n> those that are implemented in your patch.\n>\n> 2. 
Add some kind of flush callback that will be called either while the\n> tuples are being flushed or after the tuples are flushed (but before\n> they are freed by the AM). (Aside: do we need to call it while the\n> tuples are being flushed to get the right visibility semantics for\n> after-row triggers?)\n>\n> 3. Add table_modify_buffer_{update|delete} APIs.\n>\n> 4. Some kind of API tweaks to help manage memory when modifying\n> partitioned tables, so that the buffering doesn't get out of control.\n> Perhaps just reporting memory usage and allowing the caller to force\n> flushes would be enough.\n>\n> 5. Use these new methods for CREATE/REFRESH MATERIALIZED VIEW. This is\n> fairly straightforward, I believe, and handled by your patch. Indexes\n> are (re)built afterward, and no triggers are possible.\n>\n> 6. Use these new methods for CREATE TABLE ... AS. This is fairly\n> straightforward, I believe, and handled by your patch. No indexes or\n> triggers are possible.\n>\n> 7. Use these new methods for COPY. We have to be careful to avoid\n> regressions for the heap method, because it's already managing its own\n> buffers. If the AM manages the buffering, then it may require\n> additional copying of slots, which could be a disadvantage. To solve\n> this, we may need some minor API tweaks to avoid copying when the\n> caller guarantees that the memory will not be freed too early, or\n> perhaps expose the AM's memory context to copyfrom.c. Another thing to\n> consider is that the buffering in copyfrom.c is also used for FDWs, so\n> that buffering code path needs to be preserved in copyfrom.c even if\n> not used for AMs.\n>\n> 8. Use these new methods for INSERT INTO ... SELECT. One potential\n> challenge here is that execution nodes are not always run to\n> completion, so we need to be sure that the flush isn't forgotten in\n> that case.\n>\n> 9. Use these new methods for DELETE, UPDATE, and MERGE. 
MERGE can use\n> the buffer_insert/update/delete APIs; we don't need a separate merge\n> method. This probably requires that the AM maintain 3 separate buffers\n> to distinguish different kinds of changes at flush time (obviously\n> these can be initialized lazily to avoid overhead when not being used).\n>\n> 10. Use these new methods for logical apply.\n>\n> 11. Deprecate the multi_insert API.\n>\n> Thoughts on this plan? Does your patch make sense in v17 as a stepping\n> stone, or should we try to make all of these API changes together in\n> v18?\n\nI'd like to see the new multi insert API (as proposed in the v17\npatches) for PG17 if possible. The basic idea with these new APIs is\nto let the AM implementers choose the right buffered insert strategy\n(one can choose the AM specific slot type to buffer the tuples, choose\nthe AM specific memory and flushing decisions etc.). Another advantage\nof this new multi insert API is that the CREATE MATERIALIZED VIEW,\nREFRESH MATERIALIZED VIEW, CREATE TABLE AS commands for heap AM got\nfaster by 62.54%, 68.87%, 74.31% or 2.67, 3.21, 3.89 times\nrespectively. The performance improvement in REFRESH MATERIALIZED VIEW\ncan benefit customers running analytical workloads on postgres.\n\nI'm fine if we gradually add more infrastructure to support COPY,\nINSERT INTO SELECT, Logical Replication Apply, Table Rewrites in\nfuture releases. I'm sure it requires a lot more thought and time.\n\n> Also, a sample AM code would be a huge benefit here. Writing a real AM\n> is hard, but perhaps we can at least have an example one to demonstrate\n> how to use these APIs?\n\nThe heap AM implements this new API. Also, there's a default\nimplementation for the new API falling back on to single inserts.\nAren't these sufficient to help AM implementers to come up with their\nown implementations?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Apr 2024 14:32:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 2:32 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I too prefer the latter so that the caller doesn't have to have two\n> paths. The new API can just transparently fallback to single inserts.\n> I've implemented that in the attached v17 patch. I also tested the\n> default APIs manually, but I'll see if I can add some tests to it the\n> default API.\n\nFixed a compiler warning found via CF bot. Please find the attached\nv18 patches. I'm sorry for the noise.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Apr 2024 17:55:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New Table Access Methods for Multi and Single Inserts"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 1:10 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Here's where I think this API should go:\n>\n> 1. Have table_modify_begin/end and table_modify_buffer_insert, like\n> those that are implemented in your patch.\n\nI added table_modify_begin, table_modify_buffer_insert,\ntable_modify_buffer_flush and table_modify_end. Table Access Method (AM)\nauthors now can define their own buffering strategy and flushing decisions\nbased on their tuple storage kinds and various other AM specific factors. I\nalso added a default implementation that falls back to single inserts when\nno implementation is provided for these AM by AM authors. See the attached\nv19-0001 patch.\n\n> 2. Add some kind of flush callback that will be called either while the\n> tuples are being flushed or after the tuples are flushed (but before\n> they are freed by the AM). (Aside: do we need to call it while the\n> tuples are being flushed to get the right visibility semantics for\n> after-row triggers?)\n\nI added a flush callback named TableModifyBufferFlushCallback; when\nprovided by callers invoked after tuples are flushed to disk from the\nbuffers but before the AM frees them up. Index insertions and AFTER ROW\nINSERT triggers can be executed in this callback. See the v19-0001 patch\nfor how AM invokes the flush callback, and see either v19-0003 or v19-0004\nor v19-0005 for how a caller can supply the callback and required context\nto execute index insertions and AR triggers.\n\n> 3. Add table_modify_buffer_{update|delete} APIs.\n>\n> 9. Use these new methods for DELETE, UPDATE, and MERGE. MERGE can use\n> the buffer_insert/update/delete APIs; we don't need a separate merge\n> method. This probably requires that the AM maintain 3 separate buffers\n> to distinguish different kinds of changes at flush time (obviously\n> these can be initialized lazily to avoid overhead when not being used).\n\nI haven't thought about these things yet. 
I can only focus on them after\nseeing how the attached patches go from here.\n\n> 4. Some kind of API tweaks to help manage memory when modifying\n> partitioned tables, so that the buffering doesn't get out of control.\n> Perhaps just reporting memory usage and allowing the caller to force\n> flushes would be enough.\n\nHeap implementation for these new Table AMs uses a separate memory context\nfor all of the operations. Please have a look and let me know if we need\nanything more.\n\n> 5. Use these new methods for CREATE/REFRESH MATERIALIZED VIEW. This is\n> fairly straightforward, I believe, and handled by your patch. Indexes\n> are (re)built afterward, and no triggers are possible.\n>\n> 6. Use these new methods for CREATE TABLE ... AS. This is fairly\n> straightforward, I believe, and handled by your patch. No indexes or\n> triggers are possible.\n\nI used multi inserts for all of these including TABLE REWRITE commands such\nas ALTER TABLE. See the attached v19-0002 patch. Check the testing section\nbelow for benefits.\n\nFWIW, following are some of the TABLE REWRITE commands that can benefit:\n\nALTER TABLE tbl ALTER c1 TYPE bigint;\nALTER TABLE itest13 ADD COLUMN c int GENERATED BY DEFAULT AS IDENTITY;\nALTER MATERIALIZED VIEW heapmv SET ACCESS METHOD heap2;\nALTER TABLE itest3 ALTER COLUMN a TYPE int;\nALTER TABLE gtest20 ALTER COLUMN b SET EXPRESSION AS (a * 3);\nALTER TABLE has_volatile ADD col4 int DEFAULT (random() * 10000)::int;\nand so on.\n\n> 7. Use these new methods for COPY. We have to be careful to avoid\n> regressions for the heap method, because it's already managing its own\n> buffers. If the AM manages the buffering, then it may require\n> additional copying of slots, which could be a disadvantage. To solve\n> this, we may need some minor API tweaks to avoid copying when the\n> caller guarantees that the memory will not be freed too early, or\n> perhaps expose the AM's memory context to copyfrom.c. 
Another thing to\n> consider is that the buffering in copyfrom.c is also used for FDWs, so\n> that buffering code path needs to be preserved in copyfrom.c even if\n> not used for AMs.\n\nI modified the COPY FROM code to use the new Table AMs, and performed some\ntests which show no signs of regression. Check the testing section below\nfor more details. See the attached v19-0005 patch. With this,\ntable_multi_insert can be deprecated.\n\n> 8. Use these new methods for INSERT INTO ... SELECT. One potential\n> challenge here is that execution nodes are not always run to\n> completion, so we need to be sure that the flush isn't forgotten in\n> that case.\n\nI did that in v19-0003. I did place the table_modify_end call in multiple\nplaces including ExecEndModifyTable. I didn't find any issues with it.\nPlease have a look and let me know if we need the end call in more places.\nCheck the testing section below for benefits.\n\n> 10. Use these new methods for logical apply.\n\nI used multi inserts for Logical Replication apply in v19-0004. Check the\ntesting section below for benefits.\n\nFWIW, open-source pglogical does have multi insert support, check code\naround\nhttps://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_apply_heap.c#L960\n.\n\n> 11. Deprecate the multi_insert API.\n\nI did remove both table_multi_insert and table_finish_bulk_insert in\nv19-0006. Perhaps removing them isn't a great idea, but adding a\ndeprecation WARNING/ERROR for a few more PG releases might be worth\nlooking at.\n\n> Thoughts on this plan? Does your patch make sense in v17 as a stepping\n> stone, or should we try to make all of these API changes together in\n> v18?\n\nIf the design, code and benefits that these new Table AMs bring to the\ntable look good, I hope to see it for PG 18.\n\n> Also, a sample AM code would be a huge benefit here. 
Writing a real AM\n> is hard, but perhaps we can at least have an example one to demonstrate\n> how to use these APIs?\n\nThe attached patches already implement these new Table AMs for Heap.\nI don't think we need a separate implementation to demonstrate. If others\nfeel so, I'm open to thoughts here.\n\nHaving said the above, I'd like to reiterate the motivation behind the new\nTable AMs for multi and single inserts.\n\n1. A scan-like API with state being carried across is thought to be better\nas suggested by Andres Freund -\nhttps://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx@alap3.anarazel.de\n.\n2. Allowing a Table AM to optimize operations across multiple inserts,\ndefine its own buffering strategy and take its own flushing decisions based\non its tuple storage kinds and various other AM specific factors.\n3. Improve performance of various SQL commands with multi inserts for Heap\nAM.\n\nThe attached v19 patches might need some more detailed comments, some\ndocumentation and some specific tests ensuring the multi inserts for Heap\nkick in for various commands. I'm open to thoughts here.\n\nI did some testing to see how various commands benefit with multi inserts\nusing these new Table AMs for heap. Not only do these commands see improved\nperformance, but the amount of WAL generated is also greatly reduced. After\nall, multi inserts optimize the insertions\nby writing less WAL. 
IOW, writing WAL record per page if multiple rows fit\ninto a single data page as opposed to WAL record per row.\n\nTest case 1: 100 million rows, 2 columns (int and float)\n\nCommand | HEAD (sec) | PATCHED (sec) | Faster by % |\nFaster by X\n------------------------------ | ---------- | ------------- | ----------- |\n-----------\nCREATE TABLE AS | 121 | 77 | 36.3 |\n1.57\nCREATE MATERIALIZED VIEW | 101 | 49 | 51.4 |\n2.06\nREFRESH MATERIALIZED VIEW | 113 | 54 | 52.2 |\n2.09\nALTER TABLE (TABLE REWRITE) | 124 | 81 | 34.6 |\n1.53\nCOPY FROM | 71 | 72 | 0 |\n1\nINSERT INTO ... SELECT | 117 | 62 | 47 |\n1.88\nLOGICAL REPLICATION APPLY | 393 | 306 | 22.1 |\n1.28\n\nCommand | HEAD (WAL in GB) | PATCHED (WAL in GB) |\nReduced by % | Reduced by X\n------------------------------ | ---------------- | ------------------- |\n------------ | -----------\nCREATE TABLE AS | 6.8 | 2.4 |\n64.7 | 2.83\nCREATE MATERIALIZED VIEW | 7.2 | 2.3 |\n68 | 3.13\nREFRESH MATERIALIZED VIEW | 10 | 5.1 |\n49 | 1.96\nALTER TABLE (TABLE REWRITE) | 8 | 3.2 |\n60 | 2.5\nCOPY FROM | 2.9 | 3 | 0\n | 1\nINSERT INTO ... SELECT | 8 | 3 |\n62.5 | 2.66\nLOGICAL REPLICATION APPLY | 7.5 | 2.3 |\n69.3 | 3.26\n\nTest case 2: 1 billion rows, 1 column (int)\n\nCommand | HEAD (sec) | PATCHED (sec) | Faster by % |\nFaster by X\n------------------------------ | ---------- | ------------- | ----------- |\n-----------\nCREATE TABLE AS | 794 | 386 | 51.38 |\n2.05\nCREATE MATERIALIZED VIEW | 1006 | 563 | 44.03 |\n1.78\nREFRESH MATERIALIZED VIEW | 977 | 603 | 38.28 |\n1.62\nALTER TABLE (TABLE REWRITE) | 1189 | 714 | 39.94 |\n1.66\nCOPY FROM | 321 | 330 | -0.02 |\n0.97\nINSERT INTO ... 
SELECT | 1084 | 586 | 45.94 |\n1.84\nLOGICAL REPLICATION APPLY | 3530 | 2982 | 15.52 |\n1.18\n\nCommand | HEAD (WAL in GB) | PATCHED (WAL in GB) |\nReduced by % | Reduced by X\n------------------------------ | ---------------- | ------------------- |\n------------ | -----------\nCREATE TABLE AS | 60 | 12 |\n80 | 5\nCREATE MATERIALIZED VIEW | 60 | 12 |\n80 | 5\nREFRESH MATERIALIZED VIEW | 60 | 12 |\n80 | 5\nALTER TABLE (TABLE REWRITE) | 123 | 31 |\n60 | 2.5\nCOPY FROM | 12 | 12 | 0\n | 1\nINSERT INTO ... SELECT | 120 | 24 |\n80 | 5\nLOGICAL REPLICATION APPLY | 61 | 12 |\n80.32 | 5\n\nTest setup:\n./configure --prefix=$PWD/pg17/ --enable-tap-tests CFLAGS=\"-ggdb3 -O2\" >\ninstall.log && make -j 8 install > install.log 2>&1 &\n\nwal_level=logical\nmax_wal_size = 256GB\ncheckpoint_timeout = 1h\n\nTest system is EC2 instance of type c5.4xlarge:\nArchitecture: x86_64\n CPU op-mode(s): 32-bit, 64-bit\n Address sizes: 46 bits physical, 48 bits virtual\n Byte Order: Little Endian\nCPU(s): 16\n On-line CPU(s) list: 0-15\nVendor ID: GenuineIntel\n Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz\n CPU family: 6\n Model: 85\n Thread(s) per core: 2\n Core(s) per socket: 8\n Socket(s): 1\n Stepping: 7\n BogoMIPS: 5999.99\nCaches (sum of all):\n L1d: 256 KiB (8 instances)\n L1i: 256 KiB (8 instances)\n L2: 8 MiB (8 instances)\n L3: 35.8 MiB (1 instance)\nNUMA:\n NUMA node(s): 1\n NUMA node0 CPU(s): 0-15\nRAM:\n MemTotal: 32036536 kB\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 24 Apr 2024 18:19:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "st 24. 4. 2024 v 14:50 odesílatel Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> napsal:\n\n> On Wed, Apr 3, 2024 at 1:10 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > Here's where I think this API should go:\n> >\n> > 1. Have table_modify_begin/end and table_modify_buffer_insert, like\n> > those that are implemented in your patch.\n>\n> I added table_modify_begin, table_modify_buffer_insert,\n> table_modify_buffer_flush and table_modify_end. Table Access Method (AM)\n> authors now can define their own buffering strategy and flushing decisions\n> based on their tuple storage kinds and various other AM specific factors. I\n> also added a default implementation that falls back to single inserts when\n> no implementation is provided for these AM by AM authors. See the attached\n> v19-0001 patch.\n>\n> > 2. Add some kind of flush callback that will be called either while the\n> > tuples are being flushed or after the tuples are flushed (but before\n> > they are freed by the AM). (Aside: do we need to call it while the\n> > tuples are being flushed to get the right visibility semantics for\n> > after-row triggers?)\n>\n> I added a flush callback named TableModifyBufferFlushCallback; when\n> provided by callers invoked after tuples are flushed to disk from the\n> buffers but before the AM frees them up. Index insertions and AFTER ROW\n> INSERT triggers can be executed in this callback. See the v19-0001 patch\n> for how AM invokes the flush callback, and see either v19-0003 or v19-0004\n> or v19-0005 for how a caller can supply the callback and required context\n> to execute index insertions and AR triggers.\n>\n> > 3. Add table_modify_buffer_{update|delete} APIs.\n> >\n> > 9. Use these new methods for DELETE, UPDATE, and MERGE. MERGE can use\n> > the buffer_insert/update/delete APIs; we don't need a separate merge\n> > method. 
This probably requires that the AM maintain 3 separate buffers\n> > to distinguish different kinds of changes at flush time (obviously\n> > these can be initialized lazily to avoid overhead when not being used).\n>\n> I haven't thought about these things yet. I can only focus on them after\n> seeing how the attached patches go from here.\n>\n> > 4. Some kind of API tweaks to help manage memory when modifying\n> > partitioned tables, so that the buffering doesn't get out of control.\n> > Perhaps just reporting memory usage and allowing the caller to force\n> > flushes would be enough.\n>\n> The Heap implementation of these new Table AMs uses a separate memory\n> context for all of the operations. Please have a look and let me know if we need\n> anything more.\n>\n> > 5. Use these new methods for CREATE/REFRESH MATERIALIZED VIEW. This is\n> > fairly straightforward, I believe, and handled by your patch. Indexes\n> > are (re)built afterward, and no triggers are possible.\n> >\n> > 6. Use these new methods for CREATE TABLE ... AS. This is fairly\n> > straightforward, I believe, and handled by your patch. No indexes or\n> > triggers are possible.\n>\n> I used multi inserts for all of these including TABLE REWRITE commands\n> such as ALTER TABLE. See the attached v19-0002 patch. Check the testing\n> section below for benefits.\n>\n> FWIW, following are some of the TABLE REWRITE commands that can\n> benefit:\n>\n> ALTER TABLE tbl ALTER c1 TYPE bigint;\n> ALTER TABLE itest13 ADD COLUMN c int GENERATED BY DEFAULT AS IDENTITY;\n> ALTER MATERIALIZED VIEW heapmv SET ACCESS METHOD heap2;\n> ALTER TABLE itest3 ALTER COLUMN a TYPE int;\n> ALTER TABLE gtest20 ALTER COLUMN b SET EXPRESSION AS (a * 3);\n> ALTER TABLE has_volatile ADD col4 int DEFAULT (random() * 10000)::int;\n> and so on.\n>\n> > 7. Use these new methods for COPY. We have to be careful to avoid\n> > regressions for the heap method, because it's already managing its own\n> > buffers. 
If the AM manages the buffering, then it may require\n> > additional copying of slots, which could be a disadvantage. To solve\n> > this, we may need some minor API tweaks to avoid copying when the\n> > caller guarantees that the memory will not be freed to early, or\n> > perhaps expose the AM's memory context to copyfrom.c. Another thing to\n> > consider is that the buffering in copyfrom.c is also used for FDWs, so\n> > that buffering code path needs to be preserved in copyfrom.c even if\n> > not used for AMs.\n>\n> I modified the COPY FROM code to use the new Table AMs, and performed some\n> tests which show no signs of regression. Check the testing section below\n> for more details. See the attached v19-0005 patch. With this,\n> table_multi_insert can be deprecated.\n>\n> > 8. Use these new methods for INSERT INTO ... SELECT. One potential\n> > challenge here is that execution nodes are not always run to\n> > completion, so we need to be sure that the flush isn't forgotten in\n> > that case.\n>\n> I did that in v19-0003. I did place the table_modify_end call in multiple\n> places including ExecEndModifyTable. I didn't find any issues with it.\n> Please have a look and let me know if we need the end call in more places.\n> Check the testing section below for benefits.\n>\n> > 10. Use these new methods for logical apply.\n>\n> I used multi inserts for Logical Replication apply. in v19-0004. Check the\n> testing section below for benefits.\n>\n> FWIW, open-source pglogical does have multi insert support, check code\n> around\n> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_apply_heap.c#L960\n> .\n>\n> > 11. Deprecate the multi_insert API.\n>\n> I did remove both table_multi_insert and table_finish_bulk_insert in\n> v19-0006. Perhaps, removing them isn't a great idea, but adding a\n> deprecation WARNING/ERROR until some more PG releases might be worth\n> looking at.\n>\n> > Thoughts on this plan? 
Does your patch make sense in v17 as a stepping\n> > stone, or should we try to make all of these API changes together in\n> > v18?\n>\n> If the design, code and benefits that these new Table AMs bring to the\n> table look good, I hope to see it for PG 18.\n>\n> > Also, a sample AM code would be a huge benefit here. Writing a real AM\n> > is hard, but perhaps we can at least have an example one to demonstrate\n> > how to use these APIs?\n>\n> The attached patches already have implemented these new Table AMs for\n> Heap. I don't think we need a separate implementation to demonstrate. If\n> others feel so, I'm open to thoughts here.\n>\n> Having said above, I'd like to reiterate the motivation behind the new\n> Table AMs for multi and single inserts.\n>\n> 1. A scan-like API with state being carried across is thought to be better\n> as suggested by Andres Freund -\n> https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx@alap3.anarazel.de\n> .\n> 2. Allowing a Table AM to optimize operations across multiple inserts,\n> define its own buffering strategy and take its own flushing decisions based\n> on their tuple storage kinds and various other AM specific factors.\n> 3. Improve performance of various SQL commands with multi inserts for Heap\n> AM.\n>\n> The attached v19 patches might need some more detailed comments, some\n> documentation and some specific tests ensuring the multi inserts for Heap\n> are kicked-in for various commands. I'm open to thoughts here.\n>\n> I did some testing to see how various commands benefit with multi inserts\n> using these new Table AM for heap. It's not only the improvement in\n> performance these commands see, but also the amount of WAL that gets\n> generated reduces greatly. After all, multi inserts optimize the insertions\n> by writing less WAL. 
IOW, writing WAL record per page if multiple rows fit\n> into a single data page as opposed to WAL record per row.\n>\n> Test case 1: 100 million rows, 2 columns (int and float)\n>\n> Command | HEAD (sec) | PATCHED (sec) | Faster by %\n> | Faster by X\n> ------------------------------ | ---------- | ------------- | -----------\n> | -----------\n> CREATE TABLE AS | 121 | 77 | 36.3\n> | 1.57\n> CREATE MATERIALIZED VIEW | 101 | 49 | 51.4\n> | 2.06\n> REFRESH MATERIALIZED VIEW | 113 | 54 | 52.2\n> | 2.09\n> ALTER TABLE (TABLE REWRITE) | 124 | 81 | 34.6\n> | 1.53\n> COPY FROM | 71 | 72 | 0\n> | 1\n> INSERT INTO ... SELECT | 117 | 62 | 47\n> | 1.88\n> LOGICAL REPLICATION APPLY | 393 | 306 | 22.1\n> | 1.28\n>\n> Command | HEAD (WAL in GB) | PATCHED (WAL in GB) |\n> Reduced by % | Reduced by X\n> ------------------------------ | ---------------- | ------------------- |\n> ------------ | -----------\n> CREATE TABLE AS | 6.8 | 2.4 |\n> 64.7 | 2.83\n> CREATE MATERIALIZED VIEW | 7.2 | 2.3 |\n> 68 | 3.13\n> REFRESH MATERIALIZED VIEW | 10 | 5.1 |\n> 49 | 1.96\n> ALTER TABLE (TABLE REWRITE) | 8 | 3.2 |\n> 60 | 2.5\n> COPY FROM | 2.9 | 3 |\n> 0 | 1\n> INSERT INTO ... SELECT | 8 | 3 |\n> 62.5 | 2.66\n> LOGICAL REPLICATION APPLY | 7.5 | 2.3 |\n> 69.3 | 3.26\n>\n> Test case 2: 1 billion rows, 1 column (int)\n>\n> Command | HEAD (sec) | PATCHED (sec) | Faster by %\n> | Faster by X\n> ------------------------------ | ---------- | ------------- | -----------\n> | -----------\n> CREATE TABLE AS | 794 | 386 | 51.38\n> | 2.05\n> CREATE MATERIALIZED VIEW | 1006 | 563 | 44.03\n> | 1.78\n> REFRESH MATERIALIZED VIEW | 977 | 603 | 38.28\n> | 1.62\n> ALTER TABLE (TABLE REWRITE) | 1189 | 714 | 39.94\n> | 1.66\n> COPY FROM | 321 | 330 | -0.02\n> | 0.97\n> INSERT INTO ... 
SELECT | 1084 | 586 | 45.94\n> | 1.84\n> LOGICAL REPLICATION APPLY | 3530 | 2982 | 15.52\n> | 1.18\n>\n> Command | HEAD (WAL in GB) | PATCHED (WAL in GB) |\n> Reduced by % | Reduced by X\n> ------------------------------ | ---------------- | ------------------- |\n> ------------ | -----------\n> CREATE TABLE AS | 60 | 12 |\n> 80 | 5\n> CREATE MATERIALIZED VIEW | 60 | 12 |\n> 80 | 5\n> REFRESH MATERIALIZED VIEW | 60 | 12 |\n> 80 | 5\n> ALTER TABLE (TABLE REWRITE) | 123 | 31 |\n> 60 | 2.5\n> COPY FROM | 12 | 12 |\n> 0 | 1\n> INSERT INTO ... SELECT | 120 | 24 |\n> 80 | 5\n> LOGICAL REPLICATION APPLY | 61 | 12 |\n> 80.32 | 5\n>\n\nlooks pretty impressive!\n\nPavel\n\n\n>\n> Test setup:\n> ./configure --prefix=$PWD/pg17/ --enable-tap-tests CFLAGS=\"-ggdb3 -O2\" >\n> install.log && make -j 8 install > install.log 2>&1 &\n>\n> wal_level=logical\n> max_wal_size = 256GB\n> checkpoint_timeout = 1h\n>\n> Test system is EC2 instance of type c5.4xlarge:\n> Architecture: x86_64\n> CPU op-mode(s): 32-bit, 64-bit\n> Address sizes: 46 bits physical, 48 bits virtual\n> Byte Order: Little Endian\n> CPU(s): 16\n> On-line CPU(s) list: 0-15\n> Vendor ID: GenuineIntel\n> Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz\n> CPU family: 6\n> Model: 85\n> Thread(s) per core: 2\n> Core(s) per socket: 8\n> Socket(s): 1\n> Stepping: 7\n> BogoMIPS: 5999.99\n> Caches (sum of all):\n> L1d: 256 KiB (8 instances)\n> L1i: 256 KiB (8 instances)\n> L2: 8 MiB (8 instances)\n> L3: 35.8 MiB (1 instance)\n> NUMA:\n> NUMA node(s): 1\n> NUMA node0 CPU(s): 0-15\n> RAM:\n> MemTotal: 32036536 kB\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>",
"msg_date": "Wed, 24 Apr 2024 18:07:03 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, 2024-04-24 at 18:19 +0530, Bharath Rupireddy wrote:\n> I added a flush callback named TableModifyBufferFlushCallback; when\n> provided by callers invoked after tuples are flushed to disk from the\n> buffers but before the AM frees them up. Index insertions and AFTER\n> ROW INSERT triggers can be executed in this callback. See the v19-\n> 0001 patch for how AM invokes the flush callback, and see either v19-\n> 0003 or v19-0004 or v19-0005 for how a caller can supply the callback\n> and required context to execute index insertions and AR triggers.\n\nThe flush callback takes a pointer to an array of slot pointers, and I\ndon't think that's the right API. I think the callback should be called\non each slot individually.\n\nWe shouldn't assume that a table AM stores buffered inserts as an array\nof slot pointers. A TupleTableSlot has a fair amount of memory overhead\n(64 bytes), so most AMs wouldn't want to pay that overhead for every\ntuple. COPY does, but that's because the number of buffered tuples is\nfairly small.\n\n> \n> \n> > 11. Deprecate the multi_insert API.\n> \n> I did remove both table_multi_insert and table_finish_bulk_insert in\n> v19-0006.\n\nThat's OK with me. Let's leave those functions out for now.\n\n> \n> If the design, code and benefits that these new Table AMs bring to\n> the table look good, I hope to see it for PG 18.\n\nSounds good.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 25 Apr 2024 09:41:08 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Thu, Apr 25, 2024 at 10:11 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2024-04-24 at 18:19 +0530, Bharath Rupireddy wrote:\n> > I added a flush callback named TableModifyBufferFlushCallback; when\n> > provided by callers invoked after tuples are flushed to disk from the\n> > buffers but before the AM frees them up. Index insertions and AFTER\n> > ROW INSERT triggers can be executed in this callback. See the v19-\n> > 0001 patch for how AM invokes the flush callback, and see either v19-\n> > 0003 or v19-0004 or v19-0005 for how a caller can supply the callback\n> > and required context to execute index insertions and AR triggers.\n>\n> The flush callback takes a pointer to an array of slot pointers, and I\n> don't think that's the right API. I think the callback should be called\n> on each slot individually.\n>\n> We shouldn't assume that a table AM stores buffered inserts as an array\n> of slot pointers. A TupleTableSlot has a fair amount of memory overhead\n> (64 bytes), so most AMs wouldn't want to pay that overhead for every\n> tuple. COPY does, but that's because the number of buffered tuples is\n> fairly small.\n\nI get your point. An AM can choose to implement the buffering strategy\nby just storing the plain tuple rather than the tuple slots in which\ncase the flush callback with an array of tuple slots won't work.\nTherefore, I now changed the flush callback to accept only a single\ntuple slot.\n\n> > > 11. Deprecate the multi_insert API.\n> >\n> > I did remove both table_multi_insert and table_finish_bulk_insert in\n> > v19-0006.\n>\n> That's OK with me. Let's leave those functions out for now.\n\nOkay. Dropped the 0006 patch for now.\n\nPlease see the attached v20 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Apr 2024 11:36:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, Apr 29, 2024 at 11:36 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please see the attached v20 patch set.\n\nIt looks like with the use of the new multi insert table access method\n(TAM) for COPY (v20-0005), pgbench regressed about 35% [1]. The reason\nis that the memory-based flushing decision the new TAM takes [2]\ndiffers from what COPY does today with table_multi_insert.\nCOPY with table_multi_insert maintains the exact size of the tuples\nin CopyFromState after it does the line parsing. For instance, the\ntuple size of a table with two integer columns is 8 (4+4) bytes here.\nThe new TAM relies on the memory occupied by the slot's memory context,\nwhich holds the actual tuple, as a good approximation for the tuple\nsize. But this memory context size also includes the tuple header, so\nthe size here is not just 8 (4+4) bytes but more. Because of this, the\nbuffers get flushed sooner than with the existing COPY\ntable_multi_insert path, causing a regression in pgbench, which uses COPY\nextensively. The new TAM isn't designed to receive tuple\nsizes from the callers; even if we did that, the API wouldn't look\ngeneric.\n\nHere are a couple of ideas to get around this:\n\n1. Try to get the actual tuple sizes excluding header sizes for each\ncolumn in the new TAM.\n2. Try not to use the new TAM for COPY, in which case\ntable_multi_insert stays forever.\n3. Try passing a flag to tell the new TAM that the caller does the\nflushing, so no internal flushing is needed.\n\nI haven't explored idea (1) in depth yet. If we find a way to do\nso, it looks to me that we are going backwards, since we need to strip\noff headers for each column of a row for all of the rows. 
I suspect\nthis would cost a bit more and may not solve the regression.\n\nWith the eventual goal of getting rid of table_multi_insert, (2) is\nnot a good choice.\n\n(3) seems reasonable to implement and reduces the regression. I did so\nin the attached v21 patches. A new flag TM_SKIP_INTERNAL_BUFFER_FLUSH\nis introduced in the v21 patch; when specified, no internal flushing is\ndone and the caller has to flush the buffered tuples using\ntable_modify_buffer_flush(). Check the test results [3]: HEAD 2.948 s,\nPATCHED 2.946 s.\n\nv21 also adds code to maintain the tuple size for virtual tuple slots.\nThis helps make better memory-based flushing decisions in the new TAM.\n\nThoughts?\n\n[1]\nHEAD:\ndone in 2.84 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.99 s, vacuum 0.21 s, primary keys 0.62 s).\ndone in 2.78 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.88 s, vacuum 0.21 s, primary keys 0.69 s).\ndone in 2.97 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.07 s, vacuum 0.21 s, primary keys 0.69 s).\ndone in 2.86 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.96 s, vacuum 0.21 s, primary keys 0.69 s).\ndone in 2.90 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.05 s, vacuum 0.21 s, primary keys 0.64 s).\ndone in 2.83 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.96 s, vacuum 0.21 s, primary keys 0.66 s).\ndone in 2.80 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.95 s, vacuum 0.20 s, primary keys 0.63 s).\ndone in 2.79 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 1.89 s, vacuum 0.21 s, primary keys 0.69 s).\ndone in 3.75 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.17 s, vacuum 0.32 s, primary keys 1.25 s).\ndone in 3.86 s (drop tables 0.00 s, create tables 0.08 s, client-side\ngenerate 2.97 s, vacuum 0.21 s, primary keys 0.59 s).\n\nAVG done in 2.948 s\n\nv20 PATCHED:\ndone in 
3.94 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.12 s, vacuum 0.19 s, primary keys 0.62 s).\ndone in 4.04 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.22 s, vacuum 0.20 s, primary keys 0.61 s).\ndone in 3.98 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.16 s, vacuum 0.20 s, primary keys 0.61 s).\ndone in 4.04 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.16 s, vacuum 0.20 s, primary keys 0.67 s).\ndone in 3.98 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.16 s, vacuum 0.20 s, primary keys 0.61 s).\ndone in 4.00 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.17 s, vacuum 0.20 s, primary keys 0.63 s).\ndone in 4.43 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.24 s, vacuum 0.21 s, primary keys 0.98 s).\ndone in 4.16 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 3.36 s, vacuum 0.20 s, primary keys 0.59 s).\ndone in 3.62 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.83 s, vacuum 0.20 s, primary keys 0.58 s).\ndone in 3.67 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.84 s, vacuum 0.21 s, primary keys 0.61 s).\n\nAVG done in 3.986 s\n\n[2]\n+ /*\n+ * Memory allocated for the whole tuple is in slot's memory context, so\n+ * use it keep track of the total space occupied by all buffered tuples.\n+ */\n+ if (TTS_SHOULDFREE(slot))\n+ mistate->cur_size += MemoryContextMemAllocated(slot->tts_mcxt, false);\n+\n+ if (mistate->cur_slots >= HEAP_MAX_BUFFERED_SLOTS ||\n+ mistate->cur_size >= HEAP_MAX_BUFFERED_BYTES)\n+ heap_modify_buffer_flush(state);\n\n[3]\nv21 PATCHED:\ndone in 2.92 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.12 s, vacuum 0.21 s, primary keys 0.59 s).\ndone in 2.89 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.07 s, vacuum 0.21 s, primary keys 0.61 s).\ndone in 2.89 s (drop tables 0.00 s, 
create tables 0.01 s, client-side\ngenerate 2.05 s, vacuum 0.21 s, primary keys 0.62 s).\ndone in 2.90 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.07 s, vacuum 0.21 s, primary keys 0.62 s).\ndone in 2.80 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.00 s, vacuum 0.21 s, primary keys 0.59 s).\ndone in 2.84 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.04 s, vacuum 0.20 s, primary keys 0.60 s).\ndone in 2.84 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.03 s, vacuum 0.20 s, primary keys 0.59 s).\ndone in 2.85 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.04 s, vacuum 0.20 s, primary keys 0.60 s).\ndone in 3.48 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.44 s, vacuum 0.23 s, primary keys 0.80 s).\ndone in 3.05 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 2.28 s, vacuum 0.21 s, primary keys 0.55 s).\n\nAVG done in 2.946 s\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 May 2024 12:56:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "Sorry to interject, but --\n\nOn 2024-May-15, Bharath Rupireddy wrote:\n\n> It looks like with the use of the new multi insert table access method\n> (TAM) for COPY (v20-0005), pgbench regressed about 35% [1].\n\nWhere does this acronym \"TAM\" come from for \"table access method\"? I\nfind it thoroughly horrible and wish we didn't use it. What's wrong\nwith using \"table AM\"? It's not that much longer, much clearer and\nreuses our well-established acronym AM.\n\nWe don't use IAM anywhere, for example (it's always \"index AM\"), and I\ndon't think we'd turn \"sequence AM\" into SAM either, would we?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 15 May 2024 11:14:14 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, May 15, 2024 at 2:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > It looks like with the use of the new multi insert table access method\n> > (TAM) for COPY (v20-0005), pgbench regressed about 35% [1].\n>\n> Where does this acronym \"TAM\" comes from for \"table access method\"?\n\nThanks for pointing it out. I used it just for discussion's sake in\nthis response. Although a few of the previous responses from others in\nthis thread mentioned that word, none of the patches have it added in\nthe code. I'll make sure not to use it further in this thread if it\nworries anyone that another acronym is being added.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 May 2024 15:29:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "Hi,\n\nOn 2024-05-15 11:14:14 +0200, Alvaro Herrera wrote:\n> On 2024-May-15, Bharath Rupireddy wrote:\n> \n> > It looks like with the use of the new multi insert table access method\n> > (TAM) for COPY (v20-0005), pgbench regressed about 35% [1].\n> \n> Where does this acronym \"TAM\" comes from for \"table access method\"? I\n> find it thoroughly horrible and wish we didn't use it. What's wrong\n> with using \"table AM\"? It's not that much longer, much clearer and\n> reuses our well-established acronym AM.\n\nStrongly agreed. I don't know why I dislike TAM so much though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2024 15:03:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, May 15, 2024 at 11:14:14AM +0200, Alvaro Herrera wrote:\n> We don't use IAM anywhere, for example (it's always \"index AM\"), and I\n> don't think we'd turn \"sequence AM\" into SAM either, would we?\n\nSAM is not a term I've seen used for sequence AMs in the past, and I\ndon't intend to use it. TAM is similarly strange to me, but perhaps\nit's just because I am used to table AMs as a whole.\n--\nMichael",
"msg_date": "Thu, 16 May 2024 08:07:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, 2024-05-15 at 12:56 +0530, Bharath Rupireddy wrote:\n> Because of this, the\n> buffers get flushed sooner than that of the existing COPY with\n> table_multi_insert AM causing regression in pgbench which uses COPY\n> extensively.\n\nThe flushing behavior is entirely controlled by the table AM. The heap\ncan use the same flushing logic that it did before, which is to hold\n1000 tuples.\n\nI like that it's accounting for memory, too, but it doesn't need to be\noverly restrictive. Why not just use work_mem? That should hold 1000\nreasonably-sized tuples, plus overhead.\n\nEven better would be if we could take into account partitioning. That\nmight be out of scope for your current work, but it would be very\nuseful. We could have a couple new GUCs like modify_table_buffer and\nmodify_table_buffer_per_partition or something like that.\n\n> 1. Try to get the actual tuple sizes excluding header sizes for each\n> column in the new TAM.\n\nI don't see the point in arbitrarily excluding the header.\n\n> v21 also adds code to maintain tuple size for virtual tuple slots.\n> This helps make better memory-based flushing decisions in the new\n> TAM.\n\nThat seems wrong. We shouldn't need to change the TupleTableSlot\nstructure for this patch.\n\n\nComments on v21:\n\n* All callers specify TM_FLAG_MULTI_INSERTS. What's the purpose?\n\n* The only caller that doesn't use TM_FLAG_BAS_BULKWRITE is\nExecInsert(). What's the disadvantage to using a bulk insert state\nthere?\n\n* I'm a bit confused by TableModifyState->modify_end_callback. The AM\nboth sets the callback and calls the callback -- why can't the code\njust go into the table_modify_end method?\n\n* The code structure in table_modify_begin() (and related) is strange.\nCan it be simplified or am I missing something?\n\n* Why are table_modify_state and insert_modify_buffer_flush_context\nglobals? 
What if there are multiple modify nodes in a plan?\n\n* Can you explain the design in logical rep?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 15 May 2024 16:31:42 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, 2024-05-15 at 16:31 -0700, Jeff Davis wrote:\n> Even better would be if we could take into account partitioning. That\n> might be out of scope for your current work, but it would be very\n> useful. We could have a couple new GUCs like modify_table_buffer and\n> modify_table_buffer_per_partition or something like that.\n\nTo expand on this point:\n\nFor heap, the insert buffer is only 1000 tuples, which doesn't take\nmuch memory. But for an AM that does any significant reorganization of\nthe input data, the buffer may be much larger. For insert into a\npartitioned table, that buffer could be multiplied across many\npartitions, and start to be a real concern.\n\nWe might not need table AM API changes at all here beyond what v21\noffers. The TableModifyState includes the memory context, so that gives\nthe caller a way to know the memory consumption of a single partition's\nbuffer. And if it needs to free the resources, it can just call\nmodify_table_end(), and then _begin() again if more tuples hit that\npartition.\n\nSo I believe what I'm asking for here is entirely orthogonal to the\ncurrent proposal.\n\nHowever, it got me thinking that we might not want to use work_mem for\ncontrolling the heap's buffer size. Each AM is going to have radically\ndifferent memory needs, and may have its own (extension) GUCs to\ncontrol that memory usage, so they won't honor work_mem. We could\neither have a separate GUC for the heap if it makes sense, or we could\njust hard-code a reasonable value.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 16 May 2024 12:00:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "Hi,\n\nOn Thu, May 16, 2024 at 5:01 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> The flushing behavior is entirely controlled by the table AM. The heap\n> can use the same flushing logic that it did before, which is to hold\n> 1000 tuples.\n>\n> I like that it's accounting for memory, too, but it doesn't need to be\n> overly restrictive. Why not just use work_mem? That should hold 1000\n> reasonably-sized tuples, plus overhead.\n>\n> Even better would be if we could take into account partitioning. That\n> might be out of scope for your current work, but it would be very\n> useful. We could have a couple new GUCs like modify_table_buffer and\n> modify_table_buffer_per_partition or something like that.\n\nI disagree with inventing more GUCs. Instead, I'd vote for just\nholding 1000 tuples in buffers for heap AM. This not only keeps the\ncode and new table AM simple, but also does not cause regression for\nCOPY. In my testing, 1000 tuples with 1 int and 1 float columns took\n40000 bytes of memory (40 bytes each tuple), whereas with 1 int, 1\nfloat and 1 text columns they took 172000 bytes of memory (172 bytes\neach tuple), which IMO shouldn't be a big problem. Thoughts?\n\n> > 1. Try to get the actual tuple sizes excluding header sizes for each\n> > column in the new TAM.\n>\n> I don't see the point in arbitrarily excluding the header.\n>\n> > v21 also adds code to maintain tuple size for virtual tuple slots.\n> > This helps make better memory-based flushing decisions in the new\n> > TAM.\n>\n> That seems wrong. We shouldn't need to change the TupleTableSlot\n> structure for this patch.\n\nI dropped these ideas as I went ahead with the above idea of just\nholding 1000 tuples in buffers for heap AM.\n\n> Comments on v21:\n>\n> * All callers specify TM_FLAG_MULTI_INSERTS. What's the purpose?\n\nPreviously, the multi insert state was initialized in modify_begin, so\nit was then required to differentiate the code path. 
But, it's not\nneeded anymore with the lazy initialization of the multi insert state\nmoved to modify_buffer_insert. I removed it.\n\n> * The only caller that doesn't use TM_FLAG_BAS_BULKWRITE is\n> ExecInsert(). What's the disadvantage to using a bulk insert state\n> there?\n\nThe subsequent read queries will not find the just-now-inserted tuples\nin shared buffers, as a separate ring buffer is used with the bulk\ninsert access strategy. Multi inserts are nothing but buffering\nmultiple tuples and then inserting them in bulk. So using the bulk\ninsert strategy might be worth it for INSERT INTO SELECTs too.\nThoughts?\n\n> * I'm a bit confused by TableModifyState->modify_end_callback. The AM\n> both sets the callback and calls the callback -- why can't the code\n> just go into the table_modify_end method?\n\nI came up with modify_end_callback as per the discussion upthread to\nuse modify_begin, modify_end in future for UPDATE, DELETE and MERGE,\nand not use any operation specific flags to clean the state\nappropriately. The operation specific state cleaning logic can go to\nthe modify_end_callback implementation defined by the AM.\n\n> * The code structure in table_modify_begin() (and related) is strange.\n> Can it be simplified or am I missing something?\n\nI previously defined these new table AMs as optional, check\nGetTableAmRoutine(). And, there was a point upthread about providing a\ndefault/fallback implementation so that insert operations do not fail\non tables whose AMs lack the new callbacks. FWIW, the default\nimplementation was just doing the single inserts.\ntable_modify_begin() and friends need the fallback logic, making the\ncode there look different from other AMs. However, I am now inclined\nto drop the idea of a fallback implementation and let the\nAMs deal with it. Although it might create some friction with various\nnon-core AM implementations, it keeps this patch simple, which I would\nvote for. 
Thoughts?\n\n> * Why are table_modify_state and insert_modify_buffer_flush_context\n> globals? What if there are multiple modify nodes in a plan?\n\nCan you please provide the case that can generate multiple \"modify\nnodes\" in a single plan? AFAICS, multiple \"modify nodes\" in a plan can\nexist for both partitioned tables and tables that get created as part\nof CTEs. I disabled multi inserts for both of these cases. The way I\ndisabled for CTEs looks pretty naive - I just did the following. Any\nbetter suggestions here to deal with all such cases?\n\n+ if (operation == CMD_INSERT &&\n+ nodeTag(subplanstate) == T_SeqScanState)\n+ canMultiInsert = true;\n\n> * Can you explain the design in logical rep?\n\nMulti inserts for logical replication work at the table level. In\nother words, all tuple inserts related to a single table within a\ntransaction are buffered and written to the corresponding table when\nnecessary. Whenever inserts pertaining to another table arrive, the\nbuffered tuples related to the previous table are written to the table\nbefore starting the buffering for the new table. Also, the tuples are\nwritten to the table from the buffer when a non-INSERT operation\narrives, for example, UPDATE/DELETE/TRUNCATE/COMMIT etc. FWIW,\npglogical has similar multi-insert logic -\nhttps://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_apply_heap.c#L879.\n\nPlease find the v22 patches with the above changes.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 5 Jun 2024 12:42:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, Jun 5, 2024 at 12:42 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please find the v22 patches with the above changes.\n\nPlease find the v23 patches after rebasing 0005 and adapting 0004 for\n9758174e2e.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Aug 2024 11:09:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, 2024-08-26 at 11:09 +0530, Bharath Rupireddy wrote:\n> On Wed, Jun 5, 2024 at 12:42 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > \n> > Please find the v22 patches with the above changes.\n> \n> Please find the v23 patches after rebasing 0005 and adapting 0004 for\n> 9758174e2e.\n\n\nThank you.\n\n0001 API design:\n\n* Remove TableModifyState.modify_end_callback.\n\n* This patch means that we will either remove or deprecate\nTableAmRoutine.multi_insert and finish_bulk_insert. Are there any\nstrong opinions about maintaining support for multi-insert, or should\nwe just remove it outright and force any new AMs to implement the new\nAPIs to maintain COPY performance?\n\n* Why do we need a separate \"modify_flags\" and \"options\"? Can't we just\ncombine them into TABLE_MODIFY_* flags?\n\n\nAlexander, you had some work in this area as well, such as b1484a3f19.\nI believe 0001 covers this use case in a different way: rather than\ngiving complete responsibility to the AM to insert into the indexes,\nthe caller provides a callback and the AM is responsible for calling it\nat the time the tuples are flushed. Is that right?\n\nThe design has been out for a while, so unless others have suggestions,\nI'm considering the major design points mostly settled and I will move\nforward with something like 0001 (pending implementation issues).\n\nNote: I believe this API will extend naturally to updates and deletes,\nas well.\n\n\n0001 implementation issues:\n\n* We need default implementations for AMs that don't implement the new\nAPIs, so that the AM will still function even if it only defines the\nsingle-tuple APIs. If we need to make use of the AM's multi_insert\nmethod (I'm not sure we do), then the default methods would need to\nhandle that as well. 
(I thought a previous version had these default\nimplementations -- is there a reason they were removed?)\n\n* I am confused about how the heap implementation manages state and\nresets it. mistate->mem_cxt is initialized to a new memory context in\nheap_modify_begin, and then re-initialized to another new memory\ncontext in heap_modify_buffer_insert. Then the mistate->mem_cxt is also\nused as a temp context for executing heap_multi_insert, and it gets\nreset before calling the flush callback, which still needs the slots.\n\n* Why materialize the slot at copyfrom.c:1308 if the slot is going to\nbe copied anyway (which also materializes it; see\ntts_virtual_copyslot()) at heapam.c:2710?\n\n* After correcting the memory issues, can you get updated performance\nnumbers for COPY?\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 14:18:21 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, 26 Aug 2024 at 23:18, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2024-08-26 at 11:09 +0530, Bharath Rupireddy wrote:\n> > On Wed, Jun 5, 2024 at 12:42 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Please find the v22 patches with the above changes.\n> >\n> > Please find the v23 patches after rebasing 0005 and adapting 0004 for\n> > 9758174e2e.\n>\n>\n> Thank you.\n>\n> 0001 API design:\n>\n> * Remove TableModifyState.modify_end_callback.\n>\n> * This patch means that we will either remove or deprecate\n> TableAmRoutine.multi_insert and finish_bulk_insert. Are there any\n> strong opinions about maintaining support for multi-insert, or should\n> we just remove it outright and force any new AMs to implement the new\n> APIs to maintain COPY performance?\n\nI don't think the capabilities and requirements of the two APIs as\ncurrently designed differ enough that removing the old API would mean\nany real loss of capability. Maybe we could supply an equivalent API\nshim to help the transition, but I don't think we should keep the old\nAPI around in the TableAM.\n\n> * Why do we need a separate \"modify_flags\" and \"options\"? Can't we just\n> combine them into TABLE_MODIFY_* flags?\n>\n>\n> Alexander, you had some work in this area as well, such b1484a3f19. I\n> believe 0001 covers this use case in a different way: rather than\n> giving complete responsibility to the AM to insert into the indexes,\n> the caller provides a callback and the AM is responsible for calling it\n> at the time the tuples are flushed. 
Is that right?\n>\n> The design has been out for a while, so unless others have suggestions,\n> I'm considering the major design points mostly settled and I will move\n> forward with something like 0001 (pending implementation issues).\n\nSorry about this late feedback, but while I'm generally +1 on the idea\nand primary design, I feel that it doesn't quite cover all the areas\nI'd expected it to cover.\n\nSpecifically, I'm having trouble seeing how this could be used to\nimplement ```INSERT INTO ... SELECT ... RETURNING ctid``` as I see no\nreturning output path for the newly inserted tuples' data, which is\nusually required for our execution nodes' output path. Is support for\nRETURN-clauses planned for this API? In a previous iteration, the\nflush operation was capable of returning a TTS, but that seems to have\nbeen dropped, and I can't quite figure out why.\n\n> Note: I believe this API will extend naturally to updates and deletes,\n> as well.\n\nI have the same concern about UPDATE ... RETURNING not fitting with\nthis callback-based design.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 26 Aug 2024 23:59:28 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, 2024-08-26 at 23:59 +0200, Matthias van de Meent wrote:\n> Specifically, I'm having trouble seeing how this could be used to\n> implement ```INSERT INTO ... SELECT ... RETURNING ctid``` as I see no\n> returning output path for the newly inserted tuples' data, which is\n> usually required for our execution nodes' output path. Is support for\n> RETURN-clauses planned for this API? In a previous iteration, the\n> flush operation was capable of returning a TTS, but that seems to\n> have\n> been dropped, and I can't quite figure out why.\n\nI'm not sure where that was lost, but I suspect when we changed\nflushing to use a callback. I didn't get to v23-0003 yet, but I think\nyou're right that the current flushing mechanism isn't right for\nreturning tuples. Thank you.\n\nOne solution: when the buffer is flushed, we can return an iterator\nover the buffered tuples to the caller. The caller can then use the\niterator to insert into indexes, return a tuple to the executor, etc.,\nand then release the iterator when done (freeing the buffer). That\ncontrol flow is less convenient for most callers, though, so perhaps\nthat should be optional?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 22:42:27 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Tue, 27 Aug 2024 at 07:42, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2024-08-26 at 23:59 +0200, Matthias van de Meent wrote:\n> > Specifically, I'm having trouble seeing how this could be used to\n> > implement ```INSERT INTO ... SELECT ... RETURNING ctid``` as I see no\n> > returning output path for the newly inserted tuples' data, which is\n> > usually required for our execution nodes' output path. Is support for\n> > RETURN-clauses planned for this API? In a previous iteration, the\n> > flush operation was capable of returning a TTS, but that seems to\n> > have\n> > been dropped, and I can't quite figure out why.\n>\n> I'm not sure where that was lost, but I suspect when we changed\n> flushing to use a callback. I didn't get to v23-0003 yet, but I think\n> you're right that the current flushing mechanism isn't right for\n> returning tuples. Thank you.\n>\n> One solution: when the buffer is flushed, we can return an iterator\n> over the buffered tuples to the caller. The caller can then use the\n> iterator to insert into indexes, return a tuple to the executor, etc.,\n> and then release the iterator when done (freeing the buffer).\n\nI think that would work, but it'd need to be accommodated in the\ntable_modify_buffer_insert path too, not just the _flush path, as the\nheap AM flushes the buffer when inserting tuples and its internal\nbuffer is full, so not only at the end of modifications.\n\n> That control flow is less convenient for most callers, though, so\n> perhaps that should be optional?\n\nThat would be OK with me.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 27 Aug 2024 15:44:13 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Tue, 2024-08-27 at 15:44 +0200, Matthias van de Meent wrote:\n> > One solution: when the buffer is flushed, we can return an iterator\n> > over the buffered tuples to the caller. The caller can then use the\n> > iterator to insert into indexes, return a tuple to the executor,\n> > etc.,\n> > and then release the iterator when done (freeing the buffer).\n> \n> I think that would work, but it'd need to be accomodated in the\n> table_modify_buffer_insert path too, not just the _flush path, as the\n> heap AM flushes the buffer when inserting tuples and its internal\n> buffer is full, so not only at the end of modifications.\n\nI gave this a little more thought and I don't think we need a change\nhere now. The callback could support RETURNING by copying the tuples\nout into the caller's state somewhere, and then the caller can iterate\non its own and emit those tuples.\n\nThat's not ideal, because it involves an extra copy, but it's a much\nsimpler API.\n\nAnother thought is that there are already a number of cases where we\nneed to limit the use of batching similar to copyfrom.c:917-1006. For\ninstance, before-row triggers, instead-of-row triggers, and volatile\nfunctions in the query. We could also just consider RETURNING another\nrestriction, which could be lifted later by implementing the logic in\nthe callback (as described above) without an API change.\n\nRegards,\n Jeff Davis\n\n\n\n",
"msg_date": "Tue, 27 Aug 2024 13:09:10 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, 2024-08-26 at 11:09 +0530, Bharath Rupireddy wrote:\n> On Wed, Jun 5, 2024 at 12:42 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > \n> > Please find the v22 patches with the above changes.\n> \n> Please find the v23 patches after rebasing 0005 and adapting 0004 for\n> 9758174e2e.\n\nIn patches 0002-0004, the callers must avoid the multi insert path when\nthere are before-row triggers, instead-of-row triggers, or volatile\nfunctions in use (see copyfrom.c:917-1006).\n\nAlso, until we decide on the RETURNING clause, we should block the\nmulti-insert path for that, as well, or implement it by using the\ncallback to copy tuples into the caller's context.\n\nIn 0003, why do you need the global insert_modify_buffer_flush_context?\n\n0004 is the only place that calls table_modify_buffer_flush(). Is that\nreally necessary, or is automatic flushing enough?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:37:27 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Mon, 2024-08-26 at 14:18 -0700, Jeff Davis wrote:\n> 0001 implementation issues:\n> \n> * We need default implementations for AMs that don't implement the\n> new\n> APIs, so that the AM will still function even if it only defines the\n> single-tuple APIs. If we need to make use of the AM's multi_insert\n> method (I'm not sure we do), then the default methods would need to\n> handle that as well. (I thought a previous version had these default\n> implementations -- is there a reason they were removed?)\n\nOn second thought, it would be easier to just have the caller check\nwhether the AM supports the multi-insert path; and if not, fall back to\nthe single-tuple path. The single-tuple path is needed anyway for cases\nlike before-row triggers.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:43:59 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 3:14 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2024-08-26 at 14:18 -0700, Jeff Davis wrote:\n> > 0001 implementation issues:\n> >\n> > * We need default implementations for AMs that don't implement the\n> > new\n> > APIs, so that the AM will still function even if it only defines the\n> > single-tuple APIs. If we need to make use of the AM's multi_insert\n> > method (I'm not sure we do), then the default methods would need to\n> > handle that as well. (I thought a previous version had these default\n> > implementations -- is there a reason they were removed?)\n>\n> On second thought, it would be easier to just have the caller check\n> whether the AM supports the multi-insert path; and if not, fall back to\n> the single-tuple path. The single-tuple path is needed anyway for cases\n> like before-row triggers.\n\nUp until v21, the default implementation existed, see\nhttps://www.postgresql.org/message-id/CALj2ACX90L5Mb5Vv%3DjsvhOdZ8BVsfpZf-CdCGhtm2N%2BbGUCSjg%40mail.gmail.com.\nI then removed it in v22 to keep the code simple.\n\nIMO, every caller branching out in the code like if (rel->rd_tableam->\ntuple_modify_buffer_insert != NULL) then multi insert; else single\ninsert; doesn't look good. IMO, the default implementation approach\nkeeps things simple, and it can eventually be removed in the *near*\nfuture. Thoughts?\n\nOne change in the default implementation I would do from that of v21\nis to assign the default AMs in GetTableAmRoutine() itself to avoid if\n.. else if .. else in the table_modify_XXX().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Aug 2024 12:55:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
},
{
"msg_contents": "On Thu, 2024-08-29 at 12:55 +0530, Bharath Rupireddy wrote:\n> IMO, every caller branching out in the code like if (rel->rd_tableam-\n> >\n> tuple_modify_buffer_insert != NULL) then multi insert; else single\n> insert; doesn't look good. IMO, the default implementation approach\n> keeps things simple which eventually can be removed in *near* future.\n> Thoughts?\n\nI believe we need the branching in the caller anyway:\n\n1. If there is a BEFORE row trigger with a volatile function, the\nvisibility rules[1] mean that the function should see changes from all\nthe rows inserted so far this command, which won't work if they are\nstill in the buffer.\n\n2. Similarly, for an INSTEAD OF row trigger, the visibility rules say\nthat the function should see all previous rows inserted.\n\n3. If there are volatile functions in the target list or WHERE clause,\nthe same visibility semantics apply.\n\n4. If there's a \"RETURNING ctid\" clause, we need to either come up with\na way to return the tuples after flushing, or we need to use the\nsingle-tuple path. (Similarly in the future when we support UPDATE ...\nRETURNING, as Matthias pointed out.)\n\nIf we need two paths in each caller anyway, it seems cleaner to just\nwrap the check for tuple_modify_buffer_insert in\ntable_modify_buffer_enabled().\n\nWe could perhaps use a one path and then force a batch size of one or\nsomething, which is an alternative, but we have to be careful not to\nintroduce a regression (and it still requires a solution for #4).\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/docs/devel/trigger-datachanges.html\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 12:29:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce new multi insert Table AM and improve performance of\n various SQL commands with it for Heap AM"
}
]
[
{
"msg_contents": "Hi,\n\nFor fairly obvious reasons I like to run check-world in parallel [1]. In\nthe last few months I've occasionally seen failures during that that I\ncannot recall seeing before.\n\n--- /home/andres/build/postgres/13-assert/vpath/src/test/regress/expected/tablespace.out 2020-12-07 18:41:23.079235588 -0800\n+++ /home/andres/build/postgres/13-assert/vpath/src/test/regress/results/tablespace.out 2020-12-07 18:42:01.892632468 -0800\n@@ -209,496 +209,344 @@\n ERROR: cannot specify default tablespace for partitioned relations\n CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace) PARTITION BY LIST (a);\n ERROR: cannot specify default tablespace for partitioned relations\n -- but these work:\n CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace) PARTITION BY LIST (a) TABLESPACE regress_tblspace;\n SET default_tablespace TO '';\n CREATE TABLE testschema.dflt2 (a int PRIMARY KEY) PARTITION BY LIST (a);\n DROP TABLE testschema.dflt, testschema.dflt2;\n -- check that default_tablespace doesn't affect ALTER TABLE index rebuilds\n CREATE TABLE testschema.test_default_tab(id bigint) TABLESPACE regress_tblspace;\n+ERROR: could not create directory \"pg_tblspc/16387/PG_13_202007201/16384\": No such file or directory\n INSERT INTO testschema.test_default_tab VALUES (1);\n\n(many failures follow)\n\n\nI suspect this is related to the pg_upgrade test and the main regression\ntest running at the same time. We have the following in src/test/regress/GNUMakefile\n\n# Tablespace setup\n\n.PHONY: tablespace-setup\ntablespace-setup:\n\techo $(realpath ./testtablespace) >> /tmp/tablespace.log\n\trm -rf ./testtablespace\n\tmkdir ./testtablespace\n...\n\nwhich pg_upgrade triggers. Even though it, as far as I can tell, never\nactually ends up putting any data in it:\n\n# Send installcheck outputs to a private directory. 
This avoids conflict when\n# check-world runs pg_upgrade check concurrently with src/test/regress check.\n# To retrieve interesting files after a run, use pattern tmp_check/*/*.diffs.\noutputdir=\"$temp_root/regress\"\nEXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --outputdir=$outputdir\"\nexport EXTRA_REGRESS_OPTS\nmkdir \"$outputdir\"\nmkdir \"$outputdir\"/testtablespace\n\nIt's not clear to me why we have this logic in the makefile at all?\nSomebody taught pg_regress to do so, but only on windows... See\nconvert_sourcefiles_in().\n\n\nThe other thing that confuses me is why I started getting that error in\n*multiple* branches recently, even though I have used the parallel\ncheck-world for ages.\n\nGreetings,\n\nAndres Freund\n\n[1]: make -Otarget -j20 -s check-world && echo success || echo failed\n\n\n",
"msg_date": "Tue, 8 Dec 2020 17:29:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "On Tue, Dec 08, 2020 at 05:29:11PM -0800, Andres Freund wrote:\n> I suspect this is related to the pg_upgrade test and the main regression\n> test running at the same time. We have the following in\n> src/test/regress/GNUMakefile.\n\nYes, this one is not completely new to -hackers. See patch 0002 here\nthat slightly touched the topic by creating a specific makefile rule,\nbut I never got back to it as I never got annoyed by this problem:\nhttps://www.postgresql.org/message-id/20200511.171354.514381788845037011.horikyota.ntt@gmail.com\nWhat we have here is not a solution though...\n\n> It's not clear to me why we have this logic in the makefile at all?\n> Somebody taught pg_regress to do so, but only on windows... See\n> convert_sourcefiles_in().\n\n... Because we may still introduce this problem again if some new\nstuff uses src/test/pg_regress in a way similar to pg_upgrade,\ntriggering again tablespace-setup. Something like the attached may be\nenough, though I have not spent much time checking the surroundings,\nWindows included.\n\n> The other thing that confuses me is why I started getting that error in\n> *multiple* branches recently, even though I have used the parallel\n> check-world for ages.\n\nPerhaps you have just increased -j lately or moved to a faster machine\nwhere there are higher changes of collision?\n--\nMichael",
"msg_date": "Wed, 9 Dec 2020 16:55:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "On 2020-12-09 02:29, Andres Freund wrote:\n> I suspect this is related to the pg_upgrade test and the main regression\n> test running at the same time. We have the following in src/test/regress/GNUMakefile\n> \n> # Tablespace setup\n> \n> .PHONY: tablespace-setup\n> tablespace-setup:\n> \techo $(realpath ./testtablespace) >> /tmp/tablespace.log\n> \trm -rf ./testtablespace\n> \tmkdir ./testtablespace\n> ...\n> \n> which pg_upgrade triggers. Even though it, as far as I can tell, never\n> actually ends up putting any data in it:\n> \n> # Send installcheck outputs to a private directory. This avoids conflict when\n> # check-world runs pg_upgrade check concurrently with src/test/regress check.\n> # To retrieve interesting files after a run, use pattern tmp_check/*/*.diffs.\n> outputdir=\"$temp_root/regress\"\n> EXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --outputdir=$outputdir\"\n> export EXTRA_REGRESS_OPTS\n> mkdir \"$outputdir\"\n> mkdir \"$outputdir\"/testtablespace\n> \n> It's not clear to me why we have this logic in the makefile at all?\n> Somebody taught pg_regress to do so, but only on windows... See\n> convert_sourcefiles_in().\n\nI vaguely recall that this had something to do with SELinux (or \nsomething similar?), where it matters in what context you create a file \nor directory and then certain properties attach to it that are relevant \nto subsequent programs that run on it. Again, vague.\n\n\n",
"msg_date": "Fri, 15 Jan 2021 09:59:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "On Fri, Jan 15, 2021 at 09:59:02AM +0100, Peter Eisentraut wrote:\n> I vaguely recall that this had something to do with SELinux (or something\n> similar?), where it matters in what context you create a file or directory\n> and then certain properties attach to it that are relevant to subsequent\n> programs that run on it. Again, vague.\n\nHmm. Does it? sepgsql has some tests for tablespaces involving only\npg_default, so it does not seem that this applies in the context of\nthe regression tests. The cleanup of testtablespace in GNUMakefile\ncomes from 2467394, as of June 2004, that introduced tablespaces.\n--\nMichael",
"msg_date": "Sat, 16 Jan 2021 13:46:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "On 09.12.20 08:55, Michael Paquier wrote:\n>> It's not clear to me why we have this logic in the makefile at all?\n>> Somebody taught pg_regress to do so, but only on windows... See\n>> convert_sourcefiles_in().\n> \n> ... Because we may still introduce this problem again if some new\n> stuff uses src/test/pg_regress in a way similar to pg_upgrade,\n> triggering again tablespace-setup. Something like the attached may be\n> enough, though I have not spent much time checking the surroundings,\n> Windows included.\n\nThis patch looks alright to me.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 11:53:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 11:53:57AM +0100, Peter Eisentraut wrote:\n> On 09.12.20 08:55, Michael Paquier wrote:\n>> ... Because we may still introduce this problem again if some new\n>> stuff uses src/test/pg_regress in a way similar to pg_upgrade,\n>> triggering again tablespace-setup. Something like the attached may be\n>> enough, though I have not spent much time checking the surroundings,\n>> Windows included.\n> \n> This patch looks alright to me.\n\nSo, I have spent more time checking the surroundings of this patch,\nand finally applied it. Thanks for the review, Peter.\n--\nMichael",
"msg_date": "Wed, 10 Mar 2021 15:40:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-10 15:40:38 +0900, Michael Paquier wrote:\n> On Mon, Mar 08, 2021 at 11:53:57AM +0100, Peter Eisentraut wrote:\n> > On 09.12.20 08:55, Michael Paquier wrote:\n> >> ... Because we may still introduce this problem again if some new\n> >> stuff uses src/test/pg_regress in a way similar to pg_upgrade,\n> >> triggering again tablespace-setup. Something like the attached may be\n> >> enough, though I have not spent much time checking the surroundings,\n> >> Windows included.\n> > \n> > This patch looks alright to me.\n> \n> So, I have spent more time checking the surroundings of this patch,\n> and finally applied it. Thanks for the review, Peter.\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Mar 2021 12:17:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Occasional tablespace.sql failures in check-world -jnn"
}
]
[
{
"msg_contents": "Hi\n\nSince ba3e76c,\nthe optimizer call generate_useful_gather_paths instead of generate_gather_paths() outside.\n\nBut I noticed that some comment still talking about generate_gather_paths not generate_useful_gather_paths.\nI think we should fix these comment, and I try to replace these \" generate_gather_paths \" with \" generate_useful_gather_paths \"\n\nBest regards,\nhouzj",
"msg_date": "Wed, 9 Dec 2020 02:21:25 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo about generate_gather_paths"
},
{
"msg_contents": "On 12/9/20 3:21 AM, Hou, Zhijie wrote:\n> Hi\n> \n> Since ba3e76c,\n> the optimizer call generate_useful_gather_paths instead of generate_gather_paths() outside.\n> \n> But I noticed that some comment still talking about generate_gather_paths not generate_useful_gather_paths.\n> I think we should fix these comment, and I try to replace these \" generate_gather_paths \" with \" generate_useful_gather_paths \"\n> \n\nThanks. I started looking at this a bit more closely, and I think most \nof the changes are fine - the code was changed to call a different \nfunction, but the comments still reference generate_gather_paths().\n\nThe one exception seems to be create_ordered_paths(), because that \ncomment also makes statements about what generate_gather_pathes is \ndoing. And some of it does not apply to generate_useful_gather_paths.\nFor example it says it generates order-preserving Gather Merge paths, \nbut generate_useful_gather_paths also generates paths with sorts (which \nare clearly not order-preserving).\n\nSo I think this comment will need a bit more work to update ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 22 Dec 2020 19:24:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo about generate_gather_paths"
},
{
"msg_contents": "Hi,\n\nOn Wed, Dec 23, 2020 at 3:24 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/9/20 3:21 AM, Hou, Zhijie wrote:\n> > Hi\n> >\n> > Since ba3e76c,\n> > the optimizer call generate_useful_gather_paths instead of generate_gather_paths() outside.\n> >\n> > But I noticed that some comment still talking about generate_gather_paths not generate_useful_gather_paths.\n> > I think we should fix these comment, and I try to replace these \" generate_gather_paths \" with \" generate_useful_gather_paths \"\n> >\n>\n> Thanks. I started looking at this a bit more closely, and I think most\n> of the changes are fine - the code was changed to call a different\n> function, but the comments still reference generate_gather_paths().\n>\n> The one exception seems to be create_ordered_paths(), because that\n> comment also makes statements about what generate_gather_pathes is\n> doing. And some of it does not apply to generate_useful_gather_paths.\n> For example it says it generates order-preserving Gather Merge paths,\n> but generate_useful_gather_paths also generates paths with sorts (which\n> are clearly not order-preserving).\n>\n> So I think this comment will need a bit more work to update ...\n\nStatus update for a commitfest entry.\n\nThis patch has been \"Waiting on Author\" without seeing any activity\nsince Tomas sent review comments. I'm planning to set it to \"Returned\nwith Feedback”, barring objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 11:44:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo about generate_gather_paths"
},
{
"msg_contents": "Hi,\n\nOn Mon, Feb 1, 2021 at 11:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Dec 23, 2020 at 3:24 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 12/9/20 3:21 AM, Hou, Zhijie wrote:\n> > > Hi\n> > >\n> > > Since ba3e76c,\n> > > the optimizer call generate_useful_gather_paths instead of generate_gather_paths() outside.\n> > >\n> > > But I noticed that some comment still talking about generate_gather_paths not generate_useful_gather_paths.\n> > > I think we should fix these comment, and I try to replace these \" generate_gather_paths \" with \" generate_useful_gather_paths \"\n> > >\n> >\n> > Thanks. I started looking at this a bit more closely, and I think most\n> > of the changes are fine - the code was changed to call a different\n> > function, but the comments still reference generate_gather_paths().\n> >\n> > The one exception seems to be create_ordered_paths(), because that\n> > comment also makes statements about what generate_gather_pathes is\n> > doing. And some of it does not apply to generate_useful_gather_paths.\n> > For example it says it generates order-preserving Gather Merge paths,\n> > but generate_useful_gather_paths also generates paths with sorts (which\n> > are clearly not order-preserving).\n> >\n> > So I think this comment will need a bit more work to update ...\n>\n> Status update for a commitfest entry.\n>\n> This patch has been \"Waiting on Author\" without seeing any activity\n> since Tomas sent review comments. I'm planning to set it to \"Returned\n> with Feedback”, barring objections.\n>\n\nThis patch, which you submitted to this CommitFest, has been awaiting\nyour attention for more than one month. As such, we have moved it to\n\"Returned with Feedback\" and removed it from the reviewing queue.\nDepending on timing, this may be reversable, so let us know if there\nare extenuating circumstances. 
In any case, you are welcome to address\nthe feedback you have received, and resubmit the patch to the next\nCommitFest.\n\nThank you for contributing to PostgreSQL.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 22:23:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo about generate_gather_paths"
},
{
"msg_contents": "On 2020-Dec-22, Tomas Vondra wrote:\n\n> Thanks. I started looking at this a bit more closely, and I think most of\n> the changes are fine - the code was changed to call a different function,\n> but the comments still reference generate_gather_paths().\n\nHi, this was forgotten. It seemed better to fix at least some of the\nwrong references than not do anything, so I pushed the parts that seemed\n100% correct. Regarding this one:\n\n> The one exception seems to be create_ordered_paths(), because that comment\n> also makes statements about what generate_gather_pathes is doing. And some\n> of it does not apply to generate_useful_gather_paths.\n> For example it says it generates order-preserving Gather Merge paths, but\n> generate_useful_gather_paths also generates paths with sorts (which are\n> clearly not order-preserving).\n\nI left this one out. If Hou or Tomas want to propose/push a further\npatch, that'd be great.\n\nThanks!\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)\n\n\n",
"msg_date": "Tue, 23 Feb 2021 20:12:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo about generate_gather_paths"
}
]
[
{
"msg_contents": "Hi!\r\n\r\nI created a patch for improving CLOSE, FETCH, MOVE tab completion.\r\nSpecifically, I add CLOSE, FETCH, MOVE tab completion for completing a predefined cursors.\r\n\r\nRegards,\r\nShinya Kato",
"msg_date": "Wed, 9 Dec 2020 03:57:55 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 12:59 PM <Shinya11.Kato@nttdata.com> wrote:\n>\n> Hi!\n>\n>\n>\n> I created a patch for improving CLOSE, FETCH, MOVE tab completion.\n>\n> Specifically, I add CLOSE, FETCH, MOVE tab completion for completing a predefined cursors.\n>\n\nThank you for the patch!\n\nWhen I applied the patch, I got the following whitespace warnings:\n\n$ git apply ~/patches/fix_tab_complete_close_fetch_move.patch\n/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:40:\nindent with spaces.\n COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:41:\nindent with spaces.\n \" UNION SELECT 'ABSOLUTE'\"\n/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:42:\nindent with spaces.\n \" UNION SELECT 'BACKWARD'\"\n/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:43:\nindent with spaces.\n \" UNION SELECT 'FORWARD'\"\n/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:44:\nindent with spaces.\n \" UNION SELECT 'RELATIVE'\"\nwarning: squelched 19 whitespace errors\nwarning: 24 lines add whitespace errors.\n\nI recommend you checking whitespaces or running pgindent.\n\nHere are some comments:\n\n /*\n- * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD, RELATIVE, ALL,\n- * NEXT, PRIOR, FIRST, LAST\n+ * Complete FETCH with a list of cursors and one of ABSOLUTE,\nBACKWARD, FORWARD, RELATIVE, ALL,\n+ * NEXT, PRIOR, FIRST, LAST, FROM, IN\n */\n\nMaybe I think the commend should say:\n\n+ * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD, RELATIVE, ALL,\n+ * NEXT, PRIOR, FIRST, LAST, FROM, IN, and a list of cursors\n\nOther updates of the comment seem to have the same issue.\n\nOr I think we can omit the details from the comment since it seems\nredundant information. We can read it from the arguments of the\nfollowing COMPLETE_WITH_QUERY().\n\n---\n- /*\n- * Complete FETCH <direction> with \"FROM\" or \"IN\". 
These are equivalent,\n- * but we may as well tab-complete both: perhaps some users prefer one\n- * variant or the other.\n- */\n+ /* Complete FETCH <direction> with a list of cursors and \"FROM\" or \"IN\" */\n\nWhy did you remove the second sentence in the above comment?\n\n---\nThe patch improves tab completion for CLOSE, FETCH, and MOVE but is\nthere any reason why you didn't do that for DECLARE? I think DECLARE\nalso can be improved and it's a good timing for that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Jan 2021 20:56:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "Thank you for your review!\r\nI fixed some codes and attach a new patch.\r\n\r\n>When I applied the patch, I got the following whitespace warnings:\r\n>\r\n>$ git apply ~/patches/fix_tab_complete_close_fetch_move.patch\r\n>/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:40:\r\n>indent with spaces.\r\n> COMPLETE_WITH_QUERY(Query_for_list_of_cursors\r\n>/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:41:\r\n>indent with spaces.\r\n> \" UNION SELECT 'ABSOLUTE'\"\r\n>/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:42:\r\n>indent with spaces.\r\n> \" UNION SELECT 'BACKWARD'\"\r\n>/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:43:\r\n>indent with spaces.\r\n> \" UNION SELECT 'FORWARD'\"\r\n>/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:44:\r\n>indent with spaces.\r\n> \" UNION SELECT 'RELATIVE'\"\r\n>warning: squelched 19 whitespace errors\r\n>warning: 24 lines add whitespace errors.\r\n>\r\n>I recommend you checking whitespaces or running pgindent.\r\n\r\nThank you for your advice and I have corrected it.\r\n\r\n>Here are some comments:\r\n>\r\n> /*\r\n>- * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\r\n>RELATIVE, ALL,\r\n>- * NEXT, PRIOR, FIRST, LAST\r\n>+ * Complete FETCH with a list of cursors and one of ABSOLUTE,\r\n>BACKWARD, FORWARD, RELATIVE, ALL,\r\n>+ * NEXT, PRIOR, FIRST, LAST, FROM, IN\r\n> */\r\n>\r\n>Maybe I think the commend should say:\r\n>\r\n>+ * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\r\n>RELATIVE, ALL,\r\n>+ * NEXT, PRIOR, FIRST, LAST, FROM, IN, and a list of cursors\r\n>\r\n>Other updates of the comment seem to have the same issue.\r\n>\r\n>Or I think we can omit the details from the comment since it seems redundant\r\n>information. 
We can read it from the arguments of the following\r\n>COMPLETE_WITH_QUERY().\r\n\r\nIt certainly seems redundant, so I deleted them.\r\n\r\n>---\r\n>- /*\r\n>- * Complete FETCH <direction> with \"FROM\" or \"IN\". These are equivalent,\r\n>- * but we may as well tab-complete both: perhaps some users prefer one\r\n>- * variant or the other.\r\n>- */\r\n>+ /* Complete FETCH <direction> with a list of cursors and \"FROM\" or\r\n>+ \"IN\" */\r\n>\r\n>Why did you remove the second sentence in the above comment?\r\n\r\nI had misunderstood the meaning and deleted it.\r\nI deleted it as well as above, but would you prefer it to be there?\r\n\r\n>---\r\n>The patch improves tab completion for CLOSE, FETCH, and MOVE but is there\r\n>any reason why you didn't do that for DECLARE? I think DECLARE also can be\r\n>improved and it's a good timing for that.\r\n\r\nI wanted to improve tab completion for DECLARE, but I couldn't find anything to improve.\r\nPlease let me know if there are any codes that can be improved.\r\n\r\nRegards,\r\nShinya Kato",
"msg_date": "Tue, 5 Jan 2021 06:02:02 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "\n\nOn 2021/01/05 15:02, Shinya11.Kato@nttdata.com wrote:\n> Thank you for your review!\n> I fixed some codes and attach a new patch.\n\nThanks for updating the patch!\n\n+#define Query_for_list_of_cursors \\\n+\" SELECT name FROM pg_cursors\"\\\n\nThis query should be the following?\n\n\" SELECT pg_catalog.quote_ident(name) \"\\\n\" FROM pg_catalog.pg_cursors \"\\\n\" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n\n+/* CLOSE */\n+\telse if (Matches(\"CLOSE\"))\n+\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_cursors\n+\t\t\t\t\t\t\t\" UNION ALL SELECT 'ALL'\");\n\n\"UNION ALL\" should be \"UNION\"?\n\n+\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_cursors\n+\t\t\t\t\t\t\t\" UNION SELECT 'ABSOLUTE'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'BACKWARD'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'FORWARD'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'RELATIVE'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'ALL'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'NEXT'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'PRIOR'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'FIRST'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'LAST'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'FROM'\"\n+\t\t\t\t\t\t\t\" UNION SELECT 'IN'\");\n\nThis change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n\n> \n>> When I applied the patch, I got the following whitespace warnings:\n>>\n>> $ git apply ~/patches/fix_tab_complete_close_fetch_move.patch\n>> /home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:40:\n>> indent with spaces.\n>> COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>> /home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:41:\n>> indent with spaces.\n>> \" UNION SELECT 'ABSOLUTE'\"\n>> /home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:42:\n>> indent with spaces.\n>> \" UNION SELECT 'BACKWARD'\"\n>> /home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:43:\n>> indent with spaces.\n>> \" UNION SELECT 'FORWARD'\"\n>> /home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:44:\n>> indent 
with spaces.\n>> \" UNION SELECT 'RELATIVE'\"\n>> warning: squelched 19 whitespace errors\n>> warning: 24 lines add whitespace errors.\n>>\n>> I recommend you checking whitespaces or running pgindent.\n> \n> Thank you for your advice and I have corrected it.\n> \n>> Here are some comments:\n>>\n>> /*\n>> - * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\n>> RELATIVE, ALL,\n>> - * NEXT, PRIOR, FIRST, LAST\n>> + * Complete FETCH with a list of cursors and one of ABSOLUTE,\n>> BACKWARD, FORWARD, RELATIVE, ALL,\n>> + * NEXT, PRIOR, FIRST, LAST, FROM, IN\n>> */\n>>\n>> Maybe I think the commend should say:\n>>\n>> + * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\n>> RELATIVE, ALL,\n>> + * NEXT, PRIOR, FIRST, LAST, FROM, IN, and a list of cursors\n>>\n>> Other updates of the comment seem to have the same issue.\n>>\n>> Or I think we can omit the details from the comment since it seems redundant\n>> information. We can read it from the arguments of the following\n>> COMPLETE_WITH_QUERY().\n> \n> It certainly seems redundant, so I deleted them.\n\nI think that it's better to update and keep those comments rather than removing them.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 5 Jan 2021 18:08:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 3:03 PM <Shinya11.Kato@nttdata.com> wrote:\n>\n> Thank you for your review!\n> I fixed some codes and attach a new patch.\n\nThank you for updating the patch!\n\n>\n> >When I applied the patch, I got the following whitespace warnings:\n> >\n> >$ git apply ~/patches/fix_tab_complete_close_fetch_move.patch\n> >/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:40:\n> >indent with spaces.\n> > COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:41:\n> >indent with spaces.\n> > \" UNION SELECT 'ABSOLUTE'\"\n> >/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:42:\n> >indent with spaces.\n> > \" UNION SELECT 'BACKWARD'\"\n> >/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:43:\n> >indent with spaces.\n> > \" UNION SELECT 'FORWARD'\"\n> >/home/masahiko/patches/fix_tab_complete_close_fetch_move.patch:44:\n> >indent with spaces.\n> > \" UNION SELECT 'RELATIVE'\"\n> >warning: squelched 19 whitespace errors\n> >warning: 24 lines add whitespace errors.\n> >\n> >I recommend you checking whitespaces or running pgindent.\n>\n> Thank you for your advice and I have corrected it.\n>\n> >Here are some comments:\n> >\n> > /*\n> >- * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\n> >RELATIVE, ALL,\n> >- * NEXT, PRIOR, FIRST, LAST\n> >+ * Complete FETCH with a list of cursors and one of ABSOLUTE,\n> >BACKWARD, FORWARD, RELATIVE, ALL,\n> >+ * NEXT, PRIOR, FIRST, LAST, FROM, IN\n> > */\n> >\n> >Maybe I think the commend should say:\n> >\n> >+ * Complete FETCH with one of ABSOLUTE, BACKWARD, FORWARD,\n> >RELATIVE, ALL,\n> >+ * NEXT, PRIOR, FIRST, LAST, FROM, IN, and a list of cursors\n> >\n> >Other updates of the comment seem to have the same issue.\n> >\n> >Or I think we can omit the details from the comment since it seems redundant\n> >information. 
We can read it from the arguments of the following\n> >COMPLETE_WITH_QUERY().\n>\n> It certainly seems redundant, so I deleted them.\n>\n> >---\n> >- /*\n> >- * Complete FETCH <direction> with \"FROM\" or \"IN\". These are equivalent,\n> >- * but we may as well tab-complete both: perhaps some users prefer one\n> >- * variant or the other.\n> >- */\n> >+ /* Complete FETCH <direction> with a list of cursors and \"FROM\" or\n> >+ \"IN\" */\n> >\n> >Why did you remove the second sentence in the above comment?\n>\n> I had misunderstood the meaning and deleted it.\n> I deleted it as well as above, but would you prefer it to be there?\n\nI would leave it. I realized this area was recently updated by commit\n8176afd8b7. In that change, the comments were updated rather than\nremoved. So it might be better to leave them. Sorry for confusing you.\n\n>\n> >---\n> >The patch improves tab completion for CLOSE, FETCH, and MOVE but is there\n> >any reason why you didn't do that for DECLARE? I think DECLARE also can be\n> >improved and it's a good timing for that.\n>\n> I wanted to improve tab completion for DECLARE, but I couldn't find anything to improve.\n> Please let me know if there are any codes that can be improved.\n\nI've attached the patch improving the tab completion for DECLARE as an\nexample. What do you think?\n\nBTW according to the documentation, the options of the DECLARE statement\n(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n\nDECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n\nBut I realized that these options are actually order-insensitive. For\ninstance, we can declare a cursor like:\n\n=# declare abc scroll binary cursor for select * from pg_class;\nDECLARE CURSOR\n\nBoth the parser code and the documentation have been unchanged since 2003.\nIs it a documentation bug?\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 5 Jan 2021 18:56:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
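The FETCH completion list discussed in the message above — open cursor names unioned with the direction keywords and the optional FROM/IN — can be modeled outside psql. Below is a minimal Python sketch; the cursor list is a made-up stand-in for the `pg_cursors` view, and psql itself does this in C via `COMPLETE_WITH_QUERY()`:

```python
# Toy model of the proposed FETCH completion: cursor names (a stand-in
# for the pg_cursors view) unioned with the direction keywords and the
# optional FROM/IN, filtered by the word typed so far.
DIRECTION_KEYWORDS = [
    "ABSOLUTE", "BACKWARD", "FORWARD", "RELATIVE", "ALL",
    "NEXT", "PRIOR", "FIRST", "LAST",
]

def fetch_candidates(open_cursors, prefix=""):
    """Candidates offered for FETCH <tab>, case-insensitively prefix-filtered."""
    candidates = set(open_cursors) | set(DIRECTION_KEYWORDS) | {"FROM", "IN"}
    return sorted(c for c in candidates if c.upper().startswith(prefix.upper()))

# "FETCH f<tab>" offers both keywords and cursors starting with "f":
print(fetch_candidates(["abc", "foo_cursor"], "f"))
# -> ['FIRST', 'FORWARD', 'FROM', 'foo_cursor']
```

As in the thread, FROM and IN appear at the top level of the candidate set because the direction may be empty.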
{
"msg_contents": "On Tue, Jan 5, 2021 at 6:08 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> + \" UNION SELECT 'ABSOLUTE'\"\n> + \" UNION SELECT 'BACKWARD'\"\n> + \" UNION SELECT 'FORWARD'\"\n> + \" UNION SELECT 'RELATIVE'\"\n> + \" UNION SELECT 'ALL'\"\n> + \" UNION SELECT 'NEXT'\"\n> + \" UNION SELECT 'PRIOR'\"\n> + \" UNION SELECT 'FIRST'\"\n> + \" UNION SELECT 'LAST'\"\n> + \" UNION SELECT 'FROM'\"\n> + \" UNION SELECT 'IN'\");\n>\n> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n\nI think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\nthe documentation, the direction can be empty. For instance, we can do\nlike:\n\npostgres(1:7668)=# begin;\nBEGIN\n\npostgres(1:7668)=# declare test cursor for select relname from pg_class;\nDECLARE CURSOR\n\npostgres(1:7668)=# fetch from test;\n relname\n--------------\n pg_statistic\n(1 row)\n\npostgres(1:7668)=# fetch in test;\n relname\n---------\n pg_type\n(1 row)\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 6 Jan 2021 11:13:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
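The point made above — that the direction is optional, so FROM/IN may directly follow FETCH — can be illustrated with a toy parser. This is a deliberately simplified sketch of the FETCH grammar (row counts such as `FETCH 5 FROM c` and multi-word directions are omitted):

```python
# Toy sketch of the FETCH grammar discussed above:
#     FETCH [ direction ] [ FROM | IN ] cursor_name
# Both the direction and FROM/IN are optional, which is why
# "fetch from test" and "fetch in test" are valid.
DIRECTIONS = {
    "ABSOLUTE", "BACKWARD", "FORWARD", "RELATIVE", "ALL",
    "NEXT", "PRIOR", "FIRST", "LAST",
}

def fetch_cursor_name(stmt):
    words = stmt.split()
    assert words[0].upper() == "FETCH"
    i = 1
    if words[i].upper() in DIRECTIONS:      # the direction is optional
        i += 1
    if words[i].upper() in {"FROM", "IN"}:  # and so is FROM/IN
        i += 1
    return words[i]

print(fetch_cursor_name("fetch from test"))        # -> test
print(fetch_cursor_name("fetch forward in test"))  # -> test
```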
{
"msg_contents": "\n\nOn 2021/01/06 11:13, Masahiko Sawada wrote:\n> On Tue, Jan 5, 2021 at 6:08 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>> + \" UNION SELECT 'ABSOLUTE'\"\n>> + \" UNION SELECT 'BACKWARD'\"\n>> + \" UNION SELECT 'FORWARD'\"\n>> + \" UNION SELECT 'RELATIVE'\"\n>> + \" UNION SELECT 'ALL'\"\n>> + \" UNION SELECT 'NEXT'\"\n>> + \" UNION SELECT 'PRIOR'\"\n>> + \" UNION SELECT 'FIRST'\"\n>> + \" UNION SELECT 'LAST'\"\n>> + \" UNION SELECT 'FROM'\"\n>> + \" UNION SELECT 'IN'\");\n>>\n>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n> \n> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n> the documentation, the direction can be empty.\n\nYou're right. Thanks for correcting me!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 6 Jan 2021 11:33:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": ">+#define Query_for_list_of_cursors \\\r\n>+\" SELECT name FROM pg_cursors\"\\\r\n>\r\n>This query should be the following?\r\n>\r\n>\" SELECT pg_catalog.quote_ident(name) \"\\\r\n>\" FROM pg_catalog.pg_cursors \"\\\r\n>\" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\r\n>\r\n>+/* CLOSE */\r\n>+\telse if (Matches(\"CLOSE\"))\r\n>+\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_cursors\r\n>+\t\t\t\t\t\t\t\" UNION ALL SELECT 'ALL'\");\r\n>\r\n>\"UNION ALL\" should be \"UNION\"?\r\n\r\nThank you for your advice, and I corrected them.\r\n\r\n>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\r\n>> + \" UNION SELECT 'ABSOLUTE'\"\r\n>> + \" UNION SELECT 'BACKWARD'\"\r\n>> + \" UNION SELECT 'FORWARD'\"\r\n>> + \" UNION SELECT 'RELATIVE'\"\r\n>> + \" UNION SELECT 'ALL'\"\r\n>> + \" UNION SELECT 'NEXT'\"\r\n>> + \" UNION SELECT 'PRIOR'\"\r\n>> + \" UNION SELECT 'FIRST'\"\r\n>> + \" UNION SELECT 'LAST'\"\r\n>> + \" UNION SELECT 'FROM'\"\r\n>> + \" UNION SELECT 'IN'\");\r\n>>\r\n>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\r\n>\r\n>I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\r\n>the documentation, the direction can be empty. For instance, we can do\r\n>like:\r\n\r\nThank you!\r\n\r\n>I've attached the patch improving the tab completion for DECLARE as an\r\n>example. What do you think?\r\n>\r\n>BTW according to the documentation, the options of DECLARE statement\r\n>(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\r\n>\r\n>DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\r\n> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\r\n>\r\n>But I realized that these options are actually order-insensitive. 
For\r\n>instance, we can declare a cursor like:\r\n>\r\n>=# declare abc scroll binary cursor for select * from pg_class;\r\n>DECLARE CURSOR\r\n>\r\n>The both parser code and documentation has been unchanged from 2003.\r\n>Is it a documentation bug?\r\n\r\nThank you for your patch; it looks good.\r\nI cannot find any description saying that BINARY, INSENSITIVE, SCROLL, and NO SCROLL are order-sensitive.\r\nInstead, the documentation says \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\"\r\n\r\nI made a new patch, but the amount of code is large because the options are order-insensitive.\r\nIf you know of a better way, please let me know.\r\n\r\nRegards,\r\nShinya Kato",
"msg_date": "Wed, 6 Jan 2021 06:36:05 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
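The corrected completion query above (`pg_catalog.quote_ident()` plus the `substring(...,1,%d)='%s'` prefix filter) can be mimicked in a few lines of Python. The `quote_ident()` model below is intentionally simplified — the real function also quotes reserved keywords:

```python
import re

def quote_ident(name):
    # Simplified model of pg_catalog.quote_ident(): names that are not
    # plain lower-case identifiers are double-quoted, with embedded
    # double quotes doubled.  (The real function also quotes keywords.)
    if re.fullmatch(r"[a-z_][a-z0-9_]*", name):
        return name
    return '"' + name.replace('"', '""') + '"'

def matching_cursors(cursor_names, word_so_far):
    # Model of: WHERE substring(pg_catalog.quote_ident(name),1,%d) = '%s'
    # i.e. the quoted identifier is compared against the prefix typed so far.
    return [quote_ident(n) for n in cursor_names
            if quote_ident(n)[:len(word_so_far)] == word_so_far]

# A mixed-case cursor name completes in its quoted form:
print(matching_cursors(["my_cursor", "My Cursor"], '"'))  # -> ['"My Cursor"']
print(matching_cursors(["my_cursor", "My Cursor"], "my")) # -> ['my_cursor']
```

This is why quoting inside the query matters: completing on the quoted form lets the user keep typing a double-quoted cursor name.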
{
"msg_contents": "On Wed, Jan 6, 2021 at 3:37 PM <Shinya11.Kato@nttdata.com> wrote:\n>\n> >+#define Query_for_list_of_cursors \\\n> >+\" SELECT name FROM pg_cursors\"\\\n> >\n> >This query should be the following?\n> >\n> >\" SELECT pg_catalog.quote_ident(name) \"\\\n> >\" FROM pg_catalog.pg_cursors \"\\\n> >\" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n> >\n> >+/* CLOSE */\n> >+ else if (Matches(\"CLOSE\"))\n> >+ COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >+ \" UNION ALL SELECT 'ALL'\");\n> >\n> >\"UNION ALL\" should be \"UNION\"?\n>\n> Thank you for your advice, and I corrected them.\n>\n> >> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >> + \" UNION SELECT 'ABSOLUTE'\"\n> >> + \" UNION SELECT 'BACKWARD'\"\n> >> + \" UNION SELECT 'FORWARD'\"\n> >> + \" UNION SELECT 'RELATIVE'\"\n> >> + \" UNION SELECT 'ALL'\"\n> >> + \" UNION SELECT 'NEXT'\"\n> >> + \" UNION SELECT 'PRIOR'\"\n> >> + \" UNION SELECT 'FIRST'\"\n> >> + \" UNION SELECT 'LAST'\"\n> >> + \" UNION SELECT 'FROM'\"\n> >> + \" UNION SELECT 'IN'\");\n> >>\n> >> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n> >\n> >I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n> >the documentation, the direction can be empty. For instance, we can do\n> >like:\n>\n> Thank you!\n>\n> >I've attached the patch improving the tab completion for DECLARE as an\n> >example. What do you think?\n> >\n> >BTW according to the documentation, the options of DECLARE statement\n> >(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> >\n> >DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> > CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> >\n> >But I realized that these options are actually order-insensitive. 
For\n> >instance, we can declare a cursor like:\n> >\n> >=# declare abc scroll binary cursor for select * from pg_class;\n> >DECLARE CURSOR\n> >\n> >The both parser code and documentation has been unchanged from 2003.\n> >Is it a documentation bug?\n>\n> Thank you for your patch, and it is good.\n> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n\nThanks, you're right. I missed that sentence. I still don't think the\nsyntax of DECLARE statement in the documentation doesn't match its\nimplementation but I agree that it's order-insensitive.\n\n> I made a new patch, but the amount of codes was large due to order-insensitive.\n\nThank you for updating the patch!\n\nYeah, I'm also afraid a bit that conditions will exponentially\nincrease when a new option is added to DECLARE statement in the\nfuture. Looking at the parser code for DECLARE statement, we can put\nthe same options multiple times (that's also why I don't think it\nmatches). For instance,\n\npostgres(1:44758)=# begin;\nBEGIN\npostgres(1:44758)=# declare test binary binary binary cursor for\nselect * from pg_class;\nDECLARE CURSOR\n\nSo how about simplify the above code as follows?\n\n@@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n else if (Matches(\"DECLARE\", MatchAny))\n COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n \"CURSOR\");\n+ /*\n+ * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n+ * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n+ * DECLARE, assume we want options.\n+ */\n+ else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n+ TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n+ COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n+ \"CURSOR\");\n+ /*\n+ * Complete DECLARE <name> ... 
CURSOR with one of WITH HOLD, WITHOUT HOLD,\n+ * and FOR.\n+ */\n else if (HeadMatches(\"DECLARE\") && TailMatches(\"CURSOR\"))\n COMPLETE_WITH(\"WITH HOLD\", \"WITHOUT HOLD\", \"FOR\");\n+ else if (HeadMatches(\"DECLARE\") && TailMatches(\"HOLD\"))\n+ COMPLETE_WITH(\"FOR\");\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 7 Jan 2021 10:01:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "\n\nOn 2021/01/07 10:01, Masahiko Sawada wrote:\n> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11.Kato@nttdata.com> wrote:\n>>\n>>> +#define Query_for_list_of_cursors \\\n>>> +\" SELECT name FROM pg_cursors\"\\\n>>>\n>>> This query should be the following?\n>>>\n>>> \" SELECT pg_catalog.quote_ident(name) \"\\\n>>> \" FROM pg_catalog.pg_cursors \"\\\n>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n>>>\n>>> +/* CLOSE */\n>>> + else if (Matches(\"CLOSE\"))\n>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>> + \" UNION ALL SELECT 'ALL'\");\n>>>\n>>> \"UNION ALL\" should be \"UNION\"?\n>>\n>> Thank you for your advice, and I corrected them.\n>>\n>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>>> + \" UNION SELECT 'ABSOLUTE'\"\n>>>> + \" UNION SELECT 'BACKWARD'\"\n>>>> + \" UNION SELECT 'FORWARD'\"\n>>>> + \" UNION SELECT 'RELATIVE'\"\n>>>> + \" UNION SELECT 'ALL'\"\n>>>> + \" UNION SELECT 'NEXT'\"\n>>>> + \" UNION SELECT 'PRIOR'\"\n>>>> + \" UNION SELECT 'FIRST'\"\n>>>> + \" UNION SELECT 'LAST'\"\n>>>> + \" UNION SELECT 'FROM'\"\n>>>> + \" UNION SELECT 'IN'\");\n>>>>\n>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n>>>\n>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n>>> the documentation, the direction can be empty. For instance, we can do\n>>> like:\n>>\n>> Thank you!\n>>\n>>> I've attached the patch improving the tab completion for DECLARE as an\n>>> example. What do you think?\n>>>\n>>> BTW according to the documentation, the options of DECLARE statement\n>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n>>>\n>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n>>>\n>>> But I realized that these options are actually order-insensitive. 
For\n>>> instance, we can declare a cursor like:\n>>>\n>>> =# declare abc scroll binary cursor for select * from pg_class;\n>>> DECLARE CURSOR\n>>>\n>>> The both parser code and documentation has been unchanged from 2003.\n>>> Is it a documentation bug?\n>>\n>> Thank you for your patch, and it is good.\n>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n> \n> Thanks, you're right. I missed that sentence. I still don't think the\n> syntax of DECLARE statement in the documentation doesn't match its\n> implementation but I agree that it's order-insensitive.\n> \n>> I made a new patch, but the amount of codes was large due to order-insensitive.\n> \n> Thank you for updating the patch!\n> \n> Yeah, I'm also afraid a bit that conditions will exponentially\n> increase when a new option is added to DECLARE statement in the\n> future. Looking at the parser code for DECLARE statement, we can put\n> the same options multiple times (that's also why I don't think it\n> matches). For instance,\n> \n> postgres(1:44758)=# begin;\n> BEGIN\n> postgres(1:44758)=# declare test binary binary binary cursor for\n> select * from pg_class;\n> DECLARE CURSOR\n> \n> So how about simplify the above code as follows?\n> \n> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n> else if (Matches(\"DECLARE\", MatchAny))\n> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> \"CURSOR\");\n> + /*\n> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> + * NO SCROLL, and CURSOR. 
The tail doesn't match any keywords for\n> + * DECLARE, assume we want options.\n> + */\n> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> + \"CURSOR\");\n\nThis change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\nunexpectedly output BINARY, INSENSITIVE, etc.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 Jan 2021 10:59:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
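Fujii-san's counterexample can be reproduced with a toy model of the `HeadMatches()`/`TailMatches()` helpers. This is only a sketch — psql's real matching logic in tab-complete.c is richer — but it is enough to show why the proposed condition also fires after the query keyword:

```python
def head_matches(words, *pattern):
    # Toy model of psql's HeadMatches(): the pattern is anchored at the
    # start of the buffer and "*" matches any single word.
    if len(words) < len(pattern):
        return False
    return all(p == "*" or p.upper() == w.upper()
               for p, w in zip(pattern, words))

def tail_matches_any_except(words, excluded):
    # Toy model of TailMatches(MatchAnyExcept("A|B|...")): true when the
    # last word is none of the excluded keywords.
    return bool(words) and words[-1].upper() not in set(excluded.upper().split("|"))

def proposed_rule_fires(buffer):
    # HeadMatches("DECLARE", MatchAny, "*") &&
    # TailMatches(MatchAnyExcept("CURSOR|WITH|WITHOUT|HOLD|FOR"))
    words = buffer.split()
    return (head_matches(words, "DECLARE", "*", "*") and
            tail_matches_any_except(words, "CURSOR|WITH|WITHOUT|HOLD|FOR"))

print(proposed_rule_fires("DECLARE c BINARY"))             # True, as intended
print(proposed_rule_fires("DECLARE c CURSOR FOR SELECT"))  # True -- the misfire
```

"SELECT" is not in the excluded list, so the tail test passes and the option keywords get offered in the middle of the query.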
{
"msg_contents": "On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/01/07 10:01, Masahiko Sawada wrote:\n> > On Wed, Jan 6, 2021 at 3:37 PM <Shinya11.Kato@nttdata.com> wrote:\n> >>\n> >>> +#define Query_for_list_of_cursors \\\n> >>> +\" SELECT name FROM pg_cursors\"\\\n> >>>\n> >>> This query should be the following?\n> >>>\n> >>> \" SELECT pg_catalog.quote_ident(name) \"\\\n> >>> \" FROM pg_catalog.pg_cursors \"\\\n> >>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n> >>>\n> >>> +/* CLOSE */\n> >>> + else if (Matches(\"CLOSE\"))\n> >>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>> + \" UNION ALL SELECT 'ALL'\");\n> >>>\n> >>> \"UNION ALL\" should be \"UNION\"?\n> >>\n> >> Thank you for your advice, and I corrected them.\n> >>\n> >>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>>> + \" UNION SELECT 'ABSOLUTE'\"\n> >>>> + \" UNION SELECT 'BACKWARD'\"\n> >>>> + \" UNION SELECT 'FORWARD'\"\n> >>>> + \" UNION SELECT 'RELATIVE'\"\n> >>>> + \" UNION SELECT 'ALL'\"\n> >>>> + \" UNION SELECT 'NEXT'\"\n> >>>> + \" UNION SELECT 'PRIOR'\"\n> >>>> + \" UNION SELECT 'FIRST'\"\n> >>>> + \" UNION SELECT 'LAST'\"\n> >>>> + \" UNION SELECT 'FROM'\"\n> >>>> + \" UNION SELECT 'IN'\");\n> >>>>\n> >>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n> >>>\n> >>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n> >>> the documentation, the direction can be empty. For instance, we can do\n> >>> like:\n> >>\n> >> Thank you!\n> >>\n> >>> I've attached the patch improving the tab completion for DECLARE as an\n> >>> example. 
What do you think?\n> >>>\n> >>> BTW according to the documentation, the options of DECLARE statement\n> >>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> >>>\n> >>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> >>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> >>>\n> >>> But I realized that these options are actually order-insensitive. For\n> >>> instance, we can declare a cursor like:\n> >>>\n> >>> =# declare abc scroll binary cursor for select * from pg_class;\n> >>> DECLARE CURSOR\n> >>>\n> >>> The both parser code and documentation has been unchanged from 2003.\n> >>> Is it a documentation bug?\n> >>\n> >> Thank you for your patch, and it is good.\n> >> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n> >> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n> >\n> > Thanks, you're right. I missed that sentence. I still don't think the\n> > syntax of DECLARE statement in the documentation doesn't match its\n> > implementation but I agree that it's order-insensitive.\n> >\n> >> I made a new patch, but the amount of codes was large due to order-insensitive.\n> >\n> > Thank you for updating the patch!\n> >\n> > Yeah, I'm also afraid a bit that conditions will exponentially\n> > increase when a new option is added to DECLARE statement in the\n> > future. Looking at the parser code for DECLARE statement, we can put\n> > the same options multiple times (that's also why I don't think it\n> > matches). 
For instance,\n> >\n> > postgres(1:44758)=# begin;\n> > BEGIN\n> > postgres(1:44758)=# declare test binary binary binary cursor for\n> > select * from pg_class;\n> > DECLARE CURSOR\n> >\n> > So how about simplify the above code as follows?\n> >\n> > @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n> > else if (Matches(\"DECLARE\", MatchAny))\n> > COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> > \"CURSOR\");\n> > + /*\n> > + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> > + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n> > + * DECLARE, assume we want options.\n> > + */\n> > + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n> > + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n> > + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> > + \"CURSOR\");\n>\n> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\n> unexpectedly output BINARY, INSENSITIVE, etc.\n\nIndeed. Is the following not complete but much better?\n\n@@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\n \" UNION SELECT 'ALL'\");\n\n /* DECLARE */\n- else if (Matches(\"DECLARE\", MatchAny))\n- COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n- \"CURSOR\");\n+ /*\n+ * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n+ * NO SCROLL, and CURSOR. 
If the tail is any one of options, assume we\n+ * still want options.\n+ */\n+ else if (Matches(\"DECLARE\", MatchAny) ||\n+ TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n+ COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\n else if (HeadMatches(\"DECLARE\") && TailMatches(\"CURSOR\"))\n COMPLETE_WITH(\"WITH HOLD\", \"WITHOUT HOLD\", \"FOR\");\n+ else if (HeadMatches(\"DECLARE\") && TailMatches(\"HOLD\"))\n+ COMPLETE_WITH(\"FOR\");\n\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 7 Jan 2021 12:42:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "\n\nOn 2021/01/07 12:42, Masahiko Sawada wrote:\n> On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/01/07 10:01, Masahiko Sawada wrote:\n>>> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11.Kato@nttdata.com> wrote:\n>>>>\n>>>>> +#define Query_for_list_of_cursors \\\n>>>>> +\" SELECT name FROM pg_cursors\"\\\n>>>>>\n>>>>> This query should be the following?\n>>>>>\n>>>>> \" SELECT pg_catalog.quote_ident(name) \"\\\n>>>>> \" FROM pg_catalog.pg_cursors \"\\\n>>>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n>>>>>\n>>>>> +/* CLOSE */\n>>>>> + else if (Matches(\"CLOSE\"))\n>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>>>> + \" UNION ALL SELECT 'ALL'\");\n>>>>>\n>>>>> \"UNION ALL\" should be \"UNION\"?\n>>>>\n>>>> Thank you for your advice, and I corrected them.\n>>>>\n>>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>>>>> + \" UNION SELECT 'ABSOLUTE'\"\n>>>>>> + \" UNION SELECT 'BACKWARD'\"\n>>>>>> + \" UNION SELECT 'FORWARD'\"\n>>>>>> + \" UNION SELECT 'RELATIVE'\"\n>>>>>> + \" UNION SELECT 'ALL'\"\n>>>>>> + \" UNION SELECT 'NEXT'\"\n>>>>>> + \" UNION SELECT 'PRIOR'\"\n>>>>>> + \" UNION SELECT 'FIRST'\"\n>>>>>> + \" UNION SELECT 'LAST'\"\n>>>>>> + \" UNION SELECT 'FROM'\"\n>>>>>> + \" UNION SELECT 'IN'\");\n>>>>>>\n>>>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n>>>>>\n>>>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n>>>>> the documentation, the direction can be empty. For instance, we can do\n>>>>> like:\n>>>>\n>>>> Thank you!\n>>>>\n>>>>> I've attached the patch improving the tab completion for DECLARE as an\n>>>>> example. 
What do you think?\n>>>>>\n>>>>> BTW according to the documentation, the options of DECLARE statement\n>>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n>>>>>\n>>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n>>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n>>>>>\n>>>>> But I realized that these options are actually order-insensitive. For\n>>>>> instance, we can declare a cursor like:\n>>>>>\n>>>>> =# declare abc scroll binary cursor for select * from pg_class;\n>>>>> DECLARE CURSOR\n>>>>>\n>>>>> The both parser code and documentation has been unchanged from 2003.\n>>>>> Is it a documentation bug?\n>>>>\n>>>> Thank you for your patch, and it is good.\n>>>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n>>>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n>>>\n>>> Thanks, you're right. I missed that sentence. I still don't think the\n>>> syntax of DECLARE statement in the documentation doesn't match its\n>>> implementation but I agree that it's order-insensitive.\n>>>\n>>>> I made a new patch, but the amount of codes was large due to order-insensitive.\n>>>\n>>> Thank you for updating the patch!\n>>>\n>>> Yeah, I'm also afraid a bit that conditions will exponentially\n>>> increase when a new option is added to DECLARE statement in the\n>>> future. Looking at the parser code for DECLARE statement, we can put\n>>> the same options multiple times (that's also why I don't think it\n>>> matches). 
For instance,\n>>>\n>>> postgres(1:44758)=# begin;\n>>> BEGIN\n>>> postgres(1:44758)=# declare test binary binary binary cursor for\n>>> select * from pg_class;\n>>> DECLARE CURSOR\n>>>\n>>> So how about simplify the above code as follows?\n>>>\n>>> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n>>> else if (Matches(\"DECLARE\", MatchAny))\n>>> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n>>> \"CURSOR\");\n>>> + /*\n>>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n>>> + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n>>> + * DECLARE, assume we want options.\n>>> + */\n>>> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n>>> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n>>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n>>> + \"CURSOR\");\n>>\n>> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\n>> unexpectedly output BINARY, INSENSITIVE, etc.\n> \n> Indeed. Is the following not complete but much better?\n\nYes, I think that's better.\n\n> \n> @@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\n> \" UNION SELECT 'ALL'\");\n> \n> /* DECLARE */\n> - else if (Matches(\"DECLARE\", MatchAny))\n> - COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> - \"CURSOR\");\n> + /*\n> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> + * NO SCROLL, and CURSOR. If the tail is any one of options, assume we\n> + * still want options.\n> + */\n> + else if (Matches(\"DECLARE\", MatchAny) ||\n> + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\n\nThis change seems to cause \"DECLARE ... NO <tab>\" to unexpectedly output\n\"NO SCROLL\". Also this change seems to cause \"COPY ... 
(FORMAT BINARY <tab>\"\nto unexpectedly output BINARY, CURSOR, etc.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 Jan 2021 13:30:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
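The two misfires Fujii-san describes above follow directly from the revised condition lacking a `HeadMatches("DECLARE")` guard; a literal Python transcription of the rule shows both:

```python
# The list that COMPLETE_WITH() would offer whenever the rule fires:
DECLARE_OPTIONS = ["BINARY", "INSENSITIVE", "SCROLL", "NO SCROLL", "CURSOR"]

def completes_with_declare_options(buffer):
    # Literal transcription of the proposed condition:
    #   Matches("DECLARE", MatchAny) || TailMatches("BINARY|INSENSITIVE|SCROLL|NO")
    words = buffer.split()
    if len(words) == 2 and words[0].upper() == "DECLARE":
        return True
    return bool(words) and words[-1].upper() in {"BINARY", "INSENSITIVE", "SCROLL", "NO"}

# Misfire 1: after "NO" the full option list (including "NO SCROLL") is
# offered again, so psql would suggest "NO NO SCROLL".
print(completes_with_declare_options("DECLARE c NO"))            # True
# Misfire 2: without a HeadMatches("DECLARE") guard, the TailMatches half
# fires inside unrelated statements such as a COPY option list.
print(completes_with_declare_options("COPY t ( FORMAT BINARY"))  # True
```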
{
"msg_contents": "On Thu, Jan 7, 2021 at 1:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/01/07 12:42, Masahiko Sawada wrote:\n> > On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2021/01/07 10:01, Masahiko Sawada wrote:\n> >>> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11.Kato@nttdata.com> wrote:\n> >>>>\n> >>>>> +#define Query_for_list_of_cursors \\\n> >>>>> +\" SELECT name FROM pg_cursors\"\\\n> >>>>>\n> >>>>> This query should be the following?\n> >>>>>\n> >>>>> \" SELECT pg_catalog.quote_ident(name) \"\\\n> >>>>> \" FROM pg_catalog.pg_cursors \"\\\n> >>>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n> >>>>>\n> >>>>> +/* CLOSE */\n> >>>>> + else if (Matches(\"CLOSE\"))\n> >>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>>>> + \" UNION ALL SELECT 'ALL'\");\n> >>>>>\n> >>>>> \"UNION ALL\" should be \"UNION\"?\n> >>>>\n> >>>> Thank you for your advice, and I corrected them.\n> >>>>\n> >>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>>>>> + \" UNION SELECT 'ABSOLUTE'\"\n> >>>>>> + \" UNION SELECT 'BACKWARD'\"\n> >>>>>> + \" UNION SELECT 'FORWARD'\"\n> >>>>>> + \" UNION SELECT 'RELATIVE'\"\n> >>>>>> + \" UNION SELECT 'ALL'\"\n> >>>>>> + \" UNION SELECT 'NEXT'\"\n> >>>>>> + \" UNION SELECT 'PRIOR'\"\n> >>>>>> + \" UNION SELECT 'FIRST'\"\n> >>>>>> + \" UNION SELECT 'LAST'\"\n> >>>>>> + \" UNION SELECT 'FROM'\"\n> >>>>>> + \" UNION SELECT 'IN'\");\n> >>>>>>\n> >>>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n> >>>>>\n> >>>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n> >>>>> the documentation, the direction can be empty. For instance, we can do\n> >>>>> like:\n> >>>>\n> >>>> Thank you!\n> >>>>\n> >>>>> I've attached the patch improving the tab completion for DECLARE as an\n> >>>>> example. 
What do you think?\n> >>>>>\n> >>>>> BTW according to the documentation, the options of DECLARE statement\n> >>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> >>>>>\n> >>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> >>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> >>>>>\n> >>>>> But I realized that these options are actually order-insensitive. For\n> >>>>> instance, we can declare a cursor like:\n> >>>>>\n> >>>>> =# declare abc scroll binary cursor for select * from pg_class;\n> >>>>> DECLARE CURSOR\n> >>>>>\n> >>>>> The both parser code and documentation has been unchanged from 2003.\n> >>>>> Is it a documentation bug?\n> >>>>\n> >>>> Thank you for your patch, and it is good.\n> >>>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n> >>>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n> >>>\n> >>> Thanks, you're right. I missed that sentence. I still don't think the\n> >>> syntax of DECLARE statement in the documentation doesn't match its\n> >>> implementation but I agree that it's order-insensitive.\n> >>>\n> >>>> I made a new patch, but the amount of codes was large due to order-insensitive.\n> >>>\n> >>> Thank you for updating the patch!\n> >>>\n> >>> Yeah, I'm also afraid a bit that conditions will exponentially\n> >>> increase when a new option is added to DECLARE statement in the\n> >>> future. Looking at the parser code for DECLARE statement, we can put\n> >>> the same options multiple times (that's also why I don't think it\n> >>> matches). 
For instance,\n> >>>\n> >>> postgres(1:44758)=# begin;\n> >>> BEGIN\n> >>> postgres(1:44758)=# declare test binary binary binary cursor for\n> >>> select * from pg_class;\n> >>> DECLARE CURSOR\n> >>>\n> >>> So how about simplify the above code as follows?\n> >>>\n> >>> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n> >>> else if (Matches(\"DECLARE\", MatchAny))\n> >>> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> >>> \"CURSOR\");\n> >>> + /*\n> >>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> >>> + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n> >>> + * DECLARE, assume we want options.\n> >>> + */\n> >>> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n> >>> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n> >>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> >>> + \"CURSOR\");\n> >>\n> >> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\n> >> unexpectedly output BINARY, INSENSITIVE, etc.\n> >\n> > Indeed. Is the following not complete but much better?\n>\n> Yes, I think that's better.\n>\n> >\n> > @@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\n> > \" UNION SELECT 'ALL'\");\n> >\n> > /* DECLARE */\n> > - else if (Matches(\"DECLARE\", MatchAny))\n> > - COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> > - \"CURSOR\");\n> > + /*\n> > + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> > + * NO SCROLL, and CURSOR. If the tail is any one of options, assume we\n> > + * still want options.\n> > + */\n> > + else if (Matches(\"DECLARE\", MatchAny) ||\n> > + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n> > + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\n>\n> This change seems to cause \"DECLARE ... NO <tab>\" to unexpectedly output\n> \"NO SCROLL\". 
Also this change seems to cause \"COPY ... (FORMAT BINARY <tab>\"\n> to unexpectedly output BINARY, CURSOR, etc.\n\nOops, I missed the HeadMatches(). Thank you for pointing this out.\n\nI've attached the updated patch including Kato-san's changes. I\nthink the tab completion support for DECLARE added by this patch\nworks better.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 7 Jan 2021 15:53:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": ">On Thu, Jan 7, 2021 at 1:30 PM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\r\n>>\r\n>> On 2021/01/07 12:42, Masahiko Sawada wrote:\r\n>> > On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\r\n>> >>\r\n>> >> On 2021/01/07 10:01, Masahiko Sawada wrote:\r\n>> >>> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11(dot)Kato(at)nttdata(dot)com> wrote:\r\n>> >>>>\r\n>> >>>>> +#define Query_for_list_of_cursors \\\r\n>> >>>>> +\" SELECT name FROM pg_cursors\"\\\r\n>> >>>>>\r\n>> >>>>> This query should be the following?\r\n>> >>>>>\r\n>> >>>>> \" SELECT pg_catalog.quote_ident(name) \"\\\r\n>> >>>>> \" FROM pg_catalog.pg_cursors \"\\\r\n>> >>>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\r\n>> >>>>>\r\n>> >>>>> +/* CLOSE */\r\n>> >>>>> + else if (Matches(\"CLOSE\"))\r\n>> >>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\r\n>> >>>>> + \" UNION ALL SELECT 'ALL'\");\r\n>> >>>>>\r\n>> >>>>> \"UNION ALL\" should be \"UNION\"?\r\n>> >>>>\r\n>> >>>> Thank you for your advice, and I corrected them.\r\n>> >>>>\r\n>> >>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\r\n>> >>>>>> + \" UNION SELECT 'ABSOLUTE'\"\r\n>> >>>>>> + \" UNION SELECT 'BACKWARD'\"\r\n>> >>>>>> + \" UNION SELECT 'FORWARD'\"\r\n>> >>>>>> + \" UNION SELECT 'RELATIVE'\"\r\n>> >>>>>> + \" UNION SELECT 'ALL'\"\r\n>> >>>>>> + \" UNION SELECT 'NEXT'\"\r\n>> >>>>>> + \" UNION SELECT 'PRIOR'\"\r\n>> >>>>>> + \" UNION SELECT 'FIRST'\"\r\n>> >>>>>> + \" UNION SELECT 'LAST'\"\r\n>> >>>>>> + \" UNION SELECT 'FROM'\"\r\n>> >>>>>> + \" UNION SELECT 'IN'\");\r\n>> >>>>>>\r\n>> >>>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\r\n>> >>>>>\r\n>> >>>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\r\n>> >>>>> the documentation, the direction can be empty. 
For instance, we can do\r\n>> >>>>> like:\r\n>> >>>>\r\n>> >>>> Thank you!\r\n>> >>>>\r\n>> >>>>> I've attached the patch improving the tab completion for DECLARE as an\r\n>> >>>>> example. What do you think?\r\n>> >>>>>\r\n>> >>>>> BTW according to the documentation, the options of DECLARE statement\r\n>> >>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\r\n>> >>>>>\r\n>> >>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\r\n>> >>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\r\n>> >>>>>\r\n>> >>>>> But I realized that these options are actually order-insensitive. For\r\n>> >>>>> instance, we can declare a cursor like:\r\n>> >>>>>\r\n>> >>>>> =# declare abc scroll binary cursor for select * from pg_class;\r\n>> >>>>> DECLARE CURSOR\r\n>> >>>>>\r\n>> >>>>> The both parser code and documentation has been unchanged from 2003.\r\n>> >>>>> Is it a documentation bug?\r\n>> >>>>\r\n>> >>>> Thank you for your patch, and it is good.\r\n>> >>>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\r\n>> >>>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\r\n>> >>>\r\n>> >>> Thanks, you're right. I missed that sentence. I still don't think the\r\n>> >>> syntax of DECLARE statement in the documentation doesn't match its\r\n>> >>> implementation but I agree that it's order-insensitive.\r\n>> >>>\r\n>> >>>> I made a new patch, but the amount of codes was large due to order-insensitive.\r\n>> >>>\r\n>> >>> Thank you for updating the patch!\r\n>> >>>\r\n>> >>> Yeah, I'm also afraid a bit that conditions will exponentially\r\n>> >>> increase when a new option is added to DECLARE statement in the\r\n>> >>> future. Looking at the parser code for DECLARE statement, we can put\r\n>> >>> the same options multiple times (that's also why I don't think it\r\n>> >>> matches). 
For instance,\r\n>> >>>\r\n>> >>> postgres(1:44758)=# begin;\r\n>> >>> BEGIN\r\n>> >>> postgres(1:44758)=# declare test binary binary binary cursor for\r\n>> >>> select * from pg_class;\r\n>> >>> DECLARE CURSOR\r\n>> >>>\r\n>> >>> So how about simplify the above code as follows?\r\n>> >>>\r\n>> >>> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\r\n>> >>> else if (Matches(\"DECLARE\", MatchAny))\r\n>> >>> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\r\n>> >>> \"CURSOR\");\r\n>> >>> + /*\r\n>> >>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\r\n>> >>> + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\r\n>> >>> + * DECLARE, assume we want options.\r\n>> >>> + */\r\n>> >>> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\r\n>> >>> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\r\n>> >>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\r\n>> >>> + \"CURSOR\");\r\n>> >>\r\n>> >> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\r\n>> >> unexpectedly output BINARY, INSENSITIVE, etc.\r\n>> >\r\n>> > Indeed. Is the following not complete but much better?\r\n>>\r\n>> Yes, I think that's better.\r\n>>\r\n>> >\r\n>> > @@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\r\n>> > \" UNION SELECT 'ALL'\");\r\n>> >\r\n>> > /* DECLARE */\r\n>> > - else if (Matches(\"DECLARE\", MatchAny))\r\n>> > - COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\r\n>> > - \"CURSOR\");\r\n>> > + /*\r\n>> > + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\r\n>> > + * NO SCROLL, and CURSOR. 
If the tail is any one of options, assume we\r\n>> > + * still want options.\r\n>> > + */\r\n>> > + else if (Matches(\"DECLARE\", MatchAny) ||\r\n>> > + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\r\n>> > + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\r\n>>\r\n>> This change seems to cause \"DECLARE ... NO <tab>\" to unexpectedly output\r\n>> \"NO SCROLL\". Also this change seems to cause \"COPY ... (FORMAT BINARY <tab>\"\r\n>> to unexpectedly output BINARY, CURSOR, etc.\r\n>\r\n>Oops, I missed the HeadMatches(). Thank you for pointing this out.\r\n>\r\n>I've attached the updated patch including Kato-san's changes. I\r\n>think the tab completion support for DECLARE added by this patch\r\n>works better.\r\n\r\nThank you very much for the new patch!\r\nI checked the operation and I think it is good.\r\n\r\nRegards,\r\nShinya Kato\r\n",
"msg_date": "Thu, 7 Jan 2021 08:28:20 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On 2021/01/07 17:28, Shinya11.Kato@nttdata.com wrote:\n>> On Thu, Jan 7, 2021 at 1:30 PM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\n>>>\n>>> On 2021/01/07 12:42, Masahiko Sawada wrote:\n>>>> On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\n>>>>>\n>>>>> On 2021/01/07 10:01, Masahiko Sawada wrote:\n>>>>>> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11(dot)Kato(at)nttdata(dot)com> wrote:\n>>>>>>>\n>>>>>>>> +#define Query_for_list_of_cursors \\\n>>>>>>>> +\" SELECT name FROM pg_cursors\"\\\n>>>>>>>>\n>>>>>>>> This query should be the following?\n>>>>>>>>\n>>>>>>>> \" SELECT pg_catalog.quote_ident(name) \"\\\n>>>>>>>> \" FROM pg_catalog.pg_cursors \"\\\n>>>>>>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n>>>>>>>>\n>>>>>>>> +/* CLOSE */\n>>>>>>>> + else if (Matches(\"CLOSE\"))\n>>>>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>>>>>>> + \" UNION ALL SELECT 'ALL'\");\n>>>>>>>>\n>>>>>>>> \"UNION ALL\" should be \"UNION\"?\n>>>>>>>\n>>>>>>> Thank you for your advice, and I corrected them.\n>>>>>>>\n>>>>>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n>>>>>>>>> + \" UNION SELECT 'ABSOLUTE'\"\n>>>>>>>>> + \" UNION SELECT 'BACKWARD'\"\n>>>>>>>>> + \" UNION SELECT 'FORWARD'\"\n>>>>>>>>> + \" UNION SELECT 'RELATIVE'\"\n>>>>>>>>> + \" UNION SELECT 'ALL'\"\n>>>>>>>>> + \" UNION SELECT 'NEXT'\"\n>>>>>>>>> + \" UNION SELECT 'PRIOR'\"\n>>>>>>>>> + \" UNION SELECT 'FIRST'\"\n>>>>>>>>> + \" UNION SELECT 'LAST'\"\n>>>>>>>>> + \" UNION SELECT 'FROM'\"\n>>>>>>>>> + \" UNION SELECT 'IN'\");\n>>>>>>>>>\n>>>>>>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n>>>>>>>>\n>>>>>>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". According to\n>>>>>>>> the documentation, the direction can be empty. 
For instance, we can do\n>>>>>>>> like:\n>>>>>>>\n>>>>>>> Thank you!\n>>>>>>>\n>>>>>>>> I've attached the patch improving the tab completion for DECLARE as an\n>>>>>>>> example. What do you think?\n>>>>>>>>\n>>>>>>>> BTW according to the documentation, the options of DECLARE statement\n>>>>>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n>>>>>>>>\n>>>>>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n>>>>>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n>>>>>>>>\n>>>>>>>> But I realized that these options are actually order-insensitive. For\n>>>>>>>> instance, we can declare a cursor like:\n>>>>>>>>\n>>>>>>>> =# declare abc scroll binary cursor for select * from pg_class;\n>>>>>>>> DECLARE CURSOR\n>>>>>>>>\n>>>>>>>> The both parser code and documentation has been unchanged from 2003.\n>>>>>>>> Is it a documentation bug?\n>>>>>>>\n>>>>>>> Thank you for your patch, and it is good.\n>>>>>>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n>>>>>>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n>>>>>>\n>>>>>> Thanks, you're right. I missed that sentence. I still don't think the\n>>>>>> syntax of DECLARE statement in the documentation doesn't match its\n>>>>>> implementation but I agree that it's order-insensitive.\n>>>>>>\n>>>>>>> I made a new patch, but the amount of codes was large due to order-insensitive.\n>>>>>>\n>>>>>> Thank you for updating the patch!\n>>>>>>\n>>>>>> Yeah, I'm also afraid a bit that conditions will exponentially\n>>>>>> increase when a new option is added to DECLARE statement in the\n>>>>>> future. Looking at the parser code for DECLARE statement, we can put\n>>>>>> the same options multiple times (that's also why I don't think it\n>>>>>> matches). 
For instance,\n>>>>>>\n>>>>>> postgres(1:44758)=# begin;\n>>>>>> BEGIN\n>>>>>> postgres(1:44758)=# declare test binary binary binary cursor for\n>>>>>> select * from pg_class;\n>>>>>> DECLARE CURSOR\n>>>>>>\n>>>>>> So how about simplify the above code as follows?\n>>>>>>\n>>>>>> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n>>>>>> else if (Matches(\"DECLARE\", MatchAny))\n>>>>>> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n>>>>>> \"CURSOR\");\n>>>>>> + /*\n>>>>>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n>>>>>> + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n>>>>>> + * DECLARE, assume we want options.\n>>>>>> + */\n>>>>>> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n>>>>>> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n>>>>>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n>>>>>> + \"CURSOR\");\n>>>>>\n>>>>> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\n>>>>> unexpectedly output BINARY, INSENSITIVE, etc.\n>>>>\n>>>> Indeed. Is the following not complete but much better?\n>>>\n>>> Yes, I think that's better.\n>>>\n>>>>\n>>>> @@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\n>>>> \" UNION SELECT 'ALL'\");\n>>>>\n>>>> /* DECLARE */\n>>>> - else if (Matches(\"DECLARE\", MatchAny))\n>>>> - COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n>>>> - \"CURSOR\");\n>>>> + /*\n>>>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n>>>> + * NO SCROLL, and CURSOR. If the tail is any one of options, assume we\n>>>> + * still want options.\n>>>> + */\n>>>> + else if (Matches(\"DECLARE\", MatchAny) ||\n>>>> + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n>>>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\n>>>\n>>> This change seems to cause \"DECLARE ... 
NO <tab>\" to unexpectedly output\n>>> \"NO SCROLL\". Also this change seems to cause \"COPY ... (FORMAT BINARY <tab>\"\n>>> to unexpectedly output BINARY, CURSOR, etc.\n>>\n>> Oops, I missed the HeadMatches(). Thank you for pointing this out.\n>>\n>> I've attached the updated patch including Kato-san's changes. I\n>> think the tab completion support for DECLARE added by this patch\n>> works better.\n\nThanks!\n\n+\t/* Complete with more options */\n+\telse if (HeadMatches(\"DECLARE\", MatchAny, \"BINARY|INSENSITIVE|SCROLL|NO\") &&\n+\t\t\t TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n\nSeems \"MatchAny, \"BINARY|INSENSITIVE|SCROLL|NO\"\" is not necessary. Right?\n\nIf this is true, I'd like to refactor the code a bit.\nWhat about the attached patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 7 Jan 2021 21:32:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 9:32 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/01/07 17:28, Shinya11.Kato@nttdata.com wrote:\n> >> On Thu, Jan 7, 2021 at 1:30 PM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\n> >>>\n> >>> On 2021/01/07 12:42, Masahiko Sawada wrote:\n> >>>> On Thu, Jan 7, 2021 at 10:59 AM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:\n> >>>>>\n> >>>>> On 2021/01/07 10:01, Masahiko Sawada wrote:\n> >>>>>> On Wed, Jan 6, 2021 at 3:37 PM <Shinya11(dot)Kato(at)nttdata(dot)com> wrote:\n> >>>>>>>\n> >>>>>>>> +#define Query_for_list_of_cursors \\\n> >>>>>>>> +\" SELECT name FROM pg_cursors\"\\\n> >>>>>>>>\n> >>>>>>>> This query should be the following?\n> >>>>>>>>\n> >>>>>>>> \" SELECT pg_catalog.quote_ident(name) \"\\\n> >>>>>>>> \" FROM pg_catalog.pg_cursors \"\\\n> >>>>>>>> \" WHERE substring(pg_catalog.quote_ident(name),1,%d)='%s'\"\n> >>>>>>>>\n> >>>>>>>> +/* CLOSE */\n> >>>>>>>> + else if (Matches(\"CLOSE\"))\n> >>>>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>>>>>>> + \" UNION ALL SELECT 'ALL'\");\n> >>>>>>>>\n> >>>>>>>> \"UNION ALL\" should be \"UNION\"?\n> >>>>>>>\n> >>>>>>> Thank you for your advice, and I corrected them.\n> >>>>>>>\n> >>>>>>>>> + COMPLETE_WITH_QUERY(Query_for_list_of_cursors\n> >>>>>>>>> + \" UNION SELECT 'ABSOLUTE'\"\n> >>>>>>>>> + \" UNION SELECT 'BACKWARD'\"\n> >>>>>>>>> + \" UNION SELECT 'FORWARD'\"\n> >>>>>>>>> + \" UNION SELECT 'RELATIVE'\"\n> >>>>>>>>> + \" UNION SELECT 'ALL'\"\n> >>>>>>>>> + \" UNION SELECT 'NEXT'\"\n> >>>>>>>>> + \" UNION SELECT 'PRIOR'\"\n> >>>>>>>>> + \" UNION SELECT 'FIRST'\"\n> >>>>>>>>> + \" UNION SELECT 'LAST'\"\n> >>>>>>>>> + \" UNION SELECT 'FROM'\"\n> >>>>>>>>> + \" UNION SELECT 'IN'\");\n> >>>>>>>>>\n> >>>>>>>>> This change makes psql unexpectedly output \"FROM\" and \"IN\" just after \"FETCH\".\n> >>>>>>>>\n> >>>>>>>> I think \"FROM\" and \"IN\" can be placed just after \"FETCH\". 
According to\n> >>>>>>>> the documentation, the direction can be empty. For instance, we can do\n> >>>>>>>> like:\n> >>>>>>>\n> >>>>>>> Thank you!\n> >>>>>>>\n> >>>>>>>> I've attached the patch improving the tab completion for DECLARE as an\n> >>>>>>>> example. What do you think?\n> >>>>>>>>\n> >>>>>>>> BTW according to the documentation, the options of DECLARE statement\n> >>>>>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> >>>>>>>>\n> >>>>>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> >>>>>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> >>>>>>>>\n> >>>>>>>> But I realized that these options are actually order-insensitive. For\n> >>>>>>>> instance, we can declare a cursor like:\n> >>>>>>>>\n> >>>>>>>> =# declare abc scroll binary cursor for select * from pg_class;\n> >>>>>>>> DECLARE CURSOR\n> >>>>>>>>\n> >>>>>>>> The both parser code and documentation has been unchanged from 2003.\n> >>>>>>>> Is it a documentation bug?\n> >>>>>>>\n> >>>>>>> Thank you for your patch, and it is good.\n> >>>>>>> I cannot find the description \"(BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\"\n> >>>>>>> I saw \"The key words BINARY, INSENSITIVE, and SCROLL can appear in any order.\" , according to the documentation.\n> >>>>>>\n> >>>>>> Thanks, you're right. I missed that sentence. I still don't think the\n> >>>>>> syntax of DECLARE statement in the documentation doesn't match its\n> >>>>>> implementation but I agree that it's order-insensitive.\n> >>>>>>\n> >>>>>>> I made a new patch, but the amount of codes was large due to order-insensitive.\n> >>>>>>\n> >>>>>> Thank you for updating the patch!\n> >>>>>>\n> >>>>>> Yeah, I'm also afraid a bit that conditions will exponentially\n> >>>>>> increase when a new option is added to DECLARE statement in the\n> >>>>>> future. 
Looking at the parser code for DECLARE statement, we can put\n> >>>>>> the same options multiple times (that's also why I don't think it\n> >>>>>> matches). For instance,\n> >>>>>>\n> >>>>>> postgres(1:44758)=# begin;\n> >>>>>> BEGIN\n> >>>>>> postgres(1:44758)=# declare test binary binary binary cursor for\n> >>>>>> select * from pg_class;\n> >>>>>> DECLARE CURSOR\n> >>>>>>\n> >>>>>> So how about simplify the above code as follows?\n> >>>>>>\n> >>>>>> @@ -3005,8 +3014,23 @@ psql_completion(const char *text, int start, int end)\n> >>>>>> else if (Matches(\"DECLARE\", MatchAny))\n> >>>>>> COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> >>>>>> \"CURSOR\");\n> >>>>>> + /*\n> >>>>>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> >>>>>> + * NO SCROLL, and CURSOR. The tail doesn't match any keywords for\n> >>>>>> + * DECLARE, assume we want options.\n> >>>>>> + */\n> >>>>>> + else if (HeadMatches(\"DECLARE\", MatchAny, \"*\") &&\n> >>>>>> + TailMatches(MatchAnyExcept(\"CURSOR|WITH|WITHOUT|HOLD|FOR\")))\n> >>>>>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> >>>>>> + \"CURSOR\");\n> >>>>>\n> >>>>> This change seems to cause \"DECLARE ... CURSOR FOR SELECT <tab>\" to\n> >>>>> unexpectedly output BINARY, INSENSITIVE, etc.\n> >>>>\n> >>>> Indeed. Is the following not complete but much better?\n> >>>\n> >>> Yes, I think that's better.\n> >>>\n> >>>>\n> >>>> @@ -3002,11 +3011,18 @@ psql_completion(const char *text, int start, int end)\n> >>>> \" UNION SELECT 'ALL'\");\n> >>>>\n> >>>> /* DECLARE */\n> >>>> - else if (Matches(\"DECLARE\", MatchAny))\n> >>>> - COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\",\n> >>>> - \"CURSOR\");\n> >>>> + /*\n> >>>> + * Complete DECLARE <name> with one of BINARY, INSENSITIVE, SCROLL,\n> >>>> + * NO SCROLL, and CURSOR. 
If the tail is any one of options, assume we\n> >>>> + * still want options.\n> >>>> + */\n> >>>> + else if (Matches(\"DECLARE\", MatchAny) ||\n> >>>> + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n> >>>> + COMPLETE_WITH(\"BINARY\", \"INSENSITIVE\", \"SCROLL\", \"NO SCROLL\", \"CURSOR\");\n> >>>\n> >>> This change seems to cause \"DECLARE ... NO <tab>\" to unexpectedly output\n> >>> \"NO SCROLL\". Also this change seems to cause \"COPY ... (FORMAT BINARY <tab>\"\n> >>> to unexpectedly output BINARY, CURSOR, etc.\n> >>\n> >> Oops, I missed the HeadMatches(). Thank you for pointing this out.\n> >>\n> >> I've attached the updated patch including Kato-san's changes. I\n> >> think the tab completion support for DECLARE added by this patch\n> >> works better.\n>\n> Thanks!\n>\n> + /* Complete with more options */\n> + else if (HeadMatches(\"DECLARE\", MatchAny, \"BINARY|INSENSITIVE|SCROLL|NO\") &&\n> + TailMatches(\"BINARY|INSENSITIVE|SCROLL|NO\"))\n>\n> Seems \"MatchAny, \"BINARY|INSENSITIVE|SCROLL|NO\"\" is not necessary. Right?\n>\n\nRight.\n\n> If this is true, I'd like to refactor the code a bit.\n> What about the attached patch?\n\nThank you for updating the patch! Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 11 Jan 2021 14:39:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On 2021-01-05 10:56, Masahiko Sawada wrote:\n> BTW according to the documentation, the options of DECLARE statement\n> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> \n> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> \n> But I realized that these options are actually order-insensitive. For\n> instance, we can declare a cursor like:\n> \n> =# declare abc scroll binary cursor for select * from pg_class;\n> DECLARE CURSOR\n> \n> The both parser code and documentation has been unchanged from 2003.\n> Is it a documentation bug?\n\nAccording to the SQL standard, the ordering of the cursor properties is \nfixed. Even if the PostgreSQL parser offers more flexibility, I think \nwe should continue to encourage writing the clauses in the standard order.\n\n\n",
"msg_date": "Mon, 11 Jan 2021 15:00:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 11:00 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 2021-01-05 10:56, Masahiko Sawada wrote:\n> > BTW according to the documentation, the options of DECLARE statement\n> > (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> >\n> > DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> > CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> >\n> > But I realized that these options are actually order-insensitive. For\n> > instance, we can declare a cursor like:\n> >\n> > =# declare abc scroll binary cursor for select * from pg_class;\n> > DECLARE CURSOR\n> >\n> > The both parser code and documentation has been unchanged from 2003.\n> > Is it a documentation bug?\n>\n> According to the SQL standard, the ordering of the cursor properties is\n> fixed. Even if the PostgreSQL parser offers more flexibility, I think\n> we should continue to encourage writing the clauses in the standard order.\n\nThanks for your comment. Agreed.\n\nSo regarding the tab completion for DECLARE statement, perhaps it\nwould be better to follow the documentation? In the current proposed\npatch, we complete it with the options in any order.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 12 Jan 2021 09:59:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Tue, Jan 12, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 11, 2021 at 11:00 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 2021-01-05 10:56, Masahiko Sawada wrote:\n> > > BTW according to the documentation, the options of DECLARE statement\n> > > (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> > >\n> > > DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> > > CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> > >\n> > > But I realized that these options are actually order-insensitive. For\n> > > instance, we can declare a cursor like:\n> > >\n> > > =# declare abc scroll binary cursor for select * from pg_class;\n> > > DECLARE CURSOR\n> > >\n> > > The both parser code and documentation has been unchanged from 2003.\n> > > Is it a documentation bug?\n> >\n> > According to the SQL standard, the ordering of the cursor properties is\n> > fixed. Even if the PostgreSQL parser offers more flexibility, I think\n> > we should continue to encourage writing the clauses in the standard order.\n>\n> Thanks for your comment. Agreed.\n>\n> So regarding the tab completion for DECLARE statement, perhaps it\n> would be better to follow the documentation?\n\nIMO yes because it's less confusing to make the document and\ntab-completion consistent.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 12 Jan 2021 11:09:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Tue, Jan 12, 2021 at 11:09 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Tue, Jan 12, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jan 11, 2021 at 11:00 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 2021-01-05 10:56, Masahiko Sawada wrote:\n> > > > BTW according to the documentation, the options of DECLARE statement\n> > > > (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> > > >\n> > > > DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> > > > CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> > > >\n> > > > But I realized that these options are actually order-insensitive. For\n> > > > instance, we can declare a cursor like:\n> > > >\n> > > > =# declare abc scroll binary cursor for select * from pg_class;\n> > > > DECLARE CURSOR\n> > > >\n> > > > The both parser code and documentation has been unchanged from 2003.\n> > > > Is it a documentation bug?\n> > >\n> > > According to the SQL standard, the ordering of the cursor properties is\n> > > fixed. Even if the PostgreSQL parser offers more flexibility, I think\n> > > we should continue to encourage writing the clauses in the standard order.\n> >\n> > Thanks for your comment. Agreed.\n> >\n> > So regarding the tab completion for DECLARE statement, perhaps it\n> > would be better to follow the documentation?\n>\n> IMO yes because it's less confusing to make the document and\n> tab-completion consistent.\n\nI updated the patch that way. Could you review this version?\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 13 Jan 2021 13:55:25 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 1:55 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Tue, Jan 12, 2021 at 11:09 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > On Tue, Jan 12, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 11, 2021 at 11:00 PM Peter Eisentraut\n> > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > >\n> > > > On 2021-01-05 10:56, Masahiko Sawada wrote:\n> > > > > BTW according to the documentation, the options of DECLARE statement\n> > > > > (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n> > > > >\n> > > > > DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n> > > > > CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n> > > > >\n> > > > > But I realized that these options are actually order-insensitive. For\n> > > > > instance, we can declare a cursor like:\n> > > > >\n> > > > > =# declare abc scroll binary cursor for select * from pg_class;\n> > > > > DECLARE CURSOR\n> > > > >\n> > > > > The both parser code and documentation has been unchanged from 2003.\n> > > > > Is it a documentation bug?\n> > > >\n> > > > According to the SQL standard, the ordering of the cursor properties is\n> > > > fixed. Even if the PostgreSQL parser offers more flexibility, I think\n> > > > we should continue to encourage writing the clauses in the standard order.\n> > >\n> > > Thanks for your comment. Agreed.\n> > >\n> > > So regarding the tab completion for DECLARE statement, perhaps it\n> > > would be better to follow the documentation?\n> >\n> > IMO yes because it's less confusing to make the document and\n> > tab-completion consistent.\n\nAgreed.\n\n>\n> I updated the patch that way. Could you review this version?\n\nThank you for updating the patch. Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 14 Jan 2021 14:38:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
},
{
"msg_contents": "\n\nOn 2021/01/14 14:38, Masahiko Sawada wrote:\n> On Wed, Jan 13, 2021 at 1:55 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>\n>> On Tue, Jan 12, 2021 at 11:09 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>>\n>>> On Tue, Jan 12, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>>\n>>>> On Mon, Jan 11, 2021 at 11:00 PM Peter Eisentraut\n>>>> <peter.eisentraut@enterprisedb.com> wrote:\n>>>>>\n>>>>> On 2021-01-05 10:56, Masahiko Sawada wrote:\n>>>>>> BTW according to the documentation, the options of DECLARE statement\n>>>>>> (BINARY, INSENSITIVE, SCROLL, and NO SCROLL) are order-sensitive.\n>>>>>>\n>>>>>> DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]\n>>>>>> CURSOR [ { WITH | WITHOUT } HOLD ] FOR query\n>>>>>>\n>>>>>> But I realized that these options are actually order-insensitive. For\n>>>>>> instance, we can declare a cursor like:\n>>>>>>\n>>>>>> =# declare abc scroll binary cursor for select * from pg_class;\n>>>>>> DECLARE CURSOR\n>>>>>>\n>>>>>> The both parser code and documentation has been unchanged from 2003.\n>>>>>> Is it a documentation bug?\n>>>>>\n>>>>> According to the SQL standard, the ordering of the cursor properties is\n>>>>> fixed. Even if the PostgreSQL parser offers more flexibility, I think\n>>>>> we should continue to encourage writing the clauses in the standard order.\n>>>>\n>>>> Thanks for your comment. Agreed.\n>>>>\n>>>> So regarding the tab completion for DECLARE statement, perhaps it\n>>>> would be better to follow the documentation?\n>>>\n>>> IMO yes because it's less confusing to make the document and\n>>> tab-completion consistent.\n> \n> Agreed.\n> \n>>\n>> I updated the patch that way. Could you review this version?\n> \n> Thank you for updating the patch. Looks good to me.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 14 Jan 2021 15:43:25 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Feature improvement for CLOSE, FETCH, MOVE tab completion"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nA year ago Vladimir Leskov proposed patch to speed up pglz compression[0]. PFA the patch with some editorialisation by me.\nI saw some reports of bottlenecking in pglz WAL compression [1].\n\nHopefully soon we will have compression codecs developed by compression specialists. The work is going on in nearby thread about custom compression methods.\nIs it viable to work on pglz optimisation? It's about x1.4 faster. Or should we rely on future use of lz4\\zstd and others?\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/169163A8-C96F-4DBE-A062-7D1CECBE9E5D@yandex-team.ru\n[1] https://smalldatum.blogspot.com/2020/12/tuning-for-insert-benchmark-postgres_4.html",
"msg_date": "Wed, 9 Dec 2020 12:44:42 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "pglz compression performance, take two"
},
{
"msg_contents": "\n\n> 9 дек. 2020 г., в 12:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> PFA the patch with some editorialisation by me.\n> I saw some reports of bottlenecking in pglz WAL compression [1].\n\nI've checked that on my machine simple test\necho \"wal_compression = on\" >> $PGDATA/postgresql.conf\npgbench -i -s 20 && pgbench -T 30\nshows ~2-3% of improvement, but the result is not very stable, deviation is comparable. In fact, bottleneck is just shifted from pglz, thus impact is not that measurable.\n\nI've found out that the patch continues ideas from thread [0] and commit 031cc55 [1], but in much more shotgun-surgery way.\nOut of curiosity I've rerun tests from that thread\npostgres=# with patched as (select testname, avg(seconds) patched from testresults0 group by testname),unpatched as (select testname, avg(seconds) unpatched from testresults group by testname) select * from unpatched join patched using (testname);\n testname | unpatched | patched \n-------------------+------------------------+------------------------\n 512b random | 4.5568015000000000 | 4.3512980000000000\n 100k random | 1.03342300000000000000 | 1.00326200000000000000\n 100k of same byte | 2.1689715000000000 | 2.0958155000000000\n 2k random | 3.1613815000000000 | 3.1861350000000000\n 512b text | 5.7233600000000000 | 5.3602330000000000\n 5k text | 1.7044835000000000 | 1.8086770000000000\n(6 rows)\n\n\nResults of direct call are somewhat more clear.\nUnpatched:\n testname | auto \n-------------------+-----------\n 5k text | 1100.705\n 512b text | 240.585\n 2k random | 106.865\n 100k random | 2.663\n 512b random | 145.736\n 100k of same byte | 13426.880\n(6 rows)\n\nPatched:\n testname | auto \n-------------------+----------\n 5k text | 767.535\n 512b text | 159.076\n 2k random | 77.126\n 100k random | 1.698\n 512b random | 95.768\n 100k of same byte | 6035.159\n(6 rows)\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] 
https://www.postgresql.org/message-id/flat/5130C914.8080106%40vmware.com\n[1] https://github.com/x4m/postgres_g/commit/031cc55bbea6b3a6b67c700498a78fb1d4399476\n\n",
"msg_date": "Sat, 12 Dec 2020 22:47:59 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "> Dec 12, 2020, at 22:47, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n\nI've cleaned up comments, checked that memory alignment stuff actually make sense for 32-bit ARM (according to Godbolt) and did some more code cleanup. PFA v2 patch.\n\nI'm still in doubt should I register this patch on CF or not. I'm willing to work on this, but it's not clear will it hit PGv14. And I hope for PGv15 we will have lz4 or something better for WAL compression.\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 26 Dec 2020 12:06:59 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\nOn 12/26/20 8:06 AM, Andrey Borodin wrote:\n> \n> \n>> Dec 12, 2020, at 22:47, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>>\n> \n> I've cleaned up comments, checked that memory alignment stuff actually make sense for 32-bit ARM (according to Godbolt) and did some more code cleanup. PFA v2 patch.\n> \n> I'm still in doubt should I register this patch on CF or not. I'm willing to work on this, but it's not clear will it hit PGv14. And I hope for PGv15 we will have lz4 or something better for WAL compression.\n> \n\nI'd suggest registering it, otherwise people are much less likely to \ngive you feedback. I don't see why it couldn't land in PG14.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 26 Dec 2020 16:10:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 12/26/20 8:06 AM, Andrey Borodin wrote:\n>> I'm still in doubt should I register this patch on CF or not. I'm willing to work on this, but it's not clear will it hit PGv14. And I hope for PGv15 we will have lz4 or something better for WAL compression.\n\n> I'd suggest registering it, otherwise people are much less likely to \n> give you feedback. I don't see why it couldn't land in PG14.\n\nEven if lz4 or something else shows up, the existing code will remain\nimportant for TOAST purposes. It would be years before we lose interest\nin it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Dec 2020 13:07:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\nOn Sat, Dec 26, 2020 at 12:06:59PM +0500, Andrey Borodin wrote:\n> > Dec 12, 2020, at 22:47, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I've cleaned up comments, checked that memory alignment stuff actually make sense for 32-bit ARM (according to Godbolt) and did some more code cleanup. PFA v2 patch.\n\n> \n> I'm still in doubt should I register this patch on CF or not. I'm willing to work on this, but it's not clear will it hit PGv14. And I hope for PGv15 we will have lz4 or something better for WAL compression.\n\nThanks for registering it.\n\nThere's some typos in the current patch;\n\nfarer (further: but it's not your typo)\npositiion\nreduce a => reduce the\nmonotonicity what => monotonicity, which\nlesser good => less good\nallign: align\n\nThis comment I couldn't understand:\n+ * As initial compare for short matches compares 4 bytes then for the end\n+ * of stream length of match should be cut\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 29 Dec 2020 22:39:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Thanks for looking into this, Justin!\n\n> Dec 30, 2020, at 09:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> There's some typos in the current patch;\n> \n> farer (further: but it's not your typo)\n> positiion\n> reduce a => reduce the\n> monotonicity what => monotonicity, which\n> lesser good => less good\n> allign: align\n\nFixed.\n> \n> This comment I couldn't understand:\n> + * As initial compare for short matches compares 4 bytes then for the end\n> + * of stream length of match should be cut\n\nI've reworded the comments.\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 30 Dec 2020 17:22:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "@cfbot: rebased",
"msg_date": "Thu, 21 Jan 2021 20:48:11 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\n> Jan 22, 2021, at 07:48, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> @cfbot: rebased\n> <0001-Reorganize-pglz-compression-code.patch>\n\nThanks!\n\nI'm experimenting with TPC-C over PostgreSQL 13 on a production-like cluster in the cloud. Overall performance is IO-bound, but compression is burning a lot of energy too (according to perf top). The cluster consists of 3 nodes (HA only, no standby queries) with 32 vCPU each, 128GB RAM, sync replication, 2000 warehouses, 240GB PGDATA.\n\nSamples: 1M of event 'cpu-clock', 4000 Hz, Event count (approx.): 177958545079\nOverhead Shared Object Symbol\n 18.36% postgres [.] pglz_compress\n 3.88% [kernel] [k] _raw_spin_unlock_irqrestore\n 3.39% postgres [.] hash_search_with_hash_value\n 3.00% [kernel] [k] finish_task_switch\n 2.03% [kernel] [k] copy_user_enhanced_fast_string\n 1.14% [kernel] [k] filemap_map_pages\n 1.02% postgres [.] AllocSetAlloc\n 0.93% postgres [.] _bt_compare\n 0.89% postgres [.] PinBuffer\n 0.82% postgres [.] SearchCatCache1\n 0.79% postgres [.] LWLockAttemptLock\n 0.78% postgres [.] GetSnapshotData\n\nOverall the cluster runs 862tps (52KtpmC, though only 26KtpmC is qualified on 2K warehouses).\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 28 Jan 2021 15:56:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\n> On Jan 21, 2021, at 6:48 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> @cfbot: rebased\n> <0001-Reorganize-pglz-compression-code.patch>\n\nReview comments.\n\nFirst, I installed a build from master without this patch, created a test installation with lots of compressed text and array columns, upgraded the binaries to a build with this patch included, and tried to find problems with the data left over from the pre-patch binaries. Everything checks out. This is on little-endian mac osx intel core i9, not on any ARM platform that you are targeting with portions of the patch.\n\n+/**************************************\n+ * CPU Feature Detection *\n+ **************************************/\n+/* PGLZ_FORCE_MEMORY_ACCESS\n+ * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.\n+ * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.\n+ * The below switch allow to select different access method for improved performance.\n+ * Method 0 (default) : use `memcpy()`. Safe and portable.\n+ * Method 1 : direct access. This method is portable but violate C standard.\n+ * It can generate buggy code on targets which assembly generation depends on alignment.\n+ * But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6)\n+ * See https://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.\n+ * Prefer these methods in priority order (0 > 1)\n+ */\n\nThe link to blogspot.fr has a lot more information than your summary in the code comments. It might be hard to understand this comment if the blogspot article were ever to disappear. 
Perhaps you could include a bit more of the relevant details?\n\n+#ifndef PGLZ_FORCE_MEMORY_ACCESS /* can be defined externally */\n+#if defined(__GNUC__) && \\\n+ ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) \\\n+ || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) )\n+#define PGLZ_FORCE_MEMORY_ACCESS 1\n+#endif\n+#endif\n\nI can understand wanting to set this on gcc + ARMv6, but doesn't this belong in a configure script rather than directly in the compression code?\n\nThe blogspot article indicates that the author lied about alignment to the compiler when using gcc on ARMv6, thereby generating a fast load instruction which happens to work on ARMv6. You appear to be using that same approach. Your #if defined(__GNUC__), seems to assume that all future versions of gcc will generate the instructions that you want, and not start generating some other set of instructions. Wouldn't you at least need a configure test to verify that the version of gcc being used generates the desired assembly? Even then, you'd be banking on gcc doing the same thing for the test code and for the pglz code, which I guess might not be true. Have you considered using inline assembly instead?\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 12:35:46 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\n> On Jan 28, 2021, at 2:56 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 22 янв. 2021 г., в 07:48, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n>> \n>> @cfbot: rebased\n>> <0001-Reorganize-pglz-compression-code.patch>\n> \n> Thanks!\n> \n> I'm experimenting with TPC-C over PostgreSQL 13 on production-like cluster in the cloud. Overall performance is IO-bound, but compression is burning a lot energy too (according to perf top). Cluster consists of 3 nodes(only HA, no standby queries) with 32 vCPU each, 128GB RAM, sync replication, 2000 warehouses, 240GB PGDATA.\n> \n> Samples: 1M of event 'cpu-clock', 4000 Hz, Event count (approx.): 177958545079\n> Overhead Shared Object Symbol\n> 18.36% postgres [.] pglz_compress\n> 3.88% [kernel] [k] _raw_spin_unlock_irqrestore\n> 3.39% postgres [.] hash_search_with_hash_value\n> 3.00% [kernel] [k] finish_task_switch\n> 2.03% [kernel] [k] copy_user_enhanced_fast_string\n> 1.14% [kernel] [k] filemap_map_pages\n> 1.02% postgres [.] AllocSetAlloc\n> 0.93% postgres [.] _bt_compare\n> 0.89% postgres [.] PinBuffer\n> 0.82% postgres [.] SearchCatCache1\n> 0.79% postgres [.] LWLockAttemptLock\n> 0.78% postgres [.] GetSnapshotData\n> \n> Overall cluster runs 862tps (52KtpmC, though only 26KtmpC is qualified on 2K warehouses).\n> \n> Thanks!\n\nRobert Haas just committed Dilip Kumar's LZ4 compression, bbe0a81db69bd10bd166907c3701492a29aca294.\n\nIs this pglz compression patch still relevant? How does the LZ4 compression compare on your hardware?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 13:29:14 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 01:29:14PM -0700, Mark Dilger wrote:\n> Robert Haas just committed Dilip Kumar's LZ4 compression, bbe0a81db69bd10bd166907c3701492a29aca294.\n> \n> Is this pglz compression patch still relevant? How does the LZ4 compression compare on your hardware?\n\nI think it's still relevant, since many people may not end up with binaries\n--with-lz4 (I'm thinking of cloud providers). PGLZ is what existing data uses,\nand people may not want to/know to migrate to shiny new features, but they'd\nlike it if their queries were 20% faster after upgrading without needing to.\n\nAlso, Dilip's patch is only for TOAST compression, and pglz is also being used\nfor wal_compression - Andrey has a short patch to implement lz4 for that:\nhttps://commitfest.postgresql.org/32/3015/\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 20 Mar 2021 00:19:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 12:19:45AM -0500, Justin Pryzby wrote:\n> I think it's still relevant, since many people may not end up with binaries\n> --with-lz4 (I'm thinking of cloud providers). PGLZ is what existing data uses,\n> and people may not want to/know to migrate to shiny new features, but they'd\n> like it if their queries were 20% faster after upgrading without needing to.\n\nYeah, I agree that local improvements here are relevant, particularly\nas we don't enforce the rewrite of toast data already compressed with\npglz. So, we still need to stick with pglz for some time.\n--\nMichael",
"msg_date": "Fri, 25 Jun 2021 16:31:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "> Mar 20, 2021, at 00:35, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Jan 21, 2021, at 6:48 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> \n>> @cfbot: rebased\n>> <0001-Reorganize-pglz-compression-code.patch>\n> \n> Review comments.\n\nThanks for the review, Mark!\nAnd sorry for such a long delay; I've been trying to figure out a way to make things less platform-dependent.\nAnd here's what I've come up with.\n\nWe do not use pglz_read32() the way xxhash and lz4 do - we really do not need to get a 4-byte value, we only need to compare 4 bytes at once.\nSo, essentially, we need to compare two implementations of 4-byte comparison:\n\nbool\ncmp_a(const void *ptr1, const void *ptr2)\n{\n return *(const uint32_t *) ptr1 == *(const uint32_t *) ptr2;\n}\n\nbool\ncmp_b(const void *ptr1, const void *ptr2)\n{\n return memcmp(ptr1, ptr2, 4) == 0;\n}\n\nVariant B is more portable. Inspecting it in Godbolt's compiler explorer I've found out that for GCC 7.1+ it generates assembly without a memcmp() call. For x86-64 and ARM64 the assembly of cmp_b is identical to cmp_a.\nSo I think maybe we could just stick with variant cmp_b instead of optimising for ARMv6 and similar architectures like Arduino.\n\nI've benchmarked the patch with \"REINDEX table pgbench_accounts\" on pgbench -i of scale 100. wal_compression was on, other settings were default.\nWithout the patch it takes ~11055.077 ms on my machine, with the patch it takes ~9512.411 ms, a 14% speedup overall.\n\nPFA v5.\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 27 Jun 2021 15:41:35 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\n> On Jun 27, 2021, at 3:41 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> And here's what I've come up with.\n\nI have not tested the patch yet, but here are some quick review comments:\n\n\n> #define PGLZ_HISTORY_SIZE 0x0fff - 1 /* to avoid compare in iteration */\n...\n> static PGLZ_HistEntry hist_entries[PGLZ_HISTORY_SIZE + 1];\n...\n> if (hist_next == PGLZ_HISTORY_SIZE + 1)\n\nThese are the only uses of PGLZ_HISTORY_SIZE. Perhaps you could just defined the symbol as 0x0fff and skip the -1 and +1 business?\n\n> /* ----------\n> * pglz_compare -\n> *\n> * Compares 4 bytes at pointers\n> * ----------\n> */\n> static inline bool\n> pglz_compare32(const void *ptr1, const void *ptr2)\n> {\n> return memcmp(ptr1, ptr2, 4) == 0;\n> }\n\nThe comment function name differs from the actual function name.\n\nAlso, pglz_compare returns an offset into the string, whereas pglz_compare32 returns a boolean. This is fairly unintuitive. The \"32\" part of pglz_compare32 implies doing the same thing as pglz_compare but where the string is known to be 4 bytes in length. Given that pglz_compare32 is dissimilar to pglz_compare, perhaps avoid using /pglz_compare/ in its name?\n\n> /*\n> * Determine length of match. A better match must be larger than the\n> * best so far. And if we already have a match of 16 or more bytes,\n> * it's worth the call overhead to use memcmp()\n\nThis comment is hard to understand, given the code that follows. The first block calls memcmp(), which seems to be the function overhead you refer to. The second block calls the static inline function pglz_compare32, which internally calls memcmp(). Superficially, there seems to be a memcmp() function call either way. The difference is that in the first block's call to memcmp(), the length is a runtime value, and in the second block, it is a compile-time known value. 
If you are depending on the compiler to notice this distinction and optimize the second call, perhaps you can mention that explicitly? Otherwise, reading and understanding the comment takes more effort.\n\nI took a quick look for other places in the code that try to beat the performance of memcmp on short strings. In varlena.c, rest_of_char_same() seems to do so. We also use comparisons on NameData, which frequently contains strings shorter than 16 bytes. Is it worth sharting a static inline function that uses your optimization in other places? How confident are you that your optimization really helps?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 28 Jun 2021 09:05:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\n> On Jun 28, 2021, at 9:05 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Is it worth sharting a static inline function that uses your optimization in other places? \n\ns/sharting/sharing/\n\n> How confident are you that your optimization really helps?\n\nBy which I mean, is the optimization worth the extra branch checking if (len >= 16)? Is the 14% speedup you report dependent on this extra complexity?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 28 Jun 2021 09:33:53 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Hi,\n\nI've looked at this patch again and did some testing. I don't have any \ncomments on the code (I see there are two comments from Mark after the \nlast version, though).\n\nFor the testing, I did a fairly simple benchmark loading either random \nor compressible data into a bytea column. The tables are defined as \nunlogged, the values are 1kB, 4kB and 1MB, and the total amount of data \nis always 1GB. The timings are\n\n test master patched delta\n ------------------------------------------\n random_1k 12295 12312 100%\n random_1m 12999 12984 100%\n random_4k 16881 15959 95%\n redundant_1k 12308 12348 100%\n redundant_1m 16632 14072 85%\n redundant_4k 16798 13828 82%\n\nI ran the test on multiple x86_64 machines, but the behavior is almost \nexactly the same.\n\nThis shows there's no difference for 1kB (expected, because this does \nnot exceed the ~2kB TOAST threshold). For random data in general the \ndifference is pretty negligible, although it's a bit strange that it takes \nlonger for 4kB values than for 1MB ones.\n\nFor redundant (highly compressible) values, there's quite a significant \nspeedup of 15-18%. Real-world data are likely somewhere in between, \nbut the speedup is still pretty nice.\n\nAndrey, can you update the patch per Mark's review? I'll do my best to \nget it committed sometime in this CF.\n\nAttached are the two scripts used for generating / testing (you'll have \nto fix some hardcoded paths, but they're simple otherwise).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Nov 2021 21:47:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Thanks for the review Mark! Sorry it took too long to reply on my side.\n\n> 28 июня 2021 г., в 21:05, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n> \n>> #define PGLZ_HISTORY_SIZE 0x0fff - 1 /* to avoid compare in iteration */\n> ...\n>> static PGLZ_HistEntry hist_entries[PGLZ_HISTORY_SIZE + 1];\n> ...\n>> if (hist_next == PGLZ_HISTORY_SIZE + 1)\n> \n> These are the only uses of PGLZ_HISTORY_SIZE. Perhaps you could just defined the symbol as 0x0fff and skip the -1 and +1 business?\nFixed.\n\n>> /* ----------\n>> * pglz_compare -\n>> *\n>> * Compares 4 bytes at pointers\n>> * ----------\n>> */\n>> static inline bool\n>> pglz_compare32(const void *ptr1, const void *ptr2)\n>> {\n>> return memcmp(ptr1, ptr2, 4) == 0;\n>> }\n> \n> The comment function name differs from the actual function name.\n> \n> Also, pglz_compare returns an offset into the string, whereas pglz_compare32 returns a boolean. This is fairly unintuitive. The \"32\" part of pglz_compare32 implies doing the same thing as pglz_compare but where the string is known to be 4 bytes in length. Given that pglz_compare32 is dissimilar to pglz_compare, perhaps avoid using /pglz_compare/ in its name?\nI've removed pglz_compare32 entirely. It was a simple substitution for memcmp().\n\n> \n>> /*\n>> * Determine length of match. A better match must be larger than the\n>> * best so far. And if we already have a match of 16 or more bytes,\n>> * it's worth the call overhead to use memcmp()\n> \n> This comment is hard to understand, given the code that follows. The first block calls memcmp(), which seems to be the function overhead you refer to. The second block calls the static inline function pglz_compare32, which internally calls memcmp(). Superficially, there seems to be a memcmp() function call either way. The difference is that in the first block's call to memcmp(), the length is a runtime value, and in the second block, it is a compile-time known value. 
If you are depending on the compiler to notice this distinction and optimize the second call, perhaps you can mention that explicitly? Otherwise, reading and understanding the comment takes more effort.\nI've updated the comment for the second branch with fixed-size memcmp(). Frankly, I'm not sure \"if (memcmp(input_pos, hist_pos, 4) == 0)\" is worth the complexity; the internals of \"pglz_compare(0, len_bound, input_pos + 0, hist_pos + 0);\" would execute almost the same instructions.\n\n> \n> I took a quick look for other places in the code that try to beat the performance of memcmp on short strings. In varlena.c, rest_of_char_same() seems to do so. We also use comparisons on NameData, which frequently contains strings shorter than 16 bytes. Is it worth sharting a static inline function that uses your optimization in other places? How confident are you that your optimization really helps?\nHonestly, I do not know. The overall patch effect consists of stacking up many small optimizations. They have a net effect, but are too noisy to measure independently. That's mostly the reason why I didn't know what to reply for so long.\n\n\n> Nov 5, 2021, at 01:47, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Andrey, can you update the patch per Mark's review? I'll do my best to get it committed sometime in this CF.\n\nCool! Here's the patch.\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 5 Nov 2021 10:50:53 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "2021年11月5日(金) 14:51 Andrey Borodin <x4mmm@yandex-team.ru>:\n>\n> Thanks for the review Mark! Sorry it took too long to reply on my side.\n>\n> > 28 июня 2021 г., в 21:05, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n> >\n> >> #define PGLZ_HISTORY_SIZE 0x0fff - 1 /* to avoid compare in iteration */\n> > ...\n> >> static PGLZ_HistEntry hist_entries[PGLZ_HISTORY_SIZE + 1];\n> > ...\n> >> if (hist_next == PGLZ_HISTORY_SIZE + 1)\n> >\n> > These are the only uses of PGLZ_HISTORY_SIZE. Perhaps you could just defined the symbol as 0x0fff and skip the -1 and +1 business?\n> Fixed.\n>\n> >> /* ----------\n> >> * pglz_compare -\n> >> *\n> >> * Compares 4 bytes at pointers\n> >> * ----------\n> >> */\n> >> static inline bool\n> >> pglz_compare32(const void *ptr1, const void *ptr2)\n> >> {\n> >> return memcmp(ptr1, ptr2, 4) == 0;\n> >> }\n> >\n> > The comment function name differs from the actual function name.\n> >\n> > Also, pglz_compare returns an offset into the string, whereas pglz_compare32 returns a boolean. This is fairly unintuitive. The \"32\" part of pglz_compare32 implies doing the same thing as pglz_compare but where the string is known to be 4 bytes in length. Given that pglz_compare32 is dissimilar to pglz_compare, perhaps avoid using /pglz_compare/ in its name?\n> I've removed pglz_compare32 entirely. It was a simple substitution for memcmp().\n>\n> >\n> >> /*\n> >> * Determine length of match. A better match must be larger than the\n> >> * best so far. And if we already have a match of 16 or more bytes,\n> >> * it's worth the call overhead to use memcmp()\n> >\n> > This comment is hard to understand, given the code that follows. The first block calls memcmp(), which seems to be the function overhead you refer to. The second block calls the static inline function pglz_compare32, which internally calls memcmp(). Superficially, there seems to be a memcmp() function call either way. 
The difference is that in the first block's call to memcmp(), the length is a runtime value, and in the second block, it is a compile-time known value. If you are depending on the compiler to notice this distinction and optimize the second call, perhaps you can mention that explicitly? Otherwise, reading and understanding the comment takes more effort.\n> I've updated comment for second branch with fixed-size memcmp(). Frankly, I'm not sure \"if (memcmp(input_pos, hist_pos, 4) == 0)\" worth the complexity, internals of \"pglz_compare(0, len_bound, input_pos + 0, hist_pos + 0);\" would do almost same instructions.\n>\n> >\n> > I took a quick look for other places in the code that try to beat the performance of memcmp on short strings. In varlena.c, rest_of_char_same() seems to do so. We also use comparisons on NameData, which frequently contains strings shorter than 16 bytes. Is it worth sharting a static inline function that uses your optimization in other places? How confident are you that your optimization really helps?\n> Honestly, I do not know. The overall patch effect consists of stacking up many small optimizations. They have a net effect, but are too noisy to measure independently. That's mostly the reason why I didn't know what to reply for so long.\n>\n>\n> > 5 нояб. 2021 г., в 01:47, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n> >\n> > Andrey, can you update the patch per Mark's review? I'll do my best to get it committed sometime in this CF.\n>\n> Cool! Here's the patch.\n\nHI!\n\nThis patch is marked as \"Ready for Committer\" in the current commitfest [1]\nbut has seen no further activity for more than a year, Given that it's\non its 10th\ncommitfest, it would be useful to clarify its status one way or the other.\n\n[1] https://commitfest.postgresql.org/40/2897/\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 17 Nov 2022 13:17:01 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Hi,\n\nI took a look at the v6 patch, with the intention to get it committed. I\nhave a couple minor comments:\n\n1) For PGLZ_HISTORY_SIZE it uses literal 0x0fff, with the explanation:\n\n /* to avoid compare in iteration */\n\nwhich I think means intent to use this value as a bit mask, but then the\nonly place using PGLZ_HISTORY_SIZE does\n\n if (hist_next == PGLZ_HISTORY_SIZE) ...\n\ni.e. a comparison. Maybe I misunderstand the comment, though.\n\n\n2) PGLZ_HistEntry was modified and replaces links (pointers) with\nindexes, but the comments still talk about \"links\", so maybe that needs\nto be changed. Also, I wonder why next_id is int16 while hist_idx is\nuint16 (and also why id vs. idx)?\n\n3) minor formatting of comments\n\n4) the comment in pglz_find_match about traversing the history seems too\nearly - it's before handling invalid entries and cleanup, but it does\nnot talk about that at all, and the actual while loop is after that.\n\nAttached is v6 in 0001 (verbatim), with the review comments in 0002.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 27 Nov 2022 17:02:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On 11/27/22 17:02, Tomas Vondra wrote:\n> Hi,\n> \n> I took a look at the v6 patch, with the intention to get it committed. I\n> have a couple minor comments:\n> \n> 1) For PGLZ_HISTORY_SIZE it uses literal 0x0fff, with the explanation:\n> \n> /* to avoid compare in iteration */\n> \n> which I think means intent to use this value as a bit mask, but then the\n> only place using PGLZ_HISTORY_SIZE does\n> \n> if (hist_next == PGLZ_HISTORY_SIZE) ...\n> \n> i.e. a comparison. Maybe I misunderstand the comment, though.\n> \n> \n> 2) PGLZ_HistEntry was modified and replaces links (pointers) with\n> indexes, but the comments still talk about \"links\", so maybe that needs\n> to be changed. Also, I wonder why next_id is int16 while hist_idx is\n> uint16 (and also why id vs. idx)?\n> \n> 3) minor formatting of comments\n> \n> 4) the comment in pglz_find_match about traversing the history seems too\n> early - it's before handling invalid entries and cleanup, but it does\n> not talk about that at all, and the actual while loop is after that.\n> \n> Attached is v6 in 0001 (verbatim), with the review comments in 0002.\n> \n\nBTW I've switched this to WoA, but the comments should be trivial to\nresolve and to get it committed.\n\nAlso, I still see roughly 15-20% improvement on some compression-heavy\ntests, as reported before. Which is nice.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 27 Nov 2022 17:08:58 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Hi Tomas,\n\nOn Sun, Nov 27, 2022 at 8:02 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> 1) For PGLZ_HISTORY_SIZE it uses literal 0x0fff, with the explanation:\n>\n> /* to avoid compare in iteration */\n>\n> which I think means intent to use this value as a bit mask, but then the\n> only place using PGLZ_HISTORY_SIZE does\n>\n> if (hist_next == PGLZ_HISTORY_SIZE) ...\n>\n> i.e. a comparison. Maybe I misunderstand the comment, though.\n>\n\nAs far as I recollect, it's a leftover from an attempt to optimize the\ncode into a branchless version.\nI.e. instead of\nif(hist_next>=PGLZ_HISTORY_SIZE)\n hist_next = 1;\nuse something like hist_next = hist_next & PGLZ_HISTORY_SIZE.\nBut the optimization did not show any measurable impact and was\nimproperly rolled back.\n\n>\n> 2) PGLZ_HistEntry was modified and replaces links (pointers) with\n> indexes, but the comments still talk about \"links\", so maybe that needs\n> to be changed.\n\nThe offsets still form a \"linked list\"... however I removed some\nmentions of pointers, since they are not pointers anymore.\n\n> Also, I wonder why next_id is int16 while hist_idx is\n> uint16 (and also why id vs. idx)?\n\n+1. I'd call them next and hash.\n\nint16 next; /* instead of next_id */\nuint16 hash; /* instead of hist_idx */\n\nWhat do you think?\nhist_idx comes from the function name... I'm not sure how far renaming\nshould go here.\n\n\n>\n> 3) minor formatting of comments\n>\n> 4) the comment in pglz_find_match about traversing the history seems too\n> early - it's before handling invalid entries and cleanup, but it does\n> not talk about that at all, and the actual while loop is after that.\n\nYes, this seems right to me.\n\nPFA review fixes (step 1 is unchanged).\nI did not include the next_id->next and hist_idx->hash renames.\n\nThank you!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 27 Nov 2022 10:43:54 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 10:43 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> PFA review fixes (step 1 is unchanged).\n\nHello! Please find attached v8.\nChanges are mostly cosmetic:\n1. 2 steps from previous message were squashed together\n2. I tried to do a better commit message\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 6 Jan 2023 22:02:36 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> Hello! Please find attached v8.\n\nI got some interesting feedback from some patch users.\nThere was an oversight that frequently yielded results that are 1,2 or\n3 bytes longer than expected.\nLooking closer I found that the correctness of the last 3-byte tail is\nchecked in two places. PFA fix for this. Previously compressed data\nwas correct, however in some cases few bytes longer than the result of\ncurrent pglz implementation.\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 5 Feb 2023 10:36:39 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On 2/5/23 19:36, Andrey Borodin wrote:\n> On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>>\n>> Hello! Please find attached v8.\n> \n> I got some interesting feedback from some patch users.\n> There was an oversight that frequently yielded results that are 1,2 or\n> 3 bytes longer than expected.\n> Looking closer I found that the correctness of the last 3-byte tail is\n> checked in two places. PFA fix for this. Previously compressed data\n> was correct, however in some cases few bytes longer than the result of\n> current pglz implementation.\n> \n\nThanks. What were the consequences of the issue? Lower compression\nratio, or did we then fail to decompress the data (or would current pglz\nimplementation fail to decompress it)?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Feb 2023 02:51:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Sun, Feb 5, 2023 at 5:51 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/5/23 19:36, Andrey Borodin wrote:\n> > On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> >>\n> >> Hello! Please find attached v8.\n> >\n> > I got some interesting feedback from some patch users.\n> > There was an oversight that frequently yielded results that are 1,2 or\n> > 3 bytes longer than expected.\n> > Looking closer I found that the correctness of the last 3-byte tail is\n> > checked in two places. PFA fix for this. Previously compressed data\n> > was correct, however in some cases few bytes longer than the result of\n> > current pglz implementation.\n> >\n>\n> Thanks. What were the consequences of the issue? Lower compression\n> ratio, or did we then fail to decompress the data (or would current pglz\n> implementation fail to decompress it)?\n>\nThe data was decompressed fine. But extension tests (Citus's columnar\nengine) hard-coded a lot of compression ratio stuff.\nAnd there is still 1 more test where optimized version produces 1 byte\nlonger output. I'm trying to find it, but with no success yet.\n\nThere are known and documented cases when optimized pglz version would\ndo so. good_match without 10-division and memcmp by 4 bytes. But even\ndisabling this, still observing 1-byte longer compression results\npersists... The problem is the length is changed after deleting some\ndata, so compression of that particular sequence seems to be somewhere\nfar away.\nIt was funny at the beginning - to hunt for 1 byte. But the weekend is\nending, and it seems that byte slipped from me again...\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Sun, 5 Feb 2023 18:00:20 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\nOn 2/6/23 03:00, Andrey Borodin wrote:\n> On Sun, Feb 5, 2023 at 5:51 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 2/5/23 19:36, Andrey Borodin wrote:\n>>> On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>>>>\n>>>> Hello! Please find attached v8.\n>>>\n>>> I got some interesting feedback from some patch users.\n>>> There was an oversight that frequently yielded results that are 1,2 or\n>>> 3 bytes longer than expected.\n>>> Looking closer I found that the correctness of the last 3-byte tail is\n>>> checked in two places. PFA fix for this. Previously compressed data\n>>> was correct, however in some cases few bytes longer than the result of\n>>> current pglz implementation.\n>>>\n>>\n>> Thanks. What were the consequences of the issue? Lower compression\n>> ratio, or did we then fail to decompress the data (or would current pglz\n>> implementation fail to decompress it)?\n>>\n> The data was decompressed fine. But extension tests (Citus's columnar\n> engine) hard-coded a lot of compression ratio stuff.\n\nOK. Not sure I'd blame the patch for these failures, as long as long as\nthe result is still correct and can be decompressed. I'm not aware of a\nspecification of what the compression must (not) produce.\n\n> And there is still 1 more test where optimized version produces 1 byte\n> longer output. I'm trying to find it, but with no success yet.\n> \n> There are known and documented cases when optimized pglz version would\n> do so. good_match without 10-division and memcmp by 4 bytes. But even\n> disabling this, still observing 1-byte longer compression results\n> persists... The problem is the length is changed after deleting some\n> data, so compression of that particular sequence seems to be somewhere\n> far away.\n> It was funny at the beginning - to hunt for 1 byte. But the weekend is\n> ending, and it seems that byte slipped from me again...\n> \n\nI wonder what that means for the patch. 
I haven't investigated this at\nall, but it seems as if the optimization means we fail to find a match,\nproducing a tad larger output. That may be still be a good tradeoff, as\nlong as the output is correct (assuming it doesn't break some promise\nregarding expected output).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Feb 2023 20:57:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 11:57 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I wonder what that means for the patch. I haven't investigated this at\n> all, but it seems as if the optimization means we fail to find a match,\n> producing a tad larger output. That may be still be a good tradeoff, as\n> long as the output is correct (assuming it doesn't break some promise\n> regarding expected output).\n>\n\nYes, patch produces correct results and faster. And keeps the\ncompression ratio the same except for some one odd case.\nThe only problem is I do not understand _why_ it happens in that odd\ncase. And so far I failed to extract input\\outputs of that odd case,\nbecause it is buried under so many layers of abstraction and affects\nonly late stats.\nMaybe the problem is not in compression at all...\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 12:16:03 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-05 10:36:39 -0800, Andrey Borodin wrote:\n> On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> >\n> > Hello! Please find attached v8.\n>\n> I got some interesting feedback from some patch users.\n> There was an oversight that frequently yielded results that are 1,2 or\n> 3 bytes longer than expected.\n> Looking closer I found that the correctness of the last 3-byte tail is\n> checked in two places. PFA fix for this. Previously compressed data\n> was correct, however in some cases few bytes longer than the result of\n> current pglz implementation.\n\nThis version fails on cfbot, due to address sanitizer:\n\nhttps://cirrus-ci.com/task/4921632586727424\nhttps://api.cirrus-ci.com/v1/artifact/task/4921632586727424/log/src/test/regress/log/initdb.log\n\n\nperforming post-bootstrap initialization ... =================================================================\n==15991==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61e000002ee0 at pc 0x558e1b847b16 bp 0x7ffd35782f70 sp 0x7ffd35782f68\nREAD of size 1 at 0x61e000002ee0 thread T0\n #0 0x558e1b847b15 in pglz_hist_add /tmp/cirrus-ci-build/src/common/pg_lzcompress.c:310\n #1 0x558e1b847b15 in pglz_compress /tmp/cirrus-ci-build/src/common/pg_lzcompress.c:680\n #2 0x558e1aa86ef0 in pglz_compress_datum /tmp/cirrus-ci-build/src/backend/access/common/toast_compression.c:65\n #3 0x558e1aa87af2 in toast_compress_datum /tmp/cirrus-ci-build/src/backend/access/common/toast_internals.c:68\n #4 0x558e1ac22989 in toast_tuple_try_compression /tmp/cirrus-ci-build/src/backend/access/table/toast_helper.c:234\n #5 0x558e1ab6af24 in heap_toast_insert_or_update /tmp/cirrus-ci-build/src/backend/access/heap/heaptoast.c:197\n #6 0x558e1ab4a2a6 in heap_update /tmp/cirrus-ci-build/src/backend/access/heap/heapam.c:3533\n...\n\n\n\nIndependent of this failure, I'm worried about the cost/benefit analysis of a\npglz change that changes this much at once. 
It's quite hard to review.\n\n\nAndres\n\n\n",
"msg_date": "Tue, 7 Feb 2023 12:18:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "\n\nOn 2/7/23 21:18, Andres Freund wrote:\n> Hi,\n> \n> On 2023-02-05 10:36:39 -0800, Andrey Borodin wrote:\n>> On Fri, Jan 6, 2023 at 10:02 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>>>\n>>> Hello! Please find attached v8.\n>>\n>> I got some interesting feedback from some patch users.\n>> There was an oversight that frequently yielded results that are 1,2 or\n>> 3 bytes longer than expected.\n>> Looking closer I found that the correctness of the last 3-byte tail is\n>> checked in two places. PFA fix for this. Previously compressed data\n>> was correct, however in some cases few bytes longer than the result of\n>> current pglz implementation.\n> \n> This version fails on cfbot, due to address sanitizer:\n> \n> https://cirrus-ci.com/task/4921632586727424\n> https://api.cirrus-ci.com/v1/artifact/task/4921632586727424/log/src/test/regress/log/initdb.log\n> \n> \n> performing post-bootstrap initialization ... =================================================================\n> ==15991==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61e000002ee0 at pc 0x558e1b847b16 bp 0x7ffd35782f70 sp 0x7ffd35782f68\n> READ of size 1 at 0x61e000002ee0 thread T0\n> #0 0x558e1b847b15 in pglz_hist_add /tmp/cirrus-ci-build/src/common/pg_lzcompress.c:310\n> #1 0x558e1b847b15 in pglz_compress /tmp/cirrus-ci-build/src/common/pg_lzcompress.c:680\n> #2 0x558e1aa86ef0 in pglz_compress_datum /tmp/cirrus-ci-build/src/backend/access/common/toast_compression.c:65\n> #3 0x558e1aa87af2 in toast_compress_datum /tmp/cirrus-ci-build/src/backend/access/common/toast_internals.c:68\n> #4 0x558e1ac22989 in toast_tuple_try_compression /tmp/cirrus-ci-build/src/backend/access/table/toast_helper.c:234\n> #5 0x558e1ab6af24 in heap_toast_insert_or_update /tmp/cirrus-ci-build/src/backend/access/heap/heaptoast.c:197\n> #6 0x558e1ab4a2a6 in heap_update /tmp/cirrus-ci-build/src/backend/access/heap/heapam.c:3533\n> ...\n> \n\nYeah, and valgrind seems to hit the same issue (it's not 
labeled as\nbuffer overflow, but it seems to be exactly the same place):\n\n==380682== Invalid read of size 1\n==380682== at 0xBCEAAB: pglz_hist_add (pg_lzcompress.c:310)\n==380682== by 0xBCF130: pglz_compress (pg_lzcompress.c:670)\n==380682== by 0x4A911F: pglz_compress_datum (toast_compression.c:65)\n==380682== by 0x4A97E2: toast_compress_datum (toast_internals.c:68)\n==380682== by 0x54CCA4: toast_tuple_try_compression (toast_helper.c:234)\n==380682== by 0x4FFC33: heap_toast_insert_or_update (heaptoast.c:197)\n==380682== by 0x4ED498: heap_update (heapam.c:3624)\n==380682== by 0x4EE023: simple_heap_update (heapam.c:4060)\n==380682== by 0x5B1B2B: CatalogTupleUpdateWithInfo (indexing.c:329)\n==380682== by 0x65C3AB: update_attstats (analyze.c:1741)\n==380682== by 0x65A054: do_analyze_rel (analyze.c:602)\n==380682== by 0x659405: analyze_rel (analyze.c:261)\n==380682== by 0x70A162: vacuum (vacuum.c:523)\n==380682== by 0x8DF8F7: autovacuum_do_vac_analyze (autovacuum.c:3155)\n==380682== by 0x8DE74A: do_autovacuum (autovacuum.c:2473)\n==380682== by 0x8DD49E: AutoVacWorkerMain (autovacuum.c:1716)\n==380682== by 0x8DD097: StartAutoVacWorker (autovacuum.c:1494)\n==380682== by 0x8EA5B2: StartAutovacuumWorker (postmaster.c:5481)\n==380682== by 0x8EA10A: process_pm_pmsignal (postmaster.c:5192)\n==380682== by 0x8E6121: ServerLoop (postmaster.c:1770)\n==380682== Address 0xe722c78 is 103,368 bytes inside a recently\nre-allocated block of size 131,072 alloc'd\n==380682== at 0x48457AB: malloc (vg_replace_malloc.c:393)\n==380682== by 0xB95423: AllocSetAlloc (aset.c:929)\n==380682== by 0xBA2B6C: palloc (mcxt.c:1224)\n==380682== by 0x4A0962: heap_copytuple (heaptuple.c:687)\n==380682== by 0x73A2BB: tts_buffer_heap_copy_heap_tuple\n(execTuples.c:842)\n==380682== by 0x658E42: ExecCopySlotHeapTuple (tuptable.h:464)\n==380682== by 0x65B288: acquire_sample_rows (analyze.c:1261)\n==380682== by 0x659E42: do_analyze_rel (analyze.c:536)\n==380682== by 0x659405: analyze_rel 
(analyze.c:261)\n==380682== by 0x70A162: vacuum (vacuum.c:523)\n==380682== by 0x8DF8F7: autovacuum_do_vac_analyze (autovacuum.c:3155)\n==380682== by 0x8DE74A: do_autovacuum (autovacuum.c:2473)\n==380682== by 0x8DD49E: AutoVacWorkerMain (autovacuum.c:1716)\n==380682== by 0x8DD097: StartAutoVacWorker (autovacuum.c:1494)\n==380682== by 0x8EA5B2: StartAutovacuumWorker (postmaster.c:5481)\n==380682== by 0x8EA10A: process_pm_pmsignal (postmaster.c:5192)\n==380682== by 0x8E6121: ServerLoop (postmaster.c:1770)\n==380682== by 0x8E5B54: PostmasterMain (postmaster.c:1463)\n==380682== by 0x7A806C: main (main.c:200)\n==380682==\n\nThe place allocating the buffer changes over time, but the first part\n(invalid read) seems to be exactly the same.\n\nFWIW I did run previous versions using valgrind, so this gotta be due\nsome recent change.\n\n> \n> Independent of this failure, I'm worried about the cost/benefit analysis of a\n> pglz change that changes this much at once. It's quite hard to review.\n> \n\nI agree.\n\nI think I managed to understand what the patch does during the review,\nbut it's so much harder - it'd definitely be better to have this split\ninto smaller parts, somehow. Interestingly enough the commit message\nactually says this:\n\n This patch accumulates several changes to pglz compression:\n 1. Convert macro-functions to regular functions for readability\n 2. Use more compact hash table with uint16 indexes instead of pointers\n 3. Avoid prev pointer in hash table\n 4. Use 4-byte comparisons during a search instead of 1-byte\n comparisons\n\nWhich I think is a pretty good recipe how to split the patch. (And we\nalso need a better commit message, or at least a proposal.)\n\nThis'd probably also help when investigating the extra byte issue,\ndiscussed yesterday. (Assuming it's not related to the invalid access\nreported by valgrind / asan).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Feb 2023 11:16:47 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 11:16:47 +0100, Tomas Vondra wrote:\n> On 2/7/23 21:18, Andres Freund wrote:\n> > \n> > Independent of this failure, I'm worried about the cost/benefit analysis of a\n> > pglz change that changes this much at once. It's quite hard to review.\n> > \n> \n> I agree.\n> \n> I think I managed to understand what the patch does during the review,\n> but it's so much harder - it'd definitely be better to have this split\n> into smaller parts, somehow. Interestingly enough the commit message\n> actually says this:\n> \n> This patch accumulates several changes to pglz compression:\n> 1. Convert macro-functions to regular functions for readability\n> 2. Use more compact hash table with uint16 indexes instead of pointers\n> 3. Avoid prev pointer in hash table\n> 4. Use 4-byte comparisons during a search instead of 1-byte\n> comparisons\n> \n> Which I think is a pretty good recipe how to split the patch. (And we\n> also need a better commit message, or at least a proposal.)\n> \n> This'd probably also help when investigating the extra byte issue,\n> discussed yesterday. (Assuming it's not related to the invalid access\n> reported by valgrind / asan).\n\nDue to the sanitizer changes, and this feedback, I'm marking the entry as\nwaiting on author.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:03:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 4:03 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Due to the sanitizer changes, and this feedback, I'm marking the entry as\n> waiting on author.\n>\nThanks Andres! Yes, I plan to make another attempt to refactor this\npatch on the weekend. If this attempt fails, I think we should just\nreject it and I'll get back to this during summer.\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:09:03 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz compression performance, take two"
}
]
[
{
"msg_contents": "Hello.\n\nWe found a behavioral change (which seems to be a bug) in recovery at\nPG13.\n\nThe following steps might seem somewhat strange but the replication\ncode deliberately cope with the case. This is a sequense seen while\noperating a HA cluseter using Pacemaker.\n\n- Run initdb to create a primary.\n- Set archive_mode=on on the primary.\n- Start the primary.\n\n- Create a standby using pg_basebackup from the primary.\n- Stop the standby.\n- Stop the primary.\n\n- Put stnadby.signal to the primary then start it.\n- Promote the primary.\n\n- Start the standby.\n\n\nUntil PG12, the parimary signals end-of-timeline to the standby and\nswitches to the next timeline. Since PG13, that doesn't happen and\nthe standby continues to request for the segment of the older\ntimeline, which no longer exists.\n\nFATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000003 has already been removed\n\nIt is because WalSndSegmentOpen() can fail to detect a timeline switch\non a historic timeline, due to use of a wrong variable to check\nthat. It is using state->seg.ws_segno but it seems to be a thinko when\nthe code around was refactored in 709d003fbd.\n\nThe first patch detects the wrong behavior. The second small patch\nfixes it.\n\nIn the first patch, the test added to 001_stream_rep.pl involves two\ncopied functions related to server-log investigation from\n019_repslot_limit.pl.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 09 Dec 2020 17:43:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "A failure of standby to follow timeline switch"
},
{
"msg_contents": "\n\nOn 2020/12/09 17:43, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> We found a behavioral change (which seems to be a bug) in recovery at\n> PG13.\n> \n> The following steps might seem somewhat strange but the replication\n> code deliberately cope with the case. This is a sequense seen while\n> operating a HA cluseter using Pacemaker.\n> \n> - Run initdb to create a primary.\n> - Set archive_mode=on on the primary.\n> - Start the primary.\n> \n> - Create a standby using pg_basebackup from the primary.\n> - Stop the standby.\n> - Stop the primary.\n> \n> - Put stnadby.signal to the primary then start it.\n> - Promote the primary.\n> \n> - Start the standby.\n> \n> \n> Until PG12, the parimary signals end-of-timeline to the standby and\n> switches to the next timeline. Since PG13, that doesn't happen and\n> the standby continues to request for the segment of the older\n> timeline, which no longer exists.\n> \n> FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000003 has already been removed\n> \n> It is because WalSndSegmentOpen() can fail to detect a timeline switch\n> on a historic timeline, due to use of a wrong variable to check\n> that. It is using state->seg.ws_segno but it seems to be a thinko when\n> the code around was refactored in 709d003fbd.\n> \n> The first patch detects the wrong behavior. The second small patch\n> fixes it.\n\nThanks for reporting this! 
This looks like a bug.\n\nWhen I applied two patches in the master branch and\nran \"make check-world\", I got the following error.\n\n============== creating database \"contrib_regression\" ==============\n# Looks like you planned 37 tests but ran 36.\n# Looks like your test exited with 255 just after 36.\nt/001_stream_rep.pl ..................\nDubious, test returned 255 (wstat 65280, 0xff00)\nFailed 1/37 subtests\n...\nTest Summary Report\n-------------------\nt/001_stream_rep.pl (Wstat: 65280 Tests: 36 Failed: 0)\n Non-zero exit status: 255\n Parse errors: Bad plan. You planned 37 tests but ran 36.\nFiles=21, Tests=239, 302 wallclock secs ( 0.10 usr 0.05 sys + 41.69 cusr 39.84 csys = 81.68 CPU)\nResult: FAIL\nmake[2]: *** [check] Error 1\nmake[1]: *** [check-recovery-recurse] Error 2\nmake[1]: *** Waiting for unfinished jobs....\nt/070_dropuser.pl ......... ok\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 24 Dec 2020 15:33:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "Thank you for looking this.\n\nAt Thu, 24 Dec 2020 15:33:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> When I applied two patches in the master branch and\n> ran \"make check-world\", I got the following error.\n> \n> ============== creating database \"contrib_regression\" ==============\n> # Looks like you planned 37 tests but ran 36.\n> # Looks like your test exited with 255 just after 36.\n> t/001_stream_rep.pl ..................\n> Dubious, test returned 255 (wstat 65280, 0xff00)\n> Failed 1/37 subtests\n> ...\n> Test Summary Report\n> -------------------\n> t/001_stream_rep.pl (Wstat: 65280 Tests: 36 Failed: 0)\n> Non-zero exit status: 255\n> Parse errors: Bad plan. You planned 37 tests but ran 36.\n> Files=21, Tests=239, 302 wallclock secs ( 0.10 usr 0.05 sys + 41.69\n> cusr 39.84 csys = 81.68 CPU)\n> Result: FAIL\n> make[2]: *** [check] Error 1\n> make[1]: *** [check-recovery-recurse] Error 2\n> make[1]: *** Waiting for unfinished jobs....\n> t/070_dropuser.pl ......... ok\n\nMmm. I retried that and saw it succeed (with 0002 applied).\n\nIf I modified \"user Test::More tests => 37\" to 38 in the perl file, I\ngot a similar result.\n\n> t/001_stream_rep.pl .. 37/38 # Looks like you planned 38 tests but ran 37.\n> t/001_stream_rep.pl .. Dubious, test returned 255 (wstat 65280, 0xff00)\n> Failed 1/38 subtests \n> \n> Test Summary Report\n> -------------------\n> t/001_stream_rep.pl (Wstat: 65280 Tests: 37 Failed: 0)\n> Non-zero exit status: 255\n> Parse errors: Bad plan. You planned 38 tests but ran 37.\n> Files=1, Tests=37, 10 wallclock secs ( 0.03 usr 0.00 sys + 3.64 cusr 2.05 csy\n> s = 5.72 CPU)\n> Result: FAIL\n> make: *** [Makefile:19: check] Error 1\n\nI can't guess what happenened on your environment..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Dec 2020 12:03:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "\n\nOn 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n> Thank you for looking this.\n> \n> At Thu, 24 Dec 2020 15:33:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> When I applied two patches in the master branch and\n>> ran \"make check-world\", I got the following error.\n>>\n>> ============== creating database \"contrib_regression\" ==============\n>> # Looks like you planned 37 tests but ran 36.\n>> # Looks like your test exited with 255 just after 36.\n>> t/001_stream_rep.pl ..................\n>> Dubious, test returned 255 (wstat 65280, 0xff00)\n>> Failed 1/37 subtests\n>> ...\n>> Test Summary Report\n>> -------------------\n>> t/001_stream_rep.pl (Wstat: 65280 Tests: 36 Failed: 0)\n>> Non-zero exit status: 255\n>> Parse errors: Bad plan. You planned 37 tests but ran 36.\n>> Files=21, Tests=239, 302 wallclock secs ( 0.10 usr 0.05 sys + 41.69\n>> cusr 39.84 csys = 81.68 CPU)\n>> Result: FAIL\n>> make[2]: *** [check] Error 1\n>> make[1]: *** [check-recovery-recurse] Error 2\n>> make[1]: *** Waiting for unfinished jobs....\n>> t/070_dropuser.pl ......... ok\n> \n> Mmm. I retried that and saw it succeed (with 0002 applied).\n> \n> If I modified \"user Test::More tests => 37\" to 38 in the perl file, I\n> got a similar result.\n\nWhat happens if you run make check-world with -j 4? When I ran that,\nthe test failed. But with -j 1, the test finished with success. I'm not sure\nwhy this happened, though..\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 26 Dec 2020 02:15:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Sat, 26 Dec 2020 02:15:06 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n> > Thank you for looking this.\n> > At Thu, 24 Dec 2020 15:33:04 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >> When I applied two patches in the master branch and\n> >> ran \"make check-world\", I got the following error.\n> >>\n> >> ============== creating database \"contrib_regression\" ==============\n> >> # Looks like you planned 37 tests but ran 36.\n> >> # Looks like your test exited with 255 just after 36.\n> >> t/001_stream_rep.pl ..................\n> >> Dubious, test returned 255 (wstat 65280, 0xff00)\n> >> Failed 1/37 subtests\n> >> ...\n> >> Test Summary Report\n> >> -------------------\n> >> t/001_stream_rep.pl (Wstat: 65280 Tests: 36 Failed: 0)\n> >> Non-zero exit status: 255\n> >> Parse errors: Bad plan. You planned 37 tests but ran 36.\n> >> Files=21, Tests=239, 302 wallclock secs ( 0.10 usr 0.05 sys + 41.69\n> >> cusr 39.84 csys = 81.68 CPU)\n> >> Result: FAIL\n> >> make[2]: *** [check] Error 1\n> >> make[1]: *** [check-recovery-recurse] Error 2\n> >> make[1]: *** Waiting for unfinished jobs....\n> >> t/070_dropuser.pl ......... ok\n> > Mmm. I retried that and saw it succeed (with 0002 applied).\n> > If I modified \"user Test::More tests => 37\" to 38 in the perl file, I\n> > got a similar result.\n> \n> What happens if you run make check-world with -j 4? When I ran that,\n> the test failed. But with -j 1, the test finished with success. I'm\n> not sure\n> why this happened, though..\n\nMaybe this is it.\n\n+\tusleep(100_000);\n\nIf the script doesn't find the expected log line, it reaches the\nusleep and bark that \"Undefined subroutine &main::usleep called...\". I\nthought I tested that path but perhaps I overlooked the error. 
\"use\nTime::HiRes\" is needed.\n\nThe attached is the fixed version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 04 Jan 2021 12:06:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "\n\nOn 2021/01/04 12:06, Kyotaro Horiguchi wrote:\n> At Sat, 26 Dec 2020 02:15:06 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n>>> Thank you for looking this.\n>>> At Thu, 24 Dec 2020 15:33:04 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>> When I applied two patches in the master branch and\n>>>> ran \"make check-world\", I got the following error.\n>>>>\n>>>> ============== creating database \"contrib_regression\" ==============\n>>>> # Looks like you planned 37 tests but ran 36.\n>>>> # Looks like your test exited with 255 just after 36.\n>>>> t/001_stream_rep.pl ..................\n>>>> Dubious, test returned 255 (wstat 65280, 0xff00)\n>>>> Failed 1/37 subtests\n>>>> ...\n>>>> Test Summary Report\n>>>> -------------------\n>>>> t/001_stream_rep.pl (Wstat: 65280 Tests: 36 Failed: 0)\n>>>> Non-zero exit status: 255\n>>>> Parse errors: Bad plan. You planned 37 tests but ran 36.\n>>>> Files=21, Tests=239, 302 wallclock secs ( 0.10 usr 0.05 sys + 41.69\n>>>> cusr 39.84 csys = 81.68 CPU)\n>>>> Result: FAIL\n>>>> make[2]: *** [check] Error 1\n>>>> make[1]: *** [check-recovery-recurse] Error 2\n>>>> make[1]: *** Waiting for unfinished jobs....\n>>>> t/070_dropuser.pl ......... ok\n>>> Mmm. I retried that and saw it succeed (with 0002 applied).\n>>> If I modified \"user Test::More tests => 37\" to 38 in the perl file, I\n>>> got a similar result.\n>>\n>> What happens if you run make check-world with -j 4? When I ran that,\n>> the test failed. But with -j 1, the test finished with success. I'm\n>> not sure\n>> why this happened, though..\n> \n> Maybe this is it.\n> \n> +\tusleep(100_000);\n> \n> If the script doesn't find the expected log line, it reaches the\n> usleep and bark that \"Undefined subroutine &main::usleep called...\". I\n> thought I tested that path but perhaps I overlooked the error. 
\"use\n> Time::HiRes\" is needed.\n\nYes.\n\n> \n> The attached is the fixed version.\n\nThanks for updating the patches!\n\n> In the first patch, the test added to 001_stream_rep.pl involves two\n> copied functions related to server-log investigation from\n> 019_repslot_limit.pl.\n\nSo you're planning to define them commonly in TestLib.pm or elsewhere?\n\n+$node_primary_2->init(allows_streaming => 1);\n+$node_primary_2->enable_archiving; # needed to make .paritial segment\n\nIsn't it better to use has_archiving flag in init() instead of doing\nenable_archiving, like other tests do?\n\n0002 looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 4 Jan 2021 19:00:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Mon, 4 Jan 2021 19:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/01/04 12:06, Kyotaro Horiguchi wrote:\n> > At Sat, 26 Dec 2020 02:15:06 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >>\n> >> On 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n> > The attached is the fixed version.\n> \n> Thanks for updating the patches!\n> \n> > In the first patch, the test added to 001_stream_rep.pl involves two\n> > copied functions related to server-log investigation from\n> > 019_repslot_limit.pl.\n> \n> So you're planning to define them commonly in TestLib.pm or elsewhere?\n\nYeah.. That's correct. Newly added as the first patch.\n\nWhile making that change, I extended the interface of slurp_file to\nallow reading from arbitrary position. I attached this as a separate\npatch just for clarifying the changeset.\n\nThe existing messages for open() and OSHandleOpen() look somewhat\nstrange after patching since they are not really \"read\" errors, but\nthey're harmless. (It successfully ran also on Windows10)\n\nThe first hunk below is a fix for a forgotten line-feed.\n\n \t\tmy $fHandle = createFile($filename, \"r\", \"rwd\")\n-\t\t or croak \"could not open \\\"$filename\\\": $^E\";\n+\t\t or croak \"could not open \\\"$filename\\\": $^E\\n\";\n \t\tOsFHandleOpen(my $fh = IO::Handle->new(), $fHandle, 'r')\n \t\t or croak \"could not read \\\"$filename\\\": $^E\\n\";\n+\t\tseek($fh, $from, 0)\n+\t\t or croak \"could not seek \\\"$filename\\\" to $from: $^E\\n\";\n\n\n> +$node_primary_2->init(allows_streaming => 1);\n> +$node_primary_2->enable_archiving; # needed to make .paritial segment\n> \n> Isn't it better to use has_archiving flag in init() instead of doing\n> enable_archiving, like other tests do?\n\nAgreed. Fixed 0002 (formerly 0001).\n\n> 0002 looks good to me.\n\nThanks. The attached is the revised patchset. \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 05 Jan 2021 17:26:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Tue, 05 Jan 2021 17:26:02 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thanks. The attached is the revised patchset. \n\nIt is not applicable to PG13 due to wording changes. This is an\napplicable all-in-one version to PG13.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 06 Jan 2021 10:48:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "\n\nOn 2021/01/05 17:26, Kyotaro Horiguchi wrote:\n> At Mon, 4 Jan 2021 19:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2021/01/04 12:06, Kyotaro Horiguchi wrote:\n>>> At Sat, 26 Dec 2020 02:15:06 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>>\n>>>> On 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n>>> The attached is the fixed version.\n>>\n>> Thanks for updating the patches!\n>>\n>>> In the first patch, the test added to 001_stream_rep.pl involves two\n>>> copied functions related to server-log investigation from\n>>> 019_repslot_limit.pl.\n>>\n>> So you're planning to define them commonly in TestLib.pm or elsewhere?\n> \n> Yeah.. That's correct. Newly added as the first patch.\n> \n> While making that change, I extended the interface of slurp_file to\n> allow reading from arbitrary position.\n\nIs this extension really helpful for current use case?\nAt least I'd like to avoid back-patching this since it's an exntesion...\n\n \t\tOsFHandleOpen(my $fh = IO::Handle->new(), $fHandle, 'r')\n \t\t or croak \"could not read \\\"$filename\\\": $^E\\n\";\n+\t\tseek($fh, $from, 0)\n+\t\t or croak \"could not seek \\\"$filename\\\" to $from: $^E\\n\";\n\nI'm not familiar with this area, but SetFilePointer() is more suitable\nrather than seek()?\n\n\n> Thanks. The attached is the revised patchset.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 Jan 2021 11:55:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Thu, 7 Jan 2021 11:55:33 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/01/05 17:26, Kyotaro Horiguchi wrote:\n> > At Mon, 4 Jan 2021 19:00:21 +0900, Fujii Masao <masao.fujii@oss.nttdata.com>\n> > wrote in\n> >>\n> >>\n> >> On 2021/01/04 12:06, Kyotaro Horiguchi wrote:\n> >>> At Sat, 26 Dec 2020 02:15:06 +0900, Fujii Masao\n> >>> <masao.fujii@oss.nttdata.com> wrote in\n> >>>>\n> >>>> On 2020/12/25 12:03, Kyotaro Horiguchi wrote:\n> >>> The attached is the fixed version.\n> >>\n> >> Thanks for updating the patches!\n> >>\n> >>> In the first patch, the test added to 001_stream_rep.pl involves two\n> >>> copied functions related to server-log investigation from\n> >>> 019_repslot_limit.pl.\n> >>\n> >> So you're planning to define them commonly in TestLib.pm or elsewhere?\n> > Yeah.. That's correct. Newly added as the first patch.\n> > While making that change, I extended the interface of slurp_file to\n> > allow reading from arbitrary position.\n> \n> Is this extension really helpful for current use case?\n> At least I'd like to avoid back-patching this since it's an exntesion...\n\nYeah, I felt a hesitattion about it a bit. It's less useful assuming\nthat log files won't get so large. Removed in this version.\n\n> \t\tOsFHandleOpen(my $fh = IO::Handle->new(), $fHandle, 'r')\n> \t\t or croak \"could not read \\\"$filename\\\": $^E\\n\";\n> +\t\tseek($fh, $from, 0)\n> +\t\t or croak \"could not seek \\\"$filename\\\" to $from: $^E\\n\";\n> \n> I'm not familiar with this area, but SetFilePointer() is more suitable\n> rather than seek()?\n\nSetFilePointer() works for a native handle, IO::Handle->new()\nhere. seek() works on $fh, a perl handle. 
If ReadFile is used later\nSetFilePointer() might be needed separately.\n\nAnyway, it is removed.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n From 933dd946ca547b7de2dfed84807cd9d871c12f6f Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\nDate: Tue, 5 Jan 2021 13:34:36 +0900\nSubject: [PATCH] Fix timeline-tracking failure while sending a historic\n timeline\n\nWalsender should track timeline switches while sending a historic\ntimeline. Regain that behavior, which was broken in PG13, by a thinko\nof 709d003fbd. Backpatch to PG13.\n---\n src/backend/replication/walsender.c | 2 +-\n src/test/perl/PostgresNode.pm | 36 ++++++++++++++++++++\n src/test/recovery/t/001_stream_rep.pl | 41 ++++++++++++++++++++++-\n src/test/recovery/t/019_replslot_limit.pl | 37 ++++----------------\n 4 files changed, 83 insertions(+), 33 deletions(-)\n\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex 7f87eb7f19..04f6c3ebb4 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -2478,7 +2478,7 @@ WalSndSegmentOpen(XLogReaderState *state, XLogSegNo nextSegNo,\n \t\tXLogSegNo\tendSegNo;\n \n \t\tXLByteToSeg(sendTimeLineValidUpto, endSegNo, state->segcxt.ws_segsize);\n-\t\tif (state->seg.ws_segno == endSegNo)\n+\t\tif (nextSegNo == endSegNo)\n \t\t\t*tli_p = sendTimeLineNextTLI;\n \t}\n \ndiff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm\nindex 980f1f1533..a08c71b549 100644\n--- a/src/test/perl/PostgresNode.pm\n+++ b/src/test/perl/PostgresNode.pm\n@@ -2138,6 +2138,42 @@ sub pg_recvlogical_upto\n \n =pod\n \n+=item $node->current_log_position()\n+\n+Return the current position of server log.\n+\n+=cut\n+\n+sub current_log_position\n+{\n+\tmy $self = shift;\n+\n+\treturn (stat $self->logfile)[7];\n+}\n+\n+=pod\n+\n+=item $node->find_in_log($pattern, $startpos)\n+\n+Returns whether the $pattern occurs after $startpos in the server 
log.\n+\n+=cut\n+\n+sub find_in_log\n+{\n+\tmy ($self, $pattern, $startpos) = @_;\n+\n+\t$startpos = 0 unless defined $startpos;\n+\tmy $log = TestLib::slurp_file($self->logfile);\n+\treturn 0 if (length($log) <= $startpos);\n+\n+\t$log = substr($log, $startpos);\n+\n+\treturn $log =~ m/$pattern/;\n+}\n+\n+=pod\n+\n =back\n \n =cut\ndiff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl\nindex 778f11b28b..8d2b24fe55 100644\n--- a/src/test/recovery/t/001_stream_rep.pl\n+++ b/src/test/recovery/t/001_stream_rep.pl\n@@ -2,8 +2,9 @@\n use strict;\n use warnings;\n use PostgresNode;\n+use Time::HiRes qw(usleep);\n use TestLib;\n-use Test::More tests => 36;\n+use Test::More tests => 37;\n \n # Initialize master node\n my $node_master = get_new_node('master');\n@@ -409,3 +410,41 @@ ok( ($phys_restart_lsn_pre cmp $phys_restart_lsn_post) == 0,\n my $master_data = $node_master->data_dir;\n ok(!-f \"$master_data/pg_wal/$segment_removed\",\n \t\"WAL segment $segment_removed recycled after physical slot advancing\");\n+\n+#\n+# Check if timeline-increment works while reading a historic timeline.\n+my $node_primary_2 = get_new_node('primary_2');\n+# archiving is needed to create .paritial segment\n+$node_primary_2->init(allows_streaming => 1, has_archiving => 1);\n+$node_primary_2->start;\n+$node_primary_2->backup($backup_name);\n+my $node_standby_3 = get_new_node('standby_3');\n+$node_standby_3->init_from_backup($node_primary_2, $backup_name,\n+\t\t\t\t\t\t\t\t has_streaming => 1);\n+$node_primary_2->stop;\n+$node_primary_2->set_standby_mode; # increment primary timeline\n+$node_primary_2->start;\n+$node_primary_2->promote;\n+my $logstart = $node_standby_3->current_log_position();\n+$node_standby_3->start;\n+\n+my $success = 0;\n+for (my $i = 0 ; $i < 1000; $i++)\n+{\n+\tif ($node_standby_3->find_in_log(\n+\t\t\t\"requested WAL segment [0-9A-F]+ has already been removed\",\n+\t\t\t$logstart))\n+\t{\n+\t\tlast;\n+\t}\n+\telsif 
($node_standby_3->find_in_log(\n+\t\t\t\"End of WAL reached on timeline\",\n+\t\t\t $logstart))\n+\t{\n+\t\t$success = 1;\n+\t\tlast;\n+\t}\n+\tusleep(100_000);\n+}\n+\n+ok($success, 'Timeline increment while reading a historic timeline');\ndiff --git a/src/test/recovery/t/019_replslot_limit.pl b/src/test/recovery/t/019_replslot_limit.pl\nindex a7231dcd47..8b3c5de057 100644\n--- a/src/test/recovery/t/019_replslot_limit.pl\n+++ b/src/test/recovery/t/019_replslot_limit.pl\n@@ -165,19 +165,17 @@ $node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);\n \n $node_standby->stop;\n \n-ok( !find_in_log(\n-\t\t$node_standby,\n-\t\t\"requested WAL segment [0-9A-F]+ has already been removed\"),\n+ok( !$node_standby->find_in_log(\n+\t\t \"requested WAL segment [0-9A-F]+ has already been removed\"),\n \t'check that required WAL segments are still available');\n \n # Advance WAL again, the slot loses the oldest segment.\n-my $logstart = get_log_size($node_master);\n+my $logstart = $node_master->current_log_position();\n advance_wal($node_master, 7);\n $node_master->safe_psql('postgres', \"CHECKPOINT;\");\n \n # WARNING should be issued\n-ok( find_in_log(\n-\t\t$node_master,\n+ok( $node_master->find_in_log(\n \t\t\"invalidating slot \\\"rep1\\\" because its restart_lsn [0-9A-F/]+ exceeds max_slot_wal_keep_size\",\n \t\t$logstart),\n \t'check that the warning is logged');\n@@ -190,14 +188,13 @@ is($result, \"rep1|f|t|lost|\",\n \t'check that the slot became inactive and the state \"lost\" persists');\n \n # The standby no longer can connect to the master\n-$logstart = get_log_size($node_standby);\n+$logstart = $node_standby->current_log_position();\n $node_standby->start;\n \n my $failed = 0;\n for (my $i = 0; $i < 10000; $i++)\n {\n-\tif (find_in_log(\n-\t\t\t$node_standby,\n+\tif ($node_standby->find_in_log(\n \t\t\t\"requested WAL segment [0-9A-F]+ has already been removed\",\n \t\t\t$logstart))\n \t{\n@@ -264,25 +261,3 @@ sub advance_wal\n \t}\n \treturn;\n 
}\n-\n-# return the size of logfile of $node in bytes\n-sub get_log_size\n-{\n-\tmy ($node) = @_;\n-\n-\treturn (stat $node->logfile)[7];\n-}\n-\n-# find $pat in logfile of $node after $off-th byte\n-sub find_in_log\n-{\n-\tmy ($node, $pat, $off) = @_;\n-\n-\t$off = 0 unless defined $off;\n-\tmy $log = TestLib::slurp_file($node->logfile);\n-\treturn 0 if (length($log) <= $off);\n-\n-\t$log = substr($log, $off);\n-\n-\treturn $log =~ m/$pat/;\n-}\n-- \n2.27.0",
"msg_date": "Thu, 07 Jan 2021 16:32:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "Masao-san: Are you intending to act as committer for these? Since the\nbug is mine I can look into it, but since you already did all the\nreviewing work, I'm good with you giving it the final push.\n\n0001 looks good to me; let's get that one committed quickly so that we\ncan focus on the interesting stuff. While the implementation of\nfind_in_log is quite dumb (not this patch's fault), it seems sufficient\nto deal with small log files. We can improve the implementation later,\nif needed, but we have to get the API right on the first try.\n\n0003: The fix looks good to me. I verified that the test fails without\nthe fix, and it passes with the fix.\n\n\nThe test added in 0002 is a bit optimistic regarding timing, as well as\npotentially slow; it loops 1000 times and sleeps 100 milliseconds each\ntime. In a very slow server (valgrind or clobber_cache animals) this\ncould not be sufficient time, while on fast servers it may end up\nwaiting longer than needed. Maybe we can do something like this:\n\nfor (my $i = 0 ; $i < 1000; $i++)\n{\n\tmy $current_log_size = determine_current_log_size()\n\n\tif ($node_standby_3->find_in_log(\n\t\t\t\"requested WAL segment [0-9A-F]+ has already been removed\",\n\t\t\t$logstart))\n\t{\n\t\tlast;\n\t}\n\telsif ($node_standby_3->find_in_log(\n\t\t\t\"End of WAL reached on timeline\",\n\t\t\t $logstart))\n\t{\n\t\t$success = 1;\n\t\tlast;\n\t}\n\t$logstart = $current_log_size;\n\n\twhile (determine_current_log_size() == current_log_size)\n\t{\n\t\tusleep(10_000);\n\t\t# with a retry count?\n\t}\n}\n\nWith test patch, make check PROVE_FLAGS=\"--timer\" PROVE_TESTS=t/001_stream_rep.pl\n\nok 6386 ms ( 0.00 usr 0.00 sys + 1.14 cusr 0.93 csys = 2.07 CPU)\nok 6352 ms ( 0.00 usr 0.00 sys + 1.10 cusr 0.94 csys = 2.04 CPU)\nok 6255 ms ( 0.01 usr 0.00 sys + 0.99 cusr 0.97 csys = 1.97 CPU)\n\nwithout test patch:\n\nok 4954 ms ( 0.00 usr 0.00 sys + 0.71 cusr 0.64 csys = 1.35 CPU)\nok 5033 ms ( 0.01 usr 0.00 sys + 0.71 cusr 0.73 
csys = 1.45 CPU)\nok 4991 ms ( 0.01 usr 0.00 sys + 0.73 cusr 0.59 csys = 1.33 CPU)\n\n-- \n�lvaro Herrera\n\n\n",
"msg_date": "Fri, 8 Jan 2021 17:08:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "On Sat, Jan 9, 2021 at 5:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Masao-san: Are you intending to act as committer for these? Since the\n> bug is mine I can look into it, but since you already did all the\n> reviewing work, I'm good with you giving it the final push.\n\nThanks! I'm thinking to push the patch.\n\n\n> 0001 looks good to me; let's get that one committed quickly so that we\n> can focus on the interesting stuff. While the implementation of\n> find_in_log is quite dumb (not this patch's fault), it seems sufficient\n> to deal with small log files. We can improve the implementation later,\n> if needed, but we have to get the API right on the first try.\n>\n> 0003: The fix looks good to me. I verified that the test fails without\n> the fix, and it passes with the fix.\n\nYes.\n\n\n> The test added in 0002 is a bit optimistic regarding timing, as well as\n> potentially slow; it loops 1000 times and sleeps 100 milliseconds each\n> time. In a very slow server (valgrind or clobber_cache animals) this\n> could not be sufficient time, while on fast servers it may end up\n> waiting longer than needed. Maybe we can do something like this:\n\nOn second thought, I think that the regression test should be in\n004_timeline_switch.pl instead of 001_stream_rep.pl because it's\nthe test about timeline switch. Also I'm thinking that it's better to\ntest the timeline switch by checking whether some data is successfully\nreplicatead like the existing regression test for timeline switch in\n004_timeline_switch.pl does, instead of finding the specific message\nin the log file. I attached the POC patch. Thought?\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Tue, 12 Jan 2021 10:47:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Tue, 12 Jan 2021 10:47:21 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Sat, Jan 9, 2021 at 5:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Masao-san: Are you intending to act as committer for these? Since the\n> > bug is mine I can look into it, but since you already did all the\n> > reviewing work, I'm good with you giving it the final push.\n> \n> Thanks! I'm thinking to push the patch.\n> \n> \n> > 0001 looks good to me; let's get that one committed quickly so that we\n> > can focus on the interesting stuff. While the implementation of\n> > find_in_log is quite dumb (not this patch's fault), it seems sufficient\n> > to deal with small log files. We can improve the implementation later,\n> > if needed, but we have to get the API right on the first try.\n> >\n> > 0003: The fix looks good to me. I verified that the test fails without\n> > the fix, and it passes with the fix.\n> \n> Yes.\n> \n> \n> > The test added in 0002 is a bit optimistic regarding timing, as well as\n> > potentially slow; it loops 1000 times and sleeps 100 milliseconds each\n> > time. In a very slow server (valgrind or clobber_cache animals) this\n> > could not be sufficient time, while on fast servers it may end up\n> > waiting longer than needed. Maybe we can do something like this:\n> \n> On second thought, I think that the regression test should be in\n> 004_timeline_switch.pl instead of 001_stream_rep.pl because it's\n\nAgreed. It's definitely the right place.\n\n> the test about timeline switch. Also I'm thinking that it's better to\n> test the timeline switch by checking whether some data is successfully\n> replicatead like the existing regression test for timeline switch in\n> 004_timeline_switch.pl does, instead of finding the specific message\n> in the log file. I attached the POC patch. Thought?\n\nIt's practically a check on this issue, and looks better. 
The 180s\ntimeout in the failure case seems a bit annoying but it's the way all\nof this kind of test follow.\n\nThe last check on table content is actually useless but it might make\nsense to confirm that replication is actually working. However, I\ndon't think the test don't need to insert as many as 1000 tuples. Just\na single tuple would suffice.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Jan 2021 10:48:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 10:48 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 12 Jan 2021 10:47:21 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in\n> > On Sat, Jan 9, 2021 at 5:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > Masao-san: Are you intending to act as committer for these? Since the\n> > > bug is mine I can look into it, but since you already did all the\n> > > reviewing work, I'm good with you giving it the final push.\n> >\n> > Thanks! I'm thinking to push the patch.\n> >\n> >\n> > > 0001 looks good to me; let's get that one committed quickly so that we\n> > > can focus on the interesting stuff. While the implementation of\n> > > find_in_log is quite dumb (not this patch's fault), it seems sufficient\n> > > to deal with small log files. We can improve the implementation later,\n> > > if needed, but we have to get the API right on the first try.\n> > >\n> > > 0003: The fix looks good to me. I verified that the test fails without\n> > > the fix, and it passes with the fix.\n> >\n> > Yes.\n> >\n> >\n> > > The test added in 0002 is a bit optimistic regarding timing, as well as\n> > > potentially slow; it loops 1000 times and sleeps 100 milliseconds each\n> > > time. In a very slow server (valgrind or clobber_cache animals) this\n> > > could not be sufficient time, while on fast servers it may end up\n> > > waiting longer than needed. Maybe we can do something like this:\n> >\n> > On second thought, I think that the regression test should be in\n> > 004_timeline_switch.pl instead of 001_stream_rep.pl because it's\n>\n> Agreed. It's definitely the right place.\n>\n> > the test about timeline switch. Also I'm thinking that it's better to\n> > test the timeline switch by checking whether some data is successfully\n> > replicatead like the existing regression test for timeline switch in\n> > 004_timeline_switch.pl does, instead of finding the specific message\n> > in the log file. 
I attached the POC patch. Thought?\n>\n> It's practically a check on this issue, and looks better. The 180s\n> timeout in the failure case seems a bit annoying but it's the way all\n> of this kind of test follow.\n\nYes.\n\n>\n> The last check on table content is actually useless but it might make\n> sense to confirm that replication is actually working. However, I\n> don't think the test don't need to insert as many as 1000 tuples. Just\n> a single tuple would suffice.\n\nThanks for the review!\nI'm ok with this change (i.e., insert only single row).\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 13 Jan 2021 12:08:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "On 2021-Jan-13, Fujii Masao wrote:\n\n> Thanks for the review!\n> I'm ok with this change (i.e., insert only single row).\n> Attached is the updated version of the patch.\n\nLooks good to me, thanks!\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Wed, 13 Jan 2021 16:51:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Wed, 13 Jan 2021 16:51:32 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2021-Jan-13, Fujii Masao wrote:\n> \n> > Thanks for the review!\n> > I'm ok with this change (i.e., insert only single row).\n> > Attached is the updated version of the patch.\n> \n> Looks good to me, thanks!\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Jan 2021 10:10:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 10:10 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 13 Jan 2021 16:51:32 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2021-Jan-13, Fujii Masao wrote:\n> >\n> > > Thanks for the review!\n> > > I'm ok with this change (i.e., insert only single row).\n> > > Attached is the updated version of the patch.\n> >\n> > Looks good to me, thanks!\n>\n> +1\n\nThanks Alvaro and Horiguchi for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 14 Jan 2021 12:34:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A failure of standby to follow timeline switch"
},
{
"msg_contents": "At Thu, 14 Jan 2021 12:34:01 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Thu, Jan 14, 2021 at 10:10 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 13 Jan 2021 16:51:32 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > On 2021-Jan-13, Fujii Masao wrote:\n> > >\n> > > > Thanks for the review!\n> > > > I'm ok with this change (i.e., insert only single row).\n> > > > Attached is the updated version of the patch.\n> > >\n> > > Looks good to me, thanks!\n> >\n> > +1\n> \n> Thanks Alvaro and Horiguchi for the review! I pushed the patch.\n\nThanks for commiting this fix!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Jan 2021 13:32:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A failure of standby to follow timeline switch"
}
] |
[
{
"msg_contents": "Hey,\n\nWould it be accurate to add the following sentence to the INSERT\ndocumentation under \"Outputs\"?\n\n\"...inserted or updated by the command.\" For a multiple-values insertion,\nthe order of output rows will match the order that rows are presented in\nthe values or query clause.\n\nhttps://www.postgresql.org/docs/current/sql-insert.html\n\nDavid J.\n\nHey,Would it be accurate to add the following sentence to the INSERT documentation under \"Outputs\"?\"...inserted or updated by the command.\" For a multiple-values insertion, the order of output rows will match the order that rows are presented in the values or query clause.https://www.postgresql.org/docs/current/sql-insert.htmlDavid J.",
"msg_date": "Wed, 9 Dec 2020 08:40:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Wed, Dec 9, 2020 at 9:10 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> Hey,\n>\n> Would it be accurate to add the following sentence to the INSERT documentation under \"Outputs\"?\n>\n> \"...inserted or updated by the command.\" For a multiple-values insertion, the order of output rows will match the order that rows are presented in the values or query clause.\n\nPostgres's current implementation may be doing so, but I don't think\nthat can be guaranteed in possible implementations. I don't think\nrestricting choice of implementation to guarantee that is a good idea\neither.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 10 Dec 2020 19:01:28 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Thursday, December 10, 2020, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Wed, Dec 9, 2020 at 9:10 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > Hey,\n> >\n> > Would it be accurate to add the following sentence to the INSERT\n> documentation under \"Outputs\"?\n> >\n> > \"...inserted or updated by the command.\" For a multiple-values\n> insertion, the order of output rows will match the order that rows are\n> presented in the values or query clause.\n>\n> Postgres's current implementation may be doing so, but I don't think\n> that can be guaranteed in possible implementations. I don't think\n> restricting choice of implementation to guarantee that is a good idea\n> either.\n>\n>\nYeah, the ongoing work on parallel inserts would seem to be an issue. We\nshould probably document that though. And maybe as part of parallel\ninserts patch provide a user-specifiable way to ask for such a guarantee if\nneeded. ‘Insert returning ordered”\n\nDavid J.\n\nOn Thursday, December 10, 2020, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Wed, Dec 9, 2020 at 9:10 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> Hey,\n>\n> Would it be accurate to add the following sentence to the INSERT documentation under \"Outputs\"?\n>\n> \"...inserted or updated by the command.\" For a multiple-values insertion, the order of output rows will match the order that rows are presented in the values or query clause.\n\nPostgres's current implementation may be doing so, but I don't think\nthat can be guaranteed in possible implementations. I don't think\nrestricting choice of implementation to guarantee that is a good idea\neither.\nYeah, the ongoing work on parallel inserts would seem to be an issue. We should probably document that though. And maybe as part of parallel inserts patch provide a user-specifiable way to ask for such a guarantee if needed. ‘Insert returning ordered”David J.",
"msg_date": "Thu, 10 Dec 2020 07:19:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 7:49 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Thursday, December 10, 2020, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> On Wed, Dec 9, 2020 at 9:10 PM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > Hey,\n>> >\n>> > Would it be accurate to add the following sentence to the INSERT documentation under \"Outputs\"?\n>> >\n>> > \"...inserted or updated by the command.\" For a multiple-values insertion, the order of output rows will match the order that rows are presented in the values or query clause.\n>>\n>> Postgres's current implementation may be doing so, but I don't think\n>> that can be guaranteed in possible implementations. I don't think\n>> restricting choice of implementation to guarantee that is a good idea\n>> either.\n>>\n>\n> Yeah, the ongoing work on parallel inserts would seem to be an issue. We should probably document that though. And maybe as part of parallel inserts patch provide a user-specifiable way to ask for such a guarantee if needed. ‘Insert returning ordered”\n\nI am curious about the usecase which needs that guarantee? Don't you\nhave a column on which you can ORDER BY so that it returns the same\norder as INSERT?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 11 Dec 2020 18:54:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 6:24 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Thu, Dec 10, 2020 at 7:49 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> > Yeah, the ongoing work on parallel inserts would seem to be an issue.\n> We should probably document that though. And maybe as part of parallel\n> inserts patch provide a user-specifiable way to ask for such a guarantee if\n> needed. ‘Insert returning ordered”\n>\n> I am curious about the usecase which needs that guarantee? Don't you\n> have a column on which you can ORDER BY so that it returns the same\n> order as INSERT?\n>\n\nThis comes up periodically in the context of auto-generated keys being\nreturned - specifically on the JDBC project list (maybe elsewhere...). If\none adds 15 VALUES entries to an insert and then sends them in bulk to the\nserver it would be helpful if the generated keys could be matched up\none-to-one with the keyless objects in the client. Basically \"pipelining\"\nthe client and server.\n\nDavid J.",
"msg_date": "Fri, 11 Dec 2020 15:31:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Friday, December 11, 2020, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Dec 11, 2020 at 6:24 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> On Thu, Dec 10, 2020 at 7:49 PM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>>\n>> > Yeah, the ongoing work on parallel inserts would seem to be an issue. We should probably document that though. And maybe as part of parallel inserts patch provide a user-specifiable way to ask for such a guarantee if needed. ‘Insert returning ordered”\n>>\n>> I am curious about the usecase which needs that guarantee? Don't you\n>> have a column on which you can ORDER BY so that it returns the same\n>> order as INSERT?\n>\n>\n> This comes up periodically in the context of auto-generated keys being returned - specifically on the JDBC project list (maybe elsewhere...). If one adds 15 VALUES entries to an insert and then sends them in bulk to the server it would be helpful if the generated keys could be matched up one-to-one with the keyless objects in the client. Basically \"pipelining\" the client and server.\n\nThat’s a great use case. It’s not so much about ordering, per se, but\nabout identity.\n\nCertainly almost every ORM, and maybe even other forms of application\ncode, need to be able to associate the serial column value returned\nwith what it inserted. I'd expect something like that (whether by\nordering explicitly or by providing some kind of mapping between\nindexes in the statement data and the inserted/returned row values).\n\nJames\n\n\n",
"msg_date": "Sat, 12 Dec 2020 09:02:02 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 7:02 AM James Coleman <jtc331@gmail.com> wrote:\n\n>\n> Certainly almost every ORM, and maybe even other forms of application\n> code, need to be able to associate the serial column value returned\n> with what it inserted.\n>\n\nYet most ORM would perform single inserts at a time, not in bulk, making\nsuch a feature irrelevant to them.\n\nI don't think having such a feature is all that important personally, but\nthe question comes every so often and it would be nice to be able to point\nat the documentation for a definitive answer - not just one inferred from a\nlack of documentation - especially since the observed behavior is that\norder is preserved today.\n\nDavid J.",
"msg_date": "Sat, 12 Dec 2020 08:11:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 10:11 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sat, Dec 12, 2020 at 7:02 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>>\n>> Certainly almost every ORM, and maybe even other forms of application\n>> code, need to be able to associate the serial column value returned\n>> with what it inserted.\n>\n>\n> Yet most ORM would perform single inserts at a time, not in bulk, making such a feature irrelevant to them.\n\nI think that's a pretty hasty generalization. It's the majority of use\ncases in an ORM, sure, but plenty of ORMs (and libraries or\napplications using them) support inserting batches where performance\nrequires it. Rails/ActiveRecord is actually integrating that feature\ninto core (though many Ruby libraries already add that support, as\ndoes, for example, the application I spend the majority of time\nworking on).\n\nJames\n\n\n",
"msg_date": "Sat, 12 Dec 2020 20:14:42 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 8:41 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sat, Dec 12, 2020 at 7:02 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>>\n>> Certainly almost every ORM, and maybe even other forms of application\n>> code, need to be able to associate the serial column value returned\n>> with what it inserted.\n>\n>\n> Yet most ORM would perform single inserts at a time, not in bulk, making such a feature irrelevant to them.\n>\n> I don't think having such a feature is all that important personally, but the question comes every so often and it would be nice to be able to point at the documentation for a definitive answer - not just one inferred from a lack of documentation - especially since the observed behavior is that order is preserved today.\n>\n\nThat's a valid usecase, but adding such a guarantee in documentation\nwould restrict implementation. So at best we can say \"no order is\nguaranteed\". But we write what's guaranteed. Anything not written in\nthe documents is not guaranteed.\n\nThere are ways to get it working, but let's not go into those details\nin this thread.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 14 Dec 2020 19:39:36 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 7:09 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> But we write what's guaranteed. Anything not written in\n> the documents is not guaranteed.\n>\n\nIn the case of LIMIT we go to great lengths to write what isn't\nguaranteed. I suggest that this is similar enough in nature to warrant the\nsame emphasis.\n\n\"Thus, using different LIMIT/OFFSET values to select different subsets of a\nquery result will give inconsistent results unless you enforce a\npredictable result ordering with ORDER BY. This is not a bug; it is an\ninherent consequence of the fact that SQL does not promise to deliver the\nresults of a query in any particular order unless ORDER BY is used to\nconstrain the order.\n\nIt is even possible for repeated executions of the same LIMIT query to\nreturn different subsets of the rows of a table, if there is not an ORDER\nBY to enforce selection of a deterministic subset. Again, this is not a\nbug; determinism of the results is simply not guaranteed in such a case.\"\n\nI'd go so far as to say that it's more important here since the observed\nbehavior is that things are ordered, and expected to be ordered, while with\nlimit the non-determinism seems more obvious.\n\nDavid J.",
"msg_date": "Wed, 11 Aug 2021 09:08:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Insert Documentation - Returning Clause and Order"
}
] |
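The matching problem raised in the thread above (pairing server-generated keys with the client-side objects that produced them, without relying on INSERT ... RETURNING preserving VALUES order) can be sketched client-side. This is an illustrative Python sketch; the simulated out-of-order server response and all names here are assumptions for illustration, not from the thread:

```python
# Sketch: instead of zipping returned keys positionally against the
# inserted objects, include a client-generated ordinal with each row
# and match returned (ordinal, key) pairs by ordinal. The pairing then
# survives any server-side reordering (e.g. under parallel insert).

def match_generated_keys(objects, returned_rows):
    """objects: list of dicts, in insertion order.
    returned_rows: (ordinal, generated_key) pairs in *arbitrary* order."""
    by_ordinal = {ordinal: key for ordinal, key in returned_rows}
    for i, obj in enumerate(objects):
        obj["id"] = by_ordinal[i]
    return objects

objects = [{"name": "a"}, {"name": "b"}, {"name": "c"}]
# Simulate RETURNING rows coming back in a different order.
returned = [(2, 103), (0, 101), (1, 102)]
matched = match_generated_keys(objects, returned)
# matched[0]["id"] == 101, matched[2]["id"] == 103
```

This is the same idea as adding an explicit ORDER BY column: the correlation is carried in the data rather than inferred from row order.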
[
{
"msg_contents": "I thought this was a good idea, but didn't hear back when I raised it before.\n\nFailing to preserve access method is arguably a bug, reminiscent of CREATE\nSTATISTICS and 5564c1181. But maybe it's not important to backpatch a fix in\nthis case, since access methods are still evolving.\n\nhttps://www.postgresql.org/message-id/20190818193533.GL11185@telsasoft.com\nOn Sun, Aug 18, 2019 at 02:35:33PM -0500, Justin Pryzby wrote:\n> . What do you think about pg_restore --no-tableam; similar to\n> --no-tablespaces, it would allow restoring a table to a different AM:\n> PGOPTIONS='-c default_table_access_method=zedstore' pg_restore --no-tableam ./pg_dump.dat -d postgres\n> Otherwise, the dump says \"SET default_table_access_method=heap\", which\n> overrides any value from PGOPTIONS and precludes restoring to new AM.\n...\n> . it'd be nice if there was an ALTER TABLE SET ACCESS METHOD, to allow\n> migrating data. Otherwise I think the alternative is:\n> begin; lock t;\n> CREATE TABLE new_t LIKE (t INCLUDING ALL) USING (zedstore);\n> INSERT INTO new_t SELECT * FROM t;\n> for index; do CREATE INDEX...; done\n> DROP t; RENAME new_t (and all its indices). attach/inherit, etc.\n> commit;\n>\n> . Speaking of which, I think LIKE needs a new option for ACCESS METHOD, which\n> is otherwise lost.",
"msg_date": "Wed, 9 Dec 2020 14:13:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "create table like: ACCESS METHOD"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 02:13:29PM -0600, Justin Pryzby wrote:\n> I thought this was a good idea, but didn't hear back when I raised it before.\n> \n> Failing to preserve access method is arguably a bug, reminiscent of CREATE\n> STATISTICS and 5564c1181. But maybe it's not important to backpatch a fix in\n> this case, since access methods are still evolving.\n\nInteresting. Access methods for tables are released for more than one\nyear now, so my take about a backpatch is that this boat has already\nsailed. This may give a reason to actually not introduce this\nfeature.\n\n+ CREATE_TABLE_LIKE_ACCESSMETHOD = 1 << 0,\nNit: wouldn't this be better as ACCESS_METHOD?\n\n -- fail, as partitioned tables don't allow NO INHERIT constraints\n-CREATE TABLE noinh_con_copy1_parted (LIKE noinh_con_copy INCLUDING ALL)\n+CREATE TABLE noinh_con_copy1_parted (LIKE noinh_con_copy INCLUDING ALL EXCLUDING ACCESS METHOD)\n PARTITION BY LIST (a);\nThis diff means that you are introducing an incompatible change by\nforcing any application using CREATE TABLE LIKE for a partitioned\ntable to exclude access methods. This is not acceptable, and it may\nbe better to just ignore this clause instead in this context.\n\nThis patch should have more tests. Something could be added in\ncreate_am.sql where there is a fake heap2 as table AM.\n\n+ <para>\n+ The table's access method will be copied. By default, the\n+ <literal>default_table_access_method</literal> is used.\n+ </para>\nSecond sentence sounds a bit strange by combining \"the\" and a GUC\nname. I would just write \"Default is default_table_a_m\".\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 15:41:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 03:41:46PM +0900, Michael Paquier wrote:\n> On Wed, Dec 09, 2020 at 02:13:29PM -0600, Justin Pryzby wrote:\n> > I thought this was a good idea, but didn't hear back when I raised it before.\n> > \n> > Failing to preserve access method is arguably a bug, reminiscent of CREATE\n> > STATISTICS and 5564c1181. But maybe it's not important to backpatch a fix in\n> > this case, since access methods are still evolving.\n> \n> Interesting. Access methods for tables are released for more than one\n> year now, so my take about a backpatch is that this boat has already\n> sailed. This may give a reason to actually not introduce this\n> feature.\n\nAre you saying that since v12/13 didn't preserve the access method, it might be\npreferred to never do it? I think it's reasonable to not change v12/13 but\nthe behavior seems like an omission going forward. It's not so important right\nnow, since AMs aren't widely used.\n\nThis might be important for a few cases I can think of easily:\n\nIf a readonly AM doesn't support DDL, and a table needs to be rebuilt, we'd\nhandle that by creating a new table LIKE the existing table, preserving its AM,\nand then INSERT into it. Like for column type promotion. That's much better\nthan querying amname FROM pg_class JOIN relam.\n\nALTER TABLE..ATTACH PARTITION requires a less strong lock than CREATE\nTABLE..PARTITION OF, so it's nice to be able to CREATE TABLE LIKE.\n\nTo use an alternate AM for historic data, we'd CREATE TABLE LIKE an existing,\npopulated table before inserting into it. This would support re-creating on a\nnew AM, or re-creating on the same AM, say, to get rid of dropped columns, or\nto re-arrange columns. 
\n\n> -- fail, as partitioned tables don't allow NO INHERIT constraints\n> -CREATE TABLE noinh_con_copy1_parted (LIKE noinh_con_copy INCLUDING ALL)\n> +CREATE TABLE noinh_con_copy1_parted (LIKE noinh_con_copy INCLUDING ALL EXCLUDING ACCESS METHOD)\n> PARTITION BY LIST (a);\n> This diff means that you are introducing an incompatible change by\n> forcing any application using CREATE TABLE LIKE for a partitioned\n> table to exclude access methods. This is not acceptable, and it may\n> be better to just ignore this clause instead in this context.\n\nOk. This means that \nCREATE TABLE (LIKE x INCLUDING ACCESS METHOD) PARTITION BY ...\nsilently ignores the INCLUDING AM. Is that ok ? I think the alternative is\nfor INCLUDING to be \"ternary\" options, defaulting to UNSET=0, when it's ok to\nignore options in contexts where they're not useful.\nMaybe we'd need to specially handle INCLUDING ALL, to make options\n\"soft\"/implied rather than explicit.\n\n> This patch should have more tests. Something could be added in\n> create_am.sql where there is a fake heap2 as table AM.\n\nYes, I had already done that locally.\n\n-- \nJustin",
"msg_date": "Tue, 29 Dec 2020 17:08:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Tue, 29 Dec 2020 at 23:08, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Dec 25, 2020 at 03:41:46PM +0900, Michael Paquier wrote:\n> > On Wed, Dec 09, 2020 at 02:13:29PM -0600, Justin Pryzby wrote:\n> > > I thought this was a good idea, but didn't hear back when I raised it before.\n> > >\n> > > Failing to preserve access method is arguably a bug, reminiscent of CREATE\n> > > STATISTICS and 5564c1181. But maybe it's not important to backpatch a fix in\n> > > this case, since access methods are still evolving.\n> >\n> > Interesting. Access methods for tables are released for more than one\n> > year now, so my take about a backpatch is that this boat has already\n> > sailed. This may give a reason to actually not introduce this\n> > feature.\n>\n> Are you saying that since v12/13 didn't preserve the access method, it might be\n> preferred to never do it ? I think it's reasonable to not change v12/13 but\n> the behavior seems like an omission going forward. It's not so important right\n> now, since AMs aren't widely used.\n\nOmitting copying the AM seems like a bug during\n CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL);\nBut this does allow you to specify the TableAM by using\ndefault_table_access_method, and to use\n CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL) USING (heapdup);\nif you wish to set the AM explicitly, so I don't see this as needing backpatch.\n\nWhich means that the emphasis for the earlier functionality was\ntowards one \"preferred AM\" rather than using multiple AMs at same\ntime. Allowing this change in later releases makes sense.\n\nPlease make sure this is marked as an incompatibility in the release notes.\n\n> > This patch should have more tests. 
Something could be added in\n> > create_am.sql where there is a fake heap2 as table AM.\n>\n> Yes, I had already done that locally.\n\nThere are no tests for the new functionality, please could you add some?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Dec 2020 12:33:56 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 12:33:56PM +0000, Simon Riggs wrote:\n> There are no tests for the new functionality, please could you add some?\n\nDid you look at the most recent patch?\n\n+CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n+CREATE TABLE likeam() USING heapdup;\n+CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL); \n\nAlso, I just realized that Dilip's toast compression patch adds \"INCLUDING\nCOMPRESSION\", which is stored in pg_am. That's an implementation detail of\nthat patch, but it's not intuitive that \"including access method\" wouldn't\ninclude the compression stored there. So I think this should use \"INCLUDING\nTABLE ACCESS METHOD\" not just ACCESS METHOD. \n\n-- \nJustin",
"msg_date": "Tue, 19 Jan 2021 15:03:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On 1/19/21 4:03 PM, Justin Pryzby wrote:\n> On Wed, Dec 30, 2020 at 12:33:56PM +0000, Simon Riggs wrote:\n>> There are no tests for the new functionality, please could you add some?\n> \n> Did you look at the most recent patch?\n> \n> +CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n> +CREATE TABLE likeam() USING heapdup;\n> +CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL);\n> \n> Also, I just realized that Dilip's toast compression patch adds \"INCLUDING\n> COMPRESSION\", which is stored in pg_am. That's an implementation detail of\n> that patch, but it's not intuitive that \"including access method\" wouldn't\n> include the compression stored there. So I think this should use \"INCLUDING\n> TABLE ACCESS METHOD\" not just ACCESS METHOD.\n\nSimon, do you know when you'll have a chance to review the updated patch \nin [1]?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/20210119210331.GN8560%40telsasoft.com\n\n\n",
"msg_date": "Fri, 19 Mar 2021 11:52:37 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Tue, Jan 19, 2021 at 03:03:31PM -0600, Justin Pryzby wrote:\n> On Wed, Dec 30, 2020 at 12:33:56PM +0000, Simon Riggs wrote:\n> > There are no tests for the new functionality, please could you add some?\n> \n> Did you look at the most recent patch?\n> \n> +CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n> +CREATE TABLE likeam() USING heapdup;\n> +CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL); \n> \n> Also, I just realized that Dilip's toast compression patch adds \"INCLUDING\n> COMPRESSION\", which is stored in pg_am. That's an implementation detail of\n> that patch, but it's not intuitive that \"including access method\" wouldn't\n> include the compression stored there. So I think this should use \"INCLUDING\n> TABLE ACCESS METHOD\" not just ACCESS METHOD. \n\nSince the TOAST patch ended up not using access methods after all, I renamed\nthis back to \"like ACCESS METHOD\" (without table).\n\nFor now, I left TableLikeOption un-alphabetized.\n\n-- \nJustin",
"msg_date": "Mon, 22 Mar 2021 19:39:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "rebased and alphabetized",
"msg_date": "Tue, 1 Jun 2021 14:10:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Tue, Jun 01, 2021 at 02:10:45PM -0500, Justin Pryzby wrote:\n> rebased and alphabetized\n\n+ /* ACCESS METHOD doesn't apply and isn't copied for partitioned tables */\n+ if ((table_like_clause->options & CREATE_TABLE_LIKE_ACCESS_METHOD) != 0 &&\n+ !cxt->ispartitioned)\n+ cxt->accessMethod = get_am_name(relation->rd_rel->relam);\nI was thinking about an ERROR here, but all the other options do the\nwork when specified only if required, so that's fine. We should have\na test with a partitioned table and the clause specified, though.\n\n+ <para>\n+ The table's access method will be copied. By default, the\n+ <literal>default_table_access_method</literal> is used.\n+ </para>\nWhy is there any need to mention default_table_access_method? This\njust inherits the AM from the source table, which has nothing to do\nwith the default directly.\n\n+CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n+CREATE TABLE likeam() USING heapdup;\n+CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL);\nRather than creating a custom AM in this test path, I would be\ntempted to move that to create_am.sql.\n--\nMichael",
"msg_date": "Fri, 27 Aug 2021 14:38:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On 3/23/21 1:39 AM, Justin Pryzby wrote:\n> On Tue, Jan 19, 2021 at 03:03:31PM -0600, Justin Pryzby wrote:\n>> On Wed, Dec 30, 2020 at 12:33:56PM +0000, Simon Riggs wrote:\n>>> There are no tests for the new functionality, please could you add some?\n>>\n>> Did you look at the most recent patch?\n>>\n>> +CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n>> +CREATE TABLE likeam() USING heapdup;\n>> +CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL); \n\nIt seems like this should error to me:\n\nCREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\nCREATE TABLE likeam1() USING heap;\nCREATE TABLE likeam2() USING heapdup;\nCREATE TABLE likeamlike(\n LIKE likeam1 INCLUDING ACCESS METHOD,\n LIKE likeam2 INCLUDING ACCESS METHOD\n);\n\nAt the very least, the documentation should say that the last one wins.\n-- \nVik Fearing\n\n\n",
"msg_date": "Fri, 27 Aug 2021 12:37:59 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Fri, Aug 27, 2021 at 12:37:59PM +0200, Vik Fearing wrote:\n> It seems like this should error to me:\n> \n> CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n> CREATE TABLE likeam1() USING heap;\n> CREATE TABLE likeam2() USING heapdup;\n> CREATE TABLE likeamlike(\n> LIKE likeam1 INCLUDING ACCESS METHOD,\n> LIKE likeam2 INCLUDING ACCESS METHOD\n> );\n> \n> At the very least, the documentation should say that the last one wins.\n\nAn error may be annoying once you do an INCLUDING ALL with more than\none relation, no? I'd be fine with just documenting that the last one\nwins.\n--\nMichael",
"msg_date": "Mon, 30 Aug 2021 13:58:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Fri, Aug 27, 2021 at 02:38:43PM +0900, Michael Paquier wrote:\n> +CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n> +CREATE TABLE likeam() USING heapdup;\n> +CREATE TABLE likeamlike(LIKE likeam INCLUDING ALL);\n> Rather than creating a custom AM in this test path, I would be\n> tempted to move that to create_am.sql.\n\n+ /* ACCESS METHOD doesn't apply and isn't copied for partitioned tables */\n+ if ((table_like_clause->options & CREATE_TABLE_LIKE_ACCESS_METHOD) != 0 &&\n+ !cxt->ispartitioned)\n+ cxt->accessMethod = get_am_name(relation->rd_rel->relam);\n\nIf the new table is partitioned, this would work. Now I think that we\nshould also add here a (relation->rd_rel->relkind == RELKIND_RELATION)\nto make sure that we only copy an access method if the original\nrelation is a table. Note that the original relation could be as well\na view, a foreign table or a composite type.\n\n@@ -349,6 +351,9 @@ transformCreateStmt(CreateStmt *stmt, const char *queryString)\n[...]\n+ if (cxt.accessMethod != NULL)\n+ stmt->accessMethod = cxt.accessMethod;\n\nThis bit is something I have been chewing on a bit. It means that if\nwe find out an AM to copy from any of the LIKE clauses, we would\nblindly overwrite the AM defined in an existing CreateStmt. We could\nalso argue in favor of keeping the original AM defined by USING from\nthe query rather than having an error. This means to check that\nstmt->accessMethod is overwritten only if NULL at this point. Anyway,\nthe patch is wrong with this implementation.\n\nThis makes me actually wonder if this patch is really a good idea at\nthe end. The interactions with USING and LIKE would be confusing to\nthe end-user one way or the other. The argument of upthread regarding\nINCLUDING ALL or INCLUDING ACCESS METHOD with multiple original\nrelations also goes in this sense. 
If we want to move forward here, I\nthink that we should really be careful and have a clear definition\nbehind all those corner cases. The patch fails this point for now.\n--\nMichael",
"msg_date": "Mon, 30 Aug 2021 14:56:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On 27.08.21 12:37, Vik Fearing wrote:\n> It seems like this should error to me:\n> \n> CREATE ACCESS METHOD heapdup TYPE TABLE HANDLER heap_tableam_handler;\n> CREATE TABLE likeam1() USING heap;\n> CREATE TABLE likeam2() USING heapdup;\n> CREATE TABLE likeamlike(\n> LIKE likeam1 INCLUDING ACCESS METHOD,\n> LIKE likeam2 INCLUDING ACCESS METHOD\n> );\n> \n> At the very least, the documentation should say that the last one wins.\n\nHmm. The problem is that the LIKE clause is really a macro that expands \nto the column definitions of the other table. So there is, so far, no \nsuch thing as two LIKE clauses contradicting. Whereas the access \nmethod is a table property. So I don't think this syntax is the right \napproach for this feature.\n\nYou might think about something like\n\nCREATE TABLE t2 (...) USING (LIKE t1);\n\nAt least in terms of how the syntax should be structured.\n\n\n",
"msg_date": "Thu, 9 Sep 2021 14:30:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
},
{
"msg_contents": "On Thu, Sep 09, 2021 at 02:30:51PM +0200, Peter Eisentraut wrote:\n> Hmm. The problem is that the LIKE clause is really a macro that expands to\n> the column definitions of the other table. So there is, so far, no such as\n> thing as two LIKE clauses contradicting. Whereas the access method is a\n> table property. So I don't think this syntax is the right approach for this\n> feature.\n> \n> You might think about something like\n> \n> CREATE TABLE t2 (...) USING (LIKE t1);\n> \n> At least in terms of how the syntax should be structured.\n\nGood point. I have marked the patch as RwF.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 17:01:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: create table like: ACCESS METHOD"
}
] |
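The manual migration workaround described at the start of this thread (build a new table LIKE the old one on the target access method, copy the data, then swap names under a lock) can be sketched as generated DDL. The statement sequence follows Justin's outline; the helper function and its names are hypothetical, not from any patch:

```python
def am_migration_ddl(table, new_table, access_method, index_stmts=()):
    """Emit the DDL steps from the thread for moving a table onto a
    different table access method, since CREATE TABLE LIKE (at the
    time) did not preserve the source table's AM."""
    stmts = [
        "BEGIN;",
        f"LOCK TABLE {table};",
        f"CREATE TABLE {new_table} (LIKE {table} INCLUDING ALL) "
        f"USING {access_method};",
        f"INSERT INTO {new_table} SELECT * FROM {table};",
    ]
    stmts += list(index_stmts)  # any CREATE INDEX ... statements
    stmts += [
        f"DROP TABLE {table};",
        f"ALTER TABLE {new_table} RENAME TO {table};",
        "COMMIT;",
    ]
    return stmts

ddl = am_migration_ddl("t", "new_t", "zedstore")
# ddl[2] is the CREATE TABLE ... USING zedstore statement
```

The generated SQL is a sketch only: as the thread notes, a real migration also has to handle attach/inherit relationships and other dependencies, which this helper ignores.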
[
{
"msg_contents": "Hi all,\n\nThe remnant work that I have on my agenda to replace the remaining\nlow-level cryptohash calls of OpenSSL (SHAXXInit and such) by EVP is\nthe stuff related to SHA1, that gets used in two places: pgcrypto and\nuuid-ossp.\n\nFirst, I got to wonder if it would be better to support SHA1 directly\nin cryptohash{_openssl}.c, glue some code to pgcrypto to use EVP\ndiscreetly or just do nothing. Contrary to SHA256 and MD5 that are\nused for authentication or backup manifests, SHA1 has a limited use in\ncore, so I wanted first to just stick something in pgcrypto or just\nlet it go, hoping for the day where we'd remove those two modules but\nthat's not a call I think we can make now.\n\nBut then, my very-recent history with uuid-ossp has made me look at\nwhat kind of tricks we use to pull in SHA1 from pgcrypto to\nuuid-ossp, and I did not like much the shortcuts used in ./configure\nor uuid-ossp's Makefile to get those files when needed, depending on\nthe version of libuuid used (grep for UUID_EXTRA_OBJS for example).\nSo, I got to look at the second option of moving SHA1 directly into\nthe new cryptohash stuff, and quite liked the cleanup this gives.\n\nPlease find attached a set of two patches:\n- 0001 is a set of small adjustments for the existing code of\ncryptohashes: some cleanup for MD5 in uuid-ossp, and more importantly\none fix to call explicit_bzero() on the context data for the fallback\nimplementations. With the existing code, we may leave behind some\ncontext data. That could become a problem if somebody has access to\nthis area of the memory even when they should not be able to do so,\nsomething that should not happen, but I see no reason to not play it\nsafe and eliminate any traces. If there are no objections, I'd like\nto apply this part.\n- 0002 is the addition of sha1 in the cryptohash infra, that includes\nthe cleanup between uuid-ossp and pgcrypto. 
This makes any caller of\ncryptohash for SHA1 to use EVP when building with OpenSSL, or the\nfallback implementation. I have adapted the fallback implementation\nof SHA1 to have some symmetry with src/common/{md5.c,sha2.c}.\n\nI am adding this patch set to the next commit fest. Thanks for\nreading!\n--\nMichael",
"msg_date": "Thu, 10 Dec 2020 17:07:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Some more hackery around cryptohashes (some fixes + SHA1)"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 05:07:05PM +0900, Michael Paquier wrote:\n> - 0001 is a set of small adjustments for the existing code of\n> cryptohashes: some cleanup for MD5 in uuid-ossp, and more importantly\n> one fix to call explicit_bzero() on the context data for the fallback\n> implementations. With the existing code, we may leave behind some\n> context data. That could become a problem if somebody has access to\n> this area of the memory even when they should not be able to do so,\n> something that should not happen, but I see no reason to not play it\n> safe and eliminate any traces. If there are no objections, I'd like\n> to apply this part.\n\nThis is a nice cleanup, so I have moved ahead and applied it. A\nrebased version of the SHA1 business is attached.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 12:48:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some more hackery around cryptohashes (some fixes + SHA1)"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 12:48:15PM +0900, Michael Paquier wrote:\n> This is a nice cleanup, so I have moved ahead and applied it. A\n> rebased version of the SHA1 business is attached.\n\nRebased version attached to address the conflicts caused by 55fe26a.\nI have fixed three places in pgcrypto where this missed to issue an\nerror if one of the init/update/final cryptohash calls failed for\nSHA1.\n--\nMichael",
"msg_date": "Thu, 7 Jan 2021 12:41:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some more hackery around cryptohashes (some fixes + SHA1)"
},
{
"msg_contents": "On 07/01/2021 05:41, Michael Paquier wrote:\n> On Mon, Dec 14, 2020 at 12:48:15PM +0900, Michael Paquier wrote:\n>> This is a nice cleanup, so I have moved ahead and applied it. A\n>> rebased version of the SHA1 business is attached.\n> \n> Rebased version attached to address the conflicts caused by 55fe26a.\n> I have fixed three places in pgcrypto where this missed to issue an\n> error if one of the init/update/final cryptohash calls failed for\n> SHA1.\n\n> diff --git a/contrib/pgcrypto/sha1.h b/src/common/sha1_int.h\n> similarity index 72%\n> rename from contrib/pgcrypto/sha1.h\n> rename to src/common/sha1_int.h\n> index 4300694a34..40fbffcd0b 100644\n> --- a/contrib/pgcrypto/sha1.h\n> +++ b/src/common/sha1_int.h\n> @@ -1,3 +1,17 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * sha1_int.h\n> + *\t Internal headers for fallback implementation of SHA1\n> + *\n> + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + * IDENTIFICATION\n> + *\t\t src/common/sha1_int.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +\n> /*\tcontrib/pgcrypto/sha1.h */\n> /*\t $KAME: sha1.h,v 1.4 2000/02/22 14:01:18 itojun Exp $ */\n\nLeftover reference to \"contrib/pgcrypto/sha1.h\"\n\nOther than that, looks good to me.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 22 Jan 2021 15:50:04 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Some more hackery around cryptohashes (some fixes + SHA1)"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 03:50:04PM +0200, Heikki Linnakangas wrote:\n> Leftover reference to \"contrib/pgcrypto/sha1.h\"\n> \n> Other than that, looks good to me.\n\nThanks! I have looked at that again this morning, and this was still\none indentation short. I have also run more tests with different\ncombinations of --with-openssl and --with-uuid just to be sure, and\napplied it.\n--\nMichael",
"msg_date": "Sat, 23 Jan 2021 11:37:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some more hackery around cryptohashes (some fixes + SHA1)"
}
] |
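The thread above is about moving SHA1 under the cryptohash/EVP layer. As a side note for readers, the init/update/final flow that both the EVP path and the fallback implementation expose can be sketched in a few lines; this is an illustrative Python sketch using hashlib (not PostgreSQL's actual pg_cryptohash API), with a well-known SHA1 test vector:

```python
import hashlib

def sha1_hex(chunks):
    """Incremental SHA1 in the init/update/final style the thread discusses."""
    ctx = hashlib.sha1()       # init: allocate and initialize the context
    for chunk in chunks:
        ctx.update(chunk)      # update: may be called any number of times
    return ctx.hexdigest()     # final: produce the 20-byte digest as hex

# Standard test vector: SHA1("abc")
print(sha1_hex([b"a", b"bc"]))  # -> a9993e364706816aba3e25717850c26c9cd0d89d
```

Splitting the input across update() calls yields the same digest as hashing it in one shot, which is the property a streaming digest API relies on.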
[
{
"msg_contents": "Hi,\n\nWhile reading the documentation of pg_shmem_allocations, I noticed that the\noff column is described as such :\n\n\"The offset at which the allocation starts. NULL for anonymous allocations\nand unused memory.\"\n\nWhereas, the view returns a value for unused memory:\n\n[local]:5433 postgres@postgres=# SELECT * FROM pg_shmem_allocations WHERE\nname IS NULL;\n name | off | size | allocated_size\n------+-----------+---------+----------------\n ¤ | 178095232 | 1923968 | 1923968\n(1 row)\n\n From what I understand, the doc is wrong.\nAm I right ?\n\nBenoit\n\n[1] https://www.postgresql.org/docs/13/view-pg-shmem-allocations.html\n[2]\nhttps://www.postgresql.org/message-id/flat/20140504114417.GM12715%40awork2.anarazel.de\n(original thread)",
"msg_date": "Thu, 10 Dec 2020 11:07:47 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_shmem_allocations & documentation"
},
{
"msg_contents": "At Thu, 10 Dec 2020 11:07:47 +0100, Benoit Lobréau <benoit.lobreau@gmail.com> wrote in \n> Hi,\n> \n> While reading the documentation of pg_shmem_allocations, I noticed that the\n> off column is described as such :\n> \n> \"The offset at which the allocation starts. NULL for anonymous allocations\n> and unused memory.\"\n> \n> Whereas, the view returns a value for unused memory:\n> \n> [local]:5433 postgres@postgres=# SELECT * FROM pg_shmem_allocations WHERE\n> name IS NULL;\n> name | off | size | allocated_size\n> ------+-----------+---------+----------------\n> ¤ | 178095232 | 1923968 | 1923968\n> (1 row)\n> \n> From what I understand, the doc is wrong.\n> Am I right ?\n\nGood catch! I think you're right. It seems to me the conclusion in\nthe discussion is to expose the offset for free memory.\n\nAlthough we could just rip some words off, I'd like to propose instead\nto add an explanation why it is not exposed for anonymous allocations,\nlike the column allocated_size.\n\n> Benoit\n> \n> [1] https://www.postgresql.org/docs/13/view-pg-shmem-allocations.html\n> [2]\n> https://www.postgresql.org/message-id/flat/20140504114417.GM12715%40awork2.anarazel.de\n> (original thread)\n\nregards.\n¤\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 11 Dec 2020 11:00:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 11:00:58AM +0900, Kyotaro Horiguchi wrote:\n> Although we could just rip some words off, I'd like to propose instead\n> to add an explanation why it is not exposed for anonymous allocations,\n> like the column allocated_size.\n\nIndeed, there is a hiccup between what the code does and what the docs\ntell: the offset is not NULL for unused memory.\n\n> - The offset at which the allocation starts. NULL for anonymous\n> - allocations and unused memory.\n> + The offset at which the allocation starts. For anonymous allocations,\n> + no information about individual allocations is available, so the column\n> + will be NULL in that case.\n\nI'd say: let's be simple and just remove \"and unused memory\" because\nanonymous allocations are... Anonymous so you cannot know details\nrelated to them. That's something easy to reason about, and the docs\nwere written originally to remain simple.\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 14:42:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "At Fri, 11 Dec 2020 14:42:45 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Dec 11, 2020 at 11:00:58AM +0900, Kyotaro Horiguchi wrote:\n> > Although we could just rip some words off, I'd like to propose instead\n> > to add an explanation why it is not exposed for anonymous allocations,\n> > like the column allocated_size.\n> \n> Indeed, there is a hiccup between what the code does and what the docs\n> tell: the offset is not NULL for unused memory.\n> \n> > - The offset at which the allocation starts. NULL for anonymous\n> > - allocations and unused memory.\n> > + The offset at which the allocation starts. For anonymous allocations,\n> > + no information about individual allocations is available, so the column\n> > + will be NULL in that case.\n> \n> I'd say: let's be simple and just remove \"and unused memory\" because\n> anonymous allocations are... Anonymous so you cannot know details\n> related to them. That's something easy to reason about, and the docs\n> were written originally to remain simple.\n\nHmm. I don't object to that. However, isn't the description for\nallocated_size too verbose in that sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Dec 2020 17:29:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "Would \"NULL for anonymous allocations, since details related to them are\nnot known.\" be ok ?\n\n\nLe ven. 11 déc. 2020 à 09:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com> a\nécrit :\n\n> At Fri, 11 Dec 2020 14:42:45 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in\n> > On Fri, Dec 11, 2020 at 11:00:58AM +0900, Kyotaro Horiguchi wrote:\n> > > Although we could just rip some words off, I'd like to propose instead\n> > > to add an explanation why it is not exposed for anonymous allocations,\n> > > like the column allocated_size.\n> >\n> > Indeed, there is a hiccup between what the code does and what the docs\n> > tell: the offset is not NULL for unused memory.\n> >\n> > > - The offset at which the allocation starts. NULL for anonymous\n> > > - allocations and unused memory.\n> > > + The offset at which the allocation starts. For anonymous\n> allocations,\n> > > + no information about individual allocations is available, so\n> the column\n> > > + will be NULL in that case.\n> >\n> > I'd say: let's be simple and just remove \"and unused memory\" because\n> > anonymous allocations are... Anonymous so you cannot know details\n> > related to them. That's something easy to reason about, and the docs\n> > were written originally to remain simple.\n>\n> Hmm. I don't object to that. Howerver, isn't the description for\n> allocated_size too verbose in that sense?\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Fri, 11 Dec 2020 09:58:29 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "Here's a proposal patch.\n\nLe ven. 11 déc. 2020 à 09:58, Benoit Lobréau <benoit.lobreau@gmail.com> a\nécrit :\n\n> Would \"NULL for anonymous allocations, since details related to them are\n> not known.\" be ok ?\n>\n>\n> Le ven. 11 déc. 2020 à 09:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> a écrit :\n>\n>> At Fri, 11 Dec 2020 14:42:45 +0900, Michael Paquier <michael@paquier.xyz>\n>> wrote in\n>> > On Fri, Dec 11, 2020 at 11:00:58AM +0900, Kyotaro Horiguchi wrote:\n>> > > Although we could just rip some words off, I'd like to propose instead\n>> > > to add an explanation why it is not exposed for anonymous allocations,\n>> > > like the column allocated_size.\n>> >\n>> > Indeed, there is a hiccup between what the code does and what the docs\n>> > tell: the offset is not NULL for unused memory.\n>> >\n>> > > - The offset at which the allocation starts. NULL for anonymous\n>> > > - allocations and unused memory.\n>> > > + The offset at which the allocation starts. For anonymous\n>> allocations,\n>> > > + no information about individual allocations is available, so\n>> the column\n>> > > + will be NULL in that case.\n>> >\n>> > I'd say: let's be simple and just remove \"and unused memory\" because\n>> > anonymous allocations are... Anonymous so you cannot know details\n>> > related to them. That's something easy to reason about, and the docs\n>> > were written originally to remain simple.\n>>\n>> Hmm. I don't object to that. Howerver, isn't the description for\n>> allocated_size too verbose in that sense?\n>>\n>> regards.\n>>\n>> --\n>> Kyotaro Horiguchi\n>> NTT Open Source Software Center\n>>\n>",
"msg_date": "Mon, 14 Dec 2020 10:33:06 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 10:33:06AM +0100, Benoit Lobréau wrote:\n> </para>\n> <para>\n> The offset at which the allocation starts. NULL for anonymous\n> - allocations and unused memory.\n> + allocations, since details related to them are not known.\n> </para></entry>\n\nBoth of you seem to agree about having more details about that, which\nis fine by me at the end. Horiguchi-san, do you have more thoughts to\noffer? Benoit's version is similar to yours, just simpler.\n--\nMichael",
"msg_date": "Tue, 15 Dec 2020 10:09:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_shmem_allocations & documentation"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 10:09:35AM +0900, Michael Paquier wrote:\n> Both of you seem to agree about having more details about that, which\n> is fine by me at the end. Horiguchi-san, do you have more thoughts to\n> offer? Benoit's version is similar to yours, just simpler.\n\nOkay, applied this one then. Thanks Benoit and Horiguchi-san.\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 10:40:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_shmem_allocations & documentation"
}
] |
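To make the semantics settled in this thread easier to see at a glance: a NULL name in pg_shmem_allocations marks unused memory, which still reports an offset, while anonymous allocations report a NULL off. A toy model of the three kinds of rows (illustrative Python, not PostgreSQL code; the sample rows are made up):

```python
# Toy rows mimicking pg_shmem_allocations output: (name, off, size)
rows = [
    ("Buffer Blocks", 6706560, 134217728),  # named allocation: off is known
    ("<anonymous>", None, 4194304),         # anonymous: off is NULL
    (None, 178095232, 1923968),             # unused memory: off is NOT NULL
]

def describe(name, off, size):
    """Classify a row the way the corrected documentation describes it."""
    if name is None:
        return f"unused memory at offset {off} ({size} bytes)"
    if off is None:
        return f"anonymous allocation ({size} bytes, offset unknown)"
    return f"{name} at offset {off} ({size} bytes)"

for name, off, size in rows:
    print(describe(name, off, size))
```

The third row is the case Benoit spotted: filtering on `name IS NULL` returns unused memory, and its off column is populated, so the original "NULL for anonymous allocations and unused memory" wording was wrong about the second half.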
[
{
"msg_contents": "Hi,\n\nCurrently, for CTAS or CREATE MATERIALIZED VIEW(CMV) without\nif-not-exists clause, the existence of the relation gets checked\nduring the execution of the select part and an error is thrown there.\nAll the unnecessary rewrite and planning for the select part would\nhave happened just to fail later. However, if if-not-exists clause is\npresent, then a notice is issued and returned immediately without any\nfurther rewrite or planning for the select part. This seems somewhat\ninconsistent to me.\n\nI propose to check the relation existence early in ExecCreateTableAs()\nas well as in ExplainOneUtility() and throw an error in case it exists\nalready to avoid unnecessary rewrite, planning and execution of the\nselect part.\n\nAttaching a patch. Note that I have not added any test cases as the\nexisting test cases in create_table.sql and matview.sql would cover\nthe code.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 10 Dec 2020 17:06:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fail Fast In CTAS/CMV If Relation Already Exists To Avoid Unnecessary\n Rewrite, Planning Costs"
},
{
"msg_contents": "> Currently, for CTAS or CREATE MATERIALIZED VIEW(CMV) without if-not-exists\r\n> clause, the existence of the relation gets checked during the execution\r\n> of the select part and an error is thrown there.\r\n> All the unnecessary rewrite and planning for the select part would have\r\n> happened just to fail later. However, if if-not-exists clause is present,\r\n> then a notice is issued and returned immediately without any further rewrite\r\n> or planning for . This seems somewhat inconsistent to me.\r\n> \r\n> I propose to check the relation existence early in ExecCreateTableAs() as\r\n> well as in ExplainOneUtility() and throw an error in case it exists already\r\n> to avoid unnecessary rewrite, planning and execution of the select part.\r\n> \r\n> Attaching a patch. Note that I have not added any test cases as the existing\r\n> test cases in create_table.sql and matview.sql would cover the code.\r\n> \r\n> Thoughts?\r\n\r\nPersonally, I think it makes sense, as other CMD(such as create extension/index ...) throw that error\r\nbefore any further operation too.\r\n\r\nI am just a little worried about the behavior change of [explain CTAS].\r\nMaybe someone will complain about the change from normal explaininfo to error output.\r\n\r\nAnd I took a look into the patch.\r\n\r\n+\t\tStringInfoData emsg;\r\n+\r\n+\t\tinitStringInfo(&emsg);\r\n+\r\n+\t\tif (level == NOTICE)\r\n+\t\t\tappendStringInfo(&emsg,\r\n\r\nUsing variable emsg and level seems a little complicated to me.\r\nHow about just:\r\n\r\nif (!is_explain && ctas->if_not_exists)\r\nereport(NOTICE,xxx\r\nelse\r\nereport(ERROR,xxx\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n",
"msg_date": "Fri, 11 Dec 2020 01:00:59 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "Thanks for taking a look at this.\n\nOn Fri, Dec 11, 2020 at 6:33 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > Currently, for CTAS or CREATE MATERIALIZED VIEW(CMV) without if-not-exists\n> > clause, the existence of the relation gets checked during the execution\n> > of the select part and an error is thrown there.\n> > All the unnecessary rewrite and planning for the select part would have\n> > happened just to fail later. However, if if-not-exists clause is present,\n> > then a notice is issued and returned immediately without any further rewrite\n> > or planning for . This seems somewhat inconsistent to me.\n> >\n> > I propose to check the relation existence early in ExecCreateTableAs() as\n> > well as in ExplainOneUtility() and throw an error in case it exists already\n> > to avoid unnecessary rewrite, planning and execution of the select part.\n> >\n> > Attaching a patch. Note that I have not added any test cases as the existing\n> > test cases in create_table.sql and matview.sql would cover the code.\n> >\n> > Thoughts?\n>\n> Personally, I think it make sense, as other CMD(such as create extension/index ...) throw that error\n> before any further operation too.\n>\n> I am just a little worried about the behavior change of [explain CTAS].\n> May be someone will complain the change from normal explaininfo to error output.\n\nI think we are clear with the patch for plain i.e. 
non-EXPLAIN and\nEXPLAIN ANALYZE CTAS/CMV cases.\n\nThe behaviour for EXPLAIN is as follows:\n\n1)EXPLAIN without ANALYZE, without patch: select part is planned(note\nthat the relations in the select part are checked for their existence\nwhile planning, fails any of them don't exist) , relation(CTAS/CMV\nbeing created) existence is not checked as we will not create the\nrelation and execute the plan.\n\n2)EXPLAIN with ANALYZE, without patch: select part is planned, as we\nexecute the plan, relation(CTAS/CMV) existence is checked during the\nexecution and fails there if it exists.\n\n3) EXPLAIN without ANALYZE, with patch: relation(CTAS/CMV) existence\nis checked before the planning and fails if it exists, without going\nfurther to the planning for select part.\n\n4)EXPLAIN with ANALYZE, with patch: relation(CTAS/CMV) existence is\nchecked before the rewrite, planning and fails if it exists, without\ngoing further.\n\nIMO, let's not change the 1) behaviour to 3) with the patch. If\nagreed, I can do the following way in ExplainOneUtility and will add a\ncomment on why we are doing this.\n\nif (es->analyze)\n (void) CheckRelExistenceInCTAS(ctas, true);\n\nThoughts?\n\n> And I took a look into the patch.\n>\n> + StringInfoData emsg;\n> +\n> + initStringInfo(&emsg);\n> +\n> + if (level == NOTICE)\n> + appendStringInfo(&emsg,\n>\n> Using variable emsg and level seems a little complicated to me.\n> How about just:\n>\n> if (!is_explain && ctas->if_not_exists)\n> ereport(NOTICE,xxx\n> else\n> ereport(ERROR,xxx\n\nI will modify it in the next version of the patch which I plan to send\nonce agreed on the above point.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Dec 2020 07:24:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "> IMO, let's not change the 1) behaviour to 3) with the patch. If agreed,\r\n\r\n> I can do the following way in ExplainOneUtility and will add a comment on\r\n\r\n> why we are doing this.\r\n\r\n>\r\n\r\n> if (es->analyze)\r\n\r\n> (void) CheckRelExistenceInCTAS(ctas, true);\r\n\r\n>\r\n\r\n> Thoughts?\r\n\r\n\r\n\r\nAgreed.\r\n\r\n\r\n\r\nJust in case, I took a look at Oracle 12’s behavior about [explain CTAS].\r\n\r\nOracle 12 will output the plan without throwing any msg in this case.\r\n\r\n\r\n\r\nBest regards,\r\n\r\nhouzj",
"msg_date": "Fri, 11 Dec 2020 06:43:56 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 12:13 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > IMO, let's not change the 1) behaviour to 3) with the patch. If agreed,\n>\n> > I can do the following way in ExplainOneUtility and will add a comment on\n>\n> > why we are doing this.\n>\n> > if (es->analyze)\n>\n> > (void) CheckRelExistenceInCTAS(ctas, true);\n>\n> > Thoughts?\n>\n> Agreed.\n\nThanks!\n\nSo, I will post an updated patch soon.\n\n> Just in case, I took a look at Oracle 12’s behavior about [explain CTAS].\n>\n> Oracle 12 will output the plan without throwing any msg in this case.\n\nI'm not quite sure how other databases behave. If I go by the main\nintention of EXPLAIN without ANALYZE, that should do the planning,\nshow it in the output and no execution of the query should happen. For\nEXPLAIN CTAS/CMV, only thing that gets planned is the SELECT part and\nno execution happens so no existence check for the CTAS/CMV relation\nthat will get created if the CTAS/CMV is executed. Having said that,\nthe existence of the relations that are in the SELECT part are anyways\nchecked during planning for EXPLAIN without ANALYZE.\n\nIMHO, let's not alter the existing behaviour, if needed, that can be\ndiscussed separately.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Dec 2020 12:48:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 12:48:49PM +0530, Bharath Rupireddy wrote:\n> I'm not quite sure how other databases behave. If I go by the main\n> intention of EXPLAIN without ANALYZE, that should do the planning,\n> show it in the output and no execution of the query should happen. For\n> EXPLAIN CTAS/CMV, only thing that gets planned is the SELECT part and\n> no execution happens so no existence check for the CTAS/CMV relation\n> that will get created if the CTAS/CMV is executed. Having said that,\n> the existence of the relations that are in the SELECT part are anyways\n> checked during planning for EXPLAIN without ANALYZE.\n\nI think that it is tricky to define IF NOT EXISTS for a CTAS with\nEXPLAIN. How would you for example treat an EXPLAIN ANALYZE with a\nquery that includes an INSERT RETURNING in a WITH clause. Would you\nsay that we do nothing if the relation exists? Or would you execute\nit, still insert nothing on the result relation because it already\nexists, even if the inner query may have inserted something as part of\nits execution on a different relation?\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 17:10:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 1:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Dec 11, 2020 at 12:48:49PM +0530, Bharath Rupireddy wrote:\n> > I'm not quite sure how other databases behave. If I go by the main\n> > intention of EXPLAIN without ANALYZE, that should do the planning,\n> > show it in the output and no execution of the query should happen. For\n> > EXPLAIN CTAS/CMV, only thing that gets planned is the SELECT part and\n> > no execution happens so no existence check for the CTAS/CMV relation\n> > that will get created if the CTAS/CMV is executed. Having said that,\n> > the existence of the relations that are in the SELECT part are anyways\n> > checked during planning for EXPLAIN without ANALYZE.\n>\n> I think that it is tricky to define IF NOT EXISTS for a CTAS with\n> EXPLAIN. How would you for example treat an EXPLAIN ANALYZE with a\n> query that includes an INSERT RETURNING in a WITH clause. Would you\n> say that we do nothing if the relation exists? Or would you execute\n> it, still insert nothing on the result relation because it already\n> exists, even if the inner query may have inserted something as part of\n> its execution on a different relation?\n\nI may not have got your above scenario correctly(it will be good if\nyou can provide the use case in case I want to check something there).\nI tried the following way, all the involved relations are being\nchecked for existence even though for EXPLAIN:\npostgres=# EXPLAIN WITH temp1 AS (SELECT * FROM t1) INSERT INTO\nt1_does_not_exit VALUES (1);\nERROR: relation \"t1_does_not_exit\" does not exist\nLINE 1: ...LAIN WITH temp1 AS (SELECT * FROM t1) INSERT INTO t1_does_no...\n ^\nIIUC, is it that we want the following behaviour in case the relation\nCTAS/CMV is trying to create does not exist? 
Note that the sample\nqueries are run on latest master branch:\n\nEXPLAIN: throw an error, instead of the query showing select plan on\nmaster branch currently?\npostgres=# explain create table t2 as select * from t1;\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on t1 (cost=0.00..2.00 rows=100 width=8)\n\nEXPLAIN ANALYZE: throw an error as it does on master branch?\npostgres=# explain analyze create table t2 as select * from t1;\nERROR: relation \"t2\" already exists\n\nEXPLAIN with if-not-exists clause: throw a warning and an empty plan\nfrom ExplainOneUtility? If not an empty plan, we should be doing the\nrelation existence check before we come to explain routines, maybe in\ngram.c? On the master branch it doesn't happen, the query shows the\nplan for select part as shown below.\npostgres=# explain create table if not exists t2 as select * from t1;\n QUERY PLAN\n----------------------------------------------------\n Seq Scan on t1 (cost=0.00..2.00 rows=100 width=8)\n\nEXPLAIN ANALYZE with if-not-exists clause: (ideally, for if-not-exists\nclause we expect a warning to be issued, but currently relation\nexistence error is thrown) a warning and an empty plan from\nExplainOneUtility? If not an empty plan, we should be doing the\nrelation existence check before we come to explain routines, maybe in\ngram.c? On the master branch an ERROR is thrown.\npostgres=# explain analyze create table if not exists t2 as select * from t1;\nERROR: relation \"t2\" already exists\n\nFor plain CTAS -> throw an error as it happens on master branch.\npostgres=# create table t2 as select * from t1;\nERROR: relation \"t2\" already exists\n\nFor plain CTAS with if-not-exists clause -> a warning is issued as it\nhappens on master branch.\npostgres=# create table if not exists t2 as select * from t1;\nNOTICE: relation \"t2\" already exists, skipping\nCREATE TABLE AS\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Dec 2020 15:03:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 03:03:46PM +0530, Bharath Rupireddy wrote:\n> I may not have got your above scenario correctly(it will be good if\n> you can provide the use case in case I want to check something there).\n\nIt is possible to have DML queries in WITH clauses, as long as they\nuse RETURNING to feed tuples to the outer query. Just imagine\nsomething like that:\n=# explain analyze\n create table if not exists aa as\n with insert_query as\n (insert into aa values (1) returning a)\n select * from insert_query;\n\nPlease note that this case fails with your patch, but the presence of\nIF NOT EXISTS should ensure that we don't fail and issue a NOTICE\ninstead, no? Taking this case specifically (OK, I am playing with\nthe rules a bit to insert data into the relation itself, still), this\nquery may finish by adding tuples to the table whose creation should\nhave been bypassed but the query got executed and inserted tuples.\nThat's one example of behavior that may be confusing. There may be\nothers, but it seems to me that it may be simpler to execute or even\nplan the query at all if the relation already exists.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 15:15:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 03:15:12PM +0900, Michael Paquier wrote:\n> Please note that this case fails with your patch, but the presence of\n> IF NOT EXISTS should ensure that we don't fail and issue a NOTICE\n> instead, no? Taking this case specifically (OK, I am playing with\n> the rules a bit to insert data into the relation itself, still), this\n> query may finish by adding tuples to the table whose creation should\n> have been bypassed but the query got executed and inserted tuples.\n> That's one example of behavior that may be confusing. There may be\n> others, but it seems to me that it may be simpler to execute or even\n> plan the query at all if the relation already exists.\n\nEr.. Sorry. I meant here to *not* execute or even *not* plan the\nquery at all if the relation already exists.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 15:22:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 11:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Dec 14, 2020 at 03:15:12PM +0900, Michael Paquier wrote:\n> > Please note that this case fails with your patch, but the presence of\n> > IF NOT EXISTS should ensure that we don't fail and issue a NOTICE\n> > instead, no?\n\nThanks for the use case. The provided use case (or for that matter any\nuse case with explain analyze ctas if-not-exists) fails if the\nrelation already exists. It happens on the master branch, please have\na look at tests [1]. You are right in saying that whether it is\nexplain/explain analyze ctas if there is if-not-exists we should issue\nnotice instead of error as with plain ctas.\n\nDo we want to fix this behaviour for explain/explain analyze ctas with\nif-not-exists cases? Thoughts?\n\nIf yes, we could change the code in ExplainOneUtility() such that we\ncheck relation existence before rewrite/planning, issue notice and\nreturn. Then, the user sees a notice and an empty plan as we are\nreturning from ExplainOneUtility(). Is it okay to show a warning and\nan empty plan to the user? Thoughts?\n\n>> Taking this case specifically (OK, I am playing with\n> > the rules a bit to insert data into the relation itself, still), this\n> > query may finish by adding tuples to the table whose creation should\n> > have been bypassed but the query got executed and inserted tuples.\n\nIIUC, with the use case provided, the tuples will not be inserted as\nthe query later fails (and the txn gets aborted) if the relation\nexists.\n\n> > That's one example of behavior that may be confusing. There may be\n> > others, but it seems to me that it may be simpler to execute or even\n> > plan the query at all if the relation already exists.\n>\n> Er.. Sorry. 
I meant here to *not* execute or even *not* plan the\n> query at all if the relation already exists.\n\n+1 to not plan and execute the query at all if the relation which\nctas/cmv is trying to create already exists.\n\n[1] -\npostgres=# explain analyze\npostgres-# create table if not exists aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\nERROR: relation \"aa\" already exists\n\npostgres=# explain analyze\npostgres-# create table aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\nERROR: relation \"aa\" already exists\n\npostgres=# explain\npostgres-# create table aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\n QUERY PLAN\n------------------------------------------------------------\n CTE Scan on insert_query (cost=0.01..0.03 rows=1 width=4)\n CTE insert_query\n -> Insert on aa (cost=0.00..0.01 rows=1 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n\npostgres=# explain\npostgres-# create table if not exists aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\n QUERY PLAN\n------------------------------------------------------------\n CTE Scan on insert_query (cost=0.01..0.03 rows=1 width=4)\n CTE insert_query\n -> Insert on aa (cost=0.00..0.01 rows=1 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n\npostgres=# create table aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\nERROR: relation \"aa\" already exists\n\npostgres=# create table if not exists aa as\npostgres-# with insert_query as\npostgres-# (insert into aa values (1) returning a1)\npostgres-# select * from insert_query;\nNOTICE: relation \"aa\" already exists, skipping\nCREATE TABLE 
AS\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Dec 2020 13:54:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 1:54 PM Bharath Rupireddy <bharath.\nrupireddyforpostgres@gmail.com> wrote:\n> On Mon, Dec 14, 2020 at 11:52 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n> > On Mon, Dec 14, 2020 at 03:15:12PM +0900, Michael Paquier wrote:\n> > > Please note that this case fails with your patch, but the presence of\n> > > IF NOT EXISTS should ensure that we don't fail and issue a NOTICE\n> > > instead, no?\n>\n> Thanks for the use case. The provided use case (or for that matter any\n> use case with explain analyze ctas if-not-exists) fails if the\n> relation already exists. It happens on the master branch, please have\n> a look at tests [1]. You are right in saying that whether it is\n> explain/explain analyze ctas if there is if-not-exists we should issue\n> notice instead of error as with plain ctas.\n>\n> Do we want to fix this behaviour for explain/explain analyze ctat with\n> if-not-exists cases? Thoughts?\n>\n> If yes, we could change the code in ExplainOneUtility() such that we\n> check relation existence before rewrite/planning, issue notice and\n> return. Then. the user sees a notice and an empty plan as we are\n> returning from ExplainOneUtility(). Is it okay to show a warning and\n> an empty plan to the user? Thoughts?\n>\n> >> Taking this case specifically (OK, I am playing with\n> > > the rules a bit to insert data into the relation itself, still), this\n> > > query may finish by adding tuples to the table whose creation should\n> > > have been bypassed but the query got executed and inserted tuples.\n>\n> IIUC, with the use case provided, the tuples will not be inserted as\n> the query later fails (and the txn gets aborted) if the relation\n> exists.\n>\n> > > That's one example of behavior that may be confusing. There may be\n> > > others, but it seems to me that it may be simpler to execute or even\n> > > plan the query at all if the relation already exists.\n> >\n> > Er.. Sorry. 
I meant here to *not* execute or even *not* plan the\n> > query at all if the relation already exists.\n>\n> +1 to not plan and execute the query at all if the relation which\n> ctas/cmv is trying to create already exists.\n\nPosting a v2 patch after modifying the new function CheckRelExistenceInCTAS()\na bit as suggested earlier.\n\nThe behavior of the ctas/cmv, in case the relation already exists is as\nshown in [1]. The things that have been changed with the patch are: 1) In\nany case we do not rewrite or plan the select part if the relation already\nexists 2) For explain ctas/cmv (without analyze), now the relation\nexistence is checked early and the error is thrown as highlighted in [1].\n\nWith patch, there is no behavioral change(from that of master branch) in\nexplain analyze ctas/cmv with if-not-exists i.e. error is thrown not the\nnotice.\n\nThoughts?\n\n[1]\nWith patch:\npostgres=# create table foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# create table if not exists foo as select 1;\nNOTICE: relation \"foo\" already exists, skipping\nCREATE TABLE AS\npostgres=# explain analyze create table foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# explain analyze create table if not exists foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# explain create table foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# explain create table if not exists foo as select 1;\nERROR: relation \"foo\" already exists\n\nOn master/without patch:\npostgres=# create table foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# create table if not exists foo as select 1;\nNOTICE: relation \"foo\" already exists, skipping\nCREATE TABLE AS\npostgres=# explain analyze create table foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# explain analyze create table if not exists foo as select 1;\nERROR: relation \"foo\" already exists\npostgres=# explain create table foo as select 1;\n 
QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n(1 row)\npostgres=# explain create table if not exists foo as select 1;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n(1 row)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Dec 2020 15:06:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 03:06:59PM +0530, Bharath Rupireddy wrote:\n> The behavior of the ctas/cmv, in case the relation already exists is as\n> shown in [1]. The things that have been changed with the patch are: 1) In\n> any case we do not rewrite or plan the select part if the relation already\n> exists 2) For explain ctas/cmv (without analyze), now the relation\n> existence is checked early and the error is thrown as highlighted in [1].\n> \n> With patch, there is no behavioral change(from that of master branch) in\n> explain analyze ctas/cmv with if-not-exists i.e. error is thrown not the\n> notice.\n> \n> Thoughts?\n\nHEAD is already a mixed bad of behaviors, and the set of results you\nare presenting here is giving a similar impression. It brings in some\nsanity by just ignoring the effects of the IF NOT EXISTS clause all\nthe time still that's not consistent with the queries not using\nEXPLAIN. Hmm. Looking for similar behaviors, I can see one case in\nselect_into.sql where we just never execute the plan when using WITH\nNO DATA but still show the plan, meaning that the query gets planned\nbut it just gets marked as \"(never executed)\" if attempting to use\nANALYZE. There may be use cases for that as the user directly asked\ndirectly for an EXPLAIN.\n\nNote: the patch needs tests for all the patterns you would like to\nstress. This way it is easier to follow the patterns that are\nchanging with your patch and compare them with the HEAD behavior (like\nlooking at the diffs with the tests of the patch, but without the\ndiffs in src/backend/).\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 10:48:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Dec 17, 2020 at 03:06:59PM +0530, Bharath Rupireddy wrote:\n> > The behavior of the ctas/cmv, in case the relation already exists is as\n> > shown in [1]. The things that have been changed with the patch are: 1) In\n> > any case we do not rewrite or plan the select part if the relation already\n> > exists 2) For explain ctas/cmv (without analyze), now the relation\n> > existence is checked early and the error is thrown as highlighted in [1].\n> >\n> > With patch, there is no behavioral change(from that of master branch) in\n> > explain analyze ctas/cmv with if-not-exists i.e. error is thrown not the\n> > notice.\n> >\n> > Thoughts?\n>\n> HEAD is already a mixed bad of behaviors, and the set of results you\n> are presenting here is giving a similar impression. It brings in some\n> sanity by just ignoring the effects of the IF NOT EXISTS clause all\n> the time still that's not consistent with the queries not using\n> EXPLAIN.\n\nI tried to make it consistent by issuing NOTICE (not an error) even\nfor EXPLAIN/EXPLAIN ANALYZE IF NOT EXISTS case. If issue notice and\nexit from the ExplainOneUtility, we could output an empty plan to the\nuser because, by now ExplainResultDesc would have been called at the\nstart of the explain via PortalStart(). I didn't find a clean way of\ncoding if we are not okay to show notice and empty plan to the user.\n\nAny suggestions on achieving above?\n\n> Hmm. Looking for similar behaviors, I can see one case in\n> select_into.sql where we just never execute the plan when using WITH\n> NO DATA but still show the plan, meaning that the query gets planned\n> but it just gets marked as \"(never executed)\" if attempting to use\n> ANALYZE.\n\nYes, with no data we would see \"(never executed)\" for explain analyze\nif the relation does not already exist. 
If the relation does exist,\nthen the error/notice.\n\n>There may be use cases for that as the user directly asked directly for an EXPLAIN.\n\nIMHO, in any case checking for the existence of the relations\nspecified in a query is must before we output something to the user.\nFor instance, the query \"explain select * from non_existent_tbl;\"\nwhere non_existent_tbl doesn't exist, throws an error. Similarly,\n\"explain create table already_existing_tbl as select * from\nanother_tbl;\" where the table ctas/select into trying to create\nalready exists, should also throw error. But that's not happening\ncurrently on master. Which seems to be a problem to me. So, with the\npatch proposed here, we error out in this case.\n\nIf the user really wants to see the explain plan, then he/she should\nuse the correct query.\n\n> Note: the patch needs tests for all the patterns you would like to\n> stress. This way it is easier to follow the patterns that are\n> changing with your patch and compare them with the HEAD behavior (like\n> looking at the diffs with the tests of the patch, but without the\n> diffs in src/backend/).\n\nSure, I will add test cases and post v3 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Dec 2020 08:15:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 8:15 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Dec 18, 2020 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Thu, Dec 17, 2020 at 03:06:59PM +0530, Bharath Rupireddy wrote:\n> > > The behavior of the ctas/cmv, in case the relation already exists is as\n> > > shown in [1]. The things that have been changed with the patch are: 1) In\n> > > any case we do not rewrite or plan the select part if the relation already\n> > > exists 2) For explain ctas/cmv (without analyze), now the relation\n> > > existence is checked early and the error is thrown as highlighted in [1].\n> > >\n> > > With patch, there is no behavioral change(from that of master branch) in\n> > > explain analyze ctas/cmv with if-not-exists i.e. error is thrown not the\n> > > notice.\n> > >\n> > > Thoughts?\n> >\n> > HEAD is already a mixed bad of behaviors, and the set of results you\n> > are presenting here is giving a similar impression. It brings in some\n> > sanity by just ignoring the effects of the IF NOT EXISTS clause all\n> > the time still that's not consistent with the queries not using\n> > EXPLAIN.\n>\n> I tried to make it consistent by issuing NOTICE (not an error) even\n> for EXPLAIN/EXPLAIN ANALYZE IF NOT EXISTS case. If issue notice and\n> exit from the ExplainOneUtility, we could output an empty plan to the\n> user because, by now ExplainResultDesc would have been called at the\n> start of the explain via PortalStart(). I didn't find a clean way of\n> coding if we are not okay to show notice and empty plan to the user.\n>\n> Any suggestions on achieving above?\n>\n> > Hmm. 
Looking for similar behaviors, I can see one case in\n> > select_into.sql where we just never execute the plan when using WITH\n> > NO DATA but still show the plan, meaning that the query gets planned\n> > but it just gets marked as \"(never executed)\" if attempting to use\n> > ANALYZE.\n>\n> Yes, with no data we would see \"(never executed)\" for explain analyze\n> if the relation does not already exist. If the relation does exist,\n> then the error/notice.\n>\n> >There may be use cases for that as the user directly asked directly for an EXPLAIN.\n>\n> IMHO, in any case checking for the existence of the relations\n> specified in a query is must before we output something to the user.\n> For instance, the query \"explain select * from non_existent_tbl;\"\n> where non_existent_tbl doesn't exist, throws an error. Similarly,\n> \"explain create table already_existing_tbl as select * from\n> another_tbl;\" where the table ctas/select into trying to create\n> already exists, should also throw error. But that's not happening\n> currently on master. Which seems to be a problem to me. So, with the\n> patch proposed here, we error out in this case.\n>\n> If the user really wants to see the explain plan, then he/she should\n> use the correct query.\n>\n> > Note: the patch needs tests for all the patterns you would like to\n> > stress. This way it is easier to follow the patterns that are\n> > changing with your patch and compare them with the HEAD behavior (like\n> > looking at the diffs with the tests of the patch, but without the\n> > diffs in src/backend/).\n>\n> Sure, I will add test cases and post v3 patch.\n\nAttaching v3 patch that also contains test cases. Please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 21 Dec 2020 12:01:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 12:01:38PM +0530, Bharath Rupireddy wrote:\n> On Fri, Dec 18, 2020 at 8:15 AM Bharath Rupireddy\n>> I tried to make it consistent by issuing NOTICE (not an error) even\n>> for EXPLAIN/EXPLAIN ANALYZE IF NOT EXISTS case. If issue notice and\n>> exit from the ExplainOneUtility, we could output an empty plan to the\n>> user because, by now ExplainResultDesc would have been called at the\n>> start of the explain via PortalStart(). I didn't find a clean way of\n>> coding if we are not okay to show notice and empty plan to the user.\n>>\n>> Any suggestions on achieving above?\n\nI was looking at your patch today, and I actually found the conclusion\nto output an empty plan while issuing a NOTICE to be quite intuitive\nif the caller uses IF NOT EXISTS with EXPLAIN.\n\n> Attaching v3 patch that also contains test cases. Please review it further.\n\nThanks for adding some test cases! Some of them were exact\nduplicates, so it is possible to reduce the number of queries without\nimpacting the coverage. I have also chosen a query that forces an\nerror within the planner.\n\nPlease see the attached. IF NOT EXISTS implies that CTAS or CREATE\nMATVIEW will never ERROR if the relation already exists, with or\nwithout EXPLAIN, EXECUTE or WITH NO DATA, so that gets us a consistent\nbehavior across all the patterns.\n\nNote: I'd like to think that we could choose a better name for\nCheckRelExistenceInCTAS().\n--\nMichael",
"msg_date": "Tue, 22 Dec 2020 17:37:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 2:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I was looking at your patch today, and I actually found the conclusion\n> to output an empty plan while issuing a NOTICE to be quite intuitive\n> if the caller uses IF NOT EXISTS with EXPLAIN.\n\nThanks!\n\n> Thanks for adding some test cases! Some of them were exact\n> duplicates, so it is possible to reduce the number of queries without\n> impacting the coverage. I have also chosen a query that forces an\n> error within the planner.\n> Please see the attached. IF NOT EXISTS implies that CTAS or CREATE\n> MATVIEW will never ERROR if the relation already exists, with or\n> without EXPLAIN, EXECUTE or WITH NO DATA, so that gets us a consistent\n> behavior across all the patterns.\n\nLGTM.\n\n> Note: I'd like to think that we could choose a better name for\n> CheckRelExistenceInCTAS().\n\nI changed it to IsCTASRelCreationAllowed() and attached a v5 patch.\nPlease let me know if this is okay.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 22 Dec 2020 15:12:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 03:12:15PM +0530, Bharath Rupireddy wrote:\n> On Tue, Dec 22, 2020 at 2:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Note: I'd like to think that we could choose a better name for\n>> CheckRelExistenceInCTAS().\n> \n> I changed it to IsCTASRelCreationAllowed() and attached a v5 patch.\n> Please let me know if this is okay.\n\nAfter thinking about that, using \"CTAS\" while other routines in the\nsame area use \"CreateTableAs\" looks inconsistent to me. So I have\ncome up with CreateTableAsRelExists() as name.\n\nAs the same time, I have looked at the git history to note 9bd27b7\nwhere we had better not give an empty output for non-text formats. So\nI'd like to think that it makes sense to use ExplainDummyGroup() if\nthe relation exists with IF NOT EXISTS, keeping some consistency.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Wed, 23 Dec 2020 21:31:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 6:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Dec 22, 2020 at 03:12:15PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Dec 22, 2020 at 2:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Note: I'd like to think that we could choose a better name for\n> >> CheckRelExistenceInCTAS().\n> >\n> > I changed it to IsCTASRelCreationAllowed() and attached a v5 patch.\n> > Please let me know if this is okay.\n>\n> After thinking about that, using \"CTAS\" while other routines in the\n> same area use \"CreateTableAs\" looks inconsistent to me. So I have\n> come up with CreateTableAsRelExists() as name.\n\nI think CreateTableAsRelExists() can return true if the relation\nalready exists and false otherwise, to keep in sync with the function\nname. I updated this and attached v7 patch.\n\n> As the same time, I have looked at the git history to note 9bd27b7\n> where we had better not give an empty output for non-text formats. So\n> I'd like to think that it makes sense to use ExplainDummyGroup() if\n> the relation exists with IF NOT EXISTS, keeping some consistency.\n>\n> What do you think?\n\n+1. Shall we add some test cases(with xml, yaml, json formats as is\ncurrently being done in explain.sql) to cover that? We can have the\nexplain_filter() function to remove the unstable parts in the output,\nit looks something like below. 
If yes, please let me know I can add\nthem to matview and select_into.\n\npostgres=# select explain_filter('explain(analyze, format xml) create\ntable if not exists t1 as select 1;');\nNOTICE: relation \"t1\" already exists, skipping\n explain_filter\n-------------------------------------------------------\n <explain xmlns=\"http://www.postgresql.org/N/explain\">+\n <CREATE-TABLE-AS /> +\n </explain>\n(1 row)\n\npostgres=# select explain_filter('explain(analyze, format yaml)\ncreate table if not exists t1 as select 1;');\nNOTICE: relation \"t1\" already exists, skipping\n explain_filter\n---------------------\n - \"CREATE TABLE AS\"\n(1 row)\n\npostgres=# select explain_filter('explain(analyze, format json)\ncreate table if not exists t1 as select 1;');\nNOTICE: relation \"t1\" already exists, skipping\n explain_filter\n---------------------\n [ +\n \"CREATE TABLE AS\"+\n ]\n(1 row)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 23 Dec 2020 19:13:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 07:13:33PM +0530, Bharath Rupireddy wrote:\n> +1. Shall we add some test cases(with xml, yaml, json formats as is\n> currently being done in explain.sql) to cover that? We can have the\n> explain_filter() function to remove the unstable parts in the output,\n> it looks something like below. If yes, please let me know I can add\n> them to matview and select_into.\n\nI am not sure that we need tests for all the formats, but having at\nleast one of them sounds good to me. I leave the choice up to you.\n\nWhat we have here looks rather committable. Let's wait until the\nperiod of vacations is over before wrapping this up to give others the\noccasion to comment.\n--\nMichael",
"msg_date": "Thu, 24 Dec 2020 11:09:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 7:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 23, 2020 at 07:13:33PM +0530, Bharath Rupireddy wrote:\n> > +1. Shall we add some test cases(with xml, yaml, json formats as is\n> > currently being done in explain.sql) to cover that? We can have the\n> > explain_filter() function to remove the unstable parts in the output,\n> > it looks something like below. If yes, please let me know I can add\n> > them to matview and select_into.\n>\n> I am not sure that we need tests for all the formats, but having at\n> least one of them sounds good to me. I leave the choice up to you.\n\nSince I tested that with all the formats manually here and it works,\nso I don't want to make the test cases complicated with adding\nexplain_filter() function into matview.sql and select_into.sql and all\nthat. I'm okay without those test cases.\n\n> What we have here looks rather committable. Let's wait until the\n> period of vacations is over before wrapping this up to give others the\n> occasion to comment.\n\nThanks! Happy Vacations!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Dec 2020 09:10:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 09:10:22AM +0530, Bharath Rupireddy wrote:\n> Since I tested that with all the formats manually here and it works,\n> so I don't want to make the test cases complicated with adding\n> explain_filter() function into matview.sql and select_into.sql and all\n> that. I'm okay without those test cases.\n\nPlease note that I have added an entry in the CF app for the moment so\nas we don't lose track of it:\nhttps://commitfest.postgresql.org/31/2892/\n\n>> What we have here looks rather committable. Let's wait until the\n>> period of vacations is over before wrapping this up to give others the\n>> occasion to comment.\n> \n> Thanks! Happy Vacations!\n\nYou too!\n--\nMichael",
"msg_date": "Thu, 24 Dec 2020 13:23:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 01:23:40PM +0900, Michael Paquier wrote:\n> Please note that I have added an entry in the CF app for the moment so\n> as we don't lose track of it:\n> https://commitfest.postgresql.org/31/2892/\n\nI have been able to look at that again today, and applied it. I have\ntweaked a bit the comments, and added an elog(ERROR) as a safety net\nfor explain.c if the IFNE code path is taken for an object type that\nis not expected with CreateTableAsStmt.\n--\nMichael",
"msg_date": "Wed, 30 Dec 2020 21:55:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fail Fast In CTAS/CMV If Relation Already Exists To Avoid\n Unnecessary Rewrite, Planning Costs"
}
] |
[
{
"msg_contents": "I went looking at the SSL connection state change information callback we\ninstall when setting up connections with OpenSSL, and I wasn't getting the\nstate changes I expected. Turns out we install it at the tail end of setting\nup the connection so we miss most of the calls. Moving it to the beginning of\nbe_tls_open_server allows us to catch the handshake etc. I also extended it by\nprinting the human readable state change message available from OpenSSL to make\nthe logs more detailed (SSL_state_string_long has existed since 0.9.8).\n\nA randomly selected sequence from a src/test/ssl testrun with the callback\nmoved but not extended with state information:\n\nLOG: connection received: host=localhost port=51177\nDEBUG: SSL: handshake start\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept exit (-1)\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept exit (-1)\nDEBUG: SSL: accept exit (-1)\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: accept loop\nDEBUG: SSL: handshake done\nDEBUG: SSL: accept exit (1)\nDEBUG: SSL connection from \"(anonymous)\"\n\nThe same sequence with the patch applied:\n\nLOG: connection received: host=localhost port=51177\nDEBUG: SSL: handshake start: \"before/accept initialization\"\nDEBUG: SSL: accept loop: \"before/accept initialization\"\nDEBUG: SSL: accept exit (-1): \"SSLv2/v3 read client hello A\"\nDEBUG: SSL: accept loop: \"SSLv3 read client hello A\"\nDEBUG: SSL: accept loop: \"SSLv3 write server hello A\"\nDEBUG: SSL: accept loop: \"SSLv3 write certificate A\"\nDEBUG: SSL: accept loop: \"SSLv3 write key exchange A\"\nDEBUG: SSL: accept loop: \"SSLv3 write certificate request A\"\nDEBUG: SSL: accept loop: \"SSLv3 flush data\"\nDEBUG: SSL: accept exit (-1): \"SSLv3 read client certificate 
A\"\nDEBUG: SSL: accept exit (-1): \"SSLv3 read client certificate A\"\nDEBUG: SSL: accept loop: \"SSLv3 read client certificate A\"\nDEBUG: SSL: accept loop: \"SSLv3 read client key exchange A\"\nDEBUG: SSL: accept loop: \"SSLv3 read certificate verify A\"\nDEBUG: SSL: accept loop: \"SSLv3 read finished A\"\nDEBUG: SSL: accept loop: \"SSLv3 write change cipher spec A\"\nDEBUG: SSL: accept loop: \"SSLv3 write finished A\"\nDEBUG: SSL: accept loop: \"SSLv3 flush data\"\nDEBUG: SSL: handshake done: \"SSL negotiation finished successfully\"\nDEBUG: SSL: accept exit (1): \"SSL negotiation finished successfully\"\nDEBUG: SSL connection from \"(anonymous)\"\n\nThe attached contains these two changes as well as comment fixups which Heikki\nnoticed.\n\ncheers ./daniel",
"msg_date": "Thu, 10 Dec 2020 14:43:33 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "OpenSSL connection setup debug callback issue"
},
{
"msg_contents": "Hi Daniel,\n\nOn Thu, Dec 10, 2020 at 10:43 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> I went looking at the SSL connection state change information callback we\n> install when setting up connections with OpenSSL, and I wasn't getting the\n> state changes I expected. Turns out we install it at the tail end of setting\n> up the connection so we miss most of the calls. Moving it to the beginning of\n> be_tls_open_server allows us to catch the handshake etc. I also extended it by\n> printing the human readable state change message available from OpenSSL to make\n> the logs more detailed (SSL_state_string_long has existed since 0.9.8).\n>\n> A randomly selected sequence from a src/test/ssl testrun with the callback\n> moved but not extended with state information:\n>\n> LOG: connection received: host=localhost port=51177\n> DEBUG: SSL: handshake start\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept exit (-1)\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept exit (-1)\n> DEBUG: SSL: accept exit (-1)\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: accept loop\n> DEBUG: SSL: handshake done\n> DEBUG: SSL: accept exit (1)\n> DEBUG: SSL connection from \"(anonymous)\"\n>\n> The same sequence with the patch applied:\n>\n> LOG: connection received: host=localhost port=51177\n> DEBUG: SSL: handshake start: \"before/accept initialization\"\n> DEBUG: SSL: accept loop: \"before/accept initialization\"\n> DEBUG: SSL: accept exit (-1): \"SSLv2/v3 read client hello A\"\n> DEBUG: SSL: accept loop: \"SSLv3 read client hello A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write server hello A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write certificate A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write key exchange 
A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write certificate request A\"\n> DEBUG: SSL: accept loop: \"SSLv3 flush data\"\n> DEBUG: SSL: accept exit (-1): \"SSLv3 read client certificate A\"\n> DEBUG: SSL: accept exit (-1): \"SSLv3 read client certificate A\"\n> DEBUG: SSL: accept loop: \"SSLv3 read client certificate A\"\n> DEBUG: SSL: accept loop: \"SSLv3 read client key exchange A\"\n> DEBUG: SSL: accept loop: \"SSLv3 read certificate verify A\"\n> DEBUG: SSL: accept loop: \"SSLv3 read finished A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write change cipher spec A\"\n> DEBUG: SSL: accept loop: \"SSLv3 write finished A\"\n> DEBUG: SSL: accept loop: \"SSLv3 flush data\"\n> DEBUG: SSL: handshake done: \"SSL negotiation finished successfully\"\n> DEBUG: SSL: accept exit (1): \"SSL negotiation finished successfully\"\n> DEBUG: SSL connection from \"(anonymous)\"\n>\n> The attached contains these two changes as well as comment fixups which Heikki\n> noticed.\n\nYou sent in your patch,\n0001-Move-information-callback-earlier-to-capture-connect.patch to\npgsql-hackers on Dec 10, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AoE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 21:04:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: OpenSSL connection setup debug callback issue"
},
{
"msg_contents": "> On 28 Dec 2020, at 13:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> if you want your patch to be reviewed as part of the\n> upcoming CommitFest, then you need to add it yourself before\n> 2021-01-01 AoE[2]. Thanks for your contributions.\n\nI thought I had added it but clearly I had missed doing so, fixed now. Thanks\nfor the reminder!\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 29 Dec 2020 10:18:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: OpenSSL connection setup debug callback issue"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 02:43:33PM +0100, Daniel Gustafsson wrote:\n> I went looking at the SSL connection state change information callback we\n> install when setting up connections with OpenSSL, and I wasn't getting the\n> state changes I expected. Turns out we install it at the tail end of setting\n> up the connection so we miss most of the calls. Moving it to the beginning of\n> be_tls_open_server allows us to catch the handshake etc. I also extended it by\n> printing the human readable state change message available from OpenSSL to make\n> the logs more detailed (SSL_state_string_long has existed since 0.9.8).\n\nLooking at the docs, SSL_state_string_long() is better than just\nSSL_state_string(), so that sounds right:\nhttps://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_info_callback.html\nhttps://www.openssl.org/docs/manmaster/man3/SSL_state_string.html\nhttps://www.openssl.org/docs/manmaster/man3/SSL_state_string_long.html\n\nThis is interesting for debugging, +1 for applying what you have\nhere, and this works for 1.0.1~3.0.0. Worth noting that this returns\na static string, as per ssl_stat.c.\n--\nMichael",
"msg_date": "Thu, 21 Jan 2021 17:01:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: OpenSSL connection setup debug callback issue"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 05:01:15PM +0900, Michael Paquier wrote:\n> This is interesting for debugging, +1 for applying what you have\n> here, and this works for 1.0.1~3.0.0. Worth noting that this returns\n> a static string, as per ssl_stat.c.\n\nDone as of af0e79c, after an indentation.\n--\nMichael",
"msg_date": "Fri, 22 Jan 2021 10:54:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: OpenSSL connection setup debug callback issue"
}
] |
[
{
"msg_contents": "Hi,\n\nIn hemdistsign() of tsgistidx.c, if we process the bits in 64-bit\nchunks rather than byte-by-byte, we get an overall speed up in Gist\nindex creation for tsvector types. With default siglen (124), the\nspeed up is 12-20%. With siglen=700, it is 30-50%. So with longer\nsignature lengths, we get higher percentage speed-up. The attached\npatch 0001 has the required changes.\n\nIn the patch 0001, rather than using xor operator on char values, xor\nis operated on 64-bit chunks. And since the chunks are 64-bit,\npopcount64() is used on each of the chunks. I have checked that the\ntwo bitvector pointer arguments of hemdistsign() are not always 64-bit\naligned. So process the leading mis-aligned bits and the trailing\nremainder bits char-by-char, leaving the middle 64-bit chunks for\npopcount64() usage.\n\nWe might extend this to the hemdistsign() definitions at other places\nin the code. But for now, we can start with gist. I haven't tried\nother places.\n\n-------------\n\nWhile working on this, I observed that on platforms other than x86_64,\nwe still declare pg_popcount64() as a function pointer, even though we\ndon't use the runtime selection of the right function using __get_cpuid()\nas is done on x86.\nThe other patch, i.e. 0002, is a general optimization that avoids this\nfunction pointer for the pg_popcount32/64() call. The patch arranges for\na direct function call so as to get rid of function pointer\ndereferencing each time pg_popcount32/64 is called.\n\nTo do this, define pg_popcount64 to another function name\n(pg_popcount64_nonasm) rather than a function pointer, whenever\nUSE_POPCNT_ASM is not defined. And let pg_popcount64_nonasm() be a\nstatic inline function so that whenever pg_popcount64() is called,\ndirectly the __builtin_popcount() gets called. 
For platforms not\nsupporting __builtin_popcount(), continue using the slow version as is\nthe current behaviour.\n\nTested this 0002 patch on ARM64, with patch 0001 already applied, and the\ngist index creation for tsvectors *further* speeds up by 6% for\ndefault siglen (=124), and by 12% with siglen=700.\n\n-------------\n\nSchema :\n\nCREATE TABLE test_tsvector(t text, a tsvector);\n-- Attached tsearch.data (a bigger version of\n-- src/test/regress/data/tsearch.data)\n\\COPY test_tsvector FROM 'tsearch.data';\n\nTest case that shows improvement :\nCREATE INDEX wowidx6 ON test_tsvector USING gist (a);\n\nTime taken by the above create-index command, in seconds, along with %\nspeed-up w.r.t. HEAD :\n\nA) siglen=124 (Default)\n head 0001.patch 0001+0002.patch\nx86 .827 .737 (12%) .....\narm 1.098 .912 (20%) .861 (28%)\n\n\nB) siglen=700 (... USING gist (a tsvector_ops(siglen=700))\n head 0001.patch 0001+0002.patch\nx86 1.121 .847 (32%) .....\narm 1.751 1.191 (47%) 1.062 (65%)\n\n--\nThanks,\n-Amit Khandekar\nHuawei Technologies",
"msg_date": "Thu, 10 Dec 2020 20:01:31 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "Hi, Amit!\nIt's really cool to hear about another GiST improvement proposal. I'd like\nto connect recently committed GiST ordered build discussion here [1] and\nfurther improvement proposed [2]\n\nI've tested feature [1] and got 2.5-3 times speed improvement which is much\nbetter I believe. There is an ongoing activity [2] to build support for\ndifferent data types for GiST. Maybe you will consider it interesting to\njoin.\n\nBTW you may have heard about Gin and Rum [3] indexes which suit text search\nmuch, much better (and faster) than GiST. The idea to process data in\nbigger chunks is good. Still optimize index structure, minimizing disc\npages access, etc. seems better in many cases.\n\nThank you for your proposal!\n\n[1] https://commitfest.postgresql.org/29/2276/\n[2] https://commitfest.postgresql.org/31/2824/\n[3] https://github.com/postgrespro/rum\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 10 Dec 2020 19:13:36 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Thu, 10 Dec 2020 at 20:43, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi, Amit!\n> It's really cool to hear about another GiST improvement proposal. I'd like to connect recently committed GiST ordered build discussion here [1] and further improvement proposed [2]\n>\n> I've tested feature [1] and got 2.5-3 times speed improvement which is much better I believe.\n\nYeah, I am completely new to the GIST stuff, but I had taken a quick\nlook at the sortsupport feature for GIST, and found it very\ninteresting. I believe it's an additional option for making the gist\nindex builds much faster. But then I thought that my small patch would\nstill be worthwhile because for tsvector types the non-sort method for\nindex build would continue to be used by users, and in general we can\nextend this small optimization for other gist types also.\n\n> There is an ongoing activity [2] to build support for different data types for GiST. Maybe you will consider it interesting to join.\n>\n> BTW you may have heard about Gin and Rum [3] indexes which suit text search much, much better (and faster) than GiST. The idea to process data in bigger chunks is good. Still optimize index structure, minimizing disc pages access, etc. seems better in many cases.\n\nSure. Thanks for the pointers.\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n",
"msg_date": "Sun, 13 Dec 2020 18:16:54 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "\n\n> On 13 Dec 2020, at 17:46, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> \n> On Thu, 10 Dec 2020 at 20:43, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> \n>> Hi, Amit!\n>> It's really cool to hear about another GiST improvement proposal. I'd like to connect recently committed GiST ordered build discussion here [1] and further improvement proposed [2]\n>> \n>> I've tested feature [1] and got 2.5-3 times speed improvement which is much better I believe.\n> \n> Yeah, I am completely new to the GIST stuff, but I had taken a quick\n> look at the sortsupport feature for GIST, and found it very\n> interesting. I believe it's an additional option for making the gist\n> index builds much faster.\n+1\nThis will make all INSERTs and UPDATES for tsvector's GiSTs.\nAlso I really like idea of taking advantage of hardware capabilities like __builtin_* etc wherever possible.\n\nMeanwhile there are at least 4 incarnation of hemdistsign() functions that are quite similar. I'd propose to refactor them somehow...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Sun, 13 Dec 2020 20:58:18 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Sun, 13 Dec 2020 at 9:28 PM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> +1\n> This will make all INSERTs and UPDATES for tsvector's GiSTs.\n\nOh, I didn't realize that this code is getting used in GIST index\ninsertion and creation too. Will check there.\n\n> Also I really like idea of taking advantage of hardware capabilities like __builtin_* etc wherever possible.\n\nYes. Also, the __builtin_popcount() uses SIMD vectorization (on arm64\n: \"cnt v0.8b, v0.8b\"), hence there's all the more reason to use it.\nOver and above that, I had thought that if we can auto-vectorize the\nbyte-by-byte xor operation and the popcount() call using compiler\noptimizations, we would benefit out of this, but didn't see any more\nimprovement. I hoped for the benefit because that would have allowed\nus to process in 128-bit chunks or 256-bit chunks, since the vector\nregisters are at least that long. Maybe gcc is not that smart to\ntranslate __builtin_popcount() to 128/256 bit vectorized instruction.\nBut for XOR operator, it does translate to 128bit vectorized\ninstructions (on arm64 : \"eor v2.16b, v2.16b, v18.16b\")\n\n> Meanwhile there are at least 4 incarnation of hemdistsign() functions that are quite similar. I'd propose to refactor them somehow...\n\nYes, I hope we get the benefit there also. Before that, I thought I\nshould post the first use-case to get some early comments. Thanks for\nyour encouraging comments :)\n\n\n",
"msg_date": "Tue, 15 Dec 2020 20:34:02 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Tue, 15 Dec 2020 at 20:34, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Sun, 13 Dec 2020 at 9:28 PM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > +1\n> > This will make all INSERTs and UPDATES for tsvector's GiSTs.\n>\n> Oh, I didn't realize that this code is getting used in GIST index\n> insertion and creation too. Will check there.\n\nI ran some insert and update tests; they show only marginal\nimprovement. So looks like the patch is mainly improving index builds.\n\n> > Meanwhile there are at least 4 incarnation of hemdistsign() functions that are quite similar. I'd propose to refactor them somehow...\n>\n> Yes, I hope we get the benefit there also. Before that, I thought I\n> should post the first use-case to get some early comments. Thanks for\n> your encouraging comments :)\n\nThe attached v2 version of 0001 patch extends the hemdistsign()\nchanges to the other use cases like intarray, ltree and hstore. I see\nthe same index build improvement for all these types.\n\nSince for the gist index creation of some of these types the default\nvalue for siglen is small (8-20), I tested with small siglens. For\nsiglens <= 20, particularly for values that are not multiples of 8\n(e.g. 10, 13, etc), I see a 1-7 % reduction in speed of index\ncreation. It's probably because of\nan extra function call for pg_xorcount(); and also might be due to the\nextra logic in pg_xorcount() which becomes prominent for shorter\ntraversals. So for siglen less than 32, I kept the existing method\nusing byte-by-byte traversal.\n\n--\nThanks,\n-Amit Khandekar\nHuawei Technologies",
"msg_date": "Wed, 27 Jan 2021 20:36:28 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "I have added this one in the March commitfest.\nhttps://commitfest.postgresql.org/32/3023/\n\n\n",
"msg_date": "Mon, 1 Mar 2021 10:44:42 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 11:07 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n\nHi Amit,\n\nYour performance numbers look like this is a fruitful area to improve. I\nhave not yet tested performance, but I will do so at a later date. I did\nsome microbenchmarking of our popcount implementation, since I wasn't quite\nsure it's optimal, and indeed, there is room for improvement there [1]. I'd\nbe curious to hear your thoughts on those findings. I think it'd be worth\nit to test a version of this patch using those idioms here as well, so at\nsome point I plan to post something.\n\nNow for the patches:\n\n0001:\n\n+ /*\n+ * We can process 64-bit chunks only if both are mis-aligned by the same\n+ * number of bytes.\n+ */\n+ if (b_aligned - b == a_aligned - a)\n\nThe obvious question here is: how often are they identically misaligned?\nYou don't indicate that your measurements differ in a bimodal fashion, so\ndoes that mean they happen to be always (mis)aligned? Is that an accident\nof the gist coding and could change unexpectedly? And how hard would it be\nto allocate those values upstream so that the pointers are always aligned\non 8-byte boundaries? (I imagine pretty hard, since there are multiple\ncallers, and such tight coupling is not good style)\n\n+ /* For smaller lengths, do simple byte-by-byte traversal */\n+ if (bytes <= 32)\n\nYou noted upthread:\n\n> Since for the gist index creation of some of these types the default\n> value for siglen is small (8-20), I tested with small siglens. For\n> siglens <= 20, particularly for values that are not multiples of 8\n> (e.g. 10, 13, etc), I see a 1-7 % reduction in speed of index\n> creation. It's probably because of\n> an extra function call for pg_xorcount(); and also might be due to the\n> extra logic in pg_xorcount() which becomes prominent for shorter\n> traversals. 
So for siglen less than 32, I kept the existing method\n> using byte-by-byte traversal.\n\nI wonder if that can be addressed directly, while cleaning up the loops and\nalignment checks in pg_xorcount_long() a little. For example, have a look\nat pg_crc32c_armv8.c -- it might be worth testing a similar approach.\n\nAlso, pardon my ignorance, but where can I find the default siglen for\nvarious types?\n\n+/* Count the number of 1-bits in the result of xor operation */\n+extern uint64 pg_xorcount_long(const unsigned char *a, const unsigned char\n*b,\n+ int bytes);\n+static inline uint64 pg_xorcount(const unsigned char *a, const unsigned\nchar *b,\n+ int bytes)\n\nI don't think it makes sense to have a static inline function call a global\nfunction.\n\n-static int\n+static inline int\n hemdistsign(BITVECP a, BITVECP b, int siglen)\n\nNot sure what the reason is for inlining all these callers. Come to think\nof it, I don't see a reason to have hemdistsign() at all anymore. All it\ndoes is call pg_xorcount(). I suspect that's what Andrey Borodin meant when\nhe said upthread:\n\n> > > Meanwhile there are at least 4 incarnation of hemdistsign() functions\nthat are quite similar. I'd propose to refactor them somehow...\n\n\n0002:\n\nI'm not really happy with this patch. I like the idea of keeping indirect\ncalls out of non-x86 platforms, but I believe it could be done more simply.\nFor one, I don't see a need to invent a third category of retail function.\nSecond, there's no reason to put \"nonasm\" in the header as a static inline,\nand then call from there to the new now-global function \"slow\". Especially\nsince the supposed static inline is still needed as a possible value as a\nfunction pointer on x86, so the compiler is not going to inline it on x86\nanyway. That just confuses things. 
(I did make sure to remove indirect\ncalls from the retail functions in [1], in case we want to go that route).\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsFCWys_yfPe4PoF3%3D2_oxU5fFR2H%2BmtM6njUA8nBiCYug%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 3 Mar 2021 14:02:18 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 23:32, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Your performance numbers look like this is a fruitful area to improve. I have not yet tested performance, but I will do so at a later date.\n\nThanks for reviewing the patch !\n\n> I did some\n> microbenchmarking of our popcount implementation, since I wasn't quite sure\n> it's optimal, and indeed, there is room for improvement there [1]. I'd be\n> curious to hear your thoughts on those findings. I think it'd be worth it to\n> test a version of this patch using those idioms here as well, so at some\n> point I plan to post something.\n\nI am not yet clear about the implications of that work on this patch\nset here, but I am still going over it, and will reply to that.\n\n>\n> Now for the patches:\n>\n> 0001:\n>\n> + /*\n> + * We can process 64-bit chunks only if both are mis-aligned by the same\n> + * number of bytes.\n> + */\n> + if (b_aligned - b == a_aligned - a)\n>\n> The obvious question here is: how often are they identically misaligned? You\n> don't indicate that your measurements differ in a bimodal fashion, so does\n> that mean they happen to be always (mis)aligned?\n\nI ran CREATE INDEX on tsvector columns using the tsearch.data that I\nhad attached upthread, with some instrumentation; here are the\nproportions :\n1. In 15% of the cases, only one among a and b was aligned. The other\nwas offset from the 8-byte boundary by 4 bytes.\n2. 6% of the cases, both were offset by 4 bytes, i.e. identically misaligned.\n3. Rest of the cases, both were aligned.\n\nWith other types, and with different sets of data, I believe I can get\ntotally different proportions.\n\n> Is that an accident of the gist coding and could change unexpectedly?\n> And how hard would it be to\n> allocate those values upstream so that the pointers are always aligned on\n> 8-byte boundaries? 
(I imagine pretty hard, since there are multiple callers,\n> and such tight coupling is not good style)\n\nThat I am not sure though; haven't clearly understood the structure of\ngist indexes yet. I believe it would depend on individual gist\nimplementation, and we can't assume about that ?\n\n\n> + /* For smaller lengths, do simple byte-by-byte traversal */\n> + if (bytes <= 32)\n>\n> You noted upthread:\n>\n> > Since for the gist index creation of some of these types the default\n> > value for siglen is small (8-20), I tested with small siglens. For\n> > siglens <= 20, particularly for values that are not multiples of 8\n> > (e.g. 10, 13, etc), I see a 1-7 % reduction in speed of index\n> > creation. It's probably because of\n> > an extra function call for pg_xorcount(); and also might be due to the\n> > extra logic in pg_xorcount() which becomes prominent for shorter\n> > traversals. So for siglen less than 32, I kept the existing method\n> > using byte-by-byte traversal.\n>\n> I wonder if that can be addressed directly, while cleaning up the loops and\n> alignment checks in pg_xorcount_long() a little. For example, have a look at\n> pg_crc32c_armv8.c -- it might be worth testing a similar approach.\n\nYeah, we can put the bytes <= 32 condition inside pg_xorcount_long().\nI avoided that to not hamper the <= 32 scenarios. 
Details explained\nbelow for \"why inline pg_xorcount is calling global function\"\n\n> Also, pardon my ignorance, but where can I find the default siglen for various types?\nCheck SIGLEN_DEFAULT.\n\n>\n> +/* Count the number of 1-bits in the result of xor operation */\n> +extern uint64 pg_xorcount_long(const unsigned char *a, const unsigned char *b,\n> + int bytes);\n> +static inline uint64 pg_xorcount(const unsigned char *a, const unsigned char *b,\n> + int bytes)\n>\n> I don't think it makes sense to have a static inline function call a global function.\n\nAs you may note, the global function will be called only in a subset\nof cases where siglen <= 32. Yeah, we can put the bytes <= 32\ncondition inside pg_xorcount_long(). I avoided that to not hamper the\n<= 32 scenarios. If I do this, it will add a function call for these\nsmall siglen scenarios. The idea was: use the function call only for\ncases where we know that the function call overhead will be masked by\nthe popcount() optimization.\n\n\n>\n> -static int\n> +static inline int\n> hemdistsign(BITVECP a, BITVECP b, int siglen)\n>\n> Not sure what the reason is for inlining all these callers.\n> Come to think of it, I don't see a reason to have hemdistsign()\n> at all anymore. All it does is call pg_xorcount(). I suspect that's\n> what Andrey Borodin meant when he said upthread:\n>\n> > > > Meanwhile there are at least 4 incarnation of hemdistsign()\n> > > > functions that are quite similar. I'd propose to refactor them somehow...\n\nI had something in mind when I finally decided to not remove\nhemdistsign(). Now I think you are right, we can remove hemdistsign()\naltogether. Let me check again.\n\n\n\n--------------------\n\n\n> 0002:\n>\n> I'm not really happy with this patch. 
I like the idea of keeping indirect\n> calls out of non-x86 platforms, but I believe it could be done more simply.\n\nI am open for other approaches that would make this patch simpler.\n\n> For one, I don't see a need to invent a third category of retail function.\n\nSo currently we have pg_popcount64_choose, pg_popcount64_slow and\npg_popcount64_asm.\nWith the patch, we have those three, plus pg_popcount64_nonasm.\n\nI had earlier considered #defining pg_popcount64 as pg_popcount64_slow\nin the .h (inside USE_POPCNT_ASM of course) and leave\npg_popcount64_slow() untouched. But this will still involve an extra\nlevel of function call for each call to pg_popcount64() since\npg_popcount64_slow() needs to be an exported function meant to be used\nin multiple place outside pg_bitutils.c; and our purpose is to avoid\nindirect calls for this function because it is used so repeatedly.\n\nSo then I thought why not move the current pg_popcount64_slow()\ndefinition to pg_bitutils.h and make it inline. We can do that way.\nThis way that function would look similar to how the other existing\nfunctions like pg_leftmost_one_pos64() are defined. But I didn't like\nit for two reasons:\n1) I anyway found the function name confusing. The only slow part of\nthat function is the last part where it does byte-by-byte traversal.\nThat's the reason I renamed pg_popcount64_slow() to\npg_popcount64_nonasm() and kept the slow logic in\npg_popcount64_slow(). It's ok to keep the slow part in a non-inline\nfunction because that part is anyway slow, and is a fallback code for\nnon-supporting platforms.\n2) This also keeps the inline pg_popcount64_nonasm() code smaller.\n\nThe way I look at the final functions is :\npg_popcount64_choose() chooses between an asm and non-asm function.\npg_popcount64_asm() is the one for running direct assembly code.\npg_popcount64_nonasm() is used for platforms where we don't want to\ncall assembly code. 
So it either calls hardware intrinsics, or calls\nthe slow version if the intrinsics are not available.\n\nIf I look at these functions this way, it sounds simpler to me. But I\nunderstand if it may still sound confusing. That's why I mentioned\nthat I am open to simplifying the patch. Also, the current popcount\nplatform-specific stuff is already confusing; but I feel what the\npatch is trying to do looks worth it because I am getting an extra\n7-8% improvement on my ARM machine.\n\n> Second, there's no reason to put \"nonasm\" in the header as a static inline,\n> and then call from there to the new now-global function \"slow\".\n\nExplained above, the reason why I shifted the nonasm code in the\nheader and made it inline.\n\n> Especially since the supposed static inline is still needed as a possible\n> value as a function pointer on x86, so the compiler is not going to inline\n> it on x86 anyway. That just confuses things.\n\nYeah, the inline is anyway just a request to the compiler, right ? On\nx86, the pg_bitutils.c will have it as non-inline function, and all\nthe other files would have it as an inline function which will never\nbe used.\nLike I mentioned, it is important to define it as inline, in order to\navoid function call when one calls pg_popcount64(). pg_popcount64()\nshould be translated to the built-in intrinsic.\n\n> (I did make sure to remove indirect calls from the retail functions\n> in [1], in case we want to go that route).\n\nYeah, I quickly had a look at that. I am still going over that thread.\nThanks for the exhaustive analysis there.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 18:13:00 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 8:43 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n>\n> On Wed, 3 Mar 2021 at 23:32, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> > 0001:\n> >\n> > + /*\n> > + * We can process 64-bit chunks only if both are mis-aligned by the\nsame\n> > + * number of bytes.\n> > + */\n> > + if (b_aligned - b == a_aligned - a)\n> >\n> > The obvious question here is: how often are they identically\nmisaligned? You\n> > don't indicate that your measurements differ in a bimodal fashion, so\ndoes\n> > that mean they happen to be always (mis)aligned?\n>\n> I ran CREATE INDEX on tsvector columns using the tsearch.data that I\n> had attached upthread, with some instrumentation; here are the\n> proportions :\n> 1. In 15% of the cases, only one among a and b was aligned. The other\n> was offset from the 8-byte boundary by 4 bytes.\n> 2. 6% of the cases, both were offset by 4 bytes, i.e. identically\nmisaligned.\n> 3. Rest of the cases, both were aligned.\n\nThat's somewhat strange. I would have assumed always aligned, or usually\nnot. It sounds like we could require them both to be aligned and not bother\nwith the byte-by-byte prologue. I also wonder if it's worth it to memcpy to\na new buffer if the passed pointer is not aligned.\n\n> > + /* For smaller lengths, do simple byte-by-byte traversal */\n> > + if (bytes <= 32)\n> >\n> > You noted upthread:\n> >\n> > > Since for the gist index creation of some of these types the default\n> > > value for siglen is small (8-20), I tested with small siglens. For\n> > > siglens <= 20, particularly for values that are not multiples of 8\n> > > (e.g. 10, 13, etc), I see a 1-7 % reduction in speed of index\n> > > creation. It's probably because of\n> > > an extra function call for pg_xorcount(); and also might be due to the\n> > > extra logic in pg_xorcount() which becomes prominent for shorter\n> > > traversals. 
So for siglen less than 32, I kept the existing method\n> > > using byte-by-byte traversal.\n> >\n> > I wonder if that can be addressed directly, while cleaning up the loops\nand\n> > alignment checks in pg_xorcount_long() a little. For example, have a\nlook at\n> > pg_crc32c_armv8.c -- it might be worth testing a similar approach.\n>\n> Yeah, we can put the bytes <= 32 condition inside pg_xorcount_long().\n> I avoided that to not hamper the <= 32 scenarios. Details explained\n> below for \"why inline pg_xorcount is calling global function\"\n>\n> > Also, pardon my ignorance, but where can I find the default siglen for\nvarious types?\n> Check SIGLEN_DEFAULT.\n\nOkay, so it's hard-coded in various functions in contrib modules. In that\ncase, let's just keep the existing coding for those. In fact, the comments\nthat got removed by your patch specifically say:\n\n/* Using the popcount functions here isn't likely to win */\n\n...which your testing confirmed. The new function should be used only for\nGist and possibly Intarray, whose default is 124. That means we can't get\nrid of hemdistsign(), but that's fine. Alternatively, we could say that a\nsmall regression is justifiable reason to refactor all call sites, but I'm\nnot proposing that.\n\n> > (I did make sure to remove indirect calls from the retail functions\n> > in [1], in case we want to go that route).\n>\n> Yeah, I quickly had a look at that. 
I am still going over that thread.\n> Thanks for the exhaustive analysis there.\n\nI'll post a patch soon that builds on that, so you can see what I mean.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Mar 2021 14:08:34 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
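The byte-by-byte traversal kept for short signatures can be sketched as follows. This is illustrative only: the in-tree hemdistsign() indexes the precomputed pg_number_of_ones[] table, whereas this sketch uses a bit-clearing loop per byte.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch of the byte-by-byte XOR-and-popcount that
 * hemdistsign() performs; PostgreSQL looks each XORed byte up in a
 * precomputed pg_number_of_ones[256] table instead of the bit loop
 * used here.
 */
static int
xorcount_bytes(const unsigned char *a, const unsigned char *b, size_t len)
{
    int sum = 0;

    for (size_t i = 0; i < len; i++)
    {
        unsigned int diff = a[i] ^ b[i];

        /* Kernighan's trick: clear the lowest set bit until none remain */
        while (diff)
        {
            diff &= diff - 1;
            sum++;
        }
    }
    return sum;
}
```

For the short default siglens discussed above (8 to 20 bytes), a loop of this shape is the whole story; the question in the thread is when it pays to switch to wider loads.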
{
"msg_contents": "I wrote:\n> I'll post a patch soon that builds on that, so you can see what I mean.\n\nI've attached where I was imagining this heading, as a text file to avoid\ndistracting the cfbot. Here are the numbers I got with your test on the\nattached, as well as your 0001, on x86-64 Clang 10, default siglen:\n\nmaster:\n739ms\n\nv3-0001\n692ms\n\nattached POC\n665ms\n\nThe small additional speed up is not worth it, given the code churn and\ncomplexity, so I don't want to go this route after all. I think the way to\ngo is a simplified version of your 0001 (not 0002), with only a single\nfunction, for gist and intarray only, and a style that better matches the\nsurrounding code. If you look at my xor functions in the attached text\nfile, you'll get an idea of what it should look like. Note that it got the\nabove performance without ever trying to massage the pointer alignment. I'm\na bit uncomfortable with the fact that we can't rely on alignment, but\nmaybe there's a simple fix somewhere in the gist code.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Mar 2021 18:55:45 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
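A word-at-a-time variant in the spirit of the patch's pg_xorcount_long() can be sketched like this. It is a sketch under assumptions, not the patch's actual code: memcpy() is used for the 64-bit loads so the function stays correct even for misaligned pointers (compilers fold it into a plain load), and GCC/Clang's __builtin_popcountll stands in for an in-tree pg_popcount64() call.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of a word-at-a-time XOR popcount.  memcpy() sidesteps the
 * alignment question discussed in the thread; __builtin_popcountll is
 * GCC/Clang-specific.
 */
static int
xorcount_long(const char *a, const char *b, size_t bytes)
{
    int sum = 0;

    while (bytes >= sizeof(uint64_t))
    {
        uint64_t wa;
        uint64_t wb;

        memcpy(&wa, a, sizeof(wa));
        memcpy(&wb, b, sizeof(wb));
        sum += __builtin_popcountll(wa ^ wb);
        a += sizeof(wa);
        b += sizeof(wb);
        bytes -= sizeof(wa);
    }
    /* byte-at-a-time tail for the remaining 0..7 bytes */
    while (bytes-- > 0)
        sum += __builtin_popcount((unsigned char) (*a++ ^ *b++));
    return sum;
}
```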
{
"msg_contents": "On Thu, 11 Mar 2021 at 04:25, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Okay, so it's hard-coded in various functions in contrib modules. In that\n> case, let's just keep the existing coding for those. In fact, the comments\n> that got removed by your patch specifically say: /* Using the popcount\n> functions here isn't likely to win */ ...which your testing confirmed. The\n> new function should be used only for Gist and possibly Intarray, whose\n> default is 124. That means we can't get rid of hemdistsign(), but that's\n> fine.\n\nThe comment is there for all types. Since I get the performance better\non all the types, I have kept the pg_xorcount() call for all these\ncontrib modules. I understand that since for some types default\nsiglen is small, we won't get benefit. But I think we should call\npg_xorcount() for the benefit of non-default siglen case.\n\nHave replaced hemdistsign() by pg_xorcount() everywhere; but with\nthat, the call looks a bit clumsy because of having to type-cast the\nparameters to const unsigned char *. I noticed that the cast to\n\"unsigned char *\" is required only when we use the value in the\npg_number_of_ones array. Elsewhere we can safely use 'a' and 'b' as\n\"char *\". So I changed the pg_xorcount() parameters from unsigned char\n* to char *.\n\n> I think the way to go is a simplified version of your 0001 (not 0002), with\n> only a single function, for gist and intarray only, and a style that better\n> matches the surrounding code. If you look at my xor functions in the attached\n> text file, you'll get an idea of what it should look like. 
Note that it got\n> the above performance without ever trying to massage the pointer alignment.\n> I'm a bit uncomfortable with the fact that we can't rely on alignment, but\n> maybe there's a simple fix somewhere in the gist code.\n\nIn the attached 0001-v3 patch, I have updated the code to match it\nwith surrounding code; specifically the usage of *a++ rather than\na[i].\n\nRegarding the alignment changes... I have removed the code that\nhandled the leading identically unaligned bytes, for lack of evidence\nthat percentage of such cases is significant. Like I noted earlier,\nfor the tsearch data I used, identically unaligned cases were only 6%.\nIf I find scenarios where these cases can be significant after all and\nif we cannot do anything in the gist index code, then we might have to\nbring back the unaligned byte handling. I didn't get a chance to dig\ndeeper into the gist index implementation to see why they are not\nalways 8-byte aligned.\n\nI have kept the 0002 patch as-is. Due to significant *additional*\nspeedup, over and above the 0001 improvement, I think the code\nre-arrangement done is worth it for non-x86 platforms.\n\n-Amit Khandekar",
"msg_date": "Fri, 19 Mar 2021 18:26:33 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
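The "identically misaligned" condition that the dropped prologue depended on amounts to comparing the low bits of the two pointers. A sketch, with illustrative names rather than the patch's:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Two pointers can share one 64-bit-stride loop only if they are
 * offset from an 8-byte boundary by the same amount; the check is a
 * comparison of the low three address bits.
 */
static int
identically_misaligned(const void *a, const void *b)
{
    return ((uintptr_t) a & 7) == ((uintptr_t) b & 7);
}
```

Whatever the base alignment of a buffer, two pointers 8 bytes apart inside it always share a skew, and pointers 1 byte apart never do.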
{
"msg_contents": "On Fri, Mar 19, 2021 at 8:57 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n>\n> On Thu, 11 Mar 2021 at 04:25, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > Okay, so it's hard-coded in various functions in contrib modules. In\nthat\n> > case, let's just keep the existing coding for those. In fact, the\ncomments\n> > that got removed by your patch specifically say: /* Using the popcount\n> > functions here isn't likely to win */ ...which your testing confirmed.\nThe\n> > new function should be used only for Gist and possibly Intarray, whose\n> > default is 124. That means we can't get rid of hemdistsign(), but that's\n> > fine.\n>\n> The comment is there for all types. Since I get the performance better\n> on all the types, I have kept the pg_xorcount() call for all these\n> contrib modules. I understand that since for some types default\n> siglen is small, we won't get benefit. But I think we should call\n> pg_xorcount() for the benefit of non-default siglen case.\n\nThe 1-7% degradation measured was from an earlier version, when\npg_xorcount_long had a lot of finicky branching and computation. Is it\nstill true in v3? We should answer that first. I'm interested in what\nhappens if you now use pg_xorcount_long in the call sites, at least in the\nworst case 7% test.\n\n> Have replaced hemdistsign() by pg_xorcount() everywhere; but with\n> that, the call looks a bit clumsy because of having to type-cast the\n> parameters to const unsigned char *. I noticed that the cast to\n> \"unsigned char *\" is required only when we use the value in the\n> pg_number_of_ones array. Elsewhere we can safely use 'a' and 'b' as\n> \"char *\". 
So I changed the pg_xorcount() parameters from unsigned char\n> * to char *.\n\nThat matches the style of that file, so +1.\n\n> > I think the way to go is a simplified version of your 0001 (not 0002),\nwith\n> > only a single function, for gist and intarray only, and a style that\nbetter\n> > matches the surrounding code. If you look at my xor functions in the\nattached\n> > text file, you'll get an idea of what it should look like. Note that it\ngot\n> > the above performance without ever trying to massage the pointer\nalignment.\n> > I'm a bit uncomfortable with the fact that we can't rely on alignment,\nbut\n> > maybe there's a simple fix somewhere in the gist code.\n>\n> In the attached 0001-v3 patch, I have updated the code to match it\n> with surrounding code; specifically the usage of *a++ rather than\n> a[i].\n>\n> Regarding the alignment changes... I have removed the code that\n> handled the leading identically unaligned bytes, for lack of evidence\n> that percentage of such cases is significant. Like I noted earlier,\n> for the tsearch data I used, identically unaligned cases were only 6%.\n> If I find scenarios where these cases can be significant after all and\n> if we cannot do anything in the gist index code, then we might have to\n> bring back the unaligned byte handling. I didn't get a chance to dig\n> deeper into the gist index implementation to see why they are not\n> always 8-byte aligned.\n\nI find it stranger that something equivalent to char* is not randomly\nmisaligned, but rather only seems to land on 4-byte boundaries.\n\n[thinks] I'm guessing it's because of VARHDRSZ, but I'm not positive.\n\nFWIW, I anticipate some push back from the community because of the fact\nthat the optimization relies on statistical phenomena.\n\n> I have kept the 0002 patch as-is. 
Due to significant *additional*\n> speedup, over and above the 0001 improvement, I think the code\n> re-arrangement done is worth it for non-x86 platforms.\n\nFor the amount of code uglification involved, we should be getting full asm\npopcount support on Arm, not an attempt to kluge around the current\nimplementation. I'd be happy to review such an effort for PG15, by the way.\n\nReadability counts, and as it stands I don't feel comfortable asking a\ncommitter to read 0002.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 19 Mar 2021 16:49:43 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
{
"msg_contents": "On Sat, 20 Mar 2021 at 02:19, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Fri, Mar 19, 2021 at 8:57 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Regarding the alignment changes... I have removed the code that\n> > handled the leading identically unaligned bytes, for lack of evidence\n> > that percentage of such cases is significant. Like I noted earlier,\n> > for the tsearch data I used, identically unaligned cases were only 6%.\n> > If I find scenarios where these cases can be significant after all and\n> > if we cannot do anything in the gist index code, then we might have to\n> > bring back the unaligned byte handling. I didn't get a chance to dig\n> > deeper into the gist index implementation to see why they are not\n> > always 8-byte aligned.\n>\n> I find it stranger that something equivalent to char* is not randomly misaligned, but rather only seems to land on 4-byte boundaries.\n>\n> [thinks] I'm guessing it's because of VARHDRSZ, but I'm not positive.\n>\n> FWIW, I anticipate some push back from the community because of the fact that the optimization relies on statistical phenomena.\n\nI dug into this issue for tsvector type. Found out that it's the way\nin which the sign array elements are arranged that is causing the pointers to\nbe misaligned:\n\nDatum\ngtsvector_picksplit(PG_FUNCTION_ARGS)\n{\n......\n cache = (CACHESIGN *) palloc(sizeof(CACHESIGN) * (maxoff + 2));\n cache_sign = palloc(siglen * (maxoff + 2));\n\n for (j = 0; j < maxoff + 2; j++)\n cache[j].sign = &cache_sign[siglen * j];\n....\n}\n\nIf siglen is not a multiple of 8 (say 700), cache[j].sign will in some\ncases point to non-8-byte-aligned addresses, as you can see in the\nabove code snippet.\n\nReplacing siglen by MAXALIGN64(siglen) in the above snippet gets rid\nof the misalignment. This change applied over the 0001-v3 patch gives\nadditional ~15% benefit. 
MAXALIGN64(siglen) will cause a bit more\nspace, but for not-so-small siglens, this looks worth doing. Haven't\nyet checked into types other than tsvector.\n\nWill get back with your other review comments. I thought, meanwhile, I\ncan post the above update first.\n\n\n",
"msg_date": "Mon, 2 Aug 2021 09:10:31 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
},
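With siglen = 700, the offsets siglen * j alternate between 8-byte-aligned and 4-byte-aligned addresses, which is exactly the "in some cases" misalignment described above; rounding the stride up to a multiple of the maximum alignment keeps every element aligned. A sketch of that rounding, assuming an 8-byte MAXIMUM_ALIGNOF (the real MAXALIGN64 is defined in c.h via TYPEALIGN64, so this macro is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Round a length up to the next multiple of 8 (the assumed maximum
 * alignment).  Illustrative stand-in for PostgreSQL's MAXALIGN64.
 */
#define MAXALIGN64_SKETCH(LEN) ((((uint64_t) (LEN)) + 7) & ~(uint64_t) 7)
```

For siglen = 700 the padded stride is 704, so cache_sign + j * 704 stays 8-byte aligned for every j, at a cost of 4 padding bytes per element.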
{
"msg_contents": "On Sun, Aug 1, 2021 at 11:41 PM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n>\n> > FWIW, I anticipate some push back from the community because of the\nfact that the optimization relies on statistical phenomena.\n>\n> I dug into this issue for tsvector type. Found out that it's the way\n> in which the sign array elements are arranged that is causing the\npointers to\n> be misaligned:\n[...]\n> If siglen is not a multiple of 8 (say 700), cache[j].sign will in some\n> cases point to non-8-byte-aligned addresses, as you can see in the\n> above code snippet.\n>\n> Replacing siglen by MAXALIGN64(siglen) in the above snippet gets rid\n> of the misalignment. This change applied over the 0001-v3 patch gives\n> additional ~15% benefit. MAXALIGN64(siglen) will cause a bit more\n> space, but for not-so-small siglens, this looks worth doing. Haven't\n> yet checked into types other than tsvector.\n\nSounds good.\n\n> Will get back with your other review comments. I thought, meanwhile, I\n> can post the above update first.\n\nThinking some more, my discomfort with inline functions that call a global\nfunction doesn't make logical sense, so feel free to do it that way if you\nlike.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Aug 2021 06:26:37 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up GIST index creation for tsvectors"
}
]
[
{
"msg_contents": "Hi,\n\nI am trying to host multiple postgresql-servers on the same ip and the same\nport through SNI-based load-balancing.\nCurrently this is not possible because of two issues:\n1. The psql client won't set the tls-sni-extension correctly\n(https://www.postgresql.org/message-id/20181211145240.GL20222%40redhat.com)\n2. The psql connection protocol implements an SSLRequest in plain text\nbefore actually opening a connection.\n\nThe first issue is easily solvable by calling\n`SSL_set_tlsext_host_name(conn->ssl,\nconn->connhost[conn->whichhost].host)` before opening the connection.\n\nThe second issue is also solvable through a new parameter\n\"ssltermination\" which, if set to \"proxy\", will skip the initial\nSSLRequest and connect directly through ssl.\nThe default value would be \"server\", which changes nothing about the\nexisting behaviour.\n\nI compiled the psql client with these changes and was able to connect to\n2 different databases through the same ip and port just by changing the\nhostname.\n\nThis fix is important to allow multiple postgres instances on one ip\nwithout having to add a port number.\n\nI implemented this change on a fork of the postgres mirror on github:\nhttps://github.com/klg71/mayope_postgres\n\nThe affected files are:\n- src/interfaces/libpq/fe-connect.c (added ssltermination parameter)\n- src/interfaces/libpq/libpq-int.h (added ssltermination parameter)\n- src/interfaces/libpq/fe-secure-openssl.c (added tls-sni-extension)\n\nI appended the relevant diff.\n\nBest Regards\nLukas",
"msg_date": "Thu, 10 Dec 2020 16:49:35 +0100",
"msg_from": "Lukas Meisegeier <MeisegeierLukas@gmx.de>",
"msg_from_op": true,
"msg_subject": "Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
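For reference, the SSLRequest that issue 2 refers to is an 8-byte cleartext message defined by the frontend/backend protocol: an int32 length (always 8) followed by the request code 80877103, i.e. (1234 << 16) | 5679, both in network byte order. A sketch of building it, as a stand-alone helper rather than libpq code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Build the 8-byte SSLRequest a libpq client sends before the TLS
 * handshake: int32 length (8), then the request code 80877103, both
 * big-endian on the wire.
 */
static void
build_ssl_request(unsigned char out[8])
{
    const uint32_t len = 8;
    const uint32_t code = 80877103;     /* (1234 << 16) | 5679 */

    out[0] = (unsigned char) (len >> 24);
    out[1] = (unsigned char) (len >> 16);
    out[2] = (unsigned char) (len >> 8);
    out[3] = (unsigned char) len;
    out[4] = (unsigned char) (code >> 24);
    out[5] = (unsigned char) (code >> 16);
    out[6] = (unsigned char) (code >> 8);
    out[7] = (unsigned char) code;
}
```

Skipping exactly these 8 bytes, and the server's one-byte 'S'/'N' answer, is what the proposed \"ssltermination=proxy\" mode amounts to on the client side.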
{
"msg_contents": "On 10/12/2020 17:49, Lukas Meisegeier wrote:\n> I try to host multiple postgresql-servers on the same ip and the same\n> port through SNI-based load-balancing.\n> Currently this is not possible because of two issues:\n> 1. The psql client won't set the tls-sni-extension correctly\n> (https://www.postgresql.org/message-id/20181211145240.GL20222%40redhat.com)\n> 2. The psql connection protocol implements a SSLRequest in plain text\n> before actually opening a connection.\n> \n> The first issue is easily solvable by calling\n> `SSL_set_tlsext_host_name(conn->ssl,\n> conn->connhost[conn->whichhost].host)` before opening the connection.\n> \n> The second issue is also solvable through a new parameter\n> \"ssltermination\" which if set to \"proxy\" will skip the initial\n> SSLRequest and connects directly through ssl.\n> The default value would be \"server\" which changes nothing on the\n> existing behaviour.\n\nDon't you need backend changes as well? The backend will still expect \nthe client to send an SSLRequest. Or is the connection from the proxy to \nthe actual server unencrypted?\n\nIt's not very nice that the client needs to set special options, \ndepending on whether the server is behind a proxy or not. Could you \nteach the proxy to deal with the SSLRequest message?\n\nPerhaps we should teach the backend to accept a TLS ClientHello \ndirectly, without the SSLRequest message. That way, the client could \nsend the ClientHello without SSLRequest, and the proxy wouldn't need to \ncare about SSLRequest. It would also eliminate one round-trip from the \nprotocol handshake, which would be nice. A long deprecation/transition \nperiod would be needed before we could make that the default behavior, \nbut that's ok.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 11 Dec 2020 15:26:56 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
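What makes the proposal of accepting a ClientHello directly workable is that the first bytes on the wire already distinguish the two openings: a TLS ClientHello arrives in a handshake record whose first byte is the record type 0x16, while an SSLRequest always begins with the length bytes 00 00 00 08. A sketch of such a dispatch (hypothetical helper, not from any patch):

```c
#include <assert.h>
#include <stddef.h>

typedef enum
{
    OPENING_SSLREQUEST,
    OPENING_TLS_CLIENTHELLO,
    OPENING_OTHER
} wire_opening;

/*
 * Classify the start of a connection from its first bytes: 0x16 is the
 * TLS handshake record type; 00 00 00 08 is the SSLRequest length.
 */
static wire_opening
classify_opening(const unsigned char *buf, size_t n)
{
    if (n >= 1 && buf[0] == 0x16)
        return OPENING_TLS_CLIENTHELLO;
    if (n >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 8)
        return OPENING_SSLREQUEST;
    return OPENING_OTHER;
}
```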
{
"msg_contents": "Hey Heikki,\n\nthanks for providing feedback :)\nThe traffic between proxy and psql-server is unencrypted, which is why I\ndon't need to patch the server.\nI tried returning a fixed response to the first plain SSLRequest,\nforwarding it to a psql-server with ssl enabled, and then tried to switch\nover to ssl on the connection startup, but that didn't work out. I guess it's\nbecause the psql-server won't accept an ssl connection if it's not\nrequested via SSLRequest.\nI would definitely appreciate it if the psql-server could accept the\nTLS ClientHello directly, but we would still need to set the\ntls-sni-extension correctly.\nPerhaps we could rename the parameter to \"sslplainrequest(yes/no)\" and\nstart with making the plain SSLRequest optional in the psql-server.\n\nBest Regards\nLukas\n\n\nOn 11-Dec-20 at 14:26, Heikki Linnakangas wrote:\n> On 10/12/2020 17:49, Lukas Meisegeier wrote:\n>> I try to host multiple postgresql-servers on the same ip and the same\n>> port through SNI-based load-balancing.\n>> Currently this is not possible because of two issues:\n>> 1. The psql client won't set the tls-sni-extension correctly\n>> (https://www.postgresql.org/message-id/20181211145240.GL20222%40redhat.com)\n>>\n>> 2. The psql connection protocol implements a SSLRequest in plain text\n>> before actually opening a connection.\n>>\n>> The first issue is easily solvable by calling\n>> `SSL_set_tlsext_host_name(conn->ssl,\n>> conn->connhost[conn->whichhost].host)` before opening the connection.\n>>\n>> The second issue is also solvable through a new parameter\n>> \"ssltermination\" which if set to \"proxy\" will skip the initial\n>> SSLRequest and connects directly through ssl.\n>> The default value would be \"server\" which changes nothing on the\n>> existing behaviour.\n>\n> Don't you need backend changes as well? The backend will still expect\n> the client to send an SSLRequest. Or is the connection from the proxy to\n> the actual server unencrypted?\n>\n> It's not very nice that the client needs to set special options,\n> depending on whether the server is behind a proxy or not. Could you\n> teach the proxy to deal with the SSLRequest message?\n>\n> Perhaps we should teach the backend to accept a TLS ClientHello\n> directly, without the SSLRequest message. That way, the client could\n> send the ClientHello without SSLRequest, and the proxy wouldn't need to\n> care about SSLRequest. It would also eliminate one round-trip from the\n> protocol handshake, which would be nice. A long deprecation/transition\n> period would be needed before we could make that the default behavior,\n> but that's ok.\n>\n> - Heikki\n\n\n",
"msg_date": "Fri, 11 Dec 2020 15:46:18 +0100",
"msg_from": "Lukas Meisegeier <MeisegeierLukas@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
{
"msg_contents": "On 11/12/2020 16:46, Lukas Meisegeier wrote:\n> Hey Heikki,\n> \n> thanks for providing feedback :)\n> The traffic between proxy and psql-server is unencrypted thats why I\n> don't need to patch the server.\n\nOk.\n\n> I tried returning a fixed response on the first plain SSLRequest\n> forwarding it to a psql-server with ssl enabled an tried to switch then\n> on the ssl connection startup but that didn't work out. I guess its\n> because the psql-server won't accept an ssl connection if its not\n> requested via SSLRequest.\n\nYour proxy could receive the client's SSLRequest message, and respond \nwith a single byte 'S'. You don't need to forward that to the real \nPostgreSQL server, since the connection to the PostgreSQL server is \nunencrypted. Then perform the TLS handshake, and forward all traffic to \nthe real server only after that.\n\nClient: -> SSLRequest\n Proxy: <- 'S'\nClient: -> TLS ClientHello\n Proxy: [finish TLS handshake]\n\n- Heikki\n\n\n",
"msg_date": "Fri, 11 Dec 2020 17:44:29 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
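On the proxy side, Heikki's ladder reduces to answering the client's first 8-byte message itself before any forwarding happens. A sketch of that decision as pure logic (illustrative only; a real proxy would reply 'S', drive the TLS handshake itself, and only then open a plain connection to the backend, and 'N' here is just the "not an SSLRequest" branch of the sketch):

```c
#include <assert.h>

/*
 * Decide the one-byte reply to the client's first 8 bytes: 'S' if it
 * is a well-formed SSLRequest (length 8, code 0x04D2162F), 'N'
 * otherwise.  Pure decision logic, no sockets.
 */
static char
proxy_reply_to_first_message(const unsigned char msg[8])
{
    int is_ssl_request =
        msg[0] == 0 && msg[1] == 0 && msg[2] == 0 && msg[3] == 8 &&
        msg[4] == 0x04 && msg[5] == 0xD2 && msg[6] == 0x16 && msg[7] == 0x2F;

    return is_ssl_request ? 'S' : 'N';
}
```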
{
"msg_contents": "Thanks for the provided ideas :)\nI use HAProxy for my load-balancing, and unfortunately I can't define\nthat I want to listen on a port for both SSL and non-SSL requests.\nThat means if I try to return a fixed response 'S' to the SSLRequest, it\nfails with an SSL handshake failure because the server expects an SSL message.\n\nI searched for some way to fall back to a default backend on SSL failure,\nbut this doesn't seem to be a supported use case for haproxy.\n\nI also didn't find any other tcp load balancer that could handle this\ntype of ssl-failure fallback.\n\nMy only option would therefore be to write a custom load balancer for\nthis use case, which is not really feasible given the number of haproxy\nfeatures I plan to use.\n\nI have to say the psql ssl handshake procedure is really unique and\nchallenging :D\n\nThe tool stunnel is capable of this protocol, but I can't do SNI-based\nload-balancing with it, so this is kind of a dead end here.\n\nLukas\n\nOn 11-Dec-20 at 16:44, Heikki Linnakangas wrote:\n> On 11/12/2020 16:46, Lukas Meisegeier wrote:\n>> Hey Heikki,\n>>\n>> thanks for providing feedback :)\n>> The traffic between proxy and psql-server is unencrypted thats why I\n>> don't need to patch the server.\n>\n> Ok.\n>\n>> I tried returning a fixed response on the first plain SSLRequest\n>> forwarding it to a psql-server with ssl enabled an tried to switch then\n>> on the ssl connection startup but that didn't work out. I guess its\n>> because the psql-server won't accept an ssl connection if its not\n>> requested via SSLRequest.\n>\n> Your proxy could receive the client's SSLRequest message, and respond\n> with a single byte 'S'. You don't need to forward that to the real\n> PostgreSQL server, since the connection to the PostgreSQL server is\n> unencrypted. 
Then perform the TLS handshake, and forward all traffic to\n> the real server only after that.\n>\n> Client: -> SSLRequest\n> Proxy: <- 'S'\n> Client: -> TLS ClientHello\n> Proxy: [finish TLS handshake]\n>\n> - Heikki\n\n\n",
"msg_date": "Sat, 12 Dec 2020 12:52:12 +0100",
"msg_from": "Lukas Meisegeier <MeisegeierLukas@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
{
"msg_contents": "On 12/12/2020 13:52, Lukas Meisegeier wrote:\n> Thanks for the provided ideas :)\n> I use HaProxy for my load-balancing and unfortunately I can't define\n> that I want to listen on a port for both ssl and non ssl requests.\n\nCould you configure HaProxy to listen on separate ports for SSL and \nnon-SSL connections, then? And forward both to the same Postgres server.\n\n> That means if I try to return a fixed response 'S' on the SSLRequest it\n> fails with an SSL-Handshake failure cause the server expects a ssl message.\n\nThat doesn't sound right to me, or perhaps I have misunderstood what you \nmean. If you don't send the SSLRequest to the Postgres server, but \"eat\" \nit in the proxy, the Postgres server will not try to do an SSL handshake.\n\n> I have to say the psql ssl handshake procedure is really unique and\n> challenging :D\n\nYeah. IMAP and SMTP can use \"STARTTLS\" to switch an unencrypted \nconnection to encrypted, though. That's pretty similar to the \n'SSLRequest' message used in the postgres protocol.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 14 Dec 2020 15:50:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
{
"msg_contents": "I liked the idea with separate ports for ssl and non ssl requests and\ntried it with haproxy.\nThe psql-client connects with haproxy and receives the fixed 'S' byte\nresponse. After that it tries to continue on the same connection and\ndoesn't open a new one. This crashes the connection because haproxy\nexpects a new tcp connection.\n\n\npsqlClient: opens connection (ARP: SYN)\nhaproxy: accepts connection (ARP: SYN ACK)\npsqlClient: confirms the connection (ARP: ACK)\npsqlClient: sends SSLRequest\nhaproxy: sends confirmation (ARP: ACK)\nhaproxy: sends fixed byte response ('S')\nhaproxy: closes connection (ARP: FIN, ACK)\npsqlclient: confirmed fixed byte response (ARP: ACK)\npsqlclient: sends ssl hello request --> error connection already\nclosed(\"psql: error: SSL SYSCALL error: No error (0x00000000/0))\n\nIn my eyes the problem lies in upgrading the connection rather than\nopening a new one.\n\nOn 14-Dec-20 at 14:50, Heikki Linnakangas wrote:\n> On 12/12/2020 13:52, Lukas Meisegeier wrote:\n>> Thanks for the provided ideas :)\n>> I use HaProxy for my load-balancing and unfortunately I can't define\n>> that I want to listen on a port for both ssl and non ssl requests.\n>\n> Could you configure HaProxy to listen on separate ports for SSL and\n> non-SSL connections, then? And forward both to the same Postgres server.\n>\n>> That means if I try to return a fixed response 'S' on the SSLRequest it\n>> fails with an SSL-Handshake failure cause the server expects a ssl\n>> message.\n>\n> That doesn't sound right to me, or perhaps I have misunderstood what you\n> mean. If you don't send the SSLRequest to the Postgres server, but \"eat\"\n> it in the proxy, the Postgres server will not try to do an SSL handshake.\n>\n>> I have to say the psql ssl handshake procedure is really unique and\n>> challenging :D\n>\n> Yeah. IMAP and SMTP can use \"STARTTLS\" to switch an unencrypted\n> connection to encrypted, though. That's pretty similar to the\n> 'SSLRequest' message used in the postgres protocol.\n>\n> - Heikki\n\n\n",
"msg_date": "Mon, 14 Dec 2020 16:01:09 +0100",
"msg_from": "Lukas Meisegeier <MeisegeierLukas@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
},
{
"msg_contents": "\nHey,\n\nwhat's the state of this? Can we start working out a plan to remove the\ninitial SSLRequest from the connection protocol or is there any reason to\nkeep it?\n\nI would start by removing the need for the SSLRequest in the psql-server\nif it's started with a special parameter (ssl-only or so).\nSimultaneously I would add a parameter to disable the SSLRequest in\nthe client as well.\n\nLater we could make this behaviour default for psql-server with\nssl-enabled and clients and some time further ahead we could remove the\nimplementation of SSLRequest in both server and client.\n\nWhat are your thoughts about this?\n\nBest Regards\n\n\n",
"msg_date": "Tue, 22 Dec 2020 10:19:48 +0100",
"msg_from": "Lukas Meisegeier <MeisegeierLukas@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Feature Proposal: Add ssltermination parameter for SNI-based\n LoadBalancing"
}
] |
[
{
"msg_contents": "$SUBJECT is not great. The options to pg_basebackup that are not\ntested by any part of the regression test suite include the\nsingle-letter options rlzZdUwWvP as well as --no-estimate-size.\n\nIt would probably be good to fix as much of this as we can, but there\nare a couple of cases I think would be particularly good to cover. One\nis 'pg_basebackup -Ft -Xnone -D -', which tries to write the output as\na single tar file on standard output, injecting the backup_manifest\nfile into the tar file instead of writing it out separately as we\nnormally would. This case requires special handling in a few places\nand it would be good to check that it actually works. The other is the\n-z or -Z option, which produces a compressed tar file.\n\nNow, there's nothing to prevent us from running commands like this\nfrom the pg_basebackup tests, but it doesn't seem like we could really\ncheck anything meaningful. Perhaps we'd notice if the command exited\nnon-zero or didn't produce any output, but it would be nice to verify\nthat the resulting backups are actually correct. The way to do that\nwould seem to be to extract the tar file (and decompress it first, in\nthe -z/-Z case) and then run pg_verifybackup on the result and check\nthat it passes. However, as far as I can tell there's no guarantee\nthat the user has 'tar' or 'gunzip' installed on their system, so I\ndon't see a clean way to do this. A short (but perhaps incomplete)\nsearch didn't turn up similar precedents in the existing tests.\n\nAny ideas?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Dec 2020 12:32:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup test coverage"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 12:32:52PM -0500, Robert Haas wrote:\n> It would probably be good to fix as much of this as we can, but there\n> are a couple of cases I think would be particularly good to cover. One\n> is 'pg_basebackup -Ft -Xnone -D -', which tries to write the output as\n> a single tar file on standard output, injecting the backup_manifest\n> file into the tar file instead of writing it out separately as we\n> normally would. This case requires special handling in a few places\n> and it would be good to check that it actually works. The other is the\n> -z or -Z option, which produces a compressed tar file.\n> \n> Now, there's nothing to prevent us from running commands like this\n> from the pg_basebackup tests, but it doesn't seem like we could really\n> check anything meaningful. Perhaps we'd notice if the command exited\n> non-zero or didn't produce any output, but it would be nice to verify\n> that the resulting backups are actually correct. The way to do that\n> would seem to be to extract the tar file (and decompress it first, in\n> the -z/-Z case) and then run pg_verifybackup on the result and check\n> that it passes. However, as far as I can tell there's no guarantee\n> that the user has 'tar' or 'gunzip' installed on their system, so I\n> don't see a clean way to do this. A short (but perhaps incomplete)\n> search didn't turn up similar precedents in the existing tests.\n> \n> Any ideas?\n\nI would probe for the commands with \"tar -cf- anyfile | tar -xf-\" and \"echo\nfoo | gzip | gunzip\". If those fail, skip any test that relies on an\nunavailable command.\n\n\n",
"msg_date": "Thu, 10 Dec 2020 22:53:51 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup test coverage"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 10:53:51PM -0800, Noah Misch wrote:\n> On Thu, Dec 10, 2020 at 12:32:52PM -0500, Robert Haas wrote:\n>> Now, there's nothing to prevent us from running commands like this\n>> from the pg_basebackup tests, but it doesn't seem like we could really\n>> check anything meaningful. Perhaps we'd notice if the command exited\n>> non-zero or didn't produce any output, but it would be nice to verify\n>> that the resulting backups are actually correct. The way to do that\n>> would seem to be to extract the tar file (and decompress it first, in\n>> the -z/-Z case) and then run pg_verifybackup on the result and check\n>> that it passes. However, as far as I can tell there's no guarantee\n>> that the user has 'tar' or 'gunzip' installed on their system, so I\n>> don't see a clean way to do this. A short (but perhaps incomplete)\n>> search didn't turn up similar precedents in the existing tests.\n>> \n>> Any ideas?\n> \n> I would probe for the commands with \"tar -cf- anyfile | tar -xf-\" and \"echo\n> foo | gzip | gunzip\". If those fail, skip any test that relies on an\n> unavailable command.\n\nWhy don't you just use Archive::Tar [1] instead of looking for some system\ncommands? Combining list_files() with extract(), it is possible to\nextract an archive, with or without compression, without hoping for an\nequivalent to exist on Windows. That would not be the first time\neither that there is a TAP test that skips some tests if a module does\nnot exist. See for example what psql does with IO::Pty.\n\n[1]: https://metacpan.org/pod/Archive::Tar\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 17:04:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup test coverage"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 3:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Why don't you just use Archive::Tar [1] instead of looking for some system\n> commands? Combining list_files() with extract(), it is possible to\n> extract an archive, with or without compression, without hoping for an\n> equivalent to exist on Windows. That would not be the first time\n> either that there is a TAP test that skips some tests if a module does\n> not exist. See for example what psql does with IO::Pty.\n\nWell, either this or Noah's method has the disadvantage that not\neveryone will get the benefit of the tests, and that those who wish to\nget that benefit must install more stuff. But, all three of the\ncomputers I have within arm's reach (yeah, I might have a problem)\nhave Archive::Tar installed, so maybe it's not a big concern in\npractice. I am slightly inclined to think that the perl package\napproach might be better than shell commands because perhaps it is\nmore likely to work on Windows, but I'm not positive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Dec 2020 12:23:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup test coverage"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, either this or Noah's method has the disadvantage that not\n> everyone will get the benefit of the tests, and that those who wish to\n> get that benefit must install more stuff. But, all three of the\n> computers I have within arm's reach (yeah, I might have a problem)\n> have Archive::Tar installed, so maybe it's not a big concern in\n> practice.\n\nFWIW, it looks to me like Archive::Tar is part of the standard Perl\ninstallation on both RHEL and macOS, so it's probably pretty common.\n\n> I am slightly inclined to think that the perl package\n> approach might be better than shell commands because perhaps it is\n> more likely to work on Windows, but I'm not positive.\n\nYeah, that makes sense to me too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Dec 2020 13:04:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup test coverage"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 12:23:10PM -0500, Robert Haas wrote:\n> On Fri, Dec 11, 2020 at 3:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > Why don't you just use Archive::Tar [1] instead of looking for some system\n> > commands? Combining list_files() with extract(), it is possible to\n> > extract an archive, with or without compression, without hoping for an\n> > equivalent to exist on Windows. That would not be the first time\n> > either that there is a TAP test that skips some tests if a module does\n> > not exist. See for example what psql does with IO::Pty.\n> \n> Well, either this or Noah's method has the disadvantage that not\n> everyone will get the benefit of the tests, and that those who wish to\n> get that benefit must install more stuff. But, all three of the\n> computers I have within arm's reach (yeah, I might have a problem)\n> have Archive::Tar installed, so maybe it's not a big concern in\n> practice. I am slightly inclined to think that the perl package\n> approach might be better than shell commands because perhaps it is\n> more likely to work on Windows, but I'm not positive.\n\nOutside Windows, Archive::Tar is less portable. For example, in the forty-two\nsystems of the GCC Compile Farm, five lack Archive::Tar. (Each of those five\nis a CentOS 7 system. Every system does have tar, gzip and gunzip.)\n\nEither way is fine with me. Favoring Archive::Tar, a Windows-specific bug is\nmore likely than a CentOS/RHEL-specific bug. Favoring shell commands, they\ncan catch PostgreSQL writing a tar file that the system's tar can't expand.\n\n\n",
"msg_date": "Fri, 11 Dec 2020 20:27:11 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup test coverage"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nI noticed that when pg_waldump finds an invalid record, the\r\ncorresponding error message seems to point to the last valid record\r\nread.\r\n\r\n rmgr: ... lsn: 0/090E5AF8, prev 0/090E59D0, ...\r\n pg_waldump: fatal: error in WAL record at 0/90E5AF8: invalid record length at 0/90E5B30: wanted 24, got 0\r\n\r\nShould pg_waldump report currRecPtr instead of ReadRecPtr in the error\r\nmessage? With that, I see the following.\r\n\r\n rmgr: ... lsn: 0/090E5AF8, prev 0/090E59D0, ...\r\n pg_waldump: fatal: error in WAL record at 0/90E5B30: invalid record length at 0/90E5B30: wanted 24, got 0\r\n\r\nHere is the patch:\r\n\r\ndiff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c\r\nindex 31e99c2a6d..27da60e6db 100644\r\n--- a/src/bin/pg_waldump/pg_waldump.c\r\n+++ b/src/bin/pg_waldump/pg_waldump.c\r\n@@ -1110,8 +1110,8 @@ main(int argc, char **argv)\r\n\r\n if (errormsg)\r\n fatal_error(\"error in WAL record at %X/%X: %s\",\r\n- (uint32) (xlogreader_state->ReadRecPtr >> 32),\r\n- (uint32) xlogreader_state->ReadRecPtr,\r\n+ (uint32) (xlogreader_state->currRecPtr >> 32),\r\n+ (uint32) xlogreader_state->currRecPtr,\r\n errormsg);\r\n\r\n XLogReaderFree(xlogreader_state);\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 10 Dec 2020 18:47:58 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "pg_waldump error message fix"
},
{
"msg_contents": "At Thu, 10 Dec 2020 18:47:58 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> Hi,\n> \n> I noticed that when pg_waldump finds an invalid record, the\n> corresponding error message seems to point to the last valid record\n> read.\n\nGood catch!\n\n> rmgr: ... lsn: 0/090E5AF8, prev 0/090E59D0, ...\n> pg_waldump: fatal: error in WAL record at 0/90E5AF8: invalid record length at 0/90E5B30: wanted 24, got 0\n> \n> Should pg_waldump report currRecPtr instead of ReadRecPtr in the error\n> message? With that, I see the following.\n> \n> rmgr: ... lsn: 0/090E5AF8, prev 0/090E59D0, ...\n> pg_waldump: fatal: error in WAL record at 0/90E5B30: invalid record length at 0/90E5B30: wanted 24, got 0\n> \n> Here is the patch:\n> \n> diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c\n> index 31e99c2a6d..27da60e6db 100644\n> --- a/src/bin/pg_waldump/pg_waldump.c\n> +++ b/src/bin/pg_waldump/pg_waldump.c\n> @@ -1110,8 +1110,8 @@ main(int argc, char **argv)\n> \n> if (errormsg)\n> fatal_error(\"error in WAL record at %X/%X: %s\",\n> - (uint32) (xlogreader_state->ReadRecPtr >> 32),\n> - (uint32) xlogreader_state->ReadRecPtr,\n> + (uint32) (xlogreader_state->currRecPtr >> 32),\n> + (uint32) xlogreader_state->currRecPtr,\n> errormsg);\n> \n> XLogReaderFree(xlogreader_state);\n\ncurrRecPtr is a private member of the struct (see the definition of\nthe struct XLogReaderState). We should instead use EndRecPtr outside\nxlog reader.\n\nregardes.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Dec 2020 13:30:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 01:30:16PM +0900, Kyotaro Horiguchi wrote:\n> currRecPtr is a private member of the struct (see the definition of\n> the struct XLogReaderState). We should instead use EndRecPtr outside\n> xlog reader.\n\nPlease note that this is documented in xlogreader.h. Hmm. I can see\nyour point here, still I think that what both of you are suggesting\nis not completely correct. For example, switching to EndRecPtr would\ncause DecodeXLogRecord() to report an error with the end of the\ncurrent record but it makes more sense to point to ReadRecPtr in this\ncase. On the other hand, it would make sense to report the beginning \nof the position we are working on when validating the header if an\nerror happens at this point. So it seems to me that EndRecPtr or\nReadRecPtr are not completely correct, and that we may need an extra\nLSN position to tell on which LSN we are working on instead that gets\nupdated before the validation part, because we update ReadRecPtr and\nEndRecPtr only after this initial validation is done.\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 14:21:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "At Fri, 11 Dec 2020 14:21:57 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Dec 11, 2020 at 01:30:16PM +0900, Kyotaro Horiguchi wrote:\n> > currRecPtr is a private member of the struct (see the definition of\n> > the struct XLogReaderState). We should instead use EndRecPtr outside\n> > xlog reader.\n> \n> Please note that this is documented in xlogreader.h. Hmm. I can see\n> your point here, still I think that what both of you are suggesting\n> is not completely correct. For example, switching to EndRecPtr would\n\nEndRecPtr is defined as it points to the LSN to start reading the next\nrecord. It doesn't move if XLogReadRecord failed to read the\nrecord. I think this is documented in a comment somewhere. It can\npoint to the beginning of a page so \"error in WAL record at <page\nstart>\" is a kind of bogus but that is not the point here.\n\n> cause DecodeXLogRecord() to report an error with the end of the\n> current record but it makes more sense to point to ReadRecPtr in this\n\nDecodeXLogRecord() handles a record already successfully read. So\nReadRecPtr is pointing to the beginning of the given record at the\ntime. pg_waldump:main() and ReadRecord (or the context of\nDecodeXLogRecord()) are in different contexts. The place in question\nin pg_waldump seems to be a result of a thinko that it can use\nReadRecPtr regardless of whether XLogReadRecord successfully read a\nrecord or not.\n\n> case. On the other hand, it would make sense to report the beginning \n> of the position we are working on when validating the header if an\n> error happens at this point. So it seems to me that EndRecPtr or\n> ReadRecPtr are not completely correct, and that we may need an extra\n> LSN position to tell on which LSN we are working on instead that gets\n> updated before the validation part, because we update ReadRecPtr and\n> EndRecPtr only after this initial validation is done.\n\nSo we cannot use the ErrorRecPtr since pg_waldump:main() should show\nthe LSN XLogReadRecord() found an invalid record and DecodeXLogRecord()\nshould show the LSN XLogReadRecord() found a valid record.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Dec 2020 17:19:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "At Fri, 11 Dec 2020 17:19:33 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 11 Dec 2020 14:21:57 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Fri, Dec 11, 2020 at 01:30:16PM +0900, Kyotaro Horiguchi wrote:\n> > > currRecPtr is a private member of the struct (see the definition of\n> > > the struct XLogReaderState). We should instead use EndRecPtr outside\n> > > xlog reader.\n> > \n> > Please note that this is documented in xlogreader.h. Hmm. I can see\n> > your point here, still I think that what both of you are suggesting\n> > is not completely correct. For example, switching to EndRecPtr would\n> \n> EndRecPtr is defined as it points to the LSN to start reading the next\n> record. It donsn't move if XLogReadRecord failed to read the\n> record. I think this is documented in a comment somewhere. It can\n> point to the beginning of a page so \"error in WAL record at <page\n> start>\" is a kind of bogus but that is not the point here.\n> \n> > cause DecodeXLogRecord() to report an error with the end of the\n> > current record but it makes more sense to point to ReadRecPtr in this\n> \n> DecodeXLogRecord() handles a record alread successflly read. So\n> ReadRecPtr is pointing to the beginning of the given record at the\n> timex. pg_waldump:main() and ReadRecrod (or the context of\n> DecodeXLogRecord()) are in different contexts. The place in question\n> in pg_waldump seems to be a result of a thinko that it can use\n> ReadRecPtr regardless of whether XLogReadRecrod successfully read a\n> record or not.\n> \n> > case. On the other hand, it would make sense to report the beginning \n> > of the position we are working on when validating the header if an\n> > error happens at this point. So it seems to me that EndRecPtr or\n> > ReadRecPtr are not completely correct, and that we may need an extra\n> > LSN position to tell on which LSN we are working on instead that gets\n> > updated before the validation part, because we update ReadRecPtr and\n> > EndRecPtr only after this initial validation is done.\n> \n> So we cannot use the ErrorRecPtr since pg_waldump:main() shoud show\n> the LSN XLogReadRecord() found a invalid record and DecodeXLogRecord()\n> should show the LSN XLogReadRecord() found a valid record.\n\nWait! That's wrong. Yeah, we can add ErrorRecPtr to point to the error\nrecord regardless of the context.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Dec 2020 17:44:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "On 12/10/20, 9:23 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Please note that this is documented in xlogreader.h. Hmm. I can see\r\n> your point here, still I think that what both of you are suggesting\r\n> is not completely correct. For example, switching to EndRecPtr would\r\n> cause DecodeXLogRecord() to report an error with the end of the\r\n> current record but it makes more sense to point to ReadRecPtr in this\r\n> case. On the other hand, it would make sense to report the beginning \r\n> of the position we are working on when validating the header if an\r\n> error happens at this point. So it seems to me that EndRecPtr or\r\n> ReadRecPtr are not completely correct, and that we may need an extra\r\n> LSN position to tell on which LSN we are working on instead that gets\r\n> updated before the validation part, because we update ReadRecPtr and\r\n> EndRecPtr only after this initial validation is done.\r\n\r\nI looked through all the calls to report_invalid_record() in\r\nxlogreader.c and noticed that all but a few in\r\nXLogReaderValidatePageHeader() already report an LSN. Of the calls in\r\nXLogReaderValidatePageHeader() that don't report an LSN, it looks like\r\nmost still report a position, and the remaining ones are for \"WAL file\r\nis from different database system...,\" which IIUC generally happens on\r\nthe first page of the segment.\r\n\r\nPerhaps we could simply omit the LSN in the pg_waldump message.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 11 Dec 2020 19:27:31 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "At Fri, 11 Dec 2020 19:27:31 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/10/20, 9:23 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > Please note that this is documented in xlogreader.h. Hmm. I can see\n> > your point here, still I think that what both of you are suggesting\n> > is not completely correct. For example, switching to EndRecPtr would\n> > cause DecodeXLogRecord() to report an error with the end of the\n> > current record but it makes more sense to point to ReadRecPtr in this\n> > case. On the other hand, it would make sense to report the beginning \n> > of the position we are working on when validating the header if an\n> > error happens at this point. So it seems to me that EndRecPtr or\n> > ReadRecPtr are not completely correct, and that we may need an extra\n> > LSN position to tell on which LSN we are working on instead that gets\n> > updated before the validation part, because we update ReadRecPtr and\n> > EndRecPtr only after this initial validation is done.\n> \n> I looked through all the calls to report_invalid_record() in\n> xlogreader.c and noticed that all but a few in\n> XLogReaderValidatePageHeader() already report an LSN. Of the calls in\n> XLogReaderValidatePageHeader() that don't report an LSN, it looks like\n> most still report a position, and the remaining ones are for \"WAL file\n> is from different database system...,\" which IIUC generally happens on\n> the first page of the segment.\n> \n> Perhaps we could simply omit the LSN in the pg_waldump message.\n\nYeah, I had the same feeling. At least, the two LSNs in the message\nunder discussion are simply redundant. So +1 to just remove the LSN at\nthe caller site.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Dec 2020 10:26:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "> At Fri, 11 Dec 2020 19:27:31 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> > I looked through all the calls to report_invalid_record() in\n> > xlogreader.c and noticed that all but a few in\n> > XLogReaderValidatePageHeader() already report an LSN. Of the calls in\n> > XLogReaderValidatePageHeader() that don't report an LSN, it looks like\n> > most still report a position, and the remaining ones are for \"WAL file\n> > is from different database system...,\" which IIUC generally happens on\n> > the first page of the segment.\n\nApart from this issue, while checking that, I noticed that if the server\nstarts having WALs from a server of a different systemid, the server\nstops with obscure messages.\n\n> LOG: database system was shut down at 2020-12-14 10:36:02 JST\n> LOG: invalid primary checkpoint record\n> PANIC: could not locate a valid checkpoint record\n\nThe cause is that XLogPageRead erases the error message set by\nXLogReaderValidatePageHeader(). As the comment just above says, this\nis required to continue replication under a certain situation. The\ncode is aiming to allow replication to continue when the first half of a\ncontinued record has been removed on the primary so we don't need to\ndo the amendment unless we're in standby mode. If we let the savior\ncode run only while in StandbyMode, we would have the correct error message.\n\n> JST LOG: database system was shut down at 2020-12-14 10:36:02 JST\n> LOG: WAL file is from different database system: WAL file database system identifier is 6905923817995618754, pg_control database system identifier is 6905924227171453468\n> JST LOG: invalid primary checkpoint record\n> JST PANIC: could not locate a valid checkpoint record\n\nI confirmed 0668719801 still works under the intended context using\nthe steps shown in [1].\n\n\n[1]: https://www.postgresql.org/message-id/flat/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 14 Dec 2020 11:34:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 10:26:01AM +0900, Kyotaro Horiguchi wrote:\n> Yeah, I had the same feeling. At least, the two LSNs in the message\n> under discussion are simply redundant. So +1 to just remove the LSN at\n> the caller site.\n\nThat would mean that we are ready to accept that we will never forget\nto add an LSN in any of the messages produced by xlogreader.c or any of the\ncallbacks used by pg_waldump. FWIW, I'd rather have a position in this\nreport than none. At least it allows users to know the area where the\nproblem happened.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 12:00:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 11:34:51AM +0900, Kyotaro Horiguchi wrote:\n> Apart from this issue, while checking that, I noticed that if server\n> starts having WALs from a server of a different systemid, the server\n> stops with obscure messages.\n\nWouldn't it be better to discuss that on a separate thread? I have\nmostly missed your message here.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 16:48:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump error message fix"
},
{
"msg_contents": "On 12/13/20, 7:01 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Mon, Dec 14, 2020 at 10:26:01AM +0900, Kyotaro Horiguchi wrote:\r\n>> Yeah, I had the same feeling. At least, the two LSNs in the message\r\n>> under discussion are simply redundant. So +1 to just remove the LSN at\r\n>> the caller site.\r\n>\r\n> That would mean that we are ready to accept that we will never forget\r\n> to a LSN in any of the messages produced by xlogreader.c or any of the\r\n> callbacks used by pg_waldump. FWIW, I'd rather let a position in this\r\n> report than none. At least it allows users to know the area where the\r\n> problem happened.\r\n\r\nYeah. Unfortunately, I suspect we will have the same problem if we\r\nadd a new variable that we only use to track the LSN to report for\r\nerrors.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 14 Dec 2020 17:20:34 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump error message fix"
}
] |
[
{
"msg_contents": "Hi,\n\nover in the pgsql-general channel, Michael Lewis reported [1] a bit\nstrange behavior switching between good/bad estimates with extended\nstatistics.\n\nThe crux of the issue is that with statistics containing both MCV and\nfunctional dependencies, we prefer applying the MCV. And functional\ndependencies are used only for the remaining clauses on columns not\ncovered by the MCV list.\n\nThis works perfectly fine when the clauses match a MCV item (or even\nmultiple of them). But if there's no matching MCV item, this may be\nproblematic - statext_mcv_clauselist_selectivity tries to be smart, but\nwhen the MCV represents only a small fraction of the data set the\nresults may not be far from just a product of selectivities (as if the\nclauses were independent).\n\nSo I'm wondering about two things:\n\n1) Does it actually make sense to define extended statistics with both\nMCV and functional dependencies? ISTM the MCV part will always filter\nall the clauses, before we even try to apply the dependencies.\n\n2) Could we consider the functional dependencies when estimating the\npart not covered by the MCV list. Of course, this could only help with\nequality clauses (as supported by functional dependencies).\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAMcsB%3Dy%3D3G_%2Bs_zFYPu2-O6OMWOvOkb3t1MU%3D907yk5RC_RaYw%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 10 Dec 2020 22:10:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "extended statistics - functional dependencies vs. MCV lists"
}
] |
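The interplay Tomas describes can be reproduced with a small sketch (the table, data distribution, and statistics object below are illustrative, not taken from the thread):

```sql
-- b is functionally dependent on a (in fact, b = a), so equality clauses
-- on both columns are completely redundant.
CREATE TABLE t (a int, b int);
INSERT INTO t SELECT mod(i, 1000), mod(i, 1000)
  FROM generate_series(1, 100000) s(i);

-- One statistics object carrying both kinds discussed above.
CREATE STATISTICS s_t (mcv, dependencies) ON a, b FROM t;
ANALYZE t;

-- With 1000 distinct values, the MCV list covers only a fraction of the
-- table.  A clause pair matching an MCV item is estimated well; a pair
-- outside the list falls back to roughly the product of per-column
-- selectivities, even though the dependencies part "knows" a = 1
-- implies b = 1.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1;
```

Comparing the estimated rows for an in-MCV pair against an out-of-MCV pair makes the good/bad estimate switching visible.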
[
{
"msg_contents": "Many older tests were written in a style like\n\n SELECT '' AS two, i.* FROM INT2_TBL ...\n\nwhere the first column indicated the number of expected result rows.\nThis has gotten increasingly out of date, as the test data fixtures\nhave expanded, so a lot of these don't match anymore and are misleading. \n Moreover, this style isn't really necessary, since the psql output \nalready shows the number of result rows. (Perhaps this was different at \nsome point?)\n\nI'm proposing to clean all this up by removing all those extra columns.\n\nThe patch is very big, so I'm attaching a compressed version. You can \nalso see a diff online: \nhttps://github.com/postgres/postgres/compare/master...petere:test-cleanup",
"msg_date": "Fri, 11 Dec 2020 17:52:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Clean up ancient test style"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Many older tests where written in a style like\n> SELECT '' AS two, i.* FROM INT2_TBL ...\n\n> where the first column indicated the number of expected result rows.\n> This has gotten increasingly out of date, as the test data fixtures\n> have expanded, so a lot of these don't match anymore and are misleading. \n> Moreover, this style isn't really necessary, since the psql output \n> already shows the number of result rows. (Perhaps this was different at \n> some point?)\n\n> I'm proposing to clean all this up by removing all those extra columns.\n\n+1 for concept ... I didn't bother to check the patch in detail.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Dec 2020 12:22:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clean up ancient test style"
}
] |
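The convention being removed can be shown side by side (a sketch; INT2_TBL is one of the fixtures named in the thread):

```sql
-- Old style: a dummy literal column whose alias ("two") once encoded the
-- expected number of result rows, and drifted as the fixtures grew.
SELECT '' AS two, i.* FROM INT2_TBL i;

-- Cleaned-up style: psql's footer (e.g. "(5 rows)") already reports the
-- row count, so the extra column carries no information.
SELECT i.* FROM INT2_TBL i;
```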
[
{
"msg_contents": "We've had complaints in the past about how plpgsql can't handle\nassignments to fields in arrays of records [1], that is cases like\n\n\tarrayvar[n].field := something;\n\nand also complaints about how plpgsql can't handle assignments\nto array slices [2], ie\n\n\tarrayvar[m:n] := something;\n\nAs of commit c7aba7c14, we have another problem, namely that\nplpgsql's subscripted assignment only works for regular arrays;\nit won't work for other types that might define subscript\nassignment handlers.\n\nSo I started to think about how to fix that, and eventually\ndecided that what we ought to do is nuke plpgsql's array-assignment\ncode altogether. The core code already has support for everything\nwe want here in the context of field/element assignments in UPDATE\ncommands; if we could get plpgsql to make use of that infrastructure\ninstead of rolling its own, we'd be a lot better off.\n\nThe hard part of that is that the core parser will only generate\nthe structures we need (FieldStores and assignment SubscriptingRefs)\nwithin UPDATE commands. We could export the relevant functions\n(particularly transformAssignmentIndirection); but that won't help\nplpgsql very much, because it really wants to be able to run all this\nstuff through SPI. That means we have to have SQL syntax that can\ngenerate an expression of that form.\n\nThat led me to think about introducing a new statement, say\n\n\tSET variable_name opt_indirection := a_expr\n\nwhere opt_indirection is gram.y's symbol for \"field selections and/or\nsubscripts\". The idea here is that a plpgsql statement like\n\n\tx[2].fld := something;\n\nwould be parsed using this new statement, producing an expression\nthat uses an assignment SubscriptingRef and a FieldStore operating\non a Param that gives the initial value of the array-of-composite\nvariable \"x\". Then plpgsql would just evaluate this expression and\nassign the result to x. 
Problem solved.\n\nThis almost works as-is, modulo annoying parse conflicts against the\nexisting variants of SET. However there's a nasty little detail\nabout what \"variable_name\" can be in plpgsql: it can be either one or\ntwo identifiers, since there might be a block label involved, eg\n\n\t<<mylabel>> declare x int; begin mylabel.x := ...\n\nBetween that and the parse-conflict problem, I ended up\nwith this syntax:\n\n\tSET n: variable_name opt_indirection := a_expr\n\nwhere \"n\" is an integer literal indicating how many dot-separated names\nshould be taken as the base variable name. Another annoying point is\nthat plpgsql historically has allowed fun stuff like\n\n\tmycount := count(*) from my_table where ...;\n\nthat is, after the expression you can have all the rest of an ordinary\nSELECT command. That's not terribly hard to deal with, but it means\nthat this new statement has to have all of SELECT's other options too.\n\nThe other area that doesn't quite work without some kind of hack is\nthat plpgsql's casting rules for which types can be assigned to what\nare far laxer than what the core parser thinks should be allowed in\nUPDATE. The cast has to happen within the assignment expression\nfor this to work at all, so plpgsql can't fix it by itself. The\nsolution I adopted was just to invent a new CoercionContext value\nCOERCION_PLPGSQL, representing \"use pl/pgsql's rules\". (Basically\nwhat that means nowadays is to apply CoerceViaIO if assignment cast\nlookup doesn't find a cast pathway.)\n\nA happy side-effect of this approach is that it actually makes\nsome cases faster. In particular I can measure speedups for\n(a) assignments to subscripted variables and (b) cases where a\ncoercion must be performed to produce the result to be assigned.\nI believe the reason for this is that the patch effectively\nmerges what had been separate expressions (subscripts or casts,\nrespectively) into the main result-producing expression. 
This\neliminates a nontrivial amount of overhead for plancache validity\nchecking, execution startup, etc.\n\nAnother side-effect is that the report of the statement in error\ncases might look different. For example, in v13 a typo in a\nsubscript expression produces\n\nregression=# do $$ declare x int[]; begin x[!2] = 43; end $$;\nERROR: operator does not exist: ! integer\nLINE 1: SELECT !2\n ^\nHINT: No operator matches the given name and argument type. You might need to add an explicit type cast.\nQUERY: SELECT !2\nCONTEXT: PL/pgSQL function inline_code_block line 1 at assignment\n\nWith this patch, you get\n\nregression=# do $$ declare x int[]; begin x[!2] = 43; end $$;\nERROR: operator does not exist: ! integer\nLINE 1: SET 1: x[!2] = 43\n ^\nHINT: No operator matches the given name and argument type. You might need to add an explicit type cast.\nQUERY: SET 1: x[!2] = 43\nCONTEXT: PL/pgSQL function inline_code_block line 1 at assignment\n\nIt seems like a clear improvement to me that the whole plpgsql statement\nis now quoted, but the \"SET n:\" bit in front of it might confuse people,\nespecially if we don't document this new syntax (which I'm inclined not\nto, since it's useless in straight SQL). On the other hand, the\n\"SELECT\" that you got with the old code was confusing to novices too.\nMaybe something could be done to suppress those prefixes in error\nreports? Seems like a matter for another patch. 
We could also use\nsome other prefix --- there's nothing particularly magic about the\nword \"SET\" here, except that it already exists as a keyword --- but\nI didn't think of anything I liked better.\n\nThis is still WIP: I've not added any new regression test cases\nnor looked at the docs, and there's more cleanup needed in plpgsql.\nBut it passes check-world, so I thought I'd put it out for comments.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/A3691E98-CCA5-4DEB-B43C-92AD0437E09E%40mikatiming.de\n[2] https://www.postgresql.org/message-id/1070.1451345954%40sss.pgh.pa.us",
"msg_date": "Fri, 11 Dec 2020 12:21:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rethinking plpgsql's assignment implementation"
},
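The assignment forms at issue in this message can be sketched concretely (the composite type and variable names are invented for illustration; before the patch, the element-field and slice assignments are rejected by plpgsql):

```sql
CREATE TYPE pair AS (x int, y int);

DO $$
DECLARE
  arr pair[];
  nums int[] := ARRAY[1, 2, 3, 4];
BEGIN
  arr[1] := ROW(1, 2);
  arr[1].y := 42;              -- field of an array element, per [1]
  nums[2:3] := ARRAY[20, 30];  -- array slice as target, per [2]
  RAISE NOTICE '% %', arr, nums;
END $$;
```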
{
"msg_contents": "On 12/11/20 12:21, Tom Lane wrote:\n> solution I adopted was just to invent a new CoercionContext value\n> COERCION_PLPGSQL, representing \"use pl/pgsql's rules\". (Basically\n> what that means nowadays is to apply CoerceViaIO if assignment cast\n> lookup doesn't find a cast pathway.)\n\nThat seems like a rule that might be of use in other PLs or extensions;\ncould it have a more generic name, COERCION_FALLBACK or something?\n\n> is now quoted, but the \"SET n:\" bit in front of it might confuse people,\n> especially if we don't document this new syntax (which I'm inclined not\n> to, since it's useless in straight SQL).\n\nIf it's true that the only choices for n: are 1: or 2:, maybe it would look\nless confusing in an error message to, hmm, decree that this specialized SET\nform /always/ takes a two-component name, but accept something special like\nROUTINE.x (or UNNAMED.x or NULL.x or something) for the case where there\nisn't a qualifying label in the plpgsql source?\n\nIt's still a strange arbitrary creation, but might give more of a hint of\nits meaning if it crops up in an error message somewhere.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 11 Dec 2020 13:09:16 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Hi\n\nIt is great. I expected much more work.\n\nOn Fri, Dec 11, 2020 at 6:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We've had complaints in the past about how plpgsql can't handle\n> assignments to fields in arrays of records [1], that is cases like\n>\n> arrayvar[n].field := something;\n>\n> and also complaints about how plpgsql can't handle assignments\n> to array slices [2], ie\n>\n> arrayvar[m:n] := something;\n>\n> As of commit c7aba7c14, we have another problem, namely that\n> plpgsql's subscripted assignment only works for regular arrays;\n> it won't work for other types that might define subscript\n> assignment handlers.\n>\n> So I started to think about how to fix that, and eventually\n> decided that what we ought to do is nuke plpgsql's array-assignment\n> code altogether. The core code already has support for everything\n> we want here in the context of field/element assignments in UPDATE\n> commands; if we could get plpgsql to make use of that infrastructure\n> instead of rolling its own, we'd be a lot better off.\n>\n> The hard part of that is that the core parser will only generate\n> the structures we need (FieldStores and assignment SubscriptingRefs)\n> within UPDATE commands. We could export the relevant functions\n> (particularly transformAssignmentIndirection); but that won't help\n> plpgsql very much, because it really wants to be able to run all this\n> stuff through SPI. That means we have to have SQL syntax that can\n> generate an expression of that form.\n>\n> That led me to think about introducing a new statement, say\n>\n\n> SET variable_name opt_indirection := a_expr\n>\n\nSQL/PSM (ANSI SQL) defines SET var = expr\n\nIf you introduce a new statement - LET, then it can be less confusing for\nusers, and this statement can be the foundation for schema variables. 
With\nthis statement the implementation of schema variables is significantly\nsimpler.\n\nRegards\n\nPavel\n\n\n\n>\n> where opt_indirection is gram.y's symbol for \"field selections and/or\n> subscripts\". The idea here is that a plpgsql statement like\n>\n> x[2].fld := something;\n>\n> would be parsed using this new statement, producing an expression\n> that uses an assignment SubscriptingRef and a FieldStore operating\n> on a Param that gives the initial value of the array-of-composite\n> variable \"x\". Then plpgsql would just evaluate this expression and\n> assign the result to x. Problem solved.\n>\n> This almost works as-is, modulo annoying parse conflicts against the\n> existing variants of SET. However there's a nasty little detail\n> about what \"variable_name\" can be in plpgsql: it can be either one or\n> two identifiers, since there might be a block label involved, eg\n>\n> <<mylabel>> declare x int; begin mylabel.x := ...\n>\n> Between that and the parse-conflict problem, I ended up\n> with this syntax:\n>\n> SET n: variable_name opt_indirection := a_expr\n>\n> where \"n\" is an integer literal indicating how many dot-separated names\n> should be taken as the base variable name. Another annoying point is\n> that plpgsql historically has allowed fun stuff like\n>\n> mycount := count(*) from my_table where ...;\n>\n> that is, after the expression you can have all the rest of an ordinary\n> SELECT command. That's not terribly hard to deal with, but it means\n> that this new statement has to have all of SELECT's other options too.\n>\n> The other area that doesn't quite work without some kind of hack is\n> that plpgsql's casting rules for which types can be assigned to what\n> are far laxer than what the core parser thinks should be allowed in\n> UPDATE. The cast has to happen within the assignment expression\n> for this to work at all, so plpgsql can't fix it by itself. 
The\n> solution I adopted was just to invent a new CoercionContext value\n> COERCION_PLPGSQL, representing \"use pl/pgsql's rules\". (Basically\n> what that means nowadays is to apply CoerceViaIO if assignment cast\n> lookup doesn't find a cast pathway.)\n>\n> A happy side-effect of this approach is that it actually makes\n> some cases faster. In particular I can measure speedups for\n> (a) assignments to subscripted variables and (b) cases where a\n> coercion must be performed to produce the result to be assigned.\n> I believe the reason for this is that the patch effectively\n> merges what had been separate expressions (subscripts or casts,\n> respectively) into the main result-producing expression. This\n> eliminates a nontrivial amount of overhead for plancache validity\n> checking, execution startup, etc.\n>\n> Another side-effect is that the report of the statement in error\n> cases might look different. For example, in v13 a typo in a\n> subscript expression produces\n>\n> regression=# do $$ declare x int[]; begin x[!2] = 43; end $$;\n> ERROR: operator does not exist: ! integer\n> LINE 1: SELECT !2\n> ^\n> HINT: No operator matches the given name and argument type. You might\n> need to add an explicit type cast.\n> QUERY: SELECT !2\n> CONTEXT: PL/pgSQL function inline_code_block line 1 at assignment\n>\n> With this patch, you get\n>\n> regression=# do $$ declare x int[]; begin x[!2] = 43; end $$;\n> ERROR: operator does not exist: ! integer\n> LINE 1: SET 1: x[!2] = 43\n> ^\n> HINT: No operator matches the given name and argument type. You might\n> need to add an explicit type cast.\n> QUERY: SET 1: x[!2] = 43\n> CONTEXT: PL/pgSQL function inline_code_block line 1 at assignment\n>\n> It seems like a clear improvement to me that the whole plpgsql statement\n> is now quoted, but the \"SET n:\" bit in front of it might confuse people,\n> especially if we don't document this new syntax (which I'm inclined not\n> to, since it's useless in straight SQL). 
On the other hand, the\n> \"SELECT\" that you got with the old code was confusing to novices too.\n> Maybe something could be done to suppress those prefixes in error\n> reports? Seems like a matter for another patch. We could also use\n> some other prefix --- there's nothing particularly magic about the\n> word \"SET\" here, except that it already exists as a keyword --- but\n> I didn't think of anything I liked better.\n>\n> This is still WIP: I've not added any new regression test cases\n> nor looked at the docs, and there's more cleanup needed in plpgsql.\n> But it passes check-world, so I thought I'd put it out for comments.\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/A3691E98-CCA5-4DEB-B43C-92AD0437E09E%40mikatiming.de\n> [2] https://www.postgresql.org/message-id/1070.1451345954%40sss.pgh.pa.us\n>\n>\n",
"msg_date": "Fri, 11 Dec 2020 19:29:22 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 12/11/20 12:21, Tom Lane wrote:\n>> solution I adopted was just to invent a new CoercionContext value\n>> COERCION_PLPGSQL, representing \"use pl/pgsql's rules\". (Basically\n>> what that means nowadays is to apply CoerceViaIO if assignment cast\n>> lookup doesn't find a cast pathway.)\n\n> That seems like a rule that might be of use in other PLs or extensions;\n> could it have a more generic name, COERCION_FALLBACK or something?\n\nI'm not wedded to that name, but I doubt that it's semantics that we\nreally want to encourage anyone else to use. In particular, the fact\nthat it's not a superset of COERCION_EXPLICIT is pretty darn weird,\nwith little except backwards compatibility to recommend it.\n\n>> is now quoted, but the \"SET n:\" bit in front of it might confuse people,\n>> especially if we don't document this new syntax (which I'm inclined not\n>> to, since it's useless in straight SQL).\n\n> If it's true that the only choices for n: are 1: or 2:, maybe it would look\n> less confusing in an error message to, hmm, decree that this specialized SET\n> form /always/ takes a two-component name, but accept something special like\n> ROUTINE.x (or UNNAMED.x or NULL.x or something) for the case where there\n> isn't a qualifying label in the plpgsql source?\n\nAs the patch stands, it's still using the RECFIELD code paths, which\nmeans that there could be three-component target variable names\n(label.variable.field). If we were to get rid of that and expect\ntop-level field assignment to also be handled by this new mechanism,\nthen maybe your idea could be made to work. But I have not tried to\nimplement that here, as I don't see how to make it work for RECORD-type\nvariables (where the names and types of the fields aren't determinate).\n\nIn any case, that approach still involves inserting some query text\nthat the user didn't write, so I'm not sure how much confusion it'd\nremove. 
The \"SET n:\" business at least looks like it's some weird\nprefix comparable to \"LINE n:\", so while people wouldn't understand\nit I think they'd easily see it as something the system prefixed\nto their query.\n\nLooking a bit ahead, it's not too hard to imagine plpgsql wishing\nto pass other sorts of annotations through SPI and down to the core\nparser. Maybe we should think about a more general way to do that\nin an out-of-band, not-visible-in-the-query-text fashion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Dec 2020 13:32:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "I wrote:\n> In any case, that approach still involves inserting some query text\n> that the user didn't write, so I'm not sure how much confusion it'd\n> remove. The \"SET n:\" business at least looks like it's some weird\n> prefix comparable to \"LINE n:\", so while people wouldn't understand\n> it I think they'd easily see it as something the system prefixed\n> to their query.\n\n> Looking a bit ahead, it's not too hard to imagine plpgsql wishing\n> to pass other sorts of annotations through SPI and down to the core\n> parser. Maybe we should think about a more general way to do that\n> in an out-of-band, not-visible-in-the-query-text fashion.\n\nI have an idea (no code written yet) about this.\n\nAfter looking around, it seems like the ParserSetupHook mechanism\nis plenty for anything we might want an extension to be able to\nchange in the behavior of parse analysis. The hooks that we\ncurrently allow that to set affect only the interpretation of\nvariable names and $N parameter symbols, but we could surely\nadd much more in that line as needed.\n\nWhat we lack is any good way for an extension to control the\nbehavior of raw_parser() (i.e., gram.y). Currently, plpgsql\nprefixes \"SELECT \" to expressions it might want to parse, and\nnow my current patch proposes to prefix something else to get a\ndifferent grammar behavior. Another example of a very similar\nproblem is typeStringToTypeName(), which prefixes a string it\nexpects to be a type name with \"SELECT NULL::\", and then has\nto do a bunch of kluges to deal with the underspecification\ninvolved in that. 
Based on these examples, we need some sort\nof \"overall goal\" option for the raw parser, but maybe not more\nthan that --- other things you might want tend to fall into the\nparse analysis side of things.\n\nSo my idea here is to add a parsing-mode option to raw_parser(),\nwhich would be an enum with values like \"normal SQL statement\",\n\"expression only\", \"type name\", \"plpgsql assignment statement\".\nThe problem I had with not knowing how many dotted names to\nabsorb at the start of an assignment statement could be finessed\nby inventing \"assignment1\", \"assignment2\", and \"assignment3\"\nparsing modes; that's a little bit ugly but not enough to make\nme think we need a wider API.\n\nAs to how it could actually work, I'm noticing that raw_parser\nstarts out by initializing yyextra's lookahead buffer to empty.\nFor the parsing modes other than \"normal SQL statement\", it\ncould instead inject a lookahead token that is a code that cannot\nbe generated by the regular lexer. Then gram.y could have\nproductions like\n\n\tEXPRESSION_MODE a_expr { ... generate parse tree ... }\n\nwhere EXPRESSION_MODE is one of these special tokens. And now\nwe have something that will parse an a_expr, and only an a_expr,\nand we don't need any special \"SELECT \" or any other prefix in\nthe user-visible source string. Similarly for the other special\nparsing modes.\n\nEssentially, this is a way of having a few distinct parsers\nthat share a common base of productions, without the bloat and\ncode maintenance issues of building actually-distinct parsers.\n\nA small problem with this is that the images of these special\nproductions in ECPG would be dead code so far as ECPG is concerned.\nFor the use-cases I can foresee, there wouldn't be enough special\nproductions for that to be a deal-breaker. But we could probably\nteach the ECPG grammar-building scripts to filter out these\nproductions if it ever got to be annoying.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Dec 2020 22:16:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "I wrote:\n> So my idea here is to add a parsing-mode option to raw_parser(),\n> which would be an enum with values like \"normal SQL statement\",\n> \"expression only\", \"type name\", \"plpgsql assignment statement\".\n\nHere's a fleshed-out patch series that attacks things that way.\nI'm a lot better pleased with this than with my original approach.\n\n0001 creates the basic infrastructure for \"raw parse modes\", and as\nproof of concept simplifies typeStringToTypeName(). There's a minor\nfunctional improvement there, which is that we can now use the core\nparser's error cursor position, so instead of\n\nregression=# do $$ declare x int[23/] ; begin end $$;\nERROR: syntax error at or near \"/\"\nLINE 1: do $$ declare x int[23/] ; begin end $$;\n ^\nCONTEXT: invalid type name \"int[23/] \"\n\nyou get\n\nregression=# do $$ declare x int[23/] ; begin end $$;\nERROR: syntax error at or near \"/\"\nLINE 1: do $$ declare x int[23/] ; begin end $$;\n ^\nCONTEXT: invalid type name \"int[23/] \"\n\nIt's possible we could dispense with the error context callback\nin typeStringToTypeName altogether, but I've not experimented much.\n\n\n0002 tackles the next problem, which is to make this feature accessible\nthrough SPI. There are a couple of possibly-controversial choices here.\n\nFollowing the principle that we should avoid changing documented SPI\ninterfaces, we need a new version of SPI_prepare to pass RawParseMode\nthrough. This'll be the fourth one :-(, so I decided it was time to\ntry to make a definition that can stay API-compatible through future\nchanges. 
So it takes a struct of options, and I added a promise that\nzeroing the struct is enough to guarantee forward compatibility\nthrough future additions.\n\nThis leaves both of the previous iterations, SPI_prepare_cursor\nand SPI_prepare_params, unused anywhere in the core code.\nI suppose we can't kill them (codesearch.debian.net knows of some\nexternal uses) but I propose to mark them deprecated, with an eye\nto at least removing their documentation someday.\n\nI did not want to add a RawParseMode parameter to pg_parse_query(),\nbecause that would have affected a larger number of unrelated modules,\nand it would not have been great from a header-inclusion footprint\nstandpoint either. So I chose to pass down the mode from SPI by\nhaving it just call raw_parser() directly instead of going through\npg_parse_query(). Perhaps this is a modularity violation, or perhaps\nthere's somebody who really wants the extra tracing overhead in\npg_parse_query() to apply to SPI queries. I'm open to discussing\nwhether this should be done differently.\n\n(However, having made these two patches, I'm now wondering whether\nthere is any rhyme or reason to the existing state of affairs\nwith some callers going through pg_parse_query() while others use\nraw_parser() directly. It's hard to knock making a different\nchoice in spi.c unless we have a coherent policy about which to\nuse where.)\n\n\nNext, 0003 invents a raw parse mode for plpgsql expressions (which,\nin some contexts, can be pretty nearly whole SELECT statements),\nand uses that to get plpgsql out of the business of prefixing\n\"SELECT \" to user-written text. I would not have bothered with this\nas a standalone fix, but I think it does make for less-confusing\nerror messages --- we've definitely had novices ask \"where'd this\nSELECT come from?\" in the past. 
(I cheated a bit on PERFORM, though.\nUnlike other places, it needs to allow UNION, so it can't use the\nsame restricted syntax.)\n\n0004 then reimplements plpgsql assignment. This is essentially the same\npatch I submitted before, but redesigned to work with the infrastructure\nfrom 0001-0003.\n\n0005 adds documentation and test cases. It also fixes a couple\nof pre-existing problems that the plpgsql parser had with assigning\nto sub-fields of record fields, which I discovered while making the\ntests.\n\nFinally, 0006 removes plpgsql's ARRAYELEM datum type, on the grounds\nthat we don't need it anymore. This might be a little controversial\ntoo, because there was still one way to reach the code: GET DIAGNOSTICS\nwith an array element as target would do so. However, that seems like\na pretty weird corner case. Reviewing the git history, I find that\nI added support for that in commit 55caaaeba; but a check of the\nassociated discussion shows that there was no actual user request for\nthat, I'd just done it because it was easy and seemed more symmetric.\nThe amount of code involved here seems way more than is justified by\nthat one case, so I think we should just take it out and lose the\n\"feature\". (I did think about whether GET DIAGNOSTICS could be\nreimplemented on top of the new infrastructure, but it wouldn't be\neasy because we don't have a SQL-expression representation of the\nGET DIAGNOSTICS values. Moreover, going in that direction would add\nan expression evaluation, making GET DIAGNOSTICS slower. So I think\nwe should just drop it.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 13 Dec 2020 16:40:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
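The API-stability promise made for the new SPI_prepare variant in 0002 — "zeroing the struct is enough to guarantee forward compatibility" — is a general extensible-options pattern. A minimal Python sketch of the idea (field and constant names here are illustrative stand-ins, not the actual SPI API):

```python
from dataclasses import dataclass

# Sketch of the compatibility promise: every option's default is its
# "zero" value, so a caller who zero-fills the struct (memset() in C)
# gets today's behavior even after new fields are appended later.
RAW_PARSE_DEFAULT = 0        # hypothetical mode codes for illustration
RAW_PARSE_PLPGSQL_EXPR = 1

@dataclass
class SPIPrepareOptions:
    parse_mode: int = RAW_PARSE_DEFAULT   # which raw-parser entry to use
    cursor_options: int = 0               # pre-existing cursor flags

def spi_prepare(src, options=None):
    opts = options or SPIPrepareOptions()  # all-zero struct = old behavior
    return {"src": src, "parse_mode": opts.parse_mode}
```

Because defaults and zero coincide, adding a field later never changes the meaning of an existing zero-filled call site.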
{
"msg_contents": "ne 13. 12. 2020 v 22:41 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I wrote:\n> > So my idea here is to add a parsing-mode option to raw_parser(),\n> > which would be an enum with values like \"normal SQL statement\",\n> > \"expression only\", \"type name\", \"plpgsql assignment statement\".\n>\n> Here's a fleshed-out patch series that attacks things that way.\n> I'm a lot better pleased with this than with my original approach.\n>\n> 0001 creates the basic infrastructure for \"raw parse modes\", and as\n> proof of concept simplifies typeStringToTypeName(). There's a minor\n> functional improvement there, which is that we can now use the core\n> parser's error cursor position, so instead of\n>\n> regression=# do $$ declare x int[23/] ; begin end $$;\n> ERROR: syntax error at or near \"/\"\n> LINE 1: do $$ declare x int[23/] ; begin end $$;\n> ^\n> CONTEXT: invalid type name \"int[23/] \"\n>\n> you get\n>\n> regression=# do $$ declare x int[23/] ; begin end $$;\n> ERROR: syntax error at or near \"/\"\n> LINE 1: do $$ declare x int[23/] ; begin end $$;\n> ^\n> CONTEXT: invalid type name \"int[23/] \"\n>\n> It's possible we could dispense with the error context callback\n> in typeStringToTypeName altogether, but I've not experimented much.\n>\n>\n> 0002 tackles the next problem, which is to make this feature accessible\n> through SPI. There are a couple of possibly-controversial choices here.\n>\n> Following the principle that we should avoid changing documented SPI\n> interfaces, we need a new version of SPI_prepare to pass RawParseMode\n> through. This'll be the fourth one :-(, so I decided it was time to\n> try to make a definition that can stay API-compatible through future\n> changes. 
So it takes a struct of options, and I added a promise that\n> zeroing the struct is enough to guarantee forward compatibility\n> through future additions.\n>\n> This leaves both of the previous iterations, SPI_prepare_cursor\n> and SPI_prepare_params, unused anywhere in the core code.\n> I suppose we can't kill them (codesearch.debian.net knows of some\n> external uses) but I propose to mark them deprecated, with an eye\n> to at least removing their documentation someday.\n>\n> I did not want to add a RawParseMode parameter to pg_parse_query(),\n> because that would have affected a larger number of unrelated modules,\n> and it would not have been great from a header-inclusion footprint\n> standpoint either. So I chose to pass down the mode from SPI by\n> having it just call raw_parser() directly instead of going through\n> pg_parse_query(). Perhaps this is a modularity violation, or perhaps\n> there's somebody who really wants the extra tracing overhead in\n> pg_parse_query() to apply to SPI queries. I'm open to discussing\n> whether this should be done differently.\n>\n> (However, having made these two patches, I'm now wondering whether\n> there is any rhyme or reason to the existing state of affairs\n> with some callers going through pg_parse_query() while others use\n> raw_parser() directly. It's hard to knock making a different\n> choice in spi.c unless we have a coherent policy about which to\n> use where.)\n>\n>\n> Next, 0003 invents a raw parse mode for plpgsql expressions (which,\n> in some contexts, can be pretty nearly whole SELECT statements),\n> and uses that to get plpgsql out of the business of prefixing\n> \"SELECT \" to user-written text. I would not have bothered with this\n> as a standalone fix, but I think it does make for less-confusing\n> error messages --- we've definitely had novices ask \"where'd this\n> SELECT come from?\" in the past. 
(I cheated a bit on PERFORM, though.\n> Unlike other places, it needs to allow UNION, so it can't use the\n> same restricted syntax.)\n>\n> 0004 then reimplements plpgsql assignment. This is essentially the same\n> patch I submitted before, but redesigned to work with the infrastructure\n> from 0001-0003.\n>\n> 0005 adds documentation and test cases. It also fixes a couple\n> of pre-existing problems that the plpgsql parser had with assigning\n> to sub-fields of record fields, which I discovered while making the\n> tests.\n>\n> Finally, 0006 removes plpgsql's ARRAYELEM datum type, on the grounds\n> that we don't need it anymore. This might be a little controversial\n> too, because there was still one way to reach the code: GET DIAGNOSTICS\n> with an array element as target would do so. However, that seems like\n> a pretty weird corner case. Reviewing the git history, I find that\n> I added support for that in commit 55caaaeba; but a check of the\n> associated discussion shows that there was no actual user request for\n> that, I'd just done it because it was easy and seemed more symmetric.\n> The amount of code involved here seems way more than is justified by\n> that one case, so I think we should just take it out and lose the\n> \"feature\". (I did think about whether GET DIAGNOSTICS could be\n> reimplemented on top of the new infrastructure, but it wouldn't be\n> easy because we don't have a SQL-expression representation of the\n> GET DIAGNOSTICS values. Moreover, going in that direction would add\n> an expression evaluation, making GET DIAGNOSTICS slower. So I think\n> we should just drop it.)\n>\n>\nIt is a really great patch. 
I did fast check and I didn't find any\nfunctionality issue\n\n--\n-- Name: footype; Type: TYPE; Schema: public; Owner: pavel\n--\n\nCREATE TYPE public.footype AS (\na integer,\nb integer\n);\n\n\nALTER TYPE public.footype OWNER TO pavel;\n\n--\n-- Name: bootype; Type: TYPE; Schema: public; Owner: pavel\n--\n\nCREATE TYPE public.bootype AS (\na integer,\nf public.footype\n);\n\n\nALTER TYPE public.bootype OWNER TO pavel;\n\n--\n-- Name: cootype; Type: TYPE; Schema: public; Owner: pavel\n--\n\nCREATE TYPE public.cootype AS (\na integer,\nb integer[]\n);\n\n\nALTER TYPE public.cootype OWNER TO pavel;\n\n--\n-- Name: dootype; Type: TYPE; Schema: public; Owner: pavel\n--\n\nCREATE TYPE public.dootype AS (\na integer,\nb public.footype,\nc public.footype[]\n);\n\n\nALTER TYPE public.dootype OWNER TO pavel;\n\n--\n-- PostgreSQL database dump complete\n--\n\npostgres=# do $$\n<<lab>>\ndeclare\n a footype[];\n b bootype;\n ba bootype[];\n c cootype[];\n d dootype[];\n x int default 1;\nbegin\n a[10] := row(10,20);\n a[11] := (30,40);\n a[3] := (0,0);\n a[3].a := 100;\n raise notice '%', a;\n b.a := 100;\n b.f.a := 1000;\n raise notice '%', b;\n ba[0] := b;\n\n ba[0].a = 33; ba[0].f := row(33,33);\n lab.ba[0].f.a := 1000000;\n raise notice '%', ba;\n c[0].a := 10000;\n c[0].b := ARRAY[1,2,4];\n lab.c[0].b[1] := 10000;\n raise notice '% %', c, c[0].b[x];\n\n d[0].a := 100;\n d[0].b.a := 101;\n d[0].c[x+1].a := 102;\n raise notice '%', d;\nend;\n$$;\nNOTICE:\n [3:11]={\"(100,0)\",NULL,NULL,NULL,NULL,NULL,NULL,\"(10,20)\",\"(30,40)\"}\nNOTICE: (100,\"(1000,)\")\nNOTICE: [0:0]={\"(33,\\\"(1000000,33)\\\")\"}\nNOTICE: [0:0]={\"(10000,\\\"{10000,2,4}\\\")\"} 10000\nNOTICE: [0:0]={\"(100,\\\"(101,)\\\",\\\"[2:2]={\\\"\\\"(102,)\\\"\\\"}\\\")\"}\nDO\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n>",
"msg_date": "Mon, 14 Dec 2020 07:57:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
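The nested assignments Pavel's test exercises above (e.g. d[0].c[x+1].a := 102) reduce to walking a path of subscript and field steps. A minimal Python model of that walk, with containers simplified to dicts (not the actual plpgsql datum machinery):

```python
# Minimal model of what an assignment such as "d[0].c[x+1].a := 102"
# has to do: walk the path of subscript/field steps, materializing
# missing intermediate containers (plpgsql auto-extends arrays in a
# similar spirit), then store the value at the final step.
def assign_path(target, path, value):
    obj = target
    for step in path[:-1]:
        if step not in obj:
            obj[step] = {}     # create the missing intermediate container
        obj = obj[step]
    obj[path[-1]] = value
    return target
```

For example, assigning along the path [0, "c", 2, "a"] into an empty dict builds the three intermediate levels before storing 102.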
{
"msg_contents": "Hi\n\nI checked a performance and it looks so access to record's field is faster,\nbut an access to arrays field is significantly slower\n\ndo $$\ndeclare\n a int[];\n aux int;\n rep boolean default true;\nbegin\n for i in 1..5000\n loop\n a[i]:= 5000 - i;\n end loop;\n\n raise notice '%', a[1:10];\n\n while rep\n loop\n rep := false;\n for i in 1..5000\n loop\n if a[i] > a[i+1] then\n aux := a[i];\n a[i] := a[i+1]; a[i+1] := aux;\n rep := true;\n end if;\n end loop;\n end loop;\n\n raise notice '%', a[1:10];\n\nend;\n$$;\n\nThis code is about 3x slower than master (40 sec x 12 sec). I believe so\nthis is a worst case scenario\n\nI tested pi calculation\n\nCREATE OR REPLACE FUNCTION pi_est_1(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + 2.0)));\n c1 := c1 + 2.0;\n c2 := c2 + 2.0;\n END LOOP;\n RETURN accum * 2.0;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION pi_est_2(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + double precision '2.0')));\n c1 := c1 + double precision '2.0';\n c2 := c2 + double precision '2.0';\n END LOOP;\n RETURN accum * double precision '2.0';\nEND;\n$$ LANGUAGE plpgsql;\n\nAnd the performance is 10% slower than on master\n\nInteresting point - the master is about 5% faster than pg13",
"msg_date": "Mon, 14 Dec 2020 09:20:23 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I checked a performance and it looks so access to record's field is faster,\n> but an access to arrays field is significantly slower\n\nHmm, I'd drawn the opposite conclusion in my own testing ...\n\n> for i in 1..5000\n> loop\n> if a[i] > a[i+1] then\n> aux := a[i];\n> a[i] := a[i+1]; a[i+1] := aux;\n> rep := true;\n> end if;\n> end loop;\n\n... but I now see that I'd not checked cases like \"a[i] := a[j]\".\nexec_check_rw_parameter() is being too conservative about whether\nit can optimize a case like that. The attached incremental patch\nfixes it.\n\n> I tested pi calculation\n> ...\n> And the performance is 10% slower than on master\n\nCan't reproduce that here. For the record, I get the following\ntimings (medians of three runs) for your test cases:\n\nHEAD:\n\nsort:\t\t\tTime: 13974.709 ms (00:13.975)\npi_est_1(10000000):\tTime: 3537.482 ms (00:03.537)\npi_est_2(10000000):\tTime: 3546.557 ms (00:03.547)\n\nPatch v1:\n\nsort:\t\t\tTime: 47053.892 ms (00:47.054)\npi_est_1(10000000):\tTime: 3456.078 ms (00:03.456)\npi_est_2(10000000):\tTime: 3451.347 ms (00:03.451)\n\n+ exec_check_rw_parameter fix:\n\nsort:\t\t\tTime: 12199.724 ms (00:12.200)\npi_est_1(10000000):\tTime: 3357.955 ms (00:03.358)\npi_est_2(10000000):\tTime: 3367.526 ms (00:03.368)\n\nI'm inclined to think that the differences in the pi calculation\ntimings are mostly chance effects; there's certainly no reason\nwhy exec_check_rw_parameter should affect that test case at all.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 14 Dec 2020 11:25:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
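The optimization that exec_check_rw_parameter guards can be modeled very simply: a function handed a read/write reference may scribble directly on its input and return it, so an assignment like arrayvar := array_append(arrayvar, x) costs O(1) instead of an O(N) copy. A toy Python model of that contract (not the real expanded-datum machinery):

```python
# Toy model of the read/write expanded-datum optimization: with a R/W
# reference the function mutates its input in place and returns the
# same object (O(1)); with a read-only reference it must copy first
# (O(N)) so the caller's stored value is left intact.
def array_append(arr, x, read_write=False):
    if not read_write:
        arr = list(arr)    # read-only caller keeps its value unchanged
    arr.append(x)
    return arr

a = [1, 2, 3]
same = array_append(a, 4, read_write=True)   # scribbles on "a" itself
copy = array_append(a, 5)                    # works on a private copy
```

When the result is the same object already stored in the variable, plpgsql's assignment code can notice the identical pointer and do nothing at all.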
{
"msg_contents": "po 14. 12. 2020 v 17:25 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I checked a performance and it looks so access to record's field is\n> faster,\n> > but an access to arrays field is significantly slower\n>\n> Hmm, I'd drawn the opposite conclusion in my own testing ...\n>\n> > for i in 1..5000\n> > loop\n> > if a[i] > a[i+1] then\n> > aux := a[i];\n> > a[i] := a[i+1]; a[i+1] := aux;\n> > rep := true;\n> > end if;\n> > end loop;\n>\n> ... but I now see that I'd not checked cases like \"a[i] := a[j]\".\n> exec_check_rw_parameter() is being too conservative about whether\n> it can optimize a case like that. The attached incremental patch\n> fixes it.\n>\n> > I tested pi calculation\n> > ...\n> > And the performance is 10% slower than on master\n>\n> Can't reproduce that here. For the record, I get the following\n> timings (medians of three runs) for your test cases:\n>\n> HEAD:\n>\n> sort: Time: 13974.709 ms (00:13.975)\n> pi_est_1(10000000): Time: 3537.482 ms (00:03.537)\n> pi_est_2(10000000): Time: 3546.557 ms (00:03.547)\n>\n> Patch v1:\n>\n> sort: Time: 47053.892 ms (00:47.054)\n> pi_est_1(10000000): Time: 3456.078 ms (00:03.456)\n> pi_est_2(10000000): Time: 3451.347 ms (00:03.451)\n>\n> + exec_check_rw_parameter fix:\n>\n> sort: Time: 12199.724 ms (00:12.200)\n> pi_est_1(10000000): Time: 3357.955 ms (00:03.358)\n> pi_est_2(10000000): Time: 3367.526 ms (00:03.368)\n>\n> I'm inclined to think that the differences in the pi calculation\n> timings are mostly chance effects; there's certainly no reason\n> why exec_check_rw_parameter should affect that test case at all.\n>\n\nperformance patch helps lot of for sort - with patch it is faster 5-10%\nthan master 10864 x 12122 ms\n\nI found probably reason why patched was slower\n\nI used\n\nCFLAGS=\"-fno-omit-frame-pointer -Wall -Wmissing-prototypes\n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute\n-Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-g -O2 -Werror=switch\"\n\nWith these options the pi test was slower. When I used default, then there\nis no difference.\n\nSo it can be very good feature, new code has same speed or it is faster\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n\n\n\n> regards, tom lane\n>\n>",
"msg_date": "Mon, 14 Dec 2020 19:01:17 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "I realized that the speedup patch I posted yesterday is flawed: it's\ntoo aggressive about applying the R/W param mechanism, instead of\nnot aggressive enough.\n\nTo review, the point of that logic is that if we have an assignment\nlike\n\tarrayvar := array_append(arrayvar, some-scalar-expression);\na naive implementation would have array_append construct an entire\nnew array, which we'd then have to copy into plpgsql's variable\nstorage. Instead, if the array variable is in expanded-array\nformat (which plpgsql encourages it to be) then we can pass the\narray parameter as a \"read/write expanded datum\", which array_append\nrecognizes as license to scribble right on its input and return the\nmodified input; that takes only O(1) time not O(N). Then plpgsql's\nassignment code notices that the expression result datum is the same\npointer already stored in the variable, so it does nothing.\n\nWith the patch at hand, a subscripted assignment a[i] := x becomes,\nessentially,\n\ta := subscriptingref(a, i, x);\nand we need to make the same sort of transformation to allow\narray_set_element to scribble right on the original value of \"a\"\ninstead of making a copy.\n\nHowever, we can't simply not consider the source expression \"x\",\nas I proposed yesterday. For example, if we have\n\ta := subscriptingref(a, i, f(array_append(a, x)));\nit's not okay for array_append() to scribble on \"a\". The R/W\nparam mechanism normally disallows any additional references to\nthe target variable, which would prevent this error, but I broke\nthat safety check with the 0007 patch.\n\nAfter thinking about this awhile, I decided that plpgsql's R/W param\nmechanism is really misdesigned. 
Instead of requiring the assignment\nsource expression to be such that *all* its references to the target\nvariable could be passed as R/W, we really want to identify *one*\nreference to the target variable to be passed as R/W, allowing any other\nones to be passed read/only as they would be by default. As long as the\nR/W reference is a direct argument to the top-level (hence last to be\nexecuted) function in the expression, there is no harm in R/O references\nbeing passed to other lower parts of the expression. Nor is there any\nuse-case for more than one argument of the top-level function being R/W.\n\nSo the attached rewrite of the 0007 patch reimplements that logic to\nidentify one single Param that references the target variable, and\nmake only that Param pass a read/write reference, not any other\nParams referencing the target variable. This is a good change even\nwithout considering the assignment-reimplementation proposal, because\neven before this patchset we could have cases like\n\tarrayvar := array_append(arrayvar, arrayvar[i]);\nThe existing code would be afraid to optimize this, but it's in fact\nsafe.\n\nI also re-attach the 0001-0006 patches, which have not changed, just\nto keep the cfbot happy.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 15 Dec 2020 15:17:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
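The redesigned rule in the rewritten 0007 patch — pass exactly one reference to the target read/write, and only if it is a direct argument of the top-level (last-executed) function, leaving every other reference read-only — can be sketched over a toy expression tree. Expressions are modeled as tuples ("call", fn, args...), with "target" marking a reference to the assignment target (an illustrative model, not the real Param-tree walk):

```python
# Sketch of the revised read/write-parameter rule: only a reference
# appearing as a *direct* argument of the top-level function may be
# passed read/write, and at most one of them; all other references to
# the target, including any nested ones, stay read-only.
def choose_rw_param(expr):
    if not (isinstance(expr, tuple) and expr and expr[0] == "call"):
        return expr
    fn, args = expr[1], list(expr[2:])
    marked = False
    for i, arg in enumerate(args):
        if arg == "target" and not marked:
            args[i] = "target_rw"   # evaluated last, so scribbling is safe
            marked = True           # a second R/W argument has no use-case
    return ("call", fn, *args)
```

This matches Tom's example arrayvar := array_append(arrayvar, arrayvar[i]): the outer reference becomes read/write while the subscripted inner reference remains read-only.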
{
"msg_contents": "út 15. 12. 2020 v 21:18 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I realized that the speedup patch I posted yesterday is flawed: it's\n> too aggressive about applying the R/W param mechanism, instead of\n> not aggressive enough.\n>\n> To review, the point of that logic is that if we have an assignment\n> like\n> arrayvar := array_append(arrayvar, some-scalar-expression);\n> a naive implementation would have array_append construct an entire\n> new array, which we'd then have to copy into plpgsql's variable\n> storage. Instead, if the array variable is in expanded-array\n> format (which plpgsql encourages it to be) then we can pass the\n> array parameter as a \"read/write expanded datum\", which array_append\n> recognizes as license to scribble right on its input and return the\n> modified input; that takes only O(1) time not O(N). Then plpgsql's\n> assignment code notices that the expression result datum is the same\n> pointer already stored in the variable, so it does nothing.\n>\n> With the patch at hand, a subscripted assignment a[i] := x becomes,\n> essentially,\n> a := subscriptingref(a, i, x);\n> and we need to make the same sort of transformation to allow\n> array_set_element to scribble right on the original value of \"a\"\n> instead of making a copy.\n>\n> However, we can't simply not consider the source expression \"x\",\n> as I proposed yesterday. For example, if we have\n> a := subscriptingref(a, i, f(array_append(a, x)));\n> it's not okay for array_append() to scribble on \"a\". The R/W\n> param mechanism normally disallows any additional references to\n> the target variable, which would prevent this error, but I broke\n> that safety check with the 0007 patch.\n>\n> After thinking about this awhile, I decided that plpgsql's R/W param\n> mechanism is really misdesigned. 
Instead of requiring the assignment\n> source expression to be such that *all* its references to the target\n> variable could be passed as R/W, we really want to identify *one*\n> reference to the target variable to be passed as R/W, allowing any other\n> ones to be passed read/only as they would be by default. As long as the\n> R/W reference is a direct argument to the top-level (hence last to be\n> executed) function in the expression, there is no harm in R/O references\n> being passed to other lower parts of the expression. Nor is there any\n> use-case for more than one argument of the top-level function being R/W.\n>\n> So the attached rewrite of the 0007 patch reimplements that logic to\n> identify one single Param that references the target variable, and\n> make only that Param pass a read/write reference, not any other\n> Params referencing the target variable. This is a good change even\n> without considering the assignment-reimplementation proposal, because\n> even before this patchset we could have cases like\n> arrayvar := array_append(arrayvar, arrayvar[i]);\n> The existing code would be afraid to optimize this, but it's in fact\n> safe.\n>\n> I also re-attach the 0001-0006 patches, which have not changed, just\n> to keep the cfbot happy.\n>\n>\nI run some performance tests and it looks very well.\n\n\n regards, tom lane\n>\n>",
"msg_date": "Wed, 16 Dec 2020 10:56:31 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Hi\n\nI repeated tests. I wrote a set of simple functions. It is a synthetical\ntest, but I think it can identify potential problems well.\n\nI calculated the average of 3 cycles and I checked the performance of each\nfunction. I didn't find any problem. The total execution time is well too.\nPatched code is about 11% faster than master (14sec x 15.8sec). So there is\nnew important functionality with nice performance benefits.\n\nmake check-world passed\n\nRegards\n\nPavel",
"msg_date": "Sat, 26 Dec 2020 19:00:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "so 26. 12. 2020 v 19:00 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I repeated tests. I wrote a set of simple functions. It is a synthetical\n> test, but I think it can identify potential problems well.\n>\n> I calculated the average of 3 cycles and I checked the performance of each\n> function. I didn't find any problem. The total execution time is well too.\n> Patched code is about 11% faster than master (14sec x 15.8sec). So there is\n> new important functionality with nice performance benefits.\n>\n> make check-world passed\n>\n\nI played with plpgsql_check tests and again I didn't find any significant\nissue of this patch. I am very satisfied with implementation.\n\nNow, the behavior of SELECT INTO is behind the assign statement and this\nfact should be documented. Usually we don't need to use array's fields\nhere, but somebody can try it.\n\nRegards\n\nPavel\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>",
"msg_date": "Mon, 28 Dec 2020 00:08:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Now, the behavior of SELECT INTO is behind the assign statement and this\n> fact should be documented. Usually we don't need to use array's fields\n> here, but somebody can try it.\n\nIt's been behind all along --- this patch didn't really change that.\nBut I don't mind documenting it more clearly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Dec 2020 18:54:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "po 28. 12. 2020 v 0:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > Now, the behavior of SELECT INTO is behind the assign statement and this\n> > fact should be documented. Usually we don't need to use array's fields\n> > here, but somebody can try it.\n>\n> It's been behind all along --- this patch didn't really change that.\n> But I don't mind documenting it more clearly.\n>\n\nok\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 28 Dec 2020 06:28:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Hi\n\nI continue in review.\n\nI found inconsistency in work with slicings (this is not directly related\nto this patch, but can be interesting, because with new functionality the\narray slicings can be edited more often).\n\na = array[1,2,3,4,5];\na[1:5] = 10; -- correctly fails, although for some people can be more\nnatural semantic setting a[1..5] to value 10\n\na[1:5] = NULL; does nothing - no fail, no value change ??? Is it correct\n\na[1:5] = ARRAY[1]; -- correctly fails ERROR: source array too small\n\nbut\n\na[1:5] = ARRAY[1,2,3,4,5,6]; -- this statement works, but 6 is ignored. Is\nit correct? I expected \"source array too big\"\n\nMore, this behave is not documented\n\nanything other looks well, all tests passed, and in my benchmarks I don't\nsee any slowdowns , so I'll mark this patch as ready for committer\n\nRegards\n\nPavel",
"msg_date": "Sun, 3 Jan 2021 16:25:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I found inconsistency in work with slicings (this is not directly related\n> to this patch, but can be interesting, because with new functionality the\n> array slicings can be edited more often).\n\n> a = array[1,2,3,4,5];\n> a[1:5] = 10; -- correctly fails, although for some people can be more\n> natural semantic setting a[1..5] to value 10\n> a[1:5] = NULL; does nothing - no fail, no value change ??? Is it correct\n> a[1:5] = ARRAY[1]; -- correctly fails ERROR: source array too small\n> but\n> a[1:5] = ARRAY[1,2,3,4,5,6]; -- this statement works, but 6 is ignored. Is\n> it correct? I expected \"source array too big\"\n\nHm. All of these behaviors have existed for a long time in the context\nof UPDATE statements:\n\nregression=# create table t1 (a int[]);\nCREATE TABLE\nregression=# insert into t1 values(array[1,2,3,4,5]);\nINSERT 0 1\nregression=# table t1;\n a \n-------------\n {1,2,3,4,5}\n(1 row)\n\nregression=# update t1 set a[1:5] = 10;\nERROR: subscripted assignment to \"a\" requires type integer[] but expression is of type integer\nregression=# update t1 set a[1:5] = null;\nUPDATE 1\nregression=# table t1;\n a \n-------------\n {1,2,3,4,5}\n(1 row)\n\n(Note that in this example, the null is implicitly typed as int[];\nso it's not like the prior example.)\n\nregression=# update t1 set a[1:5] = array[1];\nERROR: source array too small\nregression=# update t1 set a[1:5] = array[1,2,3,4,6,5];\nUPDATE 1\nregression=# table t1;\n a \n-------------\n {1,2,3,4,6}\n(1 row)\n\nI agree this is inconsistent, but given the way this patch works,\nwe'd have to change UPDATE's behavior if we want plpgsql to do\nsomething different. Not sure if we can get away with that.\n\n> anything other looks well, all tests passed, and in my benchmarks I don't\n> see any slowdowns , so I'll mark this patch as ready for committer\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jan 2021 13:06:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "ne 3. 1. 2021 v 19:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I found inconsistency in work with slicings (this is not directly related\n> > to this patch, but can be interesting, because with new functionality the\n> > array slicings can be edited more often).\n>\n> > a = array[1,2,3,4,5];\n> > a[1:5] = 10; -- correctly fails, although for some people can be more\n> > natural semantic setting a[1..5] to value 10\n> > a[1:5] = NULL; does nothing - no fail, no value change ??? Is it correct\n> > a[1:5] = ARRAY[1]; -- correctly fails ERROR: source array too small\n> > but\n> > a[1:5] = ARRAY[1,2,3,4,5,6]; -- this statement works, but 6 is ignored.\n> Is\n> > it correct? I expected \"source array too big\"\n>\n> Hm. All of these behaviors have existed for a long time in the context\n> of UPDATE statements:\n>\n> regression=# create table t1 (a int[]);\n> CREATE TABLE\n> regression=# insert into t1 values(array[1,2,3,4,5]);\n> INSERT 0 1\n> regression=# table t1;\n> a\n> -------------\n> {1,2,3,4,5}\n> (1 row)\n>\n> regression=# update t1 set a[1:5] = 10;\n> ERROR: subscripted assignment to \"a\" requires type integer[] but\n> expression is of type integer\n> regression=# update t1 set a[1:5] = null;\n> UPDATE 1\n> regression=# table t1;\n> a\n> -------------\n> {1,2,3,4,5}\n> (1 row)\n>\n> (Note that in this example, the null is implicitly typed as int[];\n> so it's not like the prior example.)\n>\n\nI understand\n\n\n> regression=# update t1 set a[1:5] = array[1];\n> ERROR: source array too small\n> regression=# update t1 set a[1:5] = array[1,2,3,4,6,5];\n> UPDATE 1\n> regression=# table t1;\n> a\n> -------------\n> {1,2,3,4,6}\n> (1 row)\n>\n> I agree this is inconsistent, but given the way this patch works,\n> we'd have to change UPDATE's behavior if we want plpgsql to do\n> something different. Not sure if we can get away with that.\n>\n\nYes, the UPDATE should be changed. 
This is not a pretty important corner\ncase. But any inconsistency can be messy for users.\n\nI don't see any interesting use case for current behavior, but it is a\ncorner case.\n\n\n\n> > anything other looks well, all tests passed, and in my benchmarks I don't\n> > see any slowdowns , so I'll mark this patch as ready for committer\n>\n> Thanks!\n>\n\nwith pleasure\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Sun, 3 Jan 2021 19:16:43 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "Hi\n\nNow, I am testing subscribing on the jsonb feature, and I found one issue,\nthat is not supported by parser.\n\nWhen the target is scalar, then all is ok. But we can have a plpgsql array\nof jsonb values.\n\npostgres=# do $$\ndeclare j jsonb[];\nbegin\n  j[1] = '{\"b\":\"Ahoj\"}';\n  raise notice '%', j;\n  raise notice '%', (j[1])['b'];\nend\n$$;\nNOTICE:  {\"{\\\"b\\\": \\\"Ahoj\\\"}\"}\nNOTICE:  \"Ahoj\"\nDO\n\nParenthesis work well in expressions, but are not supported on the left\nside of assignment.\n\npostgres=# do $$\ndeclare j jsonb[];\nbegin\n  (j[1])['b'] = '\"Ahoj\"';\n  raise notice '%', j;\n  raise notice '%', j[1]['b'];\nend\n$$;\nERROR:  syntax error at or near \"(\"\nLINE 4:   (j[1])['b'] = '\"Ahoj\"';\n          ^\n\nRegards\n\nPavel",
"msg_date": "Tue, 19 Jan 2021 19:21:04 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
},
{
"msg_contents": "út 19. 1. 2021 v 19:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> Now, I am testing subscribing on the jsonb feature, and I found one issue,\n> that is not supported by parser.\n>\n> When the target is scalar, then all is ok. But we can have a plpgsql array\n> of jsonb values.\n>\n> postgres=# do $$\n> declare j jsonb[];\n> begin\n>   j[1] = '{\"b\":\"Ahoj\"}';\n>   raise notice '%', j;\n>   raise notice '%', (j[1])['b'];\n> end\n> $$;\n> NOTICE:  {\"{\\\"b\\\": \\\"Ahoj\\\"}\"}\n> NOTICE:  \"Ahoj\"\n> DO\n>\n> Parenthesis work well in expressions, but are not supported on the left\n> side of assignment.\n>\n> postgres=# do $$\n> declare j jsonb[];\n> begin\n>   (j[1])['b'] = '\"Ahoj\"';\n>   raise notice '%', j;\n>   raise notice '%', j[1]['b'];\n> end\n> $$;\n> ERROR:  syntax error at or near \"(\"\n> LINE 4:   (j[1])['b'] = '\"Ahoj\"';\n>           ^\n>\n\nAssignment for nesting composite types is working better - although there\nis some inconsistency too:\n\ncreate type t_inner as (x int, y int);\ncreate type t_outer as (a t_inner, b t_inner);\n\ndo $$\ndeclare v t_outer;\nbegin\n  v.a.x := 10; -- parenthesis not allowed here, but not required\n  raise notice '%', v;\n  raise notice '%', (v).a.x; -- parenthesis are required here\nend;\n$$;\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>",
"msg_date": "Wed, 20 Jan 2021 09:43:09 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking plpgsql's assignment implementation"
}
] |
[
{
"msg_contents": "I would like to have an anonymous block, like DO, but having resuts, like an\nusual function does.\n\nI know any user can do ...\n\ncreate function pg_temp.run_time_bigger(numeric,numeric) returns numeric\nlanguage plpgsql as $$ \nbegin if $1 > $2 then return $1; else return $2; end if; end;$$;\nselect * from pg_temp.run_time_bigger(5,3);\ndrop function pg_temp.run_time_bigger(numeric,numeric);\n\nbut would be better if he could ...\nexecute block(numeric,numeric) returns numeric language plpgsql as $$ \nbegin if $1 > $2 then return $1; else return $2; end if; end;$$ \nUSING(5,3); \n\nThat USING would be params, but if it complicates it could be easily be\nreplaced by real values because that block is entirely created in run time,\nso its optional.\n\nWhat do you think about ? \nWhat part of postgres code do I have to carefully understand to write\nsomething to do that ?\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 11 Dec 2020 12:06:40 -0700 (MST)",
"msg_from": "PegoraroF10 <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "anonymous block returning like a function"
},
{
"msg_contents": "On 11/12/2020 21:06, PegoraroF10 wrote:\n> I would like to have an anonymous block, like DO, but having resuts, like an\n> usual function does.\n> \n> I know any user can do ...\n> \n> create function pg_temp.run_time_bigger(numeric,numeric) returns numeric\n> language plpgsql as $$\n> begin if $1 > $2 then return $1; else return $2; end if; end;$$;\n> select * from pg_temp.run_time_bigger(5,3);\n> drop function pg_temp.run_time_bigger(numeric,numeric);\n> \n> but would be better if he could ...\n> execute block(numeric,numeric) returns numeric language plpgsql as $$\n> begin if $1 > $2 then return $1; else return $2; end if; end;$$\n> USING(5,3);\n> \n> That USING would be params, but if it complicates it could be easily be\n> replaced by real values because that block is entirely created in run time,\n> so its optional.\n> \n> What do you think about ?\n\nYeah, I think that would be useful. This was actually proposed and \ndiscussed back in 2014 ([1], but it didn't lead to a patch. Not sure if \nit's been discussed again after that.\n\n> What part of postgres code do I have to carefully understand to write\n> something to do that ?\n\nHmm, let's see. You'll need to modify the grammar in src/backend/gram.y, \nto accept the USING clause. DoStmt struct needs a new 'params' field to \ncarry the params from the parser to the PL execution, I think you can \nlook at how that's done for ExecuteStmt or CallStmt for inspiration. \nExecuteDoStmt() needs some changes to pass the params to the 'laninline' \nhandler of the PL language. And finally, the 'laninline' implementations \nof all the built-in languages needs to be modified to accept the \nparameters, like plpgsql_compile_inline() function for PL/pgSQL. 
For \nlanguages provided as extensions, there should be some mechanism to fail \ngracefully, if the PL implementation hasn't been taught about the \nparameters yet.\n\n[1] \nhttps://www.postgresql.org/message-id/1410849538.4296.19.camel%40localhost\n\n- Heikki\n\n\n",
"msg_date": "Mon, 14 Dec 2020 15:31:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: anonymous block returning like a function"
},
{
"msg_contents": "po 14. 12. 2020 v 14:31 odesílatel Heikki Linnakangas <hlinnaka@iki.fi>\nnapsal:\n\n> On 11/12/2020 21:06, PegoraroF10 wrote:\n> > I would like to have an anonymous block, like DO, but having resuts,\n> like an\n> > usual function does.\n> >\n> > I know any user can do ...\n> >\n> > create function pg_temp.run_time_bigger(numeric,numeric) returns numeric\n> > language plpgsql as $$\n> > begin if $1 > $2 then return $1; else return $2; end if; end;$$;\n> > select * from pg_temp.run_time_bigger(5,3);\n> > drop function pg_temp.run_time_bigger(numeric,numeric);\n> >\n> > but would be better if he could ...\n> > execute block(numeric,numeric) returns numeric language plpgsql as $$\n> > begin if $1 > $2 then return $1; else return $2; end if; end;$$\n> > USING(5,3);\n> >\n> > That USING would be params, but if it complicates it could be easily be\n> > replaced by real values because that block is entirely created in run\n> time,\n> > so its optional.\n> >\n> > What do you think about ?\n>\n> Yeah, I think that would be useful. This was actually proposed and\n> discussed back in 2014 ([1], but it didn't lead to a patch. Not sure if\n> it's been discussed again after that.\n>\n> > What part of postgres code do I have to carefully understand to write\n> > something to do that ?\n>\n> Hmm, let's see. You'll need to modify the grammar in src/backend/gram.y,\n> to accept the USING clause. DoStmt struct needs a new 'params' field to\n> carry the params from the parser to the PL execution, I think you can\n> look at how that's done for ExecuteStmt or CallStmt for inspiration.\n> ExecuteDoStmt() needs some changes to pass the params to the 'laninline'\n> handler of the PL language. And finally, the 'laninline' implementations\n> of all the built-in languages needs to be modified to accept the\n> parameters, like plpgsql_compile_inline() function for PL/pgSQL. 
For\n> languages provided as extensions, there should be some mechanism to fail\n> gracefully, if the PL implementation hasn't been taught about the\n> parameters yet.\n>\n> [1]\n> https://www.postgresql.org/message-id/1410849538.4296.19.camel%40localhost\n\n\nParametrization of DO statement can be first step and just this\nfunctionality can be pretty useful. Today, the code can be modification of\nCALL statement.\n\nThere should be discussion if DO statement will be more like procedure or\nmore like function. Now, DO statement is more procedure than function. And\nI think so it is correct. Probably one day, the procedures can returns\nmultirecordsets, and then can be easy same functionality to push to DO\nstatement.\n\nOracle hace nice CTE enhancing\n\nWITH\n  FUNCTION with_function(p_id IN NUMBER) RETURN NUMBER IS\n  BEGIN\n    RETURN p_id;\n  END;\nSELECT with_function(id)\nFROM t1\nWHERE rownum = 1\n\nCan be nice to have this feature in Postgres. We don't need to invite\nnew syntax.\n\nRegards\n\nPavel\n\n\n>\n> - Heikki\n>\n>\n>",
"msg_date": "Mon, 14 Dec 2020 14:43:08 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: anonymous block returning like a function"
}
] |
[
{
"msg_contents": "Hi,\n\ncurrently a failed cast throws an error. It would be useful to have a \nway to get a default value instead.\n\nT-SQL has try_cast [1]\nOracle has CAST(... AS .. DEFAULT ... ON CONVERSION ERROR) [2]\n\nThe DEFAULT ... ON CONVERSION ERROR syntax seems like it could be \nimplemented in PostgreSQL. Even if only DEFAULT NULL was supported (at \nfirst) that would already help.\n\nThe short syntax could be extended for the DEFAULT NULL case, too:\n\nSELECT '...'::type -- throws error\nSELECT '...':::type -- returns NULL\n\nI couldn't find any previous discussion on this, please advise in case I \njust missed it.\n\nThoughts?\n\nBest\n\nWolfgang\n\n[1]: \nhttps://docs.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql\n[2]: \nhttps://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/CAST.html\n\n\n",
"msg_date": "Sat, 12 Dec 2020 10:13:40 +0100",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": true,
"msg_subject": "Suggestion: optionally return default value instead of error on\n failed cast"
},
{
"msg_contents": ">\n> currently a failed cast throws an error. It would be useful to have a\n> way to get a default value instead.\n>\n\nI've recently encountered situations where this would have been helpful.\nRecently I came across some client code:\n\nCREATE OR REPLACE FUNCTION is_valid_json(str text) RETURNS boolean LANGUAGE\nPLPGSQL\nAS $$\nDECLARE\n j json;\nBEGIN\n j := str::json;\n return true;\nEXCEPTION WHEN OTHERS THEN return false;\nEND\n$$;\n\n\nThis is a double-bummer. First, the function discards the value so we have\nto recompute it, and secondly, the exception block prevents the query from\nbeing parallelized.\n\n\n>\n> T-SQL has try_cast [1]\n>\n\nI'd be more in favor of this if we learn that there's no work (current or\nproposed) in the SQL standard.\n\n\n> Oracle has CAST(... AS .. DEFAULT ... ON CONVERSION ERROR) [2]\n>\n\nIf the SQL group has suggested anything, I'd bet it looks a lot like this.\n\n\n>\n> The DEFAULT ... ON CONVERSION ERROR syntax seems like it could be\n> implemented in PostgreSQL. Even if only DEFAULT NULL was supported (at\n> first) that would already help.\n>\n> The short syntax could be extended for the DEFAULT NULL case, too:\n>\n> SELECT '...'::type -- throws error\n> SELECT '...':::type -- returns NULL\n>\n\nI think I'm against adding a ::: operator, because too many people are\ngoing to type (or omit) the third : by accident, and that would be a really\nsubtle bug. The CAST/TRY_CAST syntax is wordy but it makes it very clear\nthat you expected janky input and have specified a contingency plan.\n\nThe TypeCast node seems like it wouldn't need too much modification to\nallow for this. The big lift, from what I can tell, is either creating\nversions of every $foo_in() function to return NULL instead of raising an\nerror, and then effectively wrapping that inside a coalesce() to process\nthe default. Alternatively, we could add an extra boolean parameter\n(\"nullOnFailure\"? \"suppressErrors\"?) 
to the existing $foo_in() functions, a\nboolean to return null instead of raising an error, and the default would\nbe handled in coerce_to_target_type(). Either of those would create a fair\namount of work for extensions that add types, but I think the value would\nbe worth it.\n\nI do remember when I proposed the \"void\"/\"black hole\"/\"meh\" datatype (all\nvalues map to NULL) I ran into a fairly fundamental rule that types must\nmap any not-null input to a not-null output, and this could potentially\nviolate that, but I'm not sure.\n\nDoes anyone know if the SQL standard has anything to say on this subject?",
"msg_date": "Tue, 4 Jan 2022 22:17:07 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion: optionally return default value instead of error on\n failed cast"
},
{
"msg_contents": "\nOn 1/4/22 22:17, Corey Huinker wrote:\n>\n> currently a failed cast throws an error. It would be useful to have a\n> way to get a default value instead.\n>\n>\n> I've recently encountered situations where this would have been\n> helpful. Recently I came across some client code:\n>\n> CREATE OR REPLACE FUNCTION is_valid_json(str text) RETURNS boolean\n> LANGUAGE PLPGSQL\n> AS $$\n> DECLARE\n> j json;\n> BEGIN\n> j := str::json;\n> return true;\n> EXCEPTION WHEN OTHERS THEN return false;\n> END\n> $$;\n>\n>\n> This is a double-bummer. First, the function discards the value so we\n> have to recompute it, and secondly, the exception block prevents the\n> query from being parallelized.\n\n\nThis particular case is catered for in the SQL/JSON patches which\nseveral people are currently reviewing:\n\n\nandrew=# select 'foo' is json;\n ?column?\n----------\n f\n(1 row)\n\nandrew=# select '\"foo\"' is json;\n ?column?\n----------\n t\n(1 row)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 6 Jan 2022 12:18:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion: optionally return default value instead of error on\n failed cast"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 12:18 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 1/4/22 22:17, Corey Huinker wrote:\n> >\n> > currently a failed cast throws an error. It would be useful to have a\n> > way to get a default value instead.\n> >\n> >\n> > I've recently encountered situations where this would have been\n> > helpful. Recently I came across some client code:\n> >\n> > CREATE OR REPLACE FUNCTION is_valid_json(str text) RETURNS boolean\n> > LANGUAGE PLPGSQL\n> > AS $$\n> > DECLARE\n> > j json;\n> > BEGIN\n> > j := str::json;\n> > return true;\n> > EXCEPTION WHEN OTHERS THEN return false;\n> > END\n> > $$;\n> >\n> >\n> > This is a double-bummer. First, the function discards the value so we\n> > have to recompute it, and secondly, the exception block prevents the\n> > query from being parallelized.\n>\n>\n> This particular case is catered for in the SQL/JSON patches which\n> several people are currently reviewing:\n>\n>\nThat's great to know, but it would still be parsing the json twice, once to\nlearn that it is legit json, and once to get the casted value.\n\nAlso, I had a similar issue with type numeric, so having generic \"x is a\ntype_y\" support would essentially do everything that a try_catch()-ish\nconstruct would need to do, and be more generic.",
"msg_date": "Thu, 6 Jan 2022 13:02:19 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion: optionally return default value instead of error on\n failed cast"
},
{
"msg_contents": "PostgreSQL is the only popular DBMS (define popular?) which doesn’t have \na friendly alternative. I asked about it on Stack \n(https://dba.stackexchange.com/questions/203934/postgresql-alternative-to-sql-server-s-try-cast-function/311980#311980), \nand ended up with the following:\n\n DROP FUNCTION IF EXISTS cast_int;\n CREATE FUNCTION cast_int(string varchar, planB int default null) RETURNS INT AS $$\n BEGIN\n RETURN floor(cast(string as numeric));\n EXCEPTION\n WHEN OTHERS THEN return planB;\n END\n $$ LANGUAGE plpgsql;\n\nObviously this is type-specific, but the point is that it’s not hard.\n\nBest Regards,\n\nMark\nOn 12/12/2020 8:13 pm, Wolfgang Walther wrote:\n> Hi,\n>\n> currently a failed cast throws an error. It would be useful to have a \n> way to get a default value instead.\n>\n> T-SQL has try_cast [1]\n> Oracle has CAST(... AS .. DEFAULT ... ON CONVERSION ERROR) [2]\n>\n> The DEFAULT ... ON CONVERSION ERROR syntax seems like it could be \n> implemented in PostgreSQL. Even if only DEFAULT NULL was supported (at \n> first) that would already help.\n>\n> The short syntax could be extended for the DEFAULT NULL case, too:\n>\n> SELECT '...'::type -- throws error\n> SELECT '...':::type -- returns NULL\n>\n> I couldn't find any previous discussion on this, please advise in case \n> I just missed it.\n>\n> Thoughts?\n>\n> Best\n>\n> Wolfgang\n>\n> [1]: \n> https://docs.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql\n> [2]: \n> https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/CAST.html\n\n-- \n\nMark Simon\nManngo Net Pty Ltd\nmobile: 0411 246 672\nemail: mark@manngo.net\nweb: http://www.manngo.net\nResume: http://mark.manngo.net",
"msg_date": "Sun, 14 Aug 2022 13:36:01 +1000",
"msg_from": "Mark Simon <mark@manngo.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestion: optionally return default value instead of error on\n failed cast"
}
] |
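A generic "cast with a default" in the spirit of the thread above can be built today in PL/pgSQL. This is an illustrative sketch, not a built-in: the name `safe_cast` and the template-value calling convention are assumptions, and, as Corey notes in the thread, the EXCEPTION block rules out parallel plans.

```sql
-- Hypothetical helper: the second argument supplies both the target type
-- (via the anyelement pseudo-type) and the value returned on a failed cast.
CREATE OR REPLACE FUNCTION safe_cast(str text, fallback anyelement)
RETURNS anyelement
LANGUAGE plpgsql AS $$
BEGIN
    -- Build and run the cast dynamically so the target type can vary.
    EXECUTE format('SELECT %L::%s', str, pg_typeof(fallback))
    INTO fallback;
    RETURN fallback;
EXCEPTION WHEN others THEN
    RETURN fallback;  -- the failed cast yields the caller's default
END
$$;
```

Usage mirrors the DEFAULT ... ON CONVERSION ERROR semantics discussed above: `safe_cast('abc', 0)` falls back to `0`, and `safe_cast('abc', NULL::int)` gives the DEFAULT NULL behavior.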
[
{
"msg_contents": "Hi,\nI was experimenting with the following query.\n\ncreate table sint1(k int primary key, arr smallint[]);\ncreate index s1 on sint1(arr);\ninsert into sint1 select s, array[s*s, s] FROM generate_series(1, 10) AS s;\nselect * from sint1 where arr @> array[4];\nERROR: operator does not exist: smallint[] @> integer[]\nLINE 1: select * from sint1 where arr @> array[4];\n ^\nHINT: No operator matches the given name and argument types. You might\nneed to add explicit type casts.\n-------\n\nI wonder if someone can enlighten me on the correct way to perform the type\ncast.\n\nThanks",
"msg_date": "Sun, 13 Dec 2020 09:43:10 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "query on smallint array column"
},
{
"msg_contents": "Hi\r\n\r\nne 13. 12. 2020 v 18:42 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\r\n\r\n> Hi,\r\n> I was experimenting with the following query.\r\n>\r\n> create table sint1(k int primary key, arr smallint[]);\r\n> create index s1 on sint1(arr);\r\n> insert into sint1 select s, array[s*s, s] FROM generate_series(1, 10) AS s;\r\n> select * from sint1 where arr @> array[4];\r\n> ERROR: operator does not exist: smallint[] @> integer[]\r\n> LINE 1: select * from sint1 where arr @> array[4];\r\n> ^\r\n> HINT: No operator matches the given name and argument types. You might\r\n> need to add explicit type casts.\r\n> -------\r\n>\r\n> I wonder if someone can enlighten me on the correct way to perform the\r\n> type cast.\r\n>\r\n\r\n\r\npostgres=# select * from sint1 where arr @> array[4::smallint];\r\n┌───┬────────┐\r\n│ k │ arr │\r\n╞═══╪════════╡\r\n│ 2 │ {4,2} │\r\n│ 4 │ {16,4} │\r\n└───┴────────┘\r\n(2 rows)\r\n\r\npostgres=# select * from sint1 where arr @> array[4]::smallint[];\r\n┌───┬────────┐\r\n│ k │ arr │\r\n╞═══╪════════╡\r\n│ 2 │ {4,2} │\r\n│ 4 │ {16,4} │\r\n└───┴────────┘\r\n(2 rows)\r\n\r\npostgres=#\r\n\r\n\r\n>\r\n> Thanks\r\n>\r\n",
"msg_date": "Sun, 13 Dec 2020 18:50:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: query on smallint array column"
},
{
"msg_contents": "Thanks Pavel for fast response.\r\n\r\nOn Sun, Dec 13, 2020 at 9:51 AM Pavel Stehule <pavel.stehule@gmail.com>\r\nwrote:\r\n\r\n> Hi\r\n>\r\n> ne 13. 12. 2020 v 18:42 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\r\n>\r\n>> Hi,\r\n>> I was experimenting with the following query.\r\n>>\r\n>> create table sint1(k int primary key, arr smallint[]);\r\n>> create index s1 on sint1(arr);\r\n>> insert into sint1 select s, array[s*s, s] FROM generate_series(1, 10) AS\r\n>> s;\r\n>> select * from sint1 where arr @> array[4];\r\n>> ERROR: operator does not exist: smallint[] @> integer[]\r\n>> LINE 1: select * from sint1 where arr @> array[4];\r\n>> ^\r\n>> HINT: No operator matches the given name and argument types. You might\r\n>> need to add explicit type casts.\r\n>> -------\r\n>>\r\n>> I wonder if someone can enlighten me on the correct way to perform the\r\n>> type cast.\r\n>>\r\n>\r\n>\r\n> postgres=# select * from sint1 where arr @> array[4::smallint];\r\n> ┌───┬────────┐\r\n> │ k │ arr │\r\n> ╞═══╪════════╡\r\n> │ 2 │ {4,2} │\r\n> │ 4 │ {16,4} │\r\n> └───┴────────┘\r\n> (2 rows)\r\n>\r\n> postgres=# select * from sint1 where arr @> array[4]::smallint[];\r\n> ┌───┬────────┐\r\n> │ k │ arr │\r\n> ╞═══╪════════╡\r\n> │ 2 │ {4,2} │\r\n> │ 4 │ {16,4} │\r\n> └───┴────────┘\r\n> (2 rows)\r\n>\r\n> postgres=#\r\n>\r\n>\r\n>>\r\n>> Thanks\r\n>>\r\n>\r\n",
"msg_date": "Sun, 13 Dec 2020 10:42:09 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: query on smallint array column"
}
] |
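One point the thread above stops short of: the b-tree index it creates (`create index s1 on sint1(arr)`) cannot serve the `@>` containment operator at all; only a GIN index can. A brief sketch under the thread's own schema (the index name `s1_gin` is made up here; GIN's built-in `array_ops` covers `smallint[]`):

```sql
-- A b-tree on an array column only helps equality/ordering of whole arrays;
-- containment (@>) needs GIN:
CREATE INDEX s1_gin ON sint1 USING gin (arr);

-- Both explicit-cast spellings from the thread can then qualify for it:
SELECT * FROM sint1 WHERE arr @> ARRAY[4::smallint];
SELECT * FROM sint1 WHERE arr @> ARRAY[4]::smallint[];
```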
[
{
"msg_contents": "Hello,\n\nWe have two logical replication slots in our postgresql database\n(version-11) instance and we are using pgJDBC to stream data from these two\nslots. We are ensuring that when we regularly send feedback and update the\nconfirmed_flush_lsn (every 10 minutes) for both the slots to the same\nposition. However From our data we have seen that the restart_lsn movement\nof the two are not in sync and most of the time one of them lags too far\nbehind to hold the WAL files unnecessarily. Here are some data points to\nindicate the problem\n\nThu Dec 10 05:37:13 CET 2020\n slot_name | restart_lsn |\nconfirmed_flush_lsn\n--------------------------------------+---------------+---------------------\n db_dsn_metadata_src_private | 48FB/F3000208 | 48FB/F3000208\n db_dsn_metadata_src_shared | 48FB/F3000208 | 48FB/F3000208\n(2 rows)\n\n\n\nThu Dec 10 13:53:46 CET 2020\n slot_name | restart_lsn |\nconfirmed_flush_lsn\n-------------------------------------+---------------+---------------------\n db_dsn_metadata_src_private | 48FC/2309B150 | 48FC/233AA1D0\n db_dsn_metadata_src_shared | 48FC/233AA1D0 | 48FC/233AA1D0\n(2 rows)\n\n\nThu Dec 10 17:13:51 CET 2020\n slot_name | restart_lsn |\nconfirmed_flush_lsn\n-------------------------------------+---------------+---------------------\n db_dsn_metadata_src_private | 4900/B4C3AE8 | 4900/94FDF908\n db_dsn_metadata_src_shared | 48FD/D2F66F10 | 4900/94FDF908\n(2 rows)\n\nThough we are using setFlushLsn() and forceStatusUpdate for both the slot's\nstream regularly still the slot with name private is far behind the\nconfirmed_flush_lsn and slot with name shared is also behind with\nconfirmed_flush_lsn but not too far. 
Since the restart_lsn is not moving\nfast enough, causing lot of issues with WAL log file management and not\nallowing to delete them to free up disk space\n\n\nPlease note that for the second slot we are not doing reading from the\nstream rather just sending the feedback.\n\nHow can this problem be solved? Are there any general guidelines to\novercome this issue ?\n\nRegards\n\nShailesh",
"msg_date": "Mon, 14 Dec 2020 09:29:54 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 9:30 AM Jammie <shailesh.jamloki@gmail.com> wrote:\n>\n> Hello,\n>\n> We have two logical replication slots in our postgresql database (version-11) instance and we are using pgJDBC to stream data from these two slots.\n>\n\nIIUC, you are using some out-of-core outputplugin to stream the data?\nAre you using in walsender mechanism to decode the changes from slots\nor via SQL APIs?\n\n> We are ensuring that when we regularly send feedback and update the confirmed_flush_lsn (every 10 minutes) for both the slots to the same position. However From our data we have seen that the restart_lsn movement of the two are not in sync and most of the time one of them lags too far behind to hold the WAL files unnecessarily. Here are some data points to indicate the problem\n>\n> Thu Dec 10 05:37:13 CET 2020\n> slot_name | restart_lsn | confirmed_flush_lsn\n> --------------------------------------+---------------+---------------------\n> db_dsn_metadata_src_private | 48FB/F3000208 | 48FB/F3000208\n> db_dsn_metadata_src_shared | 48FB/F3000208 | 48FB/F3000208\n> (2 rows)\n>\n>\n>\n> Thu Dec 10 13:53:46 CET 2020\n> slot_name | restart_lsn | confirmed_flush_lsn\n> -------------------------------------+---------------+---------------------\n> db_dsn_metadata_src_private | 48FC/2309B150 | 48FC/233AA1D0\n> db_dsn_metadata_src_shared | 48FC/233AA1D0 | 48FC/233AA1D0\n> (2 rows)\n>\n>\n> Thu Dec 10 17:13:51 CET 2020\n> slot_name | restart_lsn | confirmed_flush_lsn\n> -------------------------------------+---------------+---------------------\n> db_dsn_metadata_src_private | 4900/B4C3AE8 | 4900/94FDF908\n> db_dsn_metadata_src_shared | 48FD/D2F66F10 | 4900/94FDF908\n> (2 rows)\n>\n> Though we are using setFlushLsn() and forceStatusUpdate for both the slot's stream regularly still the slot with name private is far behind the confirmed_flush_lsn and slot with name shared is also behind with confirmed_flush_lsn but not too far. 
Since the restart_lsn is not moving fast enough, causing lot of issues with WAL log file management and not allowing to delete them to free up disk space\n>\n\nWhat is this setFlushLsn? I am not able to find in the PG-code. If it\nis some outside code reference then please provide the link to code.\nIn general, the restart_lsn and confirmed_flush_lsn are advanced in\ndifferent ways so you might see some difference but it should not be\nthis much. The confirmed_flush_lsn is updated when we get confirmation\nfrom the downstream node about the flush_lsn but restart_lsn is only\nincremented based on the LSN required by the oldest in-progress\ntransaction.\n\n>\n> Please note that for the second slot we are not doing reading from the stream rather just sending the feedback.\n>\n\nHere does the second slot refers to 'shared' or 'private'? It is not\nvery clear what you mean by \"we are not doing reading from the\nstream', do you mean to say that decoding happens in the slot but the\noutput plugin just throws away the streamed data and in the end just\nsend the feedback?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Dec 2020 16:53:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "Thanks Amit for the response\n\nWe are using the pgJDBC sample program here:\nhttps://jdbc.postgresql.org/documentation/head/replication.html\n\nThe setFlushLSN is coming from pgJDBC only.\n\nGitHub repository for the available pgJDBC APIs:\n\nhttps://github.com/pgjdbc/pgjdbc\n\nThe second slot refers to the \"private\" slot.\n\nSo \"we are not doing reading from the stream\" ==> It means that we are\nmaking the readPending call only on the shared slot; then we get the\nlastReceivedLSN() from the stream and\nsend it back to the stream as confirmed_flush_lsn for both the private and shared\nslots. We don't make the readPending call on the private slot. We will use the private slot\nonly when we don't have a choice. It is kind of a reserve slot for us.\n\nWe are also doing forceUpdateStatus() for both the slots.\n\nQuestions:\n1) The confirmed_flush_lsn is updated when we get confirmation\nfrom the downstream node about the flush_lsn but restart_lsn is only\nincremented based on the LSN required by the oldest in-progress\ntransaction. ==> As explained above we are updating (setFlushLSN, an API to\nupdate confirmed_flush_lsn) both the slots with the same LSN. 
So I don't\nunderstand why one lags behind.\n\n2) What are the other factors that might cause a delay in updating\nthe restart_lsn of the slot?\n3) In PG 13, does this behaviour change?\n\nRegards\nShailesh\n\nOn Mon, Dec 14, 2020 at 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Dec 14, 2020 at 9:30 AM Jammie <shailesh.jamloki@gmail.com> wrote:\n> >\n> > Hello,\n> >\n> > We have two logical replication slots in our postgresql database\n> (version-11) instance and we are using pgJDBC to stream data from these two\n> slots.\n> >\n>\n> IIUC, you are using some out-of-core outputplugin to stream the data?\n> Are you using in walsender mechanism to decode the changes from slots\n> or via SQL APIs?\n>\n> > We are ensuring that when we regularly send feedback and update the\n> confirmed_flush_lsn (every 10 minutes) for both the slots to the same\n> position. However From our data we have seen that the restart_lsn movement\n> of the two are not in sync and most of the time one of them lags too far\n> behind to hold the WAL files unnecessarily. 
Here are some data points to\n> indicate the problem\n> >\n> > Thu Dec 10 05:37:13 CET 2020\n> > slot_name | restart_lsn |\n> confirmed_flush_lsn\n> >\n> --------------------------------------+---------------+---------------------\n> > db_dsn_metadata_src_private | 48FB/F3000208 | 48FB/F3000208\n> > db_dsn_metadata_src_shared | 48FB/F3000208 | 48FB/F3000208\n> > (2 rows)\n> >\n> >\n> >\n> > Thu Dec 10 13:53:46 CET 2020\n> > slot_name | restart_lsn |\n> confirmed_flush_lsn\n> >\n> -------------------------------------+---------------+---------------------\n> > db_dsn_metadata_src_private | 48FC/2309B150 | 48FC/233AA1D0\n> > db_dsn_metadata_src_shared | 48FC/233AA1D0 | 48FC/233AA1D0\n> > (2 rows)\n> >\n> >\n> > Thu Dec 10 17:13:51 CET 2020\n> > slot_name | restart_lsn |\n> confirmed_flush_lsn\n> >\n> -------------------------------------+---------------+---------------------\n> > db_dsn_metadata_src_private | 4900/B4C3AE8 | 4900/94FDF908\n> > db_dsn_metadata_src_shared | 48FD/D2F66F10 | 4900/94FDF908\n> > (2 rows)\n> >\n> > Though we are using setFlushLsn() and forceStatusUpdate for both the\n> slot's stream regularly still the slot with name private is far behind the\n> confirmed_flush_lsn and slot with name shared is also behind with\n> confirmed_flush_lsn but not too far. Since the restart_lsn is not moving\n> fast enough, causing lot of issues with WAL log file management and not\n> allowing to delete them to free up disk space\n> >\n>\n> What is this setFlushLsn? I am not able to find in the PG-code. If it\n> is some outside code reference then please provide the link to code.\n> In general, the restart_lsn and confirmed_flush_lsn are advanced in\n> different ways so you might see some difference but it should not be\n> this much. 
The confirmed_flush_lsn is updated when we get confirmation\n> from the downstream node about the flush_lsn but restart_lsn is only\n> incremented based on the LSN required by the oldest in-progress\n> transaction.\n>\n> >\n> > Please note that for the second slot we are not doing reading from the\n> stream rather just sending the feedback.\n> >\n>\n> Here does the second slot refers to 'shared' or 'private'? It is not\n> very clear what you mean by \"we are not doing reading from the\n> stream', do you mean to say that decoding happens in the slot but the\n> output plugin just throws away the streamed data and in the end just\n> send the feedback?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nThanks Amit for the responseWe are using pgJDBC sample program here https://jdbc.postgresql.org/documentation/head/replication.htmlthe setFlushLSN is coming from the pgJDBC only.git hub for APIs of pgJDBC methods available. https://github.com/pgjdbc/pgjdbcThe second slot refers to \"private\" slot.So \"\"we are not doing reading from the stream' ==> It means that we are having readPending call only from the shared slot then we get the lastReceivedLSN() from stream andsend it back to stream as confirmed_flush_lsn for both private and shared slot. We dont do readPending call to private slot. we will use private slot only when we dont have choice. It is kind of reserver slot for us.We are also doing forceUpdateStatus for both the slots().Questions :1) The confirmed_flush_lsn is updated when we get confirmation\nfrom the downstream node about the flush_lsn but restart_lsn is only\nincremented based on the LSN required by the oldest in-progress\ntransaction. ==> As explained above we are updating (setFlshLSN an API to update confirmed_flush_lsn) both the slots with same LSN. So dont understand why one leaves behind.2) What are the other factors that might cause delay in updating restart_lsn of the slot ?3) In PG -13 does this behaviour change ? 
Regards\nShailesh",
"msg_date": "Tue, 15 Dec 2020 11:00:11 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
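The lag visible in the samples quoted above can be computed directly from the printed LSNs: an LSN such as 48FB/F3000208 is a 64-bit WAL byte position, printed as the high and low 32 bits in hexadecimal separated by '/'. A minimal Python sketch of the arithmetic (helper names here are invented for illustration; on the server, pg_wal_lsn_diff() returns the same quantity):

```python
# Illustrative helpers (invented names, not part of PostgreSQL or pgJDBC):
# convert PostgreSQL's textual LSN format into a 64-bit byte position and
# measure how much WAL lies between two positions.
def lsn_to_int(lsn: str) -> int:
    """Parse 'hi/lo' hexadecimal LSN text into a 64-bit byte offset."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def lsn_lag_bytes(older: str, newer: str) -> int:
    """Bytes of WAL between two LSNs (what pg_wal_lsn_diff() computes)."""
    return lsn_to_int(newer) - lsn_to_int(older)

# The 17:13:51 sample above: the "private" slot's restart_lsn trails its
# own confirmed_flush_lsn by over 2 GiB of WAL, all of which must be
# retained on disk until restart_lsn advances.
lag = lsn_lag_bytes("4900/B4C3AE8", "4900/94FDF908")
print(lag, round(lag / 1024**3, 2))
```

On a live server the equivalent check is a single query: `SELECT slot_name, pg_wal_lsn_diff(confirmed_flush_lsn, restart_lsn) FROM pg_replication_slots;`.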
{
"msg_contents": "On Tue, Dec 15, 2020 at 11:00 AM Jammie <shailesh.jamloki@gmail.com> wrote:\n>\n> Thanks Amit for the response\n>\n> We are using pgJDBC sample program here\n> https://jdbc.postgresql.org/documentation/head/replication.html\n>\n> the setFlushLSN is coming from the pgJDBC only.\n>\n> git hub for APIs of pgJDBC methods available.\n>\n> https://github.com/pgjdbc/pgjdbc\n>\n> The second slot refers to \"private\" slot.\n>\n> So \"\"we are not doing reading from the stream' ==> It means that we are having readPending call only from the shared slot then we get the lastReceivedLSN() from stream and\n> send it back to stream as confirmed_flush_lsn for both private and shared slot. We dont do readPending call to private slot. we will use private slot only when we dont have choice. It is kind of reserver slot for us.\n>\n\nI think this (not performing read/decode on the private slot) could be\nthe reason why it is lagging behind. If you want to use it as a reserve slot\nthen you probably want to at least perform\npg_replication_slot_advance() to move it to the required position. The\nrestart_lsn won't move unless you read/decode from that slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 15 Dec 2020 18:32:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "Thanks Amit for the response.\nTwo things :\n1) In our observation via PSQL the advance command as well do not move the\nrestart_lsn immediately. It is similar to our approach that use the\nconfirmed_flush_lsn via stream\n2) I am ok to understand the point that we are not reading from the stream\nso we might be facing the issue. But the question is why we are able to\nmove the restart_lsn most of the time by updating the confirmed_flush_lsn\nvia pgJDBC. But only occasionally it lags behind too far behind.\n\nRegards\nShailesh\n\n\n\nOn Tue, Dec 15, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Dec 15, 2020 at 11:00 AM Jammie <shailesh.jamloki@gmail.com>\n> wrote:\n> >\n> > Thanks Amit for the response\n> >\n> > We are using pgJDBC sample program here\n> > https://jdbc.postgresql.org/documentation/head/replication.html\n> >\n> > the setFlushLSN is coming from the pgJDBC only.\n> >\n> > git hub for APIs of pgJDBC methods available.\n> >\n> > https://github.com/pgjdbc/pgjdbc\n> >\n> > The second slot refers to \"private\" slot.\n> >\n> > So \"\"we are not doing reading from the stream' ==> It means that we are\n> having readPending call only from the shared slot then we get the\n> lastReceivedLSN() from stream and\n> > send it back to stream as confirmed_flush_lsn for both private and\n> shared slot. We dont do readPending call to private slot. we will use\n> private slot only when we dont have choice. It is kind of reserver slot for\n> us.\n> >\n>\n> I think this (not performing read/decode on the private slot) could be\n> the reason why it lagging behind. If you want to use as a reserve slot\n> then you probably want to at least perform\n> pg_replication_slot_advance() to move it to the required position. 
The\n> restart_lsn won't move unless you read/decode from that slot.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Wed, 23 Dec 2020 19:05:56 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "However when the situation comes and that one slot gets behind, it never\nrecovers, and there is no way to recover from this situation even after reading using\nadvance or pg_logical_get_changes sql command.\n\nOn Wed, Dec 23, 2020 at 7:05 PM Jammie <shailesh.jamloki@gmail.com> wrote:\n\n> Thanks Amit for the response.\n> Two things :\n> 1) In our observation via PSQL the advance command as well do not move the\n> restart_lsn immediately. It is similar to our approach that use the\n> confirmed_flush_lsn via stream\n> 2) I am ok to understand the point that we are not reading from the stream\n> so we might be facing the issue. But the question is why we are able to\n> move the restart_lsn most of the time by updating the confirmed_flush_lsn\n> via pgJDBC. But only occasionally it lags behind too far behind.\n>\n> Regards\n> Shailesh\n>\n>\n>\n> On Tue, Dec 15, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Tue, Dec 15, 2020 at 11:00 AM Jammie <shailesh.jamloki@gmail.com>\n>> wrote:\n>> >\n>> > Thanks Amit for the response\n>> >\n>> > We are using pgJDBC sample program here\n>> > https://jdbc.postgresql.org/documentation/head/replication.html\n>> >\n>> > the setFlushLSN is coming from the pgJDBC only.\n>> >\n>> > git hub for APIs of pgJDBC methods available.\n>> >\n>> > https://github.com/pgjdbc/pgjdbc\n>> >\n>> > The second slot refers to \"private\" slot.\n>> >\n>> > So \"\"we are not doing reading from the stream' ==> It means that we are\n>> having readPending call only from the shared slot then we get the\n>> lastReceivedLSN() from stream and\n>> > send it back to stream as confirmed_flush_lsn for both private and\n>> shared slot. We dont do readPending call to private slot. we will use\n>> private slot only when we dont have choice. It is kind of reserver slot for\n>> us.\n>> >\n>>\n>> I think this (not performing read/decode on the private slot) could be\n>> the reason why it lagging behind. 
If you want to use as a reserve slot\n>> then you probably want to at least perform\n>> pg_replication_slot_advance() to move it to the required position. The\n>> restart_lsn won't move unless you read/decode from that slot.\n>>\n>> --\n>> With Regards,\n>> Amit Kapila.\n>>\n>",
"msg_date": "Thu, 24 Dec 2020 11:37:52 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 7:06 PM Jammie <shailesh.jamloki@gmail.com> wrote:\n>\n> Thanks Amit for the response.\n> Two things :\n> 1) In our observation via PSQL the advance command as well do not move the restart_lsn immediately. It is similar to our approach that use the confirmed_flush_lsn via stream\n> 2) I am ok to understand the point that we are not reading from the stream so we might be facing the issue. But the question is why we are able to move the restart_lsn most of the time by updating the confirmed_flush_lsn via pgJDBC. But only occasionally it lags behind too far behind.\n>\n\nI am not sure why you are seeing such behavior. Is it possible for you\nto debug the code? Both confirmed_flush_lsn and restart_lsn are\nadvanced in LogicalConfirmReceivedLocation. You can add elog to print\nthe values to see the progress. Here, the point to note is that even\nthough we update confirmed_flush_lsn every time with the new value but\nrestart_lsn is updated only when candidate_restart_valid has a valid\nvalue each time after a call to LogicalConfirmReceivedLocation. We\nupdate candidate_restart_valid in\nLogicalIncreaseRestartDecodingForSlot which is called only during\ndecoding of XLOG_RUNNING_XACTS record. So, it is not clear to me how\nin your case restart_lsn is getting advanced without decode? I think\nif you add some elogs in the code to track the values of\ncandidate_restart_valid, confirmed_flush_lsn, and restart_lsn, you\nmight get some clue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Dec 2020 12:30:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
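Amit's two-step description can be condensed into a toy model (a simplification for illustration only; SlotModel is an invented name, and the real logic is the C code he cites): confirmed_flush_lsn tracks client feedback directly, while restart_lsn moves only when a candidate proposed during decoding of an XLOG_RUNNING_XACTS record becomes valid — which is why sending feedback without any decoding never advances it.

```python
# Toy model of the interplay Amit describes (not the real C code).
class SlotModel:
    def __init__(self):
        self.confirmed_flush_lsn = 0
        self.restart_lsn = 0
        self.candidate_restart_lsn = None    # proposed new restart point
        self.candidate_restart_valid = None  # LSN at which it takes effect

    def decode_running_xacts(self, restart_lsn, valid_at_lsn):
        # In the real code, LogicalIncreaseRestartDecodingForSlot() sets
        # these candidates while decoding XLOG_RUNNING_XACTS records.
        self.candidate_restart_lsn = restart_lsn
        self.candidate_restart_valid = valid_at_lsn

    def confirm_received(self, lsn):
        # Mirrors LogicalConfirmReceivedLocation: the flush position always
        # advances, the restart position only via a validated candidate.
        self.confirmed_flush_lsn = max(self.confirmed_flush_lsn, lsn)
        if (self.candidate_restart_valid is not None
                and lsn >= self.candidate_restart_valid):
            self.restart_lsn = self.candidate_restart_lsn
            self.candidate_restart_valid = None

slot = SlotModel()
slot.confirm_received(100)   # feedback alone: restart_lsn stays at 0
slot.decode_running_xacts(restart_lsn=90, valid_at_lsn=120)
slot.confirm_received(130)   # candidate became valid: restart_lsn jumps
print(slot.confirmed_flush_lsn, slot.restart_lsn)  # 130 90
```

The model also shows the reported failure mode: a slot that only receives feedback produces no candidates, so its restart_lsn never moves.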
{
"msg_contents": "Sorry dont have the debug setup handy. However the sql commands now works\nthough to move the restart_lsn of the slots in standlone code from psql.\n\n A few followup questions.\n\nWhat is catalog_xmin in the pg_replication_slots ? and how is it playing\nrole in moving the restart_lsn of the slot.\n\nI am just checking possibility that if a special transaction can cause\nprivate slot to stale ?\n\nI do see that in the private slot catalog_xmin also stuck along with\nrestart_lsn. Though from JDBC code confirmed_flush_lsn is updated correctly\nin the pg_replication_slots;\n\nRegards\nShailesh\n\nOn Thu, Dec 24, 2020 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Wed, Dec 23, 2020 at 7:06 PM Jammie <shailesh.jamloki@gmail.com> wrote:\n> >\n> > Thanks Amit for the response.\n> > Two things :\n> > 1) In our observation via PSQL the advance command as well do not move\n> the restart_lsn immediately. It is similar to our approach that use the\n> confirmed_flush_lsn via stream\n> > 2) I am ok to understand the point that we are not reading from the\n> stream so we might be facing the issue. But the question is why we are able\n> to move the restart_lsn most of the time by updating the\n> confirmed_flush_lsn via pgJDBC. But only occasionally it lags behind too\n> far behind.\n> >\n>\n> I am not sure why you are seeing such behavior. Is it possible for you\n> to debug the code? Both confirmed_flush_lsn and restart_lsn are\n> advanced in LogicalConfirmReceivedLocation. You can add elog to print\n> the values to see the progress. Here, the point to note is that even\n> though we update confirmed_flush_lsn every time with the new value but\n> restart_lsn is updated only when candidate_restart_valid has a valid\n> value each time after a call to LogicalConfirmReceivedLocation. We\n> update candidate_restart_valid in\n> LogicalIncreaseRestartDecodingForSlot which is called only during\n> decoding of XLOG_RUNNING_XACTS record. 
So, it is not clear to me how\n> in your case restart_lsn is getting advanced without decode? I think\n> if you add some elogs in the code to track the values of\n> candidate_restart_valid, confirmed_flush_lsn, and restart_lsn, you\n> might get some clue.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Thu, 24 Dec 2020 19:30:30 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 7:30 PM Jammie <shailesh.jamloki@gmail.com> wrote:\n>\n> Sorry dont have the debug setup handy. However the sql commands now works though to move the restart_lsn of the slots in standlone code from psql.\n>\n> A few followup questions.\n>\n> What is catalog_xmin in the pg_replication_slots ? and how is it playing role in moving the restart_lsn of the slot.\n>\n> I am just checking possibility that if a special transaction can cause private slot to stale ?\n>\n\nYeah, it is possible if some old transaction is active in the\nsystem. The restart_lsn is the LSN required by the oldest txn. But it is\nstrange that it affects only one of the slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Dec 2020 19:45:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
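Amit's point that restart_lsn is the LSN required by the oldest in-progress transaction can be sketched the same way (a hypothetical helper for illustration, not PostgreSQL code): the slot can never restart later than the start of the oldest transaction it may still have to re-decode, so a single long-running transaction pins restart_lsn — and, analogously, catalog_xmin — no matter how often the client confirms a flush.

```python
# Toy illustration (not PostgreSQL code): the restartable position is
# capped by the oldest transaction that is still in progress.
def computed_restart_lsn(in_progress_txn_start_lsns, decoded_up_to):
    """Restart no later than the oldest running txn's start LSN."""
    if in_progress_txn_start_lsns:
        return min(in_progress_txn_start_lsns)
    return decoded_up_to

# With no open transactions, the restart point can follow decoding.
print(computed_restart_lsn([], decoded_up_to=500))          # 500
# A transaction opened at LSN 120 is still running: restart is pinned.
print(computed_restart_lsn([120, 480], decoded_up_to=500))  # 120
```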
{
"msg_contents": "Hi Amit,\nThanks for the response .\nCan you please let me know what pg_current_wal_lsn returns ?\n\nis this position the LSN of the next log record to be created, or is it the\nLSN of the last log record already created and inserted in the log?\n\nThe document says\n- it returns current WAL write location.\n\nRegards\nShailesh\n\nOn Thu, 24 Dec, 2020, 7:43 pm Amit Kapila, <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Dec 24, 2020 at 7:30 PM Jammie <shailesh.jamloki@gmail.com> wrote:\n> >\n> > Sorry dont have the debug setup handy. However the sql commands now\n> works though to move the restart_lsn of the slots in standlone code from\n> psql.\n> >\n> > A few followup questions.\n> >\n> > What is catalog_xmin in the pg_replication_slots ? and how is it playing\n> role in moving the restart_lsn of the slot.\n> >\n> > I am just checking possibility that if a special transaction can cause\n> private slot to stale ?\n> >\n>\n> Yeah, it is possible if there is some old transaction is active in the\n> system. The restart_lsn is lsn required by the oldesttxn. But it is\n> strange that it affects only one of the slots.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 12 Jan 2021 09:15:11 +0530",
"msg_from": "Jammie <shailesh.jamloki@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
},
{
"msg_contents": "On Tue, Jan 12, 2021 at 9:15 AM Jammie <shailesh.jamloki@gmail.com> wrote:\n>\n> Hi Amit,\n> Thanks for the response .\n> Can you please let me know what pg_current_wal_lsn returns ?\n>\n> is this position the LSN of the next log record to be created, or is it the LSN of the last log record already created and inserted in the log?\n>\n\nThis is the position up to which we have already written the WAL to\nthe kernel but not yet flushed to disk.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 11:09:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Movement of restart_lsn position movement of logical replication\n slots is very slow"
}
] |
[
{
"msg_contents": "At Mon, 14 Dec 2020 16:48:05 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Dec 14, 2020 at 11:34:51AM +0900, Kyotaro Horiguchi wrote:\n> > Apart from this issue, while checking that, I noticed that if server\n> > starts having WALs from a server of a different systemid, the server\n> > stops with obscure messages.\n> \n> Wouldn't it be better to discuss that on a separate thread? I have\n> mostly missed your message here.\n\nRight. Here is the duplicate of the message. Thanks for the\nsuggestion!\n\n=====\nWhile in another discussion related to xlogreader[2], I noticed that\nif the server starts with WAL from a server with a different system\nidentifier, it stops with obscure messages.\n\n\n> LOG: database system was shut down at 2020-12-14 10:36:02 JST\n> LOG: invalid primary checkpoint record\n> PANIC: could not locate a valid checkpoint record\n\nThe cause is that XLogPageRead erases the error message set by\nXLogReaderValidatePageHeader(). As the comment just above it says, this\nis required to continue replication in a certain situation: the code\naims to let replication continue when the first half of a continued\nrecord has been removed on the primary, so the amendment is not needed\nunless we're in standby mode. If we limit that rescuing code to\nStandbyMode, we get the correct error message:\n\n> JST LOG: database system was shut down at 2020-12-14 10:36:02 JST\n> LOG: WAL file is from different database system: WAL file database system identifier is 6905923817995618754, pg_control database system identifier is 6905924227171453468\n> JST LOG: invalid primary checkpoint record\n> JST PANIC: could not locate a valid checkpoint record\n\nI confirmed 0668719801 still works under the intended context using\nthe steps shown in [1].\n\n\n[1]: https://www.postgresql.org/message-id/flat/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com\n\n[2]: https://www.postgresql.org/message-id/flat/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 14 Dec 2020 18:04:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some error messages are omitted while recovery."
}
] |
[
{
"msg_contents": "The name of the function suggests that the given message will be queued in\nReorderBuffer. The prologue of the function says so too\n 776 /*\n 777 * Queue message into a transaction so it can be processed upon commit.\n 778 */\nIt led me to think that a non-transactional message is processed along with\nthe surrounding transaction, esp. when it has an associated xid.\n\nBut in reality, the function queues only a transactional message and\ndecoders a non-transactional message immediately without waiting for a\ncommit.\n\nWe should modify the prologue to say\n\"Queue a transactional message into a transaction so that it can be\nprocessed upon commit. A non-transactional message is processed\nimmediately.\" and also change the name of the function\nto ReorderBufferProcessMessage(), but the later may break API compatibility.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 14 Dec 2020 14:44:39 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Misleading comment in prologue of ReorderBufferQueueMessage"
},
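The distinction Ashutosh describes can be sketched as a toy model (ReorderBufferModel is an invented name, not the actual C structure in reorderbuffer.c): a transactional message is queued inside its transaction and handed to the output plugin only at commit, while a non-transactional message is handed over immediately, even when an xid happens to be associated with it.

```python
# Toy model of the described behavior (not the real ReorderBuffer code):
# transactional messages wait for commit, non-transactional ones do not.
class ReorderBufferModel:
    def __init__(self, deliver):
        self.deliver = deliver   # output-plugin callback
        self.txns = {}           # xid -> messages queued for commit

    def queue_message(self, xid, transactional, message):
        if transactional:
            self.txns.setdefault(xid, []).append(message)
        else:
            self.deliver(message)   # processed immediately

    def commit(self, xid):
        for message in self.txns.pop(xid, []):
            self.deliver(message)   # processed upon commit

delivered = []
rb = ReorderBufferModel(delivered.append)
rb.queue_message(xid=1, transactional=True, message="t1")
rb.queue_message(xid=1, transactional=False, message="n1")
print(delivered)   # ['n1'] -- non-transactional: immediate
rb.commit(1)
print(delivered)   # ['n1', 't1'] -- transactional: upon commit
```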
{
"msg_contents": "On Mon, Dec 14, 2020 at 2:45 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> The name of the function suggests that the given message will be queued in ReorderBuffer. The prologue of the function says so too\n> 776 /*\n> 777 * Queue message into a transaction so it can be processed upon commit.\n> 778 */\n> It led me to think that a non-transactional message is processed along with the surrounding transaction, esp. when it has an associated xid.\n>\n> But in reality, the function queues only a transactional message and decoders a non-transactional message immediately without waiting for a commit.\n>\n> We should modify the prologue to say\n> \"Queue a transactional message into a transaction so that it can be processed upon commit. A non-transactional message is processed immediately.\" and also change the name of the function to ReorderBufferProcessMessage(), but the later may break API compatibility.\n>\n\n+1 for the comment change but I am not sure if it is a good idea to\nchange the API name.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Dec 2020 15:16:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Misleading comment in prologue of ReorderBufferQueueMessage"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 3:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Dec 14, 2020 at 2:45 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > The name of the function suggests that the given message will be queued\n> in ReorderBuffer. The prologue of the function says so too\n> > 776 /*\n> > 777 * Queue message into a transaction so it can be processed upon\n> commit.\n> > 778 */\n> > It led me to think that a non-transactional message is processed along\n> with the surrounding transaction, esp. when it has an associated xid.\n> >\n> > But in reality, the function queues only a transactional message and\n> decoders a non-transactional message immediately without waiting for a\n> commit.\n> >\n> > We should modify the prologue to say\n> > \"Queue a transactional message into a transaction so that it can be\n> processed upon commit. A non-transactional message is processed\n> immediately.\" and also change the name of the function to\n> ReorderBufferProcessMessage(), but the later may break API compatibility.\n> >\n>\n> +1 for the comment change but I am not sure if it is a good idea to\n> change the API name.\n>\n> Can you please review wording? I will create a patch with updated wording.\n-- \n--\nBest Wishes,\nAshutosh",
"msg_date": "Tue, 15 Dec 2020 11:25:00 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Misleading comment in prologue of ReorderBufferQueueMessage"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 11:25 AM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> On Mon, Dec 14, 2020 at 3:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Dec 14, 2020 at 2:45 PM Ashutosh Bapat\n>> <ashutosh.bapat@enterprisedb.com> wrote:\n>> >\n>> > The name of the function suggests that the given message will be queued in ReorderBuffer. The prologue of the function says so too\n>> > 776 /*\n>> > 777 * Queue message into a transaction so it can be processed upon commit.\n>> > 778 */\n>> > It led me to think that a non-transactional message is processed along with the surrounding transaction, esp. when it has an associated xid.\n>> >\n>> > But in reality, the function queues only a transactional message and decodes a non-transactional message immediately without waiting for a commit.\n>> >\n>> > We should modify the prologue to say\n>> > \"Queue a transactional message into a transaction so that it can be processed upon commit. A non-transactional message is processed immediately.\" and also change the name of the function to ReorderBufferProcessMessage(), but the latter may break API compatibility.\n>> >\n>>\n>> +1 for the comment change but I am not sure if it is a good idea to\n>> change the API name.\n>>\n> Can you please review wording? I will create a patch with updated wording.\n>\n\nHow about something like below:\nA transactional message is queued to be processed upon commit and a\nnon-transactional message gets processed immediately.\nOR\nA transactional message is queued so it can be processed upon commit\nand a non-transactional message gets processed immediately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Dec 2020 08:01:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Misleading comment in prologue of ReorderBufferQueueMessage"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 8:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Dec 15, 2020 at 11:25 AM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > On Mon, Dec 14, 2020 at 3:14 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Mon, Dec 14, 2020 at 2:45 PM Ashutosh Bapat\n> >> <ashutosh.bapat@enterprisedb.com> wrote:\n> >> >\n> >> > The name of the function suggests that the given message will be\n> queued in ReorderBuffer. The prologue of the function says so too\n> >> > 776 /*\n> >> > 777 * Queue message into a transaction so it can be processed upon\n> commit.\n> >> > 778 */\n> >> > It led me to think that a non-transactional message is processed\n> along with the surrounding transaction, esp. when it has an associated xid.\n> >> >\n> >> > But in reality, the function queues only a transactional message and\n> decodes a non-transactional message immediately without waiting for a\n> commit.\n> >> >\n> >> > We should modify the prologue to say\n> >> > \"Queue a transactional message into a transaction so that it can be\n> processed upon commit. A non-transactional message is processed\n> immediately.\" and also change the name of the function to\n> ReorderBufferProcessMessage(), but the latter may break API compatibility.\n> >> >\n> >>\n> >> +1 for the comment change but I am not sure if it is a good idea to\n> >> change the API name.\n> >>\n> > Can you please review wording? I will create a patch with updated\n> wording.\n> >\n>\n> How about something like below:\n> A transactional message is queued to be processed upon commit and a\n> non-transactional message gets processed immediately.\n>\n\nUsed this one. PFA patch.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Fri, 18 Dec 2020 15:37:47 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Misleading comment in prologue of ReorderBufferQueueMessage"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 3:37 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> On Wed, Dec 16, 2020 at 8:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> How about something like below:\n>> A transactional message is queued to be processed upon commit and a\n>> non-transactional message gets processed immediately.\n>\n>\n> Used this one. PFA patch.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Dec 2020 12:12:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Misleading comment in prologue of ReorderBufferQueueMessage"
}
] |
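The comment wording settled on in the thread above describes two behaviors: a transactional message is queued so it can be processed upon commit, and a non-transactional message gets processed immediately. As a sketch only, that distinction can be illustrated with a small toy model. This is not PostgreSQL's C implementation of ReorderBufferQueueMessage(); the class and method names below are invented for illustration:

```python
# Toy model of the queue-vs-immediate distinction discussed above.
# Transactional messages are held per-transaction and replayed at commit;
# non-transactional messages are handed to the consumer right away.
# Illustrative only; not PostgreSQL code.

class ReorderBufferModel:
    def __init__(self):
        self.pending = {}      # xid -> list of queued transactional messages
        self.delivered = []    # messages handed to the output consumer

    def queue_message(self, xid, transactional, payload):
        if transactional:
            # Queued so it can be processed upon commit.
            self.pending.setdefault(xid, []).append(payload)
        else:
            # Processed immediately, without waiting for a commit.
            self.delivered.append(payload)

    def commit(self, xid):
        # At commit, queued messages are replayed in order.
        self.delivered.extend(self.pending.pop(xid, []))

    def abort(self, xid):
        # Queued messages of an aborted transaction are discarded.
        self.pending.pop(xid, None)


buf = ReorderBufferModel()
buf.queue_message(100, transactional=False, payload="ping")    # delivered now
buf.queue_message(100, transactional=True, payload="audit-1")  # held until commit
print(buf.delivered)  # ['ping']
buf.commit(100)
print(buf.delivered)  # ['ping', 'audit-1']
```

An aborted transaction simply discards its queued messages, which matches the intuition that only committed transactional messages reach the output plugin, while non-transactional ones are emitted regardless of the surrounding transaction's fate.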
[
{
"msg_contents": "-hackers,\n\nThe community has spent a lot of time optimizing features over the years.\nExcellent examples include parallel query and partitioning, which have been\nmulti-year efforts to increase the quality and performance, and extend the\nfeatures, of the original commit. We should consider the documentation in a\nsimilar manner. Just like code, documentation can sometimes use a bug fix,\noptimization, and/or new features added to the original implementation.\n\nTechnical documentation should only be as verbose as needed to illustrate\nthe concept or task that we are explaining. It should not be redundant, nor\nshould it use .50 cent words when a .10 cent word would suffice. I would\nlike to put effort into optimizing the documentation and am requesting\ngeneral consensus that this would be a worthwhile effort before I begin to\ndust off my Docbook skills.\n\nI have provided an example below:\n\nOriginal text (79 words):\n\nThis book is the official documentation of PostgreSQL. It has been written\nby the PostgreSQL developers and other volunteers in parallel to the\ndevelopment of the PostgreSQL software. It describes all the functionality\nthat the current version of PostgreSQL officially supports.\n\nTo make the large amount of information about PostgreSQL manageable, this\nbook has been organized in several parts. Each part is targeted at a\ndifferent class of users, or at users in different stages of their\nPostgreSQL experience:\n\nOptimized text (35 words):\n\nThis is the official PostgreSQL documentation. It is written by the\nPostgreSQL community in parallel with the development of the software. We\nhave organized it by the type of user and their stages of experience:\n\nIssues that are resolved with the optimized text:\n\n - Succinct text is more likely to be read than skimmed\n - Removal of extraneous mentions of PostgreSQL\n - Removal of unneeded justifications\n - Joining of two paragraphs into one that provides only the needed\n   information to the user\n - Word count decreased by over 50%. As changes such as these are adopted,\n   it would make the documentation more consumable.\n\nThanks,\nJD\n\n-- \nFounder - https://commandprompt.com/ - 24x7x365 Postgres since 1997\nCo-Chair - https://postgresconf.org/ - Postgres Education at its finest\nPeople, Postgres, Data",
"msg_date": "Mon, 14 Dec 2020 11:50:07 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Optimizing the documentation"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 12:50 PM Joshua Drake <jd@commandprompt.com> wrote:\n\n> -hackers,\n>\n> The community has spent a lot of time optimizing features over the years.\n> Excellent examples include parallel query and partitioning which have been\n> multi-year efforts to increase the quality, performance, and extend\n> features of the original commit. We should consider the documentation in a\n> similar manner. Just like code, documentation can sometimes use a bug fix,\n> optimization, and/or new features added to the original implementation.\n>\n> Technical documentation should only be as verbose as needed to illustrate\n> the concept or task that we are explaining. It should not be redundant, nor\n> should it use .50 cent words when a .10 cent word would suffice. I would\n> like to put effort into optimizing the documentation and am requesting\n> general consensus that this would be a worthwhile effort before I begin to\n> dust off my Docbook skills.\n>\n\nAs a quick observation, it would be more immediately helpful to add to the\nexisting proposal to add more details about architecture and get that\ncommitted before embarking on a new documentation project.\n\nhttps://commitfest.postgresql.org/31/2541/\n\n> I have provided an example below:\n>\n> Original text (79 words):\n>\n> This book is the official documentation of PostgreSQL. It has been written\n> by the PostgreSQL developers and other volunteers in parallel to the\n> development of the PostgreSQL software. It describes all the functionality\n> that the current version of PostgreSQL officially supports.\n>\n> To make the large amount of information about PostgreSQL manageable, this\n> book has been organized in several parts. Each part is targeted at a\n> different class of users, or at users in different stages of their\n> PostgreSQL experience:\n>\n> Optimized text (35 words):\n>\n> This is the official PostgreSQL documentation. It is written by the\n> PostgreSQL community in parallel with the development of the software. We\n> have organized it by the type of user and their stages of experience:\n>\n> Issues that are resolved with the optimized text:\n>\n> - Succinct text is more likely to be read than skimmed\n> - Removal of extraneous mentions of PostgreSQL\n> - Removal of unneeded justifications\n> - Joining of two paragraphs into one that provides only the needed\n>   information to the user\n> - Word count decreased by over 50%. As changes such as these are adopted\n>   it would make the documentation more consumable.\n>\n\nThat actually exists in our documentation? I suspect changing it isn't\nall that worthwhile: the typical user isn't reading the documentation\nlike a book, and with the table of contents as the entry point, most of\nthat material is simply gleaned from observing the presented structure,\nwithout words needed to describe it.\n\nI don't think making readability changes is a bad thing, and maybe my\nperspective is a bit biased and negative right now, but the attention given\nto the existing documentation patches in the commitfest isn't that great -\nso adding another mass of patches fixing up items that haven't provoked\ncomplaints seems likely to just make the list longer.\n\nIn short, I don't think optimization should be a goal in its own right;\nrather, changes should mostly be driven by questions asked by our users. I\ndon't think reading random chapters of the documentation to find\nnon-optimal exposition is going to be a good use of time.\n\nDavid J.",
"msg_date": "Mon, 14 Dec 2020 13:13:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "On 14/12/2020 21:50, Joshua Drake wrote:\n> The community has spent a lot of time optimizing features over the \n> years. Excellent examples include parallel query and partitioning which \n> have been multi-year efforts to increase the quality, performance, and \n> extend features of the original commit. We should consider the \n> documentation in a similar manner. Just like code, documentation can \n> sometimes use a bug fix, optimization, and/or new features added to the \n> original implementation.\n> \n> Technical documentation should only be as verbose as needed to \n> illustrate the concept or task that we are explaining. It should not be \n> redundant, nor should it use .50 cent words when a .10 cent word would \n> suffice. I would like to put effort into optimizing the documentation \n> and am requesting general consensus that this would be a worthwhile \n> effort before I begin to dust off my Docbook skills.\n\nHard to argue with \"let's make the doc better\" :-).\n\nI expect that there will be a lot of bikeshedding over the exact \nphrases. That's OK. Every improvement that actually gets committed \nhelps, even if we don't make progress on other parts.\n\n> I have provided an example below:\n> \n> \n> Original text (79 words):\n> \n> \n> This book is the official documentation of PostgreSQL. It has been \n> written by the PostgreSQL developers and other volunteers in parallel to \n> the development of the PostgreSQL software. It describes all the \n> functionality that the current version of PostgreSQL officially supports.\n> \n> To make the large amount of information about PostgreSQL manageable, \n> this book has been organized in several parts. Each part is targeted at \n> a different class of users, or at users in different stages of their \n> PostgreSQL experience:\n> \n> Optimized text (35 words):\n> \n> \n> This is the official PostgreSQL documentation. 
It is written by the \n> PostgreSQL community in parallel with the development of the software. \n> We have organized it by the type of user and their stages of experience:\n\nSome thoughts on this example:\n\n- Changing \"has been\" to \"is\" changes the tone here. \"Is\" implies that \nit is being written continuously, whereas \"has been\" implies that it's \nfinished. We do update the docs continuously, but the point of the sentence \nis that the docs were developed together with the features, so \"has \nbeen\" seems more accurate.\n\n- I like \"PostgreSQL developers and other volunteers\" better than the \n\"PostgreSQL community\". This is the very first introduction to \nPostgreSQL, so we can't expect the reader to know what the \"PostgreSQL \ncommunity\" is. I like the \"volunteers\" word here a lot.\n\n- I think a little bit of ceremony is actually OK in this particular \nparagraph, since it's the very first one in the docs.\n\n- I agree with dropping the \"to make the large amount of information \nmanageable\".\n\nSo I would largely keep this example unchanged, changing it into:\n\n---\nThis book is the official documentation of PostgreSQL. It has been \nwritten by the PostgreSQL developers and other volunteers in parallel to \nthe development of the PostgreSQL software. It describes all the \nfunctionality that the current version of PostgreSQL officially supports.\n\nThis book has been organized in several parts. 
As changes such as these are\n> adopted it would make the documentation more consumable.\n\nI agree with these goals in general. I like to refer to \nhttp://www.plainenglish.co.uk/how-to-write-in-plain-english.html when \nwriting documentation. Or anything else, really.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 14 Dec 2020 22:35:01 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": ">\n>\n>>\n>> Technical documentation should only be as verbose as needed to illustrate\n>> the concept or task that we are explaining. It should not be redundant, nor\n>> should it use .50 cent words when a .10 cent word would suffice. I would\n>> like to put effort into optimizing the documentation and am requesting\n>> general consensus that this would be a worthwhile effort before I begin to\n>> dust off my Docbook skills.\n>>\n> As a quick observation, it would be more immediately helpful to add to the\n> existing proposal to add more details about architecture and get that\n> committed before embarking on a new documentation project.\n>\n> https://commitfest.postgresql.org/31/2541/\n>\n\nI considered just starting to review patches as such, but even with that,\ndoesn't it make sense that if I am going to be putting a particular thought\nprocess into my efforts there should be a general consensus? For example,\nwhat would be exceedingly helpful would be a documentation style guide that\nis canonical and that we can review documentation against. Currently our\ndocumentation is all over the place. It isn't that it is not technically\naccurate or comprehensive.\n\n>> Optimized text (35 words):\n>>\n>> This is the official PostgreSQL documentation. It is written by the\n>> PostgreSQL community in parallel with the development of the software. We\n>> have organized it by the type of user and their stages of experience:\n>>\n>> Issues that are resolved with the optimized text:\n>>\n>> - Succinct text is more likely to be read than skimmed\n>> - Removal of extraneous mentions of PostgreSQL\n>> - Removal of unneeded justifications\n>> - Joining of two paragraphs into one that provides only the needed\n>>   information to the user\n>> - Word count decreased by over 50%. As changes such as these are\n>>   adopted it would make the documentation more consumable.\n>\n> That actually exists in our documentation?\n>\n\nYes. https://www.postgresql.org/docs/13/preface.html\n\n> I suspect changing it isn't all that worthwhile as the typical user isn't\n> reading the documentation like a book and with the entry point being the\n> table of contents most of that material is simply gleaned from observing\n> the presented structure without words needed to describe it.\n>\n\nIt is a matter of consistency.\n\n> While I don't think making readability changes is a bad thing, and maybe\n> my perspective is a bit biased and negative right now, but the attention\n> given to the existing documentation patches in the commitfest isn't that\n> great - so adding another mass of patches fixing up items that haven't\n> provoked complaints seems likely to just make the list longer.\n>\n\nOne of the issues is that editing documentation with patches is a pain. It\nis simpler and a lower barrier of effort to pull up an existing section of\nDocbook and edit that (just like code) than it is to break out specific\ntext within a patch. Though I would be happy to take a swipe at reviewing a\nspecific documentation patch (as you linked).\n\n> In short, I don't think optimization should be a goal in its own right;\n> but rather changes should mostly be driven by questions asked by our\n> users. I don't think reading random chapters of the documentation to find\n> non-optimal exposition is going to be a good use of time.\n>\n\nI wasn't planning on reading random chapters. I was planning on walking\nthrough the documentation as it is written, and hopefully others would join.\nThis is a monumental effort to perform completely. Also consider the\noverall benefit, not just one specific piece. Would you not consider it a\nnet win if certain questions were being answered in a succinct way, so as to\nallow users to use the documentation instead of asking the most novice of\nquestions on various channels?\n\nJD",
"msg_date": "Mon, 14 Dec 2020 12:39:49 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": ">\n>\n>\n> > This is the official PostgreSQL documentation. It is written by the\n> > PostgreSQL community in parallel with the development of the software.\n> > We have organized it by the type of user and their stages of experience:\n>\n> Some thoughts on this example:\n>\n> - Changing \"has been\" to \"is\" changes the tone here. \"Is\" implies that\n> it is being written continuously, whereas \"has been\" implies that it's\n> finished. We do update the docs continuously, but the point of the sentence\n> is that the docs were developed together with the features, so \"has\n> been\" seems more accurate.\n>\n\nNo argument.\n\n> - I like \"PostgreSQL developers and other volunteers\" better than the\n> \"PostgreSQL community\". This is the very first introduction to\n> PostgreSQL, so we can't expect the reader to know what the \"PostgreSQL\n> community\" is. I like the \"volunteers\" word here a lot.\n>\n\nThere is a huge community for PostgreSQL; the developers are only a\nsmall (albeit critical) part of it. By using the term \"PostgreSQL\ncommunity\" we are providing equity to all those who participate in the\nsuccess of the project. I could definitely see saying \"PostgreSQL\nvolunteers\".\n\n> - I think a little bit of ceremony is actually OK in this particular\n> paragraph, since it's the very first one in the docs.\n>\n> - I agree with dropping the \"to make the large amount of information\n> manageable\".\n>\n> So I would largely keep this example unchanged, changing it into:\n>\n> ---\n> This book is the official documentation of PostgreSQL. It has been\n> written by the PostgreSQL developers and other volunteers in parallel to\n> the development of the PostgreSQL software. It describes all the\n> functionality that the current version of PostgreSQL officially supports.\n>\n> This book has been organized in several parts. Each part is targeted at\n> a different class of users, or at users in different stages of their\n> PostgreSQL experience:\n> ---\n>\n\nI appreciate the feedback. Before we get too far down the rabbit hole, I\nwould like to note that I am not tied to an exact wording, as my post was\nmore about the general goal and results based on that goal.\n\n> I agree with these goals in general. I like to refer to\n> http://www.plainenglish.co.uk/how-to-write-in-plain-english.html when\n> writing documentation. Or anything else, really.\n>\n\nGreat resource!\n\nJD",
"msg_date": "Mon, 14 Dec 2020 12:49:52 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 14/12/2020 21:50, Joshua Drake wrote:\n>> Issues that are resolved with the optimized text:\n>> \n>> * Succinct text is more likely to be read than skimmed\n>> \n>> * Removal of extraneous mentions of PostgreSQL\n>> \n>> * Removal of unneeded justifications\n>> \n>> * Joining of two paragraphs into one that provides only the needed\n>> information to the user\n>> \n>> * Word count decreased by over 50%. As changes such as these are\n>> adopted it would make the documentation more consumable.\n\n> I agree with these goals in general. I like to refer to \n> http://www.plainenglish.co.uk/how-to-write-in-plain-english.html when \n> writing documentation. Or anything else, really.\n\nI think this particular chunk of text is an outlier. (Not unreasonably\nso; as Heikki notes, it's customary for the very beginning of a book to\nbe a bit more formal.) Most of the docs contain pretty dense technical\nmaterial that's not going to be improved by making it even denser.\nAlso, to the extent that there's duplication, it's often deliberate.\nFor example, if a given bit of info appears in the tutorial and the\nmain docs and the reference pages, that doesn't mean we should rip\nout two of the three appearances.\n\nThere certainly are sections that are crying out for reorganization,\nbut that's going to be very topic-specific and not something that\njust going into it with a copy-editing mindset will help.\n\nIn short, the devil's in the details. Maybe there are lots of\nplaces where this type of approach would help, but I think it's\ngoing to be a case-by-case discussion not something where there's\na clear win overall.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Dec 2020 15:50:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 1:40 PM Joshua Drake <jd@commandprompt.com> wrote:\n\n> For example, what would be exceedly helpful would be a documentation style\n>>> guide that is canonical and we can review documentation against.\n>>>\n>>\nI do agree with that premise, with the goal of getting more people to\ncontribute to writing and reviewing documentation and having more than\nvague ideas about what is or isn't considered minor items to just leave\nalone or points of interest to debate. But as much as I would love\nperfectly written English documentation I try to consciously make an effort\nto accept things that maybe aren't perfect but are good enough in the\ninterest of having a larger set of contributors with more varied abilities\nin this area. \"It is clear enough\" is a valid trade-off to take.\n\n\n> Yes. https://www.postgresql.org/docs/13/preface.html\n>\n\nThanks, though it was meant to be a bit rhetorical.\n\n\n>\n>\n\n>> While I don't think making readability changes is a bad thing, and maybe\n>> my perspective is a bit biased and negative right now, but the attention\n>> given to the existing documentation patches in the commitfest isn't that\n>> great - so adding another mass of patches fixing up items that haven't\n>> provoked complaints seems likely to just make the list longer.\n>>\n>\n> One of the issues is that editing documentation with patches is a pain. It\n> is simpler and a lower barrier of effort to pull up an existing section of\n> Docbook and edit that (just like code) than it is to break out specific\n> text within a patch. Though I would be happy to take a swipe at reviewing a\n> specific documentation patch (as you linked).\n>\n\nI'm not following this line of reasoning.\n\n\n>\n>>\n>> In short, I don't think optimization should be a goal in its own right;\n>> but rather changes should mostly be driven by questions asked by our\n>> users. 
I don't think reading random chapters of the documentation to find\n>> non-optimal exposition is going to be a good use of time.\n>>\n>\n> I wasn't planning on reading random chapters. I was planning on walking\n> through the documentation as it is written and hopefully others would join.\n> This is a monumental effort to perform completely. Also consider the\n> overall benefit, not just one specific piece. Would you not consider it a\n> net win if certain questions were being answered in a succinct way as to\n> allow users to use the documentation instead of asking the most novice of\n> questions on various channels?\n>\n\nI suspect over half of the questions asked are due to not reading the\ndocumentation at all - I tend to get good results when I point someone to\nthe correct terminology and section, and if there are follow-up questions\nthen I know where to look for improvements and have a concrete question or\ntwo in hand to ensure that the revised documentation answers.\n\nI'm fairly well plugged into user questions and have recently made an\nattempt to respond to those with specific patches to improve the\ndocumentation involved in those questions. And also have been working to\nhelp other documentation patches get pushed through. 
Based upon those\nexperiences I think this monumental community effort is going to stall out\npretty quickly - regardless of its merits - though if the effort results in\na new guidelines document then I would say it was worth the effort\nregardless of how many paragraphs are optimized away.\n\nMy $0.02\n\nDavid J.",
"msg_date": "Mon, 14 Dec 2020 14:04:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": ">\n>\n>\n> In short, the devil's in the details. Maybe there are lots of\n> places where this type of approach would help, but I think it's\n> going to be a case-by-case discussion not something where there's\n> a clear win overall.\n>\n\nCertainly and I didn't want to just start dumping patches. Part of this is\njust style, for example:\n\nThus far, our queries have only accessed one table at a time. Queries can\naccess multiple tables at once, or access the same table in such a way that\nmultiple rows of the table are being processed at the same time. A query\nthat accesses multiple rows of the same or different tables at one time is\ncalled a join query. As an example, say you wish to list all the weather\nrecords together with the location of the associated city. To do that, we\nneed to compare the city column of each row of the weather table with the\nname column of all rows in the cities table, and select the pairs of rows\nwhere these values match.\n\nIt isn't \"terrible\" but can definitely be optimized. In a quick review, I\nwould put it something like this:\n\nQueries can also access multiple tables at once, or access the same table\nin a way that multiple rows are processed. A query that accesses multiple\nrows of the same or different tables at one time is a join. For example, if\nyou wish to list all of the weather records with the location of the\nassociated city, we would compare the city column of each row of the weather\ntable with the name column of all rows in the cities table, and select the\nrows *WHERE* the values match.\n\nThe reason I bolded and capitalized WHERE was to provide a visual signal to\nthe example that is on the page. I could also argue that we could remove\n\"For example,\" though I understand its purpose here.\n\nAgain, this was just a quick review.\n\nJD\n\n\nIn short, the devil's in the details. 
Maybe there are lots of\nplaces where this type of approach would help, but I think it's\ngoing to be a case-by-case discussion not something where there's\na clear win overall.Certainly and I didn't want to just start dumping patches. Part of this is just style, for example:Thus far, our queries have only accessed one table at a time. Queries can access multiple tables at once, or access the same table in such a way that multiple rows of the table are being processed at the same time. A query that accesses multiple rows of the same or different tables at one time is called a join query. As an example, say you wish to list all the weather records together with the location of the associated city. To do that, we need to compare the city column of each row of the weather table with the name column of all rows in the cities table, and select the pairs of rows where these values match.It isn't \"terrible\" but can definitely be optimized. In a quick review, I would put it something like this:Queries can also access multiple tables at once, or access the same table in a way that multiple rows are processed. A query that accesses multiple rows of the same or different tables at one time is a join. For example, if you wish to list all of the weather records with the location of the associated city, we would compare the city column of each row of the weather table with the name column of all rows in the cities table, and select the rows WHERE the values match.The reason I bolded and capitalized WHERE was to provide a visual signal to the example that is on the page. I could also argue that we could remove \"For example,\" though I understand its purpose here.Again, this was just a quick review.JD",
"msg_date": "Mon, 14 Dec 2020 13:04:52 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Most of the docs contain pretty dense technical\n> material that's not going to be improved by making it even denser.\n\nIt's always hard to write dense technical prose, for a variety of\nreasons. I often struggle with framing. For example I seem to write\nsentences that sound indecisive. But is that necessarily a bad thing?\nIt seems wise to hedge a little bit when talking about (say) some kind\nof complex system with many moving parts. Ernest Hemingway never had\nto describe how VACUUM works.\n\nI agree with Heikki to some degree; there is value in trying to follow\na style guide. But let's not forget about the other problem with the\ndocs, which is that there isn't enough low level technical details of\nthe kind that advanced users value. There is a clear unmet demand for\nthat IME. If we're going to push in the direction of simplification,\nit should not make this other important task harder.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Dec 2020 13:38:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "Joshua Drake <jd@commandprompt.com> writes:\n> Certainly and I didn't want to just start dumping patches. Part of this is\n> just style, for example:\n\n> Thus far, our queries have only accessed one table at a time. Queries can\n> access multiple tables at once, or access the same table in such a way that\n> multiple rows of the table are being processed at the same time. A query\n> that accesses multiple rows of the same or different tables at one time is\n> called a join query. As an example, say you wish to list all the weather\n> records together with the location of the associated city. To do that, we\n> need to compare the city column of each row of the weather table with the\n> name column of all rows in the cities table, and select the pairs of rows\n> where these values match.\n\n> It isn't \"terrible\" but can definitely be optimized. In a quick review, I\n> would put it something like this:\n\n> Queries can also access multiple tables at once, or access the same table\n> in a way that multiple rows are processed. A query that accesses multiple\n> rows of the same or different tables at one time is a join. For example, if\n> you wish to list all of the weather records with the location of the\n> associated city, we would compare the city column of each row of the weather\n> table with the name column of all rows in the cities table, and select the\n> rows *WHERE* the values match.\n\nTBH, I'm not sure that that is an improvement at all. I'm constantly\nreminded that for many of our users, English is not their first language.\nA little bit of redundancy in wording is often helpful for them.\n\nThe places where I think the docs need help tend to be places where\nassorted people have added information over time, such that there's\nnot a consistent style throughout a section; or maybe the information\ncould be presented in a better order. 
We don't need to be taking a\nhacksaw to text that's perfectly clear as it stands.\n\n(If I were thinking of rewriting this text, I'd probably think of\nremoving the references to self-joins and covering that topic\nin a separate para. But that's because self-joins aren't basic\nusage, not because I think the text is unreadable.)\n\n> The reason I bolded and capitalized WHERE was to provide a visual signal to\n> the example that is on the page.\n\nIMO, typographical tricks are not something to lean on heavily.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Dec 2020 16:40:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": ">\n>\n>\n> > Queries can also access multiple tables at once, or access the same table\n> > in a way that multiple rows are processed. A query that accesses multiple\n> > rows of the same or different tables at one time is a join. For example,\n> if\n> > you wish to list all of the weather records with the location of the\n> > associated city, we would compare the city column of each row of the\n> weather\n> > table with the name column of all rows in the cities table, and select\n> the\n> > rows *WHERE* the values match.\n>\n> TBH, I'm not sure that that is an improvement at all. I'm constantly\n> reminded that for many of our users, English is not their first language.\n> A little bit of redundancy in wording is often helpful for them.\n>\n\nInteresting point, it is certainly true that many of our users are ESL\nfolks. I would expect a succinct version to be easier to understand but I\nhave no idea.\n\n\n>\n> The places where I think the docs need help tend to be places where\n> assorted people have added information over time, such that there's\n> not a consistent style throughout a section; or maybe the information\n> could be presented in a better order. We don't need to be taking a\n> hacksaw to text that's perfectly clear as it stands.\n>\n\nThe term perfectly clear is part of the problem I am trying to address. I\ncan pick and pull at the documentation all day long and show things that\nare not perfectly clear. They are clear to you, myself and I imagine most\nof the readers on this list. Generally speaking we are not the target of\nthe documentation and we may easily get pulled into the \"good enough\" when\nin reality it could be so much better. I have gotten so used to our\ndocumentation that I literally skip over unneeded words to get to the\nanswer I am looking for. I don't think that is the target we want to hit.\n\nWouldn't we want the least amount of mental energy to understand the\nconcept as possible for the reader? 
Every extra word that isn't needed,\nevery extra adjective, repeated term or \"very unique\" that exists is extra\nenergy spent to understand what the writer is trying to say. That mental\nenergy can be exhausted quickly, especially when considering dense\ntechnical topics.\n\n\n\n> (If I were thinking of rewriting this text, I'd probably think of\n> removing the references to self-joins and covering that topic\n> in a separate para. But that's because self-joins aren't basic\n> usage, not because I think the text is unreadable.)\n>\n\nThat makes sense. I was just taking the direct approach of making existing\ncontent better as an example. I would agree with your assessment if it were\nto be submitted as a patch.\n\n\n> > The reason I bolded and capitalized WHERE was to provide a visual signal\n> to\n> > the example that is on the page.\n>\n> IMO, typographical tricks are not something to lean on heavily.\n>\n\nFair enough.\n\nJD",
"msg_date": "Mon, 14 Dec 2020 14:31:23 -0800",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 01:38:05PM -0800, Peter Geoghegan wrote:\n> On Mon, Dec 14, 2020 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Most of the docs contain pretty dense technical\n> > material that's not going to be improved by making it even denser.\n> \n> It's always hard to write dense technical prose, for a variety of\n> reasons. I often struggle with framing. For example I seem to write\n> sentences that sound indecisive. But is that necessarily a bad thing?\n> It seems wise to hedge a little bit when talking about (say) some kind\n> of complex system with many moving parts. Ernest Hemingway never had\n> to describe how VACUUM works.\n> \n> I agree with Heikki to some degree; there is value in trying to follow\n> a style guide. But let's not forget about the other problem with the\n> docs, which is that there isn't enough low level technical details of\n> the kind that advanced users value. There is a clear unmet demand for\n> that IME. If we're going to push in the direction of simplification,\n> it should not make this other important task harder.\n\nI agree a holistic review of the docs can yield great benefits. No one\nusually complains about overly verbose text, but making it clearer is\nalways a win. Anyway, of course, it is going to be very specific for\neach case. As an extreme example, in 2007 when I did a full review of\nthe docs, I clarified may/can/might in our docs, and it probably helped.\nHere is one of several commits:\n\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e81c138e18\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 10:42:10 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 7:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I agree a holistic review of the docs can yield great benefits. No one\n> usually complains about overly verbose text, but making it clearer is\n> always a win. Anyway, of course, it is going to be very specific for\n> each case. As an extreme example, in 2007 when I did a full review of\n> the docs, I clarified may/can/might in our docs, and it probably helped.\n\nI think that the \"may/can/might\" rule is a very good one. It\nstandardizes something that would otherwise just be left to chance,\nand AFAICT has no possible downside. Even still, I think that adding\nnew rules is subject to sharp diminishing returns. There just aren't\nthat many things that work like that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 17 Dec 2020 11:19:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing the documentation"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed in [1], in postgres_fdw the cached connections to remote\nservers can stay until the lifetime of the local session without\ngetting a chance to disconnect (connection leak), if the underlying\nuser mapping or foreign server is dropped in another session. Here are\nfew scenarios how this can happen:\n\nUse case 1:\n1) Run a foreign query in session 1 with server 1 and user mapping 1\n2) Drop user mapping 1 in another session 2, an invalidation message\ngets generated which will have to be processed by all the sessions\n3) Run the foreign query again in session 1, at the start of txn the\ncached entry gets invalidated via pgfdw_inval_callback() (as part of\ninvalidation message processing). Whatever may be the type of foreign\nquery (select, update, explain, delete, insert, analyze etc.), upon\nnext call to GetUserMapping() from postgres_fdw.c, the cache lookup\nfails(with ERROR: user mapping not found for \"XXXX\") since the user\nmapping 1 has been dropped in session 2 and the query will also fail\nbefore reaching GetConnection() where the connections associated with\nthe invalidated entries would have got disconnected.\n\nSo, the connection associated with invalidated entry would remain\nuntil the local session exits.\n\nUse case 2:\n1) Run a foreign query in session 1 with server 1 and user mapping 1\n2) Try to drop foreign server 1, then we would not be allowed because\nof dependency. Use CASCADE so that dependent objects i.e. 
user mapping\n1 and foreign tables get dropped along with foreign server 1.\n3) Run the foreign query again in session 1, at the start of txn, the\ncached entry gets invalidated via pgfdw_inval_callback() and the query\nfails because there is no foreign table.\n\nNote that the remote connection remains open in session 1 until the\nlocal session exits.\n\nTo solve the above connection leak problem, it looks like the right\nplace to close all the invalid connections is pgfdw_xact_callback(),\nonce registered, which gets called at the end of every txn in the\ncurrent session(by then all the sub txns also would have been\nfinished). Note that if there are too many invalidated entries, then\nthe following txn has to close all of them, but that's okay than\nhaving leaked connections and it's a one time job for the following\none txn.\n\nAttaching a patch for the same.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACUOQYs%2BssxkxRvZ6Ja5%2Bsfc6a-s_0e-Nv2A591hEyOgiw%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 15 Dec 2020 08:08:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "Is the following sequence possible ?\nIn pgfdw_inval_callback():\n\n entry->invalidated = true;\n+ have_invalid_connections = true;\n\nAt which time the loop in pgfdw_xact_callback() is already running (but\npast the above entry).\nThen:\n\n+ /* We are done closing all the invalidated connections so reset. */\n+ have_invalid_connections = false;\n\nAt which time, there is still at least one invalid connection but the\nglobal flag is off.\n\nOn Mon, Dec 14, 2020 at 6:39 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> As discussed in [1], in postgres_fdw the cached connections to remote\n> servers can stay until the lifetime of the local session without\n> getting a chance to disconnect (connection leak), if the underlying\n> user mapping or foreign server is dropped in another session. Here are\n> few scenarios how this can happen:\n>\n> Use case 1:\n> 1) Run a foreign query in session 1 with server 1 and user mapping 1\n> 2) Drop user mapping 1 in another session 2, an invalidation message\n> gets generated which will have to be processed by all the sessions\n> 3) Run the foreign query again in session 1, at the start of txn the\n> cached entry gets invalidated via pgfdw_inval_callback() (as part of\n> invalidation message processing). 
Whatever may be the type of foreign\n> query (select, update, explain, delete, insert, analyze etc.), upon\n> next call to GetUserMapping() from postgres_fdw.c, the cache lookup\n> fails(with ERROR: user mapping not found for \"XXXX\") since the user\n> mapping 1 has been dropped in session 2 and the query will also fail\n> before reaching GetConnection() where the connections associated with\n> the invalidated entries would have got disconnected.\n>\n> So, the connection associated with invalidated entry would remain\n> until the local session exits.\n>\n> Use case 2:\n> 1) Run a foreign query in session 1 with server 1 and user mapping 1\n> 2) Try to drop foreign server 1, then we would not be allowed because\n> of dependency. Use CASCADE so that dependent objects i.e. user mapping\n> 1 and foreign tables get dropped along with foreign server 1.\n> 3) Run the foreign query again in session 1, at the start of txn, the\n> cached entry gets invalidated via pgfdw_inval_callback() and the query\n> fails because there is no foreign table.\n>\n> Note that the remote connection remains open in session 1 until the\n> local session exits.\n>\n> To solve the above connection leak problem, it looks like the right\n> place to close all the invalid connections is pgfdw_xact_callback(),\n> once registered, which gets called at the end of every txn in the\n> current session(by then all the sub txns also would have been\n> finished). 
Note that if there are too many invalidated entries, then\n> the following txn has to close all of them, but that's okay than\n> having leaked connections and it's a one time job for the following\n> one txn.\n>\n> Attaching a patch for the same.\n>\n> Thoughts?\n>\n> [1] -\n> https://www.postgresql.org/message-id/flat/CALj2ACUOQYs%2BssxkxRvZ6Ja5%2Bsfc6a-s_0e-Nv2A591hEyOgiw%40mail.gmail.com\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Mon, 14 Dec 2020 18:56:07 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 8:25 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Is the following sequence possible ?\n> In pgfdw_inval_callback():\n>\n> entry->invalidated = true;\n> + have_invalid_connections = true;\n>\n> At which time the loop in pgfdw_xact_callback() is already running (but past the above entry).\n> Then:\n>\n> + /* We are done closing all the invalidated connections so reset. */\n> + have_invalid_connections = false;\n>\n> At which time, there is still at least one invalid connection but the global flag is off.\n\nIt's not possible, as this backend specific code doesn't run in\nmultiple threads. We can not have pgfdw_inval_callback() and\npgfdw_xact_callback() running at the same time, so we are safe there.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Dec 2020 08:38:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
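Bharath's answer — that pgfdw_inval_callback() and pgfdw_xact_callback() can never interleave because a backend is single-threaded — can be checked against a toy model of the patch's flag protocol. The sketch below is not the real postgres_fdw code: the entry layout, cache shape, and function names other than the flag mirror nothing in the tree and are invented for illustration; only the `have_invalid_connections` handshake follows the quoted hunks.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy single-threaded model of the patch under discussion: the
 * invalidation callback marks entries and raises a global flag, and the
 * transaction-end callback sweeps the cache and resets the flag.
 */
#define NCONN 4

typedef struct
{
	bool		conn_open;		/* stands in for entry->conn != NULL */
	bool		invalidated;
} Entry;

static Entry cache[NCONN];
static bool have_invalid_connections = false;

/* analogue of pgfdw_inval_callback() marking one entry */
static void
inval_callback(int i)
{
	cache[i].invalidated = true;
	have_invalid_connections = true;
}

/* analogue of the sweep the patch adds to pgfdw_xact_callback() */
static void
xact_callback(void)
{
	if (!have_invalid_connections)
		return;					/* nothing was invalidated since last txn */
	for (int i = 0; i < NCONN; i++)
		if (cache[i].conn_open && cache[i].invalidated)
		{
			cache[i].conn_open = false;	/* disconnect_pg_server() */
			cache[i].invalidated = false;
		}
	/* We are done closing all the invalidated connections so reset. */
	have_invalid_connections = false;
}
```

Because both callbacks run to completion in the same thread, inval_callback() can never fire while the sweep loop is mid-iteration, so the reset at the end of xact_callback() cannot strand an invalidated entry — the sequence Zhihong asked about cannot occur.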
{
"msg_contents": "Hi\r\n\r\nI have an issue about the existing testcase.\r\n\r\n\"\"\"\r\n-- Test that alteration of server options causes reconnection\r\nSELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work\r\nALTER SERVER loopback OPTIONS (SET dbname 'no such database');\r\nSELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should fail\r\nDO $d$\r\n  BEGIN\r\n    EXECUTE $$ALTER SERVER loopback\r\n      OPTIONS (SET dbname '$$||current_database()||$$')$$;\r\n  END;\r\n$d$;\r\nSELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again\r\n\"\"\"\r\n\r\nIMO, the above case is designed to test the following code[1].\r\nWith the patch, it seems this code will not work for this case, right?\r\n(It seems the connection will be disconnected in pgfdw_xact_callback.)\r\n\r\nI do not know whether it matters, or should we add a testcase to cover that?\r\n\r\n[1]\t/*\r\n\t * If the connection needs to be remade due to invalidation, disconnect as\r\n\t * soon as we're out of all transactions.\r\n\t */\r\n\tif (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\r\n\t{\r\n\t\telog(DEBUG3, \"closing connection %p for option changes to take effect\",\r\n\t\t\t entry->conn);\r\n\t\tdisconnect_pg_server(entry);\r\n\t}\r\n\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Fri, 18 Dec 2020 11:37:21 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 18, 2020 at 5:06 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > I have an issue about the existing testcase.\n> >\n> > \"\"\"\n> > -- Test that alteration of server options causes reconnection\n> > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work\n> > ALTER SERVER loopback OPTIONS (SET dbname 'no such database');\n> > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should fail\n> > DO $d$\n> > BEGIN\n> > EXECUTE $$ALTER SERVER loopback\n> > OPTIONS (SET dbname '$$||current_database()||$$')$$;\n> > END;\n> > $d$;\n> > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again\n> > \"\"\"\n> >\n> > IMO, the above case is designed to test the following code[1]:\n> > With the patch, it seems the following code[1] will not work for this case, right?\n> > (It seems the connection will be disconnect in pgfdw_xact_callback)\n> >\n> > I do not know does it matter, or should we add a testcase to cover that?\n> >\n> > [1] /*\n> > * If the connection needs to be remade due to invalidation, disconnect as\n> > * soon as we're out of all transactions.\n> > */\n> > if (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\n> > {\n> > elog(DEBUG3, \"closing connection %p for option changes to take effect\",\n> > entry->conn);\n> > disconnect_pg_server(entry);\n> > }\n>\n> Yes you are right. With the patch if the server is altered in the same\n> session in which the connection exists, the connection gets closed at\n> the end of that alter query txn, not at the beginning of the next\n> foreign tbl query. So, that part of the code in GetConnection()\n> doesn't get hit. 
Having said that, this code gets hit when the alter\n> query is run in another session and the connection in the current\n> session gets disconnected at the start of the next foreign tbl query.\n>\n> If we want to cover it with a test case, we might have to alter the\n> foreign server in another session. I'm not sure whether we can open\n> another session from the current psql session and run a sql command.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Dec 2020 18:46:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Dec 18, 2020 at 6:39 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Dec 18, 2020 at 5:06 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > > I have an issue about the existing testcase.\n> > >\n> > > \"\"\"\n> > > -- Test that alteration of server options causes reconnection\n> > > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work\n> > > ALTER SERVER loopback OPTIONS (SET dbname 'no such database');\n> > > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should fail\n> > > DO $d$\n> > > BEGIN\n> > > EXECUTE $$ALTER SERVER loopback\n> > > OPTIONS (SET dbname '$$||current_database()||$$')$$;\n> > > END;\n> > > $d$;\n> > > SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again\n> > > \"\"\"\n> > >\n> > > IMO, the above case is designed to test the following code[1]:\n> > > With the patch, it seems the following code[1] will not work for this case, right?\n> > > (It seems the connection will be disconnect in pgfdw_xact_callback)\n> > >\n> > > I do not know does it matter, or should we add a testcase to cover that?\n> > >\n> > > [1] /*\n> > > * If the connection needs to be remade due to invalidation, disconnect as\n> > > * soon as we're out of all transactions.\n> > > */\n> > > if (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\n> > > {\n> > > elog(DEBUG3, \"closing connection %p for option changes to take effect\",\n> > > entry->conn);\n> > > disconnect_pg_server(entry);\n> > > }\n> >\n> > Yes you are right. With the patch if the server is altered in the same\n> > session in which the connection exists, the connection gets closed at\n> > the end of that alter query txn, not at the beginning of the next\n> > foreign tbl query. So, that part of the code in GetConnection()\n> > doesn't get hit. 
Having said that, this code gets hit when the alter\n> > query is run in another session and the connection in the current\n> > session gets disconnected at the start of the next foreign tbl query.\n> >\n> > If we want to cover it with a test case, we might have to alter the\n> > foreign server in another session. I'm not sure whether we can open\n> > another session from the current psql session and run a sql command.\n\nI further checked on how we can add/move the test case( that is\naltering server options in a different session and see if the\nconnection gets disconnected at the start of the next foreign query in\nthe current session ) to cover the above code. Upon some initial\nanalysis, it looks like it is possible to add that under\nsrc/test/isolation tests. Another way could be to add it using the TAP\nframework under contrib/postgres_fdw. Having said that, currently\nthese two places don't have any postgres_fdw related tests, we will be\nthe first ones to add.\n\nI'm not quite sure whether that's okay or is there any better way of\ndoing it. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Dec 2020 16:02:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On 2020/12/15 11:38, Bharath Rupireddy wrote:\n> Hi,\n> \n> As discussed in [1], in postgres_fdw the cached connections to remote\n> servers can stay until the lifetime of the local session without\n> getting a chance to disconnect (connection leak), if the underlying\n> user mapping or foreign server is dropped in another session. Here are\n> few scenarios how this can happen:\n> \n> Use case 1:\n> 1) Run a foreign query in session 1 with server 1 and user mapping 1\n> 2) Drop user mapping 1 in another session 2, an invalidation message\n> gets generated which will have to be processed by all the sessions\n> 3) Run the foreign query again in session 1, at the start of txn the\n> cached entry gets invalidated via pgfdw_inval_callback() (as part of\n> invalidation message processing). Whatever may be the type of foreign\n> query (select, update, explain, delete, insert, analyze etc.), upon\n> next call to GetUserMapping() from postgres_fdw.c, the cache lookup\n> fails(with ERROR: user mapping not found for \"XXXX\") since the user\n> mapping 1 has been dropped in session 2 and the query will also fail\n> before reaching GetConnection() where the connections associated with\n> the invalidated entries would have got disconnected.\n> \n> So, the connection associated with invalidated entry would remain\n> until the local session exits.\n> \n> Use case 2:\n> 1) Run a foreign query in session 1 with server 1 and user mapping 1\n> 2) Try to drop foreign server 1, then we would not be allowed because\n> of dependency. Use CASCADE so that dependent objects i.e. 
user mapping\n> 1 and foreign tables get dropped along with foreign server 1.\n> 3) Run the foreign query again in session 1, at the start of txn, the\n> cached entry gets invalidated via pgfdw_inval_callback() and the query\n> fails because there is no foreign table.\n> \n> Note that the remote connection remains open in session 1 until the\n> local session exits.\n> \n> To solve the above connection leak problem, it looks like the right\n> place to close all the invalid connections is pgfdw_xact_callback(),\n> once registered, which gets called at the end of every txn in the\n> current session(by then all the sub txns also would have been\n> finished). Note that if there are too many invalidated entries, then\n> the following txn has to close all of them, but that's okay than\n> having leaked connections and it's a one time job for the following\n> one txn.\n> \n> Attaching a patch for the same.\n> \n> Thoughts?\n\nThanks for making the patch!\n\nI agree to make pgfdw_xact_callback() close the connection when\nentry->invalidated == true. But I think that it's better to get rid of\nhave_invalid_connections flag and make pgfdw_inval_callback() close\nthe connection immediately if entry->xact_depth == 0, to avoid unnecessary\nscan of the hashtable during COMMIT of transaction not accessing to\nforeign servers. Attached is the POC patch that I'm thinking. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 23 Dec 2020 23:01:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
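Fujii Masao's POC above replaces the global flag with an immediate close in pgfdw_inval_callback() whenever `entry->xact_depth == 0`, deferring only connections that the current transaction is actually using. A minimal sketch of that control flow — the struct is a simplification of postgres_fdw's real ConnCacheEntry, and both function bodies are reconstructions of the described behaviour rather than the actual POC patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for postgres_fdw's ConnCacheEntry */
typedef struct
{
	bool		conn_open;		/* entry->conn != NULL */
	bool		invalidated;
	int			xact_depth;		/* 0 = not used in the current txn */
} Entry;

/* analogue of pgfdw_inval_callback() in the POC */
static void
inval_callback(Entry *e)
{
	if (e->conn_open && e->xact_depth == 0)
		e->conn_open = false;	/* disconnect_pg_server() immediately */
	else
		e->invalidated = true;	/* defer: connection is in use */
}

/* analogue of pgfdw_xact_callback() at top-level COMMIT */
static void
xact_callback_commit(Entry *e)
{
	e->xact_depth = 0;
	if (e->conn_open && e->invalidated)
	{
		e->conn_open = false;	/* close the deferred connection now */
		e->invalidated = false;
	}
}
```

The gain Fujii describes falls out of the first branch: a transaction that never touched a foreign server finds nothing marked, so COMMIT never has to scan the connection hashtable.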
{
"msg_contents": "On Wed, Dec 23, 2020 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I agree to make pgfdw_xact_callback() close the connection when\n> entry->invalidated == true. But I think that it's better to get rid of\n> have_invalid_connections flag and make pgfdw_inval_callback() close\n> the connection immediately if entry->xact_depth == 0, to avoid unnecessary\n> scan of the hashtable during COMMIT of transaction not accessing to\n> foreign servers. Attached is the POC patch that I'm thinking. Thought?\n\nWe could do that way as well. It seems okay to me. Now the disconnect\ncode is spread in pgfdw_inval_callback() and pgfdw_xact_callback().\nThere's also less burden on pgfdw_xact_callback() to close a lot of\nconnections on a single commit. The behaviour is like this - If\nentry->xact_depth == 0, then the entries wouldn't have got any\nconnection in the current txn so they can be immediately closed in\npgfdw_inval_callback() and pgfdw_xact_callback() can exit immediately\nas xact_got_connection is false. If entry->xact_depth > 0 which means\nthat probably pgfdw_inval_callback() came from a sub txn, we would\nhave got a connection in the txn i.e. 
xact_got_connection becomes true\ndue to which it can get invalidated in pgfdw_inval_callback() and get\nclosed in pgfdw_xact_callback() at the end of the txn.\n\nAnd there's no chance of entry->xact_depth > 0 and xact_got_connection false.\n\nI think we need to change the comment before pgfdw_inval_callback() a bit:\n\n * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n * mark connections depending on that entry as needing to be remade.\n * We can't immediately destroy them, since they might be in the midst of\n * a transaction, but we'll remake them at the next opportunity.\n\nto\n\n * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n * try to close the cached connections associated with them if they are not\n * in the midst of a transaction, otherwise mark them as invalidated. We will\n * destroy the invalidated connections in pgfdw_xact_callback() at the end of\n * the main xact. Closed connections will be remade at the next opportunity.\n\nAny thoughts on the earlier point in [1] related to a test case (which\nbecomes unnecessary with this patch) coverage?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXymb%3Dd4KeOq%2Bjnh_E6yThn%2Bcf1CDRhtK_crkj0_hVimQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Dec 2020 20:10:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "\n\nOn 2020/12/23 23:40, Bharath Rupireddy wrote:\n> On Wed, Dec 23, 2020 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> I agree to make pgfdw_xact_callback() close the connection when\n>> entry->invalidated == true. But I think that it's better to get rid of\n>> have_invalid_connections flag and make pgfdw_inval_callback() close\n>> the connection immediately if entry->xact_depth == 0, to avoid unnecessary\n>> scan of the hashtable during COMMIT of transaction not accessing to\n>> foreign servers. Attached is the POC patch that I'm thinking. Thought?\n> \n> We could do that way as well. It seems okay to me. Now the disconnect\n> code is spread in pgfdw_inval_callback() and pgfdw_xact_callback().\n> There's also less burden on pgfdw_xact_callback() to close a lot of\n> connections on a single commit. The behaviour is like this - If\n> entry->xact_depth == 0, then the entries wouldn't have got any\n> connection in the current txn so they can be immediately closed in\n> pgfdw_inval_callback() and pgfdw_xact_callback() can exit immediately\n> as xact_got_connection is false. If entry->xact_depth > 0 which means\n> that probably pgfdw_inval_callback() came from a sub txn, we would\n> have got a connection in the txn i.e. 
xact_got_connection becomes true\n> due to which it can get invalidated in pgfdw_inval_callback() and get\n> closed in pgfdw_xact_callback() at the end of the txn.\n> \n> And there's no chance of entry->xact_depth > 0 and xact_got_connection false.\n> \n> I think we need to change the comment before pgfdw_inval_callback() a bit:\n> \n> * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n> * mark connections depending on that entry as needing to be remade.\n> * We can't immediately destroy them, since they might be in the midst of\n> * a transaction, but we'll remake them at the next opportunity.\n> \n> to\n> \n> * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n> * try to close the cached connections associated with them if they\n> are not in the\n> * midst of a transaction otherwise mark them as invalidated. We will\n> destroy the\n> * invalidated connections in pgfdw_xact_callback() at the end of the main xact.\n> * Closed connections will be remade at the next opportunity.\n\nYes, I agree that we need to update that comment.\n\n> \n> Any thoughts on the earlier point in [1] related to a test case(which\n> becomes unnecessary with this patch) coverage?\n> \n\nISTM that we can leave that test as it is because it's still useful to test\nthe case where the cached connection is discarded in pgfdw_inval_callback().\nThought?\n\nBy applying the patch, probably we can get rid of the code to discard\nthe invalidated cached connection in GetConnection(). But at least for\nthe back branches, I'd like to leave the code as it is so that we can make\nsure that the invalidated cached connection doesn't exist when getting\nnew connection. Maybe we can improve that in the master, but I'm not\nsure if it's really worth doing that against the gain. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 24 Dec 2020 10:51:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 7:21 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/12/23 23:40, Bharath Rupireddy wrote:\n> > On Wed, Dec 23, 2020 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> I agree to make pgfdw_xact_callback() close the connection when\n> >> entry->invalidated == true. But I think that it's better to get rid of\n> >> have_invalid_connections flag and make pgfdw_inval_callback() close\n> >> the connection immediately if entry->xact_depth == 0, to avoid unnecessary\n> >> scan of the hashtable during COMMIT of transaction not accessing to\n> >> foreign servers. Attached is the POC patch that I'm thinking. Thought?\n> >\n> > We could do that way as well. It seems okay to me. Now the disconnect\n> > code is spread in pgfdw_inval_callback() and pgfdw_xact_callback().\n> > There's also less burden on pgfdw_xact_callback() to close a lot of\n> > connections on a single commit. The behaviour is like this - If\n> > entry->xact_depth == 0, then the entries wouldn't have got any\n> > connection in the current txn so they can be immediately closed in\n> > pgfdw_inval_callback() and pgfdw_xact_callback() can exit immediately\n> > as xact_got_connection is false. If entry->xact_depth > 0 which means\n> > that probably pgfdw_inval_callback() came from a sub txn, we would\n> > have got a connection in the txn i.e. 
xact_got_connection becomes true\n> > due to which it can get invalidated in pgfdw_inval_callback() and get\n> > closed in pgfdw_xact_callback() at the end of the txn.\n> >\n> > And there's no chance of entry->xact_depth > 0 and xact_got_connection false.\n> >\n> > I think we need to change the comment before pgfdw_inval_callback() a bit:\n> >\n> > * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n> > * mark connections depending on that entry as needing to be remade.\n> > * We can't immediately destroy them, since they might be in the midst of\n> > * a transaction, but we'll remake them at the next opportunity.\n> >\n> > to\n> >\n> > * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n> > * try to close the cached connections associated with them if they\n> > are not in the\n> > * midst of a transaction otherwise mark them as invalidated. We will\n> > destroy the\n> > * invalidated connections in pgfdw_xact_callback() at the end of the main xact.\n> > * Closed connections will be remade at the next opportunity.\n>\n> Yes, I agree that we need to update that comment.\n>\n> >\n> > Any thoughts on the earlier point in [1] related to a test case(which\n> > becomes unnecessary with this patch) coverage?\n> >\n>\n> ISTM that we can leave that test as it is because it's still useful to test\n> the case where the cached connection is discarded in pgfdw_inval_callback().\n> Thought?\n\nYes, that test case covers the code this patch adds i.e. closing the\nconnection in pgfdw_inval_callback() while committing alter foreign\nserver stmt.\n\n> By applying the patch, probably we can get rid of the code to discard\n> the invalidated cached connection in GetConnection(). But at least for\n> the back branches, I'd like to leave the code as it is so that we can make\n> sure that the invalidated cached connection doesn't exist when getting\n> new connection. 
Maybe we can improve that in the master, but I'm not\n> sure if it's really worth doing that against the gain. Thought?\n\n+1 to keep that code as is even after this patch is applied(at least\nit works as an assertion that we don't have any leftover invalid\nconnections). I'm not quite sure, we may need that in some cases, say\nif we don't hit disconnect_pg_server() code in pgfdw_xact_callback()\nand pgfdw_inval_callback() due to some errors in between. I can not\nthink of an exact use case though.\n\nAttaching v2 patch that adds the comments and the other test case that\ncovers disconnecting code in pgfdw_xact_callback. Please review it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 24 Dec 2020 12:12:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "\n\nOn 2020/12/24 15:42, Bharath Rupireddy wrote:\n> On Thu, Dec 24, 2020 at 7:21 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/12/23 23:40, Bharath Rupireddy wrote:\n>>> On Wed, Dec 23, 2020 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> I agree to make pgfdw_xact_callback() close the connection when\n>>>> entry->invalidated == true. But I think that it's better to get rid of\n>>>> have_invalid_connections flag and make pgfdw_inval_callback() close\n>>>> the connection immediately if entry->xact_depth == 0, to avoid unnecessary\n>>>> scan of the hashtable during COMMIT of transaction not accessing to\n>>>> foreign servers. Attached is the POC patch that I'm thinking. Thought?\n>>>\n>>> We could do that way as well. It seems okay to me. Now the disconnect\n>>> code is spread in pgfdw_inval_callback() and pgfdw_xact_callback().\n>>> There's also less burden on pgfdw_xact_callback() to close a lot of\n>>> connections on a single commit. The behaviour is like this - If\n>>> entry->xact_depth == 0, then the entries wouldn't have got any\n>>> connection in the current txn so they can be immediately closed in\n>>> pgfdw_inval_callback() and pgfdw_xact_callback() can exit immediately\n>>> as xact_got_connection is false. If entry->xact_depth > 0 which means\n>>> that probably pgfdw_inval_callback() came from a sub txn, we would\n>>> have got a connection in the txn i.e. 
xact_got_connection becomes true\n>>> due to which it can get invalidated in pgfdw_inval_callback() and get\n>>> closed in pgfdw_xact_callback() at the end of the txn.\n>>>\n>>> And there's no chance of entry->xact_depth > 0 and xact_got_connection false.\n>>>\n>>> I think we need to change the comment before pgfdw_inval_callback() a bit:\n>>>\n>>> * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n>>> * mark connections depending on that entry as needing to be remade.\n>>> * We can't immediately destroy them, since they might be in the midst of\n>>> * a transaction, but we'll remake them at the next opportunity.\n>>>\n>>> to\n>>>\n>>> * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n>>> * try to close the cached connections associated with them if they\n>>> are not in the\n>>> * midst of a transaction otherwise mark them as invalidated. We will\n>>> destroy the\n>>> * invalidated connections in pgfdw_xact_callback() at the end of the main xact.\n>>> * Closed connections will be remade at the next opportunity.\n>>\n>> Yes, I agree that we need to update that comment.\n>>\n>>>\n>>> Any thoughts on the earlier point in [1] related to a test case(which\n>>> becomes unnecessary with this patch) coverage?\n>>>\n>>\n>> ISTM that we can leave that test as it is because it's still useful to test\n>> the case where the cached connection is discarded in pgfdw_inval_callback().\n>> Thought?\n> \n> Yes, that test case covers the code this patch adds i.e. closing the\n> connection in pgfdw_inval_callback() while committing alter foreign\n> server stmt.\n> \n>> By applying the patch, probably we can get rid of the code to discard\n>> the invalidated cached connection in GetConnection(). But at least for\n>> the back branches, I'd like to leave the code as it is so that we can make\n>> sure that the invalidated cached connection doesn't exist when getting\n>> new connection. 
Maybe we can improve that in the master, but I'm not\n>> sure if it's really worth doing that against the gain. Thought?\n> \n> +1 to keep that code as is even after this patch is applied(at least\n> it works as an assertion that we don't have any leftover invalid\n> connections). I'm not quite sure, we may need that in some cases, say\n> if we don't hit disconnect_pg_server() code in pgfdw_xact_callback()\n> and pgfdw_inval_callback() due to some errors in between. I can not\n> think of an exact use case though.\n> \n> Attaching v2 patch that adds the comments and the other test case that\n> covers disconnecting code in pgfdw_xact_callback. Please review it.\n\nThanks for updating the patch! It basically looks good to me except\nthe following minor things.\n\n+ * After a change to a pg_foreign_server or pg_user_mapping catalog entry, try\n+ * to close the cached connections associated with them if they are not in the\n+ * midst of a transaction otherwise mark them as invalid. We will destroy the\n+ * invalidated connections in pgfdw_xact_callback() at the end of the main\n+ * xact. Closed connections will be remade at the next opportunity.\n\nEven when we're in the midst of transaction, if that transaction has not used\nthe cached connections yet, we close them immediately. So, to make the\ncomment more precise, what about updating the comment as follows?\n\n---------------------\n After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n close connections depending on that entry immediately if current\n transaction has not used those connections yet. Otherwise, mark those\n connections as invalid and then make pgfdw_xact_callback() close them\n at the end of current transaction, since they cannot be closed in the midst\n of a transaction using them. Closed connections will be remade at the next\n opportunity if necessary.\n---------------------\n\n+\t\t\t/*\n+\t\t\t * Close the connection if it's not in midst of a xact. 
Otherwise\n+\t\t\t * mark it as invalid so that it can be disconnected at the end of\n+\t\t\t * main xact in pgfdw_xact_callback().\n+\t\t\t */\n\nBecause of the same reason as the above, what about updating this comment\nas follows?\n\n---------------------\n Close the connection immediately if it's not used yet in this transaction.\n Otherwise mark it as invalid so that pgfdw_xact_callback() can close it\n at the end of this transaction.\n---------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 24 Dec 2020 23:13:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 7:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Even when we're in the midst of transaction, if that transaction has not used\n> the cached connections yet, we close them immediately. So, to make the\n> comment more precise, what about updating the comment as follows?\n>\n> ---------------------\n> After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n> close connections depending on that entry immediately if current\n> transaction has not used those connections yet. Otherwise, mark those\n> connections as invalid and then make pgfdw_xact_callback() close them\n> at the end of current transaction, since they cannot be closed in the midst\n> of a transaction using them. Closed connections will be remade at the next\n> opportunity if necessary.\n> ---------------------\n\nDone.\n\n> + /*\n> + * Close the connection if it's not in midst of a xact. Otherwise\n> + * mark it as invalid so that it can be disconnected at the end of\n> + * main xact in pgfdw_xact_callback().\n> + */\n>\n> Because of the same reason as the above, what about updating this comment\n> as follows?\n>\n> ---------------------\n> Close the connection immediately if it's not used yet in this transaction.\n> Otherwise mark it as invalid so that pgfdw_xact_callback() can close it\n> at the end of this transaction.\n> ---------------------\n\nDone.\n\nAttaching v3 patch. Please have a look. Thanks!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 24 Dec 2020 20:00:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "\n\nOn 2020/12/24 23:30, Bharath Rupireddy wrote:\n> On Thu, Dec 24, 2020 at 7:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Even when we're in the midst of transaction, if that transaction has not used\n>> the cached connections yet, we close them immediately. So, to make the\n>> comment more precise, what about updating the comment as follows?\n>>\n>> ---------------------\n>> After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n>> close connections depending on that entry immediately if current\n>> transaction has not used those connections yet. Otherwise, mark those\n>> connections as invalid and then make pgfdw_xact_callback() close them\n>> at the end of current transaction, since they cannot be closed in the midst\n>> of a transaction using them. Closed connections will be remade at the next\n>> opportunity if necessary.\n>> ---------------------\n> \n> Done.\n> \n>> + /*\n>> + * Close the connection if it's not in midst of a xact. Otherwise\n>> + * mark it as invalid so that it can be disconnected at the end of\n>> + * main xact in pgfdw_xact_callback().\n>> + */\n>>\n>> Because of the same reason as the above, what about updating this comment\n>> as follows?\n>>\n>> ---------------------\n>> Close the connection immediately if it's not used yet in this transaction.\n>> Otherwise mark it as invalid so that pgfdw_xact_callback() can close it\n>> at the end of this transaction.\n>> ---------------------\n> \n> Done.\n> \n> Attaching v3 patch. Please have a look. Thanks!\n\nThanks a lot! Barring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 24 Dec 2020 23:45:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
},
{
"msg_contents": "\n\nOn 2020/12/24 23:45, Fujii Masao wrote:\n> \n> \n> On 2020/12/24 23:30, Bharath Rupireddy wrote:\n>> On Thu, Dec 24, 2020 at 7:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Even when we're in the midst of transaction, if that transaction has not used\n>>> the cached connections yet, we close them immediately. So, to make the\n>>> comment more precise, what about updating the comment as follows?\n>>>\n>>> ---------------------\n>>> After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n>>> close connections depending on that entry immediately if current\n>>> transaction has not used those connections yet. Otherwise, mark those\n>>> connections as invalid and then make pgfdw_xact_callback() close them\n>>> at the end of current transaction, since they cannot be closed in the midst\n>>> of a transaction using them. Closed connections will be remade at the next\n>>> opportunity if necessary.\n>>> ---------------------\n>>\n>> Done.\n>>\n>>> + /*\n>>> + * Close the connection if it's not in midst of a xact. Otherwise\n>>> + * mark it as invalid so that it can be disconnected at the end of\n>>> + * main xact in pgfdw_xact_callback().\n>>> + */\n>>>\n>>> Because of the same reason as the above, what about updating this comment\n>>> as follows?\n>>>\n>>> ---------------------\n>>> Close the connection immediately if it's not used yet in this transaction.\n>>> Otherwise mark it as invalid so that pgfdw_xact_callback() can close it\n>>> at the end of this transaction.\n>>> ---------------------\n>>\n>> Done.\n>>\n>> Attaching v3 patch. Please have a look. Thanks!\n> \n> Thanks a lot! Barring any objection, I will commit this version.\n\nPushed. Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 28 Dec 2020 20:05:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - cached connection leaks if the associated user\n mapping/foreign server is dropped"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen examining the duration of locks, we often do join on pg_locks\nand pg_stat_activity and use columns such as query_start or\nstate_change.\n\nHowever, since these columns are the moment when queries have\nstarted or their state has changed, we cannot get the exact lock\nduration in this way.\n\nSo I'm now thinking about adding a new column in pg_locks which\nkeeps the time at which locks started waiting.\n\nOne problem with this idea would be the performance impact of\ncalling gettimeofday repeatedly.\nTo avoid it, I reused the result of the gettimeofday which was\ncalled for deadlock_timeout timer start as suggested in the\nprevious discussion[1].\n\nAttached a patch.\n\nBTW in this patch, for fast path locks, wait_start is set to\nzero because it seems the lock will not be waited for.\nIf my understanding is wrong, I would appreciate it if\nsomeone could point out.\n\n\nAny thoughts?\n\n\n[1] \nhttps://www.postgresql.org/message-id/28804.1407907184%40sss.pgh.pa.us\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Tue, 15 Dec 2020 11:47:23 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "adding wait_start column to pg_locks"
},
{
"msg_contents": "On Tue, Dec 15, 2020 at 11:47:23AM +0900, torikoshia wrote:\n> So I'm now thinking about adding a new column in pg_locks which\n> keeps the time at which locks started waiting.\n> \n> Attached a patch.\n\nThis is failing make check-world, would you send an updated patch ?\n\nI added you as an author so it shows up here.\nhttp://cfbot.cputube.org/atsushi-torikoshi.html\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 1 Jan 2021 15:49:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-01-02 06:49, Justin Pryzby wrote:\n> On Tue, Dec 15, 2020 at 11:47:23AM +0900, torikoshia wrote:\n>> So I'm now thinking about adding a new column in pg_locks which\n>> keeps the time at which locks started waiting.\n>> \n>> Attached a patch.\n> \n> This is failing make check-world, would you send an updated patch ?\n> \n> I added you as an author so it shows up here.\n> http://cfbot.cputube.org/atsushi-torikoshi.html\n\nThanks!\n\nAttached an updated patch.\n\nRegards,",
"msg_date": "Mon, 04 Jan 2021 15:04:29 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "Hi\n\n2021年1月4日(月) 15:04 torikoshia <torikoshia@oss.nttdata.com>:\n>\n> On 2021-01-02 06:49, Justin Pryzby wrote:\n> > On Tue, Dec 15, 2020 at 11:47:23AM +0900, torikoshia wrote:\n> >> So I'm now thinking about adding a new column in pg_locks which\n> >> keeps the time at which locks started waiting.\n> >>\n> >> Attached a patch.\n> >\n> > This is failing make check-world, would you send an updated patch ?\n> >\n> > I added you as an author so it shows up here.\n> > http://cfbot.cputube.org/atsushi-torikoshi.html\n>\n> Thanks!\n>\n> Attached an updated patch.\n\nI took a look at this patch as it seems useful (and I have an item on my\nbucket\nlist labelled \"look at the locking code\", which I am not at all familiar\nwith).\n\nI tested the patch by doing the following:\n\nSession 1:\n\n postgres=# CREATE TABLE foo (id int);\n CREATE TABLE\n\n postgres=# BEGIN ;\n BEGIN\n\n postgres=*# INSERT INTO foo VALUES (1);\n INSERT 0 1\n\nSession 2:\n\n postgres=# BEGIN ;\n BEGIN\n\n postgres=*# LOCK TABLE foo;\n\nSession 3:\n\n postgres=# SELECT locktype, relation, pid, mode, granted, fastpath,\nwait_start\n FROM pg_locks\n WHERE relation = 'foo'::regclass AND NOT granted\\x\\g\\x\n\n -[ RECORD 1 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3513935\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n wait_start | 2021-01-14 12:03:06.683053+09\n\nSo far so good, but checking *all* the locks on this relation:\n\n postgres=# SELECT locktype, relation, pid, mode, granted, fastpath,\nwait_start\n FROM pg_locks\n WHERE relation = 'foo'::regclass\\x\\g\\x\n\n -[ RECORD 1 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3513824\n mode | RowExclusiveLock\n granted | t\n fastpath | f\n wait_start | 2021-01-14 12:03:06.683053+09\n -[ RECORD 2 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3513935\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n 
wait_start | 2021-01-14 12:03:06.683053+09\n\nshows the RowExclusiveLock granted in session 1 as apparently waiting from\nexactly the same time as session 2's attempt to acquire the lock, which is\nclearly\nnot right.\n\nAlso, if a further session attempts to acquire a lock, we get:\n\n postgres=# SELECT locktype, relation, pid, mode, granted, fastpath,\nwait_start\n FROM pg_locks\n WHERE relation = 'foo'::regclass\\x\\g\\x\n\n -[ RECORD 1 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3513824\n mode | RowExclusiveLock\n granted | t\n fastpath | f\n wait_start | 2021-01-14 12:05:53.747309+09\n -[ RECORD 2 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3514240\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n wait_start | 2021-01-14 12:05:53.747309+09\n -[ RECORD 3 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3513935\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n wait_start | 2021-01-14 12:05:53.747309+09\n\ni.e. 
all entries now have \"wait_start\" set to the start time of the latest\nsession's\nlock acquisition attempt.\n\nLooking at the code, this happens as the wait start time is being recorded\nin\nthe lock record itself, so always contains the value reported by the latest\nlock\nacquisition attempt.\n\nIt looks like the logical place to store the value is in the PROCLOCK\nstructure; the attached patch reworks your patch to do that, and given the\nabove\nscenario produces following output:\n\n postgres=# SELECT locktype, relation, pid, mode, granted, fastpath,\nwait_start\n FROM pg_locks\n WHERE relation = 'foo'::regclass\\x\\g\\x\n\n -[ RECORD 1 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3516391\n mode | RowExclusiveLock\n granted | t\n fastpath | f\n wait_start |\n -[ RECORD 2 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3516470\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n wait_start | 2021-01-14 12:19:16.217163+09\n -[ RECORD 3 ]-----------------------------\n locktype | relation\n relation | 16452\n pid | 3516968\n mode | AccessExclusiveLock\n granted | f\n fastpath | f\n wait_start | 2021-01-14 12:18:08.195429+09\n\nAs mentioned, I'm not at all familiar with the locking code so it's quite\npossible that it's incomplete, there may be non-obvious side-effects, or it's\nthe wrong approach altogether etc., so further review is necessary.\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Thu, 14 Jan 2021 12:39:45 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> It looks like the logical place to store the value is in the PROCLOCK\n> structure; ...\n\nThat seems surprising, because there's one PROCLOCK for every\ncombination of a process and a lock. But, a process can't be waiting\nfor more than one lock at the same time, because once it starts\nwaiting to acquire the first one, it can't do anything else, and thus\ncan't begin waiting for a second one. So I would have thought that\nthis would be recorded in the PROC.\n\nBut I haven't looked at the patch so maybe I'm dumb.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Jan 2021 13:45:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "2021年1月15日(金) 3:45 Robert Haas <robertmhaas@gmail.com>:\n\n> On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick <barwick@gmail.com>\n> wrote:\n> > It looks like the logical place to store the value is in the PROCLOCK\n> > structure; ...\n>\n> That seems surprising, because there's one PROCLOCK for every\n> combination of a process and a lock. But, a process can't be waiting\n> for more than one lock at the same time, because once it starts\n> waiting to acquire the first one, it can't do anything else, and thus\n> can't begin waiting for a second one. So I would have thought that\n> this would be recorded in the PROC.\n>\n\nUmm, I think we're at cross-purposes here. The suggestion is to note\nthe time when the process started waiting for the lock in the process's\nPROCLOCK, rather than in the lock itself (which in the original version\nof the patch resulted in all processes with an interest in the lock\nappearing\nto have been waiting to acquire it since the time a lock acquisition\nwas most recently attempted).\n\nAs mentioned, I hadn't really ever looked at the lock code before so might\nbe barking up the wrong forest.\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n2021年1月15日(金) 3:45 Robert Haas <robertmhaas@gmail.com>:On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> It looks like the logical place to store the value is in the PROCLOCK\n> structure; ...\n\nThat seems surprising, because there's one PROCLOCK for every\ncombination of a process and a lock. But, a process can't be waiting\nfor more than one lock at the same time, because once it starts\nwaiting to acquire the first one, it can't do anything else, and thus\ncan't begin waiting for a second one. So I would have thought that\nthis would be recorded in the PROC.Umm, I think we're at cross-purposes here. 
The suggestion is to notethe time when the process started waiting for the lock in the process'sPROCLOCK, rather than in the lock itself (which in the original versionof the patch resulted in all processes with an interest in the lock appearingto have been waiting to acquire it since the time a lock acquisitionwas most recently attempted).As mentioned, I hadn't really ever looked at the lock code before so mightbe barking up the wrong forest. RegardsIan Barwick-- EnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Fri, 15 Jan 2021 11:48:34 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "Thanks for your reviewing and comments!\n\nOn 2021-01-14 12:39, Ian Lawrence Barwick wrote:\n> Looking at the code, this happens as the wait start time is being \n> recorded in\n> the lock record itself, so always contains the value reported by the \n> latest lock\n> acquisition attempt.\n\nI think you are right and wait_start should not be recorded\nin the LOCK.\n\n\nOn 2021-01-15 11:48, Ian Lawrence Barwick wrote:\n> 2021年1月15日(金) 3:45 Robert Haas <robertmhaas@gmail.com>:\n> \n>> On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick\n>> <barwick@gmail.com> wrote:\n>>> It looks like the logical place to store the value is in the\n>> PROCLOCK\n>>> structure; ...\n>> \n>> That seems surprising, because there's one PROCLOCK for every\n>> combination of a process and a lock. But, a process can't be waiting\n>> for more than one lock at the same time, because once it starts\n>> waiting to acquire the first one, it can't do anything else, and\n>> thus\n>> can't begin waiting for a second one. So I would have thought that\n>> this would be recorded in the PROC.\n> \n> Umm, I think we're at cross-purposes here. The suggestion is to note\n> the time when the process started waiting for the lock in the\n> process's\n> PROCLOCK, rather than in the lock itself (which in the original\n> version\n> of the patch resulted in all processes with an interest in the lock\n> appearing\n> to have been waiting to acquire it since the time a lock acquisition\n> was most recently attempted).\n\nAFAIU, it seems possible to record wait_start in the PROCLOCK but\nredundant since each process can wait at most one lock.\n\nTo confirm my understanding, I'm going to make another patch that\nrecords wait_start in the PGPROC.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n",
"msg_date": "Fri, 15 Jan 2021 15:23:51 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-01-15 15:23, torikoshia wrote:\n> Thanks for your reviewing and comments!\n> \n> On 2021-01-14 12:39, Ian Lawrence Barwick wrote:\n>> Looking at the code, this happens as the wait start time is being \n>> recorded in\n>> the lock record itself, so always contains the value reported by the \n>> latest lock\n>> acquisition attempt.\n> \n> I think you are right and wait_start should not be recorded\n> in the LOCK.\n> \n> \n> On 2021-01-15 11:48, Ian Lawrence Barwick wrote:\n>> 2021年1月15日(金) 3:45 Robert Haas <robertmhaas@gmail.com>:\n>> \n>>> On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick\n>>> <barwick@gmail.com> wrote:\n>>>> It looks like the logical place to store the value is in the\n>>> PROCLOCK\n>>>> structure; ...\n>>> \n>>> That seems surprising, because there's one PROCLOCK for every\n>>> combination of a process and a lock. But, a process can't be waiting\n>>> for more than one lock at the same time, because once it starts\n>>> waiting to acquire the first one, it can't do anything else, and\n>>> thus\n>>> can't begin waiting for a second one. So I would have thought that\n>>> this would be recorded in the PROC.\n>> \n>> Umm, I think we're at cross-purposes here. 
The suggestion is to note\n>> the time when the process started waiting for the lock in the\n>> process's\n>> PROCLOCK, rather than in the lock itself (which in the original\n>> version\n>> of the patch resulted in all processes with an interest in the lock\n>> appearing\n>> to have been waiting to acquire it since the time a lock acquisition\n>> was most recently attempted).\n> \n> AFAIU, it seems possible to record wait_start in the PROCLOCK but\n> redundant since each process can wait at most one lock.\n> \n> To confirm my understanding, I'm going to make another patch that\n> records wait_start in the PGPROC.\n\nAttached a patch.\n\nI noticed previous patches left the wait_start untouched even after\nit acquired lock.\nThe patch also fixed it.\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Mon, 18 Jan 2021 12:00:22 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/01/18 12:00, torikoshia wrote:\n> On 2021-01-15 15:23, torikoshia wrote:\n>> Thanks for your reviewing and comments!\n>>\n>> On 2021-01-14 12:39, Ian Lawrence Barwick wrote:\n>>> Looking at the code, this happens as the wait start time is being recorded in\n>>> the lock record itself, so always contains the value reported by the latest lock\n>>> acquisition attempt.\n>>\n>> I think you are right and wait_start should not be recorded\n>> in the LOCK.\n>>\n>>\n>> On 2021-01-15 11:48, Ian Lawrence Barwick wrote:\n>>> 2021年1月15日(金) 3:45 Robert Haas <robertmhaas@gmail.com>:\n>>>\n>>>> On Wed, Jan 13, 2021 at 10:40 PM Ian Lawrence Barwick\n>>>> <barwick@gmail.com> wrote:\n>>>>> It looks like the logical place to store the value is in the\n>>>> PROCLOCK\n>>>>> structure; ...\n>>>>\n>>>> That seems surprising, because there's one PROCLOCK for every\n>>>> combination of a process and a lock. But, a process can't be waiting\n>>>> for more than one lock at the same time, because once it starts\n>>>> waiting to acquire the first one, it can't do anything else, and\n>>>> thus\n>>>> can't begin waiting for a second one. So I would have thought that\n>>>> this would be recorded in the PROC.\n>>>\n>>> Umm, I think we're at cross-purposes here. 
The suggestion is to note\n>>> the time when the process started waiting for the lock in the\n>>> process's\n>>> PROCLOCK, rather than in the lock itself (which in the original\n>>> version\n>>> of the patch resulted in all processes with an interest in the lock\n>>> appearing\n>>> to have been waiting to acquire it since the time a lock acquisition\n>>> was most recently attempted).\n>>\n>> AFAIU, it seems possible to record wait_start in the PROCLOCK but\n>> redundant since each process can wait at most one lock.\n>>\n>> To confirm my understanding, I'm going to make another patch that\n>> records wait_start in the PGPROC.\n> \n> Attached a patch.\n> \n> I noticed previous patches left the wait_start untouched even after\n> it acquired lock.\n> The patch also fixed it.\n> \n> Any thoughts?\n\nThanks for updating the patch! I think that this is really useful feature!!\nI have two minor comments.\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wait_start</structfield> <type>timestamptz</type>\n\nThe column name \"wait_start\" should be \"waitstart\" for the sake of consistency\nwith other column names in pg_locks? pg_locks seems to avoid including\nan underscore in column names, so \"locktype\" is used instead of \"lock_type\",\n\"virtualtransaction\" is used instead of \"virtual_transaction\", etc.\n\n+ Lock acquisition wait start time. <literal>NULL</literal> if\n+ lock acquired.\n\nThere seems the case where the wait start time is NULL even when \"grant\"\nis false. It's better to add note about that case into the docs? For example,\nI found that the wait start time is NULL while the startup process is waiting\nfor the lock. Is this only that case?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 21 Jan 2021 12:48:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-01-21 12:48, Fujii Masao wrote:\n\n> Thanks for updating the patch! I think that this is really useful \n> feature!!\n\nThanks for reviewing!\n\n> I have two minor comments.\n> \n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wait_start</structfield> <type>timestamptz</type>\n> \n> The column name \"wait_start\" should be \"waitstart\" for the sake of \n> consistency\n> with other column names in pg_locks? pg_locks seems to avoid including\n> an underscore in column names, so \"locktype\" is used instead of \n> \"lock_type\",\n> \"virtualtransaction\" is used instead of \"virtual_transaction\", etc.\n> \n> + Lock acquisition wait start time. <literal>NULL</literal> if\n> + lock acquired.\n> \n\nAgreed.\n\nI also changed the variable name \"wait_start\" in struct PGPROC and\nLockInstanceData to \"waitStart\" for the same reason.\n\n\n> There seems the case where the wait start time is NULL even when \n> \"grant\"\n> is false. It's better to add note about that case into the docs? For \n> example,\n> I found that the wait start time is NULL while the startup process is \n> waiting\n> for the lock. Is this only that case?\n\nThanks, this is because I set 'waitstart' in the following\ncondition.\n\n ---src/backend/storage/lmgr/proc.c\n > 1250 if (!InHotStandby)\n\nAs far as considering this, I guess startup process would\nbe the only case.\n\nI also think that in case of startup process, it seems possible\nto set 'waitstart' in ResolveRecoveryConflictWithLock(), so I\ndid it in the attached patch.\n\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Fri, 22 Jan 2021 14:37:50 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/01/22 14:37, torikoshia wrote:\n> On 2021-01-21 12:48, Fujii Masao wrote:\n> \n>> Thanks for updating the patch! I think that this is really useful feature!!\n> \n> Thanks for reviewing!\n> \n>> I have two minor comments.\n>>\n>> +����� <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> +������ <structfield>wait_start</structfield> <type>timestamptz</type>\n>>\n>> The column name \"wait_start\" should be \"waitstart\" for the sake of consistency\n>> with other column names in pg_locks? pg_locks seems to avoid including\n>> an underscore in column names, so \"locktype\" is used instead of \"lock_type\",\n>> \"virtualtransaction\" is used instead of \"virtual_transaction\", etc.\n>>\n>> +������ Lock acquisition wait start time. <literal>NULL</literal> if\n>> +������ lock acquired.\n>>\n> \n> Agreed.\n> \n> I also changed the variable name \"wait_start\" in struct PGPROC and\n> LockInstanceData to \"waitStart\" for the same reason.\n> \n> \n>> There seems the case where the wait start time is NULL even when \"grant\"\n>> is false. It's better to add note about that case into the docs? For example,\n>> I found that the wait start time is NULL while the startup process is waiting\n>> for the lock. Is this only that case?\n> \n> Thanks, this is because I set 'waitstart' in the following\n> condition.\n> \n> � ---src/backend/storage/lmgr/proc.c\n> � > 1250�������� if (!InHotStandby)\n> \n> As far as considering this, I guess startup process would\n> be the only case.\n> \n> I also think that in case of startup process, it seems possible\n> to set 'waitstart' in ResolveRecoveryConflictWithLock(), so I\n> did it in the attached patch.\n\nThis change seems to cause \"waitstart\" to be reset every time\nResolveRecoveryConflictWithLock() is called in the do-while loop.\nI guess this is not acceptable. Right?\n\nTo avoid that issue, IMO the following change is better. 
Thought?\n\n- else if (log_recovery_conflict_waits)\n+ else\n {\n+ TimestampTz now = GetCurrentTimestamp();\n+\n+ MyProc->waitStart = now;\n+\n /*\n * Set the wait start timestamp if logging is enabled and in hot\n * standby.\n */\n- standbyWaitStart = GetCurrentTimestamp();\n+ if (log_recovery_conflict_waits)\n+ standbyWaitStart = now\n }\n\nThis change causes the startup process to call GetCurrentTimestamp()\nadditionally even when log_recovery_conflict_waits is disabled. Which\nmight decrease the performance of the startup process, but that performance\ndegradation can happen only when the startup process waits in\nACCESS EXCLUSIVE lock. So if this my understanding right, IMO it's almost\nharmless to call GetCurrentTimestamp() additionally in that case. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 22 Jan 2021 18:11:54 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/01/22 18:11, Fujii Masao wrote:\n> \n> \n> On 2021/01/22 14:37, torikoshia wrote:\n>> On 2021-01-21 12:48, Fujii Masao wrote:\n>>\n>>> Thanks for updating the patch! I think that this is really useful feature!!\n>>\n>> Thanks for reviewing!\n>>\n>>> I have two minor comments.\n>>>\n>>> +����� <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>> +������ <structfield>wait_start</structfield> <type>timestamptz</type>\n>>>\n>>> The column name \"wait_start\" should be \"waitstart\" for the sake of consistency\n>>> with other column names in pg_locks? pg_locks seems to avoid including\n>>> an underscore in column names, so \"locktype\" is used instead of \"lock_type\",\n>>> \"virtualtransaction\" is used instead of \"virtual_transaction\", etc.\n>>>\n>>> +������ Lock acquisition wait start time. <literal>NULL</literal> if\n>>> +������ lock acquired.\n>>>\n>>\n>> Agreed.\n>>\n>> I also changed the variable name \"wait_start\" in struct PGPROC and\n>> LockInstanceData to \"waitStart\" for the same reason.\n>>\n>>\n>>> There seems the case where the wait start time is NULL even when \"grant\"\n>>> is false. It's better to add note about that case into the docs? For example,\n>>> I found that the wait start time is NULL while the startup process is waiting\n>>> for the lock. Is this only that case?\n>>\n>> Thanks, this is because I set 'waitstart' in the following\n>> condition.\n>>\n>> �� ---src/backend/storage/lmgr/proc.c\n>> �� > 1250�������� if (!InHotStandby)\n>>\n>> As far as considering this, I guess startup process would\n>> be the only case.\n>>\n>> I also think that in case of startup process, it seems possible\n>> to set 'waitstart' in ResolveRecoveryConflictWithLock(), so I\n>> did it in the attached patch.\n> \n> This change seems to cause \"waitstart\" to be reset every time\n> ResolveRecoveryConflictWithLock() is called in the do-while loop.\n> I guess this is not acceptable. 
Right?\n> \n> To avoid that issue, IMO the following change is better. Thought?\n> \n> -������ else if (log_recovery_conflict_waits)\n> +������ else\n> ������� {\n> +�������������� TimestampTz now = GetCurrentTimestamp();\n> +\n> +�������������� MyProc->waitStart = now;\n> +\n> ��������������� /*\n> ���������������� * Set the wait start timestamp if logging is enabled and in hot\n> ���������������� * standby.\n> ���������������� */\n> -�������������� standbyWaitStart = GetCurrentTimestamp();\n> +��������������� if (log_recovery_conflict_waits)\n> +����������������������� standbyWaitStart = now\n> ������� }\n> \n> This change causes the startup process to call GetCurrentTimestamp()\n> additionally even when log_recovery_conflict_waits is disabled. Which\n> might decrease the performance of the startup process, but that performance\n> degradation can happen only when the startup process waits in\n> ACCESS EXCLUSIVE lock. So if this my understanding right, IMO it's almost\n> harmless to call GetCurrentTimestamp() additionally in that case. Thought?\n\nAccording to the off-list discussion with you, this should not happen because ResolveRecoveryConflictWithDatabase() sets MyProc->waitStart only when it's not set yet (i.e., = 0). That's good. So I'd withdraw my comment.\n\n+\tif (MyProc->waitStart == 0)\n+\t\tMyProc->waitStart = now;\n<snip>\n+\t\tMyProc->waitStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n\nAnother comment is; Doesn't the change of MyProc->waitStart need the lock table's partition lock? If yes, we can do that by moving LWLockRelease(partitionLock) just after the change of MyProc->waitStart, but which causes the time that lwlock is being held to be long. So maybe we need another way to do that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jan 2021 23:44:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-01-25 23:44, Fujii Masao wrote:\n> Another comment is; Doesn't the change of MyProc->waitStart need the\n> lock table's partition lock? If yes, we can do that by moving\n> LWLockRelease(partitionLock) just after the change of\n> MyProc->waitStart, but which causes the time that lwlock is being held\n> to be long. So maybe we need another way to do that.\n\nThanks for your comments!\n\nIt would be ideal for the consistency of the view to record \"waitstart\" \nduring holding the table partition lock.\nHowever, as you pointed out, it would give non-negligible performance \nimpacts.\n\nI may miss something, but as far as I can see, the influence of not \nholding the lock is that \"waitstart\" can be NULL even though \"granted\" \nis false.\n\nI think people want to know the start time of the lock when locks are \nheld for a long time.\nIn that case, \"waitstart\" should have already been recorded.\n\nIf this is true, I think the current implementation may be enough on the \ncondition that users understand it can happen that \"waitStart\" is NULL \nand \"granted\" is false.\n\nAttached a patch describing this in the doc and comments.\n\n\nAny Thoughts?\n\nRegards,\n\n\n--\nAtsushi Torikoshi",
"msg_date": "Tue, 02 Feb 2021 22:00:47 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
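The trade-off described in the message above, reading "waitstart" without the partition lock and accepting that it can briefly be NULL while "granted" is false, can be sketched with portable C11 atomics. This is only an illustration of the idea under discussion, not PostgreSQL's pg_atomic API, and all names here are hypothetical.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the PGPROC field under discussion:
 * 0 means "wait start not recorded yet"; anything else is a timestamp. */
typedef struct
{
    atomic_uint_least64_t wait_start;
} WaitSlot;

/* Waiter side: record the start time only once per wait, without
 * holding the lock table's partition lock. */
static void
record_wait_start(WaitSlot *slot, uint64_t now)
{
    if (atomic_load_explicit(&slot->wait_start, memory_order_relaxed) == 0)
        atomic_store_explicit(&slot->wait_start, now, memory_order_relaxed);
}

/* View side: a raw 0 is reported as NULL, which is why a reader can
 * momentarily see waitstart = NULL even though granted is false. */
static bool
read_wait_start(WaitSlot *slot, uint64_t *ts)
{
    uint64_t v = atomic_load_explicit(&slot->wait_start, memory_order_relaxed);

    if (v == 0)
        return false;           /* render as NULL in the view */
    *ts = v;
    return true;
}
```

The "set only when still 0" guard mirrors the rule the thread settles on: once a long wait has been recorded, later re-entries must not reset it.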
{
"msg_contents": "\n\nOn 2021/02/02 22:00, torikoshia wrote:\n> On 2021-01-25 23:44, Fujii Masao wrote:\n>> Another comment is; Doesn't the change of MyProc->waitStart need the\n>> lock table's partition lock? If yes, we can do that by moving\n>> LWLockRelease(partitionLock) just after the change of\n>> MyProc->waitStart, but which causes the time that lwlock is being held\n>> to be long. So maybe we need another way to do that.\n> \n> Thanks for your comments!\n> \n> It would be ideal for the consistency of the view to record \"waitstart\" during holding the table partition lock.\n> However, as you pointed out, it would give non-negligible performance impacts.\n> \n> I may miss something, but as far as I can see, the influence of not holding the lock is that \"waitstart\" can be NULL even though \"granted\" is false.\n> \n> I think people want to know the start time of the lock when locks are held for a long time.\n> In that case, \"waitstart\" should have already been recorded.\n\nSounds reasonable.\n\n\n> If this is true, I think the current implementation may be enough on the condition that users understand it can happen that \"waitStart\" is NULL and \"granted\" is false.\n> \n> Attached a patch describing this in the doc and comments.\n> \n> \n> Any Thoughts?\n\n64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n\n\n+ Lock acquisition wait start time.\n\nIsn't it better to describe this more clearly? What about the following?\n\n Time when the server process started waiting for this lock,\n or null if the lock is held.\n\n+ Note that updating this field and lock acquisition are not performed\n+ synchronously for performance reasons. 
Therefore, depending on the\n+ timing, it can happen that <structfield>waitstart</structfield> is\n+ <literal>NULL</literal> even though\n+ <structfield>granted</structfield> is false.\n\nI agree that it's helpful to add the note about that NULL can be returned even when \"granted\" is false. But IMO we don't need to document why this behavior can happen internally. So what about the following?\n\n Note that this can be null for a very short period of time after\n the wait started even though <structfield>granted</structfield>\n is <literal>false</literal>.\n\nSince the document for pg_locks uses \"null\" instead of <literal>NULL</literal> (I'm not sure why, though), I used \"null\" for the sake of consistency.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Feb 2021 01:49:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/02/03 1:49, Fujii Masao wrote:\n> \n> \n> On 2021/02/02 22:00, torikoshia wrote:\n>> On 2021-01-25 23:44, Fujii Masao wrote:\n>>> Another comment is; Doesn't the change of MyProc->waitStart need the\n>>> lock table's partition lock? If yes, we can do that by moving\n>>> LWLockRelease(partitionLock) just after the change of\n>>> MyProc->waitStart, but which causes the time that lwlock is being held\n>>> to be long. So maybe we need another way to do that.\n>>\n>> Thanks for your comments!\n>>\n>> It would be ideal for the consistency of the view to record \"waitstart\" during holding the table partition lock.\n>> However, as you pointed out, it would give non-negligible performance impacts.\n>>\n>> I may miss something, but as far as I can see, the influence of not holding the lock is that \"waitstart\" can be NULL even though \"granted\" is false.\n>>\n>> I think people want to know the start time of the lock when locks are held for a long time.\n>> In that case, \"waitstart\" should have already been recorded.\n> \n> Sounds reasonable.\n> \n> \n>> If this is true, I think the current implementation may be enough on the condition that users understand it can happen that \"waitStart\" is NULL and \"granted\" is false.\n>>\n>> Attached a patch describing this in the doc and comments.\n>>\n>>\n>> Any Thoughts?\n> \n> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n\nAlso it might be worth thinking to use 64-bit atomic operations like pg_atomic_read_u64(), for that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Feb 2021 11:23:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-02-03 11:23, Fujii Masao wrote:\n>> 64-bit fetches are not atomic on some platforms. So spinlock is \n>> necessary when updating \"waitStart\" without holding the partition \n>> lock? Also GetLockStatusData() needs spinlock when reading \n>> \"waitStart\"?\n> \n> Also it might be worth thinking to use 64-bit atomic operations like\n> pg_atomic_read_u64(), for that.\n\nThanks for your suggestion and advice!\n\nIn the attached patch I used pg_atomic_read_u64() and \npg_atomic_write_u64().\n\nwaitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx \nand pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n\nI may be using these functions not correctly, so if something is wrong, \nI would appreciate any comments.\n\n\nAbout the documentation, since your suggestion seems better than v6, I \nused it as is.\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Fri, 05 Feb 2021 00:03:51 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
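The cast mentioned above, where TimestampTz is int64 but the atomic API takes unsigned 64-bit values, relies on the signed value surviving a round trip through uint64. A minimal C11 sketch of that assumption follows; the names are hypothetical and this is not the pg_atomic implementation. Note that the uint64-to-int64 conversion is implementation-defined in C before C23, though two's-complement platforms preserve the value exactly.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

typedef int64_t FakeTimestampTz;    /* stands in for TimestampTz (int64) */

/* Store a signed timestamp through an unsigned 64-bit atomic. */
static void
store_ts(atomic_uint_least64_t *slot, FakeTimestampTz ts)
{
    atomic_store_explicit(slot, (uint64_t) ts, memory_order_relaxed);
}

/* Read it back; on two's-complement machines the cast restores the
 * original signed value, including negative ones. */
static FakeTimestampTz
load_ts(atomic_uint_least64_t *slot)
{
    return (FakeTimestampTz) atomic_load_explicit(slot, memory_order_relaxed);
}
```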
{
"msg_contents": "On 2021/02/05 0:03, torikoshia wrote:\n> On 2021-02-03 11:23, Fujii Masao wrote:\n>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>\n>> Also it might be worth thinking to use 64-bit atomic operations like\n>> pg_atomic_read_u64(), for that.\n> \n> Thanks for your suggestion and advice!\n> \n> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n> \n> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n> \n> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n> \n> \n> About the documentation, since your suggestion seems better than v6, I used it as is.\n\nThanks for updating the patch!\n\n+\tif (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n+\t\tpg_atomic_write_u64(&MyProc->waitStart,\n+\t\t\t\t\t\t\tpg_atomic_read_u64((pg_atomic_uint64 *) &now));\n\npg_atomic_read_u64() is really necessary? I think that\n\"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n\n+\t\tdeadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n+\t\tpg_atomic_write_u64(&MyProc->waitStart,\n+\t\t\t\t\tpg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n\nSame as above.\n\n+\t\t/*\n+\t\t * Record waitStart reusing the deadlock timeout timer.\n+\t\t *\n+\t\t * It would be ideal this can be synchronously done with updating\n+\t\t * lock information. Howerver, since it gives performance impacts\n+\t\t * to hold partitionLock longer time, we do it here asynchronously.\n+\t\t */\n\nIMO it's better to comment why we reuse the deadlock timeout timer.\n\n \tproc->waitStatus = waitStatus;\n+\tpg_atomic_init_u64(&MyProc->waitStart, 0);\n\npg_atomic_write_u64() should be used instead? 
Because waitStart can be\naccessed concurrently there.\n\nI updated the patch and addressed the above review comments. Patch attached.\nBarring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 5 Feb 2021 18:49:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
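The first two review points above reduce to one rule: only shared memory needs atomic accesses, and since "now" and "deadlockStart" are ordinary local variables, they can be passed to the atomic write directly. A sketch of the simplified idiom in C11 atomics, with hypothetical names:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Rough equivalent of the suggested
 * pg_atomic_write_u64(&MyProc->waitStart, now):
 * the local value needs no atomic read before being stored. */
static void
set_wait_start(atomic_uint_least64_t *wait_start, int64_t now)
{
    atomic_store_explicit(wait_start, (uint64_t) now, memory_order_relaxed);
}
```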
{
"msg_contents": "On 2021-02-05 18:49, Fujii Masao wrote:\n> On 2021/02/05 0:03, torikoshia wrote:\n>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>> 64-bit fetches are not atomic on some platforms. So spinlock is \n>>>> necessary when updating \"waitStart\" without holding the partition \n>>>> lock? Also GetLockStatusData() needs spinlock when reading \n>>>> \"waitStart\"?\n>>> \n>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>> pg_atomic_read_u64(), for that.\n>> \n>> Thanks for your suggestion and advice!\n>> \n>> In the attached patch I used pg_atomic_read_u64() and \n>> pg_atomic_write_u64().\n>> \n>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx \n>> and pg_atomic_write_xxx only supports unsigned int, so I cast the \n>> type.\n>> \n>> I may be using these functions not correctly, so if something is \n>> wrong, I would appreciate any comments.\n>> \n>> \n>> About the documentation, since your suggestion seems better than v6, I \n>> used it as is.\n> \n> Thanks for updating the patch!\n> \n> +\tif (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n> +\t\tpg_atomic_write_u64(&MyProc->waitStart,\n> +\t\t\t\t\t\t\tpg_atomic_read_u64((pg_atomic_uint64 *) &now));\n> \n> pg_atomic_read_u64() is really necessary? I think that\n> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n> \n> +\t\tdeadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n> +\t\tpg_atomic_write_u64(&MyProc->waitStart,\n> +\t\t\t\t\tpg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n> \n> Same as above.\n> \n> +\t\t/*\n> +\t\t * Record waitStart reusing the deadlock timeout timer.\n> +\t\t *\n> +\t\t * It would be ideal this can be synchronously done with updating\n> +\t\t * lock information. 
However, since it gives performance impacts\n> +\t\t * to hold partitionLock longer time, we do it here asynchronously.\n> +\t\t */\n> \n> IMO it's better to comment why we reuse the deadlock timeout timer.\n> \n> \tproc->waitStatus = waitStatus;\n> +\tpg_atomic_init_u64(&MyProc->waitStart, 0);\n> \n> pg_atomic_write_u64() should be used instead? Because waitStart can be\n> accessed concurrently there.\n> \n> I updated the patch and addressed the above review comments. Patch \n> attached.\n> Barring any objection, I will commit this version.\n\nThanks for modifying the patch!\nI agree with your comments.\n\nBTW, I ran pgbench several times before and after applying\nthis patch.\n\nThe environment is virtual machine(CentOS 8), so this is\njust for reference, but there were no significant difference\nin latency or tps(both are below 1%).\n\n\nRegards,\n\n--\nAtsushi Torikoshi",
"msg_date": "Tue, 09 Feb 2021 17:48:55 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/02/09 17:48, torikoshia wrote:\n> On 2021-02-05 18:49, Fujii Masao wrote:\n>> On 2021/02/05 0:03, torikoshia wrote:\n>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>\n>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>> pg_atomic_read_u64(), for that.\n>>>\n>>> Thanks for your suggestion and advice!\n>>>\n>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>\n>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>\n>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>\n>>>\n>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>\n>> Thanks for updating the patch!\n>>\n>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>> + pg_atomic_write_u64(&MyProc->waitStart,\n>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>\n>> pg_atomic_read_u64() is really necessary? I think that\n>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>\n>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>> + pg_atomic_write_u64(&MyProc->waitStart,\n>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>\n>> Same as above.\n>>\n>> + /*\n>> + * Record waitStart reusing the deadlock timeout timer.\n>> + *\n>> + * It would be ideal this can be synchronously done with updating\n>> + * lock information. 
However, since it gives performance impacts\n>> + * to hold partitionLock longer time, we do it here asynchronously.\n>> + */\n>>\n>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>\n>> proc->waitStatus = waitStatus;\n>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>\n>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>> accessed concurrently there.\n>>\n>> I updated the patch and addressed the above review comments. Patch attached.\n>> Barring any objection, I will commit this version.\n> \n> Thanks for modifying the patch!\n> I agree with your comments.\n> \n> BTW, I ran pgbench several times before and after applying\n> this patch.\n> \n> The environment is virtual machine(CentOS 8), so this is\n> just for reference, but there were no significant difference\n> in latency or tps(both are below 1%).\n\nThanks for the test! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Feb 2021 18:13:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/02/09 18:13, Fujii Masao wrote:\n> \n> \n> On 2021/02/09 17:48, torikoshia wrote:\n>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>\n>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>> pg_atomic_read_u64(), for that.\n>>>>\n>>>> Thanks for your suggestion and advice!\n>>>>\n>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>\n>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>\n>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>\n>>>>\n>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>\n>>> Thanks for updating the patch!\n>>>\n>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>\n>>> pg_atomic_read_u64() is really necessary? I think that\n>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>\n>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>\n>>> Same as above.\n>>>\n>>> + /*\n>>> + * Record waitStart reusing the deadlock timeout timer.\n>>> + *\n>>> + * It would be ideal this can be synchronously done with updating\n>>> + * lock information. 
However, since it gives performance impacts\n>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>> + */\n>>>\n>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>\n>>> proc->waitStatus = waitStatus;\n>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>\n>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>> accessed concurrently there.\n>>>\n>>> I updated the patch and addressed the above review comments. Patch attached.\n>>> Barring any objection, I will commit this version.\n>>\n>> Thanks for modifying the patch!\n>> I agree with your comments.\n>>\n>> BTW, I ran pgbench several times before and after applying\n>> this patch.\n>>\n>> The environment is virtual machine(CentOS 8), so this is\n>> just for reference, but there were no significant difference\n>> in latency or tps(both are below 1%).\n> \n> Thanks for the test! I pushed the patch.\n\nBut I reverted the patch because buildfarm members rorqual and\nprion don't like the patch. I'm trying to investigate the cause\nof these failures.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-02-09%2009%3A13%3A16\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Feb 2021 19:11:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021/02/09 19:11, Fujii Masao wrote:\n> \n> \n> On 2021/02/09 18:13, Fujii Masao wrote:\n>>\n>>\n>> On 2021/02/09 17:48, torikoshia wrote:\n>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>>\n>>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>>> pg_atomic_read_u64(), for that.\n>>>>>\n>>>>> Thanks for your suggestion and advice!\n>>>>>\n>>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>>\n>>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>>\n>>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>>\n>>>>>\n>>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>>\n>>>> Thanks for updating the patch!\n>>>>\n>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>\n>>>> pg_atomic_read_u64() is really necessary? I think that\n>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>\n>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>>\n>>>> Same as above.\n>>>>\n>>>> + /*\n>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>> + *\n>>>> + * It would be ideal this can be synchronously done with updating\n>>>> + * lock information. 
However, since it gives performance impacts\n>>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>>> + */\n>>>>\n>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>\n>>>> proc->waitStatus = waitStatus;\n>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>\n>>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>>> accessed concurrently there.\n>>>>\n>>>> I updated the patch and addressed the above review comments. Patch attached.\n>>>> Barring any objection, I will commit this version.\n>>>\n>>> Thanks for modifying the patch!\n>>> I agree with your comments.\n>>>\n>>> BTW, I ran pgbench several times before and after applying\n>>> this patch.\n>>>\n>>> The environment is virtual machine(CentOS 8), so this is\n>>> just for reference, but there were no significant difference\n>>> in latency or tps(both are below 1%).\n>>\n>> Thanks for the test! I pushed the patch.\n> \n> But I reverted the patch because buildfarm members rorqual and\n> prion don't like the patch. I'm trying to investigate the cause\n> of these failures.\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n\n- relation | locktype | mode\n------------------+----------+---------------------\n- test_prepared_1 | relation | RowExclusiveLock\n- test_prepared_1 | relation | AccessExclusiveLock\n-(2 rows)\n-\n+ERROR: invalid spinlock number: 0\n\n\"rorqual\" reported that the above error happened in the server built with\n--disable-atomics --disable-spinlocks when reading pg_locks after\nthe transaction was prepared. The cause of this issue is that \"waitStart\"\natomic variable in the dummy proc created at the end of prepare transaction\nwas not initialized. I updated the patch so that pg_atomic_init_u64() is\ncalled for the \"waitStart\" in the dummy proc for prepared transaction.\nPatch attached. 
I confirmed that the patched server built with\n--disable-atomics --disable-spinlocks passed all the regression tests.\n\nBTW, while investigating this issue, I found that pg_stat_wal_receiver also\ncould cause this error even in the current master (without the patch).\nI will report that in separate thread.\n\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-02-09%2009%3A13%3A16\n\n\"prion\" reported the following error. But I'm not sure how the changes of\npg_locks caused this error. I found that Heikki also reported at [1] that\n\"prion\" failed with the same error but was not sure how it happened.\nThis makes me think for now that this issue is not directly related to\nthe pg_locks changes.\n\n-------------------------------------\npg_dump: error: query failed: ERROR: missing chunk number 0 for toast value 14444 in pg_toast_2619\npg_dump: error: query was: SELECT\na.attnum,\na.attname,\na.atttypmod,\na.attstattarget,\na.attstorage,\nt.typstorage,\na.attnotnull,\na.atthasdef,\na.attisdropped,\na.attlen,\na.attalign,\na.attislocal,\npg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,\narray_to_string(a.attoptions, ', ') AS attoptions,\nCASE WHEN a.attcollation <> t.typcollation THEN a.attcollation ELSE 0 END AS attcollation,\npg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name) || ' ' || pg_catalog.quote_literal(option_value) FROM pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E',\n ') AS attfdwoptions,\na.attidentity,\nCASE WHEN a.atthasmissing AND NOT a.attisdropped THEN a.attmissingval ELSE null END AS attmissingval,\na.attgenerated\nFROM pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = t.oid\nWHERE a.attrelid = '35987'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2\nORDER BY a.attnum\npg_dumpall: error: pg_dump failed on database \"regression\", exiting\nwaiting for server to shut down.... 
done\nserver stopped\npg_dumpall of post-upgrade database cluster failed\n-------------------------------------\n\n[1]\nhttps://www.postgresql.org/message-id/f03ea04a-9b77-e371-9ab9-182cb35db1f9@iki.fi\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 9 Feb 2021 22:54:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
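The rorqual failure analyzed above illustrates a rule that also holds for plain C11 atomics: every atomic object a reader may touch must be initialized before first use. In PostgreSQL built with --disable-spinlocks, pg_atomic_init_u64() additionally sets up the emulation spinlock, which is why an uninitialized "waitStart" in the dummy proc produced "invalid spinlock number: 0". A simplified sketch of the fix, with hypothetical names rather than the actual twophase.c code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical dummy proc kept around for a prepared transaction. */
typedef struct
{
    atomic_uint_least64_t wait_start;
} DummyProc;

/* The v9 fix in miniature: initialize the atomic when the dummy proc
 * is set up, just as for a regular proc, so that later reads (e.g.
 * from the pg_locks view) are well-defined. */
static void
init_dummy_proc(DummyProc *proc)
{
    atomic_init(&proc->wait_start, 0);
}
```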
{
"msg_contents": "On 2021-02-09 22:54, Fujii Masao wrote:\n> On 2021/02/09 19:11, Fujii Masao wrote:\n>> \n>> \n>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>> \n>>> \n>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is \n>>>>>>>> necessary when updating \"waitStart\" without holding the \n>>>>>>>> partition lock? Also GetLockStatusData() needs spinlock when \n>>>>>>>> reading \"waitStart\"?\n>>>>>>> \n>>>>>>> Also it might be worth thinking to use 64-bit atomic operations \n>>>>>>> like\n>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>> \n>>>>>> Thanks for your suggestion and advice!\n>>>>>> \n>>>>>> In the attached patch I used pg_atomic_read_u64() and \n>>>>>> pg_atomic_write_u64().\n>>>>>> \n>>>>>> waitStart is TimestampTz i.e., int64, but it seems \n>>>>>> pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned \n>>>>>> int, so I cast the type.\n>>>>>> \n>>>>>> I may be using these functions not correctly, so if something is \n>>>>>> wrong, I would appreciate any comments.\n>>>>>> \n>>>>>> \n>>>>>> About the documentation, since your suggestion seems better than \n>>>>>> v6, I used it as is.\n>>>>> \n>>>>> Thanks for updating the patch!\n>>>>> \n>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>> + pg_atomic_read_u64((pg_atomic_uint64 \n>>>>> *) &now));\n>>>>> \n>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>> \n>>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) \n>>>>> &deadlockStart));\n>>>>> \n>>>>> Same as above.\n>>>>> \n>>>>> + /*\n>>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>>> + *\n>>>>> + * It would be ideal this can be synchronously done with \n>>>>> updating\n>>>>> + * lock information. Howerver, since it gives performance \n>>>>> impacts\n>>>>> + * to hold partitionLock longer time, we do it here \n>>>>> asynchronously.\n>>>>> + */\n>>>>> \n>>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>> \n>>>>> proc->waitStatus = waitStatus;\n>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>> \n>>>>> pg_atomic_write_u64() should be used instead? Because waitStart can \n>>>>> be\n>>>>> accessed concurrently there.\n>>>>> \n>>>>> I updated the patch and addressed the above review comments. Patch \n>>>>> attached.\n>>>>> Barring any objection, I will commit this version.\n>>>> \n>>>> Thanks for modifying the patch!\n>>>> I agree with your comments.\n>>>> \n>>>> BTW, I ran pgbench several times before and after applying\n>>>> this patch.\n>>>> \n>>>> The environment is virtual machine(CentOS 8), so this is\n>>>> just for reference, but there were no significant difference\n>>>> in latency or tps(both are below 1%).\n>>> \n>>> Thanks for the test! I pushed the patch.\n>> \n>> But I reverted the patch because buildfarm members rorqual and\n>> prion don't like the patch. 
I'm trying to investigate the cause\n>> of this failures.\n>> \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n> \n> - relation | locktype | mode\n> ------------------+----------+---------------------\n> - test_prepared_1 | relation | RowExclusiveLock\n> - test_prepared_1 | relation | AccessExclusiveLock\n> -(2 rows)\n> -\n> +ERROR: invalid spinlock number: 0\n> \n> \"rorqual\" reported that the above error happened in the server built \n> with\n> --disable-atomics --disable-spinlocks when reading pg_locks after\n> the transaction was prepared. The cause of this issue is that \n> \"waitStart\"\n> atomic variable in the dummy proc created at the end of prepare \n> transaction\n> was not initialized. I updated the patch so that pg_atomic_init_u64() \n> is\n> called for the \"waitStart\" in the dummy proc for prepared transaction.\n> Patch attached. I confirmed that the patched server built with\n> --disable-atomics --disable-spinlocks passed all the regression tests.\n\nThanks for fixing the bug, I also tested v9.patch configured with\n--disable-atomics --disable-spinlocks on my environment and confirmed\nthat all tests have passed.\n\n> \n> BTW, while investigating this issue, I found that pg_stat_wal_receiver \n> also\n> could cause this error even in the current master (without the patch).\n> I will report that in separate thread.\n> \n> \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-02-09%2009%3A13%3A16\n> \n> \"prion\" reported the following error. But I'm not sure how the changes \n> of\n> pg_locks caused this error. I found that Heikki also reported at [1] \n> that\n> \"prion\" failed with the same error but was not sure how it happened.\n> This makes me think for now that this issue is not directly related to\n> the pg_locks changes.\n\nThanks! 
I was wondering how these errors were related to the commit.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n> -------------------------------------\n> pg_dump: error: query failed: ERROR: missing chunk number 0 for toast\n> value 14444 in pg_toast_2619\n> pg_dump: error: query was: SELECT\n> a.attnum,\n> a.attname,\n> a.atttypmod,\n> a.attstattarget,\n> a.attstorage,\n> t.typstorage,\n> a.attnotnull,\n> a.atthasdef,\n> a.attisdropped,\n> a.attlen,\n> a.attalign,\n> a.attislocal,\n> pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,\n> array_to_string(a.attoptions, ', ') AS attoptions,\n> CASE WHEN a.attcollation <> t.typcollation THEN a.attcollation ELSE 0\n> END AS attcollation,\n> pg_catalog.array_to_string(ARRAY(SELECT\n> pg_catalog.quote_ident(option_name) || ' ' ||\n> pg_catalog.quote_literal(option_value) FROM\n> pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name),\n> E',\n> ') AS attfdwoptions,\n> a.attidentity,\n> CASE WHEN a.atthasmissing AND NOT a.attisdropped THEN a.attmissingval\n> ELSE null END AS attmissingval,\n> a.attgenerated\n> FROM pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON\n> a.atttypid = t.oid\n> WHERE a.attrelid = '35987'::pg_catalog.oid AND a.attnum > \n> 0::pg_catalog.int2\n> ORDER BY a.attnum\n> pg_dumpall: error: pg_dump failed on database \"regression\", exiting\n> waiting for server to shut down.... done\n> server stopped\n> pg_dumpall of post-upgrade database cluster failed\n> -------------------------------------\n> \n> [1]\n> https://www.postgresql.org/message-id/f03ea04a-9b77-e371-9ab9-182cb35db1f9@iki.fi\n> \n> \n> Regards,\n\n\n",
"msg_date": "Tue, 09 Feb 2021 23:31:10 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021/02/09 23:31, torikoshia wrote:\n> On 2021-02-09 22:54, Fujii Masao wrote:\n>> On 2021/02/09 19:11, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>>>>\n>>>>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>>>\n>>>>>>> Thanks for your suggestion and advice!\n>>>>>>>\n>>>>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>>>>\n>>>>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>>>>\n>>>>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>>>>\n>>>>>>>\n>>>>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>>>>\n>>>>>> Thanks for updating the patch!\n>>>>>>\n>>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>>>\n>>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>>>\n>>>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>>>>\n>>>>>> Same as above.\n>>>>>>\n>>>>>> + /*\n>>>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>>>> + *\n>>>>>> + * It would be ideal this can be synchronously done with updating\n>>>>>> + * lock information. Howerver, since it gives performance impacts\n>>>>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>>>>> + */\n>>>>>>\n>>>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>>>\n>>>>>> proc->waitStatus = waitStatus;\n>>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>>>\n>>>>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>>>>> accessed concurrently there.\n>>>>>>\n>>>>>> I updated the patch and addressed the above review comments. Patch attached.\n>>>>>> Barring any objection, I will commit this version.\n>>>>>\n>>>>> Thanks for modifying the patch!\n>>>>> I agree with your comments.\n>>>>>\n>>>>> BTW, I ran pgbench several times before and after applying\n>>>>> this patch.\n>>>>>\n>>>>> The environment is virtual machine(CentOS 8), so this is\n>>>>> just for reference, but there were no significant difference\n>>>>> in latency or tps(both are below 1%).\n>>>>\n>>>> Thanks for the test! I pushed the patch.\n>>>\n>>> But I reverted the patch because buildfarm members rorqual and\n>>> prion don't like the patch. 
I'm trying to investigate the cause\n>>> of this failures.\n>>>\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n>>\n>> - relation | locktype | mode\n>> ------------------+----------+---------------------\n>> - test_prepared_1 | relation | RowExclusiveLock\n>> - test_prepared_1 | relation | AccessExclusiveLock\n>> -(2 rows)\n>> -\n>> +ERROR: invalid spinlock number: 0\n>>\n>> \"rorqual\" reported that the above error happened in the server built with\n>> --disable-atomics --disable-spinlocks when reading pg_locks after\n>> the transaction was prepared. The cause of this issue is that \"waitStart\"\n>> atomic variable in the dummy proc created at the end of prepare transaction\n>> was not initialized. I updated the patch so that pg_atomic_init_u64() is\n>> called for the \"waitStart\" in the dummy proc for prepared transaction.\n>> Patch attached. I confirmed that the patched server built with\n>> --disable-atomics --disable-spinlocks passed all the regression tests.\n> \n> Thanks for fixing the bug, I also tested v9.patch configured with\n> --disable-atomics --disable-spinlocks on my environment and confirmed\n> that all tests have passed.\n\nThanks for the test!\n\nI found another bug in the patch. InitProcess() initializes \"waitStart\",\nbut previously InitAuxiliaryProcess() did not. This could cause \"invalid\nspinlock number\" error when reading pg_locks in the standby server.\nI fixed that. Attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 10 Feb 2021 10:43:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/02/10 10:43, Fujii Masao wrote:\n> \n> \n> On 2021/02/09 23:31, torikoshia wrote:\n>> On 2021-02-09 22:54, Fujii Masao wrote:\n>>> On 2021/02/09 19:11, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>>>>>\n>>>>>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>>>>\n>>>>>>>> Thanks for your suggestion and advice!\n>>>>>>>>\n>>>>>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>>>>>\n>>>>>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>>>>>\n>>>>>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>>>>>\n>>>>>>> Thanks for updating the patch!\n>>>>>>>\n>>>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>>>>\n>>>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>>>>\n>>>>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>>>>>\n>>>>>>> Same as above.\n>>>>>>>\n>>>>>>> + /*\n>>>>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>>>>> + *\n>>>>>>> + * It would be ideal this can be synchronously done with updating\n>>>>>>> + * lock information. Howerver, since it gives performance impacts\n>>>>>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>>>>>> + */\n>>>>>>>\n>>>>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>>>>\n>>>>>>> proc->waitStatus = waitStatus;\n>>>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>>>>\n>>>>>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>>>>>> accessed concurrently there.\n>>>>>>>\n>>>>>>> I updated the patch and addressed the above review comments. Patch attached.\n>>>>>>> Barring any objection, I will commit this version.\n>>>>>>\n>>>>>> Thanks for modifying the patch!\n>>>>>> I agree with your comments.\n>>>>>>\n>>>>>> BTW, I ran pgbench several times before and after applying\n>>>>>> this patch.\n>>>>>>\n>>>>>> The environment is virtual machine(CentOS 8), so this is\n>>>>>> just for reference, but there were no significant difference\n>>>>>> in latency or tps(both are below 1%).\n>>>>>\n>>>>> Thanks for the test! I pushed the patch.\n>>>>\n>>>> But I reverted the patch because buildfarm members rorqual and\n>>>> prion don't like the patch. 
I'm trying to investigate the cause\n>>>> of this failures.\n>>>>\n>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n>>>\n>>> - relation | locktype | mode\n>>> ------------------+----------+---------------------\n>>> - test_prepared_1 | relation | RowExclusiveLock\n>>> - test_prepared_1 | relation | AccessExclusiveLock\n>>> -(2 rows)\n>>> -\n>>> +ERROR: invalid spinlock number: 0\n>>>\n>>> \"rorqual\" reported that the above error happened in the server built with\n>>> --disable-atomics --disable-spinlocks when reading pg_locks after\n>>> the transaction was prepared. The cause of this issue is that \"waitStart\"\n>>> atomic variable in the dummy proc created at the end of prepare transaction\n>>> was not initialized. I updated the patch so that pg_atomic_init_u64() is\n>>> called for the \"waitStart\" in the dummy proc for prepared transaction.\n>>> Patch attached. I confirmed that the patched server built with\n>>> --disable-atomics --disable-spinlocks passed all the regression tests.\n>>\n>> Thanks for fixing the bug, I also tested v9.patch configured with\n>> --disable-atomics --disable-spinlocks on my environment and confirmed\n>> that all tests have passed.\n> \n> Thanks for the test!\n> \n> I found another bug in the patch. InitProcess() initializes \"waitStart\",\n> but previously InitAuxiliaryProcess() did not. This could cause \"invalid\n> spinlock number\" error when reading pg_locks in the standby server.\n> I fixed that. Attached is the updated version of the patch.\n\nI pushed this version. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 15 Feb 2021 15:17:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021/02/15 15:17, Fujii Masao wrote:\n> \n> \n> On 2021/02/10 10:43, Fujii Masao wrote:\n>>\n>>\n>> On 2021/02/09 23:31, torikoshia wrote:\n>>> On 2021-02-09 22:54, Fujii Masao wrote:\n>>>> On 2021/02/09 19:11, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>>>>>\n>>>>>>\n>>>>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>>>>>>\n>>>>>>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>>>>>\n>>>>>>>>> Thanks for your suggestion and advice!\n>>>>>>>>>\n>>>>>>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>>>>>>\n>>>>>>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>>>>>>\n>>>>>>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patch!\n>>>>>>>>\n>>>>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>>>>>\n>>>>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>>>>>\n>>>>>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>>>>>>\n>>>>>>>> Same as above.\n>>>>>>>>\n>>>>>>>> + /*\n>>>>>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>>>>>> + *\n>>>>>>>> + * It would be ideal this can be synchronously done with updating\n>>>>>>>> + * lock information. Howerver, since it gives performance impacts\n>>>>>>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>>>>>>> + */\n>>>>>>>>\n>>>>>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>>>>>\n>>>>>>>> proc->waitStatus = waitStatus;\n>>>>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>>>>>\n>>>>>>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>>>>>>> accessed concurrently there.\n>>>>>>>>\n>>>>>>>> I updated the patch and addressed the above review comments. Patch attached.\n>>>>>>>> Barring any objection, I will commit this version.\n>>>>>>>\n>>>>>>> Thanks for modifying the patch!\n>>>>>>> I agree with your comments.\n>>>>>>>\n>>>>>>> BTW, I ran pgbench several times before and after applying\n>>>>>>> this patch.\n>>>>>>>\n>>>>>>> The environment is virtual machine(CentOS 8), so this is\n>>>>>>> just for reference, but there were no significant difference\n>>>>>>> in latency or tps(both are below 1%).\n>>>>>>\n>>>>>> Thanks for the test! I pushed the patch.\n>>>>>\n>>>>> But I reverted the patch because buildfarm members rorqual and\n>>>>> prion don't like the patch. 
I'm trying to investigate the cause\n>>>>> of this failures.\n>>>>>\n>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n>>>>\n>>>> - relation | locktype | mode\n>>>> ------------------+----------+---------------------\n>>>> - test_prepared_1 | relation | RowExclusiveLock\n>>>> - test_prepared_1 | relation | AccessExclusiveLock\n>>>> -(2 rows)\n>>>> -\n>>>> +ERROR: invalid spinlock number: 0\n>>>>\n>>>> \"rorqual\" reported that the above error happened in the server built with\n>>>> --disable-atomics --disable-spinlocks when reading pg_locks after\n>>>> the transaction was prepared. The cause of this issue is that \"waitStart\"\n>>>> atomic variable in the dummy proc created at the end of prepare transaction\n>>>> was not initialized. I updated the patch so that pg_atomic_init_u64() is\n>>>> called for the \"waitStart\" in the dummy proc for prepared transaction.\n>>>> Patch attached. I confirmed that the patched server built with\n>>>> --disable-atomics --disable-spinlocks passed all the regression tests.\n>>>\n>>> Thanks for fixing the bug, I also tested v9.patch configured with\n>>> --disable-atomics --disable-spinlocks on my environment and confirmed\n>>> that all tests have passed.\n>>\n>> Thanks for the test!\n>>\n>> I found another bug in the patch. InitProcess() initializes \"waitStart\",\n>> but previously InitAuxiliaryProcess() did not. This could cause \"invalid\n>> spinlock number\" error when reading pg_locks in the standby server.\n>> I fixed that. Attached is the updated version of the patch.\n> \n> I pushed this version. Thanks!\n\nWhile reading the patch again, I found two minor things.\n\n1. As discussed in another thread [1], the atomic variable \"waitStart\" should\n be initialized at the postmaster startup rather than the startup of each\n child process. 
I changed \"waitStart\" so that it's initialized in\n InitProcGlobal() and also reset to 0 by using pg_atomic_write_u64() in\n InitProcess() and InitAuxiliaryProcess().\n\n2. Thanks to the above change, InitProcGlobal() initializes \"waitStart\"\n even in PGPROC entries for prepare transactions. But those entries are\n zeroed in MarkAsPreparingGuts(), so \"waitStart\" needs to be initialized\n again. Currently TwoPhaseGetDummyProc() initializes \"waitStart\" in the\n PGPROC entry for prepare transaction. But it's better to do that in\n MarkAsPreparingGuts() instead because that function initializes other\n PGPROC variables. So I moved that initialization code from\n TwoPhaseGetDummyProc() to MarkAsPreparingGuts().\n\nPatch attached. Thought?\n\n[1] https://postgr.es/m/7ef8708c-5b6b-edd3-2cf2-7783f1c7c175@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 16 Feb 2021 16:59:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "On 2021-02-16 16:59, Fujii Masao wrote:\n> On 2021/02/15 15:17, Fujii Masao wrote:\n>> \n>> \n>> On 2021/02/10 10:43, Fujii Masao wrote:\n>>> \n>>> \n>>> On 2021/02/09 23:31, torikoshia wrote:\n>>>> On 2021-02-09 22:54, Fujii Masao wrote:\n>>>>> On 2021/02/09 19:11, Fujii Masao wrote:\n>>>>>> \n>>>>>> \n>>>>>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>>>>>> \n>>>>>>> \n>>>>>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>>>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock \n>>>>>>>>>>>> is necessary when updating \"waitStart\" without holding the \n>>>>>>>>>>>> partition lock? Also GetLockStatusData() needs spinlock when \n>>>>>>>>>>>> reading \"waitStart\"?\n>>>>>>>>>>> \n>>>>>>>>>>> Also it might be worth thinking to use 64-bit atomic \n>>>>>>>>>>> operations like\n>>>>>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>>>>>> \n>>>>>>>>>> Thanks for your suggestion and advice!\n>>>>>>>>>> \n>>>>>>>>>> In the attached patch I used pg_atomic_read_u64() and \n>>>>>>>>>> pg_atomic_write_u64().\n>>>>>>>>>> \n>>>>>>>>>> waitStart is TimestampTz i.e., int64, but it seems \n>>>>>>>>>> pg_atomic_read_xxx and pg_atomic_write_xxx only supports \n>>>>>>>>>> unsigned int, so I cast the type.\n>>>>>>>>>> \n>>>>>>>>>> I may be using these functions not correctly, so if something \n>>>>>>>>>> is wrong, I would appreciate any comments.\n>>>>>>>>>> \n>>>>>>>>>> \n>>>>>>>>>> About the documentation, since your suggestion seems better \n>>>>>>>>>> than v6, I used it as is.\n>>>>>>>>> \n>>>>>>>>> Thanks for updating the patch!\n>>>>>>>>> \n>>>>>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>>> + \n>>>>>>>>> pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>>>>>> \n>>>>>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>>>>>> \n>>>>>>>>> + deadlockStart = \n>>>>>>>>> get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) \n>>>>>>>>> &deadlockStart));\n>>>>>>>>> \n>>>>>>>>> Same as above.\n>>>>>>>>> \n>>>>>>>>> + /*\n>>>>>>>>> + * Record waitStart reusing the deadlock timeout \n>>>>>>>>> timer.\n>>>>>>>>> + *\n>>>>>>>>> + * It would be ideal this can be synchronously done \n>>>>>>>>> with updating\n>>>>>>>>> + * lock information. Howerver, since it gives \n>>>>>>>>> performance impacts\n>>>>>>>>> + * to hold partitionLock longer time, we do it here \n>>>>>>>>> asynchronously.\n>>>>>>>>> + */\n>>>>>>>>> \n>>>>>>>>> IMO it's better to comment why we reuse the deadlock timeout \n>>>>>>>>> timer.\n>>>>>>>>> \n>>>>>>>>> proc->waitStatus = waitStatus;\n>>>>>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>>>>>> \n>>>>>>>>> pg_atomic_write_u64() should be used instead? Because waitStart \n>>>>>>>>> can be\n>>>>>>>>> accessed concurrently there.\n>>>>>>>>> \n>>>>>>>>> I updated the patch and addressed the above review comments. \n>>>>>>>>> Patch attached.\n>>>>>>>>> Barring any objection, I will commit this version.\n>>>>>>>> \n>>>>>>>> Thanks for modifying the patch!\n>>>>>>>> I agree with your comments.\n>>>>>>>> \n>>>>>>>> BTW, I ran pgbench several times before and after applying\n>>>>>>>> this patch.\n>>>>>>>> \n>>>>>>>> The environment is virtual machine(CentOS 8), so this is\n>>>>>>>> just for reference, but there were no significant difference\n>>>>>>>> in latency or tps(both are below 1%).\n>>>>>>> \n>>>>>>> Thanks for the test! I pushed the patch.\n>>>>>> \n>>>>>> But I reverted the patch because buildfarm members rorqual and\n>>>>>> prion don't like the patch. 
I'm trying to investigate the cause\n>>>>>> of this failures.\n>>>>>> \n>>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n>>>>> \n>>>>> - relation | locktype | mode\n>>>>> ------------------+----------+---------------------\n>>>>> - test_prepared_1 | relation | RowExclusiveLock\n>>>>> - test_prepared_1 | relation | AccessExclusiveLock\n>>>>> -(2 rows)\n>>>>> -\n>>>>> +ERROR: invalid spinlock number: 0\n>>>>> \n>>>>> \"rorqual\" reported that the above error happened in the server \n>>>>> built with\n>>>>> --disable-atomics --disable-spinlocks when reading pg_locks after\n>>>>> the transaction was prepared. The cause of this issue is that \n>>>>> \"waitStart\"\n>>>>> atomic variable in the dummy proc created at the end of prepare \n>>>>> transaction\n>>>>> was not initialized. I updated the patch so that \n>>>>> pg_atomic_init_u64() is\n>>>>> called for the \"waitStart\" in the dummy proc for prepared \n>>>>> transaction.\n>>>>> Patch attached. I confirmed that the patched server built with\n>>>>> --disable-atomics --disable-spinlocks passed all the regression \n>>>>> tests.\n>>>> \n>>>> Thanks for fixing the bug, I also tested v9.patch configured with\n>>>> --disable-atomics --disable-spinlocks on my environment and \n>>>> confirmed\n>>>> that all tests have passed.\n>>> \n>>> Thanks for the test!\n>>> \n>>> I found another bug in the patch. InitProcess() initializes \n>>> \"waitStart\",\n>>> but previously InitAuxiliaryProcess() did not. This could cause \n>>> \"invalid\n>>> spinlock number\" error when reading pg_locks in the standby server.\n>>> I fixed that. Attached is the updated version of the patch.\n>> \n>> I pushed this version. Thanks!\n> \n> While reading the patch again, I found two minor things.\n> \n> 1. As discussed in another thread [1], the atomic variable \"waitStart\" \n> should\n> be initialized at the postmaster startup rather than the startup of \n> each\n> child process. 
I changed \"waitStart\" so that it's initialized in\n> InitProcGlobal() and also reset to 0 by using pg_atomic_write_u64() \n> in\n> InitProcess() and InitAuxiliaryProcess().\n> \n> 2. Thanks to the above change, InitProcGlobal() initializes \"waitStart\"\n> even in PGPROC entries for prepare transactions. But those entries \n> are\n> zeroed in MarkAsPreparingGuts(), so \"waitStart\" needs to be \n> initialized\n> again. Currently TwoPhaseGetDummyProc() initializes \"waitStart\" in \n> the\n> PGPROC entry for prepare transaction. But it's better to do that in\n> MarkAsPreparingGuts() instead because that function initializes other\n> PGPROC variables. So I moved that initialization code from\n> TwoPhaseGetDummyProc() to MarkAsPreparingGuts().\n> \n> Patch attached. Thought?\n\nThanks for updating the patch!\n\nIt seems to me that the modification is right.\nI ran some regression tests but didn't find problems.\n\n\nRegards,\n\n\n--\nAtsushi Torikoshi\n\n\n",
"msg_date": "Thu, 18 Feb 2021 16:26:58 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: adding wait_start column to pg_locks"
},
{
"msg_contents": "\n\nOn 2021/02/18 16:26, torikoshia wrote:\n> On 2021-02-16 16:59, Fujii Masao wrote:\n>> On 2021/02/15 15:17, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2021/02/10 10:43, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2021/02/09 23:31, torikoshia wrote:\n>>>>> On 2021-02-09 22:54, Fujii Masao wrote:\n>>>>>> On 2021/02/09 19:11, Fujii Masao wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2021/02/09 18:13, Fujii Masao wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2021/02/09 17:48, torikoshia wrote:\n>>>>>>>>> On 2021-02-05 18:49, Fujii Masao wrote:\n>>>>>>>>>> On 2021/02/05 0:03, torikoshia wrote:\n>>>>>>>>>>> On 2021-02-03 11:23, Fujii Masao wrote:\n>>>>>>>>>>>>> 64-bit fetches are not atomic on some platforms. So spinlock is necessary when updating \"waitStart\" without holding the partition lock? Also GetLockStatusData() needs spinlock when reading \"waitStart\"?\n>>>>>>>>>>>>\n>>>>>>>>>>>> Also it might be worth thinking to use 64-bit atomic operations like\n>>>>>>>>>>>> pg_atomic_read_u64(), for that.\n>>>>>>>>>>>\n>>>>>>>>>>> Thanks for your suggestion and advice!\n>>>>>>>>>>>\n>>>>>>>>>>> In the attached patch I used pg_atomic_read_u64() and pg_atomic_write_u64().\n>>>>>>>>>>>\n>>>>>>>>>>> waitStart is TimestampTz i.e., int64, but it seems pg_atomic_read_xxx and pg_atomic_write_xxx only supports unsigned int, so I cast the type.\n>>>>>>>>>>>\n>>>>>>>>>>> I may be using these functions not correctly, so if something is wrong, I would appreciate any comments.\n>>>>>>>>>>>\n>>>>>>>>>>>\n>>>>>>>>>>> About the documentation, since your suggestion seems better than v6, I used it as is.\n>>>>>>>>>>\n>>>>>>>>>> Thanks for updating the patch!\n>>>>>>>>>>\n>>>>>>>>>> + if (pg_atomic_read_u64(&MyProc->waitStart) == 0)\n>>>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &now));\n>>>>>>>>>>\n>>>>>>>>>> pg_atomic_read_u64() is really necessary? 
I think that\n>>>>>>>>>> \"pg_atomic_write_u64(&MyProc->waitStart, now)\" is enough.\n>>>>>>>>>>\n>>>>>>>>>> + deadlockStart = get_timeout_start_time(DEADLOCK_TIMEOUT);\n>>>>>>>>>> + pg_atomic_write_u64(&MyProc->waitStart,\n>>>>>>>>>> + pg_atomic_read_u64((pg_atomic_uint64 *) &deadlockStart));\n>>>>>>>>>>\n>>>>>>>>>> Same as above.\n>>>>>>>>>>\n>>>>>>>>>> + /*\n>>>>>>>>>> + * Record waitStart reusing the deadlock timeout timer.\n>>>>>>>>>> + *\n>>>>>>>>>> + * It would be ideal this can be synchronously done with updating\n>>>>>>>>>> + * lock information. Howerver, since it gives performance impacts\n>>>>>>>>>> + * to hold partitionLock longer time, we do it here asynchronously.\n>>>>>>>>>> + */\n>>>>>>>>>>\n>>>>>>>>>> IMO it's better to comment why we reuse the deadlock timeout timer.\n>>>>>>>>>>\n>>>>>>>>>> proc->waitStatus = waitStatus;\n>>>>>>>>>> + pg_atomic_init_u64(&MyProc->waitStart, 0);\n>>>>>>>>>>\n>>>>>>>>>> pg_atomic_write_u64() should be used instead? Because waitStart can be\n>>>>>>>>>> accessed concurrently there.\n>>>>>>>>>>\n>>>>>>>>>> I updated the patch and addressed the above review comments. Patch attached.\n>>>>>>>>>> Barring any objection, I will commit this version.\n>>>>>>>>>\n>>>>>>>>> Thanks for modifying the patch!\n>>>>>>>>> I agree with your comments.\n>>>>>>>>>\n>>>>>>>>> BTW, I ran pgbench several times before and after applying\n>>>>>>>>> this patch.\n>>>>>>>>>\n>>>>>>>>> The environment is virtual machine(CentOS 8), so this is\n>>>>>>>>> just for reference, but there were no significant difference\n>>>>>>>>> in latency or tps(both are below 1%).\n>>>>>>>>\n>>>>>>>> Thanks for the test! I pushed the patch.\n>>>>>>>\n>>>>>>> But I reverted the patch because buildfarm members rorqual and\n>>>>>>> prion don't like the patch. 
I'm trying to investigate the cause\n>>>>>>> of this failures.\n>>>>>>>\n>>>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2021-02-09%2009%3A20%3A10\n>>>>>>\n>>>>>> - relation | locktype | mode\n>>>>>> ------------------+----------+---------------------\n>>>>>> - test_prepared_1 | relation | RowExclusiveLock\n>>>>>> - test_prepared_1 | relation | AccessExclusiveLock\n>>>>>> -(2 rows)\n>>>>>> -\n>>>>>> +ERROR: invalid spinlock number: 0\n>>>>>>\n>>>>>> \"rorqual\" reported that the above error happened in the server built with\n>>>>>> --disable-atomics --disable-spinlocks when reading pg_locks after\n>>>>>> the transaction was prepared. The cause of this issue is that \"waitStart\"\n>>>>>> atomic variable in the dummy proc created at the end of prepare transaction\n>>>>>> was not initialized. I updated the patch so that pg_atomic_init_u64() is\n>>>>>> called for the \"waitStart\" in the dummy proc for prepared transaction.\n>>>>>> Patch attached. I confirmed that the patched server built with\n>>>>>> --disable-atomics --disable-spinlocks passed all the regression tests.\n>>>>>\n>>>>> Thanks for fixing the bug, I also tested v9.patch configured with\n>>>>> --disable-atomics --disable-spinlocks on my environment and confirmed\n>>>>> that all tests have passed.\n>>>>\n>>>> Thanks for the test!\n>>>>\n>>>> I found another bug in the patch. InitProcess() initializes \"waitStart\",\n>>>> but previously InitAuxiliaryProcess() did not. This could cause \"invalid\n>>>> spinlock number\" error when reading pg_locks in the standby server.\n>>>> I fixed that. Attached is the updated version of the patch.\n>>>\n>>> I pushed this version. Thanks!\n>>\n>> While reading the patch again, I found two minor things.\n>>\n>> 1. As discussed in another thread [1], the atomic variable \"waitStart\" should\n>> be initialized at the postmaster startup rather than the startup of each\n>> child process. 
I changed \"waitStart\" so that it's initialized in\n>> InitProcGlobal() and also reset to 0 by using pg_atomic_write_u64() in\n>> InitProcess() and InitAuxiliaryProcess().\n>>\n>> 2. Thanks to the above change, InitProcGlobal() initializes \"waitStart\"\n>> even in PGPROC entries for prepare transactions. But those entries are\n>> zeroed in MarkAsPreparingGuts(), so \"waitStart\" needs to be initialized\n>> again. Currently TwoPhaseGetDummyProc() initializes \"waitStart\" in the\n>> PGPROC entry for prepare transaction. But it's better to do that in\n>> MarkAsPreparingGuts() instead because that function initializes other\n>> PGPROC variables. So I moved that initialization code from\n>> TwoPhaseGetDummyProc() to MarkAsPreparingGuts().\n>>\n>> Patch attached. Thought?\n> \n> Thanks for updating the patch!\n> \n> It seems to me that the modification is right.\n> I ran some regression tests but didn't find problems.\n\nThanks for the review and test! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 22 Feb 2021 18:27:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: adding wait_start column to pg_locks"
}
] |
[
{
"msg_contents": "Hi all,\n\noutfuncs.c contains a switch statement responsible for choosing\nserialization function per node type here:\nhttps://github.com/postgres/postgres/blob/master/src/backend/nodes/outfuncs.c#L3711\nIt spans over >650LOC and is quite unreadable, requiring using search\nor code analysis tools for pretty much anything.\n\nI'd like to sort these case branches alphabetically and I'd like to\nget some input on that prior to submitting a patch. Obvious benefit\nwould be increase in readability, with the downside of somewhat\nmessing up the git history.\n\n(readfuncs.c contains a similar construct for deserializing nodes, but\nthat one is if...else based as opposed to switch, so order there might\nhave performance implications -> I'd reserve that topic for separate\ndiscussion).\n\n---\nBest regards,\nFedir\n\n\n",
"msg_date": "Tue, 15 Dec 2020 14:53:35 -0800",
"msg_from": "Fedir Panasenko <fpanasenko@gmail.com>",
"msg_from_op": true,
"msg_subject": "Sorting case branches in outfuncs.c/outNode alphabetically"
},
{
"msg_contents": "Fedir Panasenko <fpanasenko@gmail.com> writes:\n> outfuncs.c contains a switch statement responsible for choosing\n> serialization function per node type here:\n> https://github.com/postgres/postgres/blob/master/src/backend/nodes/outfuncs.c#L3711\n\nWhy are you concerned about outfuncs.c in particular? Its sibling files\n(copyfuncs, equalfuncs, etc) have much the same structure.\n\n> It spans over >650LOC and is quite unreadable, requiring using search\n> or code analysis tools for pretty much anything.\n\nBut why exactly do you need to read it? It's boring boilerplate.\n\n> I'd like to sort these case branches alphabetically and I'd like to\n> get some input on that prior to submitting a patch.\n\nI'd be a strong -1 for alphabetical sort. To my mind, the entries\nhere, and in other similar places, should match the order in which the\nstruct types are declared in src/include/nodes/*nodes.h. And those\nare not sorted alphabetically, but (more or less) by functionality.\nI would *definitely* not favor a patch that arbitrarily re-orders\nthose header files alphabetically.\n\nNow, IIRC the ordering in the backend/nodes/*.c files is not always\na perfect match to the headers. I'd be good with a patch that makes\nthem line up better. But TBH, that is just neatnik-ism; I still don't\nsee how it makes any interesting difference to readability.\n\nKeep in mind also that various people have shown interest in\nauto-generating the backend/nodes/*.c files from the header\ndeclarations, in which case this discussion would be moot.\n\n> (readfuncs.c contains a similar construct for deserializing nodes, but\n> that one is if...else based as opposed to switch, so order there might\n> have performance implications -> I'd reserve that topic for separate\n> discussion).\n\nYeah, I keep wondering when that structure is going to become a\nnoticeable performance problem. There's little reason to think that\nwe've ordered the node types by frequency there. 
At some point it might\nmake sense to convert readfuncs' lookup logic into, say, a perfect hash\ntable (cf src/tools/PerfectHash.pm). I'd certainly think that that\nwould be a more useful activity than arguing over the switch order.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Dec 2020 18:22:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sorting case branches in outfuncs.c/outNode alphabetically"
}
] |
[
{
"msg_contents": "Hi all,\n(Added Bruce and Daniel in CC:)\n\n$subject has been mentioned a couple of times lately, mainly by me for\nthe recent cryptohash refactoring that has been done. We use in the\ncore code HMAC with SHA256 for SCRAM, but this logic should go through\nSSL libraries able to support it (OpenSSL and libnss allow that) so as\nrequirements like FIPS can be pushed down to any lower-level library\nwe are building with and not Postgres.\n\nFWIW, I have also bumped into this stuff as being a requirement for\nthe recent thread about file-level encryption in [1] where the code\nmakes use of HMAC with SHA512.\n\nSo, please find attached a patch set to rework all that. This\nprovides a split similar to what I have done recently for cryptohash\nfunctions, with a fallback implementation located as of\nsrc/common/hmac.c, that depends itself on the fallback implementations\nof cryptohash functions. The OpenSSL part is done hmac_openssl.c.\n\nThere are five APIs to be able to plug in HMAC implementations to\ncreate, initialize, update, finalize and free a HMAC context, in a\nfashion similar to cryptohashes.\n\nRegarding OpenSSL, upstream has changed lately the way it is possible\nto control HMACs. 3.0.0 has introduced a new set of APIs, with\ncompatibility macros for older versions, as mentioned here:\nhttps://www.openssl.org/docs/manmaster/man3/EVP_MAC_CTX_new.html\nThe new APIs are named EVP_MAC_CTX_new() and such.\n\nI think that this is a bit too new to use though, as we need to\nsupport OpenSSL down to 1.0.1 on HEAD and because there are\ncompatibility macros. So instead I have decided to rely on the older\ninterface based on HMAC_Init_ex() and co:\nhttps://www.openssl.org/docs/manmaster/man3/HMAC.html\n\nAfter that there is another point to note. In 1.1.0, any consumers of\nHMAC *have* to let OpenSSL allocate the HMAC context, like\ncryptohashes because the internals of the HMAC context are only known\nto OpenSSL. 
In 1.0.2 and older versions, it is possible to have\naccess to HMAC_CTX. This requires an extra tweak in hmac_openssl.c\nwhere we need to {m,p}alloc by ourselves instead of calling\nHMAC_CTX_new() for 1.1.0 and 1.1.1 but some extra configure switches\nare able to do the trick. That means that we could use resowners only\nwhen building with OpenSSL >= 1.1.0, and not for older versions but\nnote that the patch uses resowners anyway, as a matter of simplicity.\nAs the changes are local to a single file, that's easy enough to\nfollow and update. It would be easy enough to rip out this code once\nsupport for older OpenSSL versions is removed.\n\nPlease note that I have added code that should be enough for the\ncompilation on Windows, but I have not taken the time to check that.\nI have checked that things compiled and that check-world passes\nwith and without OpenSSL 1.1.1 on Linux though, so I guess that I have\nnot messed up too badly. This stuff requires many more tests, like\nmaking sure that we are able to connect to PG correctly with SCRAM\nwhen using combinations like libpq based on OpenSSL and the backend\nusing the fallback, or just check the consistency of the results of\ncomputations with SQL functions or such. An extra thing that can be\ndone is to clean up pgcrypto's px-hmac.c but this also requires SHA1\nin cryptohash.c, something that I have submitted separately in [2].\nSo this could just be done later. This patch has updated the code of\nSCRAM so that we don't use all the SCRAM/HMAC business anymore but the\ngeneric HMAC routines instead for this work.\n\nThoughts are welcome. I am adding that to the next CF.\n\n[1]: https://www.postgresql.org/message-id/X9lhi1ht04I+v/rV@paquier.xyz\n[2]: https://commitfest.postgresql.org/31/2868/\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 16 Dec 2020 16:17:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Refactoring HMAC in the core code"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 04:17:50PM +0900, Michael Paquier wrote:\n> Please note that I have added code that should be enough for the\n> compilation on Windows, but I have not taken the time to check that.\n> I have checked that things compiled and that check-world passes\n> with and without OpenSSL 1.1.1 on Linux though, so I guess that I have\n> not messed up too badly. This stuff requires much more tests, like\n> making sure that we are able to connect to PG correctly with SCRAM\n> when using combinations like libpq based on OpenSSL and the backend\n> using the fallback, or just check the consistency of the results of\n> computations with SQL functions or such. An extra thing that can be\n> done is to clean up pgcrypto's px-hmac.c but this also requires SHA1\n> in cryptohash.c, something that I have submitted separately in [2].\n> So this could just be done later. This patch has updated the code of\n> SCRAM so as we don't use anymore all the SCRAM/HMAC business but the\n> generic HMAC routines instead for this work.\n> \n> Thoughts are welcome. I am adding that to the next CF.\n\nVery nice. Are you planning to apply this soon? If so, I will delay\nmy key management patch until this is applied. If not, I will update my\nHMAC call when you apply this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 12:53:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 12:53:06PM -0500, Bruce Momjian wrote:\n> Very nice. Are you planning to apply this soon? If so, I will delay\n> my key management patch until this is applied. If not, I will update my\n> HMAC call when you apply this.\n\nKnowing that we are in a period of vacations for a lot of people, and\nthat this is a sensitive area of the code that involves\nauthentication, I think that it is better to let this thread brew\nlonger and get more eyes to look at it. As this also concerns\nexternal SSL libraries like libnss, making sure that the APIs have a\nshape flexible enough would be good. Based on my own checks with\nOpenSSL and libnss, I think that's more than enough. But let's be\nsure.\n\nI don't think that this prevents to update your code to rely on this\nnew API as you could post a copy of this patch in your own patch\nseries (the CF bot can pick up a set of patches labeled with\nformat-patch), making your own feature a bit smaller in size. But I\nguess that depends on how you want to maintain a live patch series.\n\nWhat should be really avoided is to commit in the code tree any code\nthat we know could have been refactored out first, so as we always\nhave in the tree what we consider as a clean state and don't\naccumulate duplications. That pays off a lot when it comes to the\nbuildfarm turning suddenly red where a revert is necessary, as\nincremental changes reduce the number of things to work on at once,\nand the number of changes to revert.\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 08:41:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 08:41:01AM +0900, Michael Paquier wrote:\n> Knowing that we are in a period of vacations for a lot of people, and\n> that this is a sensitive area of the code that involves\n> authentication, I think that it is better to let this thread brew\n> longer and get more eyes to look at it. As this also concerns\n> external SSL libraries like libnss, making sure that the APIs have a\n> shape flexible enough would be good. Based on my own checks with\n> OpenSSL and libnss, I think that's more than enough. But let's be\n> sure.\n\nFWIW, I got my eyes on this stuff again today, and please find\nattached a v2, where I have fixed a certain number of issues:\n- Fixed a memory leak with the shrink buffer in the fallback\nimplementation.\n- Fixed a couple of incorrect comments.\n- The logic around the resowner was a bit busted with OpenSSL <=\n1.0.2. So I haev reorganized the code a bit.\n\nThis has been tested on Windows and Linux across all the versions of\nOpenSSL we support on HEAD. I am also attaching a small module called\nhmacfuncs that I used as a way to validate this patch across all the\nversions of OpenSSL and the fallback implementation. As a reference,\nthis matches with the results from Wikipedia here:\nhttps://en.wikipedia.org/wiki/HMAC#Examples\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 15:46:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 03:46:42PM +0900, Michael Paquier wrote:\n> On Fri, Dec 18, 2020 at 08:41:01AM +0900, Michael Paquier wrote:\n> > Knowing that we are in a period of vacations for a lot of people, and\n> > that this is a sensitive area of the code that involves\n> > authentication, I think that it is better to let this thread brew\n> > longer and get more eyes to look at it. As this also concerns\n> > external SSL libraries like libnss, making sure that the APIs have a\n> > shape flexible enough would be good. Based on my own checks with\n> > OpenSSL and libnss, I think that's more than enough. But let's be\n> > sure.\n...\n> This has been tested on Windows and Linux across all the versions of\n> OpenSSL we support on HEAD. I am also attaching a small module called\n> hmacfuncs that I used as a way to validate this patch across all the\n> versions of OpenSSL and the fallback implementation. As a reference,\n> this matches with the results from Wikipedia here:\n> https://en.wikipedia.org/wiki/HMAC#Examples\n\nGreat. See my questions in the key manager thread about whether I\nshould use the init/update/final API or just keep the one-line version.\nAs far as when to commit this, I think the quiet time is actually better\nbecause if you break something, it is less of a disruption while you fix\nit.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 10:48:00 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 10:48:00AM -0500, Bruce Momjian wrote:\n> Great. See my questions in the key manager thread about whether I\n> should use the init/update/final API or just keep the one-line version.\n> As far as when to commit this, I think the quiet time is actually better\n> because if you break something, it is less of a disruption while you fix\n> it.\n\nPlease note that on a related thread that I have begun yesterday,\nHeikki has suggested some changes in the way we handle the opaque data\nused by each cryptohash implementation.\nhttps://www.postgresql.org/message-id/6ebe7f1f-bf37-2688-2ac1-a081d278367c@iki.fi\n\nAs the design used on this thread for HMAC is similar to what I did\nfor cryptohashes, it would be good to conclude first on the interface\nthere, and then come back here so as a consistent design is used. As\na whole, I don't think that there is any need to rush for this stuff.\nI would rather wait more and make sure that we agree on an interface\npeople are happy enough with.\n--\nMichael",
"msg_date": "Sat, 19 Dec 2020 09:35:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 09:35:57AM +0900, Michael Paquier wrote:\n> On Fri, Dec 18, 2020 at 10:48:00AM -0500, Bruce Momjian wrote:\n> > Great. See my questions in the key manager thread about whether I\n> > should use the init/update/final API or just keep the one-line version.\n> > As far as when to commit this, I think the quiet time is actually better\n> > because if you break something, it is less of a disruption while you fix\n> > it.\n> \n> Please note that on a related thread that I have begun yesterday,\n> Heikki has suggested some changes in the way we handle the opaque data\n> used by each cryptohash implementation.\n> https://www.postgresql.org/message-id/6ebe7f1f-bf37-2688-2ac1-a081d278367c@iki.fi\n> \n> As the design used on this thread for HMAC is similar to what I did\n> for cryptohashes, it would be good to conclude first on the interface\n> there, and then come back here so as a consistent design is used. As\n> a whole, I don't think that there is any need to rush for this stuff.\n> I would rather wait more and make sure that we agree on an interface\n> people are happy enough with.\n\nOthers are waiting to continue working. I am not going to hold up a\npatch over a one function, two-line API issue. I will deal with\nwhatever new API you choose, and mine will work fine using the OpenSSL\nAPI until I do.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 19:42:02 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 07:42:02PM -0500, Bruce Momjian wrote:\n> > Please note that on a related thread that I have begun yesterday,\n> > Heikki has suggested some changes in the way we handle the opaque data\n> > used by each cryptohash implementation.\n> > https://www.postgresql.org/message-id/6ebe7f1f-bf37-2688-2ac1-a081d278367c@iki.fi\n> > \n> > As the design used on this thread for HMAC is similar to what I did\n> > for cryptohashes, it would be good to conclude first on the interface\n> > there, and then come back here so as a consistent design is used. As\n> > a whole, I don't think that there is any need to rush for this stuff.\n> > I would rather wait more and make sure that we agree on an interface\n> > people are happy enough with.\n> \n> Others are waiting to continue working. I am not going to hold up a\n> patch over a one function, two-line API issue. I will deal with\n> whatever new API you choose, and mine will work fine using the OpenSSL\n> API until I do.\n\nI will also point out that my patch is going to be bigger and bigger,\nand harder to review, the longer I work on it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 19:43:20 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 03:46:42PM +0900, Michael Paquier wrote:\n> This has been tested on Windows and Linux across all the versions of\n> OpenSSL we support on HEAD. I am also attaching a small module called\n> hmacfuncs that I used as a way to validate this patch across all the\n> versions of OpenSSL and the fallback implementation. As a reference,\n> this matches with the results from Wikipedia here:\n> https://en.wikipedia.org/wiki/HMAC#Examples\n\nPlease find attached a rebased version. I have simplified the\nimplementation to use an opaque pointer similar to the cryptohash\npart, leading to a large cleanup of the allocation logic for both\nimplementations, with and without OpenSSL.\n--\nMichael",
"msg_date": "Fri, 8 Jan 2021 16:11:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Jan 08, 2021 at 04:11:53PM +0900, Michael Paquier wrote:\n> Please find attached a rebased version. I have simplified the\n> implementation to use an opaque pointer similar to the cryptohash\n> part, leading to a large cleanup of the allocation logic for both\n> implementations, with and without OpenSSL.\n\nRebased patch is attached wiht SHA1 added as of a8ed6bb. Now that\nSHA1 is part of the set of options for cryptohashes, a lot of code of\npgcrypto can be cleaned up thanks to the refactoring done here, but\nI am leaving that as a separate item to address later.\n--\nMichael",
"msg_date": "Sat, 23 Jan 2021 13:43:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 01:43:20PM +0900, Michael Paquier wrote:\n> Rebased patch is attached wiht SHA1 added as of a8ed6bb. Now that\n> SHA1 is part of the set of options for cryptohashes, a lot of code of\n> pgcrypto can be cleaned up thanks to the refactoring done here, but\n> I am leaving that as a separate item to address later.\n\nAgain a new rebase, giving v5:\n- Fixed the APIs to return -1 if the caller gives NULL in input, to be\nconsistent with cryptohash.\n- Added a length argument to pg_hmac_final(), wiht sanity checks.\n--\nMichael",
"msg_date": "Mon, 15 Feb 2021 20:25:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Mon, Feb 15, 2021 at 08:25:27PM +0900, Michael Paquier wrote:\n> Again a new rebase, giving v5:\n> - Fixed the APIs to return -1 if the caller gives NULL in input, to be\n> consistent with cryptohash.\n> - Added a length argument to pg_hmac_final(), wiht sanity checks.\n\nSo, this patch has been around for a couple of weeks now, and I would\nlike to get this part done in 14 to close the loop with the parts of\nthe code that had better rely on what the crypto libs have. The main\nadvantage of this change is for SCRAM so as it does not use its own\nimplementation of HMAC whenever possible.\n\nAny objections?\n--\nMichael",
"msg_date": "Fri, 2 Apr 2021 19:04:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 07:04:18PM +0900, Michael Paquier wrote:\n> On Mon, Feb 15, 2021 at 08:25:27PM +0900, Michael Paquier wrote:\n> > Again a new rebase, giving v5:\n> > - Fixed the APIs to return -1 if the caller gives NULL in input, to be\n> > consistent with cryptohash.\n> > - Added a length argument to pg_hmac_final(), wiht sanity checks.\n> \n> So, this patch has been around for a couple of weeks now, and I would\n> like to get this part done in 14 to close the loop with the parts of\n> the code that had better rely on what the crypto libs have. The main\n> advantage of this change is for SCRAM so as it does not use its own\n> implementation of HMAC whenever possible.\n> \n> Any objections?\n\nWorks for me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:10:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring HMAC in the core code"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 10:10:36AM -0400, Bruce Momjian wrote:\n> Works for me.\n\nThanks. I got to spend some time on this stuff again today and did a\ncomplete review, without noticing any issues except some indentation\nthat was strange so I have applied it. Attached is a small extension\nI have used for some of my tests to validate the implementations.\nThis uses some result samples one can find on Wikipedia at [1], for\ninstance.\n\n[1]: https://en.wikipedia.org/wiki/HMAC\n--\nMichael",
"msg_date": "Sat, 3 Apr 2021 19:02:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring HMAC in the core code"
}
] |
[
{
"msg_contents": "I've been looking at the COPY FROM parsing code, trying to refactor it \nso that the parallel COPY would be easier to implement. I haven't \ntouched parallelism itself, just looking for ways to smoothen the way. \nAnd for ways to speed up COPY in general.\n\nCurrently, COPY FROM parses the input one line at a time. Each line is \nconverted to the database encoding separately, or if the file encoding \nmatches the database encoding, we just check that the input is valid for \nthe encoding. It would be more efficient to do the encoding \nconversion/verification in larger chunks. At least potentially; the \ncurrent conversion/verification implementations work one byte a time so \nit doesn't matter too much, but there are faster algorithms out there \nthat use SIMD instructions or lookup tables that benefit from larger inputs.\n\nSo I'd like to change it so that the encoding conversion/verification is \ndone before splitting the input into lines. The problem is that the \nconversion and verification functions throw an error on incomplete \ninput. So we can't pass them a chunk of N raw bytes, if we don't know \nwhere the character boundaries are. The first step in this effort is to \nchange the encoding and conversion routines to allow that. Attached \npatches 0001-0004 do that:\n\nFor encoding conversions, change the signature of the conversion \nfunction, by adding a \"bool noError\" argument and making them return the \nnumber of input bytes successfully converted. That way, the conversion \nfunction can be called in a streaming fashion: load a buffer with raw \ninput without caring about the character boundaries, call the conversion \nfunction to convert it except for the few bytes at the end that might be \nan incomplete character, load the buffer with more data, and repeat.\n\nFor encoding verification, add a new function that works similarly. 
It \ntakes N bytes of raw input, verifies as much of it as possible, and \nreturns the number of input bytes that were valid. In principle, this \ncould've been implemented by calling the existing pg_encoding_mblen() \nand pg_encoding_verifymb() functions in a loop, but it would be too \nslow. This adds encoding-specific functions for that. The UTF-8 \nimplementation is slightly optimized by basically inlining the \npg_utf8_mblen() call, the other implementations are pretty naive.\n\n- Heikki",
"msg_date": "Wed, 16 Dec 2020 14:17:58 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 02:17:58PM +0200, Heikki Linnakangas wrote:\n> I've been looking at the COPY FROM parsing code, trying to refactor it so\n> that the parallel COPY would be easier to implement. I haven't touched\n> parallelism itself, just looking for ways to smoothen the way. And for ways\n> to speed up COPY in general.\n\nYes, this makes a lot of sense. Glad you are looking into this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 13:04:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "One of the patches in this patch set is worth calling out separately: \n0003-Add-direct-conversion-routines-between-EUC_TW-and-Bi.patch. Per \ncommit message:\n\n Add direct conversion routines between EUC_TW and Big5.\n\n Conversions between EUC_TW and Big5 were previously implemented by\n converting the whole input to MIC first, and then from MIC to the\n target encoding. Implement functions to convert directly between the\n two.\n\n The reason to do this now is that the next patch will change the\n change the conversion function signature so that if the input is\n invalid, we convert as much as we can and return the number of bytes\n successfully converted. That's not possible if we use an intermediary\n format, because if an error happens in the intermediary -> final\n conversion, we lose track of the location of the invalid character in\n the original input. Avoiding the intermediate step should be faster\n too.\n\nThis patch is fairly independent of the others. It could be reviewed and \napplied separately.\n\n\nIn order to verify that the new code is correct, I wrote some helper \nplpgsql functions to generate all valid EUC_TW and Big5 byte sequences \nthat encode one character, and tested converting each of them. Then I \ncompared the the results with unpatched server, to check that the new \ncode performs the same conversion. This is perhaps overkill, but since \nits pretty straightforward to enumerate all the input characters, might \nas well do it.\n\nFor the sake of completeness, I wrote similar helpers for all the other \nencodings and conversions. Except for UTF-8, there are too many formally \nvalid codepoints for that to feasible. This does test round-trip \nconversions of all codepoints from all the other encodings to UTF-8 and \nback, though, so there's pretty good coverage of UTF-8 too.\n\nThis test suite is probably too large to add to the source tree, but for \nthe sake of the archives, I'm attaching it here. 
The first patch adds \nthe test suite, including the expected output of each conversion. The \nsecond patch contains expected output changes for the above patch to add \ndirect conversions between EUC_TW and Big5. It affected the error \nmessages for some byte sequences that cannot be converted. For example, \non unpatched master:\n\npostgres=# select convert('\\xfdcc', 'euc_tw', 'big5');\nERROR: character with byte sequence 0x95 0xfd 0xcc in encoding \n\"MULE_INTERNAL\" has no equivalent in encoding \"BIG5\"\n\nWith the patch:\n\npostgres=# select convert('\\xfdcc', 'euc_tw', 'big5');\nERROR: character with byte sequence 0xfd 0xcc in encoding \"EUC_TW\" has \nno equivalent in encoding \"BIG5\"\n\nThe old message talked about \"MULE_INTERNAL\" which exposes the \nimplementation detail that we used it as an intermediate in the \nconversion. That can be confusing to a user; the new message makes more \nsense. So that's also nice.\n\n- Heikki",
"msg_date": "Thu, 17 Dec 2020 23:44:22 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 8:18 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Currently, COPY FROM parses the input one line at a time. Each line is\n> converted to the database encoding separately, or if the file encoding\n> matches the database encoding, we just check that the input is valid for\n> the encoding. It would be more efficient to do the encoding\n> conversion/verification in larger chunks. At least potentially; the\n> current conversion/verification implementations work one byte a time so\n> it doesn't matter too much, but there are faster algorithms out there\n> that use SIMD instructions or lookup tables that benefit from larger\ninputs.\n\nHi Heikki,\n\nThis is great news. I've seen examples of such algorithms and that'd be\nnice to have. I haven't studied the patch in detail, but it looks fine on\nthe whole.\n\nIn 0004, it seems you have some doubts about upgrade compatibility. Is that\nbecause user-defined conversions would no longer have the right signature?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Dec 16, 2020 at 8:18 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:>> Currently, COPY FROM parses the input one line at a time. Each line is> converted to the database encoding separately, or if the file encoding> matches the database encoding, we just check that the input is valid for> the encoding. It would be more efficient to do the encoding> conversion/verification in larger chunks. At least potentially; the> current conversion/verification implementations work one byte a time so> it doesn't matter too much, but there are faster algorithms out there> that use SIMD instructions or lookup tables that benefit from larger inputs.Hi Heikki,This is great news. I've seen examples of such algorithms and that'd be nice to have. I haven't studied the patch in detail, but it looks fine on the whole.In 0004, it seems you have some doubts about upgrade compatibility. 
Is that because user-defined conversions would no longer have the right signature?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 22 Dec 2020 16:01:48 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 22/12/2020 22:01, John Naylor wrote:\n> In 0004, it seems you have some doubts about upgrade compatibility. Is \n> that because user-defined conversions would no longer have the right \n> signature?\n\nExactly. If you have an extension that adds a custom conversion function \nand does CREATE CONVERSION, the old installation script will fail on the \nnew version. That causes trouble for pg_dump+restore and pg_upgrade.\n\nPerhaps we could accept the old signature in the server when you do \nCREATE CONVERSION, but somehow mark the conversion as broken in the \ncatalog so that you would get a runtime error if you tried to use it. \nThat would be enough to make pg_dump+restore (and pg_upgrade) not throw \nan error, and you could then upgrade the extension later (ALTER \nEXTENSION UPDATE).\n\nI'm not sure it's worth the trouble, though. Custom conversions are very \nrare. And I don't think any other object can depend on a conversion, so \nyou can always drop the conversion before upgrade, and re-create it with \nthe new function signature afterwards. A note in the release notes and a \ncheck in pg_upgrade, with instructions to drop and recreate the \nconversion, are probably enough.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 23 Dec 2020 09:41:43 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 3:41 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>\n> I'm not sure it's worth the trouble, though. Custom conversions are very\n> rare. And I don't think any other object can depend on a conversion, so\n> you can always drop the conversion before upgrade, and re-create it with\n> the new function signature afterwards. A note in the release notes and a\n> check in pg_upgrade, with instructions to drop and recreate the\n> conversion, are probably enough.\n>\n\nThat was my thought as well.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Dec 23, 2020 at 3:41 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\nI'm not sure it's worth the trouble, though. Custom conversions are very \nrare. And I don't think any other object can depend on a conversion, so \nyou can always drop the conversion before upgrade, and re-create it with \nthe new function signature afterwards. A note in the release notes and a \ncheck in pg_upgrade, with instructions to drop and recreate the \nconversion, are probably enough.\nThat was my thought as well.-- John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 23 Dec 2020 14:05:25 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "Hi Heikki,\n\n0001 through 0003 are straightforward, and I think they can be committed\nnow if you like.\n\n0004 is also pretty straightforward. The check you proposed upthread for\npg_upgrade seems like the best solution to make that workable. I'll take a\nlook at 0005 soon.\n\nI measured the conversions that were rewritten in 0003, and there is indeed\na noticeable speedup:\n\nBig5 to EUC-TW:\n\nhead 196ms\n0001-3 152ms\n\nEUC-TW to Big5:\n\nhead 190ms\n0001-3 144ms\n\nI've attached the driver function for reference. Example use:\n\nselect drive_conversion(\n 1000, 'euc_tw'::name, 'big5'::name,\n convert('a few kB of utf8 text here', 'utf8', 'euc_tw')\n);\n\nI took a look at the test suite also, and the only thing to note is a\ncouple places where the comment doesn't match the code:\n\n+ -- JIS X 0201: 2-byte encoded chars starting with 0x8e (SS2)\n+ byte1 = hex('0e');\n+ for byte2 in hex('a1')..hex('df') loop\n+ return next b(byte1, byte2);\n+ end loop;\n+\n+ -- JIS X 0212: 3-byte encoded chars, starting with 0x8f (SS3)\n+ byte1 = hex('0f');\n+ for byte2 in hex('a1')..hex('fe') loop\n+ for byte3 in hex('a1')..hex('fe') loop\n+ return next b(byte1, byte2, byte3);\n+ end loop;\n+ end loop;\n\nNot sure if it matters , but thought I'd mention it anyway.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 27 Jan 2021 19:23:38 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 28/01/2021 01:23, John Naylor wrote:\n> Hi Heikki,\n> \n> 0001 through 0003 are straightforward, and I think they can be committed \n> now if you like.\n\nThanks for the review!\n\nI did some more rigorous microbenchmarking of patch 1 and 2. I used the \nattached test script, which calls convert_from() function to perform \nUTF-8 verification on two large strings, about 60kb each. One of the \nstrings is pure ASCII, and the other is an HTML page that contains a mix \nof ASCII and multibyte characters.\n\nCompiled with \"gcc -O2\", gcc version 10.2.1 20210110 (Debian 10.2.1-6)\n\n | mixed | ascii\n-----------+-------+-------\n master | 1866 | 1250\n patch 1 | 959 | 507\n patch 1+2 | 1396 | 987\n\nSo, the first patch, \n0001-Add-new-mbverifystr-function-for-each-encoding.patch, made huge \ndifference. Even with pure ASCII input. That's very surprising, because \nthere is already a fast-path for pure-ASCII input in pg_verify_mbstr_len().\n\nEven more surprising was that the second patch \n(0002-Replace-pg_utf8_verifystr-with-a-faster-implementati.patch) \nactually made things worse again. I thought it would give a modest gain, \nbut nope.\n\nIt seems to me that GCC is not doing good job at optimizing the loop in \npg_verify_mbstr(). The first patch fixes that, but the second patch \nsomehow trips up GCC again.\n\nSo I also tried this with \"gcc -O3\" and clang:\n\nCompiled with \"gcc -O3\"\n\n | mixed | ascii\n-----------+-------+-------\n master | 1522 | 1225\n patch 1 | 753 | 507\n patch 1+2 | 868 | 507\n\nCompiled with \"clang -O2\", Debian clang version 11.0.1-2\n\n | mixed | ascii\n-----------+-------+-------\n master | 1257 | 520\n patch 1 | 899 | 507\n patch 1+2 | 884 | 508\n\nWith gcc -O3, the results are a better, but still the second patch seems \nharmful. With clang, I got the result I expected: Almost no difference \nwith pure-ASCII input, because there's already a fast-path for that, and \na nice speedup with multibyte characters. 
Still, I was surprised how big \nthe speedup from the first patch was, and how little additional gain the \nsecond patch gives.\n\nBased on these results, I'm going to commit the first patch, but not the \nsecond one. There are much faster UTF-8 verification routines out there, \nusing SIMD instructions and whatnot, and we should consider adopting one \nof those, but that's future work.\n\n- Heikki",
"msg_date": "Thu, 28 Jan 2021 13:36:04 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 28/01/2021 01:23, John Naylor wrote:\n> Hi Heikki,\n> \n> 0001 through 0003 are straightforward, and I think they can be committed \n> now if you like.\n> \n> 0004 is also pretty straightforward. The check you proposed upthread for \n> pg_upgrade seems like the best solution to make that workable. I'll take \n> a look at 0005 soon.\n> \n> I measured the conversions that were rewritten in 0003, and there is \n> indeed a noticeable speedup:\n> \n> Big5 to EUC-TW:\n> \n> head 196ms\n> 0001-3 152ms\n> \n> EUC-TW to Big5:\n> \n> head 190ms\n> 0001-3 144ms\n> \n> I've attached the driver function for reference. Example use:\n> \n> select drive_conversion(\n> 1000, 'euc_tw'::name, 'big5'::name,\n> convert('a few kB of utf8 text here', 'utf8', 'euc_tw')\n> );\n\nThanks! I have committed patches 0001 and 0003 in this series, with \nminor comment fixes. Next I'm going to write the pg_upgrade check for \npatch 0004, to get that into a committable state too.\n\n> I took a look at the test suite also, and the only thing to note is a \n> couple places where the comment doesn't match the code:\n> \n> + -- JIS X 0201: 2-byte encoded chars starting with 0x8e (SS2)\n> + byte1 = hex('0e');\n> + for byte2 in hex('a1')..hex('df') loop\n> + return next b(byte1, byte2);\n> + end loop;\n> +\n> + -- JIS X 0212: 3-byte encoded chars, starting with 0x8f (SS3)\n> + byte1 = hex('0f');\n> + for byte2 in hex('a1')..hex('fe') loop\n> + for byte3 in hex('a1')..hex('fe') loop\n> + return next b(byte1, byte2, byte3);\n> + end loop;\n> + end loop;\n> \n> Not sure if it matters , but thought I'd mention it anyway.\n\nGood catch! The comments were correct, and the tests were wrong, not \ntesting those 2- and 3-byte encoded characters as intened. 
Doesn't \nmatter for testing this patch, I only included those euc_jis_2004 tets \nfor the sake of completeness, but if someone finds this test suite in \nthe archives and want to use it for something real, make sure you fix \nthat first.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 28 Jan 2021 15:05:39 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 28/01/2021 15:05, Heikki Linnakangas wrote:\n> Next I'm going to write the pg_upgrade check for\n> patch 0004, to get that into a committable state too.\n\nAs promised, here are new versions of the remaining patches, with the \npg_upgrade check added. If you have any custom encoding conversions in \nthe old cluster, pg_upgrade now fails:\n\n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> Checking database user is the install user ok\n> Checking database connection settings ok\n> Checking for prepared transactions ok\n> Checking for reg* data types in user tables ok\n> Checking for contrib/isn with bigint-passing mismatch ok\n> Checking for user-defined encoding conversions fatal\n> \n> Your installation contains user-defined encoding conversions.\n> The conversion function parameters changed in PostgreSQL version 14\n> so this cluster cannot currently be upgraded. You can remove the\n> encoding conversions in the old cluster and restart the upgrade.\n> A list of user-defined encoding conversions is in the file:\n> encoding_conversions.txt\n> \n> Failure, exiting\n\nTo test this, I wrote a dummy conversion function, also attached.\n\n- Heikki",
"msg_date": "Thu, 28 Jan 2021 18:43:58 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 7:36 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Even more surprising was that the second patch\n> (0002-Replace-pg_utf8_verifystr-with-a-faster-implementati.patch)\n> actually made things worse again. I thought it would give a modest gain,\n> but nope.\n\nHmm, that surprised me too.\n\n> Based on these results, I'm going to commit the first patch, but not the\n> second one. There are much faster UTF-8 verification routines out there,\n> using SIMD instructions and whatnot, and we should consider adopting one\n> of those, but that's future work.\n\nI have something in mind for that.\n\nI took a look at v2, and for the first encoding I tried, it fails to report\nthe error for invalid input:\n\ncreate database euctest WITH ENCODING 'EUC_CN' LC_COLLATE='zh_CN.eucCN'\nLC_CTYPE='zh_CN.eucCN' TEMPLATE=template0;\n\n\\c euctest\ncreate table foo (a text);\n\nmaster:\n\neuctest=# copy foo from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> ä\n>> \\.\nERROR: character with byte sequence 0xc3 0xa4 in encoding \"UTF8\" has no\nequivalent in encoding \"EUC_CN\"\nCONTEXT: COPY foo, line 1\n\npatch:\n\neuctest=# copy foo from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> ä\n>> \\.\nCOPY 0\neuctest=#\n\nI believe the problem is in UtfToLocal(). I've attached a fix formatted as\na text file to avoid confusing the cfbot. The fix keeps the debugging\nereport() in case you find it useful. Some additional test coverage might\nbe good here, but not sure how much work that would be. 
I didn't check any\nother conversions yet.\n\n\nv2-0002 seems fine to me, I just have cosmetic comments here:\n\n+ * the same, no conversion is required by we must still validate that the\n\ns/by/but/\n\nThis comment in copyfrom_internal.h above the *StateData struct is the same\nas the corresponding one in copyto.c:\n\n * Multi-byte encodings: all supported client-side encodings encode\nmulti-byte\n * characters by having the first byte's high bit set. Subsequent bytes of\nthe\n * character can have the high bit not set. When scanning data in such an\n * encoding to look for a match to a single-byte (ie ASCII) character, we\nmust\n * use the full pg_encoding_mblen() machinery to skip over multibyte\n * characters, else we might find a false match to a trailing byte. In\n * supported server encodings, there is no possibility of a false match, and\n * it's faster to make useless comparisons to trailing bytes than it is to\n * invoke pg_encoding_mblen() to skip over them. encoding_embeds_ascii is\ntrue\n * when we have to do it the hard way.\n\nThe references to pg_encoding_mblen() and encoding_embeds_ascii, are out of\ndate for copy-from. I'm not sure the rest is relevant to copy-from anymore,\neither. Can you confirm?\n\nThis comment inside the struct is now out of date as well:\n\n * Similarly, line_buf holds the whole input line being processed. The\n * input cycle is first to read the whole line into line_buf, convert it\n * to server encoding there, and then extract the individual attribute\n\nHEAD has this macro already:\n\n/* Shorthand for number of unconsumed bytes available in raw_buf */\n#define RAW_BUF_BYTES(cstate) ((cstate)->raw_buf_len -\n(cstate)->raw_buf_index)\n\nIt might make sense to create a CONVERSION_BUF_BYTES equivalent since the\npatch calculates cstate->conversion_buf_len - cstate->conversion_buf_index\nin a couple places.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 30 Jan 2021 14:47:06 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 30/01/2021 20:47, John Naylor wrote:\n> I took a look at v2, and for the first encoding I tried, it fails to \n> report the error for invalid input:\n\nThat's embarassing...\n\n> I believe the problem is in UtfToLocal(). I've attached a fix formatted \n> as a text file to avoid confusing the cfbot. The fix keeps the debugging \n> ereport() in case you find it useful.\n\nThanks. I fixed it slightly differently, and also changed LocalToUtf() \nto follow the same pattern, even though LocalToUtf() did not have the \nsame bug.\n\n> Some additional test coverage might be good here, but not sure how\n> much work that would be. I didn't check any other conversions yet.\nI added a bunch of tests for various built-in conversions.\n\n> v2-0002 seems fine to me, I just have cosmetic comments here:\n> \n> + * the same, no conversion is required by we must still validate that the\n> \n> s/by/but/\n> \n> This comment in copyfrom_internal.h above the *StateData struct is the \n> same as the corresponding one in copyto.c:\n> \n> * Multi-byte encodings: all supported client-side encodings encode \n> multi-byte\n> * characters by having the first byte's high bit set. Subsequent bytes \n> of the\n> * character can have the high bit not set. When scanning data in such an\n> * encoding to look for a match to a single-byte (ie ASCII) character, \n> we must\n> * use the full pg_encoding_mblen() machinery to skip over multibyte\n> * characters, else we might find a false match to a trailing byte. In\n> * supported server encodings, there is no possibility of a false \n> match, and\n> * it's faster to make useless comparisons to trailing bytes than it is to\n> * invoke pg_encoding_mblen() to skip over them. encoding_embeds_ascii \n> is true\n> * when we have to do it the hard way.\n> \n> The references to pg_encoding_mblen() and encoding_embeds_ascii, are out \n> of date for copy-from. I'm not sure the rest is relevant to copy-from \n> anymore, either. 
Can you confirm?\n\nYeah, that comment is obsolete for COPY FROM, the encoding conversion \nworks differently now. Removed it from copyfrom_internal.h.\n\n> This comment inside the struct is now out of date as well:\n> \n> * Similarly, line_buf holds the whole input line being processed. The\n> * input cycle is first to read the whole line into line_buf, convert it\n> * to server encoding there, and then extract the individual attribute\n> \n> HEAD has this macro already:\n> \n> /* Shorthand for number of unconsumed bytes available in raw_buf */\n> #define RAW_BUF_BYTES(cstate) ((cstate)->raw_buf_len - \n> (cstate)->raw_buf_index)\n> \n> It might make sense to create a CONVERSION_BUF_BYTES equivalent since \n> the patch calculates cstate->conversion_buf_len - \n> cstate->conversion_buf_index in a couple places.\n\nThanks for the review!\n\nI spent some time refactoring and adding comments all around the patch, \nhopefully making it all more clear. One notable difference is that I \nrenamed 'raw_buf' (which exists in master too) to 'input_buf', and \nrenamed 'conversion_buf' to 'raw_buf'. I'm going to read through this \npatch again another day with fresh eyes, and also try to add some tests \nfor the corner cases at buffer boundaries.\n\nAttached is a new set of patches. I added some regression tests for the \nbuilt-in conversion functions, which cover the bug you found, and many \nother interesting cases that did not have test coverage yet. It comes in \ntwo patches: the first patch uses just the existing convert_from() SQL \nfunction, and the second one uses the new \"noError\" variants of the \nconversion functions. I also kept the bug-fixes compared to the previous \npatch version as a separate commit, for easier review.\n\n- Heikki",
"msg_date": "Mon, 1 Feb 2021 18:15:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Mon, Feb 1, 2021 at 12:15 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Thanks. I fixed it slightly differently, and also changed LocalToUtf()\n> to follow the same pattern, even though LocalToUtf() did not have the\n> same bug.\n\nLooks good to me.\n\n> I added a bunch of tests for various built-in conversions.\n\nNice! I would like to have utf8 tests for every category of invalid byte\n(overlong, surrogate, 5 bytes, etc), but it's not necessary for this patch.\n\n> I spent some time refactoring and adding comments all around the patch,\n> hopefully making it all more clear. One notable difference is that I\n> renamed 'raw_buf' (which exists in master too) to 'input_buf', and\n> renamed 'conversion_buf' to 'raw_buf'. I'm going to read through this\n> patch again another day with fresh eyes, and also try to add some tests\n> for the corner cases at buffer boundaries.\n\nThe comments and renaming are really helpful in understanding that file!\n\nAlthough a new patch is likely forthcoming, I did take a brief look and\nfound the following:\n\n\nIn copyfromparse.c, this is now out of date:\n\n * Read the next input line and stash it in line_buf, with conversion to\n * server encoding.\n\n\nOne of your FIXME comments seems to allude to this, but if we really need a\ndifference here, maybe it should be explained:\n\n+#define INPUT_BUF_SIZE 65536 /* we palloc INPUT_BUF_SIZE+1 bytes */\n\n+#define RAW_BUF_SIZE 65536 /* allocated size of the buffer */\n\n\nLastly, it looks like pg_do_encoding_conversion_buf() ended up in 0003\naccidentally?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Feb 1, 2021 at 12:15 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:> Thanks. I fixed it slightly differently, and also changed LocalToUtf()> to follow the same pattern, even though LocalToUtf() did not have the> same bug.Looks good to me.> I added a bunch of tests for various built-in conversions.Nice! 
I would like to have utf8 tests for every category of invalid byte (overlong, surrogate, 5 bytes, etc), but it's not necessary for this patch.> I spent some time refactoring and adding comments all around the patch,> hopefully making it all more clear. One notable difference is that I> renamed 'raw_buf' (which exists in master too) to 'input_buf', and> renamed 'conversion_buf' to 'raw_buf'. I'm going to read through this> patch again another day with fresh eyes, and also try to add some tests> for the corner cases at buffer boundaries.The comments and renaming are really helpful in understanding that file!Although a new patch is likely forthcoming, I did take a brief look and found the following:In copyfromparse.c, this is now out of date: * Read the next input line and stash it in line_buf, with conversion to * server encoding.One of your FIXME comments seems to allude to this, but if we really need a difference here, maybe it should be explained:+#define INPUT_BUF_SIZE 65536\t\t/* we palloc INPUT_BUF_SIZE+1 bytes */+#define RAW_BUF_SIZE 65536\t\t/* allocated size of the buffer */Lastly, it looks like pg_do_encoding_conversion_buf() ended up in 0003 accidentally?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Feb 2021 17:42:31 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 02/02/2021 23:42, John Naylor wrote:\n> Although a new patch is likely forthcoming, I did take a brief look and \n> found the following:\n> \n> \n> In copyfromparse.c, this is now out of date:\n> \n> * Read the next input line and stash it in line_buf, with conversion to\n> * server encoding.\n> \n> \n> One of your FIXME comments seems to allude to this, but if we really \n> need a difference here, maybe it should be explained:\n> \n> +#define INPUT_BUF_SIZE 65536 /* we palloc INPUT_BUF_SIZE+1 bytes */\n> \n> +#define RAW_BUF_SIZE 65536 /* allocated size of the buffer */\n\nWe do in fact still need the +1 for the NUL terminator. It was missing \nfrom the last patch version, but that was wrong; my fuzz testing \nactually uncovered a bug caused by that. Fixed.\n\nAttached are new patch versions. The first patch is same as before, but \nrebased, pgindented, and with a couple of tiny fixes where conversion \nfunctions were still missing the \"if (noError) break;\" checks.\n\nI've hacked on the second patch more, doing more refactoring and \ncommenting for readability. I think it's in pretty good shape now.\n\n- Heikki",
"msg_date": "Sun, 7 Feb 2021 20:13:28 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Sun, Feb 7, 2021 at 2:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 02/02/2021 23:42, John Naylor wrote:\n> >\n> > In copyfromparse.c, this is now out of date:\n> >\n> > * Read the next input line and stash it in line_buf, with conversion\nto\n> > * server encoding.\n\nThis comment for CopyReadLine() is still there. Conversion already happened\nby now, so I think this comment is outdated.\n\nOther than that, I think this is ready for commit.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sun, Feb 7, 2021 at 2:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:>> On 02/02/2021 23:42, John Naylor wrote:> >> > In copyfromparse.c, this is now out of date:> >> > * Read the next input line and stash it in line_buf, with conversion to> > * server encoding.This comment for CopyReadLine() is still there. Conversion already happened by now, so I think this comment is outdated.Other than that, I think this is ready for commit.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Feb 2021 09:40:02 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 09/02/2021 15:40, John Naylor wrote:\n> On Sun, Feb 7, 2021 at 2:13 PM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> >\n> > On 02/02/2021 23:42, John Naylor wrote:\n> > >\n> > > In copyfromparse.c, this is now out of date:\n> > >\n> > > * Read the next input line and stash it in line_buf, with \n> conversion to\n> > > * server encoding.\n> \n> This comment for CopyReadLine() is still there. Conversion already \n> happened by now, so I think this comment is outdated.\n> \n> Other than that, I think this is ready for commit.\n\nFixed. And also fixed one more bug in allocating raw_buf_size, the \"+ 1\" \nsomehow went missing again. That was causing a failure on Windows at \ncfbot.cputube.org.\n\nI'll read through this one more time with fresh eyes tomorrow or the day \nafter, and push. Thanks for all the review!\n\n- Heikki\n\n\n",
"msg_date": "Tue, 9 Feb 2021 19:36:10 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 09/02/2021 19:36, Heikki Linnakangas wrote:\n> On 09/02/2021 15:40, John Naylor wrote:\n>> On Sun, Feb 7, 2021 at 2:13 PM Heikki Linnakangas <hlinnaka@iki.fi\n>> <mailto:hlinnaka@iki.fi>> wrote:\n>> >\n>> > On 02/02/2021 23:42, John Naylor wrote:\n>> > >\n>> > > In copyfromparse.c, this is now out of date:\n>> > >\n>> > > * Read the next input line and stash it in line_buf, with\n>> conversion to\n>> > > * server encoding.\n>>\n>> This comment for CopyReadLine() is still there. Conversion already\n>> happened by now, so I think this comment is outdated.\n>>\n>> Other than that, I think this is ready for commit.\n> \n> Fixed. And also fixed one more bug in allocating raw_buf_size, the \"+ 1\"\n> somehow went missing again. That was causing a failure on Windows at\n> cfbot.cputube.org.\n> \n> I'll read through this one more time with fresh eyes tomorrow or the day\n> after, and push. Thanks for all the review!\n\nForgot attachment..\n\n- Heikki",
"msg_date": "Tue, 9 Feb 2021 19:44:46 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Tue, Feb 9, 2021 at 1:44 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Fixed. And also fixed one more bug in allocating raw_buf_size, the \"+ 1\"\n> > somehow went missing again. That was causing a failure on Windows at\n> > cfbot.cputube.org.\n> >\n> > I'll read through this one more time with fresh eyes tomorrow or the day\n> > after, and push. Thanks for all the review!\n>\n> Forgot attachment..\n>\n> - Heikki\n\nI went ahead and rebased these.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Mar 2021 12:58:03 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "I wrote:\n\n> I went ahead and rebased these.\n\nIt looks like FreeBSD doesn't like this for some reason.\n\nI also wanted to see if this patch set had any performance effect, with and\nwithout changing how UTF-8 is validated, using the blackhole am from\nhttps://github.com/michaelpq/pg_plugins/tree/master/blackhole_am.\n\ncreate extension blackhole_am;\ncreate table blackhole_tab (a text) using blackhole_am ;\ntime ./inst/bin/psql -c \"copy blackhole_tab from '/path/to/test-copy.txt'\"\n\n....where copy-test.txt is made by\n\nfor i in {1..100}; do cat UTF-8-Sampler.htm >> test-copy.txt ; done;\n\nOn Linux x86-64, gcc 8.4, I get these numbers (minimum of five runs):\n\nmaster:\n109ms\n\nv6 do encoding in larger chunks:\n109ms\n\nv7 utf8 SIMD:\n98ms\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nI wrote:> I went ahead and rebased these.It looks like FreeBSD doesn't like this for some reason.I also wanted to see if this patch set had any performance effect, with and without changing how UTF-8 is validated, using the blackhole am from https://github.com/michaelpq/pg_plugins/tree/master/blackhole_am.create extension blackhole_am;create table blackhole_tab (a text) using blackhole_am ;time ./inst/bin/psql -c \"copy blackhole_tab from '/path/to/test-copy.txt'\"....where copy-test.txt is made by for i in {1..100}; do cat UTF-8-Sampler.htm >> test-copy.txt ; done;On Linux x86-64, gcc 8.4, I get these numbers (minimum of five runs):master:109msv6 do encoding in larger chunks:109msv7 utf8 SIMD:98ms--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Mar 2021 14:05:32 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 2:05 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> I wrote:\n>\n> > I went ahead and rebased these.\n>\n> It looks like FreeBSD doesn't like this for some reason.\n\nOn closer examination, make check was \"terminated\", not that the tests\nfailed...\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Mar 18, 2021 at 2:05 PM John Naylor <john.naylor@enterprisedb.com> wrote:>> I wrote:>> > I went ahead and rebased these.>> It looks like FreeBSD doesn't like this for some reason.On closer examination, make check was \"terminated\", not that the tests failed...--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Mar 2021 14:23:09 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 18/03/2021 20:05, John Naylor wrote:\n> I wrote:\n> \n> > I went ahead and rebased these.\n\nThanks!\n\n> I also wanted to see if this patch set had any performance effect, with \n> and without changing how UTF-8 is validated, using the blackhole am from \n> https://github.com/michaelpq/pg_plugins/tree/master/blackhole_am \n> <https://github.com/michaelpq/pg_plugins/tree/master/blackhole_am>.\n> \n> create extension blackhole_am;\n> create table blackhole_tab (a text) using blackhole_am ;\n> time ./inst/bin/psql -c \"copy blackhole_tab from '/path/to/test-copy.txt'\"\n> \n> ....where copy-test.txt is made by\n> \n> for i in {1..100}; do cat UTF-8-Sampler.htm >> test-copy.txt ; done;\n> \n> On Linux x86-64, gcc 8.4, I get these numbers (minimum of five runs):\n> \n> master:\n> 109ms\n> \n> v6 do encoding in larger chunks:\n> 109ms\n> \n> v7 utf8 SIMD:\n> 98ms\n\nThat's disappointing. Perhaps the file size is just too small to see the \neffect? I'm seeing results between 40 ms and 75 ms on my laptop when I \nrun a test like that multiple times. 
I used \"WHERE false\" instead of the \nblackhole AM but I don't think that makes much difference (only showing \na few runs here for brevity):\n\nfor i in {1..100}; do cat /tmp/utf8.html >> /tmp/test-copy.txt ; done;\n\npostgres=# create table blackhole_tab (a text) ;\nCREATE TABLE\npostgres=# \\timing\nTiming is on.\npostgres=# copy blackhole_tab from '/tmp/test-copy.txt' where false;\nCOPY 0\nTime: 53.166 ms\npostgres=# copy blackhole_tab from '/tmp/test-copy.txt' where false;\nCOPY 0\nTime: 43.981 ms\npostgres=# copy blackhole_tab from '/tmp/test-copy.txt' where false;\nCOPY 0\nTime: 71.850 ms\npostgres=# copy blackhole_tab from '/tmp/test-copy.txt' where false;\nCOPY 0\n...\n\nI tested that with a larger file:\n\nfor i in {1..10000}; do cat /tmp/utf8.html >> /tmp/test-copy.txt ; done;\npostgres=# copy blackhole_tab from '/tmp/test-copy.txt' where false;\n\nv6 do encoding in larger chunks (best of five):\nTime: 3955.514 ms (00:03.956)\n\nmaster (best of five):\nTime: 4133.767 ms (00:04.134)\n\nSo with that, I'm seeing a measurable difference.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 1 Apr 2021 11:09:02 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 01/04/2021 11:09, Heikki Linnakangas wrote:\n> On 18/03/2021 20:05, John Naylor wrote:\n>> I wrote:\n>>\n>> > I went ahead and rebased these.\n> \n> Thanks!\n\nI read through the patches one more time, made a few small comment \nfixes, and pushed.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 1 Apr 2021 12:27:09 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Perform COPY FROM encoding conversions in larger chunks"
},
{
"msg_contents": "On 04/01/21 05:27, Heikki Linnakangas wrote:\n> I read through the patches one more time, made a few small comment fixes,\n> and pushed.\n\nWow, this whole thread escaped my attention at the time, though my ears\nwould have perked right up if the subject had been something like\n'improve encoding conversion API to stream a buffer at a time'. I think\nthis is of great interest beyond one particular use case in COPY FROM.\nFor example, it could limit the allocations needed when streaming a large\ntext value out to a client; it might be used to advantage with the recent\nwork in incrementally detoasting large values, and so on.\n\nThis part seems a little underdeveloped:\n\n> * TODO: The conversion function interface is not great. Firstly, it\n> * would be nice to pass through the destination buffer size to the\n> * conversion function, so that if you pass a shorter destination buffer, it\n> * could still continue to fill up the whole buffer. Currently, we have to\n> * assume worst case expansion and stop the conversion short, even if there\n> * is in fact space left in the destination buffer. Secondly, it would be\n> * nice to return the number of bytes written to the caller, to avoid a call\n> * to strlen().\n\nIf I understand correctly, this patch already makes a breaking change to\nthe conversion function API. If that's going to be the case anyway, I wonder\nif it's worth going further and changing the API further to eliminate this\nodd limitation.\n\nThere seems to be a sort of common shape that conversion APIs have evolved\ntoward, that can be seen in both the ICU4C converters [0] and in Java's [1].\nThis current tweak to our conversion API seems to get allllmmoooosst there,\nbut just not quite. 
For example, noError allows us to keep control when\nthe function has stopped converting, but we don't find out which reason\nit stopped.\n\nIf we just went the rest of the way and structured the API like those\nexisting ones, then:\n\n- it would be super easy to write wrappers around ICU4C converters, if\n there were any we wanted to use;\n\n- I could very easily write wrappers presenting any PG-supported charset\n as a Java charset.\n\nThe essence of the API common to ICU4C and Java is this:\n\n1. You pass the function the address and length of a source buffer,\n the address and length of a destination buffer, and a flag that is true\n if you know there is no further input where this source buffer came from.\n (It's allowable to pass false and only then discover you really have no\n more input after all; then you just make one final call passing true.)\n\n2. The function eats as much as it can of the source buffer, fills as much\n as it can of the destination buffer, and returns indicating one of four\n reasons it stopped:\n\n underflow - ran out of source buffer\n overflow - ran out of destination buffer\n malformed - something in source buffer isn't valid in that representation\n unmappable - a valid codepoint not available in destination encoding\n\n Based on that, the caller refills the source buffer, or drains the\n destination buffer, or handles or reports the malformed or unmappable,\n then repeats.\n\n3. The function should update pointers on return to indicate how much\n of the source buffer it consumed and how much of the destination buffer\n it filled.\n\n4. If it left any bytes unconsumed in the source buffer, the caller must\n preserve them (perhaps moved to the front) for the next call.\n\n5. The converter can have internal state (so it is an object in Java, or\n has a UConverter struct allocated in ICU4C, to have a place for its\n state). The state gets flushed on the final call where the flag is\n passed true. 
In many cases, the converter can be implemented without\n keeping internal state, if it simply leaves, for example, an\n incomplete sequence at the end of the source buffer unconsumed, so the\n caller will move it to the front and supply the rest. On the other hand,\n any unconsumed input after the final call with flush flag true must be\n treated as malformed.\n\n6. On a malformed or unmappable return, the source buffer is left pointed\n at the start of the offending sequence and the length in bytes of\n that sequence is available for error reporting/recovery.\n\nThe efficient handling of states, returning updated pointers, and so on,\nprobably requires a function signature with 'internal' in it ... but\nthe current function signature already has 'internal', so that doesn't\nseem like a deal-breaker.\n\n\nThoughts? It seems a shame to make a breaking change in the conversion\nAPI, only to still end up with an API that \"is not great\" and is still\nimpedance-mismatched to other existing prominent conversion APIs.\n\nRegards,\n-Chap\n\n\n[0]\nhttps://unicode-org.github.io/icu/userguide/conversion/converters.html#3-buffered-or-streamed\n\n[1]\nhttps://docs.oracle.com/javase/9/docs/api/java/nio/charset/CharsetDecoder.html#decode-java.nio.ByteBuffer-java.nio.CharBuffer-boolean-\n\n\n",
"msg_date": "Sat, 1 May 2021 16:06:19 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Perform COPY FROM encoding conversions in larger chunks"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing the patch proposed at [1], I found a case where a deadlock\ninvolving a recovery conflict on a lock may not be detected. This deadlock\ncan happen between backends and the startup process, in the standby server.\nPlease see the following procedure to reproduce the deadlock.\n\n#1. Set up streaming replication.\n\n#2. Set max_standby_streaming_delay to -1 in the standby.\n\n#3. Create two tables in the primary.\n\n [PRIMARY: SESSION1]\n CREATE TABLE t1 ();\n CREATE TABLE t2 ();\n\n#4. Start a transaction and access the table t1, in the standby.\n\n [STANDBY: SESSION2]\n BEGIN;\n SELECT * FROM t1;\n\n#5. Start a transaction and lock the table t2 in access exclusive mode,\n in the primary. Also execute pg_switch_wal() to transfer the WAL record\n for the access exclusive lock to the standby.\n\n [PRIMARY: SESSION1]\n BEGIN;\n LOCK TABLE t2 IN ACCESS EXCLUSIVE MODE;\n SELECT pg_switch_wal();\n\n#6. Access the table t2 within the transaction started at #4,\n in the standby.\n\n [STANDBY: SESSION2]\n SELECT * FROM t2;\n\n#7. Lock the table t1 in access exclusive mode within the transaction\n started at #5, in the primary. Also execute pg_switch_wal() to transfer\n the WAL record for the access exclusive lock to the standby.\n\n [PRIMARY: SESSION1]\n LOCK TABLE t1 IN ACCESS EXCLUSIVE MODE;\n SELECT pg_switch_wal();\n\nAfter this procedure, you can see that the startup process and the backend\neach wait for the other's table lock, i.e., a deadlock. But this deadlock\nremains even after deadlock_timeout passes.\n\nThis seems like a bug to me.\n\n> * Deadlocks involving the Startup process and an ordinary backend process\n> * will be detected by the deadlock detector within the ordinary backend.\n\nThe cause of this issue seems to be that ResolveRecoveryConflictWithLock(),\nwhich the startup process calls when a recovery conflict on a lock happens,\ndoesn't take care of the deadlock case at all. 
You can see this fact by reading the above\nsource code comment for ResolveRecoveryConflictWithLock().\n\nTo fix this issue, I think that we should enable the STANDBY_DEADLOCK_TIMEOUT\ntimer in ResolveRecoveryConflictWithLock() so that the startup process can\nsend the PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\nThen, if the PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\nthe backend should check whether the deadlock actually happens or not.\nAttached is a POC patch implementing this.\n\nThoughts?\n\nRegards,\n\n[1] https://postgr.es/m/9a60178c-a853-1440-2cdc-c3af916cff59@amazon.com\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 16 Dec 2020 21:49:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "Wed, 16 Dec 2020 at 13:49, Fujii Masao <masao.fujii@oss.nttdata.com>:\n\n> After doing this procedure, you can see the startup process and backend\n> wait for the table lock each other, i.e., deadlock. But this deadlock\n> remains\n> even after deadlock_timeout passes.\n>\n> This seems a bug to me.\n>\n> > * Deadlocks involving the Startup process and an ordinary backend process\n> > * will be detected by the deadlock detector within the ordinary backend.\n>\n> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n> the startup process calls when recovery conflict on lock happens doesn't\n> take care of deadlock case at all. You can see this fact by reading the\n> above\n> source code comment for ResolveRecoveryConflictWithLock().\n>\n> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n> the backend should check whether the deadlock actually happens or not.\n> Attached is the POC patch implementing this.\n>\n\nI agree that this is a bug.\n\nUnfortunately, we've been hit by it in production.\nSuch a deadlock will, eventually, make all sessions wait on the startup\nprocess, making the streaming replica unusable. In case the replica is used\nfor balancing out RO queries from the primary, it causes downtime for the\nproject.\n\nIf I understand things right, the session will release its locks\nwhen max_standby_streaming_delay is reached.\nBut it'd be much better if the conflict is resolved faster,\naround deadlock_timeout.\n\nSo — huge +1 from me for fixing it.\n\n\n-- \nVictor Yegorov",
"msg_date": "Wed, 16 Dec 2020 14:36:04 +0100",
"msg_from": "Victor Yegorov <vyegorov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "Hi,\n\nOn 12/16/20 2:36 PM, Victor Yegorov wrote:\n>\n> Wed, 16 Dec 2020 at 13:49, Fujii Masao <masao.fujii@oss.nttdata.com \n> <mailto:masao.fujii@oss.nttdata.com>>:\n>\n> After doing this procedure, you can see the startup process and\n> backend\n> wait for the table lock each other, i.e., deadlock. But this\n> deadlock remains\n> even after deadlock_timeout passes.\n>\n> This seems a bug to me.\n>\n+1\n\n>\n> > * Deadlocks involving the Startup process and an ordinary\n> backend process\n> > * will be detected by the deadlock detector within the ordinary\n> backend.\n>\n> The cause of this issue seems that\n> ResolveRecoveryConflictWithLock() that\n> the startup process calls when recovery conflict on lock happens\n> doesn't\n> take care of deadlock case at all. You can see this fact by\n> reading the above\n> source code comment for ResolveRecoveryConflictWithLock().\n>\n> To fix this issue, I think that we should enable\n> STANDBY_DEADLOCK_TIMEOUT\n> timer in ResolveRecoveryConflictWithLock() so that the startup\n> process can\n> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n> the backend should check whether the deadlock actually happens or not.\n> Attached is the POC patch implementing this.\n>\ngood catch!\n\nI don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT \nshouldn't be set in ResolveRecoveryConflictWithLock() too (it is already \nset in ResolveRecoveryConflictWithBufferPin()).\n\nSo +1 to consider this as a bug and for the way the patch proposes to \nfix it.\n\nBertrand",
"msg_date": "Wed, 16 Dec 2020 15:28:33 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>\n>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>\n>> After doing this procedure, you can see the startup process and backend\n>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>> even after deadlock_timeout passes.\n>>\n>> This seems a bug to me.\n>>\n> +1\n> \n>>\n>> > * Deadlocks involving the Startup process and an ordinary backend process\n>> > * will be detected by the deadlock detector within the ordinary backend.\n>>\n>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>> the startup process calls when recovery conflict on lock happens doesn't\n>> take care of deadlock case at all. 
You can see this fact by reading the above\n>> source code comment for ResolveRecoveryConflictWithLock().\n>>\n>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>> the backend should check whether the deadlock actually happens or not.\n>> Attached is the POC patch implimenting this.\n>>\n> good catch!\n> \n> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n> \n> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n\nThanks Victor and Bertrand for agreeing!\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 17 Dec 2020 02:15:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "On 2020/12/17 2:15, Fujii Masao wrote:\n> \n> \n> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>\n>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>\n>>>\n>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>\n>>> After doing this procedure, you can see the startup process and backend\n>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>>> even after deadlock_timeout passes.\n>>>\n>>> This seems a bug to me.\n>>>\n>> +1\n>>\n>>>\n>>> > * Deadlocks involving the Startup process and an ordinary backend process\n>>> > * will be detected by the deadlock detector within the ordinary backend.\n>>>\n>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>>> the startup process calls when recovery conflict on lock happens doesn't\n>>> take care of deadlock case at all. 
You can see this fact by reading the above\n>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>\n>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>> the backend should check whether the deadlock actually happens or not.\n>>> Attached is the POC patch implementing this.\n>>>\n>> good catch!\n>>\n>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n>>\n>> So +1 to consider this as a bug and for the way the patch proposes to fix it.\n> \n> Thanks Victor and Bertrand for agreeing!\n> Attached is the updated version of the patch.\n\nAttached is v3 of the patch. Could you review this version?\n\nWhile the startup process is waiting for recovery conflict on buffer pin,\nit repeats sending the request for a deadlock check to all the backends\nevery deadlock_timeout. This may increase the workload in the startup\nprocess and backends, but since this is the original behavior, the patch\ndoesn't change that. Also in practice this may not be so harmful because\nthe period that the buffer is kept pinned is basically not so long.\n\nOn the other hand, IMO we should avoid this issue while the startup\nprocess is waiting for recovery conflict on locks since it can take\na long time to release the locks. So the patch tries to fix it.\n\nOr am I overthinking this? If this doesn't need to be handled,\nthe patch can be simplified more. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 18 Dec 2020 18:35:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "Hi,\n\nOn 12/18/20 10:35 AM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2020/12/17 2:15, Fujii Masao wrote:\n>>\n>>\n>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>>> Hi,\n>>>\n>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>>\n>>>> *CAUTION*: This email originated from outside of the organization. \n>>>> Do not click links or open attachments unless you can confirm the \n>>>> sender and know the content is safe.\n>>>>\n>>>>\n>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao \n>>>> <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>>\n>>>> After doing this procedure, you can see the startup process and \n>>>> backend\n>>>> wait for the table lock each other, i.e., deadlock. But this \n>>>> deadlock remains\n>>>> even after deadlock_timeout passes.\n>>>>\n>>>> This seems a bug to me.\n>>>>\n>>> +1\n>>>\n>>>>\n>>>> > * Deadlocks involving the Startup process and an ordinary \n>>>> backend process\n>>>> > * will be detected by the deadlock detector within the \n>>>> ordinary backend.\n>>>>\n>>>> The cause of this issue seems that \n>>>> ResolveRecoveryConflictWithLock() that\n>>>> the startup process calls when recovery conflict on lock \n>>>> happens doesn't\n>>>> take care of deadlock case at all. 
You can see this fact by \n>>>> reading the above\n>>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>>\n>>>> To fix this issue, I think that we should enable \n>>>> STANDBY_DEADLOCK_TIMEOUT\n>>>> timer in ResolveRecoveryConflictWithLock() so that the startup \n>>>> process can\n>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the \n>>>> backend.\n>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>>> the backend should check whether the deadlock actually happens \n>>>> or not.\n>>>> Attached is the POC patch implimenting this.\n>>>>\n>>> good catch!\n>>>\n>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT \n>>> shouldn't be set in ResolveRecoveryConflictWithLock() too (it is \n>>> already set in ResolveRecoveryConflictWithBufferPin()).\n>>>\n>>> So + 1 to consider this as a bug and for the way the patch proposes \n>>> to fix it.\n>>\n>> Thanks Victor and Bertrand for agreeing!\n>> Attached is the updated version of the patch.\n>\n> Attached is v3 of the patch. Could you review this version?\n>\n> While the startup process is waiting for recovery conflict on buffer pin,\n> it repeats sending the request for deadlock check to all the backends\n> every deadlock_timeout. This may increase the workload in the startup\n> process and backends, but since this is the original behavior, the patch\n> doesn't change that. 
\n\nAgree.\n\nIMHO that may need to be rethought (as we are already in a conflict \nsituation, I am tempted to say that the less that is being done the \nbetter), but I think that's outside the scope of this patch.\n\n> Also in practice this may not be so harmful because\n> the period that the buffer is kept pinned is basically not so long.\n\nAgreed that the chances of being in this mode for a \"long\" duration are \nlower (as compared to the lock conflict).\n\n>\n> On the other hand, IMO we should avoid this issue while the startup\n> process is waiting for recovery conflict on locks since it can take\n> a long time to release the locks. So the patch tries to fix it.\nAgree\n>\n> Or I'm overthinking this? If this doesn't need to be handled,\n> the patch can be simplified more. Thought?\n\nI do think that's good to handle it that way for the lock conflict: the \nless work that is done the better (especially in a conflict situation).\n\nThe patch does look good to me.\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 17:43:12 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/17 2:15, Fujii Masao wrote:\n> >\n> >\n> > On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n> >> Hi,\n> >>\n> >> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n> >>>\n> >>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>>\n> >>>\n> >>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n> >>>\n> >>> After doing this procedure, you can see the startup process and backend\n> >>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n> >>> even after deadlock_timeout passes.\n> >>>\n> >>> This seems a bug to me.\n> >>>\n> >> +1\n> >>\n> >>>\n> >>> > * Deadlocks involving the Startup process and an ordinary backend process\n> >>> > * will be detected by the deadlock detector within the ordinary backend.\n> >>>\n> >>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n> >>> the startup process calls when recovery conflict on lock happens doesn't\n> >>> take care of deadlock case at all. 
You can see this fact by reading the above\n> >>> source code comment for ResolveRecoveryConflictWithLock().\n> >>>\n> >>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n> >>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n> >>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n> >>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n> >>> the backend should check whether the deadlock actually happens or not.\n> >>> Attached is the POC patch implimenting this.\n> >>>\n> >> good catch!\n> >>\n> >> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n> >>\n> >> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n> >\n> > Thanks Victor and Bertrand for agreeing!\n> > Attached is the updated version of the patch.\n>\n> Attached is v3 of the patch. Could you review this version?\n>\n> While the startup process is waiting for recovery conflict on buffer pin,\n> it repeats sending the request for deadlock check to all the backends\n> every deadlock_timeout. This may increase the workload in the startup\n> process and backends, but since this is the original behavior, the patch\n> doesn't change that. Also in practice this may not be so harmful because\n> the period that the buffer is kept pinned is basically not so long.\n>\n\n@@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n */\n ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n\n+ if (got_standby_deadlock_timeout)\n+ {\n+ /*\n+ * Send out a request for hot-standby backends to check themselves for\n+ * deadlocks.\n+ *\n+ * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n+ * to be signaled by UnpinBuffer() again and send a request for\n+ * deadlocks check if deadlock_timeout happens. 
This causes the\n+ * request to continue to be sent every deadlock_timeout until the\n+ * buffer is unpinned or ltime is reached. This would increase the\n+ * workload in the startup process and backends. In practice it may\n+ * not be so harmful because the period that the buffer is kept pinned\n+ * is basically not so long. But we should fix this?\n+ */\n+ SendRecoveryConflictWithBufferPin(\n+\nPROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n+ got_standby_deadlock_timeout = false;\n+ }\n+\n\nSince SendRecoveryConflictWithBufferPin() sends the signal to all\nbackends, every backend that is waiting on a lock at ProcSleep() and not\nholding a buffer pin blocking the startup process will end up doing a\ndeadlock check, which seems expensive. What is worse is that the\ndeadlock will not be detected, because a deadlock involving a buffer\npin isn't detected by CheckDeadLock(). I thought we could replace\nPROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\nPROCSIG_RECOVERY_CONFLICT_BUFFERPIN, but it’s not good because a\nbackend that has a buffer pin blocking the startup process and is not\nwaiting on a lock is also canceled after deadlock_timeout. We can have\nthe backend return from RecoveryConflictInterrupt() when it receives\nPROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\nbut that’s also not good because then we cannot cancel, after\nmax_standby_streaming_delay, a backend that has a buffer pin blocking\nthe startup process. So I wonder if we can have a new signal. When the\nbackend receives this signal, it returns from RecoveryConflictInterrupt()\nwithout a deadlock check either if it’s not waiting on any lock or if it\ndoesn’t have a buffer pin blocking the startup process. Otherwise it's\ncancelled.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 22 Dec 2020 10:25:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "\n\nOn 2020/12/19 1:43, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 12/18/20 10:35 AM, Fujii Masao wrote:\n>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>\n>>\n>>\n>> On 2020/12/17 2:15, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>>>> Hi,\n>>>>\n>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>>>\n>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>>\n>>>>>\n>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>>>\n>>>>> After doing this procedure, you can see the startup process and backend\n>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>>>>> even after deadlock_timeout passes.\n>>>>>\n>>>>> This seems a bug to me.\n>>>>>\n>>>> +1\n>>>>\n>>>>>\n>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n>>>>>\n>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>>>>> the startup process calls when recovery conflict on lock happens doesn't\n>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n>>>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>>>\n>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>>>> the backend should check whether the deadlock actually happens or not.\n>>>>> Attached is the POC patch implimenting this.\n>>>>>\n>>>> good catch!\n>>>>\n>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n>>>>\n>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n>>>\n>>> Thanks Victor and Bertrand for agreeing!\n>>> Attached is the updated version of the patch.\n>>\n>> Attached is v3 of the patch. Could you review this version?\n>>\n>> While the startup process is waiting for recovery conflict on buffer pin,\n>> it repeats sending the request for deadlock check to all the backends\n>> every deadlock_timeout. This may increase the workload in the startup\n>> process and backends, but since this is the original behavior, the patch\n>> doesn't change that. \n> \n> Agree.\n> \n> IMHO that may need to be rethink (as we are already in a conflict situation, i am tempted to say that the less is being done the better it is), but i think that's outside the scope of this patch.\n\nYes, I agree that's better. 
I think that we should improve that as a separate\npatch, only for the master branch, after first fixing the bug and\nback-patching it.\n\n\n> \n>> Also in practice this may not be so harmful because\n>> the period that the buffer is kept pinned is basically not so long.\n> \n> Agree that chances are less to be in this mode for a \"long\" duration (as compare to the lock conflict).\n> \n>>\n>> On the other hand, IMO we should avoid this issue while the startup\n>> process is waiting for recovery conflict on locks since it can take\n>> a long time to release the locks. So the patch tries to fix it.\n> Agree\n>>\n>> Or I'm overthinking this? If this doesn't need to be handled,\n>> the patch can be simplified more. Thought?\n> \n> I do think that's good to handle it that way for the lock conflict: the less work is done the better it is (specially in a conflict situation).\n> \n> The patch does look good to me.\n\nThanks for the review!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 22 Dec 2020 20:41:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "\n\nOn 2020/12/22 10:25, Masahiko Sawada wrote:\n> On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/12/17 2:15, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>>>> Hi,\n>>>>\n>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>>>\n>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>>\n>>>>>\n>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>>>\n>>>>> After doing this procedure, you can see the startup process and backend\n>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>>>>> even after deadlock_timeout passes.\n>>>>>\n>>>>> This seems a bug to me.\n>>>>>\n>>>> +1\n>>>>\n>>>>>\n>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n>>>>>\n>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>>>>> the startup process calls when recovery conflict on lock happens doesn't\n>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n>>>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>>>\n>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>>>> the backend should check whether the deadlock actually happens or not.\n>>>>> Attached is the POC patch implimenting this.\n>>>>>\n>>>> good catch!\n>>>>\n>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n>>>>\n>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n>>>\n>>> Thanks Victor and Bertrand for agreeing!\n>>> Attached is the updated version of the patch.\n>>\n>> Attached is v3 of the patch. Could you review this version?\n>>\n>> While the startup process is waiting for recovery conflict on buffer pin,\n>> it repeats sending the request for deadlock check to all the backends\n>> every deadlock_timeout. This may increase the workload in the startup\n>> process and backends, but since this is the original behavior, the patch\n>> doesn't change that. Also in practice this may not be so harmful because\n>> the period that the buffer is kept pinned is basically not so long.\n>>\n> \n> @@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n> */\n> ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> \n> + if (got_standby_deadlock_timeout)\n> + {\n> + /*\n> + * Send out a request for hot-standby backends to check themselves for\n> + * deadlocks.\n> + *\n> + * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n> + * to be signaled by UnpinBuffer() again and send a request for\n> + * deadlocks check if deadlock_timeout happens. 
This causes the\n> + * request to continue to be sent every deadlock_timeout until the\n> + * buffer is unpinned or ltime is reached. This would increase the\n> + * workload in the startup process and backends. In practice it may\n> + * not be so harmful because the period that the buffer is kept pinned\n> + * is basically no so long. But we should fix this?\n> + */\n> + SendRecoveryConflictWithBufferPin(\n> +\n> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n> + got_standby_deadlock_timeout = false;\n> + }\n> +\n> \n> Since SendRecoveryConflictWithBufferPin() sends the signal to all\n> backends every backend who is waiting on a lock at ProcSleep() and not\n> holding a buffer pin blocking the startup process will end up doing a\n> deadlock check, which seems expensive. What is worse is that the\n> deadlock will not be detected because the deadlock involving a buffer\n> pin isn't detected by CheckDeadLock(). I thought we can replace\n> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\n> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN but it’s not good because the\n> backend who has a buffer pin blocking the startup process and not\n> waiting on a lock is also canceled after deadlock_timeout. We can have\n> the backend return from RecoveryConflictInterrupt() when it received\n> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\n> but it’s also not good because we cannot cancel the backend after\n> max_standby_streaming_delay that has a buffer pin blocking the startup\n> process. So I wonder if we can have a new signal. When the backend\n> received this signal it returns from RecoveryConflictInterrupt()\n> without deadlock check either if it’s not waiting on any lock or if it\n> doesn’t have a buffer pin blocking the startup process. Otherwise it's\n> cancelled.\n\nThanks for pointing out that issue! Using new signal is an idea. 
Another idea\nis to make a backend skip checking the deadlock if GetStartupBufferPinWaitBufId()\nreturns -1, i.e., the startup process is not waiting for buffer pin. So,\nwhat I'm thinking is:\n\nIn RecoveryConflictInterrupt(), when a backend receives\nPROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK,\n\n1. If a backend isn't waiting for a lock, it does nothing.\n2. If a backend is waiting for a lock and also holding a buffer pin that\n   delays recovery, it may be canceled.\n3. If a backend is waiting for a lock and the startup process is not waiting\n   for buffer pin (i.e., the startup process is also waiting for a lock),\n   it checks for the deadlocks.\n4. If a backend is waiting for a lock and isn't holding a buffer pin that\n   delays recovery though the startup process is waiting for buffer pin,\n   it does nothing.\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 22 Dec 2020 20:42:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
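The four-case proposal in the message above can be condensed into a small decision table. The following is a standalone sketch of that scheme — hypothetical names, not the actual RecoveryConflictInterrupt() code — showing how a backend would react to PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK depending on its own state and on whether the startup process is waiting for a buffer pin:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible reactions of a backend receiving
 * PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK (simplified model). */
typedef enum
{
    REACT_IGNORE,            /* do nothing */
    REACT_MAYBE_CANCEL,      /* backend may be canceled */
    REACT_CHECK_DEADLOCK     /* run the deadlock detector */
} DeadlockSignalReaction;

/*
 * Hypothetical helper mirroring the proposed four cases:
 *   waiting_for_lock      - is this backend blocked in ProcSleep()?
 *   holds_conflicting_pin - does it hold a buffer pin that delays recovery?
 *   startup_waits_for_pin - is the startup process waiting for a buffer pin,
 *                           i.e. GetStartupBufferPinWaitBufId() != -1?
 */
static DeadlockSignalReaction
react_to_deadlock_signal(bool waiting_for_lock,
                         bool holds_conflicting_pin,
                         bool startup_waits_for_pin)
{
    if (!waiting_for_lock)
        return REACT_IGNORE;           /* case 1 */
    if (holds_conflicting_pin)
        return REACT_MAYBE_CANCEL;     /* case 2 */
    if (!startup_waits_for_pin)
        return REACT_CHECK_DEADLOCK;   /* case 3: startup also waits for a lock */
    return REACT_IGNORE;               /* case 4 */
}
```

Only case 3 runs the (expensive) deadlock check, which is the point of the proposal: backends that cannot possibly be part of the deadlock stay idle.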
{
"msg_contents": "On 2020/12/22 20:42, Fujii Masao wrote:\n> \n> \n> On 2020/12/22 10:25, Masahiko Sawada wrote:\n>> On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/12/17 2:15, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>>>>> Hi,\n>>>>>\n>>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>>>>\n>>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>>>\n>>>>>>\n>>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>>>>\n>>>>>> After doing this procedure, you can see the startup process and backend\n>>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>>>>>> even after deadlock_timeout passes.\n>>>>>>\n>>>>>> This seems a bug to me.\n>>>>>>\n>>>>> +1\n>>>>>\n>>>>>>\n>>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n>>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n>>>>>>\n>>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>>>>>> the startup process calls when recovery conflict on lock happens doesn't\n>>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n>>>>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>>>>\n>>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>>>>> the backend should check whether the deadlock actually happens or not.\n>>>>>> Attached is the POC patch implimenting this.\n>>>>>>\n>>>>> good catch!\n>>>>>\n>>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n>>>>>\n>>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n>>>>\n>>>> Thanks Victor and Bertrand for agreeing!\n>>>> Attached is the updated version of the patch.\n>>>\n>>> Attached is v3 of the patch. Could you review this version?\n>>>\n>>> While the startup process is waiting for recovery conflict on buffer pin,\n>>> it repeats sending the request for deadlock check to all the backends\n>>> every deadlock_timeout. This may increase the workload in the startup\n>>> process and backends, but since this is the original behavior, the patch\n>>> doesn't change that. 
Also in practice this may not be so harmful because\n>>> the period that the buffer is kept pinned is basically not so long.\n>>>\n>>\n>> @@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n>> */\n>> ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>>\n>> + if (got_standby_deadlock_timeout)\n>> + {\n>> + /*\n>> + * Send out a request for hot-standby backends to check themselves for\n>> + * deadlocks.\n>> + *\n>> + * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n>> + * to be signaled by UnpinBuffer() again and send a request for\n>> + * deadlocks check if deadlock_timeout happens. This causes the\n>> + * request to continue to be sent every deadlock_timeout until the\n>> + * buffer is unpinned or ltime is reached. This would increase the\n>> + * workload in the startup process and backends. In practice it may\n>> + * not be so harmful because the period that the buffer is kept pinned\n>> + * is basically no so long. But we should fix this?\n>> + */\n>> + SendRecoveryConflictWithBufferPin(\n>> +\n>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n>> + got_standby_deadlock_timeout = false;\n>> + }\n>> +\n>>\n>> Since SendRecoveryConflictWithBufferPin() sends the signal to all\n>> backends every backend who is waiting on a lock at ProcSleep() and not\n>> holding a buffer pin blocking the startup process will end up doing a\n>> deadlock check, which seems expensive. What is worse is that the\n>> deadlock will not be detected because the deadlock involving a buffer\n>> pin isn't detected by CheckDeadLock(). I thought we can replace\n>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\n>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN but it’s not good because the\n>> backend who has a buffer pin blocking the startup process and not\n>> waiting on a lock is also canceled after deadlock_timeout. 
We can have\n>> the backend return from RecoveryConflictInterrupt() when it received\n>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\n>> but it’s also not good because we cannot cancel the backend after\n>> max_standby_streaming_delay that has a buffer pin blocking the startup\n>> process. So I wonder if we can have a new signal. When the backend\n>> received this signal it returns from RecoveryConflictInterrupt()\n>> without deadlock check either if it’s not waiting on any lock or if it\n>> doesn’t have a buffer pin blocking the startup process. Otherwise it's\n>> cancelled.\n> \n> Thanks for pointing out that issue! Using new signal is an idea. Another idea\n> is to make a backend skip check the deadlock if GetStartupBufferPinWaitBufId()\n> returns -1, i.e., the startup process is not waiting for buffer pin. So,\n> what I'm thinkins is;\n> \n> In RecoveryConflictInterrupt(), when a backend receive\n> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK,\n> \n> 1. If a backend isn't waiting for a lock, it does nothing .\n> 2. If a backend is waiting for a lock and also holding a buffer pin that\n> delays recovery, it may be canceled.\n> 3. If a backend is waiting for a lock and the startup process is not waiting\n> for buffer pin (i.e., the startup process is also waiting for a lock),\n> it checks for the deadlocks.\n> 4. If a backend is waiting for a lock and isn't holding a buffer pin that\n> delays recovery though the startup process is waiting for buffer pin,\n> it does nothing.\n\nI implemented this. Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 22 Dec 2020 23:58:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 11:58 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/22 20:42, Fujii Masao wrote:\n> >\n> >\n> > On 2020/12/22 10:25, Masahiko Sawada wrote:\n> >> On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/12/17 2:15, Fujii Masao wrote:\n> >>>>\n> >>>>\n> >>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n> >>>>> Hi,\n> >>>>>\n> >>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n> >>>>>>\n> >>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>>>>>\n> >>>>>>\n> >>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n> >>>>>>\n> >>>>>> After doing this procedure, you can see the startup process and backend\n> >>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n> >>>>>> even after deadlock_timeout passes.\n> >>>>>>\n> >>>>>> This seems a bug to me.\n> >>>>>>\n> >>>>> +1\n> >>>>>\n> >>>>>>\n> >>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n> >>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n> >>>>>>\n> >>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n> >>>>>> the startup process calls when recovery conflict on lock happens doesn't\n> >>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n> >>>>>> source code comment for ResolveRecoveryConflictWithLock().\n> >>>>>>\n> >>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n> >>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n> >>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n> >>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n> >>>>>> the backend should check whether the deadlock actually happens or not.\n> >>>>>> Attached is the POC patch implimenting this.\n> >>>>>>\n> >>>>> good catch!\n> >>>>>\n> >>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n> >>>>>\n> >>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n> >>>>\n> >>>> Thanks Victor and Bertrand for agreeing!\n> >>>> Attached is the updated version of the patch.\n> >>>\n> >>> Attached is v3 of the patch. Could you review this version?\n> >>>\n> >>> While the startup process is waiting for recovery conflict on buffer pin,\n> >>> it repeats sending the request for deadlock check to all the backends\n> >>> every deadlock_timeout. This may increase the workload in the startup\n> >>> process and backends, but since this is the original behavior, the patch\n> >>> doesn't change that. 
Also in practice this may not be so harmful because\n> >>> the period that the buffer is kept pinned is basically not so long.\n> >>>\n> >>\n> >> @@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n> >> */\n> >> ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> >>\n> >> + if (got_standby_deadlock_timeout)\n> >> + {\n> >> + /*\n> >> + * Send out a request for hot-standby backends to check themselves for\n> >> + * deadlocks.\n> >> + *\n> >> + * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n> >> + * to be signaled by UnpinBuffer() again and send a request for\n> >> + * deadlocks check if deadlock_timeout happens. This causes the\n> >> + * request to continue to be sent every deadlock_timeout until the\n> >> + * buffer is unpinned or ltime is reached. This would increase the\n> >> + * workload in the startup process and backends. In practice it may\n> >> + * not be so harmful because the period that the buffer is kept pinned\n> >> + * is basically no so long. But we should fix this?\n> >> + */\n> >> + SendRecoveryConflictWithBufferPin(\n> >> +\n> >> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n> >> + got_standby_deadlock_timeout = false;\n> >> + }\n> >> +\n> >>\n> >> Since SendRecoveryConflictWithBufferPin() sends the signal to all\n> >> backends every backend who is waiting on a lock at ProcSleep() and not\n> >> holding a buffer pin blocking the startup process will end up doing a\n> >> deadlock check, which seems expensive. What is worse is that the\n> >> deadlock will not be detected because the deadlock involving a buffer\n> >> pin isn't detected by CheckDeadLock(). I thought we can replace\n> >> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\n> >> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN but it’s not good because the\n> >> backend who has a buffer pin blocking the startup process and not\n> >> waiting on a lock is also canceled after deadlock_timeout. 
We can have\n> >> the backend return from RecoveryConflictInterrupt() when it received\n> >> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\n> >> but it’s also not good because we cannot cancel the backend after\n> >> max_standby_streaming_delay that has a buffer pin blocking the startup\n> >> process. So I wonder if we can have a new signal. When the backend\n> >> received this signal it returns from RecoveryConflictInterrupt()\n> >> without deadlock check either if it’s not waiting on any lock or if it\n> >> doesn’t have a buffer pin blocking the startup process. Otherwise it's\n> >> cancelled.\n> >\n> > Thanks for pointing out that issue! Using new signal is an idea. Another idea\n> > is to make a backend skip check the deadlock if GetStartupBufferPinWaitBufId()\n> > returns -1, i.e., the startup process is not waiting for buffer pin. So,\n> > what I'm thinkins is;\n> >\n> > In RecoveryConflictInterrupt(), when a backend receive\n> > PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK,\n> >\n> > 1. If a backend isn't waiting for a lock, it does nothing .\n> > 2. If a backend is waiting for a lock and also holding a buffer pin that\n> > delays recovery, it may be canceled.\n> > 3. If a backend is waiting for a lock and the startup process is not waiting\n> > for buffer pin (i.e., the startup process is also waiting for a lock),\n> > it checks for the deadlocks.\n> > 4. If a backend is waiting for a lock and isn't holding a buffer pin that\n> > delays recovery though the startup process is waiting for buffer pin,\n> > it does nothing.\n>\n\nGood idea! It could still happen that if the startup process sets\nstartupBufferPinWaitBufId to -1 after sending the signal and before\nthe backend checks it, the backend will end up doing an unmeaningful\ndeadlock check. 
But the likelihood would be low in practice.\n\nI have two small comments on ResolveRecoveryConflictWithBufferPin() in\nthe v4 patch:\n\nThe code currently has three branches as follow:\n\n if (ltime == 0)\n {\n enable a timeout for deadlock;\n }\n else if (GetCurrentTimestamp() >= ltime)\n {\n send recovery conflict signal;\n }\n else\n {\n enable two timeouts: ltime and deadlock\n }\n\nI think we can rearrange the code similar to the changes you made on\nResolveRecoveryConflictWithLock():\n\n if (GetCurrentTimestamp() >= ltime && ltime != 0)\n {\n Resolve recovery conflict;\n }\n else\n {\n Enable one or two timeouts: ltime and deadlock\n }\n\nIt's more consistent with ResolveRecoveryConflictWithLock(). And\ncurrently the patch doesn't reset got_standby_deadlock_timeout in\n(ltime == 0) case but it will also be resolved by this rearrangement.\n\n---\nIf we always reset got_standby_deadlock_timeout before waiting it's\nnot necessary but we might want to clear got_standby_deadlock_timeout\nalso after disabling all timeouts to ensure that it's cleared at the\nend of the function. In ResolveRecoveryConflictWithLock() we clear\nboth got_standby_lock_timeout and got_standby_deadlock_timeout after\ndisabling all timeouts but we don't do that in\nResolveRecoveryConflictWithBufferPin().\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 23 Dec 2020 19:28:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
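The rearrangement Sawada-san suggests replaces the three-branch structure with a single up-front limit-time check, treating ltime == 0 as "no limit". A minimal model of the resulting control flow (hypothetical names; TimestampTz simplified to a plain integer, not the PostgreSQL type) could look like:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t TimestampTz;   /* stand-in for PostgreSQL's TimestampTz */

typedef enum
{
    ACT_RESOLVE_NOW,           /* limit reached: send the conflict signal */
    ACT_WAIT_DEADLOCK_ONLY,    /* ltime == 0: enable only deadlock_timeout */
    ACT_WAIT_BOTH_TIMEOUTS     /* enable both the ltime and deadlock timeouts */
} BufferPinConflictAction;

/*
 * Sketch of the proposed structure for ResolveRecoveryConflictWithBufferPin():
 * one check of "GetCurrentTimestamp() >= ltime && ltime != 0" up front,
 * mirroring ResolveRecoveryConflictWithLock(); otherwise enable one or two
 * timeouts and wait.
 */
static BufferPinConflictAction
choose_bufferpin_action(TimestampTz now, TimestampTz ltime)
{
    if (ltime != 0 && now >= ltime)
        return ACT_RESOLVE_NOW;

    return (ltime == 0) ? ACT_WAIT_DEADLOCK_ONLY : ACT_WAIT_BOTH_TIMEOUTS;
}
```

Because the ltime == 0 case now falls into the common "wait" branch, a single flag reset there also covers the case that previously left got_standby_deadlock_timeout untouched.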
{
"msg_contents": "On 2020/12/23 19:28, Masahiko Sawada wrote:\n> On Tue, Dec 22, 2020 at 11:58 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/12/22 20:42, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/12/22 10:25, Masahiko Sawada wrote:\n>>>> On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/12/17 2:15, Fujii Masao wrote:\n>>>>>>\n>>>>>>\n>>>>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n>>>>>>>>\n>>>>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n>>>>>>>>\n>>>>>>>> After doing this procedure, you can see the startup process and backend\n>>>>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n>>>>>>>> even after deadlock_timeout passes.\n>>>>>>>>\n>>>>>>>> This seems a bug to me.\n>>>>>>>>\n>>>>>>> +1\n>>>>>>>\n>>>>>>>>\n>>>>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n>>>>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n>>>>>>>>\n>>>>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n>>>>>>>> the startup process calls when recovery conflict on lock happens doesn't\n>>>>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n>>>>>>>> source code comment for ResolveRecoveryConflictWithLock().\n>>>>>>>>\n>>>>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n>>>>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n>>>>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n>>>>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n>>>>>>>> the backend should check whether the deadlock actually happens or not.\n>>>>>>>> Attached is the POC patch implimenting this.\n>>>>>>>>\n>>>>>>> good catch!\n>>>>>>>\n>>>>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n>>>>>>>\n>>>>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n>>>>>>\n>>>>>> Thanks Victor and Bertrand for agreeing!\n>>>>>> Attached is the updated version of the patch.\n>>>>>\n>>>>> Attached is v3 of the patch. Could you review this version?\n>>>>>\n>>>>> While the startup process is waiting for recovery conflict on buffer pin,\n>>>>> it repeats sending the request for deadlock check to all the backends\n>>>>> every deadlock_timeout. This may increase the workload in the startup\n>>>>> process and backends, but since this is the original behavior, the patch\n>>>>> doesn't change that. 
Also in practice this may not be so harmful because\n>>>>> the period that the buffer is kept pinned is basically not so long.\n>>>>>\n>>>>\n>>>> @@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n>>>> */\n>>>> ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>>>>\n>>>> + if (got_standby_deadlock_timeout)\n>>>> + {\n>>>> + /*\n>>>> + * Send out a request for hot-standby backends to check themselves for\n>>>> + * deadlocks.\n>>>> + *\n>>>> + * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n>>>> + * to be signaled by UnpinBuffer() again and send a request for\n>>>> + * deadlocks check if deadlock_timeout happens. This causes the\n>>>> + * request to continue to be sent every deadlock_timeout until the\n>>>> + * buffer is unpinned or ltime is reached. This would increase the\n>>>> + * workload in the startup process and backends. In practice it may\n>>>> + * not be so harmful because the period that the buffer is kept pinned\n>>>> + * is basically no so long. But we should fix this?\n>>>> + */\n>>>> + SendRecoveryConflictWithBufferPin(\n>>>> +\n>>>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n>>>> + got_standby_deadlock_timeout = false;\n>>>> + }\n>>>> +\n>>>>\n>>>> Since SendRecoveryConflictWithBufferPin() sends the signal to all\n>>>> backends every backend who is waiting on a lock at ProcSleep() and not\n>>>> holding a buffer pin blocking the startup process will end up doing a\n>>>> deadlock check, which seems expensive. What is worse is that the\n>>>> deadlock will not be detected because the deadlock involving a buffer\n>>>> pin isn't detected by CheckDeadLock(). I thought we can replace\n>>>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\n>>>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN but it’s not good because the\n>>>> backend who has a buffer pin blocking the startup process and not\n>>>> waiting on a lock is also canceled after deadlock_timeout. 
We can have\n>>>> the backend return from RecoveryConflictInterrupt() when it received\n>>>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\n>>>> but it’s also not good because we cannot cancel the backend after\n>>>> max_standby_streaming_delay that has a buffer pin blocking the startup\n>>>> process. So I wonder if we can have a new signal. When the backend\n>>>> received this signal it returns from RecoveryConflictInterrupt()\n>>>> without deadlock check either if it’s not waiting on any lock or if it\n>>>> doesn’t have a buffer pin blocking the startup process. Otherwise it's\n>>>> cancelled.\n>>>\n>>> Thanks for pointing out that issue! Using new signal is an idea. Another idea\n>>> is to make a backend skip check the deadlock if GetStartupBufferPinWaitBufId()\n>>> returns -1, i.e., the startup process is not waiting for buffer pin. So,\n>>> what I'm thinkins is;\n>>>\n>>> In RecoveryConflictInterrupt(), when a backend receive\n>>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK,\n>>>\n>>> 1. If a backend isn't waiting for a lock, it does nothing .\n>>> 2. If a backend is waiting for a lock and also holding a buffer pin that\n>>> delays recovery, it may be canceled.\n>>> 3. If a backend is waiting for a lock and the startup process is not waiting\n>>> for buffer pin (i.e., the startup process is also waiting for a lock),\n>>> it checks for the deadlocks.\n>>> 4. If a backend is waiting for a lock and isn't holding a buffer pin that\n>>> delays recovery though the startup process is waiting for buffer pin,\n>>> it does nothing.\n>>\n> \n> Good idea! It could still happen that if the startup process sets\n> startupBufferPinWaitBufId to -1 after sending the signal and before\n> the backend checks it, the backend will end up doing an unmeaningful\n> deadlock check. 
But the likelihood would be low in practice.\n> \n> I have two small comments on ResolveRecoveryConflictWithBufferPin() in\n> the v4 patch:\n> \n> The code currently has three branches as follow:\n> \n> if (ltime == 0)\n> {\n> enable a timeout for deadlock;\n> }\n> else if (GetCurrentTimestamp() >= ltime)\n> {\n> send recovery conflict signal;\n> }\n> else\n> {\n> enable two timeouts: ltime and deadlock\n> }\n> \n> I think we can rearrange the code similar to the changes you made on\n> ResolveRecoveryConflictWithLock():\n> \n> if (GetCurrentTimestamp() >= ltime && ltime != 0)\n> {\n> Resolve recovery conflict;\n> }\n> else\n> {\n> Enable one or two timeouts: ltime and deadlock\n> }\n> \n> It's more consistent with ResolveRecoveryConflictWithLock(). And\n> currently the patch doesn't reset got_standby_deadlock_timeout in\n> (ltime == 0) case but it will also be resolved by this rearrangement.\n\nI didn't want to change the code structure as much as possible because\nthe patch needs to be back-patched. But it's good idea to make the code\nstructures in ResolveRecoveryConflictWithLock() and ...WithBufferPin() similar,\nto make the code simpler and easier-to-read. So I agree with you. Attached\nis the updated of the patch. What about this version?\n\n> \n> ---\n> If we always reset got_standby_deadlock_timeout before waiting it's\n> not necessary but we might want to clear got_standby_deadlock_timeout\n> also after disabling all timeouts to ensure that it's cleared at the\n> end of the function. In ResolveRecoveryConflictWithLock() we clear\n> both got_standby_lock_timeout and got_standby_deadlock_timeout after\n> disabling all timeouts but we don't do that in\n> ResolveRecoveryConflictWithBufferPin().\n\nI agree that it's better to reset got_standby_deadlock_timeout after\nall the timeouts are disabled. So I changed the patch that way. OTOH\ngot_standby_lock_timeout doesn't need to be reset because it's never\nenabled in ResolveRecoveryConflictWithBufferPin(). 
No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 23 Dec 2020 21:42:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
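The flag discipline discussed above — consume got_standby_deadlock_timeout each time the deadlock-check request is sent, and clear it once more after all timeouts are disabled so a late-firing timer cannot leak into the next call — can be illustrated with a small simulation (hypothetical names; this models the loop shape, not the real startup-process code):

```c
#include <assert.h>
#include <stdbool.h>

static bool got_standby_deadlock_timeout = false;

/*
 * Simulate the buffer-pin wait loop: each wakeup corresponds to
 * ProcWaitForSignal() returning, and timeout_fired[i] says whether the
 * deadlock_timeout handler set the flag before that wakeup.  Returns the
 * number of deadlock-check requests "sent".
 */
static int
simulate_bufferpin_wait(const bool *timeout_fired, int nwakeups)
{
    int requests = 0;

    for (int i = 0; i < nwakeups; i++)
    {
        if (timeout_fired[i])
            got_standby_deadlock_timeout = true;

        if (got_standby_deadlock_timeout)
        {
            requests++;   /* SendRecoveryConflictWithBufferPin(...) */
            got_standby_deadlock_timeout = false;
        }
    }

    /*
     * After disable_all_timeouts(): clear the flag so a timeout that fired
     * just before disabling cannot affect the next invocation.
     */
    got_standby_deadlock_timeout = false;

    return requests;
}
```

Note that got_standby_lock_timeout has no counterpart here, matching the observation that it is never enabled in ResolveRecoveryConflictWithBufferPin() and so needs no reset there.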
{
"msg_contents": "On Wed, Dec 23, 2020 at 9:42 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/23 19:28, Masahiko Sawada wrote:\n> > On Tue, Dec 22, 2020 at 11:58 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/12/22 20:42, Fujii Masao wrote:\n> >>>\n> >>>\n> >>> On 2020/12/22 10:25, Masahiko Sawada wrote:\n> >>>> On Fri, Dec 18, 2020 at 6:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/12/17 2:15, Fujii Masao wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/12/16 23:28, Drouvot, Bertrand wrote:\n> >>>>>>> Hi,\n> >>>>>>>\n> >>>>>>> On 12/16/20 2:36 PM, Victor Yegorov wrote:\n> >>>>>>>>\n> >>>>>>>> *CAUTION*: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> ср, 16 дек. 2020 г. в 13:49, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n> >>>>>>>>\n> >>>>>>>> After doing this procedure, you can see the startup process and backend\n> >>>>>>>> wait for the table lock each other, i.e., deadlock. But this deadlock remains\n> >>>>>>>> even after deadlock_timeout passes.\n> >>>>>>>>\n> >>>>>>>> This seems a bug to me.\n> >>>>>>>>\n> >>>>>>> +1\n> >>>>>>>\n> >>>>>>>>\n> >>>>>>>> > * Deadlocks involving the Startup process and an ordinary backend process\n> >>>>>>>> > * will be detected by the deadlock detector within the ordinary backend.\n> >>>>>>>>\n> >>>>>>>> The cause of this issue seems that ResolveRecoveryConflictWithLock() that\n> >>>>>>>> the startup process calls when recovery conflict on lock happens doesn't\n> >>>>>>>> take care of deadlock case at all. 
You can see this fact by reading the above\n> >>>>>>>> source code comment for ResolveRecoveryConflictWithLock().\n> >>>>>>>>\n> >>>>>>>> To fix this issue, I think that we should enable STANDBY_DEADLOCK_TIMEOUT\n> >>>>>>>> timer in ResolveRecoveryConflictWithLock() so that the startup process can\n> >>>>>>>> send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal to the backend.\n> >>>>>>>> Then if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK signal arrives,\n> >>>>>>>> the backend should check whether the deadlock actually happens or not.\n> >>>>>>>> Attached is the POC patch implimenting this.\n> >>>>>>>>\n> >>>>>>> good catch!\n> >>>>>>>\n> >>>>>>> I don't see any obvious reasons why the STANDBY_DEADLOCK_TIMEOUT shouldn't be set in ResolveRecoveryConflictWithLock() too (it is already set in ResolveRecoveryConflictWithBufferPin()).\n> >>>>>>>\n> >>>>>>> So + 1 to consider this as a bug and for the way the patch proposes to fix it.\n> >>>>>>\n> >>>>>> Thanks Victor and Bertrand for agreeing!\n> >>>>>> Attached is the updated version of the patch.\n> >>>>>\n> >>>>> Attached is v3 of the patch. Could you review this version?\n> >>>>>\n> >>>>> While the startup process is waiting for recovery conflict on buffer pin,\n> >>>>> it repeats sending the request for deadlock check to all the backends\n> >>>>> every deadlock_timeout. This may increase the workload in the startup\n> >>>>> process and backends, but since this is the original behavior, the patch\n> >>>>> doesn't change that. 
Also in practice this may not be so harmful because\n> >>>>> the period that the buffer is kept pinned is basically not so long.\n> >>>>>\n> >>>>\n> >>>> @@ -529,6 +603,26 @@ ResolveRecoveryConflictWithBufferPin(void)\n> >>>> */\n> >>>> ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> >>>>\n> >>>> + if (got_standby_deadlock_timeout)\n> >>>> + {\n> >>>> + /*\n> >>>> + * Send out a request for hot-standby backends to check themselves for\n> >>>> + * deadlocks.\n> >>>> + *\n> >>>> + * XXX The subsequent ResolveRecoveryConflictWithBufferPin() will wait\n> >>>> + * to be signaled by UnpinBuffer() again and send a request for\n> >>>> + * deadlocks check if deadlock_timeout happens. This causes the\n> >>>> + * request to continue to be sent every deadlock_timeout until the\n> >>>> + * buffer is unpinned or ltime is reached. This would increase the\n> >>>> + * workload in the startup process and backends. In practice it may\n> >>>> + * not be so harmful because the period that the buffer is kept pinned\n> >>>> + * is basically no so long. But we should fix this?\n> >>>> + */\n> >>>> + SendRecoveryConflictWithBufferPin(\n> >>>> +\n> >>>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n> >>>> + got_standby_deadlock_timeout = false;\n> >>>> + }\n> >>>> +\n> >>>>\n> >>>> Since SendRecoveryConflictWithBufferPin() sends the signal to all\n> >>>> backends every backend who is waiting on a lock at ProcSleep() and not\n> >>>> holding a buffer pin blocking the startup process will end up doing a\n> >>>> deadlock check, which seems expensive. What is worse is that the\n> >>>> deadlock will not be detected because the deadlock involving a buffer\n> >>>> pin isn't detected by CheckDeadLock(). I thought we can replace\n> >>>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK with\n> >>>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN but it’s not good because the\n> >>>> backend who has a buffer pin blocking the startup process and not\n> >>>> waiting on a lock is also canceled after deadlock_timeout. 
We can have\n> >>>> the backend return from RecoveryConflictInterrupt() when it received\n> >>>> PROCSIG_RECOVERY_CONFLICT_BUFFERPIN and is not waiting on any lock,\n> >>>> but it’s also not good because we cannot cancel the backend after\n> >>>> max_standby_streaming_delay that has a buffer pin blocking the startup\n> >>>> process. So I wonder if we can have a new signal. When the backend\n> >>>> received this signal it returns from RecoveryConflictInterrupt()\n> >>>> without deadlock check either if it’s not waiting on any lock or if it\n> >>>> doesn’t have a buffer pin blocking the startup process. Otherwise it's\n> >>>> cancelled.\n> >>>\n> >>> Thanks for pointing out that issue! Using new signal is an idea. Another idea\n> >>> is to make a backend skip check the deadlock if GetStartupBufferPinWaitBufId()\n> >>> returns -1, i.e., the startup process is not waiting for buffer pin. So,\n> >>> what I'm thinkins is;\n> >>>\n> >>> In RecoveryConflictInterrupt(), when a backend receive\n> >>> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK,\n> >>>\n> >>> 1. If a backend isn't waiting for a lock, it does nothing .\n> >>> 2. If a backend is waiting for a lock and also holding a buffer pin that\n> >>> delays recovery, it may be canceled.\n> >>> 3. If a backend is waiting for a lock and the startup process is not waiting\n> >>> for buffer pin (i.e., the startup process is also waiting for a lock),\n> >>> it checks for the deadlocks.\n> >>> 4. If a backend is waiting for a lock and isn't holding a buffer pin that\n> >>> delays recovery though the startup process is waiting for buffer pin,\n> >>> it does nothing.\n> >>\n> >\n> > Good idea! It could still happen that if the startup process sets\n> > startupBufferPinWaitBufId to -1 after sending the signal and before\n> > the backend checks it, the backend will end up doing an unmeaningful\n> > deadlock check. 
But the likelihood would be low in practice.\n> >\n> > I have two small comments on ResolveRecoveryConflictWithBufferPin() in\n> > the v4 patch:\n> >\n> > The code currently has three branches as follow:\n> >\n> > if (ltime == 0)\n> > {\n> > enable a timeout for deadlock;\n> > }\n> > else if (GetCurrentTimestamp() >= ltime)\n> > {\n> > send recovery conflict signal;\n> > }\n> > else\n> > {\n> > enable two timeouts: ltime and deadlock\n> > }\n> >\n> > I think we can rearrange the code similar to the changes you made on\n> > ResolveRecoveryConflictWithLock():\n> >\n> > if (GetCurrentTimestamp() >= ltime && ltime != 0)\n> > {\n> > Resolve recovery conflict;\n> > }\n> > else\n> > {\n> > Enable one or two timeouts: ltime and deadlock\n> > }\n> >\n> > It's more consistent with ResolveRecoveryConflictWithLock(). And\n> > currently the patch doesn't reset got_standby_deadlock_timeout in\n> > (ltime == 0) case but it will also be resolved by this rearrangement.\n>\n> I didn't want to change the code structure as much as possible because\n> the patch needs to be back-patched. But it's good idea to make the code\n> structures in ResolveRecoveryConflictWithLock() and ...WithBufferPin() similar,\n> to make the code simpler and easier-to-read. So I agree with you. Attached\n> is the updated of the patch. What about this version?\n\nThank you for updating the patch! The patch looks good to me.\n\n>\n> >\n> > ---\n> > If we always reset got_standby_deadlock_timeout before waiting it's\n> > not necessary but we might want to clear got_standby_deadlock_timeout\n> > also after disabling all timeouts to ensure that it's cleared at the\n> > end of the function. In ResolveRecoveryConflictWithLock() we clear\n> > both got_standby_lock_timeout and got_standby_deadlock_timeout after\n> > disabling all timeouts but we don't do that in\n> > ResolveRecoveryConflictWithBufferPin().\n>\n> I agree that it's better to reset got_standby_deadlock_timeout after\n> all the timeouts are disabled. 
So I changed the patch that way. OTOH\n> got_standby_lock_timeout doesn't need to be reset because it's never\n> enabled in ResolveRecoveryConflictWithBufferPin(). No?\n\nYes, you're right. We need to clear only got_standby_deadlock_timeout.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Dec 2020 12:53:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> you. Attached\n> is the updated of the patch. What about this version?\n\nThe patch contains a hunk in the following structure.\n\n+\tif (got_standby_lock_timeout)\n+\t\tgoto cleanup;\n+\n+\tif (got_standby_deadlock_timeout)\n+\t{\n...\n+\t}\n+\n+cleanup:\n\nIt is eqivalent to\n\n+\tif (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n+\t{\n...\n+\t}\n\nIs there any reason for the goto?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Dec 2020 13:16:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "\n\nOn 2020/12/25 13:16, Kyotaro Horiguchi wrote:\n> At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> you. Attached\n>> is the updated of the patch. What about this version?\n> \n> The patch contains a hunk in the following structure.\n> \n> +\tif (got_standby_lock_timeout)\n> +\t\tgoto cleanup;\n> +\n> +\tif (got_standby_deadlock_timeout)\n> +\t{\n> ...\n> +\t}\n> +\n> +cleanup:\n> \n> It is eqivalent to\n> \n> +\tif (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n> +\t{\n> ...\n> +\t}\n> \n> Is there any reason for the goto?\n\nYes. That's because we have the following code using goto.\n\n+ /* Quick exit if there's no work to be done */\n+ if (!VirtualTransactionIdIsValid(*backends))\n+ goto cleanup;\n\n\nRegarding the back-patch, I was thinking to back-patch this to all the\nsupported branches. But I found that it's not easy to do that to v9.5\nbecause v9.5 doesn't have some infrastructure code that this bug fix\npatch depends on. So at least the commit 37c54863cf as the infrastructure\nalso needs to be back-patched to v9.5. And ISTM that some related commits\nf868a8143a and 8f0de712c3 need to be back-patched. Probably there might\nbe some other changes to be back-patched. Unfortunately they cannot be\napplied to v9.5 cleanly and additional changes are necessary.\n\nThis situation makes me feel that I'm inclined to skip the back-patch to v9.5.\nBecause the next minor version release is the final one for v9.5. So if we\nunexpectedly introduce the bug to v9.5 by the back-patch, there is no\nchance to fix that. OTOH, of course, if we don't do the back-patch, there is\nno chance to fix the deadlock detection bug since the final minor version\nfor v9.5 doesn't include that bug fix. 
But I'm afraid that the back-patch\nto v9.5 may give more risk than gain.\n\nThought?\n\nRegards, \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 5 Jan 2021 15:26:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "Hi,\n\nOn 1/5/21 7:26 AM, Fujii Masao wrote:\n> CAUTION: This email originated from outside of the organization. Do \n> not click links or open attachments unless you can confirm the sender \n> and know the content is safe.\n>\n>\n>\n> On 2020/12/25 13:16, Kyotaro Horiguchi wrote:\n>> At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote in\n>>> you. Attached\n>>> is the updated of the patch. What about this version?\n>>\n>> The patch contains a hunk in the following structure.\n>>\n>> + if (got_standby_lock_timeout)\n>> + goto cleanup;\n>> +\n>> + if (got_standby_deadlock_timeout)\n>> + {\n>> ...\n>> + }\n>> +\n>> +cleanup:\n>>\n>> It is eqivalent to\n>>\n>> + if (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n>> + {\n>> ...\n>> + }\n>>\n>> Is there any reason for the goto?\n>\n> Yes. That's because we have the following code using goto.\n>\n> + /* Quick exit if there's no work to be done */\n> + if (!VirtualTransactionIdIsValid(*backends))\n> + goto cleanup;\n>\n>\n> Regarding the back-patch, I was thinking to back-patch this to all the\n> supported branches. But I found that it's not easy to do that to v9.5\n> because v9.5 doesn't have some infrastructure code that this bug fix\n> patch depends on. So at least the commit 37c54863cf as the infrastructure\n> also needs to be back-patched to v9.5. And ISTM that some related commits\n> f868a8143a and 8f0de712c3 need to be back-patched. Probably there might\n> be some other changes to be back-patched. Unfortunately they cannot be\n> applied to v9.5 cleanly and additional changes are necessary.\n>\n> This situation makes me feel that I'm inclined to skip the back-patch \n> to v9.5.\n> Because the next minor version release is the final one for v9.5. So \n> if we\n> unexpectedly introduce the bug to v9.5 by the back-patch, there is no\n> chance to fix that. 
OTOH, of course, if we don't do the back-patch, \n> there is\n> no chance to fix the deadlock detection bug since the final minor version\n> for v9.5 doesn't include that bug fix. But I'm afraid that the back-patch\n> to v9.5 may give more risk than gain.\n>\n> Thought?\n\nReading your arguments, I am inclined to think the same.\n\nBertrand\n\n\n\n",
"msg_date": "Tue, 5 Jan 2021 10:13:58 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "вт, 5 янв. 2021 г. в 07:26, Fujii Masao <masao.fujii@oss.nttdata.com>:\n\n> This situation makes me feel that I'm inclined to skip the back-patch to\n> v9.5.\n> Because the next minor version release is the final one for v9.5. So if we\n> unexpectedly introduce the bug to v9.5 by the back-patch, there is no\n> chance to fix that. OTOH, of course, if we don't do the back-patch, there\n> is\n> no chance to fix the deadlock detection bug since the final minor version\n> for v9.5 doesn't include that bug fix. But I'm afraid that the back-patch\n> to v9.5 may give more risk than gain.\n>\n> Thought?\n>\n\nHonestly, I was thinking that this will not be backpatched at all\nand really am glad we're getting this fixed in the back branches as well.\n\nTherefore I think it's fine to skip 9.5, though I\nwould've mentioned this in the commit message.\n\n\n-- \nVictor Yegorov\n\nвт, 5 янв. 2021 г. в 07:26, Fujii Masao <masao.fujii@oss.nttdata.com>:\nThis situation makes me feel that I'm inclined to skip the back-patch to v9.5.\nBecause the next minor version release is the final one for v9.5. So if we\nunexpectedly introduce the bug to v9.5 by the back-patch, there is no\nchance to fix that. OTOH, of course, if we don't do the back-patch, there is\nno chance to fix the deadlock detection bug since the final minor version\nfor v9.5 doesn't include that bug fix. But I'm afraid that the back-patch\nto v9.5 may give more risk than gain.\n\nThought?Honestly, I was thinking that this will not be backpatched at alland really am glad we're getting this fixed in the back branches as well.Therefore I think it's fine to skip 9.5, though Iwould've mentioned this in the commit message.-- Victor Yegorov",
"msg_date": "Tue, 5 Jan 2021 12:26:47 +0100",
"msg_from": "Victor Yegorov <vyegorov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "At Tue, 5 Jan 2021 15:26:50 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/12/25 13:16, Kyotaro Horiguchi wrote:\n> > At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >> you. Attached\n> >> is the updated of the patch. What about this version?\n> > The patch contains a hunk in the following structure.\n> > +\tif (got_standby_lock_timeout)\n> > +\t\tgoto cleanup;\n> > +\n> > +\tif (got_standby_deadlock_timeout)\n> > +\t{\n> > ...\n> > +\t}\n> > +\n> > +cleanup:\n> > It is eqivalent to\n> > +\tif (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n> > +\t{\n> > ...\n> > +\t}\n> > Is there any reason for the goto?\n> \n> Yes. That's because we have the following code using goto.\n> \n> + /* Quick exit if there's no work to be done */\n> + if (!VirtualTransactionIdIsValid(*backends))\n> + goto cleanup;\n\nIt doesn't seem to be the *cause*. Such straight-forward logic with\nthree-depth indentation is not a thing that should be avoided using\ngoto even if PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK is too lengty\nand sticks out of 80 coloumns.\n\n> Regarding the back-patch, I was thinking to back-patch this to all the\n> supported branches. But I found that it's not easy to do that to v9.5\n> because v9.5 doesn't have some infrastructure code that this bug fix\n> patch depends on. So at least the commit 37c54863cf as the\n> infrastructure\n> also needs to be back-patched to v9.5. And ISTM that some related\n> commits\n> f868a8143a and 8f0de712c3 need to be back-patched. Probably there\n> might\n> be some other changes to be back-patched. Unfortunately they cannot be\n> applied to v9.5 cleanly and additional changes are necessary.\n> \n> This situation makes me feel that I'm inclined to skip the back-patch\n> to v9.5.\n> Because the next minor version release is the final one for v9.5. 
So\n> if we\n> unexpectedly introduce the bug to v9.5 by the back-patch, there is no\n> chance to fix that. OTOH, of course, if we don't do the back-patch,\n> there is\n> no chance to fix the deadlock detection bug since the final minor\n> version\n> for v9.5 doesn't include that bug fix. But I'm afraid that the\n> back-patch\n> to v9.5 may give more risk than gain.\n> \n> Thought?\n\nIt seems to me that the final minor release should get fixes only for\nissues that we have actually gotten complaints on, or critical-ish\nknown issues such as ones lead to server crash in normal paths. This\nparticular issue is neither of them.\n\nSo +1 for not back-patching to 9.5.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Jan 2021 09:57:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "On Tue, Jan 5, 2021 at 3:26 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/12/25 13:16, Kyotaro Horiguchi wrote:\n> > At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >> you. Attached\n> >> is the updated of the patch. What about this version?\n> >\n> > The patch contains a hunk in the following structure.\n> >\n> > + if (got_standby_lock_timeout)\n> > + goto cleanup;\n> > +\n> > + if (got_standby_deadlock_timeout)\n> > + {\n> > ...\n> > + }\n> > +\n> > +cleanup:\n> >\n> > It is eqivalent to\n> >\n> > + if (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n> > + {\n> > ...\n> > + }\n> >\n> > Is there any reason for the goto?\n>\n> Yes. That's because we have the following code using goto.\n>\n> + /* Quick exit if there's no work to be done */\n> + if (!VirtualTransactionIdIsValid(*backends))\n> + goto cleanup;\n>\n>\n> Regarding the back-patch, I was thinking to back-patch this to all the\n> supported branches. But I found that it's not easy to do that to v9.5\n> because v9.5 doesn't have some infrastructure code that this bug fix\n> patch depends on. So at least the commit 37c54863cf as the infrastructure\n> also needs to be back-patched to v9.5. And ISTM that some related commits\n> f868a8143a and 8f0de712c3 need to be back-patched. Probably there might\n> be some other changes to be back-patched. Unfortunately they cannot be\n> applied to v9.5 cleanly and additional changes are necessary.\n>\n> This situation makes me feel that I'm inclined to skip the back-patch to v9.5.\n> Because the next minor version release is the final one for v9.5. So if we\n> unexpectedly introduce the bug to v9.5 by the back-patch, there is no\n> chance to fix that. OTOH, of course, if we don't do the back-patch, there is\n> no chance to fix the deadlock detection bug since the final minor version\n> for v9.5 doesn't include that bug fix. 
But I'm afraid that the back-patch\n> to v9.5 may give more risk than gain.\n>\n> Thought?\n\n+1 for not-backpatching to 9.5.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 6 Jan 2021 11:48:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
},
{
"msg_contents": "\n\nOn 2021/01/06 11:48, Masahiko Sawada wrote:\n> On Tue, Jan 5, 2021 at 3:26 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/12/25 13:16, Kyotaro Horiguchi wrote:\n>>> At Wed, 23 Dec 2020 21:42:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> you. Attached\n>>>> is the updated of the patch. What about this version?\n>>>\n>>> The patch contains a hunk in the following structure.\n>>>\n>>> + if (got_standby_lock_timeout)\n>>> + goto cleanup;\n>>> +\n>>> + if (got_standby_deadlock_timeout)\n>>> + {\n>>> ...\n>>> + }\n>>> +\n>>> +cleanup:\n>>>\n>>> It is eqivalent to\n>>>\n>>> + if (!got_standby_lock_timeout && got_standby_deadlock_timeout)\n>>> + {\n>>> ...\n>>> + }\n>>>\n>>> Is there any reason for the goto?\n>>\n>> Yes. That's because we have the following code using goto.\n>>\n>> + /* Quick exit if there's no work to be done */\n>> + if (!VirtualTransactionIdIsValid(*backends))\n>> + goto cleanup;\n>>\n>>\n>> Regarding the back-patch, I was thinking to back-patch this to all the\n>> supported branches. But I found that it's not easy to do that to v9.5\n>> because v9.5 doesn't have some infrastructure code that this bug fix\n>> patch depends on. So at least the commit 37c54863cf as the infrastructure\n>> also needs to be back-patched to v9.5. And ISTM that some related commits\n>> f868a8143a and 8f0de712c3 need to be back-patched. Probably there might\n>> be some other changes to be back-patched. Unfortunately they cannot be\n>> applied to v9.5 cleanly and additional changes are necessary.\n>>\n>> This situation makes me feel that I'm inclined to skip the back-patch to v9.5.\n>> Because the next minor version release is the final one for v9.5. So if we\n>> unexpectedly introduce the bug to v9.5 by the back-patch, there is no\n>> chance to fix that. 
OTOH, of course, if we don't do the back-patch, there is\n>> no chance to fix the deadlock detection bug since the final minor version\n>> for v9.5 doesn't include that bug fix. But I'm afraid that the back-patch\n>> to v9.5 may give more risk than gain.\n>>\n>> Thought?\n> \n> +1 for not-backpatching to 9.5.\n\nThanks all! I pushed the patch and back-patched to v9.6.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 6 Jan 2021 12:55:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Deadlock between backend and recovery may not be detected"
}
]
[
{
"msg_contents": "Hi,\n\nWe have \\gset to set some parameters, but not ones in the environment,\nso I fixed this with a new analogous command, \\gsetenv. I considered\nrefactoring SetVariable to include environment variables, but for a\nfirst cut, I just made a separate function and an extra if.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 16 Dec 2020 22:24:29 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "\\gsetenv"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> We have \\gset to set some parameters, but not ones in the environment,\n> so I fixed this with a new analogous command, \\gsetenv.\n\nIn view of the security complaints we just had about \\gset\n(CVE-2020-25696), I cannot fathom why we'd consider adding another\nway to cause similar problems.\n\nWe were fortunate enough to be able to close off the main security risk\nof \\gset without deleting the feature altogether ... but how exactly\nwould we distinguish \"safe\" from \"unsafe\" environment variables? It kind\nof seems like anything that would be worth setting at all would tend to\nfall into the \"unsafe\" category, because the implications of setting it\nwould be unclear. But *for certain* we're not taking a patch that allows\nremotely setting PATH and things like that.\n\nBesides which, you haven't bothered with even one word of positive\njustification. What's the non-hazardous use case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:30:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 05:30:13PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > We have \\gset to set some parameters, but not ones in the environment,\n> > so I fixed this with a new analogous command, \\gsetenv.\n> \n> In view of the security complaints we just had about \\gset\n> (CVE-2020-25696), I cannot fathom why we'd consider adding another\n> way to cause similar problems.\n\nThe RedHat site says, in part:\n\n the attacker can execute arbitrary code as the operating system\n account running psql\n\nThis is true of literally everything that requires a login shell in\norder to use. I remember a \"virus\" my friend Keith McMillan wrote in\nTeX back in the 1994. You can download the PostScript file detailing\nthe procedure, bearing in mind that PostScript also contains ways to\nexecute arbitrary code if opened:\n\nftp://ftp.cerias.purdue.edu/pub/doc/viruses/KeithMcMillan-PlatformIndependantVirus.ps\n\nThat one got itself a remote code execution by fiddling with a\nperson's .emacs, and it got Keith a master's degree in CS. I suspect\nthat equally nasty things are possible when it comes to \\i and \\o in\npsql. It would be a terrible idea to hobble psql in the attempt to\nprevent such attacks.\n\n> We were fortunate enough to be able to close off the main security\n> risk of \\gset without deleting the feature altogether ... but how\n> exactly would we distinguish \"safe\" from \"unsafe\" environment\n> variables? It kind of seems like anything that would be worth\n> setting at all would tend to fall into the \"unsafe\" category,\n> because the implications of setting it would be unclear. 
But *for\n> certain* we're not taking a patch that allows remotely setting PATH\n> and things like that.\n\nWould you be so kind as to explain what the actual problem is here\nthat not doing this would mitigate?\n\nIf people run code they haven't seen from a server they don't trust,\nneither psql nor anything else[1] can protect them from the\nconsequences. Seeing what they're about to run is dead easy in this\ncase because \\gsetenv, like \\gset and what in my view is the much more\ndangerous \\gexec, is something anyone with the tiniest modicum of\ncaution would run only after testing it with \\g.\n\n> Besides which, you haven't bothered with even one word of positive\n> justification. What's the non-hazardous use case?\n\nThanks for asking, and my apologies for not including it.\n\nI ran into a situation where we sometimes got a very heavily loaded\nand also well-protected PostgreSQL server. At times, just getting a\nshell on it could take a few tries. To mitigate situations like that,\nI used a method that's a long way from new, abstruse, or secret: have\npsql open in a long-lasting tmux or screen session. It could both\naccess the database at a time when getting a new connection would be\nsomewhere between difficult and impossible. The bit that's unlikely\nto be new was when I noticed that it could also shell out\nand send information off to other places, but only when I put together\na pretty baroque procedure that involved using combinations of \\gset,\n\\o, and \\!. All of the same things \\gsetenv could do were doable with\nthose, just less convenient, so I drafted up a patch in the hope that\nfewer others would find themselves jumping through the hoops I did to\nget that set up.\n\nSeparately, I confess to some bafflement at the reasoning behind the\nCVE you referenced. By the time an attacker has compromised a database\nserver, it's already game over. 
Code running on the compromised\ndatabase is capable of doing much nastier things than crashing a\nclient machine, and very likely has access to other high-value targets\non its own say-so than said client does.\n\nBest,\nDavid.\n\n[1] search for \"gods themselves\" here:\nhttps://en.wikiquote.org/wiki/Friedrich_Schiller\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 17 Dec 2020 04:54:58 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "\nOn 12/16/20 10:54 PM, David Fetter wrote:\n>\n>> Besides which, you haven't bothered with even one word of positive\n>> justification. What's the non-hazardous use case?\n> Thanks for asking, and my apologies for not including it.\n>\n> I ran into a situation where we sometimes got a very heavily loaded\n> and also well-protected PostgreSQL server. At times, just getting a\n> shell on it could take a few tries. To mitigate situations like that,\n> I used a method that's a long way from new, abstruse, or secret: have\n> psql open in a long-lasting tmux or screen session. It could both\n> access the database at a time when getting a new connection would be\n> somewhere between difficult and impossible. The bit that's unlikely\n> to be new was when I noticed that it could also shell out\n> and send information off to other places, but only when I put together\n> a pretty baroque procedure that involved using combinations of \\gset,\n> \\o, and \\!. All of the same things \\gsetenv could do were doable with\n> those, just less convenient, so I drafted up a patch in the hope that\n> fewer others would find themselves jumping through the hoops I did to\n> get that set up.\n\n\nDoes this help?\n\n\n andrew=# select 'abc'::text as foo \\gset\n andrew=# \\setenv FOO :foo\n andrew=# \\! echo $FOO\n abc\n andrew=#\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 10:37:11 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "Hello David,\n\n> We have \\gset to set some parameters, but not ones in the environment,\n> so I fixed this with a new analogous command, \\gsetenv. I considered\n> refactoring SetVariable to include environment variables, but for a\n> first cut, I just made a separate function and an extra if.\n\nMy 0.02ᅵ: ISTM that you do not really need that, it can already be \nachieved with gset, so I would not bother to add a gsetenv.\n\n sh> psql\n SELECT 'Calvin' AS foo \\gset\n \\setenv FOO :foo\n \\! echo $FOO\n Calvin\n\n-- \nFabien.",
"msg_date": "Sun, 20 Dec 2020 14:26:14 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 02:26:14PM +0100, Fabien COELHO wrote:\n> Hello David,\n> \n> > We have \\gset to set some parameters, but not ones in the environment,\n> > so I fixed this with a new analogous command, \\gsetenv. I considered\n> > refactoring SetVariable to include environment variables, but for a\n> > first cut, I just made a separate function and an extra if.\n> \n> My 0.02€: ISTM that you do not really need that, it can already be achieved\n> with gset, so I would not bother to add a gsetenv.\n> \n> sh> psql\n> SELECT 'Calvin' AS foo \\gset\n> \\setenv FOO :foo\n> \\! echo $FOO\n> Calvin\n\nThanks!\n\nYou're the second person who's mentioned this workaround, which goes\nto a couple of points I tried to make earlier:\n\n- This is not by any means a new capability, just a convenience, and\n- In view of the fact that it's a very old capability, the idea that\n it has implications for controlling access or other parts of the\n space of threat models is pretty silly.\n\nHaving dispensed with the idea that there's a new attack surface here,\nI'd like to request that people at least have a look at it as a\nfeature psql users might appreciate having. As the author, I obviously\nsee it that way, but again as the author, it's not for me to make that\ncall.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 20 Dec 2020 18:40:12 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Sun, Dec 20, 2020 at 02:26:14PM +0100, Fabien COELHO wrote:\n>> SELECT 'Calvin' AS foo \\gset\n>> \\setenv FOO :foo\n>> \\! echo $FOO\n>> Calvin\n\n> You're the second person who's mentioned this workaround, which goes\n> to a couple of points I tried to make earlier:\n\n> - This is not by any means a new capability, just a convenience, and\n> - In view of the fact that it's a very old capability, the idea that\n> it has implications for controlling access or other parts of the\n> space of threat models is pretty silly.\n\nThis is essentially the same workaround as what we recommend for anyone\nwho's unhappy with the fix for CVE-2020-25696: do \\gset into a non-special\nvariable and then copy to the special variable. The reason it seems okay\nis that now it is clear that client-side logic intends the special\nvariable change to happen. Thus a compromised server cannot hijack your\nclient-side session all by itself. There's nonzero risk in letting the\nserver modify your PROMPT1, PATH, or whatever, but you took the risk\nintentionally (and, presumably, it's necessary to your purposes).\n\nI tend to agree with you that the compromised-server argument is a little\nbit of a stretch. Still, we did have an actual user complaining about\nthe case for \\gset, and it's clear that in at least some scenarios this\nsort of attack could be used to parlay a server compromise into additional\naccess. So we're not likely to undo the CVE-2020-25696 fix, and we're\nequally unlikely to provide an unrestricted way to set environment\nvariables directly from the server.\n\nIf we could draw a line between \"safe\" and \"unsafe\" environment\nvariables, I'd be willing to consider a patch that allows directly\nsetting only the former. But I don't see how to draw that line.\nMost of the point of any such feature would have to be to affect\nthe behavior of shell commands subsequently invoked with \\! 
...\nand we can't know what a given variable would do to those. So on\nthe whole I'm inclined to leave things as-is, where people have to\ndo the assignment manually.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Dec 2020 13:07:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 01:07:12PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > On Sun, Dec 20, 2020 at 02:26:14PM +0100, Fabien COELHO wrote:\n> >> SELECT 'Calvin' AS foo \\gset\n> >> \\setenv FOO :foo\n> >> \\! echo $FOO\n> >> Calvin\n> \n> > You're the second person who's mentioned this workaround, which goes\n> > to a couple of points I tried to make earlier:\n> \n> > - This is not by any means a new capability, just a convenience, and\n> > - In view of the fact that it's a very old capability, the idea that\n> > it has implications for controlling access or other parts of the\n> > space of threat models is pretty silly.\n> \n> This is essentially the same workaround as what we recommend for anyone\n> who's unhappy with the fix for CVE-2020-25696: do \\gset into a non-special\n> variable and then copy to the special variable. The reason it seems okay\n> is that now it is clear that client-side logic intends the special\n> variable change to happen. Thus a compromised server cannot hijack your\n> client-side session all by itself. There's nonzero risk in letting the\n> server modify your PROMPT1, PATH, or whatever, but you took the risk\n> intentionally (and, presumably, it's necessary to your purposes).\n> \n> I tend to agree with you that the compromised-server argument is a little\n> bit of a stretch. Still, we did have an actual user complaining about\n> the case for \\gset, and it's clear that in at least some scenarios this\n> sort of attack could be used to parlay a server compromise into additional\n> access. So we're not likely to undo the CVE-2020-25696 fix, and we're\n> equally unlikely to provide an unrestricted way to set environment\n> variables directly from the server.\n> \n> If we could draw a line between \"safe\" and \"unsafe\" environment\n> variables, I'd be willing to consider a patch that allows directly\n> setting only the former. 
But I don't see how to draw that line.\n> Most of the point of any such feature would have to be to affect\n> the behavior of shell commands subsequently invoked with \\! ...\n> and we can't know what a given variable would do to those. So on\n> the whole I'm inclined to leave things as-is, where people have to\n> do the assignment manually.\n\nI suppose now's not a great time for this from an optics point of\nview. Taking on the entire security theater industry is way out of\nscope for the PostgreSQL project.\n\nWe have plenty of ways to spawn shells and cause havoc, and we\nwouldn't be able to block them all even if we decided to put a bunch\nof pretty onerous restrictions on psql at this very late date. We have\n\\set, backticks, \\!, and bunches of things less obvious that could,\neven without a compromised server, cause real mischief. I believe that\na more effective way to deal with this reality in a way that helps\nusers is to put clear warnings in the documentation about the fact\nthat psql programs are at least as capable as shell programs in that\nthey are innately capable of doing anything that the psql's system\nuser is authorized to do.\n\nWould a patch along that line help?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 20 Dec 2020 20:05:44 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On 20/12/2020 21:05, David Fetter wrote:\n> We have plenty of ways to spawn shells and cause havoc, and we\n> wouldn't be able to block them all even if we decided to put a bunch\n> of pretty onerous restrictions on psql at this very late date. We have\n> \\set, backticks, \\!, and bunches of things less obvious that could,\n> even without a compromised server, cause real mischief.\n\nThere is a big difference between having to trust the server or not. \nYeah, you could cause a lot of mischief if you let a user run arbitrary \npsql scripts on your behalf. But that's no excuse for opening up a whole \nother class of problems.\n\n- Heikki\n\n\n",
"msg_date": "Sun, 20 Dec 2020 22:42:40 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 10:42:40PM +0200, Heikki Linnakangas wrote:\n> On 20/12/2020 21:05, David Fetter wrote:\n> > We have plenty of ways to spawn shells and cause havoc, and we\n> > wouldn't be able to block them all even if we decided to put a bunch\n> > of pretty onerous restrictions on psql at this very late date. We have\n> > \\set, backticks, \\!, and bunches of things less obvious that could,\n> > even without a compromised server, cause real mischief.\n> \n> There is a big difference between having to trust the server or not. Yeah,\n> you could cause a lot of mischief if you let a user run arbitrary psql\n> scripts on your behalf. But that's no excuse for opening up a whole other\n> class of problems.\n\nI'm skittish about putting exploits out in public in advance of\ndiscussions about how to mitigate them, but I have constructed several\nthat do pretty bad things using only hostile content in a server and\nthe facilities `psql` already provides.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 21 Dec 2020 00:34:15 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> If we could draw a line between \"safe\" and \"unsafe\" environment\n> variables, I'd be willing to consider a patch that allows directly\n> setting only the former. But I don't see how to draw that line.\n>\n>\nIIUC the threat here is for users that write:\n\nSELECT * FROM view \\gset\n\nBecause if you are writing\n\nSELECT col1, col2, col3 OR SELECT expression AS col1 \\gset\n\nThe query author has explicitly stated which variable names they exactly\nwant to change/create and nothing the server can do will be able to alter\nthose names.\n\nOr *is* that the problem - the server might decide to send back a column\nnamed \"breakme1\" in the first column position even though the user aliased\nthe column name as \"col1\"?\n\nWould a \"\\gsetenv (col1, col2, col3, skip, col4)\" be acceptable that leaves\nthe name locally defined while relying on column position to match?\n\nHow much do we want to handicap a useful feature because someone can use it\nalongside \"SELECT *\"? Can we prevent it from working in such a case\noutright - force an explicit column name list at minimum, and ideally\naliases for expressions? I suspect not, too much of that has to happen on\nthe server. That makes doing this by column position and defining the\nnames strictly locally a compromise worth considering. At worst, there is\nno way to get an unwanted variable to appear on the client even if the data\nfor wanted variables is made bogus by the compromised server. I don't see\nhow avoiding the bogus data problem is even possible.\n\nDavid J.\n\nOn Sun, Dec 20, 2020 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:If we could draw a line between \"safe\" and \"unsafe\" environment\nvariables, I'd be willing to consider a patch that allows directly\nsetting only the former. 
But I don't see how to draw that line.IIUC the threat here is for users that write:SELECT * FROM view \\gsetBecause if you are writingSELECT col1, col2, col3 OR SELECT expression AS col1 \\gsetThe query author has explicitly stated which variable names they exactly want to change/create and nothing the server can do will be able to alter those names.Or *is* that the problem - the server might decide to send back a column named \"breakme1\" in the first column position even though the user aliased the column name as \"col1\"?Would a \"\\gsetenv (col1, col2, col3, skip, col4)\" be acceptable that leaves the name locally defined while relying on column position to match?How much do we want to handicap a useful feature because someone can use it alongside \"SELECT *\"? Can we prevent it from working in such a case outright - force an explicit column name list at minimum, and ideally aliases for expressions? I suspect not, too much of that has to happen on the server. That makes doing this by column position and defining the names strictly locally a compromise worth considering. At worst, there is no way to get an unwanted variable to appear on the client even if the data for wanted variables is made bogus by the compromised server. I don't see how avoiding the bogus data problem is even possible.David J.",
"msg_date": "Sun, 20 Dec 2020 16:55:10 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sun, Dec 20, 2020 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we could draw a line between \"safe\" and \"unsafe\" environment\n>> variables, I'd be willing to consider a patch that allows directly\n>> setting only the former. But I don't see how to draw that line.\n\n> Because if you are writing\n> SELECT col1, col2, col3 OR SELECT expression AS col1 \\gset\n> The query author has explicitly stated which variable names they exactly\n> want to change/create and nothing the server can do will be able to alter\n> those names.\n\n> Or *is* that the problem - the server might decide to send back a column\n> named \"breakme1\" in the first column position even though the user aliased\n> the column name as \"col1\"?\n\nYeah, exactly. Just because the SQL output *should* have column names\nx, y, z doesn't mean it *will*, if the server is malicious. psql isn't\nbright enough to understand what column names the query ought to produce,\nso it just believes the column names that come back in the query result.\n\n> Would a \"\\gsetenv (col1, col2, col3, skip, col4)\" be acceptable that leaves\n> the name locally defined while relying on column position to match?\n\nHmm, maybe. The key point here is local vs. remote control of which\nvariables get assigned to, and offhand that seems like it'd fix the\nproblem.\n\n> How much do we want to handicap a useful feature because someone can use it\n> alongside \"SELECT *\"?\n\nWhether it's \"SELECT *\" or \"SELECT 1 AS X\" doesn't really matter here.\nThe concern is that somebody has hacked the server to send back something\nthat is *not* what you asked for. For that matter, since the actual\nupdate isn't visible to the user, the attacker could easily make the\nserver send back all the columns the user expected ... plus some\nhe didn't. 
The attackee might not even realize till later that\nsomething fishy happened.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Dec 2020 22:10:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \\gsetenv"
}
]
[
{
"msg_contents": "Hi,\nw.r.t. patch v27.\n\n+ * The idea is to prepend underscores as needed until we make a name\nthat\n+ * doesn't collide with anything ...\n\nI wonder if other characters (e.g. [a-z0-9]) can be used so that name\nwithout collision can be found without calling truncate_identifier().\n\n+ else if (strcmp(defel->defname, \"multirange_type_name\") == 0)\n+ {\n+ if (multirangeTypeName != NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"conflicting or redundant options\")));\n\nMaybe make the error message a bit different from occurrences of similar\nerror message (such as including multirangeTypeName) ?\n\nThanks\n\nHi,w.r.t. patch v27.+ * The idea is to prepend underscores as needed until we make a name that+ * doesn't collide with anything ...I wonder if other characters (e.g. [a-z0-9]) can be used so that name without collision can be found without calling truncate_identifier().+ else if (strcmp(defel->defname, \"multirange_type_name\") == 0)+ {+ if (multirangeTypeName != NULL)+ ereport(ERROR,+ (errcode(ERRCODE_SYNTAX_ERROR),+ errmsg(\"conflicting or redundant options\")));Maybe make the error message a bit different from occurrences of similar error message (such as including multirangeTypeName) ?Thanks",
"msg_date": "Wed, 16 Dec 2020 13:54:45 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "range_agg"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 12:54 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> + * The idea is to prepend underscores as needed until we make a name that\n> + * doesn't collide with anything ...\n>\n> I wonder if other characters (e.g. [a-z0-9]) can be used so that name without collision can be found without calling truncate_identifier().\n\nProbably. But multiranges just shares naming logic already existing\nin arrays. If we're going to change this, I think we should change\nthis for arrays too. And this change shouldn't be part of multirange\npatch.\n\n> + else if (strcmp(defel->defname, \"multirange_type_name\") == 0)\n> + {\n> + if (multirangeTypeName != NULL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"conflicting or redundant options\")));\n>\n> Maybe make the error message a bit different from occurrences of similar error message (such as including multirangeTypeName) ?\n\nThis, again, isn't an invention of multirange. We use this message\nmany times in DefineRange() and other places. At first glance,\nI've nothing against changing this to a more informative message, but\nthat should be done globally. And this change isn't directly related\nto multirange. Feel free to propose a patch improving this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 17 Dec 2020 01:03:38 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 1:03 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Dec 17, 2020 at 12:54 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + * The idea is to prepend underscores as needed until we make a name that\n> > + * doesn't collide with anything ...\n> >\n> > I wonder if other characters (e.g. [a-z0-9]) can be used so that name without collision can be found without calling truncate_identifier().\n>\n> Probably. But multiranges just shares naming logic already existing\n> in arrays. If we're going to change this, I think we should change\n> this for arrays too. And this change shouldn't be part of multirange\n> patch.\n\nI gave this another thought. Now we have a facility to name multirange\ntypes manually. I think we should give up on underscore naming\ncompletely. If both replacing \"range\" with \"multirange\" in the\ntypename and appending \"_multirange\" to the type name fail (very\nunlikely), then let the user manually name the multirange. Any thoughts?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 17 Dec 2020 02:34:39 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Letting the user manually name the multirange (after a few automatic attempts)\nseems reasonable.\n\nCheers\n\nOn Wed, Dec 16, 2020 at 3:34 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Thu, Dec 17, 2020 at 1:03 AM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Thu, Dec 17, 2020 at 12:54 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > + * The idea is to prepend underscores as needed until we make a\n> name that\n> > > + * doesn't collide with anything ...\n> > >\n> > > I wonder if other characters (e.g. [a-z0-9]) can be used so that name\n> without collision can be found without calling truncate_identifier().\n> >\n> > Probably. But multiranges just shares naming logic already existing\n> > in arrays. If we're going to change this, I think we should change\n> > this for arrays too. And this change shouldn't be part of multirange\n> > patch.\n>\n> I gave this another thought. Now we have a facility to name multirange\n> types manually. I think we should give up on underscore naming\n> completely. If both replacing \"range\" with \"multirange\" in the\n> typename and appending \"_multirange\" to the type name fail (very\n> unlikely), then let the user manually name the multirange. Any thoughts?\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nLetting the user manually name the multirange (after a few automatic attempts) seems reasonable.CheersOn Wed, Dec 16, 2020 at 3:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:On Thu, Dec 17, 2020 at 1:03 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Dec 17, 2020 at 12:54 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + * The idea is to prepend underscores as needed until we make a name that\n> > + * doesn't collide with anything ...\n> >\n> > I wonder if other characters (e.g. [a-z0-9]) can be used so that name without collision can be found without calling truncate_identifier().\n>\n> Probably. But multiranges just shares naming logic already existing\n> in arrays. 
If we're going to change this, I think we should change\n> this for arrays too. And this change shouldn't be part of multirange\n> patch.\n\nI gave this another thought. Now we have a facility to name multirange\ntypes manually. I think we should give up on underscore naming\ncompletely. If both replacing \"range\" with \"multirange\" in the\ntypename and appending \"_multirange\" to the type name fail (very\nunlikely), then let the user manually name the multirange. Any thoughts?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 16 Dec 2020 15:37:50 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 2:37 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Letting user manually name the multirange (after a few automatic attempts) seems reasonable.\n\nAccepted. Thank you for your feedback.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 17 Dec 2020 02:41:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
}
]
[
{
"msg_contents": "Hi,\nFor EventTriggerOnConnect():\n\n+ PG_CATCH();\n+ {\n...\n+ AbortCurrentTransaction();\n+ return;\n\nShould runlist be freed in the catch block ?\n\n+ gettext_noop(\"In case of errors in the ON client_connection\nEVENT TRIGGER procedure, this parameter can be used to disable trigger\nactivation and provide access to the database.\"),\n\nI think the text should be on two lines (current line too long).\n\nCheers\n\nHi,For EventTriggerOnConnect():+ PG_CATCH();+ {...+ AbortCurrentTransaction();+ return;Should runlist be freed in the catch block ?+ gettext_noop(\"In case of errors in the ON client_connection EVENT TRIGGER procedure, this parameter can be used to disable trigger activation and provide access to the database.\"),I think the text should be on two lines (current line too long).Cheers",
"msg_date": "Wed, 16 Dec 2020 16:31:08 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: On login trigger: take three"
},
{
"msg_contents": "Hi,\n\nOn 17.12.2020 3:31, Zhihong Yu wrote:\n> Hi,\n> For EventTriggerOnConnect():\n>\n> + PG_CATCH();\n> + {\n> ...\n> + AbortCurrentTransaction();\n> + return;\n>\n> Should runlist be freed in the catch block ?\n\nNo need: it is allocated in transaction memory context and removed on \ntransaction abort.\n\n>\n> + gettext_noop(\"In case of errors in the ON \n> client_connection EVENT TRIGGER procedure, this parameter can be used \n> to disable trigger activation and provide access to the database.\"),\n>\n> I think the text should be on two lines (current line too long).\n\nThank you, fixed.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 16:05:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: On login trigger: take three"
}
]
[
{
"msg_contents": "Even though the message literally says whether the index \"can safely\" or\n\"cannot\" use deduplication, the function specifically avoids the debug message\nfor system columns, so I think it also makes sense to hide it when\ndeduplication is turned off. \n\ndiff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c\nindex 2f5f14e527..b78b542429 100644\n--- a/src/backend/access/nbtree/nbtutils.c\n+++ b/src/backend/access/nbtree/nbtutils.c\n@@ -2710,6 +2710,9 @@ _bt_allequalimage(Relation rel, bool debugmessage)\n \tif (IsSystemRelation(rel))\n \t\treturn false;\n \n+\tif (!BTGetDeduplicateItems(rel))\n+\t\treturn false;\n+\n \tfor (int i = 0; i < IndexRelationGetNumberOfKeyAttributes(rel); i++)\n \t{\n \t\tOid\t\t\topfamily = rel->rd_opfamily[i];\n-- \n2.17.0\n\n\n",
"msg_date": "Wed, 16 Dec 2020 19:28:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] nbtree: Do not show debugmessage if deduplication is disabled"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 5:28 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Even though the message literally says whether the index \"can safely\" or\n> \"cannot\" use deduplication, the function specifically avoids the debug message\n> for system columns, so I think it also makes sense to hide it when\n> deduplication is turned off.\n\nI disagree. The point of the message is to advertise whether\ndeduplication is possible in principle for indexes where support is\nnot precluded by a significant design issue that will almost certainly\nnot change in the future. The debug message should only apply to\nindexes without INCLUDE non-key columns that are not system catalog\nindexes.\n\nIn general, I think of the storage parameter as advisory. If it wasn't\nadvisory then we'd have no way of rescinding support for deduplication\nin the event of an opclass that somehow gets the \"equality implies\nimage equality\" question wrong. If it wasn't advisory then we might\nend up raising an error when the user explicitly asks for\ndeduplication but that isn't possible -- which might break somebody's\npg_restore workflow.\n\nEven when deduplication is both the safe and the desired behavior,\nthere is at least one case where it's applied selectively. We do this\nin unique indexes, where deduplication can only help with version\nchurn duplicates and so we only try to deduplicate when that appears\nto be a factor. By the same token, when the user disables\ndeduplication via the storage parameter (presumably due to the\nperformance trade-off somehow not seeming useful), they cannot expect\nto get back to an on-disk representation without posting list tuples,\nunless and until they REINDEX.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 17 Dec 2020 11:12:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] nbtree: Do not show debugmessage if deduplication is\n disabled"
}
]
[
{
"msg_contents": "Hi,\n\nWhen the startup process needs to wait for recovery conflict on lock,\nSTANDBY_LOCK_TIMEOUT is enabled to interrupt ProcWaitForSignal()\nif necessary. If this timeout happens, StandbyLockTimeoutHandler() is\ncalled and this function does nothing as follows.\n\n /*\n * StandbyLockTimeoutHandler() will be called if STANDBY_LOCK_TIMEOUT is exceeded.\n * This doesn't need to do anything, simply waking up is enough.\n */\n void\n StandbyLockTimeoutHandler(void)\n {\n }\n\nBut if STANDBY_LOCK_TIMEOUT happens just before entering ProcWaitForSignal(),\nthe timeout fails to interrupt that wait. Also a signal sent by this timeout\ndoesn't interrupt poll() used in ProcWaitForSignal(), on all platforms.\n\nSo I think that StandbyLockTimeoutHandler() should do SetLatch(MyLatch)\nso that the timeout can interrupt ProcWaitForSignal() even in those cases.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 17 Dec 2020 11:04:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "STANDBY_LOCK_TIMEOUT may not interrupt ProcWaitForSignal()?"
},
{
"msg_contents": "On 2020/12/17 11:04, Fujii Masao wrote:\n> Hi,\n> \n> When the startup process needs to wait for recovery conflict on lock,\n> STANDBY_LOCK_TIMEOUT is enabled to interrupt ProcWaitForSignal()\n> if necessary. If this timeout happens, StandbyLockTimeoutHandler() is\n> called and this function does nothing as follows.\n> \n> /*\n> * StandbyLockTimeoutHandler() will be called if STANDBY_LOCK_TIMEOUT is exceeded.\n> * This doesn't need to do anything, simply waking up is enough.\n> */\n> void\n> StandbyLockTimeoutHandler(void)\n> {\n> }\n> \n> But if STANDBY_LOCK_TIMEOUT happens just before entering ProcWaitForSignal(),\n> the timeout fails to interrupt that wait. Also a signal sent by this timeout\n> doesn't interrupt poll() used in ProcWaitForSignal(), on all platforms.\n> \n> So I think that StandbyLockTimeoutHandler() should do SetLatch(MyLatch)\n> so that the timeout can interrupt ProcWaitForSignal() even in those cases.\n> Thought?\n\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 17 Dec 2020 18:45:00 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: STANDBY_LOCK_TIMEOUT may not interrupt ProcWaitForSignal()?"
},
{
"msg_contents": "\n\nOn 2020/12/17 18:45, Fujii Masao wrote:\n> \n> \n> On 2020/12/17 11:04, Fujii Masao wrote:\n>> Hi,\n>>\n>> When the startup process needs to wait for recovery conflict on lock,\n>> STANDBY_LOCK_TIMEOUT is enabled to interrupt ProcWaitForSignal()\n>> if necessary. If this timeout happens, StandbyLockTimeoutHandler() is\n>> called and this function does nothing as follows.\n>>\n>> /*\n>> * StandbyLockTimeoutHandler() will be called if STANDBY_LOCK_TIMEOUT is exceeded.\n>> * This doesn't need to do anything, simply waking up is enough.\n>> */\n>> void\n>> StandbyLockTimeoutHandler(void)\n>> {\n>> }\n>>\n>> But if STANDBY_LOCK_TIMEOUT happens just before entering ProcWaitForSignal(),\n>> the timeout fails to interrupt that wait. Also a signal sent by this timeout\n>> doesn't interrupt poll() used in ProcWaitForSignal(), on all platforms.\n>>\n>> So I think that StandbyLockTimeoutHandler() should do SetLatch(MyLatch)\n>> so that the timeout can interrupt ProcWaitForSignal() even in those cases.\n>> Thought?\n\nBertrand Drouvot pointed out on Twitter that my analysis is incorrect\nbecause handle_sig_alarm() calls SetLatch(). So\nStandbyLockTimeoutHandler() doesn't need to call SetLatch().\nYes, he is right. Sorry for my shameful mistake....\n\nI found that other functions, CheckDeadLockAlert() and\nIdleInTransactionSessionTimeoutHandler(), that are triggered by\nSIGALRM also call SetLatch(). This call to SetLatch() is also unnecessary.\nPer comment, CheckDeadLockAlert() intentionally does that. But since\nsetting a latch again is cheap and is not harmful, it would not be worth\nremoving that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 18 Dec 2020 11:11:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: STANDBY_LOCK_TIMEOUT may not interrupt ProcWaitForSignal()?"
}
]
[
{
"msg_contents": "Hello\n\nIn commit 898e5e3290a72d288923260143930fb32036c00c [1] we lowered the lock level on the parent relation. I found in discussion [2]:\n\n> David Rowley recently pointed out that we can modify\n> CREATE TABLE .. PARTITION OF to likewise not obtain AEL anymore.\n> Apparently it just requires removal of three lines in MergeAttributes.\n\nBut on current HEAD \"create table ... partition of\" still requires AccessExclusiveLock on the parent relation. Is that necessary?\n\nregards, Sergei\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=898e5e3290a72d288923260143930fb32036c00c\n[2]: https://www.postgresql.org/message-id/20181025202622.d3x4y4ch7m4pxwnn%40alvherre.pgsql\n\n\n",
"msg_date": "Thu, 17 Dec 2020 15:41:48 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Lock level of create table partition of"
}
]
[
{
"msg_contents": "Along with the discussed change of the return type of EXTRACT from \nfloat8 to numeric [0], I was looking around what other date/time APIs \nmight be using float arguments or return values. The only thing left \nappears to be the functions make_time, make_timestamp, make_timestamptz, \nand make_interval, which take an argument specifying the seconds, which \nhas type float8 right now. I'm proposing the attached patch to change \nthat to numeric.\n\nCan we change the arguments, as proposed here, or do we need to add \nseparate overloaded versions and leave the existing versions in place?\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu",
"msg_date": "Thu, 17 Dec 2020 17:43:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Change seconds argument of make_*() functions to numeric"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 5:43 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> Along with the discussed change of the return type of EXTRACT from\n> float8 to numeric [0], I was looking around what other date/time APIs\n> might be using float arguments or return values. The only thing left\n> appears to be the functions make_time, make_timestamp, make_timestamptz,\n> and make_interval, which take an argument specifying the seconds, which\n> has type float8 right now. I'm proposing the attached patch to change\n> that to numeric.\n>\n> Can we change the arguments, as proposed here, or do we need to add\n> separate overloaded versions and leave the existing versions in place?\n>\n\nWhat does this change do with views? Can it break an upgrade by pg_upgrade?\n\nRegards\n\nPavel\n\n\n> [0]:\n>\n> https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu\n>\n\nOn Thu, Dec 17, 2020 at 5:43 PM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:Along with the discussed change of the return type of EXTRACT from \nfloat8 to numeric [0], I was looking around what other date/time APIs \nmight be using float arguments or return values. The only thing left \nappears to be the functions make_time, make_timestamp, make_timestamptz, \nand make_interval, which take an argument specifying the seconds, which \nhas type float8 right now. I'm proposing the attached patch to change \nthat to numeric.\n\nCan we change the arguments, as proposed here, or do we need to add \nseparate overloaded versions and leave the existing versions in place?What does this change do with views? Can it break an upgrade by pg_upgrade?RegardsPavel\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu",
"msg_date": "Thu, 17 Dec 2020 17:54:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change seconds argument of make_*() functions to numeric"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Along with the discussed change of the return type of EXTRACT from \n> float8 to numeric [0], I was looking around what other date/time APIs \n> might be using float arguments or return values. The only thing left \n> appears to be the functions make_time, make_timestamp, make_timestamptz, \n> and make_interval, which take an argument specifying the seconds, which \n> has type float8 right now. I'm proposing the attached patch to change \n> that to numeric.\n\nI don't really see the point here. Since the seconds value is constrained\nto 0..60 and will be rounded off to microseconds, you would have to work\nseriously hard to find an example where float8 roundoff error could be\na problem. I don't think we should take whatever speed and compatibility\nhit is implied by using numeric instead of float8.\n\n(make_interval in theory could be an exception, since it doesn't constrain\nthe range of seconds values. But I still don't believe there's a problem\nin practice.)\n\n> Can we change the arguments, as proposed here, or do we need to add \n> separate overloaded versions and leave the existing versions in place?\n\nSince there's no implicit float8 to numeric cast, removing the existing\nversions could quite easily cause failures of queries that work today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Dec 2020 11:55:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change seconds argument of make_*() functions to numeric"
}
] |
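Tom Lane's roundoff argument in the thread above is easy to check empirically. A quick sketch (mine, not from the thread; `secs_to_usecs` is an invented helper, not PostgreSQL code): round a float8-style seconds value to whole microseconds, as the timestamp input code effectively does, and observe that values in the 0..60 range survive intact.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A seconds value stored as an IEEE double (float8) and then rounded to
# whole microseconds: the double keeps ~15-16 significant digits, while
# 60 seconds at microsecond precision needs only 8, so nothing is lost.
sub secs_to_usecs {
    my ($secs) = @_;                               # seconds as a double
    return sprintf("%.0f", $secs * 1_000_000);     # round to microseconds
}

print secs_to_usecs(59.999999), "\n";   # 59999999
print secs_to_usecs(0.000001),  "\n";   # 1
print secs_to_usecs(30.5),      "\n";   # 30500000
```

This is why a visible float8 roundoff problem would require make_interval-sized inputs far outside the 0..60 range, matching Tom's parenthetical caveat.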
[
{
"msg_contents": "I've been giving some thought to $subject. The initial impetus is the\npromise I made to assist with testing of clients built with NSS against\nservers built with openssl, and vice versa.\n\nI've already generalized the process of saving binaries by the buildfarm\nclient. And we could proceed with a purely bespoke module for testing\nthe SSL components, as we already do for testing cross-version\npg_upgrade. But it struck me that it might be better to leverage our\nexisting investment in TAP tests. So I came up with the idea of creating\na child module of PostgresNode.pm, which would set the PATH and other\nvariables appropriately at the start of each method and restore them on\nmethod exit. So then we could have things like:\n\n $openssl_node->start;\n my $connstr = $openssl_node->connstr;\n $nss_node->psql($connstr, ...);\n \n\nTo use this a TAP test would need to know two (or more) install paths\nfor the various nodes, presumably set in environment variables much as\nwe do now for things like TESTDIR. 
So for the above example, the TAP\ntest could set things up with:\n\n my\n $openssl_node=PostgresNodePath::get_new_node($ENV{OPENSSL_POSTGRES_INSTALL_PATH},'openssl');\n my\n $nss_node=PostgresNodePath::get_new_node($ENV{NSS_POSTGRES_INSTALL_PATH},'nss');\n\nOther possible uses would be things like cross-version testing of\npg_dump (How do we know we haven't broken anything in dumping very old\nversions?).\n\nThe proposed module would look something like this:\n\n package PostgresNodePath;\n\n use strict;\n use warnings;\n\n use parent PostgresNode;\n\n use Exporter qw(import);\n our @EXPORT = qw(get_new_node);\n\n sub get_new_node\n {\n my $installpath= shift;\n my $node = PostgresNode::get_new_node(@_);\n bless $node; # re-bless into current class\n $node->{_installpath} = $installpath;\n return $node;\n }\n\nand then for each class method in PostgresNode.pm we'd have an override\nsomething like:\n\n sub foo\n {\n my $node=shift;\n my $inst = $node->{_installpath};\n local %ENV = %ENV;\n $ENV{PATH} = \"$inst/bin:$ENV{PATH}\";\n $ENV{LD_LIBRARY_PATH} = \"$inst/lib:$ENV{LD_LIBRARY_PATH}\";\n $node->SUPER::foo(@_);\n }\n\nThere might be more elegant ways of doing this, but that's what I came\nup with.\n\nMy main question is: do we want something like this in the core code\n(presumably in src/test/perl), or is it not of sufficiently general\ninterest? If it's wanted I'll submit a patch, probably for the March CF,\nbut January if I manage to get my running shoes on. If not, I'll put it\nin the buildfarm code, but then any TAP tests that want it will likewise\nneed to live there.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 16:37:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "multi-install PostgresNode"
},
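The re-bless plus SUPER:: dispatch that Andrew's sketch relies on can be seen in a toy example (the `Base`/`Derived` names are invented for illustration, not PostgresNode's):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Base;
sub new   { return bless { name => $_[1] }, $_[0] }
sub greet { my $self = shift; return "hi, $self->{name}" }

package Derived;
our @ISA = ('Base');
sub greet {
    my $self = shift;
    return "[wrapped] " . $self->SUPER::greet(@_);   # delegate to parent
}

package main;

# As in the thread's get_new_node: build the object via the parent class,
# then re-bless it into the subclass so the overridden methods take effect.
my $obj = Base->new('node1');
bless $obj, 'Derived';
print $obj->greet(), "\n";   # [wrapped] hi, node1
```

The same pattern lets a wrapper class adjust state (here, a prefix; in the proposal, %ENV) before handing control back to the inherited implementation.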
{
"msg_contents": "On Thu, Dec 17, 2020 at 04:37:54PM -0500, Andrew Dunstan wrote:\n> The proposed module would look something like this:\n>\n> [...]\n>\n> use parent PostgresNode;\n> \n> sub get_new_node\n> {\n> my $installpath= shift;\n> my $node = PostgresNode::get_new_node(@_);\n> bless $node; # re-bless into current class\n> $node->{_installpath} = $installpath;\n> return $node;\n> }\n\nPassing down the installpath as argument and saving it within a\nPostgresNode or child class looks like the correct way of doing things\nto me. This would require an extra routine to be able to get the\ninstall path from a node as _installpath would remain internal to the\nmodule file, right? Shouldn't it be something that ought to be\ndirectly part of PostgresNode actually, where we could enforce the lib\nand bin paths to the output of pg_config if an _installpath is not\nprovided by the caller? In short, I am not sure that we need an extra\nmodule here.\n\n> and then for each class method in PostgresNode.pm we'd have an override\n> something like:\n> \n> sub foo\n> {\n> my $node=shift;\n> my $inst = $node->{_installpath};\n> local %ENV = %ENV;\n> $ENV{PATH} = \"$inst/bin:$ENV{PATH}\";\n> $ENV{LD_LIBRARY_PATH} = \"$inst/lib:$ENV{LD_LIBRARY_PATH}\";\n> $node->SUPER::foo(@_);\n> }\n> \n> There might be more elegant ways of doing this, but that's what I came\n> up with.\n\nAs long as it does not become necessary to pass down _installpath to\nall indidivual binary calls we have in PostgresNode or the extra\nmodule, this gets a +1 from me. So, if I am getting that right, the\nkey point is the use of local %ENV here to make sure that PATH and\nLD_LIBRARY_PATH are only enforced when it comes to calls within\nPostgresNode.pm, right? That's an elegant solution. 
This is\nsomething I have wanted for a long time for pg_upgrade to be able to\nget rid of its test.sh.\n\n> My main question is: do we want something like this in the core code\n> (presumably in src/test/perl), or is it not of sufficiently general\n> interest? If it's wanted I'll submit a patch, probably for the March CF,\n> but January if I manage to get my running shoes on. If not, I'll put it\n> in the buildfarm code, but then any TAP tests that want it will likewise\n> need to live there.\n\nThis facility gives us the possibility to clean up the test code of\npg_upgrade and move it to a TAP test, so I'd say that it is worth\nhaving in the core code in the long-term.\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 09:55:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 12/17/20 7:55 PM, Michael Paquier wrote:\n> On Thu, Dec 17, 2020 at 04:37:54PM -0500, Andrew Dunstan wrote:\n>> The proposed module would look something like this:\n>>\n>> [...]\n>>\n>> use parent PostgresNode;\n>>\n>> sub get_new_node\n>> {\n>> ��� my $installpath= shift;\n>> ��� my $node = PostgresNode::get_new_node(@_);\n>> ��� bless $node; # re-bless into current class\n>> ��� $node->{_installpath} = $installpath;\n>> ��� return $node;\n>> }\n> Passing down the installpath as argument and saving it within a\n> PostgresNode or child class looks like the correct way of doing things\n> to me. This would require an extra routine to be able to get the\n> install path from a node as _installpath would remain internal to the\n> module file, right? Shouldn't it be something that ought to be\n> directly part of PostgresNode actually, where we could enforce the lib\n> and bin paths to the output of pg_config if an _installpath is not\n> provided by the caller? In short, I am not sure that we need an extra\n> module here.\n>\n>> and then� for each class method in PostgresNode.pm we'd have an override\n>> something like:\n>>\n>> sub foo\n>> {\n>> ��� my $node=shift;\n>> ��� my $inst = $node->{_installpath};\n>> ��� local %ENV = %ENV;\n>> ��� $ENV{PATH} = \"$inst/bin:$ENV{PATH}\";\n>> ��� $ENV{LD_LIBRARY_PATH} = \"$inst/lib:$ENV{LD_LIBRARY_PATH}\";\n>> ��� $node->SUPER::foo(@_);\n>> }\n>>\n>> There might be more elegant ways of doing this, but that's what I came\n>> up with.\n> As long as it does not become necessary to pass down _installpath to\n> all indidivual binary calls we have in PostgresNode or the extra\n> module, this gets a +1 from me. So, if I am getting that right, the\n> key point is the use of local %ENV here to make sure that PATH and\n> LD_LIBRARY_PATH are only enforced when it comes to calls within\n> PostgresNode.pm, right? That's an elegant solution. 
This is\n> something I have wanted for a long time for pg_upgrade to be able to\n> get rid of its test.sh.\n>\n>> My main question is: do we want something like this in the core code\n>> (presumably in src/test/perl), or is it not of sufficiently general\n>> interest? If it's wanted I'll submit a patch, probably for the March CF,\n>> but January if I manage to get my running shoes on. If not, I'll put it\n>> in the buildfarm code, but then any TAP tests that want it will likewise\n>> need to live there.\n> This facility gives us the possibility to clean up the test code of\n> pg_upgrade and move it to a TAP test, so I'd say that it is worth\n> having in the core code in the long-term.\n\n\nThis turns out to be remarkably short, with the use of a little eval magic.\n\nGive the attached, this test program works just fine:\n\n #!/bin/perl\n use PostgresNodePath;\n $ENV{PG_REGRESS} =\n '/home/andrew/pgl/vpath.12/src/test/regress/pg_regress';\n my $node = get_new_node('/home/andrew/pgl/inst.12.5711','blurfl');\n print $node->info;\n print $node->connstr(),\"\\n\";\n $node->init();\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 19 Dec 2020 11:19:07 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 12/19/20 11:19 AM, Andrew Dunstan wrote:\n>\n>\n> This turns out to be remarkably short, with the use of a little eval magic.\n>\n> Give the attached, this test program works just fine:\n>\n> #!/bin/perl\n> use PostgresNodePath;\n> $ENV{PG_REGRESS} =\n> '/home/andrew/pgl/vpath.12/src/test/regress/pg_regress';\n> my $node = get_new_node('/home/andrew/pgl/inst.12.5711','blurfl');\n> print $node->info;\n> print $node->connstr(),\"\\n\";\n> $node->init();\n>\n\n\nThis time with a typo removed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 20 Dec 2020 12:09:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 2020-12-20 18:09, Andrew Dunstan wrote:\n> On 12/19/20 11:19 AM, Andrew Dunstan wrote:\n>> This turns out to be remarkably short, with the use of a little eval magic.\n>>\n>> Give the attached, this test program works just fine:\n>>\n>> #!/bin/perl\n>> use PostgresNodePath;\n>> $ENV{PG_REGRESS} =\n>> '/home/andrew/pgl/vpath.12/src/test/regress/pg_regress';\n>> my $node = get_new_node('/home/andrew/pgl/inst.12.5711','blurfl');\n>> print $node->info;\n>> print $node->connstr(),\"\\n\";\n>> $node->init();\n> \n> \n> This time with a typo removed.\n\nWhat is proposed the destination of this file? Is it meant to be part \nof a patch? Is it meant to be installed? Is it meant for the buildfarm \ncode?\n\n\n",
"msg_date": "Mon, 11 Jan 2021 15:34:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "> On 17 Dec 2020, at 22:37, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I've been giving some thought to $subject. The initial impetus is the\n> promise I made to assist with testing of clients built with NSS against\n> servers built with openssl, and vice versa.\n\nThanks for tackling!\n\n> My main question is: do we want something like this in the core code\n> (presumably in src/test/perl), or is it not of sufficiently general\n> interest?\n\nTo be able to implement pg_upgrade tests as TAP tests seems like enough of a\nwin to consider this for inclusion in core.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 13 Jan 2021 13:25:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "\nOn 1/11/21 9:34 AM, Peter Eisentraut wrote:\n> On 2020-12-20 18:09, Andrew Dunstan wrote:\n>> On 12/19/20 11:19 AM, Andrew Dunstan wrote:\n>>> This turns out to be remarkably short, with the use of a little eval\n>>> magic.\n>>>\n>>> Give the attached, this test program works just fine:\n>>>\n>>> ���� #!/bin/perl\n>>> ���� use PostgresNodePath;\n>>> ���� $ENV{PG_REGRESS} =\n>>> ���� '/home/andrew/pgl/vpath.12/src/test/regress/pg_regress';\n>>> ���� my $node = get_new_node('/home/andrew/pgl/inst.12.5711','blurfl');\n>>> ���� print $node->info;\n>>> ���� print $node->connstr(),\"\\n\";\n>>> ���� $node->init();\n>>\n>>\n>> This time with a typo removed.\n>\n> What is proposed the destination of this file?� Is it meant to be part\n> of a patch?� Is it meant to be installed?� Is it meant for the\n> buildfarm code?\n\n\nCore code, ideally. I will submit a patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Jan 2021 07:56:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 1/13/21 7:56 AM, Andrew Dunstan wrote:\n> On 1/11/21 9:34 AM, Peter Eisentraut wrote:\n>> On 2020-12-20 18:09, Andrew Dunstan wrote:\n>>> On 12/19/20 11:19 AM, Andrew Dunstan wrote:\n>>>> This turns out to be remarkably short, with the use of a little eval\n>>>> magic.\n>>>>\n>>>> Give the attached, this test program works just fine:\n>>>>\n>>>> ���� #!/bin/perl\n>>>> ���� use PostgresNodePath;\n>>>> ���� $ENV{PG_REGRESS} =\n>>>> ���� '/home/andrew/pgl/vpath.12/src/test/regress/pg_regress';\n>>>> ���� my $node = get_new_node('/home/andrew/pgl/inst.12.5711','blurfl');\n>>>> ���� print $node->info;\n>>>> ���� print $node->connstr(),\"\\n\";\n>>>> ���� $node->init();\n>>>\n>>> This time with a typo removed.\n>> What is proposed the destination of this file?� Is it meant to be part\n>> of a patch?� Is it meant to be installed?� Is it meant for the\n>> buildfarm code?\n>\n> Core code, ideally. I will submit a patch.\n>\n>\n> cheers\n>\n\nHere it is as a patch. I've added some docco in perl pod format, and\nmade it suitable for using on Windows and OSX as well as Linux/*BSD,\nalthough I haven't tested on anything except Linux.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 28 Jan 2021 09:05:02 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 2021-Jan-28, Andrew Dunstan wrote:\n\n... neat stuff, thanks.\n\n> + # Windows picks up DLLs from the PATH rather than *LD_LIBRARY_PATH\n> + # choose the right path separator\n> + if ($Config{osname} eq 'MSWin32')\n> + {\n> + $ENV{PATH} = \"$inst/bin;$inst/lib;$ENV{PATH}\";\n> + }\n> + else\n> + {\n> + $ENV{PATH} = \"$inst/bin:$inst/lib:$ENV{PATH}\";\n> + }\n\nHmm, if only Windows needs lib/ in PATH, then we do we add $inst/lib to\nPATH even when not Windows?\n\n> + if (exists $ENV{DYLIB})\n> + {\n> + $ENV{DYLIB} = \"$inst/lib:$ENV{DYLIB}\";\n> + }\n> + else\n> + {\n> + $ENV{DYLIB} = \"$inst/lib}\";\n\nNote extra closing } in the string here.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Thu, 28 Jan 2021 11:24:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 1/28/21 9:24 AM, Alvaro Herrera wrote:\n> On 2021-Jan-28, Andrew Dunstan wrote:\n>\n> ... neat stuff, thanks.\n>\n>> + # Windows picks up DLLs from the PATH rather than *LD_LIBRARY_PATH\n>> + # choose the right path separator\n>> + if ($Config{osname} eq 'MSWin32')\n>> + {\n>> + $ENV{PATH} = \"$inst/bin;$inst/lib;$ENV{PATH}\";\n>> + }\n>> + else\n>> + {\n>> + $ENV{PATH} = \"$inst/bin:$inst/lib:$ENV{PATH}\";\n>> + }\n> Hmm, if only Windows needs lib/ in PATH, then we do we add $inst/lib to\n> PATH even when not Windows?\n\n\n\nWe could, but there's no point AFAICS. *nix dynamic loaders don't use\nthe PATH on any platform to my knowledge. This is mainly so that Windows\nwill find libpq.dll correctly.\n\n\n\n>\n>> + if (exists $ENV{DYLIB})\n>> + {\n>> + $ENV{DYLIB} = \"$inst/lib:$ENV{DYLIB}\";\n>> + }\n>> + else\n>> + {\n>> + $ENV{DYLIB} = \"$inst/lib}\";\n> Note extra closing } in the string here.\n\n\nOops. fixed, thanks\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 28 Jan 2021 10:19:57 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
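One possible simplification of the separator choice discussed above (a sketch, not what was committed): Perl's Config module already records the platform's PATH separator, so the osname test could be avoided. The `$inst` path here is a made-up example.

```perl
use strict;
use warnings;
use Config;

my $inst = '/opt/pg/inst';                 # hypothetical install path
my $sep  = $Config{path_sep};              # ';' on MSWin32, ':' elsewhere
my $path = join($sep, "$inst/bin", $ENV{PATH} // '');
print "separator: $sep\n";
print "path starts with install bin\n" if $path =~ m{^\Q$inst\E/bin};
```

Whether lib/ also belongs in PATH remains a Windows-only concern, as Álvaro and Andrew note, so the osname test may still be needed for that part.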
{
"msg_contents": "\nOn 1/13/21 7:25 AM, Daniel Gustafsson wrote:\n>> On 17 Dec 2020, at 22:37, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I've been giving some thought to $subject. The initial impetus is the\n>> promise I made to assist with testing of clients built with NSS against\n>> servers built with openssl, and vice versa.\n> Thanks for tackling!\n>\n>> My main question is: do we want something like this in the core code\n>> (presumably in src/test/perl), or is it not of sufficiently general\n>> interest?\n> To be able to implement pg_upgrade tests as TAP tests seems like enough of a\n> win to consider this for inclusion in core.\n>\n\nDaniel, did you have any further comments on this? If not, does anyone\nobject to my committing it?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 11:40:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 10:19:57AM -0500, Andrew Dunstan wrote:\n> +BEGIN\n> +{\n> +\n> + # putting this in a BEGIN block means it's run and checked by perl -c\n> +\n> +\n> + # everything other than info and get_new_node that we need to override.\n> + # they are all instance methods, so we can use the same template for all.\n> + my @instance_overrides = qw(init backup start kill9 stop reload restart\n> + promote logrotate safe_psql psql background_psql\n> + interactive_psql poll_query_until command_ok\n> + command_fails command_like command_checks_all\n> + issues_sql_like run_log pg_recvlogical_upto\n> + );\n\nNo actual objections here, but it would be easy to miss the addition\nof a new routine. Would an exclusion filter be more adapted, aka\noverride everything except get_new_node() and info()?\n--\nMichael",
"msg_date": "Wed, 24 Mar 2021 07:36:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "\nOn 3/23/21 6:36 PM, Michael Paquier wrote:\n> On Thu, Jan 28, 2021 at 10:19:57AM -0500, Andrew Dunstan wrote:\n>> +BEGIN\n>> +{\n>> +\n>> + # putting this in a BEGIN block means it's run and checked by perl -c\n>> +\n>> +\n>> + # everything other than info and get_new_node that we need to override.\n>> + # they are all instance methods, so we can use the same template for all.\n>> + my @instance_overrides = qw(init backup start kill9 stop reload restart\n>> + promote logrotate safe_psql psql background_psql\n>> + interactive_psql poll_query_until command_ok\n>> + command_fails command_like command_checks_all\n>> + issues_sql_like run_log pg_recvlogical_upto\n>> + );\n> No actual objections here, but it would be easy to miss the addition\n> of a new routine. Would an exclusion filter be more adapted, aka\n> override everything except get_new_node() and info()?\n\n\n\nActually, following a brief offline discussion today I've thought of a\nway that doesn't require subclassing. Will post that probably tomorrow.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 19:09:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 3/23/21 7:09 PM, Andrew Dunstan wrote:\n> On 3/23/21 6:36 PM, Michael Paquier wrote:\n>> On Thu, Jan 28, 2021 at 10:19:57AM -0500, Andrew Dunstan wrote:\n>>> +BEGIN\n>>> +{\n>>> +\n>>> + # putting this in a BEGIN block means it's run and checked by perl -c\n>>> +\n>>> +\n>>> + # everything other than info and get_new_node that we need to override.\n>>> + # they are all instance methods, so we can use the same template for all.\n>>> + my @instance_overrides = qw(init backup start kill9 stop reload restart\n>>> + promote logrotate safe_psql psql background_psql\n>>> + interactive_psql poll_query_until command_ok\n>>> + command_fails command_like command_checks_all\n>>> + issues_sql_like run_log pg_recvlogical_upto\n>>> + );\n>> No actual objections here, but it would be easy to miss the addition\n>> of a new routine. Would an exclusion filter be more adapted, aka\n>> override everything except get_new_node() and info()?\n>\n>\n> Actually, following a brief offline discussion today I've thought of a\n> way that doesn't require subclassing. Will post that probably tomorrow.\n>\n\n\nAnd here it is. 
No subclass, no eval, no magic :-) Some of my colleagues\nare a lot happier ;-)\n\nThe downside is that we need to litter PostgresNode with a bunch of\nlines like:\n\n local %ENV = %ENV;\n _set_install_env($self);\n\nThe upside is that there's no longer a possibility that someone will add\na new routine to PostgresNode and forget to update the subclass.\n\nHere is my simple test program:\n\n #!/usr/bin/perl\n\n use lib '/home/andrew/pgl/pg_head/src/test/perl';\n\n # PostgresNode (via TestLib) hijacks stdout, so make a dup before it\n gets a chance\n use vars qw($out);\n BEGIN\n {\n ��� open ($out, \">&STDOUT\");\n }\n\n use PostgresNode;\n\n my $node = PostgresNode->get_new_node('v12', install_path =>\n '/home/andrew/pgl/inst.12.5711');\n\n $ENV{PG_REGRESS} = '/bin/true'; # stupid but necessary\n\n $node->init();\n\n $node->start();\n\n my $version = $node->safe_psql('postgres', 'select version()');\n\n $node->stop();\n\n print $out \"Version: $version\\n\";\n print $out $node->info();\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Mar 2021 07:35:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> And here it is. No subclass, no eval, no magic :-) Some of my colleagues\n> are a lot happier ;-)\n>\n> The downside is that we need to litter PostgresNode with a bunch of\n> lines like:\n>\n> local %ENV = %ENV;\n> _set_install_env($self);\n\nI think it would be even neater having a method that returns the desired\nenvironment and then have the other methods do:\n\n local %ENV = $self->_get_install_env();\n\nThe function could be something like this:\n\nsub _get_install_env\n{\n\tmy $self = shift;\n\tmy $inst = $self->{_install_path};\n\treturn %ENV unless $inst;\n\n my %install_env;\n\tif ($TestLib::windows_os)\n\t{\n\t\t# Windows picks up DLLs from the PATH rather than *LD_LIBRARY_PATH\n\t\t# choose the right path separator\n\t\tif ($Config{osname} eq 'MSWin32')\n\t\t{\n\t\t\t$install_env{PATH} = \"$inst/bin;$inst/lib;$ENV{PATH}\";\n\t\t}\n\t\telse\n\t\t{\n\t\t\t$install_env{PATH} = \"$inst/bin:$inst/lib:$ENV{PATH}\";\n\t\t}\n\t}\n\telse\n\t{\n\t\tmy $dylib_name =\n\t\t $Config{osname} eq 'darwin' ? \"DYLD_LIBRARY_PATH\" : \"LD_LIBRARY_PATH\";\n\t\t$install_env{PATH} = \"$inst/bin:$ENV{PATH}\";\n\t\tif (exists $ENV{$dylib_name})\n\t\t{\n\t\t\t$install_env{$dylib_name} = \"$inst/lib:$ENV{$dylib_name}\";\n\t\t}\n\t\telse\n\t\t{\n\t\t\t$install_env{$dylib_name} = \"$inst/lib\";\n\t\t}\n\t}\n\n return (%ENV, %install_env);\n}\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n",
"msg_date": "Wed, 24 Mar 2021 11:54:03 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "\nOn 3/24/21 7:54 AM, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>\n>> And here it is. No subclass, no eval, no magic :-) Some of my colleagues\n>> are a lot happier ;-)\n>>\n>> The downside is that we need to litter PostgresNode with a bunch of\n>> lines like:\n>>\n>> local %ENV = %ENV;\n>> _set_install_env($self);\n> I think it would be even neater having a method that returns the desired\n> environment and then have the other methods do:\n>\n> local %ENV = $self->_get_install_env();\n\n\nYeah, that's nice. I'll do that. Thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Mar 2021 08:11:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 2021-Mar-24, Dagfinn Ilmari Manns�ker wrote:\n\n> I think it would be even neater having a method that returns the desired\n> environment and then have the other methods do:\n> \n> local %ENV = $self->_get_install_env();\n\nHmm, is it possible to integrate PGHOST and PGPORT handling into this\ntoo? Seems like it is, so the name of the routine should be something\nmore general (and also it should not have the quick \"return unless\n$inst\").\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n",
"msg_date": "Wed, 24 Mar 2021 09:29:22 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "\nOn 3/24/21 8:29 AM, Alvaro Herrera wrote:\n> On 2021-Mar-24, Dagfinn Ilmari Mannsåker wrote:\n>\n>> I think it would be even neater having a method that returns the desired\n>> environment and then have the other methods do:\n>>\n>> local %ENV = $self->_get_install_env();\n> Hmm, is it possible to integrate PGHOST and PGPORT handling into this\n> too? Seems like it is, so the name of the routine should be something\n> more general (and also it should not have the quick \"return unless\n> $inst\").\n>\n\n\nIf we're going to do that we probably shouldn't special case any\nparticular settings, but simply take any extra arguments as extra env\nsettings. And if any setting has undef (e.g. PGAPPNAME) we should unset it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Mar 2021 09:23:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 3/24/21 9:23 AM, Andrew Dunstan wrote:\n> On 3/24/21 8:29 AM, Alvaro Herrera wrote:\n>> On 2021-Mar-24, Dagfinn Ilmari Mannsåker wrote:\n>>\n>>> I think it would be even neater having a method that returns the desired\n>>> environment and then have the other methods do:\n>>>\n>>> local %ENV = $self->_get_install_env();\n>> Hmm, is it possible to integrate PGHOST and PGPORT handling into this\n>> too? Seems like it is, so the name of the routine should be something\n>> more general (and also it should not have the quick \"return unless\n>> $inst\").\n>>\n>\n> If we're going to do that we probably shouldn't special case any\n> particular settings, but simply take any extra arguments as extra env\n> settings. And if any setting has undef (e.g. PGAPPNAME) we should unset it.\n>\n>\n\n\nlike this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Mar 2021 10:58:06 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 2021-Mar-24, Andrew Dunstan wrote:\n\n> \n> On 3/24/21 9:23 AM, Andrew Dunstan wrote:\n> > On 3/24/21 8:29 AM, Alvaro Herrera wrote:\n\n> > If we're going to do that we probably shouldn't special case any\n> > particular settings, but simply take any extra arguments as extra env\n> > settings. And if any setting has undef (e.g. PGAPPNAME) we should unset it.\n\n> like this.\n\nHmm, I like that PGAPPNAME handling has resulted in an overall\nsimplification. I'm not sure why you prefer to keep PGHOST and PGPORT\nhandled individually at each callsite however; why not do it like\n_install, and add them to the environment always? I doubt there's\nanything that requires them *not* to be set; and if there is, it's easy\nto make the claim that that's broken and should be fixed.\n\nI'm just saying that cluttering _get_install_env() with those two\nsettings would result in less clutter overall.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Wed, 24 Mar 2021 12:41:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 3/24/21 11:41 AM, Alvaro Herrera wrote:\n> On 2021-Mar-24, Andrew Dunstan wrote:\n>\n>> On 3/24/21 9:23 AM, Andrew Dunstan wrote:\n>>> On 3/24/21 8:29 AM, Alvaro Herrera wrote:\n>>> If we're going to do that we probably shouldn't special case any\n>>> particular settings, but simply take any extra arguments as extra env\n>>> settings. And if any setting has undef (e.g. PGAPPNAME) we should unset it.\n>> like this.\n> Hmm, I like that PGAPPNAME handling has resulted in an overall\n> simplification. I'm not sure why you prefer to keep PGHOST and PGPORT\n> handled individually at each callsite however; why not do it like\n> _install, and add them to the environment always? I doubt there's\n> anything that requires them *not* to be set; and if there is, it's easy\n> to make the claim that that's broken and should be fixed.\n>\n> I'm just saying that cluttering _get_install_env() with those two\n> settings would result in less clutter overall.\n>\n\n\n\nOK, like this?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Mar 2021 13:53:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: multi-install PostgresNode"
},
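The design the thread converged on can be illustrated standalone (a sketch with invented names, not the committed PostgresNode code): a private helper returns a modified copy of the environment, each method assigns it to a localized %ENV, and an undef value in the extra settings means "unset".

```perl
use strict;
use warnings;

# Returns a copy of %ENV with extra settings applied (undef deletes a
# variable) and the install path's bin directory prepended to PATH.
sub _get_env_copy {
    my ($inst, %extra) = @_;
    my %env = %ENV;
    while (my ($k, $v) = each %extra) {
        defined $v ? ($env{$k} = $v) : delete $env{$k};
    }
    $env{PATH} = "$inst/bin:$env{PATH}" if $inst;
    return %env;
}

sub method_using_binaries {
    my ($inst) = @_;
    # local confines the change to this call; %ENV reverts on return.
    local %ENV = _get_env_copy($inst, PGAPPNAME => undef, PGPORT => 5433);
    return "$ENV{PATH}|$ENV{PGPORT}";
}

my $before = $ENV{PATH};
my $inside = method_using_binaries('/opt/pg12');   # hypothetical path
print(($inside =~ m{^/opt/pg12/bin:} ? "prepended" : "missing"), "\n");
print(($ENV{PATH} eq $before ? "restored" : "leaked"), "\n");
```

This keeps per-node settings (install path, host, port) out of the caller's environment, which is what lets two nodes with different installs coexist in one TAP script.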
{
"msg_contents": "On 2021-Mar-24, Andrew Dunstan wrote:\n\n> OK, like this?\n\nYeah, looks good!\n\n> +# Private routine to return a copy of the environment with the PATH and (DY)LD_LIBRARY_PATH\n> +# correctly set when there is an install path set for the node.\n> +# Routines that call Postgres binaries need to call this routine like this:\n> +#\n> +# local %ENV = $self->_get_install_env{[%extra_settings]);\n> +#\n> +# A copy of the environmnt is taken and node's host and port settings are added\n> +# as PGHOST and PGPORT, Then the extra settings (if any) are applied. Any setting\n> +# in %extra_settings with a value that is undefined is deleted; the remainder are\n> +# set. Then the PATH and (DY)LD_LIBRARY_PATH are adjusted if the node's install path\n> +# is set, and the copy environment is returned.\n\nThere's a typo \"environmnt\" here, and a couple of lines appear\noverlength.\n\n> +sub _get_install_env\n\nI'd use a name that doesn't have \"install\" in it -- maybe _get_env or\n_get_postgres_env or _get_PostgresNode_env -- but don't really care too\nmuch about it.\n\n\n> +# The install path set in get_new_node needs to be a directory containing\n> +# bin and lib subdirectories as in a standard PostgreSQL installation, so this\n> +# can't be used with installations where the bin and lib directories don't have\n> +# a common parent directory.\n\nI've never heard of an installation where that wasn't true. If there\nwas a need for that, seems like it'd be possible to set them with\n{ bindir => ..., libdir => ...} but I doubt it'll ever be necessary.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"We're here to devour each other alive\" (Hobbes)\n\n\n",
"msg_date": "Wed, 24 Mar 2021 15:33:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 03:33:51PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-24, Andrew Dunstan wrote:\n>> +# The install path set in get_new_node needs to be a directory containing\n>> +# bin and lib subdirectories as in a standard PostgreSQL installation, so this\n>> +# can't be used with installations where the bin and lib directories don't have\n>> +# a common parent directory.\n> \n> I've never heard of an installation where that wasn't true. If there\n> was a need for that, seems like it'd be possible to set them with\n> { bindir => ..., libdir => ...} but I doubt it'll ever be necessary.\n\nThis would imply an installation with some fancy --bindir or --libdir\nspecified in ./configure. Never say never, but I also think that what\nhas been committed is fine. And the result is simple, that's really\ncool. So now pg_upgrade's test.sh can be switched to a TAP test.\n--\nMichael",
"msg_date": "Thu, 25 Mar 2021 12:47:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On 25.03.21 04:47, Michael Paquier wrote:\n> On Wed, Mar 24, 2021 at 03:33:51PM -0300, Alvaro Herrera wrote:\n>> On 2021-Mar-24, Andrew Dunstan wrote:\n>>> +# The install path set in get_new_node needs to be a directory containing\n>>> +# bin and lib subdirectories as in a standard PostgreSQL installation, so this\n>>> +# can't be used with installations where the bin and lib directories don't have\n>>> +# a common parent directory.\n>>\n>> I've never heard of an installation where that wasn't true. If there\n>> was a need for that, seems like it'd be possible to set them with\n>> { bindir => ..., libdir => ...} but I doubt it'll ever be necessary.\n> \n> This would imply an installation with some fancy --bindir or --libdir\n> specified in ./configure. Never say never, but I also think that what\n> has been committed is fine.\n\n/usr/lib64/? /usr/lib/x86_64-linux-gnu/? Seems pretty common.\n\n\n",
"msg_date": "Thu, 25 Mar 2021 09:23:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 09:23:22AM +0100, Peter Eisentraut wrote:\n> On 25.03.21 04:47, Michael Paquier wrote:\n>> This would imply an installation with some fancy --bindir or --libdir\n>> specified in ./configure. Never say never, but I also think that what\n>> has been committed is fine.\n> \n> /usr/lib64/? /usr/lib/x86_64-linux-gnu/? Seems pretty common.\n\nAs part of the main PostgreSQL package set, yes, things are usually\nmixed. Now, when it comes to the handling of conflicts between\nmultiple major versions, I have yet to see installations that do not\nuse the same base path for the binaries and libraries, and the PGDG\npackages do that with /usr/pgsql-NN/. So, I doubt that we are going\nto need this amount of control in reality, but I may be wrong, of\ncourse :)\n--\nMichael",
"msg_date": "Mon, 29 Mar 2021 11:04:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: multi-install PostgresNode"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs mentioned in [1], there are three places where there is the same\nroutine to check if a string is made only of ASCII characters.\n\nThis makes for a small-ish but nice cleanup, as per the attached.\n\nThanks,\n\n[1]: https://www.postgresql.org/message-id/X9lVLGRuG0hTHrVo@paquier.xyz\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 12:57:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Refactor routine to check for ASCII-only case"
},
{
"msg_contents": "On 18/12/2020 05:57, Michael Paquier wrote:\n> As mentioned in [1], there are three places where there is the same\n> routine to check if a string is made only of ASCII characters.\n> \n> This makes for a small-ish but nice cleanup, as per the attached.\n\n+1\n\n- Heikki\n\n\n",
"msg_date": "Fri, 18 Dec 2020 11:54:24 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Refactor routine to check for ASCII-only case"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 18/12/2020 05:57, Michael Paquier wrote:\n> >As mentioned in [1], there are three places where there is the same\n> >routine to check if a string is made only of ASCII characters.\n> >\n> >This makes for a small-ish but nice cleanup, as per the attached.\n> \n> +1\n\nYeah, in a quick look, this looks like a good improvement.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 18 Dec 2020 11:30:16 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Refactor routine to check for ASCII-only case"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:30:16AM -0500, Stephen Frost wrote:\n> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>> +1\n> \n> Yeah, in a quick look, this looks like a good improvement.\n\nThanks. This has been applied as of 93e8ff8.\n--\nMichael",
"msg_date": "Mon, 21 Dec 2020 09:44:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor routine to check for ASCII-only case"
}
] |
[
{
"msg_contents": "Hi, hackers\n\npgbench uses -f filename[@weight] to receive a SQL script file with a weight,\nbut if I create a file whose name contains the char '@', like a@2.sql, and specify this file without a weight,\npgbench will fail with the error:\n\tpgbench: fatal: invalid weight specification: @2.sql\n\nThis behavior may be unfriendly, because the char '@' is a valid filename character on Linux\nand Windows.\n\nI have created a patch to modify this behavior. The patch is attached.\n\nThoughts?\n\nRegards\nShenhao Wang",
"msg_date": "Fri, 18 Dec 2020 06:22:36 +0000",
"msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "pgbench failed when -f option contains a char '@'"
},
{
"msg_contents": "On 18/12/2020 08:22, Wang, Shenhao wrote:\n> Hi, hackers\n> \n> pgbench use -f filename[@weight] to receive a sql script file with a weight,\n> but if I create a file contains char'@', like a@2.sql, specify this file without weigth,\n> pgbench will failed with error:\n> \tpgbench: fatal: invalid weight specification: @2.sql\n> \n> This action may be unfriendly, because the char '@' is a valid character on Linux\n> and Windows.\n> \n> I have created a patch to modify this action. The patch is attached.\n\nThis patch changes it to first check if the file \"a@2.sql\" exists, and \nif it doesn't, only then it tries to interpret it as a weight, as \nfilename \"a\" and weight \"2.sql\". That still doesn't fix the underlying \nambiguity, though. If you have a file called \"script\" and \"script@1\", \nthis makes it impossible to specify \"script\" with weight 1, because \"-f \nscript@1\" will now always open the file \"script@1\".\n\nI think we should just leave this as it is. The user can simply rename \nthe file.\n\nOr maybe one change would be worthwhile here: First check if the part \nafter the @ contains only digits. If it doesn't, then assume it's part of \nthe filename rather than a weight. That would fix this for cases like \n\"foo@1.sql\", although not for \"foo@1\".\n\n- Heikki\n\n\n",
"msg_date": "Fri, 18 Dec 2020 10:59:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pgbench failed when -f option contains a char '@'"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I think we should just leave this as it is. The user can simply rename \n> the file.\n\nYeah. The assumption when we defined the script-weight syntax was that\nthere's no particular reason to use \"@\" in a script file name, and\nI don't see why that's a bad assumption.\n\n> Or maybe one change would be worthwhile here: First check if the part \n> after the @ contains only digits. If doesn't, then assume it's part of \n> the filename rather than a weight. That would fix this for cases like \n> \"foo@1.sql\", although not for \"foo@1\".\n\nI do not like introducing ambiguity of that sort. Not being entirely\nclear on which script file is going to be read seems like a recipe\nfor security issues.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Dec 2020 10:10:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench failed when -f option contains a char '@'"
},
{
"msg_contents": "\nHello,\n\n> pgbench use -f filename[@weight] to receive a sql script file with a weight,\n\nISTM that I thought of this: \"pgbench -f filen@me@1\" does work.\n\n sh> touch foo@bla\n sh> pgbench -f foo@bla@1\n pgbench: fatal: empty command list for script \"foo@bla\"\n\nThe documentation could point this out, though.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 20 Dec 2020 14:31:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench failed when -f option contains a char '@'"
},
{
"msg_contents": "Hello Tom,\n\n>> I think we should just leave this as it is. The user can simply rename\n>> the file.\n>\n> Yeah. The assumption when we defined the script-weight syntax was that\n> there's no particular reason to use \"@\" in a script file name, and\n> I don't see why that's a bad assumption.\n\nThe \"parser\" looks for the last @ in the argument, so the simple \nworkaround is to append \"@1\".\n\nI suggest the attached doc update, or anything in better English.\n\n-- \nFabien.",
"msg_date": "Sun, 20 Dec 2020 14:43:01 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench failed when -f option contains a char '@'"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> The \"parser\" looks for the last @ in the argument, so the simple \n> workaround is to append \"@1\".\n> I suggest the attached doc update, or anything in better English.\n\nAgreed, done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Dec 2020 13:38:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench failed when -f option contains a char '@'"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs of the work done in 87ae9691, I have played with error injections\nin the code paths using this code, but forgot to account for cases where\ncascading resowner cleanups are involved. Like other resources (JIT,\nDSM, etc.), this requires an allocation in TopMemoryContext to make\nsure that nothing gets forgotten or cleaned up on the way until the\nresowner that did the cryptohash allocation is handled.\n\nAttached is a small extension I have played with by doing some error\ninjections, and a patch. If there are no objections, I would like to\ncommit this fix.\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 16:35:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Incorrect allocation handling for cryptohash functions with OpenSSL"
},
{
"msg_contents": "On 18/12/2020 09:35, Michael Paquier wrote:\n> Hi all,\n> \n> As of the work done in 87ae9691, I have played with error injections\n> in the code paths using this code, but forgot to count for cases where\n> cascading resowner cleanups are involved. Like other resources (JIT,\n> DSM, etc.), this requires an allocation in TopMemoryContext to make\n> sure that nothing gets forgotten or cleaned up on the way until the\n> resowner that did the cryptohash allocation is handled.\n> \n> Attached is a small extension I have played with by doing some error\n> injections, and a patch. If there are no objections, I would like to\n> commit this fix.\n\npg_cryptohash_create() is now susceptible to leaking memory in \nTopMemoryContext, if the allocations fail. I think the attached should \nfix it (but I haven't tested it at all).\n\nBTW, looking at pg_cryptohash_ctx and pg_cryptohash_state, why do we \nneed two structs? They're both allocated and controlled by the \ncryptohash implementation. It would seem simpler to have just one.\n\n- Heikki",
"msg_date": "Fri, 18 Dec 2020 11:35:14 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 18/12/2020 11:35, Heikki Linnakangas wrote:\n> BTW, looking at pg_cryptohash_ctx and pg_cryptohash_state, why do we\n> need two structs? They're both allocated and controlled by the\n> cryptohash implementation. It would seem simpler to have just one.\n\nSomething like this. Passes regression tests, but otherwise untested.\n\n- Heikki",
"msg_date": "Fri, 18 Dec 2020 11:51:55 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:35:14AM +0200, Heikki Linnakangas wrote:\n> pg_cryptohash_create() is now susceptible to leaking memory in\n> TopMemoryContext, if the allocations fail. I think the attached should fix\n> it (but I haven't tested it at all).\n\nYeah, you are right here. If the second allocation fails the first\none would leak. I don't think that your suggested fix is completely\nright though because it ignores that the callers of\npg_cryptohash_create() in the backend expect an error all the time, so\nit could crash. Perhaps we should just bite the bullet and switch the\nOpenSSL and fallback implementations to use allocation APIs that never\ncause an error, and always return NULL? That would have the advantage\nto be more consistent with the frontend that relies in malloc(), at\nthe cost of requiring more changes for the backend code where the\n_create() call would need to handle the NULL case properly. The\nbackend calls are already aware of errors so that would not be\ninvasive except for the addition of some elog(ERROR) or similar, and\nwe could change the fallback implementation to use palloc_extended()\nwith MCXT_ALLOC_NO_OOM.\n\n> BTW, looking at pg_cryptohash_ctx and pg_cryptohash_state, why do we need\n> two structs? They're both allocated and controlled by the cryptohash\n> implementation. It would seem simpler to have just one.\n\nDepending on the implementation, the data to track can be completely \ndifferent, and this split allows to know about the resowner dependency\nonly in the OpenSSL part of cryptohashes, without having to include\nthis knowledge in neither cryptohash.h nor in the fallback\nimplementation that can just use palloc() in the backend.\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 19:10:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:51:55AM +0200, Heikki Linnakangas wrote:\n> Something like this. Passes regression tests, but otherwise untested.\n\n... And I wanted to keep track of the type of cryptohash directly in\nthe shared structure. ;)\n--\nMichael",
"msg_date": "Fri, 18 Dec 2020 19:14:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 18/12/2020 12:10, Michael Paquier wrote:\n> On Fri, Dec 18, 2020 at 11:35:14AM +0200, Heikki Linnakangas wrote:\n>> pg_cryptohash_create() is now susceptible to leaking memory in\n>> TopMemoryContext, if the allocations fail. I think the attached should fix\n>> it (but I haven't tested it at all).\n> \n> Yeah, you are right here. If the second allocation fails the first\n> one would leak. I don't think that your suggested fix is completely\n> right though because it ignores that the callers of\n> pg_cryptohash_create() in the backend expect an error all the time, so\n> it could crash.\n\nAh, right.\n\n> Perhaps we should just bite the bullet and switch the\n> OpenSSL and fallback implementations to use allocation APIs that never\n> cause an error, and always return NULL? That would have the advantage\n> to be more consistent with the frontend that relies in malloc(), at\n> the cost of requiring more changes for the backend code where the\n> _create() call would need to handle the NULL case properly. The\n> backend calls are already aware of errors so that would not be\n> invasive except for the addition of some elog(ERROR) or similar, and\n> we could change the fallback implementation to use palloc_extended()\n> with MCXT_ALLOC_NO_OOM.\n\n-1. On the contrary, I think we should reduce the number of checks \nneeded in the callers, and prefer throwing errors, if possible. It's too \neasy to forget the check, and it makes the code more verbose, too.\n\nIn fact, it might be better if pg_cryptohash_init() and \npg_cryptohash_update() didn't return errors either. If an error happens, \nthey could just set a flag in the pg_cryptohash_ctx, and \npg_cryptohash_final() function would return the error. That way, you \nwould only need to check for error return in the call to \npg_cryptohash_final().\n\n>> BTW, looking at pg_cryptohash_ctx and pg_cryptohash_state, why do we need\n>> two structs? 
They're both allocated and controlled by the cryptohash\n>> implementation. It would seem simpler to have just one.\n> \n> Depending on the implementation, the data to track can be completely\n> different, and this split allows to know about the resowner dependency\n> only in the OpenSSL part of cryptohashes, without having to include\n> this knowledge in neither cryptohash.h nor in the fallback\n> implementation that can just use palloc() in the backend.\n\n> ... And I wanted to keep track of the type of cryptohash directly in\n> the shared structure. ;)\n\nYou could also define a shared header, with the rest of the struct being \nimplementation-specific:\n\ntypedef struct pg_cryptohash_ctx\n{\n\tpg_cryptohash_type type;\n\n\t/* implementation-specific data follows */\n} pg_cryptohash_ctx;\n\n- Heikki\n\n\n",
"msg_date": "Fri, 18 Dec 2020 12:55:19 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 18/12/2020 12:55, Heikki Linnakangas wrote:\n> On 18/12/2020 12:10, Michael Paquier wrote:\n>> On Fri, Dec 18, 2020 at 11:35:14AM +0200, Heikki Linnakangas wrote:\n>>> pg_cryptohash_create() is now susceptible to leaking memory in\n>>> TopMemoryContext, if the allocations fail. I think the attached should fix\n>>> it (but I haven't tested it at all).\n>>\n>> Yeah, you are right here. If the second allocation fails the first\n>> one would leak. I don't think that your suggested fix is completely\n>> right though because it ignores that the callers of\n>> pg_cryptohash_create() in the backend expect an error all the time, so\n>> it could crash.\n> \n> Ah, right.\n> \n>> Perhaps we should just bite the bullet and switch the\n>> OpenSSL and fallback implementations to use allocation APIs that never\n>> cause an error, and always return NULL? That would have the advantage\n>> to be more consistent with the frontend that relies in malloc(), at\n>> the cost of requiring more changes for the backend code where the\n>> _create() call would need to handle the NULL case properly. The\n>> backend calls are already aware of errors so that would not be\n>> invasive except for the addition of some elog(ERROR) or similar, and\n>> we could change the fallback implementation to use palloc_extended()\n>> with MCXT_ALLOC_NO_OOM.\n> \n> -1. On the contrary, I think we should reduce the number of checks\n> needed in the callers, and prefer throwing errors, if possible. It's too\n> easy to forget the check, and it makes the code more verbose, too.\n> \n> In fact, it might be better if pg_cryptohash_init() and\n> pg_cryptohash_update() didn't return errors either. If an error happens,\n> they could just set a flag in the pg_cryptohash_ctx, and\n> pg_cryptohash_final() function would return the error. 
That way, you\n> would only need to check for error return in the call to\n> pg_cryptohash_final().\n\nBTW, it's a bit weird that the pg_cryptohash_init/update/final() \nfunctions return success, if the ctx argument is NULL. It would seem \nmore sensible for them to return an error. That way, if a caller forgets \nto check for NULL result from pg_cryptohash_create(), but correctly \nchecks the result of those other functions, it would catch the error. In \nfact, if we documented that pg_cryptohash_create() can return NULL, and \npg_cryptohash_final() always returns error on NULL argument, then it \nwould be sufficient for the callers to only check the return value of \npg_cryptohash_final(). So the usage pattern would be:\n\nctx = pg_cryptohash_create(PG_MD5);\npg_cryptohash_init(ctx);\npg_cryptohash_update(ctx, data, size);\npg_cryptohash_update(ctx, moredata, size);\nif (pg_cryptohash_final(ctx, &hash) < 0)\n elog(ERROR, \"md5 calculation failed\");\npg_cryptohash_free(ctx);\n\n- Heikki\n\n\n",
"msg_date": "Fri, 18 Dec 2020 13:04:27 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 01:04:27PM +0200, Heikki Linnakangas wrote:\n> BTW, it's a bit weird that the pg_cryptohash_init/update/final() functions\n> return success, if the ctx argument is NULL. It would seem more sensible for\n> them to return an error.\n\nOkay.\n\n> That way, if a caller forgets to check for NULL\n> result from pg_cryptohash_create(), but correctly checks the result of those\n> other functions, it would catch the error. In fact, if we documented that\n> pg_cryptohash_create() can return NULL, and pg_cryptohash_final() always\n> returns error on NULL argument, then it would be sufficient for the callers\n> to only check the return value of pg_cryptohash_final(). So the usage\n> pattern would be:\n> \n> ctx = pg_cryptohash_create(PG_MD5);\n> pg_cryptohash_inti(ctx);\n> pg_update(ctx, data, size);\n> pg_update(ctx, moredata, size);\n> if (pg_cryptohash_final(ctx, &hash) < 0)\n> elog(ERROR, \"md5 calculation failed\");\n> pg_cryptohash_free(ctx);\n\nI'd rather keep the init and update routines to return an error code\ndirectly. This is more consistent with OpenSSL (note that libnss does\nnot return error codes for the init, update and final but it is\npossible to grab for errors then react on that). And we have even in\nour tree code paths a-la-pgcrypto that have callbacks for each phase\nwith some processing in-between. HMAC also gets a bit cleaner by\nkeeping this flexibility IMO.\n--\nMichael",
"msg_date": "Sat, 19 Dec 2020 09:52:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 11:51:55AM +0200, Heikki Linnakangas wrote:\n> On 18/12/2020 11:35, Heikki Linnakangas wrote:\n> > BTW, looking at pg_cryptohash_ctx and pg_cryptohash_state, why do we\n> > need two structs? They're both allocated and controlled by the\n> > cryptohash implementation. It would seem simpler to have just one.\n> \n> Something like this. Passes regression tests, but otherwise untested.\n\nInteresting. I have looked at that with a fresh mind, thanks for the\nidea. This reduces the number of allocations to one making the error\nhandling a no-brainer, at the cost of hiding the cryptohash type\ndirectly to the caller. I originally thought that this would be\nuseful as I recall reading cases in the OpenSSL code doing checks on\nhash type used, but perhaps that's just some over-engineered thoughts\nfrom my side. I have found a couple of small-ish issues, please see\nmy comments below.\n\n+ /*\n+ * FIXME: this always allocates enough space for the largest hash.\n+ * A smaller allocation would be enough for md5, sha224 and sha256.\n+ */\nI am not sure that this is worth complicating more, and we are not\ntalking about a lot of memory (sha512 requires 208 bytes, sha224/256\n104 bytes, md5 96 bytes with a quick measurement). This makes free()\nequally more simple. So just allocating the amount of memory based on\nthe max size in the union looks fine to me. 
I would add a memset(0)\nafter this allocation though.\n\n-#define ALLOC(size) palloc(size)\n+#define ALLOC(size) MemoryContextAllocExtended(TopMemoryContext, size, MCXT_ALLOC_NO_OOM)\nAs the only allocation in TopMemoryContext is for the context, it\nwould be fine to not use MCXT_ALLOC_NO_OOM here, and fail so as\ncallers in the backend don't need to worry about create() returning\nNULL.\n\n- state->evpctx = EVP_MD_CTX_create();\n+ ctx->evpctx = EVP_MD_CTX_create();\n\n- if (state->evpctx == NULL)\n+ if (ctx->evpctx == NULL)\n {\nIf EVP_MD_CTX_create() fails, you would leak memory with the context\nallocated in TopMemoryContext. So this requires a free of the context\nbefore the elog(ERROR).\n\n+ /*\n+ * Make sure that the resource owner has space to remember this\n+ * reference. This can error out with \"out of memory\", so do this\n+ * before any other allocation to avoid leaking.\n+ */\n #ifndef FRONTEND\n ResourceOwnerEnlargeCryptoHash(CurrentResourceOwner);\n #endif\nRight. Good point.\n\n- /* OpenSSL internals return 1 on success, 0 on failure */\n+ /* openssl internals return 1 on success, 0 on failure */\nIt seems to me that this was not wanted.\n\nAt the same time, I have taken care of your comment from upthread to\nreturn a failure if the caller passes NULL for the context, and\nadjusted some comments. What do you think of the attached?\n--\nMichael",
"msg_date": "Sat, 19 Dec 2020 15:13:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Fri, Dec 18, 2020 at 6:04 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> BTW, it's a bit weird that the pg_cryptohash_init/update/final()\n> functions return success, if the ctx argument is NULL. It would seem\n> more sensible for them to return an error. That way, if a caller forgets\n> to check for NULL result from pg_cryptohash_create(), but correctly\n> checks the result of those other functions, it would catch the error. In\n> fact, if we documented that pg_cryptohash_create() can return NULL, and\n> pg_cryptohash_final() always returns error on NULL argument, then it\n> would be sufficient for the callers to only check the return value of\n> pg_cryptohash_final(). So the usage pattern would be:\n>\n> ctx = pg_cryptohash_create(PG_MD5);\n> pg_cryptohash_init(ctx);\n> pg_cryptohash_update(ctx, data, size);\n> pg_cryptohash_update(ctx, moredata, size);\n> if (pg_cryptohash_final(ctx, &hash) < 0)\n> elog(ERROR, \"md5 calculation failed\");\n> pg_cryptohash_free(ctx);\n\nTBH, I think there's no point in returning an error here at all, because\nit's totally non-specific. You have no idea what failed, just that\nsomething failed. Blech. If we want to check that ctx is non-NULL, we\nshould do that with an Assert(). Complicating the code with error\nchecks that have to be added in multiple places that are far removed\nfrom where the actual problem was detected stinks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Dec 2020 16:28:26 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 04:28:26PM -0500, Robert Haas wrote:\n> TBH, I think there's no point in return an error here at all, because\n> it's totally non-specific. You have no idea what failed, just that\n> something failed. Blech. If we want to check that ctx is non-NULL, we\n> should do that with an Assert(). Complicating the code with error\n> checks that have to be added in multiple places that are far removed\n> from where the actual problem was detected stinks.\n\nYou could technically do that, but only for the backend at the cost of\npainting the code of src/common/ with more #ifdef FRONTEND. Even if\nwe do that, enforcing an error in the backend could be a problem when\nit comes to some code paths. One of them is the SCRAM mock\nauthentication where we had better generate a generic error message.\nUsing an Assert() or just letting the code go through is not good\neither, as we should avoid incorrect computations or crash on OOM, not\nto mention that this would fail the detection of bugs coming directly\nfrom OpenSSL or any other SSL library this code plugs with. In short,\nI think that there are more benefits in letting the caller control the\nerror handling.\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 09:57:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 03:13:50PM +0900, Michael Paquier wrote:\n> At the same time, I have taken care of your comment from upthread to\n> return a failure if the caller passes NULL for the context, and\n> adjusted some comments. What do you think of the attached?\n\nI have looked again at this thread with a fresher mind and I did not\nsee a problem with the previous patch, except some indentation\nissues. So if there are no objections, I'd like to commit the\nattached.\n--\nMichael",
"msg_date": "Wed, 6 Jan 2021 20:42:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 06/01/2021 13:42, Michael Paquier wrote:\n> On Sat, Dec 19, 2020 at 03:13:50PM +0900, Michael Paquier wrote:\n>> At the same time, I have taken care of your comment from upthread to\n>> return a failure if the caller passes NULL for the context, and\n>> adjusted some comments. What do you think of the attached?\n> \n> I have looked again at this thread with a fresher mind and I did not\n> see a problem with the previous patch, except some indentation\n> issues. So if there are no objections, I'd like to commit the\n> attached.\n\nLooks fine to me.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 6 Jan 2021 15:27:03 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 25/12/2020 02:57, Michael Paquier wrote:\n> On Mon, Dec 21, 2020 at 04:28:26PM -0500, Robert Haas wrote:\n>> TBH, I think there's no point in return an error here at all, because\n>> it's totally non-specific. You have no idea what failed, just that\n>> something failed. Blech. If we want to check that ctx is non-NULL, we\n>> should do that with an Assert(). Complicating the code with error\n>> checks that have to be added in multiple places that are far removed\n>> from where the actual problem was detected stinks.\n> \n> You could technically do that, but only for the backend at the cost of\n> painting the code of src/common/ with more #ifdef FRONTEND. Even if\n> we do that, enforcing an error in the backend could be a problem when\n> it comes to some code paths.\n\nYeah, you would still need to remember to check for the error in \nfrontend code. Maybe it would still be a good idea, not sure. It would \nbe a nice backstop, if you forget to check for the error.\n\nI had a quick look at the callers:\n\ncontrib/pgcrypto/internal-sha2.c and \nsrc/backend/utils/adt/cryptohashfuncs.c: the call to \npg_cryptohash_create() is missing check for NULL result. With your \nlatest patch, that's OK because the subsequent pg_cryptohash_update() \ncalls will fail if passed a NULL context. But seems sloppy.\n\ncontrib/pgcrypto/internal.c: all the calls to pg_cryptohash_* functions \nare missing checks for error return codes.\n\ncontrib/uuid-ossp/uuid-ossp.c: uses pg_cryptohash for MD5, but borrows \nthe built-in implementation of SHA1 on some platforms. Should we add \nsupport for SHA1 in pg_cryptohash and use that for consistency?\n\nsrc/backend/libpq/auth-scram.c: SHA256 is used in the mock \nauthentication. If the pg_cryptohash functions fail, we throw a distinct \nerror \"could not encode salt\" that reveals that it was a mock \nauthentication. 
I don't think this is a big deal in practice, it would \nbe hard for an attacker to induce the SHA256 computation to fail, and \nthere are probably better ways to distinguish mock authentication from \nreal, like timing attacks. But still.\n\nsrc/include/common/checksum_helper.h: in pg_checksum_raw_context, do we \nstill need separate fields for the different sha variants.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 6 Jan 2021 15:58:22 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Wed, Jan 06, 2021 at 03:27:03PM +0200, Heikki Linnakangas wrote:\n> Looks fine to me.\n\nThanks, I have been able to get this part done as of 55fe26a.\n--\nMichael",
"msg_date": "Thu, 7 Jan 2021 12:42:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Wed, Jan 06, 2021 at 03:58:22PM +0200, Heikki Linnakangas wrote:\n> contrib/pgcrypto/internal-sha2.c and\n> src/backend/utils/adt/cryptohashfuncs.c: the call to pg_cryptohash_create()\n> is missing check for NULL result. With your latest patch, that's OK because\n> the subsequent pg_cryptohash_update() calls will fail if passed a NULL\n> context. But seems sloppy.\n\nIs it? pg_cryptohash_create() will never return NULL for the backend.\n\n> contrib/pgcrypto/internal.c: all the calls to pg_cryptohash_* functions are\n> missing checks for error return codes.\n\nIndeed, these are incorrect, thanks. I'll go fix that separately.\n\n> contrib/uuid-ossp/uuid-ossp.c: uses pg_cryptohash for MD5, but borrows the\n> built-in implementation of SHA1 on some platforms. Should we add support for\n> SHA1 in pg_cryptohash and use that for consistency?\n\nYeah, I have sent a separate patch for that:\nhttps://commitfest.postgresql.org/31/2868/\nThe cleanups produced by this patch are kind of nice.\n\n> src/backend/libpq/auth-scram.c: SHA256 is used in the mock authentication.\n> If the pg_cryptohash functions fail, we throw a distinct error \"could not\n> encode salt\" that reveals that it was a mock authentication. I don't think\n> this is a big deal in practice, it would be hard for an attacker to induce\n> the SHA256 computation to fail, and there are probably better ways to\n> distinguish mock authentication from real, like timing attacks. But still.\n\nThis maps with the second error in the mock routine, so wouldn't it be\nbetter to change both and back-patch? The last place where this error\nmessage is used is pg_be_scram_build_secret() for the generation of\nwhat's stored in pg_authid. 
An idea would be to use \"out of memory\".\nThat can be faced for any palloc() calls.\n\n> src/include/common/checksum_helper.h: in pg_checksum_raw_context, do we\n> still need separate fields for the different sha variants.\n\nUsing separate fields looked cleaner to me if it came to debugging,\nand the cleanup was rather minimal except if we use more switch/case\nto set up the various variables. What about something like the\nattached?\n--\nMichael",
"msg_date": "Thu, 7 Jan 2021 13:15:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On 07/01/2021 06:15, Michael Paquier wrote:\n> On Wed, Jan 06, 2021 at 03:58:22PM +0200, Heikki Linnakangas wrote:\n>> contrib/pgcrypto/internal-sha2.c and\n>> src/backend/utils/adt/cryptohashfuncs.c: the call to pg_cryptohash_create()\n>> is missing check for NULL result. With your latest patch, that's OK because\n>> the subsequent pg_cryptohash_update() calls will fail if passed a NULL\n>> context. But seems sloppy.\n> \n> Is it? pg_cryptohash_create() will never return NULL for the backend.\n\nAh, you're right.\n\n>> src/backend/libpq/auth-scram.c: SHA256 is used in the mock authentication.\n>> If the pg_cryptohash functions fail, we throw a distinct error \"could not\n>> encode salt\" that reveals that it was a mock authentication. I don't think\n>> this is a big deal in practice, it would be hard for an attacker to induce\n>> the SHA256 computation to fail, and there are probably better ways to\n>> distinguish mock authentication from real, like timing attacks. But still.\n> \n> This maps with the second error in the mock routine, so wouldn't it be\n> better to change both and back-patch? The last place where this error\n> message is used is pg_be_scram_build_secret() for the generation of\n> what's stored in pg_authid. An idea would be to use \"out of memory\".\n> That can be faced for any palloc() calls.\n\nHmm. Perhaps it would be best to change all the errors in mock \nauthentication to COMMERROR. It'd be nice to have an accurate error \nmessage in the log, but no need to send it to the client.\n\n>> src/include/common/checksum_helper.h: in pg_checksum_raw_context, do we\n>> still need separate fields for the different sha variants.\n> \n> Using separate fields looked cleaner to me if it came to debugging,\n> and the cleanup was rather minimal except if we use more switch/case\n> to set up the various variables. What about something like the\n> attached?\n\n+1\n\n- Heikki\n\n\n",
"msg_date": "Thu, 7 Jan 2021 09:51:00 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
},
{
"msg_contents": "On Thu, Jan 07, 2021 at 09:51:00AM +0200, Heikki Linnakangas wrote:\n> Hmm. Perhaps it would be best to change all the errors in mock\n> authentication to COMMERROR. It'd be nice to have an accurate error message\n> in the log, but no need to send it to the client.\n\nYeah, we could do that. Still, this mode still requires a hard\nfailure because COMMERROR is just a log, and if only COMMERROR is done\nwe still expect a salt to be generated to send a challenge back to the\nclient, which would require a fallback for the salt if the one\ngenerated from the mock nonce cannot. Need to think more about that.\n\n>> Using separate fields looked cleaner to me if it came to debugging,\n>> and the cleanup was rather minimal except if we use more switch/case\n>> to set up the various variables. What about something like the\n>> attached?\n> \n> +1\n\nThanks, I have committed this part.\n--\nMichael",
"msg_date": "Fri, 8 Jan 2021 11:29:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect allocation handling for cryptohash functions with\n OpenSSL"
}
]
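The caller discipline debated in this thread (e.g. contrib/pgcrypto/internal.c ignoring pg_cryptohash_* return codes) can be sketched with a toy stand-in for the API. Everything below is illustrative: the `toy_hash_*` names and the checksum "digest" are invented, not the real src/common/cryptohash code — only the error-reporting shape (NULL from create on allocation failure, negative return codes from update/final) mirrors the discussion:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Toy stand-in for the pg_cryptohash_* API.  The real functions live in
 * src/common/cryptohash*.c; this sketch only mirrors their error shape.
 */
typedef struct toy_hash_ctx
{
	unsigned	sum;			/* not a real digest, just a checksum */
} toy_hash_ctx;

static toy_hash_ctx *
toy_hash_create(void)
{
	/* like the frontend path, this can return NULL on OOM */
	return calloc(1, sizeof(toy_hash_ctx));
}

static int
toy_hash_update(toy_hash_ctx *ctx, const unsigned char *data, size_t len)
{
	if (ctx == NULL)
		return -1;				/* the backstop Heikki mentions */
	for (size_t i = 0; i < len; i++)
		ctx->sum = ctx->sum * 31u + data[i];
	return 0;
}

static int
toy_hash_final(toy_hash_ctx *ctx, unsigned *out)
{
	if (ctx == NULL || out == NULL)
		return -1;
	*out = ctx->sum;
	return 0;
}

static void
toy_hash_free(toy_hash_ctx *ctx)
{
	free(ctx);
}

/*
 * Caller written the way the thread says the contrib code should be:
 * every return code is checked instead of silently ignored.
 */
static int
toy_digest(const char *s, unsigned *out)
{
	toy_hash_ctx *ctx = toy_hash_create();

	if (ctx == NULL)
		return -1;
	if (toy_hash_update(ctx, (const unsigned char *) s, strlen(s)) < 0 ||
		toy_hash_final(ctx, out) < 0)
	{
		toy_hash_free(ctx);
		return -1;
	}
	toy_hash_free(ctx);
	return 0;
}
```

The point is the caller's shape: because create can return NULL and update/final report failure, each call site propagates the error rather than assuming success — the pattern the thread's follow-up commit applied to contrib/pgcrypto.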
[
{
"msg_contents": "Hello, hackers\n\n\nI have a question about how to execute valgrind with TAP tests\nin order to check some patches in the community.\nMy main interest is testing src/test/subscription now but\nis there any general way to do it ?\n\nThe documentation [1] says\n\"It's important to realize that the TAP tests will start test server(s) even when you say make installcheck\".\nThen, when I executed postgres that is launched by valgrind, it didn't react to the test execution of \"make installcheck\".\n\nIn other words, I can execute make installcheck without starting up my instance,\nbecause TAP tests create their own servers\nat least in terms of the case of my interested test, src/test/subscription.\n\n[1] - https://www.postgresql.org/docs/13/regress-tap.html\n\nCould someone give me an advice ?\n\nBest Regards,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Fri, 18 Dec 2020 08:45:24 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "how to use valgrind for TAP tests"
},
{
"msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> I have a question about how to execute valgrind with TAP tests\n> in order to check some patches in the community.\n> My main interest is testing src/test/subscription now but\n> is there any general way to do it ?\n\nThe standard solution is\n\n(1) Build normally (well, with -DUSE_VALGRIND)\n(2) Move the postgres executable aside, say\n mv src/backend/postgres src/backend/postgres.orig\n(3) Replace the executable with a wrapper script that invokes\n valgrind on the original executable\n(4) Now you can run \"make check\" with a valgrind'ed server,\n as well as things that depend on \"make check\", such as TAP tests\n\nThe script I use for (3) is attached; adjust paths and options to taste.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 18 Dec 2020 11:02:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how to use valgrind for TAP tests"
},
{
"msg_contents": "Hello,\n\n18.12.2020 19:02, Tom Lane wrote:\n> \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n>> I have a question about how to execute valgrind with TAP tests\n>> in order to check some patches in the community.\n>> My main interest is testing src/test/subscription now but\n>> is there any general way to do it ?\n> The standard solution is\n>\n> (1) Build normally (well, with -DUSE_VALGRIND)\n> (2) Move the postgres executable aside, say\n> mv src/backend/postgres src/backend/postgres.orig\n> (3) Replace the executable with a wrapper script that invokes\n> valgrind on the original executable\n> (4) Now you can run \"make check\" with a valgrind'ed server,\n> as well as things that depend on \"make check\", such as TAP tests\n>\n> The script I use for (3) is attached; adjust paths and options to taste.\nI use the attached patch for this purpose, that slightly simplifies\nthings and covers all the other binaries:\ngit apply .../install-vrunner.patch\nCPPFLAGS=\"-DUSE_VALGRIND -Og\" ./configure --enable-tap-tests\n--enable-debug --enable-cassert && make && make check\n`make check-world` is possible too, with\nsrc/bin/pg_ctl/t/001_start_stop.pl disabled (removed).\n\nBest regards,\nAlexander",
"msg_date": "Sun, 20 Dec 2020 11:00:04 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: how to use valgrind for TAP tests"
},
{
"msg_contents": "Hello\n\nOn Saturday, December 19, 2020 1:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> > I have a question about how to execute valgrind with TAP tests in\n> > order to check some patches in the community.\n> > My main interest is testing src/test/subscription now but is there any\n> > general way to do it ?\n> \n> The standard solution is\n> \n> (1) Build normally (well, with -DUSE_VALGRIND)\n> (2) Move the postgres executable aside, say\n> mv src/backend/postgres src/backend/postgres.orig\n> (3) Replace the executable with a wrapper script that invokes\n> valgrind on the original executable\n> (4) Now you can run \"make check\" with a valgrind'ed server,\n> as well as things that depend on \"make check\", such as TAP tests\n> \n> The script I use for (3) is attached; adjust paths and options to taste.\nThank you so much.\nI couldn't come up with the idea to prepare a wrapper script.\nThis worked successfully.\n\n\nBest,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Thu, 24 Dec 2020 01:15:44 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: how to use valgrind for TAP tests"
},
{
"msg_contents": "Hi, Alexander\n\nOn Sunday, December 20, 2020 5:00 PM Alexander Lakhin wrote:\n> > \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n> >> I have a question about how to execute valgrind with TAP tests in\n> >> order to check some patches in the community.\n> >> My main interest is testing src/test/subscription now but is there\n> >> any general way to do it ?\n> > The standard solution is\n> >\n> > (1) Build normally (well, with -DUSE_VALGRIND)\n> > (2) Move the postgres executable aside, say\n> > mv src/backend/postgres src/backend/postgres.orig\n> > (3) Replace the executable with a wrapper script that invokes\n> > valgrind on the original executable\n> > (4) Now you can run \"make check\" with a valgrind'ed server,\n> > as well as things that depend on \"make check\", such as TAP tests\n> >\n> > The script I use for (3) is attached; adjust paths and options to taste.\n> I use the attached patch for this purpose, that slightly simplifies things and\n> covers all the other binaries:\n> git apply .../install-vrunner.patch\n> CPPFLAGS=\"-DUSE_VALGRIND -Og\" ./configure --enable-tap-tests\n> --enable-debug --enable-cassert && make && make check `make\n> check-world` is possible too, with src/bin/pg_ctl/t/001_start_stop.pl\n> disabled (removed).\nThank you for giving me a fruitful advice !\nWhen I encounter another needs, I'll apply this method as well.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Thu, 24 Dec 2020 01:22:36 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: how to use valgrind for TAP tests"
}
]
[
{
"msg_contents": "Hi hackers,\n\nI am investigating incident with one of out customers: performance of \nthe system isdropped dramatically.\nStack traces of all backends can be found here: \nhttp://www.garret.ru/diag_20201217_102056.stacks_59644\n(this file is 6Mb so I have not attached it to this mail).\n\nWhat I have see in this stack traces is that 642 backends and blocked in \nLWLockAcquire,\nmostly in obtaining shared buffer lock:\n\n#0 0x00007f0e7fe7a087 in semop () from /lib64/libc.so.6\n#1 0x0000000000682fb1 in PGSemaphoreLock \n(sema=sema@entry=0x7f0e1c1f63a0) at pg_sema.c:387\n#2 0x00000000006ed60b in LWLockAcquire (lock=lock@entry=0x7e8b6176d800, \nmode=mode@entry=LW_SHARED) at lwlock.c:1338\n#3 0x00000000006c88a7 in BufferAlloc (foundPtr=0x7ffcc3c8de9b \"\\001\", \nstrategy=0x0, blockNum=997, forkNum=MAIN_FORKNUM, relpersistence=112 \n'p', smgr=0x2fb2df8) at bufmgr.c:1177\n#4 ReadBuffer_common (smgr=0x2fb2df8, relpersistence=<optimized out>, \nrelkind=<optimized out>, forkNum=forkNum@entry=MAIN_FORKNUM, \nblockNum=blockNum@entry=997, mode=RBM_NORMAL, strategy=0x0, \nhit=hit@entry=0x7ffcc3c8df97 \"\") at bufmgr.c:894\n#5 0x00000000006c928b in ReadBufferExtended (reln=0x32c7ed0, \nforkNum=forkNum@entry=MAIN_FORKNUM, blockNum=997, \nmode=mode@entry=RBM_NORMAL, strategy=strategy@entry=0x0) at bufmgr.c:753\n#6 0x00000000006c93ab in ReadBuffer (blockNum=<optimized out>, \nreln=<optimized out>) at bufmgr.c:685\n...\n\nOnly 11 locks from this 642 are unique.\nMoreover: 358 backends are waiting for one lock and 183 - for another.\n\nThere are two backends (pids 291121 and 285927) which are trying to \nobtain exclusive lock while already holding another exclusive lock.\nAnd them block all other backends.\n\nThis is single place in bufmgr (and in postgres) where process tries to \nlock two buffers:\n\n /*\n * To change the association of a valid buffer, we'll need to have\n * exclusive lock on both the old and new mapping partitions.\n */\n if (oldFlags & 
BM_TAG_VALID)\n {\n ...\n /*\n * Must lock the lower-numbered partition first to avoid\n * deadlocks.\n */\n if (oldPartitionLock < newPartitionLock)\n {\n LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);\n LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);\n }\n else if (oldPartitionLock > newPartitionLock)\n {\n LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);\n LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);\n }\n\nThis two backends are blocked in the second lock request.\nI read all connects in bufmgr.c and README file but didn't find \nexplanation why do we need to lock both partitions.\nWhy it is not possible first free old buffer (as it is done in \nInvalidateBuffer) and then repeat attempt to allocate the buffer?\n\nYes, it may require more efforts than just \"gabbing\" the buffer.\nBut in this case there is no need to keep two locks.\n\nI wonder if somebody in the past faced with the similar symptoms and \nwas this problem with holding locks of two partitions in bufmgr already \ndiscussed?\n\nP.S.\nThe customer is using 9.6 version of Postgres, but I have checked that \nthe same code fragment is present in the master.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 18 Dec 2020 15:20:34 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Double partition lock in bufmgr"
},
{
"msg_contents": "В Пт, 18/12/2020 в 15:20 +0300, Konstantin Knizhnik пишет:\n> Hi hackers,\n> \n> I am investigating incident with one of out customers: performance of \n> the system isdropped dramatically.\n> Stack traces of all backends can be found here: \n> http://www.garret.ru/diag_20201217_102056.stacks_59644\n> (this file is 6Mb so I have not attached it to this mail).\n> \n> What I have see in this stack traces is that 642 backends and blocked\n> in \n> LWLockAcquire,\n> mostly in obtaining shared buffer lock:\n> \n> #0 0x00007f0e7fe7a087 in semop () from /lib64/libc.so.6\n> #1 0x0000000000682fb1 in PGSemaphoreLock \n> (sema=sema@entry=0x7f0e1c1f63a0) at pg_sema.c:387\n> #2 0x00000000006ed60b in LWLockAcquire (lock=lock@entry=0x7e8b6176d80\n> 0, \n> mode=mode@entry=LW_SHARED) at lwlock.c:1338\n> #3 0x00000000006c88a7 in BufferAlloc (foundPtr=0x7ffcc3c8de9b\n> \"\\001\", \n> strategy=0x0, blockNum=997, forkNum=MAIN_FORKNUM, relpersistence=112 \n> 'p', smgr=0x2fb2df8) at bufmgr.c:1177\n> #4 ReadBuffer_common (smgr=0x2fb2df8, relpersistence=<optimized\n> out>, \n> relkind=<optimized out>, forkNum=forkNum@entry=MAIN_FORKNUM, \n> blockNum=blockNum@entry=997, mode=RBM_NORMAL, strategy=0x0, \n> hit=hit@entry=0x7ffcc3c8df97 \"\") at bufmgr.c:894\n> #5 0x00000000006c928b in ReadBufferExtended (reln=0x32c7ed0, \n> forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=997, \n> mode=mode@entry=RBM_NORMAL, strategy=strategy@entry=0x0) at\n> bufmgr.c:753\n> #6 0x00000000006c93ab in ReadBuffer (blockNum=<optimized out>, \n> reln=<optimized out>) at bufmgr.c:685\n> ...\n> \n> Only 11 locks from this 642 are unique.\n> Moreover: 358 backends are waiting for one lock and 183 - for another.\n> \n> There are two backends (pids 291121 and 285927) which are trying to \n> obtain exclusive lock while already holding another exclusive lock.\n> And them block all other backends.\n> \n> This is single place in bufmgr (and in postgres) where process tries\n> to \n> lock two buffers:\n> \n> 
/*\n> * To change the association of a valid buffer, we'll need to\n> have\n> * exclusive lock on both the old and new mapping partitions.\n> */\n> if (oldFlags & BM_TAG_VALID)\n> {\n> ...\n> /*\n> * Must lock the lower-numbered partition first to avoid\n> * deadlocks.\n> */\n> if (oldPartitionLock < newPartitionLock)\n> {\n> LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);\n> LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);\n> }\n> else if (oldPartitionLock > newPartitionLock)\n> {\n> LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);\n> LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);\n> }\n> \n> This two backends are blocked in the second lock request.\n> I read all connects in bufmgr.c and README file but didn't find \n> explanation why do we need to lock both partitions.\n> Why it is not possible first free old buffer (as it is done in \n> InvalidateBuffer) and then repeat attempt to allocate the buffer?\n> \n> Yes, it may require more efforts than just \"gabbing\" the buffer.\n> But in this case there is no need to keep two locks.\n> \n> I wonder if somebody in the past faced with the similar symptoms and \n> was this problem with holding locks of two partitions in bufmgr\n> already \n> discussed?\n\nLooks like there is no real need for this double lock. And the change to\nconsequitive lock acquisition really provides scalability gain:\nhttps://bit.ly/3AytNoN\n\nregards\nSokolov Yura\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 16:31:17 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Double partition lock in bufmgr"
}
]
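The deadlock-avoidance rule quoted from BufferAlloc() can be reduced to a few lines. This is a toy model, not the real LWLock code — the `lwlock` struct and function names are invented — but it shows why the acquisition order is the same no matter which partition a backend considers "old" and which "new":

```c
#include <stddef.h>

/* Illustrative stand-in for an LWLock partition entry. */
typedef struct lwlock
{
	int			locked;
} lwlock;

/* Records which lock a caller grabbed first, for demonstration only. */
static const lwlock *acquired_first = NULL;

static void
lwlock_acquire(lwlock *lock)
{
	if (acquired_first == NULL)
		acquired_first = lock;
	lock->locked = 1;
}

static void
reset_demo(lwlock *a, lwlock *b)
{
	a->locked = b->locked = 0;
	acquired_first = NULL;
}

/*
 * The rule from BufferAlloc(): when two mapping partitions must be held
 * at once, always take the lower-numbered (here: lower-addressed) one
 * first.  Two backends moving buffers in opposite directions therefore
 * contend in the same order and cannot wait on each other in a cycle.
 */
static void
lock_two_partitions(lwlock *oldPartitionLock, lwlock *newPartitionLock)
{
	if (oldPartitionLock < newPartitionLock)
	{
		lwlock_acquire(oldPartitionLock);
		lwlock_acquire(newPartitionLock);
	}
	else if (oldPartitionLock > newPartitionLock)
	{
		lwlock_acquire(newPartitionLock);
		lwlock_acquire(oldPartitionLock);
	}
	else
		lwlock_acquire(oldPartitionLock);	/* same partition: one lock */
}
```

Note this ordering only prevents deadlock; it does nothing about the convoy the stack traces show, where every backend queues behind the two holders — which is why the follow-up suggests avoiding the double hold altogether.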
[
{
"msg_contents": "With valgrind 3.16.1, we fail to get through initdb:\n\n==00:00:00:41.608 11346== Source and destination overlap in memcpy(0xc190a8, 0xc190a8, 512)\n==00:00:00:41.609 11346== at 0x486C674: __GI_memcpy (vg_replace_strmem.c:1035)\n==00:00:00:41.609 11346== by 0x9017DB: write_relmap_file (relmapper.c:932)\n==00:00:00:41.609 11346== by 0x90243B: RelationMapFinishBootstrap (relmapper.c:571)\n==00:00:00:41.609 11346== by 0x551083: BootstrapModeMain (bootstrap.c:530)\n==00:00:00:41.609 11346== by 0x551083: AuxiliaryProcessMain (bootstrap.c:436)\n==00:00:00:41.609 11346== by 0x483E8F: main (main.c:201)\n==00:00:00:41.609 11346== \n==00:00:00:41.615 11346== Source and destination overlap in memcpy(0xc192a8, 0xc192a8, 512)\n==00:00:00:41.615 11346== at 0x486C674: __GI_memcpy (vg_replace_strmem.c:1035)\n==00:00:00:41.615 11346== by 0x9017DB: write_relmap_file (relmapper.c:932)\n==00:00:00:41.615 11346== by 0x551083: BootstrapModeMain (bootstrap.c:530)\n==00:00:00:41.615 11346== by 0x551083: AuxiliaryProcessMain (bootstrap.c:436)\n==00:00:00:41.615 11346== by 0x483E8F: main (main.c:201)\n==00:00:00:41.615 11346== \n\nI'm a bit surprised that we've not seen other complaints from picky\nmemcpy implementations, because this code path definitely is passing\nthe same source and destination pointers.\n\nEvidently we need something like this in write_relmap_file:\n\n\t/* Success, update permanent copy */\n-\tmemcpy(realmap, newmap, sizeof(RelMapFile));\n+\tif (realmap != newmap)\n+\t\tmemcpy(realmap, newmap, sizeof(RelMapFile));\n\nOr possibly it'd be cleaner to have RelationMapFinishBootstrap\nsupply a local RelMapFile variable to construct the mappings in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Dec 2020 11:49:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "initdb fails under bleeding-edge valgrind"
}
]
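The one-line guard Tom proposes can be seen in isolation. `RelMapFile` is reduced here to an opaque byte block (the real struct in relmapper.c has more fields, and the function name below is invented); the point is only that memcpy() is undefined when source and destination overlap, even in the degenerate same-pointer case valgrind flags:

```c
#include <string.h>

/* Reduced stand-in for the RelMapFile struct in relmapper.c. */
typedef struct RelMapFile
{
	char		data[512];
} RelMapFile;

/*
 * Mirrors the proposed fix in write_relmap_file(): skip the copy when
 * the caller built the new map directly in the permanent copy, since
 * passing overlapping (here: identical) pointers to memcpy() is
 * undefined behavior even though the copy would be a no-op.
 */
static void
update_permanent_copy(RelMapFile *realmap, const RelMapFile *newmap)
{
	if (realmap != newmap)
		memcpy(realmap, newmap, sizeof(RelMapFile));
}
```

The alternative Tom mentions — having RelationMapFinishBootstrap build the mappings in a local RelMapFile and copy from that — avoids the aliasing entirely instead of special-casing it.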
[
{
"msg_contents": "The discussion in [1] pointed out that the existing documentation\nfor the \"jsonb || jsonb\" concatenation operator is far short of\nreality: it fails to acknowledge that the operator will accept\nany cases other than two jsonb array inputs or two jsonb object\ninputs.\n\nI'd about concluded that other cases were handled as if by\nwrapping non-array inputs in one-element arrays and then\nproceeding as for two arrays. That works for most scenarios, eg\n\nregression=# select '[3]'::jsonb || '{}'::jsonb;\n ?column? \n----------\n [3, {}]\n(1 row)\n\nregression=# select '3'::jsonb || '[]'::jsonb;\n ?column? \n----------\n [3]\n(1 row)\n\nregression=# select '3'::jsonb || '4'::jsonb;\n ?column? \n----------\n [3, 4]\n(1 row)\n\nHowever, further experimentation found a case that fails:\n\nregression=# select '3'::jsonb || '{}'::jsonb;\nERROR: invalid concatenation of jsonb objects\n\nI wonder what is the point of this weird exception, and whether\nwhoever devised it can provide a concise explanation of what\nthey think the full behavior of \"jsonb || jsonb\" is. Why isn't\n'[3, {}]' a reasonable result here, if the cases above are OK?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/0d72b76d-ca2b-4263-8888-d6dfca861c51%40www.fastmail.com\n\n\n",
"msg_date": "Fri, 18 Dec 2020 12:20:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Weird special case in jsonb_concat()"
},
{
"msg_contents": "I wrote:\n> However, further experimentation found a case that fails:\n> regression=# select '3'::jsonb || '{}'::jsonb;\n> ERROR: invalid concatenation of jsonb objects\n> I wonder what is the point of this weird exception, and whether\n> whoever devised it can provide a concise explanation of what\n> they think the full behavior of \"jsonb || jsonb\" is. Why isn't\n> '[3, {}]' a reasonable result here, if the cases above are OK?\n\nHere is a proposed patch for that. It turns out that the third\nelse-branch in IteratorConcat() already does the right thing, if\nwe just remove its restrictive else-condition and let it handle\neverything except the two-objects and two-arrays cases. But it\nseemed to me that trying to handle both the object || array\nand array || object cases in that one else-branch was poorly\nthought out: only one line of code can actually be shared, and it\ntook several extra lines of infrastructure to support the sharing.\nSo I split those cases into separate else-branches.\n\nThis also addresses the inadequate documentation that was the\noriginal complaint.\n\nThoughts? Should we back-patch this? The existing behavior\nseems to me to be inconsistent enough to be arguably a bug,\nbut we've not had field complaints saying \"this should work\".\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 19 Dec 2020 15:35:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Weird special case in jsonb_concat()"
},
{
"msg_contents": "so 19. 12. 2020 v 21:35 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I wrote:\n> > However, further experimentation found a case that fails:\n> > regression=# select '3'::jsonb || '{}'::jsonb;\n> > ERROR: invalid concatenation of jsonb objects\n> > I wonder what is the point of this weird exception, and whether\n> > whoever devised it can provide a concise explanation of what\n> > they think the full behavior of \"jsonb || jsonb\" is. Why isn't\n> > '[3, {}]' a reasonable result here, if the cases above are OK?\n>\n> Here is a proposed patch for that. It turns out that the third\n> else-branch in IteratorConcat() already does the right thing, if\n> we just remove its restrictive else-condition and let it handle\n> everything except the two-objects and two-arrays cases. But it\n> seemed to me that trying to handle both the object || array\n> and array || object cases in that one else-branch was poorly\n> thought out: only one line of code can actually be shared, and it\n> took several extra lines of infrastructure to support the sharing.\n> So I split those cases into separate else-branches.\n>\n> This also addresses the inadequate documentation that was the\n> original complaint.\n>\n> Thoughts? Should we back-patch this? The existing behavior\n> seems to me to be inconsistent enough to be arguably a bug,\n> but we've not had field complaints saying \"this should work\".\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n>\n\nso 19. 12. 2020 v 21:35 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:I wrote:\n> However, further experimentation found a case that fails:\n> regression=# select '3'::jsonb || '{}'::jsonb;\n> ERROR: invalid concatenation of jsonb objects\n> I wonder what is the point of this weird exception, and whether\n> whoever devised it can provide a concise explanation of what\n> they think the full behavior of \"jsonb || jsonb\" is. 
Why isn't\n> '[3, {}]' a reasonable result here, if the cases above are OK?\n\nHere is a proposed patch for that. It turns out that the third\nelse-branch in IteratorConcat() already does the right thing, if\nwe just remove its restrictive else-condition and let it handle\neverything except the two-objects and two-arrays cases. But it\nseemed to me that trying to handle both the object || array\nand array || object cases in that one else-branch was poorly\nthought out: only one line of code can actually be shared, and it\ntook several extra lines of infrastructure to support the sharing.\nSo I split those cases into separate else-branches.\n\nThis also addresses the inadequate documentation that was the\noriginal complaint.\n\nThoughts? Should we back-patch this? The existing behavior\nseems to me to be inconsistent enough to be arguably a bug,\nbut we've not had field complaints saying \"this should work\".+1Pavel\n\n regards, tom lane",
"msg_date": "Sun, 20 Dec 2020 07:32:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Weird special case in jsonb_concat()"
},
{
"msg_contents": "On Sat, Dec 19, 2020, at 21:35, Tom Lane wrote:\n>Here is a proposed patch for that.\n\nI've tested the patch and \"All 202 tests passed\".\n\nIn addition, I've tested it on a json intensive project,\nwhich passes all its own tests.\n\nI haven't studied the jsonfuncs.c code in detail,\nbut the new code looks much cleaner, nice.\n\n>This also addresses the inadequate documentation that was the\n>original complaint.\n\nLooks good.\n\nIn addition, to the user wondering how to append a json array-value \"as is\",\nI think it would be useful to provide an example on how to do this\nin the documentation.\n\nI think there is a risk users will attempt much more fragile\nhacks to achieve this, if we don't provide guidance\nin the documentation.\n\nSuggestion:\n\n <literal>'[\"a\", \"b\"]'::jsonb || '[\"a\", \"d\"]'::jsonb</literal>\n <returnvalue>[\"a\", \"b\", \"a\", \"d\"]</returnvalue>\n </para>\n+ <para>\n+ <literal>'[\"a\", \"b\"]'::jsonb || jsonb_build_array('[\"a\", \"d\"]'::jsonb)</literal>\n+ <returnvalue>[\"a\", \"b\", [\"a\", \"d\"]]</returnvalue>\n+ </para>\n <para>\n <literal>'{\"a\": \"b\"}'::jsonb || '{\"c\": \"d\"}'::jsonb</literal>\n <returnvalue>{\"a\": \"b\", \"c\": \"d\"}</returnvalue>\n\n> Thoughts? Should we back-patch this? 
The existing behavior\n> seems to me to be inconsistent enough to be arguably a bug,\n> but we've not had field complaints saying \"this should work\".\n\n+1 back-patch, I think it's a bug.\n\nBest regards,\n\nJoel\nOn Sat, Dec 19, 2020, at 21:35, Tom Lane wrote:>Here is a proposed patch for that.I've tested the patch and \"All 202 tests passed\".In addition, I've tested it on a json intensive project,which passes all its own tests.I haven't studied the jsonfuncs.c code in detail,but the new code looks much cleaner, nice.>This also addresses the inadequate documentation that was the>original complaint.Looks good.In addition, to the user wondering how to append a json array-value \"as is\",I think it would be useful to provide an example on how to do thisin the documentation.I think there is a risk users will attempt much more fragilehacks to achieve this, if we don't provide guidancein the documentation.Suggestion: <literal>'[\"a\", \"b\"]'::jsonb || '[\"a\", \"d\"]'::jsonb</literal> <returnvalue>[\"a\", \"b\", \"a\", \"d\"]</returnvalue> </para>+ <para>+ <literal>'[\"a\", \"b\"]'::jsonb || jsonb_build_array('[\"a\", \"d\"]'::jsonb)</literal>+ <returnvalue>[\"a\", \"b\", [\"a\", \"d\"]]</returnvalue>+ </para> <para> <literal>'{\"a\": \"b\"}'::jsonb || '{\"c\": \"d\"}'::jsonb</literal> <returnvalue>{\"a\": \"b\", \"c\": \"d\"}</returnvalue>> Thoughts? Should we back-patch this? The existing behavior> seems to me to be inconsistent enough to be arguably a bug,> but we've not had field complaints saying \"this should work\".+1 back-patch, I think it's a bug.Best regards,Joel",
"msg_date": "Sun, 20 Dec 2020 08:33:38 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: Weird special case in jsonb_concat()"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Sat, Dec 19, 2020, at 21:35, Tom Lane wrote:\n>> Here is a proposed patch for that.\n\n> In addition, to the user wondering how to append a json array-value \"as is\",\n> I think it would be useful to provide an example on how to do this\n> in the documentation.\n\nDone in v13 and HEAD; the older table format doesn't really have room\nfor more examples.\n\n> +1 back-patch, I think it's a bug.\n\nI'm not quite sure it's a bug, but it does seem like fairly unhelpful\nbehavior to throw an error instead of doing something useful, so\nback-patched.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Dec 2020 13:15:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Weird special case in jsonb_concat()"
}
] |
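An aside on the `||` semantics discussed in this thread: array-to-array concatenation splices the right-hand array's elements into the left one, while wrapping the right-hand value with `jsonb_build_array` appends it whole. Those cases can be modeled outside the database; the following is a rough Python sketch of the behavior shown in the doc examples above, not PostgreSQL's implementation:

```python
def jsonb_concat(left, right):
    """Model of the jsonb || cases discussed above: object || object merges
    keys; otherwise each non-list operand is treated as a one-element array
    and the two arrays are concatenated."""
    as_list = lambda v: v if isinstance(v, list) else [v]
    if isinstance(left, dict) and isinstance(right, dict):
        return {**left, **right}          # '{"a":"b"}' || '{"c":"d"}'
    return as_list(left) + as_list(right)

# '["a", "b"]'::jsonb || '["a", "d"]'::jsonb  splices the elements:
print(jsonb_concat(["a", "b"], ["a", "d"]))
# jsonb_build_array wraps the right-hand value, so it is appended "as is":
print(jsonb_concat(["a", "b"], [["a", "d"]]))
```

The second call mirrors the `jsonb_build_array` example proposed for the documentation: the extra level of list nesting is what keeps the appended array intact.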
[
{
"msg_contents": "HI\n\n address d2be8fe8 found in\n _DPH_HEAP_ROOT @ fe321000\n in busy allocation ( DPH_HEAP_BLOCK: UserAddr UserSize - VirtAddr VirtSize)\n d2dd3444: d2be8fe8 18 - d2be8000 2000\n 576bab70 verifier!AVrfDebugPageHeapAllocate+0x00000240\n 77aa909b ntdll!RtlDebugAllocateHeap+0x00000039\n 779fbbad ntdll!RtlpAllocateHeap+0x000000ed\n 779fb0cf ntdll!RtlpAllocateHeapInternal+0x0000022f\n 779fae8e ntdll!RtlAllocateHeap+0x0000003e\n 5778aa2f vrfcore!VfCoreRtlAllocateHeap+0x0000001f\n 56b5256c vfbasics!AVrfpRtlAllocateHeap+0x000000dc\n 7558a9f6 ucrtbase!_malloc_base+0x00000026\n 56b538b8 vfbasics!AVrfp_ucrt_malloc+0x00000038\n*** WARNING: Unable to verify checksum for C:\\Program Files (x86)\\psqlODBC\\1300\\bin\\LIBPQ.dll\n d2d973ab LIBPQ!appendBinaryPQExpBuffer+0x0000016b\n d2d855f3 LIBPQ!PQpingParams+0x000002a3\n d2d82406 LIBPQ!PQencryptPasswordConn+0x00000346\n d2d83c0d LIBPQ!PQconnectPoll+0x00000c4d\n d2d85822 LIBPQ!PQpingParams+0x000004d2\n d2d8463a LIBPQ!PQconnectdbParams+0x0000002a\n*** WARNING: Unable to verify checksum for C:\\Program Files (x86)\\psqlODBC\\1300\\bin\\psqlodbc35w.dll\n 51798c6c psqlodbc35w!LIBPQ_connect+0x0000051c [c:\\mingw\\git\\psqlodbc-13.00.0000\\connection.c @ 2879]\n 51795701 psqlodbc35w!CC_connect+0x000000c1 [c:\\mingw\\git\\psqlodbc-13.00.0000\\connection.c @ 1110]\n 517ac698 psqlodbc35w!PGAPI_DriverConnect+0x000002f8 [c:\\mingw\\git\\psqlodbc-13.00.0000\\drvconn.c @ 233]\n 517c3bad psqlodbc35w!SQLDriverConnectW+0x0000016d [c:\\mingw\\git\\psqlodbc-13.00.0000\\odbcapiw.c @ 163]\n 6f7733dc ODBC32!SQLInternalDriverConnectW+0x0000014c\n 6f770fb0 ODBC32!SQLDriverConnectW+0x00000ac0\n 535d2a4a msdasql!CODBCHandle::OHDriverConnect+0x000000da\n 535c554c msdasql!CImpIDBInitialize::Initialize+0x000002ec\n 519c079a oledb32!CDBInitialize::DoInitialize+0x0000003b\n 519beaa4 oledb32!CDBInitialize::Initialize+0x00000034\n 519c0d12 oledb32!CDCMPool::CreateResource+0x00000162\n 51873f3b 
comsvcs!CHolder::SafeDispenserDriver::CreateResource+0x0000005b\n 51871c5e comsvcs!CHolder::AllocResource+0x000001fe\n 519bb6da oledb32!CDCMPool::DrawResource+0x0000014a\n 519bb24b oledb32!CDCMPoolManager::DrawResource+0x0000020b\n 519b8233 oledb32!CDPO::Initialize+0x00000263\n 51a993ab msado15!_ConnectAsync+0x000001ab",
"msg_date": "Fri, 18 Dec 2020 20:12:22 +0000",
"msg_from": "=?koi8-r?B?8MXS28nOIODSycog8MXU0s/Xyd4=?= <pershin@prosoftsystems.ru>",
"msg_from_op": true,
"msg_subject": "libpq @windows : leaked singlethread_lock makes AppVerifier unhappy"
}
] |
[
{
"msg_contents": "Hi here a little update proposal for ARM architecture.\n\nKind regards.",
"msg_date": "Fri, 18 Dec 2020 21:53:20 +0000",
"msg_from": "David CARLIER <devnexen@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Implements SPIN_LOCK on ARM"
},
{
"msg_contents": "David CARLIER <devnexen@gmail.com> writes:\n> Hi here a little update proposal for ARM architecture.\n\nThis sort of thing is not a \"little proposal\" where you can just\nsend in an unsupported patch and expect it to be accepted.\nYou need to provide some evidence that (a) it actually does anything\nuseful and (b) it isn't a net loss on some ARM architectures.\n\nFor comparison's sake, see\n\nhttps://www.postgresql.org/message-id/flat/CAB10pyamDkTFWU_BVGeEVmkc8%3DEhgCjr6QBk02SCdJtKpHkdFw%40mail.gmail.com\n\nwhere we still haven't pulled the trigger despite a great deal\nmore than zero testing.\n\nFWIW, some casual googling suggests that ARM \"yield\" is not\nall that much like x86 \"pause\": it supposedly encourages\nthe system to swap control away from the thread altogether,\nexactly what we *don't* want in a spinloop. So I'm a little\ndoubtful whether there's a case to be made for this at all.\nBut for sure, you haven't tried to make a case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Dec 2020 17:14:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implements SPIN_LOCK on ARM"
}
] |
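Background for this thread: the question is what hint, if any, a spinlock's busy-wait loop should execute between test-and-set attempts (x86 `pause` vs. ARM `yield`, whose semantics Tom Lane notes may differ). The sketch below is a conceptual Python model of a test-and-set spin loop only, using a non-blocking try-acquire as the atomic test-and-set and a `relax()` stand-in for the CPU hint; it is illustrative and unrelated to PostgreSQL's actual C spinlock code:

```python
import threading
import time

class SpinLock:
    """Conceptual test-and-set spin loop (illustrative only)."""

    def __init__(self):
        # A non-blocking acquire on threading.Lock acts as an atomic
        # test-and-set primitive for the purposes of this sketch.
        self._flag = threading.Lock()

    def relax(self):
        # Stand-in for the architecture-specific hint debated in the
        # thread (x86 "pause", ARM "yield"); here it just yields the
        # scheduler, which is exactly what a real spinlock hint should
        # usually NOT do -- hence the debate.
        time.sleep(0)

    def acquire(self):
        # Spin until the try-acquire (test-and-set) succeeds.
        while not self._flag.acquire(blocking=False):
            self.relax()

    def release(self):
        self._flag.release()

def spin_demo(n_threads=4, iters=2000):
    """Increment a shared counter under the spinlock from several threads."""
    lock, total = SpinLock(), [0]
    def worker():
        for _ in range(iters):
            lock.acquire()
            total[0] += 1
            lock.release()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

If the lock is correct, `spin_demo(n, k)` returns exactly `n * k`; losing increments would indicate a broken test-and-set, which is the kind of correctness/performance evidence Tom asks for before accepting an architecture-specific change.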
[
{
"msg_contents": "Hi all,\n\nThe next commit fest is going to begin in two weeks.\n\nI would like to volunteer as commit fest manager for 2021-01 if the\nrole is not filled and there are no objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 19 Dec 2020 12:40:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Commit fest manager for 2021-01"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 9:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> The next commit fest is going to begin in two weeks.\n>\n> I would like to volunteer as commit fest manager for 2021-01\n>\n\nGlad to hear. I am confident that you can do justice to this role.\n\n if the\n> role is not filled and there are no objections.\n>\n\nI haven't seen anybody volunteering yet.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Dec 2020 10:03:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 10:03:47AM +0530, Amit Kapila wrote:\n> Glad to hear. I am confident that you can do justice to this role.\n\nI also think you will do just fine. Thanks for taking care of this.\n--\nMichael",
"msg_date": "Sat, 19 Dec 2020 14:00:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 6:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Dec 19, 2020 at 10:03:47AM +0530, Amit Kapila wrote:\n> > Glad to hear. I am confident that you can do justice to this role.\n>\n> I also think you will do just fine. Thanks for taking care of this.\n\n+1 on both accounts.\n\nIf you haven't been one before (which I think?), please let me know\nwhat username your account in the system has, and I will make sure you\nget the required permissions-\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 20 Dec 2020 14:26:59 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 10:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Dec 19, 2020 at 6:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sat, Dec 19, 2020 at 10:03:47AM +0530, Amit Kapila wrote:\n> > > Glad to hear. I am confident that you can do justice to this role.\n> >\n> > I also think you will do just fine. Thanks for taking care of this.\n>\n> +1 on both accounts.\n>\n> If you haven't been one before (which I think?), please let me know\n> what username your account in the system has, and I will make sure you\n> get the required permissions-\n\nThanks!\n\nMy usename is masahikosawada.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 21 Dec 2020 06:57:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 10:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sun, Dec 20, 2020 at 10:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sat, Dec 19, 2020 at 6:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Sat, Dec 19, 2020 at 10:03:47AM +0530, Amit Kapila wrote:\n> > > > Glad to hear. I am confident that you can do justice to this role.\n> > >\n> > > I also think you will do just fine. Thanks for taking care of this.\n> >\n> > +1 on both accounts.\n> >\n> > If you haven't been one before (which I think?), please let me know\n> > what username your account in the system has, and I will make sure you\n> > get the required permissions-\n>\n> Thanks!\n>\n> My usename is masahikosawada.\n\nI've now added the required permissions.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 21 Dec 2020 09:21:38 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 5:21 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sun, Dec 20, 2020 at 10:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sun, Dec 20, 2020 at 10:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > > On Sat, Dec 19, 2020 at 6:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > On Sat, Dec 19, 2020 at 10:03:47AM +0530, Amit Kapila wrote:\n> > > > > Glad to hear. I am confident that you can do justice to this role.\n> > > >\n> > > > I also think you will do just fine. Thanks for taking care of this.\n> > >\n> > > +1 on both accounts.\n> > >\n> > > If you haven't been one before (which I think?), please let me know\n> > > what username your account in the system has, and I will make sure you\n> > > get the required permissions-\n> >\n> > Thanks!\n> >\n> > My usename is masahikosawada.\n>\n> I've now added the required permissions.\n\nThank you. After re-logging in it looks the same as before but\nsomething will change on the CommitFest page?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Dec 2020 19:29:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 07:29:37PM +0900, Masahiko Sawada wrote:\n> Thank you. After re-logging in it looks the same as before but\n> something will change on the CommitFest page?\n\nThere should be a link to a new menu called \"administration\" on the\nleft of the existing logout button at the top. From there, you should\nbe able to control the status of all the existing commit fest and\npatches.\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 14:57:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 2:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Dec 24, 2020 at 07:29:37PM +0900, Masahiko Sawada wrote:\n> > Thank you. After re-logging in it looks the same as before but\n> > something will change on the CommitFest page?\n>\n> There should be a link to a new menu called \"administration\" on the\n> left of the existing logout button at the top. From there, you should\n> be able to control the status of all the existing commit fest and\n> patches.\n\nHmm, on the left of the logout button, I can see only the 'edit\nprofile' button and 'Activity log' button.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Dec 2020 16:35:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 04:35:30PM +0900, Masahiko Sawada wrote:\n> Hmm, on the left of the logout button, I can see only the 'edit\n> profile' button and 'Activity log' button.\n\nMaybe that's a cache issue with your browser? Magnus, any ideas?\n\nI cannot control the permissions of the app, but if that proves to be\nnecessary I am fine to switch the CF status. Just let me know if you\nwant me to do so when the time comes.\n--\nMichael",
"msg_date": "Fri, 25 Dec 2020 16:48:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 8:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Dec 25, 2020 at 04:35:30PM +0900, Masahiko Sawada wrote:\n> > Hmm, on the left of the logout button, I can see only the 'edit\n> > profile' button and 'Activity log' button.\n>\n> Maybe that's a cache issue with your browser? Magnus, any ideas?\n>\n> I cannot control the permissions of the app, but if that proves to be\n> necessary I am fine to switch the CF status. Just let me know if you\n> want me to do so when the time comes.\n>\n\nUgh, that was me adding just half the permissions required :/ One should\nadd both the \"can have some permissions at all\" and the \"member of the\nadmins group\". I only did the second part :)\n\nFixed now, sorry about that!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 25 Dec 2020 12:20:38 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2021-01"
},
{
"msg_contents": "On Fri, Dec 25, 2020 at 8:20 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n> On Fri, Dec 25, 2020 at 8:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, Dec 25, 2020 at 04:35:30PM +0900, Masahiko Sawada wrote:\n>> > Hmm, on the left of the logout button, I can see only the 'edit\n>> > profile' button and 'Activity log' button.\n>>\n>> Maybe that's a cache issue with your browser? Magnus, any ideas?\n>>\n>> I cannot control the permissions of the app, but if that proves to be\n>> necessary I am fine to switch the CF status. Just let me know if you\n>> want me to do so when the time comes.\n>\n>\n> Ugh, that was me adding just half the permissions required :/ One should add both the \"can have some permissions at all\" and the \"member of the admins group\". I only did the second part :)\n>\n> Fixed now, sorry about that!\n\nThank you! I now can see the administration button.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Dec 2020 22:17:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2021-01"
}
] |
[
{
"msg_contents": "Hi all\n\nThe attached patch set follows on from the discussion in [1] \"Add LWLock\nblocker(s) information\" by adding the actual LWLock* and the numeric\ntranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n\nThis does not provide complete information on blockers, because it's not\nnecessarily valid to compare any two LWLock* pointers between two process\naddress spaces. The locks could be in DSM segments, and those DSM segments\ncould be mapped at different addresses.\n\nI wasn't able to work out a sensible way to map a LWLock* to any sort of\n(tranche-id, lock-index) because there's no requirement that locks in a\ntranche be contiguous or known individually to the lmgr.\n\nDespite that, the patches improve the information available for LWLock\nanalysis significantly.\n\nPatch 1 fixes a bogus tracepoint where an lwlock__acquire event would be\nfired from LWLockWaitForVar, despite that function never actually acquiring\nthe lock.\n\nPatch 2 adds the tranche id and lock pointer for each trace hit. This makes\nit possible to differentiate between individual locks within a tranche, and\n(so long as they aren't tranches in a DSM segment) compare locks between\nprocesses. That means you can do lock-order analysis etc, which was not\npreviously especially feasible. Traces also don't have to do userspace\nreads for the tranche name all the time, so the trace can run with lower\noverhead.\n\nPatch 3 adds a single-path tracepoint for all lock acquires and releases,\nso you only have to probe the lwlock__acquired and lwlock__release events\nto see all acquires/releases, whether conditional or otherwise. 
It also\nadds start markers that can be used for timing wallclock duration of LWLock\nacquires/releases.\n\nPatch 4 adds some comments on LWLock tranches to try to address some points\nI found confusing and hard to understand when investigating this topic.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAGRY4nz%3DSEs3qc1R6xD3max7sg3kS-L81eJk2aLUWSQAeAFJTA%40mail.gmail.com\n.",
"msg_date": "Sat, 19 Dec 2020 13:00:01 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi Craig,\n\nOn Sat, Dec 19, 2020 at 2:00 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Hi all\n>\n> The attached patch set follows on from the discussion in [1] \"Add LWLock blocker(s) information\" by adding the actual LWLock* and the numeric tranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n>\n> This does not provide complete information on blockers, because it's not necessarily valid to compare any two LWLock* pointers between two process address spaces. The locks could be in DSM segments, and those DSM segments could be mapped at different addresses.\n>\n> I wasn't able to work out a sensible way to map a LWLock* to any sort of (tranche-id, lock-index) because there's no requirement that locks in a tranche be contiguous or known individually to the lmgr.\n>\n> Despite that, the patches improve the information available for LWLock analysis significantly.\n>\n> Patch 1 fixes a bogus tracepoint where an lwlock__acquire event would be fired from LWLockWaitForVar, despite that function never actually acquiring the lock.\n>\n> Patch 2 adds the tranche id and lock pointer for each trace hit. This makes it possible to differentiate between individual locks within a tranche, and (so long as they aren't tranches in a DSM segment) compare locks between processes. That means you can do lock-order analysis etc, which was not previously especially feasible. Traces also don't have to do userspace reads for the tranche name all the time, so the trace can run with lower overhead.\n>\n> Patch 3 adds a single-path tracepoint for all lock acquires and releases, so you only have to probe the lwlock__acquired and lwlock__release events to see all acquires/releases, whether conditional or otherwise. 
It also adds start markers that can be used for timing wallclock duration of LWLock acquires/releases.\n>\n> Patch 4 adds some comments on LWLock tranches to try to address some points I found confusing and hard to understand when investigating this topic.\n>\n\nYou sent in your patch to pgsql-hackers on Dec 19, but you did not\npost it to the next CommitFest[1]. If this was intentional, then you\nneed to take no action. However, if you want your patch to be\nreviewed as part of the upcoming CommitFest, then you need to add it\nyourself before 2021-01-01 AoE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 21:09:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Mon, 28 Dec 2020 at 20:09, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Hi Craig,\n>\n> On Sat, Dec 19, 2020 at 2:00 PM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n> > Hi all\n> >\n> > The attached patch set follows on from the discussion in [1] \"Add LWLock\n> blocker(s) information\" by adding the actual LWLock* and the numeric\n> tranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n> >\n> > This does not provide complete information on blockers, because it's not\n> necessarily valid to compare any two LWLock* pointers between two process\n> address spaces. The locks could be in DSM segments, and those DSM segments\n> could be mapped at different addresses.\n> >\n> > I wasn't able to work out a sensible way to map a LWLock* to any sort of\n> (tranche-id, lock-index) because there's no requirement that locks in a\n> tranche be contiguous or known individually to the lmgr.\n> >\n> > Despite that, the patches improve the information available for LWLock\n> analysis significantly.\n> >\n> > Patch 1 fixes a bogus tracepoint where an lwlock__acquire event would be\n> fired from LWLockWaitForVar, despite that function never actually acquiring\n> the lock.\n> >\n> > Patch 2 adds the tranche id and lock pointer for each trace hit. This\n> makes it possible to differentiate between individual locks within a\n> tranche, and (so long as they aren't tranches in a DSM segment) compare\n> locks between processes. That means you can do lock-order analysis etc,\n> which was not previously especially feasible. Traces also don't have to do\n> userspace reads for the tranche name all the time, so the trace can run\n> with lower overhead.\n> >\n> > Patch 3 adds a single-path tracepoint for all lock acquires and\n> releases, so you only have to probe the lwlock__acquired and\n> lwlock__release events to see all acquires/releases, whether conditional or\n> otherwise. 
It also adds start markers that can be used for timing wallclock\n> duration of LWLock acquires/releases.\n> >\n> > Patch 4 adds some comments on LWLock tranches to try to address some\n> points I found confusing and hard to understand when investigating this\n> topic.\n> >\n>\n> You sent in your patch to pgsql-hackers on Dec 19, but you did not\n> post it to the next CommitFest[1]. If this was intentional, then you\n> need to take no action. However, if you want your patch to be\n> reviewed as part of the upcoming CommitFest, then you need to add it\n> yourself before 2021-01-01 AoE[2]. Thanks for your contributions.\n>\n> Regards,\n>\n> [1] https://commitfest.postgresql.org/31/\n> [2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n>\n\nThanks.\n\nCF entry created at https://commitfest.postgresql.org/32/2927/ . I don't\nthink it's urgent and will have limited review time so I didn't try to\nwedge it into the current CF.",
"msg_date": "Thu, 7 Jan 2021 14:16:38 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Sat, 19 Dec 2020 at 13:00, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n> Hi all\n>\n> The attached patch set follows on from the discussion in [1] \"Add LWLock\n> blocker(s) information\" by adding the actual LWLock* and the numeric\n> tranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n>\n>\nI've attached a systemtap script that makes use of the information exported\nby the enhanced LWLock tracepoints. It offers something akin to dynamic\n-DLWLOCK_STATS with automatic statistical aggregation and some selective\nLWLOCK_DEBUG output.\n\nThe script isn't super pretty. I didn't try to separate event-data\ncollection from results output, and there's some duplication in places. But\nit gives you an idea what's possible when we report lock pointers and\ntranche IDs to tracepoints and add entry/exit tracepoints.\n\nKey features:\n\n* Collect statistical aggregates on lwlock hold and wait durations across\nall processes. Stats are grouped by lockmode (shared or exclusive) and by\ntranche name, as well as rollup stats across all tranches.\n* Report lock wait and hold durations for each process when that process\nexits. Again, reported by mode and tranche name.\n* For long lock waits, print the waiter pid and waiting pid, along with\neach process's backend type and application_name if known, the acquire\nmode, and the acquire function\n\nThe output is intended to be human-readable, but it'd be quite simple to\nconvert it into raw tsv-style output suitable for ingestion into\nstatistical postprocessing or graphing tools.\n\nIt should be fairly easy to break down the stats by acquire function if\ndesired, so LWLockAcquire(), LWLockWaitForVar(), and LWLockAcquireOrWait()\nare reported separately. They're combined in the current output.\n\nCapturing the current query string is pretty simple if needed, but I didn't\nthink it was likely to be especially useful.\n\nSample output for a pg_regress run attached. Abridged version follows. 
Here\nthe !!W!! lines are \"waited a long time\", the !!H!! lines are \"held a long\ntime\". Then [pid]:MyBackendType tranche_name wait_time_us (wait_time) in\nwait_func (appliation_name) => [blocker_pid] (blocker_application_name) .\nIf blocker pid wasn't identified it won't be reported - I know how to fix\nthat and will do so soon.\n\n!!W!! [ 93030]:3 BufferContent 12993 (0m0.012993s) in\nlwlock__acquire__start (pg_regress/text)\n!!W!! [ 93036]:3 LockManager 14540 (0m0.014540s) in\nlwlock__acquire__start (pg_regress/float8) => [ 93045] (pg_regress/regproc)\n!!W!! [ 93035]:3 BufferContent 12608 (0m0.012608s) in\nlwlock__acquire__start (pg_regress/float4) => [ 93034] (pg_regress/oid)\n!!W!! [ 93036]:3 LockManager 10301 (0m0.010301s) in\nlwlock__acquire__start (pg_regress/float8)\n!!W!! [ 93043]:3 LockManager 10356 (0m0.010356s) in\nlwlock__acquire__start (pg_regress/pg_lsn)\n!!H!! [ 93033]:3 BufferContent 20579 (0m0.020579s)\n(pg_regress/int8)\n!!W!! [ 93027]:3 BufferContent 10766 (0m0.010766s) in\nlwlock__acquire__start (pg_regress/char) => [ 93037] (pg_regress/bit)\n!!W!! [ 93036]:3 OidGen 12876 (0m0.012876s) in\nlwlock__acquire__start (pg_regress/float8)\n...\n\nThen the summary rollup at the end of the run. This can also be output\nperiodically during the run. 
Abbreviated for highlights:\n\nwait locks: all procs tranche mode count\n total avg variance min max\n W LW_EXCLUSIVE (all) E 54185\n14062734 259 1850265 1 44177\n W LW_SHARED (all) S 3668\n 1116022 304 1527261 2 18642\n\nheld locks: all procs tranche mode count\n total avg variance min max\n H LW_EXCLUSIVE (all) E 10438060\n 153077259 14 37035 1 195043\n H LW_SHARED (all) S 14199902\n65466934 4 5318 1 44030\n\nall procs by tranche tranche mode count\n total avg variance min max\n W tranche (all) S 3668\n 1116022 304 1527261 2 18642\n W tranche (all) E 54185\n14062734 259 1850265 1 44177\n W tranche WALInsert E 9839\n 2393229 243 1180294 2 14209\n W tranche BufferContent E 3012\n 1726543 573 3869409 2 28186\n W tranche BufferContent S 1664\n657855 395 2185694 2 18642\n W tranche LockFastPath E 28314\n 6327801 223 1278053 1 26133\n W tranche LockFastPath S 87\n 59401 682 3703217 19 9454\n W tranche LockManager E 7223\n 2764392 382 2514863 2 44177\n\n\nHope this is interesting to someone.",
"msg_date": "Fri, 8 Jan 2021 15:17:25 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
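The per-tranche rollup shown in the message above can be produced from the raw `!!W!!`/`!!H!!` lines with a small post-processing script. A minimal sketch in Python — the regex and field names are inferred from the sample output, not the tool's actual format:

```python
import re
from collections import defaultdict

# Matches the start of a trace line: !!W!! [ pid]:backend_type tranche wait_us ...
# (format guessed from the sample output above)
LINE_RE = re.compile(
    r"!!(?P<kind>[WH])!!\s+\[\s*(?P<pid>\d+)\]:(?P<btype>\d+)\s+"
    r"(?P<tranche>\w+)\s+(?P<us>\d+)"
)

def rollup(lines):
    """Aggregate count/total/max microseconds per (kind, tranche)."""
    stats = defaultdict(lambda: {"count": 0, "total_us": 0, "max_us": 0})
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        us = int(m.group("us"))
        s = stats[(m.group("kind"), m.group("tranche"))]
        s["count"] += 1
        s["total_us"] += us
        s["max_us"] = max(s["max_us"], us)
    return stats

sample = [
    "!!W!! [ 93030]:3 BufferContent 12993 (0m0.012993s) in lwlock__acquire__start (pg_regress/text)",
    "!!W!! [ 93036]:3 LockManager 14540 (0m0.014540s) in lwlock__acquire__start (pg_regress/float8) => [ 93045] (pg_regress/regproc)",
    "!!H!! [ 93033]:3 BufferContent 20579 (0m0.020579s) (pg_regress/int8)",
]

stats = rollup(sample)
for (kind, tranche), s in sorted(stats.items()):
    avg = s["total_us"] / s["count"]
    print(f"{kind} {tranche:15s} count={s['count']} total={s['total_us']} avg={avg:.0f} max={s['max_us']}")
```

Feeding it the full trace stream would reproduce the count/total/avg/max columns of the summary; variance would need one extra accumulator.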
{
"msg_contents": "> On Sat, Dec 19, 2020 at 01:00:01PM +0800, Craig Ringer wrote:\n>\n> The attached patch set follows on from the discussion in [1] \"Add LWLock\n> blocker(s) information\" by adding the actual LWLock* and the numeric\n> tranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n>\n> This does not provide complete information on blockers, because it's not\n> necessarily valid to compare any two LWLock* pointers between two process\n> address spaces. The locks could be in DSM segments, and those DSM segments\n> could be mapped at different addresses.\n>\n> I wasn't able to work out a sensible way to map a LWLock* to any sort of\n> (tranche-id, lock-index) because there's no requirement that locks in a\n> tranche be contiguous or known individually to the lmgr.\n>\n> Despite that, the patches improve the information available for LWLock\n> analysis significantly.\n\nThanks for the patches, this could be indeed useful. I've looked through\nand haven't noticed any issues with either the tracepoint extensions or\ncommentaries, except that I find it is not that clear how trance_id\nindicates a re-initialization here?\n\n /* Re-initialization of individual LWLocks is not permitted */\n Assert(tranche_id >= NUM_INDIVIDUAL_LWLOCKS || !IsUnderPostmaster);\n\n> Patch 2 adds the tranche id and lock pointer for each trace hit. This makes\n> it possible to differentiate between individual locks within a tranche, and\n> (so long as they aren't tranches in a DSM segment) compare locks between\n> processes. That means you can do lock-order analysis etc, which was not\n> previously especially feasible.\n\nI'm curious in which kind of situations lock-order analysis could be\nhelpful?\n\n> Traces also don't have to do userspace reads for the tranche name all\n> the time, so the trace can run with lower overhead.\n\nThis one is also interesting. 
Just for me to clarify, wouldn't there be\na bit of overhead anyway (due to switching from kernel context to user\nspace when a tracepoint was hit) that will mask name read overhead? Or\nare there any available numbers about it?\n\n\n",
"msg_date": "Wed, 13 Jan 2021 12:21:34 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 2020-12-19 06:00, Craig Ringer wrote:\n> Patch 1 fixes a bogus tracepoint where an lwlock__acquire event would be \n> fired from LWLockWaitForVar, despite that function never actually \n> acquiring the lock.\n\nThis was added in 68a2e52bbaf when LWLockWaitForVar() was first \nintroduced. It looks like a mistake to me too, but maybe Heikki wants \nto comment.\n\n\n",
"msg_date": "Thu, 14 Jan 2021 08:56:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 13 Jan 2021 at 19:19, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Sat, Dec 19, 2020 at 01:00:01PM +0800, Craig Ringer wrote:\n> >\n> > The attached patch set follows on from the discussion in [1] \"Add LWLock\n> > blocker(s) information\" by adding the actual LWLock* and the numeric\n> > tranche ID to each LWLock related TRACE_POSTGRESQL_foo tracepoint.\n> >\n> > This does not provide complete information on blockers, because it's not\n> > necessarily valid to compare any two LWLock* pointers between two process\n> > address spaces. The locks could be in DSM segments, and those DSM\n> segments\n> > could be mapped at different addresses.\n> >\n> > I wasn't able to work out a sensible way to map a LWLock* to any sort of\n> > (tranche-id, lock-index) because there's no requirement that locks in a\n> > tranche be contiguous or known individually to the lmgr.\n> >\n> > Despite that, the patches improve the information available for LWLock\n> > analysis significantly.\n>\n> Thanks for the patches, this could be indeed useful. I've looked through\n> and haven't noticed any issues with either the tracepoint extensions or\n> commentaries, except that I find it is not that clear how trance_id\n> indicates a re-initialization here?\n>\n> /* Re-initialization of individual LWLocks is not permitted */\n> Assert(tranche_id >= NUM_INDIVIDUAL_LWLOCKS || !IsUnderPostmaster);\n>\n\nThere should be no reason for anything to call LWLockInitialize(...) on an\nindividual LWLock, since they are all initialized during postmaster startup.\n\nDoing so must be a bug.\n\nBut that's a trivial change that can be done separately.\n\n\n> > Patch 2 adds the tranche id and lock pointer for each trace hit. This\n> makes\n> > it possible to differentiate between individual locks within a tranche,\n> and\n> > (so long as they aren't tranches in a DSM segment) compare locks between\n> > processes. 
That means you can do lock-order analysis etc, which was not\n> > previously especially feasible.\n>\n> I'm curious in which kind of situations lock-order analysis could be\n> helpful?\n>\n\nIf code-path 1 does\n\n LWLockAcquire(LockA, LW_EXCLUSIVE);\n ...\n LWLockAcquire(LockB, LW_EXCLUSIVE);\n\nand code-path 2 does:\n\n LWLockAcquire(LockB, LW_EXCLUSIVE);\n ...\n LWLockAcquire(LockA, LW_EXCLUSIVE);\n\nthen they're subject to deadlock. But you might not actually hit that often\nin test workloads if the timing required for the deadlock to occur is tight\nand/or occurs on infrequent operations.\n\nIt's not always easy to reason about or prove things about lock order when\nthey're potentially nested deep within many layers of other calls and\ncallbacks. Obviously something we try to avoid with LWLocks, but not\nimpossible.\n\nIf you trace a workload and derive all possible nestings of lock acquire\norder, you can then prove things about whether there are any possible\nordering conflicts and where they might arise.\n\nA PoC to do so is on my TODO.\n\n> Traces also don't have to do userspace reads for the tranche name all\n> > the time, so the trace can run with lower overhead.\n>\n> This one is also interesting. Just for me to clarify, wouldn't there be\n> a bit of overhead anyway (due to switching from kernel context to user\n> space when a tracepoint was hit) that will mask name read overhead? Or\n> are there any available numbers about it?\n>\n\nI don't have numbers on that. Whether it matters will depend way too much\non how you're using the probe points and collecting/consuming the data\nanyway.\n\nIt's a bit unfortunate (IMO) that we make a function call for each\ntracepoint invocation to get the tranche names. Ideally I'd prefer to be\nable to omit the tranche names lookups for these probes entirely for\nsomething as hot as LWLocks. 
But it's a bit of a pain to look up the\ntranche names from an external trace tool, so instead I'm inclined to see\nif we can enable systemtap's semaphores and only compute the tranche name\nif the target probe is actually enabled. But that'd be separate to this\npatch and require a build change in how systemtap support is compiled and\nlinked.\n\nBTW, a user->kernel->user context switch only occurs when the trace tool's\nprobes use kernel space - such as for perf based probes, or for systemtap's\nkernel-runtime probes. The same markers can be used by e.g. systemtap's\n\"dyninst\" runtime that runs entirely in userspace.",
"msg_date": "Thu, 14 Jan 2021 16:38:59 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
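The LockA/LockB example above generalizes into a mechanical check: build a directed graph with an edge (A, B) whenever some traced code path acquired B while holding A, then look for a cycle. A cycle means two paths can take the same locks in opposite orders, i.e. a potential deadlock. A sketch of that analysis (hypothetical lock names and trace shape; real input would come from the lwlock tracepoints):

```python
from collections import defaultdict

def order_edges(acquisition_traces):
    """Edge (A, B) means some code path held A while acquiring B."""
    edges = defaultdict(set)
    for trace in acquisition_traces:
        held = []
        for lock in trace:
            for h in held:
                edges[h].add(lock)
            held.append(lock)
    return edges

def has_order_conflict(edges):
    """Cycle in the acquisition-order graph => possible deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(node):
        color[node] = GRAY
        for nxt in edges[node]:
            if color[nxt] == GRAY:          # back edge: cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(edges))

# Code path 1 takes LockA then LockB; code path 2 takes them in reverse.
conflicting = [["LockA", "LockB"], ["LockB", "LockA"]]
safe = [["LockA", "LockB"], ["LockA", "LockC"]]
print(has_order_conflict(order_edges(conflicting)))  # True
print(has_order_conflict(order_edges(safe)))         # False
```

This is exactly the kind of proof the message describes: the conflict is detected from the trace even if the timing needed to actually deadlock never occurs in the test run.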
{
"msg_contents": "On Thu, 14 Jan 2021 at 15:56, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 2020-12-19 06:00, Craig Ringer wrote:\n> > Patch 1 fixes a bogus tracepoint where an lwlock__acquire event would be\n> > fired from LWLockWaitForVar, despite that function never actually\n> > acquiring the lock.\n>\n> This was added in 68a2e52bbaf when LWLockWaitForVar() was first\n> introduced. It looks like a mistake to me too, but maybe Heikki wants\n> to comment.\n>\n\nI'm certain it's a copy/paste bug.\n\nOn Thu, 14 Jan 2021 at 15:56, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 2020-12-19 06:00, Craig Ringer wrote:\n> Patch 1 fixes a bogus tracepoint where an lwlock__acquire event would be \n> fired from LWLockWaitForVar, despite that function never actually \n> acquiring the lock.\n\nThis was added in 68a2e52bbaf when LWLockWaitForVar() was first \nintroduced. It looks like a mistake to me too, but maybe Heikki wants \nto comment.I'm certain it's a copy/paste bug.",
"msg_date": "Thu, 14 Jan 2021 16:39:12 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 2021-01-14 09:39, Craig Ringer wrote:\n> On Thu, 14 Jan 2021 at 15:56, Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> On 2020-12-19 06:00, Craig Ringer wrote:\n> > Patch 1 fixes a bogus tracepoint where an lwlock__acquire event\n> would be\n> > fired from LWLockWaitForVar, despite that function never actually\n> > acquiring the lock.\n> \n> This was added in 68a2e52bbaf when LWLockWaitForVar() was first\n> introduced. It looks like a mistake to me too, but maybe Heikki wants\n> to comment.\n> \n> \n> I'm certain it's a copy/paste bug.\n\nI have committed that patch.\n\n\n",
"msg_date": "Fri, 22 Jan 2021 12:02:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 1/22/21 6:02 AM, Peter Eisentraut wrote:\n> On 2021-01-14 09:39, Craig Ringer wrote:\n>> On Thu, 14 Jan 2021 at 15:56, Peter Eisentraut \n>> <peter.eisentraut@enterprisedb.com \n>> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n>>\n>> On 2020-12-19 06:00, Craig Ringer wrote:\n>> > Patch 1 fixes a bogus tracepoint where an lwlock__acquire event\n>> would be\n>> > fired from LWLockWaitForVar, despite that function never actually\n>> > acquiring the lock.\n>>\n>> This was added in 68a2e52bbaf when LWLockWaitForVar() was first\n>> introduced. It looks like a mistake to me too, but maybe Heikki \n>> wants\n>> to comment.\n>>\n>>\n>> I'm certain it's a copy/paste bug.\n> \n> I have committed that patch.\n\nThis patch set no longer applies: \nhttp://cfbot.cputube.org/patch_32_2927.log.\n\nCan we get a rebase? Also marked Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 07:50:22 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 20:50, David Steele <david@pgmasters.net> wrote:\n\n> On 1/22/21 6:02 AM, Peter Eisentraut wrote:\n>\n> This patch set no longer applies:\n> http://cfbot.cputube.org/patch_32_2927.log.\n>\n> Can we get a rebase? Also marked Waiting on Author.\n>\n\nRebased as requested.\n\nI'm still interested in whether Andres will be able to do anything about\nidentifying LWLocks in a cross-backend manner. But this work doesn't really\ndepend on that; it'd benefit from it, but would be easily adapted to it\nlater if needed.",
"msg_date": "Wed, 10 Mar 2021 13:38:06 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 10.03.21 06:38, Craig Ringer wrote:\n> On Wed, 3 Mar 2021 at 20:50, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> On 1/22/21 6:02 AM, Peter Eisentraut wrote:\n> \n> This patch set no longer applies:\n> http://cfbot.cputube.org/patch_32_2927.log\n> <http://cfbot.cputube.org/patch_32_2927.log>.\n> \n> Can we get a rebase? Also marked Waiting on Author.\n> \n> \n> Rebased as requested.\n\nIn patch 0001, why was the TRACE_POSTGRESQL_LWLOCK_RELEASE() call moved? \n Is there some correctness issue? If so, we should explain that (at \nleast in the commit message, or as a separate patch).\n\n\n",
"msg_date": "Thu, 11 Mar 2021 08:57:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Thu, 11 Mar 2021 at 15:57, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 10.03.21 06:38, Craig Ringer wrote:\n> > On Wed, 3 Mar 2021 at 20:50, David Steele <david@pgmasters.net\n> > <mailto:david@pgmasters.net>> wrote:\n> >\n> > On 1/22/21 6:02 AM, Peter Eisentraut wrote:\n> >\n> > This patch set no longer applies:\n> > http://cfbot.cputube.org/patch_32_2927.log\n> > <http://cfbot.cputube.org/patch_32_2927.log>.\n> >\n> > Can we get a rebase? Also marked Waiting on Author.\n> >\n> >\n> > Rebased as requested.\n>\n> In patch 0001, why was the TRACE_POSTGRESQL_LWLOCK_RELEASE() call moved?\n> Is there some correctness issue? If so, we should explain that (at\n> least in the commit message, or as a separate patch).\n>\n\nIf you want I can split it out, or drop that change. I thought it was\nsufficiently inconsequential, but you're right to check.\n\nThe current tracepoint TRACE_POSTGRESQL_LWLOCK_RELEASE really means\n\"releaseD\". It's appropriate to emit this as soon as the lock could be\nacquired by anything else. By deferring it until we'd processed the\nwaitlist and woken other backends the window during which the lock was\nreported as \"held\" was longer than it truly was, and it was easy to see one\nbackend acquire the lock while another still appeared to hold it.\n\nIt'd possibly make more sense to have a separate\nTRACE_POSTGRESQL_LWLOCK_RELEASING just before the `pg_atomic_sub_fetch_u32`\ncall. But I didn't want to spam the tracepoints too hard, and there's\nalways going to be some degree of overlap because tracing tools cannot\nintercept and act during the atomic swap, so they'll always see a slightly\npremature or slightly delayed release. 
This window should be as short as\npossible though, hence moving the tracepoint.\n\nSide note:\n\nThe main reason I didn't want to add more tracepoints than were strictly\nnecessary is that Pg doesn't enable the systemtap semaphores feature, so\nright now we do a T_NAME(lock) evaluation each time we pass a tracepoint if\n--enable-dtrace is compiled in, whether or not anything is tracing. This\nwas fine on pg11 where it was just:\n\n#define T_NAME(lock) \\\n (LWLockTrancheArray[(lock)->tranche])\n\nbut since pg13 it instead expands to\n\n GetLWTrancheName((lock)->tranche)\n\nwhere GetLWTrancheName isn't especially trivial. We'll run that function\nevery single time we pass any of these tracepoints and then discard the\nresult, which is ... not ideal. That applies so long as Pg is compiled with\n--enable-dtrace. I've been meaning to look at enabling the systemtap\nsemaphores feature in our build so these can be wrapped in\nunlikely(TRACE_POSTGRESQL_LWLOCK_RELEASE_ENABLED()) guards, but I wanted to\nwrap this patch set up first as there are some complexities around enabling\nthe semaphores feature.",
"msg_date": "Thu, 18 Mar 2021 14:34:51 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
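The guard discussed in the side note above — only pay for the tranche-name lookup while a tracer is actually attached — can be sketched abstractly. In this sketch `enabled()` stands in for the systemtap semaphore test (the `*_ENABLED()` macros) and `get_tranche_name()` for the non-trivial `GetLWTrancheName()` call; all names here are illustrative, not PostgreSQL's actual code:

```python
calls = {"tranche_lookup": 0}

def get_tranche_name(tranche_id):
    # Stand-in for GetLWTrancheName(): not free, so we only want to
    # run it while a probe is live.
    calls["tranche_lookup"] += 1
    return f"tranche-{tranche_id}"

def make_tracepoint(enabled):
    # 'enabled' plays the role of the semaphore check that systemtap
    # flips when a tool attaches to the probe.
    events = []
    def fire(tranche_id, lock_ptr):
        if enabled():  # cheap test first...
            # ...expensive arguments computed only when someone listens
            events.append((get_tranche_name(tranche_id), lock_ptr))
    return fire, events

# Probe disabled: the name lookup is skipped entirely.
fire_off, events_off = make_tracepoint(lambda: False)
fire_off(7, 0xdead)

# Probe enabled: one lookup per hit.
fire_on, events_on = make_tracepoint(lambda: True)
fire_on(7, 0xdead)
```

Without the guard, every pass through the tracepoint would bump the lookup counter whether or not anyone is tracing — which is exactly the cost the message complains about for `--enable-dtrace` builds.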
{
"msg_contents": "On 18.03.21 07:34, Craig Ringer wrote:\n> In patch 0001, why was the TRACE_POSTGRESQL_LWLOCK_RELEASE() call\n> moved?\n> Is there some correctness issue? If so, we should explain that (at\n> least in the commit message, or as a separate patch).\n> \n> \n> If you want I can split it out, or drop that change. I thought it was \n> sufficiently inconsequential, but you're right to check.\n> \n> The current tracepoint TRACE_POSTGRESQL_LWLOCK_RELEASE really means \n> \"releaseD\". It's appropriate to emit this as soon as the lock could be \n> acquired by anything else. By deferring it until we'd processed the \n> waitlist and woken other backends the window during which the lock was \n> reported as \"held\" was longer than it truly was, and it was easy to see \n> one backend acquire the lock while another still appeared to hold it.\n\n From the archeology department: The TRACE_POSTGRESQL_LWLOCK_RELEASE \nprobe was in the right place until PG 9.4, but was then moved by \nab5194e6f617a9a9e7aadb3dd1cee948a42d0755, which was a major rewrite, so \nit seems the move might have been accidental. The documentation \nspecifically states that the probe is triggered before waiters are woken \nup, which it specifically does not do at the moment. So this looks like \na straight bug fix to me.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 21:06:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "\nOn 18.03.21 07:34, Craig Ringer wrote:\n> The main reason I didn't want to add more tracepoints than were strictly \n> necessary is that Pg doesn't enable the systemtap semaphores feature, so \n> right now we do a T_NAME(lock) evaluation each time we pass a tracepoint \n> if --enable-dtrace is compiled in, whether or not anything is tracing. \n> This was fine on pg11 where it was just:\n> \n> #define T_NAME(lock) \\\n> (LWLockTrancheArray[(lock)->tranche])\n> \n> but since pg13 it instead expands to\n> \n> GetLWTrancheName((lock)->tranche)\n> \n> where GetLWTrancheName isn't especially trivial. We'll run that function \n> every single time we pass any of these tracepoints and then discard the \n> result, which is ... not ideal. That applies so long as Pg is compiled \n> with --enable-dtrace. I've been meaning to look at enabling the \n> systemtap semaphores feature in our build so these can be wrapped in \n> unlikely(TRACE_POSTGRESQL_LWLOCK_RELEASE_ENABLED()) guards, but I wanted \n> to wrap this patch set up first as there are some complexities around \n> enabling the semaphores feature.\n\nThere is already support for that. See the documentation at the end of \nthis page: \nhttps://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS\n\n\n",
"msg_date": "Fri, 19 Mar 2021 21:21:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Sat, 20 Mar 2021, 04:21 Peter Eisentraut, <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> On 18.03.21 07:34, Craig Ringer wrote:\n> > The main reason I didn't want to add more tracepoints than were strictly\n> > necessary is that Pg doesn't enable the systemtap semaphores feature, so\n> > right now we do a T_NAME(lock) evaluation each time we pass a tracepoint\n> > if --enable-dtrace is compiled in, whether or not anything is tracing.\n> > This was fine on pg11 where it was just:\n> >\n> > #define T_NAME(lock) \\\n> > (LWLockTrancheArray[(lock)->tranche])\n> >\n> > but since pg13 it instead expands to\n> >\n> > GetLWTrancheName((lock)->tranche)\n> >\n> > where GetLWTrancheName isn't especially trivial. We'll run that function\n> > every single time we pass any of these tracepoints and then discard the\n> > result, which is ... not ideal. That applies so long as Pg is compiled\n> > with --enable-dtrace. I've been meaning to look at enabling the\n> > systemtap semaphores feature in our build so these can be wrapped in\n> > unlikely(TRACE_POSTGRESQL_LWLOCK_RELEASE_ENABLED()) guards, but I wanted\n> > to wrap this patch set up first as there are some complexities around\n> > enabling the semaphores feature.\n>\n> There is already support for that. See the documentation at the end of\n> this page:\n>\n> https://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS\n\n\nPretty sure it won't work right now.\n\nTo use systemtap semaphores (the _ENABLED macros) you need to run dtrace -g\nto generate a probes.o then link that into postgres.\n\nI don't think we do that. 
I'll double check soon.",
"msg_date": "Sat, 20 Mar 2021 08:29:32 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 19.03.21 21:06, Peter Eisentraut wrote:\n> On 18.03.21 07:34, Craig Ringer wrote:\n>> In patch 0001, why was the TRACE_POSTGRESQL_LWLOCK_RELEASE() call\n>> moved?\n>> Is there some correctness issue? If so, we should explain that \n>> (at\n>> least in the commit message, or as a separate patch).\n>>\n>>\n>> If you want I can split it out, or drop that change. I thought it was \n>> sufficiently inconsequential, but you're right to check.\n>>\n>> The current tracepoint TRACE_POSTGRESQL_LWLOCK_RELEASE really means \n>> \"releaseD\". It's appropriate to emit this as soon as the lock could be \n>> acquired by anything else. By deferring it until we'd processed the \n>> waitlist and woken other backends the window during which the lock was \n>> reported as \"held\" was longer than it truly was, and it was easy to \n>> see one backend acquire the lock while another still appeared to hold it.\n> \n> From the archeology department: The TRACE_POSTGRESQL_LWLOCK_RELEASE \n> probe was in the right place until PG 9.4, but was then moved by \n> ab5194e6f617a9a9e7aadb3dd1cee948a42d0755, which was a major rewrite, so \n> it seems the move might have been accidental. The documentation \n> specifically states that the probe is triggered before waiters are woken \n> up, which it specifically does not do at the moment. So this looks like \n> a straight bug fix to me.\n\ncommitted a fix for that\n\n\n",
"msg_date": "Sun, 21 Mar 2021 08:13:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 10.03.21 06:38, Craig Ringer wrote:\n> On Wed, 3 Mar 2021 at 20:50, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> On 1/22/21 6:02 AM, Peter Eisentraut wrote:\n> \n> This patch set no longer applies:\n> http://cfbot.cputube.org/patch_32_2927.log\n> <http://cfbot.cputube.org/patch_32_2927.log>.\n> \n> Can we get a rebase? Also marked Waiting on Author.\n> \n> \n> Rebased as requested.\n> \n> I'm still interested in whether Andres will be able to do anything about \n> identifying LWLocks in a cross-backend manner. But this work doesn't \n> really depend on that; it'd benefit from it, but would be easily adapted \n> to it later if needed.\n\nFirst, a problem: 0002 doesn't build on macOS, because uint64 has been \nused in the probe definitions. That needs to be handled like the other \nnonnative types in that file.\n\nAll the probe changes and additions should be accompanied by \ndocumentation changes.\n\nThe probes used to have an argument to identify the lock, which was \nremoved by 3761fe3c20bb040b15f0e8da58d824631da00caa. The 0001 patch is \nessentially trying to reinstate that, which seems sensible. Perhaps we \nshould also use the argument order that used to be there. It used to be\n\nprobe lwlock__acquire(const char *, int, LWLockMode);\n\nand now it would be\n\nprobe lwlock__acquire(const char *, LWLockMode, LWLock*, int);\n\nAlso, do we need both the tranche name and the tranche id? Or maybe we \ndon't need the name, or can record it differently, which might also \naddress your other concern that it's too expensive to compute. In any \ncase, I think an argument order like\n\nprobe lwlock__acquite(const char *, int, LWLock*, LWLockMode);\n\nwould make more sense.\n\nIn 0004, you add a probe to record the application_name setting? Would \nthere be any value in making that a generic probe that can record any \nGUC change?\n\n\n",
"msg_date": "Mon, 22 Mar 2021 09:38:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 20.03.21 01:29, Craig Ringer wrote:\n> There is already support for that. See the documentation at the end of\n> this page:\n> https://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS\n> <https://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS>\n> \n> \n> Pretty sure it won't work right now.\n> \n> To use systemtap semaphores (the _ENABLED macros) you need to run dtrace \n> -g to generate a probes.o then link that into postgres.\n> \n> I don't think we do that. I'll double check soon.\n\nWe do that. (It's -G.)\n\n\n",
"msg_date": "Mon, 22 Mar 2021 10:00:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Mon, 22 Mar 2021 at 17:00, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 20.03.21 01:29, Craig Ringer wrote:\n> > There is already support for that. See the documentation at the end\n> of\n> > this page:\n> >\n> https://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS\n> > <\n> https://www.postgresql.org/docs/devel/dynamic-trace.html#DEFINING-TRACE-POINTS\n> >\n> >\n> >\n> > Pretty sure it won't work right now.\n> >\n> > To use systemtap semaphores (the _ENABLED macros) you need to run dtrace\n> > -g to generate a probes.o then link that into postgres.\n> >\n> > I don't think we do that. I'll double check soon.\n>\n> We do that. (It's -G.)\n>\n\nHuh. I could've sworn we didn't. My mistake, it's there in\nsrc/backend/Makefile .\n\nIn that case I'll amend the patch to use semaphore guards.\n\n(On a side note, systemtap's semaphore support is actually a massive pain.\nThe way it's implemented in <sys/sdt.h> means that a single compilation\nunit may not use both probes.d style markers produced by the dtrace script\nand use regular DTRACE_PROBE(providername,probename) preprocessor macros.\nIf it attempts to do so, the DTRACE_PROBE macros will emit inline asm that\ntries to reference probename_semaphore symbols that will not exist,\nresulting in linker errors or runtime link errors. But that's really a\nsystemtap problem. Core PostgreSQL doesn't use any explicit\nDTRACE_PROBE(...), STAP_PROBE(...) etc.)",
"msg_date": "Mon, 12 Apr 2021 13:46:30 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Mon, 22 Mar 2021 at 16:38, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> First, a problem: 0002 doesn't build on macOS, because uint64 has been\n> used in the probe definitions. That needs to be handled like the other\n> nonnative types in that file.\n>\n\nWill fix.\n\nAll the probe changes and additions should be accompanied by\n> documentation changes.\n>\n\nAgreed, will fix.\n\nThe probes used to have an argument to identify the lock, which was\n> removed by 3761fe3c20bb040b15f0e8da58d824631da00caa.\n\n\nHuh. That's exactly the functionality I was looking for. Damn. I understand\nwhy Robert removed it, but its removal makes it much harder to identify an\nLWLock since it might fall in a DSM segment that could be mapped at\ndifferent base addresses in different backends.\n\nRobert's patch didn't replace the offset within tranche with anything else\nto identify the lock. A LWLock* is imperfect due to ASLR and DSM but it's\nbetter than nothing. In theory we could even remap them in trace tools if\nwe had tracepoints on DSM attach and detach that showed their size and base\naddress too.\n\nCC'ing Andres, as he expressed interest in being able to globally identify\nLWLocks too.\n\n\n> The 0001 patch is\n> essentially trying to reinstate that, which seems sensible. Perhaps we\n> should also use the argument order that used to be there. 
It used to be\n>\n> probe lwlock__acquire(const char *, int, LWLockMode);\n>\n> and now it would be\n>\n> probe lwlock__acquire(const char *, LWLockMode, LWLock*, int);\n>\n> Also, do we need both the tranche name and the tranche id?\n\n\nReasons to have the name:\n\n* There is no easy way to look up the tranche name by ID from outside the\nbackend\n* A tranche ID by itself is pretty much meaningless especially for dynamic\ntranches\n* Any existing scripts will rely on the tranche name\n\nSo the tranche name is really required to generate useful output for any\ndynamic tranches, or simple and readable output from things like perf.\n\nReasons to have the tranche ID:\n\n* The tranche name is not guaranteed to have the same address for a given\nvalue across backends in the presence of ASLR, even for built-in tranches.\nSo tools need to read tranche names as user-space strings, which is much\nmore expensive than consuming an int argument from the trace args. Storing\nand reporting maps of events by tranche name (string) in tools is also more\nexpensive than having a tranche id.\n* When the trace tool or script wants to filter for only one particular\ntranche,particularly when it's a built-in tranche where the tranche ID is\nknown, having the ID is much more useful and efficient.\n* If we can avoid computing the tranche name, emitting just the tranche ID\nwould be much faster.\n\nIt's annoying that we have to pay the cost of computing the tranche name\nthough. It never used to matter, but now that T_NAME() expands to\nGetLWTrancheName() calls as of 29c3e2dd5a6 it's going to cost a little more\non such a hot path. I might see if I can do a little comparison and see how\nmuch.\n\nI could add TRACE_POSTGRESQL_<<tracepointname>>_ENABLED() guards since we\ndo in fact build with SDT semaphore support. 
That adds a branch for each\ntracepoint, but they're already marshalling arguments and making a function\ncall that does lots more than a single branch, so that seems pretty\nsensible. The main downside of using _ENABLED() USDT semaphore guards is\nthat not all tools are guaranteed to understand or support them. So an\nolder perf, for example, might simply fail to fire events on guarded\nprobes. That seems OK to me, the onus should be on the probe tool to pay\nany costs, not on PostgreSQL. Despite that I don't want to mark the\n_ENABLED() guards unlikely(), since that'd increase the observer effect\nwhere probing LWLocks changes their timing and behaviour. Branch prediction\nshould do a very good job in such cases without being forced.\n\nI wonder a little about the possible cache costs of the _ENABLED() macros\nthough. Their data is in a separate ELF segment and separate .o, with no\nlocality to the traced code. It might be worth checking that before\nproceeding; I guess it's even possible that the GetLWTrancheName() calls\ncould be cheaper. Will see if I can run some checks and report back.\n\nBTW, if you want some of the details on how userspace SDTs work,\nhttps://leezhenghui.github.io/linux/2019/03/05/exploring-usdt-on-linux.html\nis interesting and useful. It helps explain uprobes, ftrace, bcc, etc.\n\nOr maybe we\n> don't need the name, or can record it differently, which might also\n> address your other concern that it's too expensive to compute. In any\n> case, I think an argument order like\n>\n> probe lwlock__acquite(const char *, int, LWLock*, LWLockMode);\n>\n> would make more sense.\n>\n\nOK.\n\nIn 0004, you add a probe to record the application_name setting? 
Would\n> there be any value in making that a generic probe that can record any\n> GUC change?\n>\n\nYes, there would, but I didn't want to go and do that in the same patch,\nand a named probe on application_name is useful separately to having probes\non any GUC.\n\nThere's value in having a probe with an easily targeted name that probes\nthe application_name since it's of obvious interest and utility to probing\nand tracing tools. A probe specifically on application_name means a probing\nscript doesn't have to fire an event for every GUC, copy the GUC name\nstring, strcmp() it to see if it's the GUC of interest, etc. So specific\nprobes on \"major\" GUCs like this are IMO very useful.\n\n(It'd be possible to instead generate probes for each GUC at compile-time\nusing the preprocessor and the DTRACE_ macros. But as noted above, that\ndoesn't currently work properly in the same compilation unit that a dtrace\nscript-generated probes.h is included in. I think it's probably nicer to\nhave specific probes for GUCs of high interest, then generic probes that\ncapture all GUCs anyway.)\n\nThere are a TON of probes I want to add, and I have a tree full of them\nwaiting to submit progressively. Yes, ability to probe all GUCs is in\nthere. So is detail on walsender, reorder buffer, and snapshot builder\nactivity. Heavyweight lock SDTs. A probe that identifies the backend type\nat startup. SDT probe events emitted for every wait-event. Probes in elog.c\nto let probes observe error unwinding, capture error messages, etc. (Those\ncan also be used with systemtap guru mode scripts to do things like turn a\nparticular elog(DEBUG) into a PANIC at runtime for diagnostic purposes).\nProbes in shm_mq to observe message passing and blocking. A probe that\nfires whenever debug_query_string changes. Lots. 
But I can't submit them\nall at once, especially without some supporting use cases and scripts that\nother people can use so they can understand why these probes are useful.\n\nSo I figured I'd start here...",
"msg_date": "Mon, 12 Apr 2021 14:31:32 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-12 14:31:32 +0800, Craig Ringer wrote:\n> * There is no easy way to look up the tranche name by ID from outside the\n> backend\n\nBut it's near trivial to add that.\n\n\n> It's annoying that we have to pay the cost of computing the tranche name\n> though. It never used to matter, but now that T_NAME() expands to\n> GetLWTrancheName() calls as of 29c3e2dd5a6 it's going to cost a little more\n> on such a hot path. I might see if I can do a little comparison and see how\n> much. I could add TRACE_POSTGRESQL_<<tracepointname>>_ENABLED() guards since we\n> do in fact build with SDT semaphore support. That adds a branch for each\n> tracepoint, but they're already marshalling arguments and making a function\n> call that does lots more than a single branch, so that seems pretty\n> sensible.\n\nI am against adding any overhead for this feature. I honestly think the\nprobes we have right now in postgres do not provide a commensurate\nbenefit.\n\n\n> (It'd be possible to instead generate probes for each GUC at compile-time\n> using the preprocessor and the DTRACE_ macros. But as noted above, that\n> doesn't currently work properly in the same compilation unit that a dtrace\n> script-generated probes.h is included in. I think it's probably nicer to\n> have specific probes for GUCs of high interest, then generic probes that\n> capture all GUCs anyway.)\n>\n> There are a TON of probes I want to add, and I have a tree full of them\n> waiting to submit progressively. Yes, ability to probe all GUCs is in\n> there. So is detail on walsender, reorder buffer, and snapshot builder\n> activity. Heavyweight lock SDTs. A probe that identifies the backend type\n> at startup. SDT probe events emitted for every wait-event. Probes in elog.c\n> to let probes observe error unwinding, capture error messages,\n> etc. [...] A probe that fires whenever debug_query_string\n> changes. Lots. 
But I can't submit them all at once, especially without\n> some supporting use cases and scripts that other people can use so\n> they can understand why these probes are useful.\n\n-1. This is not scalable. Adding static probes all over has both a\nruntime (L1I, branches, code optimization) and maintenance overhead.\n\n\n> (Those can also be used with systemtap guru mode scripts to do things\n> like turn a particular elog(DEBUG) into a PANIC at runtime for\n> diagnostic purposes).\n\nYikes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 11:23:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Tue, 13 Apr 2021 at 02:23, Andres Freund <andres@anarazel.de> wrote:\n\n[I've changed the order of the quoted sections a little to prioritize\nthe key stuff]\n\n>\n> On 2021-04-12 14:31:32 +0800, Craig Ringer wrote:\n>\n> > It's annoying that we have to pay the cost of computing the tranche name\n> > though. It never used to matter, but now that T_NAME() expands to\n> > GetLWTrancheName() calls as of 29c3e2dd5a6 it's going to cost a little more\n> > on such a hot path. I might see if I can do a little comparison and see how\n> > much. I could add TRACE_POSTGRESQL_<<tracepointname>>_ENABLED() guards since we\n> > do in fact build with SDT semaphore support. That adds a branch for each\n> > tracepoint, but they're already marshalling arguments and making a function\n> > call that does lots more than a single branch, so that seems pretty\n> > sensible.\n>\n> I am against adding any overhead for this feature. I honestly think the\n> probes we have right now in postgres do not provide a commensurate\n> benefit.\n\nI agree that the probes we have now are nearly useless, if not\nentirely useless. The transaction management ones are misplaced and\nutterly worthless. The LWLock ones don't carry enough info to be much\nuse and are incomplete. I doubt anybody uses any of them at all, or\nwould even notice their absence.\n\nIn terms of overhead, what is in place right now is not free. It used\nto be very cheap, but since 29c3e2dd5a6 it's not. 
I'd like to reduce\nthe current cost and improve functionality at the same time, so it's\nactually useful.\n\n\n> > * There is no easy way to look up the tranche name by ID from outside the\n> > backend\n>\n> But it's near trivial to add that.\n\nReally?\n\nWe can expose a pg_catalog.lwlock_tranches view that lets you observe\nthe current mappings for any given user backend I guess.\n\nBut if I'm looking for performance issues caused by excessive LWLock\ncontention or waits, LWLocks held too long, LWLock lock-ordering\ndeadlocks, or the like, it's something I want to capture across the\nwhole postgres instance. Each backend can have different tranche IDs\n(right?) and there's no way to know what a given non-built-in tranche\nID means for any given backend without accessing backend-specific\nin-memory state. Including for non-user-accessible backends like\nbgworkers and auxprocs, where it's not possible to just query the\nstate from a view directly.\n\nSo we'd be looking at some kind of shm based monstrosity. That doesn't\nsound appealing. Worse, there's no way to solve races with it - is a\ngiven tranche ID already allocated when you see it? If not, can you\nlook it up from the backend before the backend exits/dies? For that\nmatter, how do you do that, since the connection to the backend is\nlikely under the control of an application, not your monitoring and\ndiagnostic tooling.\n\nSome trace tools can poke backend memory directly, but it generally\nrequires debuginfo, is fragile and Pg version specific, slow, and a\nreal pain to use. If we don't attach the LWLock names to the\ntracepoints in some way they're pretty worthless.\n\nAgain, I don't plan to add new costs here. I'm actually proposing to\nreduce an existing cost.\n\nAnd you can always build without `--enable-dtrace` and ... 
just not care.\n\nAnyway - I'll do some `perf` runs shortly to quantify this:\n\n* With/without tracepoints at all\n* With/without names in tracepoints\n* With/without tracepoint refcounting (_ENABLED() semaphores)\n\nso as to rely less on handwaving.\n\n> > (Those can also be used with systemtap guru mode scripts to do things\n> > like turn a particular elog(DEBUG) into a PANIC at runtime for\n> > diagnostic purposes).\n>\n> Yikes.\n>\n\nWell, it's not like it can happen by accident. You have to\ndeliberately write a script that twiddles process memory, using a tool\nthat requires special privileges and\n\nI recently had to prepare a custom build for a customer that converted\nan elog(DEBUG) into an elog(PANIC) in order to capture a core with\nmuch better diagnostic info for a complex, hard to reproduce and\nintermittent memory management issue. It would've been rather nice to\nbe able to do so with a trace marker instead of a custom build.\n\n> > There are a TON of probes I want to add, and I have a tree full of them\n> > waiting to submit progressively. Yes, ability to probe all GUCs is in\n> > there. So is detail on walsender, reorder buffer, and snapshot builder\n> > activity. Heavyweight lock SDTs. A probe that identifies the backend type\n> > at startup. SDT probe events emitted for every wait-event. Probes in elog.c\n> > to let probes observe error unwinding, capture error messages,\n> > etc. [...] A probe that fires whenever debug_query_string\n> > changes. Lots. But I can't submit them all at once, especially without\n> > some supporting use cases and scripts that other people can use so\n> > they can understand why these probes are useful.\n>\n> -1. This is not scalable. 
Adding static probes all over has both a\n> runtime (L1I, branches, code optimization) and maintenance overhead.\n\nTake a look at \"sudo perf list\".\n\n\n sched:sched_kthread_work_execute_end [Tracepoint event]\n sched:sched_kthread_work_execute_start [Tracepoint event]\n ...\n sched:sched_migrate_task [Tracepoint event]\n ...\n sched:sched_process_exec [Tracepoint event]\n ...\n sched:sched_process_fork [Tracepoint event]\n ...\n sched:sched_stat_iowait [Tracepoint event]\n ...\n sched:sched_stat_sleep [Tracepoint event]\n sched:sched_stat_wait [Tracepoint event]\n ...\n sched:sched_switch [Tracepoint event]\n ...\n sched:sched_wakeup [Tracepoint event]\n\nThe kernel is packed with extremely useful trace events, and for very\ngood reasons. Some on very hot paths.\n\nI do _not_ want to randomly add probes everywhere. I propose that they be added:\n\n* Where they will meaningfully aid production diagnosis, complex\ntesting, and/or development activity. Expose high level activity of\nkey subsystems via trace markers especially at the boundaries of IPCs\nor logic otherwise passes between processes.\n* Where it's not feasible to instead adjust code structure to make\nDWARF debuginfo based probing sufficient.\n* Where there's no other sensible way to get useful information\nwithout excessive complexity and/or runtime cost, but it could be very\nimportant for understanding intermittent production issues or\nperformance problems at scale in live systems.\n* Where the execution path is not extremely hot - e.g. no static\ntracepoints in spinlocks or atomics.\n* Where a DWARF debuginfo based probe cannot easily replace them, i.e.\ngenerally not placed on entry and exit of stable and well-known\nfunctions.\n\nRe the code structure point above, we have lots of places where we\nreturn in multiple places, or where a single function can do many\ndifferent things with different effects on system state. 
For example\nright now it's quite complex to place probes to definitively confirm\nthe outcome of a given transaction and capture its commit record lsn.\nFunctions with many branches that each fiddle with system state,\nfunctions that test for the validity of some global and short-circuit\nreturn if invalid, etc. Functions that do long loops over big chunks\nof logic are hard too, e.g. ReorderBufferCommit.\n\nI want to place probes where they will greatly simplify observation of\nimportant global system state that's not easily observed using\ntraditional tools like gdb or logging.\n\nWhen applied sensibly and moderately, trace markers are absolutely\namazing for diagnostic and performance work. You can attach to them in\nproduction builds even without debuginfo and observe behaviour that\nwould otherwise be impossible without complex fiddling around with\nmulti-process gdb. This sort of capability is going to become more and\nmore important as we become more parallel and can rely less on\nsingle-process gdb-style tracing. Diagnostics using logging is a blunt\nhammer that does not scale and is rarely viable for intermittent or\nhard to reproduce production issues.\n\nI will always favour \"native postgres\" solutions where feasible - for\nexample, I want to add some basic reorder buffer state to struct\nWalSnd and the pg_stat_replication views, and I want to expose some\nmeans to get a walsender to report details of its ReorderBuffer state.\n\nBut some things are not very amenable to that. Either the runtime\ncosts of having the facility available are too high (we're never going\nto have a pg_catalog.pg_lwlocks for good reasons) or it's too\ncomplicated to write and maintain. Especially where info is needed\nfrom many processes.\n\nThat's where trace markers become valuable. But right now what we have\nin Pg is worthless, and it seems almost nobody knows how to use the\ntools. I want to change that, but it's a bit of a catch-22. 
Making\ntooling easy to use benefits enormously from some more stable\ninterfaces that don't break so much version-to-version, don't require\ndeep code knowledge to understand, and work without debuginfo on\nproduction builds. But without some \"oh, wow\" tools, it's hard to\nconvince anyone we should invest any effort in improving the\ninfrastructure...\n\nIt's possible I'm beating a dead horse here. I find these tools\namazingly useful, but they're currently made 10x harder than they need\nto be by the complexities of directly poking at postgres's complex and\nversion-specific internal structure using debuginfo based probing.\nThings that should be simple, like determining the timings of a txn\nfrom xid assignment -> 2pc prepare -> 2pc commit prepared .... really\naren't. Markers that report xid assignment, commit, rollback, etc,\nwith the associated topxid would help immensely.\n\n\n",
"msg_date": "Tue, 13 Apr 2021 10:34:18 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 10:34:18 +0800, Craig Ringer wrote:\n> > But it's near trivial to add that.\n> \n> Really?\n\nYes.\n\n\n> Each backend can have different tranche IDs (right?)\n\nNo, they have to be the same in each. Note how the tranche ID is part of\nstruct LWLock. Which is why LWLockNewTrancheId() has to acquire a lock\netc.\n\n\n> But if I'm looking for performance issues caused by excessive LWLock\n> contention or waits, LWLocks held too long, [...] or the like, it's\n> something I want to capture across the whole postgres instance.\n\nSure.\n\nAlthough I still don't really buy that static tracepoints are the best\nway to measure this kind of thing, given the delay introducing them and\nthe cost of having them around. I think I pointed out\nhttps://postgr.es/m/20200813004233.hdsdfvufqrbdwzgr%40alap3.anarazel.de\nbefore.\n\n\n> LWLock lock-ordering deadlocks\n\nThis seems unrelated to tracepoints to me.\n\n\n> and there's no way to know what a given non-built-in tranche ID means\n> for any given backend without accessing backend-specific in-memory\n> state. Including for non-user-accessible backends like bgworkers and\n> auxprocs, where it's not possible to just query the state from a view\n> directly.\n\nThe only per-backend part is that some backends might not know the\ntranche name for dynamically registered tranches where the\nLWLockRegisterTranche() hasn't been executed in a backend. Which should\npretty much never be an aux process or such. And even for bgworkers it\nseems like a pretty rare thing, because those need to be started by\nsomething...\n\nIt might be worth proposing a shared hashtable with tranch names and\njut reserving enough space for ~hundred entries...\n\n> And you can always build without `--enable-dtrace` and ... 
just not care.\n\nPractically speaking, distributions enable it, which then incurs the\ncost for everyone.\n\n\n\n> Take a look at \"sudo perf list\".\n> \n> \n> sched:sched_kthread_work_execute_end [Tracepoint event]\n> sched:sched_kthread_work_execute_start [Tracepoint event]\n> ...\n> sched:sched_migrate_task [Tracepoint event]\n> ...\n> sched:sched_process_exec [Tracepoint event]\n> ...\n> sched:sched_process_fork [Tracepoint event]\n> ...\n> sched:sched_stat_iowait [Tracepoint event]\n> ...\n> sched:sched_stat_sleep [Tracepoint event]\n> sched:sched_stat_wait [Tracepoint event]\n> ...\n> sched:sched_switch [Tracepoint event]\n> ...\n> sched:sched_wakeup [Tracepoint event]\n> \n> The kernel is packed with extremely useful trace events, and for very\n> good reasons. Some on very hot paths.\n\nIIRC those aren't really comparable - the kernel actually does modify\nthe executable code to replace the tracepoints with nops.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Apr 2021 20:06:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
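[Editor's note: for readers following the tranche-name discussion above, the extension-facing registration pattern that LWLockNewTrancheId() and LWLockRegisterTranche() belong to looks roughly like the sketch below. It illustrates Andres's point that name registration is per-backend while the tranche ID itself is shared. `MyExtState`, `my_ext_state`, and `my_ext_tranche` are illustrative names, not anything from the thread, and the surrounding shmem-hook plumbing is omitted.]

```c
/* Sketch: shared state holding an extension LWLock plus its tranche ID.
 * The ID is allocated once (LWLockNewTrancheId() takes a lock internally),
 * but the ID -> name mapping must be registered in every backend that
 * wants the name to be resolvable; backends that skip the registration
 * see only the numeric tranche ID. */
typedef struct MyExtState
{
    int         tranche_id;
    LWLock      lock;
} MyExtState;

static void
my_ext_shmem_startup(void)
{
    bool        found;
    MyExtState *state;

    state = ShmemInitStruct("my_ext_state", sizeof(MyExtState), &found);
    if (!found)
    {
        /* First backend to attach: allocate the instance-wide tranche ID. */
        state->tranche_id = LWLockNewTrancheId();
        LWLockInitialize(&state->lock, state->tranche_id);
    }

    /* Per-backend step: map the shared ID to a human-readable name. */
    LWLockRegisterTranche(state->tranche_id, "my_ext_tranche");
}
```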
{
"msg_contents": "On Tue, 13 Apr 2021 at 11:06, Andres Freund <andres@anarazel.de> wrote:\n\n> > Each backend can have different tranche IDs (right?)\n>\n> No, they have to be the same in each. Note how the tranche ID is part of\n> struct LWLock. Which is why LWLockNewTrancheId() has to acquire a lock\n> etc.\n\nAh. I misunderstood that at some point.\n\nThat makes it potentially more sensible to skip reporting tranche\nnames. Not great, because it's much less convenient to work with trace\ndata full of internal ordinals that must be re-mapped in\npost-processing. But I'm generally OK with deferring runtime costs to\ntooling rather than the db itself so long as doing so is moderately\npractical.\n\nIn this case, I think we could likely get away with removing the\ntranche names from the tracepoints if we instead emit a trace event on\neach dynamic tranche registration that reports the tranche id -> name\nmapping. It still sucks for tools, since they have to scrape up the\nstatic tranche registrations from somewhere else, but ... it'd be\ntolerable.\n\n> > The kernel is packed with extremely useful trace events, and for very\n> > good reasons. Some on very hot paths.\n>\n> IIRC those aren't really comparable - the kernel actually does modify\n> the executable code to replace the tracepoints with nops.\n\nSame with userspace static trace markers (USDTs).\n\nA followup mail will contain a testcase and samples to demonstrate this.\n\n> Although I still don't really buy that static tracepoints are the best\n> way to measure this kind of thing, given the delay introducing them and\n> the cost of having them around. I think I pointed out\n> https://postgr.es/m/20200813004233.hdsdfvufqrbdwzgr%40alap3.anarazel.de\n> before.\n\nYeah. 
Semaphores are something hot enough that I'd hesitate to touch them.\n\n> > LWLock lock-ordering deadlocks\n>\n> This seems unrelated to tracepoints to me.\n\nIf I can observe which locks are acquired in which order by each proc,\nI can then detect excessive waits and report the stack of held locks\nof both procs and their order of acquisition.\n\nSince LWLocks shmem state doesn't AFAICS track any information on the\nlock holder(s) I don't see a way to do this in-process.\n\nIt's not vital, it's just one of the use cases I have in mind. I\nsuspect that any case where such deadlocks are possible represents a\nmisuse of LWLocks anyway.\n\n> > and there's no way to know what a given non-built-in tranche ID means\n> > for any given backend without accessing backend-specific in-memory\n> > state. Including for non-user-accessible backends like bgworkers and\n> > auxprocs, where it's not possible to just query the state from a view\n> > directly.\n>\n> The only per-backend part is that some backends might not know the\n> tranche name for dynamically registered tranches where the\n> LWLockRegisterTranche() hasn't been executed in a backend. Which should\n> pretty much never be an aux process or such. And even for bgworkers it\n> seems like a pretty rare thing, because those need to be started by\n> something...\n>\n> It might be worth proposing a shared hashtable with tranch names and\n> jut reserving enough space for ~hundred entries...\n\nYeah, that'd probably work and be cheap enough not to really matter.\nMight even save us a chunk of memory by not turning CoW pages into\nprivate mappings for each backend during registration.\n\n> > And you can always build without `--enable-dtrace` and ... just not care.\n>\n> Practically speaking, distributions enable it, which then incurs the\n> cost for everyone.\n\nYep. That's part of why I was so surprised to notice the\nGetLWTrancheName() function call in LWLock tracepoints. 
Nearly\nanywhere else it wouldn't matter at all, but LWLocks are hot enough\nthat it just might matter for the no-wait fastpath.\n\n\n",
"msg_date": "Tue, 13 Apr 2021 21:05:18 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Tue, 13 Apr 2021 at 21:05, Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> On Tue, 13 Apr 2021 at 11:06, Andres Freund <andres@anarazel.de> wrote:\n> > IIRC those aren't really comparable - the kernel actually does modify\n> > the executable code to replace the tracepoints with nops.\n>\n> Same with userspace static trace markers (USDTs).\n>\n> A followup mail will contain a testcase and samples to demonstrate this.\n\nDemo follows, with source attached too. gcc 10.2 compiling with -O2,\nusing dtrace and <sys/sdt.h> from systemtap 4.4 .\n\nTrivial empty function definition:\n\n __attribute__((noinline))\n void\n no_args(void)\n {\n SDT_NOOP_NO_ARGS();\n }\n\nDisassembly when SDT_NOOP_NO_ARGS is defined as\n\n #define SDT_NOOP_NO_ARGS()\n\nis:\n\n <no_args>:\n retq\n\nWhen built with a probes.d definition processed by the dtrace script\ninstead, the disassembly becomes:\n\n <no_args>:\n nop\n retq\n\nSo ... yup, it's a nop.\n\nNow, if we introduce semaphores that changes.\n\n __attribute__((noinline))\n void\n no_args(void)\n {\n if (SDT_NOOP_NO_ARGS_ENABLED())\n SDT_NOOP_NO_ARGS();\n }\n\ndisassembles to:\n\n <no_args>:\n cmpw $0x0,0x2ec4(%rip) # <sdt_noop_no_args_semaphore>\n jne <no_args+0x10>\n retq\n nopl 0x0(%rax,%rax,1)\n nop\n retq\n\nso the semaphore test is actually quite harmful and wasteful in this\ncase. That's not surprising since this SDT is a simple marker point.\nBut what if we supply arguments to it? It turns out that the\ndisassembly is the same if args are passed, whether locals or globals,\nincluding globals assigned based on program input that can't be\ndetermined at compile time. 
Still just a nop.\n\nIf I pass a function call as an argument expression to a probe, e.g.\n\n __attribute__((noinline)) static int\n compute_probe_argument(void)\n {\n return 100;\n }\n\n void\n with_computed_arg(void)\n {\n SDT_NOOP_WITH_COMPUTED_ARG(compute_probe_argument());\n }\n\nthen the disassembly with SDTs is:\n\n <with_computed_arg>:\n callq <compute_probe_argument>\n nop\n retq\n\nso the function call isn't elided even if it's unused. That's somewhat\nexpected. The same will be true if the arguments to a probe require\npointer chasing or non-trivial marshalling.\n\nIf a semaphore guard is added this becomes:\n\n <with_computed_arg>:\n cmpw $0x0,0x2e2e(%rip) # <sdt_noop_with_computed_arg_semaphore>\n jne <with_computed_arg+0x10>\n retq\n nopl 0x0(%rax,%rax,1)\n callq <compute_probe_argument>\n nop\n retq\n\nso now the call to compute_probe_argument() is skipped unless the\nprobe is enabled, but the function is longer and requires a test and\njump.\n\nIf I dummy up a function that does some pointer chasing, without\nsemaphores I get\n\n<with_pointer_chasing>:\n mov (%rdi),%rax\n mov (%rax),%rax\n mov (%rax),%rax\n nop\n retq\n\nso the arguments are marshalled then ignored.\n\nwith semaphores I get:\n\n<with_pointer_chasing>:\n cmpw $0x0,0x2d90(%rip) # <sdt_noop_with_pointer_chasing_semaphore>\n jne <with_pointer_chasing+0x10>\n retq\n nopl 0x0(%rax,%rax,1)\n mov (%rdi),%rax\n mov (%rax),%rax\n mov (%rax),%rax\n nop\n retq\n\nso again the probe's argument marshalling is inline in the function\nbody, but at the end, and skipped over.\n\nFindings:\n\n* A probe without arguments or with simple arguments is just a 'nop' instruction\n* Probes that require function calls, pointer chasing, other\nexpression evaluation etc may impose a fixed cost to collect up\narguments even if the probe is disabled.\n* SDT semaphores can avoid that cost but add a branch, so should\nprobably be avoided unless preparing probe arguments is likely to be\nexpensive.\n\nHideous but 
effective demo code attached.",
"msg_date": "Tue, 13 Apr 2021 21:40:58 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Tue, 13 Apr 2021 at 21:40, Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n\n> Findings:\n>\n> * A probe without arguments or with simple arguments is just a 'nop' instruction\n> * Probes that require function calls, pointer chasing, other\n> expression evaluation etc may impose a fixed cost to collect up\n> arguments even if the probe is disabled.\n> * SDT semaphores can avoid that cost but add a branch, so should\n> probably be avoided unless preparing probe arguments is likely to be\n> expensive.\n\nBack to the topic directly at hand.\n\nAttached a disassembly of what LWLockAcquire looks like now on my\ncurrent build of git master @ 5fe83adad9efd5e3929f0465b44e786dc23c7b55\n. This is an --enable-debug --enable-cassert --enable-dtrace build\nwith -Og -ggdb3.\n\nThe three tracepoints in it are all of the form:\n\n movzwl 0x0(%r13),%edi\n call 0x801c49 <GetLWTrancheName>\n nop\n\nso it's clear we're doing redundant calls to GetLWTrancheName(), as\nexpected. Not ideal.\n\nNow if I patch it to add the _ENABLED() guards on all the tracepoints,\nthe probes look like this:\n\n 0x0000000000803176 <+200>: cmpw $0x0,0x462da8(%rip) #\n0xc65f26 <postgresql_lwlock__acquire_semaphore>\n 0x000000000080317e <+208>: jne 0x80328b <LWLockAcquire+477>\n .... other interleaved code ...\n 0x000000000080328b <+477>: movzwl 0x0(%r13),%edi\n 0x0000000000803290 <+482>: call 0x801c49 <GetLWTrancheName>\n 0x0000000000803295 <+487>: nop\n 0x0000000000803296 <+488>: jmp 0x803184 <LWLockAcquire+214>\n\nso we avoid the GetLWTrancheName() call at the cost of a test and\npossible branch, and a small expansion in total function size. Without\nthe semaphores, LWLockAcquire is 463 bytes. 
With them, it's 524 bytes,\nwhich is nothing to sneeze at for code that doesn't do anything\n99.999% of the time, but we avoid a bunch of GetLWTrancheName() calls.\n\nIf I instead replace T_NAME() with NULL for all tracepoints in\nLWLockAcquire, the disassembly shows that the tracepoints now become a\nsimple\n\n 0x0000000000803176 <+200>: nop\n\nwhich is pretty hard to be concerned about.\n\nSo at the very least we should be calling GetLWTrancheName() once at\nthe start of the function if built with dtrace support and remembering\nthe value, instead of calling it for each tracepoint.\n\nAnd omitting the tranche name looks like it might be sensible for the\nLWLock code. In most places it won't matter, but LWLocks are hot\nenough that it possibly might. A simple pg_regress run hits\nLWLockAcquire 25 million times, so that's 75 million calls to\nGetLWTrancheName().\n\n\n",
"msg_date": "Tue, 13 Apr 2021 22:48:02 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
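[Editor's note: to make the disassembly above concrete in source form, the `_ENABLED()` guard being measured is the standard systemtap/dtrace semaphore idiom applied to the existing lwlock.c tracepoints. A rough sketch, assuming the macro spellings generated from probes.d by `dtrace -h`:]

```c
/* Unguarded form: T_NAME(lock) -> GetLWTrancheName() runs on every
 * acquisition, even when no tracer is attached. */
TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode);

/* Guarded form: the tranche-name lookup is skipped unless a tracer has
 * bumped the probe semaphore, at the cost of a test and possible branch
 * (the cmpw/jne pair in the disassembly above). */
if (TRACE_POSTGRESQL_LWLOCK_ACQUIRE_ENABLED())
    TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode);
```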
{
"msg_contents": "On Mon, Apr 12, 2021 at 11:06 PM Andres Freund <andres@anarazel.de> wrote:\n> No, they have to be the same in each. Note how the tranche ID is part of\n> struct LWLock. Which is why LWLockNewTrancheId() has to acquire a lock\n> etc.\n\nMore precisely, if a tranche ID is defined in multiple backends, it\nneeds to be defined the same way in all of them. But it is possible to\nhave an extension loaded into some backends and not others and have it\ndefine a tranche ID that other backends know nothing about.\n\nAnother point to note is that, originally, I had an idea that each\ntranche of lwlocks was situation in a single array somewhere in\nmemory. Perhaps that was an array of something else, like buffer\ndescriptors, and the lwlocks were just one element of the struct, or\nmaybe it was an array specifically of LWLocks, but one way or the\nother, there was definitely one array that had all the LWLocks from\nthat tranche in it. So before the commit in question --\n3761fe3c20bb040b15f0e8da58d824631da00caa -- T_ID() used to compute an\noffset for a lock within the tranche that was supposed to uniquely\nidentify the lock. However, the whole idea of an array per tranche\nturns out to be broken by design.\n\nConsider parallel query. You could, perhaps, arrange for all the\nLWLocks that a particular query needs to be in one tranche. And that's\nall fine. But what if there are multiple parallel contexts in\nexistence at the same time? I think right now that may be impossible\nas a practical matter, since for example an SQL function that is\ncalled by a parallel query is supposed to run any SQL statements\ninside of it without parallelism. But, that's basically a policy\ndecision. There's nothing in the parallel context machinery itself\nwhich prevents multiple parallel contexts from being active at the\nsame time. And if that happens, then you'd have multiple arrays with\nthe same tranche ID, so how do you identify the locks then? 
The\npre-3761fe3c20bb040b15f0e8da58d824631da00caa data structure doesn't\nwork because it has only one place to store an array base, but having\nmultiple places to store an array base doesn't fix it either because\nnow you've just given the same identifier to multiple locks.\n\nYou could maybe fix it by putting a limit on how many parallel\ncontexts can be open at the same time, and then having N copies of\neach parallelism-related tranche. But that seems ugly and messy and a\nburden on extension authors and not really what anybody wants.\n\nYou could try to identify locks by pointer addresses, but that's got\nsecurity hazards and the addresses aren't portable across all the\nbackends involved in the parallel query because of how DSM works, so\nit's not really that helpful in terms of matching stuff up.\n\nYou could identify every lock by a tranche ID + an array offset + a\n\"tranche instance ID\". But where would you store the tranche instance\nID to make it readily accessible, other than in the lock itself?\nAndres wasn't thrilled about using even 2 bytes to identify the\nLWLock, so he'll probably like having more bytes in there for that\nkind of thing even less. And to be honest I wouldn't blame him. We\nonly need 12 bytes to implement the lock itself -- we can't justify\nhaving more than a couple of additional bytes for debugging purposes.\n\nOn a broader level, I agree that being able to find out what the\nsystem is doing is really important. But I'm also not entirely\nconvinced that having really fine-grained information here to\ndistinguish between one lock and another is the way to get there.\nPersonally, I've never run into a problem where I really needed to\nknow anything more than the tranche name. Like, I've seen problems\nwhere, for example, we can see that there's a lot of contention on\nSubtransSLRULock, or there's problems with WALInsertLock. But I can't\nreally see why I'd need to know which WALInsertLock was experiencing\ncontention. 
If we were speaking of buffer content locks, I suppose I\ncan imagine wanting more details, but it's not really the buffer\nnumber I'd want to know. I'd want to know the database OID, the\nrelfilenode, the fork number, and the block number. You can argue that\nwe should just expose the buffer number and let the user sort out the\nrest with dtrace/systemtap magic, but that makes it useless in\npractice to an awful lot of people, including me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Apr 2021 14:25:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-13 14:25:23 -0400, Robert Haas wrote:\n> On Mon, Apr 12, 2021 at 11:06 PM Andres Freund <andres@anarazel.de> wrote:\n> You could identify every lock by a tranche ID + an array offset + a\n> \"tranche instance ID\". But where would you store the tranche instance\n> ID to make it readily accessible, other than in the lock itself?\n> Andres wasn't thrilled about using even 2 bytes to identify the\n> LWLock, so he'll probably like having more bytes in there for that\n> kind of thing even less.\n\nI still don't like the two bytes, fwiw ;). Especially because it's 4\nbytes due to padding right now.\n\nI'd like to move the LWLock->waiters list to outside of the lwlock\nitself - at most TotalProcs LWLocks can be waited for, so we don't need\nmillions of empty proclist_heads. That way we could also remove the\nproclist indirection - which shows up a fair bit in contended workloads.\n\nAnd if we had a separate \"lwlocks being waited for\" structure, we could\nalso add more information to it if we wanted to...\n\nThe difficulty of course is having space to indicate which of these\n\"waiting for\" lists are being used - there's not enough space in ->state\nright now to represent that. Two possibile approaches:\n\n- We could make it work if we restricted MAX_BACKENDS to be 2**14 - but\n while I personally think that's a sane upper limit, I already had a\n hard time getting consensus to lower the limit to 2^18-1.\n\n- Use a 64bit integer. Then we can easily fit MAX_BACKENDS lockers, as\n well as an offset to one of MAX_BACKENDS \"wait lists\" into LWLock.\n\n\nIt's not so much that I want to lower the overall memory usage (although\nit doesn't hurt). It's more about being able to fit more data into one\ncacheline together with the lwlock. E.g. being able to fit more into\nBufferDesc would be very useful.\n\nA secondary benefit of such an approach would be that it it makes it a\nlot easier to add efficient adaptive spinning on contended locks. 
I did\nexperiment with that, and there's some considerable potential for\nperformance benefits there. But for it to scale well we need something\nsimilar to \"mcs locks\", to avoid causing too much contention. And that\npretty much requires some separate space to store wait information\nanyway.\n\nWith an 8 bytes state we probably could also stash the tranche inside\nthat...\n\n\n> On a broader level, I agree that being able to find out what the\n> system is doing is really important. But I'm also not entirely\n> convinced that having really fine-grained information here to\n> distinguish between one lock and another is the way to get there.\n> Personally, I've never run into a problem where I really needed to\n> know anything more than the tranche name.\n\nI think it's quite useful for relatively simple things like analyzing\nthe total amount of time spent in individual locks, without incurring\nmuch overhead when not doing so (for which you need to identify\nindividual locks, otherwise your end - start time is going to be\nmeaningless). And, slightly more advanced, for analyzing what the stack\nwas when the lock was released - which then allows you to see what work\nyou're blocked on, something I found hard to figure out otherwise.\n\nI found that that's mostly quite doable with dynamic probes though.\n\n\n> Like, I've seen problems for example we can see that there's a lot of\n> contention on SubtransSLRULock, or there's problems with\n> WALInsertLock. But I can't really see why I'd need to know which\n> WALInsertLock was experiencing contention.\n\nWell, but you might want to know what the task blocking you was\ndoing. What to optimize might differ if the other task is e.g. a log\nswitch (which acquires all insert locks), than if it's WAL writes by\nVACUUM.\n\n\n> If we were speaking of buffer content locks, I suppose I can imagine\n> wanting more details, but it's not really the buffer number I'd want\n> to know. 
I'd want to know the database OID, the relfilenode, the fork\n> number, and the block number. You can argue that we should just expose\n> the buffer number and let the user sort out the rest with\n> dtrace/systemtap magic, but that makes it useless in practice to an\n> awful lot of people, including me.\n\nI have wondered if we ought to put some utilities for that in contrib or\nsuch. It's a lot easier to address something new with a decent starting\npoint...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:46:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 14 Apr 2021 at 04:46, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-04-13 14:25:23 -0400, Robert Haas wrote:\n> > On Mon, Apr 12, 2021 at 11:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > You could identify every lock by a tranche ID + an array offset + a\n> > \"tranche instance ID\". But where would you store the tranche instance\n> > ID to make it readily accessible, other than in the lock itself?\n> > Andres wasn't thrilled about using even 2 bytes to identify the\n> > LWLock, so he'll probably like having more bytes in there for that\n> > kind of thing even less.\n>\n> I still don't like the two bytes, fwiw ;). Especially because it's 4\n> bytes due to padding right now.\n\nAha, did I hear you say \"there are two free bytes for me to shove\nsomething marginally useful and irrelevant into\"?\n\n(*grin*)\n\n> I'd like to move the LWLock->waiters list to outside of the lwlock\n> itself - at most TotalProcs LWLocks can be waited for, so we don't need\n> millions of empty proclist_heads. That way we could also remove the\n> proclist indirection - which shows up a fair bit in contended workloads.\n>\n> And if we had a separate \"lwlocks being waited for\" structure, we could\n> also add more information to it if we wanted to...\n\nHaving the ability to observe LWLock waiters would be nice. But you're\nright to constantly emphasise that LWLocks need to be very slim. We\ndon't want to turn them into near-heavyweight locks by saddling them\nwith overhead that's not strictly necessary. A simple pg_regress run\n(with cassert, admittedly) takes 25,000,000 LWLocks and does 24,000\nLWLock waits and 130,000 condvar waits. 
All in less than a minute.\nOTOH, once someone's waiting we don't care nearly as much about\nbookkeeping cost, it's the un-contested fast paths that're most\ncritical.\n\n> - We could make it work if we restricted MAX_BACKENDS to be 2**14 - but\n> while I personally think that's a sane upper limit, I already had a\n> hard time getting consensus to lower the limit to 2^18-1.\n\n16384 backends is totally fine in sane real world deployments. But\nit'll probably upset marketing people when OtherDatabaseVendor starts\nshouting that they support 14 million connections, and postgres only\nhas 16k. Sigh.\n\nThe real answer here in the long term probably needs to be decoupling\nof executors from connection state inside postgres. But while we're on\nthat topic, how about we convert the entire codebase to Rust while\nriding on a flying rainbow unicorn? We're stuck with the 1:1\nconnection to executor mapping for the foreseeable future.\n\n> - Use a 64bit integer. Then we can easily fit MAX_BACKENDS lockers, as\n> well as an offset to one of MAX_BACKENDS \"wait lists\" into LWLock.\n\nYou know much more than me about the possible impacts of that on\nlayout and caching, but I gather that it's probably undesirable to\nmake LWLocks any bigger.\n\n> I think it's quite useful for relatively simple things like analyzing\n> the total amount of time spent in individual locks, without incuring\n> much overhead when not doing so (for which you need to identify\n> individual locks, otherwise your end - start time is going to be\n> meaningless).\n\nYep.\n\nThat's why the removal of the lock offset is a bit frustrating,\nbecause you can't identify an LWLock instance-wide by LWLock* due to\nthe possibility of different per-backend DSM base-address mappings.\nWell, and ASLR on EXEC_BACKEND systems, but who cares about those?\n\nThe removal was for good reasons though. And it only affects LWLocks\nin DSM, for everything else the LWLock* is good enough. 
If identifying\nLWLocks in DSM ever matters enough to bother to solve that problem, we\ncan emit trace events on DSM mapping attach in each backend, and trace\ntools can do the work to track which LWLocks are in DSM and convert\ntheir addresses to a reference base address. Pg shouldn't have to pay\nthe price for that unless it's something a lot of people need.\n\n> And, slightly more advanced, for analyzing what the stack\n> was when the lock was released - which then allows you to see what work\n> you're blocked on, something I found hard to figure out otherwise.\n>\n> I found that that's mostly quite doable with dynamic probes though.\n\nYeah, it is.\n\nThat's part of why my patchset here doesn't try to do a lot to LWLock\ntracepoints - I didn't think it was necessary to add a lot.\nThe LWLock code is fairly stable, not usually something you have to\nworry about in production unless you're debugging badly behaved\nextensions, and usually somewhat probe-able with DWARF based dynamic\nprobes. However, the way the wait-loop and fast-path are in the same\nfunction is a serious pain for dynamic probing; you can't tell the\ndifference between a fast-path acquire and an acquire after a wait\nwithout using probes on function+offset or probing by source line.\nBoth those are fine for dev work but useless in tool/library scripts.\n\nI almost wonder if we should test out moving the LWLock wait-loops out\nof LWLockAcquire(), LWLockAcquireOrWait() and LWLockWaitForVar()\nanyway, so the hot parts of the function are smaller. That'd make\ndynamic probing more convenient as a pleasant side effect. I imagine\nyou must've tried this, benchmarked and profiled it, though, and found\nit to be a net loss, otherwise you surely would've done it as part of\nyour various (awesome) performance work.\n\nAnyway, there are some other areas of postgres that are ridiculously\npainful to instrument with dynamic probes, especially in a somewhat\nversion- and build-independent way. 
Tracking txn commit and abort\n(including 2PC and normal xacts, with capture of commit LSNs) is just\npainful with dynamic probing for example, and is one of my top\npriority areas to get some decent tracepoints for - the current txn\nmanagement tracepoints are utterly worthless. But LWLocks are mostly\nfine, the only really big piece missing is a tracepoint fired exactly\nonce when a lock is released by any release path.\n\n> > Like, I've seen problems for example we can see that there's a lot of\n> > contention on SubtransSLRULock, or there's problems with\n> > WALInsertLock. But I can't really see why I'd need to know which\n> > WALInsertLock was experiencing contention.\n>\n> Well, but you might want to know what the task blocking you was\n> doing. What to optimize might differ if the other task is e.g. a log\n> switch (which acquires all insert locks), than if it's WAL writes by\n> VACUUM.\n\nThat sort of thing is why I've been interested in IDing the LWLock.\nThat, and I work with extension code that probably abuses LWLocks a\nbit, but that's not a problem core postgres should have to care about.\n\n> > If we were speaking of buffer content locks, I suppose I can imagine\n> > wanting more details, but it's not really the buffer number I'd want\n> > to know. I'd want to know the database OID, the relfilenode, the fork\n> > number, and the block number. You can argue that we should just expose\n> > the buffer number and let the user sort out the rest with\n> > dtrace/systemtap magic, but that makes it useless in practice to an\n> > awful lot of people, including me.\n>\n> I have wondered if we ought to put some utilities for that in contrib or\n> such. It's a lot easier to address something new with a decent starting\n> point...\n\nLong term that's exactly what I want to do.\n\nI wrote some with systemtap, but it's since become clear to me that\nsystemtap isn't going to get enough people on board. Setup is a\nhassle. 
So I'm trying to pivot over to bpf tools now, with the\nintention of getting together a library of canned probes and example\nscripts to help people get started.\n\nI've written systemtap scripts that can track a transaction from\nlocalxid allocation through real xid allocation, commit (or 2pc\nprepare and commit prepared), logical decoding, reorder buffering,\noutput plugin processing, receipt by a logical downstream, downstream\nxid assignment, downstream commit and replorigin advance,\ndownstream->upstream feedback, upstream slot advance, and upstream\nconfirmed flush/catalog_xmin advance. Whole-lifecycle tracking with\ntiming of each phase, across multiple processes and two postgres\ninstances. For now the two postgres instances must be on the same\nhost, but that could be dealt with too. The script reports the\napplication name and pid of the upstream session, the upstream\nlocalxid, the upstream xid, upstream commit lsn, the downstream xid\nand the downstream commit lsn as it goes, and follows associations to\ntrack the transaction through its lifecycle. (The current script is\nwritten for BDR and pglogical, not in-core logical decoding, but the\nprinciples are the same).\n\nThe problem is that scripts like this are just too fragile right now.\nChanges across Pg versions break the dynamic function probes they use,\nthough that can be adapted to somewhat. The bigger problem is the\nnumber of places I have to insert statement (function+offset) probes,\nwhich are just too fragile to make these sorts of scripts generally\nuseful. I have to fix them whenever I want to use them, so there's not\nmuch point trying to get people to use them.\n\nBut it's hard to convince people of the value of static tracepoints\nthat would make this sort of thing so much easier to do in a more\nstable manner when they can't easily see hands-on examples of what's\npossible. There's no \"wow\" factor. 
So I need to address the worst of\nthe difficult-to-probe sites and start sharing some examples that use\nthem.\n\nI thought this would be low-hanging fruit to start with. Whoops!\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:23:51 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
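[Editor's note: as a concrete illustration of the dynamic-probe workflow Craig describes, and of why the function+offset probes are so fragile, a typical perf session against a debuginfo build looks something like the sketch below. `$PGBIN` and `$PGPID` are placeholders for the postgres binary and a backend pid; argument capture requires debuginfo and the exact offsets vary with every build.]

```shell
# Attach a dynamic probe at LWLockAcquire entry, capturing its arguments.
perf probe -x "$PGBIN" --add 'LWLockAcquire lock mode'
perf record -e 'probe_postgres:LWLockAcquire' -p "$PGPID" -- sleep 30
perf script                         # dump the recorded events

# Distinguishing the fast path from the wait loop needs a
# function+offset probe, which must be redone whenever the code moves:
perf probe -x "$PGBIN" --add 'LWLockAcquire+0x42'
```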
{
"msg_contents": "On Wed, 14 Apr 2021 at 02:25, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> So before the commit in question --\n> 3761fe3c20bb040b15f0e8da58d824631da00caa -- T_ID() used to compute an\n> offset for a lock within the tranche that was supposed to uniquely\n> identify the lock. However, the whole idea of an array per tranche\n> turns out to be broken by design.\n\nYeah, I understand that.\n\nI'd really love it if a committer could add an explanatory comment or\ntwo in the area though. I'm happy to draft a comment patch if anyone\nagrees my suggestion is sensible. The key things I needed to know when\nstudying the code were:\n\n* A LWLock* is always part of a tranche, but locks within a given\ntranche are not necessarily co-located in memory, cross referenced or\nassociated in any way.\n* A LWLock tranche does not track how many LWLocks are in the tranche\nor where they are in memory. It only groups up LWLocks into categories\nand maps the tranche ID to a name.\n* Not all LWLocks are part of the main LWLock array; others can be\nembedded in shmem structs elsewhere, including in DSM segments.\n* LWLocks in DSM segments may not have the same address between\ndifferent backends, because the DSM base address can vary, so a\nLWLock* cannot be reliably compared between backends unless you know\nit's in the main LWLock array or in static shmem.\n\nHaving that info in lwlock.c near the tranche management code or the\ntranche and main lwlock arrays would've been very handy.\n\n\n> You could try to identify locks by pointer addresses, but that's got\n> security hazards and the addreses aren't portable across all the\n> backends involved in the parallel query because of how DSM works, so\n> it's not really that helpful in terms of matching stuff up.\n\nWhat I'm doing now is identifying them by LWLock* across backends. 
I\nkeep track of DSM segment mappings in each backend inside the trace\nscript and I relocate LWLock* pointers known to be inside DSM segments\nrelative to a dummy base address so they're equal across backends.\n\nIt's a real pain, but it works. The main downside is that the trace\nscript has to observe the DSM attach; if it's started once a backend\nalready has the DSM segment attached, it has no idea the LWLock is in\na DSM segment or how to remap it. But that's not a serious issue.\n\n> On a broader level, I agree that being able to find out what the\n> system is doing is really important. But I'm also not entirely\n> convinced that having really fine-grained information here to\n> distinguish between one lock and another is the way to get there.\n\nAt the start of this thread I would've disagreed with you.\n\nBut yeah, you and Andres are right, because the costs outweigh the\nbenefits here.\n\nI'm actually inclined to revise the patch I sent in order to *remove*\nthe LWLock name from the tracepoint argument. At least for the\nfast-path tracepoints on start-of-acquire and end-of-acquire. I think\nit's probably OK to report it in the lock wait tracepoints, which is\nwhere it's most important to have anyway. So the tracepoint will\nalways report the LWLock* and tranche ID, but it won't report the\ntranche name for the fast-path. I'll add trace events for tranche ID\nregistration, so trace tools can either remember the tranche ID->name\nmappings from there, or capture them from lock wait events and\nremember them.\n\nReasonable? That way we retain the most important trace functionality,\nbut we reduce the overheads.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:41:44 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 14 Apr 2021 at 10:41, Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> On Wed, 14 Apr 2021 at 02:25, Robert Haas <robertmhaas@gmail.com> wrote:\n> > You could try to identify locks by pointer addresses, but that's got\n> > security hazards and the addreses aren't portable across all the\n> > backends involved in the parallel query because of how DSM works, so\n> > it's not really that helpful in terms of matching stuff up.\n>\n> What I'm doing now is identifying them by LWLock* across backends. I\n> keep track of DSM segment mappings in each backend inside the trace\n> script and I relocate LWLock* pointers known to be inside DSM segments\n> relative to a dummy base address so they're equal across backends.\n\nBTW, one of the reasons I did this was to try to identify BDR and\npglogical code that blocks or sleeps while holding a LWLock. I got\nstuck on that for other reasons, so it didn't go anywhere, but those\nissues are now resolved so I should probably return to it at some\npoint.\n\nIt'd be a nice thing to be able to run on postgres itself too.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:45:16 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 12.04.21 07:46, Craig Ringer wrote:\n> > To use systemtap semaphores (the _ENABLED macros) you need to run\n> dtrace\n> > -g to generate a probes.o then link that into postgres.\n> >\n> > I don't think we do that. I'll double check soon.\n> \n> We do that. (It's -G.)\n> \n> \n> Huh. I could've sworn we didn't. My mistake, it's there in \n> src/backend/Makefile .\n> \n> In that case I'll amend the patch to use semaphore guards.\n\nThis whole thread is now obviously moved to consideration for PG15, but \nI did add an open item about this particular issue \n(https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items, search for \n\"dtrace\"). So if you could produce a separate patch that adds the \n_ENABLED guards targeting PG14 (and PG13), that would be helpful.\n\n\n",
"msg_date": "Wed, 14 Apr 2021 15:20:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 4:46 PM Andres Freund <andres@anarazel.de> wrote:\n> I still don't like the two bytes, fwiw ;). Especially because it's 4\n> bytes due to padding right now.\n\nI'm not surprised by that disclosure. But I think it's entirely worth\nit. Making wait states visible in pg_stat_activity isn't the most\nuseful thing I've ever done to PostgreSQL, but it's far from the least\nuseful. If we can get the same benefit with less overhead, that's\ngreat.\n\n> I'd like to move the LWLock->waiters list to outside of the lwlock\n> itself - at most TotalProcs LWLocks can be waited for, so we don't need\n> millions of empty proclist_heads. That way we could also remove the\n> proclist indirection - which shows up a fair bit in contended workloads.\n>\n> And if we had a separate \"lwlocks being waited for\" structure, we could\n> also add more information to it if we wanted to...\n>\n> The difficulty of course is having space to indicate which of these\n> \"waiting for\" lists are being used - there's not enough space in ->state\n> right now to represent that. Two possibile approaches:\n>\n> - We could make it work if we restricted MAX_BACKENDS to be 2**14 - but\n> while I personally think that's a sane upper limit, I already had a\n> hard time getting consensus to lower the limit to 2^18-1.\n>\n> - Use a 64bit integer. Then we can easily fit MAX_BACKENDS lockers, as\n> well as an offset to one of MAX_BACKENDS \"wait lists\" into LWLock.\n\nI'd rather not further reduce MAX_BACKENDS. I still think some day\nwe're going to want to make that bigger again. Maybe not for a while,\nadmittedly. But, do you need to fit this into \"state\"? If you just\nreplaced \"waiters\" with a 32-bit integer, you'd save 4 bytes and have\nbits left over (and maybe restrict the tranche ID to 2^14 and squeeze\nthat in too, as you mention).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:27:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 10:42 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n> I'd really love it if a committer could add an explanatory comment or\n> two in the area though. I'm happy to draft a comment patch if anyone\n> agrees my suggestion is sensible. The key things I needed to know when\n> studying the code were:\n>\n> * A LWLock* is always part of a tranche, but locks within a given\n> tranche are not necessarily co-located in memory, cross referenced or\n> associated in any way.\n> * A LWLock tranche does not track how many LWLocks are in the tranche\n> or where they are in memory. It only groups up LWLocks into categories\n> and maps the tranche ID to a name.\n> * Not all LWLocks are part of the main LWLock array; others can be\n> embedded in shmem structs elsewhere, including in DSM segments.\n> * LWLocks in DSM segments may not have the same address between\n> different backends, because the DSM base address can vary, so a\n> LWLock* cannot be reliably compared between backends unless you know\n> it's in the main LWLock array or in static shmem.\n>\n> Having that info in lwlock.c near the tranche management code or the\n> tranche and main lwlock arrays would've been very handy.\n\nI'm willing to review a comment patch along those lines.\n\n> I'm actually inclined to revise the patch I sent in order to *remove*\n> the LWLock name from the tracepoint argument. At least for the\n> fast-path tracepoints on start-of-acquire and end-of-acquire. I think\n> it's probably OK to report it in the lock wait tracepoints, which is\n> where it's most important to have anyway. So the tracepoint will\n> always report the LWLock* and tranche ID, but it won't report the\n> tranche name for the fast-path. I'll add trace events for tranche ID\n> registration, so trace tools can either remember the tranche ID->name\n> mappings from there, or capture them from lock wait events and\n> remember them.\n>\n> Reasonable? 
That way we retain the most important trace functionality,\n> but we reduce the overheads.\n\nReducing the overheads is good, but I have no opinion on what's\nimportant for people doing tracing, because I am not one of those\npeople.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 10:28:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 14.04.21 15:20, Peter Eisentraut wrote:\n> On 12.04.21 07:46, Craig Ringer wrote:\n>> > To use systemtap semaphores (the _ENABLED macros) you need to run\n>> dtrace\n>> > -g to generate a probes.o then link that into postgres.\n>> >\n>> > I don't think we do that. I'll double check soon.\n>>\n>> We do that. (It's -G.)\n>>\n>>\n>> Huh. I could've sworn we didn't. My mistake, it's there in \n>> src/backend/Makefile .\n>>\n>> In that case I'll amend the patch to use semaphore guards.\n> \n> This whole thread is now obviously moved to consideration for PG15, but \n> I did add an open item about this particular issue \n> (https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items, search for \n> \"dtrace\"). So if you could produce a separate patch that adds the \n> _ENABLED guards targeting PG14 (and PG13), that would be helpful.\n\nHere is a proposed patch for this.",
"msg_date": "Thu, 29 Apr 2021 09:31:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Thu, 29 Apr 2021 at 15:31, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> > So if you could produce a separate patch that adds the\n> > _ENABLED guards targeting PG14 (and PG13), that would be helpful.\n>\n> Here is a proposed patch for this.\n\nLGTM.\n\nApplies and builds fine on master and (with default fuzz) on\nREL_13_STABLE. Works as expected.\n\nThis does increase the size of LWLockAcquire() etc slightly but since\nit skips these function calls, and the semaphores are easily\npredicted, I don't have any doubt it's a net win. +1 for merge.\n\n\n",
"msg_date": "Fri, 30 Apr 2021 11:22:55 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 14 Apr 2021, 22:29 Robert Haas, <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 13, 2021 at 10:42 PM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> > I'd really love it if a committer could add an explanatory comment or\n> > two in the area though. I'm happy to draft a comment patch if anyone\n> > agrees my suggestion is sensible. The key things I needed to know when\n> > studying the code were:\n> > [...]\n>\n> I'm willing to review a comment patch along those lines.\n>\n\nCool. I'll draft soon.\n\nI since noticed that some of the info is present, but it's in lwlock.h\nwhereas in Pg comment detail is more often than not in the .c file.\n\nI prefer it in headers myself anyway, since it's more available to tools\nlike doxygen. I'll add a few \"see lwlock.h\" hints, a short para about\nappropriate lwlock use in the .c into comment etc and post on a separate\nthread soon.\n\n\n> > I'm actually inclined to revise the patch I sent in order to *remove*\n> > the LWLock name from the tracepoint argument.\n>\n\nReducing the overheads is good, but I have no opinion on what's\n> important for people doing tracing, because I am not one of those\n> people.\n>\n\nTruthfully I'm not convinced anyone is \"those people\" right now. I don't\nthink anyone is likely to be making serious use of them due to their\nlimitations.\n\nCertainly that'll be the case for the txn ones which are almost totally\nuseless. They only track the localxid lifecycle, they don't track real txid\nallocation, WAL writing, commit (wal or shmem), etc.",
"msg_date": "Fri, 30 Apr 2021 11:23:56 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "> On Fri, Apr 30, 2021 at 11:23:56AM +0800, Craig Ringer wrote:\n> On Wed, 14 Apr 2021, 22:29 Robert Haas, <robertmhaas@gmail.com> wrote:\n> \n> > > I'm actually inclined to revise the patch I sent in order to *remove*\n> > > the LWLock name from the tracepoint argument.\n> \n> > Reducing the overheads is good, but I have no opinion on what's\n> > important for people doing tracing, because I am not one of those\n> > people.\n> >\n> \n> Truthfully I'm not convinced anyone is \"those people\" right now. I don't\n> think anyone is likely to be making serious use of them due to their\n> limitations.\n\nI would like to mention that tracepoints could be useful not only directly,\nthey also:\n\n* deliver an information about what is important enough to trace from the\n developers, who wrote the code, point of view.\n\n* declare more stable tracing points within the code, which are somewhat more\n reliable between the versions.\n\nE.g. writing bcc scripts one is also sort of limited in use of those\ntracepoints because of requirement to provide a specific pid, but still can get\nbetter understanding what to look at (maybe using other methods).\n\n\n",
"msg_date": "Sat, 1 May 2021 18:58:41 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 30.04.21 05:22, Craig Ringer wrote:\n> On Thu, 29 Apr 2021 at 15:31, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>> So if you could produce a separate patch that adds the\n>>> _ENABLED guards targeting PG14 (and PG13), that would be helpful.\n>>\n>> Here is a proposed patch for this.\n> \n> LGTM.\n> \n> Applies and builds fine on master and (with default fuzz) on\n> REL_13_STABLE. Works as expected.\n\ncommitted\n\n\n",
"msg_date": "Mon, 3 May 2021 21:06:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-03 21:06:30 +0200, Peter Eisentraut wrote:\n> On 30.04.21 05:22, Craig Ringer wrote:\n> > On Thu, 29 Apr 2021 at 15:31, Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > > > So if you could produce a separate patch that adds the\n> > > > _ENABLED guards targeting PG14 (and PG13), that would be helpful.\n> > > \n> > > Here is a proposed patch for this.\n> > \n> > LGTM.\n> > \n> > Applies and builds fine on master and (with default fuzz) on\n> > REL_13_STABLE. Works as expected.\n> \n> committed\n\nI'm now getting\n\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockAcquire’:\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1322:58: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1345:57: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1345 | TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1355:54: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1355 | TRACE_POSTGRESQL_LWLOCK_ACQUIRE(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockConditionalAcquire’:\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1407:64: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1407 | TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE_FAIL(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1415:59: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1415 | TRACE_POSTGRESQL_LWLOCK_CONDACQUIRE(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function 
‘LWLockAcquireOrWait’:\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1488:59: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1488 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1507:58: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1507 | TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1538:68: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1538 | TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT_FAIL(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1547:63: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1547 | TRACE_POSTGRESQL_LWLOCK_ACQUIRE_OR_WAIT(T_NAME(lock), mode);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockWaitForVar’:\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1708:66: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1708 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), LW_EXCLUSIVE);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1728:65: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1728 | TRACE_POSTGRESQL_LWLOCK_WAIT_DONE(T_NAME(lock), LW_EXCLUSIVE);\n | ^\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockRelease’:\n/home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1855:48: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n 1855 | TRACE_POSTGRESQL_LWLOCK_RELEASE(T_NAME(lock));\n\nIn a build without the trace stuff enabled.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 May 2021 15:15:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 5 May 2021, 06:15 Andres Freund, <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> warning: suggest braces around empty body in an ‘if’ statement\n> [-Wempty-body]\n> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n> | ^\n>\n\n\nOdd that I didn't get that.\n\nI'll send a patch to revise shortly.",
"msg_date": "Wed, 5 May 2021 09:15:11 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On Wed, 5 May 2021 at 09:15, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n\n>> warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n>> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n>> | ^\n>\n> Odd that I didn't get that.\n\nThis compiler complaint is not due to the _ENABLED() test as such.\nIt's due to the fact that we completely define out the\nTRACE_POSTGRESQL_ macros with src/backend/utils/Gen_dummy_probes.sed .\n\nWhile explicit braces could be added around each test, I suggest\nfixing Gen_dummy_probes.sed to emit the usual dummy statement instead.\nPatch attached.",
"msg_date": "Wed, 5 May 2021 12:20:04 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 05.05.21 00:15, Andres Freund wrote:\n> I'm now getting\n> \n> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockAcquire’:\n> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1322:58: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n> | ^\n\nFor clarification, -Wempty-body is not part of the default warnings, right?\n\nAnd even if I turn it on explicitly, I don't get this warning. I read \nsomething that newer compilers don't warn in cases of macro expansion.\n\nWhat compiler are you using in this situation?\n\n\n\n",
"msg_date": "Sat, 8 May 2021 17:04:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 05.05.21 00:15, Andres Freund wrote:\n>> I'm now getting\n>> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockAcquire’:\n>> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1322:58: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n>> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n>> | ^\n\n> What compiler are you using in this situation?\n\nAll of these buildfarm members are now showing this warning:\n\ncalliphoridae\tgcc (Debian 10.1.0-6) 10.1.0\nculicidae\tgcc (Debian 10.1.0-6) 10.1.0\nflaviventris\tgcc (Debian 20200124-1) 10.0.1 20200124 (experimental)\nfrancolin\tgcc (Debian 10.1.0-6) 10.1.0\npiculet\t\tgcc (Debian 10.1.0-6) 10.1.0\nrorqual\t\tgcc (Debian 10.1.0-6) 10.1.0\nserinus\t\tgcc (Debian 20200124-1) 10.0.1 20200124 (experimental)\nskink\t\tgcc (Debian 10.1.0-6) 10.1.0\n\nso there's your answer.\n\n(I wonder why flaviventris and serinus are still using an \"experimental\"\ncompiler version that is now behind mainstream.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 May 2021 13:13:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-08 13:13:47 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 05.05.21 00:15, Andres Freund wrote:\n> >> I'm now getting\n> >> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c: In function ‘LWLockAcquire’:\n> >> /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1322:58: warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n> >> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n> >> | ^\n> \n> > What compiler are you using in this situation?\n\ngcc - I think the warning is pulled in via -Wextra. I think it's\nsomething sensible to warn about, too easy to end up with misleading\nbehaviour when statement-like macros are defined empty.\n\n\n> All of these buildfarm members are now showing this warning:\n> \n> calliphoridae\tgcc (Debian 10.1.0-6) 10.1.0\n> culicidae\tgcc (Debian 10.1.0-6) 10.1.0\n> flaviventris\tgcc (Debian 20200124-1) 10.0.1 20200124 (experimental)\n> francolin\tgcc (Debian 10.1.0-6) 10.1.0\n> piculet\t\tgcc (Debian 10.1.0-6) 10.1.0\n> rorqual\t\tgcc (Debian 10.1.0-6) 10.1.0\n> serinus\t\tgcc (Debian 20200124-1) 10.0.1 20200124 (experimental)\n> skink\t\tgcc (Debian 10.1.0-6) 10.1.0\n\nI think those likely are all mine, so it's not too surprising. They all\nuse something like\nCFLAGS => '-Og -ggdb -g3 -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers -fno-omit-frame-pointer',\n\n\n> (I wonder why flaviventris and serinus are still using an \"experimental\"\n> compiler version that is now behind mainstream.)\n\nThe upgrade script didn't install the newer version it because it had to\nremove some conflicting packages... Should be fixed for runs starting\nnow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 May 2021 11:55:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-08 13:13:47 -0400, Tom Lane wrote:\n>> (I wonder why flaviventris and serinus are still using an \"experimental\"\n>> compiler version that is now behind mainstream.)\n\n> The upgrade script didn't install the newer version it because it had to\n> remove some conflicting packages... Should be fixed for runs starting\n> now.\n\nLooks like that didn't work ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 May 2021 19:51:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 05.05.21 06:20, Craig Ringer wrote:\n> On Wed, 5 May 2021 at 09:15, Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n> \n>>> warning: suggest braces around empty body in an ‘if’ statement [-Wempty-body]\n>>> 1322 | TRACE_POSTGRESQL_LWLOCK_WAIT_START(T_NAME(lock), mode);\n>>> | ^\n>>\n>> Odd that I didn't get that.\n> \n> This compiler complaint is not due to the _ENABLED() test as such.\n> It's due to the fact that we completely define out the\n> TRACE_POSTGRESQL_ macros with src/backend/utils/Gen_dummy_probes.sed .\n> \n> While explicit braces could be added around each test, I suggest\n> fixing Gen_dummy_probes.sed to emit the usual dummy statement instead.\n> Patch attached.\n\nCommitted, with the Gen_dummy_probes.pl change added.\n\n\n",
"msg_date": "Mon, 10 May 2021 13:59:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "On 2021-05-09 19:51:13 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-05-08 13:13:47 -0400, Tom Lane wrote:\n> >> (I wonder why flaviventris and serinus are still using an \"experimental\"\n> >> compiler version that is now behind mainstream.)\n> \n> > The upgrade script didn't install the newer version it because it had to\n> > remove some conflicting packages... Should be fixed for runs starting\n> > now.\n> \n> Looks like that didn't work ...\n\nLooks like it did, but turned out to have some unintended side-effects\n:(.\n\nThe snapshot builds are now new:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=flaviventris&dt=2021-05-10%2015%3A43%3A56&stg=configure\nconfigure:3966: ccache /usr/lib/gcc-snapshot/bin/gcc --version >&5\ngcc (Debian 20210421-1) 11.0.1 20210421 (prerelease) [gcc-11 revision fbb7739892e:d13ce34bd01:3756d99dab6a268d0d8a17583980a86f23f0595a]\n\nBut the aforementioned dependencies that needed to remove broke the\ninstalled old versions of gcc/clang.\n\nI started to build the old versions of llvm manually, but that then hits\nthe issue that at least 3.9 doesn't build with halfway modern versions\nof gcc/clang. So I gotta do it stepwise (i.e. go backwards, build llvm\nn-2 with n-1), will take a bit of time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 May 2021 09:08:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks like it did, but turned out to have some unintended side-effects\n> :(.\n> The snapshot builds are now new:\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=flaviventris&dt=2021-05-10%2015%3A43%3A56&stg=configure\n> configure:3966: ccache /usr/lib/gcc-snapshot/bin/gcc --version >&5\n> gcc (Debian 20210421-1) 11.0.1 20210421 (prerelease) [gcc-11 revision fbb7739892e:d13ce34bd01:3756d99dab6a268d0d8a17583980a86f23f0595a]\n> But the aforementioned dependencies that needed to remove broke the\n> installed old versions of gcc/clang.\n> I started to build the old versions of llvm manually, but that then hits\n> the issue that at least 3.9 doesn't build with halfway modern versions\n> of gcc/clang. So I gotta do it stepwise (i.e. go backwards, build llvm\n> n-2 with n-1), will take a bit of time.\n\nUgh. Memo to self: don't rag on other peoples' buildfarm configurations\nright before a release deadline :-(. Sorry to cause you trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 May 2021 12:14:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-10 12:14:46 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Looks like it did, but turned out to have some unintended side-effects\n> > :(.\n> > The snapshot builds are now new:\n> > https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=flaviventris&dt=2021-05-10%2015%3A43%3A56&stg=configure\n> > configure:3966: ccache /usr/lib/gcc-snapshot/bin/gcc --version >&5\n> > gcc (Debian 20210421-1) 11.0.1 20210421 (prerelease) [gcc-11 revision fbb7739892e:d13ce34bd01:3756d99dab6a268d0d8a17583980a86f23f0595a]\n> > But the aforementioned dependencies that needed to remove broke the\n> > installed old versions of gcc/clang.\n> > I started to build the old versions of llvm manually, but that then hits\n> > the issue that at least 3.9 doesn't build with halfway modern versions\n> > of gcc/clang. So I gotta do it stepwise (i.e. go backwards, build llvm\n> > n-2 with n-1), will take a bit of time.\n> \n> Ugh. Memo to self: don't rag on other peoples' buildfarm configurations\n> right before a release deadline :-(. Sorry to cause you trouble.\n\nNo worries - I knew that I'd have to do this at some point, even though\nI hadn't planned to do that today... I should have all of them green\nbefore end of today.\n\nI found that I actually can build LLVM 3.9 directly, as clang-6 can\nstill build it directly (wheras the oldest gcc still installed can't\nbuild it directly). So it's a bit less painful than I thought at first\n\nThe 3.9 instances (phycodurus, dragonet) tests are running right now,\nand I'm fairly sure they'll pass (most of a --noreport --nostatus run\npassed). Going forward building LLVM 4,5,6 now - the later versions take\nlonger...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 May 2021 09:46:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-10 09:46:02 -0700, Andres Freund wrote:\n> No worries - I knew that I'd have to do this at some point, even though\n> I hadn't planned to do that today... I should have all of them green\n> before end of today.\n> \n> I found that I actually can build LLVM 3.9 directly, as clang-6 can\n> still build it directly (wheras the oldest gcc still installed can't\n> build it directly). So it's a bit less painful than I thought at first\n> \n> The 3.9 instances (phycodurus, dragonet) tests are running right now,\n> and I'm fairly sure they'll pass (most of a --noreport --nostatus run\n> passed). Going forward building LLVM 4,5,6 now - the later versions take\n> longer...\n\nLooks like it's all clear now. All but the results for 11 had cleared up\nuntil yesterday evening, and the rest came in ok over night.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 May 2021 10:35:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Identify LWLocks in tracepoints"
}
] |
[
{
"msg_contents": "Hi,\nw.r.t. the code in BufferAlloc(), the pointers are compared.\n\nShould we instead compare the tranche Id of the two LWLock ?\n\nCheers",
"msg_date": "Fri, 18 Dec 2020 23:53:47 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Double partition lock in bufmgr"
},
{
"msg_contents": "On 19.12.2020 10:53, Zhihong Yu wrote:\n> Hi,\n> w.r.t. the code in BufferAlloc(), the pointers are compared.\n>\n> Should we instead compare the tranche Id of the two LWLock ?\n>\n> Cheers\n\nAs far as LWlocks are stored in the array, comparing indexes in this \narray (tranche Id) is equivalent to comparing element's pointers.\nSo I do not see any problem here.\n\nJust as experiment I tried a version of BufferAlloc without double \nlocking (patch is attached).\nI am not absolutely sure that my patch is correct: my main intention was \nto estimate influence of this buffer reassignment on performance.\nI just run standard pgbench for database with scale 100 and default \nshared buffers size (256Mb). So there are should be a lot of page \nreplacements.\nI do not see any noticeable difference:\n\nvanilla: 13087.596845\npatch: 13184.442130",
"msg_date": "Sat, 19 Dec 2020 15:50:30 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Double partition lock in bufmgr"
},
{
"msg_contents": "Hi Konstantin,\n\nOn Sat, Dec 19, 2020 at 9:50 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n>\n>\n> On 19.12.2020 10:53, Zhihong Yu wrote:\n> > Hi,\n> > w.r.t. the code in BufferAlloc(), the pointers are compared.\n> >\n> > Should we instead compare the tranche Id of the two LWLock ?\n> >\n> > Cheers\n>\n> As far as LWlocks are stored in the array, comparing indexes in this\n> array (tranche Id) is equivalent to comparing element's pointers.\n> So I do not see any problem here.\n>\n> Just as experiment I tried a version of BufferAlloc without double\n> locking (patch is attached).\n> I am not absolutely sure that my patch is correct: my main intention was\n> to estimate influence of this buffer reassignment on performance.\n> I just run standard pgbench for database with scale 100 and default\n> shared buffers size (256Mb). So there are should be a lot of page\n> replacements.\n> I do not see any noticeable difference:\n>\n> vanilla: 13087.596845\n> patch: 13184.442130\n>\n\nYou sent in your patch, bufmgr.patch to pgsql-hackers on Dec 19, but\nyou did not post it to the next CommitFest[1]. If this was\nintentional, then you need to take no action. However, if you want\nyour patch to be reviewed as part of the upcoming CommitFest, then you\nneed to add it yourself before 2021-01-01 AoE[2]. Thanks for your\ncontributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 29 Dec 2020 16:59:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Double partition lock in bufmgr"
}
] |
[
{
"msg_contents": "Hi\nHere is my workaround (from unit_tests.dll DLL_PROCESS_DETACH):\n\n //3. Destroy LIBPQ!static pthread_mutex_t singlethread_lock\n- 327 HMODULE hLeakedLibPQ = ::GetModuleHandleA(\"libpq.dll\"); //libpq.dll v13.0.1.20323 (https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_13_00_0000.zip)\n- 328 if (hLeakedLibPQ) {\n- 329 void **singlethread_lock_ptr = (void **)(((BYTE *)hLeakedLibPQ) +\n- 330 #ifdef _WIN64\n- 331 0x484b8\n- 332 #else\n- 333 0x3F26C\n- 334 #endif //_WIN64\n- 335 );\n- 336 if (*singlethread_lock_ptr) {\n- 337 DeleteCriticalSection((LPCRITICAL_SECTION)(*singlethread_lock_ptr));\n- 338 typedef void(*pthread_mutex_destroy)(void *mutex);\n- 339 pthread_mutex_destroy freemtx = (pthread_mutex_destroy)::GetProcAddress(hLeakedLibPQ, \"PQfreemem\");\n- 340 assert(freemtx != NULL);\n- 341 if (freemtx) freemtx(*singlethread_lock_ptr);\n- 342 }\n- 343 }",
"msg_date": "Sat, 19 Dec 2020 17:40:31 +0000",
"msg_from": "=?koi8-r?B?8MXS28nOIODSycog8MXU0s/Xyd4=?= <pershin@prosoftsystems.ru>",
"msg_from_op": true,
"msg_subject": "RE: libpq @windows : leaked singlethread_lock makes AppVerifier unhappy"
}
] |
[
{
"msg_contents": "2017-03-24 [7b504eb28] Implement multivariate n-distinct coefficients\n2017-04-05 [2686ee1b7] Collect and use multi-column dependency stats\n2017-05-12 [bc085205c] Change CREATE STATISTICS syntax\n\nThe existing notes say:\n|Add multi-column optimizer statistics to compute the correlation ratio and number of distinct values (Tomas Vondra, David Rowley, Álvaro Herrera)\n|New commands are CREATE STATISTICS, ALTER STATISTICS, and DROP STATISTICS.\n|This feature is helpful in estimating query memory usage and when combining the statistics from individual columns.\n\n\"correlation ratio\" is referring to stxkind=d (dependencies), right ? That's\nvery unclear.\n\n\"helpful in estimating query memory usage\": I guess it means that this allows\nthe planner to correctly account for large vs small number of GROUP BY values,\nbut it sounds more like it's going to help a user to estimate memory use.\n\n\"when combining the statistics from individual columns.\" this is referring to\nstxkind=d, handling correlated/redundant clauses, but it'd be hard for a user\nto know that.\n\nAlso, maybe it should say \"combining stats from columns OF THE SAME TABLE\".\n\nSo I propose:\n|Allow creation of multi-column statistics objects, for computing the\n|dependencies between columns and number of distinct values of combinations of columns\n|(Tomas Vondra, |David Rowley, Álvaro Herrera)\n|The new commands are CREATE STATISTICS, ALTER STATISTICS, and DROP STATISTICS.\n|Improved statistics allow the planner to generate better query plans with more accurate\n|estimates of the row count and memory usage when grouping by multiple\n|columns, and more accurate estimates of the row count if WHERE clauses apply\n|to multiple columns and values of some columns are correlated with values of\n|other columns.\n\n\n",
"msg_date": "Sat, 19 Dec 2020 13:39:27 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "v10 release notes for extended stats"
},
{
"msg_contents": "On Sat, Dec 19, 2020 at 01:39:27PM -0600, Justin Pryzby wrote:\n> 2017-03-24 [7b504eb28] Implement multivariate n-distinct coefficients\n> 2017-04-05 [2686ee1b7] Collect and use multi-column dependency stats\n> 2017-05-12 [bc085205c] Change CREATE STATISTICS syntax\n> \n> The existing notes say:\n> |Add multi-column optimizer statistics to compute the correlation ratio and number of distinct values (Tomas Vondra, David Rowley, Álvaro Herrera)\n> |New commands are CREATE STATISTICS, ALTER STATISTICS, and DROP STATISTICS.\n> |This feature is helpful in estimating query memory usage and when combining the statistics from individual columns.\n> \n> \"correlation ratio\" is referring to stxkind=d (dependencies), right ? That's\n> very unclear.\n> \n> \"helpful in estimating query memory usage\": I guess it means that this allows\n> the planner to correctly account for large vs small number of GROUP BY values,\n> but it sounds more like it's going to help a user to estimate memory use.\n> \n> \"when combining the statistics from individual columns.\" this is referring to\n> stxkind=d, handling correlated/redundant clauses, but it'd be hard for a user\n> to know that.\n> \n> Also, maybe it should say \"combining stats from columns OF THE SAME TABLE\".\n> \n> So I propose:\n> |Allow creation of multi-column statistics objects, for computing the\n> |dependencies between columns and number of distinct values of combinations of columns\n> |(Tomas Vondra, |David Rowley, Álvaro Herrera)\n> |The new commands are CREATE STATISTICS, ALTER STATISTICS, and DROP STATISTICS.\n> |Improved statistics allow the planner to generate better query plans with more accurate\n> |estimates of the row count and memory usage when grouping by multiple\n> |columns, and more accurate estimates of the row count if WHERE clauses apply\n> |to multiple columns and values of some columns are correlated with values of\n> |other columns.\n\nUh, at the time, that was the best text we could come up with. We don't\nusually go back to update them unless there is a very good reason, and I\nam not seeing that above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Sat, 19 Dec 2020 15:11:05 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: v10 release notes for extended stats"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Dec 19, 2020 at 01:39:27PM -0600, Justin Pryzby wrote:\n>> So I propose:\n\n> Uh, at the time, that was the best text we could come up with. We don't\n> usually go back to update them unless there is a very good reason, and I\n> am not seeing that above.\n\nYeah, it's a couple years too late to be worth spending effort on\nimproving the v10 notes, I fear. If there's text in the main\ndocumentation that could be improved, that's a different story.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 19 Dec 2020 15:39:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: v10 release notes for extended stats"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI want to be able to do synchronous vectored file I/O, so I made\nwrapper macros for preadv() and pwritev() with fallbacks for systems\nthat don't have them. Following the precedent of the pg_pread() and\npg_pwrite() macros, the \"pg_\" prefix reflects a subtle contract\nchange: the fallback paths might have the side effect of changing the\nfile position.\n\nThey're non-standard system calls, but the BSDs and Linux have had\nthem for a long time, and for other systems we can use POSIX\nreadv()/writev() with an additional lseek(). The worst case is\nWindows (and maybe our favourite antique Unix build farm animal?)\nwhich has none of those things, so there is a further fallback to a\nloop. Windows does have ReadFileScatter() and WriteFileGather(), but\nthose only work for overlapped (= asynchronous), unbuffered, page\naligned access. They'll very likely be useful for native AIO+DIO\nsupport in the future, but don't fit the bill here.\n\nThis is part of a project to consolidate and offload I/O (about which\nmore soon), but seemed isolated enough to post separately and I guess\nit could be independently useful.",
"msg_date": "Sun, 20 Dec 2020 11:38:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I want to be able to do synchronous vectored file I/O, so I made\n> wrapper macros for preadv() and pwritev() with fallbacks for systems\n> that don't have them. Following the precedent of the pg_pread() and\n> pg_pwrite() macros, the \"pg_\" prefix reflects a subtle contract\n> change: the fallback paths might have the side effect of changing the\n> file position.\n\nIn a quick look, seems OK with some nits:\n\n1. port.h cannot assume that <limits.h> has already been included;\nnor do I want to fix that by including <limits.h> there. Do we\nreally need to define a fallback value of IOV_MAX? If so,\nmaybe the answer is to put the replacement struct iovec and\nIOV_MAX in some new header.\n\n2. I'm not really that happy about loading <sys/uio.h> into\nevery compilation we do, which would be another reason for a\nnew specialized header that either includes <sys/uio.h> or\nprovides fallback definitions.\n\n3. The patch as given won't prove anything except that the code\ncompiles. Is it worth fixing at least one code path to make\nuse of pg_preadv and pg_pwritev, so we can make sure this code\nis tested before there's layers of other new code on top?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 19 Dec 2020 18:34:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I want to be able to do synchronous vectored file I/O, so I made\n> > wrapper macros for preadv() and pwritev() with fallbacks for systems\n> > that don't have them. Following the precedent of the pg_pread() and\n> > pg_pwrite() macros, the \"pg_\" prefix reflects a subtle contract\n> > change: the fallback paths might have the side effect of changing the\n> > file position.\n>\n> In a quick look, seems OK with some nits:\n\nThanks for looking!\n\n> 1. port.h cannot assume that <limits.h> has already been included;\n> nor do I want to fix that by including <limits.h> there. Do we\n> really need to define a fallback value of IOV_MAX? If so,\n> maybe the answer is to put the replacement struct iovec and\n> IOV_MAX in some new header.\n\nOk, I moved all this stuff into port/pg_uio.h.\n\n> 2. I'm not really that happy about loading <sys/uio.h> into\n> every compilation we do, which would be another reason for a\n> new specialized header that either includes <sys/uio.h> or\n> provides fallback definitions.\n\nAck.\n\n> 3. The patch as given won't prove anything except that the code\n> compiles. Is it worth fixing at least one code path to make\n> use of pg_preadv and pg_pwritev, so we can make sure this code\n> is tested before there's layers of other new code on top?\n\nOK, here's a patch to zero-fill fresh WAL segments with pwritev().\n\nI'm drawing a blank on trivial candidate uses for preadv(), without\ninfrastructure from later patches.",
"msg_date": "Sun, 20 Dec 2020 16:26:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> OK, here's a patch to zero-fill fresh WAL segments with pwritev().\n> I'm drawing a blank on trivial candidate uses for preadv(), without\n> infrastructure from later patches.\n\nThis looks OK to me. I tried it on prairiedog (has writev and\npwrite but not pwritev) as well as gaur (has only writev).\nThey seem happy.\n\nOne minor thought is that in\n\n+\t\tstruct iovec iov[Min(IOV_MAX, 1024)];\t/* cap stack space */\n\nit seems like pretty much every use of IOV_MAX would want some\nsimilar cap. Should we centralize that idea with, say,\n\n#define PG_IOV_MAX Min(IOV_MAX, 1024)\n\n? Or will the plausible cap vary across uses?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Dec 2020 02:07:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Sun, Dec 20, 2020 at 8:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One minor thought is that in\n>\n> + struct iovec iov[Min(IOV_MAX, 1024)]; /* cap stack space */\n>\n> it seems like pretty much every use of IOV_MAX would want some\n> similar cap. Should we centralize that idea with, say,\n>\n> #define PG_IOV_MAX Min(IOV_MAX, 1024)\n>\n> ? Or will the plausible cap vary across uses?\n\nHmm. For the real intended user of this, namely worker processes that\nsimulate AIO when native AIO isn't available, higher level code will\nlimit the iov count to much smaller numbers anyway. It wants to try\nto stay under typical device limits for vectored I/O, because split\nrequests would confound attempts to model and limit queue depth and\ncontrol latency. In Andres's AIO prototype he currently has a macro\nPGAIO_MAX_COMBINE set to 16 (meaning approximately 16 data block or\nwal reads/writes = 128KB worth of scatter/gather per I/O request); I\nguess it should really be Min(IOV_MAX, <something>), but I don't\ncurrently have an opinion on the <something>, except that it should\nsurely be closer to 16 than 1024 (for example\n/sys/block/nvme0n1/queue/max_segments is 33 here). I mention all this\nto explain that I don't think the code in patch 0002 is going to turn\nout to be very typical: it's trying to minimise system calls by\nstaying under an API limit (though I cap it for allocation sanity),\nwhereas more typical code probably wants to stay under a device limit,\nso I don't immediately have another use for eg PG_IOV_MAX.\n\n\n",
"msg_date": "Mon, 21 Dec 2020 00:12:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-20 16:26:42 +1300, Thomas Munro wrote:\n> > 1. port.h cannot assume that <limits.h> has already been included;\n> > nor do I want to fix that by including <limits.h> there. Do we\n> > really need to define a fallback value of IOV_MAX? If so,\n> > maybe the answer is to put the replacement struct iovec and\n> > IOV_MAX in some new header.\n> \n> Ok, I moved all this stuff into port/pg_uio.h.\n\nCan we come up with a better name than 'uio'? I find that a not exactly\nmeaningful name.\n\nOr perhaps we could just leave the functions in port/port.h, but extract\nthe value of the define at configure time? That way only pread.c /\npwrite.c would need it.\n\n\n> > 3. The patch as given won't prove anything except that the code\n> > compiles. Is it worth fixing at least one code path to make\n> > use of pg_preadv and pg_pwritev, so we can make sure this code\n> > is tested before there's layers of other new code on top?\n> \n> OK, here's a patch to zero-fill fresh WAL segments with pwritev().\n\nI think that's a good idea. However, I see two small issues: 1) If we\nwrite larger amounts at once, we need to handle partial writes. That's a\nlarge enough amount of IO that it's much more likely to hit a memory\nshortage or such in the kernel - we had to do that a while a go for WAL\nwrites (which can also be larger), if memory serves.\n\nPerhaps we should have pg_pwritev/readv unconditionally go through\npwrite.c/pread.c and add support for \"continuing\" partial writes/reads\nin one central place?\n\n\n> I'm drawing a blank on trivial candidate uses for preadv(), without\n> infrastructure from later patches.\n\nCan't immediately think of something either.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Dec 2020 14:40:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "\n> > I'm drawing a blank on trivial candidate uses for preadv(), without\n> > infrastructure from later patches.\n> \n> Can't immediately think of something either.\n\nThis might be not that trivial , but maybe acquire_sample_rows() from analyze.c ?\n\nPlease note however there's patch https://www.postgresql.org/message-id/20201109180644.GJ16415%40tamriel.snowman.net ( https://commitfest.postgresql.org/30/2799/ ) for adding fadvise, but maybe those two could be even combined so you would be doing e.g. 16x fadvise() and then grab 8 pages in one preadv() call ? I'm find unlikely however that preadv give any additional performance benefit there after having fadvise, but clearly a potential place to test.\n\n-J.\n\n\n",
"msg_date": "Mon, 21 Dec 2020 07:25:50 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 8:25 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> > > I'm drawing a blank on trivial candidate uses for preadv(), without\n> > > infrastructure from later patches.\n> >\n> > Can't immediately think of something either.\n>\n> This might be not that trivial , but maybe acquire_sample_rows() from analyze.c ?\n>\n> Please note however there's patch https://www.postgresql.org/message-id/20201109180644.GJ16415%40tamriel.snowman.net ( https://commitfest.postgresql.org/30/2799/ ) for adding fadvise, but maybe those two could be even combined so you would be doing e.g. 16x fadvise() and then grab 8 pages in one preadv() call ? I'm find unlikely however that preadv give any additional performance benefit there after having fadvise, but clearly a potential place to test.\n\nOh, interesting, that looks like another test case to study with the\nAIO patch set, but I don't think it's worth trying to do a\nsimpler/half-baked version in the meantime. (Since that ANALYZE patch\nuses PrefetchBuffer() it should automatically benefit: the\nposix_fadvise() calls will be replaced with consolidated preadv()\ncalls in a worker process or native AIO equivalent so that system\ncalls are mostly gone from the initiating process, and by the time you\ntry to access the buffer it'll hopefully see that it's finished\nwithout any further system calls. Refinements are possible though,\nlike making use of recent_buffer to avoid double-lookup, and\ntuning/optimisation for how often IOs should be consolidated and\nsubmitted.)\n\n\n",
"msg_date": "Tue, 22 Dec 2020 10:31:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-12-20 16:26:42 +1300, Thomas Munro wrote:\n> > > 1. port.h cannot assume that <limits.h> has already been included;\n> > > nor do I want to fix that by including <limits.h> there. Do we\n> > > really need to define a fallback value of IOV_MAX? If so,\n> > > maybe the answer is to put the replacement struct iovec and\n> > > IOV_MAX in some new header.\n> >\n> > Ok, I moved all this stuff into port/pg_uio.h.\n>\n> Can we come up with a better name than 'uio'? I find that a not exactly\n> meaningful name.\n\nOk, let's try port/pg_iovec.h.\n\n> Or perhaps we could just leave the functions in port/port.h, but extract\n> the value of the define at configure time? That way only pread.c /\n> pwrite.c would need it.\n\nThat won't work for the struct definition, so client code would need\nto remember to do:\n\n#ifdef HAVE_SYS_UIO_H\n#include <sys/uio.h>\n#endif\n\n... which is a little tedious, or port.h would need to do that and\nincur an overhead in every translation unit, which Tom objected to.\nThat's why I liked the separate header idea.\n\n> > > 3. The patch as given won't prove anything except that the code\n> > > compiles. Is it worth fixing at least one code path to make\n> > > use of pg_preadv and pg_pwritev, so we can make sure this code\n> > > is tested before there's layers of other new code on top?\n> >\n> > OK, here's a patch to zero-fill fresh WAL segments with pwritev().\n>\n> I think that's a good idea. However, I see two small issues: 1) If we\n> write larger amounts at once, we need to handle partial writes. That's a\n> large enough amount of IO that it's much more likely to hit a memory\n> shortage or such in the kernel - we had to do that a while a go for WAL\n> writes (which can also be larger), if memory serves.\n>\n> Perhaps we should have pg_pwritev/readv unconditionally go through\n> pwrite.c/pread.c and add support for \"continuing\" partial writes/reads\n> in one central place?\n\nOk, here's a new version with the following changes:\n\n1. Define PG_IOV_MAX, a reasonable size for local variables, not\nlarger than IOV_MAX.\n2 Use 32 rather than 1024, based on off-list complaint about 1024\npotentially swamping the IO system unfairly.\n3. Add a wrapper pg_pwritev_retry() to retry automatically on short\nwrites. (I didn't write pg_preadv_retry(), because I don't currently\nneed it for anything so I didn't want to have to think about EOF vs\nshort-reads-for-implementation-reasons.)\n4. I considered whether pg_pwrite() should have built-in retry\ninstead of a separate wrapper, but I thought of an argument against\nhiding the \"raw\" version: the AIO patch set already understands short\nreads/writes and knows how to retry at a higher level, as that's\nneeded for native AIO too, so I think it makes sense to be able to\nkeep things the same and not solve the same problem twice. A counter\nargument would be that you could get the retry underway sooner with a\ntight loop, but I'm not expecting this to be common.\n\n> > I'm drawing a blank on trivial candidate uses for preadv(), without\n> > infrastructure from later patches.\n>\n> Can't immediately think of something either.\n\nOne idea I had for the future is for xlogreader.c to read the WAL into\na large multi-page circular buffer, which could wrap around using a\npair of iovecs, but that requires lots more work .",
"msg_date": "Wed, 23 Dec 2020 00:06:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 12:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Dec 21, 2020 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > Can we come up with a better name than 'uio'? I find that a not exactly\n> > meaningful name.\n>\n> Ok, let's try port/pg_iovec.h.\n\nI pushed it with that name, and a couple more cosmetic changes. I'll\nkeep an eye on the build farm.\n\n\n",
"msg_date": "Mon, 11 Jan 2021 15:34:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 3:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Dec 23, 2020 at 12:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Mon, Dec 21, 2020 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Can we come up with a better name than 'uio'? I find that a not exactly\n> > > meaningful name.\n> >\n> > Ok, let's try port/pg_iovec.h.\n>\n> I pushed it with that name, and a couple more cosmetic changes. I'll\n> keep an eye on the build farm.\n\nSince only sifaka has managed to return a result so far (nice CPU), I\nhad plenty of time to notice that macOS Big Sur has introduced\npreadv/pwritev. They were missing on Catalina[1].\n\n[1] https://cirrus-ci.com/task/6002157537198080\n\n\n",
"msg_date": "Mon, 11 Jan 2021 15:59:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Mon, Jan 11, 2021 at 3:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Jan 11, 2021 at 3:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed it with that name, and a couple more cosmetic changes. I'll\n> > keep an eye on the build farm.\n>\n> Since only sifaka has managed to return a result so far (nice CPU), I\n> had plenty of time to notice that macOS Big Sur has introduced\n> preadv/pwritev. They were missing on Catalina[1].\n\nThe rest of buildfarm was OK with it too, but I learned of a small\nproblem through CI testing of another patch: it's not OK for\nsrc/port/pwrite.c to do this:\n\n if (part > 0)\n elog(ERROR, \"unexpectedly wrote more than requested\");\n\n... because now when I try to use pg_pwrite() in pg_test_fsync,\nWindows fails to link:\n\nlibpgport.lib(pwrite.obj) : error LNK2019: unresolved external symbol\nerrstart referenced in function pg_pwritev_with_retry\n[C:\\projects\\postgresql\\pg_test_fsync.vcxproj]\n\nI'll go and replace that with an assertion.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 12:40:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 11.01.2021 05:59, Thomas Munro wrote:\n> Since only sifaka has managed to return a result so far (nice CPU), I\n> had plenty of time to notice that macOS Big Sur has introduced\n> preadv/pwritev. They were missing on Catalina[1].\n> \n> [1] https://cirrus-ci.com/task/6002157537198080\n\nHi, Thomas!\n\nIndeed, pwritev is not available on macOS Catalina. So I get compiler \nwarnings about that:\n\n/Users/shinderuk/src/pgwork/devel/build/../src/port/pwrite.c:117:10: \nwarning: 'pwritev' is only available on macOS 11.0 or newer \n[-Wunguarded-availability-new]\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n/Users/shinderuk/src/pgwork/devel/build/../src/include/port/pg_iovec.h:49:20: \nnote: expanded from macro 'pg_pwritev'\n#define pg_pwritev pwritev\n ^~~~~~~\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9: \nnote: 'pwritev' has been marked as being introduced in macOS 11.0 here, \nbut the deployment target is macOS\n 10.15.0\nssize_t pwritev(int, const struct iovec *, int, off_t) \n__DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0), \nwatchos(7.0), tvos(14.0));\n ^\n/Users/shinderuk/src/pgwork/devel/build/../src/port/pwrite.c:117:10: \nnote: enclose 'pwritev' in a __builtin_available check to silence this \nwarning\n part = pg_pwritev(fd, iov, iovcnt, offset);\n ^~~~~~~~~~\n/Users/shinderuk/src/pgwork/devel/build/../src/include/port/pg_iovec.h:49:20: \nnote: expanded from macro 'pg_pwritev'\n#define pg_pwritev pwritev\n ^~~~~~~\n1 warning generated.\n(... several more warnings ...)\n\n\nAnd initdb fails:\n\nrunning bootstrap script ... dyld: lazy symbol binding failed: Symbol \nnot found: _pwritev\n Referenced from: /Users/shinderuk/src/pgwork/devel/install/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\ndyld: Symbol not found: _pwritev\n Referenced from: /Users/shinderuk/src/pgwork/devel/install/bin/postgres\n Expected in: /usr/lib/libSystem.B.dylib\n\n\nRegards.\n\n-- \nSergey Shinderuk\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 13 Jan 2021 12:40:06 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 10:40 PM Sergey Shinderuk\n<s.shinderuk@postgrespro.ru> wrote:\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9:\n> note: 'pwritev' has been marked as being introduced in macOS 11.0 here,\n> but the deployment target is macOS\n> 10.15.0\n> ssize_t pwritev(int, const struct iovec *, int, off_t)\n> __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\n> watchos(7.0), tvos(14.0));\n> ^\n\nHrm... So why did \"configure\" think you have pwritev, then? It seems\nlike you must have been using different compilers or options at\nconfigure time and compile time, no?\n\n\n",
"msg_date": "Wed, 13 Jan 2021 22:56:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 13.01.2021 12:56, Thomas Munro wrote:\n> On Wed, Jan 13, 2021 at 10:40 PM Sergey Shinderuk\n> <s.shinderuk@postgrespro.ru> wrote:\n>> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk/usr/include/sys/uio.h:104:9:\n>> note: 'pwritev' has been marked as being introduced in macOS 11.0 here,\n>> but the deployment target is macOS\n>> 10.15.0\n>> ssize_t pwritev(int, const struct iovec *, int, off_t)\n>> __DARWIN_NOCANCEL(pwritev) __API_AVAILABLE(macos(11.0), ios(14.0),\n>> watchos(7.0), tvos(14.0));\n>> ^\n> \n> Hrm... So why did \"configure\" think you have pwritev, then? It seems\n> like you must have been using different compilers or options at\n> configure time and compile time, no?\n> \n\nNo, I've just rerun configure from a clean checkout without any options. \nIt does think that pwritev is available. I'll try to figure this out \nlater and come back to you. Thanks.\n\n\n",
"msg_date": "Wed, 13 Jan 2021 13:17:36 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> On 13.01.2021 12:56, Thomas Munro wrote:\n>> On Wed, Jan 13, 2021 at 10:40 PM Sergey Shinderuk\n>> <s.shinderuk@postgrespro.ru> wrote:\n>>> note: 'pwritev' has been marked as being introduced in macOS 11.0 here,\n>>> but the deployment target is macOS 10.15.0\n\n>> Hrm... So why did \"configure\" think you have pwritev, then? It seems\n>> like you must have been using different compilers or options at\n>> configure time and compile time, no?\n\n> No, i've just rerun configure from clean checkout without any options. \n> It does think that pwritev is available. I'll try to figure this out \n> later and come back to you. Thanks.\n\nThe symptoms sound consistent with using bleeding-edge Xcode on a\nCatalina machine ... please report exact OS and Xcode versions.\n\nI have a different complaint, using Big Sur and Xcode 12.3:\n\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport.a(pread.o) has no symbols\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport_shlib.a(pread_shlib.o) has no symbols\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport_srv.a(pread_srv.o) has no symbols\n\nLooks like we need to be more careful about not including pread.c\nin the build unless it actually has code to contribute.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 11:13:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "I wrote:\n> Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n>>>> note: 'pwritev' has been marked as being introduced in macOS 11.0 here,\n>>>> but the deployment target is macOS 10.15.0\n\n> The symptoms sound consistent with using bleeding-edge Xcode on a\n> Catalina machine ... please report exact OS and Xcode versions.\n\nI can reproduce these warnings on Big Sur + Xcode 12.3 by doing\n\nexport MACOSX_DEPLOYMENT_TARGET=10.15\n\nbefore building; however the executable runs anyway, which I guess\nis unsurprising. AFAICS from config.log, configure has no idea\nthat anything is wrong.\n\n(BTW, at least the rather-old version of ccache I'm using does not\nseem to realize that that environment variable is significant;\nI had to clear ~/.ccache to get consistent results.)\n\nWe've had issues before with weird results from Xcode versions\nnewer than the underlying OS. In the past we've been able to\nwork around that, but I'm not sure that I see a way here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 12:23:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Hmmm ... I can further report that on Catalina + Xcode 12.0,\neverything seems fine. configure correctly detects that preadv\nand pwritev aren't there:\n\nconfigure:15161: checking for preadv\nconfigure:15161: ccache gcc -o conftest -Wall -Wmissing-prototypes -Wpointer-ar\\\nith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-a\\\nttribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-lin\\\ne-argument -g -O2 -isysroot /Applications/Xcode.app/Contents/Developer/Platform\\\ns/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -isysroot /Applications/Xcod\\\ne.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.s\\\ndk conftest.c -lz -lm >&5\nUndefined symbols for architecture x86_64:\n \"_preadv\", referenced from:\n _main in conftest-fca7e9.o\nld: symbol(s) not found for architecture x86_64\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\nconfigure:15161: $? = 1\n\nSo I'm a little confused as to why this test is failing to fail\nwith (I assume) newer Xcode. Can we see the relevant part of\nconfig.log on your machine?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 13:20:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 5:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I have a different complaint, using Big Sur and Xcode 12.3:\n>\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport.a(pread.o) has no symbols\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport_shlib.a(pread_shlib.o) has no symbols\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libpgport_srv.a(pread_srv.o) has no symbols\n>\n> Looks like we need to be more careful about not including pread.c\n> in the build unless it actually has code to contribute.\n\nI did it that way because it made it easy to test different\ncombinations of the replacements on computers that do actually have\npwrite and pwritev, just by tweaking pg_config.h. Here's an attempt\nto do it with AC_REPLACE_FUNCS, which avoids creating empty .o files.\nIt means that to test the replacements on modern systems you have to\ntweak pg_config.h and also add the relevant .o files to LIBOBJS in\nsrc/Makefile.global, but that seems OK.",
"msg_date": "Thu, 14 Jan 2021 08:52:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jan 14, 2021 at 5:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Looks like we need to be more careful about not including pread.c\n>> in the build unless it actually has code to contribute.\n\n> I did it that way because it made it easy to test different\n> combinations of the replacements on computers that do actually have\n> pwrite and pwritev, just by tweaking pg_config.h. Here's an attempt\n> to do it with AC_REPLACE_FUNCS, which avoids creating empty .o files.\n> It means that to test the replacements on modern systems you have to\n> tweak pg_config.h and also add the relevant .o files to LIBOBJS in\n> src/Makefile.global, but that seems OK.\n\nYeah, this looks better. Two gripes, one major and one minor:\n\n* You need to remove pread.o and pwrite.o from the hard-wired\npart of the list in src/port/Makefile, else they get built\nwhether needed or not.\n\n* I don't much like this in fd.h:\n\n@@ -46,6 +46,7 @@\n #include <dirent.h>\n \n \n+struct iovec;\n typedef int File;\n\nbecause it makes it look like iovec and File are of similar\nstatus, which they hardly are. Perhaps more like\n\n #include <dirent.h>\n+ \n+struct iovec;\t\t\t/* avoid including sys/uio.h here */\n \n \n typedef int File;\n\n\nI confirm clean builds on Big Sur and Catalina with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 15:26:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 9:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * You need to remove pread.o and pwrite.o from the hard-wired\n> part of the list in src/port/Makefile, else they get built\n> whether needed or not.\n\nRight, done.\n\n> * I don't much like this in fd.h:\n>\n> @@ -46,6 +46,7 @@\n> #include <dirent.h>\n>\n>\n> +struct iovec;\n> typedef int File;\n>\n> because it makes it look like iovec and File are of similar\n> status, which they hardly are. Perhaps more like\n>\n> #include <dirent.h>\n> +\n> +struct iovec; /* avoid including sys/uio.h here */\n\nDone, except I wrote port/pg_iovec.h.\n\n> I confirm clean builds on Big Sur and Catalina with this.\n\nThanks for checking. I also checked on Windows via CI. Pushed.\n\n\n",
"msg_date": "Thu, 14 Jan 2021 11:22:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "I wrote:\n> So I'm a little confused as to why this test is failing to fail\n> with (I assume) newer Xcode. Can we see the relevant part of\n> config.log on your machine?\n\nAfter further digging I believe I understand what's happening,\nand it's a bit surprising we've not been bit by it before.\nIf the compiler believes (thanks to __API_AVAILABLE macros in\nApple's system headers) that a given library symbol might not\nexist in the lowest macOS version it is compiling for, it will\nstill emit a normal call to that function ... but it also emits\n\n .weak_reference _preadv\n\nmarking the call as a weak reference. If the linker then fails\nto link that call, it doesn't throw an error, it just replaces\nthe call instruction with a NOP :-(. This is why configure's\ntest appears to succeed, since it only checks whether you can\nlink not whether the call would work at runtime. Apple's\nassumption evidently is that you'll guard the call with a run-time\ncheck to see if the function exists before you use it, and you\ndon't want your link to fail if it doesn't.\n\nThe solution to this, according to \"man ld\", is\n\n -no_weak_imports\n Error if any symbols are weak imports (i.e. allowed to be\n unresolved (NULL) at runtime). Useful for config based\n projects that assume they are built and run on the same OS\n version.\n\nI don't particularly care that Apple is looking down their nose\nat people who don't want to make their builds run on multiple OS\nversions, so I think we should just use this and call it good.\n\nAttached is an untested quick hack to make that happen --- Sergey,\ncan you verify that this fixes configure's results on your setup?\n\n(This is not quite committable as-is, it needs something to avoid\nadding -Wl,-no_weak_imports on ancient macOS versions. But it\nwill do to see if the fix works on modern versions.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 13 Jan 2021 18:22:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Tom Lane wrote:\n> The symptoms sound consistent with using bleeding-edge Xcode on a\n> Catalina machine ... please report exact OS and Xcode versions.\n\nmacOS 10.15.7 (19H2)\nXcode 12.3 (12C33)\nmacOS SDK 11.1 (20C63)\n\n\n> Attached is an untested quick hack to make that happen --- Sergey,\n> can you verify that this fixes configure's results on your setup?\n\n\"-no_weak_imports\" doesn't help.\n\nconfigure:15161: checking for pwritev\nconfigure:15161: gcc -o conftest -Wall -Wmissing-prototypes \n-Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels \n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing \n-fwrapv -Wno-unused-command-line-argument -O2 -isysroot \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n -Wl,-no_weak_imports -isysroot \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n conftest.c -lz -lm >&5\nconfigure:15161: $? = 0\nconfigure:15161: result: yes\n\n\nThen I get the same compiler warnings about pwritev and an unrelated \nlink error:\n\nld: weak import of symbol '___darwin_check_fd_set_overflow' not \nsupported because of option: -no_weak_imports for architecture x86_64\nclang: error: linker command failed with exit code 1 (use -v to see \ninvocation)\nmake[2]: *** [postgres] Error 1\nmake[1]: *** [all-backend-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n\nPlease see the logs attached.",
"msg_date": "Thu, 14 Jan 2021 09:32:54 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": ">> The symptoms sound consistent with using bleeding-edge Xcode on a\n>> Catalina machine ... please report exact OS and Xcode versions.\n> \n> macOS 10.15.7 (19H2)\n> Xcode 12.3 (12C33)\n> macOS SDK 11.1 (20C63)\n> \n\nEverything is fine if I run \"configure\" with\nPG_SYSROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\nI noticed that \"cc\" invoked from command line uses:\n-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\nBut \"xcodebuild -version -sdk macosx Path\" invoked by \"configure\" when \nPG_SYSROOT is not provided gives:\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n\nNow I'm confused about different SDK versions and locations used by \nXcode and CommandLineTools :)\n\n\n",
"msg_date": "Thu, 14 Jan 2021 10:44:12 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> I noticed that \"cc\" invoked from command line uses:\n> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\nHm, how did you determine that exactly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jan 2021 10:42:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 14.01.2021 18:42, Tom Lane wrote:\n>> I noticed that \"cc\" invoked from command line uses:\n>> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n> \n> Hm, how did you determine that exactly?\n> \n\n% echo 'int main(void){}' >tmp.c\n% cc -v tmp.c\nApple clang version 12.0.0 (clang-1200.0.32.28)\nTarget: x86_64-apple-darwin19.6.0\nThread model: posix\nInstalledDir: \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n \n\"/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang\" \n-cc1 -triple x86_64-apple-macosx10.15.0 -Wdeprecated-objc-isa-usage \n-Werror=deprecated-objc-isa-usage -Werror=implicit-function-declaration \n-emit-obj -mrelax-all -disable-free -disable-llvm-verifier \n-discard-value-names -main-file-name tmp.c -mrelocation-model pic \n-pic-level 2 -mthread-model posix -mframe-pointer=all -fno-strict-return \n-masm-verbose -munwind-tables -target-sdk-version=10.15.6 \n-fcompatibility-qualified-id-block-type-checking -target-cpu penryn \n-dwarf-column-info -debugger-tuning=lldb -target-linker-version 609.8 -v \n-resource-dir \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0 \n-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk \n-I/usr/local/include -internal-isystem \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/local/include \n-internal-isystem \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include \n-internal-externc-isystem \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include \n-internal-externc-isystem \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include \n-Wno-reorder-init-list -Wno-implicit-int-float-conversion \n-Wno-c99-designator -Wno-final-dtor-non-final-class -Wno-extra-semi-stmt \n-Wno-misleading-indentation 
-Wno-quoted-include-in-framework-header \n-Wno-implicit-fallthrough -Wno-enum-enum-conversion \n-Wno-enum-float-conversion -fdebug-compilation-dir /Users/shinderuk \n-ferror-limit 19 -fmessage-length 238 -stack-protector 1 -fstack-check \n-mdarwin-stkchk-strong-link -fblocks -fencode-extended-block-signature \n-fregister-global-dtors-with-atexit -fgnuc-version=4.2.1 \n-fobjc-runtime=macosx-10.15.0 -fmax-type-align=16 \n-fdiagnostics-show-option -fcolor-diagnostics -o \n/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/tmp-91fb5e.o -x c tmp.c\nclang -cc1 version 12.0.0 (clang-1200.0.32.28) default target \nx86_64-apple-darwin19.6.0\nignoring nonexistent directory \n\"/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/local/include\"\nignoring nonexistent directory \n\"/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/Library/Frameworks\"\n#include \"...\" search starts here:\n#include <...> search starts here:\n /usr/local/include\n \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include\n /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include\n \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include\n \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks \n(framework directory)\nEnd of search list.\n \n\"/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld\" \n-demangle -lto_library \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/libLTO.dylib \n-no_deduplicate -dynamic -arch x86_64 -platform_version macos 10.15.0 \n10.15.6 -syslibroot \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -o a.out \n-L/usr/local/lib \n/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/tmp-91fb5e.o -lSystem \n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/lib/darwin/libclang_rt.osx.a\n\n\n",
"msg_date": "Thu, 14 Jan 2021 19:28:25 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> On 14.01.2021 18:42, Tom Lane wrote:\n>>> I noticed that \"cc\" invoked from command line uses:\n>>> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\n>> Hm, how did you determine that exactly?\n\n> % cc -v tmp.c\n> ...\n> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk \n\nOkay, interesting. On my Catalina machine, I see\n\n-isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n\nwhich is also a 10.15 SDK, since I haven't upgraded Xcode past 12.0.\nI wonder if that would change if I did upgrade (but I don't plan to\nrisk it, since this is my only remaining Catalina install).\n\nAfter considerable playing around, I'm guessing that the reason\n-no_weak_imports doesn't help is that it rejects calls that are\nmarked as weak references on the *calling* side. Since AC_CHECK_FUNCS\ndoesn't bother to #include the relevant header file, the compiler\ndoesn't know that preadv() ought to be marked as a weak reference.\nThen, when the test program gets linked against the stub libc that's\nprovided by the SDK, there is a version of preadv() there so no link\nfailure occurs. (There are way more moving parts in this weak-reference\nthing than I'd realized.)\n\nIt seems like the more productive approach would be to try to identify\nthe right sysroot to use. I wonder if there is some less messy way\nto find out the compiler's default sysroot than to scrape it out of\n-v output.\n\nAnother thing I've been realizing while poking at this is that we\nmight not need to set -isysroot explicitly at all, which would then\nlead to the compiler using its default sysroot automatically.\nIn some experimentation, it seems like what we need PG_SYSROOT for\nis just for configure to be able to find tclConfig.sh and the Perl\nheader files. So at this point I'm tempted to try ripping that\nout altogether. 
If you remove the lines in src/template/darwin\nthat inject PG_SYSROOT into CPPFLAGS and LDFLAGS, do things\nwork for you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jan 2021 13:05:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "I wrote:\n> It seems like the more productive approach would be to try to identify\n> the right sysroot to use. I wonder if there is some less messy way\n> to find out the compiler's default sysroot than to scrape it out of\n> -v output.\n\nThis is, of course, not terribly well documented by Apple. But\nMr. Google suggests that \"xcrun --show-sdk-path\" might serve.\nWhat does that print for you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jan 2021 13:31:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "I borrowed my wife's Mac, which is still on Catalina and up to now\nnever had Xcode on it, and found some very interesting things.\n\nStep 1: download/install Xcode 12.3, open it, agree to license,\nwait for it to finish \"installing components\".\n\nAt this point, /Library/Developer/CommandLineTools doesn't exist,\nand we have the following outputs from various probe commands:\n\n% xcrun --show-sdk-path \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n% xcrun --sdk macosx --show-sdk-path \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n% xcodebuild -version -sdk macosx Path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n\nAlso, cc -v reports\n -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n\nUnsurprisingly, Xcode 12.3 itself only contains\n\n% ls -l /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs\ntotal 0\ndrwxr-xr-x 5 root wheel 160 Nov 30 07:27 DriverKit20.2.sdk\ndrwxr-xr-x 7 root wheel 224 Nov 30 07:27 MacOSX.sdk\nlrwxr-xr-x 1 root wheel 10 Jan 14 15:57 MacOSX11.1.sdk -> MacOSX.sdk\n\nStep 2: install command line tools (I used \"xcode-select --install\"\nto fire this off, rather than the Xcode menu item).\n\nNow I have\n\n% ls -l /Library/Developer/CommandLineTools/SDKs\ntotal 0\nlrwxr-xr-x 1 root wheel 14 Jan 14 16:42 MacOSX.sdk -> MacOSX11.1.sdk\ndrwxr-xr-x 8 root wheel 256 Jul 9 2020 MacOSX10.15.sdk\ndrwxr-xr-x 7 root wheel 224 Nov 30 07:33 MacOSX11.1.sdk\n\nwhich is pretty interesting in itself, because the same directory on\nmy recently-updated-to-Big-Sur Macs does NOT have the 11.1 SDK.\nI wonder what determines which versions get installed here.\n\nMore interesting yet:\n\n% xcrun --show-sdk-path \n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n% xcrun --sdk macosx --show-sdk-path 
\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n% xcodebuild -version -sdk macosx Path \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n\nand cc -v reports\n -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\nSo apparently, \"xcrun --show-sdk-path\" (without any -sdk option)\nis the most authoritative guide to the compiler's default sysroot.\n\nHowever, googling turns up various people reporting that \"xcrun\n--show-sdk-path\" returns an empty string for them, and our last\nmajor investigation into this [1] found that there are some system\nstates where the compiler appears to have no default sysroot,\nwhich I bet is the same thing. I do not at this point have a recipe\nto reproduce such a state, but we'd be fools to imagine it's no\nlonger possible. My guess about it is that Apple's processes for\nupdating the default sysroot during system updates are just plain\nbuggy, with various corner cases that have ill-understood causes.\n\nAlso, after re-reading [1] I am not at all excited about trying to\nremove the -isysroot switches from our *FLAGS. What I propose to do\nis keep that, but improve our mechanism for choosing a default value\nfor PG_SYSROOT. It looks like first trying \"xcrun --show-sdk-path\",\nand falling back to \"xcodebuild -version -sdk macosx Path\" if that\ndoesn't yield a valid path, is more likely to give a working build\nthan relying entirely on xcodebuild. Maybe there's a case for trying\n\"xcrun --sdk macosx --show-sdk-path\" in between; in my tests that\nseemed noticeably faster than invoking xcodebuild, and I've not yet\nseen a case where it gave a different answer.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20840.1537850987%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 14 Jan 2021 17:13:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 15.01.2021 01:13, Tom Lane wrote:\n> I borrowed my wife's Mac, which is still on Catalina and up to now\n> never had Xcode on it, and found some very interesting things.\n> \n> Step 1: download/install Xcode 12.3, open it, agree to license,\n> wait for it to finish \"installing components\".\n> \n> At this point, /Library/Developer/CommandLineTools doesn't exist,\n> and we have the following outputs from various probe commands:\n> \n> % xcrun --show-sdk-path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n> % xcrun --sdk macosx --show-sdk-path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> % xcodebuild -version -sdk macosx Path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> \n> Also, cc -v reports\n> -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n> \n> Unsurprisingly, Xcode 12.3 itself only contains\n> \n> % ls -l /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs\n> total 0\n> drwxr-xr-x 5 root wheel 160 Nov 30 07:27 DriverKit20.2.sdk\n> drwxr-xr-x 7 root wheel 224 Nov 30 07:27 MacOSX.sdk\n> lrwxr-xr-x 1 root wheel 10 Jan 14 15:57 MacOSX11.1.sdk -> MacOSX.sdk\n> \n> Step 2: install command line tools (I used \"xcode-select --install\"\n> to fire this off, rather than the Xcode menu item).\n> \n> Now I have\n> \n> % ls -l /Library/Developer/CommandLineTools/SDKs\n> total 0\n> lrwxr-xr-x 1 root wheel 14 Jan 14 16:42 MacOSX.sdk -> MacOSX11.1.sdk\n> drwxr-xr-x 8 root wheel 256 Jul 9 2020 MacOSX10.15.sdk\n> drwxr-xr-x 7 root wheel 224 Nov 30 07:33 MacOSX11.1.sdk\n> \n> which is pretty interesting in itself, because the same directory on\n> my recently-updated-to-Big-Sur Macs does NOT have the 11.1 SDK.\n> I wonder what determines which versions get installed here.\n> \n> More interesting yet:\n> 
\n> % xcrun --show-sdk-path\n> /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n> % xcrun --sdk macosx --show-sdk-path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> % xcodebuild -version -sdk macosx Path\n> /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n> \n> and cc -v reports\n> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n> \n> So apparently, \"xcrun --show-sdk-path\" (without any -sdk option)\n> is the most authoritative guide to the compiler's default sysroot.\n> \n> However, googling turns up various people reporting that \"xcrun\n> --show-sdk-path\" returns an empty string for them, and our last\n> major investigation into this [1] found that there are some system\n> states where the compiler appears to have no default sysroot,\n> which I bet is the same thing. I do not at this point have a recipe\n> to reproduce such a state, but we'd be fools to imagine it's no\n> longer possible. My guess about it is that Apple's processes for\n> updating the default sysroot during system updates are just plain\n> buggy, with various corner cases that have ill-understood causes.\n> \n> Also, after re-reading [1] I am not at all excited about trying to\n> remove the -isysroot switches from our *FLAGS. What I propose to do\n> is keep that, but improve our mechanism for choosing a default value\n> for PG_SYSROOT. It looks like first trying \"xcrun --show-sdk-path\",\n> and falling back to \"xcodebuild -version -sdk macosx Path\" if that\n> doesn't yield a valid path, is more likely to give a working build\n> than relying entirely on xcodebuild. 
Maybe there's a case for trying\n> \"xcrun --sdk macosx --show-sdk-path\" in between; in my tests that\n> seemed noticeably faster than invoking xcodebuild, and I've not yet\n> seen a case where it gave a different answer.\n> \n> Thoughts?\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/flat/20840.1537850987%40sss.pgh.pa.us\n> \n\nThanks for the thorough investigation and sorry for the late reply.\n\nI spent quite some time trying to understand / reverse engineer the \nlogic behind xcrun's default SDK selection. Apparently, \"man xcrun\" is \nnot accurate in saying:\n\n\tThe SDK which will be searched defaults to the most recent \navailable...\n\nI didn't find anything really useful or helpful. \n\"/Library/Developer/CommandLineTools\" is hardcoded into \n\"libxcrun.dylib\". On my machine xcrun scans\n\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs\n\nand\n\n/Library/Developer/CommandLineTools/SDKs\n\nin that order, and loads \"SDKSettings.plist\" from each subdirectory. I \nlooked into plists, but couldn't find anything special about \n\"MacOSX10.15.sdk\".\n\n\nOkay, here is what I have:\n\n% ls -l \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs\ntotal 0\ndrwxr-xr-x 5 root wheel 160 Nov 30 15:27 DriverKit20.2.sdk\ndrwxr-xr-x 7 root wheel 224 Nov 30 15:27 MacOSX.sdk\nlrwxr-xr-x 1 root wheel 10 Dec 17 14:25 MacOSX11.1.sdk -> MacOSX.sdk\n\n% ls -l /Library/Developer/CommandLineTools/SDKs\ntotal 0\nlrwxr-xr-x 1 root wheel 14 Nov 17 02:21 MacOSX.sdk -> MacOSX11.0.sdk\ndrwxr-xr-x 8 root wheel 256 Nov 17 02:22 MacOSX10.15.sdk\ndrwxr-xr-x 7 root wheel 224 Oct 19 23:39 MacOSX11.0.sdk\n\n% xcrun --show-sdk-path\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\n% xcrun --sdk macosx --show-sdk-path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\n\nOh, that's weird! 
Nevertheless I like you suggestion to call \"xcrun\" \nfrom \"configure\".\n\nAdding \"--verbose\" doesn't really explain anything, but just in case.\n\n% xcrun --verbose --no-cache --find cc\nxcrun: note: PATH = \n'/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin'\nxcrun: note: SDKROOT = \n'/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk'\nxcrun: note: TOOLCHAINS = ''\nxcrun: note: DEVELOPER_DIR = '/Applications/Xcode.app/Contents/Developer'\nxcrun: note: XCODE_DEVELOPER_USR_PATH = ''\nxcrun: note: xcrun_db = \n'/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/xcrun_db'\nxcrun: note: xcrun via cc (xcrun)\nxcrun: note: database key is: \ncc|/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk||/Applications/Xcode.app/Contents/Developer|\nxcrun: note: looking up with \n'/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \nmacosx -find cc 2> /dev/null'\nxcrun: note: lookup resolved with 'xcodebuild -find' to \n'/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc'\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc\n\n\n% xcrun --verbose --no-cache --sdk macosx --find cc\nxcrun: note: looking up SDK with \n'/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \nmacosx -version Path'\nxcrun: note: PATH = \n'/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin'\nxcrun: note: SDKROOT = 'macosx'\nxcrun: note: TOOLCHAINS = ''\nxcrun: note: DEVELOPER_DIR = '/Applications/Xcode.app/Contents/Developer'\nxcrun: note: XCODE_DEVELOPER_USR_PATH = ''\nxcrun: note: xcrun_db = \n'/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/xcrun_db'\nxcrun: note: lookup resolved to: \n'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk'\nxcrun: note: looking up SDK with \n'/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk 
\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n-version PlatformPath'\nxcrun: note: PATH = \n'/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin'\nxcrun: note: SDKROOT = \n'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk'\nxcrun: note: TOOLCHAINS = ''\nxcrun: note: DEVELOPER_DIR = '/Applications/Xcode.app/Contents/Developer'\nxcrun: note: XCODE_DEVELOPER_USR_PATH = ''\nxcrun: note: xcrun_db = \n'/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/xcrun_db'\nxcrun: note: lookup resolved to: \n'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform'\nxcrun: note: PATH = \n'/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin'\nxcrun: note: SDKROOT = \n'/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk'\nxcrun: note: TOOLCHAINS = ''\nxcrun: note: DEVELOPER_DIR = '/Applications/Xcode.app/Contents/Developer'\nxcrun: note: XCODE_DEVELOPER_USR_PATH = ''\nxcrun: note: xcrun_db = \n'/var/folders/8x/jvqv7hyd5h98m7tz2zm9r0yc0000gn/T/xcrun_db'\nxcrun: note: xcrun via cc (xcrun)\nxcrun: note: database key is: \ncc|/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk||/Applications/Xcode.app/Contents/Developer|\nxcrun: note: looking up with \n'/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk \n-find cc 2> /dev/null'\nxcrun: note: lookup resolved with 'xcodebuild -find' to \n'/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc'\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc\n\n\n",
"msg_date": "Fri, 15 Jan 2021 03:13:17 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 14.01.2021 21:05, Tom Lane wrote:\n> After considerable playing around, I'm guessing that the reason\n> -no_weak_imports doesn't help is that it rejects calls that are\n> marked as weak references on the *calling* side. Since AC_CHECK_FUNCS\n> doesn't bother to #include the relevant header file, the compiler\n> doesn't know that preadv() ought to be marked as a weak reference.\n> Then, when the test program gets linked against the stub libc that's\n> provided by the SDK, there is a version of preadv() there so no link\n> failure occurs. (There are way more moving parts in this weak-reference\n> thing than I'd realized.)\n> \n\nOh, that's interesting. I've just played with it a bit and it looks \nexactly as you say.\n\n> Another thing I've been realizing while poking at this is that we\n> might not need to set -isysroot explicitly at all, which would then\n> lead to the compiler using its default sysroot automatically.\n> In some experimentation, it seems like what we need PG_SYSROOT for\n> is just for configure to be able to find tclConfig.sh and the Perl\n> header files. So at this point I'm tempted to try ripping that\n> out altogether. If you remove the lines in src/template/darwin\n> that inject PG_SYSROOT into CPPFLAGS and LDFLAGS, do things\n> work for you?\n\nYes, it works fine.\n\n\n",
"msg_date": "Fri, 15 Jan 2021 04:12:01 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> On 15.01.2021 01:13, Tom Lane wrote:\n>> Also, after re-reading [1] I am not at all excited about trying to\n>> remove the -isysroot switches from our *FLAGS. What I propose to do\n>> is keep that, but improve our mechanism for choosing a default value\n>> for PG_SYSROOT. It looks like first trying \"xcrun --show-sdk-path\",\n>> and falling back to \"xcodebuild -version -sdk macosx Path\" if that\n>> doesn't yield a valid path, is more likely to give a working build\n>> than relying entirely on xcodebuild. Maybe there's a case for trying\n>> \"xcrun --sdk macosx --show-sdk-path\" in between; in my tests that\n>> seemed noticeably faster than invoking xcodebuild, and I've not yet\n>> seen a case where it gave a different answer.\n\n> I spent quite some time trying to understand / reverse engineer the \n> logic behind xcrun's default SDK selection.\n\nYeah, I wasted a fair amount of time on that too, going so far as\nto ktrace xcrun (as I gather you did too). I'm not any more\nenlightened than you are about exactly how it's making the choice.\n\n> Oh, that's weird! Nevertheless I like you suggestion to call \"xcrun\" \n> from \"configure\".\n\nAnyway, after re-reading the previous thread, something I like about\nthe current behavior is that it tends to produce a version-numbered\nsysroot path, ie something ending in \"MacOSX11.1.sdk\" or whatever.\nOne of the hazards we're trying to avoid is some parts of a PG\ninstallation being built against one SDK version while other parts are\nbuilt against another. The typical behavior of \"xcrun --show-sdk-path\"\nseems to be to produce a path ending in \"MacOSX.sdk\", which defeats that.\nSo I think we should accept the path only if it contains a version number,\nand otherwise move on to the other probe commands.\n\nHence, I propose the attached. This works as far as I can tell\nto fix the problem you're seeing.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 14 Jan 2021 20:45:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 15.01.2021 01:13, Tom Lane wrote:\n\n> than relying entirely on xcodebuild. Maybe there's a case for trying\n> \"xcrun --sdk macosx --show-sdk-path\" in between; in my tests that\n> seemed noticeably faster than invoking xcodebuild, and I've not yet\n> seen a case where it gave a different answer.\n> \n\nI see that \"xcrun --sdk macosx --show-sdk-path\" really calls\n\"/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \nmacosx -version Path\" under the hood.\n\n\n% lldb -- xcrun --no-cache --sdk macosx --show-sdk-path\n(lldb) target create \"xcrun\"\nCurrent executable set to 'xcrun' (x86_64).\n(lldb) settings set -- target.run-args \"--no-cache\" \"--sdk\" \"macosx\" \n\"--show-sdk-path\"\n(lldb) settings set target.unset-env-vars SDKROOT\n(lldb) b popen\nBreakpoint 1: where = libsystem_c.dylib`popen, address = 0x00007fff67265607\n(lldb) r\nProcess 26857 launched: '/usr/bin/xcrun' (x86_64)\nProcess 26857 stopped\n* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1\n frame #0: 0x00007fff6e313607 libsystem_c.dylib`popen\nlibsystem_c.dylib`popen:\n-> 0x7fff6e313607 <+0>: pushq %rbp\n 0x7fff6e313608 <+1>: movq %rsp, %rbp\n 0x7fff6e31360b <+4>: pushq %r15\n 0x7fff6e31360d <+6>: pushq %r14\nTarget 0: (xcrun) stopped.\n(lldb) p (char *)$arg1\n(char *) $1 = 0x0000000100406960 \n\"/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \nmacosx -version Path\"\n\n\n% sudo dtrace -n 'pid$target::popen:entry { trace(copyinstr(arg0)) }' -c \n'xcrun --sdk macosx --show-sdk-path'\ndtrace: description 'pid$target::popen:entry ' matched 1 probe\nCPU ID FUNCTION:NAME\n 0 413269 popen:entry \n/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \nmacosx -version Path\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\ndtrace: pid 26905 has exited\n\n\n",
"msg_date": "Fri, 15 Jan 2021 04:53:46 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> I see that \"xcrun --sdk macosx --show-sdk-path\" really calls\n> \"/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \n> macosx -version Path\" under the hood.\n\nHmm. I found something odd on my wife's Mac: although on my other\nmachines, I get something like\n\n$ xcrun --verbose --no-cache --show-sdk-path\nxcrun: note: looking up SDK with '/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk -version PlatformPath'\nxcrun: note: PATH = '/Users/tgl/testversion/bin:/usr/local/autoconf-2.69/bin:/Users/tgl/bin:/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Library/Tcl/bin:/opt/X11/bin'\nxcrun: note: SDKROOT = '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk'\nxcrun: note: TOOLCHAINS = ''\nxcrun: note: DEVELOPER_DIR = '/Applications/Xcode.app/Contents/Developer'\nxcrun: note: XCODE_DEVELOPER_USR_PATH = ''\nxcrun: note: xcrun_db = '/var/folders/3p/2bnrmypd17jcqbtzw79t9blw0000gn/T/xcrun_db'\nxcrun: note: lookup resolved to: '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform'\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n\non her machine there's no detail at all:\n\n% xcrun --verbose --no-cache --show-sdk-path\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n\nSo I'm not sure what to make of that. But I'm hesitant to assume\nthat xcrun is just a wrapper around xcodebuild.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jan 2021 21:04:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 15.01.2021 04:53, Sergey Shinderuk wrote:\n\n> I see that \"xcrun --sdk macosx --show-sdk-path\" really calls\n> \"/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -sdk \n> macosx -version Path\" under the hood.\n> \n\n... and caches the result. xcodebuild not called without --no-cache.\nSo it still make sense to fall back on xcodebuild.\n\n% sudo dtrace -n 'pid$target::popen:entry { trace(copyinstr(arg0)) }' -c \n'xcrun --sdk macosx --show-sdk-path'\ndtrace: description 'pid$target::popen:entry ' matched 1 probe\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk\ndtrace: pid 26981 has exited\n\n\n",
"msg_date": "Fri, 15 Jan 2021 05:04:48 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 15.01.2021 05:04, Tom Lane wrote:\n> \n> on her machine there's no detail at all:\n> \n> % xcrun --verbose --no-cache --show-sdk-path\n> /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n> \n\nThe same on my machine. I get details for --find, but not for \n--show-sdk-path.\n\n\n> So I'm not sure what to make of that. But I'm hesitant to assume\n> that xcrun is just a wrapper around xcodebuild.\n> \n\nI agree.\n\n\n",
"msg_date": "Fri, 15 Jan 2021 05:08:58 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On 15.01.2021 04:45, Tom Lane wrote:\n> Hence, I propose the attached. This works as far as I can tell\n> to fix the problem you're seeing.\nYes, it works fine. Thank you very much.\n\n\n",
"msg_date": "Fri, 15 Jan 2021 05:27:14 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> On 15.01.2021 04:45, Tom Lane wrote:\n>> Hence, I propose the attached. This works as far as I can tell\n>> to fix the problem you're seeing.\n\n> Yes, it works fine. Thank you very much.\n\nGreat. Pushed with a little more polishing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jan 2021 11:30:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
},
{
"msg_contents": "On Thu, Jan 14, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> > On 15.01.2021 01:13, Tom Lane wrote:\n> >> Also, after re-reading [1] I am not at all excited about trying to\n> >> remove the -isysroot switches from our *FLAGS. What I propose to do\n> >> is keep that, but improve our mechanism for choosing a default value\n> >> for PG_SYSROOT. It looks like first trying \"xcrun --show-sdk-path\",\n> >> and falling back to \"xcodebuild -version -sdk macosx Path\" if that\n> >> doesn't yield a valid path, is more likely to give a working build\n> >> than relying entirely on xcodebuild. Maybe there's a case for trying\n> >> \"xcrun --sdk macosx --show-sdk-path\" in between; in my tests that\n> >> seemed noticeably faster than invoking xcodebuild, and I've not yet\n> >> seen a case where it gave a different answer.\n>\n> > I spent quite some time trying to understand / reverse engineer the\n> > logic behind xcrun's default SDK selection.\n>\n> Yeah, I wasted a fair amount of time on that too, going so far as\n> to ktrace xcrun (as I gather you did too). I'm not any more\n> enlightened than you are about exactly how it's making the choice.\n>\n> > Oh, that's weird! Nevertheless I like you suggestion to call \"xcrun\"\n> > from \"configure\".\n>\n> Anyway, after re-reading the previous thread, something I like about\n> the current behavior is that it tends to produce a version-numbered\n> sysroot path, ie something ending in \"MacOSX11.1.sdk\" or whatever.\n> One of the hazards we're trying to avoid is some parts of a PG\n> installation being built against one SDK version while other parts are\n> built against another. 
The typical behavior of \"xcrun --show-sdk-path\"\n> seems to be to produce a path ending in \"MacOSX.sdk\", which defeats that.\n> So I think we should accept the path only if it contains a version number,\n> and otherwise move on to the other probe commands.\nI don't think enforcing a specific naming scheme makes sense, the minimum\nOSX runtime version is effectively entirely separate from the SDK version.\n\nThe pwritev issue just seems to be caused by a broken configure check,\nI've fixed that here:\nhttps://postgr.es/m/20210119111625.20435-1-james.hilliard1%40gmail.com\n>\n> Hence, I propose the attached. This works as far as I can tell\n> to fix the problem you're seeing.\n>\n> regards, tom lane\n>\n\n\n",
"msg_date": "Tue, 19 Jan 2021 05:21:31 -0700",
"msg_from": "James Hilliard <james.hilliard1@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_preadv() and pg_pwritev()"
}
] |
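The probe order discussed in the thread above (try `xcrun --show-sdk-path` first, accept the result only if it names a version-numbered SDK, and fall back to `xcrun --sdk macosx --show-sdk-path` and then `xcodebuild`) can be sketched in shell. This is an illustrative sketch, not the actual `src/template/darwin` change that was committed; the version-number test is one plausible reading of "contains a version number".

```shell
# Sketch of the PG_SYSROOT probe order (illustrative, not configure.ac).
# A candidate is accepted only if it exists and its basename carries a
# version number, e.g. MacOSX11.1.sdk rather than MacOSX.sdk, so that all
# parts of the build agree on one SDK version.

# Return success if the path looks like a version-numbered SDK directory.
sysroot_is_versioned() {
  case "$(basename "$1")" in
    *[0-9].[0-9]*.sdk) return 0 ;;
    *)                 return 1 ;;
  esac
}

pg_sysroot=""
for probe in \
    "xcrun --show-sdk-path" \
    "xcrun --sdk macosx --show-sdk-path" \
    "xcodebuild -version -sdk macosx Path"
do
  # Missing tools (non-macOS hosts) just make the probe yield nothing.
  candidate=$($probe 2>/dev/null) || candidate=""
  if [ -d "$candidate" ] && sysroot_is_versioned "$candidate"; then
    pg_sysroot=$candidate
    break
  fi
done
echo "PG_SYSROOT=$pg_sysroot"
```

On a machine in the state Sergey describes, the first probe would return the unversioned `/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk`-style path or a bare `MacOSX.sdk` symlink target, and the loop would move on to the next command.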
[
{
"msg_contents": "Hi all\n\nsuppose I started the server with the following command\n\npg_ctl -D . . . start -l <logfilename>\n\nis there a way to get <logfilename> later by sending some query to the server or\n\nreading some configuration file\n\n(for example I can get data directory with the query “show data_directory”)\n\nthanks in advance\n\nDimitry Markman\n\n\n\n",
"msg_date": "Sun, 20 Dec 2020 09:16:12 -0500",
"msg_from": "Dmitry Markman <dmarkman@mac.com>",
"msg_from_op": true,
"msg_subject": "how to find log"
},
{
"msg_contents": "Dmitry Markman <dmarkman@mac.com> writes:\n> suppose I started the server with the following command\n> pg_ctl -D . . . start -l <logfilename>\n> is there a way to get <logfilename> later by sending some query to the server or\n\nNo, the server has no way to know where its stdout/stderr were\npointed to. You might want to enable the syslogger output method\n(see logging_collector) to have something a bit more featureful.\n\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Dec 2020 11:31:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how to find log"
},
{
"msg_contents": "\nOn 12/20/20 11:31 AM, Tom Lane wrote:\n> Dmitry Markman <dmarkman@mac.com> writes:\n>> suppose I started the server with the following command\n>> pg_ctl -D . . . start -l <logfilename>\n>> is there a way to get <logfilename> later by sending some query to the server or\n> No, the server has no way to know where its stdout/stderr were\n> pointed to. You might want to enable the syslogger output method\n> (see logging_collector) to have something a bit more featureful.\n>\n> https://www.postgresql.org/docs/current/runtime-config-logging.html\n>\n> \t\t\t\n\n\n\nAlternatively, asking the OS in many cases will work, e.g. on my linux\nmachine:\n\n\nls -l /proc/{postmasterpid}/fd/1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 20 Dec 2020 12:04:56 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: how to find log"
},
{
"msg_contents": "Thanks Tom, Andrew\nI’ll try out logging_collector facility\n\nthanks again\n\ndm\n\n\n> On Dec 20, 2020, at 12:04 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n> On 12/20/20 11:31 AM, Tom Lane wrote:\n>> Dmitry Markman <dmarkman@mac.com> writes:\n>>> suppose I started the server with the following command\n>>> pg_ctl -D . . . start -l <logfilename>\n>>> is there a way to get <logfilename> later by sending some query to the server or\n>> No, the server has no way to know where its stdout/stderr were\n>> pointed to. You might want to enable the syslogger output method\n>> (see logging_collector) to have something a bit more featureful.\n>> \n>> https://www.postgresql.org/docs/current/runtime-config-logging.html\n>> \n>> \t\t\t\n> \n> \n> \n> Alternatively, asking the OS in many cases will work, e.g. on my linux\n> machine:\n> \n> \n> ls -l /proc/{postmasterpid}/fd/1\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 13:08:35 -0500",
"msg_from": "Dmitry Markman <dmarkman@mac.com>",
"msg_from_op": true,
"msg_subject": "Re: how to find log"
}
] |
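The two answers in the thread above (ask the OS where the postmaster's stdout points, or enable `logging_collector` and ask the server) can be sketched together. Assumptions here: a Linux host with `/proc`, a `PGDATA` default path that is purely illustrative, and PostgreSQL 10 or later for `pg_current_logfile()`.

```shell
# Sketch of Andrew's /proc trick, generalized: given any pid, report
# where its stdout (fd 1) points.  PGDATA below is an illustrative
# default, not something the thread specifies.

stdout_of_pid() {
  readlink "/proc/$1/fd/1"
}

PGDATA=${PGDATA:-/var/lib/postgresql/data}
if [ -f "$PGDATA/postmaster.pid" ]; then
  # The first line of postmaster.pid is the postmaster's pid.
  stdout_of_pid "$(head -1 "$PGDATA/postmaster.pid")"
fi

# With logging_collector = on, PostgreSQL 10+ can answer directly:
#   psql -Atc "SELECT pg_current_logfile();"
```

The `/proc` approach recovers the `-l <logfilename>` target even without the logging collector, which is what the original question asked; `pg_current_logfile()` is the in-server answer once the collector is enabled.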
[
{
"msg_contents": "Hi,\nI took a look at the rebased patch.\n\n+ <entry><structfield>varisnotnull</structfield></entry>\n+ <entry><type>boolean</type></entry>\n+ <entry></entry>\n+ <entry>\n+ True if the schema variable doesn't allow null value. The default\nvalue is false.\n\nI wonder whether the field can be named in positive tense: e.g.\nvarallowsnull with default of true.\n\n+ <entry><structfield>vareoxaction</structfield></entry>\n+ <literal>n</literal> = no action, <literal>d</literal> = drop the\nvariable,\n+ <literal>r</literal> = reset the variable to its default value.\n\nLooks like there is only one action allowed. I wonder if there is a\npossibility of having two actions at the same time in the future.\n\n+ The <application>PL/pgSQL</application> language has not packages\n+ and then it has not package variables and package constants. The\n\n'has not' -> 'has no'\n\n+ a null value. A variable created as NOT NULL and without an\nexplicitely\n\nexplicitely -> explicitly\n\n+ int nnewmembers;\n+ Oid *oldmembers;\n+ Oid *newmembers;\n\nI wonder if naming nnewmembers newmembercount would be more readable.\n\nFor pg_variable_aclcheck:\n\n+ return ACLCHECK_OK;\n+ else\n+ return ACLCHECK_NO_PRIV;\n\nThe 'else' can be omitted.\n\n+ * Ownership check for a schema variables (specified by OID).\n\n'a schema variable' (no s)\n\nFor NamesFromList():\n\n+ if (IsA(n, String))\n+ {\n+ result = lappend(result, n);\n+ }\n+ else\n+ break;\n\nThere would be no more name if current n is not a String ?\n\n+ * both variants, and returns InvalidOid with not_uniq flag,\nwhen\n\n'and return' (no s)\n\n+ return InvalidOid;\n+ }\n+ else if (OidIsValid(varoid_without_attr))\n\n'else' is not needed (since the if block ends with return).\n\nFor clean_cache_callback(),\n\n+ if (hash_search(schemavarhashtab,\n+ (void *) &svar->varid,\n+ HASH_REMOVE,\n+ NULL) == NULL)\n+ elog(DEBUG1, \"hash table corrupted\");\n\nMaybe add more information to the debug, such as var name.\nShould we come 
out of the while loop in this scenario ?\n\nCheers",
"msg_date": "Sun, 20 Dec 2020 11:25:24 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: schema variables"
},
{
"msg_contents": "Hi,\nThis is continuation of the previous review.\n\n+ * We should to use schema variable buffer, when\n+ * it is available.\n\n'should use' (no to)\n\n+ /* When buffer of used schema variables loaded from shared memory */\n\nA verb seems missing in the above comment.\n\n+ elog(ERROR, \"unexpected non-SELECT command in LET ... SELECT\");\n\nSince non-SELECT is mentioned, I wonder if the trailing SELECT can be\nomitted.\n\n+ * some collision can be solved simply here to reduce errors\nbased\n+ * on simply existence of some variables. Often error can be\nusing\n\nsimply occurred twice above - I think one should be enough.\nIf you want to keep the second, it should be 'simple'.\n\nCheers\n\nOn Sun, Dec 20, 2020 at 11:25 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I took a look at the rebased patch.\n>\n> + <entry><structfield>varisnotnull</structfield></entry>\n> + <entry><type>boolean</type></entry>\n> + <entry></entry>\n> + <entry>\n> + True if the schema variable doesn't allow null value. The default\n> value is false.\n>\n> I wonder whether the field can be named in positive tense: e.g.\n> varallowsnull with default of true.\n>\n> + <entry><structfield>vareoxaction</structfield></entry>\n> + <literal>n</literal> = no action, <literal>d</literal> = drop the\n> variable,\n> + <literal>r</literal> = reset the variable to its default value.\n>\n> Looks like there is only one action allowed. I wonder if there is a\n> possibility of having two actions at the same time in the future.\n>\n> + The <application>PL/pgSQL</application> language has not packages\n> + and then it has not package variables and package constants. The\n>\n> 'has not' -> 'has no'\n>\n> + a null value. 
A variable created as NOT NULL and without an\n> explicitely\n>\n> explicitely -> explicitly\n>\n> + int nnewmembers;\n> + Oid *oldmembers;\n> + Oid *newmembers;\n>\n> I wonder if naming nnewmembers newmembercount would be more readable.\n>\n> For pg_variable_aclcheck:\n>\n> + return ACLCHECK_OK;\n> + else\n> + return ACLCHECK_NO_PRIV;\n>\n> The 'else' can be omitted.\n>\n> + * Ownership check for a schema variables (specified by OID).\n>\n> 'a schema variable' (no s)\n>\n> For NamesFromList():\n>\n> + if (IsA(n, String))\n> + {\n> + result = lappend(result, n);\n> + }\n> + else\n> + break;\n>\n> There would be no more name if current n is not a String ?\n>\n> + * both variants, and returns InvalidOid with not_uniq flag,\n> when\n>\n> 'and return' (no s)\n>\n> + return InvalidOid;\n> + }\n> + else if (OidIsValid(varoid_without_attr))\n>\n> 'else' is not needed (since the if block ends with return).\n>\n> For clean_cache_callback(),\n>\n> + if (hash_search(schemavarhashtab,\n> + (void *) &svar->varid,\n> + HASH_REMOVE,\n> + NULL) == NULL)\n> + elog(DEBUG1, \"hash table corrupted\");\n>\n> Maybe add more information to the debug, such as var name.\n> Should we come out of the while loop in this scenario ?\n>\n> Cheers\n>\n",
"msg_date": "Sun, 20 Dec 2020 12:44:03 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: schema variables"
},
{
"msg_contents": "Hi\n\nne 20. 12. 2020 v 20:24 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi,\n> I took a look at the rebased patch.\n>\n> + <entry><structfield>varisnotnull</structfield></entry>\n> + <entry><type>boolean</type></entry>\n> + <entry></entry>\n> + <entry>\n> + True if the schema variable doesn't allow null value. The default\n> value is false.\n>\n> I wonder whether the field can be named in positive tense: e.g.\n> varallowsnull with default of true.\n>\n\nalthough I prefer positive designed variables, in this case this negative\nform is better due consistency with other system tables\n\npostgres=# select table_name, column_name from information_schema.columns\nwhere column_name like '%null';\n┌──────────────┬──────────────┐\n│ table_name │ column_name │\n╞══════════════╪══════════════╡\n│ pg_type │ typnotnull │\n│ pg_attribute │ attnotnull │\n│ pg_variable │ varisnotnull │\n└──────────────┴──────────────┘\n(3 rows)\n\n\n\n> + <entry><structfield>vareoxaction</structfield></entry>\n> + <literal>n</literal> = no action, <literal>d</literal> = drop the\n> variable,\n> + <literal>r</literal> = reset the variable to its default value.\n>\n\n> Looks like there is only one action allowed. I wonder if there is a\n> possibility of having two actions at the same time in the future.\n>\n\n\nSurely not - these three possibilities are supported and tested\n\nCREATE VARIABLE t1 AS int DEFAULT -1 ON TRANSACTION END RESET\nCREATE TEMP VARIABLE g AS int ON COMMIT DROP;\n\n\n>\n> + The <application>PL/pgSQL</application> language has not packages\n> + and then it has not package variables and package constants. The\n>\n> 'has not' -> 'has no'\n>\n\nfixed\n\n\n> + a null value. 
A variable created as NOT NULL and without an\n> explicitely\n>\n> explicitely -> explicitly\n>\n\nfixed\n\n\n> + int nnewmembers;\n> + Oid *oldmembers;\n> + Oid *newmembers;\n>\n> I wonder if naming nnewmembers newmembercount would be more readable.\n>\n\nIt is not bad idea, but nnewmembers is used more times on more places, and\nthen this refactoring should be done in independent patch\n\n\n> For pg_variable_aclcheck:\n>\n> + return ACLCHECK_OK;\n> + else\n> + return ACLCHECK_NO_PRIV;\n>\n> The 'else' can be omitted.\n>\n\nagain - this pattern is used more often in related source file, and I used\nit for consistency with other functions. It is not visible from the patch,\nbut when you check the related file, it will be clean.\n\n\n> + * Ownership check for a schema variables (specified by OID).\n>\n> 'a schema variable' (no s)\n>\n> For NamesFromList():\n>\n> + if (IsA(n, String))\n> + {\n> + result = lappend(result, n);\n> + }\n> + else\n> + break;\n>\n> There would be no more name if current n is not a String ?\n>\n\nIt tries to derive the name of a variable from the target list. Usually\nthere can be only strings, but there can be array subscripting too\n(A_indices node)\nI wrote a comment there.\n\n\n>\n> + * both variants, and returns InvalidOid with not_uniq flag,\n> when\n>\n> 'and return' (no s)\n>\n> + return InvalidOid;\n> + }\n> + else if (OidIsValid(varoid_without_attr))\n>\n> 'else' is not needed (since the if block ends with return).\n>\n\nI understand. The `else` is not necessary, but I think so it is better due\nsymmetry\n\nif ()\n{\n return ..\n}\nelse if ()\n{\n return ..\n}\nelse\n{\n return ..\n}\n\nThis style is used more times in Postgres, and usually I prefer it, when I\nwant to emphasize so all ways have some similar pattern. 
My opinion about\nit is not too strong, The functionality is same, and I think so readability\nis a little bit better (due symmetry) (but I understand well so this can be\nsubjective).\n\n\n\n> For clean_cache_callback(),\n>\n> + if (hash_search(schemavarhashtab,\n> + (void *) &svar->varid,\n> + HASH_REMOVE,\n> + NULL) == NULL)\n> + elog(DEBUG1, \"hash table corrupted\");\n>\n> Maybe add more information to the debug, such as var name.\n> Should we come out of the while loop in this scenario ?\n>\n\nI checked other places, and the same pattern is used in this text. If there\nare problems, then the problem is not with some specific schema variable,\nbut in implementation of a hash table.\n\nMaybe it is better to ignore the result (I did it), like it is done on some\nother places. This part is implementation of some simple garbage collector,\nand is not important if value was removed this or different way. I changed\ncomments for this routine.\n\nRegards\n\nPavel\n\n\n> Cheers\n>",
"msg_date": "Tue, 22 Dec 2020 12:49:35 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: schema variables"
},
{
"msg_contents": "ne 20. 12. 2020 v 21:43 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi,\n> This is continuation of the previous review.\n>\n> + * We should to use schema variable buffer,\n> when\n> + * it is available.\n>\n> 'should use' (no to)\n>\n\nfixed\n\n\n> + /* When buffer of used schema variables loaded from shared memory\n> */\n>\n> A verb seems missing in the above comment.\n>\n\nI changed this comment\n\n<--><-->/*\n<--><--> * link shared memory with working copy of schema variable's values\n<--><--> * used in this query. This access is used by parallel query\nexecutor's\n<--><--> * workers.\n<--><--> */\n\n\n> + elog(ERROR, \"unexpected non-SELECT command in LET ... SELECT\");\n>\n> Since non-SELECT is mentioned, I wonder if the trailing SELECT can be\n> omitted.\n>\n\ndone\n\n\n> + * some collision can be solved simply here to reduce errors\n> based\n> + * on simply existence of some variables. Often error can be\n> using\n>\n> simply occurred twice above - I think one should be enough.\n> If you want to keep the second, it should be 'simple'.\n>\n\nI rewrote this comment\n\nupdated patch attached\n\nRegards\n\nPavel\n\n\n\n> Cheers\n>\n> On Sun, Dec 20, 2020 at 11:25 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>> Hi,\n>> I took a look at the rebased patch.\n>>\n>> + <entry><structfield>varisnotnull</structfield></entry>\n>> + <entry><type>boolean</type></entry>\n>> + <entry></entry>\n>> + <entry>\n>> + True if the schema variable doesn't allow null value. The default\n>> value is false.\n>>\n>> I wonder whether the field can be named in positive tense: e.g.\n>> varallowsnull with default of true.\n>>\n>> + <entry><structfield>vareoxaction</structfield></entry>\n>> + <literal>n</literal> = no action, <literal>d</literal> = drop the\n>> variable,\n>> + <literal>r</literal> = reset the variable to its default value.\n>>\n>> Looks like there is only one action allowed. 
I wonder if there is a\n>> possibility of having two actions at the same time in the future.\n>>\n>> + The <application>PL/pgSQL</application> language has not packages\n>> + and then it has not package variables and package constants. The\n>>\n>> 'has not' -> 'has no'\n>>\n>> + a null value. A variable created as NOT NULL and without an\n>> explicitely\n>>\n>> explicitely -> explicitly\n>>\n>> + int nnewmembers;\n>> + Oid *oldmembers;\n>> + Oid *newmembers;\n>>\n>> I wonder if naming nnewmembers newmembercount would be more readable.\n>>\n>> For pg_variable_aclcheck:\n>>\n>> + return ACLCHECK_OK;\n>> + else\n>> + return ACLCHECK_NO_PRIV;\n>>\n>> The 'else' can be omitted.\n>>\n>> + * Ownership check for a schema variables (specified by OID).\n>>\n>> 'a schema variable' (no s)\n>>\n>> For NamesFromList():\n>>\n>> + if (IsA(n, String))\n>> + {\n>> + result = lappend(result, n);\n>> + }\n>> + else\n>> + break;\n>>\n>> There would be no more name if current n is not a String ?\n>>\n>> + * both variants, and returns InvalidOid with not_uniq flag,\n>> when\n>>\n>> 'and return' (no s)\n>>\n>> + return InvalidOid;\n>> + }\n>> + else if (OidIsValid(varoid_without_attr))\n>>\n>> 'else' is not needed (since the if block ends with return).\n>>\n>> For clean_cache_callback(),\n>>\n>> + if (hash_search(schemavarhashtab,\n>> + (void *) &svar->varid,\n>> + HASH_REMOVE,\n>> + NULL) == NULL)\n>> + elog(DEBUG1, \"hash table corrupted\");\n>>\n>> Maybe add more information to the debug, such as var name.\n>> Should we come out of the while loop in this scenario ?\n>>\n>> Cheers\n>>\n>",
"msg_date": "Tue, 22 Dec 2020 14:30:00 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: schema variables"
}
] |
[
{
"msg_contents": "Hi,\n\nThe current wait events are already pretty useful. But I think we could\nmake them more informative without adding real runtime overhead.\n\n\n1) For lwlocks I think it'd be quite useful to show the mode of acquisition in\npg_stat_activity.wait_event_type, instead of just saying 'LWLock'.\n\nI think we should split PG_WAIT_LWLOCK into\nPG_WAIT_LWLOCK_{EXCLUSIVE,SHARED,WAIT_UNTIL_FREE}, and report a different\nwait_event_type based on the different class.\n\nThe fact that it'd break people explicitly looking for LWLock in\npg_stat_activity doesn't seem to outweigh the benefits to me.\n\n\n2) I think it's unhelpful that waits for WAL insertion locks to progress show\nup LWLock acquisitions. LWLockWaitForVar() feels like a distinct enough\noperation that passing in a user-specified wait event is worth the miniscule\nincremental overhead that'd add.\n\nI'd probably just make it a different wait class, and have xlog.c compute that\nbased on the number of the slot being waited for.\n\n\n3) I have observed waking up other processes as part of a lock release to be a\nsignificant performance factor. I would like to add a separate wait event type\nfor that. That'd be a near trivial extension to 1)\n\n\nI also think there's a 4, but I think the tradeoffs are a bit more\ncomplicated:\n\n4) For a few types of lwlock just knowing the tranche isn't\nsufficient. E.g. knowing whether it's one or different buffer mapping locks\nare being waited on is important to judge contention.\n\nFor wait events right now we use 1 byte for the class, 1 byte is unused and 2\nbytes are used for event specific information (the tranche in case of\nlwlocks).\n\nSeems like we could change the split to be a 4bit class and leave 28bit to the\nspecific wait event type? 
And in lwlocks case we could make something like 4\nbit class, 10 bit tranche, 20 bit sub-tranche?\n\n20 bit aren't enough to uniquely identify a lock for the larger tranches\n(mostly buffer locks, I think), but I think it'd still be enough to\ndisambiguate.\n\nThe hardest part would be to know how to identify individual locks. The\neasiest would probably be to just mask in a parts of the lwlock address\n(e.g. shift it right by INTALIGN, and then mask in the result into the\neventId). That seems a bit unsatisfying.\n\nWe could probably do a bit better: We could just store the information about\ntranche / offset within tranche at LWLockInitialize() time, instead of\ncomputing something just before waiting. While LWLock.tranche is only 16bits\nright now, the following two bytes are currently padding...\n\nThat'd allow us to have proper numerical identification for nearly all\ntranches, without needing to go back to the complexity of having tranches\nspecify base & stride.\n\nEven more API churn around lwlock initialization isn't desirable :(, but we\ncould just add a LWLockInitializeIdentified() or such.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Dec 2020 13:27:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Improving LWLock wait events"
},
{
"msg_contents": "On Mon, 21 Dec 2020 at 05:27, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> The current wait events are already pretty useful. But I think we could\n> make them more informative without adding real runtime overhead.\n>\n>\nAll 1-3 sound pretty sensible to me.\n\nI also think there's a 4, but I think the tradeoffs are a bit more\n> complicated:\n>\n\n> 4) For a few types of lwlock just knowing the tranche isn't\n> sufficient. E.g. knowing whether it's one or different buffer mapping locks\n> are being waited on is important to judge contention.\n>\n\nI've struggled with this quite a bit myself.\n\nIn particular, for tools that validate acquire-ordering safety it's\ndesirable to be able to identify a specific lock in a backend-independent\nway.\n\nThe hardest part would be to know how to identify individual locks. The\n> easiest would probably be to just mask in a parts of the lwlock address\n> (e.g. shift it right by INTALIGN, and then mask in the result into the\n> eventId). That seems a bit unsatisfying.\n>\n\nIt also won't work reliably for locks in dsm segments, since the lock can\nbe mapped to a different address in different backends.\n\nWe could probably do a bit better: We could just store the information about\n> tranche / offset within tranche at LWLockInitialize() time, instead of\n> computing something just before waiting. While LWLock.tranche is only\n> 16bits\n> right now, the following two bytes are currently padding...\n>\n> That'd allow us to have proper numerical identification for nearly all\n> tranches, without needing to go back to the complexity of having tranches\n> specify base & stride.\n>\n\nThat sounds appealing. 
It'd work for any lock in MainLWLockArray - all\nbuilt-in individual LWLocks, LWTRANCHE_BUFFER_MAPPING,\nLWTRANCHE_LOCK_MANAGER, LWTRANCHE_PREDICATE_LOCK_MANAGER, any lock\nallocated by RequestNamedLWLockTranche().\n\nSome of the other tranches allocate locks in contiguous fixed blocks or in\nways that would let them maintain a counter.\n\nWe'd need some kind of \"unknown\" placeholder value for LWLocks where that\ndoesn't make sense, though, like most locks allocated by callers that make\ntheir own LWLockNewTrancheId() call and locks in some of the built-in\ntranches not allocated in MainLWLockArray.\n\nSo I suggest retaining the current LWLockInitialize() and making it a\nwrapper for LWLockInitializeWithIndex() or similar. Use a 1-index and keep\n0 as unknown, or use 0-index and use (max-1) as unknown.",
"msg_date": "Wed, 23 Dec 2020 15:51:50 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving LWLock wait events"
},
{
"msg_contents": "On Wed, 23 Dec 2020 at 15:51, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n>\n> I've struggled with this quite a bit myself.\n>\n>\nBy the way, I sent in a patch to enhance the static tracepoints available\nfor LWLocks. See\nhttps://www.postgresql.org/message-id/CAGRY4nxJo+-HCC2i5H93ttSZ4gZO-FSddCwvkb-qAfQ1zdXd1w@mail.gmail.com\n.\n\nIt'd benefit significantly from the sort of changes you mentioned in #4.\nFor most purposes I've been able to just use the raw LWLock* but having a\nnice neat (tranche,index) value would be ideal.\n\nThe trace patch has helped me identify some excessively long LWLock waits\nin tools I work on. I'll share another of the systemtap scripts I used with\nit soon.",
"msg_date": "Wed, 23 Dec 2020 15:56:32 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving LWLock wait events"
}
] |
[
{
"msg_contents": "Hi,\nw.r.t. the patch,\n\n+select '[3]'::jsonb || '{}'::jsonb;\n+ ?column?\n+----------\n+ [3, {}]\n+(1 row)\n+\n+select '3'::jsonb || '[]'::jsonb;\n\nShould cases where the empty array precedes non-empty jsonb be added ?\n\nselect '[]'::jsonb || '3'::jsonb;\nselect '{}'::jsonb || '[3]'::jsonb;\n\nCheers",
"msg_date": "Sun, 20 Dec 2020 13:48:39 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird special case in jsonb_concat()"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile analyzing the issue James reported to us, I realized that if the\nschema option in the control file is specified and the schema doesn’t\nexist we create the schema on CREATE EXTENSION but the created schema\ndoesn’t refer to the extension. Due to this behavior, the schema\nremains even on DROP EXTENSION. You can see this behavior by using the\ntest_ext6 extension in src/test/module/test_extensions. In the control\nfile, it has the schema option:\n\n$ cat src/test/modules/test_extensions/test_ext6.control\ncomment = 'test_ext6'\ndefault_version = '1.0'\nrelocatable = false\nsuperuser = true\nschema = 'test_ext6'\n\nOn CREATE EXTENSION, the schema test_ext6 is created if not exist:\n\npostgres(1:692)=# create extension test_ext6 ;\nCREATE EXTENSION\n\npostgres(1:692)=# \\dn\n List of schemas\n Name | Owner\n-----------+----------\n public | masahiko\n test_ext6 | masahiko\n(2 rows)\n\nBut it isn't dropped on DROP EXTENSION:\n\npostgres(1:692)=# drop extension test_ext6 ;\nDROP EXTENSION\n\npostgres(1:692)=# \\dn\n List of schemas\n Name | Owner\n-----------+----------\n public | masahiko\n test_ext6 | masahiko\n(2 rows)\n\nIs it a bug? Since the created schema obviously depends on the\nextension when we created the schema specified in the schema option, I\nthink we might want to create the dependency so that DROP EXTENSION\ndrops the schema as well. I’ve attached the draft patch so that CREATE\nEXTENSION creates the dependency if it newly creates the schema.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 21 Dec 2020 16:02:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Dependency isn't created between extension and schema"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 04:02:29PM +0900, Masahiko Sawada wrote:\n> Is it a bug? Since the created schema obviously depends on the\n> extension when we created the schema specified in the schema option, I\n> think we might want to create the dependency so that DROP EXTENSION\n> drops the schema as well. I’ve attached the draft patch so that CREATE\n> EXTENSION creates the dependency if it newly creates the schema.\n\nFWIW, I recall that the \"soft\" behavior that exists now is wanted, as\nit is more flexible for DROP EXTENSION: what you are suggesting here\nhas the disadvantage to make DROP EXTENSION fail if any non-extension\nobject has been created on this schema, so this could be disruptive\nwhen it comes to some upgrade scenarios.\n\n <term><replaceable class=\"parameter\">schema_name</replaceable></term>\n <listitem>\n\t<para>\n\t The name of the schema in which to install the extension's\n objects, given that the extension allows its contents to be\n relocated. The named schema must already exist.\nWhile on it.. The docs are incorrect here. As you say,\nCreateExtensionInternal() may internally create a schema.\n--\nMichael",
"msg_date": "Mon, 21 Dec 2020 16:58:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Dependency isn't created between extension and schema"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 21, 2020 at 04:02:29PM +0900, Masahiko Sawada wrote:\n> > Is it a bug? Since the created schema obviously depends on the\n> > extension when we created the schema specified in the schema option, I\n> > think we might want to create the dependency so that DROP EXTENSION\n> > drops the schema as well. I’ve attached the draft patch so that CREATE\n> > EXTENSION creates the dependency if it newly creates the schema.\n>\n> FWIW, I recall that the \"soft\" behavior that exists now is wanted, as\n> it is more flexible for DROP EXTENSION: what you are suggesting here\n> has the disadvantage to make DROP EXTENSION fail if any non-extension\n> object has been created on this schema, so this could be disruptive\n> when it comes to some upgrade scenarios.\n\nThat's potentially an issue even for a schema created explicitly by\nthe extension's install script, since anyone can create an object\nwithin that schema at any time.\n\nIt seems that the only consistent behavior choice would be to mark the\ndependency when Postgres is creating the extension automatically but\nnot when the schema already exists.\n\n> <term><replaceable class=\"parameter\">schema_name</replaceable></term>\n> <listitem>\n> <para>\n> The name of the schema in which to install the extension's\n> objects, given that the extension allows its contents to be\n> relocated. The named schema must already exist.\n> While on it.. The docs are incorrect here. As you say,\n> CreateExtensionInternal() may internally create a schema.\n\nAlternatively the behavior could be updated to match the docs, since\nthat seems like reasonable intent.\n\nJames\n\n\n",
"msg_date": "Mon, 21 Dec 2020 08:29:42 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dependency isn't created between extension and schema"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Mon, Dec 21, 2020 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Mon, Dec 21, 2020 at 04:02:29PM +0900, Masahiko Sawada wrote:\n>>> Is it a bug? Since the created schema obviously depends on the\n>>> extension when we created the schema specified in the schema option, I\n>>> think we might want to create the dependency so that DROP EXTENSION\n>>> drops the schema as well.\n\n>> FWIW, I recall that the \"soft\" behavior that exists now is wanted, as\n>> it is more flexible for DROP EXTENSION: what you are suggesting here\n>> has the disadvantage to make DROP EXTENSION fail if any non-extension\n>> object has been created on this schema, so this could be disruptive\n>> when it comes to some upgrade scenarios.\n\nI think it absolutely is intentional. For example, if several extensions\nall list \"schema1\" in their control files, and you install them all, you\nwould not want dropping the first-created one to force dropping the rest.\nI do not really see any problem here that's worth creating such hazards\nto fix.\n\n(At least in current usage, I think that control files probably always\nlist common schemas not per-extension schemas, so that this scenario\nwould be the norm not the exception.)\n\n> Alternatively the behavior could be updated to match the docs, since\n> that seems like reasonable intent.\n\nThat documentation is talking about the SCHEMA option in CREATE EXTENSION,\nwhich is an entirely different matter from the control-file option.\nA control-file entry is not going to know anything about the specific\ninstallation it's being installed in, while the user issuing CREATE\nEXTENSION presumably has local knowledge; so I don't see any strong\nargument that the two cases must be treated alike.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Dec 2020 11:03:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dependency isn't created between extension and schema"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 1:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > On Mon, Dec 21, 2020 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On Mon, Dec 21, 2020 at 04:02:29PM +0900, Masahiko Sawada wrote:\n> >>> Is it a bug? Since the created schema obviously depends on the\n> >>> extension when we created the schema specified in the schema option, I\n> >>> think we might want to create the dependency so that DROP EXTENSION\n> >>> drops the schema as well.\n>\n> >> FWIW, I recall that the \"soft\" behavior that exists now is wanted, as\n> >> it is more flexible for DROP EXTENSION: what you are suggesting here\n> >> has the disadvantage to make DROP EXTENSION fail if any non-extension\n> >> object has been created on this schema, so this could be disruptive\n> >> when it comes to some upgrade scenarios.\n>\n> I think it absolutely is intentional. For example, if several extensions\n> all list \"schema1\" in their control files, and you install them all, you\n> would not want dropping the first-created one to force dropping the rest.\n> I do not really see any problem here that's worth creating such hazards\n> to fix.\n\nThank you for the comments!\n\nI understand that it is intentional behavior and the downside of my\nidea. But what is the difference between the schema created by\nspecifying the schema option in the control file and by CREATE SCHEMA\nin the install script? Extensions might create the same schema\n\"schema1\" in their install script. In this case, dropping the first\none force dropping the rest. Looking at some extensions in the world.\nsome extensions use the schema option whereas some use the install\nscript. I think it’s reasonable there are two ways to create the\nextension’s schema with different dependencies but I think it’s better\nto be documented. 
It looked like a non-intuitive behavior when I saw\nit for the first time.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 31 Dec 2020 20:05:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dependency isn't created between extension and schema"
}
] |
[
{
"msg_contents": "Hi\n\nsome Orafce's user reported problems with pg_upgrade. I checked this issue\nand it looks like pg_dump problem:\n\n\npg_restore: creating FUNCTION \"public.nvarchar2(\"public\".\"nvarchar2\",\ninteger, boolean)\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 612; 1255 33206 FUNCTION\nnvarchar2(\"public\".\"nvarchar2\", integer, boolean) pavel\npg_restore: error: could not execute query: ERROR: function\npublic.nvarchar2_transform(internal) does not exist\nCommand was: CREATE FUNCTION \"public\".\"nvarchar2\"(\"public\".\"nvarchar2\",\ninteger, boolean) RETURNS \"public\".\"nvarchar2\"\n LANGUAGE \"c\" IMMUTABLE STRICT SUPPORT \"public\".\"nvarchar2_transform\"\n AS '$libdir/orafce', 'nvarchar2';\n\n\n\n\n--\n-- TOC entry 612 (class 1255 OID 33206)\n-- Name: nvarchar2(\"public\".\"nvarchar2\", integer, boolean); Type: FUNCTION;\nSchema: public; Owner: pavel\n--\n\nCREATE FUNCTION \"public\".\"nvarchar2\"(\"public\".\"nvarchar2\", integer,\nboolean) RETURNS \"public\".\"nvarchar2\"\n LANGUAGE \"c\" IMMUTABLE STRICT SUPPORT \"public\".\"nvarchar2_transform\"\n AS '$libdir/orafce', 'nvarchar2';\n\n-- For binary upgrade, handle extension membership the hard way\nALTER EXTENSION \"orafce\" ADD FUNCTION\n\"public\".\"nvarchar2\"(\"public\".\"nvarchar2\", integer, boolean);\n\n\nALTER FUNCTION \"public\".\"nvarchar2\"(\"public\".\"nvarchar2\", integer, boolean)\nOWNER TO \"pavel\";\n\n--\n-- TOC entry 607 (class 1255 OID 33201)\n-- Name: nvarchar2_transform(\"internal\"); Type: FUNCTION; Schema: public;\nOwner: pavel\n--\n\nCREATE FUNCTION \"public\".\"nvarchar2_transform\"(\"internal\") RETURNS\n\"internal\"\n LANGUAGE \"c\" IMMUTABLE STRICT\n AS '$libdir/orafce', 'orafce_varchar_transform';\n\n-- For binary upgrade, handle extension membership the hard way\nALTER EXTENSION \"orafce\" ADD FUNCTION\n\"public\".\"nvarchar2_transform\"(\"internal\");\n\n\nALTER FUNCTION \"public\".\"nvarchar2_transform\"(\"internal\") OWNER TO 
"pavel\";\n\nthe supporting function should be dumped first before function where\nsupporting function is used.\n\nRegards\n\nPavel",
"msg_date": "Mon, 21 Dec 2020 10:11:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "bad dependency in pg_dump output related to support function breaks\n binary upgrade"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> some Orafce's user reported problems with pg_upgrade. I checked this issue\n> and it looks like pg_dump problem:\n> ...\n> the supporting function should be dumped first before function where\n> supporting function is used.\n\nI tried to reproduce this and could not. It should work, since\nProcedureCreate definitely makes a dependency on the support function.\nCan you make a self-contained test case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Dec 2020 11:23:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bad dependency in pg_dump output related to support function\n breaks binary upgrade"
},
{
"msg_contents": "po 21. 12. 2020 v 17:23 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > some Orafce's user reported problems with pg_upgrade. I checked this\n> issue\n> > and it looks like pg_dump problem:\n> > ...\n> > the supporting function should be dumped first before function where\n> > supporting function is used.\n>\n> I tried to reproduce this and could not. It should work, since\n> ProcedureCreate definitely makes a dependency on the support function.\n> Can you make a self-contained test case?\n>\n\nAfter some deeper investigation I found an old bug in Orafce :-/. I am\nsorry for the noise.\n\nThis old bug is related to the introduction of the varchar alias types\nnvarchar2 and varchar2. At that time the \"in\" function could use the\nprotransform column, but there was no way to set this column\nexternally, so Orafce used a dirty catalog update. The value was correct, but\nthe new dependency was not recorded. Originally this was not a problem,\nbecause the transform function was built in. But a new issue appeared in\nPostgres 12 when these functions were renamed. I fixed that issue by\nintroducing my own wrapping function - but without the dependency I broke\nthe binary upgrade.\n\nOn Postgres 12 and higher I can use ALTER FUNCTION ... SUPPORT and all\nworks well. On older versions I have to hack pg_depend, but that works\ntoo.\n\nAgain, I am sorry for the false alarm.\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>",
"msg_date": "Mon, 21 Dec 2020 19:26:14 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: bad dependency in pg_dump output related to support function\n breaks binary upgrade"
}
] |
[
{
"msg_contents": "Backends reflect \"GRANT role_name\" changes rather quickly, due to a syscache\ninvalidation callback. Let's register an additional callback to reflect\n\"ALTER ROLE ... [NO]INHERIT\" with equal speed. I propose to back-patch this.\nWhile pg_authid changes may be more frequent than pg_auth_members changes, I\nexpect neither is frequent enough to worry about the resulting acl.c cache\nmiss rate.\n\npg_authid changes don't affect cached_membership_roles, so I could have\ninvalidated cached_privs_roles only. That felt like needless complexity. I\nexpect cached_privs_role gets the bulk of traffic, since SELECT, INSERT,\nUPDATE and DELETE use it. cached_membership_roles pertains to DDL and such.",
"msg_date": "Mon, 21 Dec 2020 01:50:28 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Invalidate acl.c caches for pg_authid.rolinherit changes"
},
{
"msg_contents": "On 12/21/20, 1:51 AM, \"Noah Misch\" <noah@leadboat.com> wrote:\r\n> Backends reflect \"GRANT role_name\" changes rather quickly, due to a syscache\r\n> invalidation callback. Let's register an additional callback to reflect\r\n> \"ALTER ROLE ... [NO]INHERIT\" with equal speed. I propose to back-patch this.\r\n> While pg_authid changes may be more frequent than pg_auth_members changes, I\r\n> expect neither is frequent enough to worry about the resulting acl.c cache\r\n> miss rate.\r\n\r\n+1 to back-patching.\r\n\r\n> pg_authid changes don't affect cached_membership_roles, so I could have\r\n> invalidated cached_privs_roles only. That felt like needless complexity. I\r\n> expect cached_privs_role gets the bulk of traffic, since SELECT, INSERT,\r\n> UPDATE and DELETE use it. cached_membership_roles pertains to DDL and such.\r\n\r\nThe patch looks reasonable to me.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 21 Dec 2020 19:01:51 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalidate acl.c caches for pg_authid.rolinherit changes"
}
] |
[
{
"msg_contents": "ExecBuildAggTrans has this sequence:\n\n if (pertrans->deserialfn.fn_strict)\n scratch.opcode = EEOP_AGG_STRICT_DESERIALIZE;\n else\n scratch.opcode = EEOP_AGG_DESERIALIZE;\n\n scratch.d.agg_deserialize.fcinfo_data = ds_fcinfo;\n scratch.d.agg_deserialize.jumpnull = -1; /* adjust later */\n scratch.resvalue = &trans_fcinfo->args[argno + 1].value;\n scratch.resnull = &trans_fcinfo->args[argno + 1].isnull;\n\n ExprEvalPushStep(state, &scratch);\n adjust_bailout = lappend_int(adjust_bailout,\n state->steps_len - 1);\n\nbut later on, where adjust_bailout is processed, we see this (note that\nEEOP_AGG_DESERIALIZE is not checked for):\n\n if (as->opcode == EEOP_JUMP_IF_NOT_TRUE)\n {\n Assert(as->d.jump.jumpdone == -1);\n as->d.jump.jumpdone = state->steps_len;\n }\n else if (as->opcode == EEOP_AGG_STRICT_INPUT_CHECK_ARGS ||\n as->opcode == EEOP_AGG_STRICT_INPUT_CHECK_NULLS)\n {\n Assert(as->d.agg_strict_input_check.jumpnull == -1);\n as->d.agg_strict_input_check.jumpnull = state->steps_len;\n }\n else if (as->opcode == EEOP_AGG_STRICT_DESERIALIZE)\n {\n Assert(as->d.agg_deserialize.jumpnull == -1);\n as->d.agg_deserialize.jumpnull = state->steps_len;\n }\n else\n Assert(false);\n\nSeems clear to me that the assertion is wrong, and that even though a\nnon-strict DESERIALIZE opcode might not need jumpnull filled in, the\ncode added it to adjust_bailout anyway, so we crash out here on an\nasserts build. This may have been overlooked because all the builtin\ndeserialize functions appear to be strict, but postgis has at least one\nnon-strict one and can hit this.\n\nThis could be fixed either by fixing the assert, or by not adding\nnon-strict deserialize opcodes to adjust_bailout; anyone have any\npreferences?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 21 Dec 2020 12:02:16 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Incorrect assertion in ExecBuildAggTrans ?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe following sequence of statements:\n\nCREATE SCHEMA testschema;\nCREATE TABLE testschema.part (a int) PARTITION BY LIST (a);\nSET default_tablespace TO pg_global;\nALTER TABLE testschema.part SET TABLESPACE pg_default;\nCREATE TABLE testschema.part_78 PARTITION OF testschema.part FOR VALUES \nIN (7, 8) PARTITION BY LIST (a);\n\nproduces an error\nERROR: only shared relations can be placed in pg_global tablespace\nwhen executed in a database with the default tablespace, but produces no \nerror in a database with an assigned tablespace:\n\ncreate tablespace my_tblspc location '/tmp/tblspc';\ncreate database test;\nalter database test set tablespace my_tblspc;\n\n\nThere is the following code in tablecmds.c:\n\n    else if (stmt->partbound)\n    {\n        /*\n         * For partitions, when no other tablespace is specified, we \ndefault\n         * the tablespace to the parent partitioned table's.\n         */\n        Assert(list_length(inheritOids) == 1);\n        tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n    }\n\nIn the first case get_rel_tablespace returns 0 (because the parent table has no \nexplicit tablespace)\nand in the second: pg_default.\n\n\nAlso I am confused that the following statement is rejected:\n\nSET default_tablespace TO pg_default;\nCREATE TABLE testschema.part (a int) PARTITION BY LIST (a);\nERROR: cannot specify default tablespace for partitioned relations\n\nbut it is still possible to set the tablespace of the parent table to pg_default \nusing an ALTER TABLE ... SET TABLESPACE command:\n\nRESET default_tablespace;\nCREATE TABLE testschema.part (a int) PARTITION BY LIST (a);\nALTER TABLE testschema.part SET TABLESPACE pg_default;\n\nBut ... 
it has no effect: testschema.part is still assumed to belong to \nthe default tablespace,\nbecause of the following code in tablecmds.c:\n\n\n    /*\n     * No work if no change in tablespace.\n     */\n    oldTableSpace = rel->rd_rel->reltablespace;\n    if (newTableSpace == oldTableSpace ||\n        (newTableSpace == MyDatabaseTableSpace && oldTableSpace == 0))\n    {\n        InvokeObjectPostAlterHook(RelationRelationId,\n                                  RelationGetRelid(rel), 0);\n\n        relation_close(rel, NoLock);\n        return;\n    }\n\n\nI found a thread discussing a similar problem:\nhttps://www.postgresql.org/message-id/flat/BY5PR18MB3170E372542F34694E630B12F10C0%40BY5PR18MB3170.namprd18.prod.outlook.com\n\nand it looks like the decision was to change nothing and leave everything \nas it is.\n\n From my point of view the source of the problem is that pg_default \n(oid=1663) is treated as the database default tablespace.\npg_default stands for a concrete tablespace, and it is not clear why it is \ntreated differently from any other tablespace.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 21 Dec 2020 18:22:23 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Inconsistent/confusing handling of tablespaces for partitioned tables"
}
] |
[
{
"msg_contents": "I found that our largest tables are 40% smaller and 20% faster to pipe\npg_dump -Fc -Z0 |zstd relative to native zlib\n\nSo I wondered how much better when integrated in pg_dump, and found that\nthere's some additional improvement, but a big disadvantage of piping through\nzstd is that it's not identified as a PGDMP file, and, /usr/bin/file on centos7\nfails to even identify zstd by its magic number..\n\nI looked for previous discussion about alternate compressions, but didn't find\nanything for pg_dump.\n\n-- \nJustin",
"msg_date": "Mon, 21 Dec 2020 13:49:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "zstd compression for pg_dump"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I found that our largest tables are 40% smaller and 20% faster to pipe\n> pg_dump -Fc -Z0 |zstd relative to native zlib\n\nThe patch might be a tad smaller if you hadn't included a core file in it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Dec 2020 15:02:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 03:02:40PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I found that our largest tables are 40% smaller and 20% faster to pipe\n> > pg_dump -Fc -Z0 |zstd relative to native zlib\n> \n> The patch might be a tad smaller if you hadn't included a core file in it.\n\nAbout 89% smaller.\n\nThis also fixes the extension (.zst)\nAnd fixes zlib default compression.\nAnd a bunch of cleanup.\n\n-- \nJustin",
"msg_date": "Mon, 21 Dec 2020 20:32:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 01:49:24PM -0600, Justin Pryzby wrote:\n> a big disadvantage of piping through zstd is that it's not identified as a\n> PGDMP file, and, /usr/bin/file on centos7 fails to even identify zstd by its\n> magic number..\n\nOther reasons are that pg_dump |zstd >output.zst loses the exit status of\npg_dump, and that it's not \"transparent\" (one needs to type\n\"zstd -dq |pg_restore\").\n\nOn Mon, Dec 21, 2020 at 08:32:35PM -0600, Justin Pryzby wrote:\n> On Mon, Dec 21, 2020 at 03:02:40PM -0500, Tom Lane wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > I found that our largest tables are 40% smaller and 20% faster to pipe\n> > > pg_dump -Fc -Z0 |zstd relative to native zlib\n> > \n> > The patch might be a tad smaller if you hadn't included a core file in it.\n> \n> About 89% smaller.\n> \n> This also fixes the extension (.zst)\n> And fixes zlib default compression.\n> And a bunch of cleanup.\n\nI rebased so the \"typedef struct compression\" patch is first and zstd on top of\nthat (say, in case someone wants to bikeshed about which compression algorithm\nto support). And made a central struct with all the compression-specific info\nto further isolate the compress-specific changes.\n\nAnd handle compression of \"plain\" archive format.\nAnd fix compilation for MSVC and make --without-zstd the default.\n\nAnd fix cfgets() (which I think is actually unused code for the code paths for\ncompressed FP).\n\nAnd add fix for pre-existing problem: ftello() on unseekable input.\n\nI also started a patch to allow compression of \"tar\" format, but I didn't\ninclude that here yet.\n\nNote, there's currently several \"compression\" patches in CF app. 
This patch\nseems to be independent of the others, but probably shouldn't be totally\nuncoordinated (like adding lz4 in one and zstd in another might be poor\nexecution).\n\nhttps://commitfest.postgresql.org/31/2897/\n - Faster pglz compression\nhttps://commitfest.postgresql.org/31/2813/\n - custom compression methods for toast\nhttps://commitfest.postgresql.org/31/2773/\n - libpq compression\n\n-- \nJustin",
"msg_date": "Sun, 3 Jan 2021 20:53:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\n> 4 янв. 2021 г., в 07:53, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> Note, there's currently several \"compression\" patches in CF app. This patch\n> seems to be independent of the others, but probably shouldn't be totally\n> uncoordinated (like adding lz4 in one and ztsd in another might be poor\n> execution).\n> \n> https://commitfest.postgresql.org/31/2897/\n> - Faster pglz compression\n> https://commitfest.postgresql.org/31/2813/\n> - custom compression methods for toast\n> https://commitfest.postgresql.org/31/2773/\n> - libpq compression\n\nI think that's a downside of our development system: patch authors do not want to create dependencies on other patches.\nI'd say that both lz4 and zstd should be supported in TOAST, FPIs, libpq, and pg_dump. As to pglz - I think we should not proliferate it any further.\nLz4 and Zstd actually represent different tradeoffs. Basically, lz4 is so CPU-cheap that one should use it whenever they write to disk or a network interface. Zstd represents an actual bandwidth\\CPU tradeoff.\nAlso, none of the patchsets touches an important possibility - a preexisting dictionary could radically improve compression of small data (even in pglz).\n\nSome minor notes on the patchset in this thread.\n\nLibpq compression encountered some problems with memory consumption which required some extra config efforts. Did you measure memory usage for this patchset?\n\n[PATCH 03/20] Support multiple compression algs/levels/opts..\nabtracts -> abstracts\nenum CompressionAlgorithm actually represents the very same thing as in \"Custom compression methods\"\n\nDaniil, is the levels definition compatible with the libpq compression patch?\n+typedef struct Compress {\n+\tCompressionAlgorithm\talg;\n+\tint\t\t\tlevel;\n+\t/* Is a nondefault level set ? This is useful since different compression\n+\t * methods have different \"default\" levels. For now we assume the levels\n+\t * are all integer, though.\n+\t*/\n+\tbool\t\tlevel_set;\n+} Compress;\n\n[PATCH 04/20] struct compressLibs\nI think this directive would be correct.\n+// #ifdef HAVE_LIBZ?\n\nHere's an extra comment:\n// && errno == ENOENT)\n\n\n[PATCH 06/20] pg_dump: zstd compression\n\nI'd propose to build with Zstd by default. It seems other patches do it this way. Though, there are possible downsides.\n\n\nThanks for working on this! We will have very IO-efficient Postgres :)\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Jan 2021 11:04:57 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Mon, Jan 04, 2021 at 11:04:57AM +0500, Andrey Borodin wrote:\n> > 4 янв. 2021 г., в 07:53, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> > Note, there's currently several \"compression\" patches in CF app. This patch\n> > seems to be independent of the others, but probably shouldn't be totally\n> > uncoordinated (like adding lz4 in one and ztsd in another might be poor\n> > execution).\n> > \n> > https://commitfest.postgresql.org/31/2897/\n> > - Faster pglz compression\n> > https://commitfest.postgresql.org/31/2813/\n> > - custom compression methods for toast\n> > https://commitfest.postgresql.org/31/2773/\n> > - libpq compression\n> \n> I think that's downside of our development system: patch authors do not want to create dependencies on other patches.\n\nI think in these cases, someone who notices common/overlapping patches should\nsuggest that the authors review each other's work. In some cases, I think it's\nappropriate to come up with a \"shared\" preliminary patch(es), which both (all)\npatch authors can include as 0001 until its finalized and merged. That might\nbe true for some things like the tableam work, or the two \"online checksum\"\npatches.\n\n> I'd say that both lz4 and zstd should be supported in TOAST, FPIs, libpq, and pg_dump. As to pglz - I think we should not proliferate it any further.\n\npg_basebackup came up as another use on another thread, I think related to\nlibpq protocol compression.\n\n> Libpq compression encountered some problems with memory consumption which\n> required some extra config efforts. Did you measure memory usage for this\n> patchset?\n\nRAM use is not significantly different from zlib, except that zstd --long adds\nmore memory.\n\n$ command time -v pg_dump -d ts -t ... -Fc -Z0 |wc -c\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:28.77\n Maximum resident set size (kbytes): 40504\n\t1397288924 # no compression: 1400MB\n\n$ command time -v pg_dump -d ts -t ... 
-Fc |wc -c\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:37.17\n Maximum resident set size (kbytes): 40504\n\t132932415 # default (zlib) compression: 132 MB\n\n$ command time -v ./pg_dump -d ts -t ... -Fc |wc -c\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:29.28\n Maximum resident set size (kbytes): 40568\n\t86048139 # zstd: 86MB\n\n$ command time -v ./pg_dump -d ts -t ... -Fc -Z 'alg=zstd opt=zstdlong' |wc -c\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:30.49\n Maximum resident set size (kbytes): 180332\n\t72202937 # zstd long: 180MB\n\n> [PATCH 04/20] struct compressLibs\n> I think this directive would be correct.\n> +// #ifdef HAVE_LIBZ?\n\nI'm not sure .. I'm thinking of making the COMPR_ALG_* always defined, and then\nfail later if an operation is unsupported. There's an excessive number of\n#ifdefs already, so the early commits are intended to minimize as far as\npossible what's needed for each additional compression\nalgorithm(lib/method/whatever it's called). I haven't tested much with\npg_restore of files with unsupported compression libs.\n\n> [PATCH 06/20] pg_dump: zstd compression\n> I'd propose to build with Zstd by default. It seems other patches do it this way. Though, I there are possible downsides.\n\nYes...but the cfbot turns red if the patch require zstd, so it defaults to\noff until it's included in the build environments (but for now, the main patch\nisn't being tested).\n\nThanks for looking.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 4 Jan 2021 01:06:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "Hi!\n\n> On Jan 4, 2021, at 11:04 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Daniil, is levels definition compatible with libpq compression patch?\n> +typedef struct Compress {\n> +\tCompressionAlgorithm\talg;\n> +\tint\t\t\tlevel;\n> +\t/* Is a nondefault level set ? This is useful since different compression\n> +\t * methods have different \"default\" levels. For now we assume the levels\n> +\t * are all integer, though.\n> +\t*/\n> +\tbool\t\tlevel_set;\n> +} Compress;\n\nSimilarly to this patch, it is also possible to define the compression level at the initialization stage in the libpq compression patch.\n\nThe difference is that in the libpq compression patch the default compression level is always 1, independent of the chosen compression algorithm.\n\n> On Jan 4, 2021, at 11:04 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Libpq compression encountered some problems with memory consumption which required some extra config efforts.\n\n\n> On Jan 4, 2021, at 12:06 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> RAM use is not significantly different from zlib, except that zstd --long adds\n> more memory.\n\nRegarding ZSTD memory usage:\n\nRecently I’ve made a couple of tests of libpq compression with different ZLIB/ZSTD compression levels which showed that compressing/decompressing ZSTD w/ high compression levels \nrequires allocating more virtual (Committed_AS) memory, which may be exploited by malicious clients:\n\nhttps://www.postgresql.org/message-id/62527092-16BD-479F-B503-FA527AF3B0C2%40yandex-team.ru\n\nWe can avoid high memory usage by limiting the max window size to 8MB. This should effectively disable the support of compression levels above 19:\nhttps://www.postgresql.org/message-id/6A45DFAA-1682-4EF2-B835-C5F46615EC49%40yandex-team.ru\n\nSo maybe it is worthwhile to use similar restrictions in this patch.\n\n—\nDaniil Zakhlystov",
"msg_date": "Mon, 4 Jan 2021 15:17:50 +0500",
"msg_from": "Daniil Zakhlystov <usernamedt@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 1/4/21 3:53 AM, Justin Pryzby wrote:\n>> About 89% smaller.\n\nDid a quick code review of the patch. I have not taken it for a spin \nyet and there are parts of the code I have not read yet.\n\n## Is there any reason for this diff?\n\n- cfp *fp = pg_malloc(sizeof(cfp));\n+ cfp *fp = pg_malloc0(sizeof(cfp));\n\n## Since we now have multiple returns in cfopen() I am not sure that \nsetting fp to NULL is still clearer than just returning NULL.\n\n## I do not like that this pretends to support r+, w+ and a+ but does \nnot actually do so since it does not create an input stream in those cases.\n\nelse if (mode[0] == 'w' || mode[0] == 'a' ||\n\tstrchr(mode, '+') != NULL)\n[...]\nelse if (strchr(mode, 'r'))\n\n## Wouldn't cfread(), cfwrite(), cfgetc(), cfgets(), cfclose() and \ncfeof() be cleaner with switch statements similar to cfopen()?\n\n## \"/* Should be called \"method\" or \"library\" ? */\"\n\nMaybe, but personally I think algorithm is fine too.\n\n## \"Is a nondefault level set ?\"\n\nThe PostgreSQL project does not use space before question mark (at least \nnot in English).\n\n## Why isn't level_set just a local variable in parse_compression()? It \ndoes not seem to be used elsewhere.\n\n## Shouldn't we call the Compression variable in OpenArchive() \nnocompress to match the naming convention in other places?\n\nAnd in general I wonder if we should not write \"nocompression = \n{COMPR_ALG_NONE}\" rather than \"nocompression = {0}\".\n\n## Why not use const on the pointers to Compression for functions like \ncfopen()? As far as I can see several of them could be const.\n\n## Shouldn't \"AH->compression.alg = Z_DEFAULT_COMPRESSION\" in ReadHead() \nbe \"AH->compression.alg = COMPR_ALG_DEFAULT\"?\n\nAdditionally I am not convinced that returning COMPR_ALG_DEFAULT will \neven work but I have not had the time to test that theory yet. And in \ngeneral I am quite sceptical that we really need COMPR_ALG_DEFAULT.\n\n## Some whitespace issues\n\nAdd spaces around plus in \"atoi(1+eq)\" and \"pg_log_error(\"unknown \ncompression algorithm: %s\", 1+eq)\".\n\nAdd spaces around plus in parse_compression(), e.g. in \"strlen(1+eq)\".\n\n## Shouldn't hasSuffix() take the current compression algorithm as a \nparameter? Or alternatively look up which compression algorithm to use \nfrom the suffix?\n\n## Why support multiple ways to write zlib on the command line? I do not \nsee any advantage of being able to write it as libz.\n\n## I feel renaming SaveOutput() to GetOutput() would make it more clear \nwhat it does now that you have changed the return type.\n\n## You have accidentally committed \"-runstatedir\" in configure. I have \nno idea why we do not have it (maybe it is something Debian specific) \nbut even if we are going to add it it should not be in this patch. Same \nwith the parenthesis changes to LARGE_OFF_T.\n\n## This is probably out of scope of your patch but I am not a fan of the \nfallback logic in cfopen_read(). I feel ideally we should always know if \nthere is a suffix or not and not try to guess file names and do \npointless syscalls.\n\n## COMPR_ALG_DEFAULT looks like it would error out for archive and \ndirectory if someone has neither zlib nor zstandard. It feels like it \nshould default to uncompressed if we have neither. Or at least give a \nbetter error message.\n\n> Note, there's currently several \"compression\" patches in CF app. This patch\n> seems to be independent of the others, but probably shouldn't be totally\n> uncoordinated (like adding lz4 in one and ztsd in another might be poor\n> execution).\n\nA thought here is that maybe we want to use the same values for the \nenums in all patches. Especially if we write the numeric value to pg \ndump files.\n\nAndreas\n\n\n\n",
"msg_date": "Sun, 10 Jan 2021 22:06:25 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 1/4/21 11:17 AM, Daniil Zakhlystov wrote:\n> Hi!\n> \n>> On Jan 4, 2021, at 11:04 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> Daniil, is levels definition compatible with libpq compression patch?\n>> +typedef struct Compress {\n>> +\tCompressionAlgorithm\talg;\n>> +\tint\t\t\tlevel;\n>> +\t/* Is a nondefault level set ? This is useful since different compression\n>> +\t * methods have different \"default\" levels. For now we assume the levels\n>> +\t * are all integer, though.\n>> +\t*/\n>> +\tbool\t\tlevel_set;\n>> +} Compress;\n> \n> Similarly to this patch, it is also possible to define the compression level at the initialization stage in libpq compression patch.\n> \n> The difference is that in libpq compression patch the default compression level always equal to 1, independently of the chosen compression algorithm.\n> \n>> On Jan 4, 2021, at 11:04 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> Libpq compression encountered some problems with memory consumption which required some extra config efforts.\n> \n> \n>> On Jan 4, 2021, at 12:06 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> RAM use is not significantly different from zlib, except that zstd --long adds\n>> more memory.\n> \n> Regarding ZSTD memory usage:\n> \n> Recently I’ve made a couple of tests of libpq compression with different ZLIB/ZSTD compression levels which shown that compressing/decompressing ZSTD w/ high compression levels \n> require to allocate more virtual (Commited_AS) memory, which may be exploited by malicious clients:\n> \n> https://www.postgresql.org/message-id/62527092-16BD-479F-B503-FA527AF3B0C2%40yandex-team.ru\n> \n> We can avoid high memory usage by limiting the max window size to 8MB. 
This should effectively disable the support of compression levels above 19:\n> https://www.postgresql.org/message-id/6A45DFAA-1682-4EF2-B835-C5F46615EC49%40yandex-team.ru\n> \n> So maybe it is worthwhile to use similar restrictions in this patch.\n>\n\nI think there's a big difference between those two patches. In the libpq\ncase, the danger is that the client requests the server to compress the\ndata in a way that requires a lot of memory. I.e. the memory is consumed\non the server.\n\nWith this pg_dump patch, the compression is done by the pg_dump process,\nnot the server. So if the attacker configures the compression in a way\nthat requires a lot of memory, so what? He'll just allocate memory on\nthe client machine, where he could also just run a custom binary that\ndoes a huge malloc().\n\nSo I don't think we need to worry about this too much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Mar 2021 23:31:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 1/3/21 9:53 PM, Justin Pryzby wrote:\n\n> I rebased so the \"typedef struct compression\" patch is first and zstd on top of\n> that (say, in case someone wants to bikeshed about which compression algorithm\n> to support). And made a central struct with all the compression-specific info\n> to further isolate the compress-specific changes.\n\nIt has been a few months since there was a new patch and the current one \nno longer applies, so marking Returned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:19:30 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
}
] |
[
{
"msg_contents": "Up to now, if you shut down the database with \"pg_ctl stop -m immediate\"\nthen clients get a scary message claiming that something has crashed,\nbecause backends can't tell whether the SIGQUIT they got was sent for\na crash-and-restart situation or because of an immediate-stop command.\n\nThis isn't great from a fit-and-finish perspective, and it occurs to me\nthat it's really easy to do better: the postmaster can stick a flag\ninto shared memory explaining the reason for SIGQUIT. While we don't\nlike the postmaster touching shared memory, there doesn't seem to be\nany need for interlocking or anything like that, so there is no risk\ninvolved that's greater than those already taken by pmsignal.c.\n\nSo, here's a very simple proposed patch. Some issues for possible\nbikeshedding:\n\n* Up to now, pmsignal.c has only been for child-to-postmaster\ncommunication, so maybe there is some better place to put the\nsupport code. I can't think of one though.\n\n* I chose to report the same message for immediate shutdown as we\nalready use for SIGTERM (fast shutdown or pg_terminate_backend()).\nShould it be different, and if so what?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 21 Dec 2020 16:43:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Better client reporting for \"immediate stop\" shutdowns"
},
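The mechanism Tom proposes above — a flag the postmaster writes into shared memory just before sending SIGQUIT, which backends then consult from their signal handler — can be sketched roughly as below. The enum, struct, and function names are illustrative stand-ins modeled on pmsignal.c's conventions rather than taken verbatim from the patch, and the stub struct stands in for state that would really live in shared memory:

```c
#include <assert.h>

/*
 * Sketch only: the postmaster records why it is about to send SIGQUIT
 * in a shared-memory field, and child processes read it back when
 * handling the signal.  Names are hypothetical, modeled on pmsignal.c.
 */
typedef enum QuitSignalReason
{
	PMQUIT_NOT_SENT = 0,		/* postmaster has not sent SIGQUIT */
	PMQUIT_FOR_CRASH,			/* some other backend crashed */
	PMQUIT_FOR_STOP				/* immediate-stop command */
} QuitSignalReason;

typedef struct PMSignalData
{
	/*
	 * No interlocking needed: only the postmaster writes this field,
	 * just before signalling its children, and children only read it.
	 */
	QuitSignalReason sigquit_reason;
} PMSignalData;

static PMSignalData stub;			/* would be allocated in shared memory */
static PMSignalData *PMSignalState = &stub;

/* Postmaster side: record the reason before sending SIGQUIT */
void
SetQuitSignalReason(QuitSignalReason reason)
{
	PMSignalState->sigquit_reason = reason;
}

/* Child side: consult the reason from the SIGQUIT handler */
QuitSignalReason
GetQuitSignalReason(void)
{
	return PMSignalState->sigquit_reason;
}
```

Because the postmaster is the only writer and the store is a single enum-sized value, a plain read on the child side is no riskier than the existing pmsignal.c traffic, which is the point made in the proposal.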
{
"msg_contents": "On Tue, Dec 22, 2020 at 3:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Up to now, if you shut down the database with \"pg_ctl stop -m immediate\"\n> then clients get a scary message claiming that something has crashed,\n> because backends can't tell whether the SIGQUIT they got was sent for\n> a crash-and-restart situation or because of an immediate-stop command.\n>\n> This isn't great from a fit-and-finish perspective, and it occurs to me\n> that it's really easy to do better: the postmaster can stick a flag\n> into shared memory explaining the reason for SIGQUIT. While we don't\n> like the postmaster touching shared memory, there doesn't seem to be\n> any need for interlocking or anything like that, so there is no risk\n> involved that's greater than those already taken by pmsignal.c.\n\n+1 to improve the message.\n\n> So, here's a very simple proposed patch. Some issues for possible\n> bikeshedding:\n>\n> * Up to now, pmsignal.c has only been for child-to-postmaster\n> communication, so maybe there is some better place to put the\n> support code. I can't think of one though.\n\n+1 to have it here as we already have the required shared memory\ninitialization code to add in new flags there -\nPMSignalState->sigquit_reason.\n\nIf I'm correct, quickdie() doesn't access any shared memory because\none of the reason we can be in quickdie() is when the shared memory\nitself is corrupted(the comment down below on why we don't call\nroc_exit() or atexit() says), in such case, will GetQuitSignalReason()\nhave some problem in accessing the shared memory i.e. + return\nPMSignalState->sigquit_reason;?\n\n> * I chose to report the same message for immediate shutdown as we\n> already use for SIGTERM (fast shutdown or pg_terminate_backend()).\n> Should it be different, and if so what?\n\nAFAIK, errmsg(terminating connection due to administrator command\") is\nemitted when there's no specific reason. 
But we know exactly why we\nare terminating in this case, how about having an error message like\nerrmsg(\"terminating connection due to immediate shutdown request\")));\n? There are other places where errmsg(\"terminating connection due to\nXXXX reasons\"); is used. We also log messages when an immediate\nshutdown request is received errmsg(\"received immediate shutdown\nrequest\").\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Dec 2020 06:59:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 2:29 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Dec 22, 2020 at 3:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Up to now, if you shut down the database with \"pg_ctl stop -m immediate\"\n> > then clients get a scary message claiming that something has crashed,\n> > because backends can't tell whether the SIGQUIT they got was sent for\n> > a crash-and-restart situation or because of an immediate-stop command.\n> >\n> > This isn't great from a fit-and-finish perspective, and it occurs to me\n> > that it's really easy to do better: the postmaster can stick a flag\n> > into shared memory explaining the reason for SIGQUIT. While we don't\n> > like the postmaster touching shared memory, there doesn't seem to be\n> > any need for interlocking or anything like that, so there is no risk\n> > involved that's greater than those already taken by pmsignal.c.\n>\n> +1 to improve the message.\n>\n> > So, here's a very simple proposed patch. Some issues for possible\n> > bikeshedding:\n> >\n> > * Up to now, pmsignal.c has only been for child-to-postmaster\n> > communication, so maybe there is some better place to put the\n> > support code. I can't think of one though.\n>\n> +1 to have it here as we already have the required shared memory\n> initialization code to add in new flags there -\n> PMSignalState->sigquit_reason.\n>\n> If I'm correct, quickdie() doesn't access any shared memory because\n> one of the reason we can be in quickdie() is when the shared memory\n> itself is corrupted(the comment down below on why we don't call\n> roc_exit() or atexit() says), in such case, will GetQuitSignalReason()\n> have some problem in accessing the shared memory i.e. 
+ return\n> PMSignalState->sigquit_reason;?\n>\n> > * I chose to report the same message for immediate shutdown as we\n> > already use for SIGTERM (fast shutdown or pg_terminate_backend()).\n> > Should it be different, and if so what?\n>\n> AFAIK, errmsg(terminating connection due to administrator command\") is\n> emitted when there's no specific reason. But we know exactly why we\n> are terminating in this case, how about having an error message like\n> errmsg(\"terminating connection due to immediate shutdown request\")));\n> ? There are other places where errmsg(\"terminating connection due to\n> XXXX reasons\"); is used. We also log messages when an immediate\n> shutdown request is received errmsg(\"received immediate shutdown\n> request\").\n\n+1. I definitely think having this message be different can be useful.\n\nSee also the thread about tracking shutdown reasons (connection\nstatistics) -- not the same thing, but the same concepts apply.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 22 Dec 2020 09:51:28 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, Dec 22, 2020 at 2:29 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> If I'm correct, quickdie() doesn't access any shared memory because\n>> one of the reason we can be in quickdie() is when the shared memory\n>> itself is corrupted(the comment down below on why we don't call\n>> roc_exit() or atexit() says), in such case, will GetQuitSignalReason()\n>> have some problem in accessing the shared memory i.e. + return\n>> PMSignalState->sigquit_reason;?\n\nIt couldn't really have any problem in physically accessing the field;\nwe never detach from the main shared memory block. The risk that needs\nto be thought about is that shared memory contains garbage --- for\nexample, imagine that a failing process scribbled in the wrong part of\nshared memory before crashing. So the hazard here is that there's a\nsmall chance that sigquit_reason will contain the wrong value, which\nwould cause the patch to print a misleading message, or more likely\nnot print anything (since I didn't put a default case in that switch).\nThat seems fine to me. Also, because the sequence of events would be\n(1) failing process scribbles and crashes, (2) postmaster updates\nsigquit_reason, (3) other child processes examine sigquit_reason,\nit's fairly likely that we'd get the right answer even if the field\ngot clobbered during (1).\n\nThere might be an argument for emitting the \"unexpected SIGQUIT\"\ntext if we find garbage in sigquit_reason. Any thoughts about that?\n\n>> AFAIK, errmsg(terminating connection due to administrator command\") is\n>> emitted when there's no specific reason. But we know exactly why we\n>> are terminating in this case, how about having an error message like\n>> errmsg(\"terminating connection due to immediate shutdown request\")));\n>> ? There are other places where errmsg(\"terminating connection due to\n>> XXXX reasons\"); is used. 
We also log messages when an immediate\n>> shutdown request is received errmsg(\"received immediate shutdown\n>> request\").\n\n> +1. I definitely think having this message be different can be useful.\n\nOK, will use \"terminating connection due to immediate shutdown\nrequest\".\n\n> See also the thread about tracking shutdown reasons (connection\n> statistics) -- not the same thing, but the same concepts apply.\n\nHm. I wondered for a bit if that patch could make use of this one\nto improve its results. For the specific case of SIGQUIT it seems\nmoot because we aren't going to let backends send any shutdown\nstatistics during an emergency stop. But maybe the idea could be\nextended to let more-accurate termination reasons be provided in\nsome other cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Dec 2020 12:32:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 11:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Tue, Dec 22, 2020 at 2:29 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> If I'm correct, quickdie() doesn't access any shared memory because\n> >> one of the reason we can be in quickdie() is when the shared memory\n> >> itself is corrupted(the comment down below on why we don't call\n> >> roc_exit() or atexit() says), in such case, will GetQuitSignalReason()\n> >> have some problem in accessing the shared memory i.e. + return\n> >> PMSignalState->sigquit_reason;?\n>\n> It couldn't really have any problem in physically accessing the field;\n> we never detach from the main shared memory block. The risk that needs\n> to be thought about is that shared memory contains garbage --- for\n> example, imagine that a failing process scribbled in the wrong part of\n> shared memory before crashing. So the hazard here is that there's a\n> small chance that sigquit_reason will contain the wrong value, which\n> would cause the patch to print a misleading message, or more likely\n> not print anything (since I didn't put a default case in that switch).\n> That seems fine to me. Also, because the sequence of events would be\n> (1) failing process scribbles and crashes, (2) postmaster updates\n> sigquit_reason, (3) other child processes examine sigquit_reason,\n> it's fairly likely that we'd get the right answer even if the field\n> got clobbered during (1).\n\nHmm.\n\n> There might be an argument for emitting the \"unexpected SIGQUIT\"\n> text if we find garbage in sigquit_reason. 
Any thoughts about that?\n\nAlthough I can't think of any case now, IMHO we can still have a\ndefault case(we may or may not hit it) in the switch with a message\nsomething like \"terminating connection due to unexpected SIGQUIT\".\n\n> >> AFAIK, errmsg(terminating connection due to administrator command\") is\n> >> emitted when there's no specific reason. But we know exactly why we\n> >> are terminating in this case, how about having an error message like\n> >> errmsg(\"terminating connection due to immediate shutdown request\")));\n> >> ? There are other places where errmsg(\"terminating connection due to\n> >> XXXX reasons\"); is used. We also log messages when an immediate\n> >> shutdown request is received errmsg(\"received immediate shutdown\n> >> request\").\n>\n> > +1. I definitely think having this message be different can be useful.\n>\n> OK, will use \"terminating connection due to immediate shutdown\n> request\".\n\nThanks.\n\nI don't have any further comments on the patch.\n\n> > See also the thread about tracking shutdown reasons (connection\n> > statistics) -- not the same thing, but the same concepts apply.\n>\n> Hm. I wondered for a bit if that patch could make use of this one\n> to improve its results. For the specific case of SIGQUIT it seems\n> moot because we aren't going to let backends send any shutdown\n> statistics during an emergency stop.\n\nYeah.\n\n> But maybe the idea could be extended to let more-accurate termination reasons be provided in\n> some other cases.\n\nYeah. For instance, the idea can be extended to the following scenario\n- currently for smart and fast shutdown postmaster sends single signal\nSIGTERM, so the backend can not know what was the exact reason for the\nshutdown. 
If the postmaster updates the sigterm reason, (the way this\npatch does, just before signalling children with SIGTERM), then the\nbackend would know that information and can report better.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Dec 2020 12:07:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, Dec 22, 2020 at 11:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There might be an argument for emitting the \"unexpected SIGQUIT\"\n>> text if we find garbage in sigquit_reason. Any thoughts about that?\n\n> Although I can't think of any case now, IMHO we can still have a\n> default case(we may or may not hit it) in the switch with a message\n> something like \"terminating connection due to unexpected SIGQUIT\".\n\nI don't really want to add a default case just on speculation. We\ngenerally prefer to avoid writing a default in a switch that's supposed\nto cover all values of an enum type, because without the default most C\ncompilers will warn you if you omit a value, whereas with the default\nthey won't. Admittedly, it's unlikely someone would add a new\nQuitSignalReason and forget to update this code, but still it's not\nreally project style to do it like that. I don't think there's enough\nrisk here to go against the style.\n\nHence, pushed it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Dec 2020 13:04:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
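Tom's point about omitting the default case can be illustrated with a small sketch: when a switch covers every value of an enum and has no `default:`, most compilers (gcc/clang with `-Wswitch`, part of `-Wall`) will warn if a new enum member is later added but left unhandled. The enum values and message texts below follow the thread's discussion but are otherwise illustrative, not the committed code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical enum following the thread's discussion */
typedef enum QuitSignalReason
{
	PMQUIT_NOT_SENT = 0,
	PMQUIT_FOR_CRASH,
	PMQUIT_FOR_STOP
} QuitSignalReason;

const char *
quit_message_for(QuitSignalReason reason)
{
	switch (reason)
	{
		case PMQUIT_NOT_SENT:
			break;				/* rogue SIGQUIT: nothing definite to say */
		case PMQUIT_FOR_CRASH:
			return "terminating connection because of crash of another server process";
		case PMQUIT_FOR_STOP:
			return "terminating connection due to immediate shutdown request";
			/* deliberately no default: keeps -Wswitch useful */
	}
	/* Garbage in shared memory falls through to here: print nothing */
	return NULL;
}
```

A clobbered sigquit_reason simply falls out of the switch, matching the behavior described upthread where an unrecognized value results in no message rather than a misleading one.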
{
"msg_contents": "Hi,\n\nOn 2020-12-21 16:43:33 -0500, Tom Lane wrote:\n> Up to now, if you shut down the database with \"pg_ctl stop -m immediate\"\n> then clients get a scary message claiming that something has crashed,\n> because backends can't tell whether the SIGQUIT they got was sent for\n> a crash-and-restart situation or because of an immediate-stop command.\n\n+many\n\n\n> This isn't great from a fit-and-finish perspective, and it occurs to me\n> that it's really easy to do better: the postmaster can stick a flag\n> into shared memory explaining the reason for SIGQUIT. While we don't\n> like the postmaster touching shared memory, there doesn't seem to be\n> any need for interlocking or anything like that, so there is no risk\n> involved that's greater than those already taken by pmsignal.c.\n> \n> So, here's a very simple proposed patch. Some issues for possible\n> bikeshedding:\n\n> * Up to now, pmsignal.c has only been for child-to-postmaster\n> communication, so maybe there is some better place to put the\n> support code. I can't think of one though.\n\nSeems fine with me.\n\n\n> * I chose to report the same message for immediate shutdown as we\n> already use for SIGTERM (fast shutdown or pg_terminate_backend()).\n> Should it be different, and if so what?\n\nTo do better I think we'd have to distinguish the different cases? An\nerror message like\n\"terminating connection due to {fast shutdown,immediate shutdown,connection termination} administrator command\"\nor such could be helpful, but I don't think your patch adds *quite*\nenough state?\n\n\nI'd like to not log all these repeated messages into the server\nlog. It's quite annoying to have to digg through thousands of lines of\nrepeated \"terminating connection...\" lines that add absolutely no\nadditional information, just because I am shutting down the\nserver. Similarly, trying to find the reason for a PANIC is often hard\ndue to all the other messages.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Dec 2020 15:03:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "On Sat, Dec 26, 2020 at 4:33 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-12-21 16:43:33 -0500, Tom Lane wrote:\n> > * I chose to report the same message for immediate shutdown as we\n> > already use for SIGTERM (fast shutdown or pg_terminate_backend()).\n> > Should it be different, and if so what?\n>\n> To do better I think we'd have to distinguish the different cases? An\n> error message like\n> \"terminating connection due to {fast shutdown,immediate shutdown,connection termination} administrator command\"\n> or such could be helpful, but I don't think your patch adds *quite*\n> enough state?\n\nCurrently, for fast shutdown, the \"FATAL: terminating connection due\nto administrator command\" message is shown in server logs per backend.\nThe idea used for immediate shutdown can be extended to fast shutdown\nas well, that is postmaster can set the signal state just before\nsignalling the backends with SIGTERM and later in ProcessInterrupts()\nthe status can be checked and report something like \"FATAL:\nterminating connection due to fast shutdown command\".\n\nAnd for smart shutdown, since the postmaster waits until the normal\nbackends to go away on their own and no FATAL messages get logged, so\nwe don't need to set the signal state.\n\n> I'd like to not log all these repeated messages into the server\n> log. It's quite annoying to have to digg through thousands of lines of\n> repeated \"terminating connection...\" lines that add absolutely no\n> additional information, just because I am shutting down the\n> server. Similarly, trying to find the reason for a PANIC is often hard\n> due to all the other messages.\n\nCurrently, only one \"terminating connection due to XXXX\"\nmessage(WARNING for immediate shutdown, FATAL for fast shutdown) gets\nlogged in the server logs per backend, so the number of log messages\nfor each shutdown depends on the number of active backends plus other\nbg workers if any. 
If we don't want to let each active backend to show\nup these messages separately, then how about postmaster (as it anyways\nknows what are the active backends it currently has) checking if all\nthe backends have exited properly and showing only one message,\nsomething like \"the active backends are terminated due to XXXX\"?\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Dec 2020 11:57:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-12-21 16:43:33 -0500, Tom Lane wrote:\n>> * I chose to report the same message for immediate shutdown as we\n>> already use for SIGTERM (fast shutdown or pg_terminate_backend()).\n>> Should it be different, and if so what?\n\n[ per upthread, I did already change the SIGQUIT message to specify\n\"immediate shutdown\" ]\n\n> To do better I think we'd have to distinguish the different cases? An\n> error message like\n> \"terminating connection due to {fast shutdown,immediate shutdown,connection termination} administrator command\"\n> or such could be helpful, but I don't think your patch adds *quite*\n> enough state?\n\nWell, if you want to distinguish different causes for SIGTERM then\nyou'd need additional mechanism for that. I think we'd have to have\na per-child termination-reason field, since SIGTERM might be sent to\njust an individual backend rather than the whole flotilla at once.\nI didn't think it was quite worth the trouble --- \"administrator command\"\nseems close enough for both fast shutdown and pg_terminate_backend() ---\nbut you could certainly argue differently.\n\nI suppose a compromise position could be to let the postmaster export its\n\"Shutdown\" state variable, and then let backends assume that SIGTERM means\nfast shutdown or pg_terminate_backend depending on the state of that one\nglobal variable. But it'd be a bit imprecise so I don't really feel it's\nmore useful than what we have.\n\n> I'd like to not log all these repeated messages into the server\n> log. It's quite annoying to have to digg through thousands of lines of\n> repeated \"terminating connection...\"\n\nHm. That's an orthogonal issue, but certainly worth considering.\nThere are a couple of levels we could consider:\n\n1. Just make the logged messages less verbose (they certainly don't\nneed the DETAIL and HINT lines).\n\n2. 
Suppress the log entries altogether.\n\nI would have been against #2 before this patch, because it'd mean\nthat a rogue SIGQUIT leaves no clear trace in the log. But with\nthis patch, we can be fairly sure that we know whether SIGQUIT came\nfrom the postmaster, and then it might be all right to suppress the\nlog entry altogether when it did.\n\nI'd be happy to write up a patch for either of these, but let's\ndecide what we want first.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Dec 2020 13:37:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-26 13:37:15 -0500, Tom Lane wrote:\n> I suppose a compromise position could be to let the postmaster export its\n> \"Shutdown\" state variable, and then let backends assume that SIGTERM means\n> fast shutdown or pg_terminate_backend depending on the state of that one\n> global variable. But it'd be a bit imprecise so I don't really feel it's\n> more useful than what we have.\n\nFair enough, I think.\n\n\n> > I'd like to not log all these repeated messages into the server\n> > log. It's quite annoying to have to digg through thousands of lines of\n> > repeated \"terminating connection...\"\n> \n> Hm. That's an orthogonal issue, but certainly worth considering.\n> There are a couple of levels we could consider:\n> \n> 1. Just make the logged messages less verbose (they certainly don't\n> need the DETAIL and HINT lines).\n> \n> 2. Suppress the log entries altogether.\n> \n> I would have been against #2 before this patch, because it'd mean\n> that a rogue SIGQUIT leaves no clear trace in the log. But with\n> this patch, we can be fairly sure that we know whether SIGQUIT came\n> from the postmaster, and then it might be all right to suppress the\n> log entry altogether when it did.\n> \n> I'd be happy to write up a patch for either of these, but let's\n> decide what we want first.\n\nMy vote would be #2, with the same reasoning as yours.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Dec 2020 11:16:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-12-26 13:37:15 -0500, Tom Lane wrote:\n>>> I'd like to not log all these repeated messages into the server\n>>> log. It's quite annoying to have to digg through thousands of lines of\n>>> repeated \"terminating connection...\"\n\n>> Hm. That's an orthogonal issue, but certainly worth considering.\n>> There are a couple of levels we could consider:\n>> 1. Just make the logged messages less verbose (they certainly don't\n>> need the DETAIL and HINT lines).\n>> 2. Suppress the log entries altogether.\n\n> My vote would be #2, with the same reasoning as yours.\n\nThe most straightforward way to do that is to introduce a new error\nlevel. Having to renumber existing levels is a bit of a pain, but\nI'm not aware of anything that should break in source-code terms.\nWe make similar ABI breaks in every major release.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 28 Dec 2020 13:25:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-28 13:25:14 -0500, Tom Lane wrote:\n> The most straightforward way to do that is to introduce a new error\n> level. Having to renumber existing levels is a bit of a pain, but\n> I'm not aware of anything that should break in source-code terms.\n> We make similar ABI breaks in every major release.\n\nI don't see a problem either.\n\n\n> \t/* Select default errcode based on elevel */\n> \tif (elevel >= ERROR)\n> \t\tedata->sqlerrcode = ERRCODE_INTERNAL_ERROR;\n> -\telse if (elevel == WARNING)\n> +\telse if (elevel >= WARNING)\n> \t\tedata->sqlerrcode = ERRCODE_WARNING;\n> \telse\n> \t\tedata->sqlerrcode = ERRCODE_SUCCESSFUL_COMPLETION;\n\n> @@ -2152,6 +2157,7 @@ write_eventlog(int level, const char *line, int len)\n> \t\t\teventlevel = EVENTLOG_INFORMATION_TYPE;\n> \t\t\tbreak;\n> \t\tcase WARNING:\n> +\t\tcase WARNING_CLIENT_ONLY:\n> \t\t\teventlevel = EVENTLOG_WARNING_TYPE;\n> \t\t\tbreak;\n> \t\tcase ERROR:\n> [...]\n\nI don't think it needs to be done right now, but I again want to suggest\nit'd be nice if we split log levels into a bitmask. If we bits, separate\nfrom the log level, for do-not-log-to-client and do-not-log-to-server\nsome of this code would imo look nicer.\n\n\nLooks good to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Dec 2020 11:14:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
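Andres's bitmask suggestion — keeping the severity as a plain number and carrying routing as orthogonal flag bits, instead of minting extra levels like WARNING_CLIENT_ONLY — could look something like the sketch below. All macro names and values here are hypothetical, not PostgreSQL's actual elog.h definitions:

```c
#include <assert.h>

/*
 * Hypothetical layout: severity in the low byte, routing flags above it.
 * None of these names or values come from the real elog.h.
 */
#define ELEVEL_MASK		0x00ff	/* severity lives in the low byte */
#define LOG_NO_SERVER	0x0100	/* suppress the server-log copy */
#define LOG_NO_CLIENT	0x0200	/* suppress the copy sent to the client */

#define WARNING			19		/* illustrative severity value */

/* Extract the plain severity, ignoring routing bits */
static int
severity(int elevel)
{
	return elevel & ELEVEL_MASK;
}

static int
log_to_server(int elevel)
{
	return (elevel & LOG_NO_SERVER) == 0;
}

static int
send_to_client(int elevel)
{
	return (elevel & LOG_NO_CLIENT) == 0;
}
```

With this scheme, comparisons like `elevel >= ERROR` would operate on `severity(elevel)`, and code such as the write_eventlog() switch quoted above would not need a new case for every routing variant of an existing level.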
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't think it needs to be done right now, but I again want to suggest\n> it'd be nice if we split log levels into a bitmask. If we bits, separate\n> from the log level, for do-not-log-to-client and do-not-log-to-server\n> some of this code would imo look nicer.\n\nHmm, maybe. I agree that would be better done as a separate patch though.\n\nI had a thought while looking at elog.c: we could further reduce the risk\nof quickdie() crashing if we make it do what elog.c does when it gets into\nerror recursion trouble:\n\n error_context_stack = NULL;\n debug_query_string = NULL;\n\nNot invoking error context callbacks would significantly reduce the\nfootprint of code that can be reached from quickdie's ereports, and\nthe current call stack isn't really relevant to a report of SIGQUIT\nanyway. The argument for not reporting debug_query_string is a little\nthinner, but if that string is long it could result in additional\npalloc work inside elog.c, thus increasing the amount of stuff that\nhas to work to get the report out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Dec 2020 15:01:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
},
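The hardening Tom describes amounts to clearing the error-context callback chain before quickdie()'s ereport, the same defense elog.c applies when it detects error recursion. The following is a simplified stand-alone sketch: the struct mirrors the shape of the real elog.h declaration, but `quickdie_prepare()` is a hypothetical helper, not a function in the tree:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the error-context callback chain declared in elog.h */
typedef struct ErrorContextCallback
{
	struct ErrorContextCallback *previous;
	void		(*callback) (void *arg);
	void	   *arg;
} ErrorContextCallback;

ErrorContextCallback *error_context_stack = NULL;

/* Hypothetical helper: what quickdie() would do before ereport'ing */
static void
quickdie_prepare(void)
{
	/*
	 * Don't run error-context callbacks: they can reach far more of the
	 * backend than we want to trust during an emergency exit, and the
	 * current call stack isn't relevant to reporting a SIGQUIT anyway.
	 */
	error_context_stack = NULL;
}
```

Resetting the global means the subsequent ereport walks an empty callback chain, shrinking the footprint of code that must still work for the SIGQUIT report to get out.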
{
"msg_contents": "I wrote:\n> I had a thought while looking at elog.c: we could further reduce the risk\n> of quickdie() crashing if we make it do what elog.c does when it gets into\n> error recursion trouble:\n> error_context_stack = NULL;\n> debug_query_string = NULL;\n\nOn closer inspection, there's not much need to touch debug_query_string\nhere, because elog.c only consults that for making log entries, which\nwe're suppressing. I pushed it with just the error_context_stack reset.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Dec 2020 18:05:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Better client reporting for \"immediate stop\" shutdowns"
}
] |
[
{
"msg_contents": "As I did last 2 years, I reviewed docs for v14...\n\nThis year I've started early, since it takes more than a little effort and it's\nnot much fun to argue the change in each individual hunk.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581",
"msg_date": "Mon, 21 Dec 2020 22:11:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc review for v14"
},
{
"msg_contents": "On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:\n> As I did last 2 years, I reviewed docs for v14...\n\nThanks for gathering all that!\n\n> This year I've started early, since it takes more than a little effort and it's\n> not much fun to argue the change in each individual hunk.\n\n0001-pgindent-typos.not-a-patch touches pg_bsd_indent.\n\n> \t/*\n> -\t * XmlTable returns table - set of composite values. The error context, is\n> -\t * used for producement more values, between two calls, there can be\n> -\t * created and used another libxml2 error context. It is libxml2 global\n> -\t * value, so it should be refreshed any time before any libxml2 usage,\n> -\t * that is finished by returning some value.\n> +\t * XmlTable returns a table-set of composite values. The error context is\n> +\t * used for providing more detail. Between two calls, other libxml2\n> +\t * error contexts might have been created and used ; since they're libxml2 \n> +\t * global values, they should be refreshed each time before any libxml2 usage\n> +\t * that finishes by returning some value.\n> \t */\n\nThat's indeed incorrect, but I am not completely sure if what you have\nhere is correct either. I'll try to study this code a bit more first,\nthough I have said that once in the past. :p\n\n> --- a/src/bin/pg_dump/pg_restore.c\n> +++ b/src/bin/pg_dump/pg_restore.c\n> @@ -305,7 +305,7 @@ main(int argc, char **argv)\n> \t/* Complain if neither -f nor -d was specified (except if dumping TOC) */\n> \tif (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)\n> \t{\n> -\t\tpg_log_error(\"one of -d/--dbname and -f/--file must be specified\");\n> +\t\tpg_log_error(\"one of -d/--dbname, -f/--file or -l/--list must be specified\");\n> \t\texit_nicely(1);\n> \t}\n\nYou have forgotten to update the TAP test pg_dump/t/001_basic.pl.\nThe message does not seem completely incorrect to me either. 
Hmm.\nRestraining more the set of options is something to consider, though\nit could be annoying. I have discarded this one for now.\n\n> Specifies the amount of memory that should be allocated at server\n> - startup time for use by parallel queries. When this memory region is\n> + startup for use by parallel queries. When this memory region is\n> insufficient or exhausted by concurrent queries, new parallel queries\n> try to allocate extra shared memory temporarily from the operating\n> system using the method configured with\n> <varname>dynamic_shared_memory_type</varname>, which may be slower due\n> to memory management overheads. Memory that is allocated at startup\n> - time with <varname>min_dynamic_shared_memory</varname> is affected by\n> + with <varname>min_dynamic_shared_memory</varname> is affected by\n> the <varname>huge_pages</varname> setting on operating systems where\n> that is supported, and may be more likely to benefit from larger pages\n> on operating systems where that is managed automatically.\n\nThe current formulation is not that confusing, but I agree that this\nis an improvement. Thomas, you are behind this one. What do you\nthink?\n\nI have applied most of it on HEAD, except 0011 and the things noted\nabove. Thanks again.\n--\nMichael",
"msg_date": "Thu, 24 Dec 2020 17:12:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:\n> I have applied most of it on HEAD, except 0011 and the things noted\n> above. Thanks again.\n\nThank you.\n\nI see that I accidentally included ZSTD_COMPRESSION in pg_backup_archiver.h\nwhile cherry-picking from the branch where I first fixed this. Sorry :(\n\n> 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.\n\nI'm hoping that someone will apply it there, but I realize that access to its\nrepository is tightly controlled :)\n\nOn Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:\n> Restraining more the set of options is something to consider, though\n> it could be annoying. I have discarded this one for now.\n\nEven though its -d is unused, I guess since wouldn't serve any significant\npurpose, we shouldn't make pg_restore -l -d fail for no reason.\n\nI think a couple of these should be backpatched.\ndoc/src/sgml/ref/pg_dump.sgml\ndoc/src/sgml/sources.sgml\ndoc/src/sgml/cube.sgml?\ndoc/src/sgml/func.sgml?\n\n-- \nJustin",
"msg_date": "Sun, 27 Dec 2020 14:26:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:\n> > 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.\n>\n> I'm hoping that someone will apply it there, but I realize that access to\n> its\n> repository is tightly controlled :)\n>\n\nNot as much \"tightly controlled\" as \"nobody's really bothered to grant any\npermissions\".\n\nI've applied the patch, thanks! While at it I fixed the indentation of the\n\"target\" row in the patch, I think you didn't take the fix all the way :)\n\nYou may also want to submit those fixes upstream in freebsd? The typos seem\nto be present at\nhttps://github.com/freebsd/freebsd/tree/master/usr.bin/indent as well. (If\nso, please include the updated version that I applied, so we don't diverge\non that)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, Dec 27, 2020 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:> 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.\n\nI'm hoping that someone will apply it there, but I realize that access to its\nrepository is tightly controlled :)Not as much \"tightly controlled\" as \"nobody's really bothered to grant any permissions\".I've applied the patch, thanks! While at it I fixed the indentation of the \"target\" row in the patch, I think you didn't take the fix all the way :)You may also want to submit those fixes upstream in freebsd? The typos seem to be present at https://github.com/freebsd/freebsd/tree/master/usr.bin/indent as well. (If so, please include the updated version that I applied, so we don't diverge on that)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 28 Dec 2020 11:42:03 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:\n> Not as much \"tightly controlled\" as \"nobody's really bothered to grant any\n> permissions\".\n\nMagnus, do I have an access to that? This is the second time I am\ncrossing an issue with this issue, but I don't really know if I should\nact on it or not :)\n--\nMichael",
"msg_date": "Tue, 29 Dec 2020 09:37:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Thu, Dec 24, 2020 at 9:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:\n> > Specifies the amount of memory that should be allocated at server\n> > - startup time for use by parallel queries. When this memory region is\n> > + startup for use by parallel queries. When this memory region is\n> > insufficient or exhausted by concurrent queries, new parallel queries\n> > try to allocate extra shared memory temporarily from the operating\n> > system using the method configured with\n> > <varname>dynamic_shared_memory_type</varname>, which may be slower due\n> > to memory management overheads. Memory that is allocated at startup\n> > - time with <varname>min_dynamic_shared_memory</varname> is affected by\n> > + with <varname>min_dynamic_shared_memory</varname> is affected by\n> > the <varname>huge_pages</varname> setting on operating systems where\n> > that is supported, and may be more likely to benefit from larger pages\n> > on operating systems where that is managed automatically.\n>\n> The current formulation is not that confusing, but I agree that this\n> is an improvement. Thomas, you are behind this one. What do you\n> think?\n\nLGTM.\n\n\n",
"msg_date": "Tue, 29 Dec 2020 13:59:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Tue, Dec 29, 2020 at 01:59:58PM +1300, Thomas Munro wrote:\n> LGTM.\n\nThanks, I have done this one then.\n--\nMichael",
"msg_date": "Tue, 29 Dec 2020 16:57:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:\n> I think a couple of these should be backpatched.\n> doc/src/sgml/ref/pg_dump.sgml\n\nThis part can go down to 9.5.\n\n> doc/src/sgml/sources.sgml\n\nYes, I have done an extra effort on those fixes where needed. On top\nof that, I have included catalogs.sgml, pgstatstatements.sgml,\nexplain.sgml, pg_verifybackup.sgml and wal.sgml in 13.\n\n> doc/src/sgml/cube.sgml?\n> doc/src/sgml/func.sgml?\n\nThese two are some beautification for the format of the function, so I\nhave left them out.\n--\nMichael",
"msg_date": "Tue, 29 Dec 2020 18:22:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:\n> > Not as much \"tightly controlled\" as \"nobody's really bothered to grant\n> any\n> > permissions\".\n>\n> Magnus, do I have an access to that? This is the second time I am\n> crossing an issue with this issue, but I don't really know if I should\n> act on it or not :)\n>\n\nNo, at this point it's just Tom (who has all the commits) and me (who set\nit up, and now has one commit). It's all manually handled.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:\n> Not as much \"tightly controlled\" as \"nobody's really bothered to grant any\n> permissions\".\n\nMagnus, do I have an access to that? This is the second time I am\ncrossing an issue with this issue, but I don't really know if I should\nact on it or not :)No, at this point it's just Tom (who has all the commits) and me (who set it up, and now has one commit). It's all manually handled.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 29 Dec 2020 11:37:24 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Magnus, do I have an access to that? This is the second time I am\n>> crossing an issue with this issue, but I don't really know if I should\n>> act on it or not :)\n\n> No, at this point it's just Tom (who has all the commits) and me (who set\n> it up, and now has one commit). It's all manually handled.\n\nFTR, I have no objection to Michael (or any other PG committer) having\nwrite access to that repo. I think so far it's a matter of nobody's\nbothered because there's so little need.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Dec 2020 09:36:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:\n> Yes, I have done an extra effort on those fixes where needed. On top\n> of that, I have included catalogs.sgml, pgstatstatements.sgml,\n> explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.\n\nJustin, I got to look at the libxml2 part, and finished by rewording\nthe comment block as follows:\n+ * XmlTable returns a table-set of composite values. This error context\n+ * is used for providing more details, and needs to be reset between two\n+ * internal calls of libxml2 as different error contexts might have been\n+ * created or used.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Sun, 3 Jan 2021 15:10:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Jan 03, 2021 at 03:10:54PM +0900, Michael Paquier wrote:\n> On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:\n> > Yes, I have done an extra effort on those fixes where needed. On top\n> > of that, I have included catalogs.sgml, pgstatstatements.sgml,\n> > explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.\n> \n> Justin, I got to look at the libxml2 part, and finished by rewording\n> the comment block as follows:\n> + * XmlTable returns a table-set of composite values. This error context\n> + * is used for providing more details, and needs to be reset between two\n> + * internal calls of libxml2 as different error contexts might have been\n> + * created or used.\n\nI don't like \"this error context\", since \"this\" seems to be referring to the\n\"tableset of composite values\" as an err context.\n\nI guess you mean: \"needs to be reset between each internal call to libxml2..\"\n\nSo I'd suggest:\n\n> + * XmlTable returns a table-set of composite values. The error context\n> + * is used for providing additional detail. It needs to be reset between each\n> + * call to libxml2, since different error contexts might have been\n> + * created or used since it was last set.\n\n\nBut actually, maybe we should just use the comment that exists everywhere else\nfor that.\n\n /* Propagate context related error context to libxml2 */\n xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);\n\nMaybe should elaborate and say:\n\t/*\n\t * Propagate context related error context to libxml2 (needs to be\n\t * reset before each call, in case other error contexts have been assigned since\n\t * it was first set) */\n\t */\n xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 3 Jan 2021 00:33:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Jan 03, 2021 at 12:33:54AM -0600, Justin Pryzby wrote:\n> \n> But actually, maybe we should just use the comment that exists everywhere else\n> for that.\n> \n> /* Propagate context related error context to libxml2 */\n> xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);\n\nI quite like your suggestion to be a maximum simple here, and the docs\nof upstream also give a lot of context:\nhttp://xmlsoft.org/html/libxml-xmlerror.html#xmlSetStructuredErrorFunc\n\nSo let's use this version and call it a day for this part.\n--\nMichael",
"msg_date": "Sun, 3 Jan 2021 21:05:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:\n> So let's use this version and call it a day for this part.\n\nThis has been done as of b49154b.\n--\nMichael",
"msg_date": "Wed, 6 Jan 2021 10:37:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Wed, Jan 6, 2021 at 10:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:\n> > So let's use this version and call it a day for this part.\n>\n> This has been done as of b49154b.\n\nIt seems to me that all work has been done. Can we mark this patch\nentry as \"Committed\"? Or waiting for something on the author?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 22 Jan 2021 21:53:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Fri, Jan 22, 2021 at 09:53:13PM +0900, Masahiko Sawada wrote:\n> It seems to me that all work has been done. Can we mark this patch\n> entry as \"Committed\"? Or waiting for something on the author?\n\nPatch 0005 posted on [1], related to some docs of replication slots,\nstill needs a lookup.\n\n[1]: https://www.postgresql.org/message-id/20201227202604.GC26311@telsasoft.com\n--\nMichael",
"msg_date": "Sat, 23 Jan 2021 14:24:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "Hi Justin,\n\nOn Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:\n> Thank you.\n\nI have been looking at 0005, the patch dealing with the docs of the\nreplication stats, and have some comments.\n\n <para>\n Number of times transactions were spilled to disk while decoding changes\n- from WAL for this slot. Transactions may get spilled repeatedly, and\n- this counter gets incremented on every such invocation.\n+ from WAL for this slot. A given transaction may be spilled multiple times, and\n+ this counter is incremented each time.\n </para></entry>\nThe original can be a bit hard to read, and I don't think that the new\nformulation is an improvement. I actually find confusing that this\nmixes in the same sentence that a transaction can be spilled multiple\ntimes and increment this counter each time. What about splitting that\ninto two sentences? Here is an idea:\n\"This counter is incremented each time a transaction is spilled. The\nsame transaction may be spilled multiple times.\"\n\n- Number of transactions spilled to disk after the memory used by\n- logical decoding of changes from WAL for this slot exceeds\n+ Number of transactions spilled to disk because the memory used by\n+ logical decoding of changes from WAL for this slot exceeded\nWhat does \"logical decoding of changes from WAL\" mean? Here is an\nidea to clarify all that:\n\"Number of transactions spilled to disk once the memory used by\nlogical decoding to decode changes from WAL has exceeded\nlogical_decoding_work_mem.\"\n\n Number of in-progress transactions streamed to the decoding output plugin\n- after the memory used by logical decoding of changes from WAL for this\n- slot exceeds <literal>logical_decoding_work_mem</literal>. Streaming only\n+ because the memory used by logical decoding of changes from WAL for this\n+ slot exceeded <literal>logical_decoding_work_mem</literal>. 
Streaming only\n works with toplevel transactions (subtransactions can't be streamed\n- independently), so the counter does not get incremented for subtransactions\n+ independently), so the counter is not incremented for subtransactions.\nI have the same issue here with \"by logical decoding of changes from\nWAL\". I'd say \"after the memory used by logical decoding to decode\nchanges from WAL for this slot has exceeded logical_decoding_work_mem\".\n\n output plugin while decoding changes from WAL for this slot. Transactions\n- may get streamed repeatedly, and this counter gets incremented on every\n- such invocation.\n+ may be streamed multiple times, and this counter is incremented each time.\nI would split this stuff into two sentences:\n\"This counter is incremented each time a transaction is streamed. The\nsame transaction may be streamed multiple times.\"\n\n Resets statistics to zero for a single replication slot, or for all\n- replication slots in the cluster. The argument can be either the name\n- of the slot to reset the stats or NULL. If the argument is NULL, all\n- counters shown in the <structname>pg_stat_replication_slots</structname>\n- view for all replication slots are reset.\n+ replication slots in the cluster. The argument can be either NULL or the name\n+ of a slot for which stats are to be reset. If the argument is NULL, all\n+ counters in the <structname>pg_stat_replication_slots</structname>\n+ view are reset for all replication slots.\nHere also, I find it rather confusing that this paragraph tells multiple\ntimes that NULL resets the stats for all the replication slots. NULL\nshould use a <literal> markup, and it is cleaner to use \"statistics\"\nrather than \"stats\" IMO. So I guess we could simplify things as\nfollows:\n\"Resets statistics of the replication slot defined by the argument. If\nthe argument is NULL, resets statistics for all the replication\nslots.\"\n--\nMichael",
"msg_date": "Sat, 23 Jan 2021 19:15:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sat, Jan 23, 2021 at 07:15:40PM +0900, Michael Paquier wrote:\n> I have been looking at 0005, the patch dealing with the docs of the\n> replication stats, and have some comments.\n\nAnd attached is a patch to clarify all that. I am letting that sleep\nfor a couple of days for now, so please let me know if you have any\ncomments.\n--\nMichael",
"msg_date": "Wed, 27 Jan 2021 14:52:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 02:52:14PM +0900, Michael Paquier wrote:\n> And attached is a patch to clarify all that. I am letting that sleep\n> for a couple of days for now, so please let me know if you have any\n> comments.\n\nI have spent some time on that, and applied this stuff as of 2a5862f\nafter some extra tweaks. As there is nothing left, this CF entry is\nnow closed.\n--\nMichael",
"msg_date": "Fri, 29 Jan 2021 16:33:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "Another round of doc fixen.\n\nwdiff to follow\n\ncommit 389c4ac2febe21fd48480a86819d94fd2eb9c1cc\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Feb 10 17:19:51 2021 -0600\n\n doc review for pg_stat_progress_create_index\n \n ab0dfc961b6a821f23d9c40c723d11380ce195a6\n \n should backpatch to v13\n\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex c602ee4427..16eb1d9e9c 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -5725,7 +5725,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,\n </para>\n <para>\n When creating an index on a partitioned table, this column is set to\n the number of partitions on which the index has been [-completed.-]{+created.+}\n </para></entry>\n </row>\n </tbody>\n\ncommit bff6f0b557ff79365fc21d0ae261bad0fcb96539\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Feb 6 15:17:51 2021 -0600\n\n *an old and \"deleted [has] happened\"\n \n Heikki missed this in 6b387179baab8d0e5da6570678eefbe61f3acc79\n\ndiff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\nindex 3763b4b995..a51f2c9920 100644\n--- a/doc/src/sgml/protocol.sgml\n+++ b/doc/src/sgml/protocol.sgml\n@@ -6928,8 +6928,8 @@ Delete\n</term>\n<listitem>\n<para>\n Identifies the following TupleData message as [-a-]{+an+} old tuple.\n This field is present if the table in which the delete[-has-]\n happened has REPLICA IDENTITY set to FULL.\n</para>\n</listitem>\n\ncommit 9bd601fa82ceeaf09573ce31eb3c081b4ae7a45d\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 23 21:03:37 2021 -0600\n\n doc review for logical decoding of prepared xacts\n \n 0aa8a01d04c8fe200b7a106878eebc3d0af9105c\n\ndiff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml\nindex b854f2ccfc..71e9f36b8e 100644\n--- a/doc/src/sgml/logicaldecoding.sgml\n+++ b/doc/src/sgml/logicaldecoding.sgml\n@@ -791,9 +791,9 @@ typedef void (*LogicalDecodeMessageCB) (struct 
LogicalDecodingContext *ctx,\n <para>\n The optional <function>filter_prepare_cb</function> callback\n is called to determine whether data that is part of the current\n two-phase commit transaction should be considered for [-decode-]{+decoding+}\n at this prepare stage or {+later+} as a regular one-phase transaction at\n <command>COMMIT PREPARED</command> [-time later.-]{+time.+} To signal that\n decoding should be skipped, return <literal>true</literal>;\n <literal>false</literal> otherwise. When the callback is not\n defined, <literal>false</literal> is assumed (i.e. nothing is\n@@ -820,11 +820,11 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx\n The required <function>begin_prepare_cb</function> callback is called\n whenever the start of a prepared transaction has been decoded. The\n <parameter>gid</parameter> field, which is part of the\n <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback to\n check if the plugin has already received this [-prepare-]{+PREPARE+} in which case it\n can skip the remaining changes of the transaction. This can only happen\n if the user restarts the decoding after receiving the [-prepare-]{+PREPARE+} for a\n transaction but before receiving the [-commit prepared-]{+COMMIT PREPARED,+} say because of some\n error.\n <programlisting>\n typedef void (*LogicalDecodeBeginPrepareCB) (struct LogicalDecodingContext *ctx,\n@@ -842,7 +842,7 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx\n decoded. The <function>change_cb</function> callback for all modified\n rows will have been called before this, if there have been any modified\n rows. 
The <parameter>gid</parameter> field, which is part of the\n <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.\n <programlisting>\n typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,\n ReorderBufferTXN *txn,\n@@ -856,9 +856,9 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx\n\n <para>\n The required <function>commit_prepared_cb</function> callback is called\n whenever a transaction [-commit prepared-]{+COMMIT PREPARED+} has been decoded. The\n <parameter>gid</parameter> field, which is part of the\n <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.\n <programlisting>\n typedef void (*LogicalDecodeCommitPreparedCB) (struct LogicalDecodingContext *ctx,\n ReorderBufferTXN *txn,\n@@ -872,15 +872,15 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx\n\n <para>\n The required <function>rollback_prepared_cb</function> callback is called\n whenever a transaction [-rollback prepared-]{+ROLLBACK PREPARED+} has been decoded. The\n <parameter>gid</parameter> field, which is part of the\n <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback. The\n parameters <parameter>prepare_end_lsn</parameter> and\n <parameter>prepare_time</parameter> can be used to check if the plugin\n has received this [-prepare transaction-]{+PREPARE TRANSACTION+} in which case it can apply the\n rollback, otherwise, it can skip the rollback operation. 
The\n <parameter>gid</parameter> alone is not sufficient because the downstream\n node can have {+a+} prepared transaction with same identifier.\n <programlisting>\n typedef void (*LogicalDecodeRollbackPreparedCB) (struct LogicalDecodingContext *ctx,\n ReorderBufferTXN *txn,\n@@ -1122,7 +1122,7 @@ OutputPluginWrite(ctx, true);\n the <function>stream_commit_cb</function> callback\n (or possibly aborted using the <function>stream_abort_cb</function> callback).\n If two-phase commits are supported, the transaction can be prepared using the\n <function>stream_prepare_cb</function> callback, [-commit prepared-]{+COMMIT PREPARED+} using the\n <function>commit_prepared_cb</function> callback or aborted using the\n <function>rollback_prepared_cb</function>.\n </para>\n\ncommit 7ddf562c7b384b4a802111ac1b0eab3698982c8e\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 23 21:02:47 2021 -0600\n\n doc review for multiranges\n \n 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n\ndiff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml\nindex 6e3d82b85b..ec95b4eb01 100644\n--- a/doc/src/sgml/extend.sgml\n+++ b/doc/src/sgml/extend.sgml\n@@ -448,7 +448,7 @@\n of <type>anycompatible</type> and <type>anycompatiblenonarray</type>\n inputs, the array element types of <type>anycompatiblearray</type>\n inputs, the range subtypes of <type>anycompatiblerange</type> inputs,\n and the multirange subtypes of [-<type>anycompatiablemultirange</type>-]{+<type>anycompatiblemultirange</type>+}\n inputs. If <type>anycompatiblenonarray</type> is present then the\n common type is required to be a non-array type. 
Once a common type is\n identified, arguments in <type>anycompatible</type>\n\ncommit 4fa1fd9769c93dbec71fa92097ebfea5f420bb09\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 23 20:33:10 2021 -0600\n\n doc review: logical decode in prepare\n \n a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\n\ndiff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml\nindex cf705ed9cd..b854f2ccfc 100644\n--- a/doc/src/sgml/logicaldecoding.sgml\n+++ b/doc/src/sgml/logicaldecoding.sgml\n@@ -1214,7 +1214,7 @@ stream_commit_cb(...); <-- commit of the streamed transaction\n </para>\n\n <para>\n When a prepared transaction is [-rollbacked-]{+rolled back+} using the\n <command>ROLLBACK PREPARED</command>, then the\n <function>rollback_prepared_cb</function> callback is invoked and when the\n prepared transaction is committed using <command>COMMIT PREPARED</command>,\n\ncommit d27a74968b61354ad1186a4740063dd4ac0b1bea\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 23 17:17:58 2021 -0600\n\n doc review for FDW bulk inserts\n \n b663a4136331de6c7364226e3dbf7c88bfee7145\n\ndiff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml\nindex 854913ae5f..12e00bfc2f 100644\n--- a/doc/src/sgml/fdwhandler.sgml\n+++ b/doc/src/sgml/fdwhandler.sgml\n@@ -672,9 +672,8 @@ GetForeignModifyBatchSize(ResultRelInfo *rinfo);\n\n Report the maximum number of tuples that a single\n <function>ExecForeignBatchInsert</function> call can handle for\n the specified foreign table.[-That is,-] The executor passes at most\n the {+given+} number of tuples[-that this function returns-] to <function>ExecForeignBatchInsert</function>.\n <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing\n the target foreign table.\n The FDW is expected to provide a foreign server and/or foreign\n\ncommit 2b8fdcc91562045b6b2cec0e69a724e078cfbdb5\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Feb 3 00:51:25 2021 -0600\n\n doc review: 
piecemeal construction of partitioned indexes\n \n 5efd604ec0a3bdde98fe19d8cada69ab4ef80db3\n \n backpatch to v11\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 1e9a4625cc..a8cbd45d35 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3962,8 +3962,8 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02\n As explained above, it is possible to create indexes on partitioned tables\n so that they are applied automatically to the entire hierarchy.\n This is very\n convenient, as not only[-will-] the existing partitions [-become-]{+will be+} indexed, but\n [-also-]{+so will+} any partitions that are created in the [-future will.-]{+future.+} One limitation is\n that it's not possible to use the <literal>CONCURRENTLY</literal>\n qualifier when creating such a partitioned index. To avoid long lock\n times, it is possible to use <command>CREATE INDEX ON ONLY</command>\n\ncommit 2f6d8a4d0157b632ad1e0ff3b0a54c4d38199637\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 30 18:10:21 2021 -0600\n\n duplicate words\n \n commit 9c4f5192f69ed16c99e0d079f0b5faebd7bad212\n Allow pg_rewind to use a standby server as the source system.\n \n commit 4a252996d5fda7662b2afdf329a5c95be0fe3b01\n Add tests for tuplesort.c.\n \n commit 0a2bc5d61e713e3fe72438f020eea5fcc90b0f0b\n Move per-agg and per-trans duplicate finding to the planner.\n \n commit 623a9ba79bbdd11c5eccb30b8bd5c446130e521c\n snapshot scalability: cache snapshots using a xact completion counter.\n \n commit 2c03216d831160bedd72d45f712601b6f7d03f1c\n Revamp the WAL record format.\n\ndiff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c\nindex e723253297..25d6df1659 100644\n--- a/src/backend/access/transam/xlogutils.c\n+++ b/src/backend/access/transam/xlogutils.c\n@@ -433,8 +433,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,\n * NB: A redo function should normally not call this directly. 
To get a page\n * to modify, use XLogReadBufferForRedoExtended instead. It is important that\n * all pages modified by a WAL record are registered in the WAL records, or\n * they will be invisible to tools that[-that-] need to know which pages are[-*-] modified.\n */\nBuffer\nXLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,\ndiff --git a/src/backend/optimizer/prep/prepagg.c b/src/backend/optimizer/prep/prepagg.c\nindex 929a8ea13b..89046f9afb 100644\n--- a/src/backend/optimizer/prep/prepagg.c\n+++ b/src/backend/optimizer/prep/prepagg.c\n@@ -71,7 +71,7 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);\n *\n * Information about the aggregates and transition functions are collected\n * in the root->agginfos and root->aggtransinfos lists. The 'aggtranstype',\n * 'aggno', and 'aggtransno' fields [-in-]{+of each Aggref+} are filled [-in in each Aggref.-]{+in.+}\n *\n * NOTE: This modifies the Aggrefs in the input expression in-place!\n *\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex cf12eda504..b9fbdcb88f 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -2049,7 +2049,7 @@ GetSnapshotDataReuse(Snapshot snapshot)\n\t * holding ProcArrayLock) exclusively). Thus the xactCompletionCount check\n\t * ensures we would detect if the snapshot would have changed.\n\t *\n\t * As the snapshot contents are the same as it was before, it is[-is-] safe\n\t * to re-enter the snapshot's xmin into the PGPROC array. 
None of the rows\n\t * visible under the snapshot could already have been removed (that'd\n\t * require the set of running transactions to change) and it fulfills the\ndiff --git a/src/bin/pg_rewind/libpq_source.c b/src/bin/pg_rewind/libpq_source.c\nindex 86d2adcaee..ac794cf4eb 100644\n--- a/src/bin/pg_rewind/libpq_source.c\n+++ b/src/bin/pg_rewind/libpq_source.c\n@@ -539,7 +539,7 @@ process_queued_fetch_requests(libpq_source *src)\n\t\t\t\t\t\t chunkoff, rq->path, (int64) rq->offset);\n\n\t\t\t/*\n\t\t\t * We should not receive[-receive-] more data than we requested, or\n\t\t\t * pg_read_binary_file() messed up. We could receive less,\n\t\t\t * though, if the file was truncated in the source after we\n\t\t\t * checked its size. That's OK, there should be a WAL record of\ndiff --git a/src/test/regress/expected/tuplesort.out b/src/test/regress/expected/tuplesort.out\nindex 3fc1998bf2..418f296a3f 100644\n--- a/src/test/regress/expected/tuplesort.out\n+++ b/src/test/regress/expected/tuplesort.out\n@@ -1,7 +1,7 @@\n-- only use parallelism when explicitly intending to do so\nSET max_parallel_maintenance_workers = 0;\nSET max_parallel_workers = 0;\n-- A table with[-with-] contents that, when sorted, triggers abbreviated\n-- key aborts. One easy way to achieve that is to use uuids that all\n-- have the same prefix, as abbreviated keys for uuids just use the\n-- first sizeof(Datum) bytes.\ndiff --git a/src/test/regress/sql/tuplesort.sql b/src/test/regress/sql/tuplesort.sql\nindex 7d7e02f02a..846484d561 100644\n--- a/src/test/regress/sql/tuplesort.sql\n+++ b/src/test/regress/sql/tuplesort.sql\n@@ -2,7 +2,7 @@\nSET max_parallel_maintenance_workers = 0;\nSET max_parallel_workers = 0;\n\n-- A table with[-with-] contents that, when sorted, triggers abbreviated\n-- key aborts. 
One easy way to achieve that is to use uuids that all\n-- have the same prefix, as abbreviated keys for uuids just use the\n-- first sizeof(Datum) bytes.\n\ncommit 4920f9520d7ba1b420bcf03ae48178d74425a622\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sun Jan 17 10:57:21 2021 -0600\n\n doc review for checksum docs\n \n cf621d9d84db1e6edaff8ffa26bad93fdce5f830\n\ndiff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\nindex 66de1ee2f8..02f576a1a9 100644\n--- a/doc/src/sgml/wal.sgml\n+++ b/doc/src/sgml/wal.sgml\n@@ -237,19 +237,19 @@\n </indexterm>\n\n <para>\n [-Data-]{+By default, data+} pages are not[-checksum-] protected by [-default,-]{+checksums,+} but this can optionally be\n enabled for a cluster. When enabled, each data page will be [-assigned-]{+ASSIGNED+} a\n checksum that is updated when the page is written and verified [-every-]{+each+} time\n the page is read. Only data pages are protected by [-checksums,-]{+checksums;+} internal data\n structures and temporary files are not.\n </para>\n\n <para>\n Checksums [-are-]{+verification is+} normally [-enabled-]{+ENABLED+} when the cluster is initialized using <link\n linkend=\"app-initdb-data-checksums\"><application>initdb</application></link>.\n They can also be enabled or disabled at a later time as an offline\n operation. 
Data checksums are enabled or disabled at the full cluster\n level, and cannot be specified[-individually-] for {+individual+} databases or tables.\n </para>\n\n <para>\n@@ -260,9 +260,9 @@\n </para>\n\n <para>\n When attempting to recover from corrupt [-data-]{+data,+} it may be necessary to bypass\n the checksum [-protection in order to recover data.-]{+protection.+} To do this, temporarily set the configuration\n parameter <xref linkend=\"guc-ignore-checksum-failure\" />.\n </para>\n\n <sect2 id=\"checksums-offline-enable-disable\">\n\ncommit fc69321a5ebc55cb1df9648bc28215672cffbf31\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Jan 20 16:10:49 2021 -0600\n\n Doc review for psql \\dX\n \n ad600bba0422dde4b73fbd61049ff2a3847b068a\n\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex 13c1edfa4d..d0f397d5ea 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -1930,8 +1930,9 @@ testdb=>\n </para>\n\n <para>\n The [-column-]{+status+} of [-the-]{+each+} kind of extended [-stats-]{+statistics is shown in a column+}\n{+ named after the \"kind\"+} (e.g. 
[-Ndistinct) shows its status.-]{+Ndistinct).+}\n NULL means that it doesn't [-exists.-]{+exist.+} \"defined\" means that it was requested\n when creating the statistics.\n You can use pg_stats_ext if you'd like to know whether <link linkend=\"sql-analyze\">\n <command>ANALYZE</command></link> was run and statistics are available to the\n\ncommit 78035a725e13e28bbae9e62fe7013bef435d70e3\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Feb 6 15:13:37 2021 -0600\n\n *an exclusive\n \n 3c84046490bed3c22e0873dc6ba492e02b8b9051\n\ndiff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml\nindex 85cf23bca2..b6d2c2014f 100644\n--- a/doc/src/sgml/ref/drop_index.sgml\n+++ b/doc/src/sgml/ref/drop_index.sgml\n@@ -45,7 +45,7 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] <replaceable class=\"parameter\">name</r\n <para>\n Drop the index without locking out concurrent selects, inserts, updates,\n and deletes on the index's table. A normal <command>DROP INDEX</command>\n acquires {+an+} exclusive lock on the table, blocking other accesses until the\n index drop can be completed. 
With this option, the command instead\n waits until conflicting transactions have completed.\n </para>\n\ncommit c36ac4c1f85f620ae9ce9cfa7c14b6c95dcdedc5\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Dec 30 09:39:16 2020 -0600\n\n function comment: get_am_name\n\ndiff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c\nindex eff9535ed0..188109e474 100644\n--- a/src/backend/commands/amcmds.c\n+++ b/src/backend/commands/amcmds.c\n@@ -186,7 +186,7 @@ get_am_oid(const char *amname, bool missing_ok)\n}\n\n/*\n * get_am_name - given an access method [-OID name and type,-]{+OID,+} look up its name.\n */\nchar *\nget_am_name(Oid amOid)\n\ncommit 22e6f0e2d4eaf78e449393bf2bf8b3f8af2b71f8\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Mon Jan 18 14:37:17 2021 -0600\n\n One fewer (not one less)\n\ndiff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c\nindex 9abcee32af..f6760eb31e 100644\n--- a/contrib/pageinspect/heapfuncs.c\n+++ b/contrib/pageinspect/heapfuncs.c\n@@ -338,7 +338,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,\n\t\tattr = TupleDescAttr(tupdesc, i);\n\n\t\t/*\n\t\t * Tuple header can specify [-less-]{+fewer+} attributes than tuple descriptor as\n\t\t * ALTER TABLE ADD COLUMN without DEFAULT keyword does not actually\n\t\t * change tuples in pages, so attributes with numbers greater than\n\t\t * (t_infomask2 & HEAP_NATTS_MASK) should be treated as NULL.\ndiff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml\nindex cebc09ef91..1b00e543a6 100644\n--- a/doc/src/sgml/charset.sgml\n+++ b/doc/src/sgml/charset.sgml\n@@ -619,7 +619,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE \"fr_FR\";\n name such as <literal>de_DE</literal> can be considered unique\n within a given database even though it would not be unique globally.\n Use of the stripped collation names is recommended, since it will\n make one [-less-]{+fewer+} thing you need to change if you decide to change to\n another 
database encoding. Note however that the <literal>default</literal>,\n <literal>C</literal>, and <literal>POSIX</literal> collations can be used regardless of\n the database encoding.\ndiff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml\nindex 0b24a55505..693423e524 100644\n--- a/doc/src/sgml/ref/create_type.sgml\n+++ b/doc/src/sgml/ref/create_type.sgml\n@@ -867,7 +867,7 @@ CREATE TYPE <replaceable class=\"parameter\">name</replaceable>\n Before <productname>PostgreSQL</productname> version 8.3, the name of\n a generated array type was always exactly the element type's name with one\n underscore character (<literal>_</literal>) prepended. (Type names were\n therefore restricted in length to one [-less-]{+fewer+} character than other names.)\n While this is still usually the case, the array type name may vary from\n this in case of maximum-length names or collisions with user type names\n that begin with underscore. Writing code that depends on this convention\ndiff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml\nindex e81addcfa9..aa172d102b 100644\n--- a/doc/src/sgml/rules.sgml\n+++ b/doc/src/sgml/rules.sgml\n@@ -1266,7 +1266,7 @@ CREATE [ OR REPLACE ] RULE <replaceable class=\"parameter\">name</replaceable> AS\n<para>\n The query trees generated from rule actions are thrown into the\n rewrite system again, and maybe more rules get applied resulting\n in [-more-]{+additional+} or [-less-]{+fewer+} query trees.\n So a rule's actions must have either a different\n command type or a different result relation than the rule itself is\n on, otherwise this recursive process will end up in an infinite loop.\ndiff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c\nindex 24a27e387d..0b56b0fa5a 100644\n--- a/src/backend/access/common/heaptuple.c\n+++ b/src/backend/access/common/heaptuple.c\n@@ -719,11 +719,11 @@ heap_copytuple_with_tuple(HeapTuple src, HeapTuple dest)\n}\n\n/*\n * Expand a tuple 
which has [-less-]{+fewer+} attributes than required. For each attribute\n * not present in the sourceTuple, if there is a missing value that will be\n * used. Otherwise the attribute will be set to NULL.\n *\n * The source tuple must have [-less-]{+fewer+} attributes than the required number.\n *\n * Only one of targetHeapTuple and targetMinimalTuple may be supplied. The\n * other argument must be NULL.\ndiff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\nindex 7295cf0215..64908ac39c 100644\n--- a/src/backend/commands/analyze.c\n+++ b/src/backend/commands/analyze.c\n@@ -1003,7 +1003,7 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)\n * As of May 2004 we use a new two-stage method: Stage one selects up\n * to targrows random blocks (or all blocks, if there aren't so many).\n * Stage two scans these blocks and uses the Vitter algorithm to create\n * a random sample of targrows rows (or [-less,-]{+fewer,+} if there are [-less-]{+fewer+} in the\n * sample of blocks). The two stages are executed simultaneously: each\n * block is processed as soon as stage one returns its number and while\n * the rows are read stage two controls which ones are to be inserted\ndiff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c\nindex 4d185c27b4..078aaef539 100644\n--- a/src/backend/utils/adt/jsonpath_exec.c\n+++ b/src/backend/utils/adt/jsonpath_exec.c\n@@ -263,7 +263,7 @@ static int\tcompareDatetime(Datum val1, Oid typid1, Datum val2, Oid typid2,\n *\t\timplement @? and @@ operators, which in turn are intended to have an\n *\t\tindex support. Thus, it's desirable to make it easier to achieve\n *\t\tconsistency between index scan results and sequential scan results.\n *\t\tSo, we throw as [-less-]{+few+} errors as possible. Regarding this function,\n *\t\tsuch behavior also matches behavior of JSON_EXISTS() clause of\n *\t\tSQL/JSON. 
Regarding jsonb_path_match(), this function doesn't have\n *\t\tan analogy in SQL/JSON, so we define its behavior on our own.\ndiff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\nindex 47ca4ddbb5..52314d3aa1 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -645,7 +645,7 @@ scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq,\n\n\t\t\t/*\n\t\t\t * The calculation so far gave us a selectivity for the \"<=\" case.\n\t\t\t * We'll have one [-less-]{+fewer+} tuple for \"<\" and one additional tuple for\n\t\t\t * \">=\", the latter of which we'll reverse the selectivity for\n\t\t\t * below, so we can simply subtract one tuple for both cases. The\n\t\t\t * cases that need this adjustment can be identified by iseq being\ndiff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c\nindex fa2b49c676..55c9445898 100644\n--- a/src/backend/utils/cache/catcache.c\n+++ b/src/backend/utils/cache/catcache.c\n@@ -1497,7 +1497,7 @@ GetCatCacheHashValue(CatCache *cache,\n *\t\tIt doesn't make any sense to specify all of the cache's key columns\n *\t\there: since the key is unique, there could be at most one match, so\n *\t\tyou ought to use SearchCatCache() instead. 
Hence this function takes\n *\t\tone [-less-]{+fewer+} Datum argument than SearchCatCache() does.\n *\n *\t\tThe caller must not modify the list object or the pointed-to tuples,\n *\t\tand must call ReleaseCatCacheList() when done with the list.\ndiff --git a/src/backend/utils/misc/sampling.c b/src/backend/utils/misc/sampling.c\nindex 0c327e823f..7348b86682 100644\n--- a/src/backend/utils/misc/sampling.c\n+++ b/src/backend/utils/misc/sampling.c\n@@ -42,7 +42,7 @@ BlockSampler_Init(BlockSampler bs, BlockNumber nblocks, int samplesize,\n\tbs->N = nblocks;\t\t\t/* measured table size */\n\n\t/*\n\t * If we decide to reduce samplesize for tables that have [-less-]{+fewer+} or not much\n\t * more than samplesize blocks, here is the place to do it.\n\t */\n\tbs->n = samplesize;\ndiff --git a/src/backend/utils/mmgr/freepage.c b/src/backend/utils/mmgr/freepage.c\nindex e4ee1aab97..10a1effb74 100644\n--- a/src/backend/utils/mmgr/freepage.c\n+++ b/src/backend/utils/mmgr/freepage.c\n@@ -495,7 +495,7 @@ FreePageManagerDump(FreePageManager *fpm)\n * if we search the parent page for the first key greater than or equal to\n * the first key on the current page, the downlink to this page will be either\n * the exact index returned by the search (if the first key decreased)\n * or one [-less-]{+fewer+} (if the first key increased).\n */\nstatic void\nFreePageBtreeAdjustAncestorKeys(FreePageManager *fpm, FreePageBtree *btp)\ndiff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\nindex a4a3f40048..627a244fb7 100644\n--- a/src/bin/pgbench/pgbench.c\n+++ b/src/bin/pgbench/pgbench.c\n@@ -6458,7 +6458,7 @@ threadRun(void *arg)\n\n\t\t\t/*\n\t\t\t * If advanceConnectionState changed client to finished state,\n\t\t\t * that's one [-less-]{+fewer+} client that remains.\n\t\t\t */\n\t\t\tif (st->state == CSTATE_FINISHED || st->state == CSTATE_ABORTED)\n\t\t\t\tremains--;\ndiff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h\nindex d27c8601fa..e3d2e751ea 
100644\n--- a/src/include/pg_config_manual.h\n+++ b/src/include/pg_config_manual.h\n@@ -21,7 +21,7 @@\n\n/*\n * Maximum length for identifiers (e.g. table names, column names,\n * function names). Names actually are limited to one [-less-]{+fewer+} byte than this,\n * because the length must include a trailing zero byte.\n *\n * Changing this requires an initdb.\n@@ -87,7 +87,7 @@\n\n/*\n * MAXPGPATH: standard size of a pathname buffer in PostgreSQL (hence,\n * maximum usable pathname length is one [-less).-]{+fewer).+}\n *\n * We'd use a standard system header symbol for this, if there weren't\n * so many to choose from: MAXPATHLEN, MAX_PATH, PATH_MAX are all\ndiff --git a/src/interfaces/ecpg/include/sqlda-native.h b/src/interfaces/ecpg/include/sqlda-native.h\nindex 67d3c7b4e4..9e73f1f1b1 100644\n--- a/src/interfaces/ecpg/include/sqlda-native.h\n+++ b/src/interfaces/ecpg/include/sqlda-native.h\n@@ -7,7 +7,7 @@\n\n/*\n * Maximum length for identifiers (e.g. table names, column names,\n * function names). 
Names actually are limited to one [-less-]{+fewer+} byte than this,\n * because the length must include a trailing zero byte.\n *\n * This should be at least as much as NAMEDATALEN of the database the\ndiff --git a/src/test/regress/expected/geometry.out b/src/test/regress/expected/geometry.out\nindex 84f7eabb66..9799cfbdbd 100644\n--- a/src/test/regress/expected/geometry.out\n+++ b/src/test/regress/expected/geometry.out\n@@ -4325,7 +4325,7 @@ SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';\n <(100,1),115> | ((-15,1),(18.6827201635,82.3172798365),(100,116),(181.317279836,82.3172798365),(215,1),(181.317279836,-80.3172798365),(100,-114),(18.6827201635,-80.3172798365))\n(6 rows)\n\n-- Too [-less-]{+few+} points error\nSELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';\nERROR: must request at least 2 points\n-- Zero radius error\ndiff --git a/src/test/regress/sql/geometry.sql b/src/test/regress/sql/geometry.sql\nindex 96df0ab05a..b0ab6d03ec 100644\n--- a/src/test/regress/sql/geometry.sql\n+++ b/src/test/regress/sql/geometry.sql\n@@ -424,7 +424,7 @@ SELECT f1, f1::polygon FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';\n-- To polygon with less points\nSELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';\n\n-- Too [-less-]{+few+} points error\nSELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';\n\n-- Zero radius error\n\ncommit 1c00249319faf6dc23aadf4568ead5adc65ff57f\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Feb 10 17:45:07 2021 -0600\n\n comment typos\n\ndiff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h\nindex 395be1ca9a..99a03c8f21 100644\n--- a/src/include/lib/simplehash.h\n+++ b/src/include/lib/simplehash.h\n@@ -626,7 +626,7 @@ restart:\n\t\tuint32\t\tcuroptimal;\n\t\tSH_ELEMENT_TYPE *entry = &data[curelem];\n\n\t\t/* any empty bucket can[-directly-] be used {+directly+} */\n\t\tif (entry->status == SH_STATUS_EMPTY)\n\t\t{\n\t\t\ttb->members++;\n\ncommit 
2ac95b66e30785d480ef04c11d12b1075548045e\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Nov 14 23:09:21 2020 -0600\n\n typos in master\n\ndiff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml\nindex 7c341c8e3f..fe88c2273a 100644\n--- a/doc/src/sgml/datatype.sgml\n+++ b/doc/src/sgml/datatype.sgml\n@@ -639,7 +639,7 @@ NUMERIC\n\n <para>\n The <literal>NaN</literal> (not a number) value is used to represent\n undefined [-calculational-]{+computational+} results. In general, any operation with\n a <literal>NaN</literal> input yields another <literal>NaN</literal>.\n The only exception is when the operation's other inputs are such that\n the same output would be obtained if the <literal>NaN</literal> were to\n\ncommit d6d3499f52e664b7da88a3f2c94701cae6d76609\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Dec 5 22:43:12 2020 -0600\n\n pg_restore: \"must be specified\" and --list\n \n This was discussed here, but the idea got lost.\n https://www.postgresql.org/message-id/flat/20190612170201.GA11881%40alvherre.pgsql#2984347ab074e6f198bd294fa41884df\n\ndiff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c\nindex 589b4aed53..f6e6e41329 100644\n--- a/src/bin/pg_dump/pg_restore.c\n+++ b/src/bin/pg_dump/pg_restore.c\n@@ -305,7 +305,7 @@ main(int argc, char **argv)\n\t/* Complain if neither -f nor -d was specified (except if dumping TOC) */\n\tif (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)\n\t{\n\t\tpg_log_error(\"one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified\");\n\t\texit_nicely(1);\n\t}\n\ndiff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl\nindex 083fb3ad08..8280914c2a 100644\n--- a/src/bin/pg_dump/t/001_basic.pl\n+++ b/src/bin/pg_dump/t/001_basic.pl\n@@ -63,8 +63,8 @@ command_fails_like(\n\ncommand_fails_like(\n\t['pg_restore'],\n\tqr{\\Qpg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or 
-l/--list+} must be specified\\E},\n\t'pg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified');\n\ncommand_fails_like(\n\t[ 'pg_restore', '-s', '-a', '-f -' ],\n\ncommit 7c2dee70b0450bac5cfa2c3db52b4a2b2e535a9e\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Feb 15 15:53:34 2020 -0600\n\n Update comment obsolete since 69c3936a\n\ndiff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c\nindex 601b6dab03..394b4e667b 100644\n--- a/src/backend/executor/nodeAgg.c\n+++ b/src/backend/executor/nodeAgg.c\n@@ -2064,8 +2064,7 @@ initialize_hash_entry(AggState *aggstate, TupleHashTable hashtable,\n}\n\n/*\n * Look up hash entries for the current tuple in all hashed grouping [-sets,-]\n[- * returning an array of pergroup pointers suitable for advance_aggregates.-]{+sets.+}\n *\n * Be aware that lookup_hash_entry can reset the tmpcontext.\n *\n\ncommit 4b81f9512395cb321730e0a3dba1c659b9c2fee3\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Fri Jan 8 13:09:55 2021 -0600\n\n doc: pageinspect\n \n d6061f83a166b015657fda8623c704fcb86930e9\n \n backpatch to 9.6?\n\ndiff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml\nindex a0be779940..a7bce41b7c 100644\n--- a/doc/src/sgml/pageinspect.sgml\n+++ b/doc/src/sgml/pageinspect.sgml\n@@ -211,7 +211,7 @@ test=# SELECT tuple_data_split('pg_class'::regclass, t_data, t_infomask, t_infom\n </para>\n <para>\n If <parameter>do_detoast</parameter> is <literal>true</literal>,\n [-attribute that-]{+attributes+} will be detoasted as needed. Default value is\n <literal>false</literal>.\n </para>\n </listitem>",
"msg_date": "Wed, 10 Feb 2021 17:55:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "Rebased, with a few additions.",
"msg_date": "Mon, 22 Feb 2021 02:03:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Mon, Feb 22, 2021 at 02:03:45AM -0600, Justin Pryzby wrote:\n> Rebased, with a few additions.\n\nThanks. I have done a pass through this series, and applied most of\nthis stuff with a backpatch for the doc portions.\n\n+ The status of each kind of extended statistics is shown in a column\n+ named after the \"kind\" (e.g. Ndistinct).\n+ NULL means that it doesn't exist. \"defined\" means that it was requested\nFrom 0009, there is a grammar mistake on HEAD here, but I don't\nunderstand what you mean by \"kind\" here. Wouldn't it be better to not\nuse quotes and just refer to \"its type of statistics\"?\n\n0016 was missing some <command> markups.\n\nThis leaves 0003, 0004, 0005, 0010, 0012, 0018, 0020 and 0021 as these\ndid not look like improvements after review.\n--\nMichael",
"msg_date": "Wed, 24 Feb 2021 16:18:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 04:18:51PM +0900, Michael Paquier wrote:\n> On Mon, Feb 22, 2021 at 02:03:45AM -0600, Justin Pryzby wrote:\n> > Rebased, with a few additions.\n> \n> Thanks. I have done a pass through this series, and applied most of\n> this stuff with a backpatch for the doc portions.\n> \n> + The status of each kind of extended statistics is shown in a column\n> + named after the \"kind\" (e.g. Ndistinct).\n> + NULL means that it doesn't exist. \"defined\" means that it was requested\n> From 0009, there is a grammar mistake on HEAD here, but I don't\n> understand what you mean by \"kind\" here. Wouldn't it be better to not\n> use quotes and just refer to \"its type of statistics\"?\n\nI mean stxkind. \"type\" doesn't mean anything.\n\n> 0016 was missing some <command> markups.\n> \n> This leaves 0003, 0004, 0005, 0010, 0012, 0018, 0020 and 0021 as these\n> did not look like improvements after review.\n\nThanks.\n\n- vacuum the main relation. This option is required when the\n+ vacuum the main relation. This option may not be disabled when the\n <literal>FULL</literal> option is used.\n\n\"This option is required..\" sounds like \"this option must be specified\", which\nis wrong.\n\n- publisher. Once the synchronization is done, the control of the\n+ publisher. 
Once synchronization is done, control of the\n replication of the table is given back to the main apply process where\n- the replication continues as normal.\n+ replication continues as normal.\n\nI think \"the synchronization\" is ok, but \"the control\" is poor, and \"the\nreplication\" is unneeded.\n\n When creating an index on a partitioned table, this column is set to\n- the number of partitions on which the index has been completed.\n+ the number of partitions on which the index has been created.\n\nWhat is index \"completion\" ?\n\n This is very\n- convenient, as not only will the existing partitions become indexed, but\n- also any partitions that are created in the future will. One limitation is\n+ convenient, as not only the existing partitions will be indexed, but\n+ so will any partitions that are created in the future. One limitation is\n\n\"become indexed\" sounds strange (and vague), and \"will.\" is additionally awkward.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Feb 2021 01:39:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 01:39:55AM -0600, Justin Pryzby wrote:\n> On Wed, Feb 24, 2021 at 04:18:51PM +0900, Michael Paquier wrote:\n>> + The status of each kind of extended statistics is shown in a column\n>> + named after the \"kind\" (e.g. Ndistinct).\n>> + NULL means that it doesn't exist. \"defined\" means that it was requested\n>> From 0009, there is a grammar mistake on HEAD here, but I don't\n>> understand what you mean by \"kind\" here. Wouldn't it be better to not\n>> use quotes and just refer to \"its type of statistics\"?\n> \n> I mean stxkind. \"type\" doesn't mean anything.\n\nHow would you reword that then?\n\n> - vacuum the main relation. This option is required when the\n> + vacuum the main relation. This option may not be disabled when the\n> <literal>FULL</literal> option is used.\n> \n> \"This option is required..\" sounds like \"this option must be specified\", which\n> is wrong.\n\nHmm. Wouldn't it be better to say then \"this option cannot be\ndisabled when FULL is used\"?\n\n> When creating an index on a partitioned table, this column is set to\n> - the number of partitions on which the index has been completed.\n> + the number of partitions on which the index has been created.\n> \n> What is index \"completion\" ?\n\nDone with. Perhaps Alvaro has a comment to offer here as this comes\nfrom ab0dfc9.\n\n> This is very\n> - convenient, as not only will the existing partitions become indexed, but\n> - also any partitions that are created in the future will. One limitation is\n> + convenient, as not only the existing partitions will be indexed, but\n> + so will any partitions that are created in the future. One limitation is\n> \n> \"become indexed\" sounds strange (and vague), and \"will.\" is additionally awkward.\n\nNot that strange to me (see dbca945).\n--\nMichael",
"msg_date": "Thu, 25 Feb 2021 17:05:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 04:18:51PM +0900, Michael Paquier wrote:\n> This leaves 0003, 0004, 0005, 0010, 0012, 0018, 0020 and 0021 as these\n> did not look like improvements after review.\n\nIt looks like you applied 0010...but I agree that it's not an improvement. It\nappears that's something I intended to go back and revisit myself.\nThe rest of the patch looks right, to me.\n\nSubject: [PATCH 10/21] doc review for checksum docs\n doc/src/sgml/wal.sgml | 18 +++++++++--------- \n\nI'm suggesting to either revert that part, or apply these more polished changes\nin 0002.\n\n-- \nJustin",
"msg_date": "Sun, 28 Feb 2021 18:46:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 06:46:47PM -0600, Justin Pryzby wrote:\n> It looks like you applied 0010...but I agree that it's not an improvement. It\n> appears that's something I intended to go back and revisit myself.\n> The rest of the patch looks right, to me.\n\nOops. This was not intended.\n\n> I'm suggesting to either revert that part, or apply these more polished changes\n> in 0002.\n\nI would just group both things together. Monday helping, I can see\nthat the new wording is better on a couple of points after doing a\ndiff of wal.sgml with c82d59d6:\n- \"checksum protected\" in the first sentence is weird, so I agree that\nusing \"By default, data pages are not protected by checksums\" is an\nimprovement.\n- \"assigned\" is indeed a bit strange, \"includes\" is an improvement,\nand I would tend to not use a passive form here.\n- \"to recover from corrupt data\" is redundant with \"to recover data\"\nso the second one should be removed. My take is to use \"page\ncorruptions\" instead of \"corrupt data\", which should be corrupted data\nto be grammatically correct.\n\nThis gives the attached, that has as result to not change the second\nparagraph compared to the pre-c82d59d6 version of the area.\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 13:11:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 01:11:10PM +0900, Michael Paquier wrote:\n> On Sun, Feb 28, 2021 at 06:46:47PM -0600, Justin Pryzby wrote:\n> > It looks like you applied 0010...but I agree that it's not an improvement. It\n> > appears that's something I intended to go back and revisit myself.\n> > The rest of the patch looks right, to me.\n> \n> Oops. This was not intended.\n> \n> > I'm suggesting to either revert that part, or apply these more polished changes\n> > in 0002.\n> \n> I would just group both things together. Monday helping, I can see\n> that the new wording is better on a couple of points after doing a\n> diff of wal.sgml with c82d59d6:\n> - \"checksum protected\" in the first sentence is weird, so I agree that\n> using \"By default, data pages are not protected by checksums\" is an\n> improvement.\n> - \"assigned\" is indeed a bit strange, \"includes\" is an improvement,\n> and I would tend to not use a passive form here.\n\n+1\n\n> - \"to recover from corrupt data\" is redundant with \"to recover data\"\n> so the second one should be removed. My take is to use \"page\n> corruptions\" instead of \"corrupt data\", which should be corrupted data\n> to be grammatically correct.\n\n> - Checksums verification is normally ENABLED when the cluster is initialized using <link\n> + Checksums are normally enabled when the cluster is initialized using <link\n\nI still have an issue with the sentence that begins:\n\"Checksums are normally enabled...\"\n\nIt sounds much too close to \"Checksums are typically enabled.\", which is wrong.\nSo I proposed something like:\n\n|Enabling checksums is normally done when the cluster is first created by <link\n|...\n\nNote, the patch I sent said \"create\" but should be \"created\".\n\n> - When attempting to recover from corrupt data, it may be necessary to bypass\n> - the checksum protection. 
To do this, temporarily set the configuration\n> - parameter <xref linkend=\"guc-ignore-checksum-failure\" />.\n> + When attempting to recover from page corruptions, it may be necessary to\n> + bypass the checksum protection. To do this, temporarily set the\n> + configuration parameter <xref linkend=\"guc-ignore-checksum-failure\" />.\n\n\"page corruptions\" is wrong .. you could say \"corrupt pages\"\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Feb 2021 22:33:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 10:33:55PM -0600, Justin Pryzby wrote:\n> I still have an issue with the sentence that begins:\n> \"Checksums are normally enabled...\"\n\nYou could say here \"Checksums can be enabled\", but \"normally\" does not\nsound bad to me either as it insists on the fact that it is better to\ndo that when the cluster is initdb'd as this has no downtime compared\nto enabling checksums on an existing cluster.\n\n> Note, the patch I sent said \"create\" but should be \"created\".\n\nInitialized sounds better to me, FWIW.\n\n> \"page corruptions\" is wrong .. you could say \"corrupt pages\"\n\n\"corruptED pages\" would sound more correct to me as something that has\nalready happened. Anyway, I'd rather keep what I am proposing\nupthread.\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 15:17:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 03:17:40PM +0900, Michael Paquier wrote:\n> You could say here \"Checksums can be enabled\", but \"normally\" does not\n> sound bad to me either as it insists on the fact that it is better to\n> do that when the cluster is initdb'd as this has no downtime compared\n> to enabling checksums on an existing cluster.\n\nI looked at that again this morning, and the last version sent\nupthread still looked fine to me, so I have just applied that.\n\nThanks for caring about that, Justin.\n--\nMichael",
"msg_date": "Tue, 2 Mar 2021 10:57:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "Another round of doc review, not yet including all of yesterday's commits.\n\n29c8d614c3 duplicate words\ndiff --git a/src/include/lib/sort_template.h b/src/include/lib/sort_template.h\nindex 771c789ced..24d6d0006c 100644\n--- a/src/include/lib/sort_template.h\n+++ b/src/include/lib/sort_template.h\n@@ -241,7 +241,7 @@ ST_SCOPE void ST_SORT(ST_ELEMENT_TYPE *first, size_t n\n \n /*\n * Find the median of three values. Currently, performance seems to be best\n- * if the the comparator is inlined here, but the med3 function is not inlined\n+ * if the comparator is inlined here, but the med3 function is not inlined\n * in the qsort function.\n */\n static pg_noinline ST_ELEMENT_TYPE *\ne7c370c7c5 pg_amcheck: remove Double semi-colon\ndiff --git a/src/bin/pg_amcheck/t/004_verify_heapam.pl b/src/bin/pg_amcheck/t/004_verify_heapam.pl\nindex 36607596b1..2171d236a7 100644\n--- a/src/bin/pg_amcheck/t/004_verify_heapam.pl\n+++ b/src/bin/pg_amcheck/t/004_verify_heapam.pl\n@@ -175,7 +175,7 @@ sub write_tuple\n \tseek($fh, $offset, 0)\n \t\tor BAIL_OUT(\"seek failed: $!\");\n \tdefined(syswrite($fh, $buffer, HEAPTUPLE_PACK_LENGTH))\n-\t\tor BAIL_OUT(\"syswrite failed: $!\");;\n+\t\tor BAIL_OUT(\"syswrite failed: $!\");\n \treturn;\n }\n \nb745e9e60e a statistics objects\ndiff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c\nindex 463d44a68a..4674168ff8 100644\n--- a/src/backend/statistics/extended_stats.c\n+++ b/src/backend/statistics/extended_stats.c\n@@ -254,7 +254,7 @@ BuildRelationExtStatistics(Relation onerel, double totalrows,\n * that would require additional columns.\n *\n * See statext_compute_stattarget for details about how we compute statistics\n- * target for a statistics objects (from the object target, attribute targets\n+ * target for a statistics object (from the object target, attribute targets\n * and default statistics target).\n */\n int\ne7d5c5d9dc guc.h: remove mention of \"doit\"\ndiff --git 
a/src/include/utils/guc.h b/src/include/utils/guc.h\nindex 1892c7927b..1126b34798 100644\n--- a/src/include/utils/guc.h\n+++ b/src/include/utils/guc.h\n@@ -90,8 +90,7 @@ typedef enum\n * dividing line between \"interactive\" and \"non-interactive\" sources for\n * error reporting purposes.\n *\n- * PGC_S_TEST is used when testing values to be used later (\"doit\" will always\n- * be false, so this never gets stored as the actual source of any value).\n+ * PGC_S_TEST is used when testing values to be used later.\n * For example, ALTER DATABASE/ROLE tests proposed per-database or per-user\n * defaults this way, and CREATE FUNCTION tests proposed function SET clauses\n * this way. This is an interactive case, but it needs its own source value\nad5f9a2023 Caller\ndiff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c\nindex 9961d27df4..09fcff6729 100644\n--- a/src/backend/utils/adt/jsonfuncs.c\n+++ b/src/backend/utils/adt/jsonfuncs.c\n@@ -1651,7 +1651,7 @@ push_null_elements(JsonbParseState **ps, int num)\n * this path. E.g. the path [a][0][b] with the new value 1 will produce the\n * structure {a: [{b: 1}]}.\n *\n- * Called is responsible to make sure such path does not exist yet.\n+ * Caller is responsible to make sure such path does not exist yet.\n */\n static void\n push_path(JsonbParseState **st, int level, Datum *path_elems,\n@@ -4887,7 +4887,7 @@ IteratorConcat(JsonbIterator **it1, JsonbIterator **it2,\n * than just one last element, this flag will instruct to create the whole\n * chain of corresponding objects and insert the value.\n *\n- * JB_PATH_CONSISTENT_POSITION for an array indicates that the called wants to\n+ * JB_PATH_CONSISTENT_POSITION for an array indicates that the caller wants to\n * keep values with fixed indices. 
Indices for existing elements could be\n * changed (shifted forward) in case if the array is prepended with a new value\n * and a negative index out of the range, so this behavior will be prevented\n9acedbd4af as\ndiff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c\nindex 20e7d57d41..40a54ad0bd 100644\n--- a/src/backend/commands/copyfrom.c\n+++ b/src/backend/commands/copyfrom.c\n@@ -410,7 +410,7 @@ CopyMultiInsertBufferCleanup(CopyMultiInsertInfo *miinfo,\n * Once flushed we also trim the tracked buffers list down to size by removing\n * the buffers created earliest first.\n *\n- * Callers should pass 'curr_rri' is the ResultRelInfo that's currently being\n+ * Callers should pass 'curr_rri' as the ResultRelInfo that's currently being\n * used. When cleaning up old buffers we'll never remove the one for\n * 'curr_rri'.\n */\n9f78de5042 exist\ndiff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\nindex 5bdaceefd5..182a133033 100644\n--- a/src/backend/commands/analyze.c\n+++ b/src/backend/commands/analyze.c\n@@ -617,7 +617,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n \t *\n \t * We assume that VACUUM hasn't set pg_class.reltuples already, even\n \t * during a VACUUM ANALYZE. Although VACUUM often updates pg_class,\n-\t * exceptions exists. A \"VACUUM (ANALYZE, INDEX_CLEANUP OFF)\" command\n+\t * exceptions exist. A \"VACUUM (ANALYZE, INDEX_CLEANUP OFF)\" command\n \t * will never update pg_class entries for index relations. 
It's also\n \t * possible that an individual index's pg_class entry won't be updated\n \t * during VACUUM if the index AM returns NULL from its amvacuumcleanup()\na45af383ae rebuilt\ndiff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c\nindex 096a06f7b3..6487a9e3fc 100644\n--- a/src/backend/commands/cluster.c\n+++ b/src/backend/commands/cluster.c\n@@ -1422,7 +1422,7 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,\n \t\t\t\t\t\t\t\t PROGRESS_CLUSTER_PHASE_FINAL_CLEANUP);\n \n \t/*\n-\t * If the relation being rebuild is pg_class, swap_relation_files()\n+\t * If the relation being rebuilt is pg_class, swap_relation_files()\n \t * couldn't update pg_class's own pg_class entry (check comments in\n \t * swap_relation_files()), thus relfrozenxid was not updated. That's\n \t * annoying because a potential reason for doing a VACUUM FULL is a\nf24c2c1075 docs review: logical replication\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex a382258aee..bc4a8b2279 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -4137,7 +4137,7 @@ restore_command = 'copy \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n On the subscriber side, specifies how many replication origins (see\n <xref linkend=\"replication-origins\"/>) can be tracked simultaneously,\n effectively limiting how many logical replication subscriptions can\n- be created on the server. Setting it a lower value than the current\n+ be created on the server. 
Setting it to a lower value than the current\n number of tracked replication origins (reflected in\n <link linkend=\"view-pg-replication-origin-status\">pg_replication_origin_status</link>,\n not <link linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>)\ndiff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\nindex 3fad5f34e6..7645ee032c 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -602,13 +602,12 @@\n </para>\n \n <para>\n- The subscriber also requires the <varname>max_replication_slots</varname>\n- be set to configure how many replication origins can be tracked. In this\n- case it should be set to at least the number of subscriptions that will be\n- added to the subscriber, plus some reserve for table synchronization.\n- <varname>max_logical_replication_workers</varname> must be set to at least\n- the number of subscriptions, again plus some reserve for the table\n- synchronization. Additionally the <varname>max_worker_processes</varname>\n+ <varname>max_replication_slots</varname> must also be set on the subscriber.\n+ It should be set to at least the number of\n+ subscriptions that will be added to the subscriber, plus some reserve for\n+ table synchronization. <varname>max_logical_replication_workers</varname>\n+ must be set to at least the number of subscriptions, again plus some reserve\n+ for the table synchronization. Additionally the <varname>max_worker_processes</varname>\n may need to be adjusted to accommodate for replication workers, at least\n (<varname>max_logical_replication_workers</varname>\n + <literal>1</literal>). 
Note that some extensions and parallel queries\n83f9954468 accessmtd\ndiff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\nindex 9f6303266f..ba03e8aa8f 100644\n--- a/src/backend/catalog/heap.c\n+++ b/src/backend/catalog/heap.c\n@@ -1119,6 +1119,7 @@ AddNewRelationType(const char *typeName,\n *\treltypeid: OID to assign to rel's rowtype, or InvalidOid to select one\n *\treloftypeid: if a typed table, OID of underlying type; else InvalidOid\n *\townerid: OID of new rel's owner\n+ *\taccessmtd: OID of new rel's access method\n *\ttupdesc: tuple descriptor (source of column definitions)\n *\tcooked_constraints: list of precooked check constraints and defaults\n *\trelkind: relkind for new rel\n573eeb8666 language fixen\ndiff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\nindex 3bbae6dd91..4adb34a21b 100644\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n@@ -185,7 +185,7 @@\n never issue <command>VACUUM FULL</command>. In this approach, the idea\n is not to keep tables at their minimum size, but to maintain steady-state\n usage of disk space: each table occupies space equivalent to its\n- minimum size plus however much space gets used up between vacuumings.\n+ minimum size plus however much space gets used up between vacuum runs.\n Although <command>VACUUM FULL</command> can be used to shrink a table back\n to its minimum size and return the disk space to the operating system,\n there is not much point in this if the table will just grow again in the\ndiff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\nindex d1af624f44..89ff58338e 100644\n--- a/doc/src/sgml/perform.sgml\n+++ b/doc/src/sgml/perform.sgml\n@@ -1899,7 +1899,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;\n much faster. The following are configuration changes you can make\n to improve performance in such cases. 
Except as noted below, durability\n is still guaranteed in case of a crash of the database software;\n- only abrupt operating system stoppage creates a risk of data loss\n+ only an abrupt operating system crash creates a risk of data loss\n or corruption when these settings are used.\n \n <itemizedlist>\ndiff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml\nindex 4d60dc2cda..17579e50af 100644\n--- a/doc/src/sgml/ref/createuser.sgml\n+++ b/doc/src/sgml/ref/createuser.sgml\n@@ -44,7 +44,7 @@ PostgreSQL documentation\n If you wish to create a new superuser, you must connect as a\n superuser, not merely with <literal>CREATEROLE</literal> privilege.\n Being a superuser implies the ability to bypass all access permission\n- checks within the database, so superuserdom should not be granted lightly.\n+ checks within the database, so superuser access should not be granted lightly.\n </para>\n \n <para>\nd37a8a04f7 wal_compression\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex cf4e82e8b5..a382258aee 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -3098,7 +3098,7 @@ include_dir 'conf.d'\n <listitem>\n <para>\n When this parameter is <literal>on</literal>, the <productname>PostgreSQL</productname>\n- server compresses a full page image written to WAL when\n+ server compresses full page images written to WAL when\n <xref linkend=\"guc-full-page-writes\"/> is on or during a base backup.\n A compressed page image will be decompressed during WAL replay.\n The default value is <literal>off</literal>.\ne6025e2e81 amcheck\ndiff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml\nindex a2571d33ae..30fcb033e3 100644\n--- a/doc/src/sgml/amcheck.sgml\n+++ b/doc/src/sgml/amcheck.sgml\n@@ -457,14 +457,13 @@ SET client_min_messages = DEBUG1;\n </listitem>\n <listitem>\n <para>\n- File system or storage subsystem faults where checksums happen to\n- simply not be enabled.\n+ File system or storage 
subsystem faults where checksums are\n+ not enabled.\n </para>\n <para>\n- Note that <filename>amcheck</filename> examines a page as represented in some\n- shared memory buffer at the time of verification if there is only a\n- shared buffer hit when accessing the block. Consequently,\n- <filename>amcheck</filename> does not necessarily examine data read from the\n+ Note that <filename>amcheck</filename> examines a page as represented in a\n+ shared memory buffer at the time of verification. If the page is cached,\n+ <filename>amcheck</filename> will not examine data read from the\n file system at the time of verification. Note that when checksums are\n enabled, <filename>amcheck</filename> may raise an error due to a checksum\n failure when a corrupt block is read into a buffer.\nd987f0505e spell: vacuum\ndiff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\nindex 44e50620fd..d7fffddbce 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -1520,7 +1520,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n </listitem>\n </varlistentry>\n \n- <varlistentry id=\"reloption-autovacuum-vauum-scale-factor\" xreflabel=\"autovacuum_vacuum_scale_factor\">\n+ <varlistentry id=\"reloption-autovacuum-vacuum-scale-factor\" xreflabel=\"autovacuum_vacuum_scale_factor\">\n <term><literal>autovacuum_vacuum_scale_factor</literal>, <literal>toast.autovacuum_vacuum_scale_factor</literal> (<type>floating point</type>)\n <indexterm>\n <primary><varname>autovacuum_vacuum_scale_factor</varname> </primary>\n@@ -1610,7 +1610,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n </listitem>\n </varlistentry>\n \n- <varlistentry id=\"reloption-autovacuum-vauum-cost-limit\" xreflabel=\"autovacuum_vacuum_cost_limit\">\n+ <varlistentry id=\"reloption-autovacuum-vacuum-cost-limit\" xreflabel=\"autovacuum_vacuum_cost_limit\">\n 
<term><literal>autovacuum_vacuum_cost_limit</literal>, <literal>toast.autovacuum_vacuum_cost_limit</literal> (<type>integer</type>)\n <indexterm>\n <primary><varname>autovacuum_vacuum_cost_limit</varname></primary>\n69e597176b doc review: Fix use of cursor sensitivity terminology\ndiff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml\nindex 8a2b8cc892..aa3d1d1fa1 100644\n--- a/doc/src/sgml/ref/declare.sgml\n+++ b/doc/src/sgml/ref/declare.sgml\n@@ -335,7 +335,7 @@ DECLARE liahona CURSOR FOR SELECT * FROM films;\n <para>\n According to the SQL standard, changes made to insensitive cursors by\n <literal>UPDATE ... WHERE CURRENT OF</literal> and <literal>DELETE\n- ... WHERE CURRENT OF</literal> statements are visibible in that same\n+ ... WHERE CURRENT OF</literal> statements are visible in that same\n cursor. <productname>PostgreSQL</productname> treats these statements like\n all other data changing statements in that they are not visible in\n insensitive cursors.\n3399caf133 doc review: Make use of in-core query id added by commit 5fd9dfa5f5\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 04712769ca..cf4e82e8b5 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -7732,7 +7732,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n The <xref linkend=\"pgstatstatements\"/> extension also requires a query\n identifier to be computed. Note that an external module can\n alternatively be used if the in-core query identifier computation\n- specification isn't acceptable. In this case, in-core computation\n+ method isn't acceptable. In this case, in-core computation\n must be disabled. 
The default is <literal>off</literal>.\n </para>\n <note>\n567b33c755 doc review: Move pg_stat_statements query jumbling to core.\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex ae1a38b8bc..04712769ca 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -7737,7 +7737,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n </para>\n <note>\n <para>\n- To ensure that a only one query identifier is calculated and\n+ To ensure that only one query identifier is calculated and\n displayed, extensions that calculate query identifiers should\n throw an error if a query identifier has already been computed.\n </para>\ndiff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml\nindex 5ad4f0aed2..e235504e9a 100644\n--- a/doc/src/sgml/pgstatstatements.sgml\n+++ b/doc/src/sgml/pgstatstatements.sgml\n@@ -406,7 +406,7 @@\n <note>\n <para>\n The following details about constant replacement and\n- <structfield>queryid</structfield> only applies when <xref\n+ <structfield>queryid</structfield> only apply when <xref\n linkend=\"guc-compute-query-id\"/> is enabled. If you use an external\n module instead to compute <structfield>queryid</structfield>, you\n should refer to its documentation for details.\ne292ee3e35 doc review: Add function to log the memory contexts of specified backend process.\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex be22f4b61b..679738f615 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -24926,12 +24926,12 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n <returnvalue>boolean</returnvalue>\n </para>\n <para>\n- Requests to log the memory contexts whose backend process has\n- the specified process ID. These memory contexts will be logged at\n+ Requests to log the memory contexts of the backend with the\n+ specified process ID. These memory contexts will be logged at\n <literal>LOG</literal> message level. 
They will appear in\n the server log based on the log configuration set\n (See <xref linkend=\"runtime-config-logging\"/> for more information),\n- but will not be sent to the client whatever the setting of\n+ but will not be sent to the client regardless of\n <xref linkend=\"guc-client-min-messages\"/>.\n Only superusers can request to log the memory contexts.\n </para></entry>\n@@ -25037,9 +25037,9 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n \n <para>\n <function>pg_log_backend_memory_contexts</function> can be used\n- to log the memory contexts of the backend process. For example,\n+ to log the memory contexts of a backend process. For example,\n <programlisting>\n-postgres=# SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n+postgres=# SELECT pg_log_backend_memory_contexts(pg_backend_pid()); -- XXX\n pg_log_backend_memory_contexts \n --------------------------------\n t\n@@ -25061,8 +25061,8 @@ LOG: level: 1; TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0\n LOG: level: 1; ErrorContext: 8192 total in 1 blocks; 7928 free (3 chunks); 264 used\n LOG: Grand total: 1651920 bytes in 201 blocks; 622360 free (88 chunks); 1029560 used\n </screen>\n- For more than 100 child contexts under the same parent one,\n- 100 child contexts and a summary of the remaining ones will be logged.\n+ If there are more than 100 child contexts under the same parent, the first\n+ 100 child contexts are logged, along with a summary of the remaining contexts.\n Note that frequent calls to this function could incur significant overhead,\n because it may generate a large number of log messages.\n </para>\n85330eeda7 doc review: Stop archive recovery if WAL generated with wal_level=minimal is found.\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 26628f3e6d..ae1a38b8bc 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -2723,7 +2723,7 @@ include_dir 'conf.d'\n Note that changing <varname>wal_level</varname> to\n 
<literal>minimal</literal> makes any base backups taken before\n unavailable for archive recovery and standby server, which may\n- lead to database loss.\n+ lead to data loss.\n </para>\n <para>\n In <literal>logical</literal> level, the same information is logged as\ndiff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\nindex e0d3f246e9..d1af624f44 100644\n--- a/doc/src/sgml/perform.sgml\n+++ b/doc/src/sgml/perform.sgml\n@@ -1747,7 +1747,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;\n <xref linkend=\"guc-max-wal-senders\"/> to zero.\n But note that changing these settings requires a server restart,\n and makes any base backups taken before unavailable for archive\n- recovery and standby server, which may lead to database loss.\n+ recovery and standby server, which may lead to data loss.\n </para>\n \n <para>\ndfdae1597d doc review: Add unistr function\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 7b75e0bca2..be22f4b61b 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -3560,7 +3560,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>\n <returnvalue>text</returnvalue>\n </para>\n <para>\n- Evaluate escaped Unicode characters in argument. Unicode characters\n+ Evaluate escaped Unicode characters in the argument. 
Unicode characters\n can be specified as\n <literal>\\<replaceable>XXXX</replaceable></literal> (4 hexadecimal\n digits), <literal>\\+<replaceable>XXXXXX</replaceable></literal> (6\n491445e3c9 doc review: postgres_fdw: Add option to control whether to keep connections open.\ndiff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\nindex fd34956936..e8cb679164 100644\n--- a/doc/src/sgml/postgres-fdw.sgml\n+++ b/doc/src/sgml/postgres-fdw.sgml\n@@ -551,8 +551,8 @@ OPTIONS (ADD password_required 'false');\n <title>Connection Management Options</title>\n \n <para>\n- By default all the open connections that <filename>postgres_fdw</filename>\n- established to the foreign servers are kept in local session for re-use.\n+ By default, all connections that <filename>postgres_fdw</filename>\n+ establishes to foreign servers are kept open for re-use in the local session.\n </para>\n \n <variablelist>\n@@ -562,11 +562,11 @@ OPTIONS (ADD password_required 'false');\n <listitem>\n <para>\n This option controls whether <filename>postgres_fdw</filename> keeps\n- the connections to the foreign server open so that the subsequent\n+ the connections to the foreign server open so that subsequent\n queries can re-use them. It can only be specified for a foreign server.\n The default is <literal>on</literal>. If set to <literal>off</literal>,\n all connections to this foreign server will be discarded at the end of\n- transaction.\n+ each transaction.\n </para>\n </listitem>\n </varlistentry>\n95a43e5c2d doc review: BRIN minmax-multi indexes\ndiff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml\nindex d2476481af..ce7c210575 100644\n--- a/doc/src/sgml/brin.sgml\n+++ b/doc/src/sgml/brin.sgml\n@@ -730,7 +730,7 @@ LOG: request for BRIN range summarization for index \"brin_wi_idx\" page 128 was\n for <xref linkend=\"sql-altertable\"/>. When set to a positive value,\n each block range is assumed to contain this number of distinct non-null\n values. 
When set to a negative value, which must be greater than or\n- equal to -1, the number of distinct non-null is assumed linear with\n+ equal to -1, the number of distinct non-null values is assumed to grow linearly with\n the maximum possible number of tuples in the block range (about 290\n rows per block). The default value is <literal>-0.1</literal>, and\n the minimum number of distinct non-null values is <literal>16</literal>.\n@@ -1214,7 +1214,7 @@ typedef struct BrinOpcInfo\n \n <para>\n The minmax-multi operator class is also intended for data types implementing\n- a totally ordered sets, and may be seen as a simple extension of the minmax\n+ a totally ordered set, and may be seen as a simple extension of the minmax\n operator class. While minmax operator class summarizes values from each block\n range into a single contiguous interval, minmax-multi allows summarization\n into multiple smaller intervals to improve handling of outlier values.\ne53ac30d44 doc review: Track total amounts of times spent writing and syncing WAL data to disk.\ndiff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\nindex 0f13c43095..24cf567ee2 100644\n--- a/doc/src/sgml/wal.sgml\n+++ b/doc/src/sgml/wal.sgml\n@@ -797,7 +797,7 @@\n <literal>fsync</literal>, or <literal>fsync_writethrough</literal>,\n the write operation moves WAL buffers to kernel cache and\n <function>issue_xlog_fsync</function> syncs them to disk. 
Regardless\n- of the setting of <varname>track_wal_io_timing</varname>, the numbers\n+ of the setting of <varname>track_wal_io_timing</varname>, the number\n of times <function>XLogWrite</function> writes and\n <function>issue_xlog_fsync</function> syncs WAL data to disk are also\n counted as <literal>wal_write</literal> and <literal>wal_sync</literal>\n6e0c552d1c doc review: Be clear about whether a recovery pause has taken effect.\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 0606b6a9aa..7b75e0bca2 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -25576,7 +25576,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());\n Returns recovery pause state. The return values are <literal>\n not paused</literal> if pause is not requested, <literal>\n pause requested</literal> if pause is requested but recovery is\n- not yet paused and, <literal>paused</literal> if the recovery is\n+ not yet paused, and <literal>paused</literal> if the recovery is\n actually paused.\n </para></entry>\n </row>\n4bbf35a579 doc review: Add pg_amcheck, a CLI for contrib/amcheck.\ndiff --git a/doc/src/sgml/ref/pg_amcheck.sgml b/doc/src/sgml/ref/pg_amcheck.sgml\nindex fcc96b430a..d01e26faa8 100644\n--- a/doc/src/sgml/ref/pg_amcheck.sgml\n+++ b/doc/src/sgml/ref/pg_amcheck.sgml\n@@ -460,7 +460,7 @@ PostgreSQL documentation\n <term><option>--skip=<replaceable class=\"parameter\">option</replaceable></option></term>\n <listitem>\n <para>\n- If <literal>\"all-frozen\"</literal> is given, table corruption checks\n+ If <literal>all-frozen</literal> is given, table corruption checks\n will skip over pages in all tables that are marked as all frozen.\n </para>\n <para>\nc7bf0bcc61 doc review: Pass all scan keys to BRIN consistent function at once\ndiff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml\nindex d2f12bb605..d2476481af 100644\n--- a/doc/src/sgml/brin.sgml\n+++ b/doc/src/sgml/brin.sgml\n@@ -833,7 +833,7 @@ typedef struct 
BrinOpcInfo\n Returns whether all the ScanKey entries are consistent with the given\n indexed values for a range.\n The attribute number to use is passed as part of the scan key.\n- Multiple scan keys for the same attribute may be passed at once, the\n+ Multiple scan keys for the same attribute may be passed at once; the\n number of entries is determined by the <literal>nkeys</literal> parameter.\n </para>\n </listitem>\n50454d9cf5 doc review: Add support for PROVE_TESTS and PROVE_FLAGS in MSVC scripts\ndiff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml\nindex 64687b12e6..cb6bb05dc5 100644\n--- a/doc/src/sgml/install-windows.sgml\n+++ b/doc/src/sgml/install-windows.sgml\n@@ -499,8 +499,8 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\\IPC-Run-0.94\\lib';\n \n <para>\n The TAP tests run with <command>vcregress</command> support the\n- environment variables <varname>PROVE_TESTS</varname>, that is expanded\n- automatically using the name patterns given, and\n+ environment variables <varname>PROVE_TESTS</varname>, which is\n+ expanded as a glob pattern, and\n <varname>PROVE_FLAGS</varname>. These can be set on a Windows terminal,\n before running <command>vcregress</command>:\n <programlisting>\n633e7a3b54 doc review: VACUUM (PROCESS_TOAST)\ndiff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml\nindex 6a0028a514..949ca23797 100644\n--- a/doc/src/sgml/ref/vacuum.sgml\n+++ b/doc/src/sgml/ref/vacuum.sgml\n@@ -219,7 +219,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class=\"paramet\n corresponding <literal>TOAST</literal> table for each relation, if one\n exists. This is normally the desired behavior and is the default.\n Setting this option to false may be useful when it is only necessary to\n- vacuum the main relation. This option is required when the\n+ vacuum the main relation. 
This option may not be disabled when the\n <literal>FULL</literal> option is used.\n </para>\n </listitem>\n7e84a06724 doc review: Multiple xacts during table sync in logical replication\ndiff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\nindex e95d446dac..3fad5f34e6 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -490,9 +490,9 @@\n any changes that happened during the initial data copy using standard\n logical replication. During this synchronization phase, the changes\n are applied and committed in the same order as they happened on the\n- publisher. Once the synchronization is done, the control of the\n+ publisher. Once synchronization is done, control of the\n replication of the table is given back to the main apply process where\n- the replication continues as normal.\n+ replication continues as normal.\n </para>\n </sect2>\n </sect1>\n8259924473 doc review: pg_stat_progress_create_index\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex f637fe0415..8287587f61 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -5890,7 +5890,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,\n </para>\n <para>\n When creating an index on a partitioned table, this column is set to\n- the number of partitions on which the index has been completed.\n+ the number of partitions on which the index has been created.\n This field is <literal>0</literal> during a <literal>REINDEX</literal>.\n </para></entry>\n </row>\n576580e6c3 doc review: piecemeal construction of partitioned indexes\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 30e4170963..354f9e57bd 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3957,8 +3957,8 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02\n As explained above, it is possible to create indexes on partitioned tables\n so that they are applied 
automatically to the entire hierarchy.\n This is very\n- convenient, as not only will the existing partitions become indexed, but\n- also any partitions that are created in the future will. One limitation is\n+ convenient, as not only the existing partitions will be indexed, but\n+ so will any partitions that are created in the future. One limitation is\n that it's not possible to use the <literal>CONCURRENTLY</literal>\n qualifier when creating such a partitioned index. To avoid long lock\n times, it is possible to use <command>CREATE INDEX ON ONLY</command>\n1384db4053 doc review: psql \\dX\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex ddb7043362..a3cfd3b557 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -1927,9 +1927,10 @@ testdb=>\n </para>\n \n <para>\n- The column of the kind of extended stats (e.g. Ndistinct) shows its status.\n- NULL means that it doesn't exists. \"defined\" means that it was requested\n- when creating the statistics.\n+ The status of each kind of extended statistics is shown in a column\n+ named after its statistic kind (e.g. Ndistinct).\n+ \"defined\" means that it was requested when creating the statistics,\n+ and NULL means it wasn't requested. \n You can use pg_stats_ext if you'd like to know whether <link linkend=\"sql-analyze\">\n <command>ANALYZE</command></link> was run and statistics are available to the\n planner.",
"msg_date": "Thu, 8 Apr 2021 11:40:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 11:40:08AM -0500, Justin Pryzby wrote:\n> Another round of doc review, not yet including all of yesterday's commits.\n\nThanks for compiling all that. I got through the whole set and\napplied the most relevant parts on HEAD. Some of them applied down to\n9.6, so I have fixed it down where needed, for the parts that did not\nconflict too heavily.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 14:03:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 02:03:27PM +0900, Michael Paquier wrote:\n> On Thu, Apr 08, 2021 at 11:40:08AM -0500, Justin Pryzby wrote:\n> > Another round of doc review, not yet including all of yesterday's commits.\n> \n> Thanks for compiling all that. I got through the whole set and\n> applied the most relevant parts on HEAD. Some of them applied down to\n> 9.6, so I have fixed it down where needed, for the parts that did not\n> conflict too heavily.\n\nThanks. Rebased with remaining, queued fixes.\n\n-- \nJustin",
"msg_date": "Mon, 12 Apr 2021 23:39:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "A bunch more found with things like this.\n\nfind src -name '*.c' |xargs grep '^[[:space:]]*/\\?\\*' |grep -woE '[[:lower:]]{3,8}' |sed 's/.*/\\L&/' |\n sort |uniq -c |sort -nr |awk '$1==1' |less",
"msg_date": "Fri, 16 Apr 2021 02:03:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc review for v14"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 02:03:10AM -0500, Justin Pryzby wrote:\n> A bunch more found with things like this.\n\nThanks, applied most of it!\n--\nMichael",
"msg_date": "Mon, 19 Apr 2021 11:42:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc review for v14"
}
] |
[
{
"msg_contents": "Dear All\n\n\nIn startup process we only launch bgwriter when ArchiveRecoveryRequested is true, which means we will not lauch bgwriter in master node.\nThe bgwriters can write the dirty buffers to disk which helps startup process to do less IO when we complete xlog replay and request to do END_OF_RECOVERY checkpoint.\nSo can we delete the limit of ArchiveRecoveryRequested, and enable launch bgwriter in master node ?\n\n\n 7128 /*\n 7129 * Let postmaster know we've started redo now, so that it can launch\n 7130 * checkpointer to perform restartpoints. We don't bother during\n 7131 * crash recovery as restartpoints can only be performed during\n 7132 * archive recovery. And we'd like to keep crash recovery simple, to\n 7133 * avoid introducing bugs that could affect you when recovering after\n 7134 * crash.\n 7135 *\n 7136 * After this point, we can no longer assume that we're the only\n 7137 * process in addition to postmaster! Also, fsync requests are\n 7138 * subsequently to be handled by the checkpointer, not locally.\n 7139 */\n 7140 if (ArchiveRecoveryRequested && IsUnderPostmaster)\n 7141 {\n 7142 PublishStartupProcessInformation();\n 7143 EnableSyncRequestForwarding();\n 7144 SendPostmasterSignal(PMSIGNAL_RECOVERY_STARTED);\n 7145 bgwriterLaunched = true;\n 7146 }\n\n\nThanks\nRay\nDear AllIn startup process we only launch bgwriter when ArchiveRecoveryRequested is true, which means we will not lauch bgwriter in master node.The bgwriters can write the dirty buffers to disk which helps startup process to do less IO when we complete xlog replay and request to do END_OF_RECOVERY checkpoint.So can we delete the limit of ArchiveRecoveryRequested, and enable launch bgwriter in master node ? 7128 /* 7129 * Let postmaster know we've started redo now, so that it can launch 7130 * checkpointer to perform restartpoints. We don't bother during 7131 * crash recovery as restartpoints can only be performed during 7132 * archive recovery. 
And we'd like to keep crash recovery simple, to 7133 * avoid introducing bugs that could affect you when recovering after 7134 * crash. 7135 * 7136 * After this point, we can no longer assume that we're the only 7137 * process in addition to postmaster! Also, fsync requests are 7138 * subsequently to be handled by the checkpointer, not locally. 7139 */ 7140 if (ArchiveRecoveryRequested && IsUnderPostmaster) 7141 { 7142 PublishStartupProcessInformation(); 7143 EnableSyncRequestForwarding(); 7144 SendPostmasterSignal(PMSIGNAL_RECOVERY_STARTED); 7145 bgwriterLaunched = true; 7146 }ThanksRay",
"msg_date": "Tue, 22 Dec 2020 15:50:48 +0800 (CST)",
"msg_from": "Thunder <thunder1@126.com>",
"msg_from_op": true,
"msg_subject": "Improve the performance to create END_OF_RECOVERY checkpoint"
},
{
"msg_contents": "Hi Ray,\n\n> So can we delete the limit of ArchiveRecoveryRequested, and enable launch bgwriter in master node ?\n\nPlease take a look on https://commitfest.postgresql.org/29/2706/ and the related email thread.\n\n-J. \n\n\n",
"msg_date": "Tue, 22 Dec 2020 08:14:22 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: Improve the performance to create END_OF_RECOVERY checkpoint"
}
] |
[
{
"msg_contents": "Hi, all\r\n\r\nIn Stream Replication Protocol [1], the documentation of `START_REPLICATION` message is\r\n\r\nXLogData (B)\r\n …\r\nPrimary keepalive message (B)\r\n …\r\nStandby status update (F)\r\n …\r\nHot Standby feedback message (F)\r\n ...\r\n\r\nI’m confused about the means of ‘B’ and ‘F’? If it doesn't make sense, why we document here?\r\nHowever, if it makes sense, should we explain it?\r\nCan someone help me out?\r\n\r\nAnyway, thanks in advance!\r\n\r\n[1] https://www.postgresql.org/docs/devel/protocol-replication.html\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\n\n\n\n\n\n\r\nHi, all\r\n\n\nIn Stream Replication Protocol [1], the documentation of `START_REPLICATION` message is\n\n\nXLogData (B)\n …\nPrimary keepalive message (B)\n …\nStandby status update (F)\n …\nHot Standby feedback message (F)\n ...\n\n\nI’m confused about the means of ‘B’ and ‘F’? If it doesn't make sense, why we document here?\nHowever, if it makes sense, should we explain it?\nCan someone help me out?\n\n\nAnyway, thanks in advance!\n\n\n[1] \r\nhttps://www.postgresql.org/docs/devel/protocol-replication.html\n\n\n\n--\nBest regards\nJapin Li",
"msg_date": "Tue, 22 Dec 2020 09:07:21 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Confused about stream replication protocol documentation"
},
{
"msg_contents": "\n\nOn 2020/12/22 18:07, Li Japin wrote:\n> Hi, all\n> \n> In Stream Replication Protocol [1], the documentation of `START_REPLICATION` message is\n> \n> XLogData (B)\n> …\n> Primary keepalive message (B)\n> …\n> Standby status update (F)\n> …\n> Hot Standby feedback message (F)\n> ...\n> \n> I’m confused about the means of ‘B’ and ‘F’? If it doesn't make sense, why we document here?\n> However, if it makes sense, should we explain it?\n> Can someone help me out?\n\n‘B’ means a backend and ‘F’ means a frontend. Maybe as [1] does, we should\nadd the note like \"Each is marked to indicate that it can be sent by\n a frontend (F) and a backend (B)\" into the description about each message\n format for START_REPLICATION.\n\n[1]\nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 23 Dec 2020 00:13:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Confused about stream replication protocol documentation"
},
{
"msg_contents": "On Dec 22, 2020, at 11:13 PM, Fujii Masao <masao.fujii@oss.nttdata.com<mailto:masao.fujii@oss.nttdata.com>> wrote:\r\n\r\n‘B’ means a backend and ‘F’ means a frontend. Maybe as [1] does, we should\r\nadd the note like \"Each is marked to indicate that it can be sent by\r\na frontend (F) and a backend (B)\" into the description about each message\r\nformat for START_REPLICATION.\r\n\r\n[1]\r\nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html\r\n\r\nThanks for your clarify. Maybe we should move the \"protocol message formats”\r\nbefore “stream replication protocol” or referenced it in \"stream replication protocol”.\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\n\n\n\n\n\n\n\n\n\nOn Dec 22, 2020, at 11:13 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n‘B’\r\n means a backend and ‘F’ means a frontend. Maybe as [1] does, we should\nadd\r\n the note like \"Each is marked to indicate that it can be sent by\na\r\n frontend (F) and a backend (B)\" into the description about each message\nformat\r\n for START_REPLICATION.\n\n[1]\nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html\n\n\n\nThanks for your clarify. Maybe we should move the \"protocol message formats”\nbefore “stream replication protocol” or referenced it in \"stream replication protocol”.\n\n\n\n\n\n--\nBest regards\nJapin Li",
"msg_date": "Wed, 23 Dec 2020 02:08:07 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Confused about stream replication protocol documentation"
},
{
"msg_contents": "\n\nOn 2020/12/23 11:08, Li Japin wrote:\n> \n>> On Dec 22, 2020, at 11:13 PM, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n>>\n>> ‘B’ means a backend and ‘F’ means a frontend. Maybe as [1] does, we should\n>> add the note like \"Each is marked to indicate that it can be sent by\n>> a frontend (F) and a backend (B)\" into the description about each message\n>> format for START_REPLICATION.\n>>\n>> [1]\n>> https://www.postgresql.org/docs/devel/protocol-message-formats.html <https://www.postgresql.org/docs/devel/protocol-message-formats.html>\n> \n> Thanks for your clarify. Maybe we should move the \"protocol message formats”\n> before “stream replication protocol” or referenced it in \"stream replication protocol”.\n\nI like the latter. And maybe it's better to reference to also\n\"53.6. Message Data Types\" there because the messages for\nSTART_REPLICATION use the message data types.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 23 Dec 2020 21:11:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Confused about stream replication protocol documentation"
},
{
"msg_contents": "On Dec 23, 2020, at 8:11 PM, Fujii Masao <masao.fujii@oss.nttdata.com<mailto:masao.fujii@oss.nttdata.com>> wrote:\r\n\r\n\r\nOn 2020/12/23 11:08, Li Japin wrote:\r\nOn Dec 22, 2020, at 11:13 PM, Fujii Masao <masao.fujii@oss.nttdata.com<mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com>> wrote:\r\n\r\n‘B’ means a backend and ‘F’ means a frontend. Maybe as [1] does, we should\r\nadd the note like \"Each is marked to indicate that it can be sent by\r\na frontend (F) and a backend (B)\" into the description about each message\r\nformat for START_REPLICATION.\r\n\r\n[1]\r\nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html <https://www.postgresql.org/docs/devel/protocol-message-formats.html>\r\nThanks for your clarify. Maybe we should move the \"protocol message formats”\r\nbefore “stream replication protocol” or referenced it in \"stream replication protocol”.\r\n\r\nI like the latter. And maybe it's better to reference to also\r\n\"53.6. Message Data Types\" there because the messages for\r\nSTART_REPLICATION use the message data types.\r\n\r\nAdd reference about “protocol message types” and “protocol message formats”.\r\n\r\nindex 4899bacda7..5793936b42 100644\r\n--- a/doc/src/sgml/protocol.sgml\r\n+++ b/doc/src/sgml/protocol.sgml\r\n@@ -2069,8 +2069,9 @@ The commands accepted in replication mode are:\r\n </para>\r\n\r\n <para>\r\n- WAL data is sent as a series of CopyData messages. (This allows\r\n- other information to be intermixed; in particular the server can send\r\n+ WAL data is sent as a series of CopyData messages\r\n+ (See <xref linkend=\"protocol-message-types\"/> and <xref linkend=\"protocol-message-formats\"/>).\r\n+ (This allows other information to be intermixed; in particular the server can send\r\n an ErrorResponse message if it encounters a failure after beginning\r\n to stream.) 
The payload of each CopyData message from server to the\r\n client contains a message of one of the following formats:\r\n\r\n--\r\nBest regards\r\nJapin Li",
"msg_date": "Thu, 24 Dec 2020 02:28:53 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Confused about stream replication protocol documentation"
},
{
"msg_contents": "\nPatch applied to master, thanks.\n\n---------------------------------------------------------------------------\n\nOn Thu, Dec 24, 2020 at 02:28:53AM +0000, Li Japin wrote:\n> \n> On Dec 23, 2020, at 8:11 PM, Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> \n> \n> On 2020/12/23 11:08, Li Japin wrote:\n> \n> On Dec 22, 2020, at 11:13 PM, Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>\n> wrote:\n> \n> ‘B’ means a backend and ‘F’ means a frontend. Maybe as [1] does, we\n> should\n> add the note like \"Each is marked to indicate that it can be sent\n> by\n> a frontend (F) and a backend (B)\" into the description about each\n> message\n> format for START_REPLICATION.\n> \n> [1]\n> https://www.postgresql.org/docs/devel/protocol-message-formats.html\n> <https://www.postgresql.org/docs/devel/\n> protocol-message-formats.html>\n> \n> Thanks for your clarify. Maybe we should move the \"protocol message\n> formats”\n> before “stream replication protocol” or referenced it in \"stream\n> replication protocol”.\n> \n> \n> I like the latter. And maybe it's better to reference to also\n> \"53.6. Message Data Types\" there because the messages for\n> START_REPLICATION use the message data types.\n> \n> \n> Add reference about “protocol message types” and “protocol message formats”.\n> \n> index 4899bacda7..5793936b42 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -2069,8 +2069,9 @@ The commands accepted in replication mode are:\n> </para>\n> \n> <para>\n> - WAL data is sent as a series of CopyData messages. 
(This allows\n> - other information to be intermixed; in particular the server can send\n> + WAL data is sent as a series of CopyData messages\n> + (See <xref linkend=\"protocol-message-types\"/> and <xref linkend=\n> \"protocol-message-formats\"/>).\n> + (This allows other information to be intermixed; in particular the\n> server can send\n> an ErrorResponse message if it encounters a failure after beginning\n> to stream.) The payload of each CopyData message from server to the\n> client contains a message of one of the following formats:\n> \n> --\n> Best regards\n> Japin Li\n> \n\n> diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\n> index 4899bacda7..5793936b42 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -2069,8 +2069,9 @@ The commands accepted in replication mode are:\n> </para>\n> \n> <para>\n> - WAL data is sent as a series of CopyData messages. (This allows\n> - other information to be intermixed; in particular the server can send\n> + WAL data is sent as a series of CopyData messages\n> + (See <xref linkend=\"protocol-message-types\"/> and <xref linkend=\"protocol-message-formats\"/>).\n> + (This allows other information to be intermixed; in particular the server can send\n> an ErrorResponse message if it encounters a failure after beginning\n> to stream.) The payload of each CopyData message from server to the\n> client contains a message of one of the following formats:\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 1 Nov 2023 13:57:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Confused about stream replication protocol documentation"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, $subject is not allowed. We do plan the mat view query\nbefore every refresh. I propose to show the explain/explain analyze of\nthe select part of the mat view in case of Refresh Mat View(RMV). It\nwill be useful for the user to know what exactly is being planned and\nexecuted as part of RMV. Please note that we already have\nexplain/explain analyze CTAS/Create Mat View(CMV), where we show the\nexplain/explain analyze of the select part. This proposal will do the\nsame thing.\n\nThe behaviour can be like this:\nEXPLAIN REFRESH MATERIALIZED VIEW mv1; --> will not refresh the mat\nview, but shows the select part's plan of mat view.\nEXPLAIN ANALYZE REFRESH MATERIALIZED VIEW mv1; --> will refresh the\nmat view and shows the select part's plan of mat view.\n\nThoughts? If okay, I will post a patch later.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Dec 2020 19:01:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 7:01 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Currently, $subject is not allowed. We do plan the mat view query\n> before every refresh. I propose to show the explain/explain analyze of\n> the select part of the mat view in case of Refresh Mat View(RMV). It\n> will be useful for the user to know what exactly is being planned and\n> executed as part of RMV. Please note that we already have\n> explain/explain analyze CTAS/Create Mat View(CMV), where we show the\n> explain/explain analyze of the select part. This proposal will do the\n> same thing.\n>\n> The behaviour can be like this:\n> EXPLAIN REFRESH MATERIALIZED VIEW mv1; --> will not refresh the mat\n> view, but shows the select part's plan of mat view.\n> EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW mv1; --> will refresh the\n> mat view and shows the select part's plan of mat view.\n>\n> Thoughts? If okay, I will post a patch later.\n\nAttaching below patches:\n\n0001 - Rearrange Refresh Mat View Code - Currently, the function\nExecRefreshMatView in matview.c is having many lines of code which is\nnot at all good from readability and maintainability perspectives.\nThis patch adds a few functions and moves the code from\nExecRefreshMatView to them making the code look better.\n\n0002 - EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW support and tests.\n\nIf this proposal is useful, I have few open points - 1) In the patch I\nhave added a new mat view info parameter to ExplainOneQuery(), do we\nalso need to add it to ExplainOneQuery_hook_type? 2) Do we document\n(under respective command pages or somewhere else) that we allow\nexplain/explain analyze for a command?\n\nThoughts?\n\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 28 Dec 2020 17:56:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 5:56 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Dec 22, 2020 at 7:01 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Currently, $subject is not allowed. We do plan the mat view query\n> > before every refresh. I propose to show the explain/explain analyze of\n> > the select part of the mat view in case of Refresh Mat View(RMV). It\n> > will be useful for the user to know what exactly is being planned and\n> > executed as part of RMV. Please note that we already have\n> > explain/explain analyze CTAS/Create Mat View(CMV), where we show the\n> > explain/explain analyze of the select part. This proposal will do the\n> > same thing.\n> >\n> > The behaviour can be like this:\n> > EXPLAIN REFRESH MATERIALIZED VIEW mv1; --> will not refresh the mat\n> > view, but shows the select part's plan of mat view.\n> > EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW mv1; --> will refresh the\n> > mat view and shows the select part's plan of mat view.\n> >\n> > Thoughts? If okay, I will post a patch later.\n>\n> Attaching below patches:\n>\n> 0001 - Rearrange Refresh Mat View Code - Currently, the function\n> ExecRefreshMatView in matview.c is having many lines of code which is\n> not at all good from readability and maintainability perspectives.\n> This patch adds a few functions and moves the code from\n> ExecRefreshMatView to them making the code look better.\n>\n> 0002 - EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW support and tests.\n>\n> If this proposal is useful, I have few open points - 1) In the patch I\n> have added a new mat view info parameter to ExplainOneQuery(), do we\n> also need to add it to ExplainOneQuery_hook_type? 2) Do we document\n> (under respective command pages or somewhere else) that we allow\n> explain/explain analyze for a command?\n>\n> Thoughts?\n\nAttaching v2 patch set reabsed on the latest master f7a1a805cb. 
And\nalso added an entry for upcoming commitfest -\nhttps://commitfest.postgresql.org/32/2928/\n\nPlease consider the v2 patches for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Jan 2021 15:23:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Thu, 07 Jan 2021 at 17:53, Bharath Rupireddy wrote:\n> On Mon, Dec 28, 2020 at 5:56 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Tue, Dec 22, 2020 at 7:01 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > Currently, $subject is not allowed. We do plan the mat view query\n>> > before every refresh. I propose to show the explain/explain analyze of\n>> > the select part of the mat view in case of Refresh Mat View(RMV). It\n>> > will be useful for the user to know what exactly is being planned and\n>> > executed as part of RMV. Please note that we already have\n>> > explain/explain analyze CTAS/Create Mat View(CMV), where we show the\n>> > explain/explain analyze of the select part. This proposal will do the\n>> > same thing.\n>> >\n>> > The behaviour can be like this:\n>> > EXPLAIN REFRESH MATERIALIZED VIEW mv1; --> will not refresh the mat\n>> > view, but shows the select part's plan of mat view.\n>> > EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW mv1; --> will refresh the\n>> > mat view and shows the select part's plan of mat view.\n>> >\n>> > Thoughts? If okay, I will post a patch later.\n>>\n>> Attaching below patches:\n>>\n>> 0001 - Rearrange Refresh Mat View Code - Currently, the function\n>> ExecRefreshMatView in matview.c is having many lines of code which is\n>> not at all good from readability and maintainability perspectives.\n>> This patch adds a few functions and moves the code from\n>> ExecRefreshMatView to them making the code look better.\n>>\n>> 0002 - EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW support and tests.\n>>\n>> If this proposal is useful, I have few open points - 1) In the patch I\n>> have added a new mat view info parameter to ExplainOneQuery(), do we\n>> also need to add it to ExplainOneQuery_hook_type? 
2) Do we document\n>> (under respective command pages or somewhere else) that we allow\n>> explain/explain analyze for a command?\n>>\n>> Thoughts?\n>\n> Attaching v2 patch set reabsed on the latest master f7a1a805cb. And\n> also added an entry for upcoming commitfest -\n> https://commitfest.postgresql.org/32/2928/\n>\n> Please consider the v2 patches for further review.\n>\n\nThanks for updating the patch!\n\n+\t/* Get the data generating query. */\n+\tdataQuery = get_matview_query(stmt, &matviewRel, &matviewOid);\n \n-\t/*\n-\t * Check for active uses of the relation in the current transaction, such\n-\t * as open scans.\n-\t *\n-\t * NB: We count on this to protect us against problems with refreshing the\n-\t * data using TABLE_INSERT_FROZEN.\n-\t */\n-\tCheckTableNotInUse(matviewRel, \"REFRESH MATERIALIZED VIEW\");\n+\trelowner = matviewRel->rd_rel->relowner;\n\nAfter apply the patch, there is a duplicate\n\nrelowner = matviewRel->rd_rel->relowner;\n\n+\telse if(matviewInfo)\n+\t\tdest = CreateTransientRelDestReceiver(matviewInfo->OIDNewHeap);\n\nIf the `matviewInfo->OIDNewHeap` is invalid, IMO we don't need create\nDestReceiver, isn't it? And we should add a space after `if`.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.Ltd.\n\n\n",
"msg_date": "Fri, 08 Jan 2021 16:20:15 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 1:50 PM japin <japinli@hotmail.com> wrote:\n> Thanks for updating the patch!\n>\n> + /* Get the data generating query. */\n> + dataQuery = get_matview_query(stmt, &matviewRel, &matviewOid);\n>\n> - /*\n> - * Check for active uses of the relation in the current transaction, such\n> - * as open scans.\n> - *\n> - * NB: We count on this to protect us against problems with refreshing the\n> - * data using TABLE_INSERT_FROZEN.\n> - */\n> - CheckTableNotInUse(matviewRel, \"REFRESH MATERIALIZED VIEW\");\n> + relowner = matviewRel->rd_rel->relowner;\n>\n> After apply the patch, there is a duplicate\n>\n> relowner = matviewRel->rd_rel->relowner;\n\nCorrected that.\n\n> + else if(matviewInfo)\n> + dest = CreateTransientRelDestReceiver(matviewInfo->OIDNewHeap);\n>\n> If the `matviewInfo->OIDNewHeap` is invalid, IMO we don't need create\n> DestReceiver, isn't it? And we should add a space after `if`.\n\nYes, we can skip creating the dest receiver when OIDNewHeap is\ninvalid, this can happen for plain explain refresh mat view case.\n\n if (explainInfo && !explainInfo->es->analyze)\n OIDNewHeap = InvalidOid;\n else\n OIDNewHeap = get_new_heap_oid(stmt, matviewRel, matviewOid,\n &relpersistence);\n\nSince we don't call ExecutorRun for plain explain, we can skip the\ndest receiver creation. I modified the code as below in explain.c.\n\n if (into)\n dest = CreateIntoRelDestReceiver(into);\n else if (matviewInfo && OidIsValid(matviewInfo->OIDNewHeap))\n dest = CreateTransientRelDestReceiver(matviewInfo->OIDNewHeap);\n else\n dest = None_Receiver;\n\nThanks for taking a look at the patches.\n\nAttaching v3 patches, please consider these for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 8 Jan 2021 14:54:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Fri, 08 Jan 2021 at 17:24, Bharath Rupireddy wrote:\n> On Fri, Jan 8, 2021 at 1:50 PM japin <japinli@hotmail.com> wrote:\n>> Thanks for updating the patch!\n>>\n>> + /* Get the data generating query. */\n>> + dataQuery = get_matview_query(stmt, &matviewRel, &matviewOid);\n>>\n>> - /*\n>> - * Check for active uses of the relation in the current transaction, such\n>> - * as open scans.\n>> - *\n>> - * NB: We count on this to protect us against problems with refreshing the\n>> - * data using TABLE_INSERT_FROZEN.\n>> - */\n>> - CheckTableNotInUse(matviewRel, \"REFRESH MATERIALIZED VIEW\");\n>> + relowner = matviewRel->rd_rel->relowner;\n>>\n>> After apply the patch, there is a duplicate\n>>\n>> relowner = matviewRel->rd_rel->relowner;\n>\n> Corrected that.\n>\n>> + else if(matviewInfo)\n>> + dest = CreateTransientRelDestReceiver(matviewInfo->OIDNewHeap);\n>>\n>> If the `matviewInfo->OIDNewHeap` is invalid, IMO we don't need create\n>> DestReceiver, isn't it? And we should add a space after `if`.\n>\n> Yes, we can skip creating the dest receiver when OIDNewHeap is\n> invalid, this can happen for plain explain refresh mat view case.\n>\n> if (explainInfo && !explainInfo->es->analyze)\n> OIDNewHeap = InvalidOid;\n> else\n> OIDNewHeap = get_new_heap_oid(stmt, matviewRel, matviewOid,\n> &relpersistence);\n>\n> Since we don't call ExecutorRun for plain explain, we can skip the\n> dest receiver creation. I modified the code as below in explain.c.\n>\n> if (into)\n> dest = CreateIntoRelDestReceiver(into);\n> else if (matviewInfo && OidIsValid(matviewInfo->OIDNewHeap))\n> dest = CreateTransientRelDestReceiver(matviewInfo->OIDNewHeap);\n> else\n> dest = None_Receiver;\n>\n> Thanks for taking a look at the patches.\n>\n\nThanks!\n\n> Attaching v3 patches, please consider these for further review.\n>\n\nI find that both the declaration and definition of match_matview_with_new_data()\nhave a tab between type and variable. 
We can use pgindent to fix it.\nWhat do you think?\n\n\nstatic void\nmatch_matview_with_new_data(RefreshMatViewStmt *stmt, Relation matviewRel,\n ^\n Oid matviewOid, Oid OIDNewHeap, Oid relowner,\n int save_sec_context, char relpersistence,\n uint64 processed)\n ^\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 09 Jan 2021 00:20:28 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 9:50 PM japin <japinli@hotmail.com> wrote:\n> > Attaching v3 patches, please consider these for further review.\n> >\n>\n> I find that both the declaration and definition of match_matview_with_new_data()\n> have a tab between type and variable. We can use pgindent to fix it.\n> What do you think?\n>\n>\n> static void\n> match_matview_with_new_data(RefreshMatViewStmt *stmt, Relation matviewRel,\n> ^\n> Oid matviewOid, Oid OIDNewHeap, Oid relowner,\n> int save_sec_context, char relpersistence,\n> uint64 processed)\n> ^\n\nI ran pgindent on 0001 patch to fix the above. 0002 patch has no\nchanges. If I'm correct, pgindent will be run periodically on master.\n\nAttaching v4 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 9 Jan 2021 07:08:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Sat, 09 Jan 2021 at 09:38, Bharath Rupireddy wrote:\n> On Fri, Jan 8, 2021 at 9:50 PM japin <japinli@hotmail.com> wrote:\n>> > Attaching v3 patches, please consider these for further review.\n>> >\n>>\n>> I find that both the declaration and definition of match_matview_with_new_data()\n>> have a tab between type and variable. We can use pgindent to fix it.\n>> What do you think?\n>>\n>>\n>> static void\n>> match_matview_with_new_data(RefreshMatViewStmt *stmt, Relation matviewRel,\n>> ^\n>> Oid matviewOid, Oid OIDNewHeap, Oid relowner,\n>> int save_sec_context, char relpersistence,\n>> uint64 processed)\n>> ^\n>\n> I ran pgindent on 0001 patch to fix the above. 0002 patch has no\n> changes. If I'm correct, pgindent will be run periodically on master.\n>\n\nThanks for your point out. I don't know before.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 09 Jan 2021 10:02:46 +0800",
"msg_from": "japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi Japin,\n\nOn 1/8/21 9:02 PM, japin wrote:\n> \n> On Sat, 09 Jan 2021 at 09:38, Bharath Rupireddy wrote:\n>> On Fri, Jan 8, 2021 at 9:50 PM japin <japinli@hotmail.com> wrote:\n>>\n>> I ran pgindent on 0001 patch to fix the above. 0002 patch has no\n>> changes. If I'm correct, pgindent will be run periodically on master.\n>>\n> \n> Thanks for your point out. I don't know before.\n\nDo you know if you will have time to review this patch during the \ncurrent commitfest?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 07:56:43 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Wed, 03 Mar 2021 at 20:56, David Steele <david@pgmasters.net> wrote:\n> Do you know if you will have time to review this patch during the\n> current commitfest?\n>\n\nSorry for the late reply! I think I have time to review this patch\nand I will do it later.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 04 Mar 2021 14:10:58 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 11:41 AM Japin Li <japinli@hotmail.com> wrote:\n> On Wed, 03 Mar 2021 at 20:56, David Steele <david@pgmasters.net> wrote:\n> > Do you know if you will have time to review this patch during the\n> > current commitfest?\n> >\n>\n> Sorry for the late reply! I think I have time to review this patch\n> and I will do it later.\n\nThanks! I will look forward for more review comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Mar 2021 12:23:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Thu, 04 Mar 2021 at 14:53, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks! I will look forward for more review comments.\n>\n\nv4-0001-Rearrange-Refresh-Mat-View-Code.patch\n---------------------------------------------\n\n+static Oid\n+get_new_heap_oid(RefreshMatViewStmt *stmt, Relation matviewRel, Oid matviewOid,\n+\t\t\t\t char *relpersistence)\n+{\n+\tOid\t\t\tOIDNewHeap;\n+\tbool\t\tconcurrent;\n+\tOid\t\t\ttableSpace;\n+\n+\tconcurrent = stmt->concurrent;\n+\n+\t/* Concurrent refresh builds new data in temp tablespace, and does diff. */\n+\tif (concurrent)\n+\t{\n+\t\ttableSpace = GetDefaultTablespace(RELPERSISTENCE_TEMP, false);\n+\t\t*relpersistence = RELPERSISTENCE_TEMP;\n+\t}\n\nSince concurrent is only used in one place, I think we can remove the local variable\nconcurrent in get_new_heap_oid().\n\nThe others look good to me.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 05 Mar 2021 12:01:51 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 9:32 AM Japin Li <japinli@hotmail.com> wrote:\n> On Thu, 04 Mar 2021 at 14:53, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks! I will look forward for more review comments.\n> >\n>\n> v4-0001-Rearrange-Refresh-Mat-View-Code.patch\n> ---------------------------------------------\n>\n> +static Oid\n> +get_new_heap_oid(RefreshMatViewStmt *stmt, Relation matviewRel, Oid matviewOid,\n> + char *relpersistence)\n> +{\n> + Oid OIDNewHeap;\n> + bool concurrent;\n> + Oid tableSpace;\n> +\n> + concurrent = stmt->concurrent;\n> +\n> + /* Concurrent refresh builds new data in temp tablespace, and does diff. */\n> + if (concurrent)\n> + {\n> + tableSpace = GetDefaultTablespace(RELPERSISTENCE_TEMP, false);\n> + *relpersistence = RELPERSISTENCE_TEMP;\n> + }\n>\n> Since the concurrent only use in one place, I think we can remove the local variable\n> concurrent in get_new_heap_oid().\n\nDone.\n\n> The others looks good to me.\n\nThanks.\n\nAttaching v5 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 5 Mar 2021 17:18:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> Attaching v5 patch set for further review.\n>\n\nThe v5 patch looks good to me. If there is no objection, I'll change the\nCF status to \"Ready for Committer\" in a few days.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sun, 07 Mar 2021 14:19:03 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 11:49 AM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Attaching v5 patch set for further review.\n> >\n>\n> The v5 patch looks good to me, if there is no objection, I'll change the\n> cf status to \"Ready for Committer\" in few days.\n\nThanks for the review.\n\nAs I mentioned upthread, I have 2 open points:\n1) In the patch I have added a new mat view info parameter to\nExplainOneQuery(), do we also need to add it to\nExplainOneQuery_hook_type? IMO, we should not (for now), because this\nwould create a backward compatibility issue.\n2) Do we document (under respective command pages or somewhere else)\nthat we allow explain/explain analyze for a command?\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 7 Mar 2021 11:55:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Sun, 07 Mar 2021 at 14:25, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sun, Mar 7, 2021 at 11:49 AM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > Attaching v5 patch set for further review.\n>> >\n>>\n>> The v5 patch looks good to me, if there is no objection, I'll change the\n>> cf status to \"Ready for Committer\" in few days.\n>\n> Thanks for the review.\n>\n> As I mentioned upthread, I have 2 open points:\n> 1) In the patch I have added a new mat view info parameter to\n> ExplainOneQuery(), do we also need to add it to\n> ExplainOneQuery_hook_type? IMO, we should not (for now), because this\n> would create a backward compatibility issue.\n\nSorry, I do not know how PostgreSQL handles the backward compatibility issue.\nIs there a guideline?\n\n> 2) Do we document (under respective command pages or somewhere else)\n> that we allow explain/explain analyze for a command?\n>\n\nIMO, we can add a new page to list the commands that support explain/explain analyze,\nsince that would be clearer for users.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sun, 07 Mar 2021 14:43:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 12:13 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Sun, 07 Mar 2021 at 14:25, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Sun, Mar 7, 2021 at 11:49 AM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> On Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> > Attaching v5 patch set for further review.\n> >> >\n> >>\n> >> The v5 patch looks good to me, if there is no objection, I'll change the\n> >> cf status to \"Ready for Committer\" in few days.\n> >\n> > Thanks for the review.\n> >\n> > As I mentioned upthread, I have 2 open points:\n> > 1) In the patch I have added a new mat view info parameter to\n> > ExplainOneQuery(), do we also need to add it to\n> > ExplainOneQuery_hook_type? IMO, we should not (for now), because this\n> > would create a backward compatibility issue.\n>\n> Sorry, I do not know how PostgreSQL handle the backward compatibility issue.\n> Is there a guideline?\n\nI'm not aware of any guidelines as such, but we usually avoid any\nchanges to existing API, adding/making changes to system catalogs and\nso on.\n\n> > 2) Do we document (under respective command pages or somewhere else)\n> > that we allow explain/explain analyze for a command?\n> >\n>\n> IMO, we can add a new page to list the commands that can be explain/explain analyze,\n> since it's clear for users.\n\nWe are listing all the supported commands in explain.sgml, so added\nthe CREATE MATERIALIZED VIEW(it's missing even though it's supported\nprior to this patch) and REFRESH MATERIALIZED VIEW there.\n\nAttaching v6 patch set. Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 7 Mar 2021 15:03:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Sun, 07 Mar 2021 at 17:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sun, Mar 7, 2021 at 12:13 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> On Sun, 07 Mar 2021 at 14:25, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > On Sun, Mar 7, 2021 at 11:49 AM Japin Li <japinli@hotmail.com> wrote:\n>> >>\n>> >> On Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> >> > Attaching v5 patch set for further review.\n>> >> >\n>> >>\n>> >> The v5 patch looks good to me, if there is no objection, I'll change the\n>> >> cf status to \"Ready for Committer\" in few days.\n>> >\n>> > Thanks for the review.\n>> >\n>> > As I mentioned upthread, I have 2 open points:\n>> > 1) In the patch I have added a new mat view info parameter to\n>> > ExplainOneQuery(), do we also need to add it to\n>> > ExplainOneQuery_hook_type? IMO, we should not (for now), because this\n>> > would create a backward compatibility issue.\n>>\n>> Sorry, I do not know how PostgreSQL handle the backward compatibility issue.\n>> Is there a guideline?\n>\n> I'm not aware of any guidelines as such, but we usually avoid any\n> changes to existing API, adding/making changes to system catalogs and\n> so on.\n>\n\nThanks for explaining. I'd be inclined to keep it backward compatible.\n\n>> > 2) Do we document (under respective command pages or somewhere else)\n>> > that we allow explain/explain analyze for a command?\n>> >\n>>\n>> IMO, we can add a new page to list the commands that can be explain/explain analyze,\n>> since it's clear for users.\n>\n> We are listing all the supported commands in explain.sgml, so added\n> the CREATE MATERIALIZED VIEW(it's missing even though it's supported\n> prior to this patch) and REFRESH MATERIALIZED VIEW there.\n>\n> Attaching v6 patch set. Please have a look.\n>\n\nLGTM.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sun, 07 Mar 2021 20:07:15 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\n+ * EXPLAIN ANALYZE CREATE TABLE AS or REFRESH MATERIALIZED VIEW\n+ * WITH NO DATA is weird.\n\nMaybe it is clearer to spell out WITH NO DATA for both statements, instead\nof sharing it.\n\n- if (!stmt->skipData)\n+ if (!stmt->skipData && !explainInfo)\n...\n+ else if (explainInfo)\n\nIt would be cleaner to put the 'if (explainInfo)' as the first check. That\nway, the check for skipData can be simplified.\n\nCheers\n\n\n\nOn Sun, Mar 7, 2021 at 1:34 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Sun, Mar 7, 2021 at 12:13 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> > On Sun, 07 Mar 2021 at 14:25, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > On Sun, Mar 7, 2021 at 11:49 AM Japin Li <japinli@hotmail.com> wrote:\n> > >>\n> > >> On Fri, 05 Mar 2021 at 19:48, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >> > Attaching v5 patch set for further review.\n> > >> >\n> > >>\n> > >> The v5 patch looks good to me, if there is no objection, I'll change\n> the\n> > >> cf status to \"Ready for Committer\" in few days.\n> > >\n> > > Thanks for the review.\n> > >\n> > > As I mentioned upthread, I have 2 open points:\n> > > 1) In the patch I have added a new mat view info parameter to\n> > > ExplainOneQuery(), do we also need to add it to\n> > > ExplainOneQuery_hook_type? 
IMO, we should not (for now), because this\n> > would create a backward compatibility issue.\n> >\n> > Sorry, I do not know how PostgreSQL handle the backward compatibility\n> issue.\n> > Is there a guideline?\n>\n> I'm not aware of any guidelines as such, but we usually avoid any\n> changes to existing API, adding/making changes to system catalogs and\n> so on.\n>\n> > > 2) Do we document (under respective command pages or somewhere else)\n> > > that we allow explain/explain analyze for a command?\n> > >\n> >\n> > IMO, we can add a new page to list the commands that can be\n> explain/explain analyze,\n> > since it's clear for users.\n>\n> We are listing all the supported commands in explain.sgml, so added\n> the CREATE MATERIALIZED VIEW(it's missing even though it's supported\n> prior to this patch) and REFRESH MATERIALIZED VIEW there.\n>\n> Attaching v6 patch set. Please have a look.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Sun, 7 Mar 2021 08:45:33 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 10:13 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Hi,\n>\n> + * EXPLAIN ANALYZE CREATE TABLE AS or REFRESH MATERIALIZED VIEW\n> + * WITH NO DATA is weird.\n>\n> Maybe it is clearer to spell out WITH NO DATA for both statements, instead of sharing it.\n\nDone that way.\n\n> - if (!stmt->skipData)\n> + if (!stmt->skipData && !explainInfo)\n> ...\n> + else if (explainInfo)\n>\n> It would be cleaner to put the 'if (explainInfo)' as the first check. That way, the check for skipData can be simplified.\n\nChanged.\n\nThanks for review comments. Attaching v7 patch set with changes only\nin 0002 patch. Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 8 Mar 2021 09:58:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn Mon, 08 Mar 2021 at 12:28, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sun, Mar 7, 2021 at 10:13 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> Hi,\n>>\n>> + * EXPLAIN ANALYZE CREATE TABLE AS or REFRESH MATERIALIZED VIEW\n>> + * WITH NO DATA is weird.\n>>\n>> Maybe it is clearer to spell out WITH NO DATA for both statements, instead of sharing it.\n>\n> Done that way.\n>\n>> - if (!stmt->skipData)\n>> + if (!stmt->skipData && !explainInfo)\n>> ...\n>> + else if (explainInfo)\n>>\n>> It would be cleaner to put the 'if (explainInfo)' as the first check. That way, the check for skipData can be simplified.\n>\n> Changed.\n>\n> Thanks for review comments. Attaching v7 patch set with changes only\n> in 0002 patch. Please have a look.\n>\n\nThe v7 patch looks good to me, and since there is no other advice, I have changed\nthe status to \"Ready for Committer\".\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 09:29:53 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 7:00 AM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Mon, 08 Mar 2021 at 12:28, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Sun, Mar 7, 2021 at 10:13 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> Hi,\n> >>\n> >> + * EXPLAIN ANALYZE CREATE TABLE AS or REFRESH MATERIALIZED VIEW\n> >> + * WITH NO DATA is weird.\n> >>\n> >> Maybe it is clearer to spell out WITH NO DATA for both statements, instead of sharing it.\n> >\n> > Done that way.\n> >\n> >> - if (!stmt->skipData)\n> >> + if (!stmt->skipData && !explainInfo)\n> >> ...\n> >> + else if (explainInfo)\n> >>\n> >> It would be cleaner to put the 'if (explainInfo)' as the first check. That way, the check for skipData can be simplified.\n> >\n> > Changed.\n> >\n> > Thanks for review comments. Attaching v7 patch set with changes only\n> > in 0002 patch. Please have a look.\n> >\n>\n> The v7 patch looks good to me, and there is no other advice, so I change\n> the status to \"Ready for Committer\".\n\nThanks for the review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Mar 2021 09:02:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "[ Sorry for not looking at this thread sooner ]\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Currently, $subject is not allowed. We do plan the mat view query\n> before every refresh. I propose to show the explain/explain analyze of\n> the select part of the mat view in case of Refresh Mat View(RMV).\n\nTBH, I think we should reject this. The problem with it is that it\nbinds us to the assumption that REFRESH MATERIALIZED VIEW has an\nexplainable plan. There are various people poking at ideas like\nincremental matview updates, which might rely on some implementation\nthat doesn't exactly equate to a SQL query. Incremental updates are\nhard enough already; they'll be even harder if they also have to\nmaintain compatibility with a pre-existing EXPLAIN behavior.\n\nI don't really see that this feature buys us anything you can't\nget by explaining the view's query, so I think we're better advised\nto keep our options open about how REFRESH might be implemented\nin future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Mar 2021 15:45:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> [ Sorry for not looking at this thread sooner ]\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Currently, $subject is not allowed. We do plan the mat view query\n> > before every refresh. I propose to show the explain/explain analyze of\n> > the select part of the mat view in case of Refresh Mat View(RMV).\n>\n> TBH, I think we should reject this. The problem with it is that it\n> binds us to the assumption that REFRESH MATERIALIZED VIEW has an\n> explainable plan. There are various people poking at ideas like\n> incremental matview updates, which might rely on some implementation\n> that doesn't exactly equate to a SQL query. Incremental updates are\n> hard enough already; they'll be even harder if they also have to\n> maintain compatibility with a pre-existing EXPLAIN behavior.\n>\n> I don't really see that this feature buys us anything you can't\n> get by explaining the view's query, so I think we're better advised\n> to keep our options open about how REFRESH might be implemented\n> in future.\n\nThat makes sense to me. Thanks for the comments. I'm fine to withdraw the patch.\n\nI would like to see if the 0001 patch(attaching here) will be useful\nat all. It just splits up the existing ExecRefreshMatView into a few\nfunctions to make the code readable. I'm okay to withdraw it if no one\nagrees.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 16 Mar 2021 17:43:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn Tue, 16 Mar 2021 at 20:13, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Tue, Mar 16, 2021 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> [ Sorry for not looking at this thread sooner ]\n>>\n>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> > Currently, $subject is not allowed. We do plan the mat view query\n>> > before every refresh. I propose to show the explain/explain analyze of\n>> > the select part of the mat view in case of Refresh Mat View(RMV).\n>>\n>> TBH, I think we should reject this. The problem with it is that it\n>> binds us to the assumption that REFRESH MATERIALIZED VIEW has an\n>> explainable plan. There are various people poking at ideas like\n>> incremental matview updates, which might rely on some implementation\n>> that doesn't exactly equate to a SQL query. Incremental updates are\n>> hard enough already; they'll be even harder if they also have to\n>> maintain compatibility with a pre-existing EXPLAIN behavior.\n>>\n>> I don't really see that this feature buys us anything you can't\n>> get by explaining the view's query, so I think we're better advised\n>> to keep our options open about how REFRESH might be implemented\n>> in future.\n>\n> That makes sense to me. Thanks for the comments. I'm fine to withdraw the patch.\n>\n> I would like to see if the 0001 patch(attaching here) will be useful\n> at all. It just splits up the existing ExecRefreshMatView into a few\n> functions to make the code readable.\n\n+1.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 18 Mar 2021 09:51:09 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 8:13 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > I don't really see that this feature buys us anything you can't\n> > get by explaining the view's query, so I think we're better advised\n> > to keep our options open about how REFRESH might be implemented\n> > in future.\n>\n> That makes sense to me. Thanks for the comments. I'm fine to withdraw the\npatch.\n>\n> I would like to see if the 0001 patch(attaching here) will be useful\n> at all. It just splits up the existing ExecRefreshMatView into a few\n> functions to make the code readable. I'm okay to withdraw it if no one\n> agrees.\n\nSide note for future reference: While the feature named in the CF entry has\nbeen rejected, the remaining 0001 patch currently proposed no longer\nmatches the title, or category. It is possible within the CF app, and\nhelpful, to rename the entry when the scope changes.\n\nThe proposed patch in the CF for incremental view maintenance [1] does some\nrefactoring of its own in implementing the feature. I don't think it makes\nsense to commit a refactoring that conflicts with that, while not\nnecessarily making life easier for that feature. Incremental view\nmaintenance is highly desirable, so I don't want to put up unnecessary\nroadblocks.\n\n[1] https://commitfest.postgresql.org/33/2138/\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 10 Jul 2021 07:49:08 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sat, Jul 10, 2021 at 5:19 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Side note for future reference: While the feature named in the CF entry has been rejected, the remaining 0001 patch currently proposed no longer matches the title, or category. It is possible within the CF app, and helpful, to rename the entry when the scope changes.\n>\n> The proposed patch in the CF for incremental view maintenance [1] does some refactoring of its own in implementing the feature. I don't think it makes sense to commit a refactoring that conflicts with that, while not necessarily making life easier for that feature. Incremental view maintenance is highly desirable, so I don't want to put up unnecessary roadblocks.\n\nThanks. I'm okay to close the CF\nentry(https://commitfest.postgresql.org/33/2928/) and stop this\nthread.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 10 Jul 2021 18:53:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN/EXPLAIN ANALYZE REFRESH MATERIALIZED VIEW"
}
] |
[
{
"msg_contents": "ISTM that heap_compute_xid_horizon_for_tuples() calculates\nlatestRemovedXid for index deletion callers without sufficient care.\nThe function only follows line pointer redirects, which is necessary\nbut not sufficient to visit all relevant heap tuple headers -- it also\nneeds to traverse HOT chains, but that doesn't happen. AFAICT\nheap_compute_xid_horizon_for_tuples() might therefore fail to produce\na sufficiently recent latestRemovedXid value for the index deletion\noperation as a whole. This might in turn lead to the REDO routine\n(e.g. btree_xlog_delete()) doing conflict processing incorrectly\nduring hot standby.\n\nAttached is an instrumentation patch. If I run \"make check\" with the\npatch applied, I get test output failures that can be used to get a\ngeneral sense of the problem:\n\n$ cat /code/postgresql/patch/build/src/test/regress/regression.diffs |\ngrep \"works okay this time\" | wc -l\n382\n\n$ cat /code/postgresql/patch/build/src/test/regress/regression.diffs |\ngrep \"hot chain bug\"\n+WARNING: hot chain bug, latestRemovedXid: 2307,\nlatestRemovedXidWithHotChain: 2316\n+WARNING: hot chain bug, latestRemovedXid: 4468,\nlatestRemovedXidWithHotChain: 4538\n+WARNING: hot chain bug, latestRemovedXid: 4756,\nlatestRemovedXidWithHotChain: 4809\n+WARNING: hot chain bug, latestRemovedXid: 5000,\nlatestRemovedXidWithHotChain: 5001\n+WARNING: hot chain bug, latestRemovedXid: 7683,\nlatestRemovedXidWithHotChain: 7995\n+WARNING: hot chain bug, latestRemovedXid: 13450,\nlatestRemovedXidWithHotChain: 13453\n+WARNING: hot chain bug, latestRemovedXid: 10040,\nlatestRemovedXidWithHotChain: 10041\n\nSo out of 389 calls, we see 7 failures on this occasion, which is\ntypical. 
Heap pruning usually saves us in practice (since it is highly\ncorrelated with setting LP_DEAD bits on index pages in the first\nplace), and even when it doesn't it's not particularly likely that the\nissue will make the crucial difference for the deletion operation as a\nwhole.\n\nThe code that is now heap_compute_xid_horizon_for_tuples() ran in REDO\nroutines directly prior to Postgres 12.\nheap_compute_xid_horizon_for_tuples() is a descendant of code added by\nSimon’s commit a760893d in 2010 -- pretty close to HOT’s initial\nintroduction. So this has been around for a long time.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 22 Dec 2020 09:52:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "HOT chain bug in latestRemovedXid calculation"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 9:52 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> ISTM that heap_compute_xid_horizon_for_tuples() calculates\n> latestRemovedXid for index deletion callers without sufficient care.\n> The function only follows line pointer redirects, which is necessary\n> but not sufficient to visit all relevant heap tuple headers -- it also\n> needs to traverse HOT chains, but that doesn't happen. AFAICT\n> heap_compute_xid_horizon_for_tuples() might therefore fail to produce\n> a sufficiently recent latestRemovedXid value for the index deletion\n> operation as a whole. This might in turn lead to the REDO routine\n> (e.g. btree_xlog_delete()) doing conflict processing incorrectly\n> during hot standby.\n\nI attach a concrete fix for this bug. My basic approach is to\nrestructure the code so that it follows both LP_REDIRECT redirects as\nwell as HOT chain t_ctid page offset numbers in the same loop. This is\nloosely based on similar loops in places like heap_hot_search_buffer()\nand heap_prune_chain().\n\nI also replaced the old \"conjecture\" comments about why it is that our\nhandling of LP_DEAD line pointers is correct. These comments match\nwhat you'll see in the original 2010 commit (commit a760893d), which\nis inappropriate. At the time Simon wrote that comment, a\nlatestRemovedXid return value of InvalidTransactionId had roughly the\nopposite meaning. The meaning changed significantly just a few months\nafter a760893d, in commit 52010027efc. The old \"conjecture\" comments\nwere intended to convey something along the lines of \"here is why it\nis currently thought necessary to take this conservative approach with\nLP_DEAD line pointers\". 
But the comment should say almost the opposite\nthing now -- something close to \"here is why it's okay that we take\nthe seemingly lax approach of skipping LP_DEAD line pointers -- that's\nactually safe\".\n\nThe patch has new comments that explain the issue by comparing it to\nthe approach taken by index AMs such as nbtree during VACUUM\nproper/bulk deletion. Index vacuuming can rely on heap pruning records\nhaving generated latestRemovedXid values that obviate any need for\nnbtree VACUUM records to explicitly log their own latestRemovedXid\nvalue (which is why nbtree VACUUM cannot include extra \"recently dead\"\nindex tuples). This makes it obvious, I think -- LP_DEAD line pointers\nin heap pages come from pruning, and pruning generates its own\nlatestRemovedXid at precisely the point that line pointers become\nLP_DEAD.\n\nI would like to commit this patch to v12, the first version that did\nthis process during original execution rather than in REDO routines.\nIt seems worth keeping the back branches in sync here. I suspect that\nthe old approach used prior to Postgres 12 has subtle buglets caused\nby inconsistencies during Hot Standby (I have heard rumors). I'd\nrather just not go there given the lack of field reports about this\nproblem.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 28 Dec 2020 21:49:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain bug in latestRemovedXid calculation"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 9:49 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would like to commit this patch to v12, the first version that did\n> this process during original execution rather than in REDO routines.\n> It seems worth keeping the back branches in sync here. I suspect that\n> the old approach used prior to Postgres 12 has subtle buglets caused\n> by inconsistencies during Hot Standby (I have heard rumors). I'd\n> rather just not go there given the lack of field reports about this\n> problem.\n\nPushed that a moment ago.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 30 Dec 2020 16:33:47 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain bug in latestRemovedXid calculation"
}
]
[
{
"msg_contents": "Here's an attempt at closing the race condition discussed in [1]\n(and in some earlier threads, though I'm too lazy to find them).\n\nThe core problem is that the bgworker management APIs were designed\nwithout any thought for exception conditions, notably \"we're not\ngonna launch any more workers because we're shutting down the database\".\nA process waiting for a worker in WaitForBackgroundWorkerStartup or\nWaitForBackgroundWorkerShutdown will wait forever, so that the database\nfails to shut down without manual intervention.\n\nI'd supposed that we would need some incompatible changes in those APIs\nin order to fix this, but after further study it seems like we could\nhack things by just acting as though a request that won't be serviced\nhas already run to completion. I'm not terribly satisfied with that\nas a long-term solution --- it seems to me that callers should be told\nthat there was a failure. But this looks to be enough to solve the\nlockup condition for existing callers, and it seems like it'd be okay\nto backpatch.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16785-c0207d8c67fb5f25%40postgresql.org",
"msg_date": "Tue, 22 Dec 2020 16:40:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Preventing hangups in bgworker start/stop during DB shutdown"
},
{
"msg_contents": "On Wed, 23 Dec 2020 at 05:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Here's an attempt at closing the race condition discussed in [1]\n> (and in some earlier threads, though I'm too lazy to find them).\n>\n> The core problem is that the bgworker management APIs were designed\n> without any thought for exception conditions, notably \"we're not\n> gonna launch any more workers because we're shutting down the database\".\n> A process waiting for a worker in WaitForBackgroundWorkerStartup or\n> WaitForBackgroundWorkerShutdown will wait forever, so that the database\n> fails to shut down without manual intervention.\n>\n> I'd supposed that we would need some incompatible changes in those APIs\n> in order to fix this, but after further study it seems like we could\n> hack things by just acting as though a request that won't be serviced\n> has already run to completion. I'm not terribly satisfied with that\n> as a long-term solution --- it seems to me that callers should be told\n> that there was a failure. But this looks to be enough to solve the\n> lockup condition for existing callers, and it seems like it'd be okay\n> to backpatch.\n>\n> Thoughts?\n>\n\nCallers who launch bgworkers already have to cope with conditions such as\nthe worker failing immediately after launch, or before attaching to the\nshmem segment used for worker management by whatever extension is launching\nit.\n\nSo I think it's reasonable to lie and say we launched it. 
The caller must\nalready cope with this case to behave correctly.\n\nPatch specifics:\n\n> This function should only be called from the postmaster\n\nIt'd be good to\n\n Assert(IsPostmasterEnvironment && !IsUnderPostmaster)\n\nin these functions.\n\nOtherwise at first read the patch and rationale looks sensible to me.\n\n(When it comes to the bgw APIs in general I have a laundry list of things\nI'd like to change or improve around signal handling, error trapping and\nrecovery, and lots more, but that's for another thread.)",
"msg_date": "Wed, 23 Dec 2020 12:33:14 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Preventing hangups in bgworker start/stop during DB shutdown"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 3:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's an attempt at closing the race condition discussed in [1]\n> (and in some earlier threads, though I'm too lazy to find them).\n>\n> The core problem is that the bgworker management APIs were designed\n> without any thought for exception conditions, notably \"we're not\n> gonna launch any more workers because we're shutting down the database\".\n> A process waiting for a worker in WaitForBackgroundWorkerStartup or\n> WaitForBackgroundWorkerShutdown will wait forever, so that the database\n> fails to shut down without manual intervention.\n>\n> I'd supposed that we would need some incompatible changes in those APIs\n> in order to fix this, but after further study it seems like we could\n> hack things by just acting as though a request that won't be serviced\n> has already run to completion. I'm not terribly satisfied with that\n> as a long-term solution --- it seems to me that callers should be told\n> that there was a failure. But this looks to be enough to solve the\n> lockup condition for existing callers, and it seems like it'd be okay\n> to backpatch.\n>\n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/flat/16785-c0207d8c67fb5f25%40postgresql.org\n\n1) Yeah, the postmaster will not be able to start the bg workers in\nfollowing cases, when bgworker_should_start_now returns false. So we\nmight encounter the hang issue.\nstatic bool\nbgworker_should_start_now(BgWorkerStartTime start_time)\n{\n switch (pmState)\n {\n case PM_NO_CHILDREN:\n case PM_WAIT_DEAD_END:\n case PM_SHUTDOWN_2:\n case PM_SHUTDOWN:\n case PM_WAIT_BACKENDS:\n case PM_STOP_BACKENDS:\n break;\n\n2) What if postmaster enters pmState >= PM_STOP_BACKENDS state after\nit calls BackgroundWorkerStateChange(pmState < PM_STOP_BACKENDS)?\nFirst of all, is it possible? 
I think yes, because we are in\nsigusr1_handler(), and I don't see we blocking the signal that sets\npmState >= PM_STOP_BACKENDS either in sigusr1_handler or in\nBackgroundWorkerStateChange(). Though it's a small window, we might\nget into the hangup issue? If yes, can we check the pmState in the for\nloop in BackgroundWorkerStateChange()?\n\n if (CheckPostmasterSignal(PMSIGNAL_BACKGROUND_WORKER_CHANGE))\n {\n- BackgroundWorkerStateChange();\n+ /* Accept new dynamic worker requests only if not stopping. */\n+ BackgroundWorkerStateChange(pmState < PM_STOP_BACKENDS);\n StartWorkerNeeded = true;\n }\n\n3) Can we always say that if bgw_restart_time is BGW_NEVER_RESTART,\nthen it's a dynamic bg worker? I think we can also have normal\nbgworkers with BGW_NEVER_RESTART flag(I see one in worker_spi.c\n_PG_init()), will there be any problem? Or do we need some comment\ntweaking?\n\n+ /*\n+ * If this is a dynamic worker request, and we aren't allowing new\n+ * dynamic workers, then immediately mark it for termination; the next\n+ * stanza will take care of cleaning it up.\n+ */\n+ if (slot->worker.bgw_restart_time == BGW_NEVER_RESTART &&\n+ !allow_new_workers)\n+ slot->terminate = true;\n\n4) IIUC, in the patch we mark slot->terminate = true only for\nBGW_NEVER_RESTART kind bg workers, what happens if a bg worker has\nbgw_restart_time seconds and don't we hit the hanging issue(that we\nare trying to solve here) for those bg workers?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 24 Dec 2020 18:19:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Preventing hangups in bgworker start/stop during DB shutdown"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Wed, Dec 23, 2020 at 3:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's an attempt at closing the race condition discussed in [1]\n>> (and in some earlier threads, though I'm too lazy to find them).\n\n> 2) What if postmaster enters pmState >= PM_STOP_BACKENDS state after\n> it calls BackgroundWorkerStateChange(pmState < PM_STOP_BACKENDS)?\n> First of all, is it possible? I think yes, because we are in\n> sigusr1_handler(), and I don't see we blocking the signal that sets\n> pmState >= PM_STOP_BACKENDS either in sigusr1_handler or in\n> BackgroundWorkerStateChange().\n\nIf you're asking whether the postmaster's signal handlers can interrupt\neach other, they can't; see comment at the start of each one. If you're\nwondering about the order of operations in sigusr1_handler, I agree that\nseems wrong now. I'm inclined to move the BackgroundWorkerStateChange\ncall to be just before maybe_start_bgworkers(). That way it's after the\npossible entry to PM_HOT_STANDBY state. The later steps can't change\npmState, except for PostmasterStateMachine, which would be responsible for\nanything that needs to be done to bgworkers as a result of making a state\nchange.\n\n> 3) Can we always say that if bgw_restart_time is BGW_NEVER_RESTART,\n> then it's a dynamic bg worker?\n\nThat assumption's already baked into ResetBackgroundWorkerCrashTimes,\nfor one. Personally I'd have designed things with some more-explicit\nindicator, but I'm not interested in revisiting those API decisions now;\nany cleanup we might undertake would result in a non-back-patchable fix.\n\nAs far as whether it's formally correct to do this given the current\nAPIs, I think it is. We're essentially pretending that the worker\ngot launched and instantly exited. 
If it's BGW_NEVER_RESTART then that\nwould result in deregistration in any case, while if it's not that,\nthen the worker record should get kept for a possible later restart.\n\n> I think we can also have normal\n> bgworkers with BGW_NEVER_RESTART flag(I see one in worker_spi.c\n> _PG_init()),\n\nThat might be a bug in worker_spi.c, but since that's only test code,\nI don't care about it too much. Nobody's really going to care what\nthat module does in a postmaster shutdown.\n\n> 4) IIUC, in the patch we mark slot->terminate = true only for\n> BGW_NEVER_RESTART kind bg workers, what happens if a bg worker has\n> bgw_restart_time seconds and don't we hit the hanging issue(that we\n> are trying to solve here) for those bg workers?\n\nThe hang problem is not with the worker itself, it's with anything\nthat might be waiting around for the worker to finish. It doesn't\nseem to me to make a whole lot of sense to wait for a restartable\nworker; what would that mean?\n\nThere's definitely room to revisit and clarify these APIs, and maybe\n(if we don't change them altogether) add some Asserts about what are\nsane combinations of properties. But my purpose today is just to get\na back-patchable bug fix, and that sort of change wouldn't fit in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Dec 2020 11:13:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Preventing hangups in bgworker start/stop during DB shutdown"
},
{
"msg_contents": "I wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> 4) IIUC, in the patch we mark slot->terminate = true only for\n>> BGW_NEVER_RESTART kind bg workers, what happens if a bg worker has\n>> bgw_restart_time seconds and don't we hit the hanging issue(that we\n>> are trying to solve here) for those bg workers?\n\n> The hang problem is not with the worker itself, it's with anything\n> that might be waiting around for the worker to finish. It doesn't\n> seem to me to make a whole lot of sense to wait for a restartable\n> worker; what would that mean?\n\nUpon further looking around, I noted that autoprewarm's\nautoprewarm_start_worker() function does that, so we can't really\ndismiss it.\n\nHowever, what we can do instead is to change the condition to be\n\"cancel pending bgworker requests if there is a waiting process\".\nAlmost all of the time, that means it's a dynamic bgworker with\nBGW_NEVER_RESTART, so there's no difference. In the exceptional\ncases like autoprewarm_start_worker, this would result in removing\na bgworker registration record for a restartable worker ... but\nsince we're shutting down, that record would have no effect before\nthe postmaster exits, anyway. I think we can live with that, at\nleast till such time as somebody redesigns this in a cleaner way.\n\nI pushed a fix along those lines.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Dec 2020 17:07:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Preventing hangups in bgworker start/stop during DB shutdown"
},
{
"msg_contents": "On Fri, 25 Dec 2020 at 06:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> >> 4) IIUC, in the patch we mark slot->terminate = true only for\n> >> BGW_NEVER_RESTART kind bg workers, what happens if a bg worker has\n> >> bgw_restart_time seconds and don't we hit the hanging issue(that we\n> >> are trying to solve here) for those bg workers?\n>\n> > The hang problem is not with the worker itself, it's with anything\n> > that might be waiting around for the worker to finish. It doesn't\n> > seem to me to make a whole lot of sense to wait for a restartable\n> > worker; what would that mean?\n>\n> Upon further looking around, I noted that autoprewarm's\n> autoprewarm_start_worker() function does that, so we can't really\n> dismiss it.\n>\n> However, what we can do instead is to change the condition to be\n> \"cancel pending bgworker requests if there is a waiting process\".\n> Almost all of the time, that means it's a dynamic bgworker with\n> BGW_NEVER_RESTART, so there's no difference. In the exceptional\n> cases like autoprewarm_start_worker, this would result in removing\n> a bgworker registration record for a restartable worker ... but\n> since we're shutting down, that record would have no effect before\n> the postmaster exits, anyway. 
I think we can live with that, at\n> least till such time as somebody redesigns this in a cleaner way.\n>\n> I pushed a fix along those lines.\n>\n>\nThanks for the change.\n\nCleanups like this in the BGW API definitely make life easier.",
"msg_date": "Fri, 22 Jan 2021 16:13:22 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Preventing hangups in bgworker start/stop during DB shutdown"
}
]