[ { "msg_contents": "serinus' experimental gcc whines about a few places in network.c:\n\n../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':\n../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]\n 1893 | pdst[nb] = ~pip[nb];\n | ~~~~~~~~~^~~~~~~~~~\n../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16\n 27 | unsigned char ipaddr[16]; /* up to 128 bits of address */\n | ^~~~~~\n../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16\n\nThe code in question looks like\n\n {\n int nb = ip_addrsize(ip);\n unsigned char *pip = ip_addr(ip);\n unsigned char *pdst = ip_addr(dst);\n\n while (nb-- > 0)\n pdst[nb] = ~pip[nb];\n }\n\nThere's nothing actually wrong with this, but I'm wondering if\nwe could silence the warning by changing the loop condition to\n\n while (--nb >= 0)\n\nwhich seems like it might be marginally more readable anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Mar 2022 16:23:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "More weird compiler warnings" }, { "msg_contents": "Hi,\n\nOn 2022-03-26 16:23:26 -0400, Tom Lane wrote:\n> serinus' experimental gcc whines about a few places in network.c:\n>\n> ../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':\n> ../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]\n> 1893 | pdst[nb] = ~pip[nb];\n> | ~~~~~~~~~^~~~~~~~~~\n> ../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16\n> 27 | unsigned char ipaddr[16]; /* up to 128 bits of address */\n> | ^~~~~~\n> ../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 
16\n>\n> The code in question looks like\n>\n> {\n> int nb = ip_addrsize(ip);\n> unsigned char *pip = ip_addr(ip);\n> unsigned char *pdst = ip_addr(dst);\n>\n> while (nb-- > 0)\n> pdst[nb] = ~pip[nb];\n> }\n>\n> There's nothing actually wrong with this\n\nI reported this to the gcc folks, that's clearly a bug. I suspect that it\nmight not just cause spurious warnings, but also code generation issues - but\nI don't know that part for sure.\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=104986\n\n\n> but I'm wondering if we could silence the warning by changing the loop condition to\n>\n> while (--nb >= 0)\n>\n> which seems like it might be marginally more readable anyway.\n\nYes, that looks like it silences it. I modified the small reproducer I had in\nthat bug (https://godbolt.org/z/ejK9h6von) and the warning vanishes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Mar 2022 13:55:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More weird compiler warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-26 16:23:26 -0400, Tom Lane wrote:\n>> serinus' experimental gcc whines about a few places in network.c:\n\n> I reported this to the gcc folks, that's clearly a bug. I suspect that it\n> might not just cause spurious warnings, but also code generation issues - but\n> I don't know that part for sure.\n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104986\n\nHmm, looks like the gcc folk aren't too sure either ;-). But yeah,\ngiven the discussion so far it's plausible there could be actually\nbad code emitted.\n\n>> but I'm wondering if we could silence the warning by changing the loop condition to\n>> \twhile (--nb >= 0)\n>> which seems like it might be marginally more readable anyway.\n\n> Yes, that looks like it silences it. 
I modified the small reproducer I had in\n> that bug (https://godbolt.org/z/ejK9h6von) and the warning vanishes.\n\nOkay, so we can change this code, or just do nothing and wait for\na repaired gcc. Since that's an unreleased version there's no\nconcern about any possible bug in-the-wild. I think it probably\nshould come down to whether we think the predecrement form is\nindeed more readable. I'm about +0.1 towards changing, what\ndo you think?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Mar 2022 17:04:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: More weird compiler warnings" }, { "msg_contents": "Hi,\n\nOn 2022-03-26 17:04:16 -0400, Tom Lane wrote:\n> Hmm, looks like the gcc folk aren't too sure either ;-).\n\nHeh, yea ;)\n\n> Okay, so we can change this code, or just do nothing and wait for\n> a repaired gcc. Since that's an unreleased version there's no\n> concern about any possible bug in-the-wild. I think it probably\n> should come down to whether we think the predecrement form is\n> indeed more readable.\n\nAgreed.\n\n\n> I'm about +0.1 towards changing, what do you think?\n\nSimilar.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Mar 2022 14:26:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More weird compiler warnings" }, { "msg_contents": "Hi,\n\nOn 2022-03-26 13:55:49 -0700, Andres Freund wrote:\n> On 2022-03-26 16:23:26 -0400, Tom Lane wrote:\n> > serinus' experimental gcc whines about a few places in network.c:\n> >\n> > ../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':\n> > ../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]\n> > 1893 | pdst[nb] = ~pip[nb];\n> > | ~~~~~~~~~^~~~~~~~~~\n> > ../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16\n> > 27 | 
unsigned char ipaddr[16]; /* up to 128 bits of address */\n> > | ^~~~~~\n> > ../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16\n> >\n> > The code in question looks like\n> >\n> > {\n> > int nb = ip_addrsize(ip);\n> > unsigned char *pip = ip_addr(ip);\n> > unsigned char *pdst = ip_addr(dst);\n> >\n> > while (nb-- > 0)\n> > pdst[nb] = ~pip[nb];\n> > }\n> >\n> > There's nothing actually wrong with this\n> \n> I reported this to the gcc folks, that's clearly a bug. I suspect that it\n> might not just cause spurious warnings, but also code generation issues - but\n> I don't know that part for sure.\n> \n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104986\n> \n> \n> > but I'm wondering if we could silence the warning by changing the loop condition to\n> >\n> > while (--nb >= 0)\n> >\n> > which seems like it might be marginally more readable anyway.\n> \n> Yes, that looks like it silences it. I modified the small reproducer I had in\n> that bug (https://godbolt.org/z/ejK9h6von) and the warning vanishes.\n\nThe recent discussion about warnings reminded me of this. Given the gcc bug\nhasn't been fixed, I think we should make that change. I'd vote for\nbackpatching it as well - what do you think?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Mar 2023 10:18:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More weird compiler warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-26 13:55:49 -0700, Andres Freund wrote:\n>> On 2022-03-26 16:23:26 -0400, Tom Lane wrote:\n>>> but I'm wondering if we could silence the warning by changing the loop condition to\n>>> while (--nb >= 0)\n>>> which seems like it might be marginally more readable anyway.\n\n>> Yes, that looks like it silences it. 
I modified the small reproducer I had in\n>> that bug (https://godbolt.org/z/ejK9h6von) and the warning vanishes.\n\n> The recent discussion about warnings reminded me of this. Given the gcc bug\n> hasn't been fixed, I think we should make that change. I'd vote for\n> backpatching it as well - what do you think?\n\n+1, can't hurt anything AFAICS.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Mar 2023 14:31:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: More weird compiler warnings" }, { "msg_contents": "On 2023-03-16 14:31:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-03-26 13:55:49 -0700, Andres Freund wrote:\n> >> On 2022-03-26 16:23:26 -0400, Tom Lane wrote:\n> >>> but I'm wondering if we could silence the warning by changing the loop condition to\n> >>> while (--nb >= 0)\n> >>> which seems like it might be marginally more readable anyway.\n> \n> >> Yes, that looks like it silences it. I modified the small reproducer I had in\n> >> that bug (https://godbolt.org/z/ejK9h6von) and the warning vanishes.\n> \n> > The recent discussion about warnings reminded me of this. Given the gcc bug\n> > hasn't been fixed, I think we should make that change. I'd vote for\n> > backpatching it as well - what do you think?\n> \n> +1, can't hurt anything AFAICS.\n\nDone.\n\n\n", "msg_date": "Thu, 16 Mar 2023 14:53:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More weird compiler warnings" } ]
[ { "msg_contents": "Hi,\n\nI'm working to increase the test coverage of pgstat related stuff higher (for\nthe shared memory stats patch, of course).\n\n\"Accidentally\" noticed that\n SELECT * FROM pg_stat_get_replication_slot(NULL);\ncrashes. This is present in HEAD and 14.\n\nI guess we'll have to add a code-level check in 14 to deal with this?\n\n\npg_stat_get_subscription_stats() also is wrongly marked. But at least in the\ntrivial cases just returns bogus results (for 0/InvalidOid). That's only in\nHEAD, so easy to deal with.\n\nThe other functions returned by\n SELECT oid::regprocedure FROM pg_proc WHERE proname LIKE 'pg%stat%' AND pronargs > 0 AND NOT proisstrict;\nlook ok.\n\n\nI wonder if we ought to make PG_GETARG_DATUM(n) assert that !PG_ARGISNULL(n)?\nThat'd perhaps make it easier to catch some of these...\n\nIt'd be nice to have a test in sanity check to just call each non-strict\nfunction with NULL inputs automatically. But the potential side-effects\nprobably makes that not a realistic option?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Mar 2022 14:24:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if we ought to make PG_GETARG_DATUM(n) assert that !PG_ARGISNULL(n)?\n> That'd perhaps make it easier to catch some of these...\n\nDon't see the point; such cases will crash just fine without any\nassert. The problem is lack of test coverage ...\n\n> It'd be nice to have a test in sanity check to just call each non-strict\n> function with NULL inputs automatically. But the potential side-effects\n> probably makes that not a realistic option?\n\n... and as you say, brute force testing seems difficult. 
I'm\nparticularly worried about multi-argument functions, as in\nprinciple we'd need to check each argument separately, and cons\nup something plausible to pass to the other arguments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Mar 2022 17:41:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "Hi,\n\nOn 2022-03-26 17:41:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if we ought to make PG_GETARG_DATUM(n) assert that !PG_ARGISNULL(n)?\n> > That'd perhaps make it easier to catch some of these...\n> \n> Don't see the point; such cases will crash just fine without any\n> assert. The problem is lack of test coverage ...\n\nNot reliably. Byval types typically won't crash, just do something\nbogus. As e.g. in the case of pg_stat_get_subscription_stats(NULL) I found to\nalso be wrong upthread.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Mar 2022 14:52:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "On Sun, Mar 27, 2022 at 2:54 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I'm working to increase the test coverage of pgstat related stuff higher (for\n> the shared memory stats patch, of course).\n>\n> \"Accidentally\" noticed that\n> SELECT * FROM pg_stat_get_replication_slot(NULL);\n> crashes. This is present in HEAD and 14.\n>\n> I guess we'll have to add a code-level check in 14 to deal with this?\n\nThis problem is reproducible in both PG14 & Head, changing isstrict\nsolves the problem. 
In PG14 should we also add a check in\npg_stat_get_replication_slot so that it can solve the problem for the\nexisting users who have already installed PG14 or will this be handled\nautomatically when upgrading to the new version.\n\nRegards,\nVignesh", "msg_date": "Sun, 27 Mar 2022 11:59:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "On Sun, Mar 27, 2022 at 11:59 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Mar 27, 2022 at 2:54 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I'm working to increase the test coverage of pgstat related stuff higher (for\n> > the shared memory stats patch, of course).\n> >\n> > \"Accidentally\" noticed that\n> > SELECT * FROM pg_stat_get_replication_slot(NULL);\n> > crashes. This is present in HEAD and 14.\n> >\n> > I guess we'll have to add a code-level check in 14 to deal with this?\n>\n> This problem is reproducible in both PG14 & Head, changing isstrict\n> solves the problem. In PG14 should we also add a check in\n> pg_stat_get_replication_slot so that it can solve the problem for the\n> existing users who have already installed PG14 or will this be handled\n> automatically when upgrading to the new version.\n>\n\nI am not sure if for 14 we can make a catalog change as that would\nrequire catversion bump, so adding a code-level check as suggested by\nAndres seems like a better option. Andres/Tom, any better ideas for\nthis?\n\nThanks for the patch but for HEAD, we also need handling and test for\npg_stat_get_subscription_stats. 
Considering this for HEAD, we can mark\nboth pg_stat_get_replication_slot and pg_stat_get_subscription_stats\nas strict and in 14, we need to add a code-level check for\npg_stat_get_replication_slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Mar 2022 08:28:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "Hi,\n\nOn 2022-03-28 08:28:29 +0530, Amit Kapila wrote:\n> I am not sure if for 14 we can make a catalog change as that would\n> require catversion bump, so adding a code-level check as suggested by\n> Andres seems like a better option. Andres/Tom, any better ideas for\n> this?\n\nI think we could do the catalog change too, so that future initdb's are marked\ncorrectly. But we obviously do need the code-level check nevertheless.\n\n\n> Thanks for the patch but for HEAD, we also need handling and test for\n> pg_stat_get_subscription_stats. Considering this for HEAD, we can mark\n> both pg_stat_get_replication_slot and pg_stat_get_subscription_stats\n> as strict and in 14, we need to add a code-level check for\n> pg_stat_get_replication_slot.\n\nFWIW, I have a test for both, I was a bit \"stuck\" on where to put the\npg_stat_get_subscription_stats(NULL) test. I had put the\npg_stat_get_replication_slot(NULL) in contrib/test_decoding/sql/stats.sql\nbut pg_stat_get_subscription_stats() doesn't really fit there. I think I'm\ncoming down to putting a section of such tests into src/test/regress/sql/stats.sql\ninstead. 
In the hope of preventing future such occurrances by encouraging\npeople to copy the test...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 27 Mar 2022 21:09:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-28 08:28:29 +0530, Amit Kapila wrote:\n>> I am not sure if for 14 we can make a catalog change as that would\n>> require catversion bump, so adding a code-level check as suggested by\n>> Andres seems like a better option. Andres/Tom, any better ideas for\n>> this?\n\n> I think we could do the catalog change too, so that future initdb's are marked\n> correctly. But we obviously do need the code-level check nevertheless.\n\nYeah. We have to install the C-level check, so I don't see any\npoint in changing the catalogs in back branches. That'll create\nconfusion while not saving anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Mar 2022 00:17:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "On Sun, Mar 27, 2022 at 6:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-26 17:41:53 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I wonder if we ought to make PG_GETARG_DATUM(n) assert that !PG_ARGISNULL(n)?\n> > > That'd perhaps make it easier to catch some of these...\n> >\n> > Don't see the point; such cases will crash just fine without any\n> > assert. The problem is lack of test coverage ...\n>\n> Not reliably. Byval types typically won't crash, just do something\n> bogus. As e.g. in the case of pg_stat_get_subscription_stats(NULL) I found to\n> also be wrong upthread.\n\nRight. But it seems like we cannot simply add PG_ARGISNULL () to\nPG_GETARG_DATUM(). 
There are some codes such as array_remove() and\narray_replace() that call PG_GETARG_DATUM() and PG_ARGISNULL() and\npass these values to functions that do is-null check\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 28 Mar 2022 13:42:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "Hi,\n\nOn 2022-03-27 21:09:29 -0700, Andres Freund wrote:\n> FWIW, I have a test for both, I was a bit \"stuck\" on where to put the\n> pg_stat_get_subscription_stats(NULL) test. I had put the\n> pg_stat_get_replication_slot(NULL) in contrib/test_decoding/sql/stats.sql\n> but pg_stat_get_subscription_stats() doesn't really fit there. I think I'm\n> coming down to putting a section of such tests into src/test/regress/sql/stats.sql\n> instead. In the hope of preventing future such occurrances by encouraging\n> people to copy the test...\n\nPushed with tests there.\n\nVignesh, thanks for the patches! I already had something locally, should have\nmentioned that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 27 Mar 2022 21:54:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" }, { "msg_contents": "On Mon, Mar 28, 2022 at 10:24 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-03-27 21:09:29 -0700, Andres Freund wrote:\n> > FWIW, I have a test for both, I was a bit \"stuck\" on where to put the\n> > pg_stat_get_subscription_stats(NULL) test. I had put the\n> > pg_stat_get_replication_slot(NULL) in contrib/test_decoding/sql/stats.sql\n> > but pg_stat_get_subscription_stats() doesn't really fit there. I think I'm\n> > coming down to putting a section of such tests into src/test/regress/sql/stats.sql\n> > instead. 
In the hope of preventing future such occurrences by encouraging\n> > people to copy the test...\n>\n> Pushed with tests there.\n>\n\nThank you.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:04:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_get_replication_slot() marked not strict, crashes" } ]
[ { "msg_contents": "Hi all,\n\nThis is Christos Maris <https://www.linkedin.com/in/christosmaris/>, and\nI'd like to declare my interest in the GSoC project mentioned above.\nI'm an experienced Fullstack Developer, but still an open-source beginner.\nThat, combined with the fact that I'd really like to contribute to Postgres\nand learn Perl, makes this particular project ideal for me.\n\nI want to ask if the mentor of this project (Stephen Frost) could help me\nwith my application (if that's allowed, of course)?\n\nI appreciate any help you can provide.\nChristos\n\nHi all,This is Christos Maris, and I'd like to declare my interest in the GSoC project mentioned above.I'm an experienced Fullstack Developer, but still an open-source beginner. That, combined with the fact that I'd really like to contribute to Postgres and learn Perl, makes this particular project ideal for me.I want to ask if the mentor of this project (Stephen Frost) could help me with my application (if that's allowed, of course)?I appreciate any help you can provide.Christos", "msg_date": "Sun, 27 Mar 2022 14:04:17 +0300", "msg_from": "Christos Maris <christos.c.maris@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: Improve PostgreSQL Regression Test Coverage" }, { "msg_contents": "Greetings,\n\n* Christos Maris (christos.c.maris@gmail.com) wrote:\n> This is Christos Maris <https://www.linkedin.com/in/christosmaris/>, and\n> I'd like to declare my interest in the GSoC project mentioned above.\n> I'm an experienced Fullstack Developer, but still an open-source beginner.\n> That, combined with the fact that I'd really like to contribute to Postgres\n> and learn Perl, makes this particular project ideal for me.\n\nGlad to hear that you're interested!\n\n> I want to ask if the mentor of this project (Stephen Frost) could help me\n> with my application (if that's allowed, of course)?\n\nYou're welcome to send it to me off-list to take a look at and I can\nprovide comments on it, but note that I 
can't provide excessive help in\ncrafting it. This is to be your proposal after all, not mine.\n\nBe sure to review this:\n\nhttps://google.github.io/gsocguides/student/writing-a-proposal\n\nWhen it comes to working on your proposal.\n\nThanks,\n\nStephen", "msg_date": "Tue, 29 Mar 2022 13:20:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC: Improve PostgreSQL Regression Test Coverage" } ]
[ { "msg_contents": "Hi,\n\nOn master I got a FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\"\non an assert-enabled instance and with (I think) data over a certain length.\n\nI whittled it down to the attached bash (careful - it drops stuff). It \nhas 5 tsv-data lines (one long line) that COPY slurps into a table. The \nmiddle, third line causes the problem, later on. Shortening the long \nline to somewhere below 2000 characters fixes it again.\n\nMore info in the attached .sh file.\n\nIf debug-assert is 'off', the problem does not occur. (REL_14_STABLE \nalso does not have the problem, assertions or not)\n\nthanks,\n\nErik Rijkers", "msg_date": "Sun, 27 Mar 2022 20:32:45 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Sun, 27 Mar 2022 20:32:45 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> On master I got a FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\"\n> on an assert-enabled instance and with (I think) data over a certain\n> length.\n> \n> I whittled it down to the attached bash (careful - it drops stuff).\n> It has 5 tsv-data lines (one long line) that COPY slurps into a table.\n> The middle, third line causes the problem, later on. Shortening the\n> long line to somewhere below 2000 characters fixes it again.\n> \n> More info in the attached .sh file.\n\nIt is reproducible for me. Thanks for the reproducer.\n\n> If debug-assert is 'off', the problem does not occur. 
(REL_14_STABLE\n> also does not have the problem, assertions or not)\n\nIt seems like related with [1]?\n\nInserting EnsurePortalSnapshotExists() to RunFromStore fixes this but\nI'm not sure where is the right place to do this..\n\n[1] https://www.postgresql.org/message-id/flat/20210623035916.GL29179%40telsasoft.com#f802617a00cee4d013ad8fa69e1af048\n\nFor someone's information, this is more readable stack trace.\n\n#0 0x00007f43aeed037f in raise () from /lib64/libc.so.6\n#1 0x00007f43aeebadb5 in abort () from /lib64/libc.so.6\n#2 0x0000000000b28747 in ExceptionalCondition (\n conditionName=0xba2c48 \"HaveRegisteredOrActiveSnapshot()\", \n errorType=0xba2882 \"FailedAssertion\", \n fileName=0xba2870 \"toast_internals.c\", lineNumber=670) at assert.c:69\n#3 0x00000000004ac776 in init_toast_snapshot (toast_snapshot=0x7ffce64f7440)\n at toast_internals.c:670\n#4 0x00000000005164ea in heap_fetch_toast_slice (toastrel=0x7f43b193cad0, \n valueid=16393, attrsize=1848, sliceoffset=0, slicelength=1848, \n result=0x1cbb948) at heaptoast.c:688\n#5 0x000000000049fc86 in table_relation_fetch_toast_slice (\n toastrel=0x7f43b193cad0, valueid=16393, attrsize=1848, sliceoffset=0, \n slicelength=1848, result=0x1cbb948)\n at ../../../../src/include/access/tableam.h:1892\n#6 0x00000000004a0a0f in toast_fetch_datum (attr=0x1d6b171) at detoast.c:375\n#7 0x000000000049fffb in detoast_attr (attr=0x1d6b171) at detoast.c:123\n#8 0x0000000000b345ba in pg_detoast_datum_packed (datum=0x1d6b171)\n at fmgr.c:1757\n#9 0x0000000000aece72 in text_to_cstring (t=0x1d6b171) at varlena.c:225\n#10 0x0000000000aedda2 in textout (fcinfo=0x7ffce64f77a0) at varlena.c:574\n#11 0x0000000000b331bf in FunctionCall1Coll (flinfo=0x1d695e0, collation=0, \n arg1=30847345) at fmgr.c:1138\n#12 0x0000000000b3422b in OutputFunctionCall (flinfo=0x1d695e0, val=30847345)\n at fmgr.c:1575\n#13 0x00000000004a6b6c in printtup (slot=0x1cb81f0, self=0x1c96e90)\n at printtup.c:357\n#14 0x000000000099499f in RunFromStore 
(portal=0x1cf9380, \n direction=ForwardScanDirection, count=0, dest=0x1c96e90) at pquery.c:1096\n#15 0x00000000009944e3 in PortalRunSelect (portal=0x1cf9380, forward=true, \n count=0, dest=0x1c96e90) at pquery.c:917\n#16 0x00000000009941d3 in PortalRun (portal=0x1cf9380, \n count=9223372036854775807, isTopLevel=true, run_once=true, \n dest=0x1c96e90, altdest=0x1c96e90, qc=0x7ffce64f7ac0) at pquery.c:765\n#17 0x000000000098df4b in exec_simple_query (\n query_string=0x1c96030 \"fetch all in myportal;\") at postgres.c:1250\n#18 0x00000000009923a3 in PostgresMain (dbname=0x1cc11b0 \"postgres\", \n username=0x1cc1188 \"horiguti\") at postgres.c:4520\n#19 0x00000000008c6caf in BackendRun (port=0x1cb74c0) at postmaster.c:4593\n#20 0x00000000008c6631 in BackendStartup (port=0x1cb74c0) at postmaster.c:4321\n#21 0x00000000008c29cb in ServerLoop () at postmaster.c:1801\n#22 0x00000000008c2298 in PostmasterMain (argc=1, argv=0x1c8e0e0)\n at postmaster.c:1473\n#23 0x00000000007c14c3 in main (argc=1, argv=0x1c8e0e0) at main.c:202\n\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Mar 2022 18:36:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Inserting EnsurePortalSnapshotExists() to RunFromStore fixes this but\n> I'm not sure where is the right place to do this..\n\nThen, I found that portal->holdSnapshot is that. I came up with the\nattached. It does the follows:\n\n1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.\n\n2. 
Use holdSnapshot in RunFromStore if any.\n\n\nThe reproducer is reduced to as small as the following.\n\nCREATE TABLE t (a text);\nINSERT INTO t VALUES('some random text');\nBEGIN;\nDECLARE c CURSOR FOR SELECT * FROM t;\nFETCH ALL IN c;\n\nBut I haven't come up with a reasonable way to generate the 'some\nrandom text' yet.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 29 Mar 2022 17:06:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Tue, 29 Mar 2022 17:06:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Then, I found that portal->holdSnapshot is that. I came up with the\n> attached. It does the follows:\n> \n> 1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.\n> \n> 2. 
Use holdSnapshot in RunFromStore if any.\n> \n> \n> The rerpducer is reduced to as small as the following.\n> \n> CREATE TABLE t (a text);\n> INSERT INTO t VALUES('some random text');\n> BEGIN;\n> DECLARE c CURSOR FOR SELECT * FROM t;\n> FETCH ALL IN c;\n> \n> But I haven't come up with a reasonable way to generate the 'some\n> random text' yet.\n\nI gave up and took a straightforward way to generate one.\n\nI don't like that it uses a fixed length for the random text, but\nanyway it works for now...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 29 Mar 2022 18:10:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Tue, 29 Mar 2022 at 11:10, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 29 Mar 2022 17:06:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > Then, I found that portal->holdSnapshot is that. I came up with the\n> > attached. It does the follows:\n> >\n> > 1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.\n> >\n> > 2. Use holdSnapshot in RunFromStore if any.\n> >\n> >\n> > The rerpducer is reduced to as small as the following.\n> >\n> > CREATE TABLE t (a text);\n> > INSERT INTO t VALUES('some random text');\n> > BEGIN;\n> > DECLARE c CURSOR FOR SELECT * FROM t;\n> > FETCH ALL IN c;\n> >\n> > But I haven't come up with a reasonable way to generate the 'some\n> > random text' yet.\n>\n> I gave up and took a straightforward way to generate one.\n>\n> I don't like that it uses a fixed length for the random text, but\n> anyway it works for now...\n\nAn shorter (?) 
reproducer might be the following, which forces any\nvalue for 'a' to be toasted and thus triggering the check in\ninit_toast_snapshot regardless of value length:\n\nCREATE TABLE t (a text);\nALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;\nINSERT INTO t VALUES ('toast');\nBEGIN;\nDECLARE c CURSOR FOR SELECT * FROM t;\nFETCH ALL IN c;\n\nEnjoy,\n\n-Matthias\n\n\n", "msg_date": "Tue, 29 Mar 2022 12:50:09 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Op 29-03-2022 om 12:50 schreef Matthias van de Meent:\n> On Tue, 29 Mar 2022 at 11:10, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Tue, 29 Mar 2022 17:06:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> Then, I found that portal->holdSnapshot is that. I came up with the\n>>> attached. It does the follows:\n>>>\n>>> 1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.\n>>>\n>>> 2. Use holdSnapshot in RunFromStore if any.\n>>>\n>>>\n>>> The rerpducer is reduced to as small as the following.\n>>>\n>>> CREATE TABLE t (a text);\n>>> INSERT INTO t VALUES('some random text');\n>>> BEGIN;\n>>> DECLARE c CURSOR FOR SELECT * FROM t;\n>>> FETCH ALL IN c;\n>>>\n>>> But I haven't come up with a reasonable way to generate the 'some\n>>> random text' yet.\n>>\n>> I gave up and took a straightforward way to generate one.\n>>\n>> I don't like that it uses a fixed length for the random text, but\n>> anyway it works for now...\n> \n> An shorter (?) 
reproducer might be the following, which forces any\n> value for 'a' to be toasted and thus triggers the check in\n> init_toast_snapshot regardless of value length:\n> \n> CREATE TABLE t (a text);\n> ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;\n> INSERT INTO t VALUES ('toast');\n> BEGIN;\n> DECLARE c CURSOR FOR SELECT * FROM t;\n> FETCH ALL IN c;\n\nExcellent. That indeed immediately forces the error.\n\n(and the patch prevents it)\n\nThanks!\n\n\n> \n> Enjoy,\n> \n> -Matthias\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:29:15 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Tue, 29 Mar 2022 13:29:15 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> Op 29-03-2022 om 12:50 schreef Matthias van de Meent:\n> > A shorter (?) reproducer might be the following, which forces any\n> > value for 'a' to be toasted and thus triggers the check in\n> > init_toast_snapshot regardless of value length:\n> > CREATE TABLE t (a text);\n> > ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;\n> > INSERT INTO t VALUES ('toast');\n> > BEGIN;\n> > DECLARE c CURSOR FOR SELECT * FROM t;\n> > FETCH ALL IN c;\n\nYeah, unfortunately I tried that first and saw it didn't work. And it\nstill doesn't for me. With such a short text pg_detoast_datum_packed\ndoesn't call detoast_attr. Actually it is VARATT_IS_1B. (@master)\n\nI think I'm missing something here. 
I'm going to examine it further.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 30 Mar 2022 10:06:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Wed, 30 Mar 2022 10:06:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 29 Mar 2022 13:29:15 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> > Op 29-03-2022 om 12:50 schreef Matthias van de Meent:\n> > > A shorter (?) reproducer might be the following, which forces any\n> > > value for 'a' to be toasted and thus triggers the check in\n> > > init_toast_snapshot regardless of value length:\n> > > CREATE TABLE t (a text);\n> > > ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;\n> > > INSERT INTO t VALUES ('toast');\n> > > BEGIN;\n> > > DECLARE c CURSOR FOR SELECT * FROM t;\n> > > FETCH ALL IN c;\n> \n> Yeah, unfortunately I tried that first and saw it didn't work. And it\n> still doesn't for me. With such a short text pg_detoast_datum_packed\n> doesn't call detoast_attr. Actually it is VARATT_IS_1B. (@master)\n> \n> I think I'm missing something here. I'm going to examine it further.\n\nHmm. Strange. My memory tells me that I did the same thing before.. I\nthought that it is somewhat related to compression since repeat('x',\n4096) didn't seem to work at that time, but it worked this time.\nMaybe I was confused between extended and external..\n\nBut, in the first place the *fix* has been found to be wrong. 
I'm\ngoing to search for the right fix..\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:46:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Wed, 30 Mar 2022 11:46:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> But, in the first place the *fix* has been found to be wrong. I'm\n> going to search for the right fix..\n\nFETCH uses the snapshot at DECLARE. So anyhow I needed to set the\nqueryDesc's snapshot used in PortalRunSelect to the FETCH's portal's\nholdSnapshot. What I did in this version is:\n\n1. Add a new member \"snapshot\" to the type DestReceiver.\n\n2. In PortalRunSelect, set the DECLARE'd query's snapshot to the\n   member iff the dest is tuplestore and the active snapshot is not\n   set.\n\n3. In FillPortalStore, copy the snapshot to the portal's holdSnapshot.\n\n4. RunFromStore uses holdSnapshot if any.\n\nI'm still not confident in this, but it should be better than v1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 30 Mar 2022 17:58:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "At Wed, 30 Mar 2022 17:58:24 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 30 Mar 2022 11:46:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > But, in the first place the *fix* has been found to be wrong. I'm\n> > going to search for the right fix..\n\nFETCH uses the snapshot at DECLARE. 
So anyhow I needed to set the\n> queryDesc's snapshot used in PortalRunSelect to the FETCH's portal's\n> holdSnapshot. What I did in this version is:\n\nBy the way, this is, given that the added check in init_toast_snapshot\nis valid, a long-standing \"bug\", which dates back at least to 12.\n\nI'm not sure what to do about this.\n\n1. ignore the check for certain cases?\n\n2. apply any fix only to master and call it a day. 
"Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Wed, Apr 13, 2022 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-13 15:28:19 -0500, Justin Pryzby wrote:\n> > On Wed, Mar 30, 2022 at 05:58:24PM +0900, Kyotaro Horiguchi wrote:\n> > > I'm not still confident on this, but it should be better than the v1.\n> >\n> > +Andres as this seems to be related to 277692220.\n>\n> FWIW, that'd just mean it's an old bug that wasn't easily noticeable\n> before, not that it's the fault of 277692220.\n\nI think you're still on the hook to do something about it for this\nrelease. You could decide to revert the commit adding the assertion\nand punt on doing thing about the underlying bug, but we can't just be\nlike \"oh, an assertion is failing, we'll get to that someday\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 09:54:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 13, 2022 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n>> FWIW, that'd just mean it's an old bug that wasn't easily noticeable\n>> before, not that it's the fault of 277692220.\n\n> I think you're still on the hook to do something about it for this\n> release.\n\nI think you're trying to shoot the messenger. As Andres says,\n277692220 just exposes that there is some pre-existing bug here.\nIt's probably related to 84f5c2908, so I was planning to take a\nlook at it at some point, but there are a few other higher-priority\nbugs in the way.\n\nI see no point in reverting 277692220. 
Removing the Assert would\nprevent, or at least complicate, detection of other similar bugs.\nAnd it'd do nothing to help end users, who won't have assertions\nenabled anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 10:42:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Apr 13, 2022 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> >> FWIW, that'd just mean it's an old bug that wasn't easily noticeable\n> >> before, not that it's the fault of 277692220.\n>\n> > I think you're still on the hook to do something about it for this\n> > release.\n>\n> I think you're trying to shoot the messenger. As Andres says,\n> 277692220 just exposes that there is some pre-existing bug here.\n> It's probably related to 84f5c2908, so I was planning to take a\n> look at it at some point, but there are a few other higher-priority\n> bugs in the way.\n\nWell, if you're willing to look at it that's fine, but I just don't\nagree that it's OK to just commit things that add failing assertions\nand drive off into the sunset. The code is always going to have a\ncertain number of unfixed bugs, and that's fine, and finding them is\ngood in itself, but people want to be able to run the software in the\nmeantime, and some of those people are developers or other individuals\nwho want to run it with assertions enabled. 
It's a judgement call\nwhether this assertion failure is going to bite enough people to be a\nproblem, but if it were something that happened easily enough to\nprevent you from working on the source code, I'm sure you wouldn't be\nOK with leaving it in there until someone got around to looking at it.\nGiven that it took about a month and a half for someone to report them\nproblem, it's not as bad as all that, but I guess I won't be surprised\nif we keep getting complaints until something gets done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 11:33:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-14 11:33:45 -0400, Robert Haas wrote:\n> On Thu, Apr 14, 2022 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Wed, Apr 13, 2022 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > >> FWIW, that'd just mean it's an old bug that wasn't easily noticeable\n> > >> before, not that it's the fault of 277692220.\n> >\n> > > I think you're still on the hook to do something about it for this\n> > > release.\n> >\n> > I think you're trying to shoot the messenger. As Andres says,\n> > 277692220 just exposes that there is some pre-existing bug here.\n> > It's probably related to 84f5c2908, so I was planning to take a\n> > look at it at some point, but there are a few other higher-priority\n> > bugs in the way.\n> \n> Well, if you're willing to look at it that's fine, but I just don't\n> agree that it's OK to just commit things that add failing assertions\n> and drive off into the sunset.\n\nI'm not planning to ride into the sunset / ignore this issue. All I said\nis that it's imo not the right thing to say that that commit broke\nthings in 15. 
And that not a semantics game, because it means that the\nfix needs to go back further than 277692220.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Apr 2022 08:41:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 14, 2022 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think you're trying to shoot the messenger. As Andres says,\n>> 277692220 just exposes that there is some pre-existing bug here.\n>> It's probably related to 84f5c2908, so I was planning to take a\n>> look at it at some point, but there are a few other higher-priority\n>> bugs in the way.\n\n> Well, if you're willing to look at it that's fine, but I just don't\n> agree that it's OK to just commit things that add failing assertions\n> and drive off into the sunset.\n\nI don't think Andres had any intention of ignoring it indefinitely.\nWhat he is doing is prioritizing it lower than several of the other\nopen items, an opinion I fully agree with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 11:46:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 11:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think Andres had any intention of ignoring it indefinitely.\n> What he is doing is prioritizing it lower than several of the other\n> open items, an opinion I fully agree with.\n\nWell, my point is that, historically, relegating things to the older\nbugs section often means nothing happens, which I think would not be\ngreat in this case. 
However, I'll try to shut up about the procedural\nissues for the moment, since I seem to be in the minority.\n\nI got curious and looked at the underlying problem here and I am\nwondering whether HaveRegisteredOrActiveSnapshot() is just buggy. It\nseems to me that the code is always going to return true if there are\nany active snapshots, and the rest of the function is intended to test\nwhether there is a registered snapshot other than the catalog\nsnapshot. But I don't think that's what this code does:\n\n if (pairingheap_is_empty(&RegisteredSnapshots) ||\n !pairingheap_is_singular(&RegisteredSnapshots))\n return false;\n\n return CatalogSnapshot == NULL;\n\nSo, if there are 0 active snapshots we return false. And if there is 1\nactive snapshot then we return true if there's no catalog snapshot and\notherwise false. So far so good. But if there are 2 or more registered\nsnapshots then the pairing heap will be neither empty nor singular so\nit seems to me we will return false, which seems to me to be the wrong\nanswer. I tried this:\n\ndiff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\nindex a0be0c411a..4e8e26a362 100644\n--- a/src/backend/utils/time/snapmgr.c\n+++ b/src/backend/utils/time/snapmgr.c\n@@ -1646,9 +1646,10 @@ HaveRegisteredOrActiveSnapshot(void)\n * removed at any time due to invalidation processing. If explicitly\n * registered more than one snapshot has to be in RegisteredSnapshots.\n */\n- if (pairingheap_is_empty(&RegisteredSnapshots) ||\n- !pairingheap_is_singular(&RegisteredSnapshots))\n+ if (pairingheap_is_empty(&RegisteredSnapshots))\n return false;\n+ if (!pairingheap_is_singular(&RegisteredSnapshots))\n+ return true;\n\n return CatalogSnapshot == NULL;\n }\n\nI find that 'make check-world' passes with this change, which is\ndisturbing, because it also passes without this change. 
That means we\ndon't have any tests that reach HaveRegisteredOrActiveSnapshot() with\nmore than one registered snapshot.\n\nAlso, unless we have plans to use HaveRegisteredOrActiveSnapshot() in\nmore places, I think we should consider revising the whole approach\nhere. The way init_toast_snapshot() is coded, we basically have some\ncode that tests for active and registered snapshots and finds the\noldest one. We error out if there isn't one. And then we get to this\nassertion, which checks the same stuff a second time but with an\nadditional check to see whether we ended up with the catalog snapshot.\nWouldn't it make more sense if GetOldestSnapshot() just refused to\nreturn the catalog snapshot in the first place?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 12:16:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-14 12:16:45 -0400, Robert Haas wrote:\n> I got curious and looked at the underlying problem here and I am\n> wondering whether HaveRegisteredOrActiveSnapshot() is just buggy. It\n> seems to me that the code is always going to return true if there are\n> any active snapshots, and the rest of the function is intended to test\n> whether there is a registered snapshot other than the catalog\n> snapshot. But I don't think that's what this code does:\n> \n> if (pairingheap_is_empty(&RegisteredSnapshots) ||\n> !pairingheap_is_singular(&RegisteredSnapshots))\n> return false;\n> \n> return CatalogSnapshot == NULL;\n\nCertainly looks off...\n\n\n> I find that 'make check-world' passes with this change, which is\n> disturbing, because it also passes without this change. 
That means we\n> don't have any tests that reach HaveRegisteredOrActiveSnapshot() with\n> more than one registered snapshot.\n\nPart of that is because right now the assertion is placed \"too deep\" -\nit should be much higher up, so it's reached even if there's not\nactually a toast datum. But there are other bugs preventing that :(. A\nlot of bugs have been hidden by the existence of CatalogSnapshot (which\nof course isn't something one actually can rely on).\n\n\n> Also, unless we have plans to use HaveRegisteredOrActiveSnapshot() in\n> more places,\n\nI think we should, but there are other bugs that need to be fixed\nfirst :(. Namely that we have plenty of places doing catalog accesses\nwithout an active or registered snapshot :(.\n\n\n> I think we should consider revising the whole approach here. The way\n> init_toast_snapshot() is coded, we basically have some code that tests\n> for active and registered snapshots and finds the oldest one. We error\n> out if there isn't one. And then we get to this assertion, which\n> checks the same stuff a second time but with an additional check to\n> see whether we ended up with the catalog snapshot. Wouldn't it make\n> more sense if GetOldestSnapshot() just refused to return the catalog\n> snapshot in the first place?\n\nI'm worried that that could cause additional bugs. Consider code using\nGetOldestSnapshot() to check if tuples need to be preserved or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Apr 2022 09:54:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-14 09:54:25 -0700, Andres Freund wrote:\n> On 2022-04-14 12:16:45 -0400, Robert Haas wrote:\n> > I got curious and looked at the underlying problem here and I am\n> > wondering whether HaveRegisteredOrActiveSnapshot() is just buggy. 
It\n> > seems to me that the code is always going to return true if there are\n> > any active snapshots, and the rest of the function is intended to test\n> > whether there is a registered snapshot other than the catalog\n> > snapshot. But I don't think that's what this code does:\n> > \n> > if (pairingheap_is_empty(&RegisteredSnapshots) ||\n> > !pairingheap_is_singular(&RegisteredSnapshots))\n> > return false;\n> > \n> > return CatalogSnapshot == NULL;\n> \n> Certainly looks off...\n> \n> \n> > I find that 'make check-world' passes with this change, which is\n> > disturbing, because it also passes without this change. That means we\n> > don't have any tests that reach HaveRegisteredOrActiveSnapshot() with\n> > more than one registered snapshot.\n> \n> Part of that is because right now the assertion is placed \"too deep\" -\n> it should be much higher up, so it's reached even if there's not\n> actually a toast datum. But there's of other bugs preventing that :(. A\n> lot of bugs have been hidden by the existence of CatalogSnapshot (which\n> of course isn't something one actually can rely on).\n> \n> \n> > Also, unless we have plans to use HaveRegisteredOrActiveSnapshot() in\n> > more places,\n> \n> I think we should, but there's the other bugs that need to be fixed\n> first :(. Namely that we have plenty places doing catalog accesses\n> without an active or registered snapshot :(.\n\nAh, we actually were debating some of these issues more recently, in:\nhttps://www.postgresql.org/message-id/20220311030721.olixpzcquqkw2qyt%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/20220311021047.hgtqkrl6n52srvdu%40alap3.anarazel.de\n\nIt looks like the same bug, and that the patch in this thread fixes\nthem. 
And that we need to backpatch the fix, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Apr 2022 09:58:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> Part of that is because right now the assertion is placed \"too deep\" -\n> it should be much higher up, so it's reached even if there's not\n> actually a toast datum. But there's of other bugs preventing that :(. A\n> lot of bugs have been hidden by the existence of CatalogSnapshot (which\n> of course isn't something one actually can rely on).\n\nI am wondering whether ffaa44cb559db332baeee7d25dedd74a61974203 bet on\nthe wrong horse. When I invented CatalogSnapshot, I intended for it to\nbe an ephemeral snapshot that we'd keep for as long as we could safely\ndo so and then discard it as soon as there's any hint of a problem.\nBut that commit really takes the opposite approach, trying to keep\nCatalogSnapshot valid for a longer period of time instead of just\nchucking it.\n\n> > Also, unless we have plans to use HaveRegisteredOrActiveSnapshot() in\n> > more places,\n>\n> I think we should, but there's the other bugs that need to be fixed\n> first :(. Namely that we have plenty places doing catalog accesses\n> without an active or registered snapshot :(.\n\nOuch.\n\n> I'm worried that that could cause additional bugs. Consider code using\n> GetOldestSnapshot() to check if tuples need to be preserved or such.\n\nBut there is no such code: GetOldestSnapshot() has only one caller.\nAnd I don't think we'd add more just to do something like that,\nbecause we have other mechanisms that are specifically designed for\ntesting whether tuples are prunable that are better suited to such\ntasks. 
It should really only be used when you don't know which of the\nbackend's current snapshots ought to be used for something, but are\nsure that using the oldest one will be good enough. And in that kind\nof situation, it's hard to see why using the catalog snapshot would\never be the right idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 13:21:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I am wondering whether ffaa44cb559db332baeee7d25dedd74a61974203 bet on\n> the wrong horse. When I invented CatalogSnapshot, I intended for it to\n> be an ephemeral snapshot that we'd keep for as long as we could safely\n> do so and then discard it as soon as there's any hint of a problem.\n> But that commit really takes the opposite approach, trying to keep\n> CatalogSnapshot valid for a longer period of time instead of just\n> chucking it.\n\nNot really following? ISTM the point of ffaa44cb5 was that if we don't\ninclude the CatalogSnapshot in our advertised xmin, then the length\nof time we can safely use it is *zero*.\n\n> But there is no such code: GetOldestSnapshot() has only one caller.\n> And I don't think we'd add more just to do something like that,\n> because we have other mechanisms that are specifically designed for\n> testing whether tuples are prunable that are better suited to such\n> tasks. It should really only be used when you don't know which of the\n> backend's current snapshots ought to be used for something, but are\n> sure that using the oldest one will be good enough. 
And in that kind\n> of situation, it's hard to see why using the catalog snapshot would\n> ever be the right idea.\n\nWhat if the reason we need a snapshot is to detoast some toasted item\nwe read from a catalog with the CatalogSnapshot? There might not be\nany other valid snapshot, so I don't think I buy your argument here.\n\n... Unless your argument is that the session should always have an\nolder non-catalog snapshot, which I'm not sure whether I buy or not.\nBut if we do believe that, then adding mechanisms to force it to be so\ncould be an alternative solution. But why is that better than what\nwe're doing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 13:36:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-14 13:36:51 -0400, Tom Lane wrote:\n> What if the reason we need a snapshot is to detoast some toasted item\n> we read from a catalog with the CatalogSnapshot? There might not be\n> any other valid snapshot, so I don't think I buy your argument here.\n\nWe definitely do have places doing that, but is it ever actually safe?\nPart of the catalog access might cause cache invalidations to be\nprocessed, which can invalidate the snapshot (including resetting\nMyProc->xmin). 
AFAICS we would always have to push or register the\nsnapshot, and either will copy the snapshot, I think?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Apr 2022 11:03:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I am wondering whether ffaa44cb559db332baeee7d25dedd74a61974203 bet on\n> > the wrong horse. When I invented CatalogSnapshot, I intended for it to\n> > be an ephemeral snapshot that we'd keep for as long as we could safely\n> > do so and then discard it as soon as there's any hint of a problem.\n> > But that commit really takes the opposite approach, trying to keep\n> > CatalogSnapshot valid for a longer period of time instead of just\n> > chucking it.\n>\n> Not really following? ISTM the point of ffaa44cb5 was that if we don't\n> include the CatalogSnapshot in our advertised xmin, then the length\n> of time we can safely use it is *zero*.\n\nNo, I don't think so. I'm proposing that you shouldn't be taking a\ncatalog snapshot unless you already hold some other snapshot, and that\nthe catalog snapshot should always be newer than whatever that other\nsnapshot is. If we made that be true, then your catalog snapshot\ncouldn't ever be the thing holding back xmin -- and then I think it\ndoesn't need to be registered.\n\n> > But there is no such code: GetOldestSnapshot() has only one caller.\n> > And I don't think we'd add more just to do something like that,\n> > because we have other mechanisms that are specifically designed for\n> > testing whether tuples are prunable that are better suited to such\n> > tasks. 
It should really only be used when you don't know which of the\n> > backend's current snapshots ought to be used for something, but are\n> > sure that using the oldest one will be good enough. And in that kind\n> > of situation, it's hard to see why using the catalog snapshot would\n> > ever be the right idea.\n>\n> What if the reason we need a snapshot is to detoast some toasted item\n> we read from a catalog with the CatalogSnapshot? There might not be\n> any other valid snapshot, so I don't think I buy your argument here.\n>\n> ... Unless your argument is that the session should always have an\n> older non-catalog snapshot, which I'm not sure whether I buy or not.\n> But if we do believe that, then adding mechanisms to force it to be so\n> could be an alternative solution. But why is that better than what\n> we're doing?\n\nThat is exactly my argument, but I'm not actually sure whether it is\nin fact better. I was responding to Andres's statement that\nCatalogSnapshot was hiding a lot of bugs because it makes it look like\nwe have a snapshot when we don't really. And my point is that\nffaa44cb559db332baeee7d25dedd74a61974203 made such problems a lot more\nlikely, because before that the snapshot was not in\nRegisteredSnapshots, and therefore anybody asking \"hey, do we have a\nsnapshot?\" wasn't likely to get confused, because they would only see\nthe catalog snapshot if they specifically went and looked at\nCatalogSnapshot(Set), and if you do that, hopefully you'll think about\nthe special semantics of that snapshot and write code that works with\nthem. But with CatalogSnapshot in RegisteredSnapshots, any code that\nlooks through RegisteredSnapshots has to worry about whether what it's\nfinding there is actually just the CatalogSnapshot, and if Andres's\nstatement that we have a lot of bugs here is to be believed, then we\nhave not done a good job finding and updating all such code. 
We can\ncontinue down the path of finding and fixing it -- or we can back out\nparts of ffaa44cb559db332baeee7d25dedd74a61974203.\n\nJust to be clear, I'm not debating that that commit fixed some real\nproblems and I think parts of it are really necessary fixes. But to\nquote from the commit message:\n\n The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting\n for whether MyPgXact->xmin could be cleared or advanced. In normal\n transactions this was masked by the fact that the transaction snapshot\n would be older, but during backend startup and certain utility commands\n it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had\n been cleared ...\n\nAnd what I'm suggesting is that *perhaps* we ought to have fixed those\n\"during backend startup and certain utility commands\" by having\nSnapshotResetXmin() do InvalidateCatalogSnapshot(), and maybe also\nmade the code in those commands that's doing catalog lookups\nacquire and hold a snapshot of its own around the operation. The\nalternative you chose, namely, incorporating the xmin into the\nbackend's xmin computation, I think can also be made to work. I am\njust not sure that it's the best approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 14:19:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-14 12:16:45 -0400, Robert Haas wrote:\n>> I got curious and looked at the underlying problem here and I am\n>> wondering whether HaveRegisteredOrActiveSnapshot() is just buggy. 
It\n>> seems to me that the code is always going to return true if there are\n>> any active snapshots, and the rest of the function is intended to test\n>> whether there is a registered snapshot other than the catalog\n>> snapshot. But I don't think that's what this code does:\n>> \n>> if (pairingheap_is_empty(&RegisteredSnapshots) ||\n>> !pairingheap_is_singular(&RegisteredSnapshots))\n>> return false;\n>> \n>> return CatalogSnapshot == NULL;\n\n> Certainly looks off...\n\nYeah, this is broken. Whilst waiting around for a build on wrasse's\nhost, I reproduced the problem shown in this thread, and here's what\nI see at the point of the exception:\n\n(gdb) p RegisteredSnapshots\n$5 = {ph_compare = 0x9a6000 <xmin_cmp>, ph_arg = 0x0, \n ph_root = 0xec3168 <CatalogSnapshotData+72>}\n(gdb) p *RegisteredSnapshots.ph_root\n$6 = {first_child = 0x2d85d70, next_sibling = 0x0, prev_or_parent = 0x0}\n(gdb) p CatalogSnapshotData\n$7 = {snapshot_type = SNAPSHOT_MVCC, xmin = 52155, xmax = 52155, \n xip = 0x2d855b0, xcnt = 0, subxip = 0x2de9130, subxcnt = 0, \n suboverflowed = false, takenDuringRecovery = false, copied = false, \n curcid = 0, speculativeToken = 0, vistest = 0x0, active_count = 0, \n regd_count = 0, ph_node = {first_child = 0x2d85d70, next_sibling = 0x0, \n prev_or_parent = 0x0}, whenTaken = 0, lsn = 0, snapXactCompletionCount = 1}\n(gdb) p CatalogSnapshot \n$8 = (Snapshot) 0xec3120 <CatalogSnapshotData>\n(gdb) p *(Snapshot) (0x2d85d70-72)\n$9 = {snapshot_type = SNAPSHOT_MVCC, xmin = 52155, xmax = 52155, xip = 0x0, \n xcnt = 0, subxip = 0x0, subxcnt = 0, suboverflowed = false, \n takenDuringRecovery = false, copied = true, curcid = 0, \n speculativeToken = 0, vistest = 0x0, active_count = 0, regd_count = 2, \n ph_node = {first_child = 0x0, next_sibling = 0x0, \n prev_or_parent = 0xec3168 <CatalogSnapshotData+72>}, whenTaken = 0, \n lsn = 0, snapXactCompletionCount = 0}\n(gdb) p ActiveSnapshot\n$10 = (ActiveSnapshotElt *) 0x0\n\nSo in fact there IS another 
registered snapshot, and\nHaveRegisteredOrActiveSnapshot is just lying. I think the\ncorrect test is more nearly what we have in\nInvalidateCatalogSnapshotConditionally:\n\n if (CatalogSnapshot &&\n ActiveSnapshot == NULL &&\n pairingheap_is_singular(&RegisteredSnapshots))\n // then the CatalogSnapshot is the only one.\n\nErgo, this actually is a bug in 277692220.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 14:47:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 14, 2022 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Not really following? ISTM the point of ffaa44cb5 was that if we don't\n>> include the CatalogSnapshot in our advertised xmin, then the length\n>> of time we can safely use it is *zero*.\n\n> No, I don't think so. I'm proposing that you shouldn't be taking a\n> catalog snapshot unless you already hold some other snapshot, and that\n> the catalog snapshot should always be newer than whatever that other\n> snapshot is. If we made that be true, then your catalog snapshot\n> couldn't ever be the thing holding back xmin -- and then I think it\n> doesn't need to be registered.\n\nIf you don't register it, then you need to also make sure that it's\ndestroyed whenever that older snapshot is. Which I think would require\neither a lot of useless/inefficient CatalogSnapshot destructions, or\ninfrastructure that's more or less isomorphic to the RegisteredSnapshot\nheap.\n\n> That is exactly my argument, but I'm not actually sure whether it is\n> in fact better. 
I was responding to Andres's statement that\n> CatalogSnapshot was hiding a lot of bugs because it makes it look like\n> we have a snapshot when we don't really.\n\nWell, we DO have a snapshot, and it is 100% perfectly safe to use, if it's\nregistered. Andres' complaint is that that snapshot might get invalidated\nwhen you weren't expecting it, but I'm not really convinced that we have\nall that many bugs of that ilk. Wouldn't CLOBBER_CACHE_ALWAYS testing\nfind them?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 15:05:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If you don't register it, then you need to also make sure that it's\n> destroyed whenever that older snapshot is. Which I think would require\n> either a lot of useless/inefficient CatalogSnapshot destructions, or\n> infrastructure that's more or less isomorphic to the RegisteredSnapshot\n> heap.\n\nWell, if that's true, then I agree that it's a good argument against\nthat approach. But I guess I'm confused as to why we'd end up in that\nsituation. Suppose we do these two things:\n\n1. Decree that SnapshotResetXmin calls InvalidateCatalogSnapshot. It's\nthe other way around right now, but that's only because we're\nregistering the catalog snapshot.\n2. Bomb out in GetCatalogSnapshot if you don't have an active or\nregistered snapshot already.\n\nIs there some reason we'd need any more infrastructure than that?\n\n> Well, we DO have a snapshot, and it is 100% perfectly safe to use, if it's\n> registered. Andres' complaint is that that snapshot might get invalidated\n> when you weren't expecting it, but I'm not really convinced that we have\n> all that many bugs of that ilk. 
Wouldn't CLOBBER_CACHE_ALWAYS testing\n> find them?\n\nHmm, that's a good question. I don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 15:55:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 14, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If you don't register it, then you need to also make sure that it's\n>> destroyed whenever that older snapshot is.\n\n> Well, if that's true, then I agree that it's a good argument against\n> that approach. But I guess I'm confused as to why we'd end up in that\n> situation. Suppose we do these two things:\n\n> 1. Decree that SnapshotResetXmin calls InvalidateCatalogSnapshot. It's\n> the other way around right now, but that's only because we're\n> registering the catalog snapshot.\n> 2. Bomb out in GetCatalogSnapshot if you don't have an active or\n> registered snapshot already.\n\n> Is there some reason we'd need any more infrastructure than that?\n\nYes.\n\n1. Create snapshot 1 (beginning of transaction).\n2. Create catalog snapshot (okay because of snapshot 1).\n3. Create snapshot 2.\n4. Destroy snapshot 1.\n5. 
Catalog snapshot is still there and is now the oldest.\n\nThe implementation you propose would also have to forbid this sequence\nof events, which is (a) difficult and (b) would add instability to the\nsystem, since there's really no reason that this should be Not OK.\n\nI'm basically not on board with adding complication to make the system\nless performant and more brittle, and I don't see how the direction\nyou want to go isn't that.\n\n(BTW, this thought experiment also puts a hole in the test added by\n277692220: even if HaveRegisteredOrActiveSnapshot were doing what\nit claims to do, it would allow use of the catalog snapshot for\ndetoasting after step 4, which I suppose is not what Andres intended.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Apr 2022 16:26:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-14 15:05:50 -0400, Tom Lane wrote:\n> Andres' complaint is that that snapshot might get invalidated when you\n> weren't expecting it, but I'm not really convinced that we have all\n> that many bugs of that ilk. Wouldn't CLOBBER_CACHE_ALWAYS testing\n> find them?\n\nDon't see why it would - we don't have any mechanism in place for\nenforcing that we don't update / delete a tuple we've looked up with an\nxmin that wasn't continually enforced. A typical pattern is to use a\ncatalog cache (registered and all) for a syscache lookup, but then not\nhave a registered / active snapshot until an eventual update / delete\n(after the syscache scan ends). 
Which isn't safe, because without a\nMyProc->xmin set, the tuple we're updating / deleting could be updated,\nremoved and replaced with another tuple.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 14 Apr 2022 18:30:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Thu, Apr 14, 2022 at 4:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Well, if that's true, then I agree that it's a good argument against\n> > that approach. But I guess I'm confused as to why we'd end up in that\n> > situation. Suppose we do these two things:\n>\n> > 1. Decree that SnapshotResetXmin calls InvalidateCatalogSnapshot. It's\n> > the other way around right now, but that's only because we're\n> > registering the catalog snapshot.\n> > 2. Bomb out in GetCatalogSnapshot if you don't have an active or\n> > registered snapshot already.\n>\n> > Is there some reason we'd need any more infrastructure than that?\n>\n> Yes.\n>\n> 1. Create snapshot 1 (beginning of transaction).\n> 2. Create catalog snapshot (okay because of snapshot 1).\n> 3. Create snapshot 2.\n> 4. Destroy snapshot 1.\n> 5. Catalog snapshot is still there and is now the oldest.\n\nSorry, I'm not seeing the problem. If we call SnapshotResetXmin()\nafter step 4, then the catalog snapshot would get invalidated under my\nproposal. If we don't, then our advertised xmin has not changed and\nnothing can be pruned out from under us.\n\n> I'm basically not on board with adding complication to make the system\n> less performant and more brittle, and I don't see how the direction\n> you want to go isn't that.\n\nWell ... I agree that brittle is bad, but it's not clear to me which\nway is actually less brittle. As for performant, I think you might be\nmisjudging the situation. 
My original design for removing SnapshotNow\ndidn't even have the catalog snapshot - it just took a new snapshot\nevery time. That was mostly fine, but Andres found a somewhat extreme\ntest case where it exhibited a significant regression, so I added the\ncatalog snapshot stuff to work around that. So I'm not AT ALL\nconvinced that giving catalog snapshots longer lifetimes is a relevant\nthing to do. There's some value in it if you can construct a test case\nwhere the overall rate of snapshot taking is extremely high, but in\nnormal cases that isn't so. It's certainly not worth complicating the\ncode for backend startup or DDL commands to reduce the number of\nsnapshots, because you're never going to have those things happening\nat a high enough rate to matter, or so I believe.\n\nThe way you get a benefit from CatalogSnapshot is to construct a\nworkload with a lot of really cheap SQL statements each of which has\nto do a bunch of catalog lookups, and then run that at high\nconcurrency. The concurrency makes taking snapshots more expensive,\nbecause the cache lines are contended, and having the same command use\nthe same snapshot over and over again instead of taking new ones then\nbrings the cost down enough to be measurable. But people run SQL\nstatements in a tight loop, not DDL or backend startup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Apr 2022 08:47:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well ... I agree that brittle is bad, but it's not clear to me which\n> way is actually less brittle. As for performant, I think you might be\n> misjudging the situation. 
My original design for removing SnapshotNow\n> didn't even have the catalog snapshot - it just took a new snapshot\n> every time. That was mostly fine, but Andres found a somewhat extreme\n> test case where it exhibited a significant regression, so I added the\n> catalog snapshot stuff to work around that. So I'm not AT ALL\n> convinced that giving catalog snapshots longer lifetimes is a relevant\n> thing to do.\n\nPerhaps not. But right now we have to first think about correctness and\nhow much trouble it will be to get to correctness. ISTM you are arguing\nfor a design in which it's required that there is always a registered\nor active snapshot that's older than the catalog snapshot (if any).\nI tried revising snapmgr.c to enforce that, starting with adding \n\n@@ -421,6 +421,13 @@ GetNonHistoricCatalogSnapshot(Oid relid)\n \n if (CatalogSnapshot == NULL)\n {\n+ /*\n+ * The catalog snapshot must always be newer than some active or\n+ * registered snapshot. (XXX explain why)\n+ */\n+ Assert(ActiveSnapshot != NULL ||\n+ !pairingheap_is_empty(&RegisteredSnapshots));\n+\n /* Get new snapshot. */\n CatalogSnapshot = GetSnapshotData(&CatalogSnapshotData);\n \nand this blew up with truly impressive thoroughness. The autovac\nlauncher, logical replication launcher, and incoming backends all\nfail this assertion instantly, making it impossible to find out\nwhat else might be broken --- but I'm sure there is a lot.\n\n(If you want to try this for yourself, remember that the postmaster\nwill relaunch the AV and LR launchers on failure, meaning that your\ndisk will fill with core files very very quickly. Just sayin'.)\n\nSo maybe we can go that direction, but it's going to require a lot of\ncode additions to push extra snapshots in places that haven't bothered\nto date; and I'm not convinced that that'd be buying us anything\nexcept more GetSnapshotData() calls.\n\nPlan B is to grant catalog snapshots some more-durable status than\nwhat Plan A envisions. 
I'm not very sure about the details, but if\nyou don't want to go that route then you'd better set about making\nthe above assertion work.\n\nIn the meantime, since it's clear that HaveRegisteredOrActiveSnapshot\nis failing to meet its contract, I'm going to go fix it. I think\n(based on the above argument) that what it intends to enforce is not\nreally the system design we need, but it certainly isn't helping\nanyone that it enforces that design incorrectly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Apr 2022 14:42:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-16 14:42:39 -0400, Tom Lane wrote:\n> In the meantime, since it's clear that HaveRegisteredOrActiveSnapshot\n> is failing to meet its contract, I'm going to go fix it.\n\n+1\n\n\n> I think (based on the above argument) that what it intends to enforce\n> is not really the system design we need, but it certainly isn't\n> helping anyone that it enforces that design incorrectly.\n\nI think it's approximately right for the current caller. But that caller\nlikely needs an improved design around snapshots...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 17 Apr 2022 06:31:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-16 14:42:39 -0400, Tom Lane wrote:\n>> I think (based on the above argument) that what it intends to enforce\n>> is not really the system design we need, but it certainly isn't\n>> helping anyone that it enforces that design incorrectly.\n\n> I think it's approximately right for the current caller. 
But that caller\n> likely needs an improved design around snapshots...\n\nYeah, I think the real issue is that checking\nHaveRegisteredOrActiveSnapshot in this way doesn't provide a very\ngood guarantee of what we really want to know, which is that the\nsession's advertised xmin is old enough to prevent removal of\nwhatever toast data we're trying to fetch. The fact that we\nhave a snapshot at the instant of fetch doesn't prove that it\nexisted continually since we fetched the toast reference, which\nseems to be the condition we actually need to assure. (And TBH\nI see little reason to think that whether the snapshot is the\nCatalogSnapshot or not changes things in any meaningful way.)\n\nI don't yet see a practical way to check for the real concern.\nWhile it's something to worry about, there's no reason to think\nthat v15 is any worse than prior versions in this area, is there?\nSo I'm inclined to remove this from the list of v15 open items,\nor at least demote the remaining concern to \"older bug\" status.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Apr 2022 11:51:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Sat, Apr 16, 2022 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> and this blew up with truly impressive thoroughness. The autovac\n> launcher, logical replication launcher, and incoming backends all\n> fail this assertion instantly, making it impossible to find out\n> what else might be broken --- but I'm sure there is a lot.\n\nOK, thanks for trying that.\n\n> In the meantime, since it's clear that HaveRegisteredOrActiveSnapshot\n> is failing to meet its contract, I'm going to go fix it. 
I think\n> (based on the above argument) that what it intends to enforce is not\n> really the system design we need, but it certainly isn't helping\n> anyone that it enforces that design incorrectly.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 09:22:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Hi,\n\nOn 2022-04-17 11:51:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-04-16 14:42:39 -0400, Tom Lane wrote:\n> >> I think (based on the above argument) that what it intends to enforce\n> >> is not really the system design we need, but it certainly isn't\n> >> helping anyone that it enforces that design incorrectly.\n> \n> > I think it's approximately right for the current caller. But that caller\n> > likely needs an improved design around snapshots...\n> \n> Yeah, I think the real issue is that checking\n> HaveRegisteredOrActiveSnapshot in this way doesn't provide a very\n> good guarantee of what we really want to know, which is that the\n> session's advertised xmin is old enough to prevent removal of\n> whatever toast data we're trying to fetch.\n\nRight. It's better than what was there before though - I added\nHaveRegisteredOrActiveSnapshot() in the course of\n7c38ef2a5d6cf6d8dc3834399d7a1c364d64ce64. Where the problem was that we\ndidn't have *any* snapshot other than the catalog snapshot, and the\ncatalog snapshot only sometimes (iirc for that bug it depended on the\norder in which objects were deleted). 
That makes such bugs much harder\nto detect.\n\n\n> The fact that we have a snapshot at the instant of fetch doesn't prove\n> that it existed continually since we fetched the toast reference,\n> which seems to be the condition we actually need to assure.\n\nRight.\n\n\n> (And TBH I see little reason to think that whether the snapshot is the\n> CatalogSnapshot or not changes things in any meaningful way.)\n\nIt is a meaningful difference, see e.g. the bug referenced above.\n\n\n> I don't yet see a practical way to check for the real concern.\n> While it's something to worry about, there's no reason to think\n> that v15 is any worse than prior versions in this area, is there?\n> So I'm inclined to remove this from the list of v15 open items,\n> or at least demote the remaining concern to \"older bug\" status.\n\nYes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Apr 2022 07:39:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Mon, Apr 18, 2022 at 10:39 AM Andres Freund <andres@anarazel.de> wrote:\n> Right. It's better than what was there before though - I added\n> HaveRegisteredOrActiveSnapshot() in the course of\n> 7c38ef2a5d6cf6d8dc3834399d7a1c364d64ce64. Where the problem was that we\n> didn't have *any* snapshot other than the catalog snapshot, and the\n> catalog snapshot only sometimes (iirc for that bug it depended on the\n> order in which objects were deleted). That makes such bugs much harder\n> to detect.\n\nI still think it would be better to have GetOldestSnapshot() be\nsmarter and refuse to return the catalog snapshot. 
For one thing, that\nway we'd be testing for the problem case in non-assert builds also.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 10:44:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-17 11:51:58 -0400, Tom Lane wrote:\n>> The fact that we have a snapshot at the instant of fetch doesn't prove\n>> that it existed continually since we fetched the toast reference,\n>> which seems to be the condition we actually need to assure.\n\n> Right.\n\n>> (And TBH I see little reason to think that whether the snapshot is the\n>> CatalogSnapshot or not changes things in any meaningful way.)\n\n> It is a meaningful difference, see e.g. the bug referenced above.\n\nWell, that's true given the current arrangements for managing\nCatalogSnapshot; but that doesn't make the CatalogSnapshot any\nless of a protection when it exists. The direction I was vaguely\nimagining is that we create some refcount-like infrastructure\ndirectly ensuring that once a snapshot is used to read a toast\nreference, it gets kept around until we dereference or discard\nthat reference. 
With a scheme like that, there'd be no reason to\ndiscriminate against a CatalogSnapshot as being the protective\nsnapshot.\n\n(I hasten to add that I have no idea how to make this half-baked\nplan work, and there may be better solutions anyway.)\n\n>> While it's something to worry about, there's no reason to think\n>> that v15 is any worse than prior versions in this area, is there?\n>> So I'm inclined to remove this from the list of v15 open items,\n>> or at least demote the remaining concern to \"older bug\" status.\n\n> Yes.\n\nOK, I'll update the open-items page.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Apr 2022 10:51:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I still think it would be better to have GetOldestSnapshot() be\n> smarter and refuse to return the catalog snapshot. For one thing, that\n> way we'd be testing for the problem case in non-assert builds also.\n\nI was wondering about that too. On the other hand, given that\nwe know this area is squishy, transforming fails-in-assert-builds\nto fails-everywhere is not necessarily desirable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Apr 2022 10:53:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Mon, Apr 18, 2022 at 10:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I still think it would be better to have GetOldestSnapshot() be\n> > smarter and refuse to return the catalog snapshot. For one thing, that\n> > way we'd be testing for the problem case in non-assert builds also.\n>\n> I was wondering about that too. 
On the other hand, given that\n> we know this area is squishy, transforming fails-in-assert-builds\n> to fails-everywhere is not necessarily desirable.\n\nI agree that it's a little unclear. In general, I think if we're going\nto blow up and die, doing it closer to the place where the problem is\nhappening is for the best. On the other hand, if in most practical\ncases we're going to stumble through and get the right answer anyway,\nthen it's maybe not great to break a bunch of accidentally-working\ncases. However, it does strike me that this principle could easily be\noverdone. init_toast_snapshot() could pick a random snapshot (or take\na new one) in order to call InitToastSnapshot() and that would often\nwork fine. Yet, upon realizing that things are busted, it chooses to\nerror out instead. 
I approve of that choice, and don't think we should\nrule out the idea of making that check more robust.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 11:14:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I agree that it's a little unclear. In general, I think if we're going\n> to blow up and die, doing it closer to the place where the problem is\n> happening is for the best. On the other hand, if in most practical\n> cases we're going to stumble through and get the right answer anyway,\n> then it's maybe not great to break a bunch of accidentally-working\n> cases. However, it does strike me that this principle could easily be\n> overdone. init_toast_snapshot() could pick a random snapshot (or take\n> a new one) in order to call InitToastSnapshot() and that would often\n> work fine. Yet, upon realizing that things are busted, it chooses to\n> error out instead. 
I approve of that choice, and don't think we should\n> > rule out the idea of making that check more robust.\n>\n> I'm all for improving robustness, but failing in cases that would have\n> worked before (even if only accidentally) is not going to be seen by\n> users as more robust. I think that this late stage of the development\n> cycle is not the time to be putting in changes that are not actually\n> going to fix bugs but only call greater attention to the possibility\n> that a bug exists.\n>\n> TBH, given where we are in the dev cycle, I thought there was a lot of\n> sense behind your earlier thought that HaveRegisteredOrActiveSnapshot\n> should be reverted entirely. I'm okay with keeping it as an assertion-\n> only check, so that it won't bother end users. I'm not okay with\n> adding end-user-visible failures, at least not till early in v16.\n\nI wasn't really taking a position either way about timing. If we can\ndemonstrate that things other than HaveRegisteredOrActiveSnapshot()\nitself are misbehaving, then I think fixes for those bugs are\npotentially back-patchable no matter where we are in the release\ncycle, but in terms of when we make changes to try to detect bugs we\ndon't know about yet, I could go either way on whether to do that now\nor wait. We can't know whether the bugs we haven't found yet will\ncause a big problem for someone tomorrow, ten years from now, or\nnever.\n\nI am not really very happy about HaveRegisteredOrActiveSnapshot(),\nhonestly. I think in the form we have it in the tree it looks\nunder-engineered. It's not really testing the right thing (even\nleaving aside the bug fix) as you have been pointing out, it doesn't\nreally mesh well with the sanity checking that was there before as I\nhave been pointing out, and it's only used in one place. I wouldn't be\nsad if it got reverted. However, I don't think it's going to do us any\ngreat harm, either. 
Although it's a long way from the best thing we\nhave in the tree, it's also a long way from the worst thing we have in\nthe tree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 12:50:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I wasn't really taking a position either way about timing. If we can\n> demonstrate that things other than HaveRegisteredOrActiveSnapshot()\n> itself are misbehaving, then I think fixes for those bugs are\n> potentially back-patchable no matter where we are in the release\n> cycle,\n\nSure, but ...\n\n> but in terms of when we make changes to try to detect bugs we\n> don't know about yet, I could go either way on whether to do that now\n> or wait. We can't know whether the bugs we haven't found yet will\n> cause a big problem for someone tomorrow, ten years from now, or\n> never.\n\n... I think in this case we do have a pretty good idea of the possible\nconsequences. Most of the time, an unsafe toast fetch will work\nfine because the toast data is still there. If you're very unlucky\nthen it's been deleted, and vacuumed away, and then you get a \"missing\nchunk number\" error. If you're really astronomically unlucky, perhaps\nthe toast OID has been recycled and you get the wrong data (it's not\nclear to me whether the toast snapshot visibility rules would prevent\nthis). I doubt we need to factor that last scenario into practical risk\nestimates, though. 
So adding a non-assert check for snapshot misuse\nwould effectively convert \"if you're very unlucky you get a weird error\"\nto \"lucky or not, you get some other weird error\", which no user is\ngoing to see as an improvement.\n\n> I am not really very happy about HaveRegisteredOrActiveSnapshot(),\n> honestly.\n\nMe either. If we find any other cases where it gives a false positive,\nI'll be for removing it rather than fixing it. But for the moment\nI'm content to leave it until we have a well-engineered solution to\nthe real problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Apr 2022 13:20:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-17 11:51:58 -0400, Tom Lane wrote:\n>> The fact that we have a snapshot at the instant of fetch doesn't prove\n>> that it existed continually since we fetched the toast reference,\n>> which seems to be the condition we actually need to assure.\n\n> Right.\n\nBTW, after thinking about this for a bit I am less concerned than\nI was about the system being full of bugs of this ilk. The executor\nper se should be fine because it does everything under a live snapshot.\nWe had bugs with cases that shove executor output into long-lived\ntuplestores, but we've dealt with that scenario. Catalog updates\nperformed on tuples fetched from a catalog scan seem safe enough too.\nAndres was worried about catalog updates performed using tuples fetched\nfrom catcache, but that's not a problem because we detoasted every value\nwhen it went into the catcache, cf 08e261cbc.\n\n(Mind you, 08e261cbc's solution is risky performancewise, because it\nmeans we have to re-toast every value during such catalog updates,\ninstead of being able to carry the old values of unchanged columns\nforward. 
But it's not a correctness bug.)\n\n(Also, the whining I did in 08e261cbc's commit message is no longer\nrelevant now that we read catalogs with MVCC snapshots.)\n\nThere may be some corner cases that aren't described by any of these\nthree blanket scenarios, but they've got to be pretty few and far\nbetween.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Apr 2022 16:06:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Mon, Apr 18, 2022 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There may be some corner cases that aren't described by any of these\n> three blanket scenarios, but they've got to be pretty few and far\n> between.\n\nMy first thought whenever anything like this comes up is cursors,\nespecially but not only holdable cursors. Also, plpgsql variables,\nmaybe mixed with embedded COMMIT/ROLLBACK. I don't find it\nparticularly hard to believe we have some bugs in\ninsufficiently-well-considered parts of the system that pass around\ndatums outside of the normal executor flow, but I don't know exactly\nhow to find them all, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Apr 2022 07:39:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 18, 2022 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There may be some corner cases that aren't described by any of these\n>> three blanket scenarios, but they've got to be pretty few and far\n>> between.\n\n> My first thought whenever anything like this comes up is cursors,\n> especially but not only holdable cursors. 
Also, plpgsql variables,\n> maybe mixed with embedded COMMIT/ROLLBACK.\n\nThose exact cases have had detoasting bugs in the past and are now fixed.\n\n> I don't find it\n> particularly hard to believe we have some bugs in\n> insufficiently-well-considered parts of the system that pass around\n> datums outside of the normal executor flow, but I don't know exactly\n> how to find them all, either.\n\nI'm not here to claim that there are precisely zero remaining bugs\nof this ilk. I'm just saying that I think we've flushed out most\nof them. I think there is some value in trying to think of a way\nto prove that none remain, but it's not a problem we can solve\nfor v15.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Apr 2022 10:36:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\",\n File: \"toast_internals.c\", Line: 670, PID: 19403)" }, { "msg_contents": "On Tue, Apr 19, 2022 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not here to claim that there are precisely zero remaining bugs\n> of this ilk. I'm just saying that I think we've flushed out most\n> of them. I think there is some value in trying to think of a way\n> to prove that none remain, but it's not a problem we can solve\n> for v15.\n\nSure, that's fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Apr 2022 12:11:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRAP: FailedAssertion(\"HaveRegisteredOrActiveSnapshot()\", File:\n \"toast_internals.c\", Line: 670, PID: 19403)" } ]
[ { "msg_contents": "Hi,\n\nWhen I read the documentation of pg_waldump, I found the description about\n-B option is wrong.\n\n -B block\n --block=block\n Only display records that modify the given block. The relation must also be provided with --relation or -l.\n\nBefore 52b5568, the -l option is short for --relation, however, it has been\nchanged to -R, and we forgot to update the documentation.\n\nHere is a patch for it.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Mon, 28 Mar 2022 11:02:05 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Fix pg_waldump documentation about block option" }, { "msg_contents": "On Mon, Mar 28, 2022 at 4:02 PM Japin Li <japinli@hotmail.com> wrote:\n> Before 52b5568, the -l option is short for --relation, however, it has been\n> changed to -R, and we forgot to update the documentation.\n>\n> Here is a patch for it.\n\nPushed. Thanks!\n\n\n", "msg_date": "Mon, 28 Mar 2022 16:28:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_waldump documentation about block option" } ]
[ { "msg_contents": "I notice a number of patches have a failure in the cfbot on the\n027_stream_regress test. I think these are related to a bug in that\ntest being discussed in a thread somewhere though I don't have it\nhandy. Is that right?\n\nI think it doesn't indicate anything wrong with the individual patches right?\n\nFor an example the \"Add checkpoint and redo LSN to LogCheckpointEnd\nlog message\" patch which is a fairly simple patch adding a few details\nto some log messages is failing with this which doesn't make much\nsense since it shouldn't be affecting the actual recovery at all:\n\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/cluster.out\n/tmp/cirrus-ci-build/src/test/recovery/tmp_check/results/cluster.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/cluster.out\n2022-03-28 01:18:36.126774178 +0000\n+++ /tmp/cirrus-ci-build/src/test/recovery/tmp_check/results/cluster.out\n2022-03-28 01:23:24.489517050 +0000\n@@ -467,7 +467,8 @@\n where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);\n hundred | lhundred | thousand | lthousand | tenthous | ltenthous\n ---------+----------+----------+-----------+----------+-----------\n-(0 rows)\n+ 0 | 99 | 0 | 999 | 0 | 9999\n+(1 row)\n\n reset enable_indexscan;\n reset maintenance_work_mem;\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:28:20 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "CFBot has failures on 027_stream_regress for a number of patches" }, { "msg_contents": "Hi,\n\nOn 2022-03-28 15:28:20 -0400, Greg Stark wrote:\n> I notice a number of patches have a failure in the cfbot on the\n> 027_stream_regress test. I think these are related to a bug in that\n> test being discussed in a thread somewhere though I don't have it\n> handy. 
Is that right?\n\nIt looks more like a bug in the general regression tests that are more likely\nto be triggered by 027_stream_regress.\n\n\n> I think it doesn't indicate anything wrong with the individual patches right?\n\nDepends on the patch, I think. It certainly also has triggered for bugs in\npatches...\n\n\n> For an example the \"Add checkpoint and redo LSN to LogCheckpointEnd\n> log message\" patch which is a fairly simple patch adding a few details\n> to some log messages is failing with this which doesn't make much\n> sense since it shouldn't be affecting the actual recovery at all:\n\n027_stream_regress.pl runs the normal regression tests via streaming rep - the\nfailure here is on the primary (i.e. doesn't involve recovery itself). Due to\nthe smaller shared_buffers setting the test uses, some edge cases are\ntriggered more often.\n\n\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/cluster.out\n> /tmp/cirrus-ci-build/src/test/recovery/tmp_check/results/cluster.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/cluster.out 2022-03-28 01:18:36.126774178 +0000\n> +++ /tmp/cirrus-ci-build/src/test/recovery/tmp_check/results/cluster.out 2022-03-28 01:23:24.489517050 +0000\n> @@ -467,7 +467,8 @@\n> where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);\n> hundred | lhundred | thousand | lthousand | tenthous | ltenthous\n> ---------+----------+----------+-----------+----------+-----------\n> -(0 rows)\n> + 0 | 99 | 0 | 999 | 0 | 9999\n> +(1 row)\n\nThis one is: https://postgr.es/m/CA%2BhUKGLV3wzmYFbNs%2BTZ1%2Bw0e%3Dhc61fcvrF3OmutuaPBuZMd0w%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:49:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CFBot has failures on 027_stream_regress for a number of patches" } ]
[ { "msg_contents": "Hi hackers,\n\nIt's a natural requirement to unregister the callback for transaction or\nsubtransaction when the callback is invoked, so we don't have to\nunregister the callback somewhere. If it's not allowed to do it\nin CallXactCallback() or CallSubXactCallback(), we must find the\nright place and handle it carefully.\n\nLuckily, we just need a few lines of code to support this feature,\nby saving the next pointer before calling the callback.\n\nThe usage looks like:\n```\n static void\n AtEOXact_cleanup_state(SubXactEvent event, SubTransactionId mySubid,\n SubTransactionId parentSubid, void *arg)\n {\n SubXactCleanupItem *item = (SubXactCleanupItem *)arg;\n Assert(FullTransactionIdEquals(item->fullXid,\nGetTopFullTransactionIdIfAny()));\n if (item->mySubid == mySubid &&\n (event == SUBXACT_EVENT_COMMIT_SUB || event ==\nSUBXACT_EVENT_ABORT_SUB))\n {\n /* to do some cleanup for subtransaction */\n ...\n UnregisterSubXactCallback(AtEOXact_cleanup_state, arg);\n }\n }\n```\n\nRegards,\nHao Wu", "msg_date": "Tue, 29 Mar 2022 14:48:54 +0800", "msg_from": "Hao Wu <gfphoenix78@gmail.com>", "msg_from_op": true, "msg_subject": "Enables to call Unregister*XactCallback() in Call*XactCallback()" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 14:48:54 +0800, Hao Wu wrote:\n> It's a natural requirement to unregister the callback for transaction or\n> subtransaction when the callback is invoked, so we don't have to\n> unregister the callback somewhere.\n\nYou normally shouldn'd need to do this frequently - what's your use case?\nUnregisterXactCallback() is O(N), so workloads registering / unregistering a\nlot of callbacks would be problematic.\n\n> Luckily, we just need a few lines of code to support this feature,\n> by saving the next pointer before calling the callback.\n\nThat seems reasonable...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 09:39:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, 
"msg_subject": "Re: Enables to call Unregister*XactCallback() in Call*XactCallback()" }, { "msg_contents": "> You normally shouldn'd need to do this frequently - what's your use case?\n> UnregisterXactCallback() is O(N), so workloads registering / unregistering\n> a\n> lot of callbacks would be problematic.\n>\n> It's not about workloads or efficiency. Here is the use case:\nI want to register a callback for some subtransaction, and only run this\ncallback once\nwhen the subtransaction ends, no matter if it was committed or cancelled.\n\nIt's reasonable to unregister the callback at the end of the callback, or I\nhave to\ncall UnregisterSubXactCallback() somewhere. Because\nSubXact_callbacks/Xact_callbacks\nis only set by Unregister*XactCallback(). The question now becomes\n1. where to call UnregisterSubXactCallback()\n2. ensure that calling UnregisterSubXactCallback() is not triggered in the\ncurrent callback\n\nThis patch enables us to safely delete the current callback when the\ncallback finishes to\nimplement run callback once and unregister in one place.\n\nRegards,\nHao Wu\n\n\nYou normally shouldn'd need to do this frequently - what's your use case?\nUnregisterXactCallback() is O(N), so workloads registering / unregistering a\nlot of callbacks would be problematic.\nIt's not about workloads or efficiency. Here is the use case:I want to register a callback for some subtransaction, and only run this callback oncewhen the subtransaction ends, no matter if it was committed or cancelled.It's reasonable to unregister the callback at the end of the callback, or I have tocall UnregisterSubXactCallback() somewhere. Because SubXact_callbacks/Xact_callbacksis only set by Unregister*XactCallback(). The question now becomes1. where to call UnregisterSubXactCallback()2. 
ensure that calling UnregisterSubXactCallback() is not triggered in the current callbackThis patch enables us to safely delete the current callback when the callback finishes toimplement run callback once and unregister in one place.Regards,Hao Wu", "msg_date": "Wed, 6 Apr 2022 07:27:21 +0800", "msg_from": "Hao Wu <gfphoenix78@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Enables to call Unregister*XactCallback() in Call*XactCallback()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-29 14:48:54 +0800, Hao Wu wrote:\n>> It's a natural requirement to unregister the callback for transaction or\n>> subtransaction when the callback is invoked, so we don't have to\n>> unregister the callback somewhere.\n\n> You normally shouldn'd need to do this frequently - what's your use case?\n> UnregisterXactCallback() is O(N), so workloads registering / unregistering a\n> lot of callbacks would be problematic.\n\nIt'd only be slow if you had a lot of distinct callbacks registered\nat the same time, which doesn't sound like a common situation.\n\n>> Luckily, we just need a few lines of code to support this feature,\n>> by saving the next pointer before calling the callback.\n\n> That seems reasonable...\n\nYeah. Whether it's efficient or not, seems like it should *work*.\nI'm a bit inclined to call this a bug-fix and backpatch it.\n\nI went looking for other occurrences of this code in places that have\nan unregister function, and found one in ResourceOwnerReleaseInternal,\nso I think we should fix that too. Also, a comment seems advisable;\nthat leads me to the attached v2.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 26 Sep 2022 18:05:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Enables to call Unregister*XactCallback() in Call*XactCallback()" }, { "msg_contents": "On Mon, Sep 26, 2022 at 06:05:34PM -0400, Tom Lane wrote:\n> Yeah. 
Whether it's efficient or not, seems like it should *work*.\n> I'm a bit inclined to call this a bug-fix and backpatch it.\n> \n> I went looking for other occurrences of this code in places that have\n> an unregister function, and found one in ResourceOwnerReleaseInternal,\n> so I think we should fix that too. Also, a comment seems advisable;\n> that leads me to the attached v2.\n\nLGTM. I have no opinion on back-patching.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 26 Sep 2022 15:13:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enables to call Unregister*XactCallback() in Call*XactCallback()" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Mon, Sep 26, 2022 at 06:05:34PM -0400, Tom Lane wrote:\n>> Yeah. Whether it's efficient or not, seems like it should *work*.\n>> I'm a bit inclined to call this a bug-fix and backpatch it.\n\n> LGTM. I have no opinion on back-patching.\n\nI had second thoughts about back-patching: doing so would encourage\nextensions to rely on this working in pre-v16 branches, which they'd\nbetter not since they might be in a not-up-to-date installation.\n\nWe could still squeeze this into v15 without creating such a hazard,\nbut post-rc1 doesn't seem like a good time for inessential tweaks.\n\nHence, pushed to HEAD only.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:28:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Enables to call Unregister*XactCallback() in Call*XactCallback()" } ]
[ { "msg_contents": "Hi Hackers!\n\nPostgres Pro team would like to present a result of several years'\ndevelopment - a Generic JSON (GSON) API, announced in 2020 as \"The Grand\nUnification\". Finally, it is ready and awaiting review.\nFor the sake of compatibility with SQL standard we need one JSON data type,\nwhich is internally\n- jsonb by default, to optimize json objects storage;\n- Optionally behave as \"old textual json\" for the cases demanding full Json\ntext representation;\nAnd, logically, here comes the idea of Generic JSON.\nCurrent JSON API is different for json and jsonb data types:\n- Json has lexer and parser with visitor interface;\n- Jsonb uses Json lexer and parser for input, and several functions and\niterators for access.\nThis makes difficult to implement jsonpath functions for json (required by\nthe SQL standard)\nand GIN indexes for json.\nAlso, it makes very difficult to add new features like partial\ndecompression/detoasting or\nslicing, different storage formats, etc.\n\nGeneric JSON (GSON) — New API\n- New generic JSON API is based on jsonb API:\n- JSON datums, containers, and iterators are wrapped into generic Json,\nJsonContainer,\nand JsonIterator structures.\n- JsonbValue and its builder function pushJsonbValue() are renamed and used\nas is.\n- All container-specific functions are hidden into JsonContainerOps, which\nhas three\nimplementations:\n- JsonbContainerOps for jsonb\n- JsontContainerOps for json\n- JsonvContainerOps for in-memory tree-like JsonValue\nFor json only iterators need to be implemented, access functions are\nimplemented using these\niterators.\nAnd as an icing on the cake GSON allows using JSONb as SQL JSON.\n\nNew GSON functionality is available on github as\nhttps://github.com/postgrespro/postgres/tree/gson\nWe decided to divide this functionality into the set of patches:\n\n1) 1_json_prep_rebased.patch.gz. 
Improvements and refactoring of the existing\nJSONb interface, as\npreparation for introducing the new Generic JSON - hiding internal variables;\nfunctions used in the API\nare extracted into Json structures, assembling the future GSON generic\nfunction set;\n\n2) 2_generic_json.patch.gz. The new GSON (Generic JSON) interface itself - data\nwrappers, a generic\nset of functions, and a JSONb redirect via the new API. GSON is defined in the new header\njson_generic.h\nand C source json_generic.c. All internal operations are hidden inside\nthe generic function set;\n\n3) 3_tmp_stack_allocated_json.patch.gz. Allocation of temporary JSONb\nobjects on the stack, with\nfreeing when they are not needed anymore;\n\n4) 4_generic_json_refactoring.patch.gz. GSON refactoring and optimization -\nadding JsonCopy and\nJsonValueCopy container operations, a container allocation function, hiding\ndirect internal field access, and excluding jsonv functions;\n\n5) 5_gson_refactoring_1.patch.gz. Further refactoring and optimization -\ngetting rid of the old\njsonx and jsont macros and functions, introducing a Json container header,\nseparating container\ndata and header, and hiding them in the header structure.\n\nThe main contributor of GSON is Postgres Pro developer Nikita Glukhov, with\nthe guidance of Oleg Bartunov.\n\nWe're waiting for the community's review, and of course proposals, advice,\nquestions, and further discussion.\nThank you!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 29 Mar 2022 12:41:47 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "Generic JSON API" } ]
[ { "msg_contents": "Good day.\n\nv14 introduced the way to get original text for some kind of expressions\nusing new 'funcformat' - COERCE_SQL_SYNTAX:\n- EXTRACT(part from timestamp)\n- (text IS [form] NORMALIZED)\nand others.\n\nMentioned EXTRACT and NORMALIZED statements has parts, that are not\nusual arguments but some kind of syntax. At least, there is no way to:\n\n\tPREPARE a(text) as select extract($1 from now());\n\nBut JumbleExpr doesn't distinguish it and marks this argument as a\nvariable constant, ie remembers it in 'clocations'.\n\nI believe such \"non-variable constant\" should not be jumbled as\nreplaceable thing.\n\nIn our case (extended pg_stat_statements), we attempt to generalize\nplan and then explain generalized plan. But using constant list from\nJumbleState we mistakenly replace first argument in EXTRACT expression\nwith parameter. And then 'get_func_sql_syntax' fails on assertion \"first\nargument is text constant\".\n\nSure we could workaround in our plan mutator with skipping such first\nargument. But I wonder, is it correct at all to not count it as a\nnon-modifiable syntax part in JumbleExpr?\n\n------\n\nregards,\n\nSokolov Yura\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.coma\n\n\n\n", "msg_date": "Tue, 29 Mar 2022 15:52:57 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Jumble Query with COERCE_SQL_SYNTAX" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 29, 2022 at 03:52:57PM +0300, Yura Sokolov wrote:\n> \n> v14 introduced the way to get original text for some kind of expressions\n> using new 'funcformat' - COERCE_SQL_SYNTAX:\n> - EXTRACT(part from timestamp)\n> - (text IS [form] NORMALIZED)\n> and others.\n> \n> Mentioned EXTRACT and NORMALIZED statements has parts, that are not\n> usual arguments but some kind of syntax. 
At least, there is no way to:\n> \n> \tPREPARE a(text) as select extract($1 from now());\n> \n> But JumbleExpr doesn't distinguish it and marks this argument as a\n> variable constant, ie remembers it in 'clocations'.\n> \n> I believe such \"non-variable constant\" should not be jumbled as\n> replaceable thing.\n\nYeah, the problem is really that those are some form of sublanguage inside SQL,\nwhich is always a mess :(\n\nIt's probably an implementation detail that we treat those as syntactic sugar\nfor plain function calls, but since that's what we're doing I don't think it's\nreally sensible to change that. For instance, for the query jumbler using this\nquery or \"select pg_catalog.extract($1, now())\" are identical, and that form\ncan be prepared. Maybe it would make sense to allow a parameter for the\nEXTRACT(x FROM y), since we're already allowing a non-standard form with\nplain string literal? The story is a bit different for NORMALIZED though.\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:23:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Jumble Query with COERCE_SQL_SYNTAX" } ]
[ { "msg_contents": "Back in 367bc42 (for 9.2!) we \"avoid[ed] index rebuild[ing] for\nno-rewrite ALTER TABLE\n.. ALTER TYPE.\" However the docs still claim that \"a table rewrite is\nnot needed; but any indexes on the affected columns must still be\nrebuilt.\"\n\nI've attached a simple patch to update the docs to match the current behavior.\n\nThanks,\nJames Coleman", "msg_date": "Tue, 29 Mar 2022 10:03:45 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Tue, 29 Mar 2022 at 16:04, James Coleman <jtc331@gmail.com> wrote:\n>\n> Back in 367bc42 (for 9.2!) we \"avoid[ed] index rebuild[ing] for\n> no-rewrite ALTER TABLE\n> .. ALTER TYPE.\" However the docs still claim that \"a table rewrite is\n> not needed; but any indexes on the affected columns must still be\n> rebuilt.\"\n\nAlthough indexes might indeed not need a rebuild, in many cases they\nstill do (e.g. when the type changes between text and a domain of text\nwith a different collation).\n\nI think that the current state of the docs is better in that regard;\nas it explicitly warns for index rebuilds, even when the letter of the\ndocs is incorrect: there are indeed cases we don't need to rebuild the\nindexes; but that would require more elaboration.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 29 Mar 2022 17:29:32 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:29 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 29 Mar 2022 at 16:04, James Coleman <jtc331@gmail.com> wrote:\n> >\n> > Back in 367bc42 (for 9.2!) we \"avoid[ed] index rebuild[ing] for\n> > no-rewrite ALTER TABLE\n> > .. 
ALTER TYPE.\" However the docs still claim that \"a table rewrite is\n> > not needed; but any indexes on the affected columns must still be\n> > rebuilt.\"\n>\n> Although indexes might indeed not need a rebuild, in many cases they\n> still do (e.g. when the type changes between text and a domain of text\n> with a different collation).\n>\n> I think that the current state of the docs is better in that regard;\n> as it explicitly warns for index rebuilds, even when the letter of the\n> docs is incorrect: there are indeed cases we don't need to rebuild the\n> indexes; but that would require more elaboration.\n\nAdmittedly I hadn't thought of that case. But isn't it already covered\nin the existing docs by the phrase \"or an unconstrained domain over\nthe new type\"? I don't love the word \"or\" there because there's a\nsense in which the first clause \"binary coercible to the new type\" is\nstill accurate for your example unless you narrowly separate \"domain\"\nand \"type\", but I think that narrow distinction is what's technically\nthere already.\n\nThat being said, I could instead of removing the clause entirely\nreplace it with something like \"indexes may still need to be rebuilt\nwhen the new type is a constrained domain\".\n\nThoughts?\n\nJames Coleman\n\n\n", "msg_date": "Wed, 30 Mar 2022 10:04:21 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Wed, Mar 30, 2022 at 10:04 AM James Coleman <jtc331@gmail.com> wrote:\n> Admittedly I hadn't thought of that case. But isn't it already covered\n> in the existing docs by the phrase \"or an unconstrained domain over\n> the new type\"? 
I don't love the word \"or\" there because there's a\n> sense in which the first clause \"binary coercible to the new type\" is\n> still accurate for your example unless you narrowly separate \"domain\"\n> and \"type\", but I think that narrow distinction is what's technically\n> there already.\n>\n> That being said, I could instead of removing the clause entirely\n> replace it with something like \"indexes may still need to be rebuilt\n> when the new type is a constrained domain\".\n\nYou're talking about this as if the normal cases is that indexes don't\nneed to rebuilt, but sometimes they do. However, if I recall\ncorrectly, the way the code is structured, it supposes that the\nindexes do need a rebuild, and then tries to prove that they actually\ndon't. That disconnect makes me nervous, because it seems to me that\nit could easily lead to a situation where our documentation contains\nsubtle errors. I think somebody should go through and study the\nalgorithm as it exists in the code, and then write documentation to\nmatch it. And I think that when they do that, they should approach it\nfrom the same point of view that the code does i.e. \"these are the\nconditions for skipping the index rebuild\" rather than \"these are the\nconditions for performing an index rebuild.\" By doing it that way, I\nthink we minimize the likelihood of disconnects between code and\ndocumentation, and also make it easier to update in future if the\nalgorithm gets changed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 11:41:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Wed, Mar 30, 2022 at 11:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 10:04 AM James Coleman <jtc331@gmail.com> wrote:\n> > Admittedly I hadn't thought of that case. 
But isn't it already covered\n> > in the existing docs by the phrase \"or an unconstrained domain over\n> > the new type\"? I don't love the word \"or\" there because there's a\n> > sense in which the first clause \"binary coercible to the new type\" is\n> > still accurate for your example unless you narrowly separate \"domain\"\n> > and \"type\", but I think that narrow distinction is what's technically\n> > there already.\n> >\n> > That being said, I could instead of removing the clause entirely\n> > replace it with something like \"indexes may still need to be rebuilt\n> > when the new type is a constrained domain\".\n>\n> You're talking about this as if the normal cases is that indexes don't\n> need to rebuilt, but sometimes they do. However, if I recall\n> correctly, the way the code is structured, it supposes that the\n> indexes do need a rebuild, and then tries to prove that they actually\n> don't. That disconnect makes me nervous, because it seems to me that\n> it could easily lead to a situation where our documentation contains\n> subtle errors. I think somebody should go through and study the\n> algorithm as it exists in the code, and then write documentation to\n> match it. And I think that when they do that, they should approach it\n> from the same point of view that the code does i.e. \"these are the\n> conditions for skipping the index rebuild\" rather than \"these are the\n> conditions for performing an index rebuild.\" By doing it that way, I\n> think we minimize the likelihood of disconnects between code and\n> documentation, and also make it easier to update in future if the\n> algorithm gets changed.\n\nHmm, having it match the way it works makes sense. 
Would you feel\ncomfortable with an intermediate step (queueing up that as a larger\nchange) changing the clause to something like \"indexes will still have\nto be rebuilt unless the system can guarantee that the sort order is\nproven to be unchanged\" (with appropriate wordsmithing to be a bit\nless verbose if possible)?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Wed, 30 Mar 2022 16:33:18 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Wed, Mar 30, 2022 at 4:33 PM James Coleman <jtc331@gmail.com> wrote:\n> Hmm, having it match the way it works makes sense. Would you feel\n> comfortable with an intermediate step (queueing up that as a larger\n> change) changing the clause to something like \"indexes will still have\n> to be rebuilt unless the system can guarantee that the sort order is\n> proven to be unchanged\" (with appropriate wordsmithing to be a bit\n> less verbose if possible)?\n\nYeah, that seems fine. It's arguable how much detail we should go into\nhere - but a statement of the form you propose is not misleading, and\nthat's what seems most important to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:41:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Wed, Mar 30, 2022 at 5:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 4:33 PM James Coleman <jtc331@gmail.com> wrote:\n> > Hmm, having it match the way it works makes sense. 
Would you feel\n> > comfortable with an intermediate step (queueing up that as a larger\n> > change) changing the clause to something like \"indexes will still have\n> > to be rebuilt unless the system can guarantee that the sort order is\n> > proven to be unchanged\" (with appropriate wordsmithing to be a bit\n> > less verbose if possible)?\n>\n> Yeah, that seems fine. It's arguable how much detail we should go into\n> here - but a statement of the form you propose is not misleading, and\n> that's what seems most important to me.\n\nAll right, thanks for feedback. Attached is v2 with such a change.\nI've not included examples, and I'm about 50/50 on doing so. What are\nyour thoughts on adding in parens \"e.g., changing from varchar to text\navoids rebuilding indexes while changing from text to a domain of text\nwith a different collation will require rebuilding indexes\"?\n\nThanks,\nJames Coleman", "msg_date": "Thu, 31 Mar 2022 09:17:46 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:17 AM James Coleman <jtc331@gmail.com> wrote:\n> All right, thanks for feedback. Attached is v2 with such a change.\n> I've not included examples, and I'm about 50/50 on doing so. What are\n> your thoughts on adding in parens \"e.g., changing from varchar to text\n> avoids rebuilding indexes while changing from text to a domain of text\n> with a different collation will require rebuilding indexes\"?\n\nOn the patch, I suggest that instead of saying \"can verify that sort\norder and/or hashing semantics are unchanged\" you say something like\n\"can verify that the new index would be logically equivalent to the\ncurrent one\", mostly because I do not think that \"and/or\" looks very\ngood in formal writing.\n\nI think it would be fine to include examples, but I think that the\nphrasing you suggest here doesn't seem great. 
I'm not sure how to fix\nit exactly. Maybe it needs a little more explanation?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:43:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 9:17 AM James Coleman <jtc331@gmail.com> wrote:\n> > All right, thanks for feedback. Attached is v2 with such a change.\n> > I've not included examples, and I'm about 50/50 on doing so. What are\n> > your thoughts on adding in parens \"e.g., changing from varchar to text\n> > avoids rebuilding indexes while changing from text to a domain of text\n> > with a different collation will require rebuilding indexes\"?\n>\n> On the patch, I suggest that instead of saying \"can verify that sort\n> order and/or hashing semantics are unchanged\" you say something like\n> \"can verify that the new index would be logically equivalent to the\n> current one\", mostly because I do not think that \"and/or\" looks very\n> good in formal writing.\n\nAgreed re: \"and/or\".\n\n> I think it would be fine to include examples, but I think that the\n> phrasing you suggest here doesn't seem great. I'm not sure how to fix\n> it exactly. Maybe it needs a little more explanation?\n\nIs the attached more along the lines of what you were thinking?\n\nThanks,\nJames Coleman", "msg_date": "Thu, 31 Mar 2022 10:13:57 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:14 AM James Coleman <jtc331@gmail.com> wrote:\n> Is the attached more along the lines of what you were thinking?\n\nYeah. 
Maybe this would be a little clearer: \"For example, if the\ncollation for a column has been changed, an index rebuild is always\nrequired, because the new sort order might be different. However, in\nthe absence of a collation change, a column can be changed from text\nto varchar or vice versa without rebuilding the indexes, because these\ndata types sort identically.\"\n\nWe don't seem to be very consistent about whether we write type names\nlike VARCHAR in upper case or lower case in the documentation. I'd\nvote for using lower-case, but it probably doesn't matter much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:28:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 10:14 AM James Coleman <jtc331@gmail.com> wrote:\n> > Is the attached more along the lines of what you were thinking?\n>\n> Yeah. Maybe this would be a little clearer: \"For example, if the\n> collation for a column has been changed, an index rebuild is always\n> required, because the new sort order might be different. However, in\n> the absence of a collation change, a column can be changed from text\n> to varchar or vice versa without rebuilding the indexes, because these\n> data types sort identically.\"\n\nUpdated.\n\n> We don't seem to be very consistent about whether we write type names\n> like VARCHAR in upper case or lower case in the documentation. I'd\n> vote for using lower-case, but it probably doesn't matter much.\n\nGrepping suggests lower case is more common (2 to 1) so I switched to\nthat. 
Interestingly for \"text\" it's 700+ to 1 :)\n\nThanks,\nJames Coleman", "msg_date": "Thu, 31 Mar 2022 10:51:28 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:51 AM James Coleman <jtc331@gmail.com> wrote:\n> Updated.\n\nThis version looks fine to me. If nobody objects I will commit it and\ncredit myself as a co-author.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:24:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 10:51 AM James Coleman <jtc331@gmail.com> wrote:\n> > Updated.\n>\n> This version looks fine to me. If nobody objects I will commit it and\n> credit myself as a co-author.\n\nSounds great; thanks again for the review.\n\nJames Coleman\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:19:08 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Thu, Mar 31, 2022 at 4:19 PM James Coleman <jtc331@gmail.com> wrote:\n> On Thu, Mar 31, 2022 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Mar 31, 2022 at 10:51 AM James Coleman <jtc331@gmail.com> wrote:\n> > > Updated.\n> >\n> > This version looks fine to me. 
If nobody objects I will commit it and\n> > credit myself as a co-author.\n>\n> Sounds great; thanks again for the review.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 08:58:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" }, { "msg_contents": "On Fri, Apr 1, 2022 at 8:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 4:19 PM James Coleman <jtc331@gmail.com> wrote:\n> > On Thu, Mar 31, 2022 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Thu, Mar 31, 2022 at 10:51 AM James Coleman <jtc331@gmail.com> wrote:\n> > > > Updated.\n> > >\n> > > This version looks fine to me. If nobody objects I will commit it and\n> > > credit myself as a co-author.\n> >\n> > Sounds great; thanks again for the review.\n>\n> Done.\n\nThanks. I marked the CF entry as committed.\n\nJames Coleman\n\n\n", "msg_date": "Fri, 1 Apr 2022 09:01:49 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct docs re: rewriting indexes when table rewrite is skipped" } ]
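The rule the thread above converged on, that an index rebuild can be skipped only when the system can verify the new index would be logically equivalent to the current one (for example, text and varchar sort identically absent a collation change), can be sketched as a small decision function. This is an illustrative model only, not PostgreSQL's actual implementation; the names ColumnType, BINARY_COERCIBLE, and index_rebuild_needed are invented for the example.

```python
# Illustrative sketch only: models the documentation rule discussed above
# ("indexes must be rebuilt unless the new index would be logically
# equivalent"). Not the real PostgreSQL logic; all names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class ColumnType:
    base: str       # e.g. "text", "varchar"
    collation: str  # e.g. "C", "en_US"


# Type pairs the thread uses as an example of identical sort semantics.
BINARY_COERCIBLE = {frozenset({"text", "varchar"})}


def index_rebuild_needed(old: ColumnType, new: ColumnType) -> bool:
    """True when ALTER TABLE ... TYPE must rebuild indexes on the column."""
    if old.collation != new.collation:
        # A collation change can change the sort order, so always rebuild.
        return True
    if old.base == new.base:
        return False
    # Otherwise skip the rebuild only for pairs known to sort identically.
    return frozenset({old.base, new.base}) not in BINARY_COERCIBLE


# varchar -> text without a collation change: rebuild can be skipped.
print(index_rebuild_needed(ColumnType("varchar", "C"), ColumnType("text", "C")))
# text -> text but with a collation change: rebuild is required.
print(index_rebuild_needed(ColumnType("text", "C"), ColumnType("text", "en_US")))
```

In the real system the equivalence check is performed against operator classes and collations rather than a hard-coded pair list; the sketch only captures the shape of the decision.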
[ { "msg_contents": "Over in the \"Document atthasmissing default optimization avoids\nverification table scan\" thread David Johnston (who I've cc'd)\nsuggested that my goals might be better implemented with a simple\nrestructuring of the Notes section of the ALTER TABLE docs. I think\nthis is also along the lines of Tom Lane's suggestion of a \"unified\ndiscussion\", but I've chosen for now (and simplicity's sake) not to\nbreak this into an entirely new page. If reviewers feel that is\nwarranted at this stage, I can do that, but it seems to me that for\nnow this improves the structure and sets us up for such a future page\nbut falls short of sufficient content to move into its own page.\n\nOne question on the changes: the current docs say \"when attaching a\nnew partition it may be scanned to verify that existing rows meet the\npartition constraint\". The word \"may\" there seems to suggest there may\nalso be occasions where scans are not needed, but no examples of such\ncases are present. I'm not immediately aware of such a case. Are these\nconstraints always validated? 
If not, in which cases can such a scan\nbe skipped?\n\nI've also incorporated the slight correction in \"Correct docs re:\nrewriting indexes when table rewrite is skipped\" [2] here, and will\nrebase this patch if that gets committed.\n\nThanks,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAKFQuwZyBaJjNepdTM3kO8PLaCpRdRd8%2BmtLT8QdE73oAsGv8Q%40mail.gmail.com\n2: https://www.postgresql.org/message-id/CAAaqYe90Ea3RG%3DA7H-ONvTcx549-oQhp07BrHErwM%3DAyH2ximg%40mail.gmail.com", "msg_date": "Tue, 29 Mar 2022 10:20:37 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" }, { "msg_contents": "On Tue, 29 Mar 2022 at 16:20, James Coleman <jtc331@gmail.com> wrote:\n>\n> Over in the \"Document atthasmissing default optimization avoids\n> verification table scan\" thread David Johnston (who I've cc'd)\n> suggested that my goals might be better implemented with a simple\n> restructuring of the Notes section of the ALTER TABLE docs. I think\n> this is also along the lines of Tom Lane's suggestion of a \"unified\n> discussion\", but I've chosen for now (and simplicity's sake) not to\n> break this into an entirely new page. If reviewers feel that is\n> warranted at this stage, I can do that, but it seems to me that for\n> now this improves the structure and sets us up for such a future page\n> but falls short of sufficient content to move into its own page.\n>\n> One question on the changes: the current docs say \"when attaching a\n> new partition it may be scanned to verify that existing rows meet the\n> partition constraint\". The word \"may\" there seems to suggest there may\n> also be occasions where scans are not needed, but no examples of such\n> cases are present. I'm not immediately aware of such a case. Are these\n> constraints always validated? 
If not, in which cases can such a scan\n> be skipped?\n>\n> I've also incorporated the slight correction in \"Correct docs re:\n> rewriting indexes when table rewrite is skipped\" [2] here, and will\n> rebase this patch if that gets committed.\n\nSee comments in that thread.\n\n> + Changing the type of an existing column will require the entire table and its\n> + indexes to be rewritten. As an exception, if the <literal>USING</literal> clause\n> + does not change the column contents and the old type is either binary coercible\n> + to the new type or an unconstrained domain over the new type, a table rewrite is\n> + not needed.\n\nThis implies \"If the old type is [...] an unconstrained domain over\nthe new type, a table rewrite is not needed.\", which is the wrong way\naround.\n\nI'd go with something along the lines of:\n\n+ Changing the type of an existing column will require the entire table to be\n+ rewritten, unless the <literal>USING</literal> clause is only a\nbinary coercible\n+ cast, or if the new type is an unconstrained\n<literal>DOMAIN<literal> over the\n+ old type.\n\nThat would drop the reference to index rebuilding; but that should be\ncovered in other parts of the docs.\n\n> + The following alterations of the table require the entire table, and in some\n> + cases its indexes as well, to be rewritten.\n\nIt is impossible to rewrite the table without at the same time also\nrewriting the indexes; as the location of tuples changes and thus\npreviously generated indexes will become invalid. At the same time;\nchanges to columns might not require a table rewrite, while still\nrequiring the indexes to be rewritten. I suggest changing the order of\n\"table\" and \"index\", or dropping the clause.\n\n> + [...] For a large table such a rewrite\n> + may take a significant amount of time and will temporarily require as much as\n> + double the disk space.\n\nI'd replace the will with could. 
Technically, this \"double the disk\nspace\" could be even higher than that; due to index rebuilds taking up\nto 3x normal space (one original index which is only dropped at the\nend, one sorted tuple store for the rebuild, and one new index).\n\n> - Similarly, when attaching a new partition it may be scanned to verify that\n> - existing rows meet the partition constraint.\n> + Attaching a new partition requires scanning the table to verify that existing\n> + rows meet the partition constraint.\n\nThis is also (and better!) documented under section\nsql-altertable-attach-partition: we will skip full table scan if the\ntable partition's existing constraints already imply the new partition\nconstraints. The previous wording is better in that regard (\"may\nneed\", instead of \"requires\"), though it could be improved by refering\nto the sql-altertable-attach-partition section.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:58:15 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:58 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 29 Mar 2022 at 16:20, James Coleman <jtc331@gmail.com> wrote:\n> >\n> > Over in the \"Document atthasmissing default optimization avoids\n> > verification table scan\" thread David Johnston (who I've cc'd)\n> > suggested that my goals might be better implemented with a simple\n> > restructuring of the Notes section of the ALTER TABLE docs. I think\n> > this is also along the lines of Tom Lane's suggestion of a \"unified\n> > discussion\", but I've chosen for now (and simplicity's sake) not to\n> > break this into an entirely new page. 
If reviewers feel that is\n> > warranted at this stage, I can do that, but it seems to me that for\n> > now this improves the structure and sets us up for such a future page\n> > but falls short of sufficient content to move into its own page.\n> >\n> > One question on the changes: the current docs say \"when attaching a\n> > new partition it may be scanned to verify that existing rows meet the\n> > partition constraint\". The word \"may\" there seems to suggest there may\n> > also be occasions where scans are not needed, but no examples of such\n> > cases are present. I'm not immediately aware of such a case. Are these\n> > constraints always validated? If not, in which cases can such a scan\n> > be skipped?\n> >\n> > I've also incorporated the slight correction in \"Correct docs re:\n> > rewriting indexes when table rewrite is skipped\" [2] here, and will\n> > rebase this patch if that gets committed.\n>\n> See comments in that thread.\n\nRebased since that thread has now resulted in a committed patch.\n\n> > + Changing the type of an existing column will require the entire table and its\n> > + indexes to be rewritten. As an exception, if the <literal>USING</literal> clause\n> > + does not change the column contents and the old type is either binary coercible\n> > + to the new type or an unconstrained domain over the new type, a table rewrite is\n> > + not needed.\n>\n> This implies \"If the old type is [...] an unconstrained domain over\n> the new type, a table rewrite is not needed.\", which is the wrong way\n> around.\n>\n> I'd go with something along the lines of:\n>\n> + Changing the type of an existing column will require the entire table to be\n> + rewritten, unless the <literal>USING</literal> clause is only a\n> binary coercible\n> + cast, or if the new type is an unconstrained\n> <literal>DOMAIN<literal> over the\n> + old type.\n\nThat language is actually unchanged from the existing docs; is there\nan error in the existing docs you're seeing? 
I'm actually imagining\nthat it can probably go either way -- from unconstrained domain over\nnew type to new type or from old type to unconstrained domain over old\ntype.\n\n> That would drop the reference to index rebuilding; but that should be\n> covered in other parts of the docs.\n\nPart of the whole point of this restructuring is to make both of those\nclear; I think we should retain the comments about indexes.\n\n> > + The following alterations of the table require the entire table, and in some\n> > + cases its indexes as well, to be rewritten.\n>\n> It is impossible to rewrite the table without at the same time also\n> rewriting the indexes; as the location of tuples changes and thus\n> previously generated indexes will become invalid. At the same time;\n> changes to columns might not require a table rewrite, while still\n> requiring the indexes to be rewritten. I suggest changing the order of\n> \"table\" and \"index\", or dropping the clause.\n\nAh, that's a good point. I've rewritten that part.\n\n> > + [...] For a large table such a rewrite\n> > + may take a significant amount of time and will temporarily require as much as\n> > + double the disk space.\n>\n> I'd replace the will with could. Technically, this \"double the disk\n> space\" could be even higher than that; due to index rebuilds taking up\n> to 3x normal space (one original index which is only dropped at the\n> end, one sorted tuple store for the rebuild, and one new index).\n\nThat's also the existing language, but I agree it seems a bit overly\nprecise (and in the process probably incorrect). There's a lot of\ncomplexity here: depending on the type change (and USING clause!) and\ntable width it could be even more than 3x. I've reworded to try to\ncapture what's really going on here.\n\nWhy \"could\" instead of \"will\"? 
All table rewrites will always require\na extra disk space, right?\n\n> > - Similarly, when attaching a new partition it may be scanned to verify that\n> > - existing rows meet the partition constraint.\n> > + Attaching a new partition requires scanning the table to verify that existing\n> > + rows meet the partition constraint.\n>\n> This is also (and better!) documented under section\n> sql-altertable-attach-partition: we will skip full table scan if the\n> table partition's existing constraints already imply the new partition\n> constraints. The previous wording is better in that regard (\"may\n> need\", instead of \"requires\"), though it could be improved by refering\n> to the sql-altertable-attach-partition section.\n\nUpdated, and I added an xref to that section (I think that's the\ncorrect tagging).\n\nThanks,\nJames Coleman", "msg_date": "Fri, 1 Apr 2022 10:10:36 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" }, { "msg_contents": "On Fri, 1 Apr 2022 at 16:10, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 10:58 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Tue, 29 Mar 2022 at 16:20, James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > Over in the \"Document atthasmissing default optimization avoids\n> > > verification table scan\" thread David Johnston (who I've cc'd)\n> > > suggested that my goals might be better implemented with a simple\n> > > restructuring of the Notes section of the ALTER TABLE docs. I think\n> > > this is also along the lines of Tom Lane's suggestion of a \"unified\n> > > discussion\", but I've chosen for now (and simplicity's sake) not to\n> > > break this into an entirely new page. 
If reviewers feel that is\n> > > warranted at this stage, I can do that, but it seems to me that for\n> > > now this improves the structure and sets us up for such a future page\n> > > but falls short of sufficient content to move into its own page.\n> > >\n> > > One question on the changes: the current docs say \"when attaching a\n> > > new partition it may be scanned to verify that existing rows meet the\n> > > partition constraint\". The word \"may\" there seems to suggest there may\n> > > also be occasions where scans are not needed, but no examples of such\n> > > cases are present. I'm not immediately aware of such a case. Are these\n> > > constraints always validated? If not, in which cases can such a scan\n> > > be skipped?\n> > >\n> > > I've also incorporated the slight correction in \"Correct docs re:\n> > > rewriting indexes when table rewrite is skipped\" [2] here, and will\n> > > rebase this patch if that gets committed.\n> >\n> > See comments in that thread.\n>\n> Rebased since that thread has now resulted in a committed patch.\n>\n> > > + Changing the type of an existing column will require the entire table and its\n> > > + indexes to be rewritten. As an exception, if the <literal>USING</literal> clause\n> > > + does not change the column contents and the old type is either binary coercible\n> > > + to the new type or an unconstrained domain over the new type, a table rewrite is\n> > > + not needed.\n> >\n> > This implies \"If the old type is [...] 
an unconstrained domain over\n> > the new type, a table rewrite is not needed.\", which is the wrong way\n> > around.\n> >\n> > I'd go with something along the lines of:\n> >\n> > + Changing the type of an existing column will require the entire table to be\n> > + rewritten, unless the <literal>USING</literal> clause is only a\n> > binary coercible\n> > + cast, or if the new type is an unconstrained\n> > <literal>DOMAIN<literal> over the\n> > + old type.\n>\n> That language is actually unchanged from the existing docs; is there\n> an error in the existing docs you're seeing? I'm actually imagining\n> that it can probably got either way -- from unconstrained domain over\n> new type to new type or from old type to unconstrained domain over old\n> type.\n\nCREATE DOMAIN constrained AS text NOT NULL;\nCREATE DOMAIN unconstrained_on_constrained AS constrained;\n\nCREATE TABLE tst (col unconstrained_on_constrained);\nALTER TABLE tst ALTER COLUMN col TYPE constrained; -- table scan\n\nMoving from an unconstrained domain over a constrained domain means\nthat we still do the table scan. Domain nesting is weird in that way.\n\n> > That would drop the reference to index rebuilding; but that should be\n> > covered in other parts of the docs.\n>\n> Part of the whole point of this restructuring is to make both of those\n> clear; I think we should retain the comments about indexes.\n\nOK; I mentioned it because table rewrite also implies index rewrite;\nassuming this is correctly referenced in other parts of the docs.\n\n> > > + The following alterations of the table require the entire table, and in some\n> > > + cases its indexes as well, to be rewritten.\n> >\n> > It is impossible to rewrite the table without at the same time also\n> > rewriting the indexes; as the location of tuples changes and thus\n> > previously generated indexes will become invalid. 
At the same time;\n> > changes to columns might not require a table rewrite, while still\n> > requiring the indexes to be rewritten. I suggest changing the order of\n> > \"table\" and \"index\", or dropping the clause.\n>\n> Ah, that's a good point. I've rewritten that part.\n>\n> > > + [...] For a large table such a rewrite\n> > > + may take a significant amount of time and will temporarily require as much as\n> > > + double the disk space.\n> >\n> > I'd replace the will with could. Technically, this \"double the disk\n> > space\" could be even higher than that; due to index rebuilds taking up\n> > to 3x normal space (one original index which is only dropped at the\n> > end, one sorted tuple store for the rebuild, and one new index).\n>\n> That's also the existing language, but I agree it seems a bit overly\n> precise (and in the process probably incorrect). There's a lot of\n> complexity here: depending on the type change (and USING clause!) and\n> table width it could be even more than 3x. I've reworded to try to\n> capture what's really going on here.\n>\n> Why \"could\" instead of \"will\"? All table rewrites will always require\n> a extra disk space, right?\n\nTable bloat will be removed, as an equivalent of `VACUUM FULL` or\n`CLUSTER` is run on the database. This can remove up to 99.99...% of\nthe current size of the table; e.g. if there's a few tuples of\nMaxHeapTupleSize at the end of the table (likely in 32-bit mode, or\npre-pg14 systems).\n\n> > > - Similarly, when attaching a new partition it may be scanned to verify that\n> > > - existing rows meet the partition constraint.\n> > > + Attaching a new partition requires scanning the table to verify that existing\n> > > + rows meet the partition constraint.\n> >\n> > This is also (and better!) documented under section\n> > sql-altertable-attach-partition: we will skip full table scan if the\n> > table partition's existing constraints already imply the new partition\n> > constraints. 
The previous wording is better in that regard (\"may\n> > need\", instead of \"requires\"), though it could be improved by refering\n> > to the sql-altertable-attach-partition section.\n>\n> Updated, and I added an xref to that section (I think that's the\n> correct tagging).\n\n> + The following alterations of the table require the entire table to be rewritten\n> + and its indexes to be rebuilt.\n\n> + The following alterations of the table require that it be scanned in its entirety\n> + to ensure that no existing values are contrary to the new constraints placed on\n> + the table.\n\nCould you maybe reword these sentences to replace \"alterations of\"\nwith \"changes to\"? Although fittingly used in the context of 'ALTER\nTABLE\", I'm not a fan of the phrasing that comes with the use of\n'alterations'.\n\n> + Attaching a new partition may require scanning the table to verify that existing\n> + rows meet the partition constraint.\n\nI think that \"the table\" should be \"some tables\", as default\npartitions of the parent table might need to be checked as well.\n\nThanks!\n\n- Matthias\n\n\n", "msg_date": "Fri, 1 Apr 2022 18:03:00 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" }, { "msg_contents": "I think this patch is missing \"SET [UN]LOGGED\", defaults of identity columns\nand domains, and access method.\n\nAnd tablespace, even though that rewrites the *files*, but not tuples (maybe\nthese docs should say that).\n\n\n", "msg_date": "Tue, 26 Jul 2022 08:08:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned 
with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3604/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:17:12 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Restructure ALTER TABLE notes to clarify table rewrites and\n verification scans" } ]
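One detail reviewed above, that ATTACH PARTITION "may need" a validation scan because the scan is skipped when the table's existing constraints already imply the new partition constraint, can be illustrated with a deliberately simplified sketch. This is hypothetical code, not PostgreSQL's constraint-implication machinery (which handles general CHECK expressions); the toy version below only understands half-open integer ranges on a single partition key.

```python
# Hypothetical sketch of the ATTACH PARTITION decision discussed above: the
# validation scan is skipped when an existing CHECK constraint already
# implies the partition bound. PostgreSQL's real implication test is far
# more general; this toy version only handles integer half-open ranges.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class IntRange:
    lo: int  # inclusive lower bound on the partition key
    hi: int  # exclusive upper bound


def implies(existing: IntRange, bound: IntRange) -> bool:
    """True if every value allowed by `existing` is allowed by `bound`."""
    return existing.lo >= bound.lo and existing.hi <= bound.hi


def scan_needed(existing: Optional[IntRange], bound: IntRange) -> bool:
    """True when attaching must scan the table to validate existing rows."""
    if existing is None:
        return True  # no prior constraint: every row must be checked
    return not implies(existing, bound)


# CHECK (k >= 0 AND k < 100) already implies the bound [0, 1000): skip scan.
print(scan_needed(IntRange(0, 100), IntRange(0, 1000)))
# CHECK (k >= 0 AND k < 2000) allows rows outside [0, 1000): scan required.
print(scan_needed(IntRange(0, 2000), IntRange(0, 1000)))
```

As the review also notes, a default partition of the parent may need checking too, which this single-table sketch does not model.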
[ { "msg_contents": "Hi,\n\nSeparate from the minutia in [1] I'd like to discuss a few questions of more\ngeneral interest. I'll post another question or two later.\n\n\n1) How to react to corrupted statsfiles?\n\nIn HEAD we stop reading stats at the point we detect the stats file to be\ncorrupted. The contents of the currently read stats \"entry\" are zeroed out,\nbut prior entries stay loaded.\n\nThis means that we can get an entirely inconsistent state of the stats, without\nreally knowing:\n\nE.g. if a per-db stats file fails to load halfway through, we'll have already\nloaded the pg_stat_database entry. Thus pg_stat_database.stats_reset will not\nbe reset, but at the same time we'll only have part of the data in\npg_stat_database.\n\n\nThat seems like a mess? IMO it'd make more sense to just throw away all stats\nin that case. Either works for the shmstats patch, it's just a few lines to\nchange.\n\nUnless there's support for throwing away stats, I'll change the shmstats patch\nto match the current behaviour. Currently it resets all \"global\" stats like\nbgwriter, checkpointer etc whenever there's a failure, but keeps already\nloaded database / table / function stats, which doesn't make much sense.\n\n\n2) What do we want to do with stats when starting up in recovery?\n\nToday we throw out stats whenever crash recovery is needed. Which includes\nstarting up a replica DB_SHUTDOWNED_IN_RECOVERY.\n\nThe shared memory stats patch loads the stats in the startup process (whether\nthere's recovery or not). It's obviously no problem to just mirror the current\nbehaviour, we just need to decide what we want?\n\nThe reason we throw stats away before recovery seem to originate in concerns\naround old stats being confused for new stats. Of course, \"now\" that we have\nstreaming replication, throwing away stats *before* recovery, doesn't provide\nany sort of protection against that. 
In fact, currently there's no automatic\nmechanism whatsoever to get rid of stats for dropped objects on a standby.\n\n\nIn the shared memory stats patch, dropped catalog objects cause the stats to\nalso be dropped on the standby. So that whole concern is largely gone.\n\nI'm inclined to, for now, either leave the behaviour exactly as it currently\nis, or to continuing throwing away stats during normal crash recovery, but to\nstop doing so for DB_SHUTDOWNED_IN_RECOVERY.\n\nI think it'd now be feasible to just never throw stats away during crash\nrecovery, by writing out stats during checkpoint/restartpoints, but that's\nclearly work for later. The alternatives in the prior paragraph in contrast is\njust a question of when to call the \"restore\" and when the \"throw away\"\nfunction, a call that has to be made anyway.\n\n\n3) Does anybody care about preserving the mishmash style of function comments\npresent in pgstat? There's at least:\n\n/* ----------\n * pgstat_write_db_statsfile() -\n *\t\tWrite the stat file for a single database.\n *\n *\tIf writing to the permanent file (happens when the collector is\n\n/* ----------\n * pgstat_get_replslot_entry\n *\n * Return the entry of replication slot stats with the given name. Return\n * NULL if not found and the caller didn't request to create it.\n\n/*\n * Lookup the hash table entry for the specified table. If no hash\n * table entry exists, initialize it, if the create parameter is true.\n * Else, return NULL.\n */\n\n/* ----------\n * pgstat_send() -\n *\n *\t\tSend out one statistics message to the collector\n * ----------\n */\n\n/*\n * PostPrepare_PgStat\n *\t\tClean up after successful PREPARE.\n *\n * Note: AtEOXact_PgStat is not called during PREPARE.\n */\n\n\n---- or not. Summary indented with two tabs. Longer comment indented with a\ntab. The function name in the comment or not. Function parens or not. And\nquite a few more differences.\n\nI find these a *pain* to maintain. 
Most function comments have to be touched\nto remove references to the stats collector and invariably such changes end up\nchanging formatting as well. Whenever adding a new function I have an internal\ndebate about which comment style to follow.\n\nI've already spent a considerable amount of time going through the patch to\nreduce incidental \"format\" changes, but there's quite a few more instances\nthat need to be cleaned up.\n\nI'm inclined to just do a pass through the files and normalize the comment\nstyles, in a separate commit. Personally I'd go for no ---, no copy of the\nfunction name, no tabs - but I don't really care as long as it's consistent.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220329072417.er26q5pxc4pbldn7%40alap3.anarazel.de\n\n\n", "msg_date": "Tue, 29 Mar 2022 12:17:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Higher level questions around shared memory stats" }, { "msg_contents": "On Tue, Mar 29, 2022 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> 1) How to react to corrupted statsfiles?\n>\n> IMO it'd make more sense to just throw away all stats\n> in that case.\n\nThat seems reasonable to me. I think there's some downside, in that\nstats are important, and having some of them might be better than\nhaving none of them. On the other hand, stats file corruption should\nbe very rare, and if it's not, we need to fix that. I think what's\nactually most important here is the error reporting. We need to make\nit clear, at least via log messages, that something bad has happened.\nAnd maybe we should have, inside the stats system, something that\nkeeps track of when the stats file was last recreated from scratch\nbecause of a corruption event, separately from when it was last\nintentionally reset.\n\n> 2) What do we want to do with stats when starting up in recovery?\n>\n> Today we throw out stats whenever crash recovery is needed. 
Which includes\n> starting up a replica DB_SHUTDOWNED_IN_RECOVERY.\n>\n> The shared memory stats patch loads the stats in the startup process (whether\n> there's recovery or not). It's obviously no problem to just mirror the current\n> behaviour, we just need to decide what we want?\n>\n> The reason we throw stats away before recovery seem to originate in concerns\n> around old stats being confused for new stats. Of course, \"now\" that we have\n> streaming replication, throwing away stats *before* recovery, doesn't provide\n> any sort of protection against that. In fact, currently there's no automatic\n> mechanism whatsoever to get rid of stats for dropped objects on a standby.\n\nDoes redo update the stats?\n\n> 3) Does anybody care about preserving the mishmash style of function comments\n> present in pgstat? There's at least:\n\nI can't speak for everyone in the universe, but I think it should be\nfine to clean that up.\n\n> I'm inclined to just do a pass through the files and normalize the comment\n> styles, in a separate commit. Personally I'd go for no ---, no copy of the\n> function name, no tabs - but I don't really care as long as it's consistent.\n\n+1 for that style.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 16:24:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 16:24:02 -0400, Robert Haas wrote:\n> On Tue, Mar 29, 2022 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > 1) How to react to corrupted statsfiles?\n> >\n> > IMO it'd make more sense to just throw away all stats\n> > in that case.\n>\n> That seems reasonable to me. I think there's some downside, in that\n> stats are important, and having some of them might be better than\n> having none of them. 
On the other hand, stats file corruption should\n> be very rare, and if it's not, we need to fix that.\n\nI think it's reasonably rare because in cases where there'd be corruption, we'd\ntypically not even have written them out / throw them away explicitly - we\nonly read stats when starting without crash recovery.\n\nSo the \"expected\" case of corruption afaicts solely is an OS crash just after\nthe shutdown checkpoint completed?\n\n\n> I think what's actually most important here is the error reporting. We need\n> to make it clear, at least via log messages, that something bad has\n> happened.\n\nThe message currently (on HEAD, but similarly on the patch) is:\n\t\t\t\tereport(pgStatRunningInCollector ? LOG : WARNING,\n\t\t\t\t\t\t(errmsg(\"corrupted statistics file \\\"%s\\\"\",\n\t\t\t\t\t\t\t\tstatfile)));\n\n\n> And maybe we should have, inside the stats system, something that\n> keeps track of when the stats file was last recreated from scratch because\n> of a corruption event, separately from when it was last intentionally reset.\n\nThat would be easy to add. Don't think we have a good place to show the\ninformation right now - perhaps just new functions not part of any view?\n\nI can think of these different times:\n\n- Last time stats were removed due to starting up in crash recovery\n- Last time stats were created from scratch, because no stats data file was\n  present at startup\n- Last time stats were thrown away due to corruption\n- Last time a subset of stats were reset using one of the pg_reset* functions\n\nMakes sense?\n\n\n> > 2) What do we want to do with stats when starting up in recovery?\n> >\n> > Today we throw out stats whenever crash recovery is needed. Which includes\n> > starting up a replica DB_SHUTDOWNED_IN_RECOVERY.\n> >\n> > The shared memory stats patch loads the stats in the startup process (whether\n> > there's recovery or not). It's obviously no problem to just mirror the current\n> > behaviour, we just need to decide what we want?\n> >\n> > The reason we throw stats away before recovery seems to originate in concerns\n> > around old stats being confused for new stats. Of course, \"now\" that we have\n> > streaming replication, throwing away stats *before* recovery doesn't provide\n> > any sort of protection against that. In fact, currently there's no automatic\n> > mechanism whatsoever to get rid of stats for dropped objects on a standby.\n>\n> Does redo update the stats?\n\nWith \"update\" do you mean generate new stats? In the shared memory stats patch\nit triggers stats to be dropped, on HEAD it just resets all stats at startup.\n\nRedo itself doesn't generate stats, but bgwriter, checkpointer, backends do.\n\n\nThanks,\n\nAndres\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:01:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "On Tue, Mar 29, 2022 at 5:01 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it's reasonably rare because in cases where there'd be corruption, we'd\n> typically not even have written them out / throw them away explicitly - we\n> only read stats when starting without crash recovery.\n>\n> So the \"expected\" case of corruption afaicts solely is an OS crash just after\n> the shutdown checkpoint completed?\n\nCan we prevent that case from occurring, so that there are no expected cases?\n\n> > And maybe we should have, inside the stats system, something that\n> > keeps track of when the stats file was last recreated from scratch because\n> > of a corruption event, separately from when it was last intentionally reset.\n>\n> That would be easy to add. 
Don't think we have a good place to show the\n> information right now - perhaps just new functions not part of any view?\n\nI defer to you on where to put it.\n\n> I can think of these different times:\n>\n> - Last time stats were removed due to starting up in crash recovery\n> - Last time stats were created from scratch, because no stats data file was\n> present at startup\n> - Last time stats were thrown away due to corruption\n> - Last time a subset of stats were reset using one of the pg_reset* functions\n>\n> Makes sense?\n\nYes. Possibly that last could be broken into two: when all stats were\nlast reset, when some stats were last reset.\n\n> > Does redo update the stats?\n> >\n> > With \"update\" do you mean generate new stats? In the shared memory stats patch\n> > it triggers stats to be dropped, on HEAD it just resets all stats at startup.\n> >\n> > Redo itself doesn't generate stats, but bgwriter, checkpointer, backends do.\n\nWell, I guess what I'm trying to figure out is what happens if we run\nin recovery for a long time -- say, a year -- and then get promoted.\nDo we have reasons to expect that the stats will be accurate enough to\nuse at that point, or will they be way off?\n\nI don't have a great understanding of how this all works, but if\nrunning recovery for a long time is going to lead to a situation where\nthe stats progressively diverge from reality, then preserving them\ndoesn't seem as valuable as if they're going to be more or less\naccurate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:42:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "On 29.03.22 23:01, Andres Freund wrote:\n>> I think what's actually most important here is the error reporting. 
We need\n>> to make it clear, at least via log messages, that something bad has\n>> happened.\n> The message currently (on HEAD, but similarly on the patch) is:\n> \t\t\t\tereport(pgStatRunningInCollector ? LOG : WARNING,\n> \t\t\t\t\t\t(errmsg(\"corrupted statistics file \\\"%s\\\"\",\n> \t\t\t\t\t\t\t\tstatfile)));\n\nCorrupted how? How does it know? Is there a checksum, was the file the \nwrong length, what happened? I think this could use more detail.\n\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:44:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 21:44:20 +0200, Peter Eisentraut wrote:\n> On 29.03.22 23:01, Andres Freund wrote:\n> > > I think what's actually most important here is the error reporting. We need\n> > > to make it clear, at least via log messages, that something bad has\n> > > happened.\n> > The message currently (on HEAD, but similarly on the patch) is:\n> > \t\t\t\tereport(pgStatRunningInCollector ? LOG : WARNING,\n> > \t\t\t\t\t\t(errmsg(\"corrupted statistics file \\\"%s\\\"\",\n> > \t\t\t\t\t\t\t\tstatfile)));\n> \n> Corrupted how?\n\nWe can't parse it. Which can mean that it's truncated (we notice this because\nwe expect an 'E' as the last byte), bits flipped in the wrong place (there are\ndifferent bytes indicating different types of stats). Corruption within\nindividual stats isn't detected.\n\nNote that this is very old code / behaviour, not meaningfully affected by the\nshared memory stats patch.\n\n\n> How does it know? Is there a checksum, was the file the wrong length, what\n> happened? I think this could use more detail.\n\nI agree. 
But it's independent of the shared memory stats patch, so I don't\nwant to tie improving it to that already huge patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 13:44:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 14:42:23 -0400, Robert Haas wrote:\n> On Tue, Mar 29, 2022 at 5:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think it's reasonably rare because in cases there'd be corruption, we'd\n> > typically not even have written them out / throw them away explicitly - we\n> > only read stats when starting without crash recovery.\n> >\n> > So the \"expected\" case of corruption afaicts solely is a OS crash just after\n> > the shutdown checkpoint completed?\n> \n> Can we prevent that case from occurring, so that there are no expected cases?\n\nWe likely can, at least for the causes of corruption I know of. We already\nwrite the statsfile into a temporary filename and then rename into place. I\nthink all we'd need to do is to use durable_rename() to make sure it's durable\nonce renamed into place.\n\nIt's really unrelated to the shared memory stats patch though, so I'd prefer\nnot to tie it to that.\n\n\n> > I can think of these different times:\n> >\n> > - Last time stats were removed due to starting up in crash recovery\n> > - Last time stats were created from scratch, because no stats data file was\n> > present at startup\n> > - Last time stats were thrown away due to corruption\n> > - Last time a subset of stats were reset using one of the pg_reset* functions\n> >\n> > Makes sense?\n> \n> Yes. Possibly that last could be broken in to two: when all stats were\n> last reset, when some stats were last reset.\n\nBelieve it or not, we don't currently have a function to reset all stats. 
We\nshould definitely add that though, because the invocation to reset all stats\ngets more ridiculous^Wcomplicated with each release.\n\nI think the minimal invocation currently is something like:\n\n-- reset all stats shared between databases\nSELECT pg_stat_reset_shared('archiver');\nSELECT pg_stat_reset_shared('bgwriter');\nSELECT pg_stat_reset_shared('wal');\nSELECT pg_stat_reset_replication_slot(NULL);\nSELECT pg_stat_reset_slru(NULL);\nSELECT pg_stat_reset_subscription_stats(NULL);\n\n-- connect to each database and reset the stats in that database\nSELECT pg_stat_reset();\n\n\nI've protested against replication slot, slru, subscription stats not being\nresettable via pg_stat_reset_shared(), nobody else seemed to care.\n\n\n> > > Does redo update the stats?\n> >\n> > With \"update\" do you mean generate new stats? In the shared memory stats patch\n> > it triggers stats to be dropped, on HEAD it just resets all stats at startup.\n> >\n> > Redo itself doesn't generate stats, but bgwriter, checkpointer, backends do.\n> \n> Well, I guess what I'm trying to figure out is what happens if we run\n> in recovery for a long time -- say, a year -- and then get promoted.\n> Do we have reasons to expect that the stats will be accurate enough to\n> use at that point, or will they be way off?\n\nWhat do you mean with 'accurate enough'?\n\nWith or without shared memory stats pg_stat_all_tables.{n_mod_since_analyze,\nn_ins_since_vacuum, n_live_tup, n_dead_tup ...} will be zero. The replay\nprocess doesn't update them.\n\nIn contrast to that, things like pg_stat_all_tables.{seq_scan, seq_tup_read,\nidx_tup_fetch, ...} will be accurate, with one exception below.\n\npg_stat_bgwriter, pg_stat_wal, etc. will always be accurate.\n\n\nOn HEAD, there may be a lot of dead stats for databases / tables /\nfunctions that have been dropped since the start of the cluster. They will\neventually get removed, once autovacuum starts running in the respective\ndatabase (i.e. 
pgstat_vacuum_stat() gets run).\n\nThe exception noted above is that because pg_stat_all_tables contents are\nnever removed during recovery, it becomes a lot more plausible for oid\nconflicts to occur. So the stats for two different tables might get added up\naccidentally - but that'll just affect the non-zero columns, of course.\n\n\nWith the shared memory stats patch, stats for dropped objects (i.e. databases,\ntables, ... ) are removed shortly after they have been dropped, so that\nconflict risk doesn't exist anymore.\n\n\nSo I don't think increasing inaccuracy is a reason to throw away stats on\nreplica startup. Particularly because we already don't throw them away when\npromoting the replica, just when having started it last.\n\n\n\n> I don't have a great understanding of how this all works, but if\n> running recovery for a long time is going to lead to a situation where\n> the stats progressively diverge from reality, then preserving them\n> doesn't seem as valuable as if they're going to be more or less\n> accurate.\n\nMinus the oid wraparound risk on HEAD, the only way they increasingly diverge\nis that the '0' in a bunch of pg_stat_all_tables columns might get less and\nless accurate. But that's not the type of divergence you're talking about, I\nthink.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:08:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 12:17:27 -0700, Andres Freund wrote:\n> Separate from the minutia in [1] I'd like to discuss a few questions of more\n> general interest. I'll post another question or two later.\n\n4) What to do with the stats_temp_directory GUC / PG_STAT_TMP_DIR define /\n pg_stats_temp directory?\n\n With shared memory stats patch, the stats system itself doesn't need it\n anymore. 
But pg_stat_statements also uses PG_STAT_TMP_DIR to store\n pgss_query_texts.stat. That file can be fairly hot, so there's benefit in\n having something like stats_temp_directory.\n\n I'm inclined to just leave the guc / define / directory around, with a\n note saying that it's just used by extensions?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 16:35:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 16:35:50 -0700, Andres Freund wrote:\n> On 2022-03-29 12:17:27 -0700, Andres Freund wrote:\n> > Separate from the minutia in [1] I'd like to discuss a few questions of more\n> > general interest. I'll post another question or two later.\n>\n> 4) What to do with the stats_temp_directory GUC / PG_STAT_TMP_DIR define /\n> pg_stats_temp directory?\n>\n> With shared memory stats patch, the stats system itself doesn't need it\n> anymore. But pg_stat_statements also uses PG_STAT_TMP_DIR to store\n> pgss_query_texts.stat. That file can be fairly hot, so there's benefit in\n> having something like stats_temp_directory.\n>\n> I'm inclined to just leave the guc / define / directory around, with a\n> note saying that it's just used by extensions?\n\nI had searched before on codesearch.debian.net whether there are external\nextensions using it, without finding one (just a copy of pgstat.h). Now I\nsearched using https://cs.github.com/ ([1]) and found\n\nhttps://github.com/powa-team/pg_sortstats\nhttps://github.com/uptimejp/sql_firewall\nhttps://github.com/legrandlegrand/pg_stat_sql_plans\nhttps://github.com/ossc-db/pg_store_plans\n\nWhich seems to weigh in favor of at least keeping the directory and\ndefine. They all don't seem to use the guc, but just PG_STAT_TMP_DIR.\n\n\nWe currently have code removing files both in pg_stat and the configured\npg_stats_temp directory (defaulting to pg_stat_tmp). 
All files matching\nglobal.(stat|tmp), db_[0-9]+.(tmp|stat) are removed.\n\nWith the shared memory stats patch there's only a single file, so we don't\nneed that anymore. I guess some extension could rely on files being removed\nsomehow, but it's hard to believe, because it'd conflict with the stats\ncollector's files.\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://cs.github.com/?scopeName=All+repos&scope=&q=PG_STAT_TMP_DIR+NOT+path%3Afilemap.c+NOT+path%3Apgstat.h+NOT+path%3Abasebackup.c+NOT+path%3Apg_stat_statements.c+NOT+path%3Aguc.c\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:09:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "At Wed, 30 Mar 2022 17:09:44 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-03-30 16:35:50 -0700, Andres Freund wrote:\n> > On 2022-03-29 12:17:27 -0700, Andres Freund wrote:\n> > > Separate from the minutia in [1] I'd like to discuss a few questions of more\n> > > general interest. I'll post another question or two later.\n> >\n> > 4) What to do with the stats_temp_directory GUC / PG_STAT_TMP_DIR define /\n> > pg_stats_temp directory?\n> >\n> > With shared memory stats patch, the stats system itself doesn't need it\n> > anymore. But pg_stat_statements also uses PG_STAT_TMP_DIR to store\n> > pgss_query_texts.stat. That file can be fairly hot, so there's benefit in\n> > having something like stats_temp_directory.\n> >\n> > I'm inclined to just leave the guc / define / directory around, with a\n> > note saying that it's just used by extensions?\n> \n> I had searched before on codesearch.debian.net whether there are external\n> extensions using it, without finding one (just a copy of pgstat.h). 
Now I\n> searched using https://cs.github.com/ ([1]) and found\n> \n> https://github.com/powa-team/pg_sortstats\n> https://github.com/uptimejp/sql_firewall\n> https://github.com/legrandlegrand/pg_stat_sql_plans\n> https://github.com/ossc-db/pg_store_plans\n> \n> Which seems to weigh in favor of at least keeping the directory and\n> define. They all don't seem to use the guc, but just PG_STAT_TMP_DIR.\n\nThe variable is not officially exposed to extensions, not even in\nthe core. That is, it is defined (non-static) in guc.c but does not\nappear in header files. I'm not sure why pg_stat_statements decided to\nuse PG_STAT_TMP_DIR instead of trying to use stats_temp_directory\n(also known in the core as pgstat_temp_directory). I guess all\nextensions above are just following pg_stat_statements' practice.\nAt least pg_store_plans does.\n\nAfter moving to shared stats, we might want to expose the GUC variable\nitself. Then hide/remove the macro PG_STAT_TMP_DIR. This breaks the\nextensions but it is better than to keep using PG_STAT_TMP_DIR for\nuncertain reasons. The existence of the macro can be used as the\nmarker of the feature change. This is the chance to break the (I\nthink) bad practice shared among the extensions. At least I am okay\nwith that.\n\n#ifdef PG_STAT_TMP_DIR\n#define PGSP_TEXT_FILE\tPG_STAT_TMP_DIR \"/pgsp_plan_texts.stat\"\n#endif\n\n> We currently have code removing files both in pg_stat and the configured\n> pg_stats_temp directory (defaulting to pg_stat_tmp). All files matching\n> global.(stat|tmp), db_[0-9]+.(tmp|stat) are removed.\n> \n> With the shared memory stats patch there's only a single file, so we don't\n> need that anymore. I guess some extension could rely on files being removed\n> somehow, but it's hard to believe, because it'd conflict with the stats\n> collector's files.\n\nYes, I intentionally avoided using the file names that are removed by\nthe logic in pg_store_plans. 
But it is kind of rare to use such\nnames inadvertently, though.\n\n> Greetings,\n> \n> Andres Freund\n> \n> \n> [1] https://cs.github.com/?scopeName=All+repos&scope=&q=PG_STAT_TMP_DIR+NOT+path%3Afilemap.c+NOT+path%3Apgstat.h+NOT+path%3Abasebackup.c+NOT+path%3Apg_stat_statements.c+NOT+path%3Aguc.c\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:16:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 16:16:31 +0900, Kyotaro Horiguchi wrote:\n> After moving to shared stats, we might want to expose the GUC variable\n> itself. Then hide/remove the macro PG_STAT_TMP_DIR. This breaks the\n> extensions but it is better than keeping using PG_STAT_TMP_DIR for\n> uncertain reasons. The existence of the macro can be used as the\n> marker of the feature change. This is the chance to break the (I\n> think) bad practice shared among the extensions. At least I am okay\n> with that.\n\nI don't really understand why we'd want to do it that way round? Why not just\nleave PG_STAT_TMP_DIR in place, and remove the GUC? Since nothing uses the\nGUC, we're not losing anything, and we'd not break extensions unnecessarily?\n\nObviously there's no strong demand for pg_stat_statements et al to use the\nuser-configurable stats_temp_directory, given they've not done so for years\nwithout complaints? 
The code to support the configurable stats_temp_dir isn't\nhuge, but it's not small either.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 14:04:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "At Thu, 31 Mar 2022 14:04:16 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-03-31 16:16:31 +0900, Kyotaro Horiguchi wrote:\n> > After moving to shared stats, we might want to expose the GUC variable\n> > itself. Then hide/remove the macro PG_STAT_TMP_DIR. This breaks the\n> > extensions but it is better than keeping using PG_STAT_TMP_DIR for\n> > uncertain reasons. The existence of the macro can be used as the\n> > marker of the feature change. This is the chance to break the (I\n> > think) bad practice shared among the extensions. At least I am okay\n> > with that.\n> \n> I don't really understand why we'd want to do it that way round? Why not just\n> leave PG_STAT_TMP_DIR in place, and remove the GUC? Since nothing uses the\n> GUC, we're not losing anything, and we'd not break extensions unnecessarily?\n\nYeah, I'm two-sided here.\n\nI think so and did so in the past versions of this patch. But when\nasked anew, I came to think I might want to keep (and make use more\nof) the configurable aspect of the dbserver's dedicated temporary\ndirectory. 
The code to support the configurable stats_temp_dir isn't\n> huge, but it's not small either.\n\nI even doubt anyone have stats_temp_directory changed in a cluster.\nThus I agree that it is reasonable that we take advantage of the\nchance to remove the feature of little significance.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 01 Apr 2022 11:33:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "On Thu, Mar 31, 2022 at 7:33 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 31 Mar 2022 14:04:16 -0700, Andres Freund <andres@anarazel.de>\n> wrote in\n> > Hi,\n> >\n> > On 2022-03-31 16:16:31 +0900, Kyotaro Horiguchi wrote:\n> > > After moving to shared stats, we might want to expose the GUC variable\n> > > itself. Then hide/remove the macro PG_STAT_TMP_DIR. This breaks the\n> > > extensions but it is better than keeping using PG_STAT_TMP_DIR for\n> > > uncertain reasons. The existence of the macro can be used as the\n> > > marker of the feature change. This is the chance to break the (I\n> > > think) bad practice shared among the extensions. At least I am okay\n> > > with that.\n> >\n> > I don't really understand why we'd want to do it that way round? Why not\n> just\n> > leave PG_STAT_TMP_DIR in place, and remove the GUC? Since nothing uses\n> the\n> > GUC, we're not loosing anything, and we'd not break extensions\n> unnecessarily?\n>\n> Yeah, I'm two-sided here.\n>\n> I think so and did so in the past versions of this patch. But when\n> asked anew, I came to think I might want to keep (and make use more\n> of) the configuarable aspect of the dbserver's dedicated temporary\n> directory. 
The change is reliably detectable on extensions and the\n> requried change is easy.\n>\n> > Obviously there's no strong demand for pg_stat_statements et al to use\n> the\n> > user-configurable stats_temp_directory, given they've not done so for\n> years\n> > without complaints? The code to support the configurable stats_temp_dir\n> isn't\n> > huge, but it's not small either.\n>\n> I even doubt anyone have stats_temp_directory changed in a cluster.\n> Thus I agree that it is reasonable that we take advantage of the\n> chance to remove the feature of little significance.\n>\n>\nDo we really think no one has taken our advice in the documentation and\nmoved their stats_temp_directory to a RAM-based file system?\n\nhttps://www.postgresql.org/docs/current/runtime-config-statistics.html\n\nIt doesn't seem right that extensions are making the decision of where to\nplace their temporary statistics files. If the user has specified a\ndirectory for them the system should be placing them there.\n\nThe question is whether current uses of PG_STAT_TMP_DIR can be made to use\nthe value of the GUC without them knowing or caring about the fact we\nchanged PG_STAT_TMP_DIR from a static define for pg_stat_tmp to whatever\nthe current value of stats_temp_directory is. I take it from the compiler\ndirective of #define that this isn't doable.\n\nGiven all that I'd say we remove stats_temp_directory (noting it as being\nobsolete as only core code was ever given leave to use it and that core\ncode now uses shared memory instead - basically forcing the choice of\nRAM-based file system onto the user).\n\nIf we later want to coerce extensions (and even our own code) to use a\nuser-supplied directory we can then remove the define and give them an API\nto use instead. I suspect such an API would look different than just \"here\nis a directory name\" anyway (e.g., force extensions to use a subdirectory\nunder the base directory for their data). 
We would name the base directory\nGUC something like \"stats_temp_base_directory\" and be happy that\nstats_temp_directory isn't sitting there already giving us the stink eye.\n\nDavid J.\n\nOn Thu, Mar 31, 2022 at 7:33 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Thu, 31 Mar 2022 14:04:16 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-03-31 16:16:31 +0900, Kyotaro Horiguchi wrote:\n> > After moving to shared stats, we might want to expose the GUC variable\n> > itself. Then hide/remove the macro PG_STAT_TMP_DIR.  This breaks the\n> > extensions but it is better than keeping using PG_STAT_TMP_DIR for\n> > uncertain reasons. The existence of the macro can be used as the\n> > marker of the feature change.  This is the chance to break the (I\n> > think) bad practice shared among the extensions.  At least I am okay\n> > with that.\n> \n> I don't really understand why we'd want to do it that way round? Why not just\n> leave PG_STAT_TMP_DIR in place, and remove the GUC? Since nothing uses the\n> GUC, we're not loosing anything, and we'd not break extensions unnecessarily?\n\nYeah, I'm two-sided here.\n\nI think so and did so in the past versions of this patch.  But when\nasked anew, I came to think I might want to keep (and make use more\nof) the configuarable aspect of the dbserver's dedicated temporary\ndirectory.  The change is reliably detectable on extensions and the\nrequried change is easy.\n\n> Obviously there's no strong demand for pg_stat_statements et al to use the\n> user-configurable stats_temp_directory, given they've not done so for years\n> without complaints?  
The code to support the configurable stats_temp_dir isn't\n> huge, but it's not small either.\n\nI even doubt anyone have stats_temp_directory changed in a cluster.\nThus I agree that it is reasonable that we take advantage of the\nchance to remove the feature of little significance.Do we really think no one has taken our advice in the documentation and moved their stats_temp_directory to a RAM-based file system?https://www.postgresql.org/docs/current/runtime-config-statistics.htmlIt doesn't seem right that extensions are making the decision of where to place their temporary statistics files.  If the user has specified a directory for them the system should be placing them there.The question is whether current uses of PG_STAT_TMP_DIR can be made to use the value of the GUC without them knowing or caring about the fact we changed PG_STAT_TMP_DIR from a static define for pg_stat_tmp to whatever the current value of stats_temp_directory is.  I take it from the compiler directive of #define that this isn't doable.Given all that I'd say we remove stats_temp_directory (noting it as being obsolete as only core code was ever given leave to use it and that core code now uses shared memory instead - basically forcing the choice of RAM-based file system onto the user).If we later want to coerce extensions (and even our own code) to use a user-supplied directory we can then remove the define and give them an API to use instead.  I suspect such an API would look different than just \"here is a directory name\" anyway (e.g., force extensions to use a subdirectory under the base directory for their data).  We would name the base directory GUC something like \"stats_temp_base_directory\" and be happy that stats_temp_directory isn't sitting there already giving us the stink eye.David J.", "msg_date": "Thu, 31 Mar 2022 20:06:07 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 20:06:07 -0700, David G. Johnston wrote:\n> Do we really think no one has taken our advice in the documentation and\n> moved their stats_temp_directory to a RAM-based file system?\n\nI'm pretty sure some have, I've seen it in the field in the past.\n\n\n> The question is whether current uses of PG_STAT_TMP_DIR can be made to use\n> the value of the GUC without them knowing or caring about the fact we\n> changed PG_STAT_TMP_DIR from a static define for pg_stat_tmp to whatever\n> the current value of stats_temp_directory is. I take it from the compiler\n> directive of #define that this isn't doable.\n\nCorrect, we can't.\n\n\n> If we later want to coerce extensions (and even our own code) to use a\n> user-supplied directory we can then remove the define and give them an API\n> to use instead.\n\nFWIW, it's not quite there yet (as in, not a goal for 15), but with a bit\nfurther work, a number of such extensions could use the shared memory stats\ninterface to store their additional stats. And they wouldn't have to care\nabout where the files get stored.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 22:12:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "At Thu, 31 Mar 2022 22:12:25 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-03-31 20:06:07 -0700, David G. Johnston wrote:\n> > Do we really think no one has taken our advice in the documentation and\n> > moved their stats_temp_directory to a RAM-based file system?\n> \n> I'm pretty sure some have, I've seen it in the field in the past.\n\nOh. But no problem if no extensions enumerated upthread are not used\nthere. 
If one of them is used, the DBA would have seen some files generated under\npg_stat_tmp.\n\n> > The question is whether current uses of PG_STAT_TMP_DIR can be made to use\n> > the value of the GUC without them knowing or caring about the fact we\n> > changed PG_STAT_TMP_DIR from a static define for pg_stat_tmp to whatever\n> > the current value of stats_temp_directory is. I take it from the compiler\n> > directive of #define that this isn't doable.\n> \n> Correct, we can't.\n> \n> \n> > If we later want to coerce extensions (and even our own code) to use a\n> > user-supplied directory we can then remove the define and give them an API\n> > to use instead.\n> \n> FWIW, it's not quite there yet (as in, not a goal for 15), but with a bit\n> further work, a number of such extensions could use the shared memory stats\n> interface to store their additional stats. And they wouldn't have to care\n> about where the files get stored.\n\npg_stat_statements has moved stored queries from shared memory to file\nas a trade-off between memory efficiency and speed. So fast storage\ncould give a benefit. I'm not sure the files are flushed so frequently,\nthough. And in the first place \n\nBut, as Andres says, currently it *is* stored in the data directory\nand no one seems to be complaining.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 01 Apr 2022 14:59:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 12:17:27 -0700, Andres Freund wrote:\n> Separate from the minutia in [1] I'd like to discuss a few questions of more\n> general interest. I'll post another question or two later.\n\n5) What does track_counts = off mean?\n\nI just was trying to go through the shared memory stats patch to ensure\npgstat_track_counts has the same effect as before. 
It's kinda hard to discern\nwhat exactly it is supposed to be doing because it's quite inconsistently\napplied.\n\n- all \"global\" stats ignore it (archiver, bgwriter, checkpointer, slru wal)\n- subscription, replication slot stats ignore it\n- *some* database level stats pay heed\n - pgstat_report_autovac, pgstat_report_connect don't\n - pgstat_report_recovery_conflict, pgstat_report_deadlock,\n pgstat_report_checksum_failures_in_db do respect it\n- function stats have their own setting\n\nUnless we conclude something else I'll make sure the patch is \"bug\ncompatible\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Apr 2022 16:21:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nAlvaro, added you because you were the original author for a lot of that\ncode. Fujii, you touched it last...\n\n\n6) Should any part of the \"reuse_stats\" logic in table_recheck_autovac() be\nkept?\n\nWith the shared memory stats patch, autovacuum can cheaply access individual\nstats, so the whole scheme for avoiding stats accesses is moot.\n\nSo far the patchset had touched autovacuum.c a bit too lightly, removing the\nautovac_refresh_stats() call and rephrasing a few comments, but not removing\ne.g. the reuse_stats variable / branches in table_recheck_autovac. Which\ndoesn't seem great. 
I've just tried to go through and update the autovacuum.c\ncode and comments in light of the shared memory stats patch..\n\nI don't really see a point in keeping any of it - but I was curious whether\nanybody else does?\n\nI'm still polishing, so I didn't want to send a whole new version with these\nadjustments to the list yet, but you can see the state as of the time of\nsending this email at [1].\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/commit/276c053110cfe71bf134519e8e4ab053e6d2a7f0#diff-3035fb5dace7bcd77f0eeafe32458cd808c5adb83d62ebdf54f0170cf7db93e7\n\n\n", "msg_date": "Sat, 2 Apr 2022 19:45:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "On 2022-Apr-02, Andres Freund wrote:\n\n> 6) Should any part of the \"reuse_stats\" logic in table_recheck_autovac() be\n> kept?\n> \n> With the shared memory stats patch, autovacuum can cheaply access individual\n> stats, so the whole scheme for avoiding stats accesses is moot.\n\nAgreed, I don't think there's need to keep any of that.\n\n> I don't really see a point in keeping any of it - but I was curious whether\n> anybody else does?\n\nI don't either.\n\n> I'm still polishing, so I didn't want to send a whole new version with these\n> adjustments to the list yet, but you can see the state as of the time of\n> sending this email at [1].\n\nI'll have a look, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 3 Apr 2022 13:34:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Higher level questions around shared memory stats" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 14:08:41 -0700, Andres Freund wrote:\n> On 2022-03-30 14:42:23 -0400, Robert Haas wrote:\n> > On Tue, Mar 29, 2022 at 5:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I can think 
of these different times:\n> > >\n> > > - Last time stats were removed due to starting up in crash recovery\n> > > - Last time stats were created from scratch, because no stats data file was\n> > > present at startup\n> > > - Last time stats were thrown away due to corruption\n> > > - Last time a subset of stats were reset using one of the pg_reset* functions\n> > >\n> > > Makes sense?\n> > \n> > Yes. Possibly that last could be broken in to two: when all stats were\n> > last reset, when some stats were last reset.\n> \n> Believe it or not, we don't currently have a function to reset all stats. We\n> should definitely add that though, because the invocation to reset all stats\n> gets more ridiculous^Wcomplicated with each release.\n\nI assume we'd want all of these to be persistent (until the next time stats\nare lost, of course)?\n\nWe currently have the following SQL functions showing reset times:\n pg_stat_get_bgwriter_stat_reset_time\n pg_stat_get_db_stat_reset_time\nand kind of related:\n pg_stat_get_snapshot_timestamp\n\nThere's also a few functions showing the last time something has happened,\nlike pg_stat_get_last_analyze_time().\n\nTrying to fit those patterns we could add functions like:\n\npg_stat_get_stats_last_cleared_recovery_time\npg_stat_get_stats_last_cleared_corrupted_time\npg_stat_get_stats_last_not_present_time\npg_stat_get_stats_last_partially_reset_time\n\nNot great, but I don't immediately have a better idea.\n\nMaybe it'd look better as a view / SRF? pg_stat_stats / pg_stat_get_stats()?\nPotential column names:\n- cleared_recovery_time\n- cleared_corrupted_time\n- not_preset_time\n- partially_reset_time\n\nIt's not a lot of time to code these up. 
However, it's late in the cycle, and\nthey're not suddenly needed due to shared memory stats, so I have a few doubts\nabout adding them now?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 12:34:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Higher level questions around shared memory stats" } ]
[ { "msg_contents": "Hi hackers,\n\nIs there any reason to keep reset_shared() around anymore? It is now just\na wrapper function for CreateSharedMemoryAndSemaphores(), and AFAICT the\ninformation in the comments is already covered by comments in the shared\nmemory code. I think it's arguable that the name of the function makes it\nclear that it might recreate the shared memory, but if that is a concern,\nperhaps we could rename the function to something like\nCreateOrRecreateSharedMemoryAndSemaphores().\n\nI've attached a patch that simply removes this wrapper function. This is\nadmittedly just nitpicking, so I don't intend to carry this patch further\nif anyone is opposed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 29 Mar 2022 15:17:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "remove reset_shared()" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 29, 2022 at 03:17:02PM -0700, Nathan Bossart wrote:\n> Hi hackers,\n>\n> Is there any reason to keep reset_shared() around anymore? It is now just\n> a wrapper function for CreateSharedMemoryAndSemaphores(), and AFAICT the\n> information in the comments is already covered by comments in the shared\n> memory code. I think it's arguable that the name of the function makes it\n> clear that it might recreate the shared memory, but if that is a concern,\n> perhaps we could rename the function to something like\n> CreateOrRecreateSharedMemoryAndSemaphores().\n>\n> I've attached a patch that simply removes this wrapper function. 
This is\n> admittedly just nitpicking, so I don't intend to carry this patch further\n> if anyone is opposed.\n\nI'm +0.5 for it, it doesn't bring much and makes things a bit harder to\nunderstand, as you need to go through an extra function.\n\n\n", "msg_date": "Wed, 30 Mar 2022 09:19:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: remove reset_shared()" }, { "msg_contents": "Hi!\n\nIn general I'm for this patch. Some time ago I was working on a patch\nrelated to shared memory and noticed\nno reason to have reset_shared() function.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 15 Jul 2022 15:40:47 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: remove reset_shared()" }, { "msg_contents": "On Fri, 15 Jul 2022 at 16:41, Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Hi!\n>\n> In general I'm for this patch. Some time ago I was working on a patch\n> related to shared memory and noticed\n> no reason to have reset_shared() function.\n>\n\nHi, hackers!\nI see the proposed patch as uncontroversial and good enough to be\ncommitted. It will make the code a little clearer. 
Personally, I don't like\nleaving functions that are just wrappers for another and called only once.\nBut I think that if there's a question of code readability it's not bad to\nrestore the comments on the purpose of a call that were originally in the\ncode.\n\nPFA v2 of a patch (only the comment removed in v1 is restored in v2).\n\nOverall I'd like to move it to RfC if none have any objections.", "msg_date": "Fri, 15 Jul 2022 16:48:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: remove reset_shared()" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> I see the proposed patch as uncontroversial and good enough to be\n> committed. It will make the code a little clearer. Personally, I don't like\n> leaving functions that are just wrappers for another and called only once.\n\nYeah, I like this for a different reason: just a couple days ago I was\ncomparing the postmaster's startup sequence to that used in standalone\nmode (in postgres.c) and was momentarily confused because one had\nreset_shared() where the other had CreateSharedMemoryAndSemaphores().\n\nLooking in our git history, it seems that reset_shared() used to embed\nslightly more knowledge, but it sure looks pretty pointless now.\n\n> But I think that if there's a question of code readability it's not bad to\n> restore the comments on the purpose of a call that were originally in the\n> code.\n\nActually I think you chose the wrong place to move the comment to.\nIt applies to the initial postmaster start, because what it's pointing\nout is that we'll probably choose the same IPC keys as any previous\nrun did. 
If we felt the need to enforce that during a crash restart,\nwe surely could do so directly.\n\nPushed after fiddling with the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Jul 2022 12:34:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: remove reset_shared()" }, { "msg_contents": "On Sat, Jul 16, 2022 at 12:34:11PM -0400, Tom Lane wrote:\n> Pushed after fiddling with the comments.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 16 Jul 2022 13:30:34 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: remove reset_shared()" } ]
[ { "msg_contents": "Hi,\n\nI represent a small group of developers. We are working on an open-source\nPostgreSQL extension to enable processing of genomic data inside Postgres.\nWe have an extensive knowledge of molecular biology or data science and\nnone of the Postgres internals.\n\nI don’t know if this mailing list is a good place to ask this question, but\nif it’s not, just correct me.\n\n*The problem:*\n\nWe currently have a one-to-many function (an operation that produces\nmultiple rows per one one input row). Now we would like to translate that\nfunctionality to a sensible many-to-many. We need to know how we are\nconstrained by the internals of Postgres itself and what syntax we should\nuse.\n\nAlso, the operation we are implementing requires knowing the full set of\ninputs before it can be computed.\n\n*Current solution:*\n\nThere is ValuePerCall (1/0 returned rows) or Materialize mode (any number\nof returned rows), however the second one does not offer any invocation\ncounter (like ValuePerCall does). Hence to provide any persistence between\nsubcalls we introduced the following syntax:\n\n*SELECT _ FROM table t, my_function(t.a, t.b, t.c, number_of_rows);*\n\nWhere by FROM a, b we mean cartesian product a times b. And my_function for\nfirst (number_of_rows - 1) invocations returns an empty set and the full\nresult set for the last one.\n\nSadly this syntax requires us to enter a number of rows which is not very\nconvenient.\n\nDo you know how to handle this situation correctly? We looked for example\nat the code of tablefunc but the syntax there requires a full SQL query as\nan input, so that wasn’t useful.", "msg_date": "Wed, 30 Mar 2022 14:48:22 +0200", "msg_from": "=?UTF-8?Q?Piotr_Styczy=C5=84ski?= <piotr@styczynski.in>", "msg_from_op": true, "msg_subject": "Returning multiple rows in materialized mode inside the extension" }, { "msg_contents": "On Wed, Mar 30, 2022 at 9:01 AM Piotr Styczyński <piotr@styczynski.in>\nwrote:\n\n> I don’t know if this mailing list is a good place to ask this question,\n> but if it’s not, just correct me.\n>\npgsql-general is probably better\n\n\n> *The problem:*\n>\n> We currently have a one-to-many function (an operation that produces\n> multiple rows per one one input row).\n>\nNow we would like to translate that\n> functionality to a sensible many-to-many.\n>\nThis seems like a big gap.\n\nInput Situation Rows:\n1\n2\n3\nWhat is the expected output\n1 A\n1 B\n1 C\n2 A\n2 B\n2 C\n3 A\n3 B\n3 C\n\nI really don't know how you would change the internals to handle this - I'm\ndoubting it would even be possible. If asked to accomplish this using just\nstandard PostgreSQL I would turn the inputs into an array\n\n{1,2,3}\n\nand pass that array into a set-returning function. 
Now I have:\n\n{1,2,3} A\n{1,2,3} B\n{1,2,3} C\n\nas an output, and I can just unnest the array column to produce the final\nresult.\n\nSomething like (not tested):\n\nSELECT unnest(arr_input.arr), func_call\nFROM\n(SELECT array_agg(inputvals) AS arr FROM tbl) AS arr_input\nLATERAL func_call(arr_input.arr)\n;\n\nDavid J.", "msg_date": "Wed, 30 Mar 2022 09:12:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Returning multiple rows in materialized mode inside the extension" } ]
[ { "msg_contents": "Forking: <20220316151253.GB28503@telsasoft.com>\n\nOn Wed, Mar 16, 2022 at 10:12:54AM -0500, Justin Pryzby wrote:\n> Also, with a partial regression DB, this crashes when writing to stdout.\n> \n> $ src/bin/pg_basebackup/pg_basebackup --wal-method fetch -Ft -D - -h /tmp --no-sync --compress=lz4 |wc -c\n> pg_basebackup: bbstreamer_lz4.c:172: bbstreamer_lz4_compressor_content: Assertion `mystreamer->base.bbs_buffer.maxlen >= out_bound' failed.\n> 24117248\n> \n> #4 0x000055555555e8b4 in bbstreamer_lz4_compressor_content (streamer=0x5555555a5260, member=0x7fffffffc760, \n> data=0x7ffff3068010 \"{ \\\"PostgreSQL-Backup-Manifest-Version\\\": 1,\\n\\\"Files\\\": [\\n{ \\\"Path\\\": \\\"backup_label\\\", \\\"Size\\\": 227, \\\"Last-Modified\\\": \\\"2022-03-16 02:29:11 GMT\\\", \\\"Checksum-Algorithm\\\": \\\"CRC32C\\\", \\\"Checksum\\\": \\\"46f69d99\\\" },\\n{ \\\"Pa\"..., len=401072, context=BBSTREAMER_MEMBER_CONTENTS) at bbstreamer_lz4.c:172\n> mystreamer = 0x5555555a5260\n> next_in = 0x7ffff3068010 \"{ \\\"PostgreSQL-Backup-Manifest-Version\\\": 1,\\n\\\"Files\\\": [\\n{ \\\"Path\\\": \\\"backup_label\\\", \\\"Size\\\": 227, \\\"Last-Modified\\\": \\\"2022-03-16 02:29:11 GMT\\\", \\\"Checksum-Algorithm\\\": \\\"CRC32C\\\", \\\"Checksum\\\": \\\"46f69d99\\\" },\\n{ \\\"Pa\"...\n> ...\n> \n> (gdb) p mystreamer->base.bbs_buffer.maxlen\n> $1 = 524288\n> (gdb) p (int) LZ4F_compressBound(len, &mystreamer->prefs)\n> $4 = 524300\n> \n> This is with: liblz4-1:amd64 1.9.2-2ubuntu0.20.04.1\n\nIt looks like maybe this code was copied from\nbbstreamer_lz4_compressor_finalize() which has an Assert rather than expanding\nthe buffer like in bbstreamer_lz4_compressor_new()\n\ncommit e70c12214b5ba0bc93c083fdb046304a633018ef\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Mon Mar 28 23:24:15 2022 -0500\n\n basebackup: fix crash with lz4 + stdout + manifests\n \n That's just one known way to trigger this issue.\n\ndiff --git 
a/src/bin/pg_basebackup/bbstreamer_lz4.c b/src/bin/pg_basebackup/bbstreamer_lz4.c\nindex 67f841d96a9..8e8352a450c 100644\n--- a/src/bin/pg_basebackup/bbstreamer_lz4.c\n+++ b/src/bin/pg_basebackup/bbstreamer_lz4.c\n@@ -170,7 +170,11 @@ bbstreamer_lz4_compressor_content(bbstreamer *streamer,\n \t * forward the content to next streamer and empty the buffer.\n \t */\n \tout_bound = LZ4F_compressBound(len, &mystreamer->prefs);\n-\tAssert(mystreamer->base.bbs_buffer.maxlen >= out_bound);\n+\n+\t/* Enlarge buffer if it falls short of compression bound. */\n+\tif (mystreamer->base.bbs_buffer.maxlen < out_bound)\n+\t\tenlargeStringInfo(&mystreamer->base.bbs_buffer, out_bound);\n+\n \tif (avail_out < out_bound)\n \t{\n \t\t\tbbstreamer_content(mystreamer->base.bbs_next, member,\n\n\n", "msg_date": "Wed, 30 Mar 2022 09:35:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "basebackup/lz4 crash" }, { "msg_contents": "On Wed, Mar 30, 2022 at 10:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It looks like maybe this code was copied from\n> bbstreamer_lz4_compressor_finalize() which has an Assert rather than expanding\n> the buffer like in bbstreamer_lz4_compressor_new()\n\nHmm, this isn't great. On the server side, we set up a chain of bbsink\nbuffers that can't be resized later. Each bbsink tells the next bbsink\nhow to make its buffer, but the successor bbsink has control of that\nbuffer and resizing it on-the-fly is not allowed. It looks like\nbbstreamer_lz4_compressor_new() is mimicking that logic, but not well.\nIt sets the initial buffer size to\nLZ4F_compressBound(streamer->base.bbs_buffer.maxlen, prefs), but\nstreamer->base.bbs_buffer.maxlen is not any kind of promise from the\ncaller about future chunk sizes. It's just whatever initStringInfo()\nhappens to do. My guess is that Dipesh thought that the buffer\nwouldn't need to be resized because \"we made it big enough already\"\nbut that's not the case. 
The server knows how much data it is going to\nread from disk at a time, but the client has to deal with whatever the\nserver sends.\n\nI think your proposed change is OK, modulo some comments. But I think\nmaybe we ought to delete all the stuff related to compressed_bound\nfrom bbstreamer_lz4_compressor_new() as well, because I don't see that\nthere's any point. And then I think we should also add logic similar\nto what you've added here to bbstreamer_lz4_compressor_finalize(), so\nthat we're not making the assumption that the buffer will get enlarged\nat some earlier point.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 12:57:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: basebackup/lz4 crash" }, { "msg_contents": "Hi,\n\n> I think your proposed change is OK, modulo some comments. But I think\n> maybe we ought to delete all the stuff related to compressed_bound\n> from bbstreamer_lz4_compressor_new() as well, because I don't see that\n> there's any point. And then I think we should also add logic similar\n> to what you've added here to bbstreamer_lz4_compressor_finalize(), so\n> that we're not making the assumption that the buffer will get enlarged\n> at some earlier point.\n>\n> Thoughts?\nI agree that we should remove the compression bound stuff from\nbbstreamer_lz4_compressor_new() and add a fix in\nbbstreamer_lz4_compressor_content() and bbstreamer_lz4_compressor_finalize()\nto enlarge the buffer if it falls short of the compress bound.\n\nPatch attached.\n\nThanks,\nDipesh", "msg_date": "Thu, 31 Mar 2022 12:49:32 +0530", "msg_from": "Dipesh Pandit <dipesh.pandit@gmail.com>", "msg_from_op": false, "msg_subject": "Re: basebackup/lz4 crash" }, { "msg_contents": "On Thu, Mar 31, 2022 at 3:19 AM Dipesh Pandit <dipesh.pandit@gmail.com> wrote:\n> Patch attached.\n\nCommitted. 
Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 10:45:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: basebackup/lz4 crash" } ]
[ { "msg_contents": "Since the previous update 30 additional patches have been committed\nfrom the CF. This leaves us with 120 Needs Review and 20 Ready for\nCommitter. There's only a few days left until the end of the month.\n\n* Add comment about startup process getting a free procState array slot always\n* Consistent use of SSL/TLS in docs\n* Allow COPY \"text\" to output a header and add header matching mode to COPY FROM\n* Enable SSL library detection via PQsslAttribute\n* Fully WAL logged CREATE DATABASE - No Checkpoints\n* Skipping logical replication transactions on subscriber side\n* Add system view pg_ident_file_mappings\n* document the need to analyze partitioned tables\n* Expose get_query_def()\n* JSON path numeric literal syntax\n* use has_privs_for_role for predefined roles\n* Support for MERGE\n* Column filtering in logical replication\n* Corruption during WAL replay\n* Doc patch for retryable xacts\n* pg_statio_all_tables: several rows per table due to invalid TOAST index\n* Parameter for planner estimates of recursive queries\n* logical decoding and replication of sequences\n* Add relation and block-level filtering to pg_waldump\n* pgbench - allow retries on some errors\n* pgcrypto: Remove internal padding implementation\n* Preserving db/ts/relfilenode OIDs across pg_upgrade\n* Add new reloption to views for enabling row level security\n* Fix handling of outer GroupingFunc within subqueries\n* Fix concurrent deadlock scenario with DROP INDEX on partitioned index\n* Fix bogus dependency management for GENERATED expressions\n* Fix firing of RI triggers during cross-partition updates of partitioned tables\n* Add fix to table_to_xmlschema regex when timestamp has time zone\n* ltree_gist indexes broken after pg_upgrade from 12\n* ExecTypeSetColNames is fundamentally broken\n\n\nI'm going to start moving any patches that are Waiting on Author to\nthe next CF if they made any progress recently.\n\nPatches that are Waiting on Author and haven't had 
activity in months\n-- traditionally they were set to Returned with Feedback. It seems the\nfeeling these days is to not lose state on them and just move them to\nthe next CF. I'm not sure that's wise, it ends up just filling up the\nlist with patches nobody's working on.\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:41:26 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Commitfest Update" }, { "msg_contents": "On Wed, Mar 30, 2022 at 2:42 PM Greg Stark <stark@mit.edu> wrote:\n> Patches that are Waiting on Author and haven't had activity in months\n> -- traditionally they were set to Returned with Feedback. It seems the\n> feeling these days is to not lose state on them and just move them to\n> the next CF. I'm not sure that's wise, it ends up just filling up the\n> list with patches nobody's working on.\n\nYes, we should mark those Returned with Feedback or some other status\nthat causes them not to get carried forward. The CF is full of stuff\nthat isn't likely to get committed any time in the foreseeable future,\nand that's really unhelpful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:43:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 30.03.22 20:41, Greg Stark wrote:\n> Patches that are Waiting on Author and haven't had activity in months\n> -- traditionally they were set to Returned with Feedback. It seems the\n> feeling these days is to not lose state on them and just move them to\n> the next CF. I'm not sure that's wise, it ends up just filling up the\n> list with patches nobody's working on.\n\nIf you set them to Returned with Feedback now, they can still be \nreawoken later by setting them to Needs Review and pulling them into the \nthen-next commit fest. 
That preserves all the state and context.\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:38:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Wed, Mar 30, 2022 at 02:41:26PM -0400, Greg Stark wrote:\n>\n> Patches that are Waiting on Author and haven't had activity in months\n> -- traditionally they were set to Returned with Feedback. It seems the\n> feeling these days is to not lose state on them and just move them to\n> the next CF. I'm not sure that's wise, it ends up just filling up the\n> list with patches nobody's working on.\n\n+1 for closing such patches as Returned with Feedback, for the same reasons\nRobert and Peter already stated.\n\nNote that I already closed such CF entries during the last commitfest, so\nhopefully there shouldn't be too much. Last time I used \"both Waiting on\nAuthor and no activity from the author, since at least the 15th of the month\"\nas the threshold to close such patches (although I closed them at the beginning\nof the next month), as it seems to be the usual (and informal) rule.\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:01:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 2022-Mar-31, Julien Rouhaud wrote:\n\n> On Wed, Mar 30, 2022 at 02:41:26PM -0400, Greg Stark wrote:\n> >\n> > Patches that are Waiting on Author and haven't had activity in months\n> > -- traditionally they were set to Returned with Feedback. It seems the\n> > feeling these days is to not lose state on them and just move them to\n> > the next CF. 
I'm not sure that's wise, it ends up just filling up the\n> > list with patches nobody's working on.\n> \n> +1 for closing such patches as Returned with Feedback, for the same reasons\n> Robert and Peter already stated.\n\nFeature request for the commitfest app: any time a patch is closed as\nRwF, send an automated email replying to the last posted version of the\npatch, indicating the state change and the URL of the CF entry. That\nway, if somebody wants to resurrect it later, it's easy to find the CF\nentry that needs to be edited.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n", "msg_date": "Thu, 31 Mar 2022 12:56:20 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Thu, 31 Mar 2022 at 12:56, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Mar-31, Julien Rouhaud wrote:\n>\n> > On Wed, Mar 30, 2022 at 02:41:26PM -0400, Greg Stark wrote:\n> > >\n> > > Patches that are Waiting on Author and haven't had activity in months\n> > > -- traditionally they were set to Returned with Feedback. It seems the\n> > > feeling these days is to not lose state on them and just move them to\n> > > the next CF. I'm not sure that's wise, it ends up just filling up the\n> > > list with patches nobody's working on.\n> >\n> > +1 for closing such patches as Returned with Feedback, for the same reasons\n> > Robert and Peter already stated.\n>\n> Feature request for the commitfest app: any time a patch is closed as\n> RwF, send an automated email replying to the last posted version of the\n> patch, indicating the state change and the URL of the CF entry. 
That\n> way, if somebody wants to resurrect it later, it's easy to find the CF\n> entry that needs to be edited.\n\nCan normal users actually punt their own patch to a next commitfest?\nAll I see for my old patches is the 'change status'-button, and the\navailable options are not obviously linked to 'change registration to\nthis or upcoming CF'.\nUpdating the status of RwF/Rejected patches from previous commitfests\nto Open (to the best of my knowledge) doesn't automatically register\nit to newer CFs, so 'resurrecting' in the CF application can't really\nbe done by normal users.\n\nI know that this has happened earlier; where someone re-opened their\nold RwF-patches in closed commitfests; after which those patches got\nlost in the traffic because they are not open in the current (or\nupcoming) commitfests.\n\n- Matthias\n\n\n", "msg_date": "Thu, 31 Mar 2022 13:09:06 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 2022-Mar-31, Matthias van de Meent wrote:\n\n> I know that this has happened earlier; where someone re-opened their\n> old RwF-patches in closed commitfests; after which those patches got\n> lost in the traffic because they are not open in the current (or\n> upcoming) commitfests.\n\nHmm, it's quite possible that an explicit action \"move to next CF\" is\nrequired. I don't really know what actions are allowed, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 31 Mar 2022 14:53:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "Thu, 31 Mar 2022 
at 15:09, Matthias van de Meent <\nboekewurm+postgres@gmail.com>:\n\n> On Thu, 31 Mar 2022 at 12:56, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2022-Mar-31, Julien Rouhaud wrote:\n> >\n> > > On Wed, Mar 30, 2022 at 02:41:26PM -0400, Greg Stark wrote:\n> > > >\n> > > > Patches that are Waiting on Author and haven't had activity in months\n> > > > -- traditionally they were set to Returned with Feedback. It seems\n> the\n> > > > feeling these days is to not lose state on them and just move them to\n> > > > the next CF. I'm not sure that's wise, it ends up just filling up the\n> > > > list with patches nobody's working on.\n> > >\n> > > +1 for closing such patches as Returned with Feedback, for the same\n> reasons\n> > > Robert and Peter already stated.\n> >\n> > Feature request for the commitfest app: any time a patch is closed as\n> > RwF, send an automated email replying to the last posted version of the\n> > patch, indicating the state change and the URL of the CF entry. That\n> > way, if somebody wants to resurrect it later, it's easy to find the CF\n> > entry that needs to be edited.\n>\n> Can normal users actually punt their own patch to a next commitfest?\n> All I see for my old patches is the 'change status'-button, and the\n> available options are not obviously linked to 'change registration to\n> this or upcoming CF'.\n> Updating the status of RwF/Rejected patches from previous commitfests\n> to Open (to the best of my knowledge) doesn't automatically register\n> it to newer CFs, so 'resurrecting' in the CF application can't really\n> be done by normal users.\n>\n> I know that this has happened earlier; where someone re-opened their\n> old RwF-patches in closed commitfests; after which those patches got\n> lost in the traffic because they are not open in the current (or\n> upcoming) commitfests.\n>\n\nIn my experience, re-applying an updated patch to a new CF is very easy.\nYou can re-attach the existing discussion thread. 
The only information that\ncan be lost is CF-specific fields like reviewer/author which is worth\nre-adding manually.\n\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n", "msg_date": "Thu, 31 Mar 2022 17:23:46 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> In my experience, re-applying an updated patch to a new CF is very easy.\n> You can re-attach the existing discussion thread. The only information that\n> can be lost is CF-specific fields like reviewer/author which is worth\n> re-adding manually.\n\nYeah. In fact, it might be a good idea to intentionally *not* bring\nforward the old reviewers list, as they may have lost interest.\n\nThis reminds me of a point I've been meaning to bring up: it seems to\noften happen that someone adds their name as reviewer, but then loses\ninterest and doesn't do anything more with the patch. 
I think that's\nproblematic because people see that the patch already has a reviewer\nand look for something else to do. Would it be feasible or reasonable\nto drop reviewers if they've not commented in the thread in X amount\nof time?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:11:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This reminds me of a point I've been meaning to bring up: it seems to\n> often happen that someone adds their name as reviewer, but then loses\n> interest and doesn't do anything more with the patch. I think that's\n> problematic because people see that the patch already has a reviewer\n> and look for something else to do. Would it be feasible or reasonable\n> to drop reviewers if they've not commented in the thread in X amount\n> of time?\n\nIn theory, this might cause someone who made a valuable contribution\nto the discussion to not get credited in the commit message. But it\nprobably wouldn't in practice, because I at least always construct the\nlist of reviewers from the thread, not the CF app, since that tends to\nbe wildly inaccurate in both directions. So maybe it's fine? Not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:31:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... Would it be feasible or reasonable\n>> to drop reviewers if they've not commented in the thread in X amount\n>> of time?\n\n> In theory, this might cause someone who made a valuable contribution\n> to the discussion to not get credited in the commit message. 
But it\n> probably wouldn't in practice, because I at least always construct the\n> list of reviewers from the thread, not the CF app, since that tends to\n> be wildly inaccurate in both directions. So maybe it's fine? Not sure.\n\nHmm, I tend to believe what's in the CF app, so maybe I'm dropping the\nball on review credits :-(. But there are various ways we could implement\nthis. One way would be a nagbot that sends private email along the lines\nof \"you haven't commented on patch X in Y months. Please remove your name\nfrom the list of reviewers if you don't intend to review it any more.\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:37:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 3/31/22 07:37, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Mar 31, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> ... Would it be feasible or reasonable\n>>> to drop reviewers if they've not commented in the thread in X amount\n>>> of time?\n> \n>> In theory, this might cause someone who made a valuable contribution\n>> to the discussion to not get credited in the commit message. But it\n>> probably wouldn't in practice, because I at least always construct the\n>> list of reviewers from the thread, not the CF app, since that tends to\n>> be wildly inaccurate in both directions. So maybe it's fine? Not sure.\n> \n> Hmm, I tend to believe what's in the CF app, so maybe I'm dropping the\n> ball on review credits :-(. But there are various ways we could implement\n> this. One way would be a nagbot that sends private email along the lines\n> of \"you haven't commented on patch X in Y months. Please remove your name\n> from the list of reviewers if you don't intend to review it any more.\"\n\nIt seems there wasn't a definitive decision here. Are there any\nobjections to more aggressive pruning of the Reviewers entries? 
So\ncommitters would need to go through the thread for full attribution,\nmoving forward.\n\nIf there are no objections, I'll start doing that during next Friday's\npatch sweep.\n\n--Jacob\n\n\n", "msg_date": "Fri, 8 Jul 2022 14:41:52 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, 8 Jul 2022, 23:41 Jacob Champion, <jchampion@timescale.com> wrote:\n>\n> On 3/31/22 07:37, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Thu, Mar 31, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> ... Would it be feasible or reasonable\n>>>> to drop reviewers if they've not commented in the thread in X amount\n>>>> of time?\n>>\n>>> In theory, this might cause someone who made a valuable contribution\n>>> to the discussion to not get credited in the commit message. But it\n>>> probably wouldn't in practice, because I at least always construct the\n>>> list of reviewers from the thread, not the CF app, since that tends to\n>>> be wildly inaccurate in both directions. So maybe it's fine? Not sure.\n>>\n>> Hmm, I tend to believe what's in the CF app, so maybe I'm dropping the\n>> ball on review credits :-(. But there are various ways we could implement\n>> this. One way would be a nagbot that sends private email along the lines\n>> of \"you haven't commented on patch X in Y months. Please remove your name\n>> from the list of reviewers if you don't intend to review it any more.\"\n>\n> It seems there wasn't a definitive decision here. Are there any\n> objections to more aggressive pruning of the Reviewers entries? 
So\n> committers would need to go through the thread for full attribution,\n> moving forward.\n>\n> If there are no objections, I'll start doing that during next Friday's\n> patch sweep.\n\n\nNo objections, but this adds another item to the implicit commitfest\napp user manual, that to the best of my knowledge seems to be mostly\nimplicit institutional knowledge plus bits of information spread\naround the mailing lists.\n\nDo we have an actual manual or otherwise a (single?) place with\ndocumentation on how certain fields of the CFA should be used or\ninterpreted, so that (new) hackers know what to expect or where to\nlook?\n\nExamples of information about using the CFA that I couldn't find:\n- The Version field may contain a single specific postgresql version\nnumber, 'stable', or nothing. If my patch needs backpatching to all\nbackbranches, which do I select? The oldest supported PG version, or\n'stable'? Related to that: which version is indicated by 'stable'?\n\n- When creating or updating a CF entry, who are responsible for\nfilling in which fields? May the author assign reviewers/committers,\nor should they do so themselves?\n\n- Should the 'git link' be filled with a link to the committed patch\nonce committed, or is it a general purpose link to share a git\nrepository with the proposed changes?\n\n- What should (or shoudn't) Annotations be used for?\n\n- What should I expect of the comment / review system of the CFA?\nShould I use feature over direct on-list mails?\n\nI have checked the wiki page on becoming a developer [0], but that\npage seems woefully outdated with statements like \"Commitfests are\nscheduled to start on the 15th of the month\" which hasn't been true\nsince 2015. The pages on Commitfests [1] and the Developer FAQ [2]\ndon't add much help either on how to use the CommitFest app. 
Even\n(parts of) the checklist for the CFM on the wiki [3] still assumes the\nold CF app that was last used in 2014: \"It's based on the current\nCommitFest app (written by Robert Haas), and will change once the new\nCF App is done.\"\n\nI'm not asking anyone to drop all and get all the features of the CFA\ndocumented, but for my almost 2 years of following the -hackers list I\nfeel like I still haven't gotten a good understanding of the\napplication that is meant to coordinate the shared state in patch\ndevelopment, and I think that's quite a shame.\n\n-Matthias\n\n[0] https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n[1] https://wiki.postgresql.org/wiki/CommitFest\n[2] https://wiki.postgresql.org/wiki/Developer_FAQ\n[3] https://wiki.postgresql.org/wiki/CommitFest_Checklist\n\n\n", "msg_date": "Mon, 11 Jul 2022 15:07:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "> On 11 Jul 2022, at 15:07, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n\n> No objections, but this adds another item to the implicit commitfest\n> app user manual, that to the best of my knowledge seems to be mostly\n> implicit institutional knowledge plus bits of information spread\n> around the mailing lists.\n\nThat's mostly true yes, which means that anything I write below is to be taken\nwith n grains of salt as it's my interpretation of said institutional\nknowledge.\n\n> Do we have an actual manual or otherwise a (single?) place with\n> documentation on how certain fields of the CFA should be used or\n> interpreted, so that (new) hackers know what to expect or where to\n> look?\n\nWe don't AFAIK, but we should. 
Either an actual written manual (which may end\nup in many tldr folders) or inline help within the app (the latter being my\npreference I think).\n\n> Examples of information about using the CFA that I couldn't find:\n> - The Version field may contain a single specific postgresql version\n> number, 'stable', or nothing. If my patch needs backpatching to all\n> backbranches, which do I select? The oldest supported PG version, or\n> 'stable'? Related to that: which version is indicated by 'stable'?\n\nI'll refer to the commitmessage from the CF app repo on this:\n\ncommit a3bac5922db76efd5b6bb331a7141e9ca3209c4a\nAuthor: Magnus Hagander <magnus@hagander.net>\nDate: Wed Feb 6 21:05:06 2019 +0100\n\n Add a field to each patch for target version\n\n This is particularly interesting towards the end of a cycle where it can\n be used to flag patches that are not intended for the current version\n but still needs review.\n\nThe thread on -hackers which concluded on adding the field has a lot more of\nthe reasoning but some quick digging didn't find it.\n\n> - When creating or updating a CF entry, who are responsible for\n> filling in which fields? May the author assign reviewers/committers,\n> or should they do so themselves?\n\nReviewers and committers sign themselves up.\n\n> - Should the 'git link' be filled with a link to the committed patch\n> once committed, or is it a general purpose link to share a git\n> repository with the proposed changes?\n\nThe gitlink field is (was?) primarily meant to hold links to external repos for\nlarge patchsets where providing a repo on top of the patches in the thread is\nvaluable. An example would be Andres et.al's IO work where being able to\nfollow the work as it unfolds in a repo is valuable for reviewers.\n\n> - What should (or shoudn't) Annotations be used for?\n\nAnnotations are used for indicating that certain emails are specifically\nimportant and/or highlight them as taking specific design decisions etc. 
It\ncan be used for anything that is providing value to a new reader of the\nthread really.\n\n> - What should I expect of the comment / review system of the CFA?\n> Should I use feature over direct on-list mails?\n\nI think that's up to personal preference, for reviewers who aren't subscribed\nto -hackers it's clearly useful to attach it to the thread. For anyone already\nsubscribed and used to corresponding on the mailinglist I would think that's\nthe easiest option.\n\n> I'm not asking anyone to drop all and get all the features of the CFA\n> documented, but for my almost 2 years of following the -hackers list I\n> feel like I still haven't gotten a good understanding of the\n> application that is meant to coordinate the shared state in patch\n> development, and I think that's a quite a shame.\n\nThere have been a lot of discussions around how to improve the CF app but they\nhave to a large extent boiled down to ENOTENOUGHTIME. I still have my hopes\nthat we can address these before too long, and adding user documentation is\nclearly an important one.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 11 Jul 2022 20:55:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 08, 2022 at 02:41:52PM -0700, Jacob Champion wrote:\n> On 3/31/22 07:37, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> On Thu, Mar 31, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> ... Would it be feasible or reasonable\n> >>> to drop reviewers if they've not commented in the thread in X amount\n> >>> of time?\n> > \n> >> In theory, this might cause someone who made a valuable contribution\n> >> to the discussion to not get credited in the commit message. 
But it\n> >> probably wouldn't in practice, because I at least always construct the\n> >> list of reviewers from the thread, not the CF app, since that tends to\n> >> be wildly inaccurate in both directions. So maybe it's fine? Not sure.\n> > \n> > Hmm, I tend to believe what's in the CF app, so maybe I'm dropping the\n> > ball on review credits :-(. But there are various ways we could implement\n> > this. One way would be a nagbot that sends private email along the lines\n> > of \"you haven't commented on patch X in Y months. Please remove your name\n> > from the list of reviewers if you don't intend to review it any more.\"\n> \n> It seems there wasn't a definitive decision here. Are there any\n> objections to more aggressive pruning of the Reviewers entries? So\n> committers would need to go through the thread for full attribution,\n> moving forward.\n> \n> If there are no objections, I'll start doing that during next Friday's\n> patch sweep.\n\nI think it's fine to update the cfapp fields to reflect reality...\n\n..but a couple updates that I just saw seem wrong. The reviewers field was\nnullified, even though the patches haven't been updated in a long time.\nThere's nothing new to review. All this has done is lost information that\nsomeone else (me, in this case) went to the bother of adding.\n\nAlso, cfapp has a page for \"patches where you are the author\", but the cfbot\ndoesn't, and I think people probably look at cfbot more than the cfapp itself.\nSo being marked as a reviewer is not very visible even to oneself.\nBut, one of the cfbot patches I sent to Thomas would change that. Each user's\npage would *also* show patches where they're a reviewer (\"Needs review -\nReviewer\"). That maybe provides an incentive to 1) help maintain the patch; or\notherwise 2) remove oneself.\n\nAlso, TBH, this seems to create a lot of busywork. 
I'd prefer to see someone\npick one of the patches that hasn't seen a review in 6 (or 16) months, and send\nout their most critical review and recommend it be closed, or send an updated\npatch with their own fixes as an 0099 patch.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jul 2022 16:57:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/15/22 14:57, Justin Pryzby wrote:\n> On Fri, Jul 08, 2022 at 02:41:52PM -0700, Jacob Champion wrote:\n>\n>> If there are no objections, I'll start doing that during next Friday's\n>> patch sweep.\n> \n> I think it's fine to update the cfapp fields to reflect reality...\n> \n> ..but a couple updates that I just saw seem wrong.\n\nHm, okay. Let me hold off on continuing then; I'm only about 25% in. The\ngeneral rule I was applying was \"if you were marked Reviewer prior to\nJune, and you haven't interacted with the patchset this commitfest, I've\nremoved you.\"\n\n> The reviewers field was\n> nullified, even though the patches haven't been updated in a long time.\n> There's nothing new to review. All this has done is lost information that\n> someone else (me, in this case) went to the bother of adding.\n\nMy understanding from upthread was that we wanted to get out of the\nhabit of using Reviewers as a historical record, and move towards using\nit as a marker of current activity. As Tom said, \"people see that the\npatch already has a reviewer and look for something else to do.\"\n\nI am sorry that I ended up reverting your work, though.\n\n> Also, cfapp has a page for \"patches where you are the author\", but the cfbot\n> doesn't,\n\n(I assume you mean \"reviewer\"?)\n\n> and I think people probably look at cfbot more than the cfapp itself.\n\nI think some people do. 
But the number of dead/non-applicable patches\nthat need manual reminders suggests to me that maybe it's not an\noverwhelming majority of people.\n\n> So being marked as a reviewer is not very visible even to oneself.\n> But, one of the cfbot patches I sent to Thomas would change that. Each user's\n> page would *also* show patches where they're a reviewer (\"Needs review -\n> Reviewer\"). That maybe provides an incentive to 1) help maintain the patch; or\n> otherwise 2) remove oneself.\nI didn't notice cfbot's user pages until this CF, so it wouldn't have\nbeen an effective incentive for me, at least.\n\nAlso, I would like to see us fold cfbot output into the official CF,\nrather than do the opposite.\n\n> Also, TBH, this seems to create a lot of busywork.\n\nWell, yes, but only because it's not automated. I don't think that's a\ngood reason not to do it, but it is a good reason not to make a person\ndo it.\n\n> I'd prefer to see someone\n> pick one of the patches that hasn't seen a review in 6 (or 16) months, and send\n> out their most critical review and recommend it be closed, or send an updated\n> patch with their own fixes as an 0099 patch.\n\nThat would be cool, but who is \"someone\"? There have been many, many\nstatements about the amount of CF cruft that needs to be removed. 
Seems\nlike the CFM is in a decent position to actually do it.\n\n--Jacob\n\n\n", "msg_date": "Fri, 15 Jul 2022 15:17:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 03:17:49PM -0700, Jacob Champion wrote:\n> > Also, cfapp has a page for \"patches where you are the author\", but the cfbot\n> > doesn't,\n> \n> (I assume you mean \"reviewer\"?)\n\nYes\n\n> Also, I would like to see us fold cfbot output into the official CF,\n> rather than do the opposite.\n\nThat's been the plan for years :)\n\n> > I'd prefer to see someone\n> > pick one of the patches that hasn't seen a review in 6 (or 16) months, and send\n> > out their most critical review and recommend it be closed, or send an updated\n> > patch with their own fixes as an 0099 patch.\n> \n> That would be cool, but who is \"someone\"? There have been many, many\n> statements about the amount of CF cruft that needs to be removed. Seems\n> like the CFM is in a decent position to actually do it.\n\nI have hesitated to even try to begin the conversation.\n\nSince a patch author initially creates the CF entry, why shouldn't they also be\nresponsible for moving them to the next cf. This serves to indicate a\ncontinued interest. Ideally they also set back to \"needs review\" after\naddressing feedback, but I imagine many would forget, and this seems like a\nreasonable task for a CFM to do - look at WOA patches that pass tests to see if\nthey're actually WOA.\n\nSimilarly, patches could be summarily set to \"waiting on author\" if they didn't\nrecently apply, compile, and pass tests. 
That's the minimum standard.\nHowever, I think it's better not to do this immediately after the patch stops\napplying/compiling/failing tests, since it's usually easy enough to review it.\n\nIt should be the author's responsibility to handle that; I don't know why the\naccepted process seems to involve sending dozens of emails to say \"needs\nrebase\". You're putting to good use of some cfapp email features I don't\nremember seeing used before; that seems much better. Also, it's possible to\nsubscribe to CF updates (possibly not a well-advertised, well-known or\nwell-used feature). \n\nI didn't know until recently that when a CF entry is closed, that it's possible\n(I think) for the author themselves to reopen it and \"move it to the next CF\".\nI suggest to point this out to people; I suppose I'm not the only one who finds\nit offputting when a patch is closed in batch at the end of the month after\ngetting only insignificant review.\n\nThanks for considering\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jul 2022 18:15:32 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/15/22 16:15, Justin Pryzby wrote:\n> On Fri, Jul 15, 2022 at 03:17:49PM -0700, Jacob Champion wrote:\n>> Also, I would like to see us fold cfbot output into the official CF,\n>> rather than do the opposite.\n> \n> That's been the plan for years :)\n\nIs there something other than lack of round tuits that's blocking\nprogress? I'm happy to donate more time in this area, but I don't know\nif my first patch proposal was helpful (or even on the right list --\npgsql-www, right?).\n\n>>> I'd prefer to see someone\n>>> pick one of the patches that hasn't seen a review in 6 (or 16) months, and send\n>>> out their most critical review and recommend it be closed, or send an updated\n>>> patch with their own fixes as an 0099 patch.\n>>\n>> That would be cool, but who is \"someone\"? 
There have been many, many\n>> statements about the amount of CF cruft that needs to be removed. Seems\n>> like the CFM is in a decent position to actually do it.\n> \n> I have hesitated to even try to begin the conversation.\n> \n> Since a patch author initially creates the CF entry, why shouldn't they also be\n> responsible for moving them to the next cf. This serves to indicate a\n> continued interest. Ideally they also set back to \"needs review\" after\n> addressing feedback, but I imagine many would forget, and this seems like a\n> reasonable task for a CFM to do - look at WOA patches that pass tests to see if\n> they're actually WOA.\n\nI agree in principle -- I think, ideally, WoA patches should be\nprocedurally closed at the end of a commitfest, and carried forward when\nthe author has actually responded. The problems I can imagine resulting\nfrom this are\n\n- Some reviewers mark WoA _immediately_ upon sending an email. I think\nauthors should have a small grace period to respond before having their\npatches automatically \"muted\" by the system, if the review happens right\nat the end of the CF.\n\n- Carrying forward a closed patch is not actually easy. See below.\n\n> Similarly, patches could be summarily set to \"waiting on author\" if they didn't\n> recently apply, compile, and pass tests. That's the minimum standard.\n> However, I think it's better not to do this immediately after the patch stops\n> applying/compiling/failing tests, since it's usually easy enough to review it.\n\nIt's hard to argue with that, but without automation, this is plenty of\nbusy work too.\n\n> It should be the author's responsibility to handle that; I don't know why the\n> accepted process seems to involve sending dozens of emails to say \"needs\n> rebase\". You're putting to good use of some cfapp email features I don't\n> remember seeing used before; that seems much better. 
Also, it's possible to\n> subscribe to CF updates (possibly not a well-advertised, well-known or\n> well-used feature). \n\nI don't think it should be reviewers' responsibility to say \"needs\nrebase\" over and over again, and I also don't think a new author should\nhave to refresh the cfbot every single day to find out whether or not\ntheir patch still applies. These things should be handled by the app.\n\n(Small soapbox, hopefully relevant: I used to be in the camp that making\ncontributors jump through small procedural hoops would somehow weed out\npeople who were making low-effort patches. I've since come around to the\nposition that this just tends to select for people with more free time\nand/or persistence. I don't like the idea of raising the busy-work bar\nfor authors, especially without first fixing the automation problem.)\n\n> I didn't know until recently that when a CF entry is closed, that it's possible\n> (I think) for the author themselves to reopen it and \"move it to the next CF\".\n> I suggest to point this out to people; I suppose I'm not the only one who finds\n> it offputting when a patch is closed in batch at the end of the month after\n> getting only insignificant review.\n\nI think this may have been the goal but I don't think it actually works\nin practice. The app refuses to let you carry a RwF patch to a new CF.\n\n--\n\nThis is important stuff to discuss, for sure, but I also want to revisit\nthe thing I put on pause, which is to clear out old Reviewer entries to\nmake it easier for new reviewers to find work to do. If we're not using\nReviewers as a historical record, is there any reason for me not to keep\nclearing that out?\n\nIt undoes work that you and others have done to make the historical\nrecord more accurate, and I think that's understandably frustrating. 
But\nI thought we were trying to move away from that usage of it altogether.\n\n--Jacob\n\n\n", "msg_date": "Fri, 15 Jul 2022 17:23:48 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 05:23:48PM -0700, Jacob Champion wrote:\n> On 7/15/22 16:15, Justin Pryzby wrote:\n> > On Fri, Jul 15, 2022 at 03:17:49PM -0700, Jacob Champion wrote:\n> >> Also, I would like to see us fold cfbot output into the official CF,\n> >> rather than do the opposite.\n> > \n> > That's been the plan for years :)\n> \n> Is there something other than lack of round tuits that's blocking\n> progress? I'm happy to donate more time in this area, but I don't know\n> if my first patch proposal was helpful (or even on the right list --\n> pgsql-www, right?).\n\ncfbot is Thomas's project, so moving it run on postgres vm was one step, but I\nimagine the \"integration with cfapp\" requires coordination with Magnus.\n\nWhat patch ?\n\n> > Similarly, patches could be summarily set to \"waiting on author\" if they didn't\n> > recently apply, compile, and pass tests. 
That's the minimum standard.\n> > However, I think it's better not to do this immediately after the patch stops\n> > applying/compiling/failing tests, since it's usually easy enough to review it.\n> \n> It's hard to argue with that, but without automation, this is plenty of\n> busy work too.\n\nI don't think that's busywork, since it's understood to require human\njudgement, like 1) to deal with false-positive test failures, and 2) check if\nthere's actually anything left for the author to do; 3) check if it passed\ntests recently; 4) evaluate existing opinions in the thread and make a\njudgement call.\n\n> > I didn't know until recently that when a CF entry is closed, that it's possible\n> > (I think) for the author themselves to reopen it and \"move it to the next CF\".\n> > I suggest to point this out to people; I suppose I'm not the only one who finds\n> > it offputting when a patch is closed in batch at the end of the month after\n> > getting only insignificant review.\n> \n> I think this may have been the goal but I don't think it actually works\n> in practice. The app refuses to let you carry a RwF patch to a new CF.\n\nI was able to do what Peter said. I don't know why the cfapp has that\nrestriction, it seems like an artificial constraint.\n\nhttps://www.postgresql.org/message-id/8498f959-e7a5-b0ec-7761-26984e581a51%40enterprisedb.com\nhttps://commitfest.postgresql.org/32/2888/\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jul 2022 21:13:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 05:23:48PM -0700, Jacob Champion wrote:\n> This is important stuff to discuss, for sure, but I also want to revisit\n> the thing I put on pause, which is to clear out old Reviewer entries to\n> make it easier for new reviewers to find work to do. 
If we're not using\n> Reviewers as a historical record, is there any reason for me not to keep\n> clearing that out?\n\n> It undoes work that you and others have done to make the historical\n> record more accurate, and I think that's understandably frustrating. But\n> I thought we were trying to move away from that usage of it altogether.\n\nI don't agree that I'm using it \"for historical record\". See 3499.\n\nThere's certainly some value in updating the cfapp to be \"more accurate\" for\nsome definition. By chance, I saw the \"activity log\".\nhttps://commitfest.postgresql.org/activity/\n\nHonestly, most of the changes seems to be for the worse (16 patches had the\nreview field nullified). Incomplete list of changes:\n\n3609 - removed Nathan\n3561 - removed Michael\n3046 - removed PeterE\n3396 - removed Tom\n3396 - removed Robert and Bharath\n2710 - removed Julien\n3612 - removed Nathan (added by Greg)\n3295 - removed Andrew\n2573 - removed Daniel\n3623 - removed Hou Zhijie\n3260 - removed Fabien\n3041 - removed Masahiko\n2161 - removed Michael\n\nI'm not suggesting to give the community regulars special treatment, but you\ncould reasonably assume that when they added themselves and then \"didn't remove\nthemself\", it was on purpose and not by omission. I think most of those people\nwould be surprised to learn that they're no longer considered to be reviewing\nthe patch.\n\n> If someone put a lot of review into a patchset a few months ago, they\n> absolutely deserve credit. But if that entry has been sitting with no\n> feedback this month, why is it useful to keep that Reviewer around?\n\nI don't know what to say to this.\n\nWhy do you think it's useful to remove annotations that people added ? 
(And, if\nit were useful, why shouldn't that be implemented in the cfapp, which already\nhas all the needed information.)\n\nCan you give an example of a patch where you sent a significant review, and\nadded yourself as a reviewer, but wouldn't mind if someone summarily removed\nyou, in batch ? It seems impolite to remove someone who is, in fact, a\nreviewer.\n\nThe stated goal was to avoid the scenario that a would-be reviewer decides not\nto review a patch because cfapp already shows someone else as a reviewer. I'm\nsure that could happen, but I doubt it's something that happens frequently..\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jul 2022 21:37:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 05:23:48PM -0700, Jacob Champion wrote:\n> I agree in principle -- I think, ideally, WoA patches should be\n> procedurally closed at the end of a commitfest, and carried forward when\n> the author has actually responded. The problems I can imagine resulting\n> from this are\n> \n> - Some reviewers mark WoA _immediately_ upon sending an email. I think\n> authors should have a small grace period to respond before having their\n> patches automatically \"muted\" by the system, if the review happens right\n> at the end of the CF.\n\nOn this point, I'd like to think that a window of two weeks is a right\nbalance. That's half of the commit fest, so that leaves plenty of\ntime for one to answer. There is always the case where one is on\nvacations for a period longer than that, but it is also possible for\nan author to add a new entry in a future CF for the same patch. 
If I\nrecall correctly, it should be possible to move a patch that has been\nclosed even if the past CF has been marked as been concluded, allowing\none to keep the same patch with its history and annotations.\n--\nMichael", "msg_date": "Sat, 16 Jul 2022 11:59:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 09:37:14PM -0500, Justin Pryzby wrote:\n> I'm not suggesting to give the community regulars special treatment, but you\n> could reasonably assume that when they added themselves and then \"didn't remove\n> themself\", it was on purpose and not by omission. I think most of those people\n> would be surprised to learn that they're no longer considered to be reviewing\n> the patch.\n\nYeah, I happened to look in my commitfest update folder this morning and\nwas surprised to learn that I was no longer reviewing 3612. I spent a good\namount of time getting that patch into a state where I felt it was\nready-for-committer, and I intended to follow up on any further\ndevelopments in the thread. It's not uncommon for me to use the filter\nfunctionality in the app to keep track of patches I'm reviewing.\n\nOf course, there are probably patches where I could be removed from the\nreviewers field. I can try to stay on top of that better. If you think I\nshouldn't be marked as a reviewer and that it's hindering further review\nfor a patch, feel free to message me publicly or privately about it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 17 Jul 2022 08:17:13 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "Maybe we should have two reviewers columns -- one for history-tracking\npurposes (and commit msg credit) and another for current ones.\n\nPersonally, I don't use the CF app when building reviewer lists. 
I scan\nthe threads instead.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 18 Jul 2022 15:05:51 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Mon, Jul 18, 2022 at 03:05:51PM +0200, Alvaro Herrera wrote:\n> Maybe we should have two reviewers columns -- one for history-tracking\n> purposes (and commit msg credit) and another for current ones.\n\nMaybe. Or, the list of reviewers shouldn't be shown prominently in the list of\npatches. But changing that would currently break cfbot's web scraping.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 18 Jul 2022 08:13:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Fri, Jul 15, 2022 at 05:23:48PM -0700, Jacob Champion wrote:\n>> This is important stuff to discuss, for sure, but I also want to revisit\n>> the thing I put on pause, which is to clear out old Reviewer entries to\n>> make it easier for new reviewers to find work to do. If we're not using\n>> Reviewers as a historical record, is there any reason for me not to keep\n>> clearing that out?\n\nOn Fri, Jul 15, 2022 at 09:37:14PM -0500, Justin Pryzby wrote:\n> Why do you think it's useful to remove annotations that people added ? (And, if\n> it were useful, why shouldn't that be implemented in the cfapp, which already\n> has all the needed information.)\n\nOr, to say it differently, since \"reviewers\" are preserved when a patch is\nmoved to the next CF, it comes as a surprise when by some other mechanism or\npolicy the field doesn't stay there. 
(If it's intended to be more like a\nper-CF field, I think its behavior should be changed in the cfapp, to avoid\nmanual effort, and to avoid other people executing it differently.)\n\n> It undoes work that you and others have done to make the historical\n> record more accurate, and I think that's understandably frustrating. But\n> I thought we were trying to move away from that usage of it altogether.\n\nI gather that your goal was to make the \"reviewers\" field more like \"people who\nare reviewing the current version of the patch\", to make it easy to\nfind/isolate patch-versions which need to be reviewed, and hopefully accelarate\nthe process.\n\nBut IMO there's already no trouble finding the list of patches which need to be\nreviewed - it's the long list that say \"Needs Review\" - which is what's\nactually needed; that's not easy to do, which is why it's a long list, and no\namount of updating the annotations will help with that. I doubt many people\nsearch for patches to review by seeking out those which have no reviewer (which\nis not a short list anyway). I think they look for the most interesting\npatches, or the ones that are going to be specifically useful to them.\n\nHere's an idea: send out batch mails to people who are listed as reviewers for\npatches which \"Need Review\". That starts to treat the reviewers field as a\nfunctional thing rather than purely an annotation. Be sure in your message to\nsay \"You are receiving this message because you're listed as a reviewer for a\npatch which -Needs Review-\". I think it's reasonable to get a message like\nthat 1 or 2 times per month (but not per-month-per-patch). Ideally it'd\ninclude the list of patches specific to that reviewer, but I think it'd okay to\nget an un-personalized email reminder 1x/month with a link.\n\nBTW, one bulk update to make is for the few dozen patches that say \"v15\" on\nthem, and (excluding bugfixes) those are nearly all wrong. 
Since the field\nisn't visible in cfbot, it's mostly ignored. The field is useful toward the\nend of a release cycle to indicate patches that aren't intended for\nconsideration for the next release. Ideally, it'd also be used to indicate the\npatches *are* being considered, but it seems like nobody does that and it ends\nup being a surprise which patches are or are not committed (this seems weird\nand easy to avoid but .. ). The patches which say \"v15\" are probably from\npatches submitted during the v15 cycle, and now the version should be removed,\nunless there's a reason to believe the patch is going to target v16 (like if a\ncommitter has assigned themself).\n\nThanks for receiving my criticism well :)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 18 Jul 2022 13:06:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "Justin,\n\n(Consolidating replies here.)\n\nOn 7/15/22 19:13, Justin Pryzby wrote:\n> cfbot is Thomas's project, so moving it run on postgres vm was one step, but I\n> imagine the \"integration with cfapp\" requires coordination with Magnus.\n> \n> What patch ?\n\nhttps://www.postgresql.org/message-id/CAAWbhmg84OsO5VkaSjX4jokHy8mdpWpNKFgZJHHbb4mprXmtiQ%40mail.gmail.com\n\nIt was intended to be a toe in the water -- see if I'm following\nconventions, and if I even have the right list.\n\n>>> Similarly, patches could be summarily set to \"waiting on author\" if they didn't\n>>> recently apply, compile, and pass tests. 
That's the minimum standard.\n>>> However, I think it's better not to do this immediately after the patch stops\n>>> applying/compiling/failing tests, since it's usually easy enough to review it.\n>>\n>> It's hard to argue with that, but without automation, this is plenty of\n>> busy work too.\n> \n> I don't think that's busywork, since it's understood to require human\n> judgement, like 1) to deal with false-positive test failures, and 2) check if\n> there's actually anything left for the author to do; 3) check if it passed\n> tests recently; 4) evaluate existing opinions in the thread and make a\n> judgement call.\n\n[Dev hat; strong opinions ahead.]\n\nI maintain that 1) and 3) are busy work. You should not have to do those\nthings, in the ideal end state.\n\n>> I think this may have been the goal but I don't think it actually works\n>> in practice. The app refuses to let you carry a RwF patch to a new CF.\n> \n> I was able to do what Peter said. I don't know why the cfapp has that\n> restriction, it seems like an artificial constraint.\nThanks, I'll work on a patch.\n\n> On Fri, Jul 15, 2022 at 05:23:48PM -0700, Jacob Champion wrote:\n> I'm not suggesting to give the community regulars special treatment, but you\n> could reasonably assume that when they added themselves and then \"didn't remove\n> themself\", it was on purpose and not by omission. I think most of those people\n> would be surprised to learn that they're no longer considered to be reviewing\n> the patch.\n\nFor some people, I can maybe(?) assume that, but I'm being honest when I\nsay that I don't really know who that's true for. I definitely don't\nthink it's true for everybody. 
And once I start making those decisions\nas a CFM, then it really does boil down to who I know and have\ninteracted with before.\n\n> Can you give an example of a patch where you sent a significant review, and\n> added yourself as a reviewer, but wouldn't mind if someone summarily removed\n> you, in batch ?\n\nLiterally all of them. That's probably the key disconnect here, and why\nI didn't think too hard about doing it. (I promise, I didn't think to\nmyself \"I would really hate it if someone did this to me\", and then go\nahead and do it to twenty-some other people. :D)\n\nI come from OSS communities that discourage cookie-licking, whether\naccidental or on purpose. I don't like marking myself as a Reviewer in\ngeneral (although I have done it, because it seems like the thing to do\nhere?). Simultaneous reviews are never \"wasted work\" and I'd just rather\nnot call dibs on a patch. So I wouldn't have a problem with someone\ncoming along, seeing that I haven't interacted with a patch for a while,\nand removing my name. I trust that committers will give credit if credit\nis due.\n\n> The stated goal was to avoid the scenario that a would-be reviewer decides not\n> to review a patch because cfapp already shows someone else as a reviewer. I'm\n> sure that could happen, but I doubt it's something that happens frequently..\n\nI do that every commitfest. It's one of the first things I look at to\ndetermine what to pay attention to, because I'm trying to find patches\nthat have slipped through the cracks. As Tom pointed out, others do it\ntoo, though I don't know how many or if their motivations match mine.\n\n>> Why do you think it's useful to remove annotations that people added ? 
(And, if\n>> it were useful, why shouldn't that be implemented in the cfapp, which already\n>> has all the needed information.)\n> \n> Or, to say it differently, since \"reviewers\" are preserved when a patch is\n> moved to the next CF, it comes as a surprise when by some other mechanism or\n> policy the field doesn't stay there. (If it's intended to be more like a\n> per-CF field, I think its behavior should be changed in the cfapp, to avoid\n> manual effort, and to avoid other people executing it differently.)\n\nIt was my assumption, based on the upthread discussion, that that was\nthe end goal, and that we just hadn't implemented it yet for lack of time.\n\n>> It undoes work that you and others have done to make the historical\n>> record more accurate, and I think that's understandably frustrating. But\n>> I thought we were trying to move away from that usage of it altogether.\n> \n> I gather that your goal was to make the \"reviewers\" field more like \"people who\n> are reviewing the current version of the patch\", to make it easy to\n> find/isolate patch-versions which need to be reviewed, and hopefully accelarate\n> the process.\n\nYes.\n> But IMO there's already no trouble finding the list of patches which need to be\n> reviewed - it's the long list that say \"Needs Review\" - which is what's\n> actually needed; that's not easy to do, which is why it's a long list, and no\n> amount of updating the annotations will help with that. I doubt many people\n> search for patches to review by seeking out those which have no reviewer (which\n> is not a short list anyway).\n\nI do, because I'm looking to maximize bang for buck. When people have\nasked me as CFM for help finding patches to review, that's one of the\ncriteria I use. Needs Review is just too broad, and frankly seems to be\noverwhelming to new reviewers I've spoken with.\n\nHonestly, what I really want is buckets based on categories of *what you\ncan actually do for the patch*. 
\"We want this but it's been stuck in\nperma-rebase\", \"this is blocked on something you can't possibly\ninfluence as a reviewer\", \"this needs review from someone who\nunderstands every internal system\", \"this has been stale for months and\nis on the chopping block\". These are all things that seem to pop up\nsemi-regularly.\n\n> Here's an idea: send out batch mails to people who are listed as reviewers for\n> patches which \"Need Review\". That starts to treat the reviewers field as a\n> functional thing rather than purely an annotation. Be sure in your message to\n> say \"You are receiving this message because you're listed as a reviewer for a\n> patch which -Needs Review-\". I think it's reasonable to get a message like\n> that 1 or 2 times per month (but not per-month-per-patch). Ideally it'd\n> include the list of patches specific to that reviewer, but I think it'd okay to\n> get an un-personalized email reminder 1x/month with a link.\n\nI don't think that addresses the goal, or at least not the goal that I\nwas pursuing when I was pruning this field. Reminding existing reviewers\ndoesn't help organize the incoming funnel of new reviewers. And while it\nmight remind the regular contributors to remove their names if they've\ngone stale, it won't ease the CFM busywork for people who have\ncompletely gone silent.\n\n> BTW, one bulk update to make is for the few dozen patches that say \"v15\" on\n> them, and (excluding bugfixes) those are nearly all wrong. Since the field\n> isn't visible in cfbot, it's mostly ignored. The field is useful toward the\n> end of a release cycle to indicate patches that aren't intended for\n> consideration for the next release. Ideally, it'd also be used to indicate the\n> patches *are* being considered, but it seems like nobody does that and it ends\n> up being a surprise which patches are or are not committed (this seems weird\n> and easy to avoid but .. ). 
The patches which say \"v15\" are probably from\n> patches submitted during the v15 cycle, and now the version should be removed,\n> unless there's a reason to believe the patch is going to target v16 (like if a\n> committer has assigned themself).\n\nMy recent track record makes me reluctant to do any more bulk updates if\nI don't really understand how a field is being used. That whole thing\nsounds like something we should address with a better workflow.\n\n> Thanks for receiving my criticism well :)\n\nAnd thank you for speaking up so quickly! It's a lot easier to undo\npartial damage :D (Speaking of which: where is that CF update stream you\nmentioned?)\n\n--Jacob\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:00:01 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/18/22 06:13, Justin Pryzby wrote:\n> On Mon, Jul 18, 2022 at 03:05:51PM +0200, Alvaro Herrera wrote:\n>> Maybe we should have two reviewers columns -- one for history-tracking\n>> purposes (and commit msg credit) and another for current ones.\n> \n> Maybe. Or, the list of reviewers shouldn't be shown prominently in the list of\n> patches. But changing that would currently break cfbot's web scraping.\n\nI think separating use cases of \"what you can currently do for this\npatch\" and \"what others have historically done for this patch\" is\nimportant. 
Whether that's best done with more columns or with some other\nworkflow, I'm not sure.\n\nIt seems like being able to mark items on a personal level, in a way\nthat doesn't interfere with recordkeeping being done centrally, could\nhelp as well.\n\n--Jacob\n\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:02:24 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/15/22 19:59, Michael Paquier wrote:\n> On this point, I'd like to think that a window of two weeks is a right\n> balance. That's half of the commit fest, so that leaves plenty of\n> time for one to answer. There is always the case where one is on\n> vacations for a period longer than that, but it is also possible for\n> an author to add a new entry in a future CF for the same patch.\n\nThat seems reasonable. My suggestion was going to be more aggressive, at\nfive days, but really anywhere in that range seems good.\n\n--Jacob\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:04:04 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/17/22 08:17, Nathan Bossart wrote:\n> On Fri, Jul 15, 2022 at 09:37:14PM -0500, Justin Pryzby wrote:\n>> I'm not suggesting to give the community regulars special treatment, but you\n>> could reasonably assume that when they added themselves and then \"didn't remove\n>> themself\", it was on purpose and not by omission. I think most of those people\n>> would be surprised to learn that they're no longer considered to be reviewing\n>> the patch.\n> \n> Yeah, I happened to look in my commitfest update folder this morning and\n> was surprised to learn that I was no longer reviewing 3612. I spent a good\n> amount of time getting that patch into a state where I felt it was\n> ready-for-committer, and I intended to follow up on any further\n> developments in the thread. 
It's not uncommon for me to use the filter\n> functionality in the app to keep track of patches I'm reviewing.\n\nI'm sorry again for interrupting that flow. Thank you for speaking up\nand establishing the use case.\n\n> Of course, there are probably patches where I could be removed from the\n> reviewers field. I can try to stay on top of that better. If you think I\n> shouldn't be marked as a reviewer and that it's hindering further review\n> for a patch, feel free to message me publicly or privately about it.\n\nSure. I don't plan on removing anyone else from a Reviewer list this\ncommitfest, but if I do come across a reason I'll make sure to ask first. :)\n\n--Jacob\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:06:34 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Mon, Jul 18, 2022 at 12:06:34PM -0700, Jacob Champion wrote:\n> On 7/17/22 08:17, Nathan Bossart wrote:\n>> Yeah, I happened to look in my commitfest update folder this morning and\n>> was surprised to learn that I was no longer reviewing 3612. I spent a good\n>> amount of time getting that patch into a state where I felt it was\n>> ready-for-committer, and I intended to follow up on any further\n>> developments in the thread. It's not uncommon for me to use the filter\n>> functionality in the app to keep track of patches I'm reviewing.\n> \n> I'm sorry again for interrupting that flow. Thank you for speaking up\n> and establishing the use case.\n\nNo worries. Thanks for managing this commitfest!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:59:09 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On Mon, Jul 18, 2022 at 12:00:01PM -0700, Jacob Champion wrote:\n> And thank you for speaking up so quickly! 
It's a lot easier to undo\n> partial damage :D (Speaking of which: where is that CF update stream you\n> mentioned?)\n\nhttps://commitfest.postgresql.org/activity/\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 18 Jul 2022 17:32:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" }, { "msg_contents": "On 7/18/22 15:32, Justin Pryzby wrote:\n> On Mon, Jul 18, 2022 at 12:00:01PM -0700, Jacob Champion wrote:\n>> And thank you for speaking up so quickly! It's a lot easier to undo\n>> partial damage :D (Speaking of which: where is that CF update stream you\n>> mentioned?)\n> \n> https://commitfest.postgresql.org/activity/\n\nThank you. At this point, I think I've repaired all the entries.\n\n--Jacob\n\n\n\n", "msg_date": "Mon, 18 Jul 2022 15:53:25 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Update" } ]
[ { "msg_contents": "Hi hackers,\n\nMinor oversight with commit 0da92dc\n<https://github.com/postgres/postgres/commit/0da92dc530c9251735fc70b20cd004d9630a1266>.\nRelationIdGetRelation can return NULL, so it is necessary to check the\nreturn value.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 31 Mar 2022 10:06:04 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Avoid dereference null relation pointer\n (src/backend/replication/logical/reorderbuffer.c)" } ]
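The report above is that RelationIdGetRelation() can return NULL and the caller must test the return before dereferencing it. A minimal, self-contained sketch of that defensive pattern follows; the stub lookup function, the struct layout, the OIDs, and the error message are illustrative assumptions for this example, not the actual reorderbuffer.c code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a relcache entry; the real Relation is far richer. */
typedef struct Relation
{
    unsigned int relid;
} Relation;

/* Hypothetical stand-in for RelationIdGetRelation(): like the real
 * function, it may return NULL when the relation cannot be opened. */
static Relation *open_relation(unsigned int relid)
{
    if (relid == 0)             /* pretend OID 0 is never found */
        return NULL;

    Relation *rel = malloc(sizeof(Relation));
    rel->relid = relid;
    return rel;
}

/* The pattern the patch argues for: test the pointer before using it,
 * and report a failure instead of crashing on a NULL dereference. */
int describe_relation(unsigned int relid)
{
    Relation *rel = open_relation(relid);

    if (rel == NULL)
    {
        fprintf(stderr, "could not open relation with OID %u\n", relid);
        return -1;              /* caller sees an error, not a SIGSEGV */
    }

    printf("relation OID %u\n", rel->relid);
    free(rel);
    return 0;
}
```

In the backend itself the failure branch would raise an error through the server's error-reporting machinery rather than return a code; the point here is only that the pointer is checked before use.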
[ { "msg_contents": "Hi,\n\nLatest snapshot tarball fails to build on SLES 12.5, which uses GCC\n4.8-8. Build log is attached.\n\nPlease let me know if you want me to provide more info.\n\nThanks!\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL guy\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Thu, 31 Mar 2022 15:23:22 +0100", "msg_from": "Devrim Gündüz <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "head fails to build on SLES 12" }, { "msg_contents": "Devrim Gündüz <devrim@gunduz.org> writes:\n> Latest snapshot tarball fails to build on SLES 12.5, which uses GCC\n> 4.8-8. Build log is attached.\n\nHmm, what version of libzstd is present?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:26:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12" }, { "msg_contents": "Hi,\n\nOn Thu, 2022-03-31 at 10:26 -0400, Tom Lane wrote:\n> Devrim Gündüz <devrim@gunduz.org> writes:\n> > Latest snapshot tarball fails to build on SLES 12.5, which uses GCC\n> > 4.8-8. Build log is attached.\n> \n> Hmm, what version of libzstd is present?\n\n1.3.3\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Thu, 31 Mar 2022 15:38:39 +0100", "msg_from": "Devrim Gündüz <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "Re: head fails to build on SLES 12" }, { "msg_contents": "On Thu, Mar 31, 2022 at 03:38:39PM +0100, Devrim Gündüz wrote:\n> On Thu, 2022-03-31 at 10:26 -0400, Tom Lane wrote:\n> > Devrim Gündüz <devrim@gunduz.org> writes:\n> > > Latest snapshot tarball fails to build on SLES 12.5, which uses GCC\n> > > 4.8-8. 
Build log is attached.\n> > \n> > Hmm, what version of libzstd is present?\n> \n> 1.3.3\n\nThat's due to commit e9537321a74a2b062c8f7a452314b4570913f780.\n\nPossible responses look like:\n - Use 0 which also means \"default\" (need to verify that works across versions);\n - Or #ifndef ZSTD_CLEVEL_DEFAULT #define ZSTD_CLEVEL_DEFAULT 3;\n - Add a test for a minimum zstd version v1.3.7. This may be a good idea for\n v15 in any case, since we're using a few different APIs (at least\n ZSTD_compress and ZSTD_compressStream2 and execve(zstd)).\n\nI dug up this history:\n\ncommit b2632bcf6cf7b9b96e0ac99beea079df4d1eaec5\nMerge: 170f948e 869e2718\nAuthor: Yann Collet <Cyan4973@users.noreply.github.com>\nDate: Tue Jun 12 12:09:01 2018 -0700\n\n Merge pull request #1174 from duc0/document_default_level\n \n Expose ZSTD_CLEVEL_DEFAULT and update documentation\n\ncommit e34c000e44444b9f8bd62e5af0a355ee186eb21f\nAuthor: Duc Ngo <duc@fb.com>\nDate: Fri Jun 8 11:29:51 2018 -0700\n\n Expose ZSTD_CLEVEL_DEFAULT and update documentation\n\ncommit 6d4fef36de21908e333b2a1fde8ded0a7f086ae1\nAuthor: Yann Collet <cyan@fb.com>\nDate: Wed May 17 18:36:15 2017 -0700\n\n Added ZSTD_compress_generic()\n\n Used in fileio.c (zstd cli).\n Need to set macro ZSTD_NEWAPI to trigger it.\n\ncommit 236d94fa9a4ff8723922971274a119c6084d5dbc\nAuthor: Yann Collet <yann.collet.73@gmail.com>\nDate: Wed May 18 12:06:33 2016 +0200\n\n reverted default compression level to 1\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:37:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Possible responses look like:\n> - Use 0 which also means \"default\" (need to verify that works across versions);\n> - Or #ifndef ZSTD_CLEVEL_DEFAULT #define ZSTD_CLEVEL_DEFAULT 3;\n> - Add a test for a minimum zstd version v1.3.7. 
This may be a good idea for\n> v15 in any case, since we're using a few different APIs (at least\n> ZSTD_compress and ZSTD_compressStream2 and execve(zstd)).\n\nIn view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\nthat some clarity about the minimum supported version of zstd\nseems essential. I don't want to be dealing with threading bugs\nin ancient zstd versions. However, why do you suggest 1.3.7 in\nparticular?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:44:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "On Thu, Mar 31, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Possible responses look like:\n> > - Use 0 which also means \"default\" (need to verify that works across versions);\n> > - Or #ifndef ZSTD_CLEVEL_DEFAULT #define ZSTD_CLEVEL_DEFAULT 3;\n> > - Add a test for a minimum zstd version v1.3.7. This may be a good idea for\n> > v15 in any case, since we're using a few different APIs (at least\n> > ZSTD_compress and ZSTD_compressStream2 and execve(zstd)).\n>\n> In view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\n> that some clarity about the minimum supported version of zstd\n> seems essential. I don't want to be dealing with threading bugs\n> in ancient zstd versions. However, why do you suggest 1.3.7 in\n> particular?\n\nOne thing to note is that apparently threading wasn't enabled in the\ndefault build before 1.5.0, which was released in May 2021, but it did\nexist as an option in the code for some period of time prior to that.\nI don't know how long exactly. 
I don't want to jump to the conclusion\nthat other people's old versions are full of bugs, but if that should\nhappen to be true here, there's some chance that PostgreSQL users\nwon't be exposed to them just because threading wasn't enabled by\ndefault until quite recently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 12:26:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "On Thu, Mar 31, 2022 at 12:26:46PM -0400, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > Possible responses look like:\n> > > - Use 0 which also means \"default\" (need to verify that works across versions);\n> > > - Or #ifndef ZSTD_CLEVEL_DEFAULT #define ZSTD_CLEVEL_DEFAULT 3;\n> > > - Add a test for a minimum zstd version v1.3.7. This may be a good idea for\n> > > v15 in any case, since we're using a few different APIs (at least\n> > > ZSTD_compress and ZSTD_compressStream2 and execve(zstd)).\n> >\n> > In view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\n> > that some clarity about the minimum supported version of zstd\n> > seems essential. I don't want to be dealing with threading bugs\n> > in ancient zstd versions. However, why do you suggest 1.3.7 in\n> > particular?\n> \n> One thing to note is that apparently threading wasn't enabled in the\n> default build before 1.5.0, which was released in May 2021, but it did\n> exist as an option in the code for some period of time prior to that.\n> I don't know how long exactly. 
I don't want to jump to the conclusion\n> that other people's old versions are full of bugs, but if that should\n> happen to be true here, there's some chance that PostgreSQL users\n> won't be exposed to them just because threading wasn't enabled by\n> default until quite recently.\n\nRight. Importantly, it's a run-time failure condition if threading wasn't\nenabled at compile time. Postgres should still compile --with-zstd even if it\nwasn't, and pg_basebackup should work, except if workers is specified (or maybe\nif workers>0, but it's possible that allowing workers=0 wasn't true before some\nversion). I'll write more later.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:34:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\n>> that some clarity about the minimum supported version of zstd\n>> seems essential. I don't want to be dealing with threading bugs\n>> in ancient zstd versions. However, why do you suggest 1.3.7 in\n>> particular?\n\n> One thing to note is that apparently threading wasn't enabled in the\n> default build before 1.5.0, which was released in May 2021, but it did\n> exist as an option in the code for some period of time prior to that.\n> I don't know how long exactly. I don't want to jump to the conclusion\n> that other people's old versions are full of bugs, but if that should\n> happen to be true here, there's some chance that PostgreSQL users\n> won't be exposed to them just because threading wasn't enabled by\n> default until quite recently.\n\nHm. 
After rereading 51c0d186d I see that we're not asking for\nparallel compression unless the user tells us to, so I guess\nour fallback answer for any complaints in that area can be\n\"if it hurts, don't do it\". Still, I like the idea of having\na well-defined minimum zstd version that we consider supported.\nThe evident fact that their APIs are still changing (or at\nleast have done so within the memory of LTS platforms) makes\nthat fairly pressing. Question is what to set the minimum to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 12:37:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "On Thu, Mar 31, 2022 at 12:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm. After rereading 51c0d186d I see that we're not asking for\n> parallel compression unless the user tells us to, so I guess\n> our fallback answer for any complaints in that area can be\n> \"if it hurts, don't do it\".\n\nRight. We can also tell people that if they are running buggy versions\nof libzstd or liblz4 or libz, they should upgrade to non-buggy\nversions. Our ability to paper over bugs in compression libraries is\ngoing to be extremely limited.\n\n> Still, I like the idea of having\n> a well-defined minimum zstd version that we consider supported.\n> The evident fact that their APIs are still changing (or at\n> least have done so within the memory of LTS platforms) makes\n> that fairly pressing. Question is what to set the minimum to.\n\nI think we should aim, if we can, to be compatible with libzstd\nversions that are still being shipped with still-supported releases of\nmainstream Linux distributions. 
If that turns out to be too hard, we\ncan be less ambitious.\n\nOn the particular question of ZSTD_CLEVEL_DEFAULT, it does not seem\nlikely that the library would have only recently exposed a symbol that\nis required for correct use of the API, so I bet there's a relatively\nsimple way to avoid needing that altogether (perhaps by writing \"0\"\ninstead).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 12:55:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "\nOn 3/31/22 10:23, Devrim Gündüz wrote:\n> Hi,\n>\n> Latest snapshot tarball fails to build on SLES 12.5, which uses GCC\n> 4.8-8. Build log is attached.\n>\n> Please let me know if you want me to provide more info.\n>\n\n\nAFAICT we don't have any buildfarm animals currently reporting on SLES\n12 (and the one we did have was on s390x, if that matters).\n\nSo if this is something that needs support we should address that. After\nall, that's exactly what the buildfarm was created for.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:58:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12" }, { "msg_contents": "On 03/31/2022 2:58 pm, Andrew Dunstan wrote:\n> On 3/31/22 10:23, Devrim Gündüz wrote:\n>> Hi,\n>> \n>> Latest snapshot tarball fails to build on SLES 12.5, which uses GCC\n>> 4.8-8. Build log is attached.\n>> \n>> Please let me know if you want me to provide more info.\n>> \n> \n> \n> AFAICT we don't have any buildfarm animals currently reporting on SLES\n> 12 (and the one we did have was on s390x, if that matters).\n> \n> So if this is something that needs support we should address that. 
\n> After\n> all, that's exactly what the buildfarm was created for.\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nI can spin up a VM on SLES 12 assuming someone can point me to the right \nplace to get\nan ISO, etc, and I don't need a payfor license.\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:03:50 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12" }, { "msg_contents": "On Thu, Mar 31, 2022 at 11:44:40AM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Possible responses look like:\n> > - Use 0 which also means \"default\" (need to verify that works across versions);\n> > - Or #ifndef ZSTD_CLEVEL_DEFAULT #define ZSTD_CLEVEL_DEFAULT 3;\n> > - Add a test for a minimum zstd version v1.3.7. This may be a good idea for\n> > v15 in any case, since we're using a few different APIs (at least\n> > ZSTD_compress and ZSTD_compressStream2 and execve(zstd)).\n> \n> In view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\n> that some clarity about the minimum supported version of zstd\n> seems essential. I don't want to be dealing with threading bugs\n> in ancient zstd versions. 
However, why do you suggest 1.3.7 in\n> particular?\n\nThat's where I found that ZSTD_CLEVEL_DEFAULT was added, in their git.\n\nI've just installed a .deb for 1.3.8, and discovered that the APIs used by\nbasebackup were considered experimental/nonpublic/static-lib-only until 1.4.0\nhttps://github.com/facebook/zstd/releases/tag/v1.4.0\nZSTD_CCtx_setParameter ZSTD_c_compressionLevel ZSTD_c_nbWorkers ZSTD_CCtx_reset ZSTD_reset_session_only ZSTD_compressStream2 ZSTD_e_continue ZSTD_e_end\n\nFYI: it looks like parallel, thread workers were also a nonpublic,\n\"experimental\" API until v1.3.7. Actually, ZSTD_p_nbThreads was renamed to\nZSTD_p_nbWorkers and then (in 1.3.8) renamed to ZSTD_c_nbWorkers.\n\nVersions less than 1.3.8 would have required compile-time conditionals for both\nZSTD_CLEVEL_DEFAULT and ZSTD_p_nbThreads/ZSTD_p_nbWorkers/ZSTD_c_nbWorkers (but\nthat is moot).\n\nNegative compression levels were added in 1.3.4 (but I think the range of their\nlevels was originally -1..-7 and now expanded). And long-distance matching\nadded in 1.3.2.\n\ncirrus has installed:\nlinux (debian bullseye) 1.4.8\nmacos has 1.5.0\nfreebsd has 1.5.0\n\ndebian buster (oldstable) has v1.3.8, which is too old.\nubuntu focal LTS has 1.4.4 (bionic LTS has 1.3.3 which seems too old)\nrhel7 has 1.5.2 in EPEL;\n\nSLES 12.5 has zstd 1.3.3, so it won't be supported. 
But postgres should fail\ngracefully during ./configure rather than during build.\n\nNote that check-world fails if wal_compression is enabled due to two\noutstanding issues, so it's not currently possible to set that in CI or\nbuildfarm...\nhttps://www.postgresql.org/message-id/c86ce84f-dd38-9951-102f-13a931210f52%40dunslane.net", "msg_date": "Thu, 31 Mar 2022 17:42:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Mar 31, 2022 at 11:44:40AM -0400, Tom Lane wrote:\n>> In view of 51c0d186d (\"Allow parallel zstd compression\"), I agree\n>> that some clarity about the minimum supported version of zstd\n>> seems essential. I don't want to be dealing with threading bugs\n>> in ancient zstd versions. However, why do you suggest 1.3.7 in\n>> particular?\n\n> That's where I found that ZSTD_CLEVEL_DEFAULT was added, in their git.\n\n> I've just installed a .deb for 1.3.8, and discovered that the APIs used by\n> basebackup were considered experimental/nonpublic/static-lib-only until 1.4.0\n\nIndeed. I tried building against 1.3.6 (mainly because it was laying\naround) and the error reported by Devrim is just the tip of the iceberg.\nWith \"make -k\", I see unknown-symbol failures on\n\nZSTD_CCtx_setParameter\nZSTD_c_compressionLevel\nZSTD_c_nbWorkers\nZSTD_CCtx_reset\nZSTD_reset_session_only\nZSTD_compressStream2\nZSTD_e_continue\nZSTD_e_end\n\nI wonder whether Robert's ambition to be compatible with old versions\nextends that far.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 19:38:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "On Thu, Mar 31, 2022 at 7:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Indeed. 
I tried building against 1.3.6 (mainly because it was laying\n> around) and the error reported by Devrim is just the tip of the iceberg.\n> With \"make -k\", I see unknown-symbol failures on\n>\n> ZSTD_CCtx_setParameter\n> ZSTD_c_compressionLevel\n> ZSTD_c_nbWorkers\n> ZSTD_CCtx_reset\n> ZSTD_reset_session_only\n> ZSTD_compressStream2\n> ZSTD_e_continue\n> ZSTD_e_end\n>\n> I wonder whether Robert's ambition to be compatible with old versions\n> extends that far.\n\nIt definitely doesn't, and the fact that there's that much difference\nin the APIs between 2018 and the present frankly makes my heart sink.\n\nIt looks like this stuff may be what libzstd calls the \"advanced API\"\nwhich was considered unstable until 1.4.0 according to\nhttps://github.com/facebook/zstd/releases -- so maybe we ought to\ndeclare anything pre-1.4.0. to be unsupported. I really hope they're\nserious about keeping the advanced API stable, though. I'm excited\nabout the potential of using libzstd, but I don't want to have to keep\nadjusting our code to work with new library versions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:39:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 7:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Indeed. 
I tried building against 1.3.6 (mainly because it was laying\n>> around) and the error reported by Devrim is just the tip of the iceberg.\n>> ...\n>> I wonder whether Robert's ambition to be compatible with old versions\n>> extends that far.\n\n> It definitely doesn't, and the fact that there's that much difference\n> in the APIs between 2018 and the present frankly makes my heart sink.\n\nAFAICS it's just additions; this stuff is not evidence that they\nbroke anything.\n\n> It looks like this stuff may be what libzstd calls the \"advanced API\"\n> which was considered unstable until 1.4.0 according to\n> https://github.com/facebook/zstd/releases -- so maybe we ought to\n> declare anything pre-1.4.0. to be unsupported.\n\nThat seems like the appropriate answer to me. I verified that we\nbuild cleanly and pass check-world against 1.4.0, and later I'm\ngoing to set up BF member longfin to use that. So that will give\nus an anchor that we support zstd that far back. Had we written\nthis code earlier, maybe we'd have confined ourselves to 1.3.x\nfeatures ... but we didn't, and I don't see much value in doing\nso now.\n\nIn short, I think we should push Justin's version-check patch,\nand also fix the SGML docs to say that we require zstd >= 1.4.0.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 21:10:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "On Thu, Mar 31, 2022 at 09:10:00PM -0400, Tom Lane wrote:\n> In short, I think we should push Justin's version-check patch,\n> and also fix the SGML docs to say that we require zstd >= 1.4.0.\n\n1.4.0 was released in April 2019, just 3 years ago. 
It does not sound\nthat bad to me to make this version number a requirement for 15~ if\none wants to use zstd.\n--\nMichael", "msg_date": "Fri, 1 Apr 2022 13:35:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" }, { "msg_contents": "I wrote:\n> That seems like the appropriate answer to me. I verified that we\n> build cleanly and pass check-world against 1.4.0, and later I'm\n> going to set up BF member longfin to use that.\n\nDone ...\n\n> In short, I think we should push Justin's version-check patch,\n> and also fix the SGML docs to say that we require zstd >= 1.4.0.\n\n... and done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Apr 2022 11:06:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: head fails to build on SLES 12 (wal_compression=zstd)" } ]
[ { "msg_contents": "Hi hackers,\n\nI learned from Peter [1] that SLRU test coverage leaves much to be\ndesired and makes it difficult to refactor it. Here is a draft of a\npatch that tries to address it.\n\nI used src/test/modules/test_* modules as an example. While on it, I\nmoved the Asserts() outside of SimpleLruInit(). It didn't seem to be a\nright place to check the IsUnderPostmaster value, and complicated the\ntest implementation. I also renamed SlruSyncFileTag() to\nSlruSyncSegment() and changed its signature. I think it makes the\ninterface easier to reason about.\n\nI noticed that SLRU uses int's for slotno, while FileTag->slotno is\nuint32. Can't this cause us any grief? Finally, I believe\nSimpleLruWritePage() name is confusing, because in fact it works with\na slot, not a page. But I didn't change the name in my patch, yet.\n\nIf time permits, please take a quick look at the patch and let me know\nif I'm moving the right direction. There will be more tests in the\nfinal version, but I would appreciate an early feedback.\n\n[1]: https://postgr.es/m/220fab30-dff0-b055-f803-4338219f1021%40enterprisedb.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 31 Mar 2022 17:30:41 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Unit tests for SLRU" }, { "msg_contents": "> On 31 Mar 2022, at 16:30, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\nThanks for hacking on increasing test coverage!\n\n> While on it, I moved the Asserts() outside of SimpleLruInit(). 
It didn't seem\n> to be a right place to check the IsUnderPostmaster value, and complicated the\n> test implementation.\n\n+ *\n+ * Returns false if the cache didn't exist before the call, true otherwise.\n */\n-void\n+bool\n SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,\n\nIf we're changing the API to make it testable, that should be noted in the\ncomment and how the return value should be interpreted (and when it can be\nignored). It also doesn't seem all that appealing that SimpleLruInit can\nreturn false on successful function invocation, it makes for a confusing API.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:24:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Thu, Mar 31, 2022 at 05:30:41PM +0300, Aleksander Alekseev wrote:\n> I used src/test/modules/test_* modules as an example.\n\n> If time permits, please take a quick look at the patch and let me know\n> if I'm moving the right direction. There will be more tests in the\n> final version, but I would appreciate an early feedback.\n\nThe default place for this kind of test is regress.c, with plain \"make check\"\ncalling the regress.c function. src/test/modules is for things requiring an\nextension module or otherwise unable to run through regress.c.\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:47:28 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Fri, 1 Apr 2022 at 07:47, Noah Misch <noah@leadboat.com>:\n\n> On Thu, Mar 31, 2022 at 05:30:41PM +0300, Aleksander Alekseev wrote:\n> > I used src/test/modules/test_* modules as an example.\n>\n> > If time permits, please take a quick look at the patch and let me know\n> > if I'm moving the right direction. 
There will be more tests in the\n> > final version, but I would appreciate an early feedback.\n>\n> The default place for this kind of test is regress.c, with plain \"make\n> check\"\n> calling the regress.c function. src/test/modules is for things requiring\n> an\n> extension module or otherwise unable to run through regress.c.\n>\n+1 for placement c functions into regress.c if it's possible for the aim of\nsimplification.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 1 Apr 2022 12:58:35 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi Daniel,\n\n> It also doesn't seem all that appealing that SimpleLruInit can\n> return false on successful function invocation, it makes for a confusing\nAPI.\n\nAgree. I think using an additional `bool *found` argument would be less\nconfusing.\n\nNoah, Pavel,\n\n>> The default place for this kind of test is regress.c, with plain \"make\ncheck\"\n>> calling the regress.c function. 
src/test/modules is for things\nrequiring an\n>> extension module or otherwise unable to run through regress.c.\n>\n> +1 for placement c functions into regress.c if it's possible for the aim\nof simplification.\n\nThanks for your feedback. I'm OK with placing the tests to regress.c, but I\nwould like to double-check if this would be a right place for them.\n\nI'm reading src/test/modules/README and it says:\n\n\"\"\"\nsrc/test/modules contains PostgreSQL extensions that are primarily or\nentirely\nintended for testing PostgreSQL and/or to serve as example code. [...]\n\nIf you're adding new hooks or other functionality exposed as C-level API\nthis\nis where to add the tests for it.\n\"\"\"\n\nSLRU looks like a quite general-purpose container. I can imagine how\nsomeone may decide to use it in an extension. Wouldn't it be more logical\nto place it near:\n\nsrc/test/modules/test_rbtree\nsrc/test/modules/test_shm_mq\n\n... etc?\n\nAgain, I don't have a strong opinion here. If you insist, I will place the\ntests to regress.c.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 5 Apr 2022 12:38:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": ">\n> Again, I don't have a strong opinion here. 
If you insist, I will place the\n> tests to regress.c.\n>\n\nIt is up to committer to decide, but I think it would be good to place\ntests in regression.\nIn my opinion, many things from core may be used by extensions. And then it\nis up to extension authors to make relevant tests.\n\nFor me, it's enough to make a reasonable test coverage for SLRU without\ninvesting too much effort into generalization.\n\n-- \nBest regards,\nMaxim Orlov.\n\n", "msg_date": "Tue, 5 Apr 2022 14:22:22 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi hackers,\n\n>> Again, I don't have a strong opinion here. 
We can make it public if we want to, but considering the\nsimplicity of the function and the existence of many other tests I didn't find\nit necessary.\n\nI think the tests are about as good as they will ever get.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 6 Apr 2022 15:44:38 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi hackers,\n\n> Here is version 3 of the patch.\n> [...]\n> I think the tests are about as good as they will ever get.\n\nHere is version 4. Same as v3, but with resolved conflicts against the\ncurrent `master` branch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 7 Apr 2022 13:35:02 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": ">\n> Hi hackers,\n>\n> > Here is version 3 of the patch.\n> > [...]\n> > I think the tests are about as good as they will ever get.\n>\n> Here is version 4. Same as v3, but with resolved conflicts against the\n> current `master` branch.\n>\nHi, Alexander!\nThe test seems good enough to be pushed.\n\nOnly one thing to note. Maybe it would be good not to copy-paste Assert\nafter every call of SimpleLruInit, putting it into the wrapper function\ninstead. So the test can call calling the inner function (without assert)\nand all other callers using the wrapper. Not sure about naming though.\nMaybe rename current SimpleLruInit -> SimpleLruInitInner and a new wrapper\nbeing under the old name (SimpleLruInit).\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nHi hackers,\n\n> Here is version 3 of the patch.\n> [...]\n> I think the tests are about as good as they will ever get.\n\nHere is version 4. 
Same as v3, but with resolved conflicts against the\ncurrent `master` branch.Hi, Alexander!The test seems good enough to be pushed.Only one thing to note. Maybe it would be good not to copy-paste Assert after every call of SimpleLruInit, putting it into the wrapper function instead. So the test can call calling the inner function (without assert) and all other callers using the wrapper. Not sure about naming though. Maybe  rename current SimpleLruInit -> SimpleLruInitInner and a new wrapper being under the old name (SimpleLruInit).--Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 13 Apr 2022 15:51:30 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Wed, Apr 13, 2022 at 03:51:30PM +0400, Pavel Borisov wrote:\n> Only one thing to note. Maybe it would be good not to copy-paste Assert\n> after every call of SimpleLruInit, putting it into the wrapper function\n> instead. So the test can call calling the inner function (without assert)\n> and all other callers using the wrapper. Not sure about naming though.\n> Maybe rename current SimpleLruInit -> SimpleLruInitInner and a new wrapper\n> being under the old name (SimpleLruInit).\n\nI have looked at what you have here..\n\nThis patch redesigns SimpleLruInit() so as the caller would be now in\ncharge of checking that the SLRU has been created in the context of\nthe postmaster (aka when initializing shared memory). While this\nshould work as long as the amount of shared memory area is correctly\nsized in _PG_init() and that this area is initialized, then attached\nlater like for autoprewarm.c (this counts for LWLockRegisterTranche(),\nfor example), I am not really convinced that this is something that a\npatch aimed at extending testing coverage should redesign, especially\nwith a routine as old as that. 
If you don't know what you are doing,\nit could easily lead to problems with external code. Note that I\ndon't object to the addition of a new code path or a routine that\nwould be able to create a SLRU on-the-fly with less restrictions, but\nI am not convinced that this we should change this behavior (well,\nthere is a new argument that would force a recompilation). I am not\nsure what could be the use cases in favor of a SLRU created outside\nthe _PG_init() phase, but perhaps you have more imagination than I do\nfor such matters ;p\n\nFWIW, I'd like to think that the proper way of doing things for this\ntest facility is to initialize a SLRU through a loading of _PG_init()\nwhen processing shared_preload_libraries, meaning that you'd better\nput this facility in src/test/modules/ with a custom configuration\nfile with shared_preload_libraries set and a NO_INSTALLCHECK, without\ntouching at SimpleLruInit().\n--\nMichael", "msg_date": "Thu, 10 Nov 2022 16:01:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Thu, Nov 10, 2022 at 04:01:02PM +0900, Michael Paquier wrote:\n> I have looked at what you have here..\n\nThe comment at the top of SimpleLruInit() for sync_handler has been\nfixed as of 5ca3645, and backpatched. Nice catch.\n--\nMichael", "msg_date": "Thu, 10 Nov 2022 17:17:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi Michael,\n\nThanks for the review and also for committing 5ca3645.\n\n> This patch redesigns SimpleLruInit() [...]\n> I am not really convinced that this is something that a\n> patch aimed at extending testing coverage should redesign, especially\n> with a routine as old as that.\n> [...] you'd better put this facility in src/test/modules/\n\nFair enough. 
PFA the corrected patch v5.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 10 Nov 2022 18:40:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Thu, Nov 10, 2022 at 06:40:44PM +0300, Aleksander Alekseev wrote:\n> Fair enough. PFA the corrected patch v5.\n\nIs there a reason why you need a TAP test here? It is by design more\nexpensive than pg_regress and it does not require --enable-tap-tests.\nSee for example what we do for snapshot_too_old, commit_ts,\nworker_spi, etc., where each module uses a custom configuration file.\n\nHmm. If I were to write that, I think that I would make the SLRU\ndirectory configurable as a PGC_POSTMASTER, at least, for the purpose\nof the exercise, and also split test_slru() into more user-callable\nfunctions so as it would be possible to mix more cross-check scenarios\nwith the low-level C APIs if need be, with adapted input parameters.\n--\nMichael", "msg_date": "Fri, 11 Nov 2022 14:11:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Fri, Nov 11, 2022 at 02:11:08PM +0900, Michael Paquier wrote:\n> Is there a reason why you need a TAP test here? It is by design more\n> expensive than pg_regress and it does not require --enable-tap-tests.\n> See for example what we do for snapshot_too_old, commit_ts,\n> worker_spi, etc., where each module uses a custom configuration file.\n\nI have put my hands on that, and I found that the tests were a bit\noverengineered. First, SimpleLruDoesPhysicalPageExist() is not that\nmuch necessary before and after each operation, like truncation or\ndeletion, as the previous pages were doing equal tests. 
The hardcoded\npage number lacks a bit of flexibility and readability IMO, especially\nwhen combined with the number of pages per segments, as well.\n\nI have reworked that as per the attached, that provides basically the\nsame coverage, going through a SQL interface for the whole thing.\nLike all the other tests of its kind, this does not use a TAP test,\nrelying on a custom configuration file instead. This still needs some\npolishing, but the basics are here.\n\nWhat do you think?\n--\nMichael", "msg_date": "Mon, 14 Nov 2022 20:19:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi Michael,\n\n> I have reworked that as per the attached, that provides basically the\n> same coverage, going through a SQL interface for the whole thing.\n> Like all the other tests of its kind, this does not use a TAP test,\n> relying on a custom configuration file instead. This still needs some\n> polishing, but the basics are here.\n\nMany thanks for the updated patch. I didn't know one can run tests\nwith a custom postgresql.conf without using TAP tests.\n\n> What do you think?\n\nIt looks much better than before. I replaced strcpy() with strncpy()\nand pgindent'ed the code. Other than that to me it looks ready to be\ncommitted.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 15 Nov 2022 13:15:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "> On 15 Nov 2022, at 11:15, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\n>> What do you think?\n> \n> It looks much better than before. 
I replaced strcpy() with strncpy()\n> and pgindent'ed the code.\n\n+\t/* write given data to the page */\n+\tstrncpy(TestSlruCtl->shared->page_buffer[slotno], data, BLCKSZ - 1);\n\nWould it make sense to instead use pg_pwrite to closer match the code being\ntested?\n\n> Other than that to me it looks ready to be committed.\n\nAgreed, reading over it nothing sticks out.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:39:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "Hi, Alexander!\n> > I have reworked that as per the attached, that provides basically the\n> > same coverage, going through a SQL interface for the whole thing.\n> > Like all the other tests of its kind, this does not use a TAP test,\n> > relying on a custom configuration file instead. This still needs some\n> > polishing, but the basics are here.\n>\n> Many thanks for the updated patch. I didn't know one can run tests\n> with a custom postgresql.conf without using TAP tests.\n>\n> > What do you think?\n>\n> It looks much better than before. I replaced strcpy() with strncpy()\n> and pgindent'ed the code. Other than that to me it looks ready to be\n> committed.\nI've looked through the patch again. I agree it looks better and can\nbe committed.\nMark it as RfC now.\n\nRegards,\nPavel Borisov\n\n\n", "msg_date": "Tue, 15 Nov 2022 14:43:06 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Tue, Nov 15, 2022 at 11:39:20AM +0100, Daniel Gustafsson wrote:\n> +\t/* write given data to the page */\n> +\tstrncpy(TestSlruCtl->shared->page_buffer[slotno], data, BLCKSZ - 1);\n> \n> Would it make sense to instead use pg_pwrite to closer match the code being\n> tested?\n\nHmm. 
I am not exactly sure what we'd gain with that, as it would\nimply that we need to write directly to the file using SlruFileName()\nafter doing ourselves a OpenTransientFile(), duplicating what\nSlruPhysicalWritePage() does to create a fd to feed to a pg_pwrite()?\nOr I misunderstood your point.\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 09:22:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" }, { "msg_contents": "On Tue, Nov 15, 2022 at 02:43:06PM +0400, Pavel Borisov wrote:\n> I've looked through the patch again. I agree it looks better and can\n> be committed.\n> Mark it as RfC now.\n\nOkay, applied, then. The SQL function names speak by themselves, even\nif some of them refer to pages but they act on segments, but that's\neasy enough to see the difference through the code when we do segment\nnumber compilations, as well.\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 09:56:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unit tests for SLRU" } ]
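The wrapper-vs-inner split that Pavel Borisov floats in the thread above — an inner initializer without the environment Assert, plus a wrapper under the old name that keeps it for ordinary callers — can be sketched in isolation. Everything below is an illustrative stand-in: `SlruCtlData`, `under_postmaster_init`, and the function signatures are hypothetical placeholders, not PostgreSQL's real slru.c definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for the real SLRU control structure. */
typedef struct SlruCtlData
{
    char name[32];
    int  nslots;
    bool initialized;
} SlruCtlData;

/* Stand-in for "are we initializing postmaster shared memory?" */
bool under_postmaster_init = true;

/* Inner initializer: no environment check, so test code may call it directly. */
void
SimpleLruInitInner(SlruCtlData *ctl, const char *name, int nslots)
{
    strncpy(ctl->name, name, sizeof(ctl->name) - 1);
    ctl->name[sizeof(ctl->name) - 1] = '\0';
    ctl->nslots = nslots;
    ctl->initialized = true;
}

/* Wrapper under the old name: keeps the safety assertion for all other callers. */
void
SimpleLruInit(SlruCtlData *ctl, const char *name, int nslots)
{
    assert(under_postmaster_init);  /* else we shouldn't be here */
    SimpleLruInitInner(ctl, name, nslots);
}
```

A test harness could then exercise `SimpleLruInitInner()` directly, while all regular call sites keep going through the assertion-protected entry point — which is the copy-paste the suggestion was trying to avoid.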
[ { "msg_contents": "Do not add openssl dependencies to libpq pkg-config file if openssl is\ndisabled to avoid the following build failure with libdbi-drivers raised\nsince commit beff361bc1edc24ee5f8b2073a1e5e4c92ea66eb:\n\nconfigure: error: Package requirements (libpq) were not met:\n\nPackage 'libssl', required by 'libpq', not found\nPackage 'libcrypto', required by 'libpq', not found\n\nFixes:\n - http://autobuild.buildroot.org/results/415cb61a58b928a42623ed90b0b60c59032f0a4e\n\nSigned-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>\n---\n src/interfaces/libpq/Makefile | 2 ++\n 1 file changed, 2 insertions(+)\n\ndiff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile\nindex 89bf5e0126..b5fd72a4ac 100644\n--- a/src/interfaces/libpq/Makefile\n+++ b/src/interfaces/libpq/Makefile\n@@ -95,7 +95,9 @@ SHLIB_PREREQS = submake-libpgport\n \n SHLIB_EXPORTS = exports.txt\n \n+ifeq ($(with_ssl),openssl)\n PKG_CONFIG_REQUIRES_PRIVATE = libssl libcrypto\n+endif\n \n all: all-lib libpq-refs-stamp\n \n-- \n2.35.1\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 18:37:59 +0200", "msg_from": "Fabrice Fontaine <fontaine.fabrice@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] src/interfaces/libpq/Makefile: fix pkg-config without openssl" }, { "msg_contents": "> On 31 Mar 2022, at 18:37, Fabrice Fontaine <fontaine.fabrice@gmail.com> wrote:\n\n> +ifeq ($(with_ssl),openssl)\n> PKG_CONFIG_REQUIRES_PRIVATE = libssl libcrypto\n> +endif\n\nThat seems reasonable, is there any reason why the referenced commit didn't do\nthat?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 1 Apr 2022 15:35:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] src/interfaces/libpq/Makefile: fix pkg-config without\n openssl" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 31 Mar 2022, at 18:37, Fabrice Fontaine <fontaine.fabrice@gmail.com> wrote:\n>> +ifeq 
($(with_ssl),openssl)\n>> PKG_CONFIG_REQUIRES_PRIVATE = libssl libcrypto\n>> +endif\n\n> That seems reasonable, is there any reason why the referenced commit didn't do\n> that?\n\nLooks like a clear oversight to me, but maybe Peter will\nthink differently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Apr 2022 09:59:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] src/interfaces/libpq/Makefile: fix pkg-config without\n openssl" }, { "msg_contents": "On 31.03.22 18:37, Fabrice Fontaine wrote:\n> Do not add openssl dependencies to libpq pkg-config file if openssl is\n> disabled to avoid the following build failure with libdbi-drivers raised\n> since commit beff361bc1edc24ee5f8b2073a1e5e4c92ea66eb:\n> \n> configure: error: Package requirements (libpq) were not met:\n> \n> Package 'libssl', required by 'libpq', not found\n> Package 'libcrypto', required by 'libpq', not found\n> \n> Fixes:\n> -http://autobuild.buildroot.org/results/415cb61a58b928a42623ed90b0b60c59032f0a4e\n> \n> Signed-off-by: Fabrice Fontaine<fontaine.fabrice@gmail.com>\n\nFixed, thanks.\n\n\n\n", "msg_date": "Fri, 1 Apr 2022 17:17:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] src/interfaces/libpq/Makefile: fix pkg-config without\n openssl" } ]
[ { "msg_contents": "On 2022-Mar-28, Alvaro Herrera wrote:\n\n>> I intend to get this pushed after lunch.\n\n>Pushed, with one more change: fetching the tuple ID junk attribute in\n>ExecMerge was not necessary, since we already had done that in\n>ExecModifyTable. We just needed to pass that down to ExecMerge, and\n>make sure to handle the case where there isn't one.\nHi,\n\nI think that there is an oversight at 7103ebb\n<https://github.com/postgres/postgres/commit/7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\nThere is no chance of Assert preventing this bug.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 31 Mar 2022 14:38:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "support for MERGE" }, { "msg_contents": "> On 31 Mar 2022, at 19:38, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> I think that there is an oversight at 7103ebb\n> There is no chance of Assert preventing this bug.\n\nThis seems reasonable from brief reading of the code, NULL is a legitimate\nvalue for the map and that should yield an empty list AFAICT.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:10:35 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "On 2022-Mar-31, Daniel Gustafsson wrote:\n\n> > On 31 Mar 2022, at 19:38, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> > I think that there is an oversight at 7103ebb\n> > There is no chance of Assert preventing this bug.\n> \n> This seems reasonable from brief reading of the code, NULL is a legitimate\n> value for the map and that should yield an empty list AFAICT.\n\nThere's no bug here and this is actually intentional: if the map is\nNULL, this function should not be called.\n\nIn the code before this commit, there was an assert that this variable\nwas not null:\n\n static List *\n adjust_partition_colnos(List *colnos, ResultRelInfo *leaf_part_rri)\n {\n- List 
*new_colnos = NIL;\n TupleConversionMap *map = ExecGetChildToRootMap(leaf_part_rri);\n! AttrMap *attrMap;\n ListCell *lc;\n \n! Assert(map != NULL); /* else we shouldn't be here */\n! attrMap = map->attrMap;\n \n foreach(lc, colnos)\n {\n\n\nWe could add an Assert that map is not null in the new function, but\nreally there's no point: if the map is null, we'll crash just fine in\nthe following line.\n\nI would argue that we should *remove* the Assert() that I left in\nadjust_partition_colnos_with_map.\n\nEven if we wanted to make the function handle the case of a NULL map,\nthen the right fix is not to return NIL, but rather we should return the\noriginal list.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 2 Apr 2022 17:02:01 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "Hi,\n\nOn 2022-04-02 17:02:01 +0200, Alvaro Herrera wrote:\n> There's no bug here and this is actually intentional: if the map is\n> NULL, this function should not be called.\n\nThis made me, again, wonder if we should add a pg_nonnull attibute to c.h. The\ncompiler can probably figure it out in this case, but there's plenty cases it\ncan't, because the function definition is in a different translation unit. And\nIMO it helps humans too.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Sat, 2 Apr 2022 09:28:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "Em sáb., 2 de abr. 
de 2022 às 12:01, Alvaro Herrera <alvherre@alvh.no-ip.org>\nescreveu:\n\n> On 2022-Mar-31, Daniel Gustafsson wrote:\n>\n> > > On 31 Mar 2022, at 19:38, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > > I think that there is an oversight at 7103ebb\n> > > There is no chance of Assert preventing this bug.\n> >\n> > This seems reasonable from brief reading of the code, NULL is a\n> legitimate\n> > value for the map and that should yield an empty list AFAICT.\n>\n> There's no bug here and this is actually intentional: if the map is\n> NULL, this function should not be called.\n>\nIMHO, actually there are bug here.\nExecGetChildToRootMap is clear, is possible returning NULL.\nTo discover if the map is NULL, ExecGetChildToRootMap needs to process\n\"ResultRelInfo *leaf_part_rri\".\nSo, the argument \"if the map is NULL, this function should not be called\",\nis contradictory.\n\nActually, with Assert at function adjust_partition_colnos_using_map,\nwill never be checked, because it crashed before, both\nproduction and debug modes.\n\n\n> In the code before this commit, there was an assert that this variable\n> was not null:\n>\n> static List *\n> adjust_partition_colnos(List *colnos, ResultRelInfo *leaf_part_rri)\n> {\n> - List *new_colnos = NIL;\n> TupleConversionMap *map = ExecGetChildToRootMap(leaf_part_rri);\n> ! AttrMap *attrMap;\n> ListCell *lc;\n>\n> ! Assert(map != NULL); /* else we shouldn't be here */\n> ! 
attrMap = map->attrMap;\n>\n> foreach(lc, colnos)\n> {\n>\n>\n> We could add an Assert that map is not null in the new function, but\n> really there's no point: if the map is null, we'll crash just fine in\n> the following line.\n>\n> I would argue that we should *remove* the Assert() that I left in\n> adjust_partition_colnos_with_map.\n>\n> Even if we wanted to make the function handle the case of a NULL map,\n> then the right fix is not to return NIL, but rather we should return the\n> original list.\n>\nIf the right fix is to return the original list, here is the patch attached.\n\nregards\nRanier Vilela", "msg_date": "Sat, 2 Apr 2022 14:57:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "On 2022-Apr-02, Ranier Vilela wrote:\n\n> Em sáb., 2 de abr. de 2022 às 12:01, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> escreveu:\n\n> IMHO, actually there are bug here.\n> ExecGetChildToRootMap is clear, is possible returning NULL.\n> To discover if the map is NULL, ExecGetChildToRootMap needs to process\n> \"ResultRelInfo *leaf_part_rri\".\n> So, the argument \"if the map is NULL, this function should not be called\",\n> is contradictory.\n\nI was not explicit enough. I meant \"if no map is needed to adjust\ncolumns, then this function should not be called\". The caller already\nknows if it's needed or not; it doesn't depend on literally testing\n'map'. If somebody mis-calls this function, it would have crashed, yes;\nbut that's a caller bug, not this function's.\n\nA few days ago, the community Coverity also complained about this, so I\nadded an Assert that the map is not null, which should silence it.\n\n> If the right fix is to return the original list, here is the patch attached.\n\n... for a buggy caller (one that calls it when unnecessary), then yes\nthis would be the correct code -- except that now the caller doesn't\nknow if the returned list needs to be freed or not. 
So it seems better\nto avoid accumulating pointless calls to this function by just not\ncoping with them.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n", "msg_date": "Tue, 12 Apr 2022 15:47:08 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "Em ter., 12 de abr. de 2022 às 10:47, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2022-Apr-02, Ranier Vilela wrote:\n>\n> > Em sáb., 2 de abr. de 2022 às 12:01, Alvaro Herrera <\n> alvherre@alvh.no-ip.org>\n> > escreveu:\n>\n> > IMHO, actually there are bug here.\n> > ExecGetChildToRootMap is clear, is possible returning NULL.\n> > To discover if the map is NULL, ExecGetChildToRootMap needs to process\n> > \"ResultRelInfo *leaf_part_rri\".\n> > So, the argument \"if the map is NULL, this function should not be\n> called\",\n> > is contradictory.\n>\n> I was not explicit enough. I meant \"if no map is needed to adjust\n> columns, then this function should not be called\". The caller already\n> knows if it's needed or not; it doesn't depend on literally testing\n> 'map'. If somebody mis-calls this function, it would have crashed, yes;\n> but that's a caller bug, not this function's.\n>\nThanks for the explanation.\n\n\n>\n> A few days ago, the community Coverity also complained about this, so I\n> added an Assert that the map is not null, which should silence it.\n>\nThanks for hardening this.\n\n\n>\n> > If the right fix is to return the original list, here is the patch\n> attached.\n>\n> ... for a buggy caller (one that calls it when unnecessary), then yes\n> this would be the correct code -- except that now the caller doesn't\n> know if the returned list needs to be freed or not. 
So it seems better\nto avoid accumulating pointless calls to this function by just not\ncoping with them.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n", "msg_date": "Tue, 12 Apr 2022 15:47:08 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for MERGE" }, { "msg_contents": "Em ter., 12 de abr. de 2022 às 10:47, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2022-Apr-02, Ranier Vilela wrote:\n>\n> > Em sáb., 2 de abr. de 2022 às 12:01, Alvaro Herrera <\n> alvherre@alvh.no-ip.org>\n> > escreveu:\n>\n> > IMHO, actually there are bug here.\n> > ExecGetChildToRootMap is clear, is possible returning NULL.\n> > To discover if the map is NULL, ExecGetChildToRootMap needs to process\n> > \"ResultRelInfo *leaf_part_rri\".\n> > So, the argument \"if the map is NULL, this function should not be\n> called\",\n> > is contradictory.\n>\n> I was not explicit enough. I meant \"if no map is needed to adjust\n> columns, then this function should not be called\". The caller already\n> knows if it's needed or not; it doesn't depend on literally testing\n> 'map'. If somebody mis-calls this function, it would have crashed, yes;\n> but that's a caller bug, not this function's.\n>\nThanks for the explanation.\n\n\n>\n> A few days ago, the community Coverity also complained about this, so I\n> added an Assert that the map is not null, which should silence it.\n>\nThanks for hardening this.\n\n\n>\n> > If the right fix is to return the original list, here is the patch\n> attached.\n>\n> ... for a buggy caller (one that calls it when unnecessary), then yes\n> this would be the correct code -- except that now the caller doesn't\n> know if the returned list needs to be freed or not.  So it seems better\n> to avoid accumulating pointless calls to this function by just not\n> coping with them.\n>\n Sure, it is always better to avoid doing work, unless strictly necessary.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 12 Apr 2022 11:19:00 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for MERGE" } ]
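Álvaro's closing point — that a variant of the function tolerating a NULL map would have to hand back the original list rather than NIL, or the caller could no longer tell what needs freeing — can be illustrated with a minimal stand-alone sketch. The `AttrMap` struct and the array-based signature below are simplified assumptions for illustration, not the real List-based `adjust_partition_colnos()` from the patch:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplified map: attnums[child_attno - 1] gives the parent attno. */
typedef struct AttrMap
{
    const int *attnums;
    int        maplen;
} AttrMap;

/*
 * Translate child-relation column numbers to parent column numbers.
 * With no map there is nothing to translate, so the input is returned
 * unchanged -- never an empty result.
 */
const int *
adjust_partition_colnos(const int *colnos, int ncols, const AttrMap *map, int *out)
{
    if (map == NULL)
        return colnos;          /* no conversion needed: hand back the input */

    for (int i = 0; i < ncols; i++)
    {
        int child_attno = colnos[i];

        assert(child_attno >= 1 && child_attno <= map->maplen);
        out[i] = map->attnums[child_attno - 1];
    }
    return out;
}
```

Returning the input pointer unchanged in the NULL-map case keeps ownership simple: the caller frees only what it allocated, which mirrors the convention settled on in the thread of simply not calling the function when no map exists.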
[ { "msg_contents": "So the last commitfest often runs over and \"feature freeze\" isn't\nscheduled until April 7. Do committers want to keep the commitfest\nopen until then? Or close it now and focus it only on a few pending\nfeatures they're already working on?\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:46:21 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Commitfest closing" }, { "msg_contents": "\n\n> On Mar 31, 2022, at 4:47 PM, Greg Stark <stark@mit.edu> wrote:\n> \n> So the last commitfest often runs over and \"feature freeze\" isn't\n> scheduled until April 7. Do committers want to keep the commitfest\n> open until then? Or close it now and focus it only on a few pending\n> features they're already working on?\n\nIn past years the CF has been kept open. Let’s stick with that.\n\nCheers\n\nAndrew\n\n\n", "msg_date": "Thu, 31 Mar 2022 17:03:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Commitfest closing" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On Mar 31, 2022, at 4:47 PM, Greg Stark <stark@mit.edu> wrote:\n>> So the last commitfest often runs over and \"feature freeze\" isn't\n>> scheduled until April 7. Do committers want to keep the commitfest\n>> open until then? Or close it now and focus it only on a few pending\n>> features they're already working on?\n\n> In past years the CF has been kept open. Let’s stick with that.\n\nYeah, I was assuming it would stay open till late next week.\n\nWe do have at least two patches in the queue that we want to put in\nafter everything else: my frontend logging patch, and the\nPGDLLIMPORT-for-everything patch that I believe Robert has taken\nresponsibility for. So if we want those in before the nominal\nfeature freeze, other stuff is going to need to be done a day or\nso beforehand. 
But that still gives us most of a week.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 17:19:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest closing" } ]
[ { "msg_contents": "Hi,\n\nI'm interested in adding more ergonomics to DDL commands, in\nparticular supporting IF EXISTS for ALTER TABLE … ALTER COLUMN, so\nthat if a column doesn't exist the command is skipped.\n\nIF EXISTS is already supported in various places (e.g. ALTER TABLE …\nADD COLUMN IF NOT EXISTS, and ALTER TABLE … DROP COLUMN IF EXISTS),\nbut it's not available for any of the ALTER COLUMN sub commands.\n\nThe motivation is to make it easier to write idempotent migrations\nthat can be incrementally authored, such that they can be re-executed\nmultiple times without having to write an \"up\" and \"down\" migration.\nhttps://github.com/graphile/migrate#idempotency elaborates a bit more\non the approach.\n\nThe current approach I see is to write something like:\n\nDO $$\n BEGIN\n IF EXISTS (SELECT 1\n FROM information_schema.columns\n WHERE table_schema = 'myschema' AND table_name = 'mytable' AND\ncolumn_name = 'mycolume')\n THEN\n ALTER TABLE myschema.mytable RENAME mycolume TO mycolumn;\n END IF;\n END\n$$;\n\nI think ideally the IF EXISTS would be added to all of the ALTER\nCOLUMN commands, however for the moment I have only added it to the {\nSET | DROP } NOT NULL command to demonstrate the approach and see if\nthere's in-principle support for such a change.\n\nQuestions:\n\n1. I assume this is not part of the SQL specification, so this would\nintroduce more deviation to PostgreSQL. Is that accurate? Is that\nproblematic?\n2. I believe I'm missing some code paths for table inheritance, is that correct?\n3. I haven't updated the documentation—is it correct to do that in\ndoc/src/sgml/ref/alter_table.sgml?\n4. 
This is my first time attempting to contribute to PostgreSQL, have\nI missed anything?\n\n-- \nCheers,\nBrad", "msg_date": "Fri, 1 Apr 2022 10:39:06 +1100", "msg_from": "Bradley Ayers <bradley.ayers@gmail.com>", "msg_from_op": true, "msg_subject": "[WIP] ALTER COLUMN IF EXISTS" }, { "msg_contents": "On Thu, Mar 31, 2022 at 4:39 PM Bradley Ayers <bradley.ayers@gmail.com>\nwrote:\n\n>\n> I'm interested in adding more ergonomics to DDL commands, in\n> particular supporting IF EXISTS for ALTER TABLE … ALTER COLUMN, so\n> that if a column doesn't exist the command is skipped.\n>\n> IF EXISTS is already supported in various places (e.g. ALTER TABLE …\n> ADD COLUMN IF NOT EXISTS, and ALTER TABLE … DROP COLUMN IF EXISTS),\n> but it's not available for any of the ALTER COLUMN sub commands.\n>\n\nAt present the project seems to largely consider the IF EXISTS/IF NOT\nEXISTS features to have been largely a mistake and while removing it is not\ngoing to happen the desire to change or extend it is not strong.\n\nIf you want to make a go at this I would suggest not writing any new code\nat first but instead take inventory of what is already implemented, how it\nis implemented, what gaps there are, and proposals to fill those gaps.\nWrite the theory/rules that we follow in our existing (or future)\nimplementation of this idempotence feature. Then get agreement to\nimplement the proposals from enough important people that a well-written\npatch would be considered acceptable to commit.\nI don't know if any amount of planning and presentation will convince\neveryone this is a good idea in theory, let alone one that we want to\nmaintain while the author goes off to other projects (this being your first\npatch that seems like a reasonable assumption).\n\nI can say you have some community support in the endeavor but, and maybe\nthis is biasing me, my (fairly recent) attempt at what I considered\nbug-fixing in this area was not accepted. 
On that note, as part of your\nresearch, you should find the previous email threads on this topic (there\nare quite a few I am sure), and make you own judgements from those. Aside\nfrom it being my opinion I don't have any information at hand that isn't in\nthe email archives.\n\nDavid J.", "msg_date": "Thu, 31 Mar 2022 17:01:29 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [WIP] ALTER COLUMN IF EXISTS" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 4:39 PM Bradley Ayers <bradley.ayers@gmail.com>\n> wrote:\n>> I'm interested in adding more ergonomics to DDL commands, in\n>> particular supporting IF EXISTS for ALTER TABLE … ALTER COLUMN, so\n>> that if a column doesn't exist the command is skipped.\n\n> At present the project seems to largely consider the IF EXISTS/IF NOT\n> EXISTS features to have been largely a mistake and while removing it is not\n> going to happen the desire to change or extend it is not strong.\n\nThat might be an overstatement. There's definitely a camp that\ndoesn't like CREATE IF NOT EXISTS, precisely on the grounds that it's\nnot idempotent --- success of the command tells you very little about\nthe state of the object, beyond the fact that some object of that name\nnow exists. (DROP IF EXISTS, by comparison, *is* idempotent: success\nguarantees that the object now does not exist. CREATE OR REPLACE\nis also idempotent, or at least much closer than IF NOT EXISTS.)\nIt's not entirely clear to me whether ALTER IF EXISTS could escape any\nof that concern, but offhand it seems like it's close to the CREATE\nproblem. I do kind of wonder what the use-case for it is, anyway.\n\nOne thing to keep in mind is that unlike some other DBMSes, you\ncan script pretty much any conditional DDL you want in Postgres.\nThis considerably reduces the pressure to provide conditionalization\nbuilt right into the DDL commands. 
As a result, we (or at least I)\nprefer to offer only the most clearly useful, best-defined cases\nas built-in DDL features. So there's definitely a hurdle that\nan ALTER IF EXISTS patch would have to clear before having a chance\nof being accepted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:30:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [WIP] ALTER COLUMN IF EXISTS" }, { "msg_contents": "On Thu, Mar 31, 2022 at 8:02 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> At present the project seems to largely consider the IF EXISTS/IF NOT EXISTS features to have been largely a mistake and while removing it is not going to happen the desire to change or extend it is not strong.\n\nI like the IF [NOT] EXISTS stuff quite a bit. I wish it had existed\nback when I was doing application programming with PostgreSQL. I would\nhave used it for exactly the sorts of things that Bradley mentions.\n\nI don't know how far it's worth taking this stuff. I dislike the fact\nthat when you get beyond what you can do with IF [NOT] EXISTS, you're\nsuddenly thrown into having to write SQL against system catalog\ncontents which, if you're the sort of person who really likes the IF\n[NOT] EXISTS commands, may well be something you don't feel terribly\ncomfortable doing. It's almost tempting to propose new SQL functions\njust for these kinds of scripts. 
Like instead of adding support\nfor....\n\n ALTER TABLE myschema.mytable IF EXISTS RENAME IF EXISTS this TO that;\n\n...and I presume you need IF EXISTS twice, once for the table and once\nfor the column, we could instead make it possible for people to write:\n\nIF pg_table_exists('myschema.mytable') AND\npg_table_has_column('myschema.mytable', 'this') THEN\n ALTER TABLE myschema.mytable RENAME this TO that;\nEND IF;\n\nAn advantage of that approach is that you could also do more\ncomplicated things that are never going to work with any number of\nIF-EXISTS clauses. For example, imagine you want to rename foo to bar\nand bar to baz, unless that's been done already. Well with these\nfunctions you can just do this:\n\nIF pg_table_has_column('mytab', 'foo') THEN\n ALTER TABLE mytab RENAME bar TO baz;\n ALTER TABLE mytab RENAME foo TO bar;\nEND;\n\nThere's no way to get there with just IF EXISTS.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:52:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [WIP] ALTER COLUMN IF EXISTS" }, { "msg_contents": "> On 1 Apr 2022, at 02:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n\n>> At present the project seems to largely consider the IF EXISTS/IF NOT\n>> EXISTS features to have been largely a mistake and while removing it is not\n>> going to happen the desire to change or extend it is not strong.\n> \n> That might be an overstatement.\n\nISTR that patches which have been rejected have largely added support for the\nsyntax for the sake of adding support for the syntax, not because there was a\nneed or usecase for it. 
When the patch is accompanied with an actual usecase\nit's also easier to reason about.\n\nNow, the usecase of \"I wanted to start working on PostgreSQL and this seemed\nlike a good first patch\" is clearly also very important.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:10:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [WIP] ALTER COLUMN IF EXISTS" } ]
[ { "msg_contents": "Hi hackers!\n\nNow we have two data types xid and xid8. The first one (xid) makes a\nnumeric ring, and xid8 are monotonous.\n\nAs per [1] \"Unlike xid values, xid8 values increase strictly monotonically\nand cannot be reused in the lifetime of a database cluster.\"\n\nAs a consequence of [1] xid8 can have min/max functions (committed in [2]),\nwhich xid can not have.\n\nWhen working on 64xid patch [3] we assume that even 64xid's technically can\nbe wraparound-ed, although it's very much unlikely. I wonder what is\nexpected to be with xid8 values at this (unlikely) 64xid wraparound?\n\nWhat do you think about this? Wouldn't it be better to change xid8 to form\na numeric ring like xid? I think it is necessary for any\n64-wraparound-enabled implementation of 64xids.\n\nPlease feel free to share your thoughts.\n\n[1] https://www.postgresql.org/docs/current/datatype-oid.html\n[2]\nhttps://www.postgresql.org/message-id/flat/47d77b18c44f87f8222c4c7a3e2dee6b%40oss.nttdata.com\n[3]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n", "msg_date": "Fri, 1 Apr 2022 16:13:17 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Is monotonous xid8 is a right way to do?" }, { "msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n\n> Hi hackers!\n>\n> Now we have two data types xid and xid8. The first one (xid) makes a\n> numeric ring, and xid8 are monotonous.\n>\n> As per [1] \"Unlike xid values, xid8 values increase strictly monotonically\n> and cannot be reused in the lifetime of a database cluster.\"\n>\n> As a consequence of [1] xid8 can have min/max functions (committed in [2]),\n> which xid can not have.\n>\n> When working on 64xid patch [3] we assume that even 64xid's technically can\n> be wraparound-ed, although it's very much unlikely. I wonder what is\n> expected to be with xid8 values at this (unlikely) 64xid wraparound?\n\nEven if a cluster was consuming a million XIDs per second, it would take\nover half a million years to wrap around the 64bit range. Is that really\nsomething we should worry about?\n\nilmari@[local]:5432 ~=# select 2::numeric^64/10^9/3600/24/365;\n┌──────────────────┐\n│ ?column? │\n├──────────────────┤\n│ 584942.417355072 │\n└──────────────────┘\n\n- ilmari\n\n\n", "msg_date": "Fri, 01 Apr 2022 13:33:05 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Is monotonous xid8 is a right way to do?" }, { "msg_contents": "Hi!\n\nIn my view, FullTransactionId type was implemented without considering 64\nbit wraparound. 
Which seems to be unlikely to happen. Then on that basis\nxid8 type was created. Details of that particular implementation\ninfiltrated into documentation and became sort of normal. In my opinion,\nsemantically, both of these types should be treated as similar\ntypes although with different sizes. Thus, again, xid and xid8 types should\nbe a ring and have no min and max functions. At least, in a sort of\n\"conventional\" way when minimal value is minimal in a mathematical way and\nso for maximum.\n\nFor example, max may be implemented as max(0, 42, 18446744073709551615) =\n42, which is a bit weird.\n\n-- \nBest regards,\nMaxim Orlov.\n\n", "msg_date": "Fri, 1 Apr 2022 15:36:28 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is monotonous xid8 is a right way to do?" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Even if a cluster was consuming a million XIDs per second, it would take\n> over half a million years to wrap around the 64bit range. Is that really\n> something we should worry about?\n>\n> ilmari@[local]:5432 ~=# select 2::numeric^64/10^9/3600/24/365;\n\nOops, that should be 10^6, not 10^9. I was dithering over whether to do\nit as a million or a billion per second. 
For a billion XIDs per second\nit would last a mere half millennium.\n\n> ┌──────────────────┐\n> │ ?column? │\n> ├──────────────────┤\n> │ 584942.417355072 │\n> └──────────────────┘\n>\n> - ilmari\n\n\n", "msg_date": "Fri, 01 Apr 2022 13:39:07 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Is monotonous xid8 is a right way to do?" }, { "msg_contents": "On Fri, 1 Apr 2022 at 14:13, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi hackers!\n>\n> Now we have two data types xid and xid8. The first one (xid) makes a numeric ring, and xid8 are monotonous.\n>\n> As per [1] \"Unlike xid values, xid8 values increase strictly monotonically and cannot be reused in the lifetime of a database cluster.\"\n>\n> As a consequence of [1] xid8 can have min/max functions (committed in [2]), which xid can not have.\n>\n> When working on 64xid patch [3] we assume that even 64xid's technically can be wraparound-ed, although it's very much unlikely. I wonder what is expected to be with xid8 values at this (unlikely) 64xid wraparound?\n>\n> What do you think about this? Wouldn't it be better to change xid8 to form a numeric ring like xid? I think it is necessary for any 64-wraparound-enabled implementation of 64xids.\n>\n> Please feel free to share your thoughts.\n\nAssuming that each Xid is WAL-logged (or at least one in 8) we won't\nsee xid8 wraparound, as our WAL is byte-addressable with only 64 bits\nused as the identifier. As such, we can only fit a maximum of 2^61\nxid8s in our WAL; which is less than what would be needed to wrap\naround.\n\nAddressed another way: If we'd have a system that consumed one xid\nevery CPU clock; then the best available x86 hardware right now would\ncurrently consume ~ 5.5B xids every second. 
This would still leave\naround 100 years of this system running non-stop before we'd be\nhitting xid8 wraparound (= 2^64 / 5.5e9 (xid8 /sec) / 3600 (min /hour)\n/ 730 (hour / month)/ 12 (month /year)).\n\nI don't think we'll have to consider that an issue for now. Maybe,\neventually, if we start doing distributed transactions where\ntransaction IDs are reasonably consumed at a rate higher than 5B /sec\n(and not logged at that rate) we can start considering this to be a\nproblem.\n\nA different and more important issue (IMO) is that the xlog record\nheader currently only supports 32-bit xids -- long-running\ntransactions can reasonably see a xid4 wraparound in their lifetime.\n\nEnjoy,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 1 Apr 2022 14:43:03 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is monotonous xid8 is a right way to do?" }, { "msg_contents": ">\n> A different and more important issue (IMO) is that the xlog record\n>\nheader currently only supports 32-bit xids -- long-running\n> transactions can reasonably see a xid4 wraparound in their lifetime.\n\nYou're completely right. This is a first of making xid's 64-bit proposed\n[1] i.e, making SLRU 64-bit [2]\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEEf1uywYN%2BVaRuSwNMGE5%3DeFOy7ZTwtP2g%2BW9oJDszqQw%40mail.gmail.com#bd4f64b73cb3b969e119da7e5a7b1f30\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nA different and more important issue (IMO) is that the xlog record\nheader currently only supports 32-bit xids -- long-running\ntransactions can reasonably see a xid4 wraparound in their lifetime.You're completely right. 
This is a first of making xid's 64-bit proposed [1] i.e, making SLRU 64-bit [2]\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEEf1uywYN%2BVaRuSwNMGE5%3DeFOy7ZTwtP2g%2BW9oJDszqQw%40mail.gmail.com#bd4f64b73cb3b969e119da7e5a7b1f30\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n", "msg_date": "Fri, 1 Apr 2022 16:53:44 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is monotonous xid8 is a right way to do?" } ]
[ { "msg_contents": "So far I've been moving patches with failing tests to Waiting on\nAuthor but there are a number with \"minor\" failures or failures which\nlook unrelated to the patch. There are 20 patches with at least one\nfailing test in Ready for Comitter (2) and Needs Review (18).\n\nHere's a summary of the reasons for these failures. Mostly they're\nbitrot due to other patches but there are a few test failures and one\nwhere Cirrus is just serving up a blank page...\n\n\nReady for Committer:\n\n37/3496 Avoid erroring out when unable to remove or parse logical\nrewrite files to save ...\n\n[14:12:40.557] t/013_crash_restart.pl (Wstat: 256 Tests: 18 Failed: 1)\n[14:12:40.557] Failed test: 5\n\n37/3574 Frontend error logging style\n\nPatch bitrot: plan is to let this sit till the end of the commitfest,\nthen rebase and push\n\nNeeds Review:\n\n37/3374 Allows database-specific role memberships\n\nPatch has bitrotted. It does seem a bit unfair to ask authors to keep\nrebasing their patches and then not review them until they bitrot\nagain .... I guess this is a downside of concentrating a lot of\ncommits in the CF.\n\n37/3506 CREATEROLE and role ownership hierarchies\n\nThread digression was committed but main patch is still pending. cfbot\nis testing the digression.\n\n37/3500 Collecting statistics about contents of JSONB columns\n\nPatch bitrotted, presumably due to the other JSON patchset. I moved it\nto Waiting on Author but I think it's mainly in need of design review\nso it could still be useful to look at it.\n\n37/2433 Erase the distinctClause if the result is unique by definition\n\nPatch has bitrotted. But there are multiple versions of this patchset\nwith different approaches and there is a lot of divergence of opinions\non which way to go. 
I'm inclined to mark this \"returned with feedback\"\nand suggest to start with fresh threads on individual parts of this\npatch.\n\n37/2266 Fix up partitionwise join on how equi-join conditions between\nthe partition keys...\n\n[08:10:46.390] # poll_query_until timed out executing this query:\n[08:10:46.390] # SELECT '0/13AC5E80' <= replay_lsn AND state = 'streaming'\n[08:10:46.390] # FROM pg_catalog.pg_stat_replication\n[08:10:46.390] # WHERE application_name IN ('standby_1', 'walreceiver')\n\n37/2957 Identify missing publications from publisher while\ncreate/alter subscription.\n\nIt looks like this was committed, yay! Please remember to update the\nCF app when committing patches. It's sometimes hard for me to tell if\nit was only part of a patch or a related fix discovered in discussing\na patch that was committed or if the commit resolves the entry.\n\n37/2218 Implement INSERT SET syntax\n\nTom: \"This patch has been basically ignored for a full two years now\n(Remarkably, it's still passing in the cfbot.)\"\n\nWell it has bitrotted in gram.y now :(\n\n37/2138 Incremental Materialized View Maintenance\n\nbitrotted in trigger.c. This is a huge patchset and probably hard to\nkeep rebased and I think still looking for more fundamental design\nreview.\n\n37/3071 Lazy JIT IR code generation to increase JIT speed with partitions\n\n[00:01:20.167] t/013_partition.pl (Wstat: 7424 Tests: 31 Failed: 0)\n[00:01:20.167] Non-zero exit status: 29\n[00:01:20.167] Parse errors: No plan found in TAP output\n\n37/3181 Map WAL segment files on PMEM as WAL buffers\n\n[19:38:40.880] configure: error: library 'libpmem' (version >= 1.5) is\nrequired for PMEM support\n\nNot sure if this is just the expected output if a platform lacks this\nlibrary or if the tests need to be adjusted to add a configure option.\n\n37/3052 Merging statistics from children instead of re-sampling everything\n\nUh, Cirrus is just giving me a blank page on this one. 
No idea what's\ngoing on here.\n\n37/3490 Pluggable toaster\n\nAn updated patch has been posted since the failure\n\n37/1712 Remove self join on a unique column\n\nPatch bitrotted in a regression test expected output\n\n37/3433 Removing more vacuumlazy.c special cases, relfrozenxid optimizations\n\n[06:50:33.148] vacuum.c: In function ‘vac_update_relstats’:\n[06:50:33.148] vacuum.c:1481:6: error: ‘oldrelminmxid’ may be used\nuninitialized in this function [-Werror=maybe-uninitialized]\n[06:50:33.148] 1481 | errmsg_internal(\"overwrote invalid\npg_class.relminmxid value %u with new value %u in table \\\"%s\\\"\",\n[06:50:33.148] | ^~~~~~~~~~~~~~~\n[06:50:33.148] vacuum.c:1475:6: error: ‘oldrelfrozenxid’ may be used\nuninitialized in this function [-Werror=maybe-uninitialized]\n[06:50:33.148] 1475 | errmsg_internal(\"overwrote invalid\npg_class.relfrozenxid value %u with new value %u in table \\\"%s\\\"\",\n[06:50:33.148] | ^~~~~~~~~~~~~~~\n\n\n37/2901 SQL/JSON: functions\n\nPatchset is mostly applied so the cfbot is seeing patch conflicts.\n\n37/3589 Shared memory based stats collector\n\ncfbot is trying to apply a digression patch rather than the main patch\n(which is up to version 67! and has 17 parts)\n\n37/3546 Support custom authentication methods using hooks\n\nPatch bitrot in hba.c. 
But the discussion is mainly about the design\nand whether it's \"the right way to go\" anyways.\n\n37/3048 pg_stat_statements: Track statement entry timestamp\n\n[13:19:51.544] pg_stat_statements.c: In function ‘entry_reset’:\n[13:19:51.544] pg_stat_statements.c:2598:32: error:\n‘minmax_stats_reset’ may be used uninitialized in this function\n[-Werror=maybe-uninitialized]\n[13:19:51.544] 2598 | entry->minmax_stats_since = minmax_stats_reset;\n[13:19:51.544] | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~\n\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 1 Apr 2022 11:42:38 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Patches with failing tests in Commitfest" } ]
[ { "msg_contents": "To whom it may concern:\n\nHere is my proposal (https://docs.google.com/document/d/1KKKDU6iP0GOkAMSdGRyxFJRgW964JFVROnpKkbzWyNw/edit?usp=sharing <https://docs.google.com/document/d/1KKKDU6iP0GOkAMSdGRyxFJRgW964JFVROnpKkbzWyNw/edit?usp=sharing>) for GSoC. Please check! \n\nThanks,\nJinlang Wang\n", "msg_date": "Fri, 1 Apr 2022 12:39:17 -0400", "msg_from": "Jinlang Wang <wangjinlang226@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: pgmoneta: Write-Ahead Log (WAL) infrastructure (2022) " }, { "msg_contents": "Hi Jinlang,\n\nOn 4/1/22 12:39, Jinlang Wang wrote:\n> To whom it may concern:\n>\n> Here is my proposal (https://docs.google.com/document/d/1KKKDU6iP0GOkAMSdGRyxFJRgW964JFVROnpKkbzWyNw/edit?usp=sharing <https://docs.google.com/document/d/1KKKDU6iP0GOkAMSdGRyxFJRgW964JFVROnpKkbzWyNw/edit?usp=sharing>) for GSoC. Please check!\n>\n\nThanks for your proposal to Google Summer of Code 2022 !\n\n\nWe'll follow up off-list to get this finalized.\n\n\nBest regards,\n\n  Jesper\n\n\n\n\n", "msg_date": "Fri, 1 Apr 2022 13:00:05 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: GSoC: pgmoneta: Write-Ahead Log (WAL) infrastructure (2022)" } ]
[ { "msg_contents": "Hi,\n\nright now I am looking at a test added in the shmstats patch that's slow on\nCI, on windows only. Unfortunately the regress_log_* output is useless as-is\nto figure out where things hang.\n\nI've hit this several times before. Of course it's not too hard to hack up\nsomething printing elapsed time. But ISTM that it'd be better if we were able\nto prefix the logging into regress_log_* with something like\n[timestamp + time since start of test]\nor\n[timestamp + time since start of test + time since last log message]\n\n\nThis isn't just useful to figure out what parts of test are slow, but also\nhelps correlate server logs with the regress_log_* output. Which right now is\nhard and inaccurate, requiring manually correlating statements between server\nlog and the tap test (often there's no logging for statements in the\nregress_log_*).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:21:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "On Fri, Apr 01, 2022 at 10:21:50AM -0700, Andres Freund wrote:\n> right now I am looking at a test added in the shmstats patch that's slow on\n> CI, on windows only. Unfortunately the regress_log_* output is useless as-is\n> to figure out where things hang.\n> \n> I've hit this several times before. Of course it's not too hard to hack up\n> something printing elapsed time. But ISTM that it'd be better if we were able\n> to prefix the logging into regress_log_* with something like\n> [timestamp + time since start of test]\n> or\n> [timestamp + time since start of test + time since last log message]\n> \n> \n> This isn't just useful to figure out what parts of test are slow, but also\n> helps correlate server logs with the regress_log_* output. 
Which right now is\n> hard and inaccurate, requiring manually correlating statements between server\n> log and the tap test (often there's no logging for statements in the\n> regress_log_*).\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:44:09 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "\nOn 4/1/22 13:44, Nathan Bossart wrote:\n> On Fri, Apr 01, 2022 at 10:21:50AM -0700, Andres Freund wrote:\n>> right now I am looking at a test added in the shmstats patch that's slow on\n>> CI, on windows only. Unfortunately the regress_log_* output is useless as-is\n>> to figure out where things hang.\n>>\n>> I've hit this several times before. Of course it's not too hard to hack up\n>> something printing elapsed time. But ISTM that it'd be better if we were able\n>> to prefix the logging into regress_log_* with something like\n>> [timestamp + time since start of test]\n>> or\n>> [timestamp + time since start of test + time since last log message]\n>>\n>>\n>> This isn't just useful to figure out what parts of test are slow, but also\n>> helps correlate server logs with the regress_log_* output. Which right now is\n>> hard and inaccurate, requiring manually correlating statements between server\n>> log and the tap test (often there's no logging for statements in the\n>> regress_log_*).\n> +1\n>\n\n\nMaybe one way would be to make a change in\nsrc/test/perl/PostgreSQL/Test/SimpleTee.pm. The simplest thing would\njust be to add a timestamp, the other things would involve a bit more\nbookkeeping. 
It should also be checked to make sure it doesn't add too\nmuch overhead, although I would be surprised if it did.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 1 Apr 2022 15:16:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "Hi,\n\nOn 2022-04-01 10:21:50 -0700, Andres Freund wrote:\n> right now I am looking at a test added in the shmstats patch that's slow on\n> CI, on windows only. Unfortunately the regress_log_* output is useless as-is\n> to figure out where things hang.\n\n<two hours of debugging later>\n\nThis turns out to be a problem somewhere in the tap testing infrastructure,\nrather than the test itself. The slow thing wasn't anything the test did. All\nthe time is spent in an is(). To verify that, I added a bunch of\n\nok(1, \"this is some long output to test a theory\");\nprint_time();\n\na few tests before the slow test. And:\n\nok 7 - this is some long output to test a theory\n# test theory 1: 0.000 sec\nok 8 - this is some long output to test a theory\n# test theory 2: 0.000 sec\nok 9 - this is some long output to test a theory\n# test theory 3: 40.484 sec\nok 10 - this is some long output to test a theory\n# test theory 4: 0.001 sec\nok 11 - this is some long output to test a theory\n# test theory 5: 0.000 sec\n\nThe problem also vanishes when running tests without PROVE_FLAGS=-j$something\n\n\nWhat this looks like to me is that when running tests concurrently, the buffer\nof the file descriptor used to report tap test output fills up. The blocked\ntest can only progress once prove gets around to reading from that fd,\npresumably when another test finishes.\n\nGah. I want my time back.\n\n\nI can't reproduce a similar issue on linux. 
But of course I'm using a newer\nperl, and it's likely a timing dependent issue, so it's not guaranteed to be a\nwindows problem. But ...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Apr 2022 12:29:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "On 4/1/22 15:16, Andrew Dunstan wrote:\n> On 4/1/22 13:44, Nathan Bossart wrote:\n>> On Fri, Apr 01, 2022 at 10:21:50AM -0700, Andres Freund wrote:\n>>> right now I am looking at a test added in the shmstats patch that's slow on\n>>> CI, on windows only. Unfortunately the regress_log_* output is useless as-is\n>>> to figure out where things hang.\n>>>\n>>> I've hit this several times before. Of course it's not too hard to hack up\n>>> something printing elapsed time. But ISTM that it'd be better if we were able\n>>> to prefix the logging into regress_log_* with something like\n>>> [timestamp + time since start of test]\n>>> or\n>>> [timestamp + time since start of test + time since last log message]\n>>>\n>>>\n>>> This isn't just useful to figure out what parts of test are slow, but also\n>>> helps correlate server logs with the regress_log_* output. Which right now is\n>>> hard and inaccurate, requiring manually correlating statements between server\n>>> log and the tap test (often there's no logging for statements in the\n>>> regress_log_*).\n>> +1\n>>\n>\n> Maybe one way would be to make a change in\n> src/test/perl/PostgreSQL/Test/SimpleTee.pm. The simplest thing would\n> just be to add a timestamp, the other things would involve a bit more\n> bookkeeping. It should also be checked to make sure it doesn't add too\n> much overhead, although I would be surprised if it did.\n>\n\n\nAlong these lines. Untested, it clearly needs a bit of polish (e.g. a\nway to turn it on or off for a filehandle). 
We could use Time::Hires if\nyou want higher resolution times.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 1 Apr 2022 16:25:44 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "On 4/1/22 16:25, Andrew Dunstan wrote:\n> On 4/1/22 15:16, Andrew Dunstan wrote:\n>> On 4/1/22 13:44, Nathan Bossart wrote:\n>>> On Fri, Apr 01, 2022 at 10:21:50AM -0700, Andres Freund wrote:\n>>>> right now I am looking at a test added in the shmstats patch that's slow on\n>>>> CI, on windows only. Unfortunately the regress_log_* output is useless as-is\n>>>> to figure out where things hang.\n>>>>\n>>>> I've hit this several times before. Of course it's not too hard to hack up\n>>>> something printing elapsed time. But ISTM that it'd be better if we were able\n>>>> to prefix the logging into regress_log_* with something like\n>>>> [timestamp + time since start of test]\n>>>> or\n>>>> [timestamp + time since start of test + time since last log message]\n>>>>\n>>>>\n>>>> This isn't just useful to figure out what parts of test are slow, but also\n>>>> helps correlate server logs with the regress_log_* output. Which right now is\n>>>> hard and inaccurate, requiring manually correlating statements between server\n>>>> log and the tap test (often there's no logging for statements in the\n>>>> regress_log_*).\n>>> +1\n>>>\n>> Maybe one way would be to make a change in\n>> src/test/perl/PostgreSQL/Test/SimpleTee.pm. The simplest thing would\n>> just be to add a timestamp, the other things would involve a bit more\n>> bookkeeping. It should also be checked to make sure it doesn't add too\n>> much overhead, although I would be surprised if it did.\n>>\n>\n> Along these lines. Untested, it clearly needs a bit of polish (e.g. a\n> way to turn it on or off for a filehandle). 
We could use Time::Hires if\n> you want higher resolution times.\n>\n>\n\n\nHere's a version that actually works. It produces traces that look like\nthis:\n\n\nandrew@emma:pg_upgrade $ grep '([0-9]*s)'\ntmp_check/log/regress_log_002_pg_upgrade\n[21:55:06](63s) ok 1 - dump before running pg_upgrade\n[21:55:22](79s) ok 2 - run of pg_upgrade for new instance\n[21:55:27](84s) ok 3 - old and new dumps match after pg_upgrade\n[21:55:27](84s) 1..3\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 2 Apr 2022 06:57:20 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "\nOn 4/2/22 06:57, Andrew Dunstan wrote:\n> Here's a version that actually works. It produces traces that look like\n> this:\n>\n>\n> andrew@emma:pg_upgrade $ grep '([0-9]*s)'\n> tmp_check/log/regress_log_002_pg_upgrade\n> [21:55:06](63s) ok 1 - dump before running pg_upgrade\n> [21:55:22](79s) ok 2 - run of pg_upgrade for new instance\n> [21:55:27](84s) ok 3 - old and new dumps match after pg_upgrade\n> [21:55:27](84s) 1..3\n>\n\nI know there's a lot going on, but are people interested in this? It's a\npretty small patch to produce something that seems quite useful.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 17:02:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/2/22 06:57, Andrew Dunstan wrote:\n>> Here's a version that actually works. 
It produces traces that look like\n>> this:\n>> andrew@emma:pg_upgrade $ grep '([0-9]*s)'\n>> tmp_check/log/regress_log_002_pg_upgrade\n>> [21:55:06](63s) ok 1 - dump before running pg_upgrade\n>> [21:55:22](79s) ok 2 - run of pg_upgrade for new instance\n>> [21:55:27](84s) ok 3 - old and new dumps match after pg_upgrade\n>> [21:55:27](84s) 1..3\n\n> I know there's a lot going on, but are people interested in this? It's a\n> pretty small patch to produce something that seems quite useful.\n\nI too think that the elapsed time is useful. I'm less convinced\nthat the time-of-day marker is useful.\n\nIt also seems kind of odd that the elapsed time accumulates rather\nthan being reset for each line. As it stands one would be doing a lot\nof mental subtractions rather than being able to see directly how long\neach step takes. I suppose that on fast machines where each step is\nunder one second, accumulation would be more useful than printing a\nlot of zeroes --- but on the other hand, those aren't the cases where\nyou're going to be terribly concerned about the runtime.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 17:21:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 4/2/22 06:57, Andrew Dunstan wrote:\n> >> Here's a version that actually works. It produces traces that look like\n> >> this:\n> >> andrew@emma:pg_upgrade $ grep '([0-9]*s)'\n> >> tmp_check/log/regress_log_002_pg_upgrade\n> >> [21:55:06](63s) ok 1 - dump before running pg_upgrade\n> >> [21:55:22](79s) ok 2 - run of pg_upgrade for new instance\n> >> [21:55:27](84s) ok 3 - old and new dumps match after pg_upgrade\n> >> [21:55:27](84s) 1..3\n> \n> > I know there's a lot going on, but are people interested in this? 
It's a\n> > pretty small patch to produce something that seems quite useful.\n\nIt's been 0 days since I last wanted this.\n\n\n> I too think that the elapsed time is useful. I'm less convinced\n> that the time-of-day marker is useful.\n\nI think it'd be quite useful if it had more precision - it's a pita to\ncorrelate regress_log_* output with server logs.\n\n\n> It also seems kind of odd that the elapsed time accumulates rather\n> than being reset for each line. As it stands one would be doing a lot\n> of mental subtractions rather than being able to see directly how long\n> each step takes. I suppose that on fast machines where each step is\n> under one second, accumulation would be more useful than printing a\n> lot of zeroes --- but on the other hand, those aren't the cases where\n> you're going to be terribly concerned about the runtime.\n\nI like both - if you want to find where the slowdown among a lot of log lines\nis, it's easier to look at the time accumulated elapsed time. If you actually\nwant to see how long individual things take, non-accumulated is more useful.\n\nI've printed both in the past...\n\nAny chance we could print higher res time?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 14:38:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n>> I too think that the elapsed time is useful. I'm less convinced\n>> that the time-of-day marker is useful.\n\n> I think it'd be quite useful if it had more precision - it's a pita to\n> correlate regress_log_* output with server logs.\n\nFair point. Maybe we could keep the timestamp (with ms precision\nif possible) and then the parenthetical bit is time-since-last-line\n(also with ms precision)? 
I think that would more or less satisfy\nboth uses.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 17:45:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 17:45:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n> >> I too think that the elapsed time is useful. I'm less convinced\n> >> that the time-of-day marker is useful.\n> \n> > I think it'd be quite useful if it had more precision - it's a pita to\n> > correlate regress_log_* output with server logs.\n> \n> Fair point. Maybe we could keep the timestamp (with ms precision\n> if possible) and then the parenthetical bit is time-since-last-line\n> (also with ms precision)? I think that would more or less satisfy\n> both uses.\n\nWould work for me...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 14:58:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "\nOn 4/7/22 17:58, Andres Freund wrote:\n> Hi,\n>\n> On 2022-04-07 17:45:09 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n>>>> I too think that the elapsed time is useful. I'm less convinced\n>>>> that the time-of-day marker is useful.\n>>> I think it'd be quite useful if it had more precision - it's a pita to\n>>> correlate regress_log_* output with server logs.\n>> Fair point. Maybe we could keep the timestamp (with ms precision\n>> if possible) and then the parenthetical bit is time-since-last-line\n>> (also with ms precision)? I think that would more or less satisfy\n>> both uses.\n> Would work for me...\n>\n\nAll doable. Time::HiRes gives us a higher resolution timer. 
I'll post a\nnew version in a day or two.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 19:55:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "On 4/7/22 19:55, Andrew Dunstan wrote:\n> On 4/7/22 17:58, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-04-07 17:45:09 -0400, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> On 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n>>>>> I too think that the elapsed time is useful. I'm less convinced\n>>>>> that the time-of-day marker is useful.\n>>>> I think it'd be quite useful if it had more precision - it's a pita to\n>>>> correlate regress_log_* output with server logs.\n>>> Fair point. Maybe we could keep the timestamp (with ms precision\n>>> if possible) and then the parenthetical bit is time-since-last-line\n>>> (also with ms precision)? I think that would more or less satisfy\n>>> both uses.\n>> Would work for me...\n>>\n> All doable. Time::HiRes gives us a higher resolution timer. 
I'll post a\n> new version in a day or two.\n\n\nNew version attached.\n\n\nSample traces:\n\n\nandrew@emma:log $ egrep '^\\[[0-9][0-9]:[00-9][0-9]:' regress_log_020_pg_receivewal | tail -n 15\n[09:22:45.031](0.000s) ok 30 # skip postgres was not built with LZ4 support\n[09:22:45.032](0.000s) ok 31 # skip postgres was not built with LZ4 support\n[09:22:45.296](0.265s) ok 32 - streaming some WAL\n[09:22:45.297](0.001s) ok 33 - check that previously partial WAL is now complete\n[09:22:45.298](0.001s) ok 34 - check stream dir permissions\n[09:22:45.298](0.000s) # Testing pg_receivewal with slot as starting streaming point\n[09:22:45.582](0.284s) ok 35 - pg_receivewal fails with non-existing slot: exit code not 0\n[09:22:45.583](0.001s) ok 36 - pg_receivewal fails with non-existing slot: matches\n[09:22:45.618](0.036s) ok 37 - WAL streamed from the slot's restart_lsn\n[09:22:45.619](0.001s) ok 38 - WAL from the slot's restart_lsn has been archived\n[09:22:46.597](0.978s) ok 39 - Stream some wal after promoting, resuming from the slot's position\n[09:22:46.598](0.001s) ok 40 - WAL segment 00000001000000000000000B archived after timeline jump\n[09:22:46.598](0.000s) ok 41 - WAL segment 00000002000000000000000C archived after timeline jump\n[09:22:46.598](0.000s) ok 42 - timeline history file archived after timeline jump\n[09:22:46.599](0.001s) 1..42\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 8 Apr 2022 09:51:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" 
}, { "msg_contents": "\nOn 4/8/22 09:51, Andrew Dunstan wrote:\n> On 4/7/22 19:55, Andrew Dunstan wrote:\n>> On 4/7/22 17:58, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2022-04-07 17:45:09 -0400, Tom Lane wrote:\n>>>> Andres Freund <andres@anarazel.de> writes:\n>>>>> On 2022-04-07 17:21:09 -0400, Tom Lane wrote:\n>>>>>> I too think that the elapsed time is useful. I'm less convinced\n>>>>>> that the time-of-day marker is useful.\n>>>>> I think it'd be quite useful if it had more precision - it's a pita to\n>>>>> correlate regress_log_* output with server logs.\n>>>> Fair point. Maybe we could keep the timestamp (with ms precision\n>>>> if possible) and then the parenthetical bit is time-since-last-line\n>>>> (also with ms precision)? I think that would more or less satisfy\n>>>> both uses.\n>>> Would work for me...\n>>>\n>> All doable. Time::HiRes gives us a higher resolution timer. I'll post a\n>> new version in a day or two.\n>\n> New version attached.\n>\n>\n> Sample traces:\n>\n>\n> andrew@emma:log $ egrep '^\\[[0-9][0-9]:[00-9][0-9]:' regress_log_020_pg_receivewal | tail -n 15\n> [09:22:45.031](0.000s) ok 30 # skip postgres was not built with LZ4 support\n> [09:22:45.032](0.000s) ok 31 # skip postgres was not built with LZ4 support\n> [09:22:45.296](0.265s) ok 32 - streaming some WAL\n> [09:22:45.297](0.001s) ok 33 - check that previously partial WAL is now complete\n> [09:22:45.298](0.001s) ok 34 - check stream dir permissions\n> [09:22:45.298](0.000s) # Testing pg_receivewal with slot as starting streaming point\n> [09:22:45.582](0.284s) ok 35 - pg_receivewal fails with non-existing slot: exit code not 0\n> [09:22:45.583](0.001s) ok 36 - pg_receivewal fails with non-existing slot: matches\n> [09:22:45.618](0.036s) ok 37 - WAL streamed from the slot's restart_lsn\n> [09:22:45.619](0.001s) ok 38 - WAL from the slot's restart_lsn has been archived\n> [09:22:46.597](0.978s) ok 39 - Stream some wal after promoting, resuming from the slot's position\n> 
[09:22:46.598](0.001s) ok 40 - WAL segment 00000001000000000000000B archived after timeline jump\n> [09:22:46.598](0.000s) ok 41 - WAL segment 00000002000000000000000C archived after timeline jump\n> [09:22:46.598](0.000s) ok 42 - timeline history file archived after timeline jump\n> [09:22:46.599](0.001s) 1..42\n>\n>\n\n\nIn the absence of further comment I have pushed this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 10 Apr 2022 09:23:16 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" }, { "msg_contents": "On 2022-04-10 09:23:16 -0400, Andrew Dunstan wrote:\n> In the absence of further comment I have pushed this.\n\nThanks!\n\n\n", "msg_date": "Sun, 10 Apr 2022 09:43:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Can we automatically add elapsed times to tap test log?" } ]
[ { "msg_contents": "Hi,\n\nI think it's pretty evident that the names we've chosen for the\nvarious PostgreSQL shutdown modes are pretty terrible, and maybe we\nshould try to do something about that. There is nothing \"smart\" about\na smart shutdown. The usual result of attempting a smart shutdown is\nthat the server never shuts down at all, because typically there are\ngoing to be some applications using connections that are kept open\nmore or less permanently. What ends up happening when you attempt a\n\"smart\" shutdown is that you've basically put the server into a mode\nwhere you're irreversibly committed to accepting no new connections,\nbut because you have a connection pooler or something that keeps\nconnections open forever, you never shut down either. It is in effect\na denial-of-service attack on the database you're supposed to be\nadministering.\n\nSimilarly, \"fast\" shutdowns are not in any way fast. It is pretty\ncommon for a fast shutdown to take many minutes or even tens of\nminutes to complete. This doesn't require some kind of extreme\nworkload to hit; I've run into it during casual benchmarking runs.\nIt's very easy to have enough dirty data in shared buffers, or enough\ndirty in the operating system cache that will have to be fsync'd in\norder to complete the shutdown checkpoint, to make things take an\nextremely long time. In some ways, this is an even more effective\ndenial-of-service attack than a smart shutdown. True, the database\nwill at some point actually finish shutting down, but in the meantime\nnot only will we not accept new connections but we'll evict all of the\nexisting ones. Good luck maintaining five nines of availability if\nwaiting for a clean shutdown to complete is any part of the process.\nIt might be smarter to initiate a regular (non-shutdown) checkpoint\nfirst, without cutting off connections, and then when that finishes,\nproceed as we do now. 
The second checkpoint will complete a lot\nfaster, so while the overall operation still won't be fast, at least\nwe'd be refusing connections for a shorter period of time before the\nsystem is actually shut down and you can do whatever maintenance you\nneed to do.\n\n\"immediate\" shutdowns aren't as bad as the other two, but they're\nstill bad. One of the big problems is that I encounter in this area is\nthat Oracle uses the name \"immediate\" shutdown to mean a normal\nshutdown with a checkpoint allowing for a clean restart. Users coming\nfrom Oracle are sometimes extremely surprised to discover that an\nimmediate shutdown is actually a server crash that will require\nrecovery. Even if you don't come from Oracle, there's really nothing\nabout the name of this shutdown mode that intrinsically makes you\nunderstand that it's something you should do only as a last resort.\nWho doesn't like things that are immediate? The problem with this\ntheory is that you make the shutdown quicker at the price of startup\nbecoming much, much slower, because the crash recovery is very likely\ngoing to take a whole lot longer than the shutdown checkpoint would\nhave done.\n\nI attach herewith a modest patch to rename these shutdown modes to\nmore accurately correspond to their actual characteristics.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 1 Apr 2022 13:22:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "PostgreSQL shutdown modes" }, { "msg_contents": "Isn't this missing support in pg_dumb ?\n\n\n", "msg_date": "Fri, 1 Apr 2022 13:35:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL shutdown modes" }, { "msg_contents": "On Fri, Apr 01, 2022 at 01:22:05PM -0400, Robert Haas wrote:\n> I attach herewith a modest patch to rename these shutdown modes to\n> more accurately correspond to their actual characteristics.\n>\n> Date: Fri, 1 Apr 
2022 12:50:05 -0400\n\nI love the idea. Just in time, before the feature freeze deadline.\n--\nMichael", "msg_date": "Sat, 2 Apr 2022 11:58:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL shutdown modes" }, { "msg_contents": "+1 for the idea of changing the name, as it's really confusing.\n\nI had quick check in the patch and noticed below replacements:\n\n-#define SmartShutdown 1\n-#define FastShutdown 2\n-#define ImmediateShutdown 3\n+#define DumbShutdown 1\n+#define SlowShutdown 2\n+#define CrappyShutdown 3\n\nAbout the new naming, if \"Crappy\" can be replaced with something else. But\nwas not able to come up with any proper suggestions here. Or may be\n\"Immediate\" is appropriate, as here it's talking about a \"Shutdown\"\noperation.\n\n\n\nOn Sat, Apr 2, 2022 at 8:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 01, 2022 at 01:22:05PM -0400, Robert Haas wrote:\n> > I attach herewith a modest patch to rename these shutdown modes to\n> > more accurately correspond to their actual characteristics.\n> >\n> > Date: Fri, 1 Apr 2022 12:50:05 -0400\n>\n> I love the idea. Just in time, before the feature freeze deadline.\n> --\n> Michael\n>\n\n\n-- \nRushabh Lathia", "msg_date": "Sat, 2 Apr 2022 11:38:47 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL shutdown modes" }, { "msg_contents": "On 2022-04-01 13:22, Robert Haas wrote:\n> I attach herewith a modest patch to rename these shutdown modes to\n> more accurately correspond to their actual characteristics.\n\nI've waited for April 2nd to submit this comment, but it seemed to me \nthat the\nsuggestion about the first-pass checkpoint in 'slow' mode is a \nno-foolin' good one.\nThen I wondered whether there could be an option to accompany the 'dumb' \nmode that\nwould take a WHERE clause, to be implicitly applied to pg_stat_activity, \nwhose\npurpose would be to select those sessions that are ok to evict without \nwaiting for\nthem to exit. It could recognize, say, backend connections in no current \ntransaction\nthat are from your pesky app or connection pooler that holds things \nopen. 
It could\nalso, for example, select things in transaction state but where\n current_timestamp - state_change > '5 minutes' (so it would be \nre-evaluated every\nso often until ready to shut down).\n\nFor conciseness (and sanity), maybe the WHERE clause could be implicitly \napplied,\nnot to pg_stat_activity directly, but to a (virtual or actual) view that \nhas\nalready been restricted to client backend sessions, and already has a \ncolumn\nfor current_timestamp - state_change.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 02 Apr 2022 09:39:52 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: PostgreSQL shutdown modes" }, { "msg_contents": "At Sat, 2 Apr 2022 11:58:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Apr 01, 2022 at 01:22:05PM -0400, Robert Haas wrote:\n> > I attach herewith a modest patch to rename these shutdown modes to\n> > more accurately correspond to their actual characteristics.\n> >\n> > Date: Fri, 1 Apr 2022 12:50:05 -0400\n> \n> I love the idea. Just in time, before the feature freeze deadline.\n\nFWIW, this came in to my mailbox with at \"4/2 2:22 JST\":p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 04 Apr 2022 12:04:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL shutdown modes" }, { "msg_contents": "On Sat, Apr 2, 2022 at 9:39 AM <chap@anastigmatix.net> wrote:\n> I've waited for April 2nd to submit this comment, but it seemed to me\n> that the\n> suggestion about the first-pass checkpoint in 'slow' mode is a\n> no-foolin' good one.\n\nYeah. While the patch itself is mostly in jest, everything I wrote in\nthe email is unfortunately pretty much 100% accurate, no fooling. 
I\nthink it would be worth doing a number of things:\n\n- Provide some way of backing out of smart shutdown mode.\n- Provide some way of making a smart shutdown turn into a fast\nshutdown after a configurable period of time.\n- Do a preparatory checkpoint before the real shutdown checkpoint\nespecially in fast mode, but maybe also in smart mode. Maybe there's\nsome even smarter thing we could be doing here, not sure what exactly.\n- Consider renaming \"immediate\" mode, maybe to \"crash\" or something.\nOracle uses \"abort\".\n\n> Then I wondered whether there could be an option to accompany the 'dumb'\n> mode that\n> would take a WHERE clause, to be implicitly applied to pg_stat_activity,\n> whose\n> purpose would be to select those sessions that are ok to evict without\n> waiting for\n> them to exit. It could recognize, say, backend connections in no current\n> transaction\n> that are from your pesky app or connection pooler that holds things\n> open. It could\n> also, for example, select things in transaction state but where\n> current_timestamp - state_change > '5 minutes' (so it would be\n> re-evaluated every\n> so often until ready to shut down).\n\nSeems like this might be better done in user-space than hard-coded\ninto the server behavior.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:19:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL shutdown modes" } ]
[ { "msg_contents": "Hi, hackers. Thank you for your attention to this topic.\n\nJulien Rouhaud wrote:\n> +static void show_loop_info(Instrumentation *instrument, bool isworker,\n> + ExplainState *es);\n> \n> I think this should be done as a separate refactoring commit.\nSure. I divided the patch. Now Justin's refactor commit is separated. \nAlso I actualized it a bit.\n\n> Most of the comments I have are easy to fix. But I think that the real \n> problem\n> is the significant overhead shown by Ekaterina that for now would apply \n> even if\n> you don't consume the new stats, for instance if you have \n> pg_stat_statements.\n> And I'm still not sure of what is the best way to avoid that.\nI took your advice about InstrumentOption. Now INSTRUMENT_EXTRA exists.\nSo currently it's no overheads during basic load. Operations using \nINSTRUMENT_ALL contain overheads (because of INSTRUMENT_EXTRA is a part \nof INSTRUMENT_ALL), but they are much less significant than before. I \napply new overhead statistics collected by pgbench with auto _explain \nenabled.\n\n> Why do you need to initialize min_t and min_tuples but not max_t and\n> max_tuples while both will initially be 0 and possibly updated \n> afterwards?\nWe need this initialization for min values so comment about it located \nabove the block of code with initialization.\n\nI am convinced that the latest changes have affected the patch in a \npositive way. I'll be pleased to hear your thoughts on this.\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 01 Apr 2022 23:46:47 +0300", "msg_from": "Ekaterina Sokolova <e.sokolova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 01, 2022 at 11:46:47PM +0300, Ekaterina Sokolova wrote:\n>\n> > Most of the comments I have are easy to fix. 
But I think that the real\n> > problem\n> > is the significant overhead shown by Ekaterina that for now would apply\n> > even if\n> > you don't consume the new stats, for instance if you have\n> > pg_stat_statements.\n> > And I'm still not sure of what is the best way to avoid that.\n> I took your advice about InstrumentOption. Now INSTRUMENT_EXTRA exists.\n> So currently it's no overheads during basic load. Operations using\n> INSTRUMENT_ALL contain overheads (because of INSTRUMENT_EXTRA is a part of\n> INSTRUMENT_ALL), but they are much less significant than before. I apply new\n> overhead statistics collected by pgbench with auto _explain enabled.\n\nCan you give a bit more details on your bench scenario? I see contradictory\nresults, where the patched version with more code is sometimes way faster,\nsometimes way slower. If you're using pgbench\ndefault queries (including write queries) I don't think that any of them will\nhit the loop code, so it's really a best case scenario. Also write queries\nwill make tests less stable for no added value wrt. this code.\n\nIdeally you would need a custom scenario with a single read-only query\ninvolving a nested loop or something like that to check how much overhead you\nreally get when you cumulate those values. 
I will try to\n>\n> > Why do you need to initialize min_t and min_tuples but not max_t and\n> > max_tuples while both will initially be 0 and possibly updated\n> > afterwards?\n> We need this initialization for min values so comment about it located above\n> the block of code with initialization.\n\nSure, but if we're going to have a branch for nloops == 0, I think it would be\nbetter to avoid redundant / useless instructions, something like:\n\nif (nloops == 0)\n{\n min_t = totaltime;\n min_tuple = tuplecount;\n}\nelse\n{\n if (min_t...)\n ...\n}\n\nWhile on that part of the patch, there's an extra new line between max_t and\nmin_tuple processing.\n\n\n", "msg_date": "Sat, 2 Apr 2022 22:43:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "This is not passing regression tests due to some details of the plan\noutput - marking Waiting on Author:\n\ndiff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out\nc:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out\n--- c:/cirrus/src/test/regress/expected/partition_prune.out 2022-04-05\n17:00:25.433576100 +0000\n+++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out\n2022-04-05 17:18:30.092203500 +0000\n@@ -2251,10 +2251,7 @@\n Workers Planned: 2\n Workers Launched: N\n -> Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)\n- Loop Min Rows: N Max Rows: N Total Rows: N\n Output: lprt_b.b\n- Worker 0: actual rows=N loops=N\n- Worker 1: actual rows=N loops=N\n -> Materialize (actual rows=N loops=N)\n Loop Min Rows: N Max Rows: N Total Rows: N\n Output: lprt_a.a\n@@ -2263,10 +2260,8 @@\n Workers Planned: 1\n Workers Launched: N\n -> Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)\n- Loop Min Rows: N Max Rows: N Total Rows: N\n Output: lprt_a.a\n- Worker 0: actual rows=N loops=N\n-(24 rows)\n+(19 rows)\n\n drop table lprt_b;\n delete from 
lprt_a where a = 1;\n\n\n", "msg_date": "Tue, 5 Apr 2022 17:14:09 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Tue, Apr 05, 2022 at 05:14:09PM -0400, Greg Stark wrote:\n> This is not passing regression tests due to some details of the plan\n> output - marking Waiting on Author:\n\nIt's unstable due to parallel workers.\nI'm not sure what the usual workarounds here.\nMaybe set parallel_leader_participation=no for this test.\n\n> diff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out\n> c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out\n> --- c:/cirrus/src/test/regress/expected/partition_prune.out 2022-04-05\n> 17:00:25.433576100 +0000\n> +++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out\n> 2022-04-05 17:18:30.092203500 +0000\n> @@ -2251,10 +2251,7 @@\n> Workers Planned: 2\n> Workers Launched: N\n> -> Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)\n> - Loop Min Rows: N Max Rows: N Total Rows: N\n> Output: lprt_b.b\n> - Worker 0: actual rows=N loops=N\n> - Worker 1: actual rows=N loops=N\n> -> Materialize (actual rows=N loops=N)\n> Loop Min Rows: N Max Rows: N Total Rows: N\n> Output: lprt_a.a\n> @@ -2263,10 +2260,8 @@\n> Workers Planned: 1\n> Workers Launched: N\n> -> Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)\n> - Loop Min Rows: N Max Rows: N Total Rows: N\n> Output: lprt_a.a\n> - Worker 0: actual rows=N loops=N\n> -(24 rows)\n> +(19 rows)\n\n\n", "msg_date": "Mon, 11 Apr 2022 07:34:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi, hackers!\n\nWe started discussion about overheads and how to calculate it correctly.\n\nJulien Rouhaud wrote:\n> Can you give a bit more details on your bench scenario? 
I see \n> contradictory\n> results, where the patched version with more code is sometimes way \n> faster,\n> sometimes way slower. If you're using pgbench\n> default queries (including write queries) I don't think that any of \n> them will\n> hit the loop code, so it's really a best case scenario. Also write \n> queries\n> will make tests less stable for no added value wrt. this code.\n> \n> Ideally you would need a custom scenario with a single read-only query\n> involving a nested loop or something like that to check how much \n> overhead you\n> really get when you cumulate those values.\nI created 2 custom scenarios. First one contains VERBOSE flag so this \nscenario uses extra statistics. Second one doesn't use new feature and \ndoesn't disable its use (therefore still collect data).\nI attach scripts for pgbench to this letter.\n\nMain conclusions are:\n1) the use of additional statistics affects no more than 4.5%;\n2) data collection affects no more than 1.5%.\nI think testing on another machine would be very helpful, so if you get \na chance, I'd be happy if you share your observations.\n\nSome fixes:\n\n> Sure, but if we're going to have a branch for nloops == 0, I think it \n> would be\n> better to avoid redundant / useless instructions\nRight. I done it.\n\nJustin Pryzby wrote:\n> Maybe set parallel_leader_participation=no for this test.\nThanks for reporting the issue and advice. I set \nparallel_leader_participation = off. 
I hope this helps to solve the \nproblem of inconsistencies in the outputs.\n\nIf you have any comments on this topic or want to share your \nimpressions, please write to me.\nThank you very much for your contribution to the development of this \npatch.\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jun 2022 20:16:06 +0300", "msg_from": "Ekaterina Sokolova <e.sokolova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 24, 2022 at 08:16:06PM +0300, Ekaterina Sokolova wrote:\n>\n> We started discussion about overheads and how to calculate it correctly.\n>\n> Julien Rouhaud wrote:\n> > Can you give a bit more details on your bench scenario?\n> > [...]\n> > Ideally you would need a custom scenario with a single read-only query\n> > involving a nested loop or something like that to check how much\n> > overhead you\n> > really get when you cumulate those values.\n> I created 2 custom scenarios. First one contains VERBOSE flag so this\n> scenario uses extra statistics. 
Second one doesn't use new feature and\n> doesn't disable its use (therefore still collect data).\n> I attach scripts for pgbench to this letter.\n\nI don't think that this scenario is really representative for the problem I was\nmentioning as you're only testing the overhead using the EXPLAIN (ANALYZE)\ncommand, which doesn't say much about normal query execution.\n\nI did a simple benchmark using a scale 50 pgbench on a pg_stat_statements\nenabled instance, and the following scenario:\n\nSET enable_mergejoin = off;\nSELECT count(*) FROM pgbench_accounts JOIN pgbench_tellers on aid = tid;\n\n(which forces a nested loop) and compared the result from this patch and fixing\npg_stat_statements to not request INSTRUMENT extra, something like:\n\ndiff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\nindex 049da9fe6d..9a2177e438 100644\n--- a/contrib/pg_stat_statements/pg_stat_statements.c\n+++ b/contrib/pg_stat_statements/pg_stat_statements.c\n@@ -985,7 +985,7 @@ pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)\n MemoryContext oldcxt;\n\n oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);\n- queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL, false);\n+ queryDesc->totaltime = InstrAlloc(1, (INSTRUMENT_ALL & ~INSTRUMENT_EXTRA), false);\n MemoryContextSwitchTo(oldcxt);\n }\n }\n\nIt turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly\nrequested by INSTRUMENT_ALL adds a ~27% overhead.\n\nI'm not sure that I actually believe these results, but they're really\nconsistent, so maybe that's real.\n\nAnyway, even if the overhead was only 1.5% like in your own benchmark, that\nstill wouldn't be acceptable. Such a feature is in my opinion very welcome,\nbut it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).\n\nNote that this was done using a \"production build\" (so with -O2, without assert\nand such). 
Doing the same on a debug build (and a scale 20 pgbench), the\noverhead is about 1.75%, which is closer to your result. What was the\nconfigure option you used for your benchmark?\n\nAlso, I don't think it's acceptable to ask every single extension that\ncurrently relies on INSTRUMENT_ALL to be patched and drop some random\nINSTRUMENT_XXX flags to avoid this overhead. So as I mentioned previously, I\nthink we should keep INSTRUMENT_ALL to mean something like \"all instrumentation\nthat gives metrics at the statement level\", and have INSTRUMENT_EXTRA be\noutside of INSTRUMENT_ALL. Maybe this new category should have a global flag\nto request all of them, and maybe there should be some additional alias to grab\nall categories.\n\nWhile at it, INSTRUMENT_EXTRA doesn't really seem like a nice name either since\nthere's no guarantee that the next time someone adds a new instrument option\nfor per-node information, she will want to combine it with this one. Maybe\nINSTRUMENT_MINMAX_LOOPS or something like that?\n\n\n", "msg_date": "Sat, 30 Jul 2022 20:54:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:\n>\n> It turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly\n> requested by INSTRUMENT_ALL adds a ~27% overhead.\n>\n> I'm not sure that I actually believe these results, but they're really\n> consistent, so maybe that's real.\n>\n> Anyway, even if the overhead was only 1.5% like in your own benchmark, that\n> still wouldn't be acceptable. 
Such a feature is in my opinion very welcome,\n> but it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).\n\nI did the same benchmark this morning, although trying to stop all background\njobs and things on my machine that could interfere with the results, using\nlonger runs and more runs, and I now get a reproducible ~1% overhead, which is\nway more believable. Not sure what happened yesterday as I got reproducible\nnumbers doing the same benchmark twice; I guess that's the fun of doing performance\ntests on a development machine.\n\nAnyway, 1% is in my opinion still too much overhead for extensions that won't\nget any extra information.\n\n\n", "msg_date": "Sun, 31 Jul 2022 11:49:39 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On 31/7/2022 10:49, Julien Rouhaud wrote:\n> On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:\n> Anyway, 1% is in my opinion still too much overhead for extensions that won't\n> get any extra information.\nI have read the whole thread and still can't understand something. What \nvaluable data can I find with these extra statistics if no \nparameterized node in the plan exists?\nAlso, thinking about min/max time in the explain, I guess it would be \nnecessary in rare cases. Usually, the execution time will correlate to \nthe number of tuples scanned, won't it? So, maybe skip the time \nboundaries in the instrument structure?\nIn my experience, it is enough to know the total number of tuples \nbubbled up from a parameterized node to decide further optimizations. \nMaybe simplify this feature down to a single total_rows field in the case \nof nloops > 1 and in the presence of parameters?\nAnd finally: if someone wants a lot of additional statistics, why not \ngive them that via an extension? 
All that is needed is to add a hook at the \npoint of the node explanation, and some effort to make instrumentation \nextensible. But here, honestly, I don't have code/ideas so far.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 22 Sep 2023 15:14:43 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" } ]
[ { "msg_contents": "Hi,\n\nWhile inspecting the MERGE documentation, I noticed that there is an extra\nsemicolon in one of the examples that shouldn't be there.\n\ndiff --git a/doc/src/sgml/ref/merge.sgml b/doc/src/sgml/ref/merge.sgml\nindex c547122c9b..ac1c0a83dd 100644\n--- a/doc/src/sgml/ref/merge.sgml\n+++ b/doc/src/sgml/ref/merge.sgml\n@@ -596,7 +596,7 @@ ON s.winename = w.winename\nWHEN NOT MATCHED AND s.stock_delta > 0 THEN\n INSERT VALUES(s.winename, s.stock_delta)\nWHEN MATCHED AND w.stock + s.stock_delta > 0 THEN\n- UPDATE SET stock = w.stock + s.stock_delta;\n+ UPDATE SET stock = w.stock + s.stock_delta\nWHEN MATCHED THEN\n DELETE;\n</programlisting>\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Fri, 01 Apr 2022 17:59:51 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": true, "msg_subject": "merge documentation fix" }, { "msg_contents": "On 2022-Apr-01, Euler Taveira wrote:\n\n> Hi,\n> \n> While inspecting the MERGE documentation, I noticed that there is an extra\n> semicolon in one of the examples that shouldn't be there.\n\nThanks, pushed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" 
(www.actsofgord.com)\n\n\n", "msg_date": "Sat, 2 Apr 2022 17:19:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: merge documentation fix" } ]
[ { "msg_contents": "Hi hackers,\n\n6198420 ensured that has_privs_of_role() is used for predefined roles,\nwhich means that the role inheritance hierarchy is checked instead of mere\nrole membership. However, inheritance is still not respected for\npg_hba.conf. Specifically, \"samerole\", \"samegroup\", and \"+\" still use\nis_member_of_role_nosuper().\n\nThe attached patch introduces has_privs_of_role_nosuper() and uses it for\nthe aforementioned pg_hba.conf functionality. I think this is desirable\nfor consistency. If a role_a has membership in role_b but none of its\nprivileges (i.e., NOINHERIT), does it make sense that role_a should match\n+role_b in pg_hba.conf? It is true that role_a could always \"SET ROLE\nrole_b\", and with this change, the user won't even have the ability to log\nin to run SET ROLE. But I'm not sure if that's a strong enough argument\nfor deviating from the standard role privilege checks.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 1 Apr 2022 15:06:48 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Apr 1, 2022 at 6:06 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> 6198420 ensured that has_privs_of_role() is used for predefined roles,\n> which means that the role inheritance hierarchy is checked instead of mere\n> role membership. However, inheritance is still not respected for\n> pg_hba.conf. Specifically, \"samerole\", \"samegroup\", and \"+\" still use\n> is_member_of_role_nosuper().\n>\n> The attached patch introduces has_privs_of_role_nosuper() and uses it for\n> the aforementioned pg_hba.conf functionality. I think this is desirable\n> for consistency. If a role_a has membership in role_b but none of its\n> privileges (i.e., NOINHERIT), does it make sense that role_a should match\n> +role_b in pg_hba.conf? 
It is true that role_a could always \"SET ROLE\n> role_b\", and with this change, the user won't even have the ability to log\n> in to run SET ROLE. But I'm not sure if that's a strong enough argument\n> for deviating from the standard role privilege checks.\n>\n> Thoughts?\n>\n\nGood catch, I think this is a logical followup to the previous\nhas_privs_of_role patch.\n\nReviewed and +1\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:36:13 -0400", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Mon, Apr 04, 2022 at 09:36:13AM -0400, Joshua Brindle wrote:\n> Good catch, I think this is a logical followup to the previous\n> has_privs_of_role patch.\n> \n> Reviewed and +1\n\nThanks! I created a commitfest entry for this:\n\n\thttps://commitfest.postgresql.org/38/3609/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 07:25:51 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Mon, Apr 04, 2022 at 07:25:51AM -0700, Nathan Bossart wrote:\n> On Mon, Apr 04, 2022 at 09:36:13AM -0400, Joshua Brindle wrote:\n>> Good catch, I think this is a logical followup to the previous\n>> has_privs_of_role patch.\n>> \n>> Reviewed and +1\n> \n> Thanks! I created a commitfest entry for this:\n\nThis patch looks simple, but it is a very sensitive area so I think\nthat we should be really careful. pg_hba.conf does not have a lot of\ntest coverage, so I'd really prefer if we add something to see the\ndifference of behavior and check the behavior that we are switching\nhere. What I have just committed in 051b096 would help a bit here,\nactually, and changing pg_hba.conf rules with rule reload is cheap.\n\nJoe, you are registered as a reviewer and committer of this patch, by\nthe way. 
Are you planning to look at it?\n--\nMichael", "msg_date": "Thu, 6 Oct 2022 17:09:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On 10/6/22 04:09, Michael Paquier wrote:\n> On Mon, Apr 04, 2022 at 07:25:51AM -0700, Nathan Bossart wrote:\n>> On Mon, Apr 04, 2022 at 09:36:13AM -0400, Joshua Brindle wrote:\n>>> Good catch, I think this is a logical followup to the previous\n>>> has_privs_of_role patch.\n>>> \n>>> Reviewed and +1\n>> \n>> Thanks! I created a commitfest entry for this:\n> \n> This patch looks simple, but it is a very sensitive area so I think\n> that we should be really careful. pg_hba.conf does not have a lot of\n> test coverage, so I'd really prefer if we add something to see the\n> difference of behavior and check the behavior that we are switching\n> here.\n\nAgreed\n\n> Joe, you are registered as a reviewer and committer of this patch, by\n> the way. Are you planning to look at it?\n\nI am meaning to get to it, but as you say wanted to spend some time to \nunderstand the nuances and life keeps getting in the way. I will try to \nprioritize it over the next week.\n\nJoe\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Oct 2022 07:33:46 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Thu, Oct 06, 2022 at 07:33:46AM -0400, Joe Conway wrote:\n> On 10/6/22 04:09, Michael Paquier wrote:\n>> This patch looks simple, but it is a very sensitive area so I think\n>> that we should be really careful. 
pg_hba.conf does not have a lot of\n>> test coverage, so I'd really prefer if we add something to see the\n>> difference of behavior and check the behavior that we are switching\n>> here.\n> \n> Agreed\n\nHere is a new version of the patch with a test.\n\n>> Joe, you are registered as a reviewer and committer of this patch, by\n>> the way. Are you planning to look at it?\n> \n> I am meaning to get to it, but as you say wanted to spend some time to\n> understand the nuances and life keeps getting in the way. I will try to\n> prioritize it over the next week.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Oct 2022 10:43:43 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Thu, Oct 06, 2022 at 10:43:43AM -0700, Nathan Bossart wrote:\n> Here is a new version of the patch with a test.\n\nThanks, that helps a lot. Now I grab the difference even if your\nprevious patch was already switching the documentation to tell exactly\nthat. On the ground of 6198420, it looks indeed strange to not do the\nsame for pg_hba.conf. 
That makes the whole story more consistent, for\none.\n\n+$node->safe_psql('postgres', \"CREATE DATABASE role1;\");\n+$node->safe_psql('postgres', \"CREATE ROLE role1 LOGIN PASSWORD 'pass';\");\n+$node->safe_psql('postgres', \"CREATE ROLE role2 LOGIN SUPERUSER INHERIT IN ROLE role1 PASSWORD 'pass';\");\n+$node->safe_psql('postgres', \"CREATE ROLE role3 LOGIN SUPERUSER NOINHERIT IN ROLE role1 PASSWORD 'pass';\");\nSo this comes down to role3, where HEAD allows a connection as long as\nit is a member of role1 for +role1, samegroup and samerole, but the\npatch would prevent the connection when role3 does not inherit the\npermissions of role1, even if it is a superuser.\n\nsamegroup is a synonym of samerole, but fine by me to keep the full\ncoverage and all three sets.\n\nRather than putting that in a separate script, which means\ninitializing a new node, etc. could it be better to put that in\n001_password.pl instead? It would be cheaper.\n--\nMichael", "msg_date": "Fri, 7 Oct 2022 11:06:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Oct 07, 2022 at 11:06:47AM +0900, Michael Paquier wrote:\n> Rather than putting that in a separate script, which means\n> initializing a new node, etc. could it be better to put that in\n> 001_password.pl instead? It would be cheaper.\n\nWorks for me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Oct 2022 20:27:11 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Thu, Oct 06, 2022 at 08:27:11PM -0700, Nathan Bossart wrote:\n> On Fri, Oct 07, 2022 at 11:06:47AM +0900, Michael Paquier wrote:\n>> Rather than putting that in a separate script, which means\n>> initializing a new node, etc. could it be better to put that in\n>> 001_password.pl instead? 
It would be cheaper.\n> \n> Works for me.\n\nThanks. I would perhaps use names less generic than role{1,2,3} for\nthe roles or \"role1\" for the database name, but the logic looks\nsound.\n--\nMichael", "msg_date": "Fri, 7 Oct 2022 15:34:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Apr 1, 2022 at 6:07 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> 6198420 ensured that has_privs_of_role() is used for predefined roles,\n> which means that the role inheritance hierarchy is checked instead of mere\n> role membership. However, inheritance is still not respected for\n> pg_hba.conf. Specifically, \"samerole\", \"samegroup\", and \"+\" still use\n> is_member_of_role_nosuper().\n>\n> The attached patch introduces has_privs_of_role_nosuper() and uses it for\n> the aforementioned pg_hba.conf functionality. I think this is desirable\n> for consistency. If a role_a has membership in role_b but none of its\n> privileges (i.e., NOINHERIT), does it make sense that role_a should match\n> +role_b in pg_hba.conf? It is true that role_a could always \"SET ROLE\n> role_b\", and with this change, the user won't even have the ability to log\n> in to run SET ROLE. But I'm not sure if that's a strong enough argument\n> for deviating from the standard role privilege checks.\n\nI hadn't noticed this thread before.\n\nI'm not sure whether this is properly considered a privilege check. 
It\ncould even be an anti-privilege, if the pg_hba.conf line in question\nis marked \"reject\".\n\nI'm not taking the position that what this patch does is wrong, but I\n*am* taking the position that it's a judgement call what the correct\nbehavior is here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 07:59:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Oct 07, 2022 at 03:34:51PM +0900, Michael Paquier wrote:\n> Thanks. I would perhaps use names less generic than role{1,2,3} for\n> the roles or \"role1\" for the database name, but the logic looks\n> sound.\n\nHere is a new version with more descriptive role names.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 7 Oct 2022 12:44:30 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Oct 07, 2022 at 03:34:51PM +0900, Michael Paquier wrote:\n>> Thanks. 
I would perhaps use names less generic than role{1,2,3} for\n>> the roles or \"role1\" for the database name, but the logic looks\n>> sound.\n\n> Here is a new version with more descriptive role names.\n\nThere's another problem there, which is that buildfarm animals\nusing -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS will complain\nabout role names that don't start with \"regress_\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 16:18:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Oct 07, 2022 at 04:18:59PM -0400, Tom Lane wrote:\n> There's another problem there, which is that buildfarm animals\n> using -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS will complain\n> about role names that don't start with \"regress_\".\n\nHuh, I hadn't noticed that one before. It looks like roles must start with\n\"regress_\" and database names must include \"regression\", so I ended up\nusing \"regress_regression_group\" for the samegroup/samerole tests.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 7 Oct 2022 14:58:36 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Fri, Oct 07, 2022 at 07:59:08AM -0400, Robert Haas wrote:\n> I hadn't noticed this thread before.\n> \n> I'm not sure whether this is properly considered a privilege check. It\n> could even be an anti-privilege, if the pg_hba.conf line in question\n> is marked \"reject\".\n> \n> I'm not taking the position that what this patch does is wrong, but I\n> *am* taking the position that it's a judgement call what the correct\n> behavior is here.\n\nThe interpretation can go both ways I guess. 
Now I find the argument\nto treat a HBA entry based on privileges and not membership quite\nappealing in terms of consistency with SET ROLE, particularly\nconsidering the recent thread with predefined roles. Also, it seems\nto me here that it would become easier to reason around role\nhierarchies, one case being HBA entries that include predefined\nroles for the role(s) to match.\n--\nMichael", "msg_date": "Sat, 8 Oct 2022 13:55:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On 10/7/22 17:58, Nathan Bossart wrote:\n> On Fri, Oct 07, 2022 at 04:18:59PM -0400, Tom Lane wrote:\n>> There's another problem there, which is that buildfarm animals\n>> using -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS will complain\n>> about role names that don't start with \"regress_\".\n> \n> Huh, I hadn't noticed that one before. It looks like roles must start with\n> \"regress_\" and database names must include \"regression\", so I ended up\n> using \"regress_regression_group\" for the samegroup/samerole tests.\n\n\nThanks -- looks good to me. If there are no other comments or concerns, \nI will commit/push by the end of the weekend.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 8 Oct 2022 10:38:00 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Thanks -- looks good to me. If there are no other comments or concerns, \n> I will commit/push by the end of the weekend.\n\nRobert seems to think that this patch might be completely misguided,\nso I'm not sure we have real consensus. 
I think he may have a point.\n\nAn angle that he didn't bring up is that we've had proposals, and\neven I think a patch, for inventing database-local privileges.\nIf that were to become a thing, it would interact very badly with\nthis idea, because it would often not be clear which set of privileges\nto consider. As long as HBA checks consider membership, and we don't\ninvent database-local role membership, there's no problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Oct 2022 11:14:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sat, Oct 8, 2022 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > Thanks -- looks good to me. If there are no other comments or concerns,\n> > I will commit/push by the end of the weekend.\n>\n> Robert seems to think that this patch might be completely misguided,\n> so I'm not sure we have real consensus. I think he may have a point.\n>\n> An angle that he didn't bring up is that we've had proposals, and\n> even I think a patch, for inventing database-local privileges.\n> If that were to become a thing, it would interact very badly with\n> this idea, because it would often not be clear which set of privileges\n> to consider. As long as HBA checks consider membership, and we don't\n> invent database-local role membership, there's no problem.\n\nThis argument feels a little bit thin to me, because (1) one could\nequally well postulate that we'd want to invent database-local role\nmembership and (2) presumably the relevant set of privileges would be\nthose for the database to which the user wishes to authenticate.\n\nI think what is bothering me is a feeling that a privilege is\nsomething that you get because you've authenticated. If you haven't\nauthenticated yet, you have no privileges. 
So why should it matter\nwhether the role to which you could hypothetically authenticate would\ninherit the privileges of some other role or not?\n\nOr to put it another way, I don't have any intuition for why someone\nwould want the system to behave in this way rather than in the way\nthat it does now. In general, what role inheritance does is actually\npretty easy to understand: either you just have the ability to access\nthe privileges of some other role at need, or you have those\nprivileges all the time even without activating them explicitly. I\nthink in most cases people will expect membership in a predefined role\nor a role used as a group to behave in the second way, and membership\nin a login role to be used in the first way, but I think there will\nlikely be some exceptions in both directions, which is fine, because\nwe can support that.\n\nBut the usage where you mention a group in pg_hba.conf feels\northogonal to all of that to me. In that case, it's not really about\nprivileges at all, or at least I don't think so. It's about letting\none group of people log into the system from, say, a certain IP\naddress, and others not (or maybe the reverse). It seems reasonably\nlikely that you wouldn't want the role you used for grouping purposes\nin a case like this to hold any privileges at all, or that if it did\nhave any privileges you wouldn't want them accessible in any way to\nthe group members, because if you create a group called\npeople_who_can_log_in_from_the_modem_pool, you do not therefore want\nto end up with tables owned by\npeople_who_can_log_in_from_the_modem_pool. Under that theory, this\npatch is going in the wrong direction.\n\nNow there may be some other scenario in which the patch is going in\nexactly the right direction, and if I knew what it was, maybe I'd\nagree that the patch was a great idea. But I haven't seen anything\nlike that on the thread. Basically, the argument is just that the\nchange would make things more consistent. 
However, it might be an\nabuse of the term. If you go out and buy blue curtains because you\nhave a blue couch, that's consistent interior decor. If you go out and\nbuy a blue car because you have a blue couch, that's not really\nconsistent anything, it's just two fairly-unrelated things that are\nboth blue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 8 Oct 2022 11:46:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sat, Oct 8, 2022 at 8:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Oct 8, 2022 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> > > Thanks -- looks good to me. If there are no other comments or concerns,\n> > > I will commit/push by the end of the weekend.\n> >\n> > Robert seems to think that this patch might be completely misguided,\n> > so I'm not sure we have real consensus. I think he may have a point.\n>\n> I think what is bothering me is a feeling that a privilege is\n> something that you get because you've authenticated. If you haven't\n> authenticated yet, you have no privileges. So why should it matter\n> whether the role to which you could hypothetically authenticate would\n> inherit the privileges of some other role or not?\n>\n> Or to put it another way, I don't have any intuition for why someone\n> would want the system to behave in this way rather than in the way\n> that it does now.\n>\n\nI'm also in the \"inheritance isn't relevant here\" camp. One doesn't\ninherit an ability to LOGIN from a group that has a LOGIN attribute. The\n[NO]INHERIT attribute doesn't even apply. 
This feature is so closely\nrelated to LOGIN that [NO]INHERIT should likewise not apply here as well.\n\nWe've decided to conjoin two arguably orthogonal concerns here and need to\nkeep in mind that any given aspect of the overall capability might very\nwell only apply to a subset of the system. In this case inheritance only\napplies to object permissions, not attributes, and not authentication\n(which doesn't have any kind of explicit permission bit in the system to\ninherit, making it just like LOGIN).\n\nI would tend to agree that even membership probably shouldn't be involved\nhere, and that this entire feature would be implemented in an orthogonal\nmanner. I don't see any specific need to try and move to a more isolated\nimplementation, but trying to involve inheritance just seems wrong. The\nstatus quo seems like a good place to stay.\n\nDavid J.", "msg_date": "Sat, 8 Oct 2022 09:57:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sat, Oct 08, 2022 at 11:46:50AM -0400, Robert Haas wrote:\n> Now there may be some other scenario in which the patch is going in\n> exactly the right direction, and if I knew what it was, maybe I'd\n> agree that the patch was a great idea. But I haven't seen anything\n> like that on the thread. Basically, the argument is just that the\n> change would make things more consistent. However, it might be an\n> abuse of the term. If you go out and buy blue curtains because you\n> have a blue couch, that's consistent interior decor. If you go out and\n> buy a blue car because you have a blue couch, that's not really\n> consistent anything, it's just two fairly-unrelated things that are\n> both blue.\n\nI believe I started this thread after reviewing the remaining uses of\nis_member_of_role() after 6198420 was committed and wondering whether this\ncase was an oversight. 
If upon closer inspection we think that mere\nmembership is appropriate for pg_hba.conf, I'm fully prepared to go and\nmark this commitfest entry as Rejected. It obviously does not seem as\nclear-cut as 6198420. And I'll admit I don't have a concrete use-case in\nhand to justify the behavior change.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 8 Oct 2022 10:06:40 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sat, Oct 08, 2022 at 09:57:02AM -0700, David G. Johnston wrote:\n> I would tend to agree that even membership probably shouldn't be involved\n> here, and that this entire feature would be implemented in an orthogonal\n> manner. I don't see any specific need to try and move to a more isolated\n> implementation, but trying to involve inheritance just seems wrong. The\n> status quo seems like a good place to stay.\n\nOkay, I think there are sufficient votes against this change to simply mark\nit Rejected. Thanks for the discussion!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 8 Oct 2022 10:12:22 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sat, Oct 08, 2022 at 10:12:22AM -0700, Nathan Bossart wrote:\n> Okay, I think there are sufficient votes against this change to simply mark\n> it Rejected. 
Thanks for the discussion!\n\nEven if the patch is at the end rejected, I think that the test is\nstill useful once you switch its logic to use membership and not\ninherited privileges for the roles created, and there is zero coverage\nfor \"samplegroup\" and its kind currently.\n--\nMichael", "msg_date": "Sun, 9 Oct 2022 10:19:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sun, Oct 09, 2022 at 10:19:51AM +0900, Michael Paquier wrote:\n> Even if the patch is at the end rejected, I think that the test is\n> still useful once you switch its logic to use membership and not\n> inherited privileges for the roles created, and there is zero coverage\n> for \"samplegroup\" and its kind currently.\n\nHere you go.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 9 Oct 2022 14:13:48 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Sun, Oct 09, 2022 at 02:13:48PM -0700, Nathan Bossart wrote:\n> Here you go.\n\nThanks, applied. It took me a few minutes to note that\nregress_regression_* is required in the object names because we need\nto use the same name for the parent role and the database, with\n\"regress_\" being required for the role and \"regression\" being required\nfor the database. I have added an extra section where pg_hba.conf is\nset to match only the parent role, while on it. perltidy has reshaped\nthings in an interesting way, because the generated log_[un]like is\nlong, it seems.\n--\nMichael", "msg_date": "Tue, 11 Oct 2022 14:01:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "On Tue, Oct 11, 2022 at 02:01:07PM +0900, Michael Paquier wrote:\n> Thanks, applied. 
It took me a few minutes to note that\n> regress_regression_* is required in the object names because we need\n> to use the same name for the parent role and the database, with\n> \"regress_\" being required for the role and \"regression\" being required\n> for the database. I have added an extra section where pg_hba.conf is\n> set to match only the parent role, while on it. perltidy has reshaped\n> things in an interesting way, because the generated log_[un]like is\n> long, it seems.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 10:40:59 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Sat, Oct 08, 2022 at 11:46:50AM -0400, Robert Haas wrote:\n> > Now there may be some other scenario in which the patch is going in\n> > exactly the right direction, and if I knew what it was, maybe I'd\n> > agree that the patch was a great idea. But I haven't seen anything\n> > like that on the thread. Basically, the argument is just that the\n> > change would make things more consistent. However, it might be an\n> > abuse of the term. If you go out and buy blue curtains because you\n> > have a blue couch, that's consistent interior decor. If you go out and\n> > buy a blue car because you have a blue couch, that's not really\n> > consistent anything, it's just two fairly-unrelated things that are\n> > both blue.\n> \n> I believe I started this thread after reviewing the remaining uses of\n> is_member_of_role() after 6198420 was committed and wondering whether this\n> case was an oversight. If upon closer inspection we think that mere\n> membership is appropriate for pg_hba.conf, I'm fully prepared to go and\n> mark this commitfest entry as Rejected. It obviously does not seem as\n> clear-cut as 6198420. 
And I'll admit I don't have a concrete use-case in\n> hand to justify the behavior change.\n\nLooks like we've already ended up there, but my recollection of this is\nthat it was very much intentional to use is_member_of_role() here.\nPerhaps it should have been better commented (as all uses of\nis_member_of_role() instead of has_privs_of_role() really should have\nlots of comments as to exactly why it makes sense in those cases).\n\nThanks,\n\nStephen", "msg_date": "Sun, 16 Oct 2022 12:04:09 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: use has_privs_of_role() for pg_hba.conf" } ]
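The membership-versus-inheritance distinction this thread turns on can be sketched as a graph-reachability question. The model below is not PostgreSQL's implementation — the role names, the per-grant inherit flag, and both function bodies are illustrative assumptions — but it shows how a role can satisfy is_member_of_role() (which pg_hba.conf's "+role" matching uses) while failing has_privs_of_role():

```python
# Simplified model of PostgreSQL role membership vs. inherited privileges.
# Not the server's implementation -- just an illustration of why
# is_member_of_role() and has_privs_of_role() can disagree.

def is_member_of_role(grants, role, target):
    """Membership: target reachable via any GRANT edge (inherit flag ignored)."""
    seen, stack = set(), [role]
    while stack:
        r = stack.pop()
        if r == target:
            return True
        if r in seen:
            continue
        seen.add(r)
        stack.extend(g for g, _inherit in grants.get(r, []))
    return False

def has_privs_of_role(grants, role, target):
    """Privileges: target reachable via inheriting edges only."""
    seen, stack = set(), [role]
    while stack:
        r = stack.pop()
        if r == target:
            return True
        if r in seen:
            continue
        seen.add(r)
        stack.extend(g for g, inherit in grants.get(r, []) if inherit)
    return False

# Hypothetical roles: alice is granted admins without inheritance.
grants = {"alice": [("admins", False)], "admins": [("superusers", True)]}

assert is_member_of_role(grants, "alice", "admins")      # matched by +admins
assert not has_privs_of_role(grants, "alice", "admins")  # no inherited privileges
```

Under this model, switching pg_hba.conf matching to has_privs_of_role() would make "+admins" stop matching alice — the behavior change the thread ultimately rejected.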
[ { "msg_contents": "I got a crash running the below query on the regression database:\n\n\"\"\"\nselect pg_catalog.json_object_agg_unique(10,\n cast(ref_0.level2_no as int4)) \n\t \tover (partition by ref_0.parent_no \n\t\t\torder by ref_0.level2_no)\nfrom public.transition_table_level2 as ref_0;\n\"\"\"\n\nAttached the backtrace.\n\nPS: I'm cc'ing Andrew and Nikita because my feeling is that this is \nf4fb45d15c59d7add2e1b81a9d477d0119a9691a responsibility. \n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Sat, 2 Apr 2022 00:25:04 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "JSON constructors and window functions" }, { "msg_contents": "\nOn 4/2/22 01:25, Jaime Casanova wrote:\n> I got a crash running the below query on the regression database:\n>\n> \"\"\"\n> select pg_catalog.json_object_agg_unique(10,\n> cast(ref_0.level2_no as int4)) \n> \t \tover (partition by ref_0.parent_no \n> \t\t\torder by ref_0.level2_no)\n> from public.transition_table_level2 as ref_0;\n> \"\"\"\n>\n> Attached the backtrace.\n>\n> PS: I'm cc'ing Andrew and Nikita because my feeling is that this is \n> f4fb45d15c59d7add2e1b81a9d477d0119a9691a responsibility.\n\n\n\nHmm. Thanks for the report. The code in json_unique_check_key() looks\nsane enough, so the issue is probably elsewhere. 
I'll keep digging.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:40:03 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 4/2/22 15:40, Andrew Dunstan wrote:\n> On 4/2/22 01:25, Jaime Casanova wrote:\n>> I got a crash running the below query on the regression database:\n>>\n>> \"\"\"\n>> select pg_catalog.json_object_agg_unique(10,\n>> cast(ref_0.level2_no as int4)) \n>> \t \tover (partition by ref_0.parent_no \n>> \t\t\torder by ref_0.level2_no)\n>> from public.transition_table_level2 as ref_0;\n>> \"\"\"\n>>\n>> Attached the backtrace.\n>>\n>> PS: I'm cc'ing Andrew and Nikita because my feeling is that this is \n>> f4fb45d15c59d7add2e1b81a9d477d0119a9691a responsibility.\n>\n>\n> Hmm. Thanks for the report. The code in json_unique_check_key() looks\n> sane enough, so the issue is probably elsewhere. 
I'll keep digging.\n\n\n\nHaven't found the issue yet :-( It happens on the second call for the\npartition to  json_check_unique_key().\n\nHere's a more idiomatic and self-contained query that triggers the problem.\n\n\nselect json_objectagg('10' : ref_0.level2 with unique keys) \n    over (partition by ref_0.parent_no order by ref_0.level2)\nfrom (values (1::int,1::int),(1,2),(2,1),(2,2)) as ref_0(parent_no,level2);\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Apr 2022 18:56:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Hi,\n\nOn 2022-04-03 18:56:39 -0400, Andrew Dunstan wrote:\n> Haven't found the issue yet :-( It happens on the second call for the\n> partition to  json_check_unique_key().\n>\n> Here's a more idiomatic and self-contained query that triggers the problem.\n>\n>\n> select json_objectagg('10' : ref_0.level2 with unique keys)\n>     over (partition by ref_0.parent_no order by ref_0.level2)\n> from (values (1::int,1::int),(1,2),(2,1),(2,2)) as ref_0(parent_no,level2);\n\nThe hash was created in a context that's already freed.\n\n#1 0x00007febbcc556f9 in wipe_mem (ptr=0x7febbf084f88, size=6392) at /home/andres/src/postgresql/src/include/utils/memdebug.h:42\n#2 0x00007febbcc5603e in AllocSetReset (context=0x7febbf084e80) at /home/andres/src/postgresql/src/backend/utils/mmgr/aset.c:591\n#3 0x00007febbcc61ed6 in MemoryContextResetOnly (context=0x7febbf084e80) at /home/andres/src/postgresql/src/backend/utils/mmgr/mcxt.c:181\n#4 0x00007febbcc561cf in AllocSetDelete (context=0x7febbf084e80) at /home/andres/src/postgresql/src/backend/utils/mmgr/aset.c:654\n#5 0x00007febbcc62155 in MemoryContextDelete (context=0x7febbf084e80) at /home/andres/src/postgresql/src/backend/utils/mmgr/mcxt.c:252\n#6 0x00007febbcc31ee2 in hash_destroy (hashp=0x7febbf084fa0) at 
/home/andres/src/postgresql/src/backend/utils/hash/dynahash.c:876\n#7 0x00007febbcb01ac5 in json_unique_check_free (cxt=0x7febbf03f548) at /home/andres/src/postgresql/src/backend/utils/adt/json.c:985\n#8 0x00007febbcb01b7c in json_unique_builder_free (cxt=0x7febbf03f548) at /home/andres/src/postgresql/src/backend/utils/adt/json.c:1014\n#9 0x00007febbcb0218f in json_object_agg_finalfn (fcinfo=0x7ffeab802e20) at /home/andres/src/postgresql/src/backend/utils/adt/json.c:1227\n#10 0x00007febbc84e110 in finalize_windowaggregate (winstate=0x7febbf037730, perfuncstate=0x7febbf057560, peraggstate=0x7febbf0552f8, result=0x7febbf057520,\n isnull=0x7febbf057540) at /home/andres/src/postgresql/src/backend/executor/nodeWindowAgg.c:626\n#11 0x00007febbc84ea9b in eval_windowaggregates (winstate=0x7febbf037730) at /home/andres/src/postgresql/src/backend/executor/nodeWindowAgg.c:993\n#12 0x00007febbc8514a7 in ExecWindowAgg (pstate=0x7febbf037730) at /home/andres/src/postgresql/src/backend/executor/nodeWindowAgg.c:2207\n#13 0x00007febbc7fda4d in ExecProcNodeFirst (node=0x7febbf037730) at /home/andres/src/postgresql/src/backend/executor/execProcnode.c:463\n#14 0x00007febbc7f12fb in ExecProcNode (node=0x7febbf037730) at /home/andres/src/postgresql/src/include/executor/executor.h:259\n#15 0x00007febbc7f41b7 in ExecutePlan (estate=0x7febbf0374f0, planstate=0x7febbf037730, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true,\n numberTuples=0, direction=ForwardScanDirection, dest=0x7febbf030098, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:1636\n#16 0x00007febbc7f19ff in standard_ExecutorRun (queryDesc=0x7febbef79030, direction=ForwardScanDirection, count=0, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:363\n#17 0x00007febbc7f17ee in ExecutorRun (queryDesc=0x7febbef79030, direction=ForwardScanDirection, count=0, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:307\n#18 
0x00007febbca6d2cc in PortalRunSelect (portal=0x7febbefcbc10, forward=true, count=0, dest=0x7febbf030098)\n at /home/andres/src/postgresql/src/backend/tcop/pquery.c:924\n#19 0x00007febbca6cf5c in PortalRun (portal=0x7febbefcbc10, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x7febbf030098,\n\n\nI don't think you're allowed to free stuff in a finalfunc - we might reuse the\ntransition state for further calls to the aggregate.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 17:11:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 4/3/22 20:11, Andres Freund wrote:\n> Hi,\n>\n> On 2022-04-03 18:56:39 -0400, Andrew Dunstan wrote:\n>> Haven't found the issue yet :-( It happens on the second call for the\n>> partition to  json_check_unique_key().\n>>\n>> Here's a more idiomatic and self-contained query that triggers the problem.\n>>\n>>\n>> select json_objectagg('10' : ref_0.level2 with unique keys)\n>>     over (partition by ref_0.parent_no order by ref_0.level2)\n>> from (values (1::int,1::int),(1,2),(2,1),(2,2)) as ref_0(parent_no,level2);\n> The hash was created in a context that's already freed.\n>\n[...]\n>\n>\n> I don't think you're allowed to free stuff in a finalfunc - we might reuse the\n> transition state for further calls to the aggregate.\n>\n\n\nDoh! Of course! I'll fix it in the morning. 
Thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Apr 2022 22:46:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 4/3/22 22:46, Andrew Dunstan wrote:\n> On 4/3/22 20:11, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-04-03 18:56:39 -0400, Andrew Dunstan wrote:\n>>> Haven't found the issue yet :-( It happens on the second call for the\n>>> partition to  json_check_unique_key().\n>>>\n>>> Here's a more idiomatic and self-contained query that triggers the problem.\n>>>\n>>>\n>>> select json_objectagg('10' : ref_0.level2 with unique keys)\n>>>     over (partition by ref_0.parent_no order by ref_0.level2)\n>>> from (values (1::int,1::int),(1,2),(2,1),(2,2)) as ref_0(parent_no,level2);\n>> The hash was created in a context that's already freed.\n>>\n> [...]\n>>\n>> I don't think you're allowed to free stuff in a finalfunc - we might reuse the\n>> transition state for further calls to the aggregate.\n>>\n>\n> Doh! Of course! I'll fix it in the morning. Thanks.\n>\n>\n\n\nI've committed a fix for this. I didn't find something to clean out the\nhash table, so I just removed the 'hash_destroy' and left it at that.\nAll the test I did came back with expected results.\n\nMaybe a hash_reset() is something worth having assuming it's possible? 
I\nnote that simplehash has a reset function.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 11:09:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/3/22 22:46, Andrew Dunstan wrote:\n>> On 4/3/22 20:11, Andres Freund wrote:\n>>> I don't think you're allowed to free stuff in a finalfunc - we might reuse the\n>>> transition state for further calls to the aggregate.\n\n>> Doh! Of course! I'll fix it in the morning. Thanks.\n\n> I've committed a fix for this. I didn't find something to clean out the\n> hash table, so I just removed the 'hash_destroy' and left it at that.\n> All the test I did came back with expected results.\n> Maybe a hash_reset() is something worth having assuming it's possible? I\n> note that simplehash has a reset function.\n\nBut removing the hash entries would be just as much of a problem\nwouldn't it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Apr 2022 11:43:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Are we missing regression tests using these functions as window functions?\n\nHm. 
I suppose it's possible to write a general purpose regression test\nthat loops over all aggregate functions and runs them as window\nfunctions and aggregates over the same data sets and compares results.\nAt least for the case of aggregate functions with a single parameter\nbelonging to a chosen set of data types.\n\n\n", "msg_date": "Mon, 4 Apr 2022 11:54:23 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Hi,\n\nOn 2022-04-04 11:43:31 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 4/3/22 22:46, Andrew Dunstan wrote:\n> >> On 4/3/22 20:11, Andres Freund wrote:\n> >>> I don't think you're allowed to free stuff in a finalfunc - we might reuse the\n> >>> transition state for further calls to the aggregate.\n> \n> >> Doh! Of course! I'll fix it in the morning. Thanks.\n> \n> > I've committed a fix for this. I didn't find something to clean out the\n> > hash table, so I just removed the 'hash_destroy' and left it at that.\n> > All the test I did came back with expected results.\n> > Maybe a hash_reset() is something worth having assuming it's possible? I\n> > note that simplehash has a reset function.\n> \n> But removing the hash entries would be just as much of a problem\n> wouldn't it?\n\nI think so. I guess we could mark it as FINALFUNC_MODIFY = READ_WRITE. But I\ndon't see a reason why it'd be needed here.\n\nIs it a problem that skipped_keys is reset in the finalfunc? I don't know how\nthese functions work. 
So far I don't understand why\nJsonUniqueBuilderState->skipped_keys is long lived...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:21:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Hi,\n\nOn 2022-04-04 11:54:23 -0400, Greg Stark wrote:\n> Are we missing regression tests using these functions as window functions?\n\nSo far, yes.\n\nISTM that 4eb97988796 should have at least included the crashing statement as\na test... The statement can be simpler too:\n\nSELECT json_objectagg(k : v with unique keys) OVER (ORDER BY k) FROM (VALUES (1,1), (2,2)) a(k,v);\n\nis sufficient to trigger the crash for me, without even using asan (after\nreverting the bugfix, of course).\n\n\n> Hm. I suppose it's possible to write a general purpose regression test\n> that loops over all aggregate functions and runs them as window\n> functions and aggregates over the same data sets and compares results.\n> At least for the case of aggregate functions with a single parameter\n> belonging to a chosen set of data types.\n\nI was wondering about that too. Hardest part would be to come up with values\nto pass to the aggregates.\n\nI don't think it'd help in this case though, since it depends on special case\ngrammar stuff to even be reached. json_objectagg(k : v with unique\nkeys). 
\"Normal\" use of aggregates can't even reach the problematic path\nafaics.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:33:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 4/4/22 12:33, Andres Freund wrote:\n> Hi,\n>\n> On 2022-04-04 11:54:23 -0400, Greg Stark wrote:\n>> Are we missing regression tests using these functions as window functions?\n> So far, yes.\n>\n> ISTM that 4eb97988796 should have at least included the crashing statement as\n> a test... The statement can be simpler too:\n>\n> SELECT json_objectagg(k : v with unique keys) OVER (ORDER BY k) FROM (VALUES (1,1), (2,2)) a(k,v);\n>\n> is sufficient to trigger the crash for me, without even using asan (after\n> reverting the bugfix, of course).\n>\n\n\nI will add some regression tests.\n\n\n>> Hm. I suppose it's possible to write a general purpose regression test\n>> that loops over all aggregate functions and runs them as window\n>> functions and aggregates over the same data sets and compares results.\n>> At least for the case of aggregate functions with a single parameter\n>> belonging to a chosen set of data types.\n> I was wondering about that too. Hardest part would be to come up with values\n> to pass to the aggregates.\n>\n> I don't think it'd help in this case though, since it depends on special case\n> grammar stuff to even be reached. json_objectagg(k : v with unique\n> keys). \"Normal\" use of aggregates can't even reach the problematic path\n> afaics.\n>\n\nIt can, as Jaime's original post showed.\n\nBut on further consideration I'm thinking this area needs some rework.\nISTM that it might be a whole lot simpler and comprehensible to generate\nthe json first without bothering about null values or duplicate keys and\nthen in the finalizer check for null values to be skipped and duplicate\nkeys. 
That way we would need to keep far less state for the aggregation\nfunctions, although it might be marginally less efficient. Thoughts?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 14:19:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 4/4/22 11:43, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 4/3/22 22:46, Andrew Dunstan wrote:\n>>> On 4/3/22 20:11, Andres Freund wrote:\n>>>> I don't think you're allowed to free stuff in a finalfunc - we might reuse the\n>>>> transition state for further calls to the aggregate.\n>>> Doh! Of course! I'll fix it in the morning. Thanks.\n>> I've committed a fix for this. I didn't find something to clean out the\n>> hash table, so I just removed the 'hash_destroy' and left it at that.\n>> All the test I did came back with expected results.\n>> Maybe a hash_reset() is something worth having assuming it's possible? I\n>> note that simplehash has a reset function.\n> But removing the hash entries would be just as much of a problem\n> wouldn't it?\n>\n> \t\t\t\n\n\nYes, quite possibly. 
It looks from some experimentation as though,\nunlike my naive preconception, it doesn't process each frame again from\nthe beginning, so losing the hash entries could indeed be an issue here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 14:25:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "Hi,\r\n\r\nOn 4/4/22 2:19 PM, Andrew Dunstan wrote:\r\n> \r\n> On 4/4/22 12:33, Andres Freund wrote:\r\n\r\n> It can, as Jaime's original post showed.\r\n> \r\n> But on further consideration I'm thinking this area needs some rework.\r\n> ISTM that it might be a whole lot simpler and comprehensible to generate\r\n> the json first without bothering about null values or duplicate keys and\r\n> then in the finalizer check for null values to be skipped and duplicate\r\n> keys. That way we would need to keep far less state for the aggregation\r\n> functions, although it might be marginally less efficient. Thoughts?\r\n\r\nThis is still on the open items list[1]. Given this is a \r\nuser-triggerable crash and we are approaching PG15 Beta 1, I wanted to \r\ncheck in and see if there was any additional work required to eliminate \r\nthe crash, or if the work at this point is just optimization.\r\n\r\nIf the latter, I'd suggest we open up a new open item for it.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items", "msg_date": "Tue, 10 May 2022 09:51:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "\nOn 2022-05-10 Tu 09:51, Jonathan S. 
Katz wrote:\n> Hi,\n>\n> On 4/4/22 2:19 PM, Andrew Dunstan wrote:\n>>\n>> On 4/4/22 12:33, Andres Freund wrote:\n>\n>> It can, as Jaime's original post showed.\n>>\n>> But on further consideration I'm thinking this area needs some rework.\n>> ISTM that it might be a whole lot simpler and comprehensible to generate\n>> the json first without bothering about null values or duplicate keys and\n>> then in the finalizer check for null values to be skipped and duplicate\n>> keys. That way we would need to keep far less state for the aggregation\n>> functions, although it might be marginally less efficient. Thoughts?\n>\n> This is still on the open items list[1]. Given this is a\n> user-triggerable crash and we are approaching PG15 Beta 1, I wanted to\n> check in and see if there was any additional work required to\n> eliminate the crash, or if the work at this point is just optimization.\n>\n> If the latter, I'd suggest we open up a new open item for it.\n>\n> Thanks,\n>\n> Jonathan\n>\n> [1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n\n\n\nI believe all the issues here have been fixed. See commit 112fdb3528\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 10 May 2022 10:25:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" }, { "msg_contents": "On 5/10/22 10:25 AM, Andrew Dunstan wrote:\r\n\r\n> I believe all the issues here have been fixed. See commit 112fdb3528\r\n\r\nThanks! I have updated Open Items.\r\n\r\nJonathan", "msg_date": "Tue, 10 May 2022 10:43:02 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: JSON constructors and window functions" } ]
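The rule Andres stated — a final function may not free the transition state, because window aggregates finalize the same state repeatedly as the frame grows — can be illustrated outside the server. The sketch below is a hypothetical Python aggregate, not the json_objectagg() code; the class and method names are invented for the illustration:

```python
# Model of an aggregate whose transition state is finalized once per
# window row. A finalfunc that destroys shared state (like the removed
# hash_destroy() call) breaks the next use of that state.

class UniqueObjectAgg:
    def __init__(self):
        self.pairs = []
        self.seen = set()        # stands in for the key-uniqueness hash table

    def transfn(self, key, value):
        if key in self.seen:
            raise ValueError(f"duplicate JSON object key {key!r}")
        self.seen.add(key)
        self.pairs.append((key, value))

    def finalfn_fixed(self):
        return dict(self.pairs)  # leaves the transition state alone

    def finalfn_buggy(self):
        result = dict(self.pairs)
        self.seen = None         # "hash_destroy": state gone for the next call
        return result

# Fixed behavior: the same state is finalized twice as the frame grows.
agg = UniqueObjectAgg()
agg.transfn("a", 1)
first = agg.finalfn_fixed()      # finalized for window row 1
agg.transfn("b", 2)              # frame grows; transition state is reused
second = agg.finalfn_fixed()     # finalized again -- still works

# Buggy behavior: the second transition touches freed state.
agg2 = UniqueObjectAgg()
agg2.transfn("a", 1)
agg2.finalfn_buggy()
try:
    agg2.transfn("b", 2)         # crashes: the finalfunc freed the hash
    crashed = False
except TypeError:
    crashed = True
```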
[ { "msg_contents": "On 2/10/22 19:17, Tomas Vondra wrote:\n>> I've polished & pushed the first part adding sequence decoding\n>> infrastructure etc. Attached are the two remaining parts.\n>>\n>> I plan to wait a day or two and then push the test_decoding part. The\n>> last part (for built-in replication) will need more work and maybe\n>> rethinking the grammar etc.\n>>\n\n>I've pushed the second part, adding sequences to test_decoding.\n\nHi,\n\nMinor oversight with commit 0da92dc\n<https://github.com/postgres/postgres/commit/0da92dc530c9251735fc70b20cd004d9630a1266>\n.\nRelationIdGetRelation can return NULL, then it is necessary to check the\nreturn.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 2 Apr 2022 15:12:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "logical decoding and replication of sequences" } ]
[ { "msg_contents": "One patch is failing with what looks like a generic Cirrus issue:\n\nhttps://cirrus-ci.com/task/5389918250729472\n\nFailed to start an instance: INVALID_ARGUMENT: Operation with name\n\"operation-1648936682461-5dbb2fd37177b-5095285b-b153ee83\" failed with\nstatus = HttpJsonStatusCode{statusCode=INVALID_ARGUMENT} and message =\nBAD REQUEST\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 2 Apr 2022 21:06:14 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "CFBot failing with \"Failed to start an instance\"" }, { "msg_contents": "On Sun, Apr 3, 2022 at 1:07 PM Greg Stark <stark@mit.edu> wrote:\n> https://cirrus-ci.com/task/5389918250729472\n>\n> Failed to start an instance: INVALID_ARGUMENT: Operation with name\n> \"operation-1648936682461-5dbb2fd37177b-5095285b-b153ee83\" failed with\n> status = HttpJsonStatusCode{statusCode=INVALID_ARGUMENT} and message =\n> BAD REQUEST\n\nI guess I should teach it to retry if it fails like that, but it does\ntry again in ~24 hours...\n\nHere's the test history for that branch in case it's helpful (I really\nshould probably put these links on the page somewhere...:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3234\n\n\n", "msg_date": "Sun, 3 Apr 2022 14:52:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CFBot failing with \"Failed to start an instance\"" } ]
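Retrying such transient scheduler errors, as suggested, usually amounts to a small wrapper with exponential backoff. A minimal sketch — submit_build and the error text are placeholders, not the cfbot's actual API:

```python
import time

def retry(fn, attempts=3, base_delay=1.0, transient=(RuntimeError,), sleep=time.sleep):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except transient:
            if attempt == attempts - 1:
                raise            # still failing after the last attempt
            sleep(base_delay * (2 ** attempt))

# Hypothetical job submission that fails twice, then succeeds.
calls = []
def submit_build():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("Failed to start an instance: INVALID_ARGUMENT")
    return "scheduled"

assert retry(submit_build, sleep=lambda s: None) == "scheduled"
```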
[ { "msg_contents": "Hi,\n\nIndependently of a problem with a recent commit, it seems that\n$SUBJECT in all releases (well, I only tested as far back as 11). I\nattach an addition to the tests to show this, but here's a stand-alone\nrepro:\n\nDROP TABLE IF EXISTS clstr_expression;\n\nCREATE TABLE clstr_expression(id serial primary key, a int, b text COLLATE \"C\");\nINSERT INTO clstr_expression(a, b) SELECT g.i % 42, 'prefix'||g.i FROM\ngenerate_series(1, 133) g(i);\nCREATE INDEX clstr_expression_minus_a ON clstr_expression ((-a), b);\nCREATE INDEX clstr_expression_upper_b ON clstr_expression ((upper(b)));\n\nCLUSTER clstr_expression USING clstr_expression_minus_a;\nWITH rows AS\n (SELECT ctid, lag(a) OVER (ORDER BY ctid) AS la, a FROM clstr_expression)\nSELECT * FROM rows WHERE la < a;\n\nAll good, and now for the part that I think is misbehaving:\n\nCLUSTER clstr_expression USING clstr_expression_upper_b;\nWITH rows AS\n (SELECT ctid, lag(b) OVER (ORDER BY ctid) AS lb, b FROM clstr_expression)\nSELECT * FROM rows WHERE upper(lb) > upper(b);\n\nThat should produce no rows. It works as expected if you SET\nenable_seqscan = off and re-run CLUSTER, revealing that it's the\nseq-scan-and-sort strategy that is broken. 
It also works as expected\nfor non-yet-abbreviatable collations.", "msg_date": "Sun, 3 Apr 2022 16:05:00 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Sun, Apr 3, 2022 at 11:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> Independently of a problem with a recent commit, it seems that\n> $SUBJECT in all releases (well, I only tested as far back as 11).\n\nI can confirm the problem on v10 as well.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 3 Apr 2022 15:22:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Sun, Apr 3, 2022 at 8:22 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Sun, Apr 3, 2022 at 11:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Independently of a problem with a recent commit, it seems that\n> > $SUBJECT in all releases (well, I only tested as far back as 11).\n>\n> I can confirm the problem on v10 as well.\n\nThanks for confirming. 
I got as far as seeing that the two calls to\nFormIndexDatum() are producing garbage in {l,r}_index_values, in the\nloop at the end of comparetup_cluster(), but I'll have to come back to\nthis after some other stuff.\n\n\n", "msg_date": "Mon, 4 Apr 2022 10:04:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Sun, Apr 3, 2022 at 1:22 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n> I can confirm the problem on v10 as well.\n\nWe will need a backpatchable fix, since Thomas' recent fix (commit\ncc58eecc5d75a9329a6d49a25a6499aea7ee6fd6) only targeted the master\nbranch.\n\nIf we really needed the performance advantage of abbreviated keys in\nthis case then it would have taken more than 7 years for this bug to\ncome to light. The backpatchable fix can be very simple. We can just\ncopy what tuplesort_set_bound() does with abbreviated keys in\ntuplesort_begin_cluster(), to explicitly disable abbreviated keys\nup-front for affected tuplesorts. (Just for CLUSTER tuplesorts on an\nexpression index.)\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 3 Apr 2022 16:11:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Mon, Apr 4, 2022 at 11:12 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> We will need a backpatchable fix, since Thomas' recent fix (commit\n> cc58eecc5d75a9329a6d49a25a6499aea7ee6fd6) only targeted the master\n> branch.\n\nI probably should have made it clearer in the commit message,\ncc58eecc5 doesn't fix this problem in the master branch. It only\nfixes the code that incorrectly assumed that datum1 was always\navailable. Now it skips the optimised path, and falls back to the\nslow path, that still has *this* bug, and the test upthread still\nfails. 
I wrote about this separately because it's clearly independent\nand I didn't want it to be mistaken for an open item for 15.\n\n\n", "msg_date": "Mon, 4 Apr 2022 11:33:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Sun, Apr 3, 2022 at 4:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I probably should have made it clearer in the commit message,\n> cc58eecc5 doesn't fix this problem in the master branch. It only\n> fixes the code that incorrectly assumed that datum1 was always\n> available.\n\nAttached patch fixes the issue, and includes the test case that you posted.\n\nThere is only a one line change to tuplesort.c. This is arguably the\nsame bug -- abbreviation is just another \"haveDatum1 optimization\"\nthat needs to be accounted for.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 12 Apr 2022 11:01:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Tue, Apr 12, 2022 at 11:01 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch fixes the issue, and includes the test case that you posted.\n\nPushed a similar patch just now. Backpatched to all supported branches.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 Apr 2022 17:18:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" }, { "msg_contents": "On Thu, Apr 21, 2022 at 12:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Apr 12, 2022 at 11:01 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached patch fixes the issue, and includes the test case that you posted.\n>\n> Pushed a similar patch just now. 
Backpatched to all supported branches.\n\nThanks.\n\n\n", "msg_date": "Thu, 21 Apr 2022 13:52:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CLUSTER sort on abbreviated expressions is broken" } ]
[ { "msg_contents": "Hi,\n\nWe've had bugs in pg_upgrade where post-upgrade xid horizons weren't correctly\nset. We've had bugs where indexes were corrupted during replay.\n\nThe latter can be caught by wal_consistency_checking - but that's pretty\nexpensive.\n\nIt seems $subject would have a chance of catching some of these bugs, as well\nas exposing amcheck to a database with a bit more varied content?\n\nDepending on the cost it might make sense to do this optionally, via\nPG_TEST_EXTRA?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 11:53:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Sun, Apr 3, 2022 at 11:53 AM Andres Freund <andres@anarazel.de> wrote:\n> We've had bugs in pg_upgrade where post-upgrade xid horizons weren't correctly\n> set. We've had bugs where indexes were corrupted during replay.\n>\n> The latter can be caught by wal_consistency_checking - but that's pretty\n> expensive.\n>\n> It seems $subject would have a chance of catching some of these bugs, as well\n> as exposing amcheck to a database with a bit more varied content?\n\nI thought that Andrew Dunstan (CC'd) had a BF animal that did this\nsetup. But I'm not sure if that ever ended up happening.\n\nI meant to tell the authors of verify_heapam() (also CC'd) that it\nreally helped with my recent VACUUM project. While the assertions that\nI wrote in vacuumlazy.c might catch certain bugs like this,\nverify_heapam() is much more effective in practice.\n\nLet's say that an all-visible page (or all-frozen page) has XIDs from\nbefore relfrozenxid. Why should the next VACUUM (or any VACUUM) be\nable to observe the problem? A testing strategy that doesn't rely on\nthese kinds of accidental details to catch bugs is far better than one\nthat does.\n\nDefinitely all in favor of using verify_heapam() to its full\npotential. 
So I'm +1 on your proposal.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 3 Apr 2022 19:10:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Sun, Apr 3, 2022 at 10:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I meant to tell the authors of verify_heapam() (also CC'd) that it\n> really helped with my recent VACUUM project. While the assertions that\n> I wrote in vacuumlazy.c might catch certain bugs like this,\n> verify_heapam() is much more effective in practice.\n\nYeah, I was very excited about verify_heapam(). There is a lot more\nstuff that we could check, but a lot of those things would be much\nmore expensive to check. It does a good job, I think, checking all the\nthings that a human being could potentially spot just by looking at an\nindividual page. I love the idea of using it in regression testing in\nmore places. It might find bugs in amcheck, which would be good, but I\nthink it's even more likely to help us find bugs in other code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 10:02:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Mon, Apr 4, 2022 at 7:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, I was very excited about verify_heapam(). There is a lot more\n> stuff that we could check, but a lot of those things would be much\n> more expensive to check.\n\nIf anything I understated the value of verify_heapam() with this kind\nof work before. Better to show just how valuable it is using an\nexample.\n\nLet's introduce a fairly blatant bug to vacuumlazy.c. 
This change\nmakes VACUUM fail to account for the fact that skipping a skippable\nrange with an all-visible page makes it unsafe to advance\nrelfrozenxid:\n\n--- a/src/backend/access/heap/vacuumlazy.c\n+++ b/src/backend/access/heap/vacuumlazy.c\n@@ -1371,8 +1371,6 @@ lazy_scan_skip(LVRelState *vacrel, Buffer\n*vmbuffer, BlockNumber next_block,\n else\n {\n *skipping_current_range = true;\n- if (skipsallvis)\n- vacrel->skippedallvis = true;\n }\n\n return next_unskippable_block;\n\nIf I run \"make check-world\", the tests all pass! But when I run pg_amcheck\nagainst an affected \"regression\" database, it will complain about\nrelfrozenxid related corruption in several different tables.\n\n> It does a good job, I think, checking all the\n> things that a human being could potentially spot just by looking at an\n> individual page. I love the idea of using it in regression testing in\n> more places. It might find bugs in amcheck, which would be good, but I\n> think it's even more likely to help us find bugs in other code.\n\nI'd really like it if amcheck had HOT chain verification. That's the\nother area where catching bugs passively with assertions and whatnot\nis clearly not good enough.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:27:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "\n\n> On Apr 4, 2022, at 9:27 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I'd really like it if amcheck had HOT chain verification. That's the\n> other area where catching bugs passively with assertions and whatnot\n> is clearly not good enough.\n\nI agree, and was hoping to get around to this in the postgres 15 development cycle. Alas, that did not happen. 
Worse, I have several other projects that will keep me busy for the next few months, at least.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:30:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "Hi,\n\nOn 2022-04-04 10:02:37 -0400, Robert Haas wrote:\n> It does a good job, I think, checking all the things that a human being\n> could potentially spot just by looking at an individual page.\n\nI think there's a few more things that'd be good to check. For example amcheck\ndoesn't verify that HOT chains are reasonable, which can often be spotted\nlooking at an individual page. Which is a bit unfortunate, given how many bugs\nwe had in that area.\n\nStuff to check around that:\n- target of redirect has HEAP_ONLY_TUPLE, HEAP_UPDATED set\n- In a valid ctid chain within a page (i.e. xmax = xmin):\n - tuples have HEAP_UPDATED set\n - HEAP_ONLY_TUPLE / HEAP_HOT_UPDATED matches across chains elements\n\nI think it'd also be good to check for things like visible tuples following\ninvisible ones.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 11:16:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Mon, Apr 4, 2022 at 2:16 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-04 10:02:37 -0400, Robert Haas wrote:\n> > It does a good job, I think, checking all the things that a human being\n> > could potentially spot just by looking at an individual page.\n>\n> I think there's a few more things that'd be good to check. For example amcheck\n> doesn't verify that HOT chains are reasonable, which can often be spotted\n> looking at an individual page. 
Which is a bit unfortunate, given how many bugs\n> we had in that area.\n>\n> Stuff to check around that:\n> - target of redirect has HEAP_ONLY_TUPLE, HEAP_UPDATED set\n> - In a valid ctid chain within a page (i.e. xmax = xmin):\n> - tuples have HEAP_UPDATED set\n> - HEAP_ONLY_TUPLE / HEAP_HOT_UPDATED matches across chains elements\n>\n> I think it'd also be good to check for things like visible tuples following\n> invisible ones.\n\nInteresting.\n\n*takes notes*\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 14:31:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Sun, Apr 03, 2022 at 11:53:03AM -0700, Andres Freund wrote:\n> It seems $subject would have a chance of catching some of these bugs, as well\n> as exposing amcheck to a database with a bit more varied content?\n\nMakes sense to me to extend that.\n\n> Depending on the cost it might make sense to do this optionally, via\n> PG_TEST_EXTRA?\n\nYes, it would be good to check the difference in run-time before\nintroducing more. A logical dump of the regression database is no\nmore than 15MB if I recall correctly, so my guess is that most of the\nruntime is still going to be eaten by the run of pg_regress.\n--\nMichael", "msg_date": "Tue, 5 Apr 2022 08:46:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" 
}, { "msg_contents": "Hi,\n\nOn 2022-04-05 08:46:06 +0900, Michael Paquier wrote:\n> On Sun, Apr 03, 2022 at 11:53:03AM -0700, Andres Freund wrote:\n> > It seems $subject would have a chance of catching some of these bugs, as well\n> > as exposing amcheck to a database with a bit more varied content?\n> \n> Makes sense to me to extend that.\n> \n> > Depending on the cost it might make sense to do this optionally, via\n> > PG_TEST_EXTRA?\n> \n> Yes, it would be good to check the difference in run-time before\n> introducing more. A logical dump of the regression database is no\n> more than 15MB if I recall correctly, so my guess is that most of the\n> runtime is still going to be eaten by the run of pg_regress.\n\nOn my workstation it takes about 2.39s to run pg_amcheck on a regression\ndatabase with all thoroughness options enabled. With -j4 it's 0.62s.\n\nWithout more thorough checking it's 1.24s and 0.30s with -j4.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 17:39:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" }, { "msg_contents": "On Mon, Apr 04, 2022 at 05:39:58PM -0700, Andres Freund wrote:\n> On my workstation it takes about 2.39s to run pg_amcheck on a regression\n> database with all thoroughness options enabled. With -j4 it's 0.62s.\n> \n> Without more thorough checking it's 1.24s and 0.30s with -j4.\n\nOkay. That sounds like an argument to enable that by default, with\nparallelism.\n--\nMichael", "msg_date": "Tue, 5 Apr 2022 11:12:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" 
}, { "msg_contents": "\nOn 4/3/22 22:10, Peter Geoghegan wrote:\n> On Sun, Apr 3, 2022 at 11:53 AM Andres Freund <andres@anarazel.de> wrote:\n>> We've had bugs in pg_upgrade where post-upgrade xid horizons weren't correctly\n>> set. We've had bugs where indexes were corrupted during replay.\n>>\n>> The latter can be caught by wal_consistency_checking - but that's pretty\n>> expensive.\n>>\n>> It seems $subject would have a chance of catching some of these bugs, as well\n>> as exposing amcheck to a database with a bit more varied content?\n> I thought that Andrew Dunstan (CC'd) had a BF animal that did this\n> setup. But I'm not sure if that ever ended up happening.\n\n\nI don't think any of my BF animals do anything special in this area.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Apr 2022 08:54:35 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Run pg_amcheck in 002_pg_upgrade.pl and 027_stream_regress.pl?" } ]
[ { "msg_contents": "My pet dinosaur gaur just failed [1] in\nsrc/test/recovery/t/022_crash_temp_files.pl, which does this:\n\n-----\nmy $ret = PostgreSQL::Test::Utils::system_log('pg_ctl', 'kill', 'KILL', $pid);\nis($ret, 0, 'killed process with KILL');\n\n# Close psql session\n$killme->finish;\n$killme2->finish;\n\n# Wait till server restarts\n$node->poll_query_until('postgres', undef, '');\n-----\n\nIt's hard to be totally sure, but I think what happened is that\ngaur hit the in-hindsight-obvious race condition in this code:\nwe managed to execute a successful iteration of poll_query_until\nbefore the postmaster had noticed its dead child and commenced\nthe restart. The test lines after these are not prepared to see\nfailure-to-connect.\n\nIt's not obvious to me how to remove this race condition.\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gaur&dt=2022-04-03%2021%3A14%3A41\n\n\n", "msg_date": "Mon, 04 Apr 2022 00:50:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Race condition in server-crash testing" }, { "msg_contents": "Hi,\n\nOn 2022-04-04 00:50:27 -0400, Tom Lane wrote:\n> My pet dinosaur gaur just failed [1] in\n> src/test/recovery/t/022_crash_temp_files.pl, which does this:\n> \n> -----\n> my $ret = PostgreSQL::Test::Utils::system_log('pg_ctl', 'kill', 'KILL', $pid);\n> is($ret, 0, 'killed process with KILL');\n> \n> # Close psql session\n> $killme->finish;\n> $killme2->finish;\n> \n> # Wait till server restarts\n> $node->poll_query_until('postgres', undef, '');\n> -----\n> \n> It's hard to be totally sure, but I think what happened is that\n> gaur hit the in-hindsight-obvious race condition in this code:\n> we managed to execute a successful iteration of poll_query_until\n> before the postmaster had noticed its dead child and commenced\n> the restart. 
The test lines after these are not prepared to see\n> failure-to-connect.\n> \n> It's not obvious to me how to remove this race condition.\n> Thoughts?\n\nMaybe we can use pump_until() with the psql that's not getting killed? With a\nnon-matching regex? That'd only return once the backend was killed by\npostmaster, afaics?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 22:07:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in server-crash testing" }, { "msg_contents": "On Sun, Apr 03, 2022 at 10:07:21PM -0700, Andres Freund wrote:\n> On 2022-04-04 00:50:27 -0400, Tom Lane wrote:\n> > My pet dinosaur gaur just failed [1] in\n> > src/test/recovery/t/022_crash_temp_files.pl, which does this:\n> > \n> > -----\n> > my $ret = PostgreSQL::Test::Utils::system_log('pg_ctl', 'kill', 'KILL', $pid);\n> > is($ret, 0, 'killed process with KILL');\n> > \n> > # Close psql session\n> > $killme->finish;\n> > $killme2->finish;\n> > \n> > # Wait till server restarts\n> > $node->poll_query_until('postgres', undef, '');\n> > -----\n> > \n> > It's hard to be totally sure, but I think what happened is that\n> > gaur hit the in-hindsight-obvious race condition in this code:\n> > we managed to execute a successful iteration of poll_query_until\n> > before the postmaster had noticed its dead child and commenced\n> > the restart. The test lines after these are not prepared to see\n> > failure-to-connect.\n> > \n> > It's not obvious to me how to remove this race condition.\n> > Thoughts?\n> \n> Maybe we can use pump_until() with the psql that's not getting killed? With a\n> non-matching regex? That'd only return once the backend was killed by\n> postmaster, afaics?\n\nSounds good; I suspect that will be better than any of the ideas I scratched\ndown when\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2021-08-31%2015%3A00%3A52\nfailed the same way. 
For what it's worth, those were:\n\n- Check that pg_postmaster_start_time() has changed. But that runs into EPIPE\n trouble, requiring a write to a file or an eval{} to trap the EPIPE.\n- Likewise, but check \"select checkpoint_lsn from pg_control_checkpoint();\".\n- Poll pg_controldata until a new checkpoint happens. Compare checkpoint LSN.\n Use checkpoint_timeout=1h to avoid non-end-of-recovery checkpoints.\n- Poll logfile until \"all server processes terminated; reinitializing\". Can\n be fooled with certain log_min_messages settings, but so can our other\n log-scraping tests.\n- Grab the pid of e.g. the checkpointer and poll for that process to be gone.\n Can be fooled by PID reuse.\n\n\n", "msg_date": "Sun, 3 Apr 2022 22:52:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in server-crash testing" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-04 00:50:27 -0400, Tom Lane wrote:\n>> It's hard to be totally sure, but I think what happened is that\n>> gaur hit the in-hindsight-obvious race condition in this code:\n>> we managed to execute a successful iteration of poll_query_until\n>> before the postmaster had noticed its dead child and commenced\n>> the restart. The test lines after these are not prepared to see\n>> failure-to-connect.\n>> It's not obvious to me how to remove this race condition.\n>> Thoughts?\n\n> Maybe we can use pump_until() with the psql that's not getting killed? With a\n> non-matching regex? That'd only return once the backend was killed by\n> postmaster, afaics?\n\nGood idea. What I actually did was to borrow the recently-fixed code\nin 013_crash_restart.pl that checks for psql's \"connection lost\"\nreport.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Apr 2022 20:46:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in server-crash testing" } ]
[ { "msg_contents": "Hi hackers!\n \nI’ve been working on this [ https://www.postgresql.org/message-id/flat/cfcca574-6967-c5ab-7dc3-2c82b6723b99%40mail.ru ] bug. Finally, I’ve come up with the patch you can find attached. Basically, what it does is raise a PROC_IN_VACUUM flag and reset it afterwards. I know this seems kinda crunchy and I hope you guys will give me some hints on where to continue. This [ https://www.postgresql.org/message-id/20220218175119.7hwv7ksamfjwijbx%40alap3.anarazel.de ] message contains a reproduction script. Thank you very much in advance.\n \nKind regards,\nDaniel Shelepanov", "msg_date": "Mon, 04 Apr 2022 11:51:10 +0300", "msg_from": "Daniel Shelepanov <deniel1495@mail.ru>", "msg_from_op": true, "msg_subject": "collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Mon, Apr 4, 2022 at 4:51 AM Daniel Shelepanov <deniel1495@mail.ru> wrote:\n> I’ve been working on this [https://www.postgresql.org/message-id/flat/cfcca574-6967-c5ab-7dc3-2c82b6723b99%40mail.ru] bug. Finally, I’ve come up with the patch you can find attached. Basically, what it does is raise a PROC_IN_VACUUM flag and reset it afterwards. I know this seems kinda crunchy and I hope you guys will give me some hints on where to continue. This [https://www.postgresql.org/message-id/20220218175119.7hwv7ksamfjwijbx%40alap3.anarazel.de] message contains a reproduction script. Thank you very much in advance.\n\nI noticed the CommitFest entry for this thread today and decided to\ntake a look. I think the general issue here can be stated in this way:\nsuppose a VACUUM computes an all-visible cutoff X, i.e. it thinks all\ncommitted XIDs < X are all-visible. Then, at a later time, pg_visible\ncomputes an all-visible cutoff Y, i.e. it thinks all committed XIDs <\nY are all-visible. 
If Y < X, pg_check_visible() might falsely report\ncorruption, because VACUUM might have marked as all-visible some page\ncontaining tuples which pg_check_visible() thinks aren't really\nall-visible.\n\nIn reality, the oldest all-visible XID cannot move backward, but\nComputeXidHorizons() lets it move backward, because it's intended for\nuse by a caller who wants to mark pages all-visible, and it's only\nconcerned with making sure that the value is old enough to be safe.\nAnd that's a problem for the way that pg_visibility is (mis-)using it.\n\nTo say that another way, ComputeXidHorizons() is perfectly fine with\nreturning a value that is older than the true answer, as long as it\nnever returns a value that is newer than the true answer. pg_visibility\nwants the opposite. Here, a value that is newer than the true value\ncan't do worse than hide corruption, which is sort of OK, but a value\nthat's older than the true value can report corruption where none\nexists, which is very bad.\n\nI have a feeling, therefore, that this isn't really a complete fix. I\nthink it might address one way for the horizon reported by\nComputeXidHorizons() to move backward, but not all the ways.\n\nUnfortunately, I am out of time for today to study this... 
but will\ntry to find more time on another day.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jul 2022 17:50:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In reality, the oldest all-visible XID cannot move backward, but\n> ComputeXidHorizons() lets it move backward, because it's intended for\n> use by a caller who wants to mark pages all-visible, and it's only\n> concerned with making sure that the value is old enough to be safe.\n\nRight.\n\n> And that's a problem for the way that pg_visibility is (mis-)using it.\n\n> To say that another way, ComputeXidHorizons() is perfectly fine with\n> returning a value that is older than the true answer, as long as it\n> never returns a value that is newer than the true answer. pg_visibility\n> wants the opposite. Here, a value that is newer than the true value\n> can't do worse than hide corruption, which is sort of OK, but a value\n> that's older than the true value can report corruption where none\n> exists, which is very bad.\n\nMaybe we need a different function for pg_visibility to call?\nIf we want ComputeXidHorizons to serve both these purposes, then it\nhas to always deliver exactly the right answer, which seems like\na definition that will be hard and expensive to achieve.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Jul 2022 17:55:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Wed, Jul 27, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe we need a different function for pg_visibility to call?\n> If we want ComputeXidHorizons to serve both these purposes, then it\n> has to always deliver exactly the right answer, which seems like\n> a definition that will be hard and expensive to 
achieve.\n\nYeah, I was thinking along similar lines.\n\nI'm also kind of wondering why these calculations use\nlatestCompletedXid. Is that something we do solely to reduce locking?\nThe XIDs of running transactions matter, and their snapshots matter,\nand the XIDs that could start running in the future matter, but I\ndon't know why it matters what the latest completed XID is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jul 2022 21:47:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Wed, Jul 27, 2022 at 09:47:19PM -0400, Robert Haas wrote:\n> On Wed, Jul 27, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Maybe we need a different function for pg_visibility to call?\n> > If we want ComputeXidHorizons to serve both these purposes, then it\n> > has to always deliver exactly the right answer, which seems like\n> > a definition that will be hard and expensive to achieve.\n> \n> Yeah, I was thinking along similar lines.\n> \n> I'm also kind of wondering why these calculations use\n> latestCompletedXid. Is that something we do solely to reduce locking?\n> The XIDs of running transactions matter, and their snapshots matter,\n> and the XIDs that could start running in the future matter, but I\n> don't know why it matters what the latest completed XID is.\n\nDaniel, it seems to me that this thread is waiting for some input from\nyou, based on the remarks of Tom and Robert. Are you planning to do\nso? This is marked as a bug fix, so I have moved this item to the\nnext CF for now.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 14:14:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi hackers!\n\nDaniel is busy with other tasks. 
I've found this topic and this problem\nseems to still be present\nfor v15 too.\nPlease correct me if I am wrong. I've checked another discussion related to\npg_visibility [1].\nAccording to discussion: if using latest completed xid is not right for\nchecking visibility, then\nit should be the least running transaction xid? So it must be another\nfunction to be used for\nthese calculations, not the GetOldestNonRemovableTransactionId that uses\nthe ComputeXidHorizons.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/c0610352-8433-ab4b-986d-0e803c628efe%40postgrespro.ru\n\nOn Wed, Oct 12, 2022 at 8:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 27, 2022 at 09:47:19PM -0400, Robert Haas wrote:\n> > On Wed, Jul 27, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Maybe we need a different function for pg_visibility to call?\n> > > If we want ComputeXidHorizons to serve both these purposes, then it\n> > > has to always deliver exactly the right answer, which seems like\n> > > a definition that will be hard and expensive to achieve.\n> >\n> > Yeah, I was thinking along similar lines.\n> >\n> > I'm also kind of wondering why these calculations use\n> > latestCompletedXid. Is that something we do solely to reduce locking?\n> > The XIDs of running transactions matter, and their snapshots matter,\n> > and the XIDs that could start running in the future matter, but I\n> > don't know why it matters what the latest completed XID is.\n>\n> Daniel, it seems to me that this thread is waiting for some input from\n> you, based on the remarks of Tom and Robert. Are you planning to do\n> so? This is marked as a bug fix, so I have moved this item to the\n> next CF for now.\n> --\n> Michael\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 7 Nov 2022 16:30:32 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" 
I've checked another discussion related to\npg_visibility [1].\nAccording to discussion: if using latest completed xid is not right for\nchecking visibility, than\nit should be the least running transaction xid? So it must be another\nfunction to be used for\nthese calculations, not the GetOldestNonRemovableTransactionId that uses\nthe ComputeXidHorizons.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/c0610352-8433-ab4b-986d-0e803c628efe%40postgrespro.ru\n\nOn Wed, Oct 12, 2022 at 8:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Wed, Jul 27, 2022 at 09:47:19PM -0400, Robert Haas wrote:\n>> > On Wed, Jul 27, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > > Maybe we need a different function for pg_visibility to call?\n>> > > If we want ComputeXidHorizons to serve both these purposes, then it\n>> > > has to always deliver exactly the right answer, which seems like\n>> > > a definition that will be hard and expensive to achieve.\n>> >\n>> > Yeah, I was thinking along similar lines.\n>> >\n>> > I'm also kind of wondering why these calculations use\n>> > latestCompletedXid. Is that something we do solely to reduce locking?\n>> > The XIDs of running transactions matter, and their snapshots matter,\n>> > and the XIDs that could start running in the future matter, but I\n>> > don't know why it matters what the latest completed XID is.\n>>\n>> Daniel, it seems to me that this thread is waiting for some input from\n>> you, based on the remarks of Tom and Robert. Are you planning to do\n>> so? This is marked as a bug fix, so I have moved this item to the\n>> next CF for now.\n>> --\n>> Michael\n>>\n>\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!Just to bump this thread, because the problem seems to be still actual:Please correct me if I am wrong. I've checked another discussion related to pg_visibility [1]. 
According to discussion: if using latest completed xid is not right for checking visibility, thanit should be the least running transaction xid? So it must be another function to be used forthese calculations, not the GetOldestNonRemovableTransactionId that usesthe ComputeXidHorizons.[1] https://www.postgresql.org/message-id/flat/c0610352-8433-ab4b-986d-0e803c628efe%40postgrespro.ruOn Wed, Oct 12, 2022 at 8:15 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Jul 27, 2022 at 09:47:19PM -0400, Robert Haas wrote:\n> On Wed, Jul 27, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Maybe we need a different function for pg_visibility to call?\n> > If we want ComputeXidHorizons to serve both these purposes, then it\n> > has to always deliver exactly the right answer, which seems like\n> > a definition that will be hard and expensive to achieve.\n> \n> Yeah, I was thinking along similar lines.\n> \n> I'm also kind of wondering why these calculations use\n> latestCompletedXid. Is that something we do solely to reduce locking?\n> The XIDs of running transactions matter, and their snapshots matter,\n> and the XIDs that could start running in the future matter, but I\n> don't know why it matters what the latest completed XID is.\n\nDaniel, it seems to me that this thread is waiting for some input from\nyou, based on the remarks of Tom and Robert.  Are you planning to do\nso?  This is marked as a bug fix, so I have moved this item to the\nnext CF for now.\n--\nMichael\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Wed, 14 Dec 2022 21:56:12 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "This patch has been waiting on the author for about a year now, so I will close\nit as Returned with Feedback. 
Please feel free to resubmit to a future CF when\nthere is renewed interest in working on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 09:21:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi!\n\nOn Tue, Jul 4, 2023 at 10:21 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> This patch has been waiting on the author for about a year now, so I will close\n> it as Returned with Feedback. Please feel free to resubmit to a future CF when\n> there is renewed interest in working on this.\n\nI'd like to revive this thread. While the patch proposed definitely\nmakes things better. But as pointed out by Robert and Tom, it didn't\nallow to avoid all false reports. The reason is that the way we\ncurrently calculate the oldest xmin, it could move backwards (see\ncomments to ComputeXidHorizons()). The attached patch implements own\nfunction to calculate strict oldest xmin, which should be always\ngreater or equal to any xid horizon calculated before. I have to do\nthe following changes in comparison to what ComputeXidHorizons() do.\n\n1. Ignore processes xmin's, because they take into account connection\nto other databases which were ignored before.\n2. Ignore KnownAssignedXids, because they are not database-aware.\nWhile primary could compute its horizons database-aware.\n3. Ignore walsender xmin, because it could go backward if some\nreplication connections don't use replication slots.\n\nSurely these would significantly sacrifice accuracy. 
But we have to do\nso in order to avoid reporting false errors.\n\nAny thoughts?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 6 Nov 2023 11:30:15 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi Alexander,\n\n06.11.2023 12:30, Alexander Korotkov wrote:\n> Surely these would significantly sacrifice accuracy. But we have to do\n> so in order to avoid reporting false errors.\n>\n\nI've reduced the dirty reproducer Daniel Shelepanov posted initially\nto the following:\nnumdbs=10\nfor ((d=1;d<=$numdbs;d++)); do\n   createdb db$d\n   psql db$d -c \"create extension pg_visibility\"\ndone\n\nfor ((i=1;i<=300;i++)); do\necho \"iteration $i\"\nfor ((d=1;d<=$numdbs;d++)); do\n(\necho \"\ncreate table vacuum_test as select 42 i;\nvacuum (disable_page_skipping) vacuum_test;\nselect * from pg_check_visible('vacuum_test');\n\" | psql db$d -a -q >psql-$d.log 2>&1\n) &\ndone\nwait\n\nres=0\nfor ((d=1;d<=$numdbs;d++)); do\ngrep -q '0 rows' psql-$d.log || { echo \"Error condition in psql-$d.log:\"; cat psql-$d.log; res=1; break; }\npsql db$d -q -c \"drop table vacuum_test\"\ndone\n[ $res == 0 ] || break;\ndone\n\nIt looks like the v2 patch doesn't fix the original issue. Maybe I miss\nsomething, but with the patch applied, I see the failure immediately,\nthough without the patch several iterations are needed to get it.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 7 Nov 2023 14:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi, Alexander.\n\nOn Tue, Nov 7, 2023 at 1:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> It looks like the v2 patch doesn't fix the original issue. 
Maybe I miss\n> something, but with the patch applied, I see the failure immediately,\n> though without the patch several iterations are needed to get it.\n\n\nThat's a bug in the patch. Thank you for catching it. It should start\ncalculation from latestCompletedXid + 1, not InvalidTransactionId.\nPlease, check the revised patch.", "msg_date": "Tue, 7 Nov 2023 13:38:40 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "07.11.2023 14:38, Alexander Korotkov wrote:\n> Hi, Alexander.\n>\n> On Tue, Nov 7, 2023 at 1:00 PM 
Alexander Lakhin <exclusion@gmail.com> wrote:\n> >> It looks like the v2 patch doesn't fix the original issue. Maybe I miss\n> >> something, but with the patch applied, I see the failure immediately,\n> >> though without the patch several iterations are needed to get it.\n> >\n> > That's a bug in the patch. Thank you for cathing it. It should start\n> > calculation from latestCompletedXid + 1, not InvalidTransactionId.\n> > Please, check the revised patch.\n>\n> Thanks for looking at this!\n> Unfortunately, I still see the failure with the v3, but not on a first\n> iteration:\n> ...\n> iteration 316\n> Error condition in psql-8.log:\n> create table vacuum_test as select 42 i;\n> vacuum (disable_page_skipping) vacuum_test;\n> select * from pg_check_visible('vacuum_test');\n> t_ctid\n> --------\n> (0,1)\n> (1 row)\n>\n> (I've double-checked that the patch is applied and get_strict_xid_horizon()\n> is called.)\n\nI managed to reproduce this on a Linux VM. This problem should arise\nbecause in extension I don't have access to ProcArrayStruct. So, my\ncode is iterating the whole PGPROC's array. I reimplemented the new\nhorizon calculation function in the core with usage of\nProcArrayStruct. Now it doesn't fall for me. Please, recheck.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 4 Dec 2023 02:23:37 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi Alexander,\n\n04.12.2023 03:23, Alexander Korotkov wrote:\n> I managed to reproduce this on a Linux VM. This problem should arise\n> because in extension I don't have access to ProcArrayStruct. So, my\n> code is iterating the whole PGPROC's array. I reimplemented the new\n> horizon calculation function in the core with usage of\n> ProcArrayStruct. Now it doesn't fall for me. 
Please, recheck.\n\nYes, v4 works for me as well (thousands of iterations passed).\nThank you!\n\nThough the test passes even without manipulations with the PROC_IN_VACUUM\nflag in pg_visibility.c (maybe the test is not good enough to show why\nthose manipulations are needed).\nI also couldn't see where VISHORIZON_DATA_STRICT comes into play...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 4 Dec 2023 13:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi!\n\nI agree with Alexander Lakhin about PROC_IN_VACUUM and \nVISHORIZON_DATA_STRICT:\n1) probably manipulations with the PROC_IN_VACUUM flag in \npg_visibility.c were needed for condition [1] and can be removed now;\n2) the VISHORIZON_DATA_STRICT macro is probably unnecessary too (since \nwe are not going to use it in the GlobalVisHorizonKindForRel() function).\n\nAlso It would be nice to remove the get_strict_xid_horizon() function \nfrom the comment (replace to GetStrictOldestNonRemovableTransactionId()?).\n\n[1] \nhttps://github.com/postgres/postgres/blob/4d0cf0b05defcee985d5af38cb0db2b9c2f8dbae/src/backend/storage/ipc/procarray.c#L1812\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 22:03:37 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi!\n\nOn Tue, Dec 5, 2023 at 9:03 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> I agree with Alexander Lakhin about PROC_IN_VACUUM and\n> VISHORIZON_DATA_STRICT:\n> 1) probably manipulations with the PROC_IN_VACUUM flag in\n> pg_visibility.c were needed for condition [1] and can be removed now;\n\nRight, PROC_IN_VACUUM is no longer required. The possible benefit of\nit would be to avoid bloat during a possibly long run of\npg_visibility() function. 
But the downside are problems with the\nsnapshot if the invoking query contains something except a single call\nof the pg_visibility() function, and complexity. Removed.\n\n> 2) the VISHORIZON_DATA_STRICT macro is probably unnecessary too (since\n> we are not going to use it in the GlobalVisHorizonKindForRel() function).\n\nMakes sense, removed.\n\n> Also It would be nice to remove the get_strict_xid_horizon() function\n> from the comment (replace to GetStrictOldestNonRemovableTransactionId()?).\n\nRight, fixed.\n\nThe revised patch is attached. Besides the fixes above, it contains\nimprovements for comments and the detailed commit message.\n\nTom, Robert, what do you think about the patch attached? It required\na new type of xid horizon in core and sacrifices accuracy. But this\nis the only way I can imagine, we can fix the problem in a general\nway.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 9 Jan 2024 11:17:28 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi!\n\nThank you, there is one small point left (in the comment): can you \nreplace \"guarantteed to be to be newer\" to \"guaranteed to be newer\", \nfile src/backend/storage/ipc/procarray.c?\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n", "msg_date": "Sat, 13 Jan 2024 20:33:10 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "Hi, Dmitry!\n\nOn Sat, Jan 13, 2024 at 7:33 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> Thank you, there is one small point left (in the comment): can you\n> replace \"guarantteed to be to be newer\" to \"guaranteed to be newer\",\n> file src/backend/storage/ipc/procarray.c?\n\nFixed. 
Thank you for catching this.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sun, 14 Jan 2024 04:35:26 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Sun, Jan 14, 2024 at 4:35 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sat, Jan 13, 2024 at 7:33 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > Thank you, there is one small point left (in the comment): can you\n> > replace \"guarantteed to be to be newer\" to \"guaranteed to be newer\",\n> > file src/backend/storage/ipc/procarray.c?\n>\n> Fixed. Thank you for catching this.\n\nI made the following improvements to the patch.\n1. I find a way to implement the path with less changes to the core\ncode. The GetRunningTransactionData() function allows to get the\nleast running xid, all I need is to add database-aware values.\n2. I added the TAP test reproducing the original issue.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 12 Mar 2024 14:10:59 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Tue, Mar 12, 2024 at 02:10:59PM +0200, Alexander Korotkov wrote:\n> I'm going to push this if no objections.\n\nCommit e85662d wrote:\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n\n> @@ -2740,6 +2741,8 @@ GetRunningTransactionData(void)\n> \t */\n> \tfor (index = 0; index < arrayP->numProcs; index++)\n> \t{\n> +\t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n> +\t\tPGPROC\t *proc = &allProcs[pgprocno];\n> \t\tTransactionId xid;\n> \n> \t\t/* Fetch xid just once - see GetNewTransactionId */\n> @@ -2760,6 +2763,13 @@ GetRunningTransactionData(void)\n> \t\tif (TransactionIdPrecedes(xid, oldestRunningXid))\n> \t\t\toldestRunningXid = xid;\n> \n> +\t\t/*\n> +\t\t * Also, 
update the oldest running xid within the current database.\n> +\t\t */\n> +\t\tif (proc->databaseId == MyDatabaseId &&\n> +\t\t\tTransactionIdPrecedes(xid, oldestRunningXid))\n> +\t\t\toldestDatabaseRunningXid = xid;\n\nShouldn't that be s/oldestRunningXid/oldestDatabaseRunningXid/?\n\nWhile this isn't a hot path, I likely would test TransactionIdPrecedes()\nbefore fetching pgprocno and PGPROC, to reduce wasted cache misses.\n\n\n", "msg_date": "Sun, 30 Jun 2024 16:18:16 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Mon, Jul 1, 2024 at 2:18 AM Noah Misch <noah@leadboat.com> wrote:\n> On Tue, Mar 12, 2024 at 02:10:59PM +0200, Alexander Korotkov wrote:\n> > I'm going to push this if no objections.\n>\n> Commit e85662d wrote:\n> > --- a/src/backend/storage/ipc/procarray.c\n> > +++ b/src/backend/storage/ipc/procarray.c\n>\n> > @@ -2740,6 +2741,8 @@ GetRunningTransactionData(void)\n> > */\n> > for (index = 0; index < arrayP->numProcs; index++)\n> > {\n> > + int pgprocno = arrayP->pgprocnos[index];\n> > + PGPROC *proc = &allProcs[pgprocno];\n> > TransactionId xid;\n> >\n> > /* Fetch xid just once - see GetNewTransactionId */\n> > @@ -2760,6 +2763,13 @@ GetRunningTransactionData(void)\n> > if (TransactionIdPrecedes(xid, oldestRunningXid))\n> > oldestRunningXid = xid;\n> >\n> > + /*\n> > + * Also, update the oldest running xid within the current database.\n> > + */\n> > + if (proc->databaseId == MyDatabaseId &&\n> > + TransactionIdPrecedes(xid, oldestRunningXid))\n> > + oldestDatabaseRunningXid = xid;\n>\n> Shouldn't that be s/oldestRunningXid/oldestDatabaseRunningXid/?\n\nThank you for catching this.\n\n> While this isn't a hot path, I likely would test TransactionIdPrecedes()\n> before fetching pgprocno and PGPROC, to reduce wasted cache misses.\n\nAnd thanks for suggestion.\n\nThe patchset is attached. 
0001 implements\ns/oldestRunningXid/oldestDatabaseRunningXid/. 0002 implements cache\nmisses optimization.\n\nIf no objection, I'll backpatch 0001 and apply 0002 to the head.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 3 Jul 2024 00:31:48 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Wed, Jul 03, 2024 at 12:31:48AM +0300, Alexander Korotkov wrote:\n> On Mon, Jul 1, 2024 at 2:18 AM Noah Misch <noah@leadboat.com> wrote:\n> > Commit e85662d wrote:\n> > > --- a/src/backend/storage/ipc/procarray.c\n> > > +++ b/src/backend/storage/ipc/procarray.c\n> >\n> > > @@ -2740,6 +2741,8 @@ GetRunningTransactionData(void)\n> > > */\n> > > for (index = 0; index < arrayP->numProcs; index++)\n> > > {\n> > > + int pgprocno = arrayP->pgprocnos[index];\n> > > + PGPROC *proc = &allProcs[pgprocno];\n> > > TransactionId xid;\n> > >\n> > > /* Fetch xid just once - see GetNewTransactionId */\n> > > @@ -2760,6 +2763,13 @@ GetRunningTransactionData(void)\n> > > if (TransactionIdPrecedes(xid, oldestRunningXid))\n> > > oldestRunningXid = xid;\n> > >\n> > > + /*\n> > > + * Also, update the oldest running xid within the current database.\n> > > + */\n> > > + if (proc->databaseId == MyDatabaseId &&\n> > > + TransactionIdPrecedes(xid, oldestRunningXid))\n> > > + oldestDatabaseRunningXid = xid;\n> >\n> > Shouldn't that be s/oldestRunningXid/oldestDatabaseRunningXid/?\n> \n> Thank you for catching this.\n> \n> > While this isn't a hot path, I likely would test TransactionIdPrecedes()\n> > before fetching pgprocno and PGPROC, to reduce wasted cache misses.\n> \n> And thanks for suggestion.\n> \n> The patchset is attached. 0001 implements\n> s/oldestRunningXid/oldestDatabaseRunningXid/. 0002 implements cache\n> misses optimization.\n> \n> If no objection, I'll backpatch 0001 and apply 0002 to the head.\n\nLooks fine. 
I'd drop the comment update as saying the obvious, but keeping it\nis okay.\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:59:49 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "This causes an assertion failure when executed in a hot standby server:\n\n select * from pg_check_visible('pg_database');\n\nTRAP: failed Assert(\"!RecoveryInProgress()\"), File: \n\"../src/backend/storage/ipc/procarray.c\", Line: 2710, PID: 1142572\n\nGetStrictOldestNonRemovableTransactionId does this:\n\n> \tif (rel == NULL || rel->rd_rel->relisshared || RecoveryInProgress())\n> \t{\n> \t\t/* Shared relation: take into account all running xids */\n> \t\trunningTransactions = GetRunningTransactionData();\n> \t\tLWLockRelease(ProcArrayLock);\n> \t\tLWLockRelease(XidGenLock);\n> \t\treturn runningTransactions->oldestRunningXid;\n> \t}\n\nAnd GetRunningTransactionData() has this:\n\n> \tAssert(!RecoveryInProgress());\n\nSo it's easy to see that you will hit that assertion.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 21:39:11 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Tue, Aug 13, 2024 at 9:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> This causes an assertion failure when executed in a hot standby server:\n>\n> select * from pg_check_visible('pg_database');\n>\n> TRAP: failed Assert(\"!RecoveryInProgress()\"), File:\n> \"../src/backend/storage/ipc/procarray.c\", Line: 2710, PID: 1142572\n>\n> GetStrictOldestNonRemovableTransactionId does this:\n>\n> > if (rel == NULL || rel->rd_rel->relisshared || RecoveryInProgress())\n> > {\n> > /* Shared relation: take into account all running xids */\n> > runningTransactions = GetRunningTransactionData();\n> > LWLockRelease(ProcArrayLock);\n> > LWLockRelease(XidGenLock);\n> > return 
runningTransactions->oldestRunningXid;\n> > }\n>\n> And GetRunningTransactionData() has this:\n>\n> > Assert(!RecoveryInProgress());\n>\n> So it's easy to see that you will hit that assertion.\n\nOh, thank you!\nI'll fix this and add a test for recovery!\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Tue, 13 Aug 2024 22:15:52 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:15 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Tue, Aug 13, 2024 at 9:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > This causes an assertion failure when executed in a hot standby server:\n> >\n> > select * from pg_check_visible('pg_database');\n> >\n> > TRAP: failed Assert(\"!RecoveryInProgress()\"), File:\n> > \"../src/backend/storage/ipc/procarray.c\", Line: 2710, PID: 1142572\n> >\n> > GetStrictOldestNonRemovableTransactionId does this:\n> >\n> > > if (rel == NULL || rel->rd_rel->relisshared || RecoveryInProgress())\n> > > {\n> > > /* Shared relation: take into account all running xids */\n> > > runningTransactions = GetRunningTransactionData();\n> > > LWLockRelease(ProcArrayLock);\n> > > LWLockRelease(XidGenLock);\n> > > return runningTransactions->oldestRunningXid;\n> > > }\n> >\n> > And GetRunningTransactionData() has this:\n> >\n> > > Assert(!RecoveryInProgress());\n> >\n> > So it's easy to see that you will hit that assertion.\n>\n> Oh, thank you!\n> I'll fix this and add a test for recovery!\n\nAttached patch fixes the problem and adds the corresponding test. 
I\nwould appreciate if you take a look at it.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 14 Aug 2024 04:51:39 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On 14/08/2024 04:51, Alexander Korotkov wrote:\n> On Tue, Aug 13, 2024 at 10:15 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n>> On Tue, Aug 13, 2024 at 9:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>>\n>>> This causes an assertion failure when executed in a hot standby server:\n>>>\n>>> select * from pg_check_visible('pg_database');\n>>>\n>>> TRAP: failed Assert(\"!RecoveryInProgress()\"), File:\n>>> \"../src/backend/storage/ipc/procarray.c\", Line: 2710, PID: 1142572\n>>>\n>>> GetStrictOldestNonRemovableTransactionId does this:\n>>>\n>>>> if (rel == NULL || rel->rd_rel->relisshared || RecoveryInProgress())\n>>>> {\n>>>> /* Shared relation: take into account all running xids */\n>>>> runningTransactions = GetRunningTransactionData();\n>>>> LWLockRelease(ProcArrayLock);\n>>>> LWLockRelease(XidGenLock);\n>>>> return runningTransactions->oldestRunningXid;\n>>>> }\n>>>\n>>> And GetRunningTransactionData() has this:\n>>>\n>>>> Assert(!RecoveryInProgress());\n>>>\n>>> So it's easy to see that you will hit that assertion.\n>>\n>> Oh, thank you!\n>> I'll fix this and add a test for recovery!\n> \n> Attached patch fixes the problem and adds the corresponding test. I\n> would appreciate if you take a look at it.\n\nThe code changes seem fine. I think the \"Ignore KnownAssignedXids\" \ncomment above the function could be made more clear. It's not wrong, but \nI think it doesn't explain the reasoning very well:\n\n* We are now doing no effectively no checking in a standby, because we \nalways just use nextXid. 
It's better than nothing, I suppose it will \ncatch very broken cases where an XID is in the future, but that's all.\n\n* We *could* use KnownAssignedXids for shared catalogs, because with \nshared catalogs, the global horizon is used, not a database-aware one.\n\n* Then again, there might be rare corner cases that a transaction has \ncrashed in the primary without writing a commit/abort record, and hence \nit looks like it's still running in the standby but has already ended in \nthe primary. So I think it's good we ignore KnownAssignedXids for shared \ncatalogs anyway.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 10:20:52 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" }, { "msg_contents": "On Wed, Aug 14, 2024 at 10:20 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 14/08/2024 04:51, Alexander Korotkov wrote:\n> > On Tue, Aug 13, 2024 at 10:15 PM Alexander Korotkov\n> > <aekorotkov@gmail.com> wrote:\n> >> On Tue, Aug 13, 2024 at 9:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>>\n> >>> This causes an assertion failure when executed in a hot standby server:\n> >>>\n> >>> select * from pg_check_visible('pg_database');\n> >>>\n> >>> TRAP: failed Assert(\"!RecoveryInProgress()\"), File:\n> >>> \"../src/backend/storage/ipc/procarray.c\", Line: 2710, PID: 1142572\n> >>>\n> >>> GetStrictOldestNonRemovableTransactionId does this:\n> >>>\n> >>>> if (rel == NULL || rel->rd_rel->relisshared || RecoveryInProgress())\n> >>>> {\n> >>>> /* Shared relation: take into account all running xids */\n> >>>> runningTransactions = GetRunningTransactionData();\n> >>>> LWLockRelease(ProcArrayLock);\n> >>>> LWLockRelease(XidGenLock);\n> >>>> return runningTransactions->oldestRunningXid;\n> >>>> }\n> >>>\n> >>> And GetRunningTransactionData() has this:\n> >>>\n> >>>> Assert(!RecoveryInProgress());\n> >>>\n> >>> So it's easy to see that you 
will hit that assertion.\n> >>\n> >> Oh, thank you!\n> >> I'll fix this and add a test for recovery!\n> >\n> > Attached patch fixes the problem and adds the corresponding test. I\n> > would appreciate if you take a look at it.\n>\n> The code changes seem fine. I think the \"Ignore KnownAssignedXids\"\n> comment above the function could be made more clear. It's not wrong, but\n> I think it doesn't explain the reasoning very well:\n>\n> * We are now doing no effectively no checking in a standby, because we\n> always just use nextXid. It's better than nothing, I suppose it will\n> catch very broken cases where an XID is in the future, but that's all.\n>\n> * We *could* use KnownAssignedXids for shared catalogs, because with\n> shared catalogs, the global horizon is used, not a database-aware one.\n>\n> * Then again, there might be rare corner cases that a transaction has\n> crashed in the primary without writing a commit/abort record, and hence\n> it looks like it's still running in the standby but has already ended in\n> the primary. So I think it's good we ignore KnownAssignedXids for shared\n> catalogs anyway.\n\nThank you for the detailed explanation. I've updated the\nGetStrictOldestNonRemovableTransactionId() header comment accordingly.\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Thu, 15 Aug 2024 00:40:07 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: collect_corrupt_items_vacuum.patch" } ]
[ { "msg_contents": "Hi,\nHope you’re having a good day!\nI am Mariam Fahmy, A senior computer and systems engineering student at\nfaculty of engineering, AinShams university.\n\nI am interested in working with pgmoneta during GSoC 2022.\n\nHere is a link to the draft proposal for implementing storage engine in\npgmoneta:\nhttps://docs.google.com/document/d/1EbRzgfZCDWG6LCD0puil8bUt10aehT4skfZ1YFlbNKk/edit?usp=drivesdk\n\nI would be grateful if you can have a look at it and give me your feedback.\n\nRegards,\nMariam.", "msg_date": "Mon, 4 Apr 2022 15:16:24 +0200", "msg_from": "Mariam Fahmy <mariamfahmy66@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: pgmoneta, storage API" }, { "msg_contents": "Hi Mariam,\n\nOn 4/4/22 09:16, Mariam Fahmy wrote:\n> Hope you’re having a good day!\n> I am Mariam Fahmy, A senior computer and systems engineering student at\n> faculty of engineering, AinShams university.\n>\n> I am interested in working with pgmoneta during GSoC 2022.\n>\n> Here is a link to the draft proposal for implementing storage engine in\n> pgmoneta:\n> https://docs.google.com/document/d/1EbRzgfZCDWG6LCD0puil8bUt10aehT4skfZ1YFlbNKk/edit?usp=drivesdk\n>\n> I would be grateful if you can have a look at it and give me your feedback.\n\n\nThanks for your proposal to Google Summer of Code 2022 !\n\n\nWe'll follow up off-list to get this finalized.\n\n\nBest regards,\n  Jesper\n\n\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:23:58 -0400", "msg_from": "Jesper Pedersen 
<jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: GSoC: pgmoneta, storage API" } ]
[ { "msg_contents": "Hi,\n\n\ncan someone point out to me, why we don't consider pushdowns of the joinqual for these queries beyond the distinct on?\n\nWhen the qual matches the distinct clause, it should be possible to generate both parametrized and non parametrized subplans for the same query. The same should hold true for aggregates, if the group by clause matches. Is there any specific reason we aren't doing that already?\n\n\nRegards\n\nArne", "msg_date": "Mon, 4 Apr 2022 19:39:49 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "pushdown of joinquals beyond group by/distinct on" }, { "msg_contents": "On Tue, 5 Apr 2022 at 07:40, Arne Roland <A.Roland@index.de> wrote:\n> can someone point out to me, why we don't consider pushdowns of the joinqual for these queries beyond the distinct on?\n>\n> When the qual matches the distinct clause, it should be possible to generate both parametrized and non parametrized subplans for the same query. The same should hold true for aggregates, if the group by clause matches. Is there any specific reason we aren't doing that already?\n\nYour example shows that it's not always beneficial to pushdown such\nquals. In all cases where we currently consider qual pushdowns, we do\nso without any costing. This is done fairly early in planning before\nwe have any visibility as to if it would be useful or not.\n\nWith your example case, if we unconditionally rewrote the subquery to\nbe laterally joined and pushed the condition into the subquery then we\ncould slow down a bunch of cases as the planner would be forced into\nusing a parameterized nested loop.\n\nI don't really see how we could properly cost this short of performing\nthe join search twice. The join search is often the most costly part\nof planning. When you consider that there might be many quals to push\nand/or many subqueries to do this to, the number of times we'd need to\nperform the join search might explode fairly quickly. 
That wouldn't\nbe great for queries where there are many join-levels to search.\n\nIt might be possible if we could come up with some heuristics earlier\nin planning to determine if it's going to be a useful transformation\nto make. However, that seems fairly difficult in the absence of any\ncardinality estimations.\n\nDavid\n\n\n", "msg_date": "Tue, 5 Apr 2022 11:09:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pushdown of joinquals beyond group by/distinct on" } ]
[ { "msg_contents": "I spent some time thinking about a special case of evaluation of the row\nfilter and wrote a comment that might be useful (see the attachment). However\nnow I think that it's not perfect if the code really relies on the fact that\nvalue of an indexed column cannot be TOASTed due to size restrictions.\n\nI could hit two different error messages when trying activate TOAST on an\nindex column (in this case PG was build with 16kB pages), but still I think\nthe code is unnecessarily fragile if it relies on such errors:\n\n\nERROR: index row requires 8224 bytes, maximum size is 8191\n\nERROR: index row size 8048 exceeds btree version 4 maximum 5432 for index \"b_pkey\"\nDETAIL: Index row references tuple (0,3) in relation \"b\".\nHINT: Values larger than 1/3 of a buffer page cannot be indexed.\n\n\nNote that at least in ExtractReplicaIdentity() we do expect that an indexed\ncolumn value can be TOASTed.\n\n\t/*\n\t * If the tuple, which by here only contains indexed columns, still has\n\t * toasted columns, force them to be inlined. This is somewhat unlikely\n\t * since there's limits on the size of indexed columns, so we don't\n\t * duplicate toast_flatten_tuple()s functionality in the above loop over\n\t * the indexed columns, even if it would be more efficient.\n\t */\n\tif (HeapTupleHasExternal(key_tuple))\n\t{\n\t\tHeapTuple\toldtup = key_tuple;\n\n\t\tkey_tuple = toast_flatten_tuple(oldtup, desc);\n\t\theap_freetuple(oldtup);\n\t}\n\nDo I miss anything?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 05 Apr 2022 11:50:55 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Logical replication row filtering and TOAST" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> I spent some time thinking about a special case of evaluation of the row\n> filter and wrote a comment that might be useful (see the attachment). 
However\n> now I think that it's not perfect if the code really relies on the fact that\n> value of an indexed column cannot be TOASTed due to size restrictions.\n> \n> I could hit two different error messages when trying activate TOAST on an\n> index column (in this case PG was build with 16kB pages), but still I think\n> the code is unnecessarily fragile if it relies on such errors:\n> \n> \n> ERROR: index row requires 8224 bytes, maximum size is 8191\n> \n> ERROR: index row size 8048 exceeds btree version 4 maximum 5432 for index \"b_pkey\"\n> DETAIL: Index row references tuple (0,3) in relation \"b\".\n> HINT: Values larger than 1/3 of a buffer page cannot be indexed.\n> \n> \n> Note that at least in ExtractReplicaIdentity() we do expect that an indexed\n> column value can be TOASTed.\n> \n> \t/*\n> \t * If the tuple, which by here only contains indexed columns, still has\n> \t * toasted columns, force them to be inlined. This is somewhat unlikely\n> \t * since there's limits on the size of indexed columns, so we don't\n> \t * duplicate toast_flatten_tuple()s functionality in the above loop over\n> \t * the indexed columns, even if it would be more efficient.\n> \t */\n> \tif (HeapTupleHasExternal(key_tuple))\n> \t{\n> \t\tHeapTuple\toldtup = key_tuple;\n> \n> \t\tkey_tuple = toast_flatten_tuple(oldtup, desc);\n> \t\theap_freetuple(oldtup);\n> \t}\n> \n> Do I miss anything?\n\nWell, I see now that the point might be that, in heap_update(),\n\"id_has_external\" would be true the indexed value could be TOASTed, so that\nthe (flattened) old tuple would be WAL logged:\n\n\told_key_tuple = ExtractReplicaIdentity(relation, &oldtup,\n\t\t\t\t\t\t\t\t\t\t bms_overlap(modified_attrs, id_attrs) ||\n\t\t\t\t\t\t\t\t\t\t id_has_external,\n\t\t\t\t\t\t\t\t\t\t &old_key_copied);\n\nNevertheless, a comment in pgoutput_row_filter(), saying that TOASTed values\nare not expected if old_slot is NULL, might be useful.\n\n\n-- \nAntonin Houska\nWeb: 
https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 05 Apr 2022 12:22:16 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Logical replication row filtering and TOAST" }, { "msg_contents": "On Tue, Apr 5, 2022 at 3:52 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Antonin Houska <ah@cybertec.at> wrote:\n>\n> > I spent some time thinking about a special case of evaluation of the row\n> > filter and wrote a comment that might be useful (see the attachment). However\n> > now I think that it's not perfect if the code really relies on the fact that\n> > value of an indexed column cannot be TOASTed due to size restrictions.\n> >\n> > I could hit two different error messages when trying activate TOAST on an\n> > index column (in this case PG was build with 16kB pages), but still I think\n> > the code is unnecessarily fragile if it relies on such errors:\n> >\n> >\n> > ERROR: index row requires 8224 bytes, maximum size is 8191\n> >\n> > ERROR: index row size 8048 exceeds btree version 4 maximum 5432 for index \"b_pkey\"\n> > DETAIL: Index row references tuple (0,3) in relation \"b\".\n> > HINT: Values larger than 1/3 of a buffer page cannot be indexed.\n> >\n> >\n> > Note that at least in ExtractReplicaIdentity() we do expect that an indexed\n> > column value can be TOASTed.\n> >\n> > /*\n> > * If the tuple, which by here only contains indexed columns, still has\n> > * toasted columns, force them to be inlined. 
This is somewhat unlikely\n> > * since there's limits on the size of indexed columns, so we don't\n> > * duplicate toast_flatten_tuple()s functionality in the above loop over\n> > * the indexed columns, even if it would be more efficient.\n> > */\n> > if (HeapTupleHasExternal(key_tuple))\n> > {\n> > HeapTuple oldtup = key_tuple;\n> >\n> > key_tuple = toast_flatten_tuple(oldtup, desc);\n> > heap_freetuple(oldtup);\n> > }\n> >\n> > Do I miss anything?\n>\n> Well, I see now that the point might be that, in heap_update(),\n> \"id_has_external\" would be true the indexed value could be TOASTed, so that\n> the (flattened) old tuple would be WAL logged:\n>\n\nRight.\n\n>\n> Nevertheless, a comment in pgoutput_row_filter(), saying that TOASTed values\n> are not expected if old_slot is NULL, might be useful.\n>\n\nHow about something like the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 5 Apr 2022 16:02:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication row filtering and TOAST" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Nevertheless, a comment in pgoutput_row_filter(), saying that TOASTed values\n> > are not expected if old_slot is NULL, might be useful.\n> >\n> \n> How about something like the attached?\n\nYes, that'd be sufficient. 
Thanks.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 05 Apr 2022 13:59:30 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Logical replication row filtering and TOAST" }, { "msg_contents": "On Tue, Apr 5, 2022 at 8:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> How about something like the attached?\n>\n\nLGTM.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:50:50 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication row filtering and TOAST" }, { "msg_contents": "On Wed, Apr 6, 2022 at 7:21 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Tue, Apr 5, 2022 at 8:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > How about something like the attached?\n> >\n>\n> LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Apr 2022 10:03:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication row filtering and TOAST" } ]
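The pattern quoted above from ExtractReplicaIdentity() — flatten the key tuple only when an out-of-line datum is actually present — can be modeled outside the server. The sketch below is hypothetical: MockAttr, MockTuple, tuple_has_external() and flatten_tuple() are invented stand-ins for HeapTuple, HeapTupleHasExternal() and toast_flatten_tuple(), not PostgreSQL APIs.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for heap tuples with TOASTed datums; invented
 * for illustration only, not PostgreSQL structures. */
typedef struct MockAttr
{
    bool        is_external;    /* datum stored out of line (TOASTed)? */
    int         value;          /* the payload, once brought in line */
} MockAttr;

typedef struct MockTuple
{
    int         natts;
    MockAttr    atts[4];
} MockTuple;

/* Analogue of HeapTupleHasExternal(): any out-of-line datum left? */
static bool
tuple_has_external(const MockTuple *tup)
{
    for (int i = 0; i < tup->natts; i++)
        if (tup->atts[i].is_external)
            return true;
    return false;
}

/* Analogue of toast_flatten_tuple(): pull every external datum in line. */
static MockTuple
flatten_tuple(MockTuple tup)
{
    for (int i = 0; i < tup.natts; i++)
        tup.atts[i].is_external = false;
    return tup;
}

/* The shape of the quoted ExtractReplicaIdentity() fragment: flatten the
 * key tuple only when it still carries an external datum. */
static MockTuple
extract_key_tuple(MockTuple key_tuple)
{
    if (tuple_has_external(&key_tuple))
        key_tuple = flatten_tuple(key_tuple);
    return key_tuple;
}
```

Passing tuples by value here sidesteps the heap_freetuple() bookkeeping the real code does; the point is only the has-external check guarding the flatten step.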
[ { "msg_contents": "Hi:\n\nIn my recent work, I want to check if the xmin for all the tuples is\nCurrentTransactioniId,\nthen I found we can improve the fastpath for\nTransactionIdIsCurrentTransactionId\nlike below, would it be safe? This would be helpful if we have lots of\nsub transactionId.\n\ndiff --git a/src/backend/access/transam/xact.c\nb/src/backend/access/transam/xact.c\nindex 3596a7d7345..e4721a6cb39 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -935,8 +935,12 @@ TransactionIdIsCurrentTransactionId(TransactionId xid)\n * Likewise, InvalidTransactionId and FrozenTransactionId are\ncertainly\n * not my transaction ID, so we can just return \"false\" immediately\nfor\n * any non-normal XID.\n+ *\n+ * And any Transaction IDs precede TransactionXmin are certainly not\n+ * my transaction ID as well.\n */\n- if (!TransactionIdIsNormal(xid))\n+\n+ if (TransactionIdPrecedes(xid, TransactionXmin))\n return false;\n\n if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 5 Apr 2022 21:11:48 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "A fastpath for TransactionIdIsCurrentTransactionId" } ]
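Why is the proposed `TransactionIdPrecedes(xid, TransactionXmin)` test at least as strong as the `!TransactionIdIsNormal(xid)` check it replaces? Because the special XIDs sort before every normal XID under the comparison rule. The sketch below re-implements that rule from PostgreSQL's transam.c with plain C types so it compiles outside the server; treat it as an illustration of the semantics, not as server code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define InvalidTransactionId     ((TransactionId) 0)
#define FrozenTransactionId      ((TransactionId) 2)
#define FirstNormalTransactionId ((TransactionId) 3)
#define TransactionIdIsNormal(x) ((x) >= FirstNormalTransactionId)

/*
 * Mirrors transam.c: special XIDs compare in plain unsigned order, so
 * they precede every normal XID; two normal XIDs compare in the circular
 * (modulo-2^32) XID space, where "precedes" means the signed difference
 * is negative.
 */
static bool
xid_precedes(TransactionId id1, TransactionId id2)
{
    if (!TransactionIdIsNormal(id1) || !TransactionIdIsNormal(id2))
        return id1 < id2;
    return (int32_t) (id1 - id2) < 0;
}
```

Since TransactionXmin is always a normal XID, `xid_precedes(InvalidTransactionId, xmin)` and `xid_precedes(FrozenTransactionId, xmin)` are both true, so the proposed fastpath still returns false for non-normal XIDs, exactly as the old check did.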
[ { "msg_contents": "Hi,\n\nI wanted to have a WAL record spanning multiple WAL files of size, say\n16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\nwould help here. Please let me know if there's any way to generate\nsuch large WAL records.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 5 Apr 2022 18:42:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "On 2022-Apr-05, Bharath Rupireddy wrote:\n\n> Hi,\n> \n> I wanted to have a WAL record spanning multiple WAL files of size, say\n> 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n> would help here. Please let me know if there's any way to generate\n> such large WAL records.\n\nIt's easier to use pg_logical_emit_message().\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:18:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "On Tue, 5 Apr 2022 at 15:13, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I wanted to have a WAL record spanning multiple WAL files of size, say\n> 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n> would help here. 
Please let me know if there's any way to generate\n> such large WAL records.\n\nThe function pg_logical_emit_message (callable with REPLICATION\npermissions from SQL) allows you to emit records of arbitrary length <\n2GB - 2B (for now), which should be enough.\n\nOther than that, you could try to generate 16MB of subtransaction IDs;\nthe commit record would contain all subxids and thus be at least 16MB\nin size.\n\n-Matthias\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:39:04 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 5, 2022 at 9:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Apr-05, Bharath Rupireddy wrote:\n>\n> > Hi,\n> >\n> > I wanted to have a WAL record spanning multiple WAL files of size, say\n> > 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n> > would help here. Please let me know if there's any way to generate\n> > such large WAL records.\n>\n> It's easier to use pg_logical_emit_message().\n>\n>\nNot sure I understand the question correctly here. What if I use the below\ncode\nwhere the len might be very large? like 64MB.\n\n XLogBeginInsert();\nXLogRegisterData((char *)&xl_append, sizeof(xl_cstore_append));\nXLogRegisterData((char *)data, len);\n\nXLogInsert(..);\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 5 Apr 2022 22:09:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 10:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> > I wanted to have a WAL record spanning multiple WAL files of size, say\n>> > 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n>> > would help here. Please let me know if there's any way to generate\n>> > such large WAL records.\n>>\n>> It's easier to use pg_logical_emit_message().\n>\n> Not sure I understand the question correctly here. What if I use the below code\n> where the len might be very large? like 64MB.\n>\n> XLogBeginInsert();\n> XLogRegisterData((char *)&xl_append, sizeof(xl_cstore_append));\n> XLogRegisterData((char *)data, len);\n>\n> XLogInsert(..);\n\nWell, that's how to do it from C. And pg_logical_emit_message() is how\nto do it from SQL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 12:40:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "On Wed, Apr 6, 2022 at 12:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 5, 2022 at 10:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >> > I wanted to have a WAL record spanning multiple WAL files of size, say\n> >> > 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n> >> > would help here. Please let me know if there's any way to generate\n> >> > such large WAL records.\n> >>\n> >> It's easier to use pg_logical_emit_message().\n> >\n> > Not sure I understand the question correctly here. 
What if I use the\n> below code\n> > where the len might be very large? like 64MB.\n> >\n> > XLogBeginInsert();\n> > XLogRegisterData((char *)&xl_append, sizeof(xl_cstore_append));\n> > XLogRegisterData((char *)data, len);\n> >\n> > XLogInsert(..);\n>\n> Well, that's how to do it from C. And pg_logical_emit_message() is how\n> to do it from SQL.\n>\n>\nOK, Thanks for your confirmation!\n\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Wed, 6 Apr 2022 09:26:06 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" }, { "msg_contents": "On Wed, Apr 6, 2022 at 6:56 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 12:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Tue, Apr 5, 2022 at 10:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >> > I wanted to have a WAL record spanning multiple WAL files of size, say\n>> >> > 16MB. I'm wondering if the Full Page Images (FPIs) of a TOAST table\n>> >> > would help here. Please let me know if there's any way to generate\n>> >> > such large WAL records.\n>> >>\n>> >> It's easier to use pg_logical_emit_message().\n>> >\n>> > Not sure I understand the question correctly here. What if I use the below code\n>> > where the len might be very large? like 64MB.\n>> >\n>> > XLogBeginInsert();\n>> > XLogRegisterData((char *)&xl_append, sizeof(xl_cstore_append));\n>> > XLogRegisterData((char *)data, len);\n>> >\n>> > XLogInsert(..);\n>>\n>> Well, that's how to do it from C. And pg_logical_emit_message() is how\n>> to do it from SQL.\n>>\n>\n> OK, Thanks for your confirmation!\n\nThanks all for your responses. Yes, using pg_logical_emit_message() is\neasy, but it might come in the way of logical decoding as those\nmessages get decoded.\n\nPS: I wrote a small extension (just for fun) called pg_synthesize_wal\n[1] implementing functions to generate huge WAL records. I used the\n\"Custom WAL Resource Managers\" feature [2] that got committed to PG15.\n\n[1] https://github.com/BRupireddy/pg_synthesize_wal\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 22 Apr 2022 19:32:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to generate a WAL record spanning multiple WAL files?" } ]
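As a back-of-the-envelope check on "spanning multiple WAL files": which segment files a record touches is plain division, much like xlog's XLByteToSeg macro. The sketch below is standalone arithmetic only — it assumes the default 16MB segment size (configurable at initdb time in real servers), ignores page and record header overhead (which can only push a record into one more file), and the function names are mine, not xlog internals.

```c
#include <assert.h>
#include <stdint.h>

/* Default WAL segment size, 16MB. */
#define WAL_SEGMENT_SIZE ((uint64_t) 16 * 1024 * 1024)

/* Rough analogue of XLByteToSeg(): which segment file holds this byte. */
static uint64_t
lsn_to_segno(uint64_t lsn)
{
    return lsn / WAL_SEGMENT_SIZE;
}

/* Number of segment files touched by a record filling [start, start+len). */
static uint64_t
record_segment_span(uint64_t start, uint64_t len)
{
    return lsn_to_segno(start + len - 1) - lsn_to_segno(start) + 1;
}
```

So the 64MB payload discussed above, if it happens to start 8MB into a segment, lands in five consecutive 16MB files.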
[ { "msg_contents": "Hi,\n\nI'm thinking if there's a way in core postgres to achieve $subject. In\nreality, the sync/async standbys can either be closer/farther (which\nmeans sync/async standbys can receive WAL at different times) to\nprimary, especially in cloud HA environments with primary in one\nAvailability Zone(AZ)/Region and standbys in different AZs/Regions.\n$subject may not be possible on dev systems (say, for testing some HA\nfeatures) unless we can inject a delay in WAL senders before sending\nWAL.\n\nHow about having two developer-only GUCs {async,\nsync}_wal_sender_delay? When set, the async and sync WAL senders will\ndelay sending WAL by {async, sync}_wal_sender_delay\nmilliseconds/seconds? Although, I can't think of any immediate use, it\nwill be useful someday IMO, say for features like [1], if it gets in.\nWith this set of GUCs, one can even add core regression tests for HA\nfeatures.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACWCj60g6TzYMbEO07ZhnBGbdCveCrD413udqbRM0O59RA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 5 Apr 2022 21:23:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 9:23 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I'm thinking if there's a way in core postgres to achieve $subject. 
In\n> reality, the sync/async standbys can either be closer/farther (which\n> means sync/async standbys can receive WAL at different times) to\n> primary, especially in cloud HA environments with primary in one\n> Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n> $subject may not be possible on dev systems (say, for testing some HA\n> features) unless we can inject a delay in WAL senders before sending\n> WAL.\n>\n> How about having two developer-only GUCs {async,\n> sync}_wal_sender_delay? When set, the async and sync WAL senders will\n> delay sending WAL by {async, sync}_wal_sender_delay\n> milliseconds/seconds? Although, I can't think of any immediate use, it\n> will be useful someday IMO, say for features like [1], if it gets in.\n> With this set of GUCs, one can even add core regression tests for HA\n> features.\n>\n> Thoughts?\n\nI think this is a common problem, people run into. Once way to\nsimulate network delay is what you suggest, yes. But I was wondering\nif there are tools/libraries that can help us to do that. Googling\ngives OS specific tools but nothing like a C or perl library which can\nbe used for this purpose.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 6 Apr 2022 16:30:40 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" }, { "msg_contents": "On Wed, Apr 6, 2022 at 4:30 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Apr 5, 2022 at 9:23 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I'm thinking if there's a way in core postgres to achieve $subject. 
In\n> > reality, the sync/async standbys can either be closer/farther (which\n> > means sync/async standbys can receive WAL at different times) to\n> > primary, especially in cloud HA environments with primary in one\n> > Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n> > $subject may not be possible on dev systems (say, for testing some HA\n> > features) unless we can inject a delay in WAL senders before sending\n> > WAL.\n> >\n> > How about having two developer-only GUCs {async,\n> > sync}_wal_sender_delay? When set, the async and sync WAL senders will\n> > delay sending WAL by {async, sync}_wal_sender_delay\n> > milliseconds/seconds? Although, I can't think of any immediate use, it\n> > will be useful someday IMO, say for features like [1], if it gets in.\n> > With this set of GUCs, one can even add core regression tests for HA\n> > features.\n> >\n> > Thoughts?\n>\n> I think this is a common problem, people run into. Once way to\n> simulate network delay is what you suggest, yes. But I was wondering\n> if there are tools/libraries that can help us to do that. Googling\n> gives OS specific tools but nothing like a C or perl library which can\n> be used for this purpose.\n\nThanks. IMO, non-postgres tools (not sure if they exist, if at all\nthey exist) to simulate network delays may not be reliable and usable\neasily, say, for adding some TAP tests for HA features. Especially in\nthe cloud-world usage of those external tools may not even be\npossible. With the developer-only GUCs as being proposed here in this\nthread, it's pretty much easy to simulate what we want, but only the\nextra caution is to not let others (probably non-superusers) set and\nmisuse these developer-only GUCs. 
I think that's even true for all the\nexisting developer-only GUCs.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Apr 2022 19:14:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" }, { "msg_contents": "On Fri, Apr 8, 2022 at 6:44 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Apr 6, 2022 at 4:30 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Apr 5, 2022 at 9:23 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I'm thinking if there's a way in core postgres to achieve $subject. In\n> > > reality, the sync/async standbys can either be closer/farther (which\n> > > means sync/async standbys can receive WAL at different times) to\n> > > primary, especially in cloud HA environments with primary in one\n> > > Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n> > > $subject may not be possible on dev systems (say, for testing some HA\n> > > features) unless we can inject a delay in WAL senders before sending\n> > > WAL.\n>\n\nSimulation will be helpful even for end customers to simulate faults in the\nproduction environments during availability zone/disaster recovery drills.\n\n\n\n> > >\n> > > How about having two developer-only GUCs {async,\n> > > sync}_wal_sender_delay? When set, the async and sync WAL senders will\n> > > delay sending WAL by {async, sync}_wal_sender_delay\n> > > milliseconds/seconds? 
Although, I can't think of any immediate use, it\n> > > will be useful someday IMO, say for features like [1], if it gets in.\n> > > With this set of GUCs, one can even add core regression tests for HA\n> > > features.\n\n\nI would suggest doing this at the slot level, instead of two GUCs that\ncontrol the behavior of all the slots (physical/logical). Something like\n\"pg_suspend_replication_slot and pg_Resume_replication_slot\"?\nAlternatively a GUC on the standby side instead of primary so that the wal\nreceiver stops responding to the wal sender? This helps achieve the same as\nabove but the granularity is now at individual replica level.\n\nThanks,\nSatya\n", "msg_date": "Fri, 8 Apr 2022 09:52:23 -0700", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" }, { "msg_contents": "On Fri, Apr 8, 2022 at 10:22 PM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n>> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> > >\n>> > > Hi,\n>> > >\n>> > > I'm thinking if there's a way in core postgres to achieve $subject. In\n>> > > reality, the sync/async standbys can either be closer/farther (which\n>> > > means sync/async standbys can receive WAL at different times) to\n>> > > primary, especially in cloud HA environments with primary in one\n>> > > Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n>> > > $subject may not be possible on dev systems (say, for testing some HA\n>> > > features) unless we can inject a delay in WAL senders before sending\n>> > > WAL.\n>\n> Simulation will be helpful even for end customers to simulate faults in the production environments during availability zone/disaster recovery drills.\n\nRight.\n\n>> > > How about having two developer-only GUCs {async,\n>> > > sync}_wal_sender_delay? When set, the async and sync WAL senders will\n>> > > delay sending WAL by {async, sync}_wal_sender_delay\n>> > > milliseconds/seconds? Although, I can't think of any immediate use, it\n>> > > will be useful someday IMO, say for features like [1], if it gets in.\n>> > > With this set of GUCs, one can even add core regression tests for HA\n>> > > features.\n>\n> I would suggest doing this at the slot level, instead of two GUCs that control the behavior of all the slots (physical/logical). Something like \"pg_suspend_replication_slot and pg_Resume_replication_slot\"?\n\nHaving the control at the replication slot level seems reasonable\ninstead of at the WAL sender level. As there can be many slots on the\nprimary, we must have a way to specify which slots need to be delayed\nand by how much time before sending WAL. If GUCs, they must be of list\ntypes and I'm not sure that would come out well.\n\nInstead, two (superuser-only/users with replication role) functions\nsuch as pg_replication_slot_set_delay(slot_name,\ndelay_in_milliseconds)/pg_replication_slot_unset_delay(slot_name).\npg_replication_slot_set_delay will set ReplicationSlot->delay and the\nWAL sender checks MyReplicationSlot->delay > 0 and waits before\nsending WAL. pg_replication_slot_unset_delay will set\nReplicationSlot->delay to 0, or instead of\npg_replication_slot_unset_delay, the\npg_replication_slot_set_delay(slot_name, 0) can be used, this way only\nsingle function.\n\nIf the users want a standby to receive WAL with a delay, they can use\npg_replication_slot_set_delay after creating the replication slot.\n\nThoughts?\n\n> Alternatively a GUC on the standby side instead of primary so that the wal receiver stops responding to the wal sender?\n\nI think we have wal_receiver_status_interval GUC on WAL receiver that\nachieves the above i.e. 
not responding to the primary at all, one can\nset wal_receiver_status_interval to, say, 1day.\n\n[1]\n {\n {\"wal_receiver_status_interval\", PGC_SIGHUP, REPLICATION_STANDBY,\n gettext_noop(\"Sets the maximum interval between WAL\nreceiver status reports to the sending server.\"),\n NULL,\n GUC_UNIT_S\n },\n &wal_receiver_status_interval,\n 10, 0, INT_MAX / 1000,\n NULL, NULL, NULL\n },\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 9 Apr 2022 14:38:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" }, { "msg_contents": "On Sat, Apr 09, 2022 at 02:38:50PM +0530, Bharath Rupireddy wrote:\n> On Fri, Apr 8, 2022 at 10:22 PM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> >> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> > >\n> >> > > Hi,\n> >> > >\n> >> > > I'm thinking if there's a way in core postgres to achieve $subject. In\n> >> > > reality, the sync/async standbys can either be closer/farther (which\n> >> > > means sync/async standbys can receive WAL at different times) to\n> >> > > primary, especially in cloud HA environments with primary in one\n> >> > > Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n> >> > > $subject may not be possible on dev systems (say, for testing some HA\n> >> > > features) unless we can inject a delay in WAL senders before sending\n> >> > > WAL.\n> >\n> > Simulation will be helpful even for end customers to simulate faults in the\n> > production environments during availability zone/disaster recovery drills.\n>\n> Right.\n\nI'm not sure that's actually helpful. If you want to do some realistic testing\nyou need to fully simulate various network incidents and only delaying postgres\nreplication is never going to be close to that. 
You should instead rely on\ntool like tc, which can do much more than what $subject could ever do, and do\nthat for all your HA stack. At the very least you don't want to validate that\nyour setup is working as excpected by just simulating a faulty postgres\nreplication connection but still having all your clients and HA agent not\nhaving any network issue at all.\n\n\n", "msg_date": "Sat, 9 Apr 2022 21:08:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther\n (network distance) to primary in core postgres?" }, { "msg_contents": "On Sat, Apr 9, 2022 at 6:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Apr 09, 2022 at 02:38:50PM +0530, Bharath Rupireddy wrote:\n> > On Fri, Apr 8, 2022 at 10:22 PM SATYANARAYANA NARLAPURAM\n> > <satyanarlapuram@gmail.com> wrote:\n> > >\n> > >> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >> > >\n> > >> > > Hi,\n> > >> > >\n> > >> > > I'm thinking if there's a way in core postgres to achieve $subject. In\n> > >> > > reality, the sync/async standbys can either be closer/farther (which\n> > >> > > means sync/async standbys can receive WAL at different times) to\n> > >> > > primary, especially in cloud HA environments with primary in one\n> > >> > > Availability Zone(AZ)/Region and standbys in different AZs/Regions.\n> > >> > > $subject may not be possible on dev systems (say, for testing some HA\n> > >> > > features) unless we can inject a delay in WAL senders before sending\n> > >> > > WAL.\n> > >\n> > > Simulation will be helpful even for end customers to simulate faults in the\n> > > production environments during availability zone/disaster recovery drills.\n> >\n> > Right.\n>\n> I'm not sure that's actually helpful. If you want to do some realistic testing\n> you need to fully simulate various network incidents and only delaying postgres\n> replication is never going to be close to that. 
You should instead rely on\n> tool like tc, which can do much more than what $subject could ever do, and do\n> that for all your HA stack. At the very least you don't want to validate that\n> your setup is working as excpected by just simulating a faulty postgres\n> replication connection but still having all your clients and HA agent not\n> having any network issue at all.\n\nAgree that the external networking tools and commands can be used.\nIMHO, not everyone is familiar with those tools and the tools may not\nbe portable and reliable all the time. And developers may not be able\nto use those tools to test some of the HA related features (which may\nrequire sync and async standbys being closer/farther to the primary)\nthat I or some other postgres HA solution providers may develop.\nHaving a reliable way within the core would actually help.\n\nUpon thinking further, how about we have hooks in WAL sender code\n(perhaps with replication slot info that it manages and some other\ninfo) and one can implement an extension of their choice (similar to\nauth_delay and ClientAuthentication_hook)?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 22 Apr 2022 19:53:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to simulate sync/async standbys being closer/farther (network\n distance) to primary in core postgres?" } ]
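The tc-based approach recommended in the thread above can be sketched as follows. This is only an illustrative fragment: the interface name and the delay/loss figures are placeholder assumptions, and the commands require root.

```shell
# Placeholder interface; adjust to the NIC carrying replication traffic.
IFACE=eth0
# netem shapes *all* traffic on the interface, so clients, HA agents and
# replication connections are affected alike -- the point made above.
tc qdisc add dev "$IFACE" root netem delay 100ms 20ms loss 0.5%
# Inspect the queueing discipline, then remove it once the drill is over.
tc qdisc show dev "$IFACE"
tc qdisc del dev "$IFACE" root
```

Run against a test standby's interface, this approximates a "farther" standby without any server-side changes.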
[ { "msg_contents": "Hello Hackers,\n\nReporting a bug with the new MERGE statement. Tested against 75edb919613ee835e7680e40137e494c7856bcf9.\n\npsql output as follows:\n\n...\npsql:merge.sql:33: ERROR: variable not found in subplan target lists\nROLLBACK\n[local] joe@joe=# \\errverbose\nERROR: XX000: variable not found in subplan target lists\nLOCATION: fix_join_expr_mutator, setrefs.c:2800\n\nStack trace:\n\nfix_join_expr_mutator setrefs.c:2800\nexpression_tree_mutator nodeFuncs.c:3348\nfix_join_expr_mutator setrefs.c:2853\nexpression_tree_mutator nodeFuncs.c:2992\nfix_join_expr_mutator setrefs.c:2853\nexpression_tree_mutator nodeFuncs.c:3348\nfix_join_expr_mutator setrefs.c:2853\nfix_join_expr setrefs.c:2753\nset_plan_refs setrefs.c:1085\nset_plan_references setrefs.c:315\nstandard_planner planner.c:498\nplanner planner.c:277\npg_plan_query postgres.c:883\npg_plan_queries postgres.c:975\nexec_simple_query postgres.c:1169\nPostgresMain postgres.c:4520\nBackendRun postmaster.c:4593\nBackendStartup postmaster.c:4321\nServerLoop postmaster.c:1801\nPostmasterMain postmaster.c:1473\nmain main.c:202\n__libc_start_main 0x00007fc4ccc0b1e2\n_start 0x000000000048804e\n\nReproducer script:\n\nBEGIN;\nDROP TABLE IF EXISTS item, incoming, source CASCADE;\n\nCREATE TABLE item\n (order_id INTEGER NOT NULL,\n item_id INTEGER NOT NULL,\n quantity INTEGER NOT NULL,\n price NUMERIC NOT NULL,\n CONSTRAINT pk_item PRIMARY KEY (order_id, item_id));\n\nINSERT INTO item VALUES (100, 1, 4, 100.00), (100, 2, 9, 199.00);\n\nCREATE TABLE incoming (order_id, item_id, quantity, price)\n AS (VALUES (100, 1, 4, 100.00), (100, 3, 1, 200.00));\n\nCREATE TABLE source (order_id, item_id, quantity, price) AS\n (SELECT order_id, item_id, incoming.quantity, incoming.price\n FROM item LEFT JOIN incoming USING (order_id, item_id));\n\nMERGE INTO item a\nUSING source b\n ON (a.order_id, a.item_id) =\n (b.order_id, b.item_id)\n WHEN NOT MATCHED\n THEN INSERT (order_id, item_id, quantity, price)\n VALUES 
(order_id, item_id, quantity, price)\n WHEN MATCHED\n AND a.* IS DISTINCT FROM b.*\n THEN UPDATE SET (quantity, price) = (b.quantity, b.price)\n WHEN MATCHED\n AND (b.quantity IS NULL AND b.price IS NULL)\n THEN DELETE;\nCOMMIT;\n\nIt seems related to the use of a.* and b.*\n\nSorry I can't be more specific. Error manifests when planning occurs and that is well outside of my code base knowledge.\n\nHope this helps.\n\nCheers,\n-Joe\n\n\n", "msg_date": "Tue, 05 Apr 2022 23:17:30 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "MERGE bug report" }, { "msg_contents": "On Tue, Apr 5, 2022 at 3:18 PM Joe Wildish <joe@lateraljoin.com> wrote:\n\n> Hello Hackers,\n>\n> Reporting a bug with the new MERGE statement. Tested against\n> 75edb919613ee835e7680e40137e494c7856bcf9.\n>\n> psql output as follows:\n>\n> ...\n> psql:merge.sql:33: ERROR: variable not found in subplan target lists\n> ROLLBACK\n> [local] joe@joe=# \\errverbose\n> ERROR: XX000: variable not found in subplan target lists\n> LOCATION: fix_join_expr_mutator, setrefs.c:2800\n>\n> Stack trace:\n>\n> fix_join_expr_mutator setrefs.c:2800\n> expression_tree_mutator nodeFuncs.c:3348\n> fix_join_expr_mutator setrefs.c:2853\n> expression_tree_mutator nodeFuncs.c:2992\n> fix_join_expr_mutator setrefs.c:2853\n> expression_tree_mutator nodeFuncs.c:3348\n> fix_join_expr_mutator setrefs.c:2853\n> fix_join_expr setrefs.c:2753\n> set_plan_refs setrefs.c:1085\n> set_plan_references setrefs.c:315\n> standard_planner planner.c:498\n> planner planner.c:277\n> pg_plan_query postgres.c:883\n> pg_plan_queries postgres.c:975\n> exec_simple_query postgres.c:1169\n> PostgresMain postgres.c:4520\n> BackendRun postmaster.c:4593\n> BackendStartup postmaster.c:4321\n> ServerLoop postmaster.c:1801\n> PostmasterMain postmaster.c:1473\n> main main.c:202\n> __libc_start_main 0x00007fc4ccc0b1e2\n> _start 0x000000000048804e\n>\n> Reproducer script:\n>\n> BEGIN;\n> DROP TABLE IF EXISTS item, 
incoming, source CASCADE;\n>\n> CREATE TABLE item\n> (order_id INTEGER NOT NULL,\n> item_id INTEGER NOT NULL,\n> quantity INTEGER NOT NULL,\n> price NUMERIC NOT NULL,\n> CONSTRAINT pk_item PRIMARY KEY (order_id, item_id));\n>\n> INSERT INTO item VALUES (100, 1, 4, 100.00), (100, 2, 9, 199.00);\n>\n> CREATE TABLE incoming (order_id, item_id, quantity, price)\n> AS (VALUES (100, 1, 4, 100.00), (100, 3, 1, 200.00));\n>\n> CREATE TABLE source (order_id, item_id, quantity, price) AS\n> (SELECT order_id, item_id, incoming.quantity, incoming.price\n> FROM item LEFT JOIN incoming USING (order_id, item_id));\n>\n> MERGE INTO item a\n> USING source b\n> ON (a.order_id, a.item_id) =\n> (b.order_id, b.item_id)\n> WHEN NOT MATCHED\n> THEN INSERT (order_id, item_id, quantity, price)\n> VALUES (order_id, item_id, quantity, price)\n> WHEN MATCHED\n> AND a.* IS DISTINCT FROM b.*\n> THEN UPDATE SET (quantity, price) = (b.quantity, b.price)\n> WHEN MATCHED\n> AND (b.quantity IS NULL AND b.price IS NULL)\n> THEN DELETE;\n> COMMIT;\n>\n> It seems related to the use of a.* and b.*\n>\n> Sorry I can't be more specific. Error manifests when planning occurs and that is well outside of my code base knowledge.\n>\n> Hope this helps.\n>\n> Cheers,\n> -Joe\n>\nHi,\nIt seems all the calls to fix_join_expr_mutator() are within setrefs.c\n\nI haven't found where in nodeFuncs.c fix_join_expr_mutator is called.\n\nI am on commit 75edb919613ee835e7680e40137e494c7856bcf9 .\n\n", "msg_date": "Tue, 5 Apr 2022 15:35:27 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" }, { "msg_contents": "On Tue, Apr 5, 2022 at 3:35 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, Apr 5, 2022 at 3:18 PM Joe Wildish <joe@lateraljoin.com> wrote:\n>\n>> Hello Hackers,\n>>\n>> Reporting a bug with the new MERGE statement. Tested against\n>> 75edb919613ee835e7680e40137e494c7856bcf9.\n>>\n>> psql output as follows:\n>>\n>> ...\n>> psql:merge.sql:33: ERROR: variable not found in subplan target lists\n>> ROLLBACK\n>> [local] joe@joe=# \\errverbose\n>> ERROR: XX000: variable not found in subplan target lists\n>> LOCATION: fix_join_expr_mutator, setrefs.c:2800\n>>\n>> Stack trace:\n>>\n>> fix_join_expr_mutator setrefs.c:2800\n>> expression_tree_mutator nodeFuncs.c:3348\n>> fix_join_expr_mutator setrefs.c:2853\n>> expression_tree_mutator nodeFuncs.c:2992\n>> fix_join_expr_mutator setrefs.c:2853\n>> expression_tree_mutator nodeFuncs.c:3348\n>> fix_join_expr_mutator setrefs.c:2853\n>> fix_join_expr setrefs.c:2753\n>> set_plan_refs setrefs.c:1085\n>> set_plan_references setrefs.c:315\n>> standard_planner planner.c:498\n>> planner planner.c:277\n>> pg_plan_query postgres.c:883\n>> pg_plan_queries postgres.c:975\n>> exec_simple_query postgres.c:1169\n>> PostgresMain postgres.c:4520\n>> BackendRun 
postmaster.c:4593\n>> BackendStartup postmaster.c:4321\n>> ServerLoop postmaster.c:1801\n>> PostmasterMain postmaster.c:1473\n>> main main.c:202\n>> __libc_start_main 0x00007fc4ccc0b1e2\n>> _start 0x000000000048804e\n>>\n>> Reproducer script:\n>>\n>> BEGIN;\n>> DROP TABLE IF EXISTS item, incoming, source CASCADE;\n>>\n>> CREATE TABLE item\n>> (order_id INTEGER NOT NULL,\n>> item_id INTEGER NOT NULL,\n>> quantity INTEGER NOT NULL,\n>> price NUMERIC NOT NULL,\n>> CONSTRAINT pk_item PRIMARY KEY (order_id, item_id));\n>>\n>> INSERT INTO item VALUES (100, 1, 4, 100.00), (100, 2, 9, 199.00);\n>>\n>> CREATE TABLE incoming (order_id, item_id, quantity, price)\n>> AS (VALUES (100, 1, 4, 100.00), (100, 3, 1, 200.00));\n>>\n>> CREATE TABLE source (order_id, item_id, quantity, price) AS\n>> (SELECT order_id, item_id, incoming.quantity, incoming.price\n>> FROM item LEFT JOIN incoming USING (order_id, item_id));\n>>\n>> MERGE INTO item a\n>> USING source b\n>> ON (a.order_id, a.item_id) =\n>> (b.order_id, b.item_id)\n>> WHEN NOT MATCHED\n>> THEN INSERT (order_id, item_id, quantity, price)\n>> VALUES (order_id, item_id, quantity, price)\n>> WHEN MATCHED\n>> AND a.* IS DISTINCT FROM b.*\n>> THEN UPDATE SET (quantity, price) = (b.quantity, b.price)\n>> WHEN MATCHED\n>> AND (b.quantity IS NULL AND b.price IS NULL)\n>> THEN DELETE;\n>> COMMIT;\n>>\n>> It seems related to the use of a.* and b.*\n>>\n>> Sorry I can't be more specific. 
Error manifests when planning occurs and\n>> that is well outside of my code base knowledge.\n>>\n>> Hope this helps.\n>>\n>> Cheers,\n>> -Joe\n>>\n> Hi,\n> It seems all the calls to fix_join_expr_mutator() are within setrefs.c\n>\n> I haven't found where in nodeFuncs.c fix_join_expr_mutator is called.\n>\n> I am on commit 75edb919613ee835e7680e40137e494c7856bcf9 .\n>\n\nPardon - I typed too fast:\n\nThe call to fix_join_expr_mutator() is on this line (3348):\n\n resultlist = lappend(resultlist,\n mutator((Node *) lfirst(temp),\n context));\n\n", "msg_date": "Tue, 5 Apr 2022 15:40:21 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" }, { "msg_contents": "On Wed, Apr 6, 2022 at 6:18 AM Joe Wildish <joe@lateraljoin.com> wrote:\n\n> Hello Hackers,\n>\n> Reporting a bug with the new MERGE statement. 
Tested against\n> 75edb919613ee835e7680e40137e494c7856bcf9.\n>\n> psql output as follows:\n>\n> ...\n> psql:merge.sql:33: ERROR: variable not found in subplan target lists\n> ROLLBACK\n> [local] joe@joe=# \\errverbose\n> ERROR: XX000: variable not found in subplan target lists\n> LOCATION: fix_join_expr_mutator, setrefs.c:2800\n>\n> Stack trace:\n>\n> fix_join_expr_mutator setrefs.c:2800\n> expression_tree_mutator nodeFuncs.c:3348\n> fix_join_expr_mutator setrefs.c:2853\n> expression_tree_mutator nodeFuncs.c:2992\n> fix_join_expr_mutator setrefs.c:2853\n> expression_tree_mutator nodeFuncs.c:3348\n> fix_join_expr_mutator setrefs.c:2853\n> fix_join_expr setrefs.c:2753\n> set_plan_refs setrefs.c:1085\n> set_plan_references setrefs.c:315\n> standard_planner planner.c:498\n> planner planner.c:277\n> pg_plan_query postgres.c:883\n> pg_plan_queries postgres.c:975\n> exec_simple_query postgres.c:1169\n> PostgresMain postgres.c:4520\n> BackendRun postmaster.c:4593\n> BackendStartup postmaster.c:4321\n> ServerLoop postmaster.c:1801\n> PostmasterMain postmaster.c:1473\n> main main.c:202\n> __libc_start_main 0x00007fc4ccc0b1e2\n> _start 0x000000000048804e\n>\n> Reproducer script:\n>\n> BEGIN;\n> DROP TABLE IF EXISTS item, incoming, source CASCADE;\n>\n> CREATE TABLE item\n> (order_id INTEGER NOT NULL,\n> item_id INTEGER NOT NULL,\n> quantity INTEGER NOT NULL,\n> price NUMERIC NOT NULL,\n> CONSTRAINT pk_item PRIMARY KEY (order_id, item_id));\n>\n> INSERT INTO item VALUES (100, 1, 4, 100.00), (100, 2, 9, 199.00);\n>\n> CREATE TABLE incoming (order_id, item_id, quantity, price)\n> AS (VALUES (100, 1, 4, 100.00), (100, 3, 1, 200.00));\n>\n> CREATE TABLE source (order_id, item_id, quantity, price) AS\n> (SELECT order_id, item_id, incoming.quantity, incoming.price\n> FROM item LEFT JOIN incoming USING (order_id, item_id));\n>\n> MERGE INTO item a\n> USING source b\n> ON (a.order_id, a.item_id) =\n> (b.order_id, b.item_id)\n> WHEN NOT MATCHED\n> THEN INSERT (order_id, item_id, 
quantity, price)\n> VALUES (order_id, item_id, quantity, price)\n> WHEN MATCHED\n> AND a.* IS DISTINCT FROM b.*\n> THEN UPDATE SET (quantity, price) = (b.quantity, b.price)\n> WHEN MATCHED\n> AND (b.quantity IS NULL AND b.price IS NULL)\n> THEN DELETE;\n> COMMIT;\n>\n> It seems related to the use of a.* and b.*\n>\n\nThat's right. The varattno is set to zero for whole-row Var. And in this\ncase these whole-row Vars are not included in the targetlist.\n\nAttached is an attempt for the fix.\n\nThanks\nRichard", "msg_date": "Wed, 6 Apr 2022 15:38:52 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" }, { "msg_contents": "On 2022-Apr-06, Richard Guo wrote:\n\n> That's right. The varattno is set to zero for whole-row Var. And in this\n> case these whole-row Vars are not included in the targetlist.\n> \n> Attached is an attempt for the fix.\n\nWow, this is very interesting. I was surprised that this patch was\nnecessary at all -- I mean, if wholerow refs don't work, then why do\nreferences to any other columns work? The answer is that parse_merge.c\nis already setting up the subplan's targetlist by expanding all vars of\nthe source relation. I then remembered than in Simon's (or Pavan's)\noriginal coding, parse_merge.c had a hack to include a var with the\nsource's wholerow in that targetlist, which I had later removed ...\n\nI eventually realized that there's no need for parse_merge.c to expand\nthe source rel at all, and indeed it's wasteful: we can just let\npreprocess_targetlist include the vars that are referenced by either\nquals or each action's targetlist instead. 
That led me to the attached\npatch, which is not commit-quality yet but it should show what I have in\nmind.\n\nI added a test query to tickle this problematic case.\n\nAnother point, not completely connected to this bug but appearing in the\nsame function, is that we have some redundant code: we can just let the\nstanza for UPDATE/DELETE do the identity columns dance. This saves a\nfew lines in the MERGE-specific stanza there, which was doing exactly\nthe same thing. (There's a difference in the \"inh\" test, but I think\nthat was just outdated.)\n\nI also discovered that the comment for fix_join_expr needed an update,\nsince it doesn't mention MERGE, and it does mention all other situations\nin which it is used. Added that too.\n\n\nThis patch is a comment about \"aggregates, window functions and\nplaceholder vars\". This was relevant and correct when only the qual of\neach action was being handled (i.e., Richard's patch). Now that we're\nalso handling the action's targetlist, I think I need to put the PVC\nflags back. But no tests broke, which probably means we also need some\nadditional tests cases.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Apr 2022 23:26:38 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" }, { "msg_contents": "On Sat, Apr 9, 2022 at 5:26 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Apr-06, Richard Guo wrote:\n>\n> > That's right. The varattno is set to zero for whole-row Var. And in this\n> > case these whole-row Vars are not included in the targetlist.\n> >\n> > Attached is an attempt for the fix.\n>\n> Wow, this is very interesting. I was surprised that this patch was\n> necessary at all -- I mean, if wholerow refs don't work, then why do\n> references to any other columns work? 
The answer is that parse_merge.c\n> is already setting up the subplan's targetlist by expanding all vars of\n> the source relation. I then remembered than in Simon's (or Pavan's)\n> original coding, parse_merge.c had a hack to include a var with the\n> source's wholerow in that targetlist, which I had later removed ...\n>\n\nAt first I was wondering whether we need to also include vars used in\neach action's targetlist, just as what we did for each action's qual.\nThen later I realized parse_merge.c already did that. But now it looks\nmuch better to process them two in preprocess_targetlist.\n\n\n>\n> I eventually realized that there's no need for parse_merge.c to expand\n> the source rel at all, and indeed it's wasteful: we can just let\n> preprocess_targetlist include the vars that are referenced by either\n> quals or each action's targetlist instead. That led me to the attached\n> patch, which is not commit-quality yet but it should show what I have in\n> mind.\n>\n\nThis patch looks in a good shape to me.\n\nA minor comment is that we can use list_concat_copy(list1, list2)\ninstead of list_concat(list_copy(list1), list2) for better efficiency.\n\nThanks\nRichard\n\n", "msg_date": "Mon, 11 Apr 2022 12:25:28 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" }, { "msg_contents": "On 2022-Apr-11, Richard Guo wrote:\n\n> At first I was wondering whether we need to also include vars used in\n> each action's targetlist, just as what we did for each action's qual.\n> Then later I realized parse_merge.c already did that. But now it looks\n> much better to process them two in preprocess_targetlist.\n\nYeah. I pushed that.\n\nHowever, now EXPLAIN VERBOSE doesn't show the columns from the source\nrelation in the Output line --- I think only those that are used as join\nquals are shown, thanks to distribute_quals_to_rels. I think it would\nbe better to fix this. Maybe expanding the source target list earlier\nis called for, after all. 
I looked at transformUpdateStmt and siblings\nfor inspiration, but came out blank.\n\n> A minor comment is that we can use list_concat_copy(list1, list2)\n> instead of list_concat(list_copy(list1), list2) for better efficiency.\n\nThanks for that tip.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La vida es para el que se aventura\"\n\n\n", "msg_date": "Tue, 12 Apr 2022 09:47:47 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE bug report" } ]
[ { "msg_contents": "Hi,\ncould you help me understand if this is an intended behaviour, or I'm\nincorrectly querying a \"char\" field? I have simple table with column\ndeclared as:\n c_tinyint char NOT NULL\n\nThe column contains tiny integers in range 0-10. When I query the column\nfrom my app using libpq values 1-10 are returned correctly as 0x1-0x10. But\nvalue of zero is returned as 0x20 (expected 0x0).\nThe pgAdmin displays result of the query\nSELECT c_tinyint, ascii(c_tinyint) FROM tbl\nas shown below:\n| c_tinyint | ascii |\n| character(1) | integer |\n| | 0 |\n| | 1 |\n| | 2 |\n| | 3 |\n| | 4 |\n\nThank you\nK\n\n", "msg_date": "Tue, 5 Apr 2022 21:58:17 -0700", "msg_from": "Konstantin Izmailov <pgfizm@gmail.com>", "msg_from_op": true, "msg_subject": "zero char is returned as space" }, { "msg_contents": "Konstantin Izmailov <pgfizm@gmail.com> writes:\n> could you help me understand if this is an intended behaviour, or I'm\n> incorrectly querying a \"char\" field?\n\nWe do not support '\\0' as an element of a string value. 
You didn't\nshow how you're trying to insert this value, but I suspect that\nPostgres saw it as an empty string which it then space-padded to\nlength 1 because that's what char(1) does.\n\nDon't use a string field to store an integer. What with the need\nfor a length header, you wouldn't be saving any space compared to\n\"smallint\" even if there weren't any semantic issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 01:08:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: zero char is returned as space" }, { "msg_contents": "Tom,\nthank you very much! It makes sense now.\n\nK\n\nOn Tue, Apr 5, 2022 at 10:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Konstantin Izmailov <pgfizm@gmail.com> writes:\n> > could you help me understand if this is an intended behaviour, or I'm\n> > incorrectly querying a \"char\" field?\n>\n> We do not support '\\0' as an element of a string value. You didn't\n> show how you're trying to insert this value, but I suspect that\n> Postgres saw it as an empty string which it then space-padded to\n> length 1 because that's what char(1) does.\n>\n> Don't use a string field to store an integer. What with the need\n> for a length header, you wouldn't be saving any space compared to\n> \"smallint\" even if there weren't any semantic issues.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 5 Apr 2022 22:23:53 -0700", "msg_from": "Konstantin Izmailov <pgfizm@gmail.com>", "msg_from_op": true, "msg_subject": "Re: zero char is returned as space" } ]
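The char(1) padding behaviour explained in the thread above can be checked from a shell with psql. This is a sketch: it assumes a reachable server with default connection settings, and the table names are illustrative, not the poster's schema.

```shell
# char(1) pads an empty string to one space, which is why libpq sees 0x20;
# there is no way to store a NUL byte in a text-family column.
psql -X -c "CREATE TEMP TABLE demo (c char(1) NOT NULL);
            INSERT INTO demo VALUES ('');
            SELECT c = ' ' AS padded_to_space FROM demo;"
# A 0-10 integer is better declared as smallint than as char(1):
psql -X -c "CREATE TABLE demo2 (c_tinyint smallint NOT NULL
            CHECK (c_tinyint BETWEEN 0 AND 10));"
```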
[ { "msg_contents": "In this email, I would like to discuss allowing streaming logical\ntransactions (large in-progress transactions) by background workers\nand parallel apply in general. The goal of this work is to improve the\nperformance of the apply work in logical replication.\n\nCurrently, for large transactions, the publisher sends the data in\nmultiple streams (changes divided into chunks depending upon\nlogical_decoding_work_mem), and then on the subscriber-side, the apply\nworker writes the changes into temporary files and once it receives\nthe commit, it read from the file and apply the entire transaction. To\nimprove the performance of such transactions, we can instead allow\nthem to be applied via background workers. There could be multiple\nways to achieve this:\n\nApproach-1: Assign a new bgworker (if available) as soon as the xact's\nfirst stream came and the main apply worker will send changes to this\nnew worker via shared memory. We keep this worker assigned till the\ntransaction commit came and also wait for the worker to finish at\ncommit. This preserves commit ordering and avoid writing to and\nreading from file in most cases. We still need to spill if there is no\nworker available. We also need to allow stream_stop to complete by the\nbackground worker to finish it to avoid deadlocks because T-1's\ncurrent stream of changes can update rows in conflicting order with\nT-2's next stream of changes.\n\nApproach-2: Assign another worker to spill the changes and only allow\nto apply at the commit time by the same or another worker. Now, to\npreserve, the commit order, we need to wait at commit so that the\nassigned respective workers can finish. 
This won't avoid spilling to\ndisk and reading back at commit time but can help in receiving and\nprocessing more data than we are doing currently but not sure if this\ncan win over Approach-1 because we still need to write and read from\nthe file and we need to probably use share memory queue to send the\ndata to other background workers to process it.\n\nWe need to change error handling to allow the above parallelization.\nThe current model for apply is such that if any error occurs while\napplying we will simply report the error in server logs and the apply\nworker will exit. On the restart, it will again get the transaction\ndata which previously failed and it will try to apply it again. Now,\nin the new approach (say Approach-1), we need to ensure that all the\nactive workers that are applying in-progress transactions should also\nexit before the main apply worker exit to allow rollback of currently\napplied transactions and re-apply them as we get the data again. This\nis required to avoid losing transactions if any later transaction got\ncommitted and updated the replication origin as in such cases the\nearlier transactions won't be resent. This won't be much different\nthan what we do now, where say two transactions, t-1, and t-2 have\nmultiple streams overlapped. Now, if the error happened before one of\nthose is completed via commit or rollback, all the data needs to be\nresent by the server and processed again by the apply worker.\n\nThe next step in this area is to parallelize apply of all possible\ntransactions. I think the main things we need to care about to allow\nthis are:\n1. Transaction dependency: We can't simply allow dependent\ntransactions to perform in parallel as that can lead to inconsistency.\nSay, if we insert a row in the first transaction and update it in the\nsecond transaction and allow both transactions to apply in parallel,\nthe insert-one may occur later and the update will fail.\n2. 
Deadlocks: These can happen because now the transactions will be\napplied in parallel. Say transaction T-1 updates row-2 and row-3, and\ntransaction T-2 updates row-3 and row-2; if we allow them in parallel then\nthere is a chance of deadlock, whereas there is no such risk in serial\nexecution where the commit order is preserved.\n\nWe can solve both problems if we allow only independent xacts to be\nparallelized. The transactions would be considered dependent if they\noperate on the same set of rows from the same table. Now, apart from\nthis, there could be other cases where determining transaction\ndependency won't be straightforward, so we can disallow those\ntransactions from participating in parallel apply. Those are the cases\nwhere functions are used in the table definition expressions. We can\nthink of identifying safe functions like all built-in functions, and\nany immutable functions (and probably stable functions). We need to\ncheck safety for cases such as (a) trigger functions, (b) column\ndefault value expressions (as those can call functions), (c)\nconstraint expressions, (d) foreign keys, (e) operations on\npartitioned tables (especially those performed via the\npublish_via_partition_root option), as we need to check the expressions\non all partitions.\n\nTransactions that operate on the same set of tables and are\nperforming truncate can lead to deadlock, so we need to consider such\ntransactions as dependent.\n\nThe basic idea is that for each running xact we can maintain the table\noid, row id (pkey or replica identity), and xid in a hash table in the\napply worker. For any new xact, we need to check that it doesn't\nconflict with one of the previously running xacts and only then allow it\nto be applied in parallel. We can collect all the changes of a\ntransaction in an in-memory buffer while checking its dependency and\nthen allow it to be performed by one of the available workers at commit. 
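To make the row-level dependency check described above concrete, here is a toy C sketch. This is not PostgreSQL code: the real implementation would presumably use a dynahash table keyed by (table oid, replica identity value) and proper datum comparison; the fixed-size array and all names here (track_row, row_conflicts, untrack_xact) are invented purely for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_TRACKED_ROWS 1024

typedef uint32_t Oid;
typedef uint32_t TransactionId;

typedef struct TrackedRow
{
    Oid           reloid;   /* table the change touches */
    uint64_t      rowkey;   /* stand-in for pkey/replica identity */
    TransactionId xid;      /* running xact that touched the row */
} TrackedRow;

static TrackedRow rows[MAX_TRACKED_ROWS];
static int nrows = 0;

/*
 * Record that 'xid' modifies (reloid, rowkey).  Returns false on
 * overflow; a real implementation would escalate to a table-level
 * strategy at that point, as described above.
 */
bool
track_row(TransactionId xid, Oid reloid, uint64_t rowkey)
{
    if (nrows >= MAX_TRACKED_ROWS)
        return false;
    rows[nrows].reloid = reloid;
    rows[nrows].rowkey = rowkey;
    rows[nrows].xid = xid;
    nrows++;
    return true;
}

/*
 * A transaction 'xid' touching (reloid, rowkey) conflicts if some
 * other running xact already touched the same row of the same table;
 * such a transaction must not be applied in parallel.
 */
bool
row_conflicts(TransactionId xid, Oid reloid, uint64_t rowkey)
{
    for (int i = 0; i < nrows; i++)
    {
        if (rows[i].reloid == reloid &&
            rows[i].rowkey == rowkey &&
            rows[i].xid != xid)
            return true;
    }
    return false;
}

/* Forget all rows of a transaction once it is applied completely. */
void
untrack_xact(TransactionId xid)
{
    int dst = 0;

    for (int i = 0; i < nrows; i++)
        if (rows[i].xid != xid)
            rows[dst++] = rows[i];
    nrows = dst;
}
```

A new streamed xact would only be routed to a parallel worker while row_conflicts() stays false for every one of its changes; otherwise it is treated as dependent and applied serially.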
If\nthe rows for a particular transaction exceed a certain threshold then\nwe need to escalate to a table-level strategy which means any other\ntransaction operating on the same table will be considered dependent.\nFor very large transactions that didn't fit in the in-memory buffer,\neither we need to spill those to disk or just decide to not\nparallelize them. We need to remove rows from the hash table once the\ntransaction is applied completely.\n\nThe other thing we need to ensure while parallelizing independent\ntransactions is to preserve the commit order of transactions. This is\nto ensure that in case of errors, we won't get replicas out of sync.\nSay, if we allow the commit order to be changed then it is possible\nthat some later transaction has updated the replication_origin LSN to\na later value than the transaction for which the apply is in progress.\nNow, if the error occurs for such an in-progress transaction, the\nserver won't send the changes for such a transaction as the\nreplication_origin's LSN would have moved ahead.\n\nEven though we are preserving commit order there will be a benefit of\ndoing parallel apply as we should be able to parallelize most of the\nwrites in the transactions.\n\nThoughts?\n\nThanks to Hou-San and Shi-San for helping me to investigate these ideas.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Apr 2022 10:49:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n\r\n> In this email, I would like to discuss allowing streaming logical\r\n> transactions (large in-progress transactions) by background workers\r\n> and parallel apply in general. 
The goal of this work is to improve the\r\n> performance of the apply work in logical replication.\r\n> \r\n> Currently, for large transactions, the publisher sends the data in\r\n> multiple streams (changes divided into chunks depending upon\r\n> logical_decoding_work_mem), and then on the subscriber-side, the apply\r\n> worker writes the changes into temporary files and once it receives\r\n> the commit, it read from the file and apply the entire transaction. To\r\n> improve the performance of such transactions, we can instead allow\r\n> them to be applied via background workers. There could be multiple\r\n> ways to achieve this:\r\n> \r\n> Approach-1: Assign a new bgworker (if available) as soon as the xact's\r\n> first stream came and the main apply worker will send changes to this\r\n> new worker via shared memory. We keep this worker assigned till the\r\n> transaction commit came and also wait for the worker to finish at\r\n> commit. This preserves commit ordering and avoid writing to and\r\n> reading from file in most cases. We still need to spill if there is no\r\n> worker available. We also need to allow stream_stop to complete by the\r\n> background worker to finish it to avoid deadlocks because T-1's\r\n> current stream of changes can update rows in conflicting order with\r\n> T-2's next stream of changes.\r\n> \r\n\r\nAttach the POC patch for the Approach-1 of \"Perform streaming logical\r\ntransactions by background workers\". The patch is still a WIP patch as\r\nthere are serval TODO items left, including:\r\n\r\n* error handling for bgworker\r\n* support for SKIP the transaction in bgworker \r\n* handle the case when there is no more worker available\r\n (might need spill the data to the temp file in this case)\r\n* some potential bugs\r\n\r\nThe original patch is borrowed from an old thread[1] and was rebased and\r\nextended/cleaned by me. 
Comments and suggestions are welcome.\r\n\r\n[1] https://www.postgresql.org/message-id/8eda5118-2dd0-79a1-4fe9-eec7e334de17%40postgrespro.ru\r\n\r\nHere are some performance results of the patch shared by Shi Yu off-list.\r\n\r\nThe performance was tested by varying\r\nlogical_decoding_work_mem, which include two cases:\r\n\r\n1) bulk insert.\r\n2) create savepoint and rollback to savepoint.\r\n\r\nI used synchronous logical replication in the test, compared SQL execution\r\ntimes before and after applying the patch.\r\n\r\nThe results are as follows. The bar charts and the details of the test are\r\nAttached as well.\r\n\r\nRESULT - bulk insert (5kk)\r\n----------------------------------\r\nlogical_decoding_work_mem 64kB 128kB 256kB 512kB 1MB 2MB 4MB 8MB 16MB 32MB 64MB\r\nHEAD 51.673 51.199 51.166 50.259 52.898 50.651 51.156 51.210 50.678 51.256 51.138\r\npatched 36.198 35.123 34.223 29.198 28.712 29.090 29.709 29.408 34.367 34.716 35.439\r\n\r\nRESULT - rollback to savepoint (600k)\r\n----------------------------------\r\nlogical_decoding_work_mem 64kB 128kB 256kB 512kB 1MB 2MB 4MB 8MB 16MB 32MB 64MB\r\nHEAD 31.101 31.087 30.931 31.015 30.920 31.109 30.863 31.008 30.875 30.775 29.903\r\npatched 28.115 28.487 27.804 28.175 27.734 29.047 28.279 27.909 28.277 27.345 28.375\r\n\r\n\r\nSummary:\r\n1) bulk insert\r\n\r\nFor different logical_decoding_work_mem size, it takes about 30% ~ 45% less\r\ntime, which looks good to me. After applying this patch, it seems that the\r\nperformance is better when logical_decoding_work_mem is between 512kB and 8MB.\r\n\r\n2) rollback to savepoint\r\n\r\nThere is an improvement of about 5% ~ 10% after applying this patch.\r\n\r\nIn this case, the patch spend less time handling the part that is not\r\nrolled back, because it saves the time writing the changes into a temporary file\r\nand reading the file. 
And for the part that is rolled back, it would spend more\r\ntime than HEAD, because it takes more time to write to filesystem and rollback\r\nthan writing a temporary file and truncating the file. Overall, the results looks\r\ngood.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 8 Apr 2022 09:14:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, April 8, 2022 5:14 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> \r\n> > In this email, I would like to discuss allowing streaming logical\r\n> > transactions (large in-progress transactions) by background workers\r\n> > and parallel apply in general. The goal of this work is to improve the\r\n> > performance of the apply work in logical replication.\r\n> >\r\n> > Currently, for large transactions, the publisher sends the data in\r\n> > multiple streams (changes divided into chunks depending upon\r\n> > logical_decoding_work_mem), and then on the subscriber-side, the apply\r\n> > worker writes the changes into temporary files and once it receives\r\n> > the commit, it read from the file and apply the entire transaction. To\r\n> > improve the performance of such transactions, we can instead allow\r\n> > them to be applied via background workers. There could be multiple\r\n> > ways to achieve this:\r\n> >\r\n> > Approach-1: Assign a new bgworker (if available) as soon as the xact's\r\n> > first stream came and the main apply worker will send changes to this\r\n> > new worker via shared memory. We keep this worker assigned till the\r\n> > transaction commit came and also wait for the worker to finish at\r\n> > commit. This preserves commit ordering and avoid writing to and\r\n> > reading from file in most cases. 
We still need to spill if there is no\r\n> > worker available. We also need to allow stream_stop to complete by the\r\n> > background worker to finish it to avoid deadlocks because T-1's\r\n> > current stream of changes can update rows in conflicting order with\r\n> > T-2's next stream of changes.\r\n> >\r\n> \r\n> Attach the POC patch for the Approach-1 of \"Perform streaming logical\r\n> transactions by background workers\". The patch is still a WIP patch as\r\n> there are serval TODO items left, including:\r\n> \r\n> * error handling for bgworker\r\n> * support for SKIP the transaction in bgworker\r\n> * handle the case when there is no more worker available\r\n> (might need spill the data to the temp file in this case)\r\n> * some potential bugs\r\n> \r\n> The original patch is borrowed from an old thread[1] and was rebased and\r\n> extended/cleaned by me. Comments and suggestions are welcome.\r\n\r\nAttach a new version patch which improved the error handling and handled the case\r\nwhen there is no more worker available (will spill the data to the temp file in this case).\r\n\r\nCurrently, it still doesn't support skip the streamed transaction in bgworker, because\r\nin this approach, we don't know the last lsn for the streamed transaction being applied,\r\nso cannot get the lsn to SKIP. 
I will think more about it and keep testing the patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 14 Apr 2022 03:42:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Apr 14, 2022 at 9:12 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, April 8, 2022 5:14 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>\n> Attach a new version patch which improved the error handling and handled the case\n> when there is no more worker available (will spill the data to the temp file in this case).\n>\n> Currently, it still doesn't support skip the streamed transaction in bgworker, because\n> in this approach, we don't know the last lsn for the streamed transaction being applied,\n> so cannot get the lsn to SKIP. I will think more about it and keep testing the patch.\n>\n\nI think we can avoid performing the streaming transaction by bgworker\nif skip_lsn is set. This needs some more thought but anyway I see\nanother problem in this patch. I think we won't be able to make the\ndecision whether to apply the change for a relation that is not in the\n'READY' state (see should_apply_changes_for_rel) as we won't know\n'remote_final_lsn' by that time for streaming transactions. I think\nwhat we can do here is that before assigning the transaction to\nbgworker, we can check if any of the rels is not in the 'READY' state,\nwe can make the transaction spill the changes as we are doing now.\nEven if we do such a check, it is still possible that some rel on\nwhich this transaction is performing operation can appear to be in\n'non-ready' state after starting bgworker and for such a case I think\nwe need to give error and restart the transaction as we have no way to\nknow whether we need to perform an operation on the 'rel'. 
This is\npossible if the user performs REFRESH PUBLICATION in parallel to this\ntransaction as that can add a new rel to the pg_subscription_rel.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Apr 2022 12:27:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, April 19, 2022 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Apr 14, 2022 at 9:12 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, April 8, 2022 5:14 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach a new version patch which improved the error handling and handled\r\n> the case\r\n> > when there is no more worker available (will spill the data to the temp file in\r\n> this case).\r\n> >\r\n> > Currently, it still doesn't support skip the streamed transaction in bgworker,\r\n> because\r\n> > in this approach, we don't know the last lsn for the streamed transaction\r\n> being applied,\r\n> > so cannot get the lsn to SKIP. I will think more about it and keep testing the\r\n> patch.\r\n> >\r\n> \r\n> I think we can avoid performing the streaming transaction by bgworker\r\n> if skip_lsn is set. This needs some more thought but anyway I see\r\n> another problem in this patch. I think we won't be able to make the\r\n> decision whether to apply the change for a relation that is not in the\r\n> 'READY' state (see should_apply_changes_for_rel) as we won't know\r\n> 'remote_final_lsn' by that time for streaming transactions. 
I think\r\n> what we can do here is that before assigning the transaction to\r\n> bgworker, we can check if any of the rels is not in the 'READY' state,\r\n> we can make the transaction spill the changes as we are doing now.\r\n> Even if we do such a check, it is still possible that some rel on\r\n> which this transaction is performing operation can appear to be in\r\n> 'non-ready' state after starting bgworker and for such a case I think\r\n> we need to give error and restart the transaction as we have no way to\r\n> know whether we need to perform an operation on the 'rel'. This is\r\n> possible if the user performs REFRESH PUBLICATION in parallel to this\r\n> transaction as that can add a new rel to the pg_subscription_rel.\r\n\r\nChanged as suggested.\r\n\r\nAttach the new version patch which cleanup some code and fix above problem. For\r\nnow, it won't apply streaming transaction in bgworker if skiplsn is set or any\r\ntable is not in 'READY' state.\r\n\r\nBesides, extent the subscription streaming option to ('on/off/apply(apply in\r\nbgworker)/spool(spool to file)') so that user can control whether to apply The\r\ntransaction in a bgworker.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 20 Apr 2022 08:57:02 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, April 20, 2022 4:57 PM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Tuesday, April 19, 2022 2:58 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Apr 14, 2022 at 9:12 AM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Friday, April 8, 2022 5:14 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Attach a new version patch which improved the error handling and\r\n> > > handled\r\n> > the case\r\n> > > when there is no more worker 
available (will spill the data to the\r\n> > > temp file in\r\n> > this case).\r\n> > >\r\n> > > Currently, it still doesn't support skip the streamed transaction in\r\n> > > bgworker,\r\n> > because\r\n> > > in this approach, we don't know the last lsn for the streamed\r\n> > > transaction\r\n> > being applied,\r\n> > > so cannot get the lsn to SKIP. I will think more about it and keep\r\n> > > testing the\r\n> > patch.\r\n> > >\r\n> >\r\n> > I think we can avoid performing the streaming transaction by bgworker\r\n> > if skip_lsn is set. This needs some more thought but anyway I see\r\n> > another problem in this patch. I think we won't be able to make the\r\n> > decision whether to apply the change for a relation that is not in the\r\n> > 'READY' state (see should_apply_changes_for_rel) as we won't know\r\n> > 'remote_final_lsn' by that time for streaming transactions. I think\r\n> > what we can do here is that before assigning the transaction to\r\n> > bgworker, we can check if any of the rels is not in the 'READY' state,\r\n> > we can make the transaction spill the changes as we are doing now.\r\n> > Even if we do such a check, it is still possible that some rel on\r\n> > which this transaction is performing operation can appear to be in\r\n> > 'non-ready' state after starting bgworker and for such a case I think\r\n> > we need to give error and restart the transaction as we have no way to\r\n> > know whether we need to perform an operation on the 'rel'. 
This is\r\n> > possible if the user performs REFRESH PUBLICATION in parallel to this\r\n> > transaction as that can add a new rel to the pg_subscription_rel.\r\n> \r\n> Changed as suggested.\r\n> \r\n> Attach the new version patch which cleanup some code and fix above problem.\r\n> For now, it won't apply streaming transaction in bgworker if skiplsn is set or any\r\n> table is not in 'READY' state.\r\n> \r\n> Besides, extent the subscription streaming option to ('on/off/apply(apply in\r\n> bgworker)/spool(spool to file)') so that user can control whether to apply The\r\n> transaction in a bgworker.\r\n\r\nSorry, there was a miss in the pg_dump testcase which cause failure in CFbot.\r\nAttach a new version patch which fix that.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 20 Apr 2022 12:22:12 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hello Hou-san. Here are my review comments for v4-0001. Sorry, there\nare so many of them (it is a big patch); some are trivial, and others\nyou might easily dismiss due to my misunderstanding of the code. But\nhopefully, there are at least some comments that can be helpful in\nimproving the patch quality.\n\n======\n\n1. General comment - terms\n\nNeeds to be more consistent about what exactly you will call this new\nworker. Sometimes called \"locally apply worker\"; sometimes \"bgworker\";\nsometimes \"subworker\", sometimes \"BGW\", sometimes other variations etc\n… Need to pick ONE good name then update all the references/comments\nin the patch to use that name consistently throughout.\n\n~~~\n\n2. General comment - option values\n\nI felt the \"streaming\" option values ought to be different from what\nthis patch proposes so it affected some of my following review\ncomments. (Later I give example what I thought the values should be).\n\n~~~\n\n3. 
General comment - bool option change to enum\n\nThis option change for \"streaming\" is similar to the options change\nfor \"copy_data=force\" that Vignesh is doing for his \"infinite\nrecursion\" patch v9-0002 [1]. Yet they seem implemented differently\n(i.e. char versus enum). I think you should discuss the 2 approaches\nwith Vignesh and then code these option changes in a consistent way.\n\n~~~\n\n4. General comment - worker.c globals\n\nThere seems a growing number of global variables in the worker.c code.\nI was wondering is it really necessary? because the logic becomes more\nintricate now if you have to know that some global was set up as a\nside-effect of some other function call. E.g maybe if you could do a\nfew more HTAB lookups to identify the bgworker then might not need to\nrely on the globals so much?\n\n======\n\n5. Commit message - typo\n\nand then on the subscriber-side, the apply worker writes the changes into\ntemporary files and once it receives the commit, it read from the file and\napply the entire transaction. To improve the performance of such transactions,\n\ntypo: \"read\" -> \"reads\"\ntypo: \"apply\" -> \"applies\"\n\n~~~\n\n6. Commit message - wording\n\nIn this approach, we assign a new bgworker (if available) as soon as the xact's\nfirst stream came and the main apply worker will send changes to this new\nworker via shared memory. The bgworker will directly apply the change instead\nof writing it to temporary files. We keep this worker assigned till the\ntransaction commit came and also wait for the worker to finish at commit. This\n\nwording: \"came\" -> \"is received\" (2x)\n\n~~~\n\n7. Commit message - terms\n\n(this is the same point as comment #1)\n\nI think there is too much changing of terminology. IMO it will be\neasier if you always just call the current main apply workers the\n\"apply worker\" and always call this new worker the \"bgworker\" (or some\nbetter name). But never just call it the \"worker\".\n\n~~~\n\n8. 
Commit message - typo\n\ntransaction commit came and also wait for the worker to finish at commit. This\npreserves commit ordering and avoid writing to and reading from file in most\ncases. We still need to spill if there is no worker available. We also need to\n\ntypo: \"avoid\" -> \"avoids\"\n\n~~~\n\n9. Commit message - wording/typo\n\nAlso extend the subscription streaming option so that user can control whether\napply the streaming transaction in a bgworker or spill the change to disk. User\n\nwording: \"Also extend\" -> \"This patch also extends\"\ntypo: \"whether apply\" -> \"whether to apply\"\n\n~~~\n\n10. Commit message - option values\n\napply the streaming transaction in a bgworker or spill the change to disk. User\ncan set the streaming option to 'on/off', 'apply', 'spool'. For now, 'on' and\n\nThose values do not really seem intuitive to me. E.g. if you set\n\"apply\" then you already said above that sometimes it might have to\nspool anyway if there were no bgworkers available. Why not just name\nthem like \"on/off/parallel\"?\n\n(I have written more about this in a later comment #14)\n\n======\n\n11. doc/src/sgml/catalogs.sgml - wording\n\n+ Controls in which modes we handle the streaming of in-progress\ntransactions.\n+ <literal>f</literal> = disallow streaming of in-progress transactions\n\nwording: \"Controls in which modes we handle...\" -> \"Controls how to handle...\"\n\n~~~\n\n12. doc/src/sgml/catalogs.sgml - wording\n\n+ <literal>a</literal> = apply changes directly in background worker\n\nwording: \"in background worker\" -> \"using a background worker\"\n\n~~~\n\n13. doc/src/sgml/catalogs.sgml - option values\n\nAnyway, all this page will be different if I can persuade you to\nchange the option values (see comment #14)\n\n======\n\n14. doc/src/sgml/ref/create_subscription.sgml - option values\n\nSince the default value is \"off\" I felt these options would be\nbetter/simpler if they are just like \"off/on/parallel\". 
E.g.\nSpecifically, I think the \"on\" should behave the same as the current\ncode does, so the user should deliberately choose to use this new\nbgworker approach.\n\ne.g.\n- \"off\" = off, same as current PG15\n- \"on\" = on, same as current PG15\n- \"parallel\" = try to use the new bgworker to apply stream\n\n======\n\n15. src/backend/commands/subscriptioncmds.c - SubOpts\n\nVignesh uses similar code for his \"infinite recursion\" patch being\ndeveloped [1] but he used an enum but here you use a char. I think you\nshould discuss together both decide to use either enum or char for the\nmember so there is a consistency.\n\n~~~\n\n16. src/backend/commands/subscriptioncmds.c - combine conditions\n\n+ /*\n+ * The set of strings accepted here should match up with the\n+ * grammar's opt_boolean_or_string production.\n+ */\n+ if (pg_strcasecmp(sval, \"true\") == 0)\n+ return SUBSTREAM_APPLY;\n+ if (pg_strcasecmp(sval, \"false\") == 0)\n+ return SUBSTREAM_OFF;\n+ if (pg_strcasecmp(sval, \"on\") == 0)\n+ return SUBSTREAM_APPLY;\n+ if (pg_strcasecmp(sval, \"off\") == 0)\n+ return SUBSTREAM_OFF;\n+ if (pg_strcasecmp(sval, \"spool\") == 0)\n+ return SUBSTREAM_SPOOL;\n+ if (pg_strcasecmp(sval, \"apply\") == 0)\n+ return SUBSTREAM_APPLY;\n\nBecause I think the possible option values should be different to\nthese I can’t comment much on this code, except to suggest IMO the if\nconditions should be combined where the options are considered to be\nequivalent.\n\n======\n\n17. src/backend/replication/logical/launcher.c - stop_worker\n\n@@ -72,6 +72,7 @@ static void logicalrep_launcher_onexit(int code, Datum arg);\n static void logicalrep_worker_onexit(int code, Datum arg);\n static void logicalrep_worker_detach(void);\n static void logicalrep_worker_cleanup(LogicalRepWorker *worker);\n+static void stop_worker(LogicalRepWorker *worker);\n\nThe function name does not seem consistent with the other similar static funcs.\n\n~~~\n\n18. 
src/backend/replication/logical/launcher.c - change if\n\n@@ -225,7 +226,7 @@ logicalrep_worker_find(Oid subid, Oid relid, bool\nonly_running)\n LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n\n if (w->in_use && w->subid == subid && w->relid == relid &&\n- (!only_running || w->proc))\n+ (!only_running || w->proc) && !w->subworker)\n {\nMaybe code would be easier (and then you can comment it) if you do like:\n\n/* TODO: comment here */\nif (w->subworker)\ncontinue;\n\n~~~\n\n19. src/backend/replication/logical/launcher.c -\nlogicalrep_worker_launch comment\n\n@@ -262,9 +263,9 @@ logicalrep_workers_find(Oid subid, bool only_running)\n /*\n * Start new apply background worker, if possible.\n */\n-void\n+bool\n logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid,\n- Oid relid)\n+ Oid relid, dsm_handle subworker_dsm)\n\nSaying \"start new apply...\" comment feels a bit misleading. E.g. this\nis also called to start the sync worker. And also for the main apply\nworker (which we are not really calling a \"background worker\" in other\nplaces). So this is the same kind of terminology problem as my review\ncomment #1.\n\n~~~\n\n20. src/backend/replication/logical/launcher.c - asserts?\n\nI thought maybe there should be some assertions in this code upfront.\nE.g. cannot have OidIsValid(relid) and subworker_dsm valid at the same\ntime.\n\n~~~\n\n21. src/backend/replication/logical/launcher.c - terms\n\n+ else\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication apply worker for subscription %u\", subid);\n\nI think the names of all these workers is a bit vague still in the\nmessages – e.g. \"logical replication worker\" versus \"logical\nreplication apply worker\" sounds too similar to me. So this is kind of\nsame as my review comment #1.\n\n~~~\n\n22. 
src/backend/replication/logical/launcher.c -\nlogicalrep_worker_stop double unlock?\n\n@@ -450,6 +465,18 @@ logicalrep_worker_stop(Oid subid, Oid relid)\n return;\n }\n\n+ stop_worker(worker);\n+\n+ LWLockRelease(LogicalRepWorkerLock);\n+}\n\nIIUC, sometimes it seems that stop_worker() function might already\nrelease the lock before it returns. In that case won’t this other\nexplicit lock release be a problem?\n\n~~~\n\n23. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n\n@@ -600,6 +625,28 @@ logicalrep_worker_attach(int slot)\n static void\n logicalrep_worker_detach(void)\n {\n+ /*\n+ * If we are the main apply worker, stop all the sub apply workers we\n+ * started before.\n+ */\n+ if (!MyLogicalRepWorker->subworker)\n+ {\n+ List *workers;\n+ ListCell *lc;\n+\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+\n+ workers = logicalrep_workers_find(MyLogicalRepWorker->subid, true);\n+ foreach(lc, workers)\n+ {\n+ LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n+ if (w->subworker)\n+ stop_worker(w);\n+ }\n+\n+ LWLockRelease(LogicalRepWorkerLock);\n\nCan this have the same double-unlock problem as I described in the\nprevious review comment #22?\n\n~~~\n\n24. src/backend/replication/logical/launcher.c - ApplyLauncherMain\n\n@@ -869,7 +917,7 @@ ApplyLauncherMain(Datum main_arg)\n wait_time = wal_retrieve_retry_interval;\n\n logicalrep_worker_launch(sub->dbid, sub->oid, sub->name,\n- sub->owner, InvalidOid);\n+ sub->owner, InvalidOid, DSM_HANDLE_INVALID);\n }\nNow that the logicalrep_worker_launch is retuning a bool, should this\ncall be checking the return value and taking appropriate action if it\nfailed?\n\n======\n\n25. src/backend/replication/logical/origin.c - acquire comment\n\n+ /*\n+ * We allow the apply worker to get the slot which is acquired by its\n+ * leader process.\n+ */\n+ else if (curstate->acquired_by != 0 && acquire)\n\nThe comment was not very clear to me. 
Does the term \"apply worker\" in\nthe comment make sense, or should that say \"bgworker\"? This might be\nanother example of my review comment #1.\n\n~~~\n\n26. src/backend/replication/logical/origin.c - acquire code\n\n+ /*\n+ * We allow the apply worker to get the slot which is acquired by its\n+ * leader process.\n+ */\n+ else if (curstate->acquired_by != 0 && acquire)\n {\n ereport(ERROR,\n\nI somehow felt that this param would be better called 'skip_acquire',\nso all the callers would have to use the opposite boolean and then\nthis code would say like below (which seemed easier to me). YMMV.\n\nelse if (curstate->acquired_by != 0 && !skip_acquire)\n {\n ereport(ERROR,\n\n=====\n\n27. src/backend/replication/logical/tablesync.c\n\n@@ -568,7 +568,8 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)\n MySubscription->oid,\n MySubscription->name,\n MyLogicalRepWorker->userid,\n- rstate->relid);\n+ rstate->relid,\n+ DSM_HANDLE_INVALID);\n hentry->last_start_time = now;\nNow that the logicalrep_worker_launch is returning a bool, should this\ncall be checking that the launch was successful before it changes the\nlast_start_time?\n\n======\n\n28. src/backend/replication/logical/worker.c - file comment\n\n+ * 1) Separate background workers\n+ *\n+ * Assign a new bgworker (if available) as soon as the xact's first stream came\n+ * and the main apply worker will send changes to this new worker via shared\n+ * memory. We keep this worker assigned till the transaction commit came and\n+ * also wait for the worker to finish at commit. This preserves commit ordering\n+ * and avoid writing to and reading from file in most cases. We still need to\n+ * spill if there is no worker available. 
We also need to allow stream_stop to\n+ * complete by the background worker to finish it to avoid deadlocks because\n+ * T-1's current stream of changes can update rows in conflicting order with\n+ * T-2's next stream of changes.\n\nThis comment fragment looks the same as the commit message so the\ntypos/wording reported already for the commit message are applicable\nhere too.\n\n~~~\n\n29. src/backend/replication/logical/worker.c - file comment\n\n+ * If no worker is available to handle streamed transaction, we write the data\n * to temporary files and then applied at once when the final commit arrives.\n\nwording: \"we write the data\" -> \"the data is written\"\n\n~~~\n\n30. src/backend/replication/logical/worker.c - ParallelState\n\n+typedef struct ParallelState\n\nAdd to typedefs.list\n\n~~~\n\n31. src/backend/replication/logical/worker.c - ParallelState flags\n\n+typedef struct ParallelState\n+{\n+ slock_t mutex;\n+ bool attached;\n+ bool ready;\n+ bool finished;\n+ bool failed;\n+ Oid subid;\n+ TransactionId stream_xid;\n+ uint32 n;\n+} ParallelState;\n\nThose bool states look independent to me. Should they be one enum\nmember instead of lots of bool members?\n\n~~~\n\n32. src/backend/replication/logical/worker.c - ParallelState comments\n\n+typedef struct ParallelState\n+{\n+ slock_t mutex;\n+ bool attached;\n+ bool ready;\n+ bool finished;\n+ bool failed;\n+ Oid subid;\n+ TransactionId stream_xid;\n+ uint32 n;\n+} ParallelState;\n\nNeeds some comments. Some might be self-evident but some are not -\ne.g. what is 'n'?\n\n~~~\n\n33. src/backend/replication/logical/worker.c - WorkerState\n\n+typedef struct WorkerState\n\nAdd to typedefs.list\n\n~~~\n\n34. src/backend/replication/logical/worker.c - WorkerEntry\n\n+typedef struct WorkerEntry\n\nAdd to typedefs.list\n\n~~~\n\n35. 
src/backend/replication/logical/worker.c - static function names\n\n+/* Worker setup and interactions */\n+static void setup_dsm(WorkerState *wstate);\n+static WorkerState *setup_background_worker(void);\n+static void wait_for_worker_ready(WorkerState *wstate, bool notify);\n+static void wait_for_transaction_finish(WorkerState *wstate);\n+static void send_data_to_worker(WorkerState *wstate, Size nbytes,\n+ const void *data);\n+static WorkerState *find_or_start_worker(TransactionId xid, bool start);\n+static void free_stream_apply_worker(void);\n+static bool transaction_applied_in_bgworker(TransactionId xid);\n+static void check_workers_status(void);\n\nAll these new functions have random-looking names. Since they all are\nnew to this feature I thought they should all be named similarly...\n\ne.g. something like\nbgworker_setup\nbgworker_check_status\nbgworker_wait_for_ready\netc.\n\n~~~\n\n36. src/backend/replication/logical/worker.c - nchanges\n\n+\n+static uint32 nchanges = 0;\n+\n\nWhat is this? Needs a comment.\n\n~~~\n\n37. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n static bool\n handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n {\n- TransactionId xid;\n+ TransactionId current_xid = InvalidTransactionId;\n\n /* not in streaming mode */\n- if (!in_streamed_transaction)\n+ if (!in_streamed_transaction && !isLogicalApplyWorker)\n return false;\nIs it correct to be testing the isLogicalApplyWorker here?\n\ne.g. What if the streaming code is not using bgworkers at all?\n\nAt least maybe that comment (/* not in streaming mode */) should be updated?\n\n~~~\n\n38. 
src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ if (current_xid != stream_xid &&\n+ !list_member_int(subxactlist, (int) current_xid))\n+ {\n+ MemoryContext oldctx;\n+ char *spname = (char *) palloc(64 * sizeof(char));\n+ sprintf(spname, \"savepoint_for_xid_%u\", current_xid);\n\nCan't the name just be a char[64] on the stack?\n\n~~~\n\n39. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ /*\n+ * XXX The publisher side don't always send relation update message\n+ * after the streaming transaction, so update the relation in main\n+ * worker here.\n+ */\n\ntypo: \"don't\" -> \"doesn't\" ?\n\n~~~\n\n40. src/backend/replication/logical/worker.c - apply_handle_commit_prepared\n\n@@ -976,30 +1116,51 @@ apply_handle_commit_prepared(StringInfo s)\n char gid[GIDSIZE];\n\n logicalrep_read_commit_prepared(s, &prepare_data);\n+\n set_apply_error_context_xact(prepare_data.xid, prepare_data.commit_lsn);\n\nSpurious whitespace?\n\n~~~\n\n41. src/backend/replication/logical/worker.c - apply_handle_commit_prepared\n\n+ /* Check if we have prepared transaction in another bgworker */\n+ if (transaction_applied_in_bgworker(prepare_data.xid))\n+ {\n+ elog(DEBUG1, \"received commit for streamed transaction %u\", prepare_data.xid);\n\n- /* There is no transaction when COMMIT PREPARED is called */\n- begin_replication_step();\n+ /* Send commit message */\n+ send_data_to_worker(stream_apply_worker, s->len, s->data);\n\nIt seems a bit complex/tricky that the code is always relying on all\nthe side-effects that the global stream_apply_worker will be set.\n\nI am not sure if it is possible to remove the global and untangle\neverything. E.g. 
Why not change the transaction_applied_in_bgworker to\nreturn the bgworker (instead of return bool) and then can assign it to\na local var in this function.\n\nOr can’t you do HTAB lookup in a few more places instead of carrying\naround the knowledge of some global var that was initialized in some\nother place?\n\nIt would be easier if you can eliminate having to be aware of\nside-effects happening behind the scenes.\n\n~~~\n\n42. src/backend/replication/logical/worker.c - apply_handle_rollback_prepared\n\n@@ -1019,35 +1180,51 @@ apply_handle_rollback_prepared(StringInfo s)\n char gid[GIDSIZE];\n\n logicalrep_read_rollback_prepared(s, &rollback_data);\n+\n set_apply_error_context_xact(rollback_data.xid,\nrollback_data.rollback_end_lsn);\n\nSpurious whitespace?\n\n~~~\n\n43. src/backend/replication/logical/worker.c - apply_handle_rollback_prepared\n\n+ /* Check if we are processing the prepared transaction in a bgworker */\n+ if (transaction_applied_in_bgworker(rollback_data.xid))\n+ {\n+ send_data_to_worker(stream_apply_worker, s->len, s->data);\n\nSame as previous comment #41. Relies on the side effect of something\nsetting the global stream_apply_worker.\n\n~~~\n\n44. src/backend/replication/logical/worker.c - find_or_start_worker\n\n+ /*\n+ * For streaming transactions that is being applied in bgworker, we cannot\n+ * decide whether to apply the change for a relation that is not in the\n+ * READY state (see should_apply_changes_for_rel) as we won't know\n+ * remote_final_lsn by that time. So, we don't start new bgworker in this\n+ * case.\n+ */\n\ntypo: \"that is\" -> \"that are\"\n\n~~~\n\n45. 
src/backend/replication/logical/worker.c - find_or_start_worker

+ if (MySubscription->stream != SUBSTREAM_APPLY)
+ return NULL;
...
+ else if (start && !XLogRecPtrIsInvalid(MySubscription->skiplsn))
+ return NULL;
...
+ else if (start && !AllTablesyncsReady())
+ return NULL;
+ else if (!start && ApplyWorkersHash == NULL)
+ return NULL;

I am not sure but I think most of that rejection if/else can probably
just be "if" (not "else if") because otherwise, the code would have
returned anyhow, right? Removing all the "else" might make the code
more readable.

~~~

46. src/backend/replication/logical/worker.c - find_or_start_worker

+ if (wstate == NULL)
+ {
+ /*
+ * If there is no more worker can be launched here, remove the
+ * entry in hash table.
+ */
+ hash_search(ApplyWorkersHash, &xid, HASH_REMOVE, &found);
+ return NULL;
+ }

wording: "If there is no more worker can be launched here, remove" ->
"If the bgworker cannot be launched, remove..."

~~~

47. src/backend/replication/logical/worker.c - free_stream_apply_worker

+/*
+ * Add the worker to the freelist and remove the entry from hash table.
+ */
+static void
+free_stream_apply_worker(void)

IMO it might be better to pass the bgworker here instead of silently
working with the global stream_apply_worker.

~~~

48. src/backend/replication/logical/worker.c - free_stream_apply_worker

+ elog(LOG, "adding finished apply worker #%u for xid %u to the idle list",
+ stream_apply_worker->pstate->n, stream_apply_worker->pstate->stream_xid);

Should there be an Assert here to check the bgworker state really was FINISHED?

~~~

49. src/backend/replication/logical/worker.c - serialize_stream_prepare

+static void
+serialize_stream_prepare(LogicalRepPreparedTxnData *prepare_data)

Missing function comment.

~~~

50. 
src/backend/replication/logical/worker.c - serialize_stream_start\n\n-/*\n- * Handle STREAM START message.\n- */\n static void\n-apply_handle_stream_start(StringInfo s)\n+serialize_stream_start(bool first_segment)\n\nMissing function comment.\n\n~~~\n\n51. src/backend/replication/logical/worker.c - serialize_stream_stop\n\n+static void\n+serialize_stream_stop()\n+{\n\nMissing function comment.\n\n~~~\n\n52. src/backend/replication/logical/worker.c - general serialize_XXXX\n\nI can see now that you have created many serialize_XXX functions which\nseem to only be called one time. It looks like the only purpose is to\nencapsulate the code to make the handler function shorter? But it\nseems a bit uneven that you did this only for the serialize cases. If\nyou really want these separate functions then perhaps there ought to\nalso be the equivalent bgworker functions too. There seem to be always\n3 scenarios:\n\ni.e\n1. Worker is the bgworker\n2. Worker is Main Apply but a bgworker exists\n3. Worker is Main apply and bgworker does not exist.\n\nPerhaps every handler function should have THREE other little\nfunctions that it calls appropriately?\n\n~~~\n\n53. src/backend/replication/logical/worker.c - serialize_stream_abort\n\n+\n+static void\n+serialize_stream_abort(TransactionId xid, TransactionId subxid)\n+{\n\nMissing function comment.\n\n~~~\n\n54. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ if (isLogicalApplyWorker)\n+ {\n+ ereport(LOG,\n+ (errcode_for_file_access(),\n+ errmsg(\"[Apply BGW #%u] aborting current transaction xid=%u, subxid=%u\",\n+ MyParallelState->n, GetCurrentTransactionIdIfAny(),\nGetCurrentSubTransactionId())));\n\nWhy is the errcode using errcode_for_file_access? (2x)\n\n~~~\n\n55. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /*\n+ * OK, so it's a subxact. 
Rollback to the savepoint.\n+ *\n+ * We also need to read the subxactlist, determine the offset\n+ * tracked for the subxact, and truncate the list.\n+ */\n+ int i;\n+ bool found = false;\n+ char *spname = (char *) palloc(64 * sizeof(char));\n\nCan that just be char[64] on the stack?\n\n~~~\n\n56. src/backend/replication/logical/worker.c - apply_dispatch\n\n@@ -2511,6 +3061,7 @@ apply_dispatch(StringInfo s)\n break;\n\n case LOGICAL_REP_MSG_STREAM_START:\n+ elog(LOG, \"LOGICAL_REP_MSG_STREAM_START\");\n apply_handle_stream_start(s);\n break;\n\nI guess this is just for debugging purposes so you should put some\nFIXME comment here as a reminder to get rid of it later?\n\n~~~\n\n57. src/backend/replication/logical/worker.c - store_flush_position,\nisLogicalApplyWorker\n\n@@ -2618,6 +3169,10 @@ store_flush_position(XLogRecPtr remote_lsn)\n {\n FlushPosition *flushpos;\n\n+ /* We only need to collect the LSN in main apply worker */\n+ if (isLogicalApplyWorker)\n+ return;\n+\n\nThis comment is not specific to this function, but for global\nisLogicalApplyWorker IMO this should be implemented to look more like\nthe inline function am_tablesync_worker().\n\ne.g. I think you should replace this global with something like\nam_apply_bgworker()\n\nMaybe it should do something like check the value of\nMyLogicalRepWorker->subworker?\n\n~~~\n\n58. src/backend/replication/logical/worker.c - LogicalRepApplyLoop\n\n@@ -3467,6 +4025,7 @@ TwoPhaseTransactionGid(Oid subid, TransactionId\nxid, char *gid, int szgid)\n snprintf(gid, szgid, \"pg_gid_%u_%u\", subid, xid);\n }\n\n+\n /*\n * Execute the initial sync with error handling. Disable the subscription,\n * if it's required.\n\nSpurious whitespace\n\n~~~\n\n59. 
src/backend/replication/logical/worker.c - ApplyWorkerMain\n\n@@ -3733,7 +4292,7 @@ ApplyWorkerMain(Datum main_arg)\n\n options.proto.logical.publication_names = MySubscription->publications;\n options.proto.logical.binary = MySubscription->binary;\n- options.proto.logical.streaming = MySubscription->stream;\n+ options.proto.logical.streaming = (MySubscription->stream != SUBSTREAM_OFF);\n options.proto.logical.twophase = false;\n\nI was not sure why this is converting from an enum to a boolean? Is it right?\n\n~~~\n\n60. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n+ shmq_res = shm_mq_receive(mqh, &len, &data, false);\n+\n+ if (shmq_res != SHM_MQ_SUCCESS)\n+ break;\n\nShould this log some more error information here?\n\n~~~\n\n61. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n+ if (len == 0)\n+ {\n+ elog(LOG, \"[Apply BGW #%u] got zero-length message, stopping\", pst->n);\n+ break;\n+ }\n+ else\n+ {\n+ XLogRecPtr start_lsn;\n+ XLogRecPtr end_lsn;\n+ TimestampTz send_time;\n\nMaybe the \"else\" is not needed here, and if you remove it then it will\nget rid of all the unnecessary indentation.\n\n~~~\n\n62. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n+ /*\n+ * We use first byte of message for additional communication between\n+ * main Logical replication worker and Apply BGWorkers, so if it\n+ * differs from 'w', then process it first.\n+ */\n\n\nI was thinking maybe this switch should include\n\ncase 'w':\nbreak;\nbecause then for the \"default\" case you should give ERROR because\nsomething unexpected arrived.\n\n~~~\n\n63. src/backend/replication/logical/worker.c - ApplyBgwShutdown\n\n+static void\n+ApplyBgwShutdown(int code, Datum arg)\n+{\n+ SpinLockAcquire(&MyParallelState->mutex);\n+ MyParallelState->failed = true;\n+ SpinLockRelease(&MyParallelState->mutex);\n+\n+ dsm_detach((dsm_segment *) DatumGetPointer(arg));\n+}\n\nShould this do detach first and set the flag last?\n\n~~~\n\n64. 
src/backend/replication/logical/worker.c - LogicalApplyBgwMain\n\n+ /*\n+ * Acquire a worker number.\n+ *\n+ * By convention, the process registering this background worker should\n+ * have stored the control structure at key 0. We look up that key to\n+ * find it. Our worker number gives our identity: there may be just one\n+ * worker involved in this parallel operation, or there may be many.\n+ */\n\nMaybe there should be another elog closer to this comment? So as soon\nas you know the BGW number log something?\n\ne.g.\nelog(LOG, \"[Apply BGW #%u] starting\", pst->n);\n\n~~~\n\n65. src/backend/replication/logical/worker.c - setup_background_worker\n\n+/*\n+ * Register background workers.\n+ */\n+static WorkerState *\n+setup_background_worker(void)\n\nI think that comment needs some more info because it is doing more\nthan just registering... it is successfully launching the worker\nfirst.\n\n~~~\n\n66. src/backend/replication/logical/worker.c - setup_background_worker\n\n+ if (launched)\n+ {\n+ /* Wait for worker to become ready. */\n+ wait_for_worker_ready(wstate, false);\n+\n+ ApplyWorkersList = lappend(ApplyWorkersList, wstate);\n+ nworkers += 1;\n+ }\n\nDo you really need to carry around this global 'nworkers' variable?\nCan’t you just check the length of the ApplyWorkerList to get this\nnumber?\n\n~~~\n\n67. src/backend/replication/logical/worker.c - send_data_to_worker\n\n+/*\n+ * Send the data to worker via shared-memory queue.\n+ */\n+static void\n+send_data_to_worker(WorkerState *wstate, Size nbytes, const void *data)\n\nwording: \"to worker\" -> \"to the specified apply bgworker\"\n\nThis is just another example of my comment #1.\n\n~~~\n\n68. src/backend/replication/logical/worker.c - send_data_to_worker\n\n+ if (result != SHM_MQ_SUCCESS)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"could not send tuple to shared-memory queue\")));\n+}\n\ntypo: is \"tuples\" the right word here?\n\n~~~\n\n69. 
src/backend/replication/logical/worker.c - wait_for_worker_ready\n\n+\n+static void\n+wait_for_worker_ready(WorkerState *wstate, bool notify)\n+{\n\nMissing function comment.\n\n~~~\n\n70. src/backend/replication/logical/worker.c - wait_for_worker_ready\n\n+\n+static void\n+wait_for_worker_ready(WorkerState *wstate, bool notify)\n+{\n\n'notify' seems a bit of a poor name here. And this param seems a bit\nof a strange side-effect for something called wait_for_worker_ready.\nIf really need to do this way maybe name it something more verbose\nlike 'notify_received_stream_stop'?\n\n~~~\n\n71. src/backend/replication/logical/worker.c - wait_for_worker_ready\n\n+ if (!result)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_RESOURCES),\n+ errmsg(\"one or more background workers failed to start\")));\n\nIs the ERROR code reachable? IIUC there is no escape from the previous\nfor (;;) loop except when the result is set to true.\n\n~~~\n\n72. src/backend/replication/logical/worker.c - wait_for_transaction_finish\n\n+\n+static void\n+wait_for_transaction_finish(WorkerState *wstate)\n+{\n\nMissing function comment.\n\n~~~\n\n73. src/backend/replication/logical/worker.c - wait_for_transaction_finish\n\n+ if (finished)\n+ {\n+ break;\n+ }\n\nThe brackets are not needed for 1 statement.\n\n~~~\n\n74. src/backend/replication/logical/worker.c - transaction_applied_in_bgworker\n\n+static bool\n+transaction_applied_in_bgworker(TransactionId xid)\n\nInstead of side-effect assigning the global variable, why not return\nthe bgworker (or NULL) and let the caller work with the result?\n\n~~~\n\n75. src/backend/replication/logical/worker.c - check_workers_status\n\n+/*\n+ * Check the status of workers and report an error if any bgworker exit\n+ * unexpectedly.\n\nwording: -> \"... if any bgworker has exited unexpectedly ...\"\n\n~~~\n\n76. 
src/backend/replication/logical/worker.c - check_workers_status

+ ereport(ERROR,
+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("Background worker %u exited unexpectedly",
+ wstate->pstate->n)));

Should that message also give more identifying info about the
*current* worker doing the ERROR - e.g. the one which found that the
other bgworker had failed? Or is that just the PID in the log message
good enough?

~~~

77. src/backend/replication/logical/worker.c - check_workers_status

+ if (!AllTablesyncsReady() && nfreeworkers != list_length(ApplyWorkersList))
+ {

I did not really understand this code, but isn't there a possibility
that it will cause many restarts if the tablesyncs are taking a long
time to complete?

======

78. src/include/catalog/pg_subscription.

@@ -122,6 +122,18 @@ typedef struct Subscription
 List *publications; /* List of publication names to subscribe to */
 } Subscription;

+/* Disallow streaming in-progress transactions */
+#define SUBSTREAM_OFF 'f'
+
+/*
+ * Streaming transactions are written to a temporary file and applied only
+ * after the transaction is committed on upstream.
+ */
+#define SUBSTREAM_SPOOL 's'
+
+/* Streaming transactions are appied immediately via a background worker */
+#define SUBSTREAM_APPLY 'a'

IIRC Vignesh had a similar options requirement for his "infinite
recursion" patch [1], except he was using enums instead of #define for
char. Maybe discuss with Vignesh (and either he should change or you
should change) so there is a consistent code style for the options.

======

79. 
src/include/replication/logicalproto.h - old extern\n\n@@ -243,8 +243,10 @@ extern TransactionId\nlogicalrep_read_stream_start(StringInfo in,\n extern void logicalrep_write_stream_stop(StringInfo out);\n extern void logicalrep_write_stream_commit(StringInfo out,\nReorderBufferTXN *txn,\n XLogRecPtr commit_lsn);\n-extern TransactionId logicalrep_read_stream_commit(StringInfo out,\n+extern TransactionId logicalrep_read_stream_commit_old(StringInfo out,\n LogicalRepCommitData *commit_data);\n\nIs anybody still using this \"old\" function? Maybe I missed it.\n\n======\n\n80. src/include/replication/logicalworker.h\n\n@@ -13,6 +13,7 @@\n #define LOGICALWORKER_H\n\n extern void ApplyWorkerMain(Datum main_arg);\n+extern void LogicalApplyBgwMain(Datum main_arg);\n\nThe new name seems inconsistent with the old one. What about calling\nit ApplyBgworkerMain?\n\n======\n\n81. src/test/regress/expected/subscription.out\n\nIsn't this missing some test cases for the new options added? E.g. I\nnever see streaming value is set to 's'.\n\n======\n\n82. src/test/subscription/t/029_on_error.pl\n\nIf options values were changed how I suggested (review comment #14)\nthen I think a change such as this would not be necessary because\neverything would be backward compatible.\n\n\n------\n[1] https://www.postgresql.org/message-id/CALDaNm2Fe%3Dg4Tx-DhzwD6NU0VRAfaPedXwWO01maNU7_OfS8fw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 22 Apr 2022 14:12:17 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, April 22, 2022 12:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Hello Hou-san. Here are my review comments for v4-0001. 
Sorry, there
> are so many of them (it is a big patch); some are trivial, and others
> you might easily dismiss due to my misunderstanding of the code. But
> hopefully, there are at least some comments that can be helpful in
> improving the patch quality.

Thanks for the comments!
I think most of the comments make sense and here are explanations for
some of them.

> 24. src/backend/replication/logical/launcher.c - ApplyLauncherMain
> 
> @@ -869,7 +917,7 @@ ApplyLauncherMain(Datum main_arg)
> wait_time = wal_retrieve_retry_interval;
> 
> logicalrep_worker_launch(sub->dbid, sub->oid, sub->name,
> - sub->owner, InvalidOid);
> + sub->owner, InvalidOid, DSM_HANDLE_INVALID);
> }
> Now that the logicalrep_worker_launch is returning a bool, should this
> call be checking the return value and taking appropriate action if it
> failed?

Not sure we can change the logic of the existing caller. I think only the new
caller in the patch needs to check this.


> 26. src/backend/replication/logical/origin.c - acquire code
> 
> + /*
> + * We allow the apply worker to get the slot which is acquired by its
> + * leader process.
> + */
> + else if (curstate->acquired_by != 0 && acquire)
> {
> ereport(ERROR,
> 
> I somehow felt that this param would be better called 'skip_acquire',
> so all the callers would have to use the opposite boolean and then
> this code would say like below (which seemed easier to me). YMMV.
> 
> else if (curstate->acquired_by != 0 && !skip_acquire)
> {
> ereport(ERROR,

Not sure about this.


> 59. 
src/backend/replication/logical/worker.c - ApplyWorkerMain
> 
> @@ -3733,7 +4292,7 @@ ApplyWorkerMain(Datum main_arg)
> 
> options.proto.logical.publication_names = MySubscription->publications;
> options.proto.logical.binary = MySubscription->binary;
> - options.proto.logical.streaming = MySubscription->stream;
> + options.proto.logical.streaming = (MySubscription->stream != SUBSTREAM_OFF);
> options.proto.logical.twophase = false;
>
> I was not sure why this is converting from an enum to a boolean? Is it right?

I think it's ok: "logical.streaming" is used by the publisher, which doesn't need
to know the exact type of streaming (it only needs to know whether
streaming is enabled, for now).


> 63. src/backend/replication/logical/worker.c - ApplyBgwShutdown
> 
> +static void
> +ApplyBgwShutdown(int code, Datum arg)
> +{
> + SpinLockAcquire(&MyParallelState->mutex);
> + MyParallelState->failed = true;
> + SpinLockRelease(&MyParallelState->mutex);
> +
> + dsm_detach((dsm_segment *) DatumGetPointer(arg));
> +}
> 
> Should this do detach first and set the flag last?

Not sure about this. I think it's fine to detach this at the end.

> 76. src/backend/replication/logical/worker.c - check_workers_status
> 
> + ereport(ERROR,
> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("Background worker %u exited unexpectedly",
> + wstate->pstate->n)));
> 
> Should that message also give more identifying info about the
> *current* worker doing the ERROR - e.g. the one which found that the
> other bgworker had failed? Or is that just the PID in the log message
> good enough?

Currently, only the main apply worker should report this error, so I'm not sure
we need to report the current worker.

> 77. 
src/backend/replication/logical/worker.c - check_workers_status\r\n> \r\n> + if (!AllTablesyncsReady() && nfreeworkers != list_length(ApplyWorkersList))\r\n> + {\r\n> \r\n> I did not really understand this code, but isn't there a possibility\r\n> that it will cause many restarts if the tablesyncs are taking a long\r\n> time to complete?\r\n\r\nI think it's ok, after restarting, we won't start bgworker until all the table\r\nis READY.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n\r\n\r\n", "msg_date": "Mon, 25 Apr 2022 08:35:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 8, 2022 at 2:44 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > In this email, I would like to discuss allowing streaming logical\n> > transactions (large in-progress transactions) by background workers\n> > and parallel apply in general. The goal of this work is to improve the\n> > performance of the apply work in logical replication.\n> >\n> > Currently, for large transactions, the publisher sends the data in\n> > multiple streams (changes divided into chunks depending upon\n> > logical_decoding_work_mem), and then on the subscriber-side, the apply\n> > worker writes the changes into temporary files and once it receives\n> > the commit, it read from the file and apply the entire transaction. To\n> > improve the performance of such transactions, we can instead allow\n> > them to be applied via background workers. There could be multiple\n> > ways to achieve this:\n> >\n> > Approach-1: Assign a new bgworker (if available) as soon as the xact's\n> > first stream came and the main apply worker will send changes to this\n> > new worker via shared memory. 
We keep this worker assigned till the\n> > transaction commit came and also wait for the worker to finish at\n> > commit. This preserves commit ordering and avoid writing to and\n> > reading from file in most cases. We still need to spill if there is no\n> > worker available. We also need to allow stream_stop to complete by the\n> > background worker to finish it to avoid deadlocks because T-1's\n> > current stream of changes can update rows in conflicting order with\n> > T-2's next stream of changes.\n> >\n>\n> Attach the POC patch for the Approach-1 of \"Perform streaming logical\n> transactions by background workers\". The patch is still a WIP patch as\n> there are serval TODO items left, including:\n>\n> * error handling for bgworker\n> * support for SKIP the transaction in bgworker\n> * handle the case when there is no more worker available\n> (might need spill the data to the temp file in this case)\n> * some potential bugs\n>\n> The original patch is borrowed from an old thread[1] and was rebased and\n> extended/cleaned by me. Comments and suggestions are welcome.\n>\n> [1] https://www.postgresql.org/message-id/8eda5118-2dd0-79a1-4fe9-eec7e334de17%40postgrespro.ru\n>\n> Here are some performance results of the patch shared by Shi Yu off-list.\n>\n> The performance was tested by varying\n> logical_decoding_work_mem, which include two cases:\n>\n> 1) bulk insert.\n> 2) create savepoint and rollback to savepoint.\n>\n> I used synchronous logical replication in the test, compared SQL execution\n> times before and after applying the patch.\n>\n> The results are as follows. 
The bar charts and the details of the test are
> Attached as well.
>
> RESULT - bulk insert (5kk)
> ----------------------------------
> logical_decoding_work_mem   64kB    128kB   256kB   512kB   1MB     2MB     4MB     8MB     16MB    32MB    64MB
> HEAD                        51.673  51.199  51.166  50.259  52.898  50.651  51.156  51.210  50.678  51.256  51.138
> patched                     36.198  35.123  34.223  29.198  28.712  29.090  29.709  29.408  34.367  34.716  35.439
>
> RESULT - rollback to savepoint (600k)
> ----------------------------------
> logical_decoding_work_mem   64kB    128kB   256kB   512kB   1MB     2MB     4MB     8MB     16MB    32MB    64MB
> HEAD                        31.101  31.087  30.931  31.015  30.920  31.109  30.863  31.008  30.875  30.775  29.903
> patched                     28.115  28.487  27.804  28.175  27.734  29.047  28.279  27.909  28.277  27.345  28.375
>
>
> Summary:
> 1) bulk insert
>
> For different logical_decoding_work_mem size, it takes about 30% ~ 45% less
> time, which looks good to me. After applying this patch, it seems that the
> performance is better when logical_decoding_work_mem is between 512kB and 8MB.
>
> 2) rollback to savepoint
>
> There is an improvement of about 5% ~ 10% after applying this patch.
>
> In this case, the patch spend less time handling the part that is not
> rolled back, because it saves the time writing the changes into a temporary file
> and reading the file. And for the part that is rolled back, it would spend more
> time than HEAD, because it takes more time to write to filesystem and rollback
> than writing a temporary file and truncating the file. 

Overall, the results looks\n> good.\n\nOne comment on the design:\nWe should have a strategy to release the workers which have completed\napplying the transactions, else even though there are some idle\nworkers for one of the subscriptions, it cannot be used by other\nsubscriptions.\nLike in the following case:\nLet's say max_logical_replication_workers is set to 10, if\nsubscription sub_1 uses all the 10 workers to apply the transactions\nand all the 10 workers have finished applying the transactions and\nthen subscription sub_2 requests some workers for applying\ntransactions, subscription sub_2 will not get any workers.\nMaybe if the workers have completed applying the transactions,\nsubscription sub_2 should be able to get these workers in this case.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 26 Apr 2022 12:48:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, April 25, 2022 4:35 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> On Friday, April 22, 2022 12:12 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > Hello Hou-san. Here are my review comments for v4-0001. Sorry, there\r\n> > are so many of them (it is a big patch); some are trivial, and others\r\n> > you might easily dismiss due to my misunderstanding of the code. But\r\n> > hopefully, there are at least some comments that can be helpful in\r\n> > improving the patch quality.\r\n> \r\n> Thanks for the comments !\r\n> I think most of the comments make sense and here are explanations for some\r\n> of them.\r\n\r\nHi,\r\n\r\nI addressed the rest of Peter's comments and here is a new version patch.\r\n\r\nThe naming of the newly introduced option and worker might\r\nneed more thought, so I haven't change all of them. I will think over\r\nand change it later.\r\n\r\nOne comment I didn't address:\r\n> 3. 
General comment - bool option change to enum\r\n> \r\n> This option change for \"streaming\" is similar to the options change\r\n> for \"copy_data=force\" that Vignesh is doing for his \"infinite\r\n> recursion\" patch v9-0002 [1]. Yet they seem implemented differently\r\n> (i.e. char versus enum). I think you should discuss the 2 approaches\r\n> with Vignesh and then code these option changes in a consistent way.\r\n> \r\n> [1] https://www.postgresql.org/message-id/CALDaNm2Fe%3Dg4Tx-DhzwD6NU0VRAfaPedXwWO01maNU7_OfS8fw%40mail.gmail.> com\r\n\r\nI think the \"streaming\" option is a bit different from the \"copy_data\" option.\r\nBecause the \"streaming\" is a column of the system table (pg_subscription) which\r\nshould use \"char\" type to represent different values in this case(For example:\r\npg_class.relkind/pg_class.relpersistence/pg_class.relreplident ...).\r\n\r\nAnd the \"copy_data\" option is not a system table column and I think it's fine\r\nto use Enum for it.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 29 Apr 2022 02:06:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 29, 2022 10:07 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> I addressed the rest of Peter's comments and here is a new version patch.\r\n> \r\n\r\nThanks for your patch.\r\n\r\nThe patch modified streaming option in logical replication, it can be set to\r\n'on', 'off' and 'apply'. The new option 'apply' haven't been tested in the tap test.\r\nAttach a patch which modified the subscription tap test to cover both 'on' and\r\n'apply' option. 
(The main patch is also attached to make cfbot happy.)\r\n\r\nBesides, I noticed that for two-phase commit transactions, if the transaction is\r\nprepared by a background worker, the background worker would be asked to handle\r\nthe message about commit/rollback this transaction. Is it possible that the\r\nmessages about commit/rollback prepared transaction are handled by apply worker\r\ndirectly?\r\n\r\nRegards,\r\nShi yu", "msg_date": "Fri, 29 Apr 2022 05:22:41 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 8, 2022 at 6:14 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > In this email, I would like to discuss allowing streaming logical\n> > transactions (large in-progress transactions) by background workers\n> > and parallel apply in general. The goal of this work is to improve the\n> > performance of the apply work in logical replication.\n> >\n> > Currently, for large transactions, the publisher sends the data in\n> > multiple streams (changes divided into chunks depending upon\n> > logical_decoding_work_mem), and then on the subscriber-side, the apply\n> > worker writes the changes into temporary files and once it receives\n> > the commit, it read from the file and apply the entire transaction. To\n> > improve the performance of such transactions, we can instead allow\n> > them to be applied via background workers. There could be multiple\n> > ways to achieve this:\n> >\n> > Approach-1: Assign a new bgworker (if available) as soon as the xact's\n> > first stream came and the main apply worker will send changes to this\n> > new worker via shared memory. We keep this worker assigned till the\n> > transaction commit came and also wait for the worker to finish at\n> > commit. 
This preserves commit ordering and avoid writing to and\n> > reading from file in most cases. We still need to spill if there is no\n> > worker available. We also need to allow stream_stop to complete by the\n> > background worker to finish it to avoid deadlocks because T-1's\n> > current stream of changes can update rows in conflicting order with\n> > T-2's next stream of changes.\n> >\n>\n> Attach the POC patch for the Approach-1 of \"Perform streaming logical\n> transactions by background workers\". The patch is still a WIP patch as\n> there are serval TODO items left, including:\n>\n> * error handling for bgworker\n> * support for SKIP the transaction in bgworker\n> * handle the case when there is no more worker available\n> (might need spill the data to the temp file in this case)\n> * some potential bugs\n\nAre you planning to support \"Transaction dependency\" Amit mentioned in\nhis first mail in this patch? IIUC since the background apply worker\napplies the streamed changes as soon as receiving them from the main\napply worker, a conflict that doesn't happen in the current streaming\nlogical replication could happen.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 2 May 2022 15:16:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 8, 2022 at 6:14 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > In this email, I would like to discuss allowing streaming logical\n> > > transactions (large in-progress transactions) by background workers\n> > > and parallel apply in general. 
The goal of this work is to improve the\n> > > performance of the apply work in logical replication.\n> > >\n> > > Currently, for large transactions, the publisher sends the data in\n> > > multiple streams (changes divided into chunks depending upon\n> > > logical_decoding_work_mem), and then on the subscriber-side, the apply\n> > > worker writes the changes into temporary files and once it receives\n> > > the commit, it read from the file and apply the entire transaction. To\n> > > improve the performance of such transactions, we can instead allow\n> > > them to be applied via background workers. There could be multiple\n> > > ways to achieve this:\n> > >\n> > > Approach-1: Assign a new bgworker (if available) as soon as the xact's\n> > > first stream came and the main apply worker will send changes to this\n> > > new worker via shared memory. We keep this worker assigned till the\n> > > transaction commit came and also wait for the worker to finish at\n> > > commit. This preserves commit ordering and avoid writing to and\n> > > reading from file in most cases. We still need to spill if there is no\n> > > worker available. We also need to allow stream_stop to complete by the\n> > > background worker to finish it to avoid deadlocks because T-1's\n> > > current stream of changes can update rows in conflicting order with\n> > > T-2's next stream of changes.\n> > >\n> >\n> > Attach the POC patch for the Approach-1 of \"Perform streaming logical\n> > transactions by background workers\". The patch is still a WIP patch as\n> > there are serval TODO items left, including:\n> >\n> > * error handling for bgworker\n> > * support for SKIP the transaction in bgworker\n> > * handle the case when there is no more worker available\n> > (might need spill the data to the temp file in this case)\n> > * some potential bugs\n>\n> Are you planning to support \"Transaction dependency\" Amit mentioned in\n> his first mail in this patch? 
IIUC since the background apply worker\n> applies the streamed changes as soon as receiving them from the main\n> apply worker, a conflict that doesn't happen in the current streaming\n> logical replication could happen.\n>\n\nThis patch seems to be waiting for stream_stop to finish, so I don't\nsee how the issues related to \"Transaction dependency\" can arise? What\ntype of conflict/issues you have in mind?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 May 2022 14:39:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 8, 2022 at 6:14 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, April 6, 2022 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > In this email, I would like to discuss allowing streaming logical\n> > > > transactions (large in-progress transactions) by background workers\n> > > > and parallel apply in general. The goal of this work is to improve the\n> > > > performance of the apply work in logical replication.\n> > > >\n> > > > Currently, for large transactions, the publisher sends the data in\n> > > > multiple streams (changes divided into chunks depending upon\n> > > > logical_decoding_work_mem), and then on the subscriber-side, the apply\n> > > > worker writes the changes into temporary files and once it receives\n> > > > the commit, it read from the file and apply the entire transaction. To\n> > > > improve the performance of such transactions, we can instead allow\n> > > > them to be applied via background workers. 
There could be multiple\n> > > > ways to achieve this:\n> > > >\n> > > > Approach-1: Assign a new bgworker (if available) as soon as the xact's\n> > > > first stream came and the main apply worker will send changes to this\n> > > > new worker via shared memory. We keep this worker assigned till the\n> > > > transaction commit came and also wait for the worker to finish at\n> > > > commit. This preserves commit ordering and avoid writing to and\n> > > > reading from file in most cases. We still need to spill if there is no\n> > > > worker available. We also need to allow stream_stop to complete by the\n> > > > background worker to finish it to avoid deadlocks because T-1's\n> > > > current stream of changes can update rows in conflicting order with\n> > > > T-2's next stream of changes.\n> > > >\n> > >\n> > > Attach the POC patch for the Approach-1 of \"Perform streaming logical\n> > > transactions by background workers\". The patch is still a WIP patch as\n> > > there are serval TODO items left, including:\n> > >\n> > > * error handling for bgworker\n> > > * support for SKIP the transaction in bgworker\n> > > * handle the case when there is no more worker available\n> > > (might need spill the data to the temp file in this case)\n> > > * some potential bugs\n> >\n> > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > his first mail in this patch? IIUC since the background apply worker\n> > applies the streamed changes as soon as receiving them from the main\n> > apply worker, a conflict that doesn't happen in the current streaming\n> > logical replication could happen.\n> >\n>\n> This patch seems to be waiting for stream_stop to finish, so I don't\n> see how the issues related to \"Transaction dependency\" can arise? 
What\n> type of conflict/issues you have in mind?\n\nSuppose we set both publisher and subscriber:\n\nOn publisher:\ncreate table test (i int);\ninsert into test values (0);\ncreate publication test_pub for table test;\n\nOn subscriber:\ncreate table test (i int primary key);\ncreate subscription test_sub connection '...' publication test_pub; --\nvalue 0 is replicated via initial sync\n\nNow, both 'test' tables have value 0.\n\nAnd suppose two concurrent transactions are executed on the publisher\nin following order:\n\nTX-1:\nbegin;\ninsert into test select generate_series(0, 10000); -- changes will be streamed;\n\n TX-2:\n begin;\n delete from test where c = 0;\n commit;\n\nTX-1:\ncommit;\n\nWith the current streaming logical replication, these changes will be\napplied successfully since the deletion is applied before the\n(streamed) insertion. Whereas with the apply bgworker, it fails due to\nan unique constraint violation since the insertion is applied first.\nI've confirmed that it happens with v5 patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 2 May 2022 20:35:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > his first mail in this patch? 
IIUC since the background apply worker\n> > > applies the streamed changes as soon as receiving them from the main\n> > > apply worker, a conflict that doesn't happen in the current streaming\n> > > logical replication could happen.\n> > >\n> >\n> > This patch seems to be waiting for stream_stop to finish, so I don't\n> > see how the issues related to \"Transaction dependency\" can arise? What\n> > type of conflict/issues you have in mind?\n>\n> Suppose we set both publisher and subscriber:\n>\n> On publisher:\n> create table test (i int);\n> insert into test values (0);\n> create publication test_pub for table test;\n>\n> On subscriber:\n> create table test (i int primary key);\n> create subscription test_sub connection '...' publication test_pub; --\n> value 0 is replicated via initial sync\n>\n> Now, both 'test' tables have value 0.\n>\n> And suppose two concurrent transactions are executed on the publisher\n> in following order:\n>\n> TX-1:\n> begin;\n> insert into test select generate_series(0, 10000); -- changes will be streamed;\n>\n> TX-2:\n> begin;\n> delete from test where c = 0;\n> commit;\n>\n> TX-1:\n> commit;\n>\n> With the current streaming logical replication, these changes will be\n> applied successfully since the deletion is applied before the\n> (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> an unique constraint violation since the insertion is applied first.\n> I've confirmed that it happens with v5 patch.\n>\n\nGood point but I am not completely sure if doing transaction\ndependency tracking for such cases is really worth it. I feel for such\nconcurrent cases users can anyway now also get conflicts, it is just a\nmatter of timing. One more thing to check transaction dependency, we\nmight need to spill the data for streaming transactions in which case\nwe might lose all the benefits of doing it via a background worker. 
Do\nwe see any simple way to avoid this?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 3 May 2022 09:45:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 3, 2022 at 2:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > > his first mail in this patch? IIUC since the background apply worker\n> > > > applies the streamed changes as soon as receiving them from the main\n> > > > apply worker, a conflict that doesn't happen in the current streaming\n> > > > logical replication could happen.\n> > > >\n> > >\n> > > This patch seems to be waiting for stream_stop to finish, so I don't\n> > > see how the issues related to \"Transaction dependency\" can arise? What\n> > > type of conflict/issues you have in mind?\n> >\n> > Suppose we set both publisher and subscriber:\n> >\n> > On publisher:\n> > create table test (i int);\n> > insert into test values (0);\n> > create publication test_pub for table test;\n> >\n> > On subscriber:\n> > create table test (i int primary key);\n> > create subscription test_sub connection '...' 
publication test_pub; --\n> > value 0 is replicated via initial sync\n> >\n> > Now, both 'test' tables have value 0.\n> >\n> > And suppose two concurrent transactions are executed on the publisher\n> > in following order:\n> >\n> > TX-1:\n> > begin;\n> > insert into test select generate_series(0, 10000); -- changes will be streamed;\n> >\n> > TX-2:\n> > begin;\n> > delete from test where c = 0;\n> > commit;\n> >\n> > TX-1:\n> > commit;\n> >\n> > With the current streaming logical replication, these changes will be\n> > applied successfully since the deletion is applied before the\n> > (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> > an unique constraint violation since the insertion is applied first.\n> > I've confirmed that it happens with v5 patch.\n> >\n>\n> Good point but I am not completely sure if doing transaction\n> dependency tracking for such cases is really worth it. I feel for such\n> concurrent cases users can anyway now also get conflicts, it is just a\n> matter of timing. One more thing to check transaction dependency, we\n> might need to spill the data for streaming transactions in which case\n> we might lose all the benefits of doing it via a background worker. Do\n> we see any simple way to avoid this?\n>\n\nAvoiding unexpected differences like this is why I suggested the\noption should have to be explicitly enabled instead of being on by\ndefault as it is in the current patch. 
See my review comment #14 [1].\nIt means the user won't have to change their existing code as a\nworkaround.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 May 2022 17:16:41 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 3, 2022 at 5:16 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n...\n\n> Avoiding unexpected differences like this is why I suggested the\n> option should have to be explicitly enabled instead of being on by\n> default as it is in the current patch. See my review comment #14 [1].\n> It means the user won't have to change their existing code as a\n> workaround.\n>\n> ------\n> [1] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n>\n\nSorry I was wrong above. It seems this behaviour was already changed\nin the latest patch v5 so now the option value 'on' means what it\nalways did. Thanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 4 May 2022 09:44:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 3, 2022 at 9:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > > his first mail in this patch? 
IIUC since the background apply worker\n> > > > applies the streamed changes as soon as receiving them from the main\n> > > > apply worker, a conflict that doesn't happen in the current streaming\n> > > > logical replication could happen.\n> > > >\n> > >\n> > > This patch seems to be waiting for stream_stop to finish, so I don't\n> > > see how the issues related to \"Transaction dependency\" can arise? What\n> > > type of conflict/issues you have in mind?\n> >\n> > Suppose we set both publisher and subscriber:\n> >\n> > On publisher:\n> > create table test (i int);\n> > insert into test values (0);\n> > create publication test_pub for table test;\n> >\n> > On subscriber:\n> > create table test (i int primary key);\n> > create subscription test_sub connection '...' publication test_pub; --\n> > value 0 is replicated via initial sync\n> >\n> > Now, both 'test' tables have value 0.\n> >\n> > And suppose two concurrent transactions are executed on the publisher\n> > in following order:\n> >\n> > TX-1:\n> > begin;\n> > insert into test select generate_series(0, 10000); -- changes will be streamed;\n> >\n> > TX-2:\n> > begin;\n> > delete from test where c = 0;\n> > commit;\n> >\n> > TX-1:\n> > commit;\n> >\n> > With the current streaming logical replication, these changes will be\n> > applied successfully since the deletion is applied before the\n> > (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> > an unique constraint violation since the insertion is applied first.\n> > I've confirmed that it happens with v5 patch.\n> >\n>\n> Good point but I am not completely sure if doing transaction\n> dependency tracking for such cases is really worth it. I feel for such\n> concurrent cases users can anyway now also get conflicts, it is just a\n> matter of timing. One more thing to check transaction dependency, we\n> might need to spill the data for streaming transactions in which case\n> we might lose all the benefits of doing it via a background worker. 
Do\n> we see any simple way to avoid this?\n>\n\nI think the other kind of problem that can happen here is delete\nfollowed by an insert. If in the example provided by you, TX-1\nperforms delete (say it is large enough to cause streaming) and TX-2\nperforms insert then I think it will block the apply worker because\ninsert will start waiting infinitely. Currently, I think it will lead\nto conflict due to insert but that is still solvable by allowing users\nto remove conflicting rows.\n\nIt seems both these problems are due to the reason that the table on\npublisher and subscriber has different constraints otherwise, we would\nhave seen the same behavior on the publisher as well.\n\nThere could be a few ways to avoid these and similar problems:\na. detect the difference in constraints between publisher and\nsubscribers like primary key and probably others (like whether there\nis any volatile function present in index expression) when applying\nthe change and then we give ERROR to the user that she must change the\nstreaming mode to 'spill' instead of 'apply' (aka parallel apply).\nb. Same as (a) but instead of ERROR just LOG this information and\nchange the mode to spill for the transactions that operate on that\nparticular relation.\n\nI think we can cache this information in LogicalRepRelMapEntry.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 May 2022 09:20:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v5-0001.\n\nI will take a look at the v5-0002 (TAP) patch another time.\n\n======\n\n1. Commit message\n\nThe message still refers to \"apply background\". Should that say \"apply\nbackground worker\"?\n\nOther parts just call this the \"worker\". Personally, I think it might\nbe better to coin some new term for this thing (e.g. 
\"apply-bgworker\"\nor something like that of your choosing) so then you can just\nconcisely *always* refer to that everywhere without any ambiguity. e.g\nsame applies to every comment and every message in this patch. They\nshould all use identical terminology (e.g. \"apply-bgworker\").\n\n~~~\n\n2. Commit message\n\n\"We also need to allow stream_stop to complete by the apply background\nto finish it to...\"\n\nWording: ???\n\n~~~\n\n3. Commit message\n\nThis patch also extends the subscription streaming option so that user\ncan control whether apply the streaming transaction in a apply\nbackground or spill the change to disk.\n\nWording: \"user\" -> \"the user\"\nTypo: \"whether apply\" -> \"whether to apply\"\nTypo: \"a apply\" -> \"an apply\"\n\n~~~\n\n4. Commit message\n\nUser can set the streaming option to 'on/off', 'apply'. For now,\n'apply' means the streaming will be applied via a apply background if\navailable. 'on' means the streaming transaction will be spilled to\ndisk.\n\n\nI think \"apply\" might not be the best choice of values for this\nmeaning, but I think Hou-san already said [1] that this was being\nreconsidered.\n\n~~~\n\n5. 
doc/src/sgml/catalogs.sgml - formatting\n\n@@ -7863,11 +7863,15 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\ncount&gt;</replaceable>:<replaceable>&l\n\n <row>\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n- <structfield>substream</structfield> <type>bool</type>\n+ <structfield>substream</structfield> <type>char</type>\n </para>\n <para>\n- If true, the subscription will allow streaming of in-progress\n- transactions\n+ Controls how to handle the streaming of in-progress transactions.\n+ <literal>f</literal> = disallow streaming of in-progress transactions\n+ <literal>o</literal> = spill the changes of in-progress transactions to\n+ disk and apply at once after the transaction is committed on the\n+ publisher.\n+ <literal>a</literal> = apply changes directly using a background worker\n </para></entry>\n </row>\n\nNeeds to be consistent with other value lists on this page.\n\n5a. The first sentence to end with \":\"\n\n5b. List items to end with \",\"\n\n~~~\n\n6. doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>apply</literal> incoming\n+ changes are directly applied via one of the background worker, if\n+ available. If no background worker is free to handle streaming\n+ transaction then the changes are written to a file and applied after\n+ the transaction is committed. Note that if error happen when applying\n+ changes in background worker, it might not report the finish LSN of\n+ the remote transaction in server log.\n </para>\n\n6a. Typo: \"one of the background worker,\" -> \"one of the background workers,\"\n\n6b. Wording\nBEFORE\nNote that if error happen when applying changes in background worker,\nit might not report the finish LSN of the remote transaction in server\nlog.\nSUGGESTION\nNote that if an error happens when applying changes in a background\nworker, it might not report the finish LSN of the remote transaction\nin the server log.\n\n~~~\n\n7. 
src/backend/commands/subscriptioncmds.c - defGetStreamingMode\n\n+static char\n+defGetStreamingMode(DefElem *def)\n+{\n+ /*\n+ * If no parameter given, assume \"true\" is meant.\n+ */\n+ if (def->arg == NULL)\n+ return SUBSTREAM_ON;\n\nBut is that right? IIUC all the docs said that the default is OFF.\n\n~~~\n\n8. src/backend/commands/subscriptioncmds.c - defGetStreamingMode\n\n+ /*\n+ * The set of strings accepted here should match up with the\n+ * grammar's opt_boolean_or_string production.\n+ */\n+ if (pg_strcasecmp(sval, \"true\") == 0 ||\n+ pg_strcasecmp(sval, \"on\") == 0)\n+ return SUBSTREAM_ON;\n+ if (pg_strcasecmp(sval, \"apply\") == 0)\n+ return SUBSTREAM_APPLY;\n+ if (pg_strcasecmp(sval, \"false\") == 0 ||\n+ pg_strcasecmp(sval, \"off\") == 0)\n+ return SUBSTREAM_OFF;\n\nPerhaps should re-order these OFF/ON/APPLY to be consistent with the\nT_Integer case above here.\n\n~~~\n\n9. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\nThe \"start new apply background worker ...\" function comment feels a\nbit misleading now that seems what you are calling this new kind of\nworker. E.g. this is also called to start the sync worker. And also\nfor the apply worker (which we are not really calling a \"background\nworker\" in other places). This comment is the same as [PSv4] #19.\n\n~~~\n\n10. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n@@ -275,6 +280,9 @@ logicalrep_worker_launch(Oid dbid, Oid subid,\nconst char *subname, Oid userid,\n int nsyncworkers;\n TimestampTz now;\n\n+ /* We don't support table sync in subworker */\n+ Assert(!((subworker_dsm != DSM_HANDLE_INVALID) && OidIsValid(relid)));\n\nI think you should declare a new variable like:\nbool is_subworker = subworker_dsm != DSM_HANDLE_INVALID;\n\nThen this Assert can be simplified, and also you can re-use the\n'is_subworker' later multiple times in this same function to simplify\nlots of other code also.\n\n~~~\n\n11. 
src/backend/replication/logical/launcher.c - logicalrep_worker_stop_internal\n\n+/*\n+ * Workhorse for logicalrep_worker_stop() and logicalrep_worker_detach(). Stop\n+ * the worker and wait for wait for it to die.\n+ */\n+static void\n+logicalrep_worker_stop_internal(LogicalRepWorker *worker)\n\nTypo: \"wait for\" is repeated 2x.\n\n~~~\n\n12. src/backend/replication/logical/origin.c - replorigin_session_setup\n\n@@ -1110,7 +1110,11 @@ replorigin_session_setup(RepOriginId node)\n if (curstate->roident != node)\n continue;\n\n- else if (curstate->acquired_by != 0)\n+ /*\n+ * We allow the apply worker to get the slot which is acquired by its\n+ * leader process.\n+ */\n+ else if (curstate->acquired_by != 0 && acquire)\n\nI still feel this is overly-confusing. Shouldn't the comment say \"Allow the\napply bgworker to get the slot...\".\n\nAlso the parameter name 'acquire' is hard to reconcile with the\ncomment. E.g. I feel all this would be easier to understand if the\nparam was refactored with a name like 'bgworker' and the code was\nchanged to:\nelse if (curstate->acquired_by != 0 && !bgworker)\n\nOf course, the value true/false would need to be flipped on calls too.\nThis is the same as my previous comment [PSv4] #26.\n\n~~~\n\n13. src/backend/replication/logical/proto.c\n\n@@ -1138,14 +1138,11 @@ logicalrep_write_stream_commit(StringInfo out,\nReorderBufferTXN *txn,\n /*\n * Read STREAM COMMIT from the output stream.\n */\n-TransactionId\n+void\n logicalrep_read_stream_commit(StringInfo in, LogicalRepCommitData *commit_data)\n {\n- TransactionId xid;\n uint8 flags;\n\n- xid = pq_getmsgint(in, 4);\n-\n /* read flags (unused for now) */\n flags = pq_getmsgbyte(in);\n\nThere is something incompatible with the read/write functions here.\nThe write writes the txid before the flags, but the read_commit does\nnot read it at all – it only reads the flags (???) 
if this is really\ncorrect then I think there need to be some comments to explain WHY it\nis correct.\n\nNOTE: See also review comment 28 where I proposed another way to write\nthis code.\n\n~~~\n\n14. src/backend/replication/logical/worker.c - comment\n\nThe whole comment is similar to the commit message so any changes\nthere should be made here also.\n\n~~~\n\n15. src/backend/replication/logical/worker.c - ParallelState\n\n+/*\n+ * Shared information among apply workers.\n+ */\n+typedef struct ParallelState\n\nIt looks like there is already another typedef called \"ParallelState\"\nbecause it is already in the typedefs.list. Maybe this name should be\nchanged or maybe make it static or something?\n\n~~~\n\n16. src/backend/replication/logical/worker.c - defines\n\n+/*\n+ * States for apply background worker.\n+ */\n+#define APPLY_BGWORKER_ATTACHED 'a'\n+#define APPLY_BGWORKER_READY 'r'\n+#define APPLY_BGWORKER_BUSY 'b'\n+#define APPLY_BGWORKER_FINISHED 'f'\n+#define APPLY_BGWORKER_EXIT 'e'\n\nThose char states all look independent. So wouldn’t this be\nrepresented better as an enum to reinforce that fact?\n\n~~~\n\n17. src/backend/replication/logical/worker.c - functions\n\n+/* Worker setup and interactions */\n+static WorkerState *apply_bgworker_setup(void);\n+static WorkerState *find_or_start_apply_bgworker(TransactionId xid,\n+ bool start);\n\n\nMaybe rename to apply_bgworker_find_or_start() to match the pattern of\nthe others?\n\n~~~\n\n18. src/backend/replication/logical/worker.c - macros\n\n+#define am_apply_bgworker() (MyLogicalRepWorker->subworker)\n+#define applying_changes_in_bgworker() (in_streamed_transaction &&\nstream_apply_worker != NULL)\n\n18a. Somehow I felt these are not in the best place.\n- Maybe am_apply_bgworker() should be in worker_internal.h?\n- Maybe the applying_changes_in_bgworker() should be nearby the\nstream_apply_worker declaration\n\n18b. 
Maybe applying_changes_in_bgworker should be renamed to something\nelse to match the pattern of the others (e.g. \"apply_bgworker_active\"\nor something)\n\n~~~\n\n19. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ /*\n+ * If we decided to apply the changes of this transaction in a apply\n+ * background worker, pass the data to the worker.\n+ */\n\nTypo: \"in a apply\" -> \"in an apply\"\n\n~~~\n\n20. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ /*\n+ * XXX The publisher side doesn't always send relation update message\n+ * after the streaming transaction, so update the relation in main\n+ * apply worker here.\n+ */\n\nWording: \"doesn't always send relation update message\" -> \"doesn't\nalways send relation update messages\" ??\n\n~~~\n\n21. src/backend/replication/logical/worker.c - apply_handle_commit_prepared\n\n+ apply_bgworker_set_state(APPLY_BGWORKER_FINISHED);\n\nIt seems somewhat confusing to see calls to apply_bgworker_set_state()\nwhen we may or may not even be an apply bgworker.\n\nI know it adds more code, but I somehow feel it is more readable if\nall these calls were changed to look below. Please consider it.\n\nSUGGESTION\nif (am_bgworker())\napply_bgworker_set_state(XXX);\n\nThen you can also change the apply_bgworker_set_state to\nAssert(am_apply_bgworker());\n\n\n~~~\n\n22. src/backend/replication/logical/worker.c - find_or_start_apply_bgworker\n\n+\n+ if (!start && ApplyWorkersHash == NULL)\n+ return NULL;\n+\n\nIIUC maybe this extra check is not really necessary. I see no harm to\ncreate the HashTable even if was called in this state. If the 'start'\nflag is false then nothing is going to be found anyway, so it will\nreturn NULL. e.g. Might as well make the code a few lines\nshorter/simpler by removing this check.\n\n~~~\n\n23. 
src/backend/replication/logical/worker.c - apply_bgworker_free\n\n+/*\n+ * Add the worker to the freelist and remove the entry from hash table.\n+ */\n+static void\n+apply_bgworker_free(WorkerState *wstate)\n+{\n+ bool found;\n+ MemoryContext oldctx;\n+ TransactionId xid = wstate->pstate->stream_xid;\n\nIf you are not going to check the value of 'found' then why bother to\npass this param at all? Can't you just pass NULL?\n\n~~~\n\n24. src/backend/replication/logical/worker.c - apply_bgworker_free\n\nShould there be an Assert that the bgworker state really was FINISHED?\nI think I asked this already [PSv4] #48.\n\n~~~\n\n24. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n@@ -1088,24 +1416,71 @@ apply_handle_stream_prepare(StringInfo s)\n logicalrep_read_stream_prepare(s, &prepare_data);\n set_apply_error_context_xact(prepare_data.xid, prepare_data.prepare_lsn);\n\n- elog(DEBUG1, \"received prepare for streamed transaction %u\",\nprepare_data.xid);\n+ /*\n+ * If we are in a bgworker, just prepare the transaction.\n+ */\n+ if (am_apply_bgworker())\n\nDon’t need to say \"If we are...\" because the am_apply_worker()\ncondition makes it clear this is true.\n\n~~~\n\n25. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n- if (MyLogicalRepWorker->stream_fileset == NULL)\n+ stream_apply_worker = find_or_start_apply_bgworker(stream_xid, first_segment);\n+\n+ if (applying_changes_in_bgworker())\n {\n\nIIUC this condition seems overkill. I think you can just say if\n(stream_apply_worker)\n\n~~~\n\n26. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ if (found)\n+ {\n+ elog(LOG, \"rolled back to savepoint %s\", spname);\n+ RollbackToSavepoint(spname);\n+ CommitTransactionCommand();\n+ subxactlist = list_truncate(subxactlist, i + 1);\n+ }\n\nShould that elog use the \"[Apply BGW #%u]\" format like the others for BGW?\n\n~~~\n\n27. 
src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\nShould this function be setting stream_apply_worker = NULL somewhere\nwhen all is done?\n\n~~~\n\n28. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n+/*\n+ * Handle STREAM COMMIT message.\n+ */\n+static void\n+apply_handle_stream_commit(StringInfo s)\n+{\n+ LogicalRepCommitData commit_data;\n+ TransactionId xid;\n+\n+ if (in_streamed_transaction)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_PROTOCOL_VIOLATION),\n+ errmsg_internal(\"STREAM COMMIT message without STREAM STOP\")));\n+\n+ xid = pq_getmsgint(s, 4);\n+ logicalrep_read_stream_commit(s, &commit_data);\n+ set_apply_error_context_xact(xid, commit_data.commit_lsn);\n\nThere is something a bit odd about this code. I think the\nlogicalrep_read_stream_commit() should take another param and the Txid\nbe extracted/read only INSIDE that logicalrep_read_stream_commit\nfunction. See also review comment #13.\n\n~~~\n\n29. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\nI am unsure, but should something be setting the stream_apply_worker =\nNULL somewhere when all is done?\n\n~~~\n\n30. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n30a.\n+ if (shmq_res != SHM_MQ_SUCCESS)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"lost connection to the main apply worker\")));\n\n30b.\n+ default:\n+ elog(ERROR, \"unexpected message\");\n+ break;\n\nShould both those error messages have the \"[Apply BGW #%u]\" prefix\nlike the other BGW messages?\n\n~~~\n\n31. src/backend/replication/logical/worker.c - ApplyBgwShutdown\n\n+/*\n+ * Set the failed flag so that the main apply worker can realize we have\n+ * shutdown.\n+ */\n+static void\n+ApplyBgwShutdown(int code, Datum arg)\n\nThe comment does not seem to be in sync with the code. E.g.\nWording: \"failed flag\" -> \"exit state\" ??\n\n~~~\n\n32. 
src/backend/replication/logical/worker.c - ApplyBgwShutdown\n\n+/*\n+ * Set the failed flag so that the main apply worker can realize we have\n+ * shutdown.\n+ */\n+static void\n+ApplyBgwShutdown(int code, Datum arg)\n\nIf the 'code' param is deliberately unused it might be better to say\nso in the comment...\n\n~~~\n\n33. src/backend/replication/logical/worker.c - LogicalApplyBgwMain\n\n33a.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"unable to map dynamic shared memory segment\")));\n\n33b.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"bad magic number in dynamic shared memory segment\")));\n+\n\n33c.\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply worker for subscription %u will not \"\n+ \"start because the subscription was removed during startup\",\n+ MyLogicalRepWorker->subid)));\n\nShould all these messages have \"[Apply BGW ?]\" prefix even though they\nare not yet attached?\n\n~~~\n\n34. src/backend/replication/logical/worker.c - setup_dsm\n\n+ * We need one key to register the location of the header, and we need\n+ * nworkers keys to track the locations of the message queues.\n+ */\n\nThis comment about 'nworkers' seems stale because that variable no\nlonger exists.\n\n~~~\n\n35. src/backend/replication/logical/worker.c - apply_bgworker_setup\n\n+/*\n+ * Start apply worker background worker process and allocat shared memory for\n+ * it.\n+ */\n+static WorkerState *\n+apply_bgworker_setup(void)\n\ntypo: \"allocat\" -> \"allocate\"\n\n~~~\n\n36. src/backend/replication/logical/worker.c - apply_bgworker_setup\n\n+ elog(LOG, \"setting up apply worker #%u\", list_length(ApplyWorkersList) + 1)\n\nShould this message have the standard \"[Apply BGW %u]\" pattern?\n\n~~~\n\n37. src/backend/replication/logical/worker.c - apply_bgworker_setup\n\n+ if (launched)\n+ {\n+ /* Wait for worker to become ready. 
*/\n+ apply_bgworker_wait_for(wstate, APPLY_BGWORKER_ATTACHED);\n+\n+ ApplyWorkersList = lappend(ApplyWorkersList, wstate);\n+ }\n\nSince there is a state APPLY_BGWORKER_READY I think either this\ncomment is wrong or this passed parameter ATTACHED must be wrong.\n\n~~~\n\n38. src/backend/replication/logical/worker.c - apply_bgworker_send_data\n\n+ if (result != SHM_MQ_SUCCESS)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"could not send tuples to shared-memory queue\")));\n+}\n\nWording: Is it right to call these \"tuples\" or better just say\n\"data\"? I am not sure. Already asked this in [PSv4] #68\n\n~~~\n\n39. src/backend/replication/logical/worker.c - apply_bgworker_wait_for\n\n+/*\n+ * Wait until the state of apply background worker reach the 'wait_for_state'\n+ */\n+static void\n+apply_bgworker_wait_for(WorkerState *wstate, char wait_for_state)\n\ntypo: \"reach\" -> \"reaches\"\n\n~~~\n\n40. src/backend/replication/logical/worker.c - apply_bgworker_wait_for\n\n+ /* If the worker is ready, we have succeeded. */\n+ SpinLockAcquire(&wstate->pstate->mutex);\n+ status = wstate->pstate->state;\n+ SpinLockRelease(&wstate->pstate->mutex);\n+\n+ if (status == wait_for_state)\n+ break;\n\n40a. Why does this comment mention \"ready\"? This function might be waiting\nfor a different state than that.\n\n40b. Anyway, I think this comment should be a few lines lower, above\nthe if (status == wait_for_state)\n\n~~~\n\n41. src/backend/replication/logical/worker.c - apply_bgworker_wait_for\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"Background worker %u failed to apply transaction %u\",\n+ wstate->pstate->n, wstate->pstate->stream_xid)));\n\nShould this message have the standard \"[Apply BGW %u]\" pattern?\n\n~~~\n\n42. 
src/backend/replication/logical/worker.c - check_workers_status\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"Background worker %u exited unexpectedly\",\n+ wstate->pstate->n)));\n\nShould this message have the standard \"[Apply BGW %u]\" pattern? Or if\nthis is just from Apply worker maybe it should be clearer like \"Apply\nworker detected apply bgworker %u exited unexpectedly\".\n\n~~~\n\n43. src/backend/replication/logical/worker.c - check_workers_status\n\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply workers for subscription \\\"%s\\\"\nwill restart\",\n+ MySubscription->name),\n+ errdetail(\"Cannot start table synchronization while bgworkers are \"\n+ \"handling streamed replication transaction\")));\n\nI am not sure, but isn't the message backwards? e.g. Should it say more like:\n\"Cannot handle streamed transactions using bgworkers while table\nsynchronization is still in progress\".\n\n~~~\n\n44. src/backend/replication/logical/worker.c - apply_bgworker_set_state\n\n+ elog(LOG, \"[Apply BGW #%u] set state to %c\",\n+ MyParallelState->n, state);\n\nThe line wrapping seemed overkill here.\n\n~~~\n\n45. src/backend/utils/activity/wait_event.c\n\n@@ -388,6 +388,9 @@ pgstat_get_wait_ipc(WaitEventIPC w)\n case WAIT_EVENT_HASH_GROW_BUCKETS_REINSERT:\n event_name = \"HashGrowBucketsReinsert\";\n break;\n+ case WAIT_EVENT_LOGICAL_APPLY_WORKER_READY:\n+ event_name = \"LogicalApplyWorkerReady\";\n+ break;\n\nI am not sure this is the best name for this event since the only\nplace it is used (in apply_bgworker_wait_for) is not only waiting for\nREADY state. Maybe a name like WAIT_EVENT_LOGICAL_APPLY_BGWORKER or\nWAIT_EVENT_LOGICAL_APPLY_WORKER_SYNC would be more appropriate? Need\nto change the wait_event.h also.\n\n~~~\n\n46. 
src/include/catalog/pg_subscription.h\n\n+/* Disallow streaming in-progress transactions */\n+#define SUBSTREAM_OFF 'f'\n+\n+/*\n+ * Streaming transactions are written to a temporary file and applied only\n+ * after the transaction is committed on upstream.\n+ */\n+#define SUBSTREAM_ON 'o'\n+\n+/* Streaming transactions are appied immediately via a background worker */\n+#define SUBSTREAM_APPLY 'a'\n\n46a. There is not really any overarching comment that associates these\n#defines back to the new 'stream' field so you are just supposed to\nguess that's what they are for?\n\n46b. I also feel that using 'o' for ON is not consistent with the 'f'\nof OFF. IMO better to use 't/f' for true/false instead of 'o/f'. Also\ndon't forget to update docs, pg_dump.c etc.\n\n46c. Typo: \"appied\" -> \"applied\"\n\n~~~~\n\n47. src/test/regress/expected/subscription.out - missing test\n\nMissing some test cases for all new option values? E.g. Where is the\ntest where the streaming value is set to 'apply'? Same comment as [PSv4]\n#81\n\n------\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716E8D536552467EFB512EF94FC9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[PSv4] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 5 May 2022 15:45:36 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 29, 2022 at 3:22 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n...\n> Thanks for your patch.\n>\n> The patch modified streaming option in logical replication, it can be set to\n> 'on', 'off' and 'apply'. The new option 'apply' haven't been tested in the tap test.\n> Attach a patch which modified the subscription tap test to cover both 'on' and\n> 'apply' option. 
(The main patch is also attached to make cfbot happy.)\n>\n\nHere are my review comments for v5-0002 (TAP tests)\n\nYour changes followed a similar pattern of refactoring so most of my\ncomments below are repeated for all the files.\n\n======\n\n1. Commit message\n\nFor the tap tests about streaming option in logical replication, test both\n'on' and 'apply' option.\n\nSUGGESTION\nChange all TAP tests using the PUBLICATION \"streaming\" option, so they\nnow test both 'on' and 'apply' values.\n\n~~~\n\n2. src/test/subscription/t/015_stream.pl\n\n+sub test_streaming\n+{\n\nI think the function should have a comment to say that its purpose is\nto encapsulate all the common (stream related) test steps so the same\ncode can be run both for the streaming=on and streaming=apply cases.\n\n~~~\n\n3. src/test/subscription/t/015_stream.pl\n\n+\n+# Test streaming mode on\n\n+# Test streaming mode apply\n\nThese comments feel too small. IMO they should both be more prominent like:\n\n################################\n# Test using streaming mode 'on'\n################################\n\n###################################\n# Test using streaming mode 'apply'\n###################################\n\n~~~\n\n4. src/test/subscription/t/015_stream.pl\n\n+# Test streaming mode apply\n+$node_publisher->safe_psql('postgres', \"DELETE FROM test_tab WHERE (a > 2)\");\n $node_publisher->wait_for_catchup($appname);\n\nI think those 2 lines do not really belong after the \"# Test streaming\nmode apply\" comment. IIUC they are really just doing cleanup from the\nprior test part so I think they should\n\na) be *above* this comment (and say \"# cleanup the test data\") or\nb) maybe it is best to put all the cleanup lines actually inside the\n'test_streaming' function so that the last thing the function does is\nclean up after itself.\n\noption b seems tidier to me.\n\n~~~\n\n5. src/test/subscription/t/016_stream_subxact.pl\n\nsub test_streaming should be commented. 
(same as comment #2)\n\n~~~\n\n6. src/test/subscription/t/016_stream_subxact.pl\n\nThe comments for the different streaming nodes should be more\nprominent. (same as comment #3)\n\n~~~\n\n7. src/test/subscription/t/016_stream_subxact.pl\n\n+# Test streaming mode apply\n+$node_publisher->safe_psql('postgres', \"DELETE FROM test_tab WHERE (a > 2)\");\n $node_publisher->wait_for_catchup($appname);\n\nThese don't seem to belong here. They are clean up from the prior\ntest. (same as comment #4)\n\n~~~\n\n8. src/test/subscription/t/017_stream_ddl.pl\n\nsub test_streaming should be commented. (same as comment #2)\n\n~~~\n\n9. src/test/subscription/t/017_stream_ddl.pl\n\nThe comments for the different streaming nodes should be more\nprominent. (same as comment #3)\n\n~~~\n\n10. src/test/subscription/t/017_stream_ddl.pl\n\n+# Test streaming mode apply\n $node_publisher->safe_psql(\n 'postgres', q{\n-BEGIN;\n-INSERT INTO test_tab VALUES (2001, md5(2001::text), -2001, 2*2001);\n-ALTER TABLE test_tab ADD COLUMN e INT;\n-SAVEPOINT s1;\n-INSERT INTO test_tab VALUES (2002, md5(2002::text), -2002, 2*2002, -3*2002);\n-COMMIT;\n+DELETE FROM test_tab WHERE (a > 2);\n+ALTER TABLE test_tab DROP COLUMN c, DROP COLUMN d, DROP COLUMN e,\nDROP COLUMN f;\n });\n\n $node_publisher->wait_for_catchup($appname);\n\nThese don't seem to belong here. They are clean up from the prior\ntest. (same as comment #4)\n\n~~~\n\n11. .../t/018_stream_subxact_abort.pl\n\nsub test_streaming should be commented. (same as comment #2)\n\n~~~\n\n12. .../t/018_stream_subxact_abort.pl\n\nThe comments for the different streaming nodes should be more\nprominent. (same as comment #3)\n\n~~~\n\n13. .../t/018_stream_subxact_abort.pl\n\n+# Test streaming mode apply\n+$node_publisher->safe_psql('postgres', \"DELETE FROM test_tab WHERE (a > 2)\");\n $node_publisher->wait_for_catchup($appname);\n\nThese don't seem to belong here. They are clean up from the prior\ntest. (same as comment #4)\n\n~~~\n\n14. 
.../t/019_stream_subxact_ddl_abort.pl\n\nsub test_streaming should be commented. (same as comment #2)\n\n~~~\n\n15. .../t/019_stream_subxact_ddl_abort.pl\n\nThe comments for the different streaming nodes should be more\nprominent. (same as comment #3)\n\n~~~\n\n16. .../t/019_stream_subxact_ddl_abort.pl\n\n+test_streaming($node_publisher, $node_subscriber, $appname);\n+\n+# Test streaming mode apply\n $node_publisher->safe_psql(\n 'postgres', q{\n-BEGIN;\n-INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series(3,500) s(i);\n-ALTER TABLE test_tab ADD COLUMN c INT;\n-SAVEPOINT s1;\n-INSERT INTO test_tab SELECT i, md5(i::text), -i FROM\ngenerate_series(501,1000) s(i);\n-ALTER TABLE test_tab ADD COLUMN d INT;\n-SAVEPOINT s2;\n-INSERT INTO test_tab SELECT i, md5(i::text), -i, 2*i FROM\ngenerate_series(1001,1500) s(i);\n-ALTER TABLE test_tab ADD COLUMN e INT;\n-SAVEPOINT s3;\n-INSERT INTO test_tab SELECT i, md5(i::text), -i, 2*i, -3*i FROM\ngenerate_series(1501,2000) s(i);\n+DELETE FROM test_tab WHERE (a > 2);\n ALTER TABLE test_tab DROP COLUMN c;\n-ROLLBACK TO s1;\n-INSERT INTO test_tab SELECT i, md5(i::text), i FROM\ngenerate_series(501,1000) s(i);\n-COMMIT;\n });\n-\n $node_publisher->wait_for_catchup($appname);\n\nThese don't seem to belong here. They are clean up from the prior\ntest. (same as comment #4)\n\n~~~\n\n17. .../subscription/t/022_twophase_cascade.\n\n+# ---------------------\n+# 2PC + STREAMING TESTS\n+# ---------------------\n+sub test_streaming\n+{\n\nI think maybe that 2PC comment should not have been moved. IMO it\nbelongs in the main test body...\n\n~~~\n\n18. .../subscription/t/022_twophase_cascade.\n\nsub test_streaming should be commented. (same as comment #2)\n\n~~~\n\n19. .../subscription/t/022_twophase_cascade.\n\n+sub test_streaming\n+{\n+ my ($node_A, $node_B, $node_C, $appname_B, $appname_C, $streaming) = @_;\n\nIf you called that '$streaming' param something more like\n'$streaming_mode' it would read better I think.\n\n~~~\n\n20. 
.../subscription/t/023_twophase_stream.pl\n\nsub test_streaming should be commented. (same as comment #2)\n\n~~~\n\n21. .../subscription/t/023_twophase_stream.pl\n\nThe comments for the different streaming nodes should be more\nprominent. (same as comment #3)\n\n~~~\n\n22. .../subscription/t/023_twophase_stream.pl\n\n+# Test streaming mode apply\n $node_publisher->safe_psql('postgres', \"DELETE FROM test_tab WHERE a > 2;\");\n-\n-# Then insert, update and delete enough rows to exceed the 64kB limit.\n-$node_publisher->safe_psql('postgres', q{\n- BEGIN;\n- INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series(3,\n5000) s(i);\n- UPDATE test_tab SET b = md5(b) WHERE mod(a,2) = 0;\n- DELETE FROM test_tab WHERE mod(a,3) = 0;\n- PREPARE TRANSACTION 'test_prepared_tab';});\n-\n-$node_publisher->wait_for_catchup($appname);\n-\n-# check that transaction is in prepared state on subscriber\n-$result = $node_subscriber->safe_psql('postgres', \"SELECT count(*)\nFROM pg_prepared_xacts;\");\n-is($result, qq(1), 'transaction is prepared on subscriber');\n-\n-# 2PC transaction gets aborted\n-$node_publisher->safe_psql('postgres', \"ROLLBACK PREPARED\n'test_prepared_tab';\");\n-\n $node_publisher->wait_for_catchup($appname);\n\nThese don't seem to belong here. They are clean up from the prior\ntest. 
(same as comment #4)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 6 May 2022 18:56:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, May 4, 2022 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 3, 2022 at 9:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > > > his first mail in this patch? IIUC since the background apply worker\n> > > > > applies the streamed changes as soon as receiving them from the main\n> > > > > apply worker, a conflict that doesn't happen in the current streaming\n> > > > > logical replication could happen.\n> > > > >\n> > > >\n> > > > This patch seems to be waiting for stream_stop to finish, so I don't\n> > > > see how the issues related to \"Transaction dependency\" can arise? What\n> > > > type of conflict/issues you have in mind?\n> > >\n> > > Suppose we set both publisher and subscriber:\n> > >\n> > > On publisher:\n> > > create table test (i int);\n> > > insert into test values (0);\n> > > create publication test_pub for table test;\n> > >\n> > > On subscriber:\n> > > create table test (i int primary key);\n> > > create subscription test_sub connection '...' 
publication test_pub; --\n> > > value 0 is replicated via initial sync\n> > >\n> > > Now, both 'test' tables have value 0.\n> > >\n> > > And suppose two concurrent transactions are executed on the publisher\n> > > in following order:\n> > >\n> > > TX-1:\n> > > begin;\n> > > insert into test select generate_series(0, 10000); -- changes will be streamed;\n> > >\n> > > TX-2:\n> > > begin;\n> > > delete from test where c = 0;\n> > > commit;\n> > >\n> > > TX-1:\n> > > commit;\n> > >\n> > > With the current streaming logical replication, these changes will be\n> > > applied successfully since the deletion is applied before the\n> > > (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> > > an unique constraint violation since the insertion is applied first.\n> > > I've confirmed that it happens with v5 patch.\n> > >\n> >\n> > Good point but I am not completely sure if doing transaction\n> > dependency tracking for such cases is really worth it. I feel for such\n> > concurrent cases users can anyway now also get conflicts, it is just a\n> > matter of timing. One more thing to check transaction dependency, we\n> > might need to spill the data for streaming transactions in which case\n> > we might lose all the benefits of doing it via a background worker. Do\n> > we see any simple way to avoid this?\n> >\n\nI agree that it is just a matter of timing. I think new issues that\nhaven't happened on the current streaming logical replication\ndepending on the timing could happen with this feature and vice versa.\n\n>\n> I think the other kind of problem that can happen here is delete\n> followed by an insert. If in the example provided by you, TX-1\n> performs delete (say it is large enough to cause streaming) and TX-2\n> performs insert then I think it will block the apply worker because\n> insert will start waiting infinitely. 
Currently, I think it will lead\n> to conflict due to insert but that is still solvable by allowing users\n> to remove conflicting rows.\n>\n> It seems both these problems are due to the reason that the table on\n> publisher and subscriber has different constraints otherwise, we would\n> have seen the same behavior on the publisher as well.\n>\n> There could be a few ways to avoid these and similar problems:\n> a. detect the difference in constraints between publisher and\n> subscribers like primary key and probably others (like whether there\n> is any volatile function present in index expression) when applying\n> the change and then we give ERROR to the user that she must change the\n> streaming mode to 'spill' instead of 'apply' (aka parallel apply).\n> b. Same as (a) but instead of ERROR just LOG this information and\n> change the mode to spill for the transactions that operate on that\n> particular relation.\n\nGiven that it doesn't introduce a new kind of problem I don't think we\nneed special treatment for that at least in this feature. If we want\nsuch modes we can discuss it separately.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 10 May 2022 14:04:42 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, May 4, 2022 at 8:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, May 3, 2022 at 5:16 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> ...\n>\n> > Avoiding unexpected differences like this is why I suggested the\n> > option should have to be explicitly enabled instead of being on by\n> > default as it is in the current patch. 
See my review comment #14 [1].\n> > It means the user won't have to change their existing code as a\n> > workaround.\n> >\n> > ------\n> > [1] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n> >\n>\n> Sorry I was wrong above. It seems this behaviour was already changed\n> in the latest patch v5 so now the option value 'on' means what it\n> always did. Thanks!\n\nHaving it optional seems a good idea. BTW can the user configure how\nmany apply bgworkers can be used per subscription or in the whole\nsystem? Like max_sync_workers_per_subscription, is it better to have a\nconfiguration parameter or a subscription option for that? If so,\nsetting it to 0 probably means to disable the parallel apply feature.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 10 May 2022 14:09:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 10, 2022 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 4, 2022 at 8:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, May 3, 2022 at 5:16 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > ...\n> >\n> > > Avoiding unexpected differences like this is why I suggested the\n> > > option should have to be explicitly enabled instead of being on by\n> > > default as it is in the current patch. See my review comment #14 [1].\n> > > It means the user won't have to change their existing code as a\n> > > workaround.\n> > >\n> > > ------\n> > > [1] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n> > >\n> >\n> > Sorry I was wrong above. It seems this behaviour was already changed\n> > in the latest patch v5 so now the option value 'on' means what it\n> > always did. 
Thanks!\n>\n> Having it optional seems a good idea. BTW can the user configure how\n> many apply bgworkers can be used per subscription or in the whole\n> system? Like max_sync_workers_per_subscription, is it better to have a\n> configuration parameter or a subscription option for that? If so,\n> setting it to 0 probably means to disable the parallel apply feature.\n>\n\nYeah, that might be useful but we are already giving an option while\ncreating a subscription whether to allow parallelism, so will it be\nuseful to give one more way to disable this feature? OTOH, having\nsomething like max_parallel_apply_workers/max_bg_apply_workers at the\nsystem level can give better control for how much parallelism the user\nwishes to allow for apply work. If we have such a new parameter then I\nthink max_logical_replication_workers should include apply workers,\nparallel apply workers, and table synchronization? In such a case,\ndon't we need to think of increasing the default value of\nmax_logical_replication_workers?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 May 2022 14:28:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 10, 2022 at 10:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 4, 2022 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 3, 2022 at 9:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > > > > his first mail in this 
patch? IIUC since the background apply worker\n> > > > > > applies the streamed changes as soon as receiving them from the main\n> > > > > > apply worker, a conflict that doesn't happen in the current streaming\n> > > > > > logical replication could happen.\n> > > > > >\n> > > > >\n> > > > > This patch seems to be waiting for stream_stop to finish, so I don't\n> > > > > see how the issues related to \"Transaction dependency\" can arise? What\n> > > > > type of conflict/issues you have in mind?\n> > > >\n> > > > Suppose we set both publisher and subscriber:\n> > > >\n> > > > On publisher:\n> > > > create table test (i int);\n> > > > insert into test values (0);\n> > > > create publication test_pub for table test;\n> > > >\n> > > > On subscriber:\n> > > > create table test (i int primary key);\n> > > > create subscription test_sub connection '...' publication test_pub; --\n> > > > value 0 is replicated via initial sync\n> > > >\n> > > > Now, both 'test' tables have value 0.\n> > > >\n> > > > And suppose two concurrent transactions are executed on the publisher\n> > > > in following order:\n> > > >\n> > > > TX-1:\n> > > > begin;\n> > > > insert into test select generate_series(0, 10000); -- changes will be streamed;\n> > > >\n> > > > TX-2:\n> > > > begin;\n> > > > delete from test where c = 0;\n> > > > commit;\n> > > >\n> > > > TX-1:\n> > > > commit;\n> > > >\n> > > > With the current streaming logical replication, these changes will be\n> > > > applied successfully since the deletion is applied before the\n> > > > (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> > > > an unique constraint violation since the insertion is applied first.\n> > > > I've confirmed that it happens with v5 patch.\n> > > >\n> > >\n> > > Good point but I am not completely sure if doing transaction\n> > > dependency tracking for such cases is really worth it. 
I feel for such\n> > > concurrent cases users can anyway now also get conflicts, it is just a\n> > > matter of timing. One more thing to check transaction dependency, we\n> > > might need to spill the data for streaming transactions in which case\n> > > we might lose all the benefits of doing it via a background worker. Do\n> > > we see any simple way to avoid this?\n> > >\n>\n> I agree that it is just a matter of timing. I think new issues that\n> haven't happened on the current streaming logical replication\n> depending on the timing could happen with this feature and vice versa.\n>\n\nHere by vice versa, do you mean some problems that can happen with\ncurrent code won't happen after new implementation? If so, can you\ngive one such example?\n\n> >\n> > I think the other kind of problem that can happen here is delete\n> > followed by an insert. If in the example provided by you, TX-1\n> > performs delete (say it is large enough to cause streaming) and TX-2\n> > performs insert then I think it will block the apply worker because\n> > insert will start waiting infinitely. Currently, I think it will lead\n> > to conflict due to insert but that is still solvable by allowing users\n> > to remove conflicting rows.\n> >\n> > It seems both these problems are due to the reason that the table on\n> > publisher and subscriber has different constraints otherwise, we would\n> > have seen the same behavior on the publisher as well.\n> >\n> > There could be a few ways to avoid these and similar problems:\n> > a. detect the difference in constraints between publisher and\n> > subscribers like primary key and probably others (like whether there\n> > is any volatile function present in index expression) when applying\n> > the change and then we give ERROR to the user that she must change the\n> > streaming mode to 'spill' instead of 'apply' (aka parallel apply).\n> > b. 
Same as (a) but instead of ERROR just LOG this information and\n> > change the mode to spill for the transactions that operate on that\n> > particular relation.\n>\n> Given that it doesn't introduce a new kind of problem I don't think we\n> need special treatment for that at least in this feature.\n>\n\nIsn't the problem related to infinite wait by insert as explained in\nmy previous email (in the above-quoted text) a new kind of problem\nthat won't exist in the current implementation?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 May 2022 14:39:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 10, 2022 at 6:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 10:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 4, 2022 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, May 3, 2022 at 9:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 2, 2022 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, May 2, 2022 at 6:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, May 2, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > Are you planning to support \"Transaction dependency\" Amit mentioned in\n> > > > > > > his first mail in this patch? 
IIUC since the background apply worker\n> > > > > > > applies the streamed changes as soon as receiving them from the main\n> > > > > > > apply worker, a conflict that doesn't happen in the current streaming\n> > > > > > > logical replication could happen.\n> > > > > > >\n> > > > > >\n> > > > > > This patch seems to be waiting for stream_stop to finish, so I don't\n> > > > > > see how the issues related to \"Transaction dependency\" can arise? What\n> > > > > > type of conflict/issues you have in mind?\n> > > > >\n> > > > > Suppose we set both publisher and subscriber:\n> > > > >\n> > > > > On publisher:\n> > > > > create table test (i int);\n> > > > > insert into test values (0);\n> > > > > create publication test_pub for table test;\n> > > > >\n> > > > > On subscriber:\n> > > > > create table test (i int primary key);\n> > > > > create subscription test_sub connection '...' publication test_pub; --\n> > > > > value 0 is replicated via initial sync\n> > > > >\n> > > > > Now, both 'test' tables have value 0.\n> > > > >\n> > > > > And suppose two concurrent transactions are executed on the publisher\n> > > > > in the following order:\n> > > > >\n> > > > > TX-1:\n> > > > > begin;\n> > > > > insert into test select generate_series(0, 10000); -- changes will be streamed;\n> > > > >\n> > > > > TX-2:\n> > > > > begin;\n> > > > > delete from test where i = 0;\n> > > > > commit;\n> > > > >\n> > > > > TX-1:\n> > > > > commit;\n> > > > >\n> > > > > With the current streaming logical replication, these changes will be\n> > > > > applied successfully since the deletion is applied before the\n> > > > > (streamed) insertion. Whereas with the apply bgworker, it fails due to\n> > > > > a unique constraint violation since the insertion is applied first.\n> > > > > I've confirmed that it happens with the v5 patch.\n> > > > >\n> > > >\n> > > > Good point but I am not completely sure if doing transaction\n> > > > dependency tracking for such cases is really worth it. 
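The ordering hazard in the example above can be modeled outside PostgreSQL. The sketch below is plain Python, not code from the patch; a set stands in for the subscriber table and its unique constraint, and the two call orders stand in for commit-order replay versus eager streamed apply:

```python
# Toy model of subscriber-side apply order (illustrative only, not PostgreSQL code).
# A set plays the role of a table with a unique constraint on its single column.

def apply_changes(ops, table):
    """Apply ('insert', v) / ('delete', v) ops; raise on a duplicate insert."""
    for op, v in ops:
        if op == 'insert':
            if v in table:
                raise ValueError(f"duplicate key value {v}")
            table.add(v)
        else:
            table.discard(v)
    return table

tx1 = [('insert', v) for v in range(3)]  # large streamed TX-1, commits last
tx2 = [('delete', 0)]                    # small TX-2, commits first

# Commit order (TX-2's delete replayed first), as spilling to disk produces it:
assert apply_changes(tx2 + tx1, {0}) == {0, 1, 2}

# Streamed changes applied eagerly, before TX-2 is replayed: conflict.
try:
    apply_changes(tx1 + tx2, {0})
except ValueError as e:
    print("apply failed:", e)
```

The second ordering is the one an eager apply background worker produces, which is why it can conflict while the commit-order replay does not.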
I feel for such\n> > > > concurrent cases users can anyway now also get conflicts, it is just a\n> > > > matter of timing. One more thing to check transaction dependency, we\n> > > > might need to spill the data for streaming transactions in which case\n> > > > we might lose all the benefits of doing it via a background worker. Do\n> > > > we see any simple way to avoid this?\n> > > >\n> >\n> > I agree that it is just a matter of timing. I think new issues that\n> > haven't happened on the current streaming logical replication\n> > depending on the timing could happen with this feature and vice versa.\n> >\n>\n> Here by vice versa, do you mean some problems that can happen with\n> current code won't happen after new implementation? If so, can you\n> give one such example?\n>\n> > >\n> > > I think the other kind of problem that can happen here is delete\n> > > followed by an insert. If in the example provided by you, TX-1\n> > > performs delete (say it is large enough to cause streaming) and TX-2\n> > > performs insert then I think it will block the apply worker because\n> > > insert will start waiting infinitely. Currently, I think it will lead\n> > > to conflict due to insert but that is still solvable by allowing users\n> > > to remove conflicting rows.\n> > >\n> > > It seems both these problems are due to the reason that the table on\n> > > publisher and subscriber has different constraints otherwise, we would\n> > > have seen the same behavior on the publisher as well.\n> > >\n> > > There could be a few ways to avoid these and similar problems:\n> > > a. detect the difference in constraints between publisher and\n> > > subscribers like primary key and probably others (like whether there\n> > > is any volatile function present in index expression) when applying\n> > > the change and then we give ERROR to the user that she must change the\n> > > streaming mode to 'spill' instead of 'apply' (aka parallel apply).\n> > > b. 
Same as (a) but instead of ERROR just LOG this information and\n> > > change the mode to spill for the transactions that operate on that\n> > > particular relation.\n> >\n> > Given that it doesn't introduce a new kind of problem I don't think we\n> > need special treatment for that at least in this feature.\n> >\n>\n> Isn't the problem related to infinite wait by insert as explained in\n> my previous email (in the above-quoted text) a new kind of problem\n> that won't exist in the current implementation?\n>\n\nSorry I had completely missed the point that the commit order won't be\nchanged. I agree that this new implementation would introduce a new\nkind of issue as you mentioned above, and the opposite is not true.\n\nRegarding the case you explained in the previous email I also think it\nwill happen with the parallel apply feature. The apply worker will be\nblocked until the conflict is resolved. I'm not sure how to avoid\nthat. It would be not easy to compare constraints between publisher\nand subscribers when replicating partitioning tables.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 11 May 2022 12:46:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 10, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 4, 2022 at 8:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Tue, May 3, 2022 at 5:16 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > ...\n> > >\n> > > > Avoiding unexpected differences like this is why I suggested the\n> > > > option should have to be explicitly enabled instead of being on by\n> > > > default as it is in the current patch. 
See my review comment #14 [1].\n> > > > It means the user won't have to change their existing code as a\n> > > > workaround.\n> > > >\n> > > > ------\n> > > > [1] https://www.postgresql.org/message-id/CAHut%2BPuqYP5eD5wcSCtk%3Da6KuMjat2UCzqyGoE7sieCaBsVskQ%40mail.gmail.com\n> > > >\n> > >\n> > > Sorry I was wrong above. It seems this behaviour was already changed\n> > > in the latest patch v5 so now the option value 'on' means what it\n> > > always did. Thanks!\n> >\n> > Having it optional seems a good idea. BTW can the user configure how\n> > many apply bgworkers can be used per subscription or in the whole\n> > system? Like max_sync_workers_per_subscription, is it better to have a\n> > configuration parameter or a subscription option for that? If so,\n> > setting it to 0 probably means to disable the parallel apply feature.\n> >\n>\n> Yeah, that might be useful but we are already giving an option while\n> creating a subscription whether to allow parallelism, so will it be\n> useful to give one more way to disable this feature? OTOH, having\n> something like max_parallel_apply_workers/max_bg_apply_workers at the\n> system level can give better control for how much parallelism the user\n> wishes to allow for apply work.\n\nOr we can have something like\nmax_parallel_apply_workers_per_subscription that controls how many\nparallel apply workers can launch per subscription. 
That also gives\nbetter control for the number of parallel apply workers.\n\n> If we have such a new parameter then I\n> think max_logical_replication_workers should include apply workers,\n> parallel apply workers, and table synchronization?\n\nAgreed.\n\n> In such a case,\n> don't we need to think of increasing the default value of\n> max_logical_replication_workers?\n\nI think we would need to think about that if the parallel apply is\nenabled by default but given that it's disabled by default I'm fine\nwith the current default value.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 11 May 2022 13:05:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, May 11, 2022 at 9:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 6:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 10, 2022 at 10:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, May 4, 2022 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > I think the other kind of problem that can happen here is delete\n> > > > followed by an insert. If in the example provided by you, TX-1\n> > > > performs delete (say it is large enough to cause streaming) and TX-2\n> > > > performs insert then I think it will block the apply worker because\n> > > > insert will start waiting infinitely. 
Currently, I think it will lead\n> > > > to conflict due to insert but that is still solvable by allowing users\n> > > > to remove conflicting rows.\n> > > >\n> > > > It seems both these problems are due to the reason that the table on\n> > > > publisher and subscriber has different constraints otherwise, we would\n> > > > have seen the same behavior on the publisher as well.\n> > > >\n> > > > There could be a few ways to avoid these and similar problems:\n> > > > a. detect the difference in constraints between publisher and\n> > > > subscribers like primary key and probably others (like whether there\n> > > > is any volatile function present in index expression) when applying\n> > > > the change and then we give ERROR to the user that she must change the\n> > > > streaming mode to 'spill' instead of 'apply' (aka parallel apply).\n> > > > b. Same as (a) but instead of ERROR just LOG this information and\n> > > > change the mode to spill for the transactions that operate on that\n> > > > particular relation.\n> > >\n> > > Given that it doesn't introduce a new kind of problem I don't think we\n> > > need special treatment for that at least in this feature.\n> > >\n> >\n> > Isn't the problem related to infinite wait by insert as explained in\n> > my previous email (in the above-quoted text) a new kind of problem\n> > that won't exist in the current implementation?\n> >\n>\n> Sorry I had completely missed the point that the commit order won't be\n> changed. I agree that this new implementation would introduce a new\n> kind of issue as you mentioned above, and the opposite is not true.\n>\n> Regarding the case you explained in the previous email I also think it\n> will happen with the parallel apply feature. The apply worker will be\n> blocked until the conflict is resolved. I'm not sure how to avoid\n> that. 
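The detection suggested in options (a)/(b) earlier in the thread can be sketched as follows; this is an illustrative model only, and none of these names exist in the patch. The idea is simply to compare the unique/primary-key column sets on both sides and fall back to spilling on any mismatch:

```python
# Hypothetical sketch of the constraint check from options (a)/(b) above;
# these function and parameter names are invented for illustration.

def safe_streaming_mode(pub_unique_keys, sub_unique_keys):
    """Allow parallel apply only when publisher and subscriber agree on
    unique/primary-key columns; otherwise fall back to spilling to disk
    (option (b) would also LOG the downgrade)."""
    if set(pub_unique_keys) == set(sub_unique_keys):
        return 'apply'   # parallel apply is safe
    return 'spill'       # constraints differ: replay in commit order

# Publisher's test(i) has no key, subscriber's test(i) has a primary key:
assert safe_streaming_mode([], [('i',)]) == 'spill'
# Identical constraints on both sides:
assert safe_streaming_mode([('i',)], [('i',)]) == 'apply'
```

As the discussion notes, a per-table check like this is only straightforward when each side's constraints can be identified directly, which is what makes partitioned tables harder.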
It would be not easy to compare constraints between publisher\n> and subscribers when replicating partitioning tables.\n>\n\nI agree that partitioned tables need some more thought but in some\nsimple cases where replication happens via individual partition tables\n(default), we can detect as we do for normal tables. OTOH, when\nreplication happens via root (publish_via_partition_root) it could be\ntricky as the partitions could be different on both sides. I think the\ncases where we can't safely identify the constraint difference won't\nbe considered for apply via a new bg worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 May 2022 10:24:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, May 11, 2022 at 9:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 10, 2022 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Having it optional seems a good idea. BTW can the user configure how\n> > > many apply bgworkers can be used per subscription or in the whole\n> > > system? Like max_sync_workers_per_subscription, is it better to have a\n> > > configuration parameter or a subscription option for that? If so,\n> > > setting it to 0 probably means to disable the parallel apply feature.\n> > >\n> >\n> > Yeah, that might be useful but we are already giving an option while\n> > creating a subscription whether to allow parallelism, so will it be\n> > useful to give one more way to disable this feature? 
OTOH, having\n> > something like max_parallel_apply_workers/max_bg_apply_workers at the\n> > system level can give better control for how much parallelism the user\n> > wishes to allow for apply work.\n>\n> Or we can have something like\n> max_parallel_apply_workers_per_subscription that controls how many\n> parallel apply workers can launch per subscription. That also gives\n> better control for the number of parallel apply workers.\n>\n\nI think we can go either way in this matter as both have their pros\nand cons. I feel limiting the parallel workers per subscription gives\nbetter control but OTOH, it may not allow max usage of parallelism\nbecause some quota from other subscriptions might remain unused. Let\nus see what Hou-San or others think on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 May 2022 10:40:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, May 5, 2022 1:46 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n\r\n> Here are my review comments for v5-0001.\r\n> I will take a look at the v5-0002 (TAP) patch another time.\r\n\r\nThanks for the comments!\r\n\r\n> 4. Commit message\r\n> \r\n> User can set the streaming option to 'on/off', 'apply'. For now,\r\n> 'apply' means the streaming will be applied via a apply background if\r\n> available. 'on' means the streaming transaction will be spilled to\r\n> disk.\r\n> \r\n> \r\n> I think \"apply\" might not be the best choice of values for this\r\n> meaning, but I think Hou-san already said [1] that this was being\r\n> reconsidered.\r\n\r\nYes, I am thinking over this along with some other related stuff [1] posted by Amit\r\nand Sawada. Will change this in the next version.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAA4eK1%2B7D4qAQUQEE8zzQ0fGCqeBWd3rzTaY5N0jVs-VXFc_Xw%40mail.gmail.com\r\n\r\n> 7. 
src/backend/commands/subscriptioncmds.c - defGetStreamingMode\r\n> \r\n> +static char\r\n> +defGetStreamingMode(DefElem *def)\r\n> +{\r\n> + /*\r\n> + * If no parameter given, assume \"true\" is meant.\r\n> + */\r\n> + if (def->arg == NULL)\r\n> + return SUBSTREAM_ON;\r\n> \r\n> But is that right? IIUC all the docs said that the default is OFF.\r\n\r\nI think it's right. \"arg == NULL\" means the user specified the streaming option\r\nwithout a value, like CREATE SUBSCRIPTION xxx WITH(streaming). The value should\r\nbe 'on' in this case.\r\n\r\n\r\n> 12. src/backend/replication/logical/origin.c - replorigin_session_setup\r\n> \r\n> @@ -1110,7 +1110,11 @@ replorigin_session_setup(RepOriginId node)\r\n> if (curstate->roident != node)\r\n> continue;\r\n> \r\n> - else if (curstate->acquired_by != 0)\r\n> + /*\r\n> + * We allow the apply worker to get the slot which is acquired by its\r\n> + * leader process.\r\n> + */\r\n> + else if (curstate->acquired_by != 0 && acquire)\r\n> \r\n> I still feel this is overly-confusing. Shouldn't the comment say \"Allow the\r\n> apply bgworker to get the slot...\".\r\n> \r\n> Also the parameter name 'acquire' is hard to reconcile with the\r\n> comment. E.g. I feel all this would be easier to understand if the\r\n> param was refactored with a name like 'bgworker' and the code was\r\n> changed to:\r\n> else if (curstate->acquired_by != 0 && !bgworker)\r\n> \r\n> Of course, the value true/false would need to be flipped on calls too.\r\n> This is the same as my previous comment [PSv4] #26.\r\n\r\nI feel it's not a good idea to mention the bgworker in origin.c. I have removed this\r\ncomment and added some other comments in worker.c.\r\n\r\n> 26. 
src/backend/replication/logical/worker.c - apply_handle_stream_abort\r\n> \r\n> + if (found)\r\n> + {\r\n> + elog(LOG, \"rolled back to savepoint %s\", spname);\r\n> + RollbackToSavepoint(spname);\r\n> + CommitTransactionCommand();\r\n> + subxactlist = list_truncate(subxactlist, i + 1);\r\n> + }\r\n> \r\n> Should that elog use the \"[Apply BGW #%u]\" format like the others for BGW?\r\n\r\nI feel the \"[Apply BGW #%u]\" is a bit hacky and some of them come from the old\r\npatchset. I will recheck these logs, adjust them, and change some log\r\nlevels in the next version.\r\n\r\n> 27. src/backend/replication/logical/worker.c - apply_handle_stream_abort\r\n> \r\n> Should this function be setting stream_apply_worker = NULL somewhere\r\n> when all is done?\r\n> 29. src/backend/replication/logical/worker.c - apply_handle_stream_commit\r\n> \r\n> I am unsure, but should something be setting the stream_apply_worker =\r\n> NULL somewhere when all is done?\r\n\r\nI think the worker is already set to NULL in apply_handle_stream_stop.\r\n\r\n\r\n> 32. src/backend/replication/logical/worker.c - ApplyBgwShutdown\r\n> \r\n> +/*\r\n> + * Set the failed flag so that the main apply worker can realize we have\r\n> + * shutdown.\r\n> + */\r\n> +static void\r\n> +ApplyBgwShutdown(int code, Datum arg)\r\n> \r\n> If the 'code' param is deliberately unused it might be better to say\r\n> so in the comment...\r\n\r\nNot sure about this. After searching the code, I think most of the callback\r\nfunctions don't use or add comments for the 'code' param.\r\n\r\n\r\n> 45. 
src/backend/utils/activity/wait_event.c\r\n> \r\n> @@ -388,6 +388,9 @@ pgstat_get_wait_ipc(WaitEventIPC w)\r\n> case WAIT_EVENT_HASH_GROW_BUCKETS_REINSERT:\r\n> event_name = \"HashGrowBucketsReinsert\";\r\n> break;\r\n> + case WAIT_EVENT_LOGICAL_APPLY_WORKER_READY:\r\n> + event_name = \"LogicalApplyWorkerReady\";\r\n> + break;\r\n> \r\n> I am not sure this is the best name for this event since the only\r\n> place it is used (in apply_bgworker_wait_for) is not only waiting for\r\n> READY state. Maybe a name like WAIT_EVENT_LOGICAL_APPLY_BGWORKER or\r\n> WAIT_EVENT_LOGICAL_APPLY_WORKER_SYNC would be more appropriate? Need\r\n> to change the wait_event.h also.\r\n\r\nI noticed a similarly named \"WAIT_EVENT_LOGICAL_SYNC_STATE_CHANGE\", so I changed\r\nthis to WAIT_EVENT_LOGICAL_APPLY_WORKER_STATE_CHANGE.\r\n\r\n> 47. src/test/regress/expected/subscription.out - missing test\r\n> \r\n> Missing some test cases for all new option values? E.g. where is the\r\n> test where the streaming value is set to 'apply'? Same comment as [PSv4]\r\n> #81\r\n\r\nThe new option is tested in the second patch posted by Shi yu.\r\n\r\nI addressed other comments from Peter and the 2PC related comment from Shi.\r\nHere is the updated version of the patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 13 May 2022 08:48:33 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, May 11, 2022 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, May 11, 2022 at 9:35 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Tue, May 10, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, May 10, 2022 at 10:39 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > Having it optional seems a good idea. 
BTW can the user configure\r\n> > > > how many apply bgworkers can be used per subscription or in the\r\n> > > > whole system? Like max_sync_workers_per_subscription, is it better\r\n> > > > to have a configuration parameter or a subscription option for\r\n> > > > that? If so, setting it to 0 probably means to disable the parallel apply\r\n> feature.\r\n> > > >\r\n> > >\r\n> > > Yeah, that might be useful but we are already giving an option while\r\n> > > creating a subscription whether to allow parallelism, so will it be\r\n> > > useful to give one more way to disable this feature? OTOH, having\r\n> > > something like max_parallel_apply_workers/max_bg_apply_workers at\r\n> > > the system level can give better control for how much parallelism\r\n> > > the user wishes to allow for apply work.\r\n> >\r\n> > Or we can have something like\r\n> > max_parallel_apply_workers_per_subscription that controls how many\r\n> > parallel apply workers can launch per subscription. That also gives\r\n> > better control for the number of parallel apply workers.\r\n> >\r\n> \r\n> I think we can go either way in this matter as both have their pros and cons. I\r\n> feel limiting the parallel workers per subscription gives better control but\r\n> OTOH, it may not allow max usage of parallelism because some quota from\r\n> other subscriptions might remain unused. 
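The unused-quota concern can be illustrated with a small allocation model. The caps below mirror the parameters being proposed in this thread; at this point neither is an existing GUC, and the function is purely illustrative:

```python
# Model of global vs. per-subscription caps on parallel apply workers.
# Parameter names follow the proposals in the thread, nothing more.

def workers_started(demand, global_cap, per_sub_cap=None):
    """Workers actually launched, given per-subscription demand, a global
    cap, and an optional per-subscription cap."""
    started = 0
    for want in demand:
        if per_sub_cap is not None:
            want = min(want, per_sub_cap)
        started += min(want, global_cap - started)
    return started

demand = [6, 1, 1]  # one busy subscription, two mostly idle ones

# A single global cap lets the busy subscription use the whole pool:
assert workers_started(demand, global_cap=8) == 8
# A per-subscription cap of 2 leaves half of the pool unused:
assert workers_started(demand, global_cap=8, per_sub_cap=2) == 4
```

This is the trade-off stated above: a per-subscription cap gives finer control, while a single global pool can reach higher total parallelism under skewed load.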
Let us see what Hou-San or others\r\n> think on this matter?\r\n\r\nThanks for Amit and Sawada-san's comments!\r\nI will think over these approaches and reply soon.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 13 May 2022 08:52:32 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, May 6, 2022 4:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for v5-0002 (TAP tests)\r\n> \r\n> Your changes followed a similar pattern of refactoring so most of my\r\n> comments below are repeated for all the files.\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 1. Commit message\r\n> \r\n> For the tap tests about streaming option in logical replication, test both\r\n> 'on' and 'apply' option.\r\n> \r\n> SUGGESTION\r\n> Change all TAP tests using the PUBLICATION \"streaming\" option, so they\r\n> now test both 'on' and 'apply' values.\r\n> \r\n\r\nOK. But \"streaming\" is a subscription option, so I modified it to:\r\nChange all TAP tests using the SUBSCRIPTION \"streaming\" option, so they\r\nnow test both 'on' and 'apply' values.\r\n\r\n> ~~~\r\n> \r\n> 4. src/test/subscription/t/015_stream.pl\r\n> \r\n> +# Test streaming mode apply\r\n> +$node_publisher->safe_psql('postgres', \"DELETE FROM test_tab WHERE (a > 2)\");\r\n> $node_publisher->wait_for_catchup($appname);\r\n> \r\n> I think those 2 lines do not really belong after the \"# Test streaming\r\n> mode apply\" comment. 
IIUC they are really just doing cleanup from the\r\n> prior test part so I think they should\r\n> \r\n> a) be *above* this comment (and say \"# cleanup the test data\") or\r\n> b) maybe it is best to put all the cleanup lines actually inside the\r\n> 'test_streaming' function so that the last thing the function does is\r\n> clean up after itself.\r\n> \r\n> option b seems tidier to me.\r\n> \r\n\r\nI also think option b seems better, so I put them inside test_streaming().\r\n\r\nThe rest of the comments are fixed as suggested.\r\n\r\nBesides, I noticed that we didn't free the background worker after preparing a\r\ntransaction in the main patch, so I made some small changes to fix it.\r\n\r\nAttached are the updated patches.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Fri, 13 May 2022 09:57:15 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v6-0001.\n\n======\n\n1. General\n\nI saw that now in most places you are referring to the new kind of\nworker as the \"apply background worker\". But there are a few comments\nremaining that still refer to \"bgworker\". Please search the entire\npatch for \"bgworker\" in the comments and replace them with \"apply\nbackground worker\".\n\n======\n\n2. Commit message\n\nWe also need to allow stream_stop to complete by the\napply background worker to finish it to avoid deadlocks because T-1's current\nstream of changes can update rows in conflicting order with T-2's next stream\nof changes.\n\nSomething is not right with this wording: \"to complete by the apply\nbackground worker to finish it...\".\n\nMaybe just omit the words \"to finish it\" (??).\n\n~~~\n\n3. 
Commit message\n\nThis patch also extends the subscription streaming option so that...\n\nSUGGESTION\nThis patch also extends the SUBSCRIPTION 'streaming' option so that...\n\n======\n\n4. src/backend/commands/subscriptioncmds.c - defGetStreamingMode\n\n+/*\n+ * Extract the streaming mode value from a DefElem. This is like\n+ * defGetBoolean() but also accepts the special value and \"apply\".\n+ */\n+static char\n+defGetStreamingMode(DefElem *def)\n\nTypo: \"special value and...\" -> \"special value of...\"\n\n======\n\n5. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n+\n+ if (subworker_dsm == DSM_HANDLE_INVALID)\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyWorkerMain\");\n+ else\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyBgworkerMain\");\n+\n+\n\n5a.\nThis condition should be using the new 'is_subworker' bool\n\n5b.\nDouble blank lines?\n\n~~~\n\n6. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n- else\n+ else if (subworker_dsm == DSM_HANDLE_INVALID)\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u\", subid);\n+ else\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication apply worker for subscription %u\", subid);\n snprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication worker\");\n\nThis condition also should be using the new 'is_subworker' bool\n\n~~~\n\n7. src/backend/replication/logical/launcher.c - logicalrep_worker_stop_internal\n\n+\n+ Assert(LWLockHeldByMe(LogicalRepWorkerLock));\n+\n\nI think there should be a comment here to say that this lock is\nrequired/expected to be released by the caller of this function.\n\n======\n\n8. 
src/backend/replication/logical/origin.c - replorigin_session_setup\n\n@@ -1068,7 +1068,7 @@ ReplicationOriginExitCleanup(int code, Datum arg)\n * with replorigin_session_reset().\n */\n void\n-replorigin_session_setup(RepOriginId node)\n+replorigin_session_setup(RepOriginId node, bool acquire)\n {\n\nThis function has been problematic for several reviews. I saw that you\nremoved the previously confusing comment but I still feel some kind of\nexplanation is needed for the vague 'acquire' parameter. OTOH perhaps\nif you just change the param name to 'must_acquire' then I think it\nwould be self-explanatory.\n\n======\n\n9. src/backend/replication/logical/worker.c - General\n\nSome of the logs have a prefix \"[Apply BGW #%u]\" and some do not; I\ndid not really understand how you decided to prefix or not so I did\nnot comment about them individually. Are they all OK? Perhaps if you\ncan explain the reason for the choices I can review it better next\ntime.\n\n~~~\n\n10. src/backend/replication/logical/worker.c - General\n\nThere are multiple places in the code where there is code checking\nif/else for bgworker or normal apply worker. And in those places,\nthere is often a comment like:\n\n\"If we are in main apply worker...\"\n\nBut it is redundant to say \"If we are\" because we know we are.\nInstead, those cases should say a comment at the top of the else like:\n\n/* This is the main apply worker. */\n\nAnd then the \"If we are in main apply worker\" text can be removed from\nthe comment. There are many examples in the patch like this. Please\nsearch and modify all of them.\n\n~~~\n\n11. src/backend/replication/logical/worker.c - file header comment\n\nThe whole comment is similar to the commit message so any changes made\nthere (for #2, #3) should be made here also.\n\n~~~\n\n12. 
src/backend/replication/logical/worker.c\n\n+typedef struct WorkerEntry\n+{\n+ TransactionId xid;\n+ WorkerState *wstate;\n+} WorkerEntry;\n\nMissing comment for this structure\n\n~~~\n\n13. src/backend/replication/logical/worker.c\n\nWorkerState\nWorkerEntry\n\nI felt that these struct names seem too generic - shouldn't they be\nsomething more like ApplyBgworkerState, ApplyBgworkerEntry\n\n~~~\n\n14. src/backend/replication/logical/worker.c\n\n+static List *ApplyWorkersIdleList = NIL;\n\nIMO maybe ApplyWorkersFreeList is a better name than IdleList for\nthis. \"Idle\" sounds just like it is paused rather than available for\nsomeone else to use. If you change this then please search the rest of\nthe patch for mentions in log messages etc\n\n~~~\n\n15. src/backend/replication/logical/worker.c\n\n+static WorkerState *stream_apply_worker = NULL;\n+\n+/* check if we apply transaction in apply bgworker */\n+#define apply_bgworker_active() (in_streamed_transaction &&\nstream_apply_worker != NULL)\n\nWording: \"if we apply transaction\" -> \"if we are applying the transaction\"\n\n~~~\n\n16. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ * For the main apply worker, if in streaming mode (receiving a block of\n+ * streamed transaction), we send the data to the apply background worker.\n+ *\n+ * For the apply background worker, define a savepoint if new subtransaction\n+ * was started.\n *\n * Returns true for streamed transactions, false otherwise (regular mode).\n */\n static bool\n handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n\n16a.\nTypo: \"if new subtransaction\" -> \"if a new subtransaction\"\n\n16b.\nThat \"regular mode\" comment seems not quite right because IIUC it also\nreturns false for a bgworker (which hardly seems like a \"regular\nmode\")\n\n~~~\n\n17. 
src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n- /* not in streaming mode */\n- if (!in_streamed_transaction)\n+ /*\n+ * Return if we are not in streaming mode and are not in an apply\n+ * background worker.\n+ */\n+ if (!in_streamed_transaction && !am_apply_bgworker())\n return false;\n\nSomehow I found this condition confusing, the comment is not helpful\neither because it just says exactly what the code says. Can you give a\nbetter explanatory comment?\n\ne.g.\nMaybe the comment should be:\n\"Return if not in streaming mode (unless this is an apply background worker)\"\n\ne.g.\nMaybe condition is easier to understand if written as:\nif (!(in_streamed_transaction || am_apply_bgworker()))\n\n~~~\n\n18. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ if (action == LOGICAL_REP_MSG_RELATION)\n+ {\n+ LogicalRepRelation *rel = logicalrep_read_rel(s);\n+ logicalrep_relmap_update(rel);\n+ }\n+\n+ }\n+ else\n+ {\n+ /* Add the new subxact to the array (unless already there). */\n+ subxact_info_add(current_xid);\n\nUnnecessary blank line.\n\n~~~\n\n19. src/backend/replication/logical/worker.c - find_or_start_apply_bgworker\n\n+ if (found)\n+ {\n+ entry->wstate->pstate->state = APPLY_BGWORKER_BUSY;\n+ return entry->wstate;\n+ }\n+ else if (!start)\n+ return NULL;\n+\n+ /* If there is at least one worker in the idle list, then take one. */\n+ if (list_length(ApplyWorkersIdleList) > 0)\n\nI felt that there should be a comment (after the return NULL) that says:\n\n/*\n * Start a new apply background worker\n */\n\n~~~\n\n20. src/backend/replication/logical/worker.c - apply_bgworker_free\n\n+/*\n+ * Add the worker to the freelist and remove the entry from hash table.\n+ */\n+static void\n+apply_bgworker_free(WorkerState *wstate)\n\n20a.\nTypo: \"freelist\" -> \"free list\"\n\n20b.\nElsewhere (and in the log message) this is called the idle list (but\nactually I prefer \"free list\" like in this comment). 
See also comment\n#14.\n\n~~~\n\n21. src/backend/replication/logical/worker.c - apply_bgworker_free\n\n+ hash_search(ApplyWorkersHash, &xid,\n+ HASH_REMOVE, &found);\n\n21a.\nIf you are not going to check the value of ‘found’ then why bother to\npass this param at all; can’t you just pass NULL? (I think I asked the\nsame question in a previous review)\n\n21b.\nThe wrapping over 2 lines seems unnecessary here.\n\n~~~\n\n22. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n /*\n- * Initialize the worker's stream_fileset if we haven't yet. This will be\n- * used for the entire duration of the worker so create it in a permanent\n- * context. We create this on the very first streaming message from any\n- * transaction and then use it for this and other streaming transactions.\n- * Now, we could create a fileset at the start of the worker as well but\n- * then we won't be sure that it will ever be used.\n+ * If we are in main apply worker, check if there is any free bgworker\n+ * we can use to process this transaction.\n */\n- if (MyLogicalRepWorker->stream_fileset == NULL)\n+ stream_apply_worker = apply_bgworker_find_or_start(stream_xid, first_segment);\n\n22a.\nTypo: \"in main apply worker\" -> \"in the main apply worker\"\n\n22b.\nSince this is not if/else code, it might be better to put\nAssert(!am_apply_bgworker()); above this just to make it more clear.\n\n~~~\n\n23. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n+ /*\n+ * If we have free worker or we already started to apply this\n+ * transaction in bgworker, we pass the data to worker.\n+ */\n\nSUGGESTION\nIf we have found a free worker or if we are already applying this\ntransaction in an apply background worker, then we pass the data to\nthat worker.\n\n~~~\n\n24. 
src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+apply_handle_stream_abort(StringInfo s)\n {\n- StringInfoData s2;\n- int nchanges;\n- char path[MAXPGPATH];\n- char *buffer = NULL;\n- MemoryContext oldcxt;\n- BufFile *fd;\n+ TransactionId xid;\n+ TransactionId subxid;\n\n- maybe_start_skipping_changes(lsn);\n+ if (in_streamed_transaction)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_PROTOCOL_VIOLATION),\n+ errmsg_internal(\"STREAM COMMIT message without STREAM STOP\")));\n\nTypo?\n\nShouldn't that errmsg say \"STREAM ABORT message...\" instead of \"STREAM\nCOMMIT message...\"\n\n~~~\n\n25. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ for(i = list_length(subxactlist) - 1; i >= 0; i--)\n+ {\n\nMissing space after \"for\"\n\n~~~\n\n26. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ if (found)\n+ {\n+ elog(LOG, \"rolled back to savepoint %s\", spname);\n+ RollbackToSavepoint(spname);\n+ CommitTransactionCommand();\n+ subxactlist = list_truncate(subxactlist, i + 1);\n+ }\n\nDoes this need to log anything if nothing was found? Or is it ok to\nleave as-is and silently ignore it?\n\n~~~\n\n27. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n+ if (len == 0)\n+ {\n+ elog(LOG, \"[Apply BGW #%u] got zero-length message, stopping\", pst->n);\n+ break;\n+ }\n\nMaybe it is unnecessary to say \"stopping\" because it will say that in\nthe next log anyway when it breaks out of the main loop.\n\n~~~\n\n28. src/backend/replication/logical/worker.c - LogicalApplyBgwLoop\n\n+ default:\n+ elog(ERROR, \"unexpected message\");\n+ break;\n\nPerhaps the switch byte should be in a variable so then you can log\nwhat was the unexpected byte code received. e.g. Similar to\napply_handle_tuple_routing function.\n\n~~~\n\n29. 
src/backend/replication/logical/worker.c - LogicalApplyBgwMain\n\n+ /*\n+ * The apply bgworker don't need to monopolize this replication origin\n+ * which was already acquired by its leader process.\n+ */\n+ replorigin_session_setup(originid, false);\n+ replorigin_session_origin = originid;\n+ CommitTransactionCommand();\n\nTypo: The apply bgworker don't need ...\"\n\n-> \"The apply background workers don't need ...\"\nor -> \"The apply background worker doesn't need ...\"\n\n~~~\n\n30. src/backend/replication/logical/worker.c - apply_bgworker_setup\n\n+/*\n+ * Start apply worker background worker process and allocate shared memory for\n+ * it.\n+ */\n+static WorkerState *\n+apply_bgworker_setup(void)\n\nTypo: \"apply worker background worker process\" -> \"apply background\nworker process\"\n\n~~~\n\n31. src/backend/replication/logical/worker.c - apply_bgworker_wait_for\n\n+ /* If any workers (or the postmaster) have died, we have failed. */\n+ if (status == APPLY_BGWORKER_EXIT)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"Background worker %u failed to apply transaction %u\",\n+ wstate->pstate->n, wstate->pstate->stream_xid)));\n\nThe errmsg should start with a lowercase letter.\n\n~~~\n\n32. src/backend/replication/logical/worker.c - check_workers_status\n\n+ /*\n+ * We don't lock here as in the worst case we will just detect the\n+ * failure of worker a bit later.\n+ */\n+ if (wstate->pstate->state == APPLY_BGWORKER_EXIT)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"Background worker %u exited unexpectedly\",\n+ wstate->pstate->n)));\n\nThe errmsg should start with a lowercase letter.\n\n~~~\n\n33. 
src/backend/replication/logical/worker.c - check_workers_status\n\n+/* Set the state of apply background worker */\n+static void\n+apply_bgworker_set_state(char state)\n\nMaybe OK, or perhaps choose from one of:\n- \"Set the state of an apply background worker\"\n- \"Set the apply background worker state\"\n\n======\n\n34. src/bin/pg_dump/pg_dump.c - getSubscriptions\n\n@@ -4450,7 +4450,7 @@ getSubscriptions(Archive *fout)\n if (fout->remoteVersion >= 140000)\n appendPQExpBufferStr(query, \" s.substream,\\n\");\n else\n- appendPQExpBufferStr(query, \" false AS substream,\\n\");\n+ appendPQExpBufferStr(query, \" 'f' AS substream,\\n\");\n\n\nIs that logic right? Before this patch the attribute was bool; now it\nis char. So doesn't there need to be some conversion/mapping here for\nwhen you read from >= 140000 but it was still bool so you need to\nconvert 'false' -> 'f' and 'true' -> 't'?\n\n======\n\n35. src/include/replication/origin.h\n\n@@ -53,7 +53,7 @@ extern XLogRecPtr\nreplorigin_get_progress(RepOriginId node, bool flush);\n\n extern void replorigin_session_advance(XLogRecPtr remote_commit,\n XLogRecPtr local_commit);\n-extern void replorigin_session_setup(RepOriginId node);\n+extern void replorigin_session_setup(RepOriginId node, bool acquire);\n\nAs previously suggested in comment #8 maybe the 2nd parm should be\n'must_acquire'.\n\n======\n\n36. src/include/replication/worker_internal.h\n\n@@ -60,6 +60,8 @@ typedef struct LogicalRepWorker\n */\n FileSet *stream_fileset;\n\n+ bool subworker;\n+\n\nProbably this new member deserves a comment.\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 18 May 2022 17:11:04 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v6-0002.\n\n======\n\n1. 
src/test/subscription/t/015_stream.pl\n\n+################################\n+# Test using streaming mode 'on'\n+################################\n $node_subscriber->safe_psql('postgres',\n \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub WITH (streaming = on)\"\n );\n-\n $node_publisher->wait_for_catchup($appname);\n-\n # Also wait for initial table sync to finish\n my $synced_query =\n \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n $node_subscriber->poll_query_until('postgres', $synced_query)\n or die \"Timed out while waiting for subscriber to synchronize data\";\n-\n my $result =\n $node_subscriber->safe_psql('postgres',\n \"SELECT count(*), count(c), count(d = 999) FROM test_tab\");\n is($result, qq(2|2|2), 'check initial data was copied to subscriber');\n\n1a.\nSeveral whitespace lines became removed by the patch. IMO it was\nbetter (e.g. less squishy) how it looked originally.\n\n1b.\nMaybe some more blank lines should be added to the 'apply' test part\ntoo, to match 1a.\n\n~~~\n\n2. src/test/subscription/t/015_stream.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n\nShould that say \"... after changing SUBSCRIPTION\"?\n\n~~~\n\n3. src/test/subscription/t/016_stream_subxact.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n+\n\nShould that say \"... after changing SUBSCRIPTION\"?\n\n~~~\n\n4. 
src/test/subscription/t/017_stream_ddl.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n+\n\nShould that say \"... after changing SUBSCRIPTION\"?\n\n~~~\n\n5. .../t/018_stream_subxact_abort.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n\nShould that say \"... after changing SUBSCRIPTION\" ?\n\n~~~\n\n6. .../t/019_stream_subxact_ddl_abort.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n+\n\nShould that say \"... after changing SUBSCRIPTION\"?\n\n~~~\n\n7. .../subscription/t/023_twophase_stream.pl\n\n###############################\n# Check initial data was copied to subscriber\n###############################\n\nPerhaps the above comment now looks a bit out-of-place with the extra #####.\n\nLooks better now as just:\n# Check initial data was copied to the subscriber\n\n~~~\n\n8. .../subscription/t/023_twophase_stream.pl\n\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = '$appname' AND state = 'streaming';\"\n+) or die \"Timed out while waiting for apply to restart after changing\nPUBLICATION\";\n\nShould that say \"... 
after changing SUBSCRIPTION\"?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 19 May 2022 16:22:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, May 13, 2022 4:53 PM houzj.fnst@fujitsu.com wrote:\r\n> On Wednesday, May 11, 2022 1:10 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, May 11, 2022 at 9:35 AM Masahiko Sawada\r\n> > <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Tue, May 10, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > >\r\n> > > > On Tue, May 10, 2022 at 10:39 AM Masahiko Sawada\r\n> > <sawada.mshk@gmail.com> wrote:\r\n> > > > >\r\n> > > > > Having it optional seems a good idea. BTW can the user configure\r\n> > > > > how many apply bgworkers can be used per subscription or in the\r\n> > > > > whole system? Like max_sync_workers_per_subscription, is it better\r\n> > > > > to have a configuration parameter or a subscription option for\r\n> > > > > that? If so, setting it to 0 probably means to disable the parallel apply\r\n> > feature.\r\n> > > > >\r\n> > > >\r\n> > > > Yeah, that might be useful but we are already giving an option while\r\n> > > > creating a subscription whether to allow parallelism, so will it be\r\n> > > > useful to give one more way to disable this feature? OTOH, having\r\n> > > > something like max_parallel_apply_workers/max_bg_apply_workers at\r\n> > > > the system level can give better control for how much parallelism\r\n> > > > the user wishes to allow for apply work.\r\n> > >\r\n> > > Or we can have something like\r\n> > > max_parallel_apply_workers_per_subscription that controls how many\r\n> > > parallel apply workers can launch per subscription. 
That also gives\r\n> > > better control for the number of parallel apply workers.\r\n> > >\r\n> >\r\n> > I think we can go either way in this matter as both have their pros and cons. I\r\n> > feel limiting the parallel workers per subscription gives better control but\r\n> > OTOH, it may not allow max usage of parallelism because some quota from\r\n> > other subscriptions might remain unused. Let us see what Hou-San or others\r\n> > think on this matter?\r\n> \r\n> Thanks for Amit and Sawada-san's comments !\r\n> I will think over these approaches and reply soon.\r\nAfter reading the thread, I wrote two patches for these comments.\r\n\r\nThe first patch (see v6-0003):\r\nImprove the feature as suggested in [1].\r\nFor the issue mentioned by Amit-san (there is a block problem in the case\r\nmentioned by Sawada-san), after investigating, I think this issue is caused by\r\nunique index. So I added a check to make sure the unique columns are the same\r\nbetween publisher and subscriber.\r\nFor other cases, I added the check that if there is any non-immutable function\r\npresent in expression in subscriber's relation. Check from the following 3\r\nitems:\r\n a. The function in triggers;\r\n b. Column default value expressions and domain constraints;\r\n c. Constraint expressions.\r\nBTW, I do not add partitioned table related code. I think this part needs other\r\nadditional modifications. I will add this later when these modifications are\r\nfinished.\r\n\r\nThe second patch (see v6-0004):\r\nImprove the feature as suggested in [2].\r\nAdd a GUC \"max_apply_bgworkers_per_subscription\" to control parallelism. This\r\nGUC controls how many apply background workers can be launched per\r\nsubscription. 
I set its default value to 3 and do not change the default value\r\nof other GUCs.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1JwahU_WuP3S%2B7POqta%3DPhm_3gxZeVmJuuoUq1NV%3DkrXA%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAA4eK1%2B7D4qAQUQEE8zzQ0fGCqeBWd3rzTaY5N0jVs-VXFc_Xw%40mail.gmail.com\r\n\r\nAttach the patches. (Did not change v6-0001 and v6-0002.)\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 25 May 2022 02:24:59 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, May 25, 2022 11:25 AM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the patches. (Did not change v6-0001 and v6-0002.)\r\nHi,\r\n\r\n\r\nSome review comments on the new patches from v6-0001 to v6-0004.\r\n\r\n<v6-0001>\r\n\r\n(1) create_subscription.sgml\r\n\r\n+ the transaction is committed. Note that if an error happens when\r\n+ applying changes in a background worker, it might not report the\r\n+ finish LSN of the remote transaction in the server log.\r\n\r\nI suggest to add a couple of sentences like below\r\nto the section of logical-replication-conflicts in logical-replication.sgml.\r\n\r\n\"\r\nSetting streaming mode to 'apply' can export invalid LSN as\r\nfinish LSN of failed transaction. 
Changing the streaming mode and\r\nmaking the same conflict writes the finish LSN of the\r\nfailed transaction in the server log if required.\r\n\"\r\n\r\n(2) ApplyBgworkerMain\r\n\r\n\r\n+ PG_TRY();\r\n+ {\r\n+ LogicalApplyBgwLoop(mqh, pst);\r\n+ }\r\n+ PG_CATCH();\r\n+ {\r\n\r\n...\r\n\r\n+ pgstat_report_subscription_error(MySubscription->oid, false);\r\n+\r\n+ PG_RE_THROW();\r\n+ }\r\n+ PG_END_TRY();\r\n\r\n\r\nWhen I stream a transaction in-progress and it causes an error(duplication error),\r\nseemingly the subscription stats (values in pg_stat_subscription_stats) don't\r\nget updated properly. The 2nd argument should be true for apply error.\r\n\r\nAlso, I observe that both apply_error_count and sync_error_count\r\nget updated together by error. I think we need to check this point as well.\r\n\r\n\r\n<v6-0003>\r\n\r\n\r\n(3) logicalrep_write_attrs\r\n\r\n+ if (rel->rd_rel->relhasindex)\r\n+ {\r\n+ List *indexoidlist = RelationGetIndexList(rel);\r\n+ ListCell *indexoidscan;\r\n+ foreach(indexoidscan, indexoidlist)\r\n\r\nand\r\n\r\n+ if (indexRel->rd_index->indisunique)\r\n+ {\r\n+ int i;\r\n+ /* Add referenced attributes to idindexattrs */\r\n+ for (i = 0; i < indexRel->rd_index->indnatts; i++)\r\n\r\nWe don't have each blank line after variable declarations.\r\nThere might be some other codes where this point can be applied.\r\nPlease check.\r\n\r\n\r\n(4)\r\n\r\n+ /*\r\n+ * If any unique index exist, check that they are same as remoterel.\r\n+ */\r\n+ if (!rel->sameunique)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n+ errmsg(\"cannot replicate relation with different unique index\"),\r\n+ errhint(\"Please change the streaming option to 'on' instead of 'apply'.\")));\r\n\r\n\r\nWhen I create a logical replication setup with different constraints\r\nand let streaming of in-progress transaction run,\r\nI keep getting this error.\r\n\r\nThis should be documented as a restriction or something,\r\nto let users know the replication 
progress can't go forward by\r\nany differences written like in the commit-message in v6-0003.\r\n\r\nAlso, it would be preferable to test this as well, if we\r\ndon't dislike having TAP tests for this.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n", "msg_date": "Sun, 29 May 2022 12:25:12 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, May 18, 2022 3:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \"Here are my review comments for v6-0001.\r\nThanks for your comments.\r\n\r\n> 7. src/backend/replication/logical/launcher.c - logicalrep_worker_stop_internal\r\n> \r\n> +\r\n> + Assert(LWLockHeldByMe(LogicalRepWorkerLock));\r\n> +\r\n> \r\n> I think there should be a comment here to say that this lock is\r\n> required/expected to be released by the caller of this function.\r\nIMHO, it maybe not a problem to read code here.\r\nIn addition, keep consistent with other places where invoke this function in\r\nthe same file. So I did not change this.\r\n\r\n> 9. src/backend/replication/logical/worker.c - General\r\n> \r\n> Some of the logs have a prefix \"[Apply BGW #%u]\" and some do not; I\r\n> did not really understand how you decided to prefix or not so I did\r\n> not comment about them individually. Are they all OK? Perhaps if you\r\n> can explain the reason for the choices I can review it better next\r\n> time.\r\nI think most of these logs should be logged in debug mode. So I changed them to\r\n\"DEBUG1\" level.\r\nAnd I added the prefix to all messages logged by apply background worker and\r\ndeleted some logs that I think maybe not very helpful. \r\n\r\n> 11. 
src/backend/replication/logical/worker.c - file header comment\r\n> \r\n> The whole comment is similar to the commit message so any changes made\r\n> there (for #2, #3) should be made here also.\r\nImprove the comments as suggested in #2.\r\nSorry but I did not find same message as #2 here.\r\n\r\n> 13. src/backend/replication/logical/worker.c\r\n> \r\n> WorkerState\r\n> WorkerEntry\r\n> \r\n> I felt that these struct names seem too generic - shouldn't they be\r\n> something more like ApplyBgworkerState, ApplyBgworkerEntry\r\n> \r\n> ~~~\r\nI think we have used \"ApplyBgworkerState\" in the patch. So I improved this with\r\nthe following modifications:\r\n```\r\nApplyBgworkerState -> ApplyBgworkerStatus\r\nWorkerState -> ApplyBgworkerState\r\nWorkerEntry -> ApplyBgworkerEntry\r\n```\r\nBTW, I also modified the relevant comments and variable names.\r\n\r\n> 16. src/backend/replication/logical/worker.c - handle_streamed_transaction\r\n> \r\n> + * For the main apply worker, if in streaming mode (receiving a block of\r\n> + * streamed transaction), we send the data to the apply background worker.\r\n> + *\r\n> + * For the apply background worker, define a savepoint if new subtransaction\r\n> + * was started.\r\n> *\r\n> * Returns true for streamed transactions, false otherwise (regular mode).\r\n> */\r\n> static bool\r\n> handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\r\n> \r\n> 16a.\r\n> Typo: \"if new subtransaction\" -> \"if a new subtransaction\"\r\n> \r\n> 16b.\r\n> That \"regular mode\" comment seems not quite right because IIUC it also\r\n> returns false also for a bgworker (which hardly seems like a \"regular\r\n> mode\")\r\n16a. Improved it as suggested.\r\n16b. 
Changed the comment as follows:\r\nFrom:\r\n```\r\n* Returns true for streamed transactions, false otherwise (regular mode).\r\n```\r\nTo:\r\n```\r\n * For non-streamed transactions, returns false;\r\n * For streamed transactions, returns true if in main apply worker, false\r\n * otherwise.\r\n```\r\n\r\n> 19. src/backend/replication/logical/worker.c - find_or_start_apply_bgworker\r\n> \r\n> + if (found)\r\n> + {\r\n> + entry->wstate->pstate->state = APPLY_BGWORKER_BUSY;\r\n> + return entry->wstate;\r\n> + }\r\n> + else if (!start)\r\n> + return NULL;\r\n> +\r\n> + /* If there is at least one worker in the idle list, then take one. */\r\n> + if (list_length(ApplyWorkersIdleList) > 0)\r\n> \r\n> I felt that there should be a comment (after the return NULL) that says:\r\n> \r\n> /*\r\n> * Start a new apply background worker\r\n> */\r\n> \r\n> ~~~\r\nImprove this comment here.\r\nAfter the code that you mentioned, it will try to get a apply background\r\nworker (try to start one or take one from idle list). So I change the comment\r\nas follows:\r\nFrom:\r\n```\r\n/* If there is at least one worker in the idle list, then take one. */\r\n```\r\nTo:\r\n```\r\n/*\r\n * Now, we try to get a apply background worker.\r\n * If there is at least one worker in the idle list, then take one.\r\n * Otherwise, we try to start a new apply background worker.\r\n */\r\n```\r\n\r\n> 22. src/backend/replication/logical/worker.c - apply_handle_stream_start\r\n> \r\n> /*\r\n> - * Initialize the worker's stream_fileset if we haven't yet. This will be\r\n> - * used for the entire duration of the worker so create it in a permanent\r\n> - * context. 
We create this on the very first streaming message from any\r\n> - * transaction and then use it for this and other streaming transactions.\r\n> - * Now, we could create a fileset at the start of the worker as well but\r\n> - * then we won't be sure that it will ever be used.\r\n> + * If we are in main apply worker, check if there is any free bgworker\r\n> + * we can use to process this transaction.\r\n> */\r\n> - if (MyLogicalRepWorker->stream_fileset == NULL)\r\n> + stream_apply_worker = apply_bgworker_find_or_start(stream_xid,\r\n> first_segment);\r\n> \r\n> 22a.\r\n> Typo: \"in main apply worker\" -> \"in the main apply worker\"\r\n> \r\n> 22b.\r\n> Since this is not if/else code, it might be better to put\r\n> Assert(!am_apply_bgworker()); above this just to make it more clear.\r\n22a. Improved it as suggested.\r\n22b. \r\nIMHO, since we have `if (am_apply_bgworker())` above and it will return in this\r\nif-condition, so I just think Assert() might be a bit redundant here.\r\nSo I did not change this.\r\n \r\n> 26. src/backend/replication/logical/worker.c - apply_handle_stream_abort\r\n> \r\n> + if (found)\r\n> + {\r\n> + elog(LOG, \"rolled back to savepoint %s\", spname);\r\n> + RollbackToSavepoint(spname);\r\n> + CommitTransactionCommand();\r\n> + subxactlist = list_truncate(subxactlist, i + 1);\r\n> + }\r\n> \r\n> Does this need to log anything if nothing was found? Or is it ok to\r\n> leave as-is and silently ignore it?\r\nYes, I think it is okay.\r\n\r\n> 33. src/backend/replication/logical/worker.c - check_workers_status\r\n> \r\n> +/* Set the state of apply background worker */\r\n> +static void\r\n> +apply_bgworker_set_state(char state)\r\n> \r\n> Maybe OK, or perhaps choose from one of:\r\n> - \"Set the state of an apply background worker\"\r\n> - \"Set the apply background worker state\"\r\nImprove it by using the second one.\r\n\r\n> 34. 
src/bin/pg_dump/pg_dump.c - getSubscriptions\r\n> \r\n> @@ -4450,7 +4450,7 @@ getSubscriptions(Archive *fout)\r\n> if (fout->remoteVersion >= 140000)\r\n> appendPQExpBufferStr(query, \" s.substream,\\n\");\r\n> else\r\n> - appendPQExpBufferStr(query, \" false AS substream,\\n\");\r\n> + appendPQExpBufferStr(query, \" 'f' AS substream,\\n\");\r\n> \r\n> \r\n> Is that logic right? Before this patch the attribute was bool; now it\r\n> is char. So doesn't there need to be some conversion/mapping here for\r\n> when you read from >= 140000 but it was still bool so you need to\r\n> convert 'false' -> 'f' and 'true' -> 't'?\r\nYes, I think it is right.\r\nWe could handle the input of option \"streaming\" : on/true/off/false/apply.\r\n\r\nThe rest of the comments are improved as suggested.\r\n\r\n\r\nAnd thanks for Shi Yu to improve the patch 0002 by addressing the comments in\r\n[1].\r\n\r\nAttach the new patches(only changed 0001 and 0002)\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPv_0nfUxriwxBQnZTOF5dy5nfG5NtWMr8e00mPrt2Vjzw%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 30 May 2022 08:51:59 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 30, 2022 at 2:22 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patches(only changed 0001 and 0002)\n>\n\nFew comments/suggestions for 0001 and 0003\n=====================================\n0001\n--------\n1.\n+ else\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication apply worker for subscription %u\", subid);\n\nCan we slightly change the message to: \"logical replication background\napply worker for subscription %u\"?\n\n2. Can we think of separating the new logic for applying the xact by\nbgworker into a new file like applybgwroker or applyparallel? 
We have\npreviously done the same in the case of vacuum (see vacuumparallel.c).\n\n3.\n+ /*\n+ * XXX The publisher side doesn't always send relation update messages\n+ * after the streaming transaction, so update the relation in main\n+ * apply worker here.\n+ */\n+ if (action == LOGICAL_REP_MSG_RELATION)\n+ {\n+ LogicalRepRelation *rel = logicalrep_read_rel(s);\n+ logicalrep_relmap_update(rel);\n+ }\n\nI think the publisher side won't send the relation update message\nafter streaming transaction only if it has already been sent for a\nnon-streaming transaction in which case we don't need to update the\nlocal cache here. This is as per my understanding of\nmaybe_send_schema(), do let me know if I am missing something? If my\nunderstanding is correct then we don't need this change.\n\n4.\n+ * For the main apply worker, if in streaming mode (receiving a block of\n+ * streamed transaction), we send the data to the apply background worker.\n *\n- * If in streaming mode (receiving a block of streamed transaction), we\n- * simply redirect it to a file for the proper toplevel transaction.\n\nThis comment is slightly confusing. 
Can we change it to something\nlike: \"In streaming case (receiving a block of streamed transaction),\nfor SUBSTREAM_ON mode, we simply redirect it to a file for the proper\ntoplevel transaction, and for SUBSTREAM_APPLY mode, we send the\nchanges to background apply worker.\"?\n\n5.\n+apply_handle_stream_abort(StringInfo s)\n {\n...\n...\n+ /*\n+ * If the two XIDs are the same, it's in fact abort of toplevel xact,\n+ * so just free the subxactlist.\n+ */\n+ if (subxid == xid)\n+ {\n+ set_apply_error_context_xact(subxid, InvalidXLogRecPtr);\n\n- fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDONLY,\n- false);\n+ AbortCurrentTransaction();\n\n- buffer = palloc(BLCKSZ);\n+ EndTransactionBlock(false);\n+ CommitTransactionCommand();\n+\n+ in_remote_transaction = false;\n...\n...\n}\n\nHere, can we update the replication origin as we are doing in\napply_handle_rollback_prepared? Currently, we don't do it because we\nare just cleaning up temporary files for which we don't even have a\ntransaction. Also, we don't have the required infrastructure to\nadvance origins for aborts as we have for abort prepared. See commits\n[1eb6d6527a][8a812e5106]. If we think it is a good idea then I think\nwe need to send abort_lsn and abort_time from the publisher and we\nneed to be careful to make it work with lower subscriber versions that\ndon't have the facility to process these additional values.\n\n0003\n--------\n6.\n+ /*\n+ * If any unique index exist, check that they are same as remoterel.\n+ */\n+ if (!rel->sameunique)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot replicate relation with different unique index\"),\n+ errhint(\"Please change the streaming option to 'on' instead of 'apply'.\")));\n\nI think we can do better here. 
Instead of simply erroring out and\nasking the user to change streaming mode, we can remember this in the\nsystem catalog probably in pg_subscription, and then on restart, we\ncan change the streaming mode to 'on', perform the transaction, and\nagain change the streaming mode to apply. I am not sure whether we\nwant to do it in the first version or not, so if you agree with this,\ndeveloping it as a separate patch would be a good idea.\n\nAlso, please update comments here as to why we don't handle such cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 May 2022 17:08:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 30, 2022 at 5:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 30, 2022 at 2:22 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > Attach the new patches(only changed 0001 and 0002)\n> >\n>\n\nThis patch allows the same replication origin to be used by the main\napply worker and the bgworker that uses it to apply streaming\ntransactions. See the changes [1] in the patch. I am not completely\nsure whether that is a good idea even though I could not spot or think\nof problems that can't be fixed in your patch. I see that currently\nboth the main apply worker and bgworker will assign MyProcPid to the\nassigned origin slot, this can create the problem because\nReplicationOriginExitCleanup() can clean it up even though the main\napply worker or another bgworker is still using that origin slot. Now,\none way to fix is that we assign only the main apply worker's\nMyProcPid to session_replication_state->acquired_by. I have tried to\nthink about the concurrency issues as multiple workers could now point\nto the same replication origin state. 
I think it is safe because the\npatch maintains the commit order by allowing only one process to\ncommit at a time, so no two workers will be operating on the same\norigin at the same time. Even, though there is no case where the patch\nwill try to advance the session's origin concurrently, it appears safe\nto do so as we change/advance the session_origin LSNs under\nreplicate_state LWLock.\n\nAnother idea could be that we allow multiple replication origins (one\nfor each bgworker and one for the main apply worker) for the apply\nworkers corresponding to a subscription. Then on restart, we can find\nthe highest LSN among all the origins for a subscription. This should\nwork primarily because we will maintain the commit order. Now, for\nthis to work we need to somehow map all the origins for a subscription\nand one possibility is that we have a subscription id in each of the\norigin names. Currently we use (\"pg_%u\", MySubscription->oid) as\norigin_name. We can probably append some unique identifier number for\neach worker to allow each origin to have a subscription id. We need to\ndrop all origins for a particular subscription on DROP SUBSCRIPTION. 
I\nthink having multiple origins for the same subscription will have some\nadditional work when we try to filter changes based on origin.\n\nThe advantage of the first idea is that it won't increase the need to\nhave more origins per subscription but it is quite possible that I am\nmissing something and there are problems due to which we can't use\nthat approach.\n\nThoughts?\n\n[1]:\n-replorigin_session_setup(RepOriginId node)\n+replorigin_session_setup(RepOriginId node, bool acquire)\n {\n static bool registered_cleanup;\n int i;\n@@ -1110,7 +1110,7 @@ replorigin_session_setup(RepOriginId node)\n if (curstate->roident != node)\n continue;\n\n- else if (curstate->acquired_by != 0)\n+ else if (curstate->acquired_by != 0 && acquire)\n {\n...\n...\n\n+ /*\n+ * The apply bgworker don't need to monopolize this replication origin\n+ * which was already acquired by its leader process.\n+ */\n+ replorigin_session_setup(originid, false);\n+ replorigin_session_origin = originid;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 May 2022 14:22:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 31, 2022 at 5:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 30, 2022 at 5:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 30, 2022 at 2:22 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > Attach the new patches(only changed 0001 and 0002)\n> > >\n> >\n>\n> This patch allows the same replication origin to be used by the main\n> apply worker and the bgworker that uses it to apply streaming\n> transactions. See the changes [1] in the patch. I am not completely\n> sure whether that is a good idea even though I could not spot or think\n> of problems that can't be fixed in your patch. 
I see that currently\n> both the main apply worker and bgworker will assign MyProcPid to the\n> assigned origin slot, this can create the problem because\n> ReplicationOriginExitCleanup() can clean it up even though the main\n> apply worker or another bgworker is still using that origin slot.\n\nGood point.\n\n> Now,\n> one way to fix is that we assign only the main apply worker's\n> MyProcPid to session_replication_state->acquired_by. I have tried to\n> think about the concurrency issues as multiple workers could now point\n> to the same replication origin state. I think it is safe because the\n> patch maintains the commit order by allowing only one process to\n> commit at a time, so no two workers will be operating on the same\n> origin at the same time. Even, though there is no case where the patch\n> will try to advance the session's origin concurrently, it appears safe\n> to do so as we change/advance the session_origin LSNs under\n> replicate_state LWLock.\n\nRight. That way, the cleanup is done only by the main apply worker.\nProbably the bgworker can check if the origin is already acquired by\nits (leader) main apply worker process for safety.\n\n>\n> Another idea could be that we allow multiple replication origins (one\n> for each bgworker and one for the main apply worker) for the apply\n> workers corresponding to a subscription. Then on restart, we can find\n> the highest LSN among all the origins for a subscription. This should\n> work primarily because we will maintain the commit order. Now, for\n> this to work we need to somehow map all the origins for a subscription\n> and one possibility is that we have a subscription id in each of the\n> origin names. Currently we use (\"pg_%u\", MySubscription->oid) as\n> origin_name. We can probably append some unique identifier number for\n> each worker to allow each origin to have a subscription id. We need to\n> drop all origins for a particular subscription on DROP SUBSCRIPTION. 
I\n> think having multiple origins for the same subscription will have some\n> additional work when we try to filter changes based on origin.\n\nIt also seems to work but need additional work and resource.\n\n> The advantage of the first idea is that it won't increase the need to\n> have more origins per subscription but it is quite possible that I am\n> missing something and there are problems due to which we can't use\n> that approach.\n\nI prefer the first idea as it's simpler than the second one. I don't\nsee any concurrency problem so far unless I'm not missing something.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 1 Jun 2022 11:00:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jun 1, 2022 at 7:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 31, 2022 at 5:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 30, 2022 at 5:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 30, 2022 at 2:22 PM wangw.fnst@fujitsu.com\n> > > <wangw.fnst@fujitsu.com> wrote:\n> > > >\n> > > > Attach the new patches(only changed 0001 and 0002)\n> > > >\n> > >\n> >\n> > This patch allows the same replication origin to be used by the main\n> > apply worker and the bgworker that uses it to apply streaming\n> > transactions. See the changes [1] in the patch. I am not completely\n> > sure whether that is a good idea even though I could not spot or think\n> > of problems that can't be fixed in your patch. 
I see that currently\n> > both the main apply worker and bgworker will assign MyProcPid to the\n> > assigned origin slot, this can create the problem because\n> > ReplicationOriginExitCleanup() can clean it up even though the main\n> > apply worker or another bgworker is still using that origin slot.\n>\n> Good point.\n>\n> > Now,\n> > one way to fix is that we assign only the main apply worker's\n> > MyProcPid to session_replication_state->acquired_by. I have tried to\n> > think about the concurrency issues as multiple workers could now point\n> > to the same replication origin state. I think it is safe because the\n> > patch maintains the commit order by allowing only one process to\n> > commit at a time, so no two workers will be operating on the same\n> > origin at the same time. Even, though there is no case where the patch\n> > will try to advance the session's origin concurrently, it appears safe\n> > to do so as we change/advance the session_origin LSNs under\n> > replicate_state LWLock.\n>\n> Right. That way, the cleanup is done only by the main apply worker.\n> Probably the bgworker can check if the origin is already acquired by\n> its (leader) main apply worker process for safety.\n>\n\nYeah, that makes sense.\n\n> >\n> > Another idea could be that we allow multiple replication origins (one\n> > for each bgworker and one for the main apply worker) for the apply\n> > workers corresponding to a subscription. Then on restart, we can find\n> > the highest LSN among all the origins for a subscription. This should\n> > work primarily because we will maintain the commit order. Now, for\n> > this to work we need to somehow map all the origins for a subscription\n> > and one possibility is that we have a subscription id in each of the\n> > origin names. Currently we use (\"pg_%u\", MySubscription->oid) as\n> > origin_name. We can probably append some unique identifier number for\n> > each worker to allow each origin to have a subscription id. 
We need to\n> > drop all origins for a particular subscription on DROP SUBSCRIPTION. I\n> > think having multiple origins for the same subscription will have some\n> > additional work when we try to filter changes based on origin.\n>\n> It also seems to work but need additional work and resource.\n>\n> > The advantage of the first idea is that it won't increase the need to\n> > have more origins per subscription but it is quite possible that I am\n> > missing something and there are problems due to which we can't use\n> > that approach.\n>\n> I prefer the first idea as it's simpler than the second one. I don't\n> see any concurrency problem so far unless I'm not missing something.\n>\n\nThanks for evaluating it and sharing your opinion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Jun 2022 10:49:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jun 1, 2022 1:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Jun 1, 2022 at 7:30 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, May 31, 2022 at 5:53 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Mon, May 30, 2022 at 5:08 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Mon, May 30, 2022 at 2:22 PM wangw.fnst@fujitsu.com\r\n> > > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > > >\r\n> > > > > Attach the new patches(only changed 0001 and 0002)\r\n> > > > >\r\n> > > >\r\n> > >\r\n> > > This patch allows the same replication origin to be used by the main\r\n> > > apply worker and the bgworker that uses it to apply streaming\r\n> > > transactions. See the changes [1] in the patch. I am not completely\r\n> > > sure whether that is a good idea even though I could not spot or think\r\n> > > of problems that can't be fixed in your patch. 
I see that currently\r\n> > > both the main apply worker and bgworker will assign MyProcPid to the\r\n> > > assigned origin slot, this can create the problem because\r\n> > > ReplicationOriginExitCleanup() can clean it up even though the main\r\n> > > apply worker or another bgworker is still using that origin slot.\r\n> >\r\n> > Good point.\r\n> >\r\n> > > Now,\r\n> > > one way to fix is that we assign only the main apply worker's\r\n> > > MyProcPid to session_replication_state->acquired_by. I have tried to\r\n> > > think about the concurrency issues as multiple workers could now point\r\n> > > to the same replication origin state. I think it is safe because the\r\n> > > patch maintains the commit order by allowing only one process to\r\n> > > commit at a time, so no two workers will be operating on the same\r\n> > > origin at the same time. Even, though there is no case where the patch\r\n> > > will try to advance the session's origin concurrently, it appears safe\r\n> > > to do so as we change/advance the session_origin LSNs under\r\n> > > replicate_state LWLock.\r\n> >\r\n> > Right. That way, the cleanup is done only by the main apply worker.\r\n> > Probably the bgworker can check if the origin is already acquired by\r\n> > its (leader) main apply worker process for safety.\r\n> >\r\n> \r\n> Yeah, that makes sense.\r\n> \r\n> > >\r\n> > > Another idea could be that we allow multiple replication origins (one\r\n> > > for each bgworker and one for the main apply worker) for the apply\r\n> > > workers corresponding to a subscription. Then on restart, we can find\r\n> > > the highest LSN among all the origins for a subscription. This should\r\n> > > work primarily because we will maintain the commit order. Now, for\r\n> > > this to work we need to somehow map all the origins for a subscription\r\n> > > and one possibility is that we have a subscription id in each of the\r\n> > > origin names. 
Currently we use (\"pg_%u\", MySubscription->oid) as\r\n> > > origin_name. We can probably append some unique identifier number for\r\n> > > each worker to allow each origin to have a subscription id. We need to\r\n> > > drop all origins for a particular subscription on DROP SUBSCRIPTION. I\r\n> > > think having multiple origins for the same subscription will have some\r\n> > > additional work when we try to filter changes based on origin.\r\n> >\r\n> > It also seems to work but need additional work and resource.\r\n> >\r\n> > > The advantage of the first idea is that it won't increase the need to\r\n> > > have more origins per subscription but it is quite possible that I am\r\n> > > missing something and there are problems due to which we can't use\r\n> > > that approach.\r\n> >\r\n> > I prefer the first idea as it's simpler than the second one. I don't\r\n> > see any concurrency problem so far unless I'm not missing something.\r\n> >\r\n> \r\n> Thanks for evaluating it and sharing your opinion.\r\nThanks for your comments and opinions.\r\n\r\nI fixed this problem by following the first suggestion. I also added the\r\nrelevant checks and changed the relevant comments.\r\n\r\nThanks for Shi Yu to add some tests as suggested by Osumi-san in [1].#4 and\r\nimprove the 0002 patch by adding some checks to see if the apply background\r\nworker starts.\r\n\r\nAttach the new patches.\r\n1. Add some descriptions related to \"apply\" mode to logical-replication.sgml\r\nand create_subscription.sgml.(suggested by Osumi-san in [1].#1,#4)\r\n2. Fix the problem that values in pg_stat_subscription_stats are not updated\r\nproperly. (suggested by Osumi-san in [1].#2)\r\n3. Improve the code formatting of the patches. (suggested by Osumi-san in [1].#3)\r\n4. Add some tests in 0003 patch. And improve some tests by adding some checks\r\nto see if the apply background worker starts in 0002 patch. (suggested by\r\nOsumi-san in [1].#4 and Shi Yu)\r\n5. Improve the log message. 
(suggested by Amit-san in [2].#1)\r\n6. Separate the new logic related to apply background worker to new file\r\napplybgwroker.c. (suggested by Amit-san in [2].#2)\r\n7. Improve function handle_streamed_transaction. (suggested by Amit-san in[2].#3)\r\n8. Improve some comments. (suggested by Amit-san in [2].#4,#6 and me)\r\n9. Fix the problem that the structure member \"acquired_by\" is incorrectly set\r\nwhen apply background worker tries to get replication origin.\r\n(suggested by Amit-san in [3])\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83735AEE38370254ED495B06EDDA9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n[2] - https://www.postgresql.org/message-id/CAA4eK1Jt08SYbRt_-rbSWNg%3DX9-m8%2BRdP5PosfnQgyF-z8bkxQ%40mail.gmail.com\r\n[3] - https://www.postgresql.org/message-id/CAA4eK1%2BZ6ahpTQK2KzkvQ1kN-urVS9-N_RDM11MS%2BbtqaB8Bpw%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 2 Jun 2022 10:01:37 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 30, 2022 7:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Few comments/suggestions for 0001 and 0003\r\n> =====================================\r\n> 0001\r\n> --------\r\nThanks for your comments.\r\n\r\n> 1.\r\n> + else\r\n> + snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n> + \"logical replication apply worker for subscription %u\", subid);\r\n> \r\n> Can we slightly change the message to: \"logical replication background\r\n> apply worker for subscription %u\"?\r\nImprove the message as suggested.\r\n\r\n> 2. Can we think of separating the new logic for applying the xact by\r\n> bgworker into a new file like applybgwroker or applyparallel? We have\r\n> previously done the same in the case of vacuum (see vacuumparallel.c).\r\nImprove the patch as suggested. 
I separated the new logic related to apply\r\nbackground worker to new file src/backend/replication/logical/applybgwroker.c.\r\n\r\n> 3.\r\n> + /*\r\n> + * XXX The publisher side doesn't always send relation update messages\r\n> + * after the streaming transaction, so update the relation in main\r\n> + * apply worker here.\r\n> + */\r\n> + if (action == LOGICAL_REP_MSG_RELATION)\r\n> + {\r\n> + LogicalRepRelation *rel = logicalrep_read_rel(s);\r\n> + logicalrep_relmap_update(rel);\r\n> + }\r\n> \r\n> I think the publisher side won't send the relation update message\r\n> after streaming transaction only if it has already been sent for a\r\n> non-streaming transaction in which case we don't need to update the\r\n> local cache here. This is as per my understanding of\r\n> maybe_send_schema(), do let me know if I am missing something? If my\r\n> understanding is correct then we don't need this change.\r\nI think we need this change because the publisher will invoke function\r\ncleanup_rel_sync_cache when committing a streaming transaction, then it will\r\nset \"schema_sent\" to true for the related entry. Later, the publisher may not\r\nsend this schema in function maybe_send_schema because we already sent this\r\nschema (schema_sent = true).\r\nIf we do not have this change, it would cause an error in the following case:\r\nSuppose that after the walsender worker starts, first we commit a streaming\r\ntransaction. The walsender sends a relation update message, and only the apply\r\nbackground worker can update its relation map cache from this message. After\r\nthis, if we commit a non-streamed transaction that contains the same replicated\r\ntable, the walsender will not send a relation update message, so the main apply\r\nworker would not get the relation update message.\r\nI think we need this change to update the relation map cache not only in the\r\napply background worker but also in the main apply worker.\r\nIn addition, we should also handle the LOGICAL_REP_MSG_TYPE message just like\r\nLOGICAL_REP_MSG_RELATION. 
So improve the code you mentioned. BTW, I simplify\r\nthe function handle_streamed_transaction().\r\n\r\n> 4.\r\n> + * For the main apply worker, if in streaming mode (receiving a block of\r\n> + * streamed transaction), we send the data to the apply background worker.\r\n> *\r\n> - * If in streaming mode (receiving a block of streamed transaction), we\r\n> - * simply redirect it to a file for the proper toplevel transaction.\r\n> \r\n> This comment is slightly confusing. Can we change it to something\r\n> like: \"In streaming case (receiving a block of streamed transaction),\r\n> for SUBSTREAM_ON mode, we simply redirect it to a file for the proper\r\n> toplevel transaction, and for SUBSTREAM_APPLY mode, we send the\r\n> changes to background apply worker.\"?\r\nImprove the comments as suggested.\r\n\r\n> 5.\r\n> +apply_handle_stream_abort(StringInfo s)\r\n> {\r\n> ...\r\n> ...\r\n> + /*\r\n> + * If the two XIDs are the same, it's in fact abort of toplevel xact,\r\n> + * so just free the subxactlist.\r\n> + */\r\n> + if (subxid == xid)\r\n> + {\r\n> + set_apply_error_context_xact(subxid, InvalidXLogRecPtr);\r\n> \r\n> - fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path,\r\n> O_RDONLY,\r\n> - false);\r\n> + AbortCurrentTransaction();\r\n> \r\n> - buffer = palloc(BLCKSZ);\r\n> + EndTransactionBlock(false);\r\n> + CommitTransactionCommand();\r\n> +\r\n> + in_remote_transaction = false;\r\n> ...\r\n> ...\r\n> }\r\n> \r\n> Here, can we update the replication origin as we are doing in\r\n> apply_handle_rollback_prepared? Currently, we don't do it because we\r\n> are just cleaning up temporary files for which we don't even have a\r\n> transaction. Also, we don't have the required infrastructure to\r\n> advance origins for aborts as we have for abort prepared. See commits\r\n> [1eb6d6527a][8a812e5106]. 
If we think it is a good idea then I think\r\n> we need to send abort_lsn and abort_time from the publisher and we\r\n> need to be careful to make it work with lower subscriber versions that\r\n> don't have the facility to process these additional values.\r\nI think it is a good idea. I will consider this and add this part in next\r\nversion.\r\n\r\n> 0003\r\n> --------\r\n> 6.\r\n> + /*\r\n> + * If any unique index exist, check that they are same as remoterel.\r\n> + */\r\n> + if (!rel->sameunique)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n> + errmsg(\"cannot replicate relation with different unique index\"),\r\n> + errhint(\"Please change the streaming option to 'on' instead of 'apply'.\")));\r\n> \r\n> I think we can do better here. Instead of simply erroring out and\r\n> asking the user to change streaming mode, we can remember this in the\r\n> system catalog probably in pg_subscription, and then on restart, we\r\n> can change the streaming mode to 'on', perform the transaction, and\r\n> again change the streaming mode to apply. I am not sure whether we\r\n> want to do it in the first version or not, so if you agree with this,\r\n> developing it as a separate patch would be a good idea.\r\n> \r\n> Also, please update comments here as to why we don't handle such cases.\r\nYes, I think it is a good idea. I will develop it as a separate patch later.\r\nAnd I added the comments atop function apply_bgworker_relation_check as\r\nbelow:\r\n```\r\n * Although we maintains the commit order by allowing only one process to\r\n * commit at a time, our access order to the relation has changed.\r\n * This could cause unexpected problems if the unique column on the replicated\r\n * table is inconsistent with the publisher-side or contains non-immutable\r\n * functions when applying transactions in the apply background worker.\r\n```\r\n\r\nI also made some other changes. 
The new patches and the modification details\r\nwere attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62758A881FF3240171B7B21B9EDE9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 2 Jun 2022 10:03:32 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, May 29, 2022 8:25 PM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi,\r\n> \r\n> \r\n> Some review comments on the new patches from v6-0001 to v6-0004.\r\nThanks for your comments.\r\n\r\n> <v6-0001>\r\n> \r\n> (1) create_subscription.sgml\r\n> \r\n> + the transaction is committed. Note that if an error happens when\r\n> + applying changes in a background worker, it might not report the\r\n> + finish LSN of the remote transaction in the server log.\r\n> \r\n> I suggest to add a couple of sentences like below\r\n> to the section of logical-replication-conflicts in logical-replication.sgml.\r\n> \r\n> \"\r\n> Setting streaming mode to 'apply' can export invalid LSN as\r\n> finish LSN of failed transaction. Changing the streaming mode and\r\n> making the same conflict writes the finish LSN of the\r\n> failed transaction in the server log if required.\r\n> \"\r\nAdd the sentences as suggested.\r\n\r\n> (2) ApplyBgworkerMain\r\n> \r\n> \r\n> + PG_TRY();\r\n> + {\r\n> + LogicalApplyBgwLoop(mqh, pst);\r\n> + }\r\n> + PG_CATCH();\r\n> + {\r\n> \r\n> ...\r\n> \r\n> + pgstat_report_subscription_error(MySubscription->oid, false);\r\n> +\r\n> + PG_RE_THROW();\r\n> + }\r\n> + PG_END_TRY();\r\n> \r\n> \r\n> When I stream a transaction in-progress and it causes an error(duplication error),\r\n> seemingly the subscription stats (values in pg_stat_subscription_stats) don't\r\n> get updated properly. 
The 2nd argument should be true for apply error.\r\n> \r\n> Also, I observe that both apply_error_count and sync_error_count\r\n> get updated together by error. I think we need to check this point as well.\r\nYes, we should pass \"true\" as the 2nd argument here to log an \"apply error\".\r\nAnd after checking the second point you mentioned, I think it is caused by the\r\nfirst point you mentioned and another reason:\r\nWith patch v6 (or v7), when we specify the option \"apply\" and a streamed\r\ntransaction causes an error (duplication error), the function\r\npgstat_report_subscription_error is invoked twice (in main apply worker and\r\napply background worker, see function ApplyWorkerMain()->start_apply() and\r\nApplyBgworkerMain). This means that for the same error, we send the stats\r\nmessage twice.\r\nSo to fix this, I removed the code that you mentioned and now just invoke\r\nthe function LogicalApplyBgwLoop here.\r\n\r\n> <v6-0003>\r\n> \r\n> \r\n> (3) logicalrep_write_attrs\r\n> \r\n> + if (rel->rd_rel->relhasindex)\r\n> + {\r\n> + List *indexoidlist = RelationGetIndexList(rel);\r\n> + ListCell *indexoidscan;\r\n> + foreach(indexoidscan, indexoidlist)\r\n> \r\n> and\r\n> \r\n> + if (indexRel->rd_index->indisunique)\r\n> + {\r\n> + int i;\r\n> + /* Add referenced attributes to idindexattrs */\r\n> + for (i = 0; i < indexRel->rd_index->indnatts; i++)\r\n> \r\n> We don't have each blank line after variable declarations.\r\n> There might be some other codes where this point can be applied.\r\n> Please check.\r\nImprove the formatting as you suggested. 
And I run pgindent for new patches.\r\n\r\n> (4)\r\n> \r\n> + /*\r\n> + * If any unique index exist, check that they are same as remoterel.\r\n> + */\r\n> + if (!rel->sameunique)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n> + errmsg(\"cannot replicate relation with different unique index\"),\r\n> + errhint(\"Please change the streaming option to 'on' instead of\r\n> 'apply'.\")));\r\n> \r\n> \r\n> When I create a logical replication setup with different constraints\r\n> and let streaming of in-progress transaction run,\r\n> I keep getting this error.\r\n> \r\n> This should be documented as a restriction or something,\r\n> to let users know the replication progress can't go forward by\r\n> any differences written like in the commit-message in v6-0003.\r\n> \r\n> Also, it would be preferable to test this as well, if we\r\n> don't dislike having TAP tests for this.\r\nYes, you are right. Thank for your reminder.\r\nI added this in the paragraph introducing value \"apply\" in\r\ncreate_subscription.sgml:\r\n```\r\nTo run in this mode, there are following two requirements. The first\r\nis that the unique column should be the same between publisher and\r\nsubscriber; the second is that there should not be any non-immutable\r\nfunction in subscriber-side replicated table.\r\n```\r\nAlso added the related tests. (refer to 032_streaming_apply.pl in v8-0003)\r\n\r\nI also made some other changes. 
The new patches and the modification details\r\nwere attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62758A881FF3240171B7B21B9EDE9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 2 Jun 2022 10:04:31 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Jun 2, 2022 6:02 PM I wrote:\r\n> Attach the new patches.\r\n\r\nI tried to improve the patches by following 2 points:\r\n\r\n1. Improved the patch as suggested by Amit-san that I mentioned in [1].\r\nWhen publisher sends a \"STREAM ABORT\" message to subscriber, add the lsn and\r\ntime of this abort to this message.(see function logicalrep_write_stream_abort)\r\nWhen subscriber receives this message, it will update the replication origin.\r\n(see function apply_handle_stream_abort and function RecordTransactionAbort)\r\n\r\n2. 
Fixed missing settings for two GUCs (session_replication_role and\r\nsearch_path) in apply background worker in patch 0001 and improved checking of\r\ntrigger functions in patch 0003.\r\n\r\nThanks to Hou Zhi Jie for adding the aborts message related infrastructure for\r\nthe first point.\r\nThanks to Shi Yu for pointing out the second point.\r\n\r\nAttach the new patches.(only changed 0001 and 0003)\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275FBD9359F8ED0EDE7E5459EDE9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 8 Jun 2022 07:12:30 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jun 8, 2022 3:13 PM I wrote:\r\n> Attach the new patches.(only changed 0001 and 0003)\r\n\r\nI tried to improve the patches by following points:\r\n\r\n1. Initialize variable include_abort_lsn to false. It reports a warning in\r\ncfbot. (see patch v10-0001)\r\nBTW, I merged the patch that added the new GUC (see v9-0004) into patch 0001.\r\n\r\n2. Because of the improvement #2 in [1], the foreign key could not be detected\r\nwhen checking trigger function. So added additional checks for the foreign key.\r\n(see patch 0004)\r\n\r\n3. Adding a check for the partition table when trying to apply changes in the\r\napply background worker. (see patch 0004)\r\nIn additional, the partition cache map on subscriber have several bugs (see\r\nthread [2]). Because patch 0004 is developed based on the patches in [2], so I\r\nmerged the patches(v4-0001~v4-0003) in [2] into a temporary patch 0003 here.\r\nAfter the patches in [2] is committed, I will delete patch 0003 and rebase\r\npatch 0004.\r\n\r\n4. 
Improve constraint checking in a separate patch as suggested by Amit-san in\r\n[3] #6.(see patch 0005)\r\nI added a new field \"bool subretry\" in catalog pg_subscription. I use this\r\nfield to indicate whether the transaction that we are going to process has\r\nfailed before.\r\nIf apply worker/bgworker was exit with an error, this field will be set to\r\ntrue; If we successfully apply a transaction, this field will be set to false.\r\nIf we retry to apply a streaming transaction, whether the user sets the\r\nstreaming option to \"on\" or \"apply\", we will apply the transaction in the apply\r\nworker.\r\n\r\nAttach the new patches.\r\nOnly changed patches 0001, 0004 and added new separate patch 0005.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275208A2F8ED832710F65E09EA49%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n[2] - https://www.postgresql.org/message-id/flat/OSZPR01MB6310F46CD425A967E4AEF736FDA49%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n[3] - https://www.postgresql.org/message-id/CAA4eK1Jt08SYbRt_-rbSWNg%3DX9-m8%2BRdP5PosfnQgyF-z8bkxQ%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 14 Jun 2022 03:37:04 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Jun 14, 2022 11:17 AM I wrote:\r\n> Attach the new patches.\r\n> ......\r\n> 3. Adding a check for the partition table when trying to apply changes in the\r\n> apply background worker. (see patch 0004)\r\n> In additional, the partition cache map on subscriber have several bugs (see\r\n> thread [2]). Because patch 0004 is developed based on the patches in [2], so I\r\n> merged the patches(v4-0001~v4-0003) in [2] into a temporary patch 0003 here.\r\n> After the patches in [2] is committed, I will delete patch 0003 and rebase\r\n> patch 0004.\r\nI added some test cases for this (see patch 0004). 
In patch 0005, I made\r\ncorresponding adjustments according to these test cases.\r\nI also slightly modified the comments about the check for unique index. (see\r\npatch 0004)\r\n\r\nAlso rebased the temporary patch 0003 because the first patch in thread [1] is\r\ncommitted (see commit 5a97b132 in HEAD) .\r\n\r\nAttach the new patches.\r\nOnly changed patches 0004, 0005.\r\n\r\n[1] - https://www.postgresql.org/message-id/OSZPR01MB6310F46CD425A967E4AEF736FDA49%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 15 Jun 2022 08:26:40 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jun 14, 2022 at 9:07 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n>\n> Attach the new patches.\n> Only changed patches 0001, 0004 and added new separate patch 0005.\n>\n\nFew questions/comments on 0001\n===========================\n1.\nIn the commit message, I see: \"We also need to allow stream_stop to\ncomplete by the apply background worker to avoid deadlocks because\nT-1's current stream of changes can update rows in conflicting order\nwith T-2's next stream of changes.\"\n\nThinking about this, won't the T-1 and T-2 deadlock on the publisher\nnode as well if the above statement is true?\n\n2.\n+ <para>\n+ The apply background workers are taken from the pool defined by\n+ <varname>max_logical_replication_workers</varname>.\n+ </para>\n+ <para>\n+ The default value is 3. This parameter can only be set in the\n+ <filename>postgresql.conf</filename> file or on the server command\n+ line.\n+ </para>\n\nIs there a reason to choose this number as 3? Why not 2 similar to\nmax_sync_workers_per_subscription?\n\n3.\n+\n+ <para>\n+ Setting streaming mode to <literal>apply</literal> could export invalid LSN\n+ as finish LSN of failed transaction. 
Changing the streaming mode and making\n+ the same conflict writes the finish LSN of the failed transaction in the\n+ server log if required.\n+ </para>\n\nHow will the user identify that this is an invalid LSN value and she\nshouldn't use it to SKIP the transaction? Can we change the second\nsentence to: \"User should change the streaming mode to 'on' if they\nwould instead wish to see the finish LSN on error. Users can use\nfinish LSN to SKIP applying the transaction.\" I think we can give\nreference to docs where the SKIP feature is explained.\n\n4.\n+ * This file contains routines that are intended to support setting up, using,\n+ * and tearing down a ApplyBgworkerState.\n+ * Refer to the comments in file header of logical/worker.c to see more\n+ * informations about apply background worker.\n\nTypo. /informations/information.\n\nConsider having an empty line between the above two lines.\n\n5.\n+ApplyBgworkerState *\n+apply_bgworker_find_or_start(TransactionId xid, bool start)\n{\n...\n...\n+ if (!TransactionIdIsValid(xid))\n+ return NULL;\n+\n+ /*\n+ * We don't start new background worker if we are not in streaming apply\n+ * mode.\n+ */\n+ if (MySubscription->stream != SUBSTREAM_APPLY)\n+ return NULL;\n+\n+ /*\n+ * We don't start new background worker if user has set skiplsn as it's\n+ * possible that user want to skip the streaming transaction. For\n+ * streaming transaction, we need to spill the transaction to disk so that\n+ * we can get the last LSN of the transaction to judge whether to skip\n+ * before starting to apply the change.\n+ */\n+ if (start && !XLogRecPtrIsInvalid(MySubscription->skiplsn))\n+ return NULL;\n+\n+ /*\n+ * For streaming transactions that are being applied in apply background\n+ * worker, we cannot decide whether to apply the change for a relation\n+ * that is not in the READY state (see should_apply_changes_for_rel) as we\n+ * won't know remote_final_lsn by that time. 
So, we don't start new apply\n+ * background worker in this case.\n+ */\n+ if (start && !AllTablesyncsReady())\n+ return NULL;\n...\n...\n}\n\nCan we move some of these starting checks to a separate function like\ncanstartapplybgworker()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Jun 2022 17:42:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jun 15, 2022 at 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Few questions/comments on 0001\r\n> ===========================\r\nThanks for your comments.\r\n\r\n> 1.\r\n> In the commit message, I see: \"We also need to allow stream_stop to\r\n> complete by the apply background worker to avoid deadlocks because\r\n> T-1's current stream of changes can update rows in conflicting order\r\n> with T-2's next stream of changes.\"\r\n> \r\n> Thinking about this, won't the T-1 and T-2 deadlock on the publisher\r\n> node as well if the above statement is true?\r\nYes, I think so.\r\nI think if table's unique index/constraint of the publisher and the subscriber\r\nare consistent, the deadlock will occur on the publisher-side.\r\nIf it is inconsistent, deadlock may only occur in the subscriber. But since we\r\nadded the check for these (see patch 0004), so it seems okay to not handle this\r\nat STREAM_STOP.\r\n\r\nBTW, I made the following improvements to the code (#a, #c are improved in 0004\r\npatch, #b, #d and #e are improved in 0001 patch.) :\r\na.\r\nI added some comments in the function apply_handle_stream_stop to explain why\r\nwe do not need to allow stream_stop to complete by the apply background worker.\r\nb.\r\nI deleted related commit message in 0001 patch and the related comments in file\r\nheader (worker.c).\r\nc.\r\nRenamed the function logicalrep_rel_mark_apply_bgworker to\r\nlogicalrep_rel_mark_safe_in_apply_bgworker. 
Also did some slight improvements\r\nin this function.\r\nd.\r\nWhen apply worker sends stream xact messages to apply background worker, only\r\nwait for apply background worker to complete when commit, prepare and abort of\r\ntoplevel xact.\r\ne.\r\nThe state setting of apply background worker was not very accurate before, so\r\nimproved this (see the invocations to function pgstat_report_activity in\r\nfunction LogicalApplyBgwLoop, apply_handle_stream_start and\r\napply_handle_stream_abort).\r\n\r\n> 2.\r\n> + <para>\r\n> + The apply background workers are taken from the pool defined by\r\n> + <varname>max_logical_replication_workers</varname>.\r\n> + </para>\r\n> + <para>\r\n> + The default value is 3. This parameter can only be set in the\r\n> + <filename>postgresql.conf</filename> file or on the server command\r\n> + line.\r\n> + </para>\r\n> \r\n> Is there a reason to choose this number as 3? Why not 2 similar to\r\n> max_sync_workers_per_subscription?\r\nImproved the default as suggested.\r\n\r\n> 3.\r\n> +\r\n> + <para>\r\n> + Setting streaming mode to <literal>apply</literal> could export invalid LSN\r\n> + as finish LSN of failed transaction. Changing the streaming mode and making\r\n> + the same conflict writes the finish LSN of the failed transaction in the\r\n> + server log if required.\r\n> + </para>\r\n> \r\n> How will the user identify that this is an invalid LSN value and she\r\n> shouldn't use it to SKIP the transaction? Can we change the second\r\n> sentence to: \"User should change the streaming mode to 'on' if they\r\n> would instead wish to see the finish LSN on error. Users can use\r\n> finish LSN to SKIP applying the transaction.\" I think we can give\r\n> reference to docs where the SKIP feature is explained.\r\nImproved the sentence as suggested.\r\nAnd I added the reference after the statement in your suggestion.\r\nIt looks like:\r\n```\r\n... 
Users can use finish LSN to SKIP applying the transaction by running <link\r\nlinkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\nSKIP</command></link>.\r\n```\r\n\r\n> 4.\r\n> + * This file contains routines that are intended to support setting up, using,\r\n> + * and tearing down a ApplyBgworkerState.\r\n> + * Refer to the comments in file header of logical/worker.c to see more\r\n> + * informations about apply background worker.\r\n> \r\n> Typo. /informations/information.\r\n> \r\n> Consider having an empty line between the above two lines.\r\nImproved the message as suggested.\r\n\r\n> 5.\r\n> +ApplyBgworkerState *\r\n> +apply_bgworker_find_or_start(TransactionId xid, bool start)\r\n> {\r\n> ...\r\n> ...\r\n> + if (!TransactionIdIsValid(xid))\r\n> + return NULL;\r\n> +\r\n> + /*\r\n> + * We don't start new background worker if we are not in streaming apply\r\n> + * mode.\r\n> + */\r\n> + if (MySubscription->stream != SUBSTREAM_APPLY)\r\n> + return NULL;\r\n> +\r\n> + /*\r\n> + * We don't start new background worker if user has set skiplsn as it's\r\n> + * possible that user want to skip the streaming transaction. For\r\n> + * streaming transaction, we need to spill the transaction to disk so that\r\n> + * we can get the last LSN of the transaction to judge whether to skip\r\n> + * before starting to apply the change.\r\n> + */\r\n> + if (start && !XLogRecPtrIsInvalid(MySubscription->skiplsn))\r\n> + return NULL;\r\n> +\r\n> + /*\r\n> + * For streaming transactions that are being applied in apply background\r\n> + * worker, we cannot decide whether to apply the change for a relation\r\n> + * that is not in the READY state (see should_apply_changes_for_rel) as we\r\n> + * won't know remote_final_lsn by that time. 
So, we don't start new apply\r\n> + * background worker in this case.\r\n> + */\r\n> + if (start && !AllTablesyncsReady())\r\n> + return NULL;\r\n> ...\r\n> ...\r\n> }\r\n> \r\n> Can we move some of these starting checks to a separate function like\r\n> canstartapplybgworker()?\r\nImproved as suggested.\r\n\r\nBTW, I rebased the temporary patch 0003 because one patch in thread [1] is\r\ncommitted (see commit b7658c24c7 in HEAD).\r\n\r\nAttach the new patches.\r\nOnly changed patches 0001, 0004.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 17 Jun 2022 07:17:10 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jun 17, 2022 at 12:47 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patches.\n> Only changed patches 0001, 0004.\n>\n\nFew more comments on the previous version of patch:\n===========================================\n1.\n+/*\n+ * Count the number of registered (not necessarily running) apply background\n+ * worker for a subscription.\n+ */\n\n/worker/workers\n\n2.\n+static void\n+apply_bgworker_setup_dsm(ApplyBgworkerState *wstate)\n+{\n...\n...\n+ int64 queue_size = 160000000; /* 16 MB for now */\n\nI think it would be better to use define for this rather than a\nhard-coded value.\n\n3.\n+/*\n+ * Status for apply background worker.\n+ */\n+typedef enum ApplyBgworkerStatus\n+{\n+ APPLY_BGWORKER_ATTACHED = 0,\n+ APPLY_BGWORKER_READY,\n+ APPLY_BGWORKER_BUSY,\n+ APPLY_BGWORKER_FINISHED,\n+ APPLY_BGWORKER_EXIT\n+} ApplyBgworkerStatus;\n\nIt would be better if you can add comments to explain each of these states.\n\n4.\n+ /* Set up one message queue per worker, plus one. 
*/\n+ mq = shm_mq_create(shm_toc_allocate(toc, (Size) queue_size),\n+ (Size) queue_size);\n+ shm_toc_insert(toc, APPLY_BGWORKER_KEY_MQ, mq);\n+ shm_mq_set_sender(mq, MyProc);\n\n\nI don't understand the meaning of 'plus one' in the above comment as\nthe patch seems to be setting up just one queue here?\n\n5.\n+\n+ /* Attach the queues. */\n+ wstate->mq_handle = shm_mq_attach(mq, seg, NULL);\n\nSimilar to above. If there is only one queue then the comment should\nsay queue instead of queues.\n\n6.\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u\", subid);\n+ else\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication background apply worker for subscription %u \", subid);\n\nNo need for extra space after %u in the above code.\n\n7.\n+ launched = logicalrep_worker_launch(MyLogicalRepWorker->dbid,\n+ MySubscription->oid,\n+ MySubscription->name,\n+ MyLogicalRepWorker->userid,\n+ InvalidOid,\n+ dsm_segment_handle(wstate->dsm_seg));\n+\n+ if (launched)\n+ {\n+ /* Wait for worker to attach. */\n+ apply_bgworker_wait_for(wstate, APPLY_BGWORKER_ATTACHED);\n\nIn logicalrep_worker_launch(), we already seem to be waiting for\nworkers to attach via WaitForReplicationWorkerAttach(), so it is not\nclear to me why we need to wait again? If there is a genuine reason\nthen it is better to add some comments to explain it. I think in some\nway, we need to know if the worker is successfully attached and we may\nnot get that via WaitForReplicationWorkerAttach, so there needs to be\nsome way to know that but this doesn't sound like a very good idea. If\nthat understanding is correct then can we think of a better way?\n\n8. I think we can simplify apply_bgworker_find_or_start by having\nseparate APIs for find and start. Most of the places need to use find\nAPI except for the first stream. 
If we do that then I think you don't\nneed to make a hash entry unless we established ApplyBgworkerState\nwhich currently looks odd as you need to remove the entry if we fail\nto allocate the state.\n\n9.\n+ /*\n+ * TO IMPROVE: Do we need to display the apply background worker's\n+ * information in pg_stat_replication ?\n+ */\n+ UpdateWorkerStats(last_received, send_time, false);\n\nIn this do you mean to say pg_stat_subscription? If so, then to decide\nwhether we need to update stats here we should see what additional\ninformation we can update here which is not possible via the main\napply worker?\n\n10.\nApplyBgworkerMain\n{\n...\n+ /* Load the subscription into persistent memory context. */\n+ ApplyContext = AllocSetContextCreate(TopMemoryContext,\n...\n\nThis comment seems to be copied from ApplyWorkerMain but doesn't apply here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Jun 2022 14:27:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jun 17, 2022 at 12:47 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jun 15, 2022 at 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Few questions/comments on 0001\n> > ===========================\n> Thanks for your comments.\n>\n> > 1.\n> > In the commit message, I see: \"We also need to allow stream_stop to\n> > complete by the apply background worker to avoid deadlocks because\n> > T-1's current stream of changes can update rows in conflicting order\n> > with T-2's next stream of changes.\"\n> >\n> > Thinking about this, won't the T-1 and T-2 deadlock on the publisher\n> > node as well if the above statement is true?\n> Yes, I think so.\n> I think if table's unique index/constraint of the publisher and the subscriber\n> are consistent, the deadlock will occur on the publisher-side.\n> If it is 
inconsistent, deadlock may only occur in the subscriber. But since we\n> added the check for these (see patch 0004), so it seems okay to not handle this\n> at STREAM_STOP.\n>\n> BTW, I made the following improvements to the code (#a, #c are improved in 0004\n> patch, #b, #d and #e are improved in 0001 patch.) :\n> a.\n> I added some comments in the function apply_handle_stream_stop to explain why\n> we do not need to allow stream_stop to complete by the apply background worker.\n>\n\nI have improved the comments in this and other related sections of the\npatch. See attached.\n\n>\n>\n> > 3.\n> > +\n> > + <para>\n> > + Setting streaming mode to <literal>apply</literal> could export invalid LSN\n> > + as finish LSN of failed transaction. Changing the streaming mode and making\n> > + the same conflict writes the finish LSN of the failed transaction in the\n> > + server log if required.\n> > + </para>\n> >\n> > How will the user identify that this is an invalid LSN value and she\n> > shouldn't use it to SKIP the transaction? Can we change the second\n> > sentence to: \"User should change the streaming mode to 'on' if they\n> > would instead wish to see the finish LSN on error. Users can use\n> > finish LSN to SKIP applying the transaction.\" I think we can give\n> > reference to docs where the SKIP feature is explained.\n> Improved the sentence as suggested.\n>\n\nYou haven't answered first part of the comment: \"How will the user\nidentify that this is an invalid LSN value and she shouldn't use it to\nSKIP the transaction?\". Have you checked what value it displays? 
For\nexample, in one of the cases in apply_error_callback as shown in the\ncode below, we don't even display finish LSN if it is invalid.\nelse if (XLogRecPtrIsInvalid(errarg->finish_lsn))\nerrcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" in transaction %u\",\n errarg->origin_name,\n logicalrep_message_type(errarg->command),\n errarg->remote_xid);\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 20 Jun 2022 08:29:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for the v11-0001 patch.\n\n(I will review the remaining patches 0002-0005 and post any comments later)\n\n======\n\n1. General\n\nI still feel that 'apply' seems like a meaningless enum value for this\nfeature because from a user point-of-view every replicated change gets\n\"applied\". IMO something like 'streaming = parallel' or 'streaming =\nbackground' (etc) might have more meaning for a user.\n\n======\n\n2. Commit message\n\nWe also need to allow stream_stop to complete by the\napply background worker to avoid deadlocks because T-1's current stream of\nchanges can update rows in conflicting order with T-2's next stream of changes.\n\nDid this mean to say?\n\n\"allow stream_stop to complete by\" -> \"allow stream_stop to be performed by\"\n\n~~~\n\n3. Commit message\n\nThis patch also extends the SUBSCRIPTION 'streaming' option so that the user\ncan control whether to apply the streaming transaction in an apply background\nworker or spill the change to disk. User can set the streaming option to\n'on/off', 'apply'. For now, 'apply' means the streaming will be applied via a\napply background worker if available. 
'on' means the streaming transaction will\nbe spilled to disk.\n\n3a.\n\"option\" -> \"parameter\" (2x)\n\n3b.\n\"User can\" -> \"The user can\"\n\n3c.\nI think this part should also mention that the stream parameter\ndefault is unchanged...\n\n======\n\n4. doc/src/sgml/config.sgml\n\n+ <para>\n+ Maximum number of apply background workers per subscription. This\n+ parameter controls the amount of parallelism of the streaming of\n+ in-progress transactions if we set subscription option\n+ <literal>streaming</literal> to <literal>apply</literal>.\n+ </para>\n\n\"if we set subscription option <literal>streaming</literal> to\n<literal>apply</literal>.\" -> \"when subscription parameter\n <literal>streaming = apply</literal>.\n\n======\n\n5. doc/src/sgml/config.sgml\n\n+ <para>\n+ Setting streaming mode to <literal>apply</literal> could export invalid LSN\n+ as finish LSN of failed transaction. Changing the streaming mode and making\n+ the same conflict writes the finish LSN of the failed transaction in the\n+ server log if required.\n+ </para>\n\nThis text made no sense to me. Can you reword it?\n\nIIUC it means something like this:\nWhen the streaming mode is 'apply', the finish LSN of failed\ntransactions may not be logged. In that case, it may be necessary to\nchange the streaming mode and cause the same conflicts again so the\nfinish LSN of the failed transaction will be written to the server\nlog.\n\n======\n\n6. doc/src/sgml/protocol.sgml\n\nSince there are protocol changes made here, shouldn’t there also be\nsome corresponding LOGICALREP_PROTO_XXX constants and special checking\nadded in the worker.c?\n\n======\n\n7. doc/src/sgml/ref/create_subscription.sgml\n\n+ for this subscription. 
The default value is <literal>off</literal>,\n+ all transactions are fully decoded on the publisher and only then\n+ sent to the subscriber as a whole.\n+ </para>\n\nSUGGESTION\nThe default value is off, meaning all transactions are fully decoded\non the publisher and only then sent to the subscriber as a whole.\n\n~~~\n\n8. doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>on</literal>, the changes of transaction are\n+ written to temporary files and then applied at once after the\n+ transaction is committed on the publisher.\n+ </para>\n\nSUGGESTION\nIf set to on, the incoming changes are written to a temporary file and\nthen applied only after the transaction is committed on the publisher.\n\n~~~\n\n9. doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>apply</literal> incoming\n+ changes are directly applied via one of the background workers, if\n+ available. If no background worker is free to handle streaming\n+ transaction then the changes are written to a file and applied after\n+ the transaction is committed. Note that if an error happens when\n+ applying changes in a background worker, it might not report the\n+ finish LSN of the remote transaction in the server log.\n </para>\n\nSUGGESTION\nIf set to apply, the incoming changes are directly applied via one of\nthe apply background workers, if available. If no background worker is\nfree to handle streaming transactions then the changes are written to\na file and applied after the transaction is committed. Note that if an\nerror happens when applying changes in a background worker, the finish\nLSN of the remote transaction might not be reported in the server log.\n\n======\n\n10. src/backend/access/transam/xact.c\n\n@@ -1741,6 +1742,13 @@ RecordTransactionAbort(bool isSubXact)\n elog(PANIC, \"cannot abort transaction %u, it was already committed\",\n xid);\n\n+ /*\n+ * Are we using the replication origins feature? 
Or, in other words,\n+ * are we replaying remote actions?\n+ */\n+ replorigin = (replorigin_session_origin != InvalidRepOriginId &&\n+ replorigin_session_origin != DoNotReplicateId);\n+\n /* Fetch the data we need for the abort record */\n nrels = smgrGetPendingDeletes(false, &rels);\n nchildren = xactGetCommittedChildren(&children);\n@@ -1765,6 +1773,11 @@ RecordTransactionAbort(bool isSubXact)\n MyXactFlags, InvalidTransactionId,\n NULL);\n\n+ if (replorigin)\n+ /* Move LSNs forward for this replication origin */\n+ replorigin_session_advance(replorigin_session_origin_lsn,\n+ XactLastRecEnd);\n+\n\nI did not see any reason why the code assigning the 'replorigin' and\nthe code checking the 'replorigin' are separated like they are. I\nthought these 2 new code fragments should be kept together. Perhaps it\nwas decided this assignment must be outside the critical section? But\nif that’s the case maybe a comment explaining so would be good.\n\n~~~\n\n11. src/backend/access/transam/xact.c\n\n+ if (replorigin)\n+ /* Move LSNs forward for this replication origin */\n+ replorigin_session_advance(replorigin_session_origin_lsn,\n+\n\nThe positioning of that comment is unusual. Maybe better before the check?\n\n======\n\n12. src/backend/commands/subscriptioncmds.c - defGetStreamingMode\n\n+ /*\n+ * If no parameter given, assume \"true\" is meant.\n+ */\n+ if (def->arg == NULL)\n+ return SUBSTREAM_ON;\n\nSUGGESTION for comment\nIf the streaming parameter is given but no parameter value is\nspecified, then assume \"true\" is meant.\n\n~~~\n\n13. src/backend/commands/subscriptioncmds.c - defGetStreamingMode\n\n+ /*\n+ * Allow 0, 1, \"true\", \"false\", \"on\", \"off\" or \"apply\".\n+ */\n\nIMO these should be in an order consistent with the code.\n\nSUGGESTION\nAllow 0, 1, “false”, \"true\", “off”, \"on\", or \"apply\".\n\n======\n\n14. 
src/backend/replication/logical/Makefile\n\n- worker.o\n+ worker.o \\\n+ applybgwroker.o\n\ntypo \"applybgwroker\" -> \"applybgworker\"\n\n======\n\n15. .../replication/logical/applybgwroker.c\n\n+/*-------------------------------------------------------------------------\n+ * applybgwroker.c\n+ * Support routines for applying xact by apply background worker\n+ *\n+ * Copyright (c) 2016-2022, PostgreSQL Global Development Group\n+ *\n+ * IDENTIFICATION\n+ * src/backend/replication/logical/applybgwroker.c\n\n15a.\nTypo in filename: \"applybgwroker\" -> \"applybgworker\"\n\n15b.\nTypo in file header comment: \"applybgwroker\" -> \"applybgworker\"\n\n~~~\n\n16. .../replication/logical/applybgwroker.c\n\n+/*\n+ * entry for a hash table we use to map from xid to our apply background worker\n+ * state.\n+ */\n+typedef struct ApplyBgworkerEntry\n\nComment should start uppercase.\n\n~~~\n\n17. .../replication/logical/applybgwroker.c\n\n+/*\n+ * Fields to record the share informations between main apply worker and apply\n+ * background worker.\n+ */\n\nSUGGESTION\nInformation shared between main apply worker and apply background worker.\n\n~~~\n\n18. .../replication/logical/applybgwroker.c\n\n+/* apply background worker setup */\n+static ApplyBgworkerState *apply_bgworker_setup(void);\n+static void apply_bgworker_setup_dsm(ApplyBgworkerState *wstate);\n\nIMO there was not really any need for this comment – these are just\nfunction forward declares.\n\n~~~\n\n19. .../replication/logical/applybgwroker.c - find_or_start_apply_bgworker\n\n+ if (found)\n+ {\n+ entry->wstate->pstate->status = APPLY_BGWORKER_BUSY;\n+ return entry->wstate;\n+ }\n+ else if (!start)\n+ return NULL;\n\nI felt this might be more readable without the else:\n\nif (found)\n{\nentry->wstate->pstate->status = APPLY_BGWORKER_BUSY;\nreturn entry->wstate;\n}\nAssert(!found)\nif (!start)\nreturn NULL;\n\n~~~\n\n20. 
.../replication/logical/applybgwroker.c - find_or_start_apply_bgworker\n\n+ /*\n+ * Now, we try to get a apply background worker. If there is at least one\n+ * worker in the idle list, then take one. Otherwise, we try to start a\n+ * new apply background worker.\n+ */\n\n20a.\n\"a apply\" -> \"an apply\"\n\n20b.\nIMO it's better to call this the free list (not the idle list)\n\n~~~\n\n21. .../replication/logical/applybgwroker.c - find_or_start_apply_bgworker\n\n+ /*\n+ * If the apply background worker cannot be launched, remove entry\n+ * in hash table.\n+ */\n\n\"remove entry in hash table\" -> \"remove the entry from the hash table\"\n\n~~~\n\n22. .../replication/logical/applybgwroker.c - apply_bgworker_free\n\n+/*\n+ * Add the worker to the free list and remove the entry from hash table.\n+ */\n\n\"from hash table\" -> \"from the hash table\"\n\n~~~\n\n23. .../replication/logical/applybgwroker.c - apply_bgworker_free\n\n+ elog(DEBUG1, \"adding finished apply worker #%u for xid %u to the idle list\",\n+ wstate->pstate->n, wstate->pstate->stream_xid);\n\nIMO it's better to call this the free list (not the idle list)\n\n~~~\n\n24. .../replication/logical/applybgwroker.c - LogicalApplyBgwLoop\n\n+/* Apply Background Worker main loop */\n+static void\n+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared *pst)\n\nWhy is the name inconsistent with other function names in the file?\nShould it be apply_bgworker_loop?\n\n~~~\n\n25. .../replication/logical/applybgwroker.c - LogicalApplyBgwLoop\n\n+ /*\n+ * Push apply error context callback. Fields will be filled during\n+ * applying a change.\n+ */\n\n\"during\" -> \"when\"\n\n~~~\n\n26. .../replication/logical/applybgwroker.c - LogicalApplyBgwLoop\n\n+ /*\n+ * We use first byte of message for additional communication between\n+ * main Logical replication worker and apply bgworkers, so if it\n+ * differs from 'w', then process it first.\n+ */\n\n\"bgworkers\" -> \"background workers\"\n\n~~~\n\n27. 
.../replication/logical/applybgwroker.c - ApplyBgwShutdown\n\nFor consistency should it be called apply_bgworker_shutdown?\n\n~~~\n\n28. .../replication/logical/applybgwroker.c - LogicalApplyBgwMain\n\nFor consistency should it be called apply_bgworker_main?\n\n~~~\n\n29. .../replication/logical/applybgwroker.c - apply_bgworker_check_status\n\n+ errdetail(\"Cannot handle streamed replication transaction by apply \"\n+ \"bgworkers until all tables are synchronized\")));\n\n\"bgworkers\" -> \"background workers\"\n\n======\n\n30. src/backend/replication/logical/decode.c\n\n@@ -651,9 +651,10 @@ DecodeCommit(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n {\n for (i = 0; i < parsed->nsubxacts; i++)\n {\n- ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr);\n+ ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr,\n+ commit_time);\n }\n- ReorderBufferForget(ctx->reorder, xid, buf->origptr);\n+ ReorderBufferForget(ctx->reorder, xid, buf->origptr, commit_time);\n\nReorderBufferForget was declared with 'abort_time' param. So it makes\nthese calls a bit confusing looking to be passing 'commit_time'\n\nMaybe better to do like below and pass 'forget_time' (inside that\n'if') along with an explanatory comment:\n\nTimestampTz forget_time = commit_time;\n\n======\n\n31. src/backend/replication/logical/launcher.c - logicalrep_worker_find\n\n+ /* We only need main apply worker or table sync worker here */\n\n\"need\" -> \"are interested in the\"\n\n~~~\n\n32. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n+ if (!is_subworker)\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyWorkerMain\");\n+ else\n+ snprintf(bgw.bgw_function_name, BGW_MAXLEN, \"ApplyBgworkerMain\");\n\nIMO better to reverse this and express the condition as 'if (is_subworker)'\n\n~~~\n\n33. 
src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n+ else if (!is_subworker)\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u\", subid);\n+ else\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication background apply worker for subscription %u \", subid);\n\n33a.\nDitto. IMO better to reverse this and express the condition as 'if\n(is_subworker)'\n\n33b.\n\"background apply worker\" -> \"apply background worker\"\n\n~~~\n\n34. src/backend/replication/logical/launcher.c - logicalrep_worker_stop\n\nIMO this code logic should be rewritten to be simpler to have a common\nLWLockRelease. This also makes the code more like\nlogicalrep_worker_detach, which seems like a good thing.\n\nSUGGESTION\nlogicalrep_worker_stop(Oid subid, Oid relid)\n{\nLogicalRepWorker *worker;\n\nLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n\nworker = logicalrep_worker_find(subid, relid, false);\n\nif (worker)\n logicalrep_worker_stop_internal(worker);\n\nLWLockRelease(LogicalRepWorkerLock);\n}\n\n~~~\n\n35. src/backend/replication/logical/launcher.c -\nlogicalrep_apply_background_worker_count\n\n+/*\n+ * Count the number of registered (not necessarily running) apply background\n+ * worker for a subscription.\n+ */\n\n\"worker\" -> \"workers\"\n\n~~~\n\n36. src/backend/replication/logical/launcher.c -\nlogicalrep_apply_background_worker_count\n\n+ int res = 0;\n+\n\nA better variable name here would be 'count', or even 'n'.\n\n======\n\n36. src/backend/replication/logical/origin.c\n\n+ * However, If must_acquire is false, we allow process to get the slot which is\n+ * already acquired by other process.\n\nSUGGESTION\nHowever, if the function parameter 'must_acquire' is false, we allow\nthe process to use the same slot already acquired by another process.\n\n~~~\n\n37. 
src/backend/replication/logical/origin.c\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\n+ errmsg(\"could not find correct replication state slot for\nreplication origin with OID %u for apply background worker\",\n+ node),\n+ errhint(\"There is no replication state slot set by its main apply worker.\")));\n\n37a.\nSomehow, I felt the errmsg and the errhint could be clearer. Maybe like this?\n\n\" apply background worker could not find replication state slot for\nreplication origin with OID %u\",\n\n\"There is no replication state slot set by the main apply worker.\"\n\n37b.\nAlso, I think that generally the 'errhint' informs some advice or some\naction that the user can take to fix the problem. But is this errhint\nactually saying anything useful for the user? Perhaps you meant to say\n'errdetail' here?\n\n======\n\n38. src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\n\n+ /*\n+ * If the version of the publisher is lower than the version of the\n+ * subscriber, it may not support sending these two fields, so only take\n+ * these fields when include_abort_lsn is true.\n+ */\n+ if (include_abort_lsn)\n+ {\n+ abort_data->abort_lsn = pq_getmsgint64(in);\n+ abort_data->abort_time = pq_getmsgint64(in);\n+ }\n+ else\n+ {\n+ abort_data->abort_lsn = InvalidXLogRecPtr;\n+ abort_data->abort_time = 0;\n+ }\n\nThis comment is documenting a decision that was made elsewhere.\n\nBut it somehow feels wrong to me that the decision to read or not read\nthe abort time/lsn is made by the caller of this function. IMO it\nmight make more sense if the server version was simply passed as a\nparam and then this function can be in control of its own destiny and\ndecide whether it needs to read those extra fields or not. An\nextra member flag can be added to LogicalRepStreamAbortData to\nindicate if abort_data read these values or not.\n\n======\n\n39. 
src/backend/replication/logical/worker.c\n\n * Streamed transactions (large transactions exceeding a memory limit on the\n- * upstream) are not applied immediately, but instead, the data is written\n- * to temporary files and then applied at once when the final commit arrives.\n+ * upstream) are applied via one of two approaches.\n\n\"via\" -> \"using\"\n\n~~~\n\n40. src/backend/replication/logical/worker.c\n\n+ * Assign a new apply background worker (if available) as soon as the xact's\n+ * first stream is received and the main apply worker will send changes to this\n+ * new worker via shared memory. We keep this worker assigned till the\n+ * transaction commit is received and also wait for the worker to finish at\n+ * commit. This preserves commit ordering and avoids writing to and reading\n+ * from file in most cases. We still need to spill if there is no worker\n+ * available. We also need to allow stream_stop to complete by the background\n+ * worker to avoid deadlocks because T-1's current stream of changes can update\n+ * rows in conflicting order with T-2's next stream of changes.\n\n40a.\n\"and the main apply -> \". The main apply\"\n\n40b.\n\"and avoids writing to and reading from file in most cases.\" -> \"and\navoids file I/O in most cases.\"\n\n40c.\n\"We still need to spill if\" -> \"We still need to spill to a file if\"\n\n40d.\n\"We also need to allow stream_stop to complete by the background\nworker\" -> \"We also need to allow stream_stop to be performed by the\nbackground worker\"\n\n~~~\n\n41. src/backend/replication/logical/worker.c\n\n-static ApplyErrorCallbackArg apply_error_callback_arg =\n+ApplyErrorCallbackArg apply_error_callback_arg =\n {\n .command = 0,\n .rel = NULL,\n@@ -242,7 +246,7 @@ static ApplyErrorCallbackArg apply_error_callback_arg =\n .origin_name = NULL,\n };\n\nMaybe it is still a good idea to at least keep the old comment here:\n/* Struct for saving and restoring apply errcontext information */\n\n~~\n\n42. 
src/backend/replication/logical/worker.c\n\n+/* check if we are applying the transaction in apply background worker */\n+#define apply_bgworker_active() (in_streamed_transaction &&\nstream_apply_worker != NULL)\n\n42a.\nUppercase comment.\n\n42b.\n\"in apply background worker\" -> \"in an apply background worker\"\n\n~~~\n\n43. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n@@ -426,41 +437,76 @@ end_replication_step(void)\n }\n\n /*\n- * Handle streamed transactions.\n+ * Handle streamed transactions for both main apply worker and apply background\n+ * worker.\n *\n- * If in streaming mode (receiving a block of streamed transaction), we\n- * simply redirect it to a file for the proper toplevel transaction.\n+ * In streaming case (receiving a block of streamed transaction), for\n+ * SUBSTREAM_ON mode, we simply redirect it to a file for the proper toplevel\n+ * transaction, and for SUBSTREAM_APPLY mode, we send the changes to background\n+ * apply worker (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes will\n+ * also be applied in main apply worker).\n *\n- * Returns true for streamed transactions, false otherwise (regular mode).\n+ * For non-streamed transactions, returns false;\n+ * For streamed transactions, returns true if in main apply worker (except we\n+ * apply streamed transaction in \"apply\" mode and address\n+ * LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes), false otherwise.\n */\n\nMaybe it is accurate (I don’t know), but this header comment seems\nexcessively complicated with so many quirks about when to return\ntrue/false. Can it be reworded into plainer language?\n\n~~~\n\n44. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\nBecause there are so many returns for each of these conditions,\nconsider refactoring the logic to change all the if/else to just be\n\"if\" and then you can comment each separate case better. 
I think it\nmay be clearer.\n\nSUGGESTION\n\n/* This is the apply background worker */\nif (am_apply_bgworker())\n{\n...\nreturn false;\n}\n\n/* This is the main apply, but there is an apply background worker */\nif (apply_bgworker_active())\n{\n...\nreturn true;\n}\n\n/* This is the main apply, and there is no apply background worker */\n...\nreturn true;\n\n~~~\n\n45. src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n+ /*\n+ * This is the main apply worker. Check if we are processing this\n+ * transaction in a apply background worker.\n+ */\n+ if (wstate)\n\nI think the part that says \"This is the main apply worker\" should be\nat the top of the 'else'\n\n~~~\n\n46. src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n+ /*\n+ * This is the main apply worker and the transaction has been\n+ * serialized to file, replay all the spooled operations.\n+ */\n\nSUGGESTION\nThe transaction has been serialized to file. Replay all the spooled operations.\n\n~~~\n\n47. src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n+ /* unlink the files with serialized changes and subxact info. */\n+ stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid);\n\nStart comment with capital letter.\n\n~~~\n\n48. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n+ /* If we are in a apply background worker, begin the transaction */\n+ AcceptInvalidationMessages();\n+ maybe_reread_subscription();\n\nThe \"if we are\" part of the comment is not needed because the fact the\ncode is inside am_apply_bgworker() makes this obvious anyway/\n\n~~~\n\n49. 
src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n+ /* open the spool file for this transaction */\n+ stream_open_file(MyLogicalRepWorker->subid, stream_xid, first_segment);\n+\n\nStart the comment uppercase.\n\n+ /* if this is not the first segment, open existing subxact file */\n+ if (!first_segment)\n+ subxact_info_read(MyLogicalRepWorker->subid, stream_xid);\n\nStart the comment uppercase.\n\n~~~\n\n50. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /* Check whether the publisher sends abort_lsn and abort_time. */\n+ if (am_apply_bgworker())\n+ include_abort_lsn = MyParallelState->server_version >= 150000;\n+\n+ logicalrep_read_stream_abort(s, &abort_data, include_abort_lsn);\n\nHere is where I felt maybe just the server version could be passed so\nthe logicalrep_read_stream_abort could decide itself what message\nparts needed to be read. Basically it seems strange that the message\ncontains parts which might not be read. I felt it is better to always\nread the whole message and then later you can choose what parts you are\ninterested in.\n\n~~~\n\n51. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /*\n+ * This is the main apply worker. Check if we are processing this\n+ * transaction in a apply background worker.\n+ */\n\n+ /*\n+ * We are in main apply worker and the transaction has been serialized\n+ * to file.\n+ */\n\n51a.\nI thought the \"This is the main apply worker\" and \"We are in main\napply worker\" should just be a comment at the top of this \"else\"\n\n51b.\n\"a apply worker\" -> \"an apply worker\"\n\n51c.\nThere seemed to be some missing comment to say this logic is telling\nthe bgworker to abort and then waiting for it to do so.\n\n~~~\n\n52. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\nI did not really understand why the patch relocates this function to\nanother place in the file. Can't it be left in the same place?\n\n~~~\n\n53. 
src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n+ /*\n+ * This is the main apply worker. Check if we are processing this\n+ * transaction in an apply background worker.\n+ */\n\nI thought the top of the else should just say \"This is the main apply worker.\"\n\nThen the if (wstate) part should say “Check if we are processing this\ntransaction in an apply background worker, and if so tell it to\ncommit the message”.\n\n~~~\n\n54. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n+ /*\n+ * This is the main apply worker and the transaction has been\n+ * serialized to file, replay all the spooled operations.\n+ */\n\nSUGGESTION\nThe transaction has been serialized to file, so replay all the spooled\noperations.\n\n~~~\n\n55. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n+ /* unlink the files with serialized changes and subxact info */\n+ stream_cleanup_files(MyLogicalRepWorker->subid, xid);\n\nUppercase comment.\n\n======\n\n56. src/backend/utils/misc/guc.c\n\n@@ -3220,6 +3220,18 @@ static struct config_int ConfigureNamesInt[] =\n NULL, NULL, NULL\n },\n\n+ {\n+ {\"max_apply_bgworkers_per_subscription\",\n+ PGC_SIGHUP,\n+ REPLICATION_SUBSCRIBERS,\n+ gettext_noop(\"Maximum number of apply backgrand workers per subscription.\"),\n+ NULL,\n+ },\n+ &max_apply_bgworkers_per_subscription,\n+ 3, 0, MAX_BACKENDS,\n+ NULL, NULL, NULL\n+ },\n+\n\n\"backgrand\" -> \"background\"\n\n======\n\n57. src/include/catalog/pg_subscription.h\n\n@@ -109,7 +110,7 @@ typedef struct Subscription\n bool enabled; /* Indicates if the subscription is enabled */\n bool binary; /* Indicates if the subscription wants data in\n * binary format */\n- bool stream; /* Allow streaming in-progress transactions. */\n+ char stream; /* Allow streaming in-progress transactions. 
*/\n char twophasestate; /* Allow streaming two-phase transactions */\n bool disableonerr; /* Indicates if the subscription should be\n * automatically disabled if a worker error\n\nI felt probably this 'stream' comment should be the same as for 'substream'.\n\n======\n\n58. src/include/replication/worker_internal.h\n\n+/*\n+ * Shared information among apply workers.\n+ */\n+typedef struct ApplyBgworkerShared\n\nSUGGESTION (maybe you can do better than this)\nStruct for sharing information between apply main and apply background workers.\n\n~~~\n\n59. src/include/replication/worker_internal.h\n\n+ /* Status for apply background worker. */\n+ ApplyBgworkerStatus status;\n\n\"Status for\" -> \"Status of\"\n\n~~~\n\n60. src/include/replication/worker_internal.h\n\n+extern PGDLLIMPORT MemoryContext ApplyMessageContext;\n+\n+extern PGDLLIMPORT ApplyErrorCallbackArg apply_error_callback_arg;\n+\n+extern PGDLLIMPORT bool MySubscriptionValid;\n+\n+extern PGDLLIMPORT volatile ApplyBgworkerShared *MyParallelState;\n+extern PGDLLIMPORT List *subxactlist;\n+\n\nI did not recognise the significance of why the last 2 externs are\ngrouped together but the others are not.\n\n~~~\n\n61. src/include/replication/worker_internal.h\n\n+/* prototype needed because of stream_commit */\n+extern void apply_dispatch(StringInfo s);\n\n61a.\nI was unsure if this comment is useful to anyone...\n\n61b.\nIf you decide to keep it, please use uppercase.\n\n~~~\n\n62. src/include/replication/worker_internal.h\n\n+/* apply background worker setup and interactions */\n+extern ApplyBgworkerState *apply_bgworker_find_or_start(TransactionId xid,\n+ bool start);\n\nUppercase comment.\n\n======\n\n63.\n\nI also did a quick check of all the new debug logging added. 
Here is\neverything from patch v11-0001.\n\napply_bgworker_free:\n+ elog(DEBUG1, \"adding finished apply worker #%u for xid %u to the idle list\",\n+ wstate->pstate->n, wstate->pstate->stream_xid);\n\nLogicalApplyBgwLoop:\n+ elog(DEBUG1, \"[Apply BGW #%u] ended processing streaming chunk,\"\n+ \"waiting on shm_mq_receive\", pst->n);\n\n+ elog(DEBUG1, \"[Apply BGW #%u] exiting\", pst->n);\n\nApplyBgworkerMain:\n+ elog(DEBUG1, \"[Apply BGW #%u] started\", pst->n);\n\napply_bgworker_setup:\n+ elog(DEBUG1, \"setting up apply worker #%u\",\nlist_length(ApplyWorkersList) + 1);\n\napply_bgworker_set_status:\n+ elog(DEBUG1, \"[Apply BGW #%u] set status to %d\", MyParallelState->n, status);\n\napply_bgworker_subxact_info_add:\n+ elog(DEBUG1, \"[Apply BGW #%u] defining savepoint %s\",\n+ MyParallelState->n, spname);\n\napply_handle_stream_prepare:\n+ elog(DEBUG1, \"received prepare for streamed transaction %u\",\n+ prepare_data.xid);\n\napply_handle_stream_start:\n+ elog(DEBUG1, \"starting streaming of xid %u\", stream_xid);\n\napply_handle_stream_stop:\n+ elog(DEBUG1, \"stopped streaming of xid %u, %u changes streamed\",\nstream_xid, nchanges);\n\napply_handle_stream_abort:\n+ elog(DEBUG1, \"[Apply BGW #%u] aborting current transaction xid=%u, subxid=%u\",\n+ MyParallelState->n, GetCurrentTransactionIdIfAny(),\n+ GetCurrentSubTransactionId());\n\n+ elog(DEBUG1, \"[Apply BGW #%u] rolling back to savepoint %s\",\n+ MyParallelState->n, spname);\n\napply_handle_stream_commit:\n+ elog(DEBUG1, \"received commit for streamed transaction %u\", xid);\n\n\nObservations:\n\n63a.\nEvery newly introduced message is at level DEBUG1 (not DEBUG). AFAIK\nthis is OK, because the messages are all protocol related and every\nother existing debug message of the current replication worker.c was\nalso at the same DEBUG1 level.\n\n63b.\nThe prefix \"[Apply BGW #%u]\" is used to indicate the bgworker is\nexecuting the code, but it does not seem to be used 100% consistently\n- e.g. 
there are some apply_bgworker_XXX functions not using this\nprefix. Is that OK or a mistake?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 21 Jun 2022 11:41:20 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jun 21, 2022 at 7:11 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for the v11-0001 patch.\n>\n> (I will review the remaining patches 0002-0005 and post any comments later)\n>\n> ======\n>\n> 1. General\n>\n> I still feel that 'apply' seems like a meaningless enum value for this\n> feature because from a user point-of-view every replicated change gets\n> \"applied\". IMO something like 'streaming = parallel' or 'streaming =\n> background' (etc) might have more meaning for a user.\n>\n\n+1. I would prefer 'streaming = parallel' as that suits here because\nwe allow streams (set of changes) of a transaction to be applied in\nparallel to other transactions or in parallel to a stream of changes\nfrom another streaming transaction.\n\n> ======\n>\n> 10. src/backend/access/transam/xact.c\n>\n> @@ -1741,6 +1742,13 @@ RecordTransactionAbort(bool isSubXact)\n> elog(PANIC, \"cannot abort transaction %u, it was already committed\",\n> xid);\n>\n> + /*\n> + * Are we using the replication origins feature? 
Or, in other words,\n> + * are we replaying remote actions?\n> + */\n> + replorigin = (replorigin_session_origin != InvalidRepOriginId &&\n> + replorigin_session_origin != DoNotReplicateId);\n> +\n> /* Fetch the data we need for the abort record */\n> nrels = smgrGetPendingDeletes(false, &rels);\n> nchildren = xactGetCommittedChildren(&children);\n> @@ -1765,6 +1773,11 @@ RecordTransactionAbort(bool isSubXact)\n> MyXactFlags, InvalidTransactionId,\n> NULL);\n>\n> + if (replorigin)\n> + /* Move LSNs forward for this replication origin */\n> + replorigin_session_advance(replorigin_session_origin_lsn,\n> + XactLastRecEnd);\n> +\n>\n> I did not see any reason why the code assigning the 'replorigin' and\n> the code checking the 'replorigin' are separated like they are. I\n> thought these 2 new code fragments should be kept together. Perhaps it\n> was decided this assignment must be outside the critical section? But\n> if that’s the case maybe a comment explaining so would be good.\n>\n\nI also don't see any particular reason for this apart from being\nsimilar to RecordTransactionCommit(). I think it should be fine either\nway.\n\n> ~~~\n>\n> 11. src/backend/access/transam/xact.c\n>\n> + if (replorigin)\n> + /* Move LSNs forward for this replication origin */\n> + replorigin_session_advance(replorigin_session_origin_lsn,\n> +\n>\n> The positioning of that comment is unusual. Maybe better before the check?\n>\n\nThis again seems to be due to a similar code in\nRecordTransactionCommit(). 
I would suggest let's keep the code\nconsistent.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Jun 2022 09:54:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "FYI - the latest patch set v12* on this thread no longer applies.\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\nv12-0003-A-temporary-patch-that-includes-patch-in-another.patch\nerror: patch failed: src/backend/replication/logical/relation.c:307\nerror: src/backend/replication/logical/relation.c: patch does not apply\nerror: patch failed: src/backend/replication/logical/worker.c:2358\nerror: src/backend/replication/logical/worker.c: patch does not apply\nerror: patch failed: src/test/subscription/t/013_partition.pl:868\nerror: src/test/subscription/t/013_partition.pl: patch does not apply\n[postgres@CentOS7-x64 oss_postgres_misc]$\n\n~~\n\nI know the v12-0003 was meant just a temporary patch for something\nthat may now already be pushed, but it cannot be just skipped either\nbecause then v12-0004 will also fail.\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\nv12-0004-Add-some-checks-before-using-apply-background-wo.patch\nerror: patch failed: src/backend/replication/logical/relation.c:433\nerror: src/backend/replication/logical/relation.c: patch does not apply\nerror: patch failed: src/backend/replication/logical/worker.c:2403\nerror: src/backend/replication/logical/worker.c: patch does not apply\n[postgres@CentOS7-x64 oss_postgres_misc]$\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Jun 2022 12:47:55 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v12-0002\n\n======\n\n1. 
Commit message\n\n\"streaming\" option -> \"streaming\" parameter\n\n~~~\n\n2. General (every file in this patch)\n\n\"streaming\" option -> \"streaming\" parameter\n\n~~~\n\n3. .../subscription/t/022_twophase_cascade.pl\n\nFor every test file in this patch the new function is passed $is_apply\n= 0/1 to indicate to use 'on' or 'apply' parameter value. But in this\ntest file the parameter is passed as $streaming_mode = 'on'/'apply'.\n\nI was wondering if (for the sake of consistency) it might be better to\nuse the same parameter kind for all of the test files. Actually, I\ndon't care if you choose to do nothing and leave this as-is; I am just\nposting this review comment in case it was not a deliberate decision\nto implement them differently.\n\ne.g.\n+ my ($node_publisher, $node_subscriber, $appname, $is_apply) = @_;\n\nversus\n+ my ($node_A, $node_B, $node_C, $appname_B, $appname_C, $streaming_mode) =\n+ @_;\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Jun 2022 16:50:03 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jun 20, 2022 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> I have improved the comments in this and other related sections of the\r\n> patch. See attached.\r\nThanks for your comments and patch!\r\nImproved the comments as you suggested.\r\n\r\n> > > 3.\r\n> > > +\r\n> > > + <para>\r\n> > > + Setting streaming mode to <literal>apply</literal> could export invalid\r\n> LSN\r\n> > > + as finish LSN of failed transaction. 
Changing the streaming mode and\r\n> making\r\n> > > + the same conflict writes the finish LSN of the failed transaction in the\r\n> > > + server log if required.\r\n> > > + </para>\r\n> > >\r\n> > > How will the user identify that this is an invalid LSN value and she\r\n> > > shouldn't use it to SKIP the transaction? Can we change the second\r\n> > > sentence to: \"User should change the streaming mode to 'on' if they\r\n> > > would instead wish to see the finish LSN on error. Users can use\r\n> > > finish LSN to SKIP applying the transaction.\" I think we can give\r\n> > > reference to docs where the SKIP feature is explained.\r\n> > Improved the sentence as suggested.\r\n> >\r\n> \r\n> You haven't answered first part of the comment: \"How will the user\r\n> identify that this is an invalid LSN value and she shouldn't use it to\r\n> SKIP the transaction?\". Have you checked what value it displays? For\r\n> example, in one of the case in apply_error_callback as shown in below\r\n> code, we don't even display finish LSN if it is invalid.\r\n> else if (XLogRecPtrIsInvalid(errarg->finish_lsn))\r\n> errcontext(\"processing remote data for replication origin \\\"%s\\\"\r\n> during \\\"%s\\\" in transaction %u\",\r\n> errarg->origin_name,\r\n> logicalrep_message_type(errarg->command),\r\n> errarg->remote_xid);\r\nI am sorry that I missed something in my previous reply.\r\nThe invalid LSN value here is to say InvalidXLogRecPtr (0/0).\r\nHere is an example :\r\n```\r\n2022-06-23 14:30:11.343 CST [822333] logical replication worker CONTEXT: processing remote data for replication origin \"pg_16389\" during \"INSERT\" for replication target relation \"public.tab\" in transaction 727 finished at 0/0\r\n```\r\nSo I try to improve the sentence in pg-doc by changing from\r\n```\r\nSetting streaming mode to <literal>apply</literal> could export invalid LSN as\r\nfinish LSN of failed transaction.\r\n```\r\nto \r\n```\r\nSetting streaming mode to <literal>apply</literal> could 
export invalid LSN\r\n(0/0) as finish LSN of failed transaction.\r\n```\r\n\r\nI also improved the patches as you suggested in [1]:\r\n> 1.\r\n> +/*\r\n> + * Count the number of registered (not necessarily running) apply background\r\n> + * worker for a subscription.\r\n> + */\r\n> \r\n> /worker/workers\r\nImproved as suggested.\r\n\r\n> 2.\r\n> +static void\r\n> +apply_bgworker_setup_dsm(ApplyBgworkerState *wstate)\r\n> +{\r\n> ...\r\n> ...\r\n> + int64 queue_size = 160000000; /* 16 MB for now */\r\n> \r\n> I think it would be better to use define for this rather than a\r\n> hard-coded value.\r\nImproved as suggested.\r\nAdded a macro like this:\r\n```\r\n/* queue size of DSM, 16 MB for now. */\r\n#define DSM_QUEUE_SIZE\t160000000\r\n```\r\n\r\n> 3.\r\n> +/*\r\n> + * Status for apply background worker.\r\n> + */\r\n> +typedef enum ApplyBgworkerStatus\r\n> +{\r\n> + APPLY_BGWORKER_ATTACHED = 0,\r\n> + APPLY_BGWORKER_READY,\r\n> + APPLY_BGWORKER_BUSY,\r\n> + APPLY_BGWORKER_FINISHED,\r\n> + APPLY_BGWORKER_EXIT\r\n> +} ApplyBgworkerStatus;\r\n> \r\n> It would be better if you can add comments to explain each of these states.\r\nImproved as suggested.\r\nAdded the comments like below:\r\n```\r\nAPPLY_BGWORKER_BUSY = 0,\t\t\t/* assigned to a transaction */\r\nAPPLY_BGWORKER_FINISHED,\t\t/* transaction is completed */\r\nAPPLY_BGWORKER_EXIT\t\t\t\t/* exit */\r\n```\r\nIn addition, after improving the point #7 as you suggested, I removed\r\n\"APPLY_BGWORKER_ATTACHED\". And I removed \"APPLY_BGWORKER_READY\" in v12.\r\n\r\n> 4.\r\n> + /* Set up one message queue per worker, plus one. */\r\n> + mq = shm_mq_create(shm_toc_allocate(toc, (Size) queue_size),\r\n> + (Size) queue_size);\r\n> + shm_toc_insert(toc, APPLY_BGWORKER_KEY_MQ, mq);\r\n> + shm_mq_set_sender(mq, MyProc);\r\n> \r\n> \r\n> I don't understand the meaning of 'plus one' in the above comment as\r\n> the patch seems to be setting up just one queue here?\r\nYes, you are right. 
Improved as below:\r\n```\r\n/* Set up message queue for the worker. */\r\n```\r\n\r\n> 5.\r\n> +\r\n> + /* Attach the queues. */\r\n> + wstate->mq_handle = shm_mq_attach(mq, seg, NULL);\r\n> \r\n> Similar to above. If there is only one queue then the comment should\r\n> say queue instead of queues.\r\nImproved as suggested.\r\n\r\n> 6.\r\n> snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n> \"logical replication worker for subscription %u\", subid);\r\n> + else\r\n> + snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n> + \"logical replication background apply worker for subscription %u \", subid);\r\n> \r\n> No need for extra space after %u in the above code.\r\nImproved as suggested.\r\n\r\n> 7.\r\n> + launched = logicalrep_worker_launch(MyLogicalRepWorker->dbid,\r\n> + MySubscription->oid,\r\n> + MySubscription->name,\r\n> + MyLogicalRepWorker->userid,\r\n> + InvalidOid,\r\n> + dsm_segment_handle(wstate->dsm_seg));\r\n> +\r\n> + if (launched)\r\n> + {\r\n> + /* Wait for worker to attach. */\r\n> + apply_bgworker_wait_for(wstate, APPLY_BGWORKER_ATTACHED);\r\n> \r\n> In logicalrep_worker_launch(), we already seem to be waiting for\r\n> workers to attach via WaitForReplicationWorkerAttach(), so it is not\r\n> clear to me why we need to wait again? If there is a genuine reason\r\n> then it is better to add some comments to explain it. I think in some\r\n> way, we need to know if the worker is successfully attached and we may\r\n> not get that via WaitForReplicationWorkerAttach, so there needs to be\r\n> some way to know that but this doesn't sound like a very good idea. 
If\r\n> that understanding is correct then can we think of a better way?\r\nImproved the related logic.\r\nThe reason we wait again here in previous version is to wait for apply bgworker\r\nto attach the memory queue, but function WaitForReplicationWorkerAttach could\r\nnot do that.\r\nNow to improve this, we invoke the function logicalrep_worker_attach after the\r\nattaching the memory queue instead of before.\r\nAlso to make sure worker has not die due to error or some reasons, I modified\r\nthe function logicalrep_worker_launch and function\r\nWaitForReplicationWorkerAttach. And then, we could judge whether the worker\r\nstarted successfully or died according to the return value of the function\r\nlogicalrep_worker_launch.\r\n\r\n> 8. I think we can simplify apply_bgworker_find_or_start by having\r\n> separate APIs for find and start. Most of the places need to use find\r\n> API except for the first stream. If we do that then I think you don't\r\n> need to make a hash entry unless we established ApplyBgworkerState\r\n> which currently looks odd as you need to remove the entry if we fail\r\n> to allocate the state.\r\nImproved as suggested.\r\n\r\n> 9.\r\n> + /*\r\n> + * TO IMPROVE: Do we need to display the apply background worker's\r\n> + * information in pg_stat_replication ?\r\n> + */\r\n> + UpdateWorkerStats(last_received, send_time, false);\r\n> \r\n> In this do you mean to say pg_stat_subscription? If so, then to decide\r\n> whether we need to update stats here we should see what additional\r\n> information we can update here which is not possible via the main\r\n> apply worker?\r\nYes, it should be pg_stat_subscription. I think we do not need to update these\r\nstatistics here.\r\nI think the messages received in function LogicalApplyBgwLoop in apply bgworker\r\nhave handled in function LogicalRepApplyLoop in apply worker, these statistics\r\nhave been updated. 
(see function LogicalRepApplyLoop)\r\n\r\n> 10.\r\n> ApplyBgworkerMain\r\n> {\r\n> ...\r\n> + /* Load the subscription into persistent memory context. */\r\n> + ApplyContext = AllocSetContextCreate(TopMemoryContext,\r\n> ...\r\n> \r\n> This comment seems to be copied from ApplyWorkerMain but doesn't apply\r\n> here.\r\nYes, you are right. Improved as below:\r\n```\r\n/* Init the memory context for the apply background worker to work in. */\r\n```\r\n\r\nIn addition, I also tried to improve the patches in the following ways:\r\na.\r\nIn the function apply_handle_stream_abort, when invoking the function\r\nset_apply_error_context_xact, I forgot to change the second input parameter.\r\nSo changed \"InvalidXLogRecPtr\" to \"abort_lsn\".\r\nb.\r\nImproved the function name from \"canstartapplybgworker\" to\r\n\"apply_bgworker_can_start\".\r\nc.\r\nDetach the dsm segment if we fail to launch an apply bgworker. (see function\r\napply_bgworker_setup)\r\n\r\nBTW, I deleted the temporary patch 0003 (v12) and rebased the patches because\r\nof the commits 26b3455afa and ac0e2d387a in HEAD.\r\nAnd now, I am improving the patches as suggested by Peter-san in [3]. 
I will\r\nsend new patches soon.\r\n\r\nAttach the new patches.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BQQHGb0afmM_Cf2qu%3DUJoCnvs3VcZ%2B1xTiySx205fU1w%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/OS3PR01MB6275208A2F8ED832710F65E09EA49%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n[3] - https://www.postgresql.org/message-id/CAHut%2BPtu_eWOVWAKrwkUFdTAh_r-RZsbDFkFmKwEAmxws%3DSh5w%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 23 Jun 2022 07:21:43 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jun 23, 2022 at 12:51 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jun 20, 2022 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have improved the comments in this and other related sections of the\n> > patch. See attached.\n> Thanks for your comments and patch!\n> Improved the comments as you suggested.\n>\n> > > > 3.\n> > > > +\n> > > > + <para>\n> > > > + Setting streaming mode to <literal>apply</literal> could export invalid\n> > LSN\n> > > > + as finish LSN of failed transaction. Changing the streaming mode and\n> > making\n> > > > + the same conflict writes the finish LSN of the failed transaction in the\n> > > > + server log if required.\n> > > > + </para>\n> > > >\n> > > > How will the user identify that this is an invalid LSN value and she\n> > > > shouldn't use it to SKIP the transaction? Can we change the second\n> > > > sentence to: \"User should change the streaming mode to 'on' if they\n> > > > would instead wish to see the finish LSN on error. 
Users can use\n> > > > finish LSN to SKIP applying the transaction.\" I think we can give\n> > > > reference to docs where the SKIP feature is explained.\n> > > Improved the sentence as suggested.\n> > >\n> >\n> > You haven't answered first part of the comment: \"How will the user\n> > identify that this is an invalid LSN value and she shouldn't use it to\n> > SKIP the transaction?\". Have you checked what value it displays? For\n> > example, in one of the case in apply_error_callback as shown in below\n> > code, we don't even display finish LSN if it is invalid.\n> > else if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > during \\\"%s\\\" in transaction %u\",\n> > errarg->origin_name,\n> > logicalrep_message_type(errarg->command),\n> > errarg->remote_xid);\n> I am sorry that I missed something in my previous reply.\n> The invalid LSN value here is to say InvalidXLogRecPtr (0/0).\n> Here is an example :\n> ```\n> 2022-06-23 14:30:11.343 CST [822333] logical replication worker CONTEXT: processing remote data for replication origin \"pg_16389\" during \"INSERT\" for replication target relation \"public.tab\" in transaction 727 finished at 0/0\n> ```\n>\n\nI don't think it is a good idea to display invalid values. We can mask\nthis as we are doing in other cases in function apply_error_callback.\nThe ideal way is that we provide a view/system table for users to\ncheck these errors but that is a matter of another patch. 
So users\nprobably need to check Logs to see if the error is from a background\napply worker to decide whether or not to switch streaming mode.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 23 Jun 2022 14:13:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jun 23, 2022 at 16:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Jun 23, 2022 at 12:51 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Mon, Jun 20, 2022 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > I have improved the comments in this and other related sections of the\r\n> > > patch. See attached.\r\n> > Thanks for your comments and patch!\r\n> > Improved the comments as you suggested.\r\n> >\r\n> > > > > 3.\r\n> > > > > +\r\n> > > > > + <para>\r\n> > > > > + Setting streaming mode to <literal>apply</literal> could export invalid\r\n> > > LSN\r\n> > > > > + as finish LSN of failed transaction. Changing the streaming mode and\r\n> > > making\r\n> > > > > + the same conflict writes the finish LSN of the failed transaction in the\r\n> > > > > + server log if required.\r\n> > > > > + </para>\r\n> > > > >\r\n> > > > > How will the user identify that this is an invalid LSN value and she\r\n> > > > > shouldn't use it to SKIP the transaction? Can we change the second\r\n> > > > > sentence to: \"User should change the streaming mode to 'on' if they\r\n> > > > > would instead wish to see the finish LSN on error. 
Users can use\r\n> > > > > finish LSN to SKIP applying the transaction.\" I think we can give\r\n> > > > > reference to docs where the SKIP feature is explained.\r\n> > > > Improved the sentence as suggested.\r\n> > > >\r\n> > >\r\n> > > You haven't answered first part of the comment: \"How will the user\r\n> > > identify that this is an invalid LSN value and she shouldn't use it to\r\n> > > SKIP the transaction?\". Have you checked what value it displays? For\r\n> > > example, in one of the case in apply_error_callback as shown in below\r\n> > > code, we don't even display finish LSN if it is invalid.\r\n> > > else if (XLogRecPtrIsInvalid(errarg->finish_lsn))\r\n> > > errcontext(\"processing remote data for replication origin \\\"%s\\\"\r\n> > > during \\\"%s\\\" in transaction %u\",\r\n> > > errarg->origin_name,\r\n> > > logicalrep_message_type(errarg->command),\r\n> > > errarg->remote_xid);\r\n> > I am sorry that I missed something in my previous reply.\r\n> > The invalid LSN value here is to say InvalidXLogRecPtr (0/0).\r\n> > Here is an example :\r\n> > ```\r\n> > 2022-06-23 14:30:11.343 CST [822333] logical replication worker CONTEXT:\r\n> processing remote data for replication origin \"pg_16389\" during \"INSERT\" for\r\n> replication target relation \"public.tab\" in transaction 727 finished at 0/0\r\n> > ```\r\n> >\r\n> \r\n> I don't think it is a good idea to display invalid values. We can mask\r\n> this as we are doing in other cases in function apply_error_callback.\r\n> The ideal way is that we provide a view/system table for users to\r\n> check these errors but that is a matter of another patch. So users\r\n> probably need to check Logs to see if the error is from a background\r\n> apply worker to decide whether or not to switch streaming mode.\r\n\r\nThanks for your comments.\r\nI improved it as you suggested. 
I mask the LSN if it is an invalid LSN (0/0).\r\nAlso, I improved the related pg-doc as follows:\r\n```\r\n When the streaming mode is <literal>parallel</literal>, the finish LSN of\r\n failed transactions may not be logged. In that case, it may be necessary to\r\n change the streaming mode to <literal>on</literal> and cause the same\r\n conflicts again so the finish LSN of the failed transaction will be written\r\n to the server log. For the usage of finish LSN, please refer to <link\r\n linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\n SKIP</command></link>.\r\n```\r\nAfter improving this (mask invalid LSN), I found that this improvement and\r\nthe parallel apply patch do not seem to have a strong correlation. Would it be\r\nbetter to improve and commit in another separate patch?\r\n\r\n\r\nI also improved the patches as suggested by Peter-san in [1] and [2].\r\nThanks to Shi Yu for improving the patches by addressing the comments in [2].\r\n\r\nAttach the new patches.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPtu_eWOVWAKrwkUFdTAh_r-RZsbDFkFmKwEAmxws%3DSh5w%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAHut%2BPsDzRu6PD1uSRkftRXef-KwrOoYrcq7Cm0v4otisi5M%2Bg%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 28 Jun 2022 03:21:33 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jun 21, 2022 at 9:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for the v11-0001 patch.\r\n> \r\n> (I will review the remaining patches 0002-0005 and post any comments later)\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> 6. 
doc/src/sgml/protocol.sgml\r\n> \r\n> Since there are protocol changes made here, shouldn’t there also be\r\n> some corresponding LOGICALREP_PROTO_XXX constants and special checking\r\n> added in the worker.c?\r\n\r\nI think it is okay not to add a new macro, because we just expanded the existing\r\noption (\"streaming\"), and we added a check for the version in function\r\napply_handle_stream_abort.\r\n\r\n> 8. doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> + <para>\r\n> + If set to <literal>on</literal>, the changes of transaction are\r\n> + written to temporary files and then applied at once after the\r\n> + transaction is committed on the publisher.\r\n> + </para>\r\n> \r\n> SUGGESTION\r\n> If set to on, the incoming changes are written to a temporary file and\r\n> then applied only after the transaction is committed on the publisher.\r\n\r\nIn \"on\" mode, there may be more than one temporary file for one streaming\r\ntransaction (see the invocation of function BufFileCreateFileSet in function\r\nstream_open_file and function subxact_info_write).\r\nSo I think the existing description might be better.\r\nIf you feel this sentence is not clear, I will try to improve it later.\r\n\r\n> 10. src/backend/access/transam/xact.c\r\n> \r\n> @@ -1741,6 +1742,13 @@ RecordTransactionAbort(bool isSubXact)\r\n> elog(PANIC, \"cannot abort transaction %u, it was already committed\",\r\n> xid);\r\n> \r\n> + /*\r\n> + * Are we using the replication origins feature? 
Or, in other words,\r\n> + * are we replaying remote actions?\r\n> + */\r\n> + replorigin = (replorigin_session_origin != InvalidRepOriginId &&\r\n> + replorigin_session_origin != DoNotReplicateId);\r\n> +\r\n> /* Fetch the data we need for the abort record */\r\n> nrels = smgrGetPendingDeletes(false, &rels);\r\n> nchildren = xactGetCommittedChildren(&children);\r\n> @@ -1765,6 +1773,11 @@ RecordTransactionAbort(bool isSubXact)\r\n> MyXactFlags, InvalidTransactionId,\r\n> NULL);\r\n> \r\n> + if (replorigin)\r\n> + /* Move LSNs forward for this replication origin */\r\n> + replorigin_session_advance(replorigin_session_origin_lsn,\r\n> + XactLastRecEnd);\r\n> +\r\n> \r\n> I did not see any reason why the code assigning the 'replorigin' and\r\n> the code checking the 'replorigin' are separated like they are. I\r\n> thought these 2 new code fragments should be kept together. Perhaps it\r\n> was decided this assignment must be outside the critical section? But\r\n> if that’s the case maybe a comment explaining so would be good.\r\n> \r\n> ~~~\r\n> \r\n> 11. src/backend/access/transam/xact.c\r\n> \r\n> + if (replorigin)\r\n> + /* Move LSNs forward for this replication origin */\r\n> + replorigin_session_advance(replorigin_session_origin_lsn,\r\n> +\r\n> \r\n> The positioning of that comment is unusual. Maybe better before the check?\r\n\r\nAs Amit-san said in [1], this is just for consistency with the code in the\r\nfunction RecordTransactionCommit.\r\n\r\n> 12. src/backend/commands/subscriptioncmds.c - defGetStreamingMode\r\n> \r\n> + /*\r\n> + * If no parameter given, assume \"true\" is meant.\r\n> + */\r\n> + if (def->arg == NULL)\r\n> + return SUBSTREAM_ON;\r\n> \r\n> SUGGESTION for comment\r\n> If the streaming parameter is given but no parameter value is\r\n> specified, then assume \"true\" is meant.\r\n\r\nI think it might be better to be consistent with the function defGetBoolean\r\nhere.\r\n\r\n> 24. 
.../replication/logical/applybgworker.c - LogicalApplyBgwLoop\r\n> \r\n> +/* Apply Background Worker main loop */\r\n> +static void\r\n> +LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared\r\n> *pst)\r\n> \r\n> Why is the name inconsistent with other function names in the file?\r\n> Should it be apply_bgworker_loop?\r\n\r\nI think it is better for this function name to be consistent with the function\r\nLogicalRepApplyLoop.\r\n\r\n> 28. .../replication/logical/applybgworker.c - LogicalApplyBgwMain\r\n> \r\n> For consistency should it be called apply_bgworker_main?\r\n\r\nI think it is better for this function name to be consistent with the function\r\nApplyWorkerMain.\r\n\r\n> 30. src/backend/replication/logical/decode.c\r\n> \r\n> @@ -651,9 +651,10 @@ DecodeCommit(LogicalDecodingContext *ctx,\r\n> XLogRecordBuffer *buf,\r\n> {\r\n> for (i = 0; i < parsed->nsubxacts; i++)\r\n> {\r\n> - ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr);\r\n> + ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr,\r\n> + commit_time);\r\n> }\r\n> - ReorderBufferForget(ctx->reorder, xid, buf->origptr);\r\n> + ReorderBufferForget(ctx->reorder, xid, buf->origptr, commit_time);\r\n> \r\n> ReorderBufferForget was declared with 'abort_time' param. So it makes\r\n> these calls a bit confusing looking to be passing 'commit_time'\r\n> \r\n> Maybe better to do like below and pass 'forget_time' (inside that\r\n> 'if') along with an explanatory comment:\r\n> \r\n> TimestampTz forget_time = commit_time;\r\n\r\nI did not change this. I am just not sure how much this will help.\r\n\r\n> 36. src/backend/replication/logical/launcher.c -\r\n> logicalrep_apply_background_worker_count\r\n> \r\n> + int res = 0;\r\n> +\r\n> \r\n> A better variable name here would be 'count', or even 'n'.\r\n\r\nI think it is better for this variable name to be consistent with the function\r\nlogicalrep_sync_worker_count.\r\n\r\n> 38. 
src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\r\n> \r\n> + /*\r\n> + * If the version of the publisher is lower than the version of the\r\n> + * subscriber, it may not support sending these two fields, so only take\r\n> + * these fields when include_abort_lsn is true.\r\n> + */\r\n> + if (include_abort_lsn)\r\n> + {\r\n> + abort_data->abort_lsn = pq_getmsgint64(in);\r\n> + abort_data->abort_time = pq_getmsgint64(in);\r\n> + }\r\n> + else\r\n> + {\r\n> + abort_data->abort_lsn = InvalidXLogRecPtr;\r\n> + abort_data->abort_time = 0;\r\n> + }\r\n> \r\n> This comment is documenting a decision that was made elsewhere.\r\n> \r\n> But it somehow feels wrong to me that the decision to read or not read\r\n> the abort time/lsn is made by the caller of this function. IMO it\r\n> might make more sense if the server version was simply passed as a\r\n> param and then this function can be in control of its own destiny and\r\n> make the decision does it need to read those extra fields or not. An\r\n> extra member flag can be added to LogicalRepStreamAbortData to\r\n> indicate if abort_data read these values or not.\r\n\r\nI understand what you mean. But I am not sure if it is appropriate to introduce\r\nversion information in the file proto.c just for the STREAM_ABORT message. And\r\nI think it might complicate the file proto.c if we introduce version\r\ninformation. Also, I think it might not be a good idea to add a flag to\r\nLogicalRepStreamAbortData (there is no similar flag in structure\r\nLogicalRep.*Data).\r\nSo, I just introduced a flag to decide whether we should read these fields from\r\nthe STREAM_ABORT message.\r\n\r\n> 41. 
src/backend/replication/logical/worker.c\r\n> \r\n> -static ApplyErrorCallbackArg apply_error_callback_arg =\r\n> +ApplyErrorCallbackArg apply_error_callback_arg =\r\n> {\r\n> .command = 0,\r\n> .rel = NULL,\r\n> @@ -242,7 +246,7 @@ static ApplyErrorCallbackArg apply_error_callback_arg =\r\n> .origin_name = NULL,\r\n> };\r\n> \r\n> Maybe it is still a good idea to at least keep the old comment here:\r\n> /* Struct for saving and restoring apply errcontext information */\r\n\r\nI think the old comment looks like it was for the structure\r\nApplyErrorCallbackArg, not the variable apply_error_callback_arg.\r\nSo I did not add new comments here for variable apply_error_callback_arg.\r\n\r\n> 42. src/backend/replication/logical/worker.c\r\n> \r\n> +/* check if we are applying the transaction in apply background worker */\r\n> +#define apply_bgworker_active() (in_streamed_transaction &&\r\n> stream_apply_worker != NULL)\r\n> \r\n> 42a.\r\n> Uppercase comment.\r\n> \r\n> 42b.\r\n> \"in apply background worker\" -> \"in apply background worker\"\r\n\r\n=> 42a.\r\nimproved as suggested.\r\n=> 42b.\r\nSorry, I am not sure what you mean.\r\n\r\n> 43. 
src/backend/replication/logical/worker.c - handle_streamed_transaction\r\n> \r\n> @@ -426,41 +437,76 @@ end_replication_step(void)\r\n> }\r\n> \r\n> /*\r\n> - * Handle streamed transactions.\r\n> + * Handle streamed transactions for both main apply worker and apply\r\n> background\r\n> + * worker.\r\n> *\r\n> - * If in streaming mode (receiving a block of streamed transaction), we\r\n> - * simply redirect it to a file for the proper toplevel transaction.\r\n> + * In streaming case (receiving a block of streamed transaction), for\r\n> + * SUBSTREAM_ON mode, we simply redirect it to a file for the proper toplevel\r\n> + * transaction, and for SUBSTREAM_APPLY mode, we send the changes to\r\n> background\r\n> + * apply worker (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE\r\n> changes will\r\n> + * also be applied in main apply worker).\r\n> *\r\n> - * Returns true for streamed transactions, false otherwise (regular mode).\r\n> + * For non-streamed transactions, returns false;\r\n> + * For streamed transactions, returns true if in main apply worker (except we\r\n> + * apply streamed transaction in \"apply\" mode and address\r\n> + * LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes), false\r\n> otherwise.\r\n> */\r\n> \r\n> Maybe it is accurate (I don’t know), but this header comment seems\r\n> excessively complicated with so many quirks about when to return\r\n> true/false. Can it be reworded into plainer language?\r\n\r\nImproved the comments like below:\r\n```\r\n * For non-streamed transactions, returns false;\r\n * For streamed transactions, returns true if in main apply worker, false\r\n * otherwise.\r\n *\r\n * But there are two exceptions: If we apply streamed transaction in main apply\r\n * worker with parallel mode, it will return false when we address\r\n * LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes.\r\n```\r\n\r\n> 46. 
src/backend/replication/logical/worker.c - apply_handle_stream_prepare\r\n> \r\n> + /*\r\n> + * This is the main apply worker and the transaction has been\r\n> + * serialized to file, replay all the spooled operations.\r\n> + */\r\n> \r\n> SUGGESTION\r\n> The transaction has been serialized to file. Replay all the spooled operations.\r\n\r\nBoth #46 and #54 seem to try to improve on the same comment. Personally I\r\nprefer the improvement in #54. So I improved this as suggested in #54.\r\n\r\n> 50. src/backend/replication/logical/worker.c - apply_handle_stream_abort\r\n> \r\n> + /* Check whether the publisher sends abort_lsn and abort_time. */\r\n> + if (am_apply_bgworker())\r\n> + include_abort_lsn = MyParallelState->server_version >= 150000;\r\n> +\r\n> + logicalrep_read_stream_abort(s, &abort_data, include_abort_lsn);\r\n> \r\n> Here is where I felt maybe just the server version could be passed so\r\n> the logicalrep_read_stream_abort could decide itself what message\r\n> parts needed to be read. Basically it seems strange that the message\r\n> contain parts which might not be read. I felt it is better to always\r\n> read the whole message then later you can choose what parts you are\r\n> interested in.\r\n\r\nPlease refer to the reply to #38.\r\nIn addition, we do not always read these two new fields from the STREAM_ABORT\r\nmessage, because if the subscriber's version is higher than the publisher's\r\nversion, it may try to read data that is in an invalid area.\r\nI think this is not correct behaviour.\r\n\r\n> 63.\r\n> \r\n> I also did a quick check of all the new debug logging added. 
Here is\r\n> everyhing from patch v11-0001.\r\n> \r\n> apply_bgworker_free:\r\n> + elog(DEBUG1, \"adding finished apply worker #%u for xid %u to the idle list\",\r\n> + wstate->pstate->n, wstate->pstate->stream_xid);\r\n> \r\n> LogicalApplyBgwLoop:\r\n> + elog(DEBUG1, \"[Apply BGW #%u] ended processing streaming chunk,\"\r\n> + \"waiting on shm_mq_receive\", pst->n);\r\n> \r\n> + elog(DEBUG1, \"[Apply BGW #%u] exiting\", pst->n);\r\n> \r\n> ApplyBgworkerMain:\r\n> + elog(DEBUG1, \"[Apply BGW #%u] started\", pst->n);\r\n> \r\n> apply_bgworker_setup:\r\n> + elog(DEBUG1, \"setting up apply worker #%u\",\r\n> list_length(ApplyWorkersList) + 1);\r\n> \r\n> apply_bgworker_set_status:\r\n> + elog(DEBUG1, \"[Apply BGW #%u] set status to %d\", MyParallelState->n,\r\n> status);\r\n> \r\n> apply_bgworker_subxact_info_add:\r\n> + elog(DEBUG1, \"[Apply BGW #%u] defining savepoint %s\",\r\n> + MyParallelState->n, spname);\r\n> \r\n> apply_handle_stream_prepare:\r\n> + elog(DEBUG1, \"received prepare for streamed transaction %u\",\r\n> + prepare_data.xid);\r\n> \r\n> apply_handle_stream_start:\r\n> + elog(DEBUG1, \"starting streaming of xid %u\", stream_xid);\r\n> \r\n> apply_handle_stream_stop:\r\n> + elog(DEBUG1, \"stopped streaming of xid %u, %u changes streamed\",\r\n> stream_xid, nchanges);\r\n> \r\n> apply_handle_stream_abort:\r\n> + elog(DEBUG1, \"[Apply BGW #%u] aborting current transaction xid=%u,\r\n> subxid=%u\",\r\n> + MyParallelState->n, GetCurrentTransactionIdIfAny(),\r\n> + GetCurrentSubTransactionId());\r\n> \r\n> + elog(DEBUG1, \"[Apply BGW #%u] rolling back to savepoint %s\",\r\n> + MyParallelState->n, spname);\r\n> \r\n> apply_handle_stream_commit:\r\n> + elog(DEBUG1, \"received commit for streamed transaction %u\", xid);\r\n> \r\n> \r\n> Observations:\r\n> \r\n> 63a.\r\n> Every new introduced message is at level DEBUG1 (not DEBUG). 
AFAIK\r\n> this is OK, because the messages are all protocol related and every\r\n> other existing debug message of the current replication worker.c was\r\n> also at the same DEBUG1 level.\r\n> \r\n> 63b.\r\n> The prefix \"[Apply BGW #%u]\" is used to indicate the bgworker is\r\n> executing the code, but it does not seem to be used 100% consistently\r\n> - e.g. there are some apply_bgworker_XXX functions not using this\r\n> prefix. Is that OK or a mistake?\r\n\r\nThanks for your check. I confirmed this point in v13. There are 5 functions\r\nthat do not use the prefix \"[Apply BGW #%u]\":\r\n```\r\napply_bgworker_free\r\napply_bgworker_setup\r\napply_bgworker_send_data\r\napply_bgworker_wait_for\r\napply_bgworker_check_status\r\n```\r\nThese 5 functions do not use this prefix because they only output logs in the apply\r\nworker. So I think it is okay.\r\n\r\n\r\nThe rest of the comments are improved as suggested.\r\nThe new patches were attached in [2].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1J9_jcLNVqmxt_d28uGi6hAV31wjYdgmg1p8BGuEctNpw%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/OS3PR01MB62758DBE8FA12BA72A43AC819EB89%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Tue, 28 Jun 2022 03:23:10 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jun 23, 2022 at 9:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v12-0002\r\n\r\nThanks for your comments.\r\n\r\n> 3. .../subscription/t/022_twophase_cascade.pl\r\n> \r\n> For every test file in this patch the new function is passed $is_apply\r\n> = 0/1 to indicate to use 'on' or 'apply' parameter value. 
But in this\r\n> test file the parameter is passed as $streaming_mode = 'on'/'apply'.\r\n> \r\n> I was wondering if (for the sake of consistency) it might be better to\r\n> use the same parameter kind for all of the test files. Actually, I\r\n> don't care if you choose to do nothing and leave this as-is; I am just\r\n> posting this review comment in case it was not a deliberate decision\r\n> to implement them differently.\r\n> \r\n> e.g.\r\n> + my ($node_publisher, $node_subscriber, $appname, $is_apply) = @_;\r\n> \r\n> versus\r\n> + my ($node_A, $node_B, $node_C, $appname_B, $appname_C,\r\n> $streaming_mode) =\r\n> + @_;\r\n\r\nThis is because in 022_twophase_cascade.pl, altering the subscription streaming\r\nmode is done inside test_streaming(), so it is more convenient to pass which\r\nstreaming mode we use (on or apply), and we can directly use that in the ALTER\r\nSUBSCRIPTION command.\r\nIn other files, we need to get the option because we only check the log in\r\napply mode, so I think it is sufficient to pass 'is_apply' (whose value is 0 or\r\n1).\r\nBecause of these differences, I did not change it.\r\n\r\nThe rest of the comments are improved as suggested.\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62758DBE8FA12BA72A43AC819EB89%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Tue, 28 Jun 2022 03:23:55 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jun 28, 2022 at 8:51 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thu, Jun 23, 2022 at 16:44 PM Amit Kapila <amit.kapila16@gmail.com>\n> 
wrote:\n> > > > I have improved the comments in this and other related sections of the\n> > > > patch. See attached.\n> > > Thanks for your comments and patch!\n> > > Improved the comments as you suggested.\n> > >\n> > > > > > 3.\n> > > > > > +\n> > > > > > + <para>\n> > > > > > + Setting streaming mode to <literal>apply</literal> could export invalid\n> > > > LSN\n> > > > > > + as finish LSN of failed transaction. Changing the streaming mode and\n> > > > making\n> > > > > > + the same conflict writes the finish LSN of the failed transaction in the\n> > > > > > + server log if required.\n> > > > > > + </para>\n> > > > > >\n> > > > > > How will the user identify that this is an invalid LSN value and she\n> > > > > > shouldn't use it to SKIP the transaction? Can we change the second\n> > > > > > sentence to: \"User should change the streaming mode to 'on' if they\n> > > > > > would instead wish to see the finish LSN on error. Users can use\n> > > > > > finish LSN to SKIP applying the transaction.\" I think we can give\n> > > > > > reference to docs where the SKIP feature is explained.\n> > > > > Improved the sentence as suggested.\n> > > > >\n> > > >\n> > > > You haven't answered first part of the comment: \"How will the user\n> > > > identify that this is an invalid LSN value and she shouldn't use it to\n> > > > SKIP the transaction?\". Have you checked what value it displays? 
For\n> > > > example, in one of the case in apply_error_callback as shown in below\n> > > > code, we don't even display finish LSN if it is invalid.\n> > > > else if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > > > errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > > > during \\\"%s\\\" in transaction %u\",\n> > > > errarg->origin_name,\n> > > > logicalrep_message_type(errarg->command),\n> > > > errarg->remote_xid);\n> > > I am sorry that I missed something in my previous reply.\n> > > The invalid LSN value here is to say InvalidXLogRecPtr (0/0).\n> > > Here is an example :\n> > > ```\n> > > 2022-06-23 14:30:11.343 CST [822333] logical replication worker CONTEXT:\n> > processing remote data for replication origin \"pg_16389\" during \"INSERT\" for\n> > replication target relation \"public.tab\" in transaction 727 finished at 0/0\n> > > ```\n> > >\n> >\n> > I don't think it is a good idea to display invalid values. We can mask\n> > this as we are doing in other cases in function apply_error_callback.\n> > The ideal way is that we provide a view/system table for users to\n> > check these errors but that is a matter of another patch. So users\n> > probably need to check Logs to see if the error is from a background\n> > apply worker to decide whether or not to switch streaming mode.\n>\n> Thanks for your comments.\n> I improved it as you suggested. I mask the LSN if it is invalid LSN(0/0).\n> Also, I improved the related pg-doc as following:\n> ```\n> When the streaming mode is <literal>parallel</literal>, the finish LSN of\n> failed transactions may not be logged. In that case, it may be necessary to\n> change the streaming mode to <literal>on</literal> and cause the same\n> conflicts again so the finish LSN of the failed transaction will be written\n> to the server log. 
For the usage of finish LSN, please refer to <link\n> linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\n> SKIP</command></link>.\n> ```\n> After improving this (mask invalid LSN), I found that this improvement and\n> parallel apply patch do not seem to have a strong correlation. Would it be\n> better to improve and commit in another separate patch?\n>\n\nIs there any other case where we can hit this code path (mask\ninvalidLSN) without this patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 28 Jun 2022 09:45:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Jun 28, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jun 28, 2022 at 8:51 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thu, Jun 23, 2022 at 16:44 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > On Thu, Jun 23, 2022 at 12:51 PM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Mon, Jun 20, 2022 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > > I have improved the comments in this and other related sections of the\r\n> > > > > patch. See attached.\r\n> > > > Thanks for your comments and patch!\r\n> > > > Improved the comments as you suggested.\r\n> > > >\r\n> > > > > > > 3.\r\n> > > > > > > +\r\n> > > > > > > + <para>\r\n> > > > > > > + Setting streaming mode to <literal>apply</literal> could export\r\n> invalid\r\n> > > > > LSN\r\n> > > > > > > + as finish LSN of failed transaction. 
Changing the streaming mode\r\n> and\r\n> > > > > making\r\n> > > > > > > + the same conflict writes the finish LSN of the failed transaction in\r\n> the\r\n> > > > > > > + server log if required.\r\n> > > > > > > + </para>\r\n> > > > > > >\r\n> > > > > > > How will the user identify that this is an invalid LSN value and she\r\n> > > > > > > shouldn't use it to SKIP the transaction? Can we change the second\r\n> > > > > > > sentence to: \"User should change the streaming mode to 'on' if they\r\n> > > > > > > would instead wish to see the finish LSN on error. Users can use\r\n> > > > > > > finish LSN to SKIP applying the transaction.\" I think we can give\r\n> > > > > > > reference to docs where the SKIP feature is explained.\r\n> > > > > > Improved the sentence as suggested.\r\n> > > > > >\r\n> > > > >\r\n> > > > > You haven't answered first part of the comment: \"How will the user\r\n> > > > > identify that this is an invalid LSN value and she shouldn't use it to\r\n> > > > > SKIP the transaction?\". Have you checked what value it displays? 
For\r\n> > > > > example, in one of the case in apply_error_callback as shown in below\r\n> > > > > code, we don't even display finish LSN if it is invalid.\r\n> > > > > else if (XLogRecPtrIsInvalid(errarg->finish_lsn))\r\n> > > > > errcontext(\"processing remote data for replication origin \\\"%s\\\"\r\n> > > > > during \\\"%s\\\" in transaction %u\",\r\n> > > > > errarg->origin_name,\r\n> > > > > logicalrep_message_type(errarg->command),\r\n> > > > > errarg->remote_xid);\r\n> > > > I am sorry that I missed something in my previous reply.\r\n> > > > The invalid LSN value here is to say InvalidXLogRecPtr (0/0).\r\n> > > > Here is an example :\r\n> > > > ```\r\n> > > > 2022-06-23 14:30:11.343 CST [822333] logical replication worker CONTEXT:\r\n> > > processing remote data for replication origin \"pg_16389\" during \"INSERT\" for\r\n> > > replication target relation \"public.tab\" in transaction 727 finished at 0/0\r\n> > > > ```\r\n> > > >\r\n> > >\r\n> > > I don't think it is a good idea to display invalid values. We can mask\r\n> > > this as we are doing in other cases in function apply_error_callback.\r\n> > > The ideal way is that we provide a view/system table for users to\r\n> > > check these errors but that is a matter of another patch. So users\r\n> > > probably need to check Logs to see if the error is from a background\r\n> > > apply worker to decide whether or not to switch streaming mode.\r\n> >\r\n> > Thanks for your comments.\r\n> > I improved it as you suggested. I mask the LSN if it is invalid LSN(0/0).\r\n> > Also, I improved the related pg-doc as following:\r\n> > ```\r\n> > When the streaming mode is <literal>parallel</literal>, the finish LSN of\r\n> > failed transactions may not be logged. In that case, it may be necessary to\r\n> > change the streaming mode to <literal>on</literal> and cause the same\r\n> > conflicts again so the finish LSN of the failed transaction will be written\r\n> > to the server log. 
For the usage of finish LSN, please refer to <link\r\n> > linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\n> > SKIP</command></link>.\r\n> > ```\r\n> > After improving this (mask invalid LSN), I found that this improvement and\r\n> > parallel apply patch do not seem to have a strong correlation. Would it be\r\n> > better to improve and commit in another separate patch?\r\n> >\r\n> \r\n> Is there any other case where we can hit this code path (mask\r\n> invalidLSN) without this patch?\r\n\r\nI realized that there is no normal case that could hit this code path in HEAD.\r\nIf we want to hit this code path, we must set apply_error_callback_arg.rel to a\r\nvalid relation and set the finish LSN to InvalidXLogRecPtr.\r\nBut now in HEAD, we only set apply_error_callback_arg.rel to a valid relation\r\nafter setting the finish LSN to a valid LSN.\r\nSo it seems fine to change this along with the parallel apply patch.\r\n\r\nRegards,\r\nWang wei\r\n\r\n", "msg_date": "Tue, 28 Jun 2022 07:20:43 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Below are some review comments for patches v14-0001, and v14-0002:\n\n========\nv14-0001\n========\n\n1.1 Commit message\n\nFor now, 'parallel' means the streaming will be applied\nvia a apply background worker if available. 'on' means the streaming\ntransaction will be spilled to disk. By the way, we do not change the default\nbehaviour.\n\nSUGGESTION (minor tweaks)\nThe parameter value 'parallel' means the streaming will be applied via\nan apply background worker, if available. The parameter value 'on'\nmeans the streaming transaction will be spilled to disk. 
The default\nvalue is 'off' (same as current behaviour).\n\n======\n\n1.2 doc/src/sgml/protocol.sgml - Protocol constants\n\nPreviously I wrote that since there are protocol changes here,\nshouldn’t there also be some corresponding LOGICALREP_PROTO_XXX\nconstants and special checking added in the worker.c?\n\nBut you said [1 comment #6] you think it is OK because...\n\nIMO, I still disagree with the reply. The fact is that the protocol\n*has* been changed, so IIUC that is precisely the reason for having\nthose protocol constants.\n\ne.g I am guessing you might assign the new one somewhere here:\n--\n server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n options.proto.logical.proto_version =\n server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n server_version >= 140000 ? LOGICALREP_PROTO_STREAM_VERSION_NUM :\n LOGICALREP_PROTO_VERSION_NUM;\n--\n\nAnd then later you would refer to this new protocol version (instead\nof the server version) when calling to the apply_handle_stream_abort\nfunction.\n\n======\n\n1.3 doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>on</literal>, the changes of transaction are\n+ written to temporary files and then applied at once after the\n+ transaction is committed on the publisher.\n+ </para>\n\nPreviously I suggested changing some text but it was rejected [1\ncomment #8] because you said there may be *multiple* files, not just\none. That is fair enough, but there were some other changes to that\nsuggested text unrelated to the number of files.\n\nSUGGESTION #2\nIf set to on, the incoming changes are written to temporary files and\nthen applied only after the transaction is committed on the publisher.\n\n~~~\n\n1.4\n\n+ <para>\n+ If set to <literal>parallel</literal>, incoming changes are directly\n+ applied via one of the apply background workers, if available. 
If no
+ background worker is free to handle streaming transaction then the
+ changes are written to a file and applied after the transaction is
+ committed. Note that if an error happens when applying changes in a
+ background worker, the finish LSN of the remote transaction might
+ not be reported in the server log.
 </para>

Should this also say "written to temporary files" instead of "written
to a file"?

======

1.5 src/backend/commands/subscriptioncmds.c

+ /*
+ * If no parameter given, assume "true" is meant.
+ */

Previously I suggested an update for this comment, but it was rejected
[1 comment #12] saying you wanted consistency with defGetBoolean.

Sure, that is one point of view. Another one is that "two wrongs don't
make a right". IIUC that comment as it currently stands is incorrect
because in this case there *is* a parameter given - it is just the
parameter *value* that is missing. Maybe see what other people think?

======

1.6. src/backend/replication/logical/Makefile

It seems to me like these files were intended to be listed in
alphabetical order, so you should move this new file accordingly.

======

1.7 .../replication/logical/applybgworker.c

+/* queue size of DSM, 16 MB for now. */
+#define DSM_QUEUE_SIZE 160000000

The comment should start uppercase.

~~~

1.8 .../replication/logical/applybgworker.c - apply_bgworker_can_start

Maybe this is just my opinion but it sounds a bit strange to over-use
"we" in all the comments.

1.8.a
+/*
+ * Confirm if we can try to start a new apply background worker.
+ */
+static bool
+apply_bgworker_can_start(TransactionId xid)

SUGGESTION
Check if starting a new apply background worker is allowed.

1.8.b
+ /*
+ * We don't start new background worker if we are not in streaming parallel
+ * mode.
+ */

SUGGESTION
Don't start a new background worker if not in streaming parallel mode.

1.8.c
+ /*
+ * We don't start new background worker if user has set skiplsn as it's
+ * possible that user want to skip the streaming transaction. For
+ * streaming transaction, we need to spill the transaction to disk so that
+ * we can get the last LSN of the transaction to judge whether to skip
+ * before starting to apply the change.
+ */

SUGGESTION
Don't start a new background worker if...

~~~

1.9 .../replication/logical/applybgworker.c - apply_bgworker_start

+/*
+ * Try to start worker inside ApplyWorkersHash for requested xid.
+ */
+ApplyBgworkerState *
+apply_bgworker_start(TransactionId xid)

The comment seems not quite right.

SUGGESTION
Try to start an apply background worker and, if successful, cache it
in ApplyWorkersHash keyed by the specified xid.

~~~

1.10 .../replication/logical/applybgworker.c - apply_bgworker_find

+ /*
+ * Find entry for requested transaction.
+ */
+ entry = hash_search(ApplyWorkersHash, &xid, HASH_FIND, &found);
+ if (found)
+ {
+ entry->wstate->pstate->status = APPLY_BGWORKER_BUSY;
+ return entry->wstate;
+ }
+ else
+ return NULL;
+}

IMO it is an unexpected side-effect for the function called "find" to
be also modifying the thing that it found. IMO this setting of BUSY
should either be done by the caller, or else this function should be
renamed to make it obvious that this is doing more than just
"finding" something.

~~~

1.11 .../replication/logical/applybgworker.c - LogicalApplyBgwLoop

+ /*
+ * Push apply error context callback. Fields will be filled applying
+ * applying a change.
+ */

Typo: "applying applying"

~~~

1.12 .../replication/logical/applybgworker.c - apply_bgworker_setup

+ if (launched)
+ ApplyWorkersList = lappend(ApplyWorkersList, wstate);
+ else
+ {
+ shm_mq_detach(wstate->mq_handle);
+ dsm_detach(wstate->dsm_seg);
+ pfree(wstate);
+
+ wstate->mq_handle = NULL;
+ wstate->dsm_seg = NULL;
+ wstate = NULL;
+ }

I am not sure what those first 2 NULL assignments are trying to
achieve. Nothing AFAICT. In any case, it looks like a bug to
dereference 'wstate' after you already pfree-d it in the line above.

~~~

1.13 .../replication/logical/applybgworker.c - apply_bgworker_check_status

+ * Exit if any relation is not in the READY state and if any worker is handling
+ * the streaming transaction at the same time. Because for streaming
+ * transactions that is being applied in apply background worker, we cannot
+ * decide whether to apply the change for a relation that is not in the READY
+ * state (see should_apply_changes_for_rel) as we won't know remote_final_lsn
+ * by that time.
+ */
+void
+apply_bgworker_check_status(void)

Somehow, I felt that this "Exit if..." comment really belonged at the
appropriate place in the function body, instead of in the function
header.

======

1.14 src/backend/replication/logical/launcher.c - WaitForReplicationWorkerAttach

@@ -151,8 +153,10 @@ get_subscription_list(void)
 *
 * This is only needed for cleaning up the shared memory in case the worker
 * fails to attach.
+ *
+ * Returns false if the attach fails. Otherwise return true.
 */
-static void
+static bool
 WaitForReplicationWorkerAttach(LogicalRepWorker *worker,

Comment should say either "Return" or "returns"; not one of each.

~~~

1.15. src/backend/replication/logical/launcher.c -
WaitForReplicationWorkerAttach

+ return worker->in_use ? true : false;

Same as just:
return worker->in_use;

~~~

1.16. src/backend/replication/logical/launcher.c - logicalrep_worker_launch

+ bool is_subworker = (subworker_dsm != DSM_HANDLE_INVALID);
+
+ /* We don't support table sync in subworker */
+ Assert(!(is_subworker && OidIsValid(relid)));

I'm not sure the comment is good. It sounds like it is something that
might be possible but is just currently "not supported". In fact, I
thought this is really just a sanity check because the combination of
those params is just plain wrong, isn't it? Maybe a better comment is
just:
/* Sanity check */

======

1.17 src/backend/replication/logical/proto.c

+ /*
+ * If the version of the publisher is lower than the version of the
+ * subscriber, it may not support sending these two fields. So these
+ * fields are only taken if they are included.
+ */
+ if (include_abort_lsn)

1.17a
I thought that the comment about "versions of publishers lower than
version of subscribers..." is bogus. Perhaps you have in mind just
versions prior to PG16, but that is not what the comment is saying.
E.g. sometime in the future, the publisher may be PG18 and
the subscriber might be PG25. So that might work fine (even though the
publisher is a lower version), but this comment will be completely
misleading. BTW this is another reason I think the code needs to be
using protocol versions (not server versions). [See other comment #1.2]

1.17b.
Anyway, I felt that any comment describing the meaning of the
'include_abort_lsn' param would be better in the function header
comment, instead of in the function body.

======

1.18 src/backend/replication/logical/worker.c - file header comment

+ * 1) Separate background workers
+ *
+ * Assign a new apply background worker (if available) as soon as the xact's...

Somehow this long comment did not ever mention that this mode is
selected by the user using 'streaming=parallel'. I thought
probably it should say that somewhere here.

~~~

1.19. src/backend/replication/logical/worker.c -

ApplyErrorCallbackArg apply_error_callback_arg =
{
.command = 0,
.rel = NULL,
.remote_attnum = -1,
.remote_xid = InvalidTransactionId,
.finish_lsn = InvalidXLogRecPtr,
.origin_name = NULL,
};

I still thought that the above initialization deserves some sort of
comment, even if you don't want to use the comment text previously
suggested [1 comment #41].

~~~

1.20 src/backend/replication/logical/worker.c -

@@ -251,27 +258,38 @@ static MemoryContext LogicalStreamingContext = NULL;
 WalReceiverConn *LogRepWorkerWalRcvConn = NULL;

 Subscription *MySubscription = NULL;
-static bool MySubscriptionValid = false;
+bool MySubscriptionValid = false;

 bool in_remote_transaction = false;
 static XLogRecPtr remote_final_lsn = InvalidXLogRecPtr;

 /* fields valid only when processing streamed transaction */
-static bool in_streamed_transaction = false;
+bool in_streamed_transaction = false;

The tab alignment here looks wrong. IMO it's not worth trying to align
these at all. I think the tabs are leftover from before when the vars
used to be static.

~~~

1.21 src/backend/replication/logical/worker.c - apply_bgworker_active

+/* Check if we are applying the transaction in apply background worker */
+#define apply_bgworker_active() (in_streamed_transaction &&
stream_apply_worker != NULL)

Sorry [1 comment #42b], I had meant to write "in apply background
worker" -> "in an apply background worker".

~~~

1.22 src/backend/replication/logical/worker.c - skip_xact_finish_lsn

 /*
 * We enable skipping all data modification changes (INSERT, UPDATE, etc.) for
 * the subscription if the remote transaction's finish LSN matches
the subskiplsn.
 * Once we start skipping changes, we don't stop it until we skip all
changes of
 * the transaction even if pg_subscription is updated and
MySubscription->skiplsn
- * gets changed or reset during that. Also, in streaming transaction cases, we
- * don't skip receiving and spooling the changes since we decide whether or not
+ * gets changed or reset during that. Also, in streaming transaction
cases (streaming = on),
+ * we don't skip receiving and spooling the changes since we decide
whether or not
 * to skip applying the changes when starting to apply changes. The
subskiplsn is
 * cleared after successfully skipping the transaction or applying non-empty
 * transaction. The latter prevents the mistakenly specified subskiplsn from
- * being left. Note that we cannot skip the streaming transaction in parallel
+ * mode, because we cannot get the finish LSN before applying the changes.
 */

"in parallel mode, because" -> "in 'streaming = parallel' mode, because"

~~~

1.23 src/backend/replication/logical/worker.c - handle_streamed_transaction

1.23a
 /*
- * Handle streamed transactions.
+ * Handle streamed transactions for both main apply worker and apply background
+ * worker.

SUGGESTION
Handle streamed transactions for both the main apply worker and the
apply background workers.

1.23b
+ * In streaming case (receiving a block of streamed transaction), for
+ * SUBSTREAM_ON mode, we simply redirect it to a file for the proper toplevel
+ * transaction, and for SUBSTREAM_PARALLEL mode, we send the changes to
+ * background apply worker (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE
+ * changes will also be applied in main apply worker).

"background apply worker" -> "apply background workers"

Also, I think you don't need to say "we" everywhere:
"we simply redirect it" -> "simply redirect it"
"we send the changes" -> "send the changes"

1.23c.
+ * But there are two exceptions: If we apply streamed transaction in main apply
+ * worker with parallel mode, it will return false when we address
+ * LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes.

SUGGESTION
Exception: When parallel mode is applying a streamed transaction in the
main apply worker (e.g. when addressing
LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes), then return false.

~~~

1.24 src/backend/replication/logical/worker.c - handle_streamed_transaction

1.24a.
 /* not in streaming mode */
- if (!in_streamed_transaction)
+ if (!(in_streamed_transaction || am_apply_bgworker()))
 return false;

Uppercase comment

1.24b
+ /* define a savepoint for a subxact if needed. */
+ apply_bgworker_subxact_info_add(current_xid);

Uppercase comment

~~~

1.25 src/backend/replication/logical/worker.c - handle_streamed_transaction

+ /*
+ * This is the main apply worker, and there is an apply background
+ * worker. So we apply the changes of this transaction in an apply
+ * background worker. Pass the data to the worker.
+ */

SUGGESTION (to be more consistent with the next comment)
This is the main apply worker, but there is an apply background
worker, so apply the changes of this transaction in that background
worker. Pass the data to the worker.

~~~

1.26 src/backend/replication/logical/worker.c - handle_streamed_transaction

+ /*
+ * This is the main apply worker, but there is no apply background
+ * worker. So we write to temporary files and apply when the final
+ * commit arrives.

SUGGESTION
This is the main apply worker, but there is no apply background
worker, so write to temporary files and apply when the final commit
arrives.

~~~

1.27 src/backend/replication/logical/worker.c - apply_handle_stream_prepare

+ /*
+ * Check if we are processing this transaction in an apply background
+ * worker.
+ */

SUGGESTION:
Check if we are processing this transaction in an apply background
worker and if so, send the changes to that worker.

~~~

1.28 src/backend/replication/logical/worker.c - apply_handle_stream_prepare

+ if (wstate)
+ {
+ apply_bgworker_send_data(wstate, s->len, s->data);
+
+ /*
+ * Wait for apply background worker to finish. This is required to
+ * maintain commit order which avoids failures due to transaction
+ * dependencies and deadlocks.
+ */
+ apply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);
+ apply_bgworker_free(wstate);

I think maybe the comment can be changed slightly, and then it can
move up one line to the top of this code block (above the 3
statements). 
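To be concrete, I mean something like this (just a sketch, assuming the
rest of the block is unchanged; the reworded comment text is given in
the SUGGESTION below):

```c
if (wstate)
{
    /* ... slightly reworded comment, moved up here ... */
    apply_bgworker_send_data(wstate, s->len, s->data);
    apply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);
    apply_bgworker_free(wstate);
}
```
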
I think it will become more readable.

SUGGESTION
After sending the data to the apply background worker, wait for that
worker to finish. This is necessary to maintain commit order which
avoids failures due to transaction dependencies and deadlocks.

~~~

1.29 src/backend/replication/logical/worker.c - apply_handle_stream_start

+ /*
+ * If no worker is available for the first stream start, we start to
+ * serialize all the changes of the transaction.
+ */
+ else
+ {

1.29a.
I felt that this comment should be INSIDE the else { block to be more readable.

1.29b.
The comment can also be simplified a bit.
SUGGESTION:
Since no apply background worker is available for the first stream
start, serialize all the changes of the transaction.

~~~

1.30 src/backend/replication/logical/worker.c - apply_handle_stream_start

+ /* if this is not the first segment, open existing subxact file */
+ if (!first_segment)
+ subxact_info_read(MyLogicalRepWorker->subid, stream_xid);

Uppercase comment

~~~

1.31. src/backend/replication/logical/worker.c - apply_handle_stream_stop

+ if (apply_bgworker_active())
+ {
+ char action = LOGICAL_REP_MSG_STREAM_STOP;

Are all the tabs before the variable needed?

~~~

1.32. src/backend/replication/logical/worker.c - apply_handle_stream_abort

+ /* Check whether the publisher sends abort_lsn and abort_time. */
+ if (am_apply_bgworker())
+ include_abort_lsn = MyParallelState->server_version >= 150000;

Previously I already reported about this [1 comment #50].

I just do not trust this code to do the correct thing. E.g. what if
streaming=parallel but all bgworkers are exhausted? Then IIUC
am_apply_bgworker() will not be true. But then with both PG15 servers
for pub/sub you will WRITE something but then you will not READ it.
Won't the stream IO get out of step and everything will fall
apart?

Perhaps the include_abort_lsn assignment should be unconditionally
set, and I think this should be a protocol version check instead of a
server version check, shouldn't it? (see my earlier comment 1.2)

~~~

1.32 src/backend/replication/logical/worker.c - apply_handle_stream_abort

BTW, I think PG16devel is now stamped in the GitHub HEAD so
perhaps all of your 150000 checks should now be changed to say 160000?

~~~

1.33 src/backend/replication/logical/worker.c - apply_handle_stream_abort

+ /*
+ * We are in main apply worker and the transaction has been serialized
+ * to file.
+ */
+ else
+ serialize_stream_abort(xid, subxid);

I think this will be more readable if written like:

else
{
/* put comment here... */
serialize_stream_abort(xid, subxid);
}

~~~

1.34 src/backend/replication/logical/worker.c - apply_dispatch

-
 /*
 * Logical replication protocol message dispatcher.
 */
-static void
+void
 apply_dispatch(StringInfo s)

Maybe removing the whitespace is not really needed as part of this patch?

======

1.35 src/include/catalog/pg_subscription.h

+/* Disallow streaming in-progress transactions */
+#define SUBSTREAM_OFF 'f'
+
+/*
+ * Streaming transactions are written to a temporary file and applied only
+ * after the transaction is committed on upstream.
+ */
+#define SUBSTREAM_ON 't'
+
+/* Streaming transactions are applied immediately via a background worker */
+#define SUBSTREAM_PARALLEL 'p'
+

1.35a
Should all these "Streaming transactions" be called "Streaming
in-progress transactions"?

1.35b.
Either align the values or don't. Currently, they seem half-aligned.

1.35c.
SUGGESTION (modify the 1st comment to be more consistent with the others)
Streaming in-progress transactions are disallowed.

======

1.36 src/include/replication/worker_internal.h

 extern int logicalrep_sync_worker_count(Oid subid);
+extern int logicalrep_apply_background_worker_count(Oid subid);

Just wondering if this should be called
"logicalrep_apply_bgworker_count(Oid subid);" for consistency with the
other function naming.

========
v14-0002
========

2.1 Commit message

Change all TAP tests using the SUBSCRIPTION "streaming" option, so they
now test both 'on' and 'parallel' values.

"option" -> "parameter"


------
[1] https://www.postgresql.org/message-id/OS3PR01MB6275DCCDF35B3BBD52CA02CC9EB89%40OS3PR01MB6275.jpnprd01.prod.outlook.com

Kind Regards,
Peter Smith.
Fujitsu Australia
", "msg_date": "Fri, 1 Jul 2022 16:43:21 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and parallel apply" }, { "msg_contents": "On Fri, Jul 1, 2022 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:
>
> ======
>
> 1.2 doc/src/sgml/protocol.sgml - Protocol constants
>
> Previously I wrote that since there are protocol changes here,
> shouldn't there also be some corresponding LOGICALREP_PROTO_XXX
> constants and special checking added in the worker.c?
>
> But you said [1 comment #6] you think it is OK because...
>
> IMO, I still disagree with the reply. The fact is that the protocol
> *has* been changed, so IIUC that is precisely the reason for having
> those protocol constants.
>
> e.g I am guessing you might assign the new one somewhere here:
> --
> server_version = walrcv_server_version(LogRepWorkerWalRcvConn);
> options.proto.logical.proto_version =
> server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :
> server_version >= 140000 ? LOGICALREP_PROTO_STREAM_VERSION_NUM :
> LOGICALREP_PROTO_VERSION_NUM;
> --
>
> And then later you would refer to this new protocol version (instead
> of the server version) when calling the apply_handle_stream_abort
> function.
>
> ======
>

One point related to this that occurred to me is how it will behave if
the publisher is of version >=16 whereas the subscriber is of version
<=15? Won't the publisher in that case send the new fields that the
subscriber won't be reading, which may cause some problems?

> ======
>
> 1.5 src/backend/commands/subscriptioncmds.c
>
> + /*
> + * If no parameter given, assume "true" is meant.
> + */
>
> Previously I suggested an update for this comment, but it was rejected
> [1 comment #12] saying you wanted consistency with defGetBoolean.
>
> Sure, that is one point of view. Another one is that "two wrongs don't
> make a right". IIUC that comment as it currently stands is incorrect
> because in this case there *is* a parameter given - it is just the
> parameter *value* that is missing.
>

You have a point, but if we see this function in the vicinity then the
proposed comment also makes sense.

-- 
With Regards,
Amit Kapila.
", "msg_date": "Fri, 1 Jul 2022 15:13:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and parallel apply" }, { "msg_contents": "Below are some review comments for patch v14-0003:

========
v14-0003
========

3.1 Commit message

If any of the following checks are violated, an error will be reported.
1. The unique columns between publisher and subscriber are difference.
2. There is any non-immutable function present in expression in
subscriber's relation. Check from the following 4 items:
 a. The function in triggers;
 b. Column default value expressions and domain constraints;
 c. Constraint expressions.
 d. The foreign keys.

SUGGESTION (rewording to match the docs and the code).

Add some checks before using apply background worker to apply changes.
streaming=parallel mode has two requirements:
1) The unique columns must be the same between publisher and subscriber
2) There cannot be any non-immutable functions in the subscriber-side
replicated table. Look for functions in the following places:
* a. Trigger functions
* b. Column default value expressions and domain constraints
* c. Constraint expressions
* d. Foreign keys

======

3.2 doc/src/sgml/ref/create_subscription.sgml

+ To run in this mode, there are following two requirements. The first
+ is that the unique column should be the same between publisher and
+ subscriber; the second is that there should not be any non-immutable
+ function in subscriber-side replicated table.

SUGGESTION
Parallel mode has two requirements: 1) the unique columns must be the
same between publisher and subscriber; 2) there cannot be any
non-immutable functions in the subscriber-side replicated table.

======

3.3 .../replication/logical/applybgworker.c - apply_bgworker_relation_check

+ * Check if changes on this logical replication relation can be applied by
+ * apply background worker.

SUGGESTION
Check if changes on this relation can be applied by an apply background worker.

~~~

3.4

+ * Although we maintains the commit order by allowing only one process to
+ * commit at a time, our access order to the relation has changed.

SUGGESTION
Although the commit order is maintained by allowing only one process to
commit at a time, the access order to the relation has changed.

~~~

3.5

+ /* Check only we are in apply bgworker. */
+ if (!am_apply_bgworker())
+ return;

SUGGESTION
/* Skip check if not an apply background worker. */

~~~

3.6

+ /*
+ * If it is a partitioned table, we do not check it, we will check its
+ * partition later.
+ */

This comment is lacking useful details:

/* Partition table checks are done later in (?????) */

~~~

3.7

+ if (!rel->sameunique)
+ ereport(ERROR,
+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ errmsg("cannot replicate relation with different unique index"),
+ errhint("Please change the streaming option to 'on' instead of
'parallel'.")));

Maybe the first message should change slightly so it is worded
consistently with the other one.

SUGGESTION
errmsg("cannot replicate relation. Unique indexes must be the same"),

======

3.8 src/backend/replication/logical/proto.c

-#define LOGICALREP_IS_REPLICA_IDENTITY 1
+#define LOGICALREP_IS_REPLICA_IDENTITY 0x0001
+#define LOGICALREP_IS_UNIQUE 0x0002

I think these constants should be named differently to reflect that they
are just attribute flags. They should use similar bitset styles to the
other nearby constants.

SUGGESTION
#define ATTR_IS_REPLICA_IDENTITY (1 << 0)
#define ATTR_IS_UNIQUE (1 << 1)

~~~

3.9 src/backend/replication/logical/proto.c - logicalrep_write_attrs

This big slab of new code to get the BMS looks very similar to
RelationGetIdentityKeyBitmap. So perhaps this code should be
encapsulated in another function like that one (in relcache.c?) and
then just called from logicalrep_write_attrs.

======

3.10 src/backend/replication/logical/relation.c -
logicalrep_relmap_reset_volatility_cb

+/*
+ * Reset the flag volatility of all existing entry in the relation map cache.
+ */
+static void
+logicalrep_relmap_reset_volatility_cb(Datum arg, int cacheid, uint32 hashvalue)

SUGGESTION
Reset the volatility flag of all entries in the relation map cache.

~~~

3.11 src/backend/replication/logical/relation.c -
logicalrep_rel_mark_safe_in_apply_bgworker

+/*
+ * Check if unique index/constraint matches and mark sameunique and volatility
+ * flag.
+ *
+ * Don't throw any error here just mark the relation entry as not sameunique or
+ * FUNCTION_NONIMMUTABLE as we only check these in apply background worker.
+ */
+static void
+logicalrep_rel_mark_safe_in_apply_bgworker(LogicalRepRelMapEntry *entry)

SUGGESTION
Check if unique index/constraint matches and assign 'sameunique' flag.
Check if there are any non-immutable functions and assign the
'volatility' flag. Note: Don't throw any error here - these flags will
be checked in the apply background worker.

~~~

3.12 src/backend/replication/logical/relation.c -
logicalrep_rel_mark_safe_in_apply_bgworker

I did not really understand why you used an enum for one flag
(volatility) but not the other one (sameunique); shouldn't both of
these be tri-values: unknown/yes/no?

For example, there is a quick exit from this function in the
FUNCTION_UNKNOWN case, but there is no equivalent quick exit for
sameunique. It seems inconsistent.

~~~

3.13 src/backend/replication/logical/relation.c -
logicalrep_rel_mark_safe_in_apply_bgworker

+ /*
+ * Check whether there is any non-immutable function in the local table.
+ *
+ * a. The function in triggers;
+ * b. Column default value expressions and domain constraints;
+ * c. Constraint expressions;
+ * d. Foreign keys.
+ */

SUGGESTION
* Check if there is any non-immutable function in the local table.
* Look for functions in the following places:
* a. trigger functions
* b. Column default value expressions and domain constraints
* c. Constraint expressions
* d. Foreign keys

~~~

3.14 src/backend/replication/logical/relation.c -
logicalrep_rel_mark_safe_in_apply_bgworker

There are lots of places setting FUNCTION_NONIMMUTABLE, so I think
this code might be tidier if you just have a single return at the end
of this function and 'goto' it.

e.g.
if (...)
goto function_not_immutable;

...

return;

function_not_immutable:
entry->volatility = FUNCTION_NONIMMUTABLE;

======

3.15 src/backend/replication/logical/worker.c - apply_handle_stream_stop

+ /*
+ * Unlike stream_commit, we don't need to wait here for stream_stop to
+ * finish. Allowing the other transaction to be applied before stream_stop
+ * is finished can only lead to failures if the unique index/constraint is
+ * different between publisher and subscriber. But for such cases, we don't
*/\n if (operation == CMD_UPDATE || operation == CMD_DELETE)\n- {\n- part_entry = logicalrep_partition_open(relmapentry, partrel,\n- attrmap);\n check_relation_updatable(part_entry);\n- }\n\nPerhaps the apply_bgworker_relation_check(part_entry); should be done\nAFTER the CMD_UPDATE/CMD_DELETE check because then it will not change\nthe existing errors for those cases.\n\n======\n\n3.17 src/backend/utils/cache/typcache.c\n\n+/*\n+ * GetDomainConstraints --- get DomainConstraintState list of\nspecified domain type\n+ */\n+List *\n+GetDomainConstraints(Oid type_id)\n\nThis is an unusual-looking function header comment, with the function\nname and the \"---\".\n\n======\n\n3.18 src/include/replication/logicalrelation.h\n\n+/*\n+ * States to determine volatility of the function in expressions in one\n+ * relation.\n+ */\n+typedef enum RelFuncVolatility\n+{\n+ FUNCTION_UNKNOWN = 0, /* initializing */\n+ FUNCTION_IMMUTABLE, /* all functions are immutable function */\n+ FUNCTION_NONIMMUTABLE /* at least one non-immutable function */\n+} RelFuncVolatility;\n+\n\nI think the comments can be improved, and also the values can be more\nself-explanatory. e.g.\n\ntypedef enum RelFuncVolatility\n{\nFUNCTION_UNKNOWN_IMMUATABLE, /* unknown */\nFUNCTION_ALL_MUTABLE, /* all functions are immutable */\nFUNCTION_NOT_ALL_IMMUTABLE /* not all functions are immuatble */\n} RelFuncVolatility;\n\n~~~\n\n3.18\n\nRelFuncVolatility should be added to typedefs.list\n\n~~~\n\n3.19\n\n@@ -31,6 +42,11 @@ typedef struct LogicalRepRelMapEntry\n Relation localrel; /* relcache entry (NULL when closed) */\n AttrMap *attrmap; /* map of local attributes to remote ones */\n bool updatable; /* Can apply updates/deletes? */\n+ bool sameunique; /* Are all unique columns of the local\n+ relation contained by the unique columns in\n+ remote? */\n\n(This is similar to review comment 3.12)\n\nI felt it was inconsistent for this to be a boolean but for the\n'volatility' member to be an enum. 
AFAIK these 2 flags are similar\nkinds – e.g. essentially tri-state flags unknown/true/false so I\nthought they should be treated the same. E.g. both enums?\n\n~~~\n\n3.20\n\n+ RelFuncVolatility volatility; /* all functions in localrel are\n+ immutable function? */\n\nSUGGESTION\n/* Indicator of local relation function volatility */\n\n======\n\n3.21 .../subscription/t/022_twophase_cascade.pl\n\n+ if ($streaming_mode eq 'parallel')\n+ {\n+ $node_C->safe_psql(\n+ 'postgres', \"\n+ ALTER TABLE test_tab ALTER c DROP DEFAULT\");\n+ }\n+\n\nIndentation of the ALTER does not seem right.\n\n======\n\n3.22 .../subscription/t/032_streaming_apply.pl\n\n3.22.a\n+# Setup structure on publisher\n\n\"structure\"?\n\n3.22.b\n+# Setup structure on subscriber\n\n\"structure\"?\n\n~~~\n\n3.23\n\n+# Check that a background worker starts if \"streaming\" option is specified as\n+# \"parallel\". We have to look for the DEBUG1 log messages about that, so\n+# temporarily bump up the log verbosity.\n+$node_subscriber->append_conf('postgresql.conf', \"log_min_messages = debug1\");\n+$node_subscriber->reload;\n+\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series(1,\n5000) s(i)\"\n+);\n+\n+$node_subscriber->wait_for_log(qr/\\[Apply BGW #\\d+\\] started/, 0);\n+$node_subscriber->append_conf('postgresql.conf',\n+ \"log_min_messages = warning\");\n+$node_subscriber->reload;\n\nI didn't really think it was necessary to bump this log level, and to\nverify that the bgworker is started, because this test is anyway going\nto ensure that the ERROR \"cannot replicate relation with different\nunique index\" happens, so that is already implicitly ensuring the\nbgworker was used.\n\n~~~\n\n3.24\n\n+# Then we check the unique index on partition table.\n+$node_subscriber->safe_psql(\n+ 'postgres', qq{\n+CREATE TRIGGER insert_trig\n+BEFORE INSERT ON test_tab_partition\n+FOR EACH ROW EXECUTE PROCEDURE trigger_func();\n+ALTER TABLE test_tab_partition 
ENABLE REPLICA TRIGGER insert_trig;\n+});\n\nLooks like the wrong comment. I think it should say something like\n\"Check the trigger on the partition table.\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 4 Jul 2022 14:11:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Below are some review comments for patch v14-0004:\n\n========\nv14-0004\n========\n\n4.0 General.\n\nThis comment is an after-thought but as I write this mail I am\nwondering if most of this 0004 patch is even necessary at all? Instead\nof introducing a new column and all the baggage that goes with it,\ncan't the same functionality be achieved just by toggling the\nstreaming mode 'substream' value from 'p' (parallel) to 't' (on)\nwhenever an error occurs causing a retry? Anyway, if you do change it\nthis way then most of the following comments can be disregarded.\n\n\n======\n\n4.1 Commit message\n\nPatch needs an explanatory commit message. Currently, there is nothing.\n\n======\n\n4.2 doc/src/sgml/catalogs.sgml\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subretry</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ If true, the subscription will not try to apply streaming transaction\n+ in <literal>parallel</literal> mode. See\n+ <xref linkend=\"sql-createsubscription\"/> for more information.\n+ </para></entry>\n+ </row>\n\nI think it is overkill to mention anything about the\nstreaming=parallel here because IIUC it is nothing to do with this\nfield at all. 
I thought you really only need to say something brief\nlike:\n\nSUGGESTION:\nTrue if the previous apply change failed and a retry was required.\n\n======\n\n4.3 doc/src/sgml/ref/create_subscription.sgml\n\n@@ -244,6 +244,10 @@ CREATE SUBSCRIPTION <replaceable\nclass=\"parameter\">subscription_name</replaceabl\n is that the unique column should be the same between publisher and\n subscriber; the second is that there should not be any non-immutable\n function in subscriber-side replicated table.\n+ When applying a streaming transaction, if either requirement is not\n+ met, the background worker will exit with an error. And when retrying\n+ later, we will try to apply this transaction in <literal>on</literal>\n+ mode.\n </para>\n\nI did not think it is good to say \"we\" in the docs.\n\nSUGGESTION\nWhen applying a streaming transaction, if either requirement is not\nmet, the background worker will exit with an error. Parallel mode is\ndisregarded when retrying; instead the transaction will be applied\nusing <literal>streaming = on</literal>.\n\n======\n\n4.4 .../replication/logical/applybgworker.c\n\n+ /*\n+ * We don't start new background worker if retry was set as it's possible\n+ * that the last time we tried to apply a transaction in background worker\n+ * and the check failed (see function apply_bgworker_relation_check). 
So\n+ * we will try to apply this transaction in apply worker.\n+ */\n\nSUGGESTION (simplified, and remove \"we\")\nDon't use apply background workers for retries, because it is possible\nthat the last time we tried to apply a transaction using an apply\nbackground worker the checks failed (see function\napply_bgworker_relation_check).\n\n~~~\n\n4.5\n\n+ elog(DEBUG1, \"retry to apply an streaming transaction in apply \"\n+ \"background worker\");\n\nIMO the log message is too confusing\n\nSUGGESTION\n\"apply background workers are not used for retries\"\n\n======\n\n4.6 src/backend/replication/logical/worker.c\n\n4.6.a - apply_handle_commit\n\n+ /* Set the flag that we will not retry later. */\n+ set_subscription_retry(false);\n\nBut the comment is wrong, isn't it? Shouldn’t it just say that we are\nnot *currently* retrying? And can’t this just anyway be redundant if\nonly the catalog column has a DEFAULT value of false?\n\n4.6.b - apply_handle_prepare\nDitto\n\n4.6.c - apply_handle_commit_prepared\nDitto\n\n4.6.d - apply_handle_rollback_prepared\nDitto\n\n4.6.e - apply_handle_stream_prepare\nDitto\n\n4.6.f - apply_handle_stream_abort\nDitto\n\n4.6.g - apply_handle_stream_commit\nDitto\n\n~~~\n\n4.7 src/backend/replication/logical/worker.c\n\n4.7.a - start_table_sync\n\n@@ -3894,6 +3917,9 @@ start_table_sync(XLogRecPtr *origin_startpos,\nchar **myslotname)\n }\n PG_CATCH();\n {\n+ /* Set the flag that we will retry later. */\n+ set_subscription_retry(true);\n\nMaybe this should say more like \"Flag that the next apply will be the\nresult of a retry\"\n\n4.7.b - start_apply\nDitto\n\n~~~\n\n4.8 src/backend/replication/logical/worker.c - set_subscription_retry\n\n+\n+/*\n+ * Set subretry of pg_subscription catalog.\n+ *\n+ * If retry is true, subscriber is about to exit with an error. 
Otherwise, it\n+ * means that the changes was applied successfully.\n+ */\n+static void\n+set_subscription_retry(bool retry)\n\n\"changes\" -> \"change\" ?\n\n~~~\n\n4.8 src/backend/replication/logical/worker.c - set_subscription_retry\n\nIsn't this flag only ever used when streaming=parallel? But it does\nnot seem to be checking that anywhere before potentially executing all\nthis code when maybe will never be used.\n\n======\n\n4.9 src/include/catalog/pg_subscription.h\n\n@@ -76,6 +76,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n bool subdisableonerr; /* True if a worker error should cause the\n * subscription to be disabled */\n\n+ bool subretry; /* True if the previous apply change failed. */\n\nI was wondering if you can give this column a DEFAULT value of false,\nbecause then perhaps most of the patch code from worker.c may be able\nto be eliminated.\n\n======\n\n4.10 .../subscription/t/032_streaming_apply.pl\n\nI felt that the test cases all seem to blend together. 
IMO it will be\nmore readable if the main text parts are visually separated\n\ne.g using a comment like:\n# =================================================\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 4 Jul 2022 16:47:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jun 28, 2022 11:22 AM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> I also improved patches as suggested by Peter-san in [1] and [2].\r\n> Thanks for Shi Yu to improve the patches by addressing the comments in [2].\r\n> \r\n> Attach the new patches.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nHere are some comments.\r\n\r\n0001 patch\r\n==============\r\n1.\r\n+\t/* Check If there are free worker slot(s) */\r\n+\tLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n\r\nI think \"Check If\" should be \"Check if\".\r\n\r\n0003 patch\r\n==============\r\n1.\r\nShould we call apply_bgworker_relation_check() in apply_handle_truncate()?\r\n\r\n0004 patch\r\n==============\r\n1.\r\n@@ -3932,6 +3958,9 @@ start_apply(XLogRecPtr origin_startpos)\r\n \t}\r\n \tPG_CATCH();\r\n \t{\r\n+\t\t/* Set the flag that we will retry later. 
*/\r\n+\t\tset_subscription_retry(true);\r\n+\r\n \t\tif (MySubscription->disableonerr)\r\n \t\t\tDisableSubscriptionAndExit();\r\n \t\telse\r\n\r\nI think we need to emit the error and recover from the error state before\r\nsetting the retry flag, like what we do in DisableSubscriptionAndExit().\r\nOtherwise if an error is detected when setting the retry flag, we won't get the\r\nerror message reported by the apply worker.\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Thu, 7 Jul 2022 03:31:57 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 1, 2022 at 14:43 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Below are some review comments for patches v14-0001, and v14-0002:\r\n\r\nThanks for your comments.\r\n\r\n> 1.10 .../replication/logical/applybgworker.c - apply_bgworker_find\r\n> \r\n> + /*\r\n> + * Find entry for requested transaction.\r\n> + */\r\n> + entry = hash_search(ApplyWorkersHash, &xid, HASH_FIND, &found);\r\n> + if (found)\r\n> + {\r\n> + entry->wstate->pstate->status = APPLY_BGWORKER_BUSY;\r\n> + return entry->wstate;\r\n> + }\r\n> + else\r\n> + return NULL;\r\n> +}\r\n> \r\n> IMO it is an unexpected side-effect for the function called \"find\" to\r\n> be also modifying the thing that it found. 
IMO this setting BUSY\r\n> should either be done by the caller, or else this function name should\r\n> be renamed to make it obvious that this is doing more than just\r\n> \"finding\" something.\r\n\r\nSince we set the state to BUSY in the function apply_bgworker_start and the\r\nstate is not modified (set to FINISHED) until the transaction completes, I\r\nthink we do not need to set this state to BUSY again in the function\r\napply_bgworker_find during applying the transaction.\r\nSo I removed it and invoked function Assert.\r\nI also invoked function Assert in function apply_bgworker_start.\r\n\r\n> 1.16. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\r\n> \r\n> + bool is_subworker = (subworker_dsm != DSM_HANDLE_INVALID);\r\n> +\r\n> + /* We don't support table sync in subworker */\r\n> + Assert(!(is_subworker && OidIsValid(relid)));\r\n> \r\n> I'm not sure the comment is good. It sounds like it is something that\r\n> might be possible but is just current \"not supported\". In fact, I\r\n> thought this is really just a sanity check because the combination of\r\n> those params is just plain wrong isn't it? Maybe a better comment is\r\n> just:\r\n> /* Sanity check */\r\n\r\nImproved this comment as following:\r\n```\r\n/* Sanity check : we don't support table sync in subworker. */\r\n```\r\n\r\n> 1.22 src/backend/replication/logical/worker.c - skip_xact_finish_lsn\r\n> \r\n> /*\r\n> * We enable skipping all data modification changes (INSERT, UPDATE, etc.) for\r\n> * the subscription if the remote transaction's finish LSN matches\r\n> the subskiplsn.\r\n> * Once we start skipping changes, we don't stop it until we skip all\r\n> changes of\r\n> * the transaction even if pg_subscription is updated and\r\n> MySubscription->skiplsn\r\n> - * gets changed or reset during that. Also, in streaming transaction cases, we\r\n> - * don't skip receiving and spooling the changes since we decide whether or not\r\n> + * gets changed or reset during that. 
Also, in streaming transaction\r\n> cases (streaming = on),\r\n> + * we don't skip receiving and spooling the changes since we decide\r\n> whether or not\r\n> * to skip applying the changes when starting to apply changes. The\r\n> subskiplsn is\r\n> * cleared after successfully skipping the transaction or applying non-empty\r\n> * transaction. The latter prevents the mistakenly specified subskiplsn from\r\n> - * being left.\r\n> + * being left. Note that we cannot skip the streaming transaction in parallel\r\n> + * mode, because we cannot get the finish LSN before applying the changes.\r\n> */\r\n> \r\n> \"in parallel mode, because\" -> \"in 'streaming = parallel' mode, because\"\r\n\r\nNot sure about this.\r\n\r\n> 1.28 src/backend/replication/logical/worker.c - apply_handle_stream_prepare\r\n> \r\n> + if (wstate)\r\n> + {\r\n> + apply_bgworker_send_data(wstate, s->len, s->data);\r\n> +\r\n> + /*\r\n> + * Wait for apply background worker to finish. This is required to\r\n> + * maintain commit order which avoids failures due to transaction\r\n> + * dependencies and deadlocks.\r\n> + */\r\n> + apply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);\r\n> + apply_bgworker_free(wstate);\r\n> \r\n> I think maybe the comment can be changed slightly, and then it can\r\n> move up one line to the top of this code block (above the 3\r\n> statements). I think it will become more readable.\r\n> \r\n> SUGGESTION\r\n> After sending the data to the apply background worker, wait for that\r\n> worker to finish. This is necessary to maintain commit order which\r\n> avoids failures due to transaction dependencies and deadlocks.\r\n\r\nI think it might be better to add a new comment before invoking function\r\napply_bgworker_send_data. Improve the comments as you suggested.\r\nI improved this point in function apply_handle_stream_prepare,\r\napply_handle_stream_abort and apply_handle_stream_commit. 
What do you think\r\nabout changing it like this:\r\n```\r\n/* Send STREAM PREPARE message to the apply background worker. */\r\napply_bgworker_send_data(wstate, s->len, s->data);\r\n\r\n/*\r\n * After sending the data to the apply background worker, wait for\r\n * that worker to finish. This is necessary to maintain commit\r\n * order which avoids failures due to transaction dependencies and\r\n * deadlocks.\r\n */\r\napply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);\r\n```\r\n\r\n> 1.34 src/backend/replication/logical/worker.c - apply_dispatch\r\n> \r\n> -\r\n> /*\r\n> * Logical replication protocol message dispatcher.\r\n> */\r\n> -static void\r\n> +void\r\n> apply_dispatch(StringInfo s)\r\n> \r\n> Maybe removing the whitespace is not really needed as part of this patch?\r\n\r\nYes, this change is not necessary for this patch.\r\nBut since this change does not involve the modification of comments and actual\r\ncode, it just adjusts the blank line between the function modified by this\r\npatch and the previous function, so I think it is okay in this patch.\r\n\r\n> 2.1 Commit message\r\n> \r\n> Change all TAP tests using the SUBSCRIPTION \"streaming\" option, so they\r\n> now test both 'on' and 'parallel' values.\r\n> \r\n> \"option\" -> \"parameter\"\r\n\r\nSorry I missed this point when I was merging the patches. 
I merged this change\r\nin v15.\r\n\r\nAttach the new patches.\r\nAlso improved the patches as suggested in [1], [2] and [3].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1KgovaRcbSuzzWki1HVso6oLAdZ2aPr1nWxX1x%3DVDBQJg%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAHut%2BPtRNAOwFtBp_TnDWdC7UpcTxPJzQnrm%3DNytN7cVBt5zRQ%40mail.gmail.com\r\n[3] - https://www.postgresql.org/message-id/CAHut%2BPvrw%2BtgCEYGxv%2BnKrqg-zbJdYEXee6o4irPAsYoXcuUcw%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 7 Jul 2022 03:44:04 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 1, 2022 at 17:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> On Fri, Jul 1, 2022 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > ======\r\n> >\r\n> > 1.2 doc/src/sgml/protocol.sgml - Protocol constants\r\n> >\r\n> > Previously I wrote that since there are protocol changes here,\r\n> > shouldn’t there also be some corresponding LOGICALREP_PROTO_XXX\r\n> > constants and special checking added in the worker.c?\r\n> >\r\n> > But you said [1 comment #6] you think it is OK because...\r\n> >\r\n> > IMO, I still disagree with the reply. 
The fact is that the protocol\r\n> > *has* been changed, so IIUC that is precisely the reason for having\r\n> > those protocol constants.\r\n> >\r\n> > e.g I am guessing you might assign the new one somewhere here:\r\n> > --\r\n> > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\r\n> > options.proto.logical.proto_version =\r\n> > server_version >= 150000 ?\r\n> LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\r\n> > server_version >= 140000 ?\r\n> LOGICALREP_PROTO_STREAM_VERSION_NUM :\r\n> > LOGICALREP_PROTO_VERSION_NUM;\r\n> > --\r\n> >\r\n> > And then later you would refer to this new protocol version (instead\r\n> > of the server version) when calling to the apply_handle_stream_abort\r\n> > function.\r\n> >\r\n> > ======\r\n> >\r\n> \r\n> One point related to this that occurred to me is how it will behave if\r\n> the publisher is of version >=16 whereas the subscriber is of versions\r\n> <=15? Won't in that case publisher sends the new fields but\r\n> subscribers won't be reading those which may cause some problems.\r\n\r\nMakes sense. Fixed this point.\r\nAs Peter-san suggested, I added a new protocol macro\r\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM.\r\nThis new macro marks the version that supports apply background worker (it\r\nmeans we will read abort_lsn and abort_time). And the publisher sends abort_lsn\r\nand abort_time fields only if subscriber will read them. 
(see function\r\nlogicalrep_write_stream_abort)\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755C6C9A75EB09F7218B589E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 7 Jul 2022 03:45:31 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jul 4, 2022 at 12:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Below are some review comments for patch v14-0003:\r\n\r\nThanks for your comments.\r\n\r\n> 3.1 Commit message\r\n> \r\n> If any of the following checks are violated, an error will be reported.\r\n> 1. The unique columns between publisher and subscriber are difference.\r\n> 2. There is any non-immutable function present in expression in\r\n> subscriber's relation. Check from the following 4 items:\r\n> a. The function in triggers;\r\n> b. Column default value expressions and domain constraints;\r\n> c. Constraint expressions.\r\n> d. The foreign keys.\r\n> \r\n> SUGGESTION (rewording to match the docs and the code).\r\n> \r\n> Add some checks before using apply background worker to apply changes.\r\n> streaming=parallel mode has two requirements:\r\n> 1) The unique columns must be the same between publisher and subscriber\r\n> 2) There cannot be any non-immutable functions in the subscriber-side\r\n> replicated table. Look for functions in the following places:\r\n> * a. Trigger functions\r\n> * b. Column default value expressions and domain constraints\r\n> * c. Constraint expressions\r\n> * d. Foreign keys\r\n> \r\n> ======\r\n> \r\n> 3.2 doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> + To run in this mode, there are following two requirements. 
The first\r\n> + is that the unique column should be the same between publisher and\r\n> + subscriber; the second is that there should not be any non-immutable\r\n> + function in subscriber-side replicated table.\r\n> \r\n> SUGGESTION\r\n> Parallel mode has two requirements: 1) the unique columns must be the\r\n> same between publisher and subscriber; 2) there cannot be any\r\n> non-immutable functions in the subscriber-side replicated table.\r\n\r\nI did not write clearly enough before. So I made some slight modifications to\r\nthe first requirement you suggested. Like this:\r\n```\r\n1) The unique column in the relation on the subscriber-side should also be the\r\nunique column on the publisher-side;\r\n```\r\n\r\n> 3.9 src/backend/replication/logical/proto.c - logicalrep_write_attrs\r\n> \r\n> This big slab of new code to get the BMS looks very similar to\r\n> RelationGetIdentityKeyBitmap. So perhaps this code should be\r\n> encapsulated in another function like that one (in relcache.c?) and\r\n> then just called from logicalrep_write_attrs\r\n\r\nI think the file relcache.c should contain cache-build operations, and the code\r\nI added doesn't have this operation. So I didn't change.\r\n\r\n> 3.12 src/backend/replication/logical/relation.c -\r\n> logicalrep_rel_mark_safe_in_apply_bgworker\r\n> \r\n> I did not really understand why you used an enum for one flag\r\n> (volatility) but not the other one (sameunique); shouldn’t both of\r\n> these be tri-values: unknown/yes/no?\r\n> \r\n> For E.g. there is a quick exit from this function if the\r\n> FUNCTION_UNKNOWN, but there is no equivalent quick exit for the\r\n> sameunique? It seems inconsistent.\r\n\r\nAfter rethinking patch 0003, I think we only need one flag. So I merged flags\r\n'volatility' and 'sameunique' into a new flag 'parallel'. It is a tri-state\r\nflag. 
And I also made some related modifications.\r\n\r\n> 3.14 src/backend/replication/logical/relation.c -\r\n> logicalrep_rel_mark_safe_in_apply_bgworker\r\n> \r\n> There are lots of places setting FUNCTION_NONIMMUTABLE, so I think\r\n> this code might be tidier if you just have a single return at the end\r\n> of this function and 'goto' it.\r\n> \r\n> e.g.\r\n> if (...)\r\n> goto function_not_immutable;\r\n> \r\n> ...\r\n> \r\n> return;\r\n> \r\n> function_not_immutable:\r\n> entry->volatility = FUNCTION_NONIMMUTABLE;\r\n\r\nPersonally, I do not like to use the `goto` syntax if it is not necessary,\r\nbecause the `goto` syntax will forcibly change the flow of code execution.\r\n\r\n> 3.17 src/backend/utils/cache/typcache.c\r\n> \r\n> +/*\r\n> + * GetDomainConstraints --- get DomainConstraintState list of\r\n> specified domain type\r\n> + */\r\n> +List *\r\n> +GetDomainConstraints(Oid type_id)\r\n> \r\n> This is an unusual-looking function header comment, with the function\r\n> name and the \"---\".\r\n\r\nNot sure about this. Please refer to function lookup_rowtype_tupdesc_internal.\r\n\r\n> 3.19\r\n> \r\n> @@ -31,6 +42,11 @@ typedef struct LogicalRepRelMapEntry\r\n> Relation localrel; /* relcache entry (NULL when closed) */\r\n> AttrMap *attrmap; /* map of local attributes to remote ones */\r\n> bool updatable; /* Can apply updates/deletes? */\r\n> + bool sameunique; /* Are all unique columns of the local\r\n> + relation contained by the unique columns in\r\n> + remote? */\r\n> \r\n> (This is similar to review comment 3.12)\r\n> \r\n> I felt it was inconsistent for this to be a boolean but for the\r\n> 'volatility' member to be an enum. AFAIK these 2 flags are similar\r\n> kinds – e.g. essentially tri-state flags unknown/true/false so I\r\n> thought they should be treated the same. E.g. 
both enums?\r\n\r\nPlease refer to the reply to #3.12.\r\n\r\n> 3.22 .../subscription/t/032_streaming_apply.pl\r\n> \r\n> 3.22.a\r\n> +# Setup structure on publisher\r\n> \r\n> \"structure\"?\r\n> \r\n> 3.22.b\r\n> +# Setup structure on subscriber\r\n> \r\n> \"structure\"?\r\n\r\nJust refer to other subscription test.\r\n\r\n> 3.23\r\n> \r\n> +# Check that a background worker starts if \"streaming\" option is specified as\r\n> +# \"parallel\". We have to look for the DEBUG1 log messages about that, so\r\n> +# temporarily bump up the log verbosity.\r\n> +$node_subscriber->append_conf('postgresql.conf', \"log_min_messages =\r\n> debug1\");\r\n> +$node_subscriber->reload;\r\n> +\r\n> +$node_publisher->safe_psql('postgres',\r\n> + \"INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series(1,\r\n> 5000) s(i)\"\r\n> +);\r\n> +\r\n> +$node_subscriber->wait_for_log(qr/\\[Apply BGW #\\d+\\] started/, 0);\r\n> +$node_subscriber->append_conf('postgresql.conf',\r\n> + \"log_min_messages = warning\");\r\n> +$node_subscriber->reload;\r\n> \r\n> I didn't really think it was necessary to bump this log level, and to\r\n> verify that the bgworker is started, because this test is anyway going\r\n> to ensure that the ERROR \"cannot replicate relation with different\r\n> unique index\" happens, so that is already implicitly ensuring the\r\n> bgworker was used.\r\n\r\nSince it takes almost no time, I think a more detailed confirmation is fine.\r\n\r\nThe rest of the comments are improved as suggested.\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755C6C9A75EB09F7218B589E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 7 Jul 2022 03:46:25 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jul 4, 2022 
at 14:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Below are some review comments for patch v14-0004:\r\n\r\nThanks for your comments.\r\n\r\n> 4.0 General.\r\n> \r\n> This comment is an after-thought but as I write this mail I am\r\n> wondering if most of this 0004 patch is even necessary at all? Instead\r\n> of introducing a new column and all the baggage that goes with it,\r\n> can't the same functionality be achieved just by toggling the\r\n> streaming mode 'substream' value from 'p' (parallel) to 't' (on)\r\n> whenever an error occurs causing a retry? Anyway, if you do change it\r\n> this way then most of the following comments can be disregarded.\r\n\r\nIn the approach that you mentioned, after retrying, the transaction will always\r\nbe applied in \"on\" mode. This will change the user's setting. \r\nThat is to say, in most cases, user needs to manually reset option \"streaming\"\r\nto \"parallel\". I think it might not be very friendly.\r\n\r\n> 4.6 src/backend/replication/logical/worker.c\r\n> \r\n> 4.6.a - apply_handle_commit\r\n> \r\n> + /* Set the flag that we will not retry later. */\r\n> + set_subscription_retry(false);\r\n> \r\n> But the comment is wrong, isn't it? Shouldn’t it just say that we are\r\n> not *currently* retrying? And can’t this just anyway be redundant if\r\n> only the catalog column has a DEFAULT value of false?\r\n> \r\n> 4.6.b - apply_handle_prepare\r\n> Ditto\r\n> \r\n> 4.6.c - apply_handle_commit_prepared\r\n> Ditto\r\n> \r\n> 4.6.d - apply_handle_rollback_prepared\r\n> Ditto\r\n> \r\n> 4.6.e - apply_handle_stream_prepare\r\n> Ditto\r\n> \r\n> 4.6.f - apply_handle_stream_abort\r\n> Ditto\r\n> \r\n> 4.6.g - apply_handle_stream_commit\r\n> Ditto\r\n\r\nSet default value of the field \"subretry\" to \"false\" as you suggested.\r\nWe need to reset this field to false after retrying to apply a streaming\r\ntransaction in main apply worker (\"on\" mode).\r\nI think this comment is not clear. 
So I change it to\r\n```\r\nReset the retry flag.\r\n```\r\n\r\n> 4.7 src/backend/replication/logical/worker.c\r\n> \r\n> 4.7.a - start_table_sync\r\n> \r\n> @@ -3894,6 +3917,9 @@ start_table_sync(XLogRecPtr *origin_startpos,\r\n> char **myslotname)\r\n> }\r\n> PG_CATCH();\r\n> {\r\n> + /* Set the flag that we will retry later. */\r\n> + set_subscription_retry(true);\r\n> \r\n> Maybe this should say more like \"Flag that the next apply will be the\r\n> result of a retry\"\r\n> \r\n> 4.7.b - start_apply\r\n> Ditto\r\n\r\nSimilar to the reply in #4.6, I changed it to `Set the retry flag.`.\r\n\r\n> 4.8 src/backend/replication/logical/worker.c - set_subscription_retry\r\n> \r\n> +\r\n> +/*\r\n> + * Set subretry of pg_subscription catalog.\r\n> + *\r\n> + * If retry is true, subscriber is about to exit with an error. Otherwise, it\r\n> + * means that the changes was applied successfully.\r\n> + */\r\n> +static void\r\n> +set_subscription_retry(bool retry)\r\n> \r\n> \"changes\" -> \"change\" ?\r\n\r\nI did not make it clear before.\r\nI modified \"changes\" to \"transaction\".\r\n\r\n> 4.8 src/backend/replication/logical/worker.c - set_subscription_retry\r\n> \r\n> Isn't this flag only every used when streaming=parallel? But it does\r\n> not seem ot be checking that anywhere before potentiall executing all\r\n> this code when maybe will never be used.\r\n\r\nYes, currently this field is only checked by apply background worker.\r\n\r\n> 4.9 src/include/catalog/pg_subscription.h\r\n> \r\n> @@ -76,6 +76,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\r\n> BKI_SHARED_RELATION BKI_ROW\r\n> bool subdisableonerr; /* True if a worker error should cause the\r\n> * subscription to be disabled */\r\n> \r\n> + bool subretry; /* True if the previous apply change failed. 
*/\r\n> \r\n> I was wondering if you can give this column a DEFAULT value of false,\r\n> because then perhaps most of the patch code from worker.c may be able\r\n> to be eliminated.\r\n\r\nPlease refer to the reply to #4.6.\r\n\r\nThe rest of the comments are improved as suggested.\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755C6C9A75EB09F7218B589E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 7 Jul 2022 03:47:39 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 7, 2022 at 11:44 AM I wrote:\r\n> Attach the new patches.\r\n\r\nI found a failure on CFbot [1], which after investigation I think is due to my\r\nprevious modification (see response to #1.10 in [2]).\r\n\r\nFor a streaming transaction, if we failed in the first chunk of streamed\r\nchanges for this transaction in the apply background worker, we will set the\r\nstatus of this apply background worker to APPLY_BGWORKER_EXIT. \r\nAnd at the same time, main apply worker obtains apply background worker\r\nin the function apply_bgworker_find when processing the second chunk of\r\nstreamed changes for this transaction, the status of apply background worker\r\nis APPLY_BGWORKER_EXIT. So the following assertion will fail:\r\n```\r\nAssert(status == APPLY_BGWORKER_BUSY);\r\n```\r\n\r\nTo fix this, before invoking function assert, I try to detect the failure of\r\napply background worker. 
If the status is APPLY_BGWORKER_EXIT, then exit with\r\nan error.\r\n\r\nI also made some other small improvements.\r\n\r\nAttach the new patches.\r\n\r\n[1] - https://cirrus-ci.com/task/6383178511286272?logs=test_world#L2636\r\n[2] - https://www.postgresql.org/message-id/OS3PR01MB62755C6C9A75EB09F7218B589E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 7 Jul 2022 10:20:45 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Below are my review comments for the v16* patch set:\n\n========\nv16-0001\n========\n\n1.0 <general>\n\nThere are places (comments, docs, errmsgs, etc) in the patch referring\nto \"parallel mode\". I think every one of those references should be\nfound and renamed to \"parallel streaming mode\" or \"streaming=parallel\"\nor at the very least make sure that \"streaming\" is in the same\nsentence. IMO it's too vague just saying \"parallel\" without also\nsaying the context is for the \"streaming\" parameter.\n\nI have commented on some of those examples below, but please search\neverything anyway (including the docs) to catch the ones I haven't\nexplicitly mentioned.\n\n======\n\n1.1 src/backend/commands/subscriptioncmds.c\n\n+defGetStreamingMode(DefElem *def)\n+{\n+ /*\n+ * If no value given, assume \"true\" is meant.\n+ */\n\nPlease fix this comment to be identical to this pushed patch [1]\n\n======\n\n1.2 .../replication/logical/applybgworker.c - apply_bgworker_start\n\n+ if (list_length(ApplyWorkersFreeList) > 0)\n+ {\n+ wstate = (ApplyBgworkerState *) llast(ApplyWorkersFreeList);\n+ ApplyWorkersFreeList = list_delete_last(ApplyWorkersFreeList);\n+ Assert(wstate->pstate->status == APPLY_BGWORKER_FINISHED);\n+ }\n\nThe Assert that the entries in the free-list are FINISHED seems like\nunnecessary checking. 
IIUC, code is already doing the Assert that\nentries are FINISHED before allowing them into the free-list in the\nfirst place.\n\n~~~\n\n1.3 .../replication/logical/applybgworker.c - apply_bgworker_find\n\n+ if (found)\n+ {\n+ char status = entry->wstate->pstate->status;\n+\n+ /* If any workers (or the postmaster) have died, we have failed. */\n+ if (status == APPLY_BGWORKER_EXIT)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"background worker %u failed to apply transaction %u\",\n+ entry->wstate->pstate->n,\n+ entry->wstate->pstate->stream_xid)));\n+\n+ Assert(status == APPLY_BGWORKER_BUSY);\n+\n+ return entry->wstate;\n+ }\n\nWhy not remove that Assert but change the condition to be:\n\nif (status != APPLY_BGWORKER_BUSY)\nereport(...)\n\n======\n\n1.4 src/backend/replication/logical/proto.c - logicalrep_write_stream_abort\n\n@@ -1163,31 +1163,56 @@ logicalrep_read_stream_commit(StringInfo in,\nLogicalRepCommitData *commit_data)\n /*\n * Write STREAM ABORT to the output stream. Note that xid and subxid will be\n * same for the top-level transaction abort.\n+ *\n+ * If write_abort_lsn is true, send the abort_lsn and abort_time fields.\n+ * Otherwise not.\n */\n\n\"Otherwise not.\" -> \", otherwise don't.\"\n\n~~~\n\n1.5 src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\n\n+ *\n+ * If read_abort_lsn is true, try to read the abort_lsn and abort_time fields.\n+ * Otherwise not.\n */\n void\n-logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,\n- TransactionId *subxid)\n+logicalrep_read_stream_abort(StringInfo in,\n+ LogicalRepStreamAbortData *abort_data,\n+ bool read_abort_lsn)\n\n\"Otherwise not.\" -> \", otherwise don't.\"\n\n======\n\n1.6 src/backend/replication/logical/worker.c - file comment\n\n+ * If streaming = parallel, We assign a new apply background worker (if\n+ * available) as soon as the xact's first stream is received. The main apply\n\n\"We\" -> \"we\" ... 
or maybe better just remove it completely.\n\n~~~\n\n1.7 src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n+ /*\n+ * After sending the data to the apply background worker, wait for\n+ * that worker to finish. This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ apply_bgworker_send_data(wstate, s->len, s->data);\n+ apply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);\n+ apply_bgworker_free(wstate);\n\nThe comment should be changed how you had suggested [2], so that it\nwill be formatted the same way as a couple of other similar comments.\n\n~~~\n\n1.8 src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /* Check whether the publisher sends abort_lsn and abort_time. */\n+ if (am_apply_bgworker())\n+ read_abort_lsn = MyParallelState->server_version >= 160000;\n\nThis is handling decisions about read/write of the protocol bytes. I\nthink it will be better to be checking the server *protocol*\nversion (not the server postgres version) to make this decision – e.g.\nthis code should be using the new macro you introduced so it will end\nup looking much like how the pgoutput_stream_abort code is doing it.\n\n~~~\n\n1.9 src/backend/replication/logical/worker.c - store_flush_position\n\n@@ -2636,6 +2999,10 @@ store_flush_position(XLogRecPtr remote_lsn)\n {\n FlushPosition *flushpos;\n\n+ /* We only need to collect the LSN in main apply worker */\n+ if (am_apply_bgworker())\n+ return;\n+\n\nSUGGESTION\n/* Skip if not the main apply worker */\n\n======\n\n1.10 src/backend/replication/pgoutput/pgoutput.c\n\n@@ -1820,6 +1820,8 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n XLogRecPtr abort_lsn)\n {\n ReorderBufferTXN *toptxn;\n+ bool write_abort_lsn = false;\n+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n\n /*\n * The abort should happen outside streaming block, even for streamed\n@@ -1832,8 +1834,13 @@ 
pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n\n Assert(rbtxn_is_streamed(toptxn));\n\n+ /* We only send abort_lsn and abort_time if the subscriber needs them. */\n+ if (data->protocol_version >= LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\n+ write_abort_lsn = true;\n+\n\nIMO it's simpler to remove the declaration default assignment, and\ninstead this code can be written as:\n\nwrite_abort_lsn = data->protocol_version >=\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM;\n\n======\n\n1.11 src/include/replication/logicalproto.h\n\n+ *\n+ * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n+ * with support for streaming large transactions in apply background worker.\n+ * Introduced in PG16.\n\n"in apply background worker" -> "using apply background workers"\n\n~~~\n\n1.12\n\n+extern void logicalrep_read_stream_abort(StringInfo in,\n+ LogicalRepStreamAbortData *abort_data,\n+ bool include_abort_lsn);\n\nI think the "include_abort_lsn" is now renamed to "read_abort_lsn".\n\n\n========\nv16-0002\n========\n\nNo comments.\n\n\n========\nv16-0003\n========\n\n3.0 <general>\n\nSame comment about "parallel mode" as in comment #1.0\n\n======\n\n3.1 doc/src/sgml/ref/create_subscription.sgml\n\n+ the publisher-side; 2) there cannot be any non-immutable functions\n+ in the subscriber-side replicated table.\n\nThe functions are not table data so maybe it's better to say\n"functions in the ..." -> "functions used by the ...". 
If you change\nthis then there are equivalent comments and commit messages that\nshould change to match it.\n\n======\n\n3.2 .../replication/logical/applybgworker.c - apply_bgworker_relation_check\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot replicate target relation \\\"%s.%s\\\" in parallel \"\n+ \"mode\", rel->remoterel.nspname, rel->remoterel.relname),\n+ errdetail(\"The unique column on subscriber is not the unique \"\n+ \"column on publisher or there is at least one \"\n+ \"non-immutable function.\"),\n+ errhint(\"Please change the streaming option to 'on' instead of\n'parallel'.\")));\n\n3.2a\nSUGGESTED errmsg\n\"cannot replicate target relation \\\"%s.%s\\\" using subscription\nparameter streaming=parallel\"\n\n3.2b\nSUGGESTED errhint\n\"Please change to use subscription parameter streaming=on\"\n\n3.3\nThe errcode seems the wrong one. Perhaps it should be\nERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE.\n\n======\n\n3.4 src/backend/replication/logical/proto.c - logicalrep_write_attrs\n\nIn [3] you wrote:\nI think the file relcache.c should contain cache-build operations, and the code\nI added doesn't have this operation. So I didn't change.\n\nBut I only gave relcache.c as an example. 
It can also be a new static\nfunction in this same file, but anyway I still think this big slab of\ncode might be better if not done inline in logicalrep_write_attrs.\n\n~~~\n\n3.5 src/backend/replication/logical/proto.c - logicalrep_read_attrs\n\n@@ -1012,11 +1062,14 @@ logicalrep_read_attrs(StringInfo in,\nLogicalRepRelation *rel)\n {\n uint8 flags;\n\n- /* Check for replica identity column */\n+ /* Check for replica identity and unique column */\n flags = pq_getmsgbyte(in);\n- if (flags & LOGICALREP_IS_REPLICA_IDENTITY)\n+ if (flags & ATTR_IS_REPLICA_IDENTITY)\n attkeys = bms_add_member(attkeys, i);\n\n+ if (flags & ATTR_IS_UNIQUE)\n+ attunique = bms_add_member(attunique, i);\n\nThe code comment really applies to all 3 statements so maybe better\nnot to have the blank line here.\n\n======\n\n3.6 src/backend/replication/logical/relation.c - logicalrep_rel_mark_parallel\n\n3.6.a\n+ /* Fast path if we marked 'parallel' flag. */\n+ if (entry->parallel != PARALLEL_APPLY_UNKNOWN)\n+ return;\n\nSUGGESTED\nFast path if 'parallel' flag is already known.\n\n~\n\n3.6.b\n+ /* Initialize the flag. */\n+ entry->parallel = PARALLEL_APPLY_SAFE;\n\nI think it makes more sense if assigning SAFE is the very *last* thing\nthis function does – not the first thing.\n\n~\n\n3.6.c\n+ /*\n+ * First, we check if the unique column in the relation on the\n+ * subscriber-side is also the unique column on the publisher-side.\n+ */\n\n\"First, we check...\" -> \"First, check...\"\n\n~\n\n3.6.d\n+ /*\n+ * Then, We check if there is any non-immutable function in the local\n+ * table. 
Look for functions in the following places:\n\n\n"Then, We check..." -> "Then, check"\n\n~~~\n\n3.7 src/backend/replication/logical/relation.c - logicalrep_rel_mark_parallel\n\nFrom [3] you wrote:\nPersonally, I do not like to use the `goto` syntax if it is not necessary,\nbecause the `goto` syntax will forcibly change the flow of code execution.\n\nYes, but OTOH readability is a major consideration too, and in this\nfunction by simply saying goto parallel_unsafe; you can have 3 returns\ninstead of 7 returns, and it will take ~10 lines less code to do the\nsame functionality.\n\n======\n\n3.8 src/include/replication/logicalrelation.h\n\n+/*\n+ * States to determine if changes on one relation can be applied by an apply\n+ * background worker.\n+ */\n+typedef enum RelParallel\n+{\n+ PARALLEL_APPLY_UNKNOWN = 0, /* unknown */\n+ PARALLEL_APPLY_SAFE, /* Can apply changes in an apply background\n+ worker */\n+ PARALLEL_APPLY_UNSAFE /* Can not apply changes in an apply background\n+ worker */\n+} RelParallel;\n\n3.8a\n"can be applied by an apply background worker." -> "can be applied\nusing an apply background worker."\n\n~\n\n3.8b\nThe enum is described, and IMO the enum values are self-explanatory\nnow. So commenting them individually is not adding any useful\ninformation. I think those comments can be removed.\n\n~\n\n3.8c\nThe RelParallel name does not have much meaning to it - there is\nnothing really about that name that says it is related to validation\nstates. Maybe "ParallelSafety" or "ParallelApplySafety" or something\nsimilar?\n\n~~~\n\n3.9 src/include/replication/logicalrelation.h\n\n+ RelParallel parallel; /* Can apply changes in an apply\n+ background worker? 
*/\n\nThis comment is like #3.8c.\n\nIMO the member name 'parallel' doesn't really have enough meaning.\nWhat about something like 'parallel_apply', or 'parallel_ok', or\n'parallel_safe', or something similar.\n\n======\n\n3.10 .../subscription/t/032_streaming_apply.pl\n\nIn [3] you wrote:\nSince it takes almost no time, I think a more detailed confirmation is fine.\n\nYes, but I think a confirmation is a confirmation regardless - the\ntest will either pass/fail and this additional code won't change the\nresult. e.g. Maybe the extra code does not hurt much, but AFAIK having\na \"detailed confirmation\" doesn't really achieve anything useful\neither. I previously suggested to removed it simply because it means\nless test code to maintain.\n\n========\nv16-0004\n========\n\n4.0 <general>\n\nSame comment about \"parallel mode\" as in comment #1.0\n\n======\n\n4.1 Commit message\n\nIf the user sets the subscription_parameter \"streaming\" to \"parallel\", when\napplying a streaming transaction, we will try to apply this transaction in\napply background worker. However, when the changes in this transaction cannot\nbe applied in apply background worker, the background worker will exit with an\nerror. In this case, we can retry applying this streaming transaction in \"on\"\nmode. In this way, we may avoid blocking logical replication here.\n\nSo we introduce field \"subretry\" in catalog \"pg_subscription\". When the\nsubscriber exit with an error, we will try to set this flag to true, and when\nthe transaction is applied successfully, we will try to set this flag to false.\n\nThen when we try to apply a streaming transaction in apply background worker,\nwe can see if this transaction has failed before based on the \"subretry\" field.\n\n~\n\nI reworded above to remove most of the \"we\" this and \"we\" that...\n\nSUGGESTION\nWhen the subscription parameter is set streaming=parallel, the logic\ntries to apply the streaming transaction using an apply background\nworker. 
If this fails the background worker exits with an error.\n\nIn this case, retry applying the streaming transaction using the\nnormal streaming=on mode. This is done to avoid getting caught in a\nloop of the same retry errors.\n\nA new flag field \"subretry\" has been introduced to catalog\n\"pg_subscription\". If the subscriber exits with an error, this flag\nwill be set true, and whenever the transaction is applied\nsuccessfully, this flag is reset false. Now, when deciding how to\napply a streaming transaction, the logic can know if this transaction\nhas previously failed or not (by checking the \"subretry\" field).\n\n======\n\n4.2 doc/src/sgml/catalogs.sgml\n\n+ <para>\n+ True if the previous apply change failed and a retry was required.\n+ </para></entry>\n\n\"was\" required? \"will be required\"? It is a bit vague what tense to use...\n\nSUGGESTION 1\nTrue if the previous apply change failed, necessitating a retry\n\nSUGGESTION 2\nTrue if the previous apply change failed\n\n======\n\n4.3 doc/src/sgml/ref/create_subscription.sgml\n\n+ <literal>parallel</literal> mode is disregarded when retrying;\n+ instead the transaction will be applied using <literal>on</literal>\n+ mode.\n\n\"on mode\" etc sounds strange.\n\nSUGGESTION\nDuring the retry the streaming=parallel mode is ignored. The retried\ntransaction will be applied using streaming=on mode.\n\n======\n\n4.4 src/backend/replication/logical/worker.c - set_subscription_retry\n\n+ if (MySubscription->retry == retry ||\n+ am_apply_bgworker())\n+ return;\n+\n\nSomehow I feel that this quick exit condition is not quite what it\nseems. IIUC the purpose of this is really to avoid doing the tuple\nupdates if it is not necessary to do them. So if retry was already set\ntrue then there is no need to update tuple to true again. So if retry\nwas already set false then there is no need to update the tuple to\nfalse. 
But I just don't see how the (hypothetical) code below can work\nas expected, because where is the code updating the value of\nMySubscription->retry ???\n\nset_subscription_retry(true);\nset_subscription_retry(true);\n\nI think at least there needs to be some detailed comments explaining\nwhat this quick exit is really doing because my guess is that\ncurrently it is not quite working as expected.\n\n~~~\n\n4.5\n\n+ /* reset subretry */\n\nUppercase comment\n\n\n------\n[1] https://github.com/postgres/postgres/commit/8445f5a21d40b969673ca03918c74b4fbc882bf4\n[2] https://www.postgresql.org/message-id/OS3PR01MB62755C6C9A75EB09F7218B589E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n[3] https://www.postgresql.org/message-id/OS3PR01MB6275120502A4730AB9932FCA9E839%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 13 Jul 2022 14:33:18 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 7, 2022 at 11:32 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\r\n> Thanks for updating the patch.\r\n> \r\n> Here are some comments.\r\n\r\nThanks for your comments.\r\n\r\n> 0001 patch\r\n> ==============\r\n> 1.\r\n> +\t/* Check If there are free worker slot(s) */\r\n> +\tLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> \r\n> I think \"Check If\" should be \"Check if\".\r\n\r\nFixed.\r\n\r\n> 0003 patch\r\n> ==============\r\n> 1.\r\n> Should we call apply_bgworker_relation_check() in apply_handle_truncate()?\r\n\r\nBecause TRUNCATE blocks all other operations on the table, I think that when\r\ntwo transactions on the publisher-side operate on the same table, at least one\r\nof them will be blocked. 
So I think for this case the blocking will happen on\r\nthe publisher-side.\r\n\r\n> 0004 patch\r\n> ==============\r\n> 1.\r\n> @@ -3932,6 +3958,9 @@ start_apply(XLogRecPtr origin_startpos)\r\n> \t}\r\n> \tPG_CATCH();\r\n> \t{\r\n> +\t\t/* Set the flag that we will retry later. */\r\n> +\t\tset_subscription_retry(true);\r\n> +\r\n> \t\tif (MySubscription->disableonerr)\r\n> \t\t\tDisableSubscriptionAndExit();\r\n> \t\tElse\r\n> \r\n> I think we need to emit the error and recover from the error state before\r\n> setting the retry flag, like what we do in DisableSubscriptionAndExit().\r\n> Otherwise if an error is detected when setting the retry flag, we won't get the\r\n> error message reported by the apply worker.\r\n\r\nYou are right.\r\nI fixed this point as you suggested. (I moved the operations you mentioned from\r\nthe function DisableSubscriptionAndExit to before setting the retry flag.)\r\nI also made a similar modification in the function start_table_sync.\r\n\r\nAttach the news patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 13 Jul 2022 05:48:45 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jul 13, 2022 at 13:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Below are my review comments for the v16* patch set:\r\n\r\nThanks for your comments.\r\n\r\n> ========\r\n> v16-0001\r\n> ========\r\n> \r\n> 1.0 <general>\r\n> \r\n> There are places (comments, docs, errmsgs, etc) in the patch referring\r\n> to \"parallel mode\". I think every one of those references should be\r\n> found and renamed to \"parallel streaming mode\" or \"streaming=parallel\"\r\n> or at the very least match sure that \"streaming\" is in the same\r\n> sentence. 
IMO it's too vague just saying \"parallel\" without also\r\n> saying the context is for the \"streaming\" parameter.\r\n> \r\n> I have commented on some of those examples below, but please search\r\n> everything anyway (including the docs) to catch the ones I haven't\r\n> explicitly mentioned.\r\n\r\nI checked all places in the patch where the word \"parallel\" is used (case\r\ninsensitive), and I think it is clear that the description is related to stream\r\ntransactions. So I am not so sure. Could you please give me some examples? I\r\nwill improve them later.\r\n\r\n> 1.2 .../replication/logical/applybgworker.c - apply_bgworker_start\r\n> \r\n> + if (list_length(ApplyWorkersFreeList) > 0)\r\n> + {\r\n> + wstate = (ApplyBgworkerState *) llast(ApplyWorkersFreeList);\r\n> + ApplyWorkersFreeList = list_delete_last(ApplyWorkersFreeList);\r\n> + Assert(wstate->pstate->status == APPLY_BGWORKER_FINISHED);\r\n> + }\r\n> \r\n> The Assert that the entries in the free-list are FINISHED seems like\r\n> unnecessary checking. IIUC, code is already doing the Assert that\r\n> entries are FINISHED before allowing them into the free-list in the\r\n> first place.\r\n\r\nJust for robustness.\r\n\r\n> 1.3 .../replication/logical/applybgworker.c - apply_bgworker_find\r\n> \r\n> + if (found)\r\n> + {\r\n> + char status = entry->wstate->pstate->status;\r\n> +\r\n> + /* If any workers (or the postmaster) have died, we have failed. 
*/\r\n> + if (status == APPLY_BGWORKER_EXIT)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"background worker %u failed to apply transaction %u\",\r\n> + entry->wstate->pstate->n,\r\n> + entry->wstate->pstate->stream_xid)));\r\n> +\r\n> + Assert(status == APPLY_BGWORKER_BUSY);\r\n> +\r\n> + return entry->wstate;\r\n> + }\r\n> \r\n> Why not remove that Assert but change the condition to be:\r\n> \r\n> if (status != APPLY_BGWORKER_BUSY)\r\n> ereport(...)\r\n\r\nWhen I check \"APPLY_BGWORKER_EXIT\", I use the function \"ereport\" to report the\r\nerror, because \"APPLY_BGWORKER_EXIT\" is a possible use case.\r\nBut for \"APPLY_BGWORKER_BUSY\", this use case should not happen here. So I think\r\nit's fine to only check this for developers when the compile option\r\n\"--enable-cassert\" is specified.\r\n\r\n> ========\r\n> v16-0003\r\n> ========\r\n> \r\n> 3.0 <general>\r\n> \r\n> Same comment about \"parallel mode\" as in comment #1.0\r\n> \r\n> ======\r\n\r\nPlease refer to the reply to #1.0.\r\n\r\n> 3.5 src/backend/replication/logical/proto.c - logicalrep_read_attrs\r\n> \r\n> @@ -1012,11 +1062,14 @@ logicalrep_read_attrs(StringInfo in,\r\n> LogicalRepRelation *rel)\r\n> {\r\n> uint8 flags;\r\n> \r\n> - /* Check for replica identity column */\r\n> + /* Check for replica identity and unique column */\r\n> flags = pq_getmsgbyte(in);\r\n> - if (flags & LOGICALREP_IS_REPLICA_IDENTITY)\r\n> + if (flags & ATTR_IS_REPLICA_IDENTITY)\r\n> attkeys = bms_add_member(attkeys, i);\r\n> \r\n> + if (flags & ATTR_IS_UNIQUE)\r\n> + attunique = bms_add_member(attunique, i);\r\n> \r\n> The code comment really applies to all 3 statements so maybe better\r\n> not to have the blank line here.\r\n\r\nI think it looks a bit messy without the blank line.\r\nSo I tried to improve it to the following:\r\n```\r\n\t\t/* Check for replica identity column */\r\n\t\tflags = pq_getmsgbyte(in);\r\n\t\tif (flags & 
ATTR_IS_REPLICA_IDENTITY)\r\n\t\t\tattkeys = bms_add_member(attkeys, i);\r\n\r\n\t\t/* Check for unique column */\r\n\t\tif (flags & ATTR_IS_UNIQUE)\r\n\t\t\tattunique = bms_add_member(attunique, i);\r\n```\r\n\r\n> 3.6 src/backend/replication/logical/relation.c - logicalrep_rel_mark_parallel\r\n> \r\n> 3.6.a\r\n> + /* Fast path if we marked 'parallel' flag. */\r\n> + if (entry->parallel != PARALLEL_APPLY_UNKNOWN)\r\n> + return;\r\n> \r\n> SUGGESTED\r\n> Fast path if 'parallel' flag is already known.\r\n> \r\n> ~\r\n> \r\n> 3.6.b\r\n> + /* Initialize the flag. */\r\n> + entry->parallel = PARALLEL_APPLY_SAFE;\r\n> \r\n> I think it makes more sense if assigning SAFE is the very *last* thing\r\n> this function does – not the first thing.\r\n> \r\n> ~\r\n> \r\n> 3.6.c\r\n> + /*\r\n> + * First, we check if the unique column in the relation on the\r\n> + * subscriber-side is also the unique column on the publisher-side.\r\n> + */\r\n> \r\n> \"First, we check...\" -> \"First, check...\"\r\n> \r\n> ~\r\n> \r\n> 3.6.d\r\n> + /*\r\n> + * Then, We check if there is any non-immutable function in the local\r\n> + * table. 
Look for functions in the following places:\r\n> \r\n> \r\n> \"Then, We check...\" -> \"Then, check\"\r\n\r\n=>3.6.a\r\n=>3.6.c\r\n=>3.6.d\r\nImproved as suggested.\r\n\r\n=>3.6.b\r\nNot sure about this.\r\n\r\n> 3.7 src/backend/replication/logical/relation.c - logicalrep_rel_mark_parallel\r\n> \r\n> From [3] you wrote:\r\n> Personally, I do not like to use the `goto` syntax if it is not necessary,\r\n> because the `goto` syntax will forcibly change the flow of code execution.\r\n> \r\n> Yes, but OTOH readability is a major consideration too, and in this\r\n> function by simply saying goto parallel_unsafe; you can have 3 returns\r\n> instead of 7 returns, and it will take ~10 lines less code to do the\r\n> same functionality.\r\n\r\nI am still not sure about this, I think I will change this if some more people\r\nthink `goto` is better here.\r\n\r\n> 4.3 doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> + <literal>parallel</literal> mode is disregarded when retrying;\r\n> + instead the transaction will be applied using <literal>on</literal>\r\n> + mode.\r\n> \r\n> \"on mode\" etc sounds strange.\r\n> \r\n> SUGGESTION\r\n> During the retry the streaming=parallel mode is ignored. The retried\r\n> transaction will be applied using streaming=on mode.\r\n\r\nSince it's part of the streaming option document. I think it's fine to directly\r\nsay \"<literal>parallel</literal> mode\"\r\n\r\n> 4.4 src/backend/replication/logical/worker.c - set_subscription_retry\r\n> \r\n> + if (MySubscription->retry == retry ||\r\n> + am_apply_bgworker())\r\n> + return;\r\n> +\r\n> \r\n> Somehow I feel that this quick exit condition is not quite what it\r\n> seems. IIUC the purpose of this is really to avoid doing the tuple\r\n> updates if it is not necessary to do them. So if retry was already set\r\n> true then there is no need to update tuple to true again. So if retry\r\n> was already set false then there is no need to update the tuple to\r\n> false. 
But I just don't see how the (hypothetical) code below can work\r\n> as expected, because where is the code updating the value of\r\n> MySubscription->retry ???\r\n> \r\n> set_subscription_retry(true);\r\n> set_subscription_retry(true);\r\n> \r\n> I think at least there needs to be some detailed comments explaining\r\n> what this quick exit is really doing because my guess is that\r\n> currently it is not quite working as expected.\r\n\r\nThe subscription cache is updated in maybe_reread_subscription(), which is\r\ninvoked at every transaction. And we reset the retry flag at transaction end,\r\nso it should be fine. And I think the quick exit check code is similar to\r\nclear_subscription_skip_lsn.\r\n\r\nAttach the news patches.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPv0yWynWTmp4o34s0d98xVubys9fy%3Dp0YXsZ5_sUcNnMw%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 19 Jul 2022 02:28:43 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Jul 19, 2022 at 10:29 AM I wrote:\r\n> Attach the news patches.\r\n\r\nNot able to apply patches cleanly because the change in HEAD (366283961a).\r\nTherefore, I rebased the patch based on the changes in HEAD.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 22 Jul 2022 02:56:39 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 22, 2022 at 8:26 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tues, Jul 19, 2022 at 10:29 AM I wrote:\n> > Attach the news patches.\n>\n> Not able to apply patches cleanly because the change in HEAD (366283961a).\n> Therefore, I rebased the patch based on the changes 
in HEAD.\n>\n> Attach the new patches.\n>\n\nFew comments on 0001:\n======================\n1.\n- <structfield>substream</structfield> <type>bool</type>\n+ <structfield>substream</structfield> <type>char</type>\n </para>\n <para>\n- If true, the subscription will allow streaming of in-progress\n- transactions\n+ Controls how to handle the streaming of in-progress transactions:\n+ <literal>f</literal> = disallow streaming of in-progress transactions,\n+ <literal>t</literal> = spill the changes of in-progress transactions to\n+ disk and apply at once after the transaction is committed on the\n+ publisher,\n+ <literal>p</literal> = apply changes directly using a background worker\n\nShouldn't the description of 'p' be something like: apply changes\ndirectly using a background worker, if available, otherwise, it\nbehaves the same as 't'\n\n2.\nNote that if an error happens when\n+ applying changes in a background worker, the finish LSN of the\n+ remote transaction might not be reported in the server log.\n\nIs there any case where finish LSN can be reported when applying via\nbackground worker, if not, then we should use 'won't' instead of\n'might not'?\n\n3.\n+#define PG_LOGICAL_APPLY_SHM_MAGIC 0x79fb2447 // TODO Consider change\n\nIt is better to change this as the same magic number is used by\nPG_TEST_SHM_MQ_MAGIC\n\n4.\n+ /* Ignore statistics fields that have been updated. */\n+ s.cursor += IGNORE_SIZE_IN_MESSAGE;\n\nCan we change the comment to: \"Ignore statistics fields that have been\nupdated by the main apply worker.\"? 
Will it be better to name the\ndefine as \"SIZE_STATS_MESSAGE\"?\n\n5.\n+/* Apply Background Worker main loop */\n+static void\n+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared *shared)\n{\n...\n...\n\n+ apply_dispatch(&s);\n+\n+ if (ConfigReloadPending)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ }\n+\n+ MemoryContextSwitchTo(oldctx);\n+ MemoryContextReset(ApplyMessageContext);\n\nWe should not process the config file under ApplyMessageContext. You\nshould switch context before processing the config file. See other\nsimilar usages in the code.\n\n6.\n+/* Apply Background Worker main loop */\n+static void\n+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared *shared)\n{\n...\n...\n+ MemoryContextSwitchTo(oldctx);\n+ MemoryContextReset(ApplyMessageContext);\n+ }\n+\n+ MemoryContextSwitchTo(TopMemoryContext);\n+ MemoryContextReset(ApplyContext);\n...\n}\n\nI don't see the need to reset ApplyContext here as we don't do\nanything in that context here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Jul 2022 19:20:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tues, Jul 19, 2022 at 10:29 AM I wrote:\n> > Attach the news patches.\n>\n> Not able to apply patches cleanly because the change in HEAD (366283961a).\n> Therefore, I rebased the patch based on the changes in HEAD.\n>\n> Attach the new patches.\n\n+ /* Check the foreign keys. */\n+ fkeys = RelationGetFKeyList(entry->localrel);\n+ if (fkeys)\n+ entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\n\nSo if there is a foreign key on any of the tables which are parts of a\nsubscription then we do not allow changes for that subscription to be\napplied in parallel? 
I think this is a big limitation because having\nforeign key on the table is very normal right? I agree that if we\nallow them then there could be failure due to out of order apply\nright? but IMHO we should not put the restriction instead let it fail\nif there is ever such conflict. Because if there is a conflict the\ntransaction will be sent again. Do we see that there could be wrong\nor inconsistent results if we allow such things to be executed in\nparallel. If not then IMHO just to avoid some corner case failure we\nare restricting very normal cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 14:30:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jul 26, 2022 at 2:30 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Jul 19, 2022 at 10:29 AM I wrote:\n> > > Attach the news patches.\n> >\n> > Not able to apply patches cleanly because the change in HEAD (366283961a).\n> > Therefore, I rebased the patch based on the changes in HEAD.\n> >\n> > Attach the new patches.\n>\n> + /* Check the foreign keys. */\n> + fkeys = RelationGetFKeyList(entry->localrel);\n> + if (fkeys)\n> + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\n>\n> So if there is a foreign key on any of the tables which are parts of a\n> subscription then we do not allow changes for that subscription to be\n> applied in parallel? I think this is a big limitation because having\n> foreign key on the table is very normal right? I agree that if we\n> allow them then there could be failure due to out of order apply\n> right? but IMHO we should not put the restriction instead let it fail\n> if there is ever such conflict. 
Because if there is a conflict the\n> transaction will be sent again. Do we see that there could be wrong\n> or inconsistent results if we allow such things to be executed in\n> parallel. If not then IMHO just to avoid some corner case failure we\n> are restricting very normal cases.\n\nsome more comments..\n1.\n+ /*\n+ * If we have found a free worker or if we are already\napplying this\n+ * transaction in an apply background worker, then we\npass the data to\n+ * that worker.\n+ */\n+ if (first_segment)\n+ apply_bgworker_send_data(stream_apply_worker, s->len, s->data);\n\nComment says that if we have found a free worker or we are already\napplying in the worker then pass the changes to the worker\nbut actually as per the code here we are only passing in case of first_segment?\n\nI think what you are trying to say is that if it is first segment then send the\n\n2.\n+ /*\n+ * This is the main apply worker. Check if there is any free apply\n+ * background worker we can use to process this transaction.\n+ */\n+ if (first_segment)\n+ stream_apply_worker = apply_bgworker_start(stream_xid);\n+ else\n+ stream_apply_worker = apply_bgworker_find(stream_xid);\n\nSo currently, whenever we get a new streamed transaction we try to\nstart a new background worker for that. Why do we need to start/close\nthe background apply worker every time we get a new streamed\ntransaction. I mean we can keep the worker in the pool for time being\nand if there is a new transaction looking for a worker then we can\nfind from that. 
Starting a worker is a costly operation, and since we\nare using parallelism for this, we are expecting that there would\nbe frequent streamed transactions needing a parallel apply worker, so why\nnot let it wait for a certain amount of time so that if the load is low\nit will stop anyway, and if the load is high it will be reused for the next\nstreamed transaction.\n\n\n3.\nWhy are we restricting parallel apply workers only to streamed\ntransactions? Streaming depends upon the size of the logical\ndecoding work mem, so making streaming and parallel apply tightly\ncoupled seems too restrictive to me. Do we see some obvious problems\nin applying other transactions in parallel?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 15:03:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for patch v19-0001:\n\n======\n\n1.1 Missing docs for protocol version\n\nSince you bumped the logical replication protocol version for this\npatch, now there is some missing documentation to describe this new\nprotocol version. e.g. See here [1]\n\n======\n\n1.2 doc/src/sgml/config.sgml\n\n+ <para>\n+ Maximum number of apply background workers per subscription. This\n+ parameter controls the amount of parallelism of the streaming of\n+ in-progress transactions when subscription parameter\n+ <literal>streaming = parallel</literal>.\n+ </para>\n\nSUGGESTION\nMaximum number of apply background workers per subscription. 
This\nparameter controls the amount of parallelism for streaming of\nin-progress transactions with subscription parameter\n<literal>streaming = parallel</literal>.\n\n======\n\n1.3 src/sgml/protocol.sgml\n\n@@ -6809,6 +6809,25 @@ psql \"dbname=postgres replication=database\" -c\n\"IDENTIFY_SYSTEM;\"\n </listitem>\n </varlistentry>\n\n+ <varlistentry>\n+ <term>Int64 (XLogRecPtr)</term>\n+ <listitem>\n+ <para>\n+ The LSN of the abort.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n+ <term>Int64 (TimestampTz)</term>\n+ <listitem>\n+ <para>\n+ Abort timestamp of the transaction. The value is in number\n+ of microseconds since PostgreSQL epoch (2000-01-01).\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nThere are missing notes on these new fields. They both should say\nsomething like \"This field is available since protocol version 4.\"\n(See similar examples on the same docs page)\n\n======\n\n1.4 src/backend/replication/logical/applybgworker.c - apply_bgworker_start\n\nPreviously [1] I wrote:\n> The Assert that the entries in the free-list are FINISHED seems like\n> unnecessary checking. IIUC, code is already doing the Assert that\n> entries are FINISHED before allowing them into the free-list in the\n> first place.\n\nIMO this Assert just causes unnecessary doubts, but if you really want\nto keep it then I think it belongs logically *above* the\nlist_delete_last.\n\n~~~\n\n1.5 src/backend/replication/logical/applybgworker.c - apply_bgworker_start\n\n+ server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n+ wstate->shared->server_version =\n+ server_version >= 160000 ? LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n+ server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n+ server_version >= 140000 ? 
LOGICALREP_PROTO_STREAM_VERSION_NUM :\n+ LOGICALREP_PROTO_VERSION_NUM;\n\nIt makes no sense to assign a protocol version to a server_version.\nPerhaps it is just a simple matter of a poorly named struct member.\ne.g Maybe everything is OK if it said something like\nwstate->shared->proto_version.\n\n~~~\n\n1.6 src/backend/replication/logical/applybgworker.c - LogicalApplyBgwLoop\n\n+/* Apply Background Worker main loop */\n+static void\n+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared *shared)\n\n'shared' seems a very vague param name. Maybe can be 'bgw_shared' or\n'parallel_shared' or something better?\n\n~~~\n\n1.7 src/backend/replication/logical/applybgworker.c - ApplyBgworkerMain\n\n+/*\n+ * Apply Background Worker entry point\n+ */\n+void\n+ApplyBgworkerMain(Datum main_arg)\n+{\n+ volatile ApplyBgworkerShared *shared;\n\n'shared' seems a very vague var name. Maybe can be 'bgw_shared' or\n'parallel_shared' or something better?\n\n~~~\n\n1.8 src/backend/replication/logical/applybgworker.c - apply_bgworker_setup_dsm\n\n+static void\n+apply_bgworker_setup_dsm(ApplyBgworkerState *wstate)\n+{\n+ shm_toc_estimator e;\n+ Size segsize;\n+ dsm_segment *seg;\n+ shm_toc *toc;\n+ ApplyBgworkerShared *shared;\n+ shm_mq *mq;\n\n'shared' seems a very vague var name. Maybe can be 'bgw_shared' or\n'parallel_shared' or something better?\n\n~~~\n\n1.9 src/backend/replication/logical/applybgworker.c - apply_bgworker_setup_dsm\n\n+ server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n+ shared->server_version =\n+ server_version >= 160000 ? LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n+ server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n+ server_version >= 140000 ? 
LOGICALREP_PROTO_STREAM_VERSION_NUM :\n+ LOGICALREP_PROTO_VERSION_NUM;\n\nSame as earlier review comment #1.5\n\n======\n\n1.10 src/backend/replication/logical/worker.c\n\n@@ -22,8 +22,28 @@\n * STREAMED TRANSACTIONS\n * ---------------------\n * Streamed transactions (large transactions exceeding a memory limit on the\n- * upstream) are not applied immediately, but instead, the data is written\n- * to temporary files and then applied at once when the final commit arrives.\n+ * upstream) are applied using one of two approaches.\n+ *\n+ * 1) Separate background workers\n\n\"two approaches.\" --> \"two approaches:\"\n\n~~~\n\n1.11 src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /* Check whether the publisher sends abort_lsn and abort_time. */\n+ if (am_apply_bgworker())\n+ read_abort_lsn = MyParallelShared->server_version >=\n+ LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM;\n\nIMO makes no sense to compare a server version with a protocol\nversion. Same as review comment #1.5\n\n======\n\n1.12 src/include/replication/worker_internal.h\n\n+typedef struct ApplyBgworkerShared\n+{\n+ slock_t mutex;\n+\n+ /* Status of apply background worker. */\n+ ApplyBgworkerStatus status;\n+\n+ /* server version of publisher. */\n+ uint32 server_version;\n+\n+ TransactionId stream_xid;\n+ uint32 n; /* id of apply background worker */\n+} ApplyBgworkerShared;\n\nAFAICT you only ever used 'server_version' for storing the *protocol*\nversion, so really this member should be called something like\n'proto_version'. 
Please see earlier review comment #1.5 and others.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPvN7fwtUE%3DbidzrsOUXSt%2BJpnkJztZ-Jn5t86moofaZ6g%40mail.gmail.com\n[2] https://www.postgresql.org/docs/devel/protocol-logical-replication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 26 Jul 2022 19:56:07 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for patch v19-0003:\n\n======\n\n3.1 doc/src/sgml/ref/create_subscription.sgml\n\n@@ -240,6 +240,10 @@ CREATE SUBSCRIPTION <replaceable\nclass=\"parameter\">subscription_name</replaceabl\n transaction is committed. Note that if an error happens when\n applying changes in a background worker, the finish LSN of the\n remote transaction might not be reported in the server log.\n+ <literal>parallel</literal> mode has two requirements: 1) the unique\n+ column in the relation on the subscriber-side should also be the\n+ unique column on the publisher-side; 2) there cannot be any\n+ non-immutable functions used by the subscriber-side replicated table.\n </para>\n\n3.1a.\nIt looked a bit strange starting the sentence with the enum\n\"<literal>parallel</literal> mode\". Maybe reword it to something like:\n\n\"This mode has two requirements: ...\"\nor\n\"There are two requirements for using <literal>parallel</literal> mode: ...\"\n\n3.1b.\nPoint 1) says \"relation\", but point 2) says \"table\". I think the\nconsistent term should be used.\n\n======\n\n3.2 <general>\n\nFor consistency, please search all this patch and replace every:\n\n\"... applied by an apply background worker\" -> \"... applied using an\napply background worker\"\n\nAnd also search/replace every:\n\n\"... in the apply background worker\" -> \"... 
using an apply background worker\"\n\n======\n\n3.3 .../replication/logical/applybgworker.c\n\n@@ -800,3 +800,47 @@ apply_bgworker_subxact_info_add(TransactionId current_xid)\n MemoryContextSwitchTo(oldctx);\n }\n }\n+\n+/*\n+ * Check if changes on this relation can be applied by an apply background\n+ * worker.\n+ *\n+ * Although the commit order is maintained only allowing one process to commit\n+ * at a time, the access order to the relation has changed. This could cause\n+ * unexpected problems if the unique column on the replicated table is\n+ * inconsistent with the publisher-side or contains non-immutable functions\n+ * when applying transactions in the apply background worker.\n+ */\n+void\n+apply_bgworker_relation_check(LogicalRepRelMapEntry *rel)\n\n\"only allowing\" -> \"by only allowing\" (I think you mean this, right?)\n\n~~~\n\n3.4\n\n+ /*\n+ * Return if changes on this relation can be applied by an apply background\n+ * worker.\n+ */\n+ if (rel->parallel_apply == PARALLEL_APPLY_SAFE)\n+ return;\n+\n+ /* We are in error mode and should give user correct error. 
*/\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot replicate target relation \\\"%s.%s\\\" using \"\n+ \"subscription parameter streaming=parallel\",\n+ rel->remoterel.nspname, rel->remoterel.relname),\n+ errdetail(\"The unique column on subscriber is not the unique \"\n+ \"column on publisher or there is at least one \"\n+ \"non-immutable function.\"),\n+ errhint(\"Please change to use subscription parameter \"\n+ \"streaming=on.\")));\n\n3.4a.\nOf course, the code should give the user the \"correct error\" if there\nis an error (!), but having a comment explicitly saying so does not\nserve any purpose.\n\n3.4b.\nThe logic might be simplified if it was written differently like:\n\n+ if (rel->parallel_apply != PARALLEL_APPLY_SAFE)\n+ ereport(ERROR, ...\n\n======\n\n3.5 src/backend/replication/logical/proto.c\n\n@@ -40,6 +41,68 @@ static void logicalrep_read_tuple(StringInfo in,\nLogicalRepTupleData *tuple);\n static void logicalrep_write_namespace(StringInfo out, Oid nspid);\n static const char *logicalrep_read_namespace(StringInfo in);\n\n+static Bitmapset *RelationGetUniqueKeyBitmap(Relation rel);\n+\n+/*\n+ * RelationGetUniqueKeyBitmap -- get a bitmap of unique attribute numbers\n+ *\n+ * This is similar to RelationGetIdentityKeyBitmap(), but returns a bitmap of\n+ * index attribute numbers for all unique indexes.\n+ */\n+static Bitmapset *\n+RelationGetUniqueKeyBitmap(Relation rel)\n\nWhy is the forward declaration needed when the static function\nimmediately follows it?\n\n======\n\n3.6 src/backend/replication/logical/relation.c -\nlogicalrep_relmap_reset_parallel_cb\n\n@@ -91,6 +98,26 @@ logicalrep_relmap_invalidate_cb(Datum arg, Oid reloid)\n }\n }\n\n+/*\n+ * Relcache invalidation callback to reset parallel flag.\n+ */\n+static void\n+logicalrep_relmap_reset_parallel_cb(Datum arg, int cacheid, uint32 hashvalue)\n\n\"reset parallel flag\" -> \"reset parallel_apply flag\"\n\n~~~\n\n3.7 
src/backend/replication/logical/relation.c -\nlogicalrep_rel_mark_parallel_apply\n\n+ * There are two requirements for applying changes in an apply background\n+ * worker: 1) The unique column in the relation on the subscriber-side should\n+ * also be the unique column on the publisher-side; 2) There cannot be any\n+ * non-immutable functions used by the subscriber-side.\n\nThis comment should exactly match the help text. See review comment #3.1\n\n~~~\n\n3.8\n\n+ /* Initialize the flag. */\n+ entry->parallel_apply = PARALLEL_APPLY_SAFE;\n\nI previously suggested [1] (#3.6b) to move this. Consider, that you\ncannot logically flag the entry as \"safe\" until you are certain that\nit is safe. And you cannot be sure of that until you've passed all the\nchecks this function is doing. Therefore IMO the assignment to\nPARALLEL_APPLY_SAFE should be the last line of the function.\n\n~~~\n\n3.9\n\n+ /*\n+ * Then, check if there is any non-immutable function used by the local\n+ * table. Look for functions in the following places:\n+ * a. trigger functions;\n+ * b. Column default value expressions and domain constraints;\n+ * c. Constraint expressions;\n+ * d. Foreign keys.\n+ */\n\n\"used by the local table\" -> \"used by the subscriber-side relation\"\n(reworded so that it is consistent with the First comment)\n\n~~~\n\n3.10\n\nI previously suggested [1] (#3.7) to use goto in this function to\navoid the excessive number of returns. IMO there is nothing inherently\nevil about gotos, so long as they are used with care - sometimes they\nare the best option. 
Anyway, I attached some BEFORE/AFTER example code\nto this post - others can judge which way is preferable.\n\n======\n\n3.11 src/backend/utils/cache/typcache.c - GetDomainConstraints\n\n@@ -2540,6 +2540,23 @@ compare_values_of_enum(TypeCacheEntry *tcache,\nOid arg1, Oid arg2)\n return 0;\n }\n\n+/*\n+ * GetDomainConstraints --- get DomainConstraintState list of\nspecified domain type\n+ */\n+List *\n+GetDomainConstraints(Oid type_id)\n+{\n+ TypeCacheEntry *typentry;\n+ List *constraints = NIL;\n+\n+ typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);\n+\n+ if(typentry->domainData != NULL)\n+ constraints = typentry->domainData->constraints;\n+\n+ return constraints;\n+}\n\nThis function can be simplified (if you want). e.g.\n\nList *\nGetDomainConstraints(Oid type_id)\n{\nTypeCacheEntry *typentry;\n\ntypentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);\n\nreturn typentry->domainData ? typentry->domainData->constraints : NIL;\n}\n\n======\n\n3.12 src/include/replication/logicalrelation.h\n\n@@ -15,6 +15,19 @@\n #include \"access/attmap.h\"\n #include \"replication/logicalproto.h\"\n\n+/*\n+ * States to determine if changes on one relation can be applied using an\n+ * apply background worker.\n+ */\n+typedef enum ParalleApplySafety\n+{\n+ PARALLEL_APPLY_UNKNOWN = 0, /* unknown */\n+ PARALLEL_APPLY_SAFE, /* Can apply changes in an apply background\n+ worker */\n+ PARALLEL_APPLY_UNSAFE /* Can not apply changes in an apply background\n+ worker */\n+} ParalleApplySafety;\n+\n\n3.12a\nTypo in enum and typedef names:\n\"ParalleApplySafety\" -> \"ParallelApplySafety\"\n\n3.12b\nI think the values are quite self-explanatory now. Commenting on each\nof them separately is not really adding anything useful.\n\n3.12c.\nNew enum missing from typedefs.list?\n\n======\n\n3.13 typdefs.list\n\nShould include the new typedef. 
See comment #3.12c.\n\n------\n[1] https://www.postgresql.org/message-id/OS3PR01MB62758A6AAED27B3A848CEB7A9E8F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 27 Jul 2022 13:37:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jul 26, 2022 at 2:30 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Jul 19, 2022 at 10:29 AM I wrote:\n> > > Attach the news patches.\n> >\n> > Not able to apply patches cleanly because the change in HEAD (366283961a).\n> > Therefore, I rebased the patch based on the changes in HEAD.\n> >\n> > Attach the new patches.\n>\n> + /* Check the foreign keys. */\n> + fkeys = RelationGetFKeyList(entry->localrel);\n> + if (fkeys)\n> + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\n>\n> So if there is a foreign key on any of the tables which are parts of a\n> subscription then we do not allow changes for that subscription to be\n> applied in parallel?\n>\n\nI think the above check will just prevent the parallelism for a xact\noperating on the corresponding relation not the relations of the\nentire subscription. Your statement sounds like you are saying that it\nwill prevent parallelism for all the other tables in the subscription\nwhich has a table with FK.\n\n> I think this is a big limitation because having\n> foreign key on the table is very normal right? I agree that if we\n> allow them then there could be failure due to out of order apply\n> right?\n>\n\nWhat kind of failure do you have in mind and how it can occur? 
The one\nway it can fail is if the publisher doesn't have a corresponding\nforeign key on the table because then the publisher could have allowed\nan insert into a table (insert into FK table without having the\ncorresponding key in PK table) which may not be allowed on the\nsubscriber. However, I don't see any check that could prevent this\nbecause for this we need to compare the FK list for a table from the\npublisher with the corresponding one on the subscriber. I am not\nreally sure if due to the risk of such conflicts we should block\nparallelism of transactions operating on tables with FK because those\nconflicts can occur even without parallelism, it is just a matter of\ntiming. But, I could be missing something due to which the above check\ncan be useful?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Jul 2022 10:06:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jul 27, 2022 at 10:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 2:30 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tues, Jul 19, 2022 at 10:29 AM I wrote:\n> > > > Attach the news patches.\n> > >\n> > > Not able to apply patches cleanly because the change in HEAD (366283961a).\n> > > Therefore, I rebased the patch based on the changes in HEAD.\n> > >\n> > > Attach the new patches.\n> >\n> > + /* Check the foreign keys. 
*/\n> > + fkeys = RelationGetFKeyList(entry->localrel);\n> > + if (fkeys)\n> > + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\n> >\n> > So if there is a foreign key on any of the tables which are parts of a\n> > subscription then we do not allow changes for that subscription to be\n> > applied in parallel?\n> >\n>\n> I think the above check will just prevent the parallelism for a xact\n> operating on the corresponding relation not the relations of the\n> entire subscription. Your statement sounds like you are saying that it\n> will prevent parallelism for all the other tables in the subscription\n> which has a table with FK.\n\nOkay, got it. I thought we are disallowing parallelism for the entire\nsubscription.\n\n> > I think this is a big limitation because having\n> > foreign key on the table is very normal right? I agree that if we\n> > allow them then there could be failure due to out of order apply\n> > right?\n> >\n>\n> What kind of failure do you have in mind and how it can occur? The one\n> way it can fail is if the publisher doesn't have a corresponding\n> foreign key on the table because then the publisher could have allowed\n> an insert into a table (insert into FK table without having the\n> corresponding key in PK table) which may not be allowed on the\n> subscriber. However, I don't see any check that could prevent this\n> because for this we need to compare the FK list for a table from the\n> publisher with the corresponding one on the subscriber. I am not\n> really sure if due to the risk of such conflicts we should block\n> parallelism of transactions operating on tables with FK because those\n> conflicts can occur even without parallelism, it is just a matter of\n> timing. 
But, I could be missing something due to which the above check\n> can be useful?\n\nActually, my question starts with this check[1][2], from this it\nappears that if this relation is having a foreign key then we are\nmarking it parallel unsafe[2] and later in [1] while the worker is\napplying changes for that relation and if it was marked parallel\nunsafe then we are throwing error. So my question was why we are\nputting this restriction? Although this error is only talking about\nunique and non-immutable functions this is also giving an error if the\ntarget table had a foreign key. So my question was do we really need\nto restrict this? I mean why we are restricting this case?\n\n\n[1]\n+apply_bgworker_relation_check(LogicalRepRelMapEntry *rel)\n+{\n+ /* Skip check if not an apply background worker. */\n+ if (!am_apply_bgworker())\n+ return;\n+\n+ /*\n+ * Partition table checks are done later in function\n+ * apply_handle_tuple_routing.\n+ */\n+ if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ return;\n+\n+ /*\n+ * Return if changes on this relation can be applied by an apply background\n+ * worker.\n+ */\n+ if (rel->parallel_apply == PARALLEL_APPLY_SAFE)\n+ return;\n+\n+ /* We are in error mode and should give user correct error. */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot replicate target relation \\\"%s.%s\\\" using \"\n+ \"subscription parameter streaming=parallel\",\n+ rel->remoterel.nspname, rel->remoterel.relname),\n+ errdetail(\"The unique column on subscriber is not the unique \"\n+ \"column on publisher or there is at least one \"\n+ \"non-immutable function.\"),\n+ errhint(\"Please change to use subscription parameter \"\n+ \"streaming=on.\")));\n+}\n\n[2]\n> > + /* Check the foreign keys. 
*/\n> > + fkeys = RelationGetFKeyList(entry->localrel);\n> > + if (fkeys)\n> > + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jul 2022 10:58:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, July 27, 2022 1:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Wed, Jul 27, 2022 at 10:06 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jul 26, 2022 at 2:30 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> > >\r\n> > > On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Tues, Jul 19, 2022 at 10:29 AM I wrote:\r\n> > > > > Attach the news patches.\r\n> > > >\r\n> > > > Not able to apply patches cleanly because the change in HEAD\r\n> (366283961a).\r\n> > > > Therefore, I rebased the patch based on the changes in HEAD.\r\n> > > >\r\n> > > > Attach the new patches.\r\n> > >\r\n> > > + /* Check the foreign keys. */\r\n> > > + fkeys = RelationGetFKeyList(entry->localrel);\r\n> > > + if (fkeys)\r\n> > > + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\r\n> > >\r\n> > > So if there is a foreign key on any of the tables which are parts of\r\n> > > a subscription then we do not allow changes for that subscription to\r\n> > > be applied in parallel?\r\n> > >\r\n> >\r\n> > I think the above check will just prevent the parallelism for a xact\r\n> > operating on the corresponding relation not the relations of the\r\n> > entire subscription. Your statement sounds like you are saying that it\r\n> > will prevent parallelism for all the other tables in the subscription\r\n> > which has a table with FK.\r\n> \r\n> Okay, got it. 
I thought we are disallowing parallelism for the entire subscription.\r\n> \r\n> > > I think this is a big limitation because having foreign key on the\r\n> > > table is very normal right? I agree that if we allow them then\r\n> > > there could be failure due to out of order apply right?\r\n> > >\r\n> >\r\n> > What kind of failure do you have in mind and how it can occur? The one\r\n> > way it can fail is if the publisher doesn't have a corresponding\r\n> > foreign key on the table because then the publisher could have allowed\r\n> > an insert into a table (insert into FK table without having the\r\n> > corresponding key in PK table) which may not be allowed on the\r\n> > subscriber. However, I don't see any check that could prevent this\r\n> > because for this we need to compare the FK list for a table from the\r\n> > publisher with the corresponding one on the subscriber. I am not\r\n> > really sure if due to the risk of such conflicts we should block\r\n> > parallelism of transactions operating on tables with FK because those\r\n> > conflicts can occur even without parallelism, it is just a matter of\r\n> > timing. But, I could be missing something due to which the above check\r\n> > can be useful?\r\n> \r\n> Actually, my question starts with this check[1][2], from this it\r\n> appears that if this relation is having a foreign key then we are\r\n> marking it parallel unsafe[2] and later in [1] while the worker is\r\n> applying changes for that relation and if it was marked parallel\r\n> unsafe then we are throwing error. So my question was why we are\r\n> putting this restriction? Although this error is only talking about\r\n> unique and non-immutable functions this is also giving an error if the\r\n> target table had a foreign key. So my question was do we really need\r\n> to restrict this? 
I mean why we are restricting this case?\r\n> \r\n\r\nHi,\r\n\r\nI think the foreign key check is used to prevent the apply worker from waiting\r\nindefinitely, which is caused by a foreign key difference between publisher and\r\nsubscriber, like the following example:\r\n\r\n-------------------------------------\r\nPublisher:\r\n-- both tables are published\r\nCREATE TABLE PKTABLE ( ptest1 int);\r\nCREATE TABLE FKTABLE ( ftest1 int);\r\n\r\n-- initial data\r\nINSERT INTO PKTABLE VALUES(1);\r\n\r\nSubscriber:\r\nCREATE TABLE PKTABLE ( ptest1 int PRIMARY KEY);\r\nCREATE TABLE FKTABLE ( ftest1 int REFERENCES PKTABLE);\r\n\r\n-- Execute the following transactions on publisher\r\n\r\nTx1:\r\nINSERT ... -- make enough changes to start streaming mode\r\nDELETE FROM PKTABLE;\r\n\tTx2:\r\n\tINSERT INTO FKTABLE VALUES(1);\r\n\tCOMMIT;\r\nCOMMIT;\r\n-------------------------------------\r\n\r\nThe subscriber's apply worker will wait indefinitely, because the main apply worker is\r\nwaiting for the streaming transaction to finish which is in another apply\r\nbgworker.\r\n\r\n\r\nBTW, I think the foreign key won't take effect in the subscriber's apply worker by\r\ndefault. Because we set session_replication_role to 'replica' in the apply worker,\r\nwhich prevents the FK trigger function from being executed (only a trigger with the\r\nFIRES_ON_REPLICA flag will be executed in this mode). The user can only alter the\r\ntrigger to enable it in replica mode to make the foreign key work. So, ISTM, we\r\nwon't hit this ERROR frequently.\r\n\r\nAnd based on this, another comment about the patch is that it seems unnecessary\r\nto directly check the FK returned by RelationGetFKeyList. 
Checking the actual FK\r\ntrigger function seems enough.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 27 Jul 2022 07:57:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for the patch v19-0004:\n\n======\n\n1. doc/src/sgml/ref/create_subscription.sgml\n\n@@ -244,6 +244,11 @@ CREATE SUBSCRIPTION <replaceable\nclass=\"parameter\">subscription_name</replaceabl\n column in the relation on the subscriber-side should also be the\n unique column on the publisher-side; 2) there cannot be any\n non-immutable functions used by the subscriber-side replicated table.\n+ When applying a streaming transaction, if either requirement is not\n+ met, the background worker will exit with an error.\n+ <literal>parallel</literal> mode is disregarded when retrying;\n+ instead the transaction will be applied using <literal>on</literal>\n+ mode.\n </para>\n\nThat last sentence starting with lowercase seems odd - that's why I\nthought saying \"The parallel mode...\" might be better. IMO \"on mode\"\nseems strange too. Hence my previous [1] (#4.3) suggestion for this\n\nSUGGESTION\nThe <literal>parallel</literal> mode is disregarded when retrying;\ninstead the transaction will be applied using <literal>streaming =\non</literal>.\n\n======\n\n2. 
src/backend/replication/logical/worker.c - start_table_sync\n\n@@ -3902,20 +3925,28 @@ start_table_sync(XLogRecPtr *origin_startpos,\nchar **myslotname)\n }\n PG_CATCH();\n {\n+ /*\n+ * Emit the error message, and recover from the error state to an idle\n+ * state\n+ */\n+ HOLD_INTERRUPTS();\n+\n+ EmitErrorReport();\n+ AbortOutOfAnyTransaction();\n+ FlushErrorState();\n+\n+ RESUME_INTERRUPTS();\n+\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MySubscription->oid, false);\n+\n+ /* Set the retry flag. */\n+ set_subscription_retry(true);\n+\n if (MySubscription->disableonerr)\n DisableSubscriptionAndExit();\n- else\n- {\n- /*\n- * Report the worker failed during table synchronization. Abort\n- * the current transaction so that the stats message is sent in an\n- * idle state.\n- */\n- AbortOutOfAnyTransaction();\n- pgstat_report_subscription_error(MySubscription->oid, false);\n\n- PG_RE_THROW();\n- }\n+ proc_exit(0);\n }\n\nBut is it correct to set the 'retry' flag even if the\nMySubscription->disableonerr is true? Won’t that mean even after the\nuser corrects the problem and then re-enabled the subscription it\nstill won't let the streaming=parallel work, because that retry flag\nis set?\n\nAlso, Something seems wrong to me here - IIUC the patch changed this\ncode because of the potential risk of an error within the\nset_subscription_retry function, but now if such an error happens the\ncurrent code will bypass even getting to DisableSubscriptionAndExit,\nso the subscription won't have a chance to get disabled as the user\nmight have wanted.\n\n~~~\n\n3. 
src/backend/replication/logical/worker.c - start_apply\n\n@@ -3940,20 +3971,27 @@ start_apply(XLogRecPtr origin_startpos)\n }\n PG_CATCH();\n {\n+ /*\n+ * Emit the error message, and recover from the error state to an idle\n+ * state\n+ */\n+ HOLD_INTERRUPTS();\n+\n+ EmitErrorReport();\n+ AbortOutOfAnyTransaction();\n+ FlushErrorState();\n+\n+ RESUME_INTERRUPTS();\n+\n+ /* Report the worker failed while applying changes */\n+ pgstat_report_subscription_error(MySubscription->oid,\n+ !am_tablesync_worker());\n+\n+ /* Set the retry flag. */\n+ set_subscription_retry(true);\n+\n if (MySubscription->disableonerr)\n DisableSubscriptionAndExit();\n- else\n- {\n- /*\n- * Report the worker failed while applying changes. Abort the\n- * current transaction so that the stats message is sent in an\n- * idle state.\n- */\n- AbortOutOfAnyTransaction();\n- pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n-\n- PG_RE_THROW();\n- }\n }\n\n(Same as previous review comment #2)\n\nBut is it correct to set the 'retry' flag even if the\nMySubscription->disableonerr is true? Won’t that mean even after the\nuser corrects the problem and then re-enabled the subscription it\nstill won't let the streaming=parallel work, because that retry flag\nis set?\n\nAlso, Something seems wrong to me here - IIUC the patch changed this\ncode because of the potential risk of an error within the\nset_subscription_retry function, but now if such an error happens the\ncurrent code will bypass even getting to DisableSubscriptionAndExit,\nso the subscription won't have a chance to get disabled as the user\nmight have wanted.\n\n~~~\n\n4. 
src/backend/replication/logical/worker.c - DisableSubscriptionAndExit\n\n /*\n- * After error recovery, disable the subscription in a new transaction\n- * and exit cleanly.\n+ * Disable the subscription in a new transaction.\n */\n static void\n DisableSubscriptionAndExit(void)\n {\n- /*\n- * Emit the error message, and recover from the error state to an idle\n- * state\n- */\n- HOLD_INTERRUPTS();\n-\n- EmitErrorReport();\n- AbortOutOfAnyTransaction();\n- FlushErrorState();\n-\n- RESUME_INTERRUPTS();\n-\n- /* Report the worker failed during either table synchronization or apply */\n- pgstat_report_subscription_error(MyLogicalRepWorker->subid,\n- !am_tablesync_worker());\n-\n /* Disable the subscription */\n StartTransactionCommand();\n DisableSubscription(MySubscription->oid);\n@@ -4231,8 +4252,6 @@ DisableSubscriptionAndExit(void)\n ereport(LOG,\n errmsg(\"logical replication subscription \\\"%s\\\" has been disabled\ndue to an error\",\n MySubscription->name));\n-\n- proc_exit(0);\n }\n\n4a.\nHmm, I think it is a bad idea to remove the \"exiting\" code from the\nfunction but still leave the function name the same as before saying\n\"AndExit\".\n\n4b.\nAlso, now the patch is unconditionally doing proc_exit(0) in the\ncalling code where previously it would do PG_RE_THROW. So it's a\nsubtle difference from the path the code used to take for worker\nerrors..\n\n~~~\n\n5. src/backend/replication/logical/worker.c - set_subscription_retry\n\n@@ -4467,3 +4486,63 @@ reset_apply_error_context_info(void)\n apply_error_callback_arg.remote_attnum = -1;\n set_apply_error_context_xact(InvalidTransactionId, InvalidXLogRecPtr);\n }\n+\n+/*\n+ * Set subretry of pg_subscription catalog.\n+ *\n+ * If retry is true, subscriber is about to exit with an error. 
Otherwise, it\n+ * means that the transaction was applied successfully.\n+ */\n+static void\n+set_subscription_retry(bool retry)\n+{\n+ Relation rel;\n+ HeapTuple tup;\n+ bool started_tx = false;\n+ bool nulls[Natts_pg_subscription];\n+ bool replaces[Natts_pg_subscription];\n+ Datum values[Natts_pg_subscription];\n+\n+ if (MySubscription->retry == retry ||\n+ am_apply_bgworker())\n+ return;\n\nCurrently, I think this new 'subretry' field is only used to decide\nwhether a retry can use an apply background worker or not. I think all\nthis logic is *only* used when streaming=parallel. But AFAICT the\nlogic for setting/clearing the retry flag is executed *always*\nregardless of the streaming mode.\n\nSo for all the times when the user did not ask for streaming=parallel\ndoesn't this just cause unnecessary overhead for every transaction?\n\n------\n[1] https://www.postgresql.org/message-id/OS3PR01MB62758A6AAED27B3A848CEB7A9E8F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 27 Jul 2022 18:03:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, July 26, 2022 5:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> On Tue, Jul 26, 2022 at 2:30 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jul 22, 2022 at 8:27 AM wangw.fnst@fujitsu.com\r\n> > <wangw.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tues, Jul 19, 2022 at 10:29 AM I wrote:\r\n> > > > Attach the news patches.\r\n> > >\r\n> > > Not able to apply patches cleanly because the change in HEAD\r\n> (366283961a).\r\n> > > Therefore, I rebased the patch based on the changes in HEAD.\r\n> > >\r\n> > > Attach the new patches.\r\n> >\r\n> > + /* Check the foreign keys. 
*/\r\n> > + fkeys = RelationGetFKeyList(entry->localrel);\r\n> > + if (fkeys)\r\n> > + entry->parallel_apply = PARALLEL_APPLY_UNSAFE;\r\n> >\r\n> > So if there is a foreign key on any of the tables which are parts of a\r\n> > subscription then we do not allow changes for that subscription to be\r\n> > applied in parallel? I think this is a big limitation because having\r\n> > foreign key on the table is very normal right? I agree that if we\r\n> > allow them then there could be failure due to out of order apply\r\n> > right? but IMHO we should not put the restriction instead let it fail\r\n> > if there is ever such conflict. Because if there is a conflict the\r\n> > transaction will be sent again. Do we see that there could be wrong\r\n> > or inconsistent results if we allow such things to be executed in\r\n> > parallel. If not then IMHO just to avoid some corner case failure we\r\n> > are restricting very normal cases.\r\n> \r\n> some more comments..\r\n> 1.\r\n> + /*\r\n> + * If we have found a free worker or if we are already\r\n> applying this\r\n> + * transaction in an apply background worker, then we\r\n> pass the data to\r\n> + * that worker.\r\n> + */\r\n> + if (first_segment)\r\n> + apply_bgworker_send_data(stream_apply_worker, s->len,\r\n> + s->data);\r\n> \r\n> Comment says that if we have found a free worker or we are already applying in\r\n> the worker then pass the changes to the worker but actually as per the code\r\n> here we are only passing in case of first_segment?\r\n> \r\n> I think what you are trying to say is that if it is first segment then send the\r\n> \r\n> 2.\r\n> + /*\r\n> + * This is the main apply worker. 
Check if there is any free apply\r\n> + * background worker we can use to process this transaction.\r\n> + */\r\n> + if (first_segment)\r\n> + stream_apply_worker = apply_bgworker_start(stream_xid);\r\n> + else\r\n> + stream_apply_worker = apply_bgworker_find(stream_xid);\r\n> \r\n> So currently, whenever we get a new streamed transaction we try to start a new\r\n> background worker for it. Why do we need to start/close the background\r\n> apply worker every time we get a new streamed transaction? I mean we can\r\n> keep the worker in the pool for the time being, and if there is a new transaction\r\n> looking for a worker then we can pick one from there. Starting a worker is a costly\r\n> operation, and since we are using parallelism for this it means we are expecting\r\n> that there would be frequent streamed transactions needing a parallel apply\r\n> worker, so why not let it wait for a certain amount of time so that if the load is low\r\n> it will stop anyway, and if the load is high it will be reused for the next streamed\r\n> transaction.\r\n\r\nIt seems the function name was a bit misleading. Currently, the started apply\r\nbgworker won't exit after applying the transaction. And\r\napply_bgworker_start will first try to choose a free worker; it will start a\r\nnew worker only if no free worker is available.\r\n\r\n> 3.\r\n> Why are we restricting parallel apply workers only to the streamed\r\n> transactions? Streaming depends upon the size of the logical decoding\r\n> work mem, so making streaming and parallel apply tightly coupled seems too\r\n> restrictive to me. Do we see some obvious problems in applying other\r\n> transactions in parallel?\r\n\r\nWe thought there could be some conflict failures and deadlocks if we parallel\r\napply normal transactions, which need transaction dependency checks[1]. 
But I will do\r\nsome more research for this and share the result soon.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 27 Jul 2022 08:21:53 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang-san,\r\n\r\nHi, I'm also interested in the patch and I have started to review it.\r\nThe following are comments about 0001.\r\n\r\n1. terminology\r\n\r\nIn your patch a new worker, \"apply background worker\", has been introduced,\r\nbut I thought it might be confusing because PostgreSQL already has the worker \"background worker\".\r\nBoth the apply worker and the apply bgworker are categorized as bgworkers. \r\nDo you have any reasons not to use \"apply parallel worker\" or \"apply streaming worker\"?\r\n(Note that I'm not a native English speaker.)\r\n\r\n2. logicalrep_worker_stop()\r\n\r\n```\r\n- /* No worker, nothing to do. */\r\n- if (!worker)\r\n- {\r\n- LWLockRelease(LogicalRepWorkerLock);\r\n- return;\r\n- }\r\n+ if (worker)\r\n+ logicalrep_worker_stop_internal(worker);\r\n+\r\n+ LWLockRelease(LogicalRepWorkerLock);\r\n+}\r\n```\r\n\r\nI thought you could add a comment explaining the meaning of the if-statement, like \"No main apply worker, nothing to do\"\r\n\r\n3. logicalrep_workers_find()\r\n\r\nI thought you could add a description of the difference between this and logicalrep_worker_find() at the top of the function.\r\nIIUC logicalrep_workers_find() counts subworkers, but logicalrep_worker_find() does not consider such workers.\r\n\r\n4. 
logicalrep_worker_detach()\r\n\r\n```\r\nstatic void\r\n logicalrep_worker_detach(void)\r\n {\r\n+ /*\r\n+ * If we are the main apply worker, stop all the apply background workers\r\n+ * we started before.\r\n+ *\r\n```\r\n\r\nI thought \"we are\" should be \"This is\", based on other comments.\r\n\r\n5. applybgworker.c\r\n\r\n```\r\n+/* Apply background workers hash table (initialized on first use) */\r\n+static HTAB *ApplyWorkersHash = NULL;\r\n+static List *ApplyWorkersFreeList = NIL;\r\n+static List *ApplyWorkersList = NIL;\r\n```\r\n\r\nI thought they should be ApplyBgWorkersXXX, because they stores information only related with apply bgworkers.\r\n\r\n6. ApplyBgworkerShared\r\n\r\n```\r\n+ TransactionId stream_xid;\r\n+ uint32 n; /* id of apply background worker */\r\n+} ApplyBgworkerShared;\r\n```\r\n\r\nI thought the field \"n\" is too general, how about \"proc_id\" or \"worker_id\"?\r\n\r\n7. apply_bgworker_wait_for()\r\n\r\n```\r\n+ /* If any workers (or the postmaster) have died, we have failed. */\r\n+ if (status == APPLY_BGWORKER_EXIT)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"background worker %u failed to apply transaction %u\",\r\n+ wstate->shared->n, wstate->shared->stream_xid)))\r\n```\r\n\r\n7.a\r\nI thought we should not mention about PM death here, because in this case\r\napply worker will exit at WaitLatch().\t\r\n\r\n7.b\r\nThe error message should be \"apply background worker %u...\".\r\n\r\n8. apply_bgworker_check_status()\r\n\r\n```\r\n+ errmsg(\"background worker %u exited unexpectedly\",\r\n+ wstate->shared->n)));\r\n```\r\n\r\nThe error message should be \"apply background worker %u...\".\r\n\r\n\r\n9. 
apply_bgworker_send_data()\r\n\r\n```\r\n+ if (result != SHM_MQ_SUCCESS)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"could not send tuples to shared-memory queue\")));\r\n```\r\n\r\nI thought the error message should be \"could not send data to...\"\r\nbecause the sent data might not be tuples. For example, in the case of STREAM PREPARE, I think it does not contain a tuple.\r\n\r\n10. wait_event.h\r\n\r\n```\r\n WAIT_EVENT_HASH_GROW_BUCKETS_REINSERT,\r\n+ WAIT_EVENT_LOGICAL_APPLY_WORKER_STATE_CHANGE,\r\n WAIT_EVENT_LOGICAL_SYNC_DATA,\r\n```\r\n\r\nI thought the event should be WAIT_EVENT_LOGICAL_APPLY_BG_WORKER_STATE_CHANGE,\r\nbecause this is used when the apply worker waits until the status of the bgworker changes. \r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 27 Jul 2022 12:41:43 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang,\r\n\r\nI have some further comments about the test code.\r\n\r\n11. src/test/regress/sql/subscription.sql\r\n\r\n```\r\n-- fail - streaming must be boolean\r\nCREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false, streaming = foo);\r\n```\r\n\r\nThe comment is no longer correct: it should be \"streaming must be boolean or 'parallel'\"\r\n\r\n12. src/test/regress/sql/subscription.sql\r\n\r\n```\r\n-- now it works\r\nCREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false, streaming = true);\r\n```\r\n\r\nI think we should test the case of streaming = 'parallel'.\r\n\r\n13. 015_stream.pl\r\n\r\nI could not find a test for TRUNCATE. IIUC the apply bgworker works well\r\neven if it gets a LOGICAL_REP_MSG_TRUNCATE message from the main worker.\r\nCan you add that case? 
\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 28 Jul 2022 05:20:00 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jul 27, 2022 at 1:27 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, July 27, 2022 1:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jul 27, 2022 at 10:06 AM Amit Kapila <amit.kapila16@gmail.com>\n> > >\n> > > What kind of failure do you have in mind and how it can occur? The one\n> > > way it can fail is if the publisher doesn't have a corresponding\n> > > foreign key on the table because then the publisher could have allowed\n> > > an insert into a table (insert into FK table without having the\n> > > corresponding key in PK table) which may not be allowed on the\n> > > subscriber. However, I don't see any check that could prevent this\n> > > because for this we need to compare the FK list for a table from the\n> > > publisher with the corresponding one on the subscriber. I am not\n> > > really sure if due to the risk of such conflicts we should block\n> > > parallelism of transactions operating on tables with FK because those\n> > > conflicts can occur even without parallelism, it is just a matter of\n> > > timing. But, I could be missing something due to which the above check\n> > > can be useful?\n> >\n> > Actually, my question starts with this check[1][2], from this it\n> > appears that if this relation is having a foreign key then we are\n> > marking it parallel unsafe[2] and later in [1] while the worker is\n> > applying changes for that relation and if it was marked parallel\n> > unsafe then we are throwing error. So my question was why we are\n> > putting this restriction? 
Although this error is only talking about\n> > unique and non-immutable functions, it also gives an error if the\n> > target table has a foreign key. So my question was, do we really need\n> > to restrict this? I mean, why are we restricting this case?\n> >\n>\n> Hi,\n>\n> I think the foreign key check is used to prevent the apply worker from waiting\n> indefinitely, which is caused by a foreign key difference between publisher and\n> subscriber, like the following example:\n>\n> -------------------------------------\n> Publisher:\n> -- both tables are published\n> CREATE TABLE PKTABLE ( ptest1 int);\n> CREATE TABLE FKTABLE ( ftest1 int);\n>\n> -- initial data\n> INSERT INTO PKTABLE VALUES(1);\n>\n> Subscriber:\n> CREATE TABLE PKTABLE ( ptest1 int PRIMARY KEY);\n> CREATE TABLE FKTABLE ( ftest1 int REFERENCES PKTABLE);\n>\n> -- Execute the following transactions on the publisher\n>\n> Tx1:\n> INSERT ... -- make enough changes to start streaming mode\n> DELETE FROM PKTABLE;\n> Tx2:\n> INSERT INTO FKTABLE VALUES(1);\n> COMMIT;\n> COMMIT;\n> -------------------------------------\n>\n> The subscriber's apply worker will wait indefinitely, because the main apply worker is\n> waiting for the streaming transaction to finish, which is in another apply\n> bgworker.\n>\n\nIIUC, here the problem will be that TX2 (Insert into the FK table) performed\nby the apply worker will wait for a parallel worker doing streaming\ntransaction TX1, which has performed a Delete from the PK table. This wait is\nrequired because we can't decide if the Insert will be successful or not\nuntil TX1 is either committed or rolled back. This is similar to the\nproblem related to primary/unique keys mentioned earlier [1]. 
If so,\nthen we should try to forbid this in some way to avoid subscribers\nfrom getting stuck.\n\nDilip, does this reason sound sufficient to you for such a check, or\ndo you still think we don't need any check for FKs?\n\n>\n> BTW, I think the foreign key won't take effect in subscriber's apply worker by\n> default. Because we set session_replication_role to 'replica' in apply worker\n> which prevent the FK trigger function to be executed(only the trigger with\n> FIRES_ON_REPLICA flag will be executed in this mode). User can only alter the\n> trigger to enable it on replica mode to make the foreign key work. So, ISTM, we\n> won't hit this ERROR frequently.\n>\n> And based on this, another comment about the patch is that it seems unnecessary\n> to directly check the FK returned by RelationGetFKeyList. Checking the actual FK\n> trigger function seems enough.\n>\n\nThat is correct. I think it would have been better if we could detect\nthat the publisher doesn't have the FK but the subscriber has it, as this can\noccur only in that scenario. 
If that requires us to send more\ninformation from the publisher, we can leave it for now (as this\ndoesn't seem to be a frequent scenario) and keep a simpler check based\non subscriber schema.\n\nI think we should add a test as mentioned by you above so that if\ntomorrow one tries to remove the FK check, we have a way to know.\nAlso, please add comments and tests for additional checks related to\nconstraints in the patch.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JwahU_WuP3S%2B7POqta%3DPhm_3gxZeVmJuuoUq1NV%3DkrXA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 28 Jul 2022 19:02:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, July 27, 2022 4:22 PM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Tuesday, July 26, 2022 5:34 PM Dilip Kumar <dilipbalaut@gmail.com>\r\n> wrote:\r\n> \r\n> > 3.\r\n> > Why are we restricting parallel apply workers only for the streamed\r\n> > transactions, because streaming depends upon the size of the logical\r\n> > decoding work mem so making steaming and parallel apply tightly\r\n> > coupled seems too restrictive to me. Do we see some obvious problems\r\n> > in applying other transactions in parallel?\r\n> \r\n> We thought there could be some conflict failure and deadlock if we parallel\r\n> apply normal transaction which need transaction dependency check[1]. 
But I\r\n> will do some more research for this and share the result soon.\r\n\r\nAfter thinking about this, I confirmed that it would be easy to cause deadlock\r\nerrors if we don't have additional dependency analysis and COMMIT-order-preserving\r\nhandling for parallel apply of normal transactions.\r\n\r\nThe basic idea for parallel apply of normal transactions in the first\r\nversion is that the main apply worker will receive data from the publisher and pass it\r\nto an apply bgworker without applying it itself. Only before the apply\r\nbgworker applies the final COMMIT command does it need to wait for all previous\r\ntransactions to finish, to preserve the commit order. That means we could pass the\r\nnext transaction's data to another apply bgworker before the previous\r\ntransaction is committed in the first apply bgworker.\r\n\r\nIn this approach, we have to do the dependency analysis because it's easy to\r\ncause deadlock errors when applying DMLs in parallel (see the attachment for\r\nexamples where the deadlock could happen). 
So, it's a bit different from\r\nstreaming transactions.\r\n\r\nWe could apply the next transaction only after the first transaction is\r\ncommitted, in which approach we don't need the dependency analysis, but it would\r\nnot bring noticeable performance improvement even if we start several apply\r\nworkers to do that, because the actual DMLs are not performed in parallel.\r\n\r\nBased on the above, we plan to first introduce the patch to perform streaming\r\nlogical transactions by background workers, and then introduce parallel apply of\r\nnormal transactions, whose design is different and needs some additional handling.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n> [1]\r\n> https://www.postgresql.org/message-id/CAA4eK1%2BwyN6zpaHUkCLorEW\r\n> Nx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com", "msg_date": "Tue, 2 Aug 2022 11:46:21 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Aug 2, 2022 at 5:16 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, July 27, 2022 4:22 PM houzj.fnst@fujitsu.com wrote:\n> >\n> > On Tuesday, July 26, 2022 5:34 PM Dilip Kumar <dilipbalaut@gmail.com>\n> > wrote:\n> >\n> > > 3.\n> > > Why are we restricting parallel apply workers only for the streamed\n> > > transactions, because streaming depends upon the size of the logical\n> > > decoding work mem so making steaming and parallel apply tightly\n> > > coupled seems too restrictive to me. Do we see some obvious problems\n> > > in applying other transactions in parallel?\n> >\n> > We thought there could be some conflict failure and deadlock if we parallel\n> > apply normal transaction which need transaction dependency check[1]. 
But I\n> > will do some more research for this and share the result soon.\n>\n> After thinking about this, I confirmed that it would be easy to cause deadlock\n> error if we don't have additional dependency analysis and COMMIT order preserve\n> handling for parallel apply normal transaction.\n>\n> Because the basic idea to parallel apply normal transaction in the first\n> version is that: the main apply worker will receive data from pub and pass them\n> to apply bgworker without applying by itself. And only before the apply\n> bgworker apply the final COMMIT command, it need to wait for any previous\n> transaction to finish to preserve the commit order. It means we could pass the\n> next transaction's data to another apply bgworker before the previous\n> transaction is committed in the first apply bgworker.\n>\n> In this approach, we have to do the dependency analysis because it's easy to\n> cause dead lock error when applying DMLs in parallel(See the attachment for the\n> examples where the dead lock could happen). 
So, it's a bit different from\n> streaming transaction.\n>\n> We could apply the next transaction only after the first transaction is\n> committed in which approach we don't need the dependency analysis, but it would\n> not bring noticeable performance improvement even if we start serval apply\n> workers to do that because the actual DMLs are not performed in parallel.\n>\n\nI agree that for short transactions it may not bring noticeable\nperformance improvement but somewhat larger transactions could still\nbenefit from parallelism where we won't start to operate on new\ntransactions without waiting for the previous transaction's commit.\nHaving said that, I think we can enable parallelism for non-streaming\ntransactions as a separate patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Aug 2022 17:35:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thurs, Jul 28, 2022 at 21:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments and opinions.\r\n\r\n> On Wed, Jul 27, 2022 at 1:27 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > BTW, I think the foreign key won't take effect in subscriber's apply worker by\r\n> > default. Because we set session_replication_role to 'replica' in apply worker\r\n> > which prevent the FK trigger function to be executed(only the trigger with\r\n> > FIRES_ON_REPLICA flag will be executed in this mode). User can only alter the\r\n> > trigger to enable it on replica mode to make the foreign key work. So, ISTM,\r\n> we\r\n> > won't hit this ERROR frequently.\r\n> >\r\n> > And based on this, another comment about the patch is that it seems\r\n> unnecessary\r\n> > to directly check the FK returned by RelationGetFKeyList. 
Checking the actual\r\n> FK\r\n> > trigger function seems enough.\r\n> >\r\n> \r\n> That is correct. I think it would have been better if we can detect\r\n> that publisher doesn't have FK but the subscriber has FK as it can\r\n> occur only in that scenario. If that requires us to send more\r\n> information from the publisher, we can leave it for now (as this\r\n> doesn't seem to be a frequent scenario) and keep a simpler check based\r\n> on subscriber schema.\r\n> \r\n> I think we should add a test as mentioned by you above so that if\r\n> tomorrow one tries to remove the FK check, we have a way to know.\r\n> Also, please add comments and tests for additional checks related to\r\n> constraints in the patch.\r\n> \r\n> [1] - https://www.postgresql.org/message-\r\n> id/CAA4eK1JwahU_WuP3S%2B7POqta%3DPhm_3gxZeVmJuuoUq1NV%3DkrXA\r\n> %40mail.gmail.com\r\n\r\nI added some test cases that cause indefinite waits without the additional checks\r\nrelated to constraints (please see the file 032_streaming_apply.pl in the 0003 patch).\r\nI also added some comments explaining the FK check and why we need these checks.\r\n\r\nIn addition, I found another two scenarios that could cause infinite waits, so\r\nI made the following changes:\r\n 1. I check the default values for the columns that exist only on the subscriber side.\r\n (Previous versions only checked columns that existed on both\r\n the publisher side and the subscriber side.)\r\n 2. When using an apply background worker, the check needs to be performed not\r\n only in the apply background worker, but also in the main apply worker.\r\n\r\nI also did some other improvements based on the suggestions posted in this\r\nthread. 
Attach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 4 Aug 2022 06:35:45 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thurs, Jul 28, 2022 at 13:20 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n>\r\n> Dear Wang-san,\r\n> \r\n> Hi, I'm also interested in the patch and I started to review this.\r\n> Followings are comments about 0001.\r\n\r\nThanks for your kindly review and comments.\r\nTo avoid making this thread too long, I will reply to all of your comments\r\n(#1~#13) in this email.\r\n\r\n> 1. terminology\r\n> \r\n> In your patch a new worker \"apply background worker\" has been introduced,\r\n> but I thought it might be confused because PostgreSQL has already the worker\r\n> \"background worker\".\r\n> Both of apply worker and apply bworker are categolized as bgworker.\r\n> Do you have any reasons not to use \"apply parallel worker\" or \"apply streaming\r\n> worker\"?\r\n> (Note that I'm not native English speaker)\r\n\r\nSince we will later consider applying non-streamed transactions in parallel, I\r\nthink \"apply streaming worker\" might not be very suitable. I think PostgreSQL\r\nalso has the worker \"parallel worker\", so for \"apply parallel worker\" and\r\n\"apply background worker\", I feel that \"apply background worker\" will make the\r\nrelationship between workers more clear. (\"[main] apply worker\" and \"apply\r\nbackground worker\")\r\n\r\n> 2. logicalrep_worker_stop()\r\n> \r\n> ```\r\n> - /* No worker, nothing to do. 
*/\r\n> - if (!worker)\r\n> - {\r\n> - LWLockRelease(LogicalRepWorkerLock);\r\n> - return;\r\n> - }\r\n> + if (worker)\r\n> + logicalrep_worker_stop_internal(worker);\r\n> +\r\n> + LWLockRelease(LogicalRepWorkerLock);\r\n> +}\r\n> ```\r\n> \r\n> I thought you could add a comment the meaning of if-statement, like \"No main\r\n> apply worker, nothing to do\"\r\n\r\nSince the processing in the if statement is reversed from before, I added the\r\nfollowing comment based on your suggestion:\r\n```\r\nFound the main worker, then try to stop it.\r\n```\r\n\r\n> 3. logicalrep_workers_find()\r\n> \r\n> I thought you could add a description about difference between this and\r\n> logicalrep_worker_find() at the top of the function.\r\n> IIUC logicalrep_workers_find() counts subworker, but logicalrep_worker_find()\r\n> does not focus such type of workers.\r\n\r\nI think it is fine to keep the comment because the comment says \"returns list\r\nof *all workers* for the subscription\".\r\nAlso, we have added the comment \"We are only interested in the main apply\r\nworker or table sync worker here\" in the function logicalrep_worker_find.\r\n\r\n> 5. applybgworker.c\r\n> \r\n> ```\r\n> +/* Apply background workers hash table (initialized on first use) */\r\n> +static HTAB *ApplyWorkersHash = NULL;\r\n> +static List *ApplyWorkersFreeList = NIL;\r\n> +static List *ApplyWorkersList = NIL;\r\n> ```\r\n> \r\n> I thought they should be ApplyBgWorkersXXX, because they stores information\r\n> only related with apply bgworkers.\r\n\r\nI improved them to ApplyBgworkersXXX just for the consistency with other names.\r\n\r\n> 6. ApplyBgworkerShared\r\n> \r\n> ```\r\n> + TransactionId stream_xid;\r\n> + uint32 n; /* id of apply background worker */\r\n> +} ApplyBgworkerShared;\r\n> ```\r\n> \r\n> I thought the field \"n\" is too general, how about \"proc_id\" or \"worker_id\"?\r\n\r\nI think \"worker_id\" seems better, so I improved \"n\" to \"worker_id\".\r\n\r\n> 10. 
wait_event.h\r\n> \r\n> ```\r\n> WAIT_EVENT_HASH_GROW_BUCKETS_REINSERT,\r\n> + WAIT_EVENT_LOGICAL_APPLY_WORKER_STATE_CHANGE,\r\n> WAIT_EVENT_LOGICAL_SYNC_DATA,\r\n> ```\r\n> \r\n> I thought the event should be\r\n> WAIT_EVENT_LOGICAL_APPLY_BG_WORKER_STATE_CHANGE,\r\n> because this is used when apply worker waits until the status of bgworker\r\n> changes.\r\n\r\nI improved them to \"WAIT_EVENT_LOGICAL_APPLY_BGWORKER_STATE_CHANGE\" just for\r\nthe consistency with other names.\r\n\r\n> 13. 015_stream.pl\r\n> \r\n> I could not find test about TRUNCATE. IIUC apply bgworker works well\r\n> even if it gets LOGICAL_REP_MSG_TRUNCATE message from main worker.\r\n> Can you add the case?\r\n\r\nI modified the test cases in \"032_streaming_apply.pl\" this time, the use case\r\nyou mentioned is covered now.\r\n\r\nThe rest of the comments are improved as suggested.\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275D64BE7726B0221B15F389E9F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 4 Aug 2022 06:37:42 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jul 27, 2022 at 16:03 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for the patch v19-0004:\r\n\r\nThanks for your kindly review and comments.\r\nTo avoid making this thread too long, I will reply to all of your comments\r\n(0001-patch ~ 0004-patch) in this email.\r\nIn addition, in order not to confuse the replies, I added the following serial\r\nnumber above your comments on 0004-patch:\r\n```\r\n4.2 && 4.3\r\n4.4\r\n4.5\r\n```\r\n\r\n> 1.6 src/backend/replication/logical/applybgworker.c - LogicalApplyBgwLoop\r\n> \r\n> +/* Apply Background Worker main loop */\r\n> +static void\r\n> 
+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared\r\n> *shared)\r\n> \r\n> 'shared' seems a very vague param name. Maybe can be 'bgw_shared' or\r\n> 'parallel_shared' or something better?\r\n> \r\n> ~~~\r\n> \r\n> 1.7 src/backend/replication/logical/applybgworker.c - ApplyBgworkerMain\r\n> \r\n> +/*\r\n> + * Apply Background Worker entry point\r\n> + */\r\n> +void\r\n> +ApplyBgworkerMain(Datum main_arg)\r\n> +{\r\n> + volatile ApplyBgworkerShared *shared;\r\n> \r\n> 'shared' seems a very vague var name. Maybe can be 'bgw_shared' or\r\n> 'parallel_shared' or something better?\r\n> \r\n> ~~~\r\n> \r\n> 1.8 src/backend/replication/logical/applybgworker.c -\r\n> apply_bgworker_setup_dsm\r\n> \r\n> +static void\r\n> +apply_bgworker_setup_dsm(ApplyBgworkerState *wstate)\r\n> +{\r\n> + shm_toc_estimator e;\r\n> + Size segsize;\r\n> + dsm_segment *seg;\r\n> + shm_toc *toc;\r\n> + ApplyBgworkerShared *shared;\r\n> + shm_mq *mq;\r\n> \r\n> 'shared' seems a very vague var name. Maybe can be 'bgw_shared' or\r\n> 'parallel_shared' or something better?\r\n> \r\n> ~~~\r\n\r\nNot sure about this.\r\n\r\n> 3.3 .../replication/logical/applybgworker.c\r\n> \r\n> @@ -800,3 +800,47 @@ apply_bgworker_subxact_info_add(TransactionId\r\n> current_xid)\r\n> MemoryContextSwitchTo(oldctx);\r\n> }\r\n> }\r\n> +\r\n> +/*\r\n> + * Check if changes on this relation can be applied by an apply background\r\n> + * worker.\r\n> + *\r\n> + * Although the commit order is maintained only allowing one process to\r\n> commit\r\n> + * at a time, the access order to the relation has changed. 
This could cause
> + * unexpected problems if the unique column on the replicated table is
> + * inconsistent with the publisher-side or contains non-immutable functions
> + * when applying transactions in the apply background worker.
> + */
> +void
> +apply_bgworker_relation_check(LogicalRepRelMapEntry *rel)
> 
> "only allowing" -> "by only allowing" (I think you mean this, right?)

Since I'm not a native English speaker, I'm not quite sure which of the two
descriptions you suggested is better. See #3.4 in [1]. Now I overwrite your
last suggestion with your suggestion this time.

> 3.4
> 
> + /*
> + * Return if changes on this relation can be applied by an apply background
> + * worker.
> + */
> + if (rel->parallel_apply == PARALLEL_APPLY_SAFE)
> + return;
> +
> + /* We are in error mode and should give user correct error. */
> + ereport(ERROR,
> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("cannot replicate target relation \"%s.%s\" using "
> + "subscription parameter streaming=parallel",
> + rel->remoterel.nspname, rel->remoterel.relname),
> + errdetail("The unique column on subscriber is not the unique "
> + "column on publisher or there is at least one "
> + "non-immutable function."),
> + errhint("Please change to use subscription parameter "
> + "streaming=on.")));
> 
> 3.4a.
> Of course, the code should give the user the "correct error" if there
> is an error (!), but having a comment explicitly saying so does not
> serve any purpose.
> 
> 3.4b.
> The logic might be simplified if it was written differently like:
> 
> + if (rel->parallel_apply != PARALLEL_APPLY_SAFE)
> + ereport(ERROR, ...

Just to keep the style consistent with the function
apply_bgworker_relation_check.

> 3.8
> 
> + /* Initialize the flag. */
> + entry->parallel_apply = PARALLEL_APPLY_SAFE;
> 
> I previously suggested [1] (#3.6b) to move this. Consider, that you
> cannot logically flag the entry as "safe" until you are certain that
> it is safe. And you cannot be sure of that until you've passed all the
> checks this function is doing. Therefore IMO the assignment to
> PARALLEL_APPLY_SAFE should be the last line of the function.

Not sure about this.

> 3.11 src/backend/utils/cache/typcache.c - GetDomainConstraints
> 
> @@ -2540,6 +2540,23 @@ compare_values_of_enum(TypeCacheEntry *tcache,
> Oid arg1, Oid arg2)
> return 0;
> }
> 
> +/*
> + * GetDomainConstraints --- get DomainConstraintState list of
> specified domain type
> + */
> +List *
> +GetDomainConstraints(Oid type_id)
> +{
> + TypeCacheEntry *typentry;
> + List *constraints = NIL;
> +
> + typentry = lookup_type_cache(type_id,
> TYPECACHE_DOMAIN_CONSTR_INFO);
> +
> + if(typentry->domainData != NULL)
> + constraints = typentry->domainData->constraints;
> +
> + return constraints;
> +}
> 
> This function can be simplified (if you want). e.g.
> 
> List *
> GetDomainConstraints(Oid type_id)
> {
> TypeCacheEntry *typentry;
> 
> typentry = lookup_type_cache(type_id, TYPECACHE_DOMAIN_CONSTR_INFO);
> 
> return typentry->domainData ? typentry->domainData->constraints : NIL;
> }

I just think the former one looks clearer.

4.2 && 4.3
> 2. src/backend/replication/logical/worker.c - start_table_sync
> 
> @@ -3902,20 +3925,28 @@ start_table_sync(XLogRecPtr *origin_startpos,
> char **myslotname)
> }
> PG_CATCH();
> {
> + /*
> + * Emit the error message, and recover from the error state to an idle
> + * state
> + */
> + HOLD_INTERRUPTS();
> +
> + EmitErrorReport();
> + AbortOutOfAnyTransaction();
> + FlushErrorState();
> +
> + RESUME_INTERRUPTS();
> +
> + /* Report the worker failed during table synchronization */
> + pgstat_report_subscription_error(MySubscription->oid, false);
> +
> + /* Set the retry flag. */
> + set_subscription_retry(true);
> +
> if (MySubscription->disableonerr)
> DisableSubscriptionAndExit();
> - else
> - {
> - /*
> - * Report the worker failed during table synchronization. Abort
> - * the current transaction so that the stats message is sent in an
> - * idle state.
> - */
> - AbortOutOfAnyTransaction();
> - pgstat_report_subscription_error(MySubscription->oid, false);
> 
> - PG_RE_THROW();
> - }
> + proc_exit(0);
> }
> 
> But is it correct to set the 'retry' flag even if the
> MySubscription->disableonerr is true? Won’t that mean even after the
> user corrects the problem and then re-enabled the subscription it
> still won't let the streaming=parallel work, because that retry flag
> is set?
> 
> Also, Something seems wrong to me here - IIUC the patch changed this
> code because of the potential risk of an error within the
> set_subscription_retry function, but now if such an error happens the
> current code will bypass even getting to DisableSubscriptionAndExit,
> so the subscription won't have a chance to get disabled as the user
> might have wanted.
> 3. src/backend/replication/logical/worker.c - start_apply
> 
> @@ -3940,20 +3971,27 @@ start_apply(XLogRecPtr origin_startpos)
> }
> PG_CATCH();
> {
> + /*
> + * Emit the error message, and recover from the error state to an idle
> + * state
> + */
> + HOLD_INTERRUPTS();
> +
> + EmitErrorReport();
> + AbortOutOfAnyTransaction();
> + FlushErrorState();
> +
> + RESUME_INTERRUPTS();
> +
> + /* Report the worker failed while applying changes */
> + pgstat_report_subscription_error(MySubscription->oid,
> + !am_tablesync_worker());
> +
> + /* Set the retry flag. */
> + set_subscription_retry(true);
> +
> if (MySubscription->disableonerr)
> DisableSubscriptionAndExit();
> - else
> - {
> - /*
> - * Report the worker failed while applying changes. Abort the
> - * current transaction so that the stats message is sent in an
> - * idle state.
> - */
> - AbortOutOfAnyTransaction();
> - pgstat_report_subscription_error(MySubscription-
> >oid, !am_tablesync_worker());
> -
> - PG_RE_THROW();
> - }
> }
> 
> (Same as previous review comment #2)
> 
> But is it correct to set the 'retry' flag even if the
> MySubscription->disableonerr is true? Won’t that mean even after the
> user corrects the problem and then re-enabled the subscription it
> still won't let the streaming=parallel work, because that retry flag
> is set?
> 
> Also, Something seems wrong to me here - IIUC the patch changed this
> code because of the potential risk of an error within the
> set_subscription_retry function, but now if such an error happens the
> current code will bypass even getting to DisableSubscriptionAndExit,
> so the subscription won't have a chance to get disabled as the user
> might have wanted.

=>4.2.a
=>4.3.a
I think this is the expected behavior.

=>4.2.b
=>4.3.b
Fixed this point. (Invoke function set_subscription_retry after handling the
"disableonerr" parameter.)

4.4
> 4. src/backend/replication/logical/worker.c - DisableSubscriptionAndExit
> 
> /*
> - * After error recovery, disable the subscription in a new transaction
> - * and exit cleanly.
> + * Disable the subscription in a new transaction.
> */
> static void
> DisableSubscriptionAndExit(void)
> {
> - /*
> - * Emit the error message, and recover from the error state to an idle
> - * state
> - */
> - HOLD_INTERRUPTS();
> -
> - EmitErrorReport();
> - AbortOutOfAnyTransaction();
> - FlushErrorState();
> -
> - RESUME_INTERRUPTS();
> -
> - /* Report the worker failed during either table synchronization or apply */
> - pgstat_report_subscription_error(MyLogicalRepWorker->subid,
> - !am_tablesync_worker());
> -
> /* Disable the subscription */
> StartTransactionCommand();
> DisableSubscription(MySubscription->oid);
> @@ -4231,8 +4252,6 @@ DisableSubscriptionAndExit(void)
> ereport(LOG,
> errmsg("logical replication subscription \"%s\" has been disabled
> due to an error",
> MySubscription->name));
> -
> - proc_exit(0);
> }
> 
> 4a.
> Hmm, I think it is a bad idea to remove the "exiting" code from the
> function but still leave the function name the same as before saying
> "AndExit".
> 
> 4b.
> Also, now the patch is unconditionally doing proc_exit(0) in the
> calling code where previously it would do PG_RE_THROW. So it's a
> subtle difference from the path the code used to take for worker
> errors..

=>4.a
Fixed as suggested.

=>4.b
I think function PG_RE_THROW will try to report the error and go away (see
function StartBackgroundWorker). So I think that since the error has been
reported at the beginning, it is fine to invoke function proc_exit to go away
at the end.

4.5
> 5. src/backend/replication/logical/worker.c - set_subscription_retry
> 
> @@ -4467,3 +4486,63 @@ reset_apply_error_context_info(void)
> apply_error_callback_arg.remote_attnum = -1;
> set_apply_error_context_xact(InvalidTransactionId, InvalidXLogRecPtr);
> }
> +
> +/*
> + * Set subretry of pg_subscription catalog.
> + *
> + * If retry is true, subscriber is about to exit with an error. Otherwise, it
> + * means that the transaction was applied successfully.
> + */
> +static void
> +set_subscription_retry(bool retry)
> +{
> + Relation rel;
> + HeapTuple tup;
> + bool started_tx = false;
> + bool nulls[Natts_pg_subscription];
> + bool replaces[Natts_pg_subscription];
> + Datum values[Natts_pg_subscription];
> +
> + if (MySubscription->retry == retry ||
> + am_apply_bgworker())
> + return;
> 
> Currently, I think this new 'subretry' field is only used to decide
> whether a retry can use an apply background worker or not. I think all
> this logic is *only* used when streaming=parallel. But AFAICT the
> logic for setting/clearing the retry flag is executed *always*
> regardless of the streaming mode.
> 
> So for all the times when the user did not ask for streaming=parallel
> doesn't this just cause unnecessary overhead for every transaction?

I think it is fine. Because for one transaction, only the first time the
transaction is applied with failure and the first time it is successfully
retried, the catalog pg_subscription will be really modified.

The rest of the comments are improved as suggested.
The new patches were attached in [2].

[1] - https://www.postgresql.org/message-id/CAHut%2BPtRNAOwFtBp_TnDWdC7UpcTxPJzQnrm%3DNytN7cVBt5zRQ%40mail.gmail.com
[2] - https://www.postgresql.org/message-id/OS3PR01MB6275D64BE7726B0221B15F389E9F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com

Regards,
Wang wei
", "msg_date": "Thu, 4 Aug 2022 06:38:23 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and
 parallel apply" }, { "msg_contents": "On Mon, Jul 25, 2022 at 21:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> Few comments on 0001:
> ======================

Thanks for your comments.

> 1.
> - <structfield>substream</structfield> <type>bool</type>
> + <structfield>substream</structfield> <type>char</type>
> </para>
> <para>
> - If true, the subscription will allow streaming of in-progress
> - transactions
> + Controls how to handle the streaming of in-progress transactions:
> + <literal>f</literal> = disallow streaming of in-progress transactions,
> + <literal>t</literal> = spill the changes of in-progress transactions to
> + disk and apply at once after the transaction is committed on the
> + publisher,
> + <literal>p</literal> = apply changes directly using a background worker
> 
> Shouldn't the description of 'p' be something like: apply changes
> directly using a background worker, if available, otherwise, it
> behaves the same as 't'

Improved as suggested.

> 2.
> Note that if an error happens when
> + applying changes in a background worker, the finish LSN of the
> + remote 
transaction might not be reported in the server log.
> 
> Is there any case where finish LSN can be reported when applying via
> background worker, if not, then we should use 'won't' instead of
> 'might not'?

Yes, I think the use case you mentioned exists. (The finish LSN can be reported
when applying via background worker). So I do not change this.
For example, in the function apply_handle_stream_commit, if an error occurs
after invoking the function set_apply_error_context_xact, I think the error
message will contain the finish LSN.

> 3.
> +#define PG_LOGICAL_APPLY_SHM_MAGIC 0x79fb2447 // TODO Consider
> change
> 
> It is better to change this as the same magic number is used by
> PG_TEST_SHM_MQ_MAGIC

Improved as suggested. I changed it to a random magic number (0x787ca067) that
doesn't duplicate in the HEAD.

> 4.
> + /* Ignore statistics fields that have been updated. */
> + s.cursor += IGNORE_SIZE_IN_MESSAGE;
> 
> Can we change the comment to: "Ignore statistics fields that have been
> updated by the main apply worker."? Will it be better to name the
> define as "SIZE_STATS_MESSAGE"?

Improved the comments and the macro name as suggested.

> 5.
> +/* Apply Background Worker main loop */
> +static void
> +LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared
> *shared)
> {
> ...
> ...
> 
> + apply_dispatch(&s);
> +
> + if (ConfigReloadPending)
> + {
> + ConfigReloadPending = false;
> + ProcessConfigFile(PGC_SIGHUP);
> + }
> +
> + MemoryContextSwitchTo(oldctx);
> + MemoryContextReset(ApplyMessageContext);
> 
> We should not process the config file under ApplyMessageContext. You
> should switch context before processing the config file. See other
> similar usages in the code.

Fixed as suggested.
In addition, the apply bgworker misses switching "CurrentMemoryContext" back to
oldctx when it receives a "STOP" message. This will make oldctx lose track of
"TopMemoryContext". Fixed this by invoking `MemoryContextSwitchTo(oldctx);`
when processing the "STOP" message.

> 6.
> +/* Apply Background Worker main loop */
> +static void
> +LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared
> *shared)
> {
> ...
> ...
> + MemoryContextSwitchTo(oldctx);
> + MemoryContextReset(ApplyMessageContext);
> + }
> +
> + MemoryContextSwitchTo(TopMemoryContext);
> + MemoryContextReset(ApplyContext);
> ...
> }
> 
> I don't see the need to reset ApplyContext here as we don't do
> anything in that context here.

Improved as suggested.
Removed the invocation of function MemoryContextReset(ApplyContext).

The new patches were attached in [1].

[1] - https://www.postgresql.org/message-id/OS3PR01MB6275D64BE7726B0221B15F389E9F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com

Regards,
Wang wei
", "msg_date": "Thu, 4 Aug 2022 06:40:05 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and
 parallel apply" }, { "msg_contents": "On Thu, Aug 4, 2022 2:36 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:
> 
> I also did some other improvements based on the suggestions posted in this
> thread. Attach the new patches.
> 

Thanks for updating the patch. Here are some comments on v20-0001 patch.

1.
+typedef struct ApplyBgworkerShared
+{
+	slock_t	mutex;
+
+	/* Status of apply background worker. */
+	ApplyBgworkerStatus	status;
+
+	/* proto version of publisher. 
*/
+	uint32	proto_version;
+
+	TransactionId	stream_xid;
+
+	/* id of apply background worker */
+	uint32	worker_id;
+} ApplyBgworkerShared;

Would it be better to modify the comment of "proto_version" to "Logical protocol
version"?

2. comment of handle_streamed_transaction()

+ * Exception: When the main apply worker is applying streaming transactions in
+ * parallel mode (e.g. when addressing LOGICAL_REP_MSG_RELATION or
+ * LOGICAL_REP_MSG_TYPE changes), then return false.

This comment doesn't look very clear, could we change it to:

Exception: In SUBSTREAM_PARALLEL mode, if the message type is
LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE, return false even if this is
the main apply worker.

3.
+/*
+ * There are three fields in message: start_lsn, end_lsn and send_time. Because
+ * we have updated these statistics in apply worker, we could ignore these
+ * fields in apply background worker. (see function LogicalRepApplyLoop)
+ */
+#define SIZE_STATS_MESSAGE (3 * sizeof(uint64))

updated these statistics in apply worker
->
updated these statistics in main apply worker

4.
+static void
+LogicalApplyBgwLoop(shm_mq_handle *mqh, volatile ApplyBgworkerShared *shared)
+{
+	shm_mq_result shmq_res;
+	PGPROC	 *registrant;
+	ErrorContextCallback errcallback;

I think we can define "shmq_res" in the for loop.

5.
+		/*
+		 * We use first byte of message for additional communication between
+		 * main Logical replication worker and apply background workers, so if
+		 * it differs from 'w', then process it first.
+		 */

between main Logical replication worker and apply background workers
->
between main apply worker and apply background workers

Regards,
Shi yu

", "msg_date": "Fri, 5 Aug 2022 10:00:31 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and
 parallel apply" }, { "msg_contents": "On Tue, Aug 2, 2022 at 5:16 PM houzj.fnst@fujitsu.com
<houzj.fnst@fujitsu.com> wrote:
>
> On Wednesday, July 27, 2022 4:22 PM houzj.fnst@fujitsu.com wrote:
> >
> > On Tuesday, July 26, 2022 5:34 PM Dilip Kumar <dilipbalaut@gmail.com>
> > wrote:
> >
> > > 3.
> > > Why are we restricting parallel apply workers only for the streamed
> > > transactions, because streaming depends upon the size of the logical
> > > decoding work mem so making steaming and parallel apply tightly
> > > coupled seems too restrictive to me. Do we see some obvious problems
> > > in applying other transactions in parallel?
> >
> > We thought there could be some conflict failure and deadlock if we parallel
> > apply normal transaction which need transaction dependency check[1]. 
But I\n> > will do some more research for this and share the result soon.\n>\n> After thinking about this, I confirmed that it would be easy to cause deadlock\n> error if we don't have additional dependency analysis and COMMIT order preserve\n> handling for parallel apply normal transaction.\n>\n> Because the basic idea to parallel apply normal transaction in the first\n> version is that: the main apply worker will receive data from pub and pass them\n> to apply bgworker without applying by itself. And only before the apply\n> bgworker apply the final COMMIT command, it need to wait for any previous\n> transaction to finish to preserve the commit order. It means we could pass the\n> next transaction's data to another apply bgworker before the previous\n> transaction is committed in the first apply bgworker.\n>\n> In this approach, we have to do the dependency analysis because it's easy to\n> cause dead lock error when applying DMLs in parallel(See the attachment for the\n> examples where the dead lock could happen). So, it's a bit different from\n> streaming transaction.\n>\n> We could apply the next transaction only after the first transaction is\n> committed in which approach we don't need the dependency analysis, but it would\n> not bring noticeable performance improvement even if we start serval apply\n> workers to do that because the actual DMLs are not performed in parallel.\n>\n> Based on above, we plan to first introduce the patch to perform streaming\n> logical transactions by background workers, and then introduce parallel apply\n> normal transaction which design is different and need some additional handling.\n\nYeah I think that makes sense. 
Since the streamed transactions are\nsent to standby interleaved so we can take advantage of parallelism\nand along with that we can also avoid the I/O so that will also\nspeedup.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Aug 2022 10:18:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 8, 2022 at 10:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > Based on above, we plan to first introduce the patch to perform streaming\n> > logical transactions by background workers, and then introduce parallel apply\n> > normal transaction which design is different and need some additional handling.\n>\n> Yeah I think that makes sense. Since the streamed transactions are\n> sent to standby interleaved so we can take advantage of parallelism\n> and along with that we can also avoid the I/O so that will also\n> speedup.\n\nSome review comments on the latest version of the patch.\n\n1.\n+/* Queue size of DSM, 16 MB for now. */\n+#define DSM_QUEUE_SIZE 160000000\n\nWhy don't we directly use 16 *1024 * 1024, that would be exactly 16 MB\nso it will match with comments and also it would be more readable.\n\n2.\n+/*\n+ * There are three fields in message: start_lsn, end_lsn and send_time. Because\n+ * we have updated these statistics in apply worker, we could ignore these\n+ * fields in apply background worker. 
(see function LogicalRepApplyLoop)\n+ */\n+#define SIZE_STATS_MESSAGE (3 * sizeof(uint64))\n\nInstead of assuming you have 3 uint64 why don't directly add 2 *\nsizeof(XLogRecPtr) + sizeof(TimestampTz) so that if this data type\never changes\nwe don't need to track that we will have to change this as well.\n\n3.\n+/*\n+ * Entry for a hash table we use to map from xid to our apply background worker\n+ * state.\n+ */\n+typedef struct ApplyBgworkerEntry\n+{\n+ TransactionId xid;\n+ ApplyBgworkerState *wstate;\n+} ApplyBgworkerEntry;\n\nMention in the comment of the structure or for the member that xid is\nthe key of the hash. Refer to other such structures for the\nreference.\n\nI am doing a more detailed review but this is what I got so far.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Aug 2022 11:41:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 8, 2022 at 11:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 10:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > Based on above, we plan to first introduce the patch to perform streaming\n> > > logical transactions by background workers, and then introduce parallel apply\n> > > normal transaction which design is different and need some additional handling.\n> >\n> > Yeah I think that makes sense. Since the streamed transactions are\n> > sent to standby interleaved so we can take advantage of parallelism\n> > and along with that we can also avoid the I/O so that will also\n> > speedup.\n>\n> Some review comments on the latest version of the patch.\n>\n> 1.\n> +/* Queue size of DSM, 16 MB for now. 
*/\n> +#define DSM_QUEUE_SIZE 160000000\n>\n> Why don't we directly use 16 *1024 * 1024, that would be exactly 16 MB\n> so it will match with comments and also it would be more readable.\n>\n> 2.\n> +/*\n> + * There are three fields in message: start_lsn, end_lsn and send_time. Because\n> + * we have updated these statistics in apply worker, we could ignore these\n> + * fields in apply background worker. (see function LogicalRepApplyLoop)\n> + */\n> +#define SIZE_STATS_MESSAGE (3 * sizeof(uint64))\n>\n> Instead of assuming you have 3 uint64 why don't directly add 2 *\n> sizeof(XLogRecPtr) + sizeof(TimestampTz) so that if this data type\n> ever changes\n> we don't need to track that we will have to change this as well.\n>\n> 3.\n> +/*\n> + * Entry for a hash table we use to map from xid to our apply background worker\n> + * state.\n> + */\n> +typedef struct ApplyBgworkerEntry\n> +{\n> + TransactionId xid;\n> + ApplyBgworkerState *wstate;\n> +} ApplyBgworkerEntry;\n>\n> Mention in the comment of the structure or for the member that xid is\n> the key of the hash. Refer to other such structures for the\n> reference.\n>\n> I am doing a more detailed review but this is what I got so far.\n\nSome more comments\n\n+ /*\n+ * Exit if any relation is not in the READY state and if any worker is\n+ * handling the streaming transaction at the same time. 
Because for
+ * streaming transactions that is being applied in apply background
+ * worker, we cannot decide whether to apply the change for a relation
+ * that is not in the READY state (see should_apply_changes_for_rel) as we
+ * won't know remote_final_lsn by that time.
+ */
+ if (list_length(ApplyBgworkersFreeList) !=
list_length(ApplyBgworkersList) &&
+ !AllTablesyncsReady())
+ {
+ ereport(LOG,
+ (errmsg("logical replication apply workers for
subscription \"%s\" will restart",
+ MySubscription->name),
+ errdetail("Cannot handle streamed replication
transaction by apply "
+ "background workers until all tables are
synchronized")));
+
+ proc_exit(0);
+ }

How can this situation occur? I mean while starting a background
worker itself we can check whether all tables are sync ready or not
right?

+ /* Check the status of apply background worker if any. */
+ apply_bgworker_check_status();
+

What is the need to check each worker status on every commit? I
mean if there are a lot of small transactions along with some
streaming transactions
then it will affect the apply performance for those small transactions?


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Tue, 9 Aug 2022 11:09:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and
 parallel apply" }, { "msg_contents": "Dear Wang,

Thanks for updating patch sets! Followings are comments about v20-0001.

1. config.sgml

```
 <para>
  Specifies maximum number of logical replication workers. This includes
  both apply workers and table synchronization workers.
 </para>
```

I think you can add a description in the above paragraph, like
" This includes apply main workers, apply background workers, and table synchronization workers."

2. 
logical-replication.sgml

2.a Configuration Settings

```
 <varname>max_logical_replication_workers</varname> must be set to at least
 the number of subscriptions, again plus some reserve for the table
 synchronization.
```

I think you can add a description in the above paragraph, like
"... the number of subscriptions, plus some reserve for the table synchronization
 and the streaming transaction."

2.b Monitoring

```
 <para>
  Normally, there is a single apply process running for an enabled
  subscription. A disabled subscription or a crashed subscription will have
  zero rows in this view. If the initial data synchronization of any
  table is in progress, there will be additional workers for the tables
  being synchronized.
 </para>
```

I think you can add a sentence in the above paragraph, like
"... synchronized. Moreover, if the streaming transaction is applied parallelly,
there will be additional workers"

3. launcher.c

```
+ /* Sanity check : we don't support table sync in subworker. */
```

I think "Sanity check :" should be "Sanity check:", per other files.

4. worker.c

4.a handle_streamed_transaction()

```
- /* not in streaming mode */
- if (!in_streamed_transaction)
+ /* Not in streaming mode */
+ if (!(in_streamed_transaction || am_apply_bgworker()))
```

I think the comment should also mention the apply background worker case.

4.b handle_streamed_transaction()

```
- Assert(stream_fd != NULL);
```

I think this assertion seems reasonable in case of stream='on'.
Could you revive it and move it to a later part of the function, like after subxact_info_add(current_xid)?

4.c apply_handle_prepare_internal()

```
 * BeginTransactionBlock is necessary to balance the EndTransactionBlock
 * called within the PrepareTransactionBlock below.
 */
- BeginTransactionBlock();
+ if (!IsTransactionBlock())
+ BeginTransactionBlock();
+
```

I think the comment should be "We must be in transaction block to balance...".

4.d apply_handle_stream_prepare()

```
- *
- * Logic is in two parts:
- * 1. Replay all the spooled operations
- * 2. Mark the transaction as prepared
 */
 static void
 apply_handle_stream_prepare(StringInfo s)
```

I think these comments are useful when stream='on',
so they should be moved to a later part.

5. applybgworker.c

5.a apply_bgworker_setup()

```
+ elog(DEBUG1, "setting up apply worker #%u", list_length(ApplyBgworkersList) + 1); 
```

"apply worker" should be "apply background worker".

5.b LogicalApplyBgwLoop()

```
+ elog(DEBUG1, "[Apply BGW #%u] ended processing streaming chunk,"
+ "waiting on shm_mq_receive", shared->worker_id);
```

A blank is needed after the comma. I checked the server log, and the message was output like:

```
[Apply BGW #1] ended processing streaming chunk,waiting on shm_mq_receive
```

6.

When I started up the apply background worker and did `SELECT * from pg_stat_subscription`, I got following lines:

```
postgres=# select * from pg_stat_subscription;
 subid | subname | pid | relid | received_lsn | last_msg_send_time | last_msg_receipt_time | latest_end_lsn | latest_end
_time 
-------+---------+-------+-------+--------------+-------------------------------+-------------------------------+----------------+------------------
-------------
 16400 | sub | 22383 | | | -infinity | -infinity | | -infinity
 16400 | sub | 22312 | | 0/6734740 | 2022-08-09 07:40:19.367676+00 | 2022-08-09 07:40:19.375455+00 | 0/6734740 | 2022-08-09 07:40:
19.367676+00
(2 rows)
```


6.a

It seems that the upper line represents the apply background worker, but I think last_msg_send_time and last_msg_receipt_time should be null.
Is it an initialization mistake?

```
$ ps aux | grep 22383
... postgres: logical replication apply background worker for subscription 16400
```

6.b

Currently, the documentation doesn't clarify the method to determine the type of logical replication workers.
Could you add descriptions about it?
I think adding a column "subworker" is an alternative approach.


Best Regards,
Hayato Kuroda
FUJITSU LIMITED

", "msg_date": "Tue, 9 Aug 2022 08:48:53 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and
 parallel apply" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:10 PM wangw.fnst@fujitsu.com
<wangw.fnst@fujitsu.com> wrote:
>
> On Mon, Jul 25, 2022 at 21:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > Few comments on 0001:
> > ======================
>
> Thanks for your comments.
>

Review comments on v20-0001-Perform-streaming-logical-transactions-by-backgr
===============================================================
1.
+ <para>
+ If set to <literal>on</literal>, the incoming changes are written to
+ temporary files and then applied only after the transaction is
+ committed on the publisher.

It is not very clear that the transaction is applied when the commit
is received by the subscriber. 
Can we slightly change it to: "If set
to <literal>on</literal>, the incoming changes are written to
temporary files and then applied only after the transaction is
committed on the publisher and received by the subscriber."

2.
/* First time through, initialize apply workers hashtable */
+ if (ApplyBgworkersHash == NULL)
+ {
+ HASHCTL ctl;
+
+ MemSet(&ctl, 0, sizeof(ctl));
+ ctl.keysize = sizeof(TransactionId);
+ ctl.entrysize = sizeof(ApplyBgworkerEntry);
+ ctl.hcxt = ApplyContext;
+
+ ApplyBgworkersHash = hash_create("logical apply workers hash", 8, &ctl,
+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);

I think it would be better if we start with probably 16 element hash
table, 8 seems to be on the lower side.

3.
+/*
+ * Try to look up worker assigned before (see function apply_bgworker_get_free)
+ * inside ApplyBgworkersHash for requested xid.
+ */
+ApplyBgworkerState *
+apply_bgworker_find(TransactionId xid)

The above comment is not very clear. There doesn't seem to be any
function named apply_bgworker_get_free in the patch. Can we write this
comment as: "Find the previously assigned worker for the given
transaction, if any."

4.
/*
+ * Push apply error context callback. Fields will be filled applying a
+ * change.
+ */

/Fields will be filled applying a change./Fields will be filled while
applying a change.

5.
+void
+ApplyBgworkerMain(Datum main_arg)
+{
...
...
+ StartTransactionCommand();
+ oldcontext = MemoryContextSwitchTo(ApplyContext);
+
+ MySubscription = GetSubscription(MyLogicalRepWorker->subid, true);
+ if (!MySubscription)
+ {
+ ereport(LOG,
+ (errmsg("logical replication apply worker for subscription %u will not "
+ "start because the subscription was removed during startup",
+ MyLogicalRepWorker->subid)));
+ proc_exit(0);
+ }
+
+ MySubscriptionValid = true;
+ MemoryContextSwitchTo(oldcontext);
+
+ /* Setup synchronous commit according to the user's wishes */
+ SetConfigOption("synchronous_commit", MySubscription->synccommit,
+ PGC_BACKEND, PGC_S_OVERRIDE);
+
+ /* Keep us informed about subscription changes. */
+ CacheRegisterSyscacheCallback(SUBSCRIPTIONOID,
+ subscription_change_cb,
+ (Datum) 0);
+
+ CommitTransactionCommand();
...

This part of the code appears to be the same as we have in
ApplyWorkerMain() except that the patch doesn't check whether the
subscription is enabled. Is there a reason to not have that check here
as well? Then in ApplyWorkerMain(), we do LOG the type of worker that
is also missing here. Unless there is a specific reason to have a
different code here, we should move this part to a common function and
call it both from ApplyWorkerMain() and ApplyBgworkerMain().

6. I think the code in ApplyBgworkerMain() to set
session_replication_role, search_path, and connect to the database
also appears to be the same in ApplyWorkerMain(). If so, that can also
be moved to the common function mentioned in the previous point.

7. I think we need to register for subscription rel map invalidation
(invalidate_syncing_table_states) in ApplyBgworkerMain similar to
ApplyWorkerMain. 
The reason is that we check the table state after\nprocessing a commit or similar change record via a call to\nprocess_syncing_tables.\n\n8. In apply_bgworker_setup_dsm(), we should have handling related to\ndsm_create failure due to max_segments reached as we have in\nInitializeParallelDSM(). We can follow the regular path of streaming\ntransactions in case we are not able to create DSM instead of\nparallelizing it.\n\n9.\n+ shm_toc_initialize_estimator(&e);\n+ shm_toc_estimate_chunk(&e, sizeof(ApplyBgworkerShared));\n+ shm_toc_estimate_chunk(&e, (Size) queue_size);\n+\n+ shm_toc_estimate_keys(&e, 1 + 1);\n\nHere, you can directly write 2 instead of (1 + 1) stuff. It is quite\nclear that we need two keys here.\n\n10.\napply_bgworker_wait_for()\n{\n...\n+ /* Wait to be signalled. */\n+ WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,\n+ WAIT_EVENT_LOGICAL_APPLY_BGWORKER_STATE_CHANGE);\n...\n}\n\nTypecast with the void, if we don't care for the return value.\n\n11.\n+static void\n+apply_bgworker_shutdown(int code, Datum arg)\n+{\n+ SpinLockAcquire(&MyParallelShared->mutex);\n+ MyParallelShared->status = APPLY_BGWORKER_EXIT;\n+ SpinLockRelease(&MyParallelShared->mutex);\n\nIs there a reason to not use apply_bgworker_set_status() directly?\n\n12.\n+ * Special case is if the first change comes from subtransaction, then\n+ * we check that current_xid differs from stream_xid.\n+ */\n+void\n+apply_bgworker_subxact_info_add(TransactionId current_xid)\n+{\n+ if (current_xid != stream_xid &&\n+ !list_member_int(subxactlist, (int) current_xid))\n...\n...\n\nI don't understand the above comment. Does that mean we don't need to\ndefine a savepoint if the first change is from a subtransaction? 
Also,\nkeep an empty line before the above comment.\n\n13.\n+void\n+apply_bgworker_subxact_info_add(TransactionId current_xid)\n+{\n+ if (current_xid != stream_xid &&\n+ !list_member_int(subxactlist, (int) current_xid))\n+ {\n+ MemoryContext oldctx;\n+ char spname[MAXPGPATH];\n+\n+ snprintf(spname, MAXPGPATH, \"savepoint_for_xid_%u\", current_xid);\n\nTo uniquely generate the savepoint name, it is better to append the\nsubscription id as well? Something like pg_sp_<subid>_<xid>.\n\n14. The CommitTransactionCommand() call in\napply_bgworker_subxact_info_add looks a bit odd as that function\nneither seems to be starting the transaction command nor has any\ncomments explaining it. Shall we do it in caller where it is more\napparent to do the same?\n\n15.\nelse\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u\", subid);\n+\n snprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication worker\");\n\nSpurious new line\n\n16.\n@@ -1153,7 +1162,14 @@ replorigin_session_setup(RepOriginId node)\n\n Assert(session_replication_state->roident != InvalidRepOriginId);\n\n- session_replication_state->acquired_by = MyProcPid;\n+ if (must_acquire)\n+ session_replication_state->acquired_by = MyProcPid;\n+ else if (session_replication_state->acquired_by == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\n+ errmsg(\"apply background worker could not find replication state\nslot for replication origin with OID %u\",\n+ node),\n+ errdetail(\"There is no replication state slot set by its main apply\nworker.\")));\n\nIt is not a good idea to give apply workers specific messages from\nthis API because I don't think we can assume this is used by only\napply workers. It seems to me that if 'must_acquire' is false, then we\nshould either give elog(ERROR, ..) or there should be an Assert for\nthe same. 
I am not completely sure but maybe we can request the caller\nto supply the PID (which already has acquired this origin) in case\nmust_acquire is false and then use it in Assert/elog to ensure the\ncorrect usage of API. What do you think?\n\n17. The commit message can explain the abort-related new information\nthis patch sends to the subscribers.\n\n18.\n+ * In streaming case (receiving a block of streamed transaction), for\n+ * SUBSTREAM_ON mode, simply redirect it to a file for the proper toplevel\n+ * transaction, and for SUBSTREAM_PARALLEL mode, send the changes to apply\n+ * background workers (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes\n+ * will also be applied in main apply worker).\n\nIn this, part of the comment \"(LOGICAL_REP_MSG_RELATION or\nLOGICAL_REP_MSG_TYPE changes will also be applied in main apply\nworker)\" is not very clear. Do you mean to say that these messages are\napplied by both main and background apply workers, if so, then please\nstate the same explicitly?\n\n19.\n- /* not in streaming mode */\n- if (!in_streamed_transaction)\n+ /* Not in streaming mode */\n+ if (!(in_streamed_transaction || am_apply_bgworker()))\n...\n...\n- /* write the change to the current file */\n+ /* Write the change to the current file */\n stream_write_change(action, s);\n\nI don't see the need to change the above comments.\n\n20.\n static bool\n handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n {\n...\n...\n+ if (am_apply_bgworker())\n+ {\n+ /* Define a savepoint for a subxact if needed. 
*/\n+ apply_bgworker_subxact_info_add(current_xid);\n+\n+ return false;\n+ }\n+\n+ if (apply_bgworker_active())\n\nIsn't it better to use else if in the above code and probably else for\nthe remaining part of code in this function?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Aug 2022 16:30:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Aug 9, 2022 at 11:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Some more comments\n>\n> + /*\n> + * Exit if any relation is not in the READY state and if any worker is\n> + * handling the streaming transaction at the same time. Because for\n> + * streaming transactions that is being applied in apply background\n> + * worker, we cannot decide whether to apply the change for a relation\n> + * that is not in the READY state (see should_apply_changes_for_rel) as we\n> + * won't know remote_final_lsn by that time.\n> + */\n> + if (list_length(ApplyBgworkersFreeList) !=\n> list_length(ApplyBgworkersList) &&\n> + !AllTablesyncsReady())\n> + {\n> + ereport(LOG,\n> + (errmsg(\"logical replication apply workers for\n> subscription \\\"%s\\\" will restart\",\n> + MySubscription->name),\n> + errdetail(\"Cannot handle streamed replication\n> transaction by apply \"\n> + \"background workers until all tables are\n> synchronized\")));\n> +\n> + proc_exit(0);\n> + }\n>\n> How this situation can occur? I mean while starting a background\n> worker itself we can check whether all tables are sync ready or not\n> right?\n>\n\nWe are already checking at the start in apply_bgworker_can_start() but\nI think it is required to check at the later point of time as well\nbecause the new rels can be added to pg_subscription_rel via Alter\nSubscription ... Refresh. 
I feel if that reasoning is correct then we\ncan probably expand comments to make it clear.\n\n> + /* Check the status of apply background worker if any. */\n> + apply_bgworker_check_status();\n> +\n>\n> What is the need to checking each worker status on every commit? I\n> mean if there are a lot of small transactions along with some\n> steamiing transactions\n> then it will affect the apply performance for those small transactions?\n>\n\nI don't think performance will be a concern because this won't do any\ncostly operation unless invalidation happens in which case it will\naccess system catalogs. However, if my above understanding is correct\nthat new tables can be added during the apply process then not sure\ndoing it at commit time is sufficient/correct because it can change\neven during the transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Aug 2022 17:39:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi Wang,\r\n\r\n> 6.a\r\n> \r\n> It seems that the upper line represents the apply background worker, but I think\r\n> last_msg_send_time and last_msg_receipt_time should be null.\r\n> Is it like initialization mistake?\r\n\r\nI checked again about the issue.\r\n\r\nAttributes worker->last_send_time, worker->last_recv_time, and worker->reply_time\r\nare initialized in logicalrep_worker_launch():\r\n\r\n```\r\n...\r\n\tTIMESTAMP_NOBEGIN(worker->last_send_time);\r\n\tTIMESTAMP_NOBEGIN(worker->last_recv_time);\r\n\tworker->reply_lsn = InvalidXLogRecPtr;\r\n\tTIMESTAMP_NOBEGIN(worker->reply_time);\r\n...\r\n```\r\n\r\nAnd the macro is defined in timestamp.h, and it seems that the values are initialized as PG_INT64_MIN.\r\n\r\n```\r\n#define DT_NOBEGIN\t\tPG_INT64_MIN\r\n#define DT_NOEND\t\tPG_INT64_MAX\r\n\r\n#define TIMESTAMP_NOBEGIN(j)\t\\\r\n\tdo {(j) = DT_NOBEGIN;} while 
(0)\r\n```\r\n\r\n\r\nHowever, in pg_stat_get_subscription(), these values are regarded as null if they are zero.\r\n\r\n```\r\n\t\tif (worker.last_send_time == 0)\r\n\t\t\tnulls[4] = true;\r\n\t\telse\r\n\t\t\tvalues[4] = TimestampTzGetDatum(worker.last_send_time);\r\n\t\tif (worker.last_recv_time == 0)\r\n\t\t\tnulls[5] = true;\r\n\t\telse\r\n\t\t\tvalues[5] = TimestampTzGetDatum(worker.last_recv_time);\r\n```\r\n\r\nI think above lines are wrong, these values should be compared with PG_INT64_MIN.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 10 Aug 2022 02:11:21 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Aug 9, 2022 at 5:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 9, 2022 at 11:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Some more comments\n> >\n> > + /*\n> > + * Exit if any relation is not in the READY state and if any worker is\n> > + * handling the streaming transaction at the same time. Because for\n> > + * streaming transactions that is being applied in apply background\n> > + * worker, we cannot decide whether to apply the change for a relation\n> > + * that is not in the READY state (see should_apply_changes_for_rel) as we\n> > + * won't know remote_final_lsn by that time.\n> > + */\n> > + if (list_length(ApplyBgworkersFreeList) !=\n> > list_length(ApplyBgworkersList) &&\n> > + !AllTablesyncsReady())\n> > + {\n> > + ereport(LOG,\n> > + (errmsg(\"logical replication apply workers for\n> > subscription \\\"%s\\\" will restart\",\n> > + MySubscription->name),\n> > + errdetail(\"Cannot handle streamed replication\n> > transaction by apply \"\n> > + \"background workers until all tables are\n> > synchronized\")));\n> > +\n> > + proc_exit(0);\n> > + }\n> >\n> > How this situation can occur? 
I mean while starting a background\n> > worker itself we can check whether all tables are sync ready or not\n> > right?\n> >\n>\n> We are already checking at the start in apply_bgworker_can_start() but\n> I think it is required to check at the later point of time as well\n> because the new rels can be added to pg_subscription_rel via Alter\n> Subscription ... Refresh. I feel if that reasoning is correct then we\n> can probably expand comments to make it clear.\n>\n> > + /* Check the status of apply background worker if any. */\n> > + apply_bgworker_check_status();\n> > +\n> >\n> > What is the need to checking each worker status on every commit? I\n> > mean if there are a lot of small transactions along with some\n> > steamiing transactions\n> > then it will affect the apply performance for those small transactions?\n> >\n>\n> I don't think performance will be a concern because this won't do any\n> costly operation unless invalidation happens in which case it will\n> access system catalogs. However, if my above understanding is correct\n> that new tables can be added during the apply process then not sure\n> doing it at commit time is sufficient/correct because it can change\n> even during the transaction.\n>\n\nOne idea that may handle it cleanly is to check for\nSUBREL_STATE_SYNCDONE state in should_apply_changes_for_rel() and\nerror out for apply_bg_worker(). For the SUBREL_STATE_READY state, it\nshould return true and for any other state, it can return false. 
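To visualize the state check being proposed, here is a minimal self-contained
sketch (not the actual patch; the enum values and the plain exit() are
illustrative stand-ins for the real pg_subscription_rel states and the
ereport(ERROR) machinery in should_apply_changes_for_rel()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the pg_subscription_rel sync states. */
typedef enum
{
    SUBREL_STATE_INIT,
    SUBREL_STATE_DATASYNC,
    SUBREL_STATE_SYNCDONE,
    SUBREL_STATE_READY
} SubRelState;

/*
 * Sketch of the proposed logic: in an apply background worker, a relation
 * in SYNCDONE state is an error (remote_final_lsn is not known there);
 * READY means apply the change; any other state skips it.
 */
static bool
should_apply_changes_for_rel_sketch(SubRelState state, bool am_apply_bgworker)
{
    if (am_apply_bgworker && state == SUBREL_STATE_SYNCDONE)
    {
        /* stands in for ereport(ERROR, ...) in the real worker */
        fprintf(stderr, "ERROR: logical replication target relation is not ready\n");
        exit(1);
    }

    return state == SUBREL_STATE_READY;
}
```

So only a transaction that actually touches a SYNCDONE relation errors out,
while relations in the remaining pre-READY states are simply skipped.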
The\none advantage of this approach could be that the parallel apply worker\nwill give an error only if the corresponding transaction has performed\nany operation on the relation that has reached the SYNCDONE state.\nOTOH, checking at each transaction end can also lead to erroring out\nof workers even if the parallel apply transaction doesn't perform any\noperation on the relation which is not in the READY state.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Aug 2022 09:08:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for the patch v20-0001:\n\n======\n\n1. doc/src/sgml/catalogs.sgml\n\n+ <literal>p</literal> = apply changes directly using a background\n+ worker, if available, otherwise, it behaves the same as 't'\n\nThe different char values 'f','t','p' are separated by comma (,) in\nthe list, which is normal for the pgdocs AFAIK. However, because of\nthis I don't think it is a good idea to use those other commas within\nthe description for 'p', I suggest you remove those ones to avoid\nambiguity with the separators.\n\n======\n\n2. doc/src/sgml/protocol.sgml\n\n@@ -3096,7 +3096,7 @@ psql \"dbname=postgres replication=database\" -c\n\"IDENTIFY_SYSTEM;\"\n <listitem>\n <para>\n Protocol version. Currently versions <literal>1</literal>,\n<literal>2</literal>,\n- and <literal>3</literal> are supported.\n+ <literal>3</literal> and <literal>4</literal> are supported.\n </para>\n\nPut a comma after the penultimate value like it had before.\n\n======\n\n3. src/backend/replication/logical/applybgworker.c - <general>\n\nThere are multiple function comments and other code comments in this\nfile that are missing a terminating period (.)\n\n======\n\n4. 
src/backend/replication/logical/applybgworker.c - apply_bgworker_start\n\n+/*\n+ * Try to get a free apply background worker.\n+ *\n+ * If there is at least one worker in the free list, then take one. Otherwise,\n+ * try to start a new apply background worker. If successful, cache it in\n+ * ApplyBgworkersHash keyed by the specified xid.\n+ */\n+ApplyBgworkerState *\n+apply_bgworker_start(TransactionId xid)\n\nSUGGESTION (for function comment)\nReturn the apply background worker that will be used for the specified xid.\n\nIf an apply background worker is found in the free list then re-use\nit, otherwise start a fresh one. Cache the worker ApplyBgworkersHash\nkeyed by the specified xid.\n\n~~~\n\n5.\n\n+ /* Try to get a free apply background worker */\n+ if (list_length(ApplyBgworkersFreeList) > 0)\n\nif (list_length(ApplyBgworkersFreeList) > 0)\n\nAFAIK a non-empty list is guaranteed to be not NIL, and an empty list\nis guaranteed to be NIL. So if you want to the above can simply be\nwritten as:\n\nif (ApplyBgworkersFreeList)\n\n~~~\n\n6. src/backend/replication/logical/applybgworker.c - apply_bgworker_find\n\n+/*\n+ * Try to look up worker assigned before (see function apply_bgworker_get_free)\n+ * inside ApplyBgworkersHash for requested xid.\n+ */\n+ApplyBgworkerState *\n+apply_bgworker_find(TransactionId xid)\n\nSUGGESTION (for function comment)\nFind the worker previously assigned/cached for this xid. (see function\napply_bgworker_start)\n\n~~~\n\n7.\n\n+ Assert(status == APPLY_BGWORKER_BUSY);\n+\n+ return entry->wstate;\n+ }\n+ else\n+ return NULL;\n\nIMO here it is better to just remove that 'else' and unconditionally\nreturn NULL at the end of this function.\n\n~~~\n\n8. src/backend/replication/logical/applybgworker.c -\napply_bgworker_subxact_info_add\n\n+ * Inside apply background worker we can figure out that new subtransaction was\n+ * started if new change arrived with different xid. 
In that case we can define\n+ * named savepoint, so that we were able to commit/rollback it separately\n+ * later.\n+ * Special case is if the first change comes from subtransaction, then\n+ * we check that current_xid differs from stream_xid.\n+ */\n+void\n+apply_bgworker_subxact_info_add(TransactionId current_xid)\n\nIt is not quite English. Can you improve it a bit?\n\nSUGGESTION (maybe like this?)\nThe apply background worker can figure out if a new subtransaction was\nstarted by checking if the new change arrived with different xid. In\nthat case define a named savepoint, so that we are able to\ncommit/rollback it separately later. A special case is when the first\nchange comes from subtransaction – this is determined by checking if\nthe current_xid differs from stream_xid.\n\n======\n\n9. src/backend/replication/logical/launcher.c - WaitForReplicationWorkerAttach\n\n+ *\n+ * Return false if the attach fails. Otherwise return true.\n */\n-static void\n+static bool\n WaitForReplicationWorkerAttach(LogicalRepWorker *worker,\n\nWhy not just say \"Return whether the attach was successful.\"\n\n~~~\n\n10. src/backend/replication/logical/launcher.c - logicalrep_worker_stop\n\n+ /* Found the main worker, then try to stop it. */\n+ if (worker)\n+ logicalrep_worker_stop_internal(worker);\n\nIMO the comment is kind of pointless because it only says what the\ncode is clearly doing. If you really wanted to reinforce this worker\nis a main apply worker then you can do that with code like:\n\nif (worker)\n{\nAssert(!worker->subworker);\nlogicalrep_worker_stop_internal(worker);\n}\n\n~~~\n\n11. 
src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n\n@@ -599,6 +632,29 @@ logicalrep_worker_attach(int slot)\n static void\n logicalrep_worker_detach(void)\n {\n+ /*\n+ * This is the main apply worker, stop all the apply background workers we\n+ * started before.\n+ */\n+ if (!MyLogicalRepWorker->subworker)\n\nSUGGESTION (for comment)\nThis is the main apply worker. Stop all apply background workers\npreviously started from here.\n\n~~~\n\n12 src/backend/replication/logical/launcher.c - logicalrep_apply_bgworker_count\n\n+/*\n+ * Count the number of registered (not necessarily running) apply background\n+ * workers for a subscription.\n+ */\n+int\n+logicalrep_apply_bgworker_count(Oid subid)\n\nSUGGESTION\nCount the number of registered (but not necessarily running) apply\nbackground workers for a subscription.\n\n~~~\n\n13.\n\n+ /* Search for attached worker for a given subscription id. */\n+ for (i = 0; i < max_logical_replication_workers; i++)\n\nSUGGESTION\nScan all attached apply background workers, only counting those which\nhave the given subscription id.\n\n======\n\n14. 
src/backend/replication/logical/worker.c - apply_error_callback\n\n+ {\n+ if (errarg->remote_attnum < 0)\n+ {\n+ if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n%u\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->remote_xid);\n+ else\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n%u finished at %X/%X\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->remote_xid,\n+ LSN_FORMAT_ARGS(errarg->finish_lsn));\n+ }\n+ else\n+ {\n+ if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\nin transaction %u\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->rel->remoterel.attnames[errarg->remote_attnum],\n+ errarg->remote_xid);\n+ else\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\nin transaction %u finished at %X/%X\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->rel->remoterel.attnames[errarg->remote_attnum],\n+ errarg->remote_xid,\n+ LSN_FORMAT_ARGS(errarg->finish_lsn));\n+ }\n+ }\n\nThere is quite a lot of common code here:\n\n\"processing remote data for replication origin \\\"%s\\\" during \\\"%s\\\"\nfor replication target relation \\\"%s.%s\\\"\n\n 
errarg->origin_name,\n logicalrep_message_type(errarg->command),\n errarg->rel->remoterel.nspname,\n errarg->rel->remoterel.relname,\n\nIs it worth trying to extract that common part to keep this code\nshorter? E.g. It could be easily done just with some #defines\n\n======\n\n15. src/include/replication/worker_internal.h\n\n+ /* proto version of publisher. */\n+ uint32 proto_version;\n\nSUGGESTION\nProtocol version of publisher\n\n~~~\n\n16.\n\n+ /* id of apply background worker */\n+ uint32 worker_id;\n\nUppercase comment\n\n~~~\n\n17.\n\n+/*\n+ * Struct for maintaining an apply background worker.\n+ */\n+typedef struct ApplyBgworkerState\n\nI'm not sure what this comment means. Perhaps there are some words missing?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 10 Aug 2022 19:40:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 4, 2022 at 12:07 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thurs, Jul 28, 2022 at 13:20 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear Wang-san,\n> >\n> > Hi, I'm also interested in the patch and I started to review this.\n> > Followings are comments about 0001.\n>\n> Thanks for your kindly review and comments.\n> To avoid making this thread too long, I will reply to all of your comments\n> (#1~#13) in this email.\n>\n> > 1. 
terminology\n> >\n> > In your patch a new worker \"apply background worker\" has been introduced,\n> > but I thought it might be confused because PostgreSQL has already the worker\n> > \"background worker\".\n> > Both of apply worker and apply bworker are categolized as bgworker.\n> > Do you have any reasons not to use \"apply parallel worker\" or \"apply streaming\n> > worker\"?\n> > (Note that I'm not native English speaker)\n>\n> Since we will later consider applying non-streamed transactions in parallel, I\n> think \"apply streaming worker\" might not be very suitable. I think PostgreSQL\n> also has the worker \"parallel worker\", so for \"apply parallel worker\" and\n> \"apply background worker\", I feel that \"apply background worker\" will make the\n> relationship between workers more clear. (\"[main] apply worker\" and \"apply\n> background worker\")\n>\n\nBut, on similar lines, we do have vacuumparallel.c for parallelizing\nindex vacuum. I agree with Kuroda-San on this point that the currently\nproposed terminology doesn't sound to be very clear. The other options\nthat come to my mind are \"apply streaming transaction worker\", \"apply\nparallel worker\" and file name could be applystreamworker.c,\napplyparallel.c, applyparallelworker.c, etc. 
I see the point why you\nare hesitant in calling it \"apply parallel worker\" but it is quite\npossible that even for non-streamed xacts, we will share quite some\npart of this code.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Aug 2022 12:13:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, August 9, 2022 7:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Aug 4, 2022 at 12:10 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Mon, Jul 25, 2022 at 21:50 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > Few comments on 0001:\r\n> > > ======================\r\n> >\r\n> > Thanks for your comments.\r\n> >\r\n> \r\n> Review comments on\r\n> v20-0001-Perform-streaming-logical-transactions-by-backgr\r\n> ===================================================\r\n> ============\r\n> 1.\r\n> + <para>\r\n> + If set to <literal>on</literal>, the incoming changes are written to\r\n> + temporary files and then applied only after the transaction is\r\n> + committed on the publisher.\r\n> \r\n> It is not very clear that the transaction is applied when the commit is received\r\n> by the subscriber. 
Can we slightly change it to: \"If set to <literal>on</literal>,\r\n> the incoming changes are written to temporary files and then applied only after\r\n> the transaction is committed on the publisher and received by the subscriber.\"\r\n\r\nChanged.\r\n\r\n> 2.\r\n> /* First time through, initialize apply workers hashtable */\r\n> + if (ApplyBgworkersHash == NULL)\r\n> + {\r\n> + HASHCTL ctl;\r\n> +\r\n> + MemSet(&ctl, 0, sizeof(ctl));\r\n> + ctl.keysize = sizeof(TransactionId);\r\n> + ctl.entrysize = sizeof(ApplyBgworkerEntry); ctl.hcxt = ApplyContext;\r\n> +\r\n> + ApplyBgworkersHash = hash_create(\"logical apply workers hash\", 8, &ctl,\r\n> + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\r\n> \r\n> I think it would be better if we start with probably 16 element hash table, 8\r\n> seems to be on the lower side.\r\n\r\nChanged.\r\n\r\n> 3.\r\n> +/*\r\n> + * Try to look up worker assigned before (see function\r\n> +apply_bgworker_get_free)\r\n> + * inside ApplyBgworkersHash for requested xid.\r\n> + */\r\n> +ApplyBgworkerState *\r\n> +apply_bgworker_find(TransactionId xid)\r\n> \r\n> The above comment is not very clear. There doesn't seem to be any function\r\n> named apply_bgworker_get_free in the patch. Can we write this comment as:\r\n> \"Find the previously assigned worker for the given transaction, if any.\"\r\n\r\nChanged the comments.\r\n\r\n> 4.\r\n> /*\r\n> + * Push apply error context callback. 
Fields will be filled applying a\r\n> + * change.\r\n> + */\r\n> \r\n> /Fields will be filled applying a change./Fields will be filled while applying a\r\n> change.\r\n\r\nChanged.\r\n\r\n> 5.\r\n> +void\r\n> +ApplyBgworkerMain(Datum main_arg)\r\n> +{\r\n> ...\r\n> ...\r\n> + StartTransactionCommand();\r\n> + oldcontext = MemoryContextSwitchTo(ApplyContext);\r\n> +\r\n> + MySubscription = GetSubscription(MyLogicalRepWorker->subid, true); if\r\n> + (!MySubscription) { ereport(LOG, (errmsg(\"logical replication apply\r\n> + worker for subscription %u will not \"\r\n> + \"start because the subscription was removed during startup\",\r\n> + MyLogicalRepWorker->subid)));\r\n> + proc_exit(0);\r\n> + }\r\n> +\r\n> + MySubscriptionValid = true;\r\n> + MemoryContextSwitchTo(oldcontext);\r\n> +\r\n> + /* Setup synchronous commit according to the user's wishes */\r\n> + SetConfigOption(\"synchronous_commit\", MySubscription->synccommit,\r\n> + PGC_BACKEND, PGC_S_OVERRIDE);\r\n> +\r\n> + /* Keep us informed about subscription changes. */\r\n> + CacheRegisterSyscacheCallback(SUBSCRIPTIONOID,\r\n> + subscription_change_cb,\r\n> + (Datum) 0);\r\n> +\r\n> + CommitTransactionCommand();\r\n> ...\r\n> \r\n> This part appears of the code appears to be the same as we have in\r\n> ApplyWorkerMain() except that the patch doesn't check whether the\r\n> subscription is enabled. Is there a reason to not have that check here as well?\r\n> Then in ApplyWorkerMain(), we do LOG the type of worker that is also missing\r\n> here. Unless there is a specific reason to have a different code here, we should\r\n> move this part to a common function and call it both from ApplyWorkerMain()\r\n> and ApplyBgworkerMain().\r\n> 6. I think the code in ApplyBgworkerMain() to set session_replication_role,\r\n> search_path, and connect to the database also appears to be the same in\r\n> ApplyWorkerMain(). If so, that can also be moved to the common function\r\n> mentioned in the previous point.\r\n> \r\n> 7. 
I think we need to register for subscription rel map invalidation\r\n> (invalidate_syncing_table_states) in ApplyBgworkerMain similar to\r\n> ApplyWorkerMain. The reason is that we check the table state after processing\r\n> a commit or similar change record via a call to process_syncing_tables.\r\n\r\nAgreed and changed.\r\n\r\n> 8. In apply_bgworker_setup_dsm(), we should have handling related to\r\n> dsm_create failure due to max_segments reached as we have in\r\n> InitializeParallelDSM(). We can follow the regular path of streaming\r\n> transactions in case we are not able to create DSM instead of parallelizing it.\r\n\r\nChanged.\r\n\r\n> 9.\r\n> + shm_toc_initialize_estimator(&e);\r\n> + shm_toc_estimate_chunk(&e, sizeof(ApplyBgworkerShared));\r\n> + shm_toc_estimate_chunk(&e, (Size) queue_size);\r\n> +\r\n> + shm_toc_estimate_keys(&e, 1 + 1);\r\n> \r\n> Here, you can directly write 2 instead of (1 + 1) stuff. It is quite clear that we\r\n> need two keys here.\r\n\r\nChanged.\r\n\r\n> 10.\r\n> apply_bgworker_wait_for()\r\n> {\r\n> ...\r\n> + /* Wait to be signalled. 
*/
> + WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
> + WAIT_EVENT_LOGICAL_APPLY_BGWORKER_STATE_CHANGE);
> ...
> }
> 
> Typecast with the void, if we don't care for the return value.

Changed.

> 11.
> +static void
> +apply_bgworker_shutdown(int code, Datum arg) {
> +SpinLockAcquire(&MyParallelShared->mutex);
> + MyParallelShared->status = APPLY_BGWORKER_EXIT;
> + SpinLockRelease(&MyParallelShared->mutex);
> 
> Is there a reason to not use apply_bgworker_set_status() directly?

No, changed to use that function.

> 12.
> + * Special case is if the first change comes from subtransaction, then
> + * we check that current_xid differs from stream_xid.
> + */
> +void
> +apply_bgworker_subxact_info_add(TransactionId current_xid) { if
> +(current_xid != stream_xid && !list_member_int(subxactlist, (int)
> +current_xid))
> ...
> ...
> 
> I don't understand the above comment. Does that mean we don't need to
> define a savepoint if the first change is from a subtransaction? Also, keep an
> empty line before the above comment.

After checking, I agree this comment is not very clear, so I have removed it
and improved the other comments.

> 13.
> +void
> +apply_bgworker_subxact_info_add(TransactionId current_xid) { if
> +(current_xid != stream_xid && !list_member_int(subxactlist, (int)
> +current_xid)) { MemoryContext oldctx; char spname[MAXPGPATH];
> +
> + snprintf(spname, MAXPGPATH, \"savepoint_for_xid_%u\", current_xid);
> 
> To uniquely generate the savepoint name, it is better to append the
> subscription id as well? Something like pg_sp_<subid>_<xid>.

Changed.

> 14. The CommitTransactionCommand() call in
> apply_bgworker_subxact_info_add looks a bit odd as that function neither
> seems to be starting the transaction command nor has any comments
> explaining it. Shall we do it in caller where it is more apparent to do the same?

I think the CommitTransactionCommand here is used to cooperate with
DefineSavepoint, because we need to invoke CommitTransactionCommand to
start a new subtransaction. I tried to add some comments to explain this.

> 15.
> else
> snprintf(bgw.bgw_name, BGW_MAXLEN,
> \"logical replication worker for subscription %u\", subid);
> +
> snprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication worker\");
> 
> Spurious new line

Removed.

> 16.
> @@ -1153,7 +1162,14 @@ replorigin_session_setup(RepOriginId node)
> 
> Assert(session_replication_state->roident != InvalidRepOriginId);
> 
> - session_replication_state->acquired_by = MyProcPid;
> + if (must_acquire)
> + session_replication_state->acquired_by = MyProcPid; else if
> + (session_replication_state->acquired_by == 0) ereport(ERROR,
> + (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
> + errmsg(\"apply background worker could not find replication state
> slot for replication origin with OID %u\",
> + node),
> + errdetail(\"There is no replication state slot set by its main apply
> worker.\")));
> 
> It is not a good idea to give apply workers specific messages from this API
> because I don't think we can assume this is used by only apply workers. It seems
> to me that if 'must_acquire' is false, then we should either give elog(ERROR, ..)
> or there should be an Assert for the same. I am not completely sure but maybe
> we can request the caller to supply the PID (which already has acquired this
> origin) in case must_acquire is false and then use it in Assert/elog to ensure the
> correct usage of API. What do you think?

Agreed. I think we can replace 'must_acquire' with the pid of the worker which
acquired this origin (called 'acquired_by'). We can use this pid to check and
report the error if needed.

> 17. The commit message can explain the abort-related new information this
> patch sends to the subscribers.

Added.

> 18.
> + * In streaming case (receiving a block of streamed transaction), for
> + * SUBSTREAM_ON mode, simply redirect it to a file for the proper
> + toplevel
> + * transaction, and for SUBSTREAM_PARALLEL mode, send the changes to
> + apply
> + * background workers (LOGICAL_REP_MSG_RELATION or
> LOGICAL_REP_MSG_TYPE
> + changes
> + * will also be applied in main apply worker).
> 
> In this, part of the comment \"(LOGICAL_REP_MSG_RELATION or
> LOGICAL_REP_MSG_TYPE changes will also be applied in main apply worker)\" is
> not very clear. Do you mean to say that these messages are applied by both
> main and background apply workers, if so, then please state the same
> explicitly?

Changed.

> 19.
> - /* not in streaming mode */
> - if (!in_streamed_transaction)
> + /* Not in streaming mode */
> + if (!(in_streamed_transaction || am_apply_bgworker()))
> ...
> ...
> - /* write the change to the current file */
> + /* Write the change to the current file */
> stream_write_change(action, s);
> 
> I don't see the need to change the above comments.

Removed the changes.

> 20.
> static bool
> handle_streamed_transaction(LogicalRepMsgType action, StringInfo s) { ...
> ...
> + if (am_apply_bgworker())
> + {
> + /* Define a savepoint for a subxact if needed. */
> + apply_bgworker_subxact_info_add(current_xid);
> +
> + return false;
> + }
> +
> + if (apply_bgworker_active())
> 
> Isn't it better to use else if in the above code and probably else for the
> remaining part of code in this function?

Changed.

Attached new version (v21) patch set which addresses all the comments received so far.

Best regards,
Hou zj", "msg_date": "Thu, 11 Aug 2022 07:47:59 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, August 10, 2022 11:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> 
> On Tue, Aug 9, 2022 at 5:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Tue, Aug 9, 2022 at 11:09 AM Dilip Kumar <dilipbalaut@gmail.com>
> wrote:
> > >
> > > Some more comments
> > >
> > > + /*
> > > + * Exit if any relation is not in the READY state and if any worker is
> > > + * handling the streaming transaction at the same time. Because for
> > > + * streaming transactions that is being applied in apply background
> > > + * worker, we cannot decide whether to apply the change for a
> relation
> > > + * that is not in the READY state (see should_apply_changes_for_rel) as
> we
> > > + * won't know remote_final_lsn by that time.
> > > + */
> > > + if (list_length(ApplyBgworkersFreeList) !=
> > > list_length(ApplyBgworkersList) &&
> > > + !AllTablesyncsReady())
> > > + {
> > > + ereport(LOG,
> > > + (errmsg(\"logical replication apply workers for
> > > subscription \\\"%s\\\" will restart\",
> > > + MySubscription->name),
> > > + errdetail(\"Cannot handle streamed replication
> > > transaction by apply \"
> > > + \"background workers until all tables are
> > > synchronized\")));
> > > +
> > > + proc_exit(0);
> > > + }
> > >
> > > How this situation can occur? I mean while starting a background
> > > worker itself we can check whether all tables are sync ready or not
> > > right?
> > >
> >
> > We are already checking at the start in apply_bgworker_can_start() but
> > I think it is required to check at the later point of time as well
> > because the new rels can be added to pg_subscription_rel via Alter
> > Subscription ... Refresh. I feel if that reasoning is correct then we
> > can probably expand comments to make it clear.
> >
> > > + /* Check the status of apply background worker if any. */
> > > + apply_bgworker_check_status();
> > > +
> > >
> > > What is the need to checking each worker status on every commit? I
> > > mean if there are a lot of small transactions along with some
> > > streaming transactions then it will affect the apply performance for
> > > those small transactions?
> > >
> >
> > I don't think performance will be a concern because this won't do any
> > costly operation unless invalidation happens in which case it will
> > access system catalogs. However, if my above understanding is correct
> > that new tables can be added during the apply process then not sure
> > doing it at commit time is sufficient/correct because it can change
> > even during the transaction.
> >
> 
> One idea that may handle it cleanly is to check for SUBREL_STATE_SYNCDONE
> state in should_apply_changes_for_rel() and error out for apply_bg_worker().
> For the SUBREL_STATE_READY state, it should return true and for any other
> state, it can return false. The one advantage of this approach could be that the
> parallel apply worker will give an error only if the corresponding transaction
> has performed any operation on the relation that has reached the SYNCDONE
> state.
> OTOH, checking at each transaction end can also lead to erroring out of
> workers even if the parallel apply transaction doesn't perform any operation on
> the relation which is not in the READY state.

I agree that it would be better to check at should_apply_changes_for_rel().

In addition, I think we should report an error if the table is not in READY state,
otherwise return true.
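To illustrate the idea, here is a minimal standalone sketch (hypothetical names; parallel_should_apply is not the actual patch function, and the real code would raise an error via ereport() rather than return a code):

```c
#include <assert.h>

/*
 * State codes mirroring pg_subscription_rel's srsubstate characters
 * (simplified for this sketch).
 */
typedef enum
{
	SUBREL_STATE_INIT = 'i',
	SUBREL_STATE_DATASYNC = 'd',
	SUBREL_STATE_SYNCDONE = 's',
	SUBREL_STATE_READY = 'r'
} SubRelState;

/*
 * Return 1 if the change should be applied, 0 if it should be skipped,
 * and -1 to signal "error out and retry without a background worker".
 */
static int
parallel_should_apply(SubRelState state, int am_parallel_worker)
{
	if (state == SUBREL_STATE_READY)
		return 1;				/* safe in both worker types */

	/*
	 * A background worker cannot consult remote_final_lsn for a
	 * non-READY relation, so it cannot decide; report an error
	 * instead of silently skipping part of the transaction.
	 */
	if (am_parallel_worker)
		return -1;

	return 0;					/* the main apply worker just skips */
}
```

With this shape, the main apply worker keeps the existing skip behaviour, while a background worker fails fast instead of applying only part of a streamed transaction.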
Currently (on HEAD), if the table state is NOT READY, we
will skip all the changes related to the relation in a transaction, because we
invoke process_syncing_tables only at transaction end, which means the state of
the table won't change while applying a transaction.

But while the apply bgworker is applying the streaming transaction, the
main apply worker could have applied several normal transactions which could
change the state of the table several times (from INIT to READY). So, to prevent
the case where we skip part of the changes before the state comes to READY and
then start to apply the changes after READY during one transaction, we'd better
error out if the table is not in READY state and restart without an apply
background worker.

Best regards,
Hou zj
", "msg_date": "Thu, 11 Aug 2022 07:48:36 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, August 9, 2022 4:49 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:
> Dear Wang,
> 
> Thanks for updating patch sets! Followings are comments about v20-0001.
> 
> 1. config.sgml
> 
> ```
> <para>
> Specifies maximum number of logical replication workers. This includes
> both apply workers and table synchronization workers.
> </para>
> ```
> 
> I think you can add a description in the above paragraph, like
> \" This includes apply main workers, apply background workers, and table
> synchronization workers.\"

Changed.

> 2. 
logical-replication.sgml
> 
> 2.a Configuration Settings
> 
> ```
> <varname>max_logical_replication_workers</varname> must be set to at
> least
> the number of subscriptions, again plus some reserve for the table
> synchronization.
> ```
> 
> I think you can add a description in the above paragraph, like
> \"... the number of subscriptions, plus some reserve for the table
> synchronization
> and the streaming transaction.\"

I don't think it's necessary to add the number of streaming transactions here,
as it also works even if no worker is available for the apply bgworker, as
explained in the documentation of the streaming option.

> 2.b Monitoring
> 
> ```
> <para>
> Normally, there is a single apply process running for an enabled
> subscription. A disabled subscription or a crashed subscription will have
> zero rows in this view. If the initial data synchronization of any
> table is in progress, there will be additional workers for the tables
> being synchronized.
> </para>
> ```
> 
> I think you can add a sentence in the above paragraph, like
> \"... synchronized. Moreover, if the streaming transaction is applied parallelly,
> there will be additional workers\"

Added.

> 3. launcher.c
> 
> ```
> + /* Sanity check : we don't support table sync in subworker. */
> ```
> 
> I think \"Sanity check :\" should be \"Sanity check:\", per other files.

Changed.

> 4. worker.c
> 
> 4.a handle_streamed_transaction()
> 
> ```
> - /* not in streaming mode */
> - if (!in_streamed_transaction)
> + /* Not in streaming mode */
> + if (!(in_streamed_transaction || am_apply_bgworker()))
> ```
> 
> I think the comment should also mention about apply background worker case.

Added.

> 4.b handle_streamed_transaction()
> 
> ```
> - Assert(stream_fd != NULL);
> ```
> 
> I think this assersion seems reasonable in case of stream='on'.
> Could you revive it and move to later part in the function, like after
> subxact_info_add(current_xid)?

Added.

> 4.c apply_handle_prepare_internal()
> 
> ```
> * BeginTransactionBlock is necessary to balance the
> EndTransactionBlock
> * called within the PrepareTransactionBlock below.
> */
> - BeginTransactionBlock();
> + if (!IsTransactionBlock())
> + BeginTransactionBlock();
> +
> ```
> 
> I think the comment should be \"We must be in transaction block to balance...\".

Changed.

> 4.d apply_handle_stream_prepare()
> 
> ```
> - *
> - * Logic is in two parts:
> - * 1. Replay all the spooled operations
> - * 2. Mark the transaction as prepared
> */
> static void
> apply_handle_stream_prepare(StringInfo s)
> ```
> 
> I think these comments are useful when stream='on',
> so it should be moved to later part.

I think we already have similar comments in the later part.

> 5. applybgworker.c
> 
> 5.a apply_bgworker_setup()
> 
> ```
> + elog(DEBUG1, \"setting up apply worker #%u\",
> list_length(ApplyBgworkersList) + 1);
> ```
> 
> \"apply worker\" should be \"apply background worker\".
> 
> 5.b LogicalApplyBgwLoop()
> 
> ```
> + elog(DEBUG1, \"[Apply BGW #%u] ended
> processing streaming chunk,\"
> + \"waiting on shm_mq_receive\",
> shared->worker_id);
> ```
> 
> A blank is needed after comma. I checked serverlog, and the message
> outputed like:
> 
> ```
> [Apply BGW #1] ended processing streaming chunk,waiting on
> shm_mq_receive
> ```

Changed.

> 6.
> 
> When I started up the apply background worker and did `SELECT * from
> pg_stat_subscription`, I got following lines:
> 
> ```
> postgres=# select * from pg_stat_subscription;
> subid | subname | pid | relid | received_lsn | last_msg_send_time
> | last_msg_receipt_time | latest_end_lsn | latest_end
> _time
> -------+---------+-------+-------+--------------+----------------------------
> ---+-------------------------------+----------------+------------------
> -------------
> 16400 | sub | 22383 | | | -infinity |
> -infinity | | -infinity
> 16400 | sub | 22312 | | 0/6734740 | 2022-08-09
> 07:40:19.367676+00 | 2022-08-09 07:40:19.375455+00 | 0/6734740 |
> 2022-08-09 07:40:
> 19.367676+00
> (2 rows)
> ```
> 
> 
> 6.a
> 
> It seems that the upper line represents the apply background worker, but I
> think last_msg_send_time and last_msg_receipt_time should be null.
> Is it like initialization mistake?

Changed.

> ```
> $ ps aux | grep 22383
> ... postgres: logical replication apply background worker for subscription
> 16400
> ```
> 
> 6.b
> 
> Currently, the documentation doesn't clarify the method to determine the type
> of logical replication workers.
> Could you add descriptions about it?
> I think adding a column \"subworker\" is an alternative approach.

I am not quite sure whether it's necessary,
but I tried to add a new column (main_apply_pid) in a separate patch (0005).

Best regards,
Hou zj
", "msg_date": "Thu, 11 Aug 2022 07:49:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, August 11, 2022 3:48 PM houzj.fnst@fujitsu.com wrote: 
> 
> On Tuesday, August 9, 2022 7:00 PM Amit Kapila <amit.kapila16@gmail.com>
> wrote:
> >
> >
> > Review comments on
> > v20-0001-Perform-streaming-logical-transactions-by-backgr
> 
> Attach new version(v21) patch set which addressed all the comments received
> so far.
> 

Sorry, I didn't include the documentation changes. Here is the complete patch set.

Best regards,
Hou zj", "msg_date": "Thu, 11 Aug 2022 08:04:40 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, August 10, 2022 5:40 PM Peter Smith <smithpb2250@gmail.com> wrote:
> 
> Here are some review comments for the patch v20-0001:
> ======
> 
> 1. 
doc/src/sgml/catalogs.sgml
> 
> + <literal>p</literal> = apply changes directly using a background
> + worker, if available, otherwise, it behaves the same as 't'
> 
> The different char values 'f','t','p' are separated by comma (,) in
> the list, which is normal for the pgdocs AFAIK. However, because of
> this I don't think it is a good idea to use those other commas within
> the description for 'p', I suggest you remove those ones to avoid
> ambiguity with the separators.

Changed.

> ======
> 
> 2. doc/src/sgml/protocol.sgml
> 
> @@ -3096,7 +3096,7 @@ psql \"dbname=postgres replication=database\" -c
> \"IDENTIFY_SYSTEM;\"
> <listitem>
> <para>
> Protocol version. Currently versions <literal>1</literal>,
> <literal>2</literal>,
> - and <literal>3</literal> are supported.
> + <literal>3</literal> and <literal>4</literal> are supported.
> </para>
> 
> Put a comma after the penultimate value like it had before.
> 

Changed.

> ======
> 
> 3. src/backend/replication/logical/applybgworker.c - <general>
> 
> There are multiple function comments and other code comments in this
> file that are missing a terminating period (.)
> 
> ======
> 

Changed.

> 4. src/backend/replication/logical/applybgworker.c - apply_bgworker_start
> 
> +/*
> + * Try to get a free apply background worker.
> + *
> + * If there is at least one worker in the free list, then take one. Otherwise,
> + * try to start a new apply background worker. If successful, cache it in
> + * ApplyBgworkersHash keyed by the specified xid.
> + */
> +ApplyBgworkerState *
> +apply_bgworker_start(TransactionId xid)
> 
> SUGGESTION (for function comment)
> Return the apply background worker that will be used for the specified xid.
> 
> If an apply background worker is found in the free list then re-use
> it, otherwise start a fresh one. Cache the worker ApplyBgworkersHash
> keyed by the specified xid.
> 
> ~~~
> 

Changed.

> 5.
> 
> + /* Try to get a free apply background worker */
> + if (list_length(ApplyBgworkersFreeList) > 0)
> 
> if (list_length(ApplyBgworkersFreeList) > 0)
> 
> AFAIK a non-empty list is guaranteed to be not NIL, and an empty list
> is guaranteed to be NIL. So if you want to the above can simply be
> written as:
> 
> if (ApplyBgworkersFreeList)
> 

Both ways are fine to me, so I kept the current style.

> ~~~
> 
> 6. src/backend/replication/logical/applybgworker.c - apply_bgworker_find
> 
> +/*
> + * Try to look up worker assigned before (see function
> apply_bgworker_get_free)
> + * inside ApplyBgworkersHash for requested xid.
> + */
> +ApplyBgworkerState *
> +apply_bgworker_find(TransactionId xid)
> 
> SUGGESTION (for function comment)
> Find the worker previously assigned/cached for this xid. (see function
> apply_bgworker_start)
> 

Changed.

> ~~~
> 
> 7.
> 
> + Assert(status == APPLY_BGWORKER_BUSY);
> +
> + return entry->wstate;
> + }
> + else
> + return NULL;
> 
> IMO here it is better to just remove that 'else' and unconditionally
> return NULL at the end of this function.
> 

Changed.

> ~~~
> 
> 8. src/backend/replication/logical/applybgworker.c -
> apply_bgworker_subxact_info_add
> 
> + * Inside apply background worker we can figure out that new subtransaction
> was
> + * started if new change arrived with different xid. In that case we can define
> + * named savepoint, so that we were able to commit/rollback it separately
> + * later.
> + * Special case is if the first change comes from subtransaction, then
> + * we check that current_xid differs from stream_xid.
> + */
> +void
> +apply_bgworker_subxact_info_add(TransactionId current_xid)
> 
> It is not quite English. Can you improve it a bit?
> 
> SUGGESTION (maybe like this?)
> The apply background worker can figure out if a new subtransaction was
> started by checking if the new change arrived with different xid. In
> that case define a named savepoint, so that we are able to
> commit/rollback it separately later. A special case is when the first
> change comes from subtransaction – this is determined by checking if
> the current_xid differs from stream_xid.
> 

Changed.

> ======
> 
> 9. src/backend/replication/logical/launcher.c -
> WaitForReplicationWorkerAttach
> 
> + *
> + * Return false if the attach fails. Otherwise return true.
> */
> -static void
> +static bool
> WaitForReplicationWorkerAttach(LogicalRepWorker *worker,
> 
> Why not just say \"Return whether the attach was successful.\"
> 

Changed.

> ~~~
> 
> 10. src/backend/replication/logical/launcher.c - logicalrep_worker_stop
> 
> + /* Found the main worker, then try to stop it. */
> + if (worker)
> + logicalrep_worker_stop_internal(worker);
> 
> IMO the comment is kind of pointless because it only says what the
> code is clearly doing. If you really wanted to reinforce this worker
> is a main apply worker then you can do that with code like:
> 
> if (worker)
> {
> Assert(!worker->subworker);
> logicalrep_worker_stop_internal(worker);
> }
> 

Changed.

> ~~~
> 
> 11. src/backend/replication/logical/launcher.c - logicalrep_worker_detach
> 
> @@ -599,6 +632,29 @@ logicalrep_worker_attach(int slot)
> static void
> logicalrep_worker_detach(void)
> {
> + /*
> + * This is the main apply worker, stop all the apply background workers we
> + * started before.
> + */
> + if (!MyLogicalRepWorker->subworker)
> 
> SUGGESTION (for comment)
> This is the main apply worker. Stop all apply background workers
> previously started from here.
> 

Changed.

> ~~~
> 
> 12 src/backend/replication/logical/launcher.c -
> logicalrep_apply_bgworker_count
> 
> +/*
> + * Count the number of registered (not necessarily running) apply background
> + * workers for a subscription.
> + */
> +int
> +logicalrep_apply_bgworker_count(Oid subid)
> 
> SUGGESTION
> Count the number of registered (but not necessarily running) apply
> background workers for a subscription.
> 

Changed.

> ~~~
> 
> 13.
> 
> + /* Search for attached worker for a given subscription id. */
> + for (i = 0; i < max_logical_replication_workers; i++)
> 
> SUGGESTION
> Scan all attached apply background workers, only counting those which
> have the given subscription id.
> 

Changed.

> ======
> 
> 14. src/backend/replication/logical/worker.c - apply_error_callback
> 
> + {
> + if (errarg->remote_attnum < 0)
> + {
> + if (XLogRecPtrIsInvalid(errarg->finish_lsn))
> + errcontext(\"processing remote data for replication origin \\\"%s\\\"
> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction
> %u\",
> + errarg->origin_name,
> + logicalrep_message_type(errarg->command),
> + errarg->rel->remoterel.nspname,
> + errarg->rel->remoterel.relname,
> + errarg->remote_xid);
> + else
> + errcontext(\"processing remote data for replication origin \\\"%s\\\"
> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction
> %u finished at %X/%X\",
> + errarg->origin_name,
> + logicalrep_message_type(errarg->command),
> + errarg->rel->remoterel.nspname,
> + errarg->rel->remoterel.relname,
> + errarg->remote_xid,
> + LSN_FORMAT_ARGS(errarg->finish_lsn));
> + }
> + else
> + {
> + if (XLogRecPtrIsInvalid(errarg->finish_lsn))
> + errcontext(\"processing remote data for replication origin \\\"%s\\\"
> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"
> in transaction %u\",
> + errarg->origin_name,
> + logicalrep_message_type(errarg->command),
> + errarg->rel->remoterel.nspname,
> + errarg->rel->remoterel.relname,
> + errarg->rel->remoterel.attnames[errarg->remote_attnum],
> + errarg->remote_xid);
> + else
> + errcontext(\"processing remote data for replication origin \\\"%s\\\"
> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"
> in transaction %u finished at %X/%X\",
> + errarg->origin_name,
> + logicalrep_message_type(errarg->command),
> + errarg->rel->remoterel.nspname,
> + errarg->rel->remoterel.relname,
> + errarg->rel->remoterel.attnames[errarg->remote_attnum],
> + errarg->remote_xid,
> + LSN_FORMAT_ARGS(errarg->finish_lsn));
> + }
> + }
> 
> There is quite a lot of common code here:
> 
> \"processing remote data for replication origin \\\"%s\\\" during \\\"%s\\\"
> for replication target relation \\\"%s.%s\\\"
> 
> errarg->origin_name,
> logicalrep_message_type(errarg->command),
> errarg->rel->remoterel.nspname,
> errarg->rel->remoterel.relname,
> 
> Is it worth trying to extract that common part to keep this code
> shorter? E.g. It could be easily done just with some #defines
> 

I am not sure whether we have a clean way to change this; any suggestions?

> ======
> 
> 15. src/include/replication/worker_internal.h
> 
> + /* proto version of publisher. */
> + uint32 proto_version;
> 
> SUGGESTION
> Protocol version of publisher
> 
> ~~~
> 

Changed.

> 16.
> 
> + /* id of apply background worker */
> + uint32 worker_id;
> 
> Uppercase comment
> 

Changed.

> 
> 17.
> 
> +/*
> + * Struct for maintaining an apply background worker.
> + */
> +typedef struct ApplyBgworkerState
> 
> I'm not sure what this comment means. Perhaps there are some words missing?
> 

I renamed the struct to ApplyBgworkerInfo, which sounds better to me, and changed the comments.

Best regards,
Hou zj
", "msg_date": "Thu, 11 Aug 2022 08:06:07 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v20-0003:

(Sorry - the reviews are time consuming, so I am lagging slightly
behind the latest posted version)

======

1. <General>

1a.
There are a few comment modifications in this patch (e.g. changing
FROM \"in an apply background worker\" TO \"using an apply background
worker\"). e.g. 
I noticed lots of these in worker.c but they might be
in other files too.

Although these are good changes, these are just tweaks to new comments
introduced by patch 0001, so IMO such changes belong in that patch,
not in this one.

1b.
Actually, there are still some comments saying \"by an apply background
worker///\" and some saying \"using an apply background worker...\" and
some saying \"in the apply background worker...\". Maybe they are all
OK, but it will be better if all such can be searched and made to have
consistent wording.

======

2. Commit message

2a.

Without these restrictions, the following scenario may occur:
The apply background worker locks a row when processing a streaming transaction,
after that the main apply worker tries to lock the same row when processing
another transaction. At this time, the main apply worker waits for the
streaming transaction to complete and the lock to be released, it won't send
subsequent data of the streaming transaction to the apply background worker;
the apply background worker waits to receive the rest of streaming transaction
and can't finish this transaction. Then the main apply worker will wait
indefinitely.

\"The apply background worker lock a row\" -> \"The apply background worker locks a row\"

\"Then the main apply worker will wait indefinitely.\" -> really, you
already said the main apply worker is waiting, so I think this
sentence only needs to say: \"Now a deadlock has occurred, so both
workers will wait indefinitely.\"

2b.

Text fragments are all common between:

i. This commit message
ii. Text in pgdocs CREATE SUBSCRIPTION
iii. Function comment for 'logicalrep_rel_mark_parallel_apply' in relation.c

After addressing other review comments please make sure all those 3
parts are worded the same.

======

3. 
doc/src/sgml/ref/create_subscription.sgml\n\n+ There are two requirements for using <literal>parallel</literal>\n+ mode: 1) the unique column in the table on the subscriber-side should\n+ also be the unique column on the publisher-side; 2) there cannot be\n+ any non-immutable functions used by the subscriber-side replicated\n+ table.\n\n3a.\nI am not sure – is \"requirements\" the correct word here, or maybe it\nshould be \"prerequisites\".\n\n3b.\nIs it correct to say \"should also be\", or should that say \"must also be\"?\n\n======\n\n4. src/backend/replication/logical/applybgworker.c -\napply_bgworker_relation_check\n\n+ /*\n+ * Skip check if not using apply background workers.\n+ *\n+ * If any worker is handling the streaming transaction, this check needs to\n+ * be performed not only in the apply background worker, but also in the\n+ * main apply worker. This is because without these restrictions, main\n+ * apply worker may block apply background worker, which will cause\n+ * infinite waits.\n+ */\n+ if (!am_apply_bgworker() &&\n+ (list_length(ApplyBgworkersFreeList) == list_length(ApplyBgworkersList)))\n+ return;\n\nI struggled a bit to reconcile the comment with the condition. Is the\n!am_apply_bgworker() part of this even needed – isn't the\nlist_length() check enough?\n\n~~~\n\n5.\n\n+ /* We are in error mode and should give user correct error. */\n\nI still [1, #3.4a] don't see the value in saying \"should give correct\nerror\" (e.g. what's the alternative?).\n\nMaybe instead of that comment it can just say:\nrel->parallel_apply = PARALLEL_APPLY_UNSAFE;\n\n======\n\n6. src/backend/replication/logical/proto.c - RelationGetUniqueKeyBitmap\n\n+ /* Add referenced attributes to idindexattrs */\n+ for (i = 0; i < indexRel->rd_index->indnatts; i++)\n+ {\n+ int attrnum = indexRel->rd_index->indkey.values[i];\n+\n+ /*\n+ * We don't include non-key columns into idindexattrs\n+ * bitmaps. 
See RelationGetIndexAttrBitmap.\n+ */\n+ if (attrnum != 0)\n+ {\n+ if (i < indexRel->rd_index->indnkeyatts &&\n+ !bms_is_member(attrnum - FirstLowInvalidHeapAttributeNumber, attunique))\n+ attunique = bms_add_member(attunique,\n+ attrnum - FirstLowInvalidHeapAttributeNumber);\n+ }\n+ }\n\nThere are 2x comments in that code that are referring to\n'idindexattrs' but I think it is a cut/paste problem because that\nvariable name does not even exist in this copied function.\n\n======\n\n7. src/backend/replication/logical/relation.c -\nlogicalrep_rel_mark_parallel_apply\n\n+ /* Initialize the flag. */\n+ entry->parallel_apply = PARALLEL_APPLY_SAFE;\n\nI have unsuccessfully repeated the same review comment several times\n[1 #3.8] suggesting that this flag should not be initialized to SAFE.\nIMO the state should remain as UNKNOWN until you are either sure it is\nSAFE, or sure it is UNSAFE. Anyway, I'll give up on this point now;\nlet's see what other people think.\n\n======\n\n8. src/include/replication/logicalrelation.h\n\n+/*\n+ * States to determine if changes on one relation can be applied using an\n+ * apply background worker.\n+ */\n+typedef enum ParallelApplySafety\n+{\n+ PARALLEL_APPLY_UNKNOWN = 0, /* unknown */\n+ PARALLEL_APPLY_SAFE, /* Can apply changes using an apply background\n+ worker */\n+ PARALLEL_APPLY_UNSAFE /* Can not apply changes using an apply\n+ background worker */\n+} ParallelApplySafety;\n+\n\nI think the values are self-explanatory so the comments for every\nvalue add nothing here, particularly since the enum itself has a\ncomment saying the same thing. I'm not sure if you accidentally missed\nmy previous comment [1, #3.12b] about this, or just did not agree with\nit.\n\n======\n\n9. 
.../subscription/t/015_stream.pl\n\n+# \"streaming = parallel\" does not support non-immutable functions, so change\n+# the function in the defult expression of column \"c\".\n+$node_subscriber->safe_psql(\n+ 'postgres', qq{\n+ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT to_timestamp(1284352323);\n+ALTER SUBSCRIPTION tap_sub SET(streaming = parallel, binary = off);\n+});\n\n9a.\ntypo \"defult\"\n\n9b.\nThe problem with to_timestamp(1284352323) is that it looks like it\nmust be some special value, but in fact AFAIK you don't care at all\nwhat value timestamp this is. I think it would be better here to just\nuse to_timestamp(0) or to_timestamp(999) or similar so the number is\nobviously not something of importance.\n\n======\n\n10. .../subscription/t/016_stream.pl\n\n+# \"streaming = parallel\" does not support non-immutable functions, so change\n+# the function in the defult expression of column \"c\".\n+$node_subscriber->safe_psql(\n+ 'postgres', qq{\n+ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT to_timestamp(1284352323);\n+ALTER SUBSCRIPTION tap_sub SET(streaming = parallel);\n+});\n\n10a. Ditto 9a.\n10b. Ditto 9b.\n\n======\n\n11. .../subscription/t/022_twophase_cascade.pl\n\n+# \"streaming = parallel\" does not support non-immutable functions, so change\n+# the function in the defult expression of column \"c\".\n+$node_B->safe_psql(\n+ 'postgres', \"ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\nto_timestamp(1284352323);\");\n+$node_C->safe_psql(\n+ 'postgres', \"ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\nto_timestamp(1284352323);\");\n+\n\n11a. Ditto 9a.\n11b. Ditto 9b.\n\n======\n\n12. 
.../subscription/t/023_twophase_stream.pl\n\n+# \"streaming = parallel\" does not support non-immutable functions, so change\n+# the function in the defult expression of column \"c\".\n+$node_subscriber->safe_psql(\n+ 'postgres', qq{\n+ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT to_timestamp(1284352323);\n+ALTER SUBSCRIPTION tap_sub SET(streaming = parallel);\n+});\n\n12a. Ditto 9a.\n12b. Ditto 9b.\n\n======\n\n13. .../subscription/t/032_streaming_apply.pl\n\n+# Drop default value on the subscriber, now it works.\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER TABLE test_tab1 ALTER COLUMN b DROP DEFAULT\");\n\nMaybe for these tests like this it would be better to test if it works\nOK using an immutable DEFAULT function instead of just completely\nremoving the bad function to make it work.\n\nI think maybe the same was done for TRIGGER tests. There was a test\nfor a trigger with a bad function, and then the trigger was removed.\nWhat about including a test for the trigger with a good function?\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPv9cKurDQHtk-ygYp45-8LYdE%3D4sMZY-8UmbeDTGgECVg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 12 Aug 2022 16:46:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v20-0004:\n\n(This completes my reviews of the v20* patch set. Sorry, the reviews\nare time consuming, so I am lagging slightly behind the latest posted\nversion)\n\n======\n\n1. 
doc/src/sgml/ref/create_subscription.sgml\n\n@@ -245,6 +245,11 @@ CREATE SUBSCRIPTION <replaceable\nclass=\"parameter\">subscription_name</replaceabl\n also be the unique column on the publisher-side; 2) there cannot be\n any non-immutable functions used by the subscriber-side replicated\n table.\n+ When applying a streaming transaction, if either requirement is not\n+ met, the background worker will exit with an error.\n+ The <literal>parallel</literal> mode is disregarded when retrying;\n+ instead the transaction will be applied using <literal>on</literal>\n+ mode.\n </para>\n\nThe \"on mode\" still sounds strange to me. Maybe it's just my personal\nopinion, but I don’t really consider 'on' and 'off' to be \"modes\".\nAnyway I already posted the same comment several times before [1,\n#4.3]. Let's see what others think.\n\nSUGGESTION\n\"using on mode\" -> \"using streaming = on\"\n\n======\n\n2. src/backend/replication/logical/worker.c - start_table_sync\n\n@@ -3902,20 +3925,28 @@ start_table_sync(XLogRecPtr *origin_startpos,\nchar **myslotname)\n }\n PG_CATCH();\n {\n+ /*\n+ * Emit the error message, and recover from the error state to an idle\n+ * state\n+ */\n+ HOLD_INTERRUPTS();\n+\n+ EmitErrorReport();\n+ AbortOutOfAnyTransaction();\n+ FlushErrorState();\n+\n+ RESUME_INTERRUPTS();\n+\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MySubscription->oid, false);\n+\n if (MySubscription->disableonerr)\n- DisableSubscriptionAndExit();\n- else\n- {\n- /*\n- * Report the worker failed during table synchronization. Abort\n- * the current transaction so that the stats message is sent in an\n- * idle state.\n- */\n- AbortOutOfAnyTransaction();\n- pgstat_report_subscription_error(MySubscription->oid, false);\n+ DisableSubscriptionOnError();\n\n- PG_RE_THROW();\n- }\n+ /* Set the retry flag. 
*/\n+ set_subscription_retry(true);\n+\n+ proc_exit(0);\n }\n PG_END_TRY();\n\nPerhaps current code is OK, but I am not 100% sure if we should set\nthe retry flag when the disable_on_error is set, because the\nsubscription is not going to be retried (because it is disabled). And\nlater, if/when the user does enable the subscription, presumably that\nwill be after they have already addressed the problem that caused the\nerror/disablement in the first place.\n\n~~~\n\n3. src/backend/replication/logical/worker.c - start_apply\n\n PG_CATCH();\n {\n+ /*\n+ * Emit the error message, and recover from the error state to an idle\n+ * state\n+ */\n+ HOLD_INTERRUPTS();\n+\n+ EmitErrorReport();\n+ AbortOutOfAnyTransaction();\n+ FlushErrorState();\n+\n+ RESUME_INTERRUPTS();\n+\n+ /* Report the worker failed while applying changes */\n+ pgstat_report_subscription_error(MySubscription->oid,\n+ !am_tablesync_worker());\n+\n if (MySubscription->disableonerr)\n- DisableSubscriptionAndExit();\n- else\n- {\n- /*\n- * Report the worker failed while applying changes. Abort the\n- * current transaction so that the stats message is sent in an\n- * idle state.\n- */\n- AbortOutOfAnyTransaction();\n- pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n+ DisableSubscriptionOnError();\n\n- PG_RE_THROW();\n- }\n+ /* Set the retry flag. */\n+ set_subscription_retry(true);\n }\n PG_END_TRY();\n }\n\n3a.\nSame comment as #2\n\n3b.\nThis PG_CATCH used to leave by either proc_exit(0) or PG_RE_THROW but\nwhat does it do now? My first impression is there is a bug here due to\nsome missing code, because AFAICT the exception is caught and gobbled\nup and then what...?\n\n~~~\n\n4. 
src/backend/replication/logical/worker.c - set_subscription_retry\n\n+ if (MySubscription->retry == retry ||\n+ am_apply_bgworker())\n+ return;\n\n4a.\nI think this quick exit can be split and given some appropriate comments\n\nSUGGESTION (for example)\n/* Fast path - if no state change then nothing to do */\nif (MySubscription->retry == retry)\nreturn;\n\n/* Fast path - skip for apply background workers */\nif (am_apply_bgworker())\nreturn;\n\n======\n\n5. .../subscription/t/032_streaming_apply.pl\n\n@@ -78,9 +78,13 @@ my $timer =\nIPC::Run::timeout($PostgreSQL::Test::Utils::timeout_default);\n my $h = $node_publisher->background_psql('postgres', \\$in, \\$out, $timer,\n on_error_stop => 0);\n\n+# ============================================================================\n\nAll those comment highlighting lines like \"# ==============\" really\nbelong in the earlier patch (0003 ?) when this TAP test file was\nintroduced.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPvrw%2BtgCEYGxv%2BnKrqg-zbJdYEXee6o4irPAsYoXcuUcw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 12 Aug 2022 19:21:51 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, August 12, 2022 12:46 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v20-0003:\r\n> \r\n> (Sorry - the reviews are time consuming, so I am lagging slightly\r\n> behind the latest posted version)\r\n\r\nThanks for your comments.\r\n\r\n> 1. <General>\r\n> \r\n> 1a.\r\n> There are a few comment modifications in this patch (e.g. changing\r\n> FROM \"in an apply background worker\" TO \"using an apply background\r\n> worker\"). e.g. 
I noticed lots of these in worker.c but they might be\r\n> in other files too.\r\n> \r\n> Although these are good changes, these are just tweaks to new comments\r\n> introduced by patch 0001, so IMO such changes belong in that patch,\r\n> not in this one.\r\n> \r\n> 1b.\r\n> Actually, there are still some comments says \"by an apply background\r\n> worker///\" and some saying \"using an apply background worker...\" and\r\n> some saying \"in the apply background worker...\". Maybe they are all\r\n> OK, but it will be better if all such can be searched and made to have\r\n> consistent wording\r\n\r\nImproved.\r\n\r\n> 2. Commit message\r\n> \r\n> 2a.\r\n> \r\n> Without these restrictions, the following scenario may occur:\r\n> The apply background worker lock a row when processing a streaming\r\n> transaction,\r\n> after that the main apply worker tries to lock the same row when processing\r\n> another transaction. At this time, the main apply worker waits for the\r\n> streaming transaction to complete and the lock to be released, it won't send\r\n> subsequent data of the streaming transaction to the apply background worker;\r\n> the apply background worker waits to receive the rest of streaming transaction\r\n> and can't finish this transaction. Then the main apply worker will wait\r\n> indefinitely.\r\n> \r\n> \"background worker lock a row\" -> \"background worker locks a row\"\r\n> \r\n> \"Then the main apply worker will wait indefinitely.\" -> really, you\r\n> already said the main apply worker is waiting, so I think this\r\n> sentence only needs to say: \"Now a deadlock has occurred, so both\r\n> workers will wait indefinitely.\"\r\n> \r\n> 2b.\r\n> \r\n> Text fragments are all common between:\r\n> \r\n> i. This commit message\r\n> ii. Text in pgdocs CREATE SUBSCRIPTION\r\n> iii. 
Function comment for 'logicalrep_rel_mark_parallel_apply' in relation.c\r\n> \r\n> After addressing other review comments please make sure all those 3\r\n> parts are worded same.\r\n\r\nImproved.\r\n\r\n> 3. doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> + There are two requirements for using <literal>parallel</literal>\r\n> + mode: 1) the unique column in the table on the subscriber-side should\r\n> + also be the unique column on the publisher-side; 2) there cannot be\r\n> + any non-immutable functions used by the subscriber-side replicated\r\n> + table.\r\n> \r\n> 3a.\r\n> I am not sure – is \"requirements\" the correct word here, or maybe it\r\n> should be \"prerequisites\".\r\n> \r\n> 3b.\r\n> Is it correct to say \"should also be\", or should that say \"must also be\"?\r\n\r\nImproved.\r\n\r\n> 4. src/backend/replication/logical/applybgworker.c -\r\n> apply_bgworker_relation_check\r\n> \r\n> + /*\r\n> + * Skip check if not using apply background workers.\r\n> + *\r\n> + * If any worker is handling the streaming transaction, this check needs to\r\n> + * be performed not only in the apply background worker, but also in the\r\n> + * main apply worker. This is because without these restrictions, main\r\n> + * apply worker may block apply background worker, which will cause\r\n> + * infinite waits.\r\n> + */\r\n> + if (!am_apply_bgworker() &&\r\n> + (list_length(ApplyBgworkersFreeList) == list_length(ApplyBgworkersList)))\r\n> + return;\r\n> \r\n> I struggled a bit to reconcile the comment with the condition. Is the\r\n> !am_apply_bgworker() part of this even needed – isn't the\r\n> list_length() check enough?\r\n\r\nWe need to check this for apply bgworker. (Both lists are \"NIL\" in apply\r\nbgworker.)\r\n\r\n> 5.\r\n> \r\n> + /* We are in error mode and should give user correct error. */\r\n> \r\n> I still [1, #3.4a] don't see the value in saying \"should give correct\r\n> error\" (e.g. 
what's the alternative?).\r\n> \r\n> Maybe instead of that comment it can just say:\r\n> rel->parallel_apply = PARALLEL_APPLY_UNSAFE;\r\n\r\nI changed if-statement to report the error:\r\nIf 'parallel_apply' isn't 'PARALLEL_APPLY_SAFE', then report the error.\r\n\r\n> 6. src/backend/replication/logical/proto.c - RelationGetUniqueKeyBitmap\r\n> \r\n> + /* Add referenced attributes to idindexattrs */\r\n> + for (i = 0; i < indexRel->rd_index->indnatts; i++)\r\n> + {\r\n> + int attrnum = indexRel->rd_index->indkey.values[i];\r\n> +\r\n> + /*\r\n> + * We don't include non-key columns into idindexattrs\r\n> + * bitmaps. See RelationGetIndexAttrBitmap.\r\n> + */\r\n> + if (attrnum != 0)\r\n> + {\r\n> + if (i < indexRel->rd_index->indnkeyatts &&\r\n> + !bms_is_member(attrnum - FirstLowInvalidHeapAttributeNumber, attunique))\r\n> + attunique = bms_add_member(attunique,\r\n> + attrnum - FirstLowInvalidHeapAttributeNumber);\r\n> + }\r\n> + }\r\n> \r\n> There are 2x comments in that code that are referring to\r\n> 'idindexattrs' but I think it is a cut/paste problem because that\r\n> variable name does not even exist in this copied function.\r\n\r\nFixed the comments.\r\n\r\n> 7. src/backend/replication/logical/relation.c -\r\n> logicalrep_rel_mark_parallel_apply\r\n> \r\n> + /* Initialize the flag. */\r\n> + entry->parallel_apply = PARALLEL_APPLY_SAFE;\r\n> \r\n> I have unsuccessfully repeated the same review comment several times\r\n> [1 #3.8] suggesting that this flag should not be initialized to SAFE.\r\n> IMO the state should remain as UNKNOWN until you are either sure it is\r\n> SAFE, or sure it is UNSAFE. Anyway, I'll give up on this point now;\r\n> let's see what other people think.\r\n\r\nOkay, I will follow the relevant comments later.\r\n\r\n> 8. 
src/include/replication/logicalrelation.h\r\n> \r\n> +/*\r\n> + * States to determine if changes on one relation can be applied using an\r\n> + * apply background worker.\r\n> + */\r\n> +typedef enum ParallelApplySafety\r\n> +{\r\n> + PARALLEL_APPLY_UNKNOWN = 0, /* unknown */\r\n> + PARALLEL_APPLY_SAFE, /* Can apply changes using an apply background\r\n> + worker */\r\n> + PARALLEL_APPLY_UNSAFE /* Can not apply changes using an apply\r\n> + background worker */\r\n> +} ParallelApplySafety;\r\n> +\r\n> \r\n> I think the values are self-explanatory so the comments for every\r\n> value add nothing here, particularly since the enum itself has a\r\n> comment saying the same thing. I'm not sure if you accidentally missed\r\n> my previous comment [1, #3.12b] about this, or just did not agree with\r\n> it.\r\n\r\nChanged.\r\n\r\n> 9. .../subscription/t/015_stream.pl\r\n> \r\n> +# \"streaming = parallel\" does not support non-immutable functions, so change\r\n> +# the function in the defult expression of column \"c\".\r\n> +$node_subscriber->safe_psql(\r\n> + 'postgres', qq{\r\n> +ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\r\n> to_timestamp(1284352323);\r\n> +ALTER SUBSCRIPTION tap_sub SET(streaming = parallel, binary = off);\r\n> +});\r\n> \r\n> 9a.\r\n> typo \"defult\"\r\n> \r\n> 9b.\r\n> The problem with to_timestamp(1284352323) is that it looks like it\r\n> must be some special value, but in fact AFAIK you don't care at all\r\n> what value timestamp this is. I think it would be better here to just\r\n> use to_timestamp(0) or to_timestamp(999) or similar so the number is\r\n> obviously not something of importance.\r\n> \r\n> ======\r\n> \r\n> 10. 
.../subscription/t/016_stream.pl\r\n> \r\n> +# \"streaming = parallel\" does not support non-immutable functions, so change\r\n> +# the function in the defult expression of column \"c\".\r\n> +$node_subscriber->safe_psql(\r\n> + 'postgres', qq{\r\n> +ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\r\n> to_timestamp(1284352323);\r\n> +ALTER SUBSCRIPTION tap_sub SET(streaming = parallel);\r\n> +});\r\n> \r\n> 10a. Ditto 9a.\r\n> 10b. Ditto 9b.\r\n> \r\n> ======\r\n> \r\n> 11. .../subscription/t/022_twophase_cascade.pl\r\n> \r\n> +# \"streaming = parallel\" does not support non-immutable functions, so change\r\n> +# the function in the defult expression of column \"c\".\r\n> +$node_B->safe_psql(\r\n> + 'postgres', \"ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\r\n> to_timestamp(1284352323);\");\r\n> +$node_C->safe_psql(\r\n> + 'postgres', \"ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\r\n> to_timestamp(1284352323);\");\r\n> +\r\n> \r\n> 11a. Ditto 9a.\r\n> 11b. Ditto 9b.\r\n> \r\n> ======\r\n> \r\n> 12. .../subscription/t/023_twophase_stream.pl\r\n> \r\n> +# \"streaming = parallel\" does not support non-immutable functions, so change\r\n> +# the function in the defult expression of column \"c\".\r\n> +$node_subscriber->safe_psql(\r\n> + 'postgres', qq{\r\n> +ALTER TABLE test_tab ALTER COLUMN c SET DEFAULT\r\n> to_timestamp(1284352323);\r\n> +ALTER SUBSCRIPTION tap_sub SET(streaming = parallel);\r\n> +});\r\n> \r\n> 12a. Ditto 9a.\r\n> 12b. Ditto 9b.\r\n\r\nImproved.\r\n\r\n> 13. .../subscription/t/032_streaming_apply.pl\r\n> \r\n> +# Drop default value on the subscriber, now it works.\r\n> +$node_subscriber->safe_psql('postgres',\r\n> + \"ALTER TABLE test_tab1 ALTER COLUMN b DROP DEFAULT\");\r\n> \r\n> Maybe for these tests like this it would be better to test if it works\r\n> OK using an immutable DEFAULT function instead of just completely\r\n> removing the bad function to make it work.\r\n> \r\n> I think maybe the same was done for TRIGGER tests. 
There was a test\r\n> for a trigger with a bad function, and then the trigger was removed.\r\n> What about including a test for the trigger with a good function?\r\n\r\nImproved.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 16 Aug 2022 07:33:04 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, August 12, 2022 17:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v20-0004:\r\n> \r\n> (This completes my reviews of the v20* patch set. Sorry, the reviews\r\n> are time consuming, so I am lagging slightly behind the latest posted\r\n> version)\r\n\r\nThanks for your comments.\r\n\r\n> 1. doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> @@ -245,6 +245,11 @@ CREATE SUBSCRIPTION <replaceable\r\n> class=\"parameter\">subscription_name</replaceabl\r\n> also be the unique column on the publisher-side; 2) there cannot be\r\n> any non-immutable functions used by the subscriber-side replicated\r\n> table.\r\n> + When applying a streaming transaction, if either requirement is not\r\n> + met, the background worker will exit with an error.\r\n> + The <literal>parallel</literal> mode is disregarded when retrying;\r\n> + instead the transaction will be applied using <literal>on</literal>\r\n> + mode.\r\n> </para>\r\n> \r\n> The \"on mode\" still sounds strange to me. Maybe it's just my personal\r\n> opinion, but I don’t really consider 'on' and 'off' to be \"modes\".\r\n> Anyway I already posted the same comment several times before [1,\r\n> #4.3]. Let's see what others think.\r\n> \r\n> SUGGESTION\r\n> \"using on mode\" -> \"using streaming = on\"\r\n\r\nOkay, I will follow the relevant comments later.\r\n\r\n> 2. 
src/backend/replication/logical/worker.c - start_table_sync\r\n> \r\n> @@ -3902,20 +3925,28 @@ start_table_sync(XLogRecPtr *origin_startpos,\r\n> char **myslotname)\r\n> }\r\n> PG_CATCH();\r\n> {\r\n> + /*\r\n> + * Emit the error message, and recover from the error state to an idle\r\n> + * state\r\n> + */\r\n> + HOLD_INTERRUPTS();\r\n> +\r\n> + EmitErrorReport();\r\n> + AbortOutOfAnyTransaction();\r\n> + FlushErrorState();\r\n> +\r\n> + RESUME_INTERRUPTS();\r\n> +\r\n> + /* Report the worker failed during table synchronization */\r\n> + pgstat_report_subscription_error(MySubscription->oid, false);\r\n> +\r\n> if (MySubscription->disableonerr)\r\n> - DisableSubscriptionAndExit();\r\n> - else\r\n> - {\r\n> - /*\r\n> - * Report the worker failed during table synchronization. Abort\r\n> - * the current transaction so that the stats message is sent in an\r\n> - * idle state.\r\n> - */\r\n> - AbortOutOfAnyTransaction();\r\n> - pgstat_report_subscription_error(MySubscription->oid, false);\r\n> + DisableSubscriptionOnError();\r\n> \r\n> - PG_RE_THROW();\r\n> - }\r\n> + /* Set the retry flag. */\r\n> + set_subscription_retry(true);\r\n> +\r\n> + proc_exit(0);\r\n> }\r\n> PG_END_TRY();\r\n> \r\n> Perhaps current code is OK, but I am not 100% sure if we should set\r\n> the retry flag when the disable_on_error is set, because the\r\n> subscription is not going to be retried (because it is disabled). And\r\n> later, if/when the user does enable the subscription, presumably that\r\n> will be after they have already addressed the problem that caused the\r\n> error/disablement in the first place.\r\n\r\nI think it is okay. Because even after addressing the problem, it is also\r\n*retrying* to apply the failed transaction. And, in the worst case, it just\r\napplies the first failed streaming transaction using \"on\" mode instead of\r\n\"parallel\" mode.\r\n\r\n> 3. 
src/backend/replication/logical/worker.c - start_apply\r\n> \r\n> PG_CATCH();\r\n> {\r\n> + /*\r\n> + * Emit the error message, and recover from the error state to an idle\r\n> + * state\r\n> + */\r\n> + HOLD_INTERRUPTS();\r\n> +\r\n> + EmitErrorReport();\r\n> + AbortOutOfAnyTransaction();\r\n> + FlushErrorState();\r\n> +\r\n> + RESUME_INTERRUPTS();\r\n> +\r\n> + /* Report the worker failed while applying changes */\r\n> + pgstat_report_subscription_error(MySubscription->oid,\r\n> + !am_tablesync_worker());\r\n> +\r\n> if (MySubscription->disableonerr)\r\n> - DisableSubscriptionAndExit();\r\n> - else\r\n> - {\r\n> - /*\r\n> - * Report the worker failed while applying changes. Abort the\r\n> - * current transaction so that the stats message is sent in an\r\n> - * idle state.\r\n> - */\r\n> - AbortOutOfAnyTransaction();\r\n> - pgstat_report_subscription_error(MySubscription-\r\n> >oid, !am_tablesync_worker());\r\n> + DisableSubscriptionOnError();\r\n> \r\n> - PG_RE_THROW();\r\n> - }\r\n> + /* Set the retry flag. */\r\n> + set_subscription_retry(true);\r\n> }\r\n> PG_END_TRY();\r\n> }\r\n> \r\n> 3a.\r\n> Same comment as #2\r\n> \r\n> 3b.\r\n> This PG_CATCH used to leave by either proc_exit(0) or PG_RE_THROW but\r\n> what does it do now? My first impression is there is a bug here due to\r\n> some missing code, because AFAICT the exception is caught and gobbled\r\n> up and then what...?\r\n\r\n=>3a.\r\nSee the reply to #2.\r\n=>3b.\r\nThe function `proc_exit(0)` is invoked after invoking function start_apply. See\r\nfunction ApplyWorkerMain.\r\n\r\n> 4. 
src/backend/replication/logical/worker.c - set_subscription_retry\r\n> \r\n> + if (MySubscription->retry == retry ||\r\n> + am_apply_bgworker())\r\n> + return;\r\n> \r\n> 4a.\r\n> I this this quick exit can be split and given some appropriate comments\r\n> \r\n> SUGGESTION (for example)\r\n> /* Fast path - if no state change then nothing to do */\r\n> if (MySubscription->retry == retry)\r\n> return;\r\n> \r\n> /* Fast path - skip for apply background workers */\r\n> if (am_apply_bgworker())\r\n> return;\r\n\r\nChanged.\r\n\r\n> 5. .../subscription/t/032_streaming_apply.pl\r\n> \r\n> @@ -78,9 +78,13 @@ my $timer =\r\n> IPC::Run::timeout($PostgreSQL::Test::Utils::timeout_default);\r\n> my $h = $node_publisher->background_psql('postgres', \\$in, \\$out, $timer,\r\n> on_error_stop => 0);\r\n> \r\n> +#\r\n> =============================================================\r\n> ===============\r\n> \r\n> All those comment highlighting lines like \"# ==============\" really\r\n> belong in the earlier patch (0003 ?) when this TAP test file was\r\n> introduced.\r\n\r\nChanged.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275739E73E8BEC5D13FB6739E6B9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Tue, 16 Aug 2022 07:37:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, August 16, 2022 15:33 PM I wrote:\r\n> Attach the new patches.\r\n\r\nI found that cfbot has a failure.\r\nAfter investigation, I think it is because the worker's exit state is not set\r\ncorrectly. 
So I made some slight modifications.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 17 Aug 2022 06:28:21 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Aug 17, 2022 2:28 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tues, August 16, 2022 15:33 PM I wrote:\r\n> > Attach the new patches.\r\n> \r\n> I found that cfbot has a failure.\r\n> After investigation, I think it is because the worker's exit state is not set\r\n> correctly. So I made some slight modifications.\r\n> \r\n> Attach the new patches.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments.\r\n\r\n0003 patch\r\n==============\r\n1. src/backend/replication/logical/applybgworker.c\r\n+\t\tereport(ERROR,\r\n+\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+\t\t\t\t errmsg(\"cannot replicate target relation \\\"%s.%s\\\" using \"\r\n+\t\t\t\t\t\t\"subscription parameter streaming=parallel\",\r\n+\t\t\t\t\t\trel->remoterel.nspname, rel->remoterel.relname),\r\n+\t\t\t\t errdetail(\"The unique column on subscriber is not the unique \"\r\n+\t\t\t\t\t\t \"column on publisher or there is at least one \"\r\n+\t\t\t\t\t\t \"non-immutable function.\"),\r\n+\t\t\t\t errhint(\"Please change to use subscription parameter \"\r\n+\t\t\t\t\t\t \"streaming=on.\")));\r\n\r\nShould we use \"%s\" instead of \"streaming=parallel\" and \"streaming=on\"?\r\n\r\n2. src/backend/replication/logical/applybgworker.c\r\n+\t * If any worker is handling the streaming transaction, this check needs to\r\n+\t * be performed not only using the apply background worker, but also in the\r\n+\t * main apply worker. 
This is because without these restrictions, main\r\n\r\nthis check needs to be performed not only using the apply background worker, but\r\nalso in the main apply worker.\r\n->\r\nthis check not only needs to be performed by apply background worker, but also\r\nby the main apply worker\r\n\r\n3. src/backend/replication/logical/relation.c\r\n+\tif (ukey)\r\n+\t{\r\n+\t\ti = -1;\r\n+\t\twhile ((i = bms_next_member(ukey, i)) >= 0)\r\n+\t\t{\r\n+\t\t\tattnum = AttrNumberGetAttrOffset(i + FirstLowInvalidHeapAttributeNumber);\r\n+\r\n+\t\t\tif (entry->attrmap->attnums[attnum] < 0 ||\r\n+\t\t\t\t!bms_is_member(entry->attrmap->attnums[attnum], entry->remoterel.attunique))\r\n+\t\t\t{\r\n+\t\t\t\tentry->parallel_apply = PARALLEL_APPLY_UNSAFE;\r\n+\t\t\t\treturn;\r\n+\t\t\t}\r\n+\t\t}\r\n+\r\n+\t\tbms_free(ukey);\r\n\r\nIt looks we need to call bms_free() before return, right?\r\n\r\n4. src/backend/replication/logical/relation.c\r\n+\t\t/* We don't need info for dropped or generated attributes */\r\n+\t\tif (att->attisdropped || att->attgenerated)\r\n+\t\t\tcontinue;\r\n\r\nWould it be better to change the comment to:\r\nWe don't check dropped or generated attributes\r\n\r\n5. src/test/subscription/t/032_streaming_apply.pl\r\n+$node_publisher->wait_for_catchup($appname);\r\n+\r\n+# Then we check the foreign key on partition table.\r\n+$node_publisher->wait_for_catchup($appname);\r\n\r\nHere, wait_for_catchup() is called twice, we can remove the second one.\r\n\r\n6. src/backend/replication/logical/applybgworker.c\r\n+\t\t/* If any workers (or the postmaster) have died, we have failed. 
*/\r\n+\t\tif (status == APPLY_BGWORKER_EXIT)\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+\t\t\t\t\t errmsg(\"background worker %u failed to apply transaction %u\",\r\n+\t\t\t\t\t\t\tentry->wstate->shared->worker_id,\r\n+\t\t\t\t\t\t\tentry->wstate->shared->stream_xid)));\r\n\r\nShould we change the error message to \"apply background worker %u failed to\r\napply transaction %u\" ? To be consistent with the error message in\r\napply_bgworker_wait_for().\r\n\r\n0004 patch\r\n==============\r\n1.\r\nI saw that the commit message says:\r\nIf the subscriber exits with an error, this flag will be set true, and\r\nwhenever the transaction is applied successfully, this flag is reset false.\r\n\r\n\"subretry\" is set to false if a transaction is applied successfully, it looks\r\nsimilar to what clear_subscription_skip_lsn() does, so maybe we should remove\r\nthe following change in apply_handle_stream_abort()? Or only call\r\nset_subscription_retry() when rollbacking the toplevel transaction.\r\n\r\n@@ -1671,6 +1688,9 @@ apply_handle_stream_abort(StringInfo s)\r\n \t\t\t */\r\n \t\t\tserialize_stream_abort(xid, subxid);\r\n \t\t}\r\n+\r\n+\t\t/* Reset the retry flag. */\r\n+\t\tset_subscription_retry(false);\r\n \t}\r\n \r\n \treset_apply_error_context_info();\r\n\r\n2. 
src/backend/replication/logical/worker.c\r\n+\t/* Reset subretry */\r\n+\tvalues[Anum_pg_subscription_subretry - 1] = BoolGetDatum(retry);\r\n+\treplaces[Anum_pg_subscription_subretry - 1] = true;\r\n\r\n/* Reset subretry */\r\n->\r\n/* Set subretry */\r\n\r\n3.\r\n+# Insert dependent data on the publisher, now it works.\r\n+$node_subscriber->safe_psql('postgres', \"INSERT INTO test_tab2 VALUES(1)\");\r\n\r\nIn the case that the DELETE change from publisher has not been applied yet when\r\nexecuting the INSERT, the INSERT will fail.\r\n\r\n0005 patch\r\n==============\r\n1.\r\n+ <para>\r\n+ Process ID of the main apply worker, if this process is a apply\r\n+ background worker. NULL if this process is a main apply worker or a\r\n+ synchronization worker.\r\n+ </para></entry>\r\n\r\na apply background worker\r\n->\r\nan apply background worker\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Thu, 18 Aug 2022 03:44:05 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for patch v21-0001:\n\nNote - There are some \"general\" comments which will result in lots of\nsmaller changes. The subsequent \"detailed\" review comments have some\noverlap with these general comments but I expect some will be missed\nso please search/replace to fix all code related to those general\ncomments.\n\n======\n\n1. GENERAL - main_worker_pid and replorigin_session_setup\n\nQuite a few of my subsequent review comments below are related to the\nsomewhat tricky (IMO) change to the code for this area. 
Here is a\nsummary of some things that can be done to clean/simplify this logic.\n\n1a.\nMake the existing replorigin_session_setup function just be a wrapper\nthat delegates to the other function passing the acquired_by as 0.\nThis is because in every case but one (in the apply bg worker main) we\nare always passing 0, and IMO there is no need to spread the messy\nextra param to places that do not use it.\n\n1b.\n'main_worker_pid' is a confusing member name given the way it gets\nused - e.g. not even set when you actually *are* the main apply\nworker? You can still keep all the same logic, but just change the\nname to something more like 'apply_leader_pid' - then the code can\nmake sense because the main apply workers have no \"apply leader\" but\nthe apply background workers do.\n\n1c.\nIMO it will be much better to use pid_t and InvalidPid for the type\nand the unset values of this member.\n\n1d.\nThe checks/Asserts for main_worker_pid are confusing to read. (e.g.\nAssert(worker->main_worker_pid != 0) means the worker is an apply\nbackground worker. IMO there should be convenient macros for these -\nthen code can be readable again.\ne.g.\n#define isApplyMainWorker(worker) (worker->apply_leader_pid == InvalidPid)\n#define isApplyBgWorker(worker) (worker->apply_leader_pid != InvalidPid)\n\n======\n\n2. GENERAL - ApplyBgworkerInfo\n\nI like that the struct ApplyBgworkerState was renamed to the more\nappropriate name ApplyBgworkerInfo. But now all the old variable names\n(e.g. 'wstate') and parameters must be updated as well. Please\nsearch/replace them all in code and comments.\n\ne.g.\nApplyBgworkerInfo *wstate\n\nshould now be something like:\nApplyBgworkerInfo *winfo;\n\n======\n\n3. GENERAL - ApplyBgWorkerStatus --> ApplyBgworkerState\n\nIMO the enum should be changed to ApplyBgWorkerState because the\nvalues all represent the discrete state that the bgworker is at. 
See\nthe top StackOverflow answer here [1] which is the same as the point I\nam trying to make with this comment.\n\nThis is a simple mechanical rename exercise to fix the readability,\nbut it will impact lots of variables, parameters, function names, and\ncomments. Please search/replace to get them all.\n\n======\n\n4. Commit message\n\nIn addition, the patch extends the logical replication STREAM_ABORT message so\nthat abort_time and abort_lsn can also be sent which can be used to update the\nreplication origin in apply background worker when the streaming transaction is\naborted.\n\n4a.\nShould this para also mention something about the introduction of\nprotocol version 4?\n\n4b.\nShould this para also mention that these extensions are not strictly\nmandatory for the parallel streaming to still work?\n\n======\n\n5. doc/src/sgml/catalogs.sgml\n\n <para>\n- If true, the subscription will allow streaming of in-progress\n- transactions\n+ Controls how to handle the streaming of in-progress transactions:\n+ <literal>f</literal> = disallow streaming of in-progress transactions,\n+ <literal>t</literal> = spill the changes of in-progress transactions to\n+ disk and apply at once after the transaction is committed on the\n+ publisher,\n+ <literal>p</literal> = apply changes directly using a background\n+ worker if available(same as 't' if no worker is available)\n </para></entry>\n\nMissing whitespace before '('\n\n======\n\n6. doc/src/sgml/logical-replication.sgml\n\n@@ -1334,7 +1344,8 @@ CONTEXT: processing remote data for replication\norigin \"pg_16395\" during \"INSER\n subscription. A disabled subscription or a crashed subscription will have\n zero rows in this view. If the initial data synchronization of any\n table is in progress, there will be additional workers for the tables\n- being synchronized. 
Moreover, if the streaming transaction is applied\n+ parallelly, there will be additional workers.\n </para>\n\n\"applied parallelly\" sounds a bit strange.\n\nSUGGESTION-1\nMoreover, if the streaming transaction is applied in parallel, there\nwill be additional workers.\n\nSUGGESTION-2\nMoreover, if the streaming transaction is applied using 'parallel'\nmode, there will be additional workers.\n\n======\n\n7. doc/src/sgml/protocol.sgml\n\n@@ -3106,6 +3106,11 @@ psql \"dbname=postgres replication=database\" -c\n\"IDENTIFY_SYSTEM;\"\n Version <literal>3</literal> is supported only for server version 15\n and above, and it allows streaming of two-phase commits.\n </para>\n+ <para>\n+ Version <literal>4</literal> is supported only for server version 16\n+ and above, and it allows applying stream of large in-progress\n+ transactions in parallel.\n+ </para>\n\n7a.\n\"applying stream of\" -> \"applying streams of\"\n\n7b.\nActually, I'm not sure that this description is strictly correct even\nto say \"it allows ...\" because IIUC the streaming=parallel can still\nwork anyway without protocol 4 – it is just some of the extended\nSTREAM_ABORT message members will be missing, right?\n\n======\n\n8. doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>parallel</literal>, incoming changes are directly\n+ applied via one of the apply background workers, if available. If no\n+ background worker is free to handle streaming transaction then the\n+ changes are written to temporary files and applied after the\n+ transaction is committed. Note that if an error happens when\n+ applying changes in a background worker, the finish LSN of the\n+ remote transaction might not be reported in the server log.\n </para>\n\n\"is free to handle streaming transaction\"\n-> \"is free to handle streaming transactions\"\nor -> \"is free to handle the streaming transaction\"\n\n======\n\n9. 
src/backend/replication/logical/applybgworker.c - general\n\nSome of the messages refer to the \"worker #%u\" and some refer to the\n\"worker %u\" (without the '#'). All the messages should have a\nconsistent format.\n\n~~~\n\n10. src/backend/replication/logical/applybgworker.c - general\n\nSearch/replace all 'wstate' and change to 'winfo' or similar. See comment #2\n\n~~~\n\n11. src/backend/replication/logical/applybgworker.c - define\n\n+/* Queue size of DSM, 16 MB for now. */\n+#define DSM_QUEUE_SIZE (16*1024*1024)\n\nMissing whitespace between operators\n\n~~~\n\n12. src/backend/replication/logical/applybgworker.c - define\n\n+/*\n+ * There are three fields in message: start_lsn, end_lsn and send_time. Because\n+ * we have updated these statistics in apply worker, we could ignore these\n+ * fields in apply background worker. (see function LogicalRepApplyLoop).\n+ */\n+#define SIZE_STATS_MESSAGE (2*sizeof(XLogRecPtr)+sizeof(TimestampTz))\n\n12a.\n\"worker.\" -> \"worker\" (since the sentence already has a period at the end)\n\n12b.\nMissing whitespace between operators\n\n~~~\n\n13. src/backend/replication/logical/applybgworker.c - ApplyBgworkerEntry\n\n+/*\n+ * Entry for a hash table we use to map from xid to our apply background worker\n+ * state.\n+ */\n+typedef struct ApplyBgworkerEntry\n\n\"our\" -> \"the\"\n\n~~~\n\n14. src/backend/replication/logical/applybgworker.c - apply_bgworker_can_start\n\n+ /*\n+ * For streaming transactions that are being applied in apply background\n+ * worker, we cannot decide whether to apply the change for a relation\n+ * that is not in the READY state (see should_apply_changes_for_rel) as we\n+ * won't know remote_final_lsn by that time. So, we don't start new apply\n+ * background worker in this case.\n+ */\n\n14a.\n\"applied in apply background worker\" -> \"applied using an apply\nbackground worker\"\n\n14b.\n\"we don't start new apply\" -> \"we don't start the new apply\"\n\n~~~\n\n15. 
src/backend/replication/logical/applybgworker.c - apply_bgworker_start\n\n+/*\n+ * Return the apply background worker that will be used for the specified xid.\n+ *\n+ * If an apply background worker is found in the free list then re-use it,\n+ * otherwise start a fresh one. Cache the worker ApplyBgworkersHash keyed by\n+ * the specified xid.\n+ */\n+ApplyBgworkerInfo *\n+apply_bgworker_start(TransactionId xid)\n\n\"Cache the worker ApplyBgworkersHash\" -> \"Cache the worker in\nApplyBgworkersHash\"\n\n~~~\n\n16.\n\n+ /* Try to get a free apply background worker. */\n+ if (list_length(ApplyBgworkersFreeList) > 0)\n\nPlease refer to the recent push [2] of my other patch. This code should say\n\nif (ApplyBgworkersFreeList != NIL)\n\n~~~\n\n17. src/backend/replication/logical/applybgworker.c - LogicalApplyBgworkerMain\n\n+ MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =\n+ MyLogicalRepWorker->reply_time = 0;\n+\n+ InitializeApplyWorker();\n\nLots of things happen within InitializeApplyWorker(). I think this\ncall deserves at least some comment to say it does lots of common\ninitialization. And same for the other caller of this in the apply\nmain worker.\n\n~~~\n\n18. src/backend/replication/logical/applybgworker.c - apply_bgworker_setup_dsm\n\n+/*\n+ * Set up a dynamic shared memory segment.\n+ *\n+ * We set up a control region that contains a ApplyBgworkerShared,\n+ * plus one region per message queue. There are as many message queues as\n+ * the number of workers.\n+ */\n+static bool\n+apply_bgworker_setup_dsm(ApplyBgworkerInfo *wstate)\n\nThis function is now returning a bool, so it would be better for the\nfunction comment to describe the meaning of the return value.\n\n~~~\n\n19.\n\n+ /* Create the shared memory segment and establish a table of contents. 
*/\n+ seg = dsm_create(shm_toc_estimate(&e), 0);\n+\n+ if (seg == NULL)\n+ return false;\n+\n+ toc = shm_toc_create(PG_LOGICAL_APPLY_SHM_MAGIC, dsm_segment_address(seg),\n+ segsize);\n\nThis code is similar but inconsistent with other code in the function\nLogicalApplyBgworkerMain.\n\n19a.\nI think the whitespace should be the same as in the other function.\n\n19b.\nShouldn't the 'toc' result be checked like it was in the other function?\n\n~~~\n\n20. src/backend/replication/logical/applybgworker.c - apply_bgworker_setup\n\nI think this function could be refactored to be cleaner and share more\ncommon logic.\n\nSUGGESTION\n\n/* Setup shared memory, and attempt launch. */\nif (apply_bgworker_setup_dsm(wstate))\n{\nbool launched;\nlaunched = logicalrep_worker_launch(MyLogicalRepWorker->dbid,\nMySubscription->oid,\nMySubscription->name,\nMyLogicalRepWorker->userid,\nInvalidOid,\ndsm_segment_handle(wstate->dsm_seg));\nif (launched)\n{\nApplyBgworkersList = lappend(ApplyBgworkersList, wstate);\nMemoryContextSwitchTo(oldcontext);\nreturn wstate;\n}\nelse\n{\ndsm_detach(wstate->dsm_seg);\nwstate->dsm_seg = NULL;\n}\n}\n\npfree(wstate);\nMemoryContextSwitchTo(oldcontext);\nreturn NULL;\n\n~~~\n\n21. src/backend/replication/logical/applybgworker.c -\napply_bgworker_check_status\n\n+apply_bgworker_check_status(void)\n+{\n+ ListCell *lc;\n+\n+ if (am_apply_bgworker() || MySubscription->stream != SUBSTREAM_PARALLEL)\n+ return;\n\nIMO it makes more sense logically for the condition to be reordered:\n\nif (MySubscription->stream != SUBSTREAM_PARALLEL || am_apply_bgworker())\n\n~~~\n\n22.\n\nThis function should be renamed to 'apply_bgworker_check_state'. See\nreview comment #3\n\n~~~\n\n23. src/backend/replication/logical/applybgworker.c - apply_bgworker_set_status\n\nThis function should be renamed to 'apply_bgworker_set_state'. See\nreview comment #3\n\n~~~\n\n24. 
src/backend/replication/logical/applybgworker.c -\napply_bgworker_subxact_info_add\n\n+ /*\n+ * CommitTransactionCommand is needed to start a subtransaction after\n+ * issuing a SAVEPOINT inside a transaction block(see\n+ * StartSubTransaction()).\n+ */\n\nMissing whitespace before '('\n\n~~~\n\n25. src/backend/replication/logical/applybgworker.c -\napply_bgworker_savepoint_name\n\n+/*\n+ * Form the savepoint name for streaming transaction.\n+ *\n+ * Return the name in the supplied buffer.\n+ */\n+void\n+apply_bgworker_savepoint_name(Oid suboid, TransactionId xid,\n\n\"name for streaming\" -> \"name for the streaming\"\n\n======\n\n26. src/backend/replication/logical/launcher.c - logicalrep_worker_find\n\n@@ -223,6 +227,13 @@ logicalrep_worker_find(Oid subid, Oid relid, bool\nonly_running)\n {\n LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n\n+ /*\n+ * We are only interested in the main apply worker or table sync worker\n+ * here.\n+ */\n+ if (w->main_worker_pid != 0)\n+ continue;\n+\n\nIMO the comment is not very well aligned with what the code is doing.\n\n26a.\nThat comment saying \"We are only interested in the main apply worker\nor table sync worker here.\" is a general statement that I think\nbelongs outside this loop.\n\n26b.\nAnd the comment just for this condition should be like the below:\n\nSUGGESTION\nSkip apply background workers.\n\n26c.\nAlso, code readability would be better if it used the earlier\nsuggested macros. See comment #1d.\n\nSUGGESTION\n/* Skip apply background workers. */\nif (isApplyBgWorker(w))\ncontinue;\n~~~\n\n27. 
src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n@@ -259,11 +270,11 @@ logicalrep_workers_find(Oid subid, bool only_running)\n }\n\n /*\n- * Start new apply background worker, if possible.\n+ * Start new background worker, if possible.\n */\n-void\n+bool\n logicalrep_worker_launch(Oid dbid, Oid subid, const char *subname, Oid userid,\n- Oid relid)\n+ Oid relid, dsm_handle subworker_dsm)\n\nThis function now returns bool so the function comment probably should\ndescribe the meaning of that return value.\n\n~~~\n\n28.\n\n+ worker->main_worker_pid = is_subworker ? MyProcPid : 0;\n\nHere is an example where I think code would benefit from the\nsuggestions of comments #1b, #1c.\n\nSUGGESTION\nworker->apply_leader_pid = is_subworker ? MyProcPid : InvalidPid;\n\n~~~\n\n29. src/backend/replication/logical/launcher.c - logicalrep_worker_stop\n\n+ Assert(worker->main_worker_pid == 0);\n\nHere is an example where I think code readability would benefit from\ncomment #1d.\n\nAssert(isApplyMainWorker(worker));\n\n~~~\n\n30. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n\n+ /*\n+ * This is the main apply worker, stop all the apply background workers\n+ * previously started from here.\n+ */\n\n\"worker, stop\" -> \"worker; stop\"\n\n~~~\n\n31.\n\n+ if (w->main_worker_pid != 0)\n+ logicalrep_worker_stop_internal(w);\n\nSee comment #1d.\n\nSUGGESTION:\nif (isApplyBgWorker(w)) ...\n\n~~~\n\n32. src/backend/replication/logical/launcher.c - logicalrep_worker_cleanup\n\n@@ -621,6 +678,7 @@ logicalrep_worker_cleanup(LogicalRepWorker *worker)\n worker->userid = InvalidOid;\n worker->subid = InvalidOid;\n worker->relid = InvalidOid;\n+ worker->main_worker_pid = 0;\n }\n\nSee Comment #1c.\n\nSUGGESTION:\nworker->apply_leader_pid = InvalidPid;\n\n~~~\n\n33. 
src/backend/replication/logical/launcher.c - logicalrep_apply_bgworker_count\n\n+ if (w->subid == subid && w->main_worker_pid != 0)\n+ res++;\n\nSee comment #1d.\n\nSUGGESTION\nif (w->subid == subid && isApplyBgWorker(w))\n\n======\n\n34. src/backend/replication/logical/origin.c - replorigin_session_setup\n\n@@ -1075,12 +1075,21 @@ ReplicationOriginExitCleanup(int code, Datum arg)\n * array doesn't have to be searched when calling\n * replorigin_session_advance().\n *\n- * Obviously only one such cached origin can exist per process and the current\n+ * Normally only one such cached origin can exist per process and the current\n * cached value can only be set again after the previous value is torn down\n * with replorigin_session_reset().\n+ *\n+ * However, if the function parameter 'acquired_by' is not 0, we allow the\n+ * process to use the same slot already acquired by another process. It's safe\n+ * because 1) The only caller (apply background workers) will maintain the\n+ * commit order by allowing only one process to commit at a time, so no two\n+ * workers will be operating on the same origin at the same time (see comments\n+ * in logical/worker.c). 2) Even though we try to advance the session's origin\n+ * concurrently, it's safe to do so as we change/advance the session_origin\n+ * LSNs under replicate_state LWLock.\n */\n void\n-replorigin_session_setup(RepOriginId node)\n+replorigin_session_setup(RepOriginId node, int acquired_by)\n\n34a.\nThe comment does not actually say that acquired_by is the PID of the\nowning process. 
It should say that.\n\n34b.\nIMO better to change the int acquired_by to type pid_t.\n\n~~~\n\n35.\n\nSee comment #1a.\n\nI suggest the existing replorigin_session_setup should now just be a\nwrapper function that delegates to this new function and can pass\nthe 'acquired_by' as 0.\n\ne.g.\n\nvoid\nreplorigin_session_setup(RepOriginId node)\n{\nreplorigin_session_setup_acquired(node, 0);\n}\n\n~~\n\n- session_replication_state->acquired_by = MyProcPid;\n+ if (acquired_by == 0)\n+ session_replication_state->acquired_by = MyProcPid;\n+ else if (session_replication_state->acquired_by == 0)\n+ elog(ERROR, \"could not find replication state slot for replication\"\n+ \"origin with OID %u which was acquired by %d\", node, acquired_by);\n\nIs that right to compare == 0?\n\nShouldn't this really be checking that the slot's owner is the passed\n'acquired_by'?\n\ne.g.\n\nelse if (session_replication_state->acquired_by != acquired_by)\n\n======\n\n36. src/backend/replication/logical/tablesync.c - process_syncing_tables\n\n@@ -589,6 +590,9 @@ process_syncing_tables_for_apply(XLogRecPtr current_lsn)\n void\n process_syncing_tables(XLogRecPtr current_lsn)\n {\n+ if (am_apply_bgworker())\n+ return;\n+\n\nPerhaps there should be a comment to describe why process_syncing_tables\nshould be skipped for the apply background worker?\n\n======\n\n37. src/backend/replication/logical/worker.c - file comment\n\n+ * 2) Write to temporary files and apply when the final commit arrives\n+ *\n+ * If no worker is available to handle streamed transaction, the data is\n+ * written to temporary files and then applied at once when the final commit\n+ * arrives.\n\n\"streamed transaction\" -> \"the streamed transaction\"\n\n~~~\n\n38. 
src/backend/replication/logical/worker.c - should_apply_changes_for_rel\n\n+ *\n+ * Note that for streaming transactions that is being applied in apply\n+ * background worker, we disallow applying changes on a table that is not in\n+ * the READY state, because we cannot decide whether to apply the change as we\n+ * won't know remote_final_lsn by that time.\n+ *\n+ * We already checked this in apply_bgworker_can_start() before assigning the\n+ * streaming transaction to the background worker, but it also needs to be\n+ * checked here because if the user executes ALTER SUBSCRIPTION ... REFRESH\n+ * PUBLICATION in parallel, the new table can be added to pg_subscription_rel\n+ * in parallel to this transaction.\n */\n static bool\n should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)\n\n38a.\n\"transactions that is being applied\" -> \"transactions that are being applied\"\n\n38b.\nIt is a bit confusing to keep using the word \"parallel\" here in the\ncomments (which is nothing to do with streaming=parallel mode – you\njust mean *simultaneously* or *concurrently*). Perhaps the code\ncomment can be slightly reworded? Also, \"in parallel to\" doesn't sound\nright.\n\n~~~\n\n39. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ /* Not in streaming mode and not in apply background worker. */\n+ if (!(in_streamed_transaction || am_apply_bgworker()))\n return false;\n\nIMO if you wanted to write the comment in that way then the code\nshould have matched it more closely like:\n\nif (!in_streamed_transaction && !am_apply_bgworker())\n\nOTOH, if you want to keep the code as-is then the comment should be\nworded slightly differently.\n\n~~~\n\n40.\n\nThe coding styles do not seem particularly consistent. For example,\nthis function (handle_streamed_transaction) uses if/else and assigns\nvar 'res' to be a common return. But the previous function\n(should_apply_changes_for_rel) uses if/else but returns directly from\nevery block. 
If possible, I think it's better to stick to the same\npattern instead of flip/flopping coding styles for no apparent reason.\n\n~~~\n\n41. src/backend/replication/logical/worker.c - apply_handle_prepare_internal\n\n /*\n- * BeginTransactionBlock is necessary to balance the EndTransactionBlock\n+ * We must be in transaction block to balance the EndTransactionBlock\n * called within the PrepareTransactionBlock below.\n */\n\nI'm not sure that this changed comment says anything different from\nthe original HEAD comment.\n\nAnd even if it must be kept, the grammar is wrong.\n\n~~~\n\n42. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n@@ -1468,8 +1793,8 @@ apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\n static void\n apply_handle_stream_commit(StringInfo s)\n {\n- TransactionId xid;\n LogicalRepCommitData commit_data;\n+ TransactionId xid;\n\nThis change is just switching the order of declarations? If not\nneeded, remove it.\n\n~~~\n\n43.\n\n+ else\n+ {\n+ /* This is the main apply worker. */\n+ ApplyBgworkerInfo *wstate = apply_bgworker_find(xid);\n\n- /* unlink the files with serialized changes and subxact info */\n- stream_cleanup_files(MyLogicalRepWorker->subid, xid);\n+ elog(DEBUG1, \"received commit for streamed transaction %u\", xid);\n+\n+ /*\n+ * Check if we are processing this transaction in an apply background\n+ * worker and if so, send the changes to that worker.\n+ */\n+ if (wstate)\n+ {\n+ /* Send STREAM COMMIT message to the apply background worker. */\n+ apply_bgworker_send_data(wstate, s->len, s->data);\n+\n+ /*\n+ * After sending the data to the apply background worker, wait for\n+ * that worker to finish. 
This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ apply_bgworker_wait_for(wstate, APPLY_BGWORKER_FINISHED);\n+\n+ pgstat_report_stat(false);\n+ store_flush_position(commit_data.end_lsn);\n+ stop_skipping_changes();\n+\n+ apply_bgworker_free(wstate);\n+\n+ /*\n+ * The transaction is either non-empty or skipped, so we clear the\n+ * subskiplsn.\n+ */\n+ clear_subscription_skip_lsn(commit_data.commit_lsn);\n+ }\n+ else\n+ {\n+ /*\n+ * The transaction has been serialized to file, so replay all the\n+ * spooled operations.\n+ */\n+ apply_spooled_messages(xid, commit_data.commit_lsn);\n+\n+ apply_handle_commit_internal(&commit_data);\n+\n+ /* Unlink the files with serialized changes and subxact info. */\n+ stream_cleanup_files(MyLogicalRepWorker->subid, xid);\n+ }\n+ }\n+\n+ /* Check the status of apply background worker if any. */\n+ apply_bgworker_check_status();\n\n /* Process any tables that are being synchronized in parallel. */\n process_syncing_tables(commit_data.end_lsn);\n\n43a.\nAFAIK apply_bgworker_check_status() does nothing if am_apply_bgworker() –\nso can this call be moved into the code block where you already know\nit is the main apply worker?\n\n43b.\nSimilarly, AFAIK process_syncing_tables() does nothing if\nam_apply_bgworker() – so can this call be moved into the code block\nwhere you already know it is the main apply worker?\n\n~~~\n\n44. 
src/backend/replication/logical/worker.c - InitializeApplyWorker\n\n\n+/*\n+ * Initialize the databse connection, in-memory subscription and necessary\n+ * config options.\n+ */\n void\n-ApplyWorkerMain(Datum main_arg)\n+InitializeApplyWorker(void)\n {\n\n44a.\ntypo \"databse\"\n\n44b.\nShould there be some more explanation in this comment to say that this\nis common code for both the main apply workers and apply background\nworkers?\n\n44c.\nFollowing on from #44b, consider renaming this to something like\nCommonApplyWorkerInit() to emphasize it is called from multiple\nplaces?\n\n~~~\n\n45. src/backend/replication/logical/worker.c - ApplyWorkerMain\n\n- replorigin_session_setup(originid);\n+ replorigin_session_setup(originid, 0);\n\n\nSee #1a. Then this change won't be necessary.\n\n~~~\n\n46. src/backend/replication/logical/worker.c - apply_error_callback\n\n+ if (errarg->remote_attnum < 0)\n+ {\n+ if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n%u\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->remote_xid);\n+ else\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n%u finished at %X/%X\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->remote_xid,\n+ LSN_FORMAT_ARGS(errarg->finish_lsn));\n+ }\n+ else\n+ {\n+ if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\nin transaction %u\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ 
errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->rel->remoterel.attnames[errarg->remote_attnum],\n+ errarg->remote_xid);\n+ else\n+ errcontext(\"processing remote data for replication origin \\\"%s\\\"\nduring \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\nin transaction %u finished at %X/%X\",\n+ errarg->origin_name,\n+ logicalrep_message_type(errarg->command),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname,\n+ errarg->rel->remoterel.attnames[errarg->remote_attnum],\n+ errarg->remote_xid,\n+ LSN_FORMAT_ARGS(errarg->finish_lsn));\n+ }\n+ }\n\nHou-san had asked [3](comment #14) me how the above code can be\nshortened. Below is one idea, but maybe you won't like it ;-)\n\n#define MSG_O_T_S_R \"processing remote data for replication origin\n\\\"%s\\\" during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" \"\n#define O_T_S_R\\\nerrarg->origin_name,\\\nlogicalrep_message_type(errarg->command),\\\nerrarg->rel->remoterel.nspname,\\\nerrarg->rel->remoterel.relname\n\nif (errarg->remote_attnum < 0)\n{\nif (XLogRecPtrIsInvalid(errarg->finish_lsn))\nerrcontext(MSG_O_T_S_R \"in transaction %u\",\n O_T_S_R,\n errarg->remote_xid);\nelse\nerrcontext(MSG_O_T_S_R \"in transaction %u finished at %X/%X\",\n O_T_S_R,\n errarg->remote_xid,\n LSN_FORMAT_ARGS(errarg->finish_lsn));\n}\nelse\n{\nif (XLogRecPtrIsInvalid(errarg->finish_lsn))\nerrcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u\",\n O_T_S_R,\n errarg->rel->remoterel.attnames[errarg->remote_attnum],\n errarg->remote_xid);\nelse\nerrcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u finished at %X/%X\",\n O_T_S_R,\n errarg->rel->remoterel.attnames[errarg->remote_attnum],\n errarg->remote_xid,\n LSN_FORMAT_ARGS(errarg->finish_lsn));\n}\n#undef O_T_S_R\n#undef MSG_O_T_S_R\n\n======\n\n47. 
src/include/replication/logicalproto.h\n\n@@ -32,12 +32,17 @@\n *\n * LOGICALREP_PROTO_TWOPHASE_VERSION_NUM is the minimum protocol version with\n * support for two-phase commit decoding (at prepare time). Introduced in PG15.\n+ *\n+ * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n+ * with support for streaming large transactions using apply background\n+ * workers. Introduced in PG16.\n */\n #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n #define LOGICALREP_PROTO_VERSION_NUM 1\n #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n-#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_TWOPHASE_VERSION_NUM\n+#define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n+#define LOGICALREP_PROTO_MAX_VERSION_NUM\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n\n47a.\nI don't think that comment is strictly true. IIUC the new protocol\nversion 4 is currently only affecting the *extra* STREAM_ABORT members\n– but in fact streaming=parallel is still functional without using\nthose extra members, isn't it? So maybe this description needed to be\nmodified a bit to be more accurate?\n\n47b.\nAnd perhaps the entire constant should be renamed to something like\nLOGICALREP_PROTO_PARALLEL_STREAM_ABORT_VERSION_NUM?\n\n======\n\n48. src/include/replication/origin.h\n\n-extern void replorigin_session_setup(RepOriginId node);\n+extern void replorigin_session_setup(RepOriginId node, int acquired_by);\n\nSee comment #1a, #35.\n\nIMO original should be left as-is and a new \"wrapped\" function added\nwith pid_t param.\n\n======\n\n49. src/include/replication/worker_internal.h\n\n@@ -60,6 +63,12 @@ typedef struct LogicalRepWorker\n */\n FileSet *stream_fileset;\n\n+ /*\n+ * PID of main apply worker if this slot is used for an apply background\n+ * worker.\n+ */\n+ int main_worker_pid;\n+\n /* Stats. 
*/\n XLogRecPtr last_lsn;\n TimestampTz last_send_time;\n@@ -68,8 +77,70 @@\n TimestampTz reply_time;\n } LogicalRepWorker;\n\n49a.\nSee my general comments #1b, #1c about this.\n\n49b.\nAlso, the comment should describe both cases.\n\nSUGGESTION\n/*\n * For apply background worker - 'apply_leader_pid' is the PID of the main\n * apply worker that launched the apply background worker.\n *\n * For main apply worker - 'apply_leader_pid' is InvalidPid.\n */\npid_t apply_leader_pid;\n\n49c.\nHere is where some helpful worker macros (mentioned in comment #1d)\ncan be defined.\n\nSUGGESTION\n#define isApplyMainWorker(worker) (worker->apply_leader_pid == InvalidPid)\n#define isApplyBgWorker(worker) (worker->apply_leader_pid != InvalidPid)\n\n~~~\n\n50.\n\n+/*\n+ * Status of apply background worker.\n+ */\n+typedef enum ApplyBgworkerStatus\n+{\n+ APPLY_BGWORKER_BUSY = 0, /* assigned to a transaction */\n+ APPLY_BGWORKER_FINISHED, /* transaction is completed */\n+ APPLY_BGWORKER_EXIT /* exit */\n+} ApplyBgworkerStatus;\n\n\n50a.\nSee general comment #3 for why this enum should be renamed to ApplyBgworkerState.\n\n50b.\nThe comment \"/* exit */\" is pretty meaningless. Maybe \"worker has\nshutdown/exited\" or similar?\n\n50c.\nIn fact, I think the enum value should be APPLY_BGWORKER_EXITED\n\n50d.\nThere seems no reason to explicitly assign APPLY_BGWORKER_BUSY enum value to 0.\n\nSUGGESTION\n/*\n * Apply background worker states.\n */\ntypedef enum ApplyBgworkerState\n{\nAPPLY_BGWORKER_BUSY, /* assigned to a transaction */\nAPPLY_BGWORKER_FINISHED, /* transaction is completed */\nAPPLY_BGWORKER_EXITED /* worker has shutdown/exited */\n} ApplyBgworkerState;\n\n~~~\n\n51.\n\n+typedef struct ApplyBgworkerShared\n+{\n+ slock_t mutex;\n+\n+ /* Status of apply background worker. */\n+ ApplyBgworkerStatus status;\n+\n+ /* Logical protocol version. 
*/\n+ uint32 proto_version;\n+\n+ TransactionId stream_xid;\n+\n+ /* Id of apply background worker */\n+ uint32 worker_id;\n+} ApplyBgworkerShared;\n\n51a.\n+ /* Status of apply background worker. */\n+ ApplyBgworkerStatus status;\n\nSee review comment #3.\n\nSUGGESTION:\n/* Current state of the apply background worker. */\nApplyBgworkerState worker_state;\n\n51b.\n+ /* Id of apply background worker */\n\n\"Id\" -> \"ID\" might be more usual.\n\n~~~\n\n52.\n\n+/* Apply background worker setup and interactions */\n+extern ApplyBgworkerInfo *apply_bgworker_start(TransactionId xid);\n+extern ApplyBgworkerInfo *apply_bgworker_find(TransactionId xid);\n+extern void apply_bgworker_wait_for(ApplyBgworkerInfo *wstate,\n+ ApplyBgworkerStatus wait_for_status);\n+extern void apply_bgworker_send_data(ApplyBgworkerInfo *wstate, Size nbytes,\n+ const void *data);\n+extern void apply_bgworker_free(ApplyBgworkerInfo *wstate);\n+extern void apply_bgworker_check_status(void);\n+extern void apply_bgworker_set_status(ApplyBgworkerStatus status);\n+extern void apply_bgworker_subxact_info_add(TransactionId current_xid);\n+extern void apply_bgworker_savepoint_name(Oid suboid, Oid relid,\n+ char *spname, int szsp);\n\nThis big block of similarly named externs might as well be in\nalphabetical order instead of apparently random.\n\n~~~\n\n53.\n\n+static inline bool\n+am_apply_bgworker(void)\n+{\n+ return MyLogicalRepWorker->main_worker_pid != 0;\n+}\n\nThis can be simplified/improved using the new macros as previously\nsuggested in #1d.\n\nSUGGESTION\nstatic inline bool\nam_apply_bgworker(void)\n{\nreturn isApplyBgWorker(MyLogicalRepWorker);\n}\n\n====\n\n54. 
src/tools/pgindent/typedefs.list\n\n AppendState\n+ApplyBgworkerEntry\n+ApplyBgworkerShared\n+ApplyBgworkerInfo\n+ApplyBgworkerStatus\n ApplyErrorCallbackArg\n\nPlease rearrange these into alphabetical order.\n\n------\n[1] https://softwareengineering.stackexchange.com/questions/219351/state-or-status-when-should-a-variable-name-contain-the-word-state-and-w#:~:text=status%20is%20used%20to%20describe,(e.g.%20pending%2Fdispatched)\n[2] https://github.com/postgres/postgres/commit/efd0c16becbf45e3b0215e124fde75fee8fcbce4\n[3] https://www.postgresql.org/message-id/OS0PR01MB57169AEA399C6DC370EAF23B94649%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Aug 2022 16:29:10 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi Wang-san,\n\nFYI, I also checked the latest patch v23-0001 but found that the\nv21-0001/v23-0001 differences are minimal, so all my v21* review\ncomments are still applicable for the patch v23-0001.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Aug 2022 16:51:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 18, 2022 at 11:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for patch v21-0001:\n>\n> 4. 
Commit message\n>\n> In addition, the patch extends the logical replication STREAM_ABORT message so\n> that abort_time and abort_lsn can also be sent which can be used to update the\n> replication origin in apply background worker when the streaming transaction is\n> aborted.\n>\n> 4a.\n> Should this para also mention something about the introduction of\n> protocol version 4?\n>\n> 4b.\n> Should this para also mention that these extensions are not strictly\n> mandatory for the parallel streaming to still work?\n>\n\nWithout parallel streaming/apply, we don't need to send this extra\nmessage. So, I don't think it will be correct to say that.\n\n>\n> 46. src/backend/replication/logical/worker.c - apply_error_callback\n>\n> + if (errarg->remote_attnum < 0)\n> + {\n> + if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n> %u\",\n> + errarg->origin_name,\n> + logicalrep_message_type(errarg->command),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname,\n> + errarg->remote_xid);\n> + else\n> + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n> %u finished at %X/%X\",\n> + errarg->origin_name,\n> + logicalrep_message_type(errarg->command),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname,\n> + errarg->remote_xid,\n> + LSN_FORMAT_ARGS(errarg->finish_lsn));\n> + }\n> + else\n> + {\n> + if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\n> in transaction %u\",\n> + errarg->origin_name,\n> + logicalrep_message_type(errarg->command),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname,\n> + 
errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> + errarg->remote_xid);\n> + else\n> + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\n> in transaction %u finished at %X/%X\",\n> + errarg->origin_name,\n> + logicalrep_message_type(errarg->command),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname,\n> + errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> + errarg->remote_xid,\n> + LSN_FORMAT_ARGS(errarg->finish_lsn));\n> + }\n> + }\n>\n> Hou-san had asked [3](comment #14) me how the above code can be\n> shortened. Below is one idea, but maybe you won't like it ;-)\n>\n> #define MSG_O_T_S_R \"processing remote data for replication origin\n> \\\"%s\\\" during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" \"\n> #define O_T_S_R\\\n> errarg->origin_name,\\\n> logicalrep_message_type(errarg->command),\\\n> errarg->rel->remoterel.nspname,\\\n> errarg->rel->remoterel.relname\n>\n> if (errarg->remote_attnum < 0)\n> {\n> if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> errcontext(MSG_O_T_S_R \"in transaction %u\",\n> O_T_S_R,\n> errarg->remote_xid);\n> else\n> errcontext(MSG_O_T_S_R \"in transaction %u finished at %X/%X\",\n> O_T_S_R,\n> errarg->remote_xid,\n> LSN_FORMAT_ARGS(errarg->finish_lsn));\n> }\n> else\n> {\n> if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> errcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u\",\n> O_T_S_R,\n> errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> errarg->remote_xid);\n> else\n> errcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u finished at %X/%X\",\n> O_T_S_R,\n> errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> errarg->remote_xid,\n> LSN_FORMAT_ARGS(errarg->finish_lsn));\n> }\n> #undef O_T_S_R\n> #undef MSG_O_T_S_R\n>\n> ======\n>\n\nI don't like this much. I think this reduces readability.\n\n> 47. 
src/include/replication/logicalproto.h\n>\n> @@ -32,12 +32,17 @@\n> *\n> * LOGICALREP_PROTO_TWOPHASE_VERSION_NUM is the minimum protocol version with\n> * support for two-phase commit decoding (at prepare time). Introduced in PG15.\n> + *\n> + * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n> + * with support for streaming large transactions using apply background\n> + * workers. Introduced in PG16.\n> */\n> #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n> #define LOGICALREP_PROTO_VERSION_NUM 1\n> #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n> #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n> -#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_TWOPHASE_VERSION_NUM\n> +#define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n> +#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n>\n> 47a.\n> I don't think that comment is strictly true. IIUC the new protocol\n> version 4 is currently only affecting the *extra* STREAM_ABORT members\n> – but in fact streaming=parallel is still functional without using\n> those extra members, isn't it? So maybe this description needed to be\n> modified a bit to be more accurate?\n>\n\nThe reason for sending these extra abort members is to ensure that\nafter aborting the transaction, if the subscriber/apply worker\nrestarts, it doesn't need to request the transaction again. 
Do you\nhave suggestions for improving this comment?\n\n>\n> 52.\n>\n> +/* Apply background worker setup and interactions */\n> +extern ApplyBgworkerInfo *apply_bgworker_start(TransactionId xid);\n> +extern ApplyBgworkerInfo *apply_bgworker_find(TransactionId xid);\n> +extern void apply_bgworker_wait_for(ApplyBgworkerInfo *wstate,\n> + ApplyBgworkerStatus wait_for_status);\n> +extern void apply_bgworker_send_data(ApplyBgworkerInfo *wstate, Size nbytes,\n> + const void *data);\n> +extern void apply_bgworker_free(ApplyBgworkerInfo *wstate);\n> +extern void apply_bgworker_check_status(void);\n> +extern void apply_bgworker_set_status(ApplyBgworkerStatus status);\n> +extern void apply_bgworker_subxact_info_add(TransactionId current_xid);\n> +extern void apply_bgworker_savepoint_name(Oid suboid, Oid relid,\n> + char *spname, int szsp);\n>\n> This big block of similarly named externs might as well be in\n> alphabetical order instead of apparently random.\n>\n\nI think it is better to order them based on related functionality if\nthey are not already instead of using alphabetical order.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Aug 2022 14:27:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 18, 2022 at 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 11:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for patch v21-0001:\n> >\n> > 4. 
Commit message\n> >\n> > In addition, the patch extends the logical replication STREAM_ABORT message so\n> > that abort_time and abort_lsn can also be sent which can be used to update the\n> > replication origin in apply background worker when the streaming transaction is\n> > aborted.\n> >\n> > 4a.\n> > Should this para also mention something about the introduction of\n> > protocol version 4?\n> >\n> > 4b.\n> > Should this para also mention that these extensions are not strictly\n> > mandatory for the parallel streaming to still work?\n> >\n>\n> Without parallel streaming/apply, we don't need to send this extra\n> message. So, I don't think it will be correct to say that.\n\nSee my reply to 47a below.\n\n>\n> >\n> > 46. src/backend/replication/logical/worker.c - apply_error_callback\n> >\n> > + if (errarg->remote_attnum < 0)\n> > + {\n> > + if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n> > %u\",\n> > + errarg->origin_name,\n> > + logicalrep_message_type(errarg->command),\n> > + errarg->rel->remoterel.nspname,\n> > + errarg->rel->remoterel.relname,\n> > + errarg->remote_xid);\n> > + else\n> > + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" in transaction\n> > %u finished at %X/%X\",\n> > + errarg->origin_name,\n> > + logicalrep_message_type(errarg->command),\n> > + errarg->rel->remoterel.nspname,\n> > + errarg->rel->remoterel.relname,\n> > + errarg->remote_xid,\n> > + LSN_FORMAT_ARGS(errarg->finish_lsn));\n> > + }\n> > + else\n> > + {\n> > + if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\n> > in transaction %u\",\n> > + errarg->origin_name,\n> > + 
logicalrep_message_type(errarg->command),\n> > + errarg->rel->remoterel.nspname,\n> > + errarg->rel->remoterel.relname,\n> > + errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> > + errarg->remote_xid);\n> > + else\n> > + errcontext(\"processing remote data for replication origin \\\"%s\\\"\n> > during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" column \\\"%s\\\"\n> > in transaction %u finished at %X/%X\",\n> > + errarg->origin_name,\n> > + logicalrep_message_type(errarg->command),\n> > + errarg->rel->remoterel.nspname,\n> > + errarg->rel->remoterel.relname,\n> > + errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> > + errarg->remote_xid,\n> > + LSN_FORMAT_ARGS(errarg->finish_lsn));\n> > + }\n> > + }\n> >\n> > Hou-san had asked [3](comment #14) me how the above code can be\n> > shortened. Below is one idea, but maybe you won't like it ;-)\n> >\n> > #define MSG_O_T_S_R \"processing remote data for replication origin\n> > \\\"%s\\\" during \\\"%s\\\" for replication target relation \\\"%s.%s\\\" \"\n> > #define O_T_S_R\\\n> > errarg->origin_name,\\\n> > logicalrep_message_type(errarg->command),\\\n> > errarg->rel->remoterel.nspname,\\\n> > errarg->rel->remoterel.relname\n> >\n> > if (errarg->remote_attnum < 0)\n> > {\n> > if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > errcontext(MSG_O_T_S_R \"in transaction %u\",\n> > O_T_S_R,\n> > errarg->remote_xid);\n> > else\n> > errcontext(MSG_O_T_S_R \"in transaction %u finished at %X/%X\",\n> > O_T_S_R,\n> > errarg->remote_xid,\n> > LSN_FORMAT_ARGS(errarg->finish_lsn));\n> > }\n> > else\n> > {\n> > if (XLogRecPtrIsInvalid(errarg->finish_lsn))\n> > errcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u\",\n> > O_T_S_R,\n> > errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> > errarg->remote_xid);\n> > else\n> > errcontext(MSG_O_T_S_R \"column \\\"%s\\\" in transaction %u finished at %X/%X\",\n> > O_T_S_R,\n> > errarg->rel->remoterel.attnames[errarg->remote_attnum],\n> > 
errarg->remote_xid,\n> > LSN_FORMAT_ARGS(errarg->finish_lsn));\n> > }\n> > #undef O_T_S_R\n> > #undef MSG_O_T_S_R\n> >\n> > ======\n> >\n>\n> I don't like this much. I think this reduces readability.\n\nI agree. That wasn't a very serious suggestion :-)\n\n>\n> > 47. src/include/replication/logicalproto.h\n> >\n> > @@ -32,12 +32,17 @@\n> > *\n> > * LOGICALREP_PROTO_TWOPHASE_VERSION_NUM is the minimum protocol version with\n> > * support for two-phase commit decoding (at prepare time). Introduced in PG15.\n> > + *\n> > + * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n> > + * with support for streaming large transactions using apply background\n> > + * workers. Introduced in PG16.\n> > */\n> > #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n> > #define LOGICALREP_PROTO_VERSION_NUM 1\n> > #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n> > #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n> > -#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_TWOPHASE_VERSION_NUM\n> > +#define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n> > +#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n> >\n> > 47a.\n> > I don't think that comment is strictly true. IIUC the new protocol\n> > version 4 is currently only affecting the *extra* STREAM_ABORT members\n> > – but in fact streaming=parallel is still functional without using\n> > those extra members, isn't it? So maybe this description needed to be\n> > modified a bit to be more accurate?\n> >\n>\n> The reason for sending this extra abort members is to ensure that\n> after aborting the transaction, if the subscriber/apply worker\n> restarts, it doesn't need to request the transaction again. 
Do you\n> have suggestions for improving this comment?\n>\n\nI gave three review comments for v21-0001 that were all related to\nthis same point:\ni- #4b (commit message)\nii- #7 (protocol pgdocs)\niii- #47a (code comment)\n\nThe point was:\nAFAIK protocol 4 is only to let the parallel streaming logic behave\n*better* in how it can handle restarts after aborts. But that does not\nmean that protocol 4 is a *pre-requisite* for \"allowing\"\nstreaming=parallel to work in the first place. I thought that a PG15\npublisher and PG16 subscriber can still work using streaming=parallel\neven with protocol 3, but it just won't be quite as clever for\nhandling restarts after abort as protocol 4 (PG16 -> PG16) would be.\n\nIf the above is correct, then the code comment can be changed to\nsomething like this:\n\nBEFORE\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol\nversion with support for streaming large transactions using apply\nbackground workers. Introduced in PG16.\n\nAFTER\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM improves how subscription\nparameter streaming=parallel (introduced in PG16) will handle restarts\nafter aborts. 
Introduced in PG16.\n\n~\n\nThe protocol pgdocs might be changed similarly...\n\nBEFORE\nVersion <literal>4</literal> is supported only for server version 16\nand above, and it allows applying stream of large in-progress\ntransactions in parallel.\n\nAFTER\nVersion <literal>4</literal> is supported only for server version 16\nand above, and it improves how subscription parameter\nstreaming=parallel (introduced in PG16) will handle restarts after\naborts.\n\n~~\n\nAnd similar text again for the commit message...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 18 Aug 2022 20:10:21 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 18, 2022 at 3:40 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > 47. src/include/replication/logicalproto.h\n> > >\n> > > @@ -32,12 +32,17 @@\n> > > *\n> > > * LOGICALREP_PROTO_TWOPHASE_VERSION_NUM is the minimum protocol version with\n> > > * support for two-phase commit decoding (at prepare time). Introduced in PG15.\n> > > + *\n> > > + * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n> > > + * with support for streaming large transactions using apply background\n> > > + * workers. 
Introduced in PG16.\n> > > */\n> > > #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n> > > #define LOGICALREP_PROTO_VERSION_NUM 1\n> > > #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n> > > #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n> > > -#define LOGICALREP_PROTO_MAX_VERSION_NUM LOGICALREP_PROTO_TWOPHASE_VERSION_NUM\n> > > +#define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n> > > +#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n> > >\n> > > 47a.\n> > > I don't think that comment is strictly true. IIUC the new protocol\n> > > version 4 is currently only affecting the *extra* STREAM_ABORT members\n> > > – but in fact streaming=parallel is still functional without using\n> > > those extra members, isn't it? So maybe this description needed to be\n> > > modified a bit to be more accurate?\n> > >\n> >\n> > The reason for sending these extra abort members is to ensure that\n> > after aborting the transaction, if the subscriber/apply worker\n> > restarts, it doesn't need to request the transaction again. Do you\n> > have suggestions for improving this comment?\n> >\n>\n> I gave three review comments for v21-0001 that were all related to\n> this same point:\n> i- #4b (commit message)\n> ii- #7 (protocol pgdocs)\n> iii- #47a (code comment)\n>\n> The point was:\n> AFAIK protocol 4 is only to let the parallel streaming logic behave\n> *better* in how it can handle restarts after aborts. But that does not\n> mean that protocol 4 is a *pre-requisite* for \"allowing\"\n> streaming=parallel to work in the first place. 
I thought that a PG15\n> publisher and PG16 subscriber can still work using streaming=parallel\n> even with protocol 3, but it just won't be quite as clever for\n> handling restarts after abort as protocol 4 (PG16 -> PG16) would be.\n>\n\nIt is not only that it makes it better; one can say that it is a\nbreak of the replication protocol if, after the client (subscriber) has\napplied some transaction, it requests the same transaction again. So,\nI think it is better to make the parallelism work only when the server\nversion is also 16.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Aug 2022 15:50:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Aug 17, 2022 at 11:58 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patches.\n>\n\nFew comments on v23-0001\n=======================\n1.\n+ /*\n+ * Attach to the dynamic shared memory segment for the parallel query, and\n+ * find its table of contents.\n+ *\n+ * Note: at this point, we have not created any ResourceOwner in this\n+ * process. This will result in our DSM mapping surviving until process\n+ * exit, which is fine. If there were a ResourceOwner, it would acquire\n+ * ownership of the mapping, but we have no need for that.\n+ */\n\nIn the first sentence, instead of a parallel query, you need to use\nparallel apply. I think we don't need to repeat the entire note as we\nhave in ParallelWorkerMain. You can say something like: \"Like parallel\nquery, we don't need resource owner by this time. See\nParallelWorkerMain\"\n\n2.\n+/*\n+ * There are three fields in message: start_lsn, end_lsn and send_time. Because\n+ * we have updated these statistics in apply worker, we could ignore these\n+ * fields in apply background worker. 
(see function LogicalRepApplyLoop).\n+ */\n+#define SIZE_STATS_MESSAGE (2*sizeof(XLogRecPtr)+sizeof(TimestampTz))\n\nThe first sentence in the above comment isn't clear about which\nmessage it talks about. I think it is about any message received by\nthis apply bgworker, if so, can we change it to: \"There are three\nfields in each message received by apply worker: start_lsn, end_lsn,\nand send_time.\"\n\n3.\n+/*\n+ * Return the apply background worker that will be used for the specified xid.\n+ *\n+ * If an apply background worker is found in the free list then re-use it,\n+ * otherwise start a fresh one. Cache the worker ApplyBgworkersHash keyed by\n+ * the specified xid.\n+ */\n+ApplyBgworkerInfo *\n+apply_bgworker_start(TransactionId xid)\n\nThe first sentence should say apply background worker info. Can we\nchange the cache-related sentence in the above comment to \"Cache the\nworker info in ApplyBgworkersHash keyed by the specified xid.\"?\n\n4.\n/*\n+ * We use first byte of message for additional communication between\n+ * main Logical replication worker and apply background workers, so if\n+ * it differs from 'w', then process it first.\n+ */\n+ c = pq_getmsgbyte(&s);\n+ switch (c)\n+ {\n+ /* End message of streaming chunk */\n+ case LOGICAL_REP_MSG_STREAM_STOP:\n+ elog(DEBUG1, \"[Apply BGW #%u] ended processing streaming chunk, \"\n+ \"waiting on shm_mq_receive\", shared->worker_id);\n+\n\nWhy do we need special handling of LOGICAL_REP_MSG_STREAM_STOP message\nhere? Instead, why not let it get handled via apply_dispatch path? You\nwill require special handling for apply_bg_worker but I see other\nmessages do have similar handling.\n\n5.\n+ /*\n+ * Now, we have initialized DSM. Attach to slot.\n+ */\n+ logicalrep_worker_attach(worker_slot);\n\nCan we change this comment to something like: \"Primary initialization\nis complete. Now, we can attach to our slot.\". 
IIRC, we have done it\nafter initialization to avoid some race conditions between the leader apply\nworker and this parallel apply worker. If so, can we explain the same\nin the comments?\n\n6.\n+/*\n+ * Set up a dynamic shared memory segment.\n+ *\n+ * We set up a control region that contains a ApplyBgworkerShared,\n+ * plus one region per message queue. There are as many message queues as\n+ * the number of workers.\n+ */\n+static bool\n+apply_bgworker_setup_dsm(ApplyBgworkerInfo *wstate)\n\nI think the part of the comment: \"There are as many message queues as\nthe number of workers.\" doesn't seem to fit atop this function as this\nhas nothing to do with the number of workers. It would be a good idea\nto write something about what all is communicated via DSM in the\ndescription you have written about apply bg workers in worker.c.\n\n7.\n+ /* Check if there are free worker slot(s). */\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+ napplyworkers = logicalrep_apply_bgworker_count(MyLogicalRepWorker->subid);\n+ LWLockRelease(LogicalRepWorkerLock);\n+\n+ if (napplyworkers >= max_apply_bgworkers_per_subscription)\n+ return NULL;\n\nWon't it be better to check this restriction in\nlogicalrep_worker_launch() as we do for tablesync workers? 
That way\nall similar restrictions will be in one place.\n\n8.\n+ if (rel->state != SUBREL_STATE_READY)\n+ ereport(ERROR,\n+ (errmsg(\"logical replication apply workers for subscription \\\"%s\\\"\nwill restart\",\n+ MySubscription->name),\n+ errdetail(\"Cannot handle streamed replication transaction by apply \"\n+ \"background workers until all tables are synchronized\")));\n\nerrdetail messages always end with a full stop.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Aug 2022 17:14:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi Wang-san,\n\nHere is some more information about my v21-0001 review [2] posted yesterday.\n\n~~\n\nIf the streaming=parallel will be disallowed for publishers not using\nprotocol 4 (see Amit's post [1]), then please ignore all my previous\nreview comments about the protocol descriptions (see [2] comments #4b,\n#7b, #47a, #47b).\n\n~~\n\nAlso, I was having second thoughts about the name replacement for the\n'main_worker_pid' member (see [2] comments #1b, #49). Previously I\nsuggested 'apply_leader_pid', but now I think something like\n'apply_bgworker_leader_pid' would be better. 
(It's a bit verbose, but\nnow it gives the proper understanding that only an apply bgworker can\nhave a valid value for this member).\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1JR2GR9jjaz9T1ZxzgLVS0h089EE8ZB%3DF2EsVHbM_5sfA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPuxEQ88PDhFcBftnNY1BAjdj_9G8FYhTvPHKjP8yfacaQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Aug 2022 09:05:57 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 19, 2022 at 4:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Wang-san,\n>\n> Here is some more information about my v21-0001 review [2] posted yesterday.\n>\n> ~~\n>\n> If the streaming=parallel will be disallowed for publishers not using\n> protocol 4 (see Amit's post [1]), then please ignore all my previous\n> review comments about the protocol descriptions (see [2] comments #4b,\n> #7b, #47a, #47b).\n>\n> ~~\n>\n> Also, I was having second thoughts about the name replacement for the\n> 'main_worker_pid' member (see [2] comments #1b, #49). Previously I\n> suggested 'apply_leader_pid', but now I think something like\n> 'apply_bgworker_leader_pid' would be better. (It's a bit verbose, but\n> now it gives the proper understanding that only an apply bgworker can\n> have a valid value for this member).\n>\n\nI find your previous suggestion to name it 'apply_leader_pid' better.\nAccording to me, it conveys the meaning.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Aug 2022 08:54:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for the patch v23-0003:\n\n======\n\n3.1. 
src/backend/replication/logical/applybgworker.c -\napply_bgworker_relation_check\n\n+ * Although the commit order is maintained by only allowing one process to\n+ * commit at a time, the access order to the relation has changed. This could\n+ * cause unexpected problems if the unique column on the replicated table is\n+ * inconsistent with the publisher-side or contains non-immutable functions\n+ * when applying transactions using an apply background worker.\n+ */\n+void\n+apply_bgworker_relation_check(LogicalRepRelMapEntry *rel)\n\nI’m not sure, but should that second sentence be rearranged as follows?\n\nSUGGESTION\nThis could cause unexpected problems when applying transactions using\nan apply background worker if the unique column on the replicated\ntable is inconsistent with the publisher-side, or if the relation\ncontains non-immutable functions.\n\n~~~\n\n3.2.\n\n+ if (!am_apply_bgworker() &&\n+ (list_length(ApplyBgworkersFreeList) == list_length(ApplyBgworkersList)))\n+ return;\n\nPreviously I posted I was struggling to understand the above\ncondition, and then it was explained (see [1] comment #4) that:\n> We need to check this for apply bgworker. (Both lists are \"NIL\" in apply bgworker.)\n\nI think that information should be included in the code comment.\n\n======\n\n3.3. src/include/replication/logicalrelation.h\n\n+/*\n+ * States to determine if changes on one relation can be applied using an\n+ * apply background worker.\n+ */\n+typedef enum ParallelApplySafety\n+{\n+ PARALLEL_APPLY_UNKNOWN = 0,\n+ PARALLEL_APPLY_SAFE,\n+ PARALLEL_APPLY_UNSAFE\n+} ParallelApplySafety;\n+\n\n3.3a.\nThe enum value PARALLEL_APPLY_UNKNOWN doesn't really mean anything.\nMaybe naming it PARALLEL_APPLY_SAFETY_UNKNOWN gives it the intended\nmeaning.\n\n3.3b.\n+ PARALLEL_APPLY_UNKNOWN = 0,\nI didn't see any reason to explicitly assign this to 0.\n\n~~~\n\n3.4. 
src/include/replication/logicalrelation.h\n\n@@ -31,6 +42,8 @@ typedef struct LogicalRepRelMapEntry\n Relation localrel; /* relcache entry (NULL when closed) */\n AttrMap *attrmap; /* map of local attributes to remote ones */\n bool updatable; /* Can apply updates/deletes? */\n+ ParallelApplySafety parallel_apply; /* Can apply changes in an apply\n+\n\n(Similar to above comment #3.3a)\n\nThe member name 'parallel_apply' doesn't really mean anything. Perhaps\nrenaming this to 'parallel_apply_safe' or 'parallel_safe' etc will\ngive it the intended meaning.\n\n------\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275739E73E8BEC5D13FB6739E6B9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Aug 2022 15:22:50 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for the patch v23-0004:\n\n======\n\n4.1 src/test/subscription/t/032_streaming_apply.pl\n\nThis test file was introduced in patch 0003, but I think there are a\nfew changes in this 0004 patch which really have nothing to do\nwith 0004 and should have been included in the original 0003.\n\ne.g. 
There are multiple comments like below - these belong back in the\n0003 patch\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n# Wait for this streaming transaction to be applied in the apply worker.\n\n~~~\n\n4.2\n\n@@ -166,17 +175,6 @@ CREATE TRIGGER tri_tab1_unsafe\n BEFORE INSERT ON public.test_tab1\n FOR EACH ROW EXECUTE PROCEDURE trigger_func_tab1_unsafe();\n ALTER TABLE test_tab1 ENABLE REPLICA TRIGGER tri_tab1_unsafe;\n-\n-CREATE FUNCTION trigger_func_tab1_safe() RETURNS TRIGGER AS \\$\\$\n- BEGIN\n- RAISE NOTICE 'test for safe trigger function';\n- RETURN NEW;\n- END\n-\\$\\$ language plpgsql;\n-ALTER FUNCTION trigger_func_tab1_safe IMMUTABLE;\n-CREATE TRIGGER tri_tab1_safe\n-BEFORE INSERT ON public.test_tab1\n-FOR EACH ROW EXECUTE PROCEDURE trigger_func_tab1_safe();\n });\n\nI didn't understand why all this trigger_func_tab1_safe which was\nadded in patch 0003 is now getting removed in patch 0004. 
Maybe there\nis some good reason, but it doesn't seem right to be adding code in\none patch and then removing it again in the next patch.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Aug 2022 17:46:12 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 18, 2022 at 5:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 17, 2022 at 11:58 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > Attach the new patches.\n> >\n>\n> Few comments on v23-0001\n> =======================\n>\n\nSome more comments on v23-0001\n============================\n1.\nstatic bool\n handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n{\n...\n- /* not in streaming mode */\n- if (!in_streamed_transaction)\n+ /* Not in streaming mode and not in apply background worker. */\n+ if (!(in_streamed_transaction || am_apply_bgworker()))\n return false;\n\nThis check appears a bit strange because ideally in bgworker\nin_streamed_transaction should be false. I think we should set\nin_streamed_transaction to true in apply_handle_stream_start() only\nwhen we are going to write to file. Is there a reason for not doing\nthe same?\n\n2.\n+ {\n+ /* This is the main apply worker. */\n+ ApplyBgworkerInfo *wstate = apply_bgworker_find(xid);\n+\n+ /*\n+ * Check if we are processing this transaction using an apply\n+ * background worker and if so, send the changes to that worker.\n+ */\n+ if (wstate)\n+ {\n+ /* Send STREAM ABORT message to the apply background worker. */\n+ apply_bgworker_send_data(wstate, s->len, s->data);\n\nWhy does the patch at some places need to separately fetch\nApplyBgworkerInfo, whereas at other places it directly uses\nstream_apply_worker to pass the data to bgworker?\n\n3. 
Why don't apply_handle_stream_abort() or apply_handle_stream_prepare()\nuse apply_bgworker_active() to identify whether they need to\nsend the information to bgworker?\n\n4. In apply_handle_stream_prepare(), apply_handle_stream_abort(), and\nsome other similar functions, the patch handles three cases (a) apply\nbackground worker, (b) sending data to bgworker, (c) handling for\nstreamed transaction in apply worker. I think the code will look\nbetter if you move the respective code for all three cases into\nseparate functions. Surely, if the code to deal with each of the cases\nis small, then we don't need to move it to a separate function.\n\n5.\n@@ -1088,24 +1177,78 @@ apply_handle_stream_prepare(StringInfo s)\n{\n...\n+ in_remote_transaction = false;\n+\n+ /* Unlink the files with serialized changes and subxact info. */\n+ stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid);\n+ }\n+ }\n\n in_remote_transaction = false;\n...\n\nWe don't need to set in_remote_transaction to false in multiple places.\n\n6.\n@@ -1177,36 +1311,93 @@ apply_handle_stream_start(StringInfo s)\n{\n...\n...\n+ if (am_apply_bgworker())\n {\n- MemoryContext oldctx;\n-\n- oldctx = MemoryContextSwitchTo(ApplyContext);\n+ /*\n+ * Make sure the handle apply_dispatch methods are aware we're in a\n+ * remote transaction.\n+ */\n+ in_remote_transaction = true;\n\n- MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\n- FileSetInit(MyLogicalRepWorker->stream_fileset);\n+ /* Begin the transaction. */\n+ AcceptInvalidationMessages();\n+ maybe_reread_subscription();\n\n- MemoryContextSwitchTo(oldctx);\n+ StartTransactionCommand();\n+ BeginTransactionBlock();\n+ CommitTransactionCommand();\n }\n...\n\nWhy do we need to start a transaction here? Why can't it be done via\nbegin_replication_step() during the first operation apply? Is it\nbecause we may need to define a save point in bgworker and we don't have\nthat information beforehand? 
If so, then also, can't it be handled by\nbegin_replication_step() either by explicitly passing the information\nor checking it there and then starting a transaction block? In any\ncase, please add a few comments to explain why this separate handling\nis required for bgworker.\n\n7. When we are already setting bgworker status as APPLY_BGWORKER_BUSY\nin apply_bgworker_setup_dsm() then why do we need to set it again in\napply_bgworker_start()?\n\n8. It is not clear to me how APPLY_BGWORKER_EXIT status is used. Is it\nrequired for the cases where bgworker exits due to some error and\nthen apply worker uses it to detect that and exits? How would other\nbgworkers notice this? Is it done via\napply_bgworker_check_status()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Aug 2022 14:18:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for the v23-0005 patch:\n\n======\n\nCommit Message says:\nmain_worker_pid is Process ID of the main apply worker, if this process is a\napply background worker. NULL if this process is a main apply worker or a\nsynchronization worker.\nThe new column can make it easier to distinguish main apply worker and apply\nbackground worker.\n\n--\n\nHaving a column called ‘main_worker_pid’ which is defined to be NULL\nif the process *is* the main apply worker does not make any sense to\nme.\n\nIMO it feels hacky trying to squeeze meaning out of the\n'main_worker_pid' member of the LogicalRepWorker like this.\n\nIf the intention really is to make it easier to distinguish the\ndifferent kinds of subscription workers then surely there are much\nbetter ways to achieve that. 
For example, why not introduce a new\n'type' enum member of the LogicalRepWorker (e.g.\nWORKER_TYPE_TABLESYNC='t', WORKER_TYPE_APPLY='a',\nWORKER_TYPE_PARALLEL_APPLY='p'), then use some char column to expose\nit? As a bonus, I think the other code (i.e.patch 0001) will also be\nimproved if a 'type' member is added.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Aug 2022 19:06:14 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for the v23-0005 patch:\n>\n> ======\n>\n> Commit Message says:\n> main_worker_pid is Process ID of the main apply worker, if this process is a\n> apply background worker. NULL if this process is a main apply worker or a\n> synchronization worker.\n> The new column can make it easier to distinguish main apply worker and apply\n> background worker.\n>\n> --\n>\n> Having a column called ‘main_worker_pid’ which is defined to be NULL\n> if the process *is* the main apply worker does not make any sense to\n> me.\n>\n\nI haven't read this part of a patch but it seems to me we have\nsomething similar for parallel query workers. 
Refer 'leader_pid'\ncolumn in pg_stat_activity.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Aug 2022 14:40:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 19, 2022 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for the v23-0005 patch:\n> >\n> > ======\n> >\n> > Commit Message says:\n> > main_worker_pid is Process ID of the main apply worker, if this process is a\n> > apply background worker. NULL if this process is a main apply worker or a\n> > synchronization worker.\n> > The new column can make it easier to distinguish main apply worker and apply\n> > background worker.\n> >\n> > --\n> >\n> > Having a column called ‘main_worker_pid’ which is defined to be NULL\n> > if the process *is* the main apply worker does not make any sense to\n> > me.\n> >\n>\n> I haven't read this part of a patch but it seems to me we have\n> something similar for parallel query workers. 
Refer 'leader_pid'\n> column in pg_stat_activity.\n>\n\nIIUC (from the patch 0005 commit message) the intention is to be able\nto easily distinguish the worker types.\n\nI thought using a leader PID (by whatever name) seemed a poor way to\nachieve that in this case because the PID is either NULL or not NULL,\nbut there are 3 kinds of subscription workers, not 2.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Aug 2022 19:35:03 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 19, 2022 at 3:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Here are my review comments for the v23-0005 patch:\n> > >\n> > > ======\n> > >\n> > > Commit Message says:\n> > > main_worker_pid is Process ID of the main apply worker, if this process is a\n> > > apply background worker. NULL if this process is a main apply worker or a\n> > > synchronization worker.\n> > > The new column can make it easier to distinguish main apply worker and apply\n> > > background worker.\n> > >\n> > > --\n> > >\n> > > Having a column called ‘main_worker_pid’ which is defined to be NULL\n> > > if the process *is* the main apply worker does not make any sense to\n> > > me.\n> > >\n> >\n> > I haven't read this part of a patch but it seems to me we have\n> > something similar for parallel query workers. Refer 'leader_pid'\n> > column in pg_stat_activity.\n> >\n>\n> IIUC (from the patch 0005 commit message) the intention is to be able\n> to easily distinguish the worker types.\n>\n\nI think it is only to distinguish between leader apply worker and\nbackground apply workers. 
The tablesync worker can be distinguished\nbased on relid field.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Aug 2022 15:24:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 19, 2022 at 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 3:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Aug 19, 2022 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > Here are my review comments for the v23-0005 patch:\n> > > >\n> > > > ======\n> > > >\n> > > > Commit Message says:\n> > > > main_worker_pid is Process ID of the main apply worker, if this process is a\n> > > > apply background worker. NULL if this process is a main apply worker or a\n> > > > synchronization worker.\n> > > > The new column can make it easier to distinguish main apply worker and apply\n> > > > background worker.\n> > > >\n> > > > --\n> > > >\n> > > > Having a column called ‘main_worker_pid’ which is defined to be NULL\n> > > > if the process *is* the main apply worker does not make any sense to\n> > > > me.\n> > > >\n> > >\n> > > I haven't read this part of a patch but it seems to me we have\n> > > something similar for parallel query workers. Refer 'leader_pid'\n> > > column in pg_stat_activity.\n> > >\n> >\n> > IIUC (from the patch 0005 commit message) the intention is to be able\n> > to easily distinguish the worker types.\n> >\n>\n> I think it is only to distinguish between leader apply worker and\n> background apply workers. The tablesync worker can be distinguished\n> based on relid field.\n>\n\nRight. 
But that's the reason for my question in the first place - why\nimplement the patch so that the user still has to jump through hoops\njust to know the worker type information?\n\ne.g.\n\nOption 1 (patch) - if there is a non-NULL relid field then the worker\ntype must be a tablesyc worker, otherwise if there is non-NULL a\nleader_pid field then the worker type must be an apply background\nworker, otherwise the worker type must be an apply main worker.\n\nversus\n\nOption 2 - new worker_type field (values 't','p','a')\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 22 Aug 2022 09:11:50 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 22, 2022 at 4:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Aug 19, 2022 at 3:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Fri, Aug 19, 2022 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > >\n> > > > > Here are my review comments for the v23-0005 patch:\n> > > > >\n> > > > > ======\n> > > > >\n> > > > > Commit Message says:\n> > > > > main_worker_pid is Process ID of the main apply worker, if this process is a\n> > > > > apply background worker. 
NULL if this process is a main apply worker or a\n> > > > > synchronization worker.\n> > > > > The new column can make it easier to distinguish main apply worker and apply\n> > > > > background worker.\n> > > > >\n> > > > > --\n> > > > >\n> > > > > Having a column called ‘main_worker_pid’ which is defined to be NULL\n> > > > > if the process *is* the main apply worker does not make any sense to\n> > > > > me.\n> > > > >\n> > > >\n> > > > I haven't read this part of a patch but it seems to me we have\n> > > > something similar for parallel query workers. Refer 'leader_pid'\n> > > > column in pg_stat_activity.\n> > > >\n> > >\n> > > IIUC (from the patch 0005 commit message) the intention is to be able\n> > > to easily distinguish the worker types.\n> > >\n> >\n> > I think it is only to distinguish between leader apply worker and\n> > background apply workers. The tablesync worker can be distinguished\n> > based on relid field.\n> >\n>\n> Right. But that's the reason for my question in the first place - why\n> implement the patch so that the user still has to jump through hoops\n> just to know the worker type information?\n>\n\nI think it is not only to judge worker type but also to know the pid\nof each of the workers during parallel apply. Isn't it better to have\nboth main apply worker pid and parallel apply worker pid as we have\nfor the parallel query system?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Aug 2022 14:31:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang,\r\n\r\nThank you for updating the patch! Followings are comments about v23-0001 and v23-0005.\r\n\r\nv23-0001\r\n\r\n01. logical-replication.sgml\r\n\r\n+ <para>\r\n+ When the streaming mode is <literal>parallel</literal>, the finish LSN of\r\n+ failed transactions may not be logged. 
In that case, it may be necessary to\r\n+ change the streaming mode to <literal>on</literal> and cause the same\r\n+ conflicts again so the finish LSN of the failed transaction will be written\r\n+ to the server log. For the usage of finish LSN, please refer to <link\r\n+ linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\n+ SKIP</command></link>.\r\n+ </para>\r\n\r\nI was not sure about streaming='off' mode. Is there any reasons that only ON mode is focused?\r\n\r\n02. protocol.sgml\r\n\r\n+ <varlistentry>\r\n+ <term>Int64 (XLogRecPtr)</term>\r\n+ <listitem>\r\n+ <para>\r\n+ The LSN of the abort. This field is available since protocol version\r\n+ 4.\r\n+ </para>\r\n+ </listitem>\r\n+ </varlistentry>\r\n+\r\n+ <varlistentry>\r\n+ <term>Int64 (TimestampTz)</term>\r\n+ <listitem>\r\n+ <para>\r\n+ Abort timestamp of the transaction. The value is in number\r\n+ of microseconds since PostgreSQL epoch (2000-01-01). This field is\r\n+ available since protocol version 4.\r\n+ </para>\r\n+ </listitem>\r\n+ </varlistentry>\r\n+\r\n\r\nIt seems that changes are in the variablelist for stream commit.\r\nI think these are included in the stream abort message, so it should be moved.\r\n\r\n03. decode.c\r\n\r\n- ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr);\r\n+ ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr,\r\n+ commit_time);\r\n }\r\n- ReorderBufferForget(ctx->reorder, xid, buf->origptr);\r\n+ ReorderBufferForget(ctx->reorder, xid, buf->origptr, commit_time);\r\n\r\n'commit_time' has been passed as argument 'abort_time', I think it may be confusing.\r\nHow about adding a comment above, like:\r\n\"In case of streamed transactions, they are regarded as being aborted at commit_time\"\r\n\r\n04. launcher.c\r\n\r\n04.a\r\n\r\n+ worker->main_worker_pid = is_subworker ? 
MyProcPid : 0;\r\n\r\nYou can use InvalidPid instead of 0.\r\n(I thought pid should be represented by the datatype pid_t, but in some codes it is defined as int...) \r\n\r\n04.b\r\n\r\n+ worker->main_worker_pid = 0;\r\n\r\nYou can use InvalidPid instead of 0, same as above.\r\n\r\n05. origin.c\r\n\r\n void\r\n-replorigin_session_setup(RepOriginId node)\r\n+replorigin_session_setup(RepOriginId node, int acquired_by)\r\n\r\nIIUC the same slot can be used only when the apply main worker has already acquired the slot\r\nand the subworker for the same subscription tries to acquire, but it cannot understand from comments.\r\nHow about adding comments, or an assertion that acquired_by is same as session_replication_state->acquired_by ?\r\nMoreover acquired_by should be compared with InvalidPid, based on above comments.\r\n\r\n06. proto.c\r\n\r\n void\r\n logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\r\n- TransactionId subxid)\r\n+ ReorderBufferTXN *txn, XLogRecPtr abort_lsn,\r\n+ bool write_abort_lsn\r\n\r\nI think write_abort_lsn may be not needed,\r\nbecause abort_lsn can be used for controlling whether abort_XXX fields should be filled or not.\r\n\r\n07. worker.c\r\n\r\n+/*\r\n+ * The number of changes during one streaming block (only for apply background\r\n+ * workers)\r\n+ */\r\n+static uint32 nchanges = 0;\r\n\r\nThis variable is used only by the main apply worker, so the comment seems not correct.\r\nHow about \"...(only for SUBSTREAM_PARALLEL case)\"?\r\n\r\nv23-0005\r\n\r\n08. 
monitoring.sgml\r\n\r\nI cannot decide which option proposed in [1] is better, but followings descriptions are needed in both cases.\r\n(In [2] I had intended to propose something like option 2)\r\n\r\n08.a\r\n\r\nYou can add a description that the field 'relid' will be NULL even for apply background worker.\r\n\r\n08.b\r\n\r\nYou can add a description that fields 'received_lsn', 'last_msg_send_time', 'last_msg_receipt_time',\r\n'latest_end_lsn', 'latest_end_time' will be NULL for apply background worker.\r\n\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPuPwdwZqXBJjtU%2BR9NULbOpxMG%3Di2hmqgg%2B7p0rmK0hrw%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB58660B4732E7F80B322174A3F5629%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 22 Aug 2022 12:49:53 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 22, 2022 at 7:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 22, 2022 at 4:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Aug 19, 2022 at 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Aug 19, 2022 at 3:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Fri, Aug 19, 2022 at 7:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Aug 19, 2022 at 2:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > > >\n> > > > > > Here are my review comments for the v23-0005 patch:\n> > > > > >\n> > > > > > ======\n> > > > > >\n> > > > > > Commit Message says:\n> > > > > > main_worker_pid is Process ID of the main apply worker, if this process is a\n> > > > > > apply background worker. 
NULL if this process is a main apply worker or a\n> > > > > > synchronization worker.\n> > > > > > The new column can make it easier to distinguish main apply worker and apply\n> > > > > > background worker.\n> > > > > >\n> > > > > > --\n> > > > > >\n> > > > > > Having a column called ‘main_worker_pid’ which is defined to be NULL\n> > > > > > if the process *is* the main apply worker does not make any sense to\n> > > > > > me.\n> > > > > >\n> > > > >\n> > > > > I haven't read this part of a patch but it seems to me we have\n> > > > > something similar for parallel query workers. Refer 'leader_pid'\n> > > > > column in pg_stat_activity.\n> > > > >\n> > > >\n> > > > IIUC (from the patch 0005 commit message) the intention is to be able\n> > > > to easily distinguish the worker types.\n> > > >\n> > >\n> > > I think it is only to distinguish between leader apply worker and\n> > > background apply workers. The tablesync worker can be distinguished\n> > > based on relid field.\n> > >\n> >\n> > Right. But that's the reason for my question in the first place - why\n> > implement the patch so that the user still has to jump through hoops\n> > just to know the worker type information?\n> >\n>\n> I think it is not only to judge worker type but also to know the pid\n> of each of the workers during parallel apply. Isn't it better to have\n> both main apply worker pid and parallel apply worker pid as we have\n> for the parallel query system?\n>\n\nOK, thanks for pointing me to that other view. Now that I see the\nexisting pg_stat_activity already has 'pid' and 'leader_pid' [1], it\nsuddenly seems more reasonable to do similar for this\npg_stat_subscription.\n\nThis background information needs to be conveyed better in the patch\n0005 commit message. 
The current commit message said nothing about\ntrying to be consistent with the existing stats views; it only says\nthis field was added to distinguish more easily between the types of\napply workers.\n\n------\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 23 Aug 2022 14:01:52 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 22, 2022 at 10:49 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n\n> 04. launcher.c\n>\n> 04.a\n>\n> + worker->main_worker_pid = is_subworker ? MyProcPid : 0;\n>\n> You can use InvalidPid instead of 0.\n> (I thought pid should be represented by the datatype pid_t, but in some codes it is defined as int...)\n>\n> 04.b\n>\n> + worker->main_worker_pid = 0;\n>\n> You can use InvalidPid instead of 0, same as above.\n>\n> 05. origin.c\n>\n> void\n> -replorigin_session_setup(RepOriginId node)\n> +replorigin_session_setup(RepOriginId node, int acquired_by)\n>\n> IIUC the same slot can be used only when the apply main worker has already acquired the slot\n> and the subworker for the same subscription tries to acquire, but it cannot understand from comments.\n> How about adding comments, or an assertion that acquired_by is same as session_replication_state->acquired_by ?\n> Moreover acquired_by should be compared with InvalidPid, based on above comments.\n>\n\nIn general I agree, and I also suggested to use pid_t and InvalidPid\n(at least for all the new code)\n\nIn practice, please be aware that InvalidPid is -1 (not 0), so\nreplacing any existing code (e.g. 
in replorigin_session_setup) that\nwas already checking for 0 has to be done with lots of care.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 23 Aug 2022 14:20:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang,\r\n\r\nFollowings are my comments about v23-0003. Currently I do not have any comments about 0002 and 0004.\r\n\r\n09. general\r\n\r\nIt seems that logicalrep_rel_mark_parallel_apply() is always called when relations are opened on the subscriber-side,\r\nbut is it really needed? There checks are required only for streaming parallel apply,\r\nso it may be not needed in case of streaming = 'on' or 'off'.\r\n\r\n10. commit message\r\n\r\n 2) There cannot be any non-immutable functions used by the subscriber-side\r\n replicated table. Look for functions in the following places:\r\n * a. Trigger functions\r\n * b. Column default value expressions and domain constraints\r\n * c. Constraint expressions\r\n * d. Foreign keys\r\n\r\n\"Foreign key\" should not be listed here because it is not related with the mutability. I think it should be listed as 3), not d..\r\n\r\n11. create_subscription.sgml\r\n\r\nThe constraint about foreign key should be described here.\r\n\r\n11. 
relation.c\r\n\r\n11.a\r\n\r\n+ CacheRegisterSyscacheCallback(PROCOID,\r\n+ logicalrep_relmap_reset_parallel_cb,\r\n+ (Datum) 0);\r\n\r\nIsn't another syscache callback needed for pg_type?\r\nUsers can add any constraints via the ALTER DOMAIN command, but the added constraint may not be checked.\r\nI checked AlterDomainAddConstraint(), and it invalidates only the relcache for pg_type.\r\n\r\n11.b\r\n\r\n+ /*\r\n+ * If the column is of a DOMAIN type, determine whether\r\n+ * that domain has any CHECK expressions that are not\r\n+ * immutable.\r\n+ */\r\n+ if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)\r\n+ {\r\n\r\nI think the default value of the *domain* must also be checked here.\r\nI tested as follows.\r\n\r\n===\r\n1. created a domain that has a default value\r\nCREATE DOMAIN tmp INT DEFAULT 1 CHECK (VALUE > 0);\r\n\r\n2. created a table \r\nCREATE TABLE foo (id tmp PRIMARY KEY);\r\n\r\n3. checked pg_attribute and pg_class\r\nselect oid, relname, attname, atthasdef from pg_attribute, pg_class where pg_attribute.attrelid = pg_class.oid and pg_class.relname = 'foo' and attname = 'id';\r\n oid | relname | attname | atthasdef \r\n-------+---------+---------+-----------\r\n 16394 | foo | id | f\r\n(1 row)\r\n\r\nIt meant that functions might not be checked because the if-statement `if (att->atthasdef)` became false.\r\n===\r\n\r\n12. 015_stream.pl, 016_stream_subxact.pl, 022_twophase_cascade.pl, 023_twophase_stream.pl\r\n\r\n- my ($node_publisher, $node_subscriber, $appname, $is_parallel) = @_;\r\n+ my ($node_publisher, $node_subscriber, $appname) = @_;\r\n\r\nWhy is the parameter removed? I think the test that waits for the output\r\nfrom the apply background worker is meaningful.\r\n\r\n13. 
032_streaming_apply.pl\r\n\r\nThe filename seems too general because apply background workers are tested in above tests.\r\nHow about \"streaming_apply_constraint\" or something?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 23 Aug 2022 08:40:36 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, August 19, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Thu, Aug 18, 2022 at 5:14 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Aug 17, 2022 at 11:58 AM wangw.fnst@fujitsu.com\r\n> > <wangw.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Attach the new patches.\r\n> > >\r\n> >\r\n> > Few comments on v23-0001\r\n> > =======================\r\n> >\r\n> \r\n> Some more comments on v23-0001\r\n> ============================\r\n> 1.\r\n> static bool\r\n> handle_streamed_transaction(LogicalRepMsgType action, StringInfo s) { ...\r\n> - /* not in streaming mode */\r\n> - if (!in_streamed_transaction)\r\n> + /* Not in streaming mode and not in apply background worker. */ if\r\n> + (!(in_streamed_transaction || am_apply_bgworker()))\r\n> return false;\r\n> \r\n> This check appears a bit strange because ideally in bgworker\r\n> in_streamed_transaction should be false. I think we should set\r\n> in_streamed_transaction to true in apply_handle_stream_start() only when we\r\n> are going to write to file. Is there a reason for not doing the same?\r\n\r\nNo, I removed this.\r\n\r\n> 2.\r\n> + {\r\n> + /* This is the main apply worker. 
*/\r\n> + ApplyBgworkerInfo *wstate = apply_bgworker_find(xid);\r\n> +\r\n> + /*\r\n> + * Check if we are processing this transaction using an apply\r\n> + * background worker and if so, send the changes to that worker.\r\n> + */\r\n> + if (wstate)\r\n> + {\r\n> + /* Send STREAM ABORT message to the apply background worker. */\r\n> + apply_bgworker_send_data(wstate, s->len, s->data);\r\n> \r\n> Why at some places the patch needs to separately fetch ApplyBgworkerInfo\r\n> whereas at other places it directly uses stream_apply_worker to pass the data\r\n> to bgworker.\r\n> 3. Why apply_handle_stream_abort() or apply_handle_stream_prepare()\r\n> doesn't use apply_bgworker_active() to identify whether it needs to send the\r\n> information to bgworker?\r\n\r\nI think stream_apply_worker is only valid between STREAM_START and STREAM_END,\r\nBut it seems it's not clear from the code. So I added some comments and slightly refactor\r\nthe code.\r\n\r\n\r\n> 4. In apply_handle_stream_prepare(), apply_handle_stream_abort(), and some\r\n> other similar functions, the patch handles three cases (a) apply background\r\n> worker, (b) sending data to bgworker, (c) handling for streamed transaction in\r\n> apply worker. I think the code will look better if you move the respective code\r\n> for all three cases into separate functions. Surely, if the code to deal with each\r\n> of the cases is less then we don't need to move it to a separate function.\r\n\r\nRefactored and simplified.\r\n\r\n> 5.\r\n> @@ -1088,24 +1177,78 @@ apply_handle_stream_prepare(StringInfo s) { ...\r\n> + in_remote_transaction = false;\r\n> +\r\n> + /* Unlink the files with serialized changes and subxact info. 
*/\r\n> + stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid); } }\r\n> \r\n> in_remote_transaction = false;\r\n> ...\r\n> \r\n> We don't need to in_remote_transaction to false in multiple places.\r\n\r\nRemoved.\r\n\r\n> 6.\r\n> @@ -1177,36 +1311,93 @@ apply_handle_stream_start(StringInfo s) { ...\r\n> ...\r\n> + if (am_apply_bgworker())\r\n> {\r\n> - MemoryContext oldctx;\r\n> -\r\n> - oldctx = MemoryContextSwitchTo(ApplyContext);\r\n> + /*\r\n> + * Make sure the handle apply_dispatch methods are aware we're in a\r\n> + * remote transaction.\r\n> + */\r\n> + in_remote_transaction = true;\r\n> \r\n> - MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n> - FileSetInit(MyLogicalRepWorker->stream_fileset);\r\n> + /* Begin the transaction. */\r\n> + AcceptInvalidationMessages();\r\n> + maybe_reread_subscription();\r\n> \r\n> - MemoryContextSwitchTo(oldctx);\r\n> + StartTransactionCommand();\r\n> + BeginTransactionBlock();\r\n> + CommitTransactionCommand();\r\n> }\r\n> ...\r\n> \r\n> Why do we need to start a transaction here? Why can't it be done via\r\n> begin_replication_step() during the first operation apply? Is it because we may\r\n> need to define a save point in bgworker and we don't that information\r\n> beforehand? If so, then also, can't it be handled by\r\n> begin_replication_step() either by explicitly passing the information or\r\n> checking it there and then starting a transaction block? In any case, please add\r\n> a few comments to explain why this separate handling is required for\r\n> bgworker?\r\n\r\nThe transaction block is used to define the savepoint and I moved these\r\ncodes to the place where the savepoint is defined which looks better now.\r\n\r\n> 7. When we are already setting bgworker status as APPLY_BGWORKER_BUSY in\r\n> apply_bgworker_setup_dsm() then why do we need to set it again in\r\n> apply_bgworker_start()?\r\n\r\nRemoved.\r\n\r\n> 8. It is not clear to me how APPLY_BGWORKER_EXIT status is used. 
Is it required\r\n> for the cases where bgworker exists due to some error and then apply worker\r\n> uses it to detect that and exits? How other bgworkers would notice this, is it\r\n> done via apply_bgworker_check_status()?\r\n\r\nIt was used to detect the unexpected exit of bgworker and I have changed the design\r\nof this which is now similar to what we have in parallel query.\r\n\r\nAttach the new version patch set(v24) which address above comments.\r\nBesides, I added some logic which try to stop the bgworker at transaction end\r\nif there are enough workers in the pool.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 24 Aug 2022 13:47:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 22, 2022 20:50 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Thank you for updating the patch! Followings are comments about \r\n> v23-0001 and v23-0005.\r\n\r\nThanks for your comments.\r\n\r\n> v23-0001\r\n> \r\n> 01. logical-replication.sgml\r\n> \r\n> + <para>\r\n> + When the streaming mode is <literal>parallel</literal>, the finish LSN of\r\n> + failed transactions may not be logged. In that case, it may be necessary to\r\n> + change the streaming mode to <literal>on</literal> and cause the same\r\n> + conflicts again so the finish LSN of the failed transaction will be written\r\n> + to the server log. For the usage of finish LSN, please refer to <link\r\n> + linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\n> + SKIP</command></link>.\r\n> + </para>\r\n> \r\n> I was not sure about streaming='off' mode. Is there any reasons that \r\n> only ON mode is focused?\r\n\r\nAdded off.\r\n\r\n> 02. protocol.sgml\r\n> \r\n> + <varlistentry>\r\n> + <term>Int64 (XLogRecPtr)</term>\r\n> + <listitem>\r\n> + <para>\r\n> + The LSN of the abort. 
This field is available since protocol version\r\n> + 4.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> +\r\n> + <varlistentry>\r\n> + <term>Int64 (TimestampTz)</term>\r\n> + <listitem>\r\n> + <para>\r\n> + Abort timestamp of the transaction. The value is in number\r\n> + of microseconds since PostgreSQL epoch (2000-01-01). This field is\r\n> + available since protocol version 4.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> +\r\n> \r\n> It seems that changes are in the variablelist for stream commit.\r\n> I think these are included in the stream abort message, so it should be moved.\r\n\r\nFixed.\r\n\r\n> 03. decode.c\r\n> \r\n> - ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf-\r\n> >origptr);\r\n> + ReorderBufferForget(ctx->reorder, \r\n> + parsed->subxacts[i], buf-\r\n> >origptr,\r\n> + \r\n> + commit_time);\r\n> }\r\n> - ReorderBufferForget(ctx->reorder, xid, buf->origptr);\r\n> + ReorderBufferForget(ctx->reorder, xid, buf->origptr, \r\n> + commit_time);\r\n> \r\n> 'commit_time' has been passed as argument 'abort_time', I think it may \r\n> be confusing.\r\n> How about adding a comment above, like:\r\n> \"In case of streamed transactions, they are regarded as being aborted \r\n> at commit_time\"\r\n\r\nIIRC, I feel the comment above the loop might be more clear about this,\r\nbut I will think about it again. \r\n\r\n> 04. launcher.c\r\n> \r\n> 04.a\r\n> \r\n> + worker->main_worker_pid = is_subworker ? MyProcPid : 0;\r\n> \r\n> You can use InvalidPid instead of 0.\r\n> (I thought pid should be represented by the datatype pid_t, but in \r\n> some codes it is defined as int...)\r\n> \r\n> 04.b\r\n> \r\n> + worker->main_worker_pid = 0;\r\n> \r\n> You can use InvalidPid instead of 0, same as above.\r\n\r\nImproved\r\n\r\n> 05. 
origin.c\r\n> \r\n> void\r\n> -replorigin_session_setup(RepOriginId node)\r\n> +replorigin_session_setup(RepOriginId node, int acquired_by)\r\n> \r\n> IIUC the same slot can be used only when the apply main worker has \r\n> already acquired the slot and the subworker for the same subscription \r\n> tries to acquire, but it cannot understand from comments.\r\n> How about adding comments, or an assertion that acquired_by is same as \r\n> session_replication_state->acquired_by ?\r\n> Moreover acquired_by should be compared with InvalidPid, based on \r\n> above comments.\r\n\r\nI think we have tried to check if 'acquired_by' and acquired_by of\r\nslot are equal inside this function.\r\n\r\nI am not sure if it's a good idea to use InvalidPid here ,as we set\r\nsession_replication_state->acquired_by(int) to 0(instead of -1) to indicate\r\nthat no worker acquire it.\r\n\r\n> 06. proto.c\r\n> \r\n> void\r\n> logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\r\n> - TransactionId subxid)\r\n> + ReorderBufferTXN *txn, XLogRecPtr abort_lsn,\r\n> + bool \r\n> + write_abort_lsn\r\n> \r\n> I think write_abort_lsn may be not needed, because abort_lsn can be \r\n> used for controlling whether abort_XXX fields should be filled or not.\r\n\r\nI think if the subscriber's version is lower than 16 (which won't handle the abort_XXX fields),\r\nthen we don't need to send the abort_XXX fields either.\r\n\r\n> 07. worker.c\r\n> \r\n> +/*\r\n> + * The number of changes during one streaming block (only for apply\r\n> background\r\n> + * workers)\r\n> + */\r\n> +static uint32 nchanges = 0;\r\n> \r\n> This variable is used only by the main apply worker, so the comment \r\n> seems not correct.\r\n> How about \"...(only for SUBSTREAM_PARALLEL case)\"?\r\n\r\nThe previous comments seemed a bit confusing. I tried to improve this comments to this:\r\n```\r\nThe number of changes sent to apply background workers during one streaming block.\r\n```\r\n\r\n> v23-0005\r\n> \r\n> 08. 
monitoring.sgml\r\n> \r\n> I cannot decide which option proposed in [1] is better, but followings \r\n> descriptions are needed in both cases.\r\n> (In [2] I had intended to propose something like option 2)\r\n> \r\n> 08.a\r\n> \r\n> You can add a description that the field 'relid' will be NULL even for \r\n> apply background worker.\r\n> \r\n> 08.b\r\n> \r\n> You can add a description that fields 'received_lsn', \r\n> 'last_msg_send_time', 'last_msg_receipt_time', 'latest_end_lsn', \r\n> 'latest_end_time' will be NULL for apply background worker.\r\n\r\nImproved\r\n\r\nRegards,\r\nWang wei\r\n\r\n", "msg_date": "Wed, 24 Aug 2022 13:50:43 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Aug 18, 2022 11:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for patch v21-0001:\r\n> \r\n> Note - There are some \"general\" comments which will result in lots of \r\n> smaller changes. The subsequent \"detailed\" review comments have some \r\n> overlap with these general comments but I expect some will be missed \r\n> so please search/replace to fix all code related to those general \r\n> comments.\r\n\r\nThanks for your comments.\r\n\r\n> 1. GENERAL - main_worker_pid and replorigin_session_setup\r\n> \r\n> Quite a few of my subsequent review comments below are related to the \r\n> somewhat tricky (IMO) change to the code for this area. 
Here is a \r\n> summary of some things that can be done to clean/simplify this logic.\r\n> \r\n> 1a.\r\n> Make the existing replorigin_session_setup function just be a wrapper \r\n> that delegates to the other function passing the acquired_by as 0.\r\n> This is because in every case but one (in the apply bg worker main) we \r\n> are always passing 0, and IMO there is no need to spread the messy \r\n> extra param to places that do not use it.\r\n\r\nNot sure about this. I feel interface change should\r\nbe fine in major release.\r\n\r\n> 17. src/backend/replication/logical/applybgworker.c - \r\n> LogicalApplyBgworkerMain\r\n> \r\n> + MyLogicalRepWorker->last_send_time = MyLogicalRepWorker-\r\n> >last_recv_time =\r\n> + MyLogicalRepWorker->reply_time = 0;\r\n> +\r\n> + InitializeApplyWorker();\r\n> \r\n> Lots of things happen within InitializeApplyWorker(). I think this \r\n> call deserves at least some comment to say it does lots of common \r\n> initialization. And same for the other caller or this in the apply \r\n> main worker.\r\n\r\nI feel we can refer to the comments above/in the function InitializeApplyWorker.\r\n\r\n> 19.\r\n> + toc = shm_toc_create(PG_LOGICAL_APPLY_SHM_MAGIC,\r\n> dsm_segment_address(seg),\r\n> + segsize);\r\n\r\nSince toc is just same as the input address which I think should not be NULL.\r\nI think it's fine to skip the check here like what we did in other codes.\r\n\r\nshm_toc_create(uint64 magic, void *address, Size nbytes)\r\n{\r\n\tshm_toc *toc = (shm_toc *) address;\r\n\r\n> 20. src/backend/replication/logical/applybgworker.c - \r\n> apply_bgworker_setup\r\n> \r\n> I think this function could be refactored to be cleaner and share more \r\n> common logic.\r\n> \r\n> SUGGESTION\r\n> \r\n> /* Setup shared memory, and attempt launch. 
*/ if \r\n> (apply_bgworker_setup_dsm(wstate))\r\n> {\r\n> bool launched;\r\n> launched = logicalrep_worker_launch(MyLogicalRepWorker->dbid,\r\n> MySubscription->oid,\r\n> MySubscription->name,\r\n> MyLogicalRepWorker->userid,\r\n> InvalidOid,\r\n> dsm_segment_handle(wstate->dsm_seg));\r\n> if (launched)\r\n> {\r\n> ApplyBgworkersList = lappend(ApplyBgworkersList, wstate); \r\n> MemoryContextSwitchTo(oldcontext);\r\n> return wstate;\r\n> }\r\n> else\r\n> {\r\n> dsm_detach(wstate->dsm_seg);\r\n> wstate->dsm_seg = NULL;\r\n> }\r\n> }\r\n> \r\n> pfree(wstate);\r\n> MemoryContextSwitchTo(oldcontext);\r\n> return NULL;\r\n\r\nNot sure about this.\r\n\r\n> 36. src/backend/replication/logical/tablesync.c - \r\n> process_syncing_tables\r\n> \r\n> @@ -589,6 +590,9 @@ process_syncing_tables_for_apply(XLogRecPtr\r\n> current_lsn)\r\n> void\r\n> process_syncing_tables(XLogRecPtr current_lsn) {\r\n> + if (am_apply_bgworker())\r\n> + return;\r\n> +\r\n> \r\n> Perhaps should be a comment to describe why process_syncing_tables \r\n> should be skipped for the apply background worker?\r\n\r\nI might refactor this function soon, so didn't change for now.\r\nBut I will consider it.\r\n\r\n> 39. src/backend/replication/logical/worker.c - \r\n> handle_streamed_transaction\r\n> \r\n> + /* Not in streaming mode and not in apply background worker. */ if \r\n> + (!(in_streamed_transaction || am_apply_bgworker()))\r\n> return false;\r\n> IMO if you wanted to write the comment in that way then the code \r\n> should have matched it more closely like:\r\n> if (!in_streamed_transaction && !am_apply_bgworker())\r\n> \r\n> OTOH, if you want to keep the code as-is then the comment should be \r\n> worded slightly differently.\r\n\r\nI feel both the in_streamed_transaction flag and in bgworker indicate that\r\nwe are in streaming mode. So it seems the original /* Not in streaming mode */\r\nShould be fine.\r\n\r\n> 44. 
src/backend/replication/logical/worker.c - InitializeApplyWorker\r\n> \r\n> \r\n> +/*\r\n> + * Initialize the databse connection, in-memory subscription and \r\n> +necessary\r\n> + * config options.\r\n> + */\r\n> void\r\n> -ApplyWorkerMain(Datum main_arg)\r\n> 44b.\r\n> Should there be some more explanation in this comment to say that this \r\n> is common code for both the appl main workers and apply background \r\n> workers?\r\n> \r\n> 44c.\r\n> Following on from #44b, consider renaming this to something like\r\n> CommonApplyWorkerInit() to emphasize it is called from multiple \r\n> places?\r\n\r\nNot sure about this. if we change the bgworker name to parallel\r\napply worker in the future, it might be worth emphasizing this. So\r\nI will consider this.\r\n\r\n> 52.\r\n> \r\n> +/* Apply background worker setup and interactions */ extern \r\n> +ApplyBgworkerInfo *apply_bgworker_start(TransactionId xid); extern \r\n> +ApplyBgworkerInfo *apply_bgworker_find(TransactionId xid); extern \r\n> +void apply_bgworker_wait_for(ApplyBgworkerInfo *wstate, \r\n> +ApplyBgworkerStatus wait_for_status); extern void \r\n> +apply_bgworker_send_data(ApplyBgworkerInfo *wstate, Size\r\n> nbytes,\r\n> + const void *data);\r\n> +extern void apply_bgworker_free(ApplyBgworkerInfo *wstate); extern \r\n> +void apply_bgworker_check_status(void);\r\n> +extern void apply_bgworker_set_status(ApplyBgworkerStatus status); \r\n> +extern void apply_bgworker_subxact_info_add(TransactionId \r\n> +current_xid); extern void apply_bgworker_savepoint_name(Oid suboid, Oid relid,\r\n> + char *spname, int szsp);\r\n> \r\n> This big block of similarly named externs might as well be in \r\n> alphabetical order instead of apparently random.\r\n\r\nI think Amit has a good idea in [2].\r\nSo I tried to reorder these based on related functionality.\r\n\r\nThe reply to your comments #4.2 for patch 0004 in [3]:\r\n> 4.2\r\n> \r\n> @@ -166,17 +175,6 @@ CREATE TRIGGER tri_tab1_unsafe BEFORE INSERT ON \r\n> 
public.test_tab1 FOR EACH ROW EXECUTE PROCEDURE \r\n> trigger_func_tab1_unsafe(); ALTER TABLE test_tab1 ENABLE REPLICA \r\n> TRIGGER tri_tab1_unsafe;\r\n> -\r\n> -CREATE FUNCTION trigger_func_tab1_safe() RETURNS TRIGGER AS \\$\\$\r\n> - BEGIN\r\n> - RAISE NOTICE 'test for safe trigger function';\r\n> - RETURN NEW;\r\n> - END\r\n> -\\$\\$ language plpgsql;\r\n> -ALTER FUNCTION trigger_func_tab1_safe IMMUTABLE; -CREATE TRIGGER \r\n> tri_tab1_safe -BEFORE INSERT ON public.test_tab1 -FOR EACH ROW EXECUTE \r\n> PROCEDURE trigger_func_tab1_safe(); });\r\n> \r\n> I didn't understand why all this trigger_func_tab1_safe which was \r\n> added in patch 0003 is now getting removed in patch 0004. Maybe there \r\n> is some good reason, but it doesn't seem right to be adding code in \r\n> one patch and then removing it again in the next patch.\r\n\r\nBecause in 0003 we need to manually do something to let the test recover\r\nfrom the constraint failure, while in 0004 it can automatically retry.\r\n\r\nThe rest of your comments are improved as suggested.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPuAxW57fowiMrn%3D3%3D53sagmehiTSW0o1Q52MpR3phUmyw%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAA4eK1KpuQAk_fiqVXy16WkDrKPBwA9E61VpvLfkse-o31NNVA%40mail.gmail.com\r\n[3] - https://www.postgresql.org/message-id/CAHut%2BPtCRkTT_KNaqA5Fn6_T38BXtFn4Eb3Ct-AbNko91s-cjQ%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 24 Aug 2022 14:23:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, August 24, 2022 9:47 PM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Friday, August 19, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> >\r\n> > On Thu, Aug 18, 2022 at 5:14 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > On Wed, Aug 17, 
2022 at 11:58 AM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Attach the new patches.\r\n> > > >\r\n> > >\r\n> > > Few comments on v23-0001\r\n> > > =======================\r\n> > >\r\n> >\r\n> > Some more comments on v23-0001\r\n> > ============================\r\n> > 1.\r\n> > static bool\r\n> > handle_streamed_transaction(LogicalRepMsgType action, StringInfo s) { ...\r\n> > - /* not in streaming mode */\r\n> > - if (!in_streamed_transaction)\r\n> > + /* Not in streaming mode and not in apply background worker. */ if\r\n> > + (!(in_streamed_transaction || am_apply_bgworker()))\r\n> > return false;\r\n> >\r\n> > This check appears a bit strange because ideally in bgworker\r\n> > in_streamed_transaction should be false. I think we should set\r\n> > in_streamed_transaction to true in apply_handle_stream_start() only\r\n> > when we are going to write to file. Is there a reason for not doing the same?\r\n> \r\n> No, I removed this.\r\n> \r\n> > 2.\r\n> > + {\r\n> > + /* This is the main apply worker. */ ApplyBgworkerInfo *wstate =\r\n> > + apply_bgworker_find(xid);\r\n> > +\r\n> > + /*\r\n> > + * Check if we are processing this transaction using an apply\r\n> > + * background worker and if so, send the changes to that worker.\r\n> > + */\r\n> > + if (wstate)\r\n> > + {\r\n> > + /* Send STREAM ABORT message to the apply background worker. */\r\n> > + apply_bgworker_send_data(wstate, s->len, s->data);\r\n> >\r\n> > Why at some places the patch needs to separately fetch\r\n> > ApplyBgworkerInfo whereas at other places it directly uses\r\n> > stream_apply_worker to pass the data to bgworker.\r\n> > 3. Why apply_handle_stream_abort() or apply_handle_stream_prepare()\r\n> > doesn't use apply_bgworker_active() to identify whether it needs to\r\n> > send the information to bgworker?\r\n> \r\n> I think stream_apply_worker is only valid between STREAM_START and\r\n> STREAM_END, But it seems it's not clear from the code. 
So I added some\r\n> comments and slightly refactor the code.\r\n> \r\n> \r\n> > 4. In apply_handle_stream_prepare(), apply_handle_stream_abort(), and\r\n> > some other similar functions, the patch handles three cases (a) apply\r\n> > background worker, (b) sending data to bgworker, (c) handling for\r\n> > streamed transaction in apply worker. I think the code will look\r\n> > better if you move the respective code for all three cases into\r\n> > separate functions. Surely, if the code to deal with each of the cases is less then\r\n> we don't need to move it to a separate function.\r\n> \r\n> Refactored and simplified.\r\n> \r\n> > 5.\r\n> > @@ -1088,24 +1177,78 @@ apply_handle_stream_prepare(StringInfo s) { ...\r\n> > + in_remote_transaction = false;\r\n> > +\r\n> > + /* Unlink the files with serialized changes and subxact info. */\r\n> > + stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid); }\r\n> > + }\r\n> >\r\n> > in_remote_transaction = false;\r\n> > ...\r\n> >\r\n> > We don't need to in_remote_transaction to false in multiple places.\r\n> \r\n> Removed.\r\n> \r\n> > 6.\r\n> > @@ -1177,36 +1311,93 @@ apply_handle_stream_start(StringInfo s) { ...\r\n> > ...\r\n> > + if (am_apply_bgworker())\r\n> > {\r\n> > - MemoryContext oldctx;\r\n> > -\r\n> > - oldctx = MemoryContextSwitchTo(ApplyContext);\r\n> > + /*\r\n> > + * Make sure the handle apply_dispatch methods are aware we're in a\r\n> > + * remote transaction.\r\n> > + */\r\n> > + in_remote_transaction = true;\r\n> >\r\n> > - MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n> > - FileSetInit(MyLogicalRepWorker->stream_fileset);\r\n> > + /* Begin the transaction. */\r\n> > + AcceptInvalidationMessages();\r\n> > + maybe_reread_subscription();\r\n> >\r\n> > - MemoryContextSwitchTo(oldctx);\r\n> > + StartTransactionCommand();\r\n> > + BeginTransactionBlock();\r\n> > + CommitTransactionCommand();\r\n> > }\r\n> > ...\r\n> >\r\n> > Why do we need to start a transaction here? 
Why can't it be done via\r\n> > begin_replication_step() during the first operation apply? Is it\r\n> > because we may need to define a save point in bgworker and we don't\r\n> > that information beforehand? If so, then also, can't it be handled by\r\n> > begin_replication_step() either by explicitly passing the information\r\n> > or checking it there and then starting a transaction block? In any\r\n> > case, please add a few comments to explain why this separate handling\r\n> > is required for bgworker?\r\n> \r\n> The transaction block is used to define the savepoint and I moved these codes to\r\n> the place where the savepoint is defined which looks better now.\r\n> \r\n> > 7. When we are already setting bgworker status as APPLY_BGWORKER_BUSY\r\n> > in\r\n> > apply_bgworker_setup_dsm() then why do we need to set it again in\r\n> > apply_bgworker_start()?\r\n> \r\n> Removed.\r\n> \r\n> > 8. It is not clear to me how APPLY_BGWORKER_EXIT status is used. Is it\r\n> > required for the cases where bgworker exists due to some error and\r\n> > then apply worker uses it to detect that and exits? How other\r\n> > bgworkers would notice this, is it done via apply_bgworker_check_status()?\r\n> \r\n> It was used to detect the unexpected exit of bgworker and I have changed the\r\n> design of this which is now similar to what we have in parallel query.\r\n> \r\n> Attach the new version patch set(v24) which address above comments.\r\n> Besides, I added some logic which try to stop the bgworker at transaction end if\r\n> there are enough workers in the pool.\r\n\r\nAlso attach the result of performance test based on v23 patch.\r\n\r\nThis test used synchronous logical replication, and compared SQL execution\r\ntimes before and after applying the patch. This is tested by varying\r\nlogical_decoding_work_mem.\r\n\r\nThe test was performed ten times, and the average of the middle eight was taken.\r\n\r\nThe results are as follows. 
The bar chart and the details of the test are attached.\r\n\r\nRESULT - bulk insert (5kk)\r\n----------------------------------\r\nlogical_decoding_work_mem 64kB 128kB 256kB 512kB 1MB 2MB 4MB 8MB 16MB 32MB 64MB\r\nHEAD 46.940 46.428 46.663 46.373 46.339 46.838 50.346 50.536 50.452 50.582 47.491\r\npatched 33.942 33.780 30.760 30.760 29.992 30.076 30.827 33.420 33.966 34.133 31.096\r\n\r\nFor different logical_decoding_work_mem size, it takes\r\nabout 30% ~ 40% less time, which looks good.\r\n\r\nSome other tests are still in progress, might share them later.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 24 Aug 2022 15:05:29 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Aug 24, 2022 at 7:17 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, August 19, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n>\n> > 8. It is not clear to me how APPLY_BGWORKER_EXIT status is used. Is it required\n> > for the cases where bgworker exists due to some error and then apply worker\n> > uses it to detect that and exits? 
How other bgworkers would notice this, is it\n> > done via apply_bgworker_check_status()?\n>\n> It was used to detect the unexpected exit of bgworker and I have changed the design\n> of this which is now similar to what we have in parallel query.\n>\n\nThanks, this looks better.\n\n> Attach the new version patch set(v24) which address above comments.\n> Besides, I added some logic which try to stop the bgworker at transaction end\n> if there are enough workers in the pool.\n>\n\nI think this deserves an explanation in worker.c under the title:\n\"Separate background workers\" in the patch.\n\nReview comments for v24-0001\n=========================\n1.\n+ * cost of searhing the hash table\n\n/searhing/searching\n\n2.\n+/*\n+ * Apply background worker states.\n+ */\n+typedef enum ApplyBgworkerState\n+{\n+ APPLY_BGWORKER_BUSY, /* assigned to a transaction */\n+ APPLY_BGWORKER_FINISHED /* transaction is completed */\n+} ApplyBgworkerState;\n\nNow, that there are just two states, can we think to represent them\nvia a flag ('available'/'in_use') or do you see a downside with that\nas compared to the current approach?\n\n3.\n-replorigin_session_setup(RepOriginId node)\n+replorigin_session_setup(RepOriginId node, int apply_leader_pid)\n\nI have mentioned previously that we don't need anything specific to\napply worker/leader in this API, so why this change? The other idea\nthat occurred to me is that can we use replorigin_session_reset()\nbefore sending the commit message to bgworker and then do the session\nsetup in bgworker only to handle the commit/abort/prepare message. We\nalso need to set it again for the leader apply worker after the leader\nworker completes the wait for bgworker to finish the commit handling.\n\n4. Unlike parallel query, here we seem to be creating separate DSM for\neach worker, and probably the difference is due to the fact that here\nwe don't know upfront how many workers will actually be required. 
If\nso, can we write some comments for the same in worker.c where you have\nexplained about parallel bgwroker stuff?\n\n5.\n/*\n- * Handle streamed transactions.\n+ * Handle streamed transactions for both the main apply worker and the apply\n+ * background workers.\n\nShall we use leader apply worker in the above comment? Also, check\nother places in the patch for similar changes.\n\n6.\n+ else\n+ {\n\n- /* open the spool file for this transaction */\n- stream_open_file(MyLogicalRepWorker->subid, stream_xid, first_segment);\n+ /* notify handle methods we're processing a remote transaction */\n+ in_streamed_transaction = true;\n\nThere is a spurious line after else {. Also, the comment could be\nslightly improved: \"/* notify handle methods we're processing a remote\nin-progress transaction */\"\n\n7. The checks in various apply_handle_stream_* functions have improved\nas compared to the previous version but I think we can still improve\nthose. One idea could be to use a separate function to decide the\naction we want to take and then based on it, the caller can take\nappropriate action. Using a similar idea, we can improve the checks in\nhandle_streamed_transaction() as well.\n\n8.\n+ else if ((winfo = apply_bgworker_find(xid)))\n+ {\n+ /* Send STREAM ABORT message to the apply background worker. */\n+ apply_bgworker_send_data(winfo, s->len, s->data);\n+\n+ /*\n+ * After sending the data to the apply background worker, wait for\n+ * that worker to finish. 
This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ if (subxid == xid)\n+ {\n+ apply_bgworker_wait_for(winfo, APPLY_BGWORKER_FINISHED);\n+ apply_bgworker_free(winfo);\n+ }\n+ }\n+ else\n+ /*\n+ * We are in main apply worker and the transaction has been\n+ * serialized to file.\n+ */\n+ serialize_stream_abort(xid, subxid);\n\nIn the last else block, you can use {} to make it consistent with\nother if, else checks.\n\n9.\n+void\n+ApplyBgworkerMain(Datum main_arg)\n+{\n+ volatile ApplyBgworkerShared *shared;\n+\n+ dsm_handle handle;\n\nIs there a need to keep this empty line between the above two declarations?\n\n10.\n+ /*\n+ * Attach to the message queue.\n+ */\n+ mq = shm_toc_lookup(toc, APPLY_BGWORKER_KEY_ERROR_QUEUE, false);\n\nHere, we should say error queue in the comments.\n\n11.\n+ /*\n+ * Attach to the message queue.\n+ */\n+ mq = shm_toc_lookup(toc, APPLY_BGWORKER_KEY_ERROR_QUEUE, false);\n+ shm_mq_set_sender(mq, MyProc);\n+ error_mqh = shm_mq_attach(mq, seg, NULL);\n+ pq_redirect_to_shm_mq(seg, error_mqh);\n+\n+ /*\n+ * Now, we have initialized DSM. 
Attach to slot.\n+ */\n+ logicalrep_worker_attach(worker_slot);\n+ MyParallelShared->logicalrep_worker_generation =\nMyLogicalRepWorker->generation;\n+ MyParallelShared->logicalrep_worker_slot_no = worker_slot;\n+\n+ pq_set_parallel_leader(MyLogicalRepWorker->apply_leader_pid,\n+ InvalidBackendId);\n\nIs there a reason to set parallel_leader immediately after\npq_redirect_to_shm_mq() as we are doing parallel.c?\n\n12.\nif (pq_mq_parallel_leader_pid != 0)\n+ {\n SendProcSignal(pq_mq_parallel_leader_pid,\n PROCSIG_PARALLEL_MESSAGE,\n pq_mq_parallel_leader_backend_id);\n\n+ /*\n+ * XXX maybe we can reuse the PROCSIG_PARALLEL_MESSAGE instead of\n+ * introducing a new signal reason.\n+ */\n+ SendProcSignal(pq_mq_parallel_leader_pid,\n+ PROCSIG_APPLY_BGWORKER_MESSAGE,\n+ pq_mq_parallel_leader_backend_id);\n+ }\n\nI think we don't need to send both signals. Here, we can check if this\nis a parallel worker (IsParallelWorker), then send\nPROCSIG_PARALLEL_MESSAGE, otherwise, send\nPROCSIG_APPLY_BGWORKER_MESSAGE message. In the else part, we can have\nan assert to ensure it is an apply bgworker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Aug 2022 17:02:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Aug 11, 2022 at 12:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Since we will later consider applying non-streamed transactions in parallel, I\n> > think \"apply streaming worker\" might not be very suitable. I think PostgreSQL\n> > also has the worker \"parallel worker\", so for \"apply parallel worker\" and\n> > \"apply background worker\", I feel that \"apply background worker\" will make the\n> > relationship between workers more clear. 
(\"[main] apply worker\" and \"apply\n> > background worker\")\n> >\n>\n> But, on similar lines, we do have vacuumparallel.c for parallelizing\n> index vacuum. I agree with Kuroda-San on this point that the currently\n> proposed terminology doesn't sound to be very clear. The other options\n> that come to my mind are \"apply streaming transaction worker\", \"apply\n> parallel worker\" and file name could be applystreamworker.c,\n> applyparallel.c, applyparallelworker.c, etc. I see the point why you\n> are hesitant in calling it \"apply parallel worker\" but it is quite\n> possible that even for non-streamed xacts, we will share quite some\n> part of this code.\n\nI think the \"apply streaming transaction worker\" is a good option\nw.r.t. what we are currently doing but then in the future, if we want\nto apply normal transactions in parallel then we will have to again\nchange the name. So I think \"apply parallel worker\" might look\nbetter and the file name could be \"applyparallelworker.c\" or just\n\"parallelworker.c\". Although \"parallelworker.c\" file name is a bit\ngeneric but we already have worker.c so w.r.t that \"parallelworker.c\"\nshould just look fine. At least that is what I think.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Aug 2022 09:29:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Aug 26, 2022 at 9:30 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Aug 11, 2022 at 12:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Since we will later consider applying non-streamed transactions in parallel, I\n> > > think \"apply streaming worker\" might not be very suitable. 
I think PostgreSQL\n> > > also has the worker \"parallel worker\", so for \"apply parallel worker\" and\n> > > \"apply background worker\", I feel that \"apply background worker\" will make the\n> > > relationship between workers more clear. (\"[main] apply worker\" and \"apply\n> > > background worker\")\n> > >\n> >\n> > But, on similar lines, we do have vacuumparallel.c for parallelizing\n> > index vacuum. I agree with Kuroda-San on this point that the currently\n> > proposed terminology doesn't sound to be very clear. The other options\n> > that come to my mind are \"apply streaming transaction worker\", \"apply\n> > parallel worker\" and file name could be applystreamworker.c,\n> > applyparallel.c, applyparallelworker.c, etc. I see the point why you\n> > are hesitant in calling it \"apply parallel worker\" but it is quite\n> > possible that even for non-streamed xacts, we will share quite some\n> > part of this code.\n>\n> I think the \"apply streaming transaction worker\" is a good option\n> w.r.t. what we are currently doing but then in the future, if we want\n> to apply normal transactions in parallel then we will have to again\n> change the name. So I think \"apply parallel worker\" might look\n> better and the file name could be \"applyparallelworker.c\" or just\n> \"parallelworker.c\". Although \"parallelworker.c\" file name is a bit\n> generic but we already have worker.c so w.r.t that \"parallelworker.c\"\n> should just look fine.\n>\n\nYeah based on that theory, we can go with parallelworker.c but my vote\nis to go with applyparallelworker.c among the above as that is more\nclear. 
I feel worker.c is already not a very good name where we are\ndoing the work related to apply, so it won't be advisable to go down\nthat path further.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Aug 2022 11:18:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, August 25, 2022 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Aug 24, 2022 at 7:17 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, August 19, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > >\r\n> >\r\n> > > 8. It is not clear to me how APPLY_BGWORKER_EXIT status is used. Is it\r\n> required\r\n> > > for the cases where bgworker exists due to some error and then apply\r\n> worker\r\n> > > uses it to detect that and exits? How other bgworkers would notice this, is\r\n> it\r\n> > > done via apply_bgworker_check_status()?\r\n> >\r\n> > It was used to detect the unexpected exit of bgworker and I have changed\r\n> the design\r\n> > of this which is now similar to what we have in parallel query.\r\n> >\r\n> \r\n> Thanks, this looks better.\r\n> \r\n> > Attach the new version patch set(v24) which address above comments.\r\n> > Besides, I added some logic which try to stop the bgworker at transaction\r\n> end\r\n> > if there are enough workers in the pool.\r\n> >\r\n> \r\n> I think this deserves an explanation in worker.c under the title:\r\n> \"Separate background workers\" in the patch.\r\n> \r\n> Review comments for v24-0001\r\n\r\nThanks for the comments.\r\n\r\n> =========================\r\n> 1.\r\n> + * cost of searhing the hash table\r\n> \r\n> /searhing/searching\r\n\r\nFixed.\r\n\r\n> 2.\r\n> +/*\r\n> + * Apply background worker states.\r\n> + */\r\n> +typedef enum ApplyBgworkerState\r\n> +{\r\n> + APPLY_BGWORKER_BUSY, /* assigned to a 
transaction */\r\n> + APPLY_BGWORKER_FINISHED /* transaction is completed */\r\n> +} ApplyBgworkerState;\r\n> \r\n> Now, that there are just two states, can we think to represent them\r\n> via a flag ('available'/'in_use') or do you see a downside with that\r\n> as compared to the current approach?\r\n\r\nChanged to in_use.\r\n\r\n> 3.\r\n> -replorigin_session_setup(RepOriginId node)\r\n> +replorigin_session_setup(RepOriginId node, int apply_leader_pid)\r\n> \r\n> I have mentioned previously that we don't need anything specific to\r\n> apply worker/leader in this API, so why this change? The other idea\r\n> that occurred to me is that can we use replorigin_session_reset()\r\n> before sending the commit message to bgworker and then do the session\r\n> setup in bgworker only to handle the commit/abort/prepare message. We\r\n> also need to set it again for the leader apply worker after the leader\r\n> worker completes the wait for bgworker to finish the commit handling.\r\n\r\nI have reverted the changes related to replorigin_session_setup and used\r\nthe suggested approach. I also did some simple performance tests for this approach\r\nand didn't see some obvious overhead as the replorigin_session_setup is invoked\r\nper streaming transaction.\r\n\r\n> 4. Unlike parallel query, here we seem to be creating separate DSM for\r\n> each worker, and probably the difference is due to the fact that here\r\n> we don't know upfront how many workers will actually be required. If\r\n> so, can we write some comments for the same in worker.c where you have\r\n> explained about parallel bgwroker stuff?\r\n\r\nAdded.\r\n\r\n> 5.\r\n> /*\r\n> - * Handle streamed transactions.\r\n> + * Handle streamed transactions for both the main apply worker and the apply\r\n> + * background workers.\r\n> \r\n> Shall we use leader apply worker in the above comment? 
Also, check\r\n> other places in the patch for similar changes.\r\n\r\nChanged.\r\n\r\n> 6.\r\n> + else\r\n> + {\r\n> \r\n> - /* open the spool file for this transaction */\r\n> - stream_open_file(MyLogicalRepWorker->subid, stream_xid, first_segment);\r\n> + /* notify handle methods we're processing a remote transaction */\r\n> + in_streamed_transaction = true;\r\n> \r\n> There is a spurious line after else {. Also, the comment could be\r\n> slightly improved: \"/* notify handle methods we're processing a remote\r\n> in-progress transaction */\"\r\n\r\nChanged.\r\n\r\n> 7. The checks in various apply_handle_stream_* functions have improved\r\n> as compared to the previous version but I think we can still improve\r\n> those. One idea could be to use a separate function to decide the\r\n> action we want to take and then based on it, the caller can take\r\n> appropriate action. Using a similar idea, we can improve the checks in\r\n> handle_streamed_transaction() as well.\r\n\r\nImproved as suggested.\r\n\r\n> 8.\r\n> + else if ((winfo = apply_bgworker_find(xid)))\r\n> + {\r\n> + /* Send STREAM ABORT message to the apply background worker. */\r\n> + apply_bgworker_send_data(winfo, s->len, s->data);\r\n> +\r\n> + /*\r\n> + * After sending the data to the apply background worker, wait for\r\n> + * that worker to finish. 
This is necessary to maintain commit\r\n> + * order which avoids failures due to transaction dependencies and\r\n> + * deadlocks.\r\n> + */\r\n> + if (subxid == xid)\r\n> + {\r\n> + apply_bgworker_wait_for(winfo, APPLY_BGWORKER_FINISHED);\r\n> + apply_bgworker_free(winfo);\r\n> + }\r\n> + }\r\n> + else\r\n> + /*\r\n> + * We are in main apply worker and the transaction has been\r\n> + * serialized to file.\r\n> + */\r\n> + serialize_stream_abort(xid, subxid);\r\n> \r\n> In the last else block, you can use {} to make it consistent with\r\n> other if, else checks.\r\n> \r\n> 9.\r\n> +void\r\n> +ApplyBgworkerMain(Datum main_arg)\r\n> +{\r\n> + volatile ApplyBgworkerShared *shared;\r\n> +\r\n> + dsm_handle handle;\r\n> \r\n> Is there a need to keep this empty line between the above two declarations?\r\n\r\nRemoved.\r\n\r\n> 10.\r\n> + /*\r\n> + * Attach to the message queue.\r\n> + */\r\n> + mq = shm_toc_lookup(toc, APPLY_BGWORKER_KEY_ERROR_QUEUE, false);\r\n> \r\n> Here, we should say error queue in the comments.\r\n\r\nFixed.\r\n\r\n> 11.\r\n> + /*\r\n> + * Attach to the message queue.\r\n> + */\r\n> + mq = shm_toc_lookup(toc, APPLY_BGWORKER_KEY_ERROR_QUEUE, false);\r\n> + shm_mq_set_sender(mq, MyProc);\r\n> + error_mqh = shm_mq_attach(mq, seg, NULL);\r\n> + pq_redirect_to_shm_mq(seg, error_mqh);\r\n> +\r\n> + /*\r\n> + * Now, we have initialized DSM. 
Attach to slot.\r\n> + */\r\n> + logicalrep_worker_attach(worker_slot);\r\n> + MyParallelShared->logicalrep_worker_generation =\r\n> MyLogicalRepWorker->generation;\r\n> + MyParallelShared->logicalrep_worker_slot_no = worker_slot;\r\n> +\r\n> + pq_set_parallel_leader(MyLogicalRepWorker->apply_leader_pid,\r\n> + InvalidBackendId);\r\n> \r\n> Is there a reason to set parallel_leader immediately after\r\n> pq_redirect_to_shm_mq() as we are doing parallel.c?\r\n\r\nMoved the code.\r\n\r\n> 12.\r\n> if (pq_mq_parallel_leader_pid != 0)\r\n> + {\r\n> SendProcSignal(pq_mq_parallel_leader_pid,\r\n> PROCSIG_PARALLEL_MESSAGE,\r\n> pq_mq_parallel_leader_backend_id);\r\n> \r\n> + /*\r\n> + * XXX maybe we can reuse the PROCSIG_PARALLEL_MESSAGE instead of\r\n> + * introducing a new signal reason.\r\n> + */\r\n> + SendProcSignal(pq_mq_parallel_leader_pid,\r\n> + PROCSIG_APPLY_BGWORKER_MESSAGE,\r\n> + pq_mq_parallel_leader_backend_id);\r\n> + }\r\n> \r\n> I think we don't need to send both signals. Here, we can check if this\r\n> is a parallel worker (IsParallelWorker), then send\r\n> PROCSIG_PARALLEL_MESSAGE, otherwise, send\r\n> PROCSIG_APPLY_BGWORKER_MESSAGE message. 
In the else part, we can have\r\n> an assert to ensure it is an apply bgworker.\r\n\r\nChanged.\r\n\r\n\r\nAttach the new version patch set which addressed the above comments\r\nand comments from Amit[1] and Kuroda-san[2].\r\n\r\nAs discussed, I also renamed all the \"apply background worker\" and\r\nrelated stuff to \"apply parallel worker\".\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1%2B_oHZHoDooAR7QcYD2CeTUWNSwkqVcLWC2iQijAJC4Cg%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/TYAPR01MB58666A97D40AB8919D106AD5F5709%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 29 Aug 2022 11:31:42 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Aug 24, 2022 16:41 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Followings are my comments about v23-0003. Currently I do not have any \r\n> comments about 0002 and 0004.\r\n\r\nThanks for your comments.\r\n\r\n> 09. general\r\n> \r\n> It seems that logicalrep_rel_mark_parallel_apply() is always called \r\n> when relations are opened on the subscriber-side, but is it really \r\n> needed? These checks are required only for streaming parallel apply, \r\n> so it may not be needed in case of streaming = 'on' or 'off'.\r\n\r\nImproved.\r\nThis check is only performed when using apply background workers.\r\n\r\n> 10. commit message\r\n> \r\n> 2) There cannot be any non-immutable functions used by the subscriber-side\r\n> replicated table. Look for functions in the following places:\r\n> * a. Trigger functions\r\n> * b. Column default value expressions and domain constraints\r\n> * c. Constraint expressions\r\n> * d. Foreign keys\r\n> \r\n> \"Foreign key\" should not be listed here because it is not related with \r\n> the mutability.
I think it should be listed as 3), not d..\r\n\r\nImproved.\r\n\r\n> 11. create_subscription.sgml\r\n> \r\n> The constraint about foreign key should be described here.\r\n> \r\n> 11. relation.c\r\n> \r\n> 11.a\r\n> \r\n> + CacheRegisterSyscacheCallback(PROCOID,\r\n> + logicalrep_relmap_reset_parallel_cb,\r\n> + \r\n> + (Datum) 0);\r\n> \r\n> Isn't another syscache callback needed for pg_type?\r\n> Users can add any constraints via ALTER DOMAIN command, but the added \r\n> constraint may not be checked.\r\n> I checked AlterDomainAddConstraint(), and it invalidates only the \r\n> relcache for pg_type.\r\n> \r\n> 11.b\r\n> \r\n> + /*\r\n> + * If the column is of a DOMAIN type, determine whether\r\n> + * that domain has any CHECK expressions that are not\r\n> + * immutable.\r\n> + */\r\n> + if (get_typtype(att->atttypid) == TYPTYPE_DOMAIN)\r\n> + {\r\n> \r\n> I think the default value of *domain* must also be checked here.\r\n> I tested as follows.\r\n> \r\n> ===\r\n> 1. created a domain that has a default value CREATE DOMAIN tmp INT \r\n> DEFAULT 1 CHECK (VALUE > 0);\r\n> \r\n> 2. created a table\r\n> CREATE TABLE foo (id tmp PRIMARY KEY);\r\n> \r\n> 3. checked pg_attribute and pg_class\r\n> select oid, relname, attname, atthasdef from pg_attribute, pg_class \r\n> where pg_attribute.attrelid = pg_class.oid and pg_class.relname = \r\n> 'foo' and attname = 'id';\r\n> oid | relname | attname | atthasdef\r\n> -------+---------+---------+-----------\r\n> 16394 | foo | id | f\r\n> (1 row)\r\n> \r\n> It meant that functions might not be checked because the if-statement `if (att-\r\n> >atthasdef)` became false.\r\n> ===\r\n\r\nFixed.\r\nIn addition, to reduce duplicate validation, only the flag \"parallel_apply_safe\" is reset when pg_proc or pg_type changes.\r\n\r\n> 12.
015_stream.pl, 016_stream_subxact.pl, 022_twophase_cascade.pl, \r\n> 023_twophase_stream.pl\r\n> \r\n> - my ($node_publisher, $node_subscriber, $appname, $is_parallel) = @_;\r\n> + my ($node_publisher, $node_subscriber, $appname) = @_;\r\n> \r\n> Why the parameter is removed? I think the test that waits the output \r\n> from the apply background worker is meaningful.\r\n\r\nRevert this change.\r\nIn addition, made some modifications to the logs confirmed in these test files to\r\nensure the streamed transactions complete as expected using apply background worker.\r\n\r\n> 13. 032_streaming_apply.pl\r\n> \r\n> The filename seems too general because apply background workers are \r\n> tested in above tests.\r\n> How about \"streaming_apply_constraint\" or something?\r\n\r\nRenamed to 032_streaming_parallel_safety.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Mon, 29 Aug 2022 11:31:49 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Aug 29, 2022 at 5:01 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, August 25, 2022 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 11.\n> > + /*\n> > + * Attach to the message queue.\n> > + */\n> > + mq = shm_toc_lookup(toc, APPLY_BGWORKER_KEY_ERROR_QUEUE, false);\n> > + shm_mq_set_sender(mq, MyProc);\n> > + error_mqh = shm_mq_attach(mq, seg, NULL);\n> > + pq_redirect_to_shm_mq(seg, error_mqh);\n> > +\n> > + /*\n> > + * Now, we have initialized DSM. 
Attach to slot.\n> > + */\n> > + logicalrep_worker_attach(worker_slot);\n> > + MyParallelShared->logicalrep_worker_generation =\n> > MyLogicalRepWorker->generation;\n> > + MyParallelShared->logicalrep_worker_slot_no = worker_slot;\n> > +\n> > + pq_set_parallel_leader(MyLogicalRepWorker->apply_leader_pid,\n> > + InvalidBackendId);\n> >\n> > Is there a reason to set parallel_leader immediately after\n> > pq_redirect_to_shm_mq() as we are doing parallel.c?\n>\n> Moved the code.\n>\n\nSorry, if I was not clear but what I wanted was something like the below:\n\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 832e99cd48..6646e00658 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -480,6 +480,9 @@ ApplyParallelWorkerMain(Datum main_arg)\n mq = shm_toc_lookup(toc, PARALLEL_APPLY_KEY_ERROR_QUEUE, false);\n shm_mq_set_sender(mq, MyProc);\n error_mqh = shm_mq_attach(mq, seg, NULL);\n+ pq_redirect_to_shm_mq(seg, error_mqh);\n+ pq_set_parallel_leader(MyLogicalRepWorker->apply_leader_pid,\n+ InvalidBackendId);\n\n /*\n * Primary initialization is complete. Now, we can attach to\nour slot. 
This\n@@ -490,10 +493,6 @@ ApplyParallelWorkerMain(Datum main_arg)\n MyParallelShared->logicalrep_worker_generation =\nMyLogicalRepWorker->generation;\n MyParallelShared->logicalrep_worker_slot_no = worker_slot;\n\n- pq_redirect_to_shm_mq(seg, error_mqh);\n- pq_set_parallel_leader(MyLogicalRepWorker->apply_leader_pid,\n- InvalidBackendId);\n-\n MyLogicalRepWorker->last_send_time =\nMyLogicalRepWorker->last_recv_time =\n MyLogicalRepWorker->reply_time = 0;\n\n\nFew other comments on v25-0001*\n============================\n1.\n+ {\n+ {\"max_apply_parallel_workers_per_subscription\",\n+ PGC_SIGHUP,\n+ REPLICATION_SUBSCRIBERS,\n+ gettext_noop(\"Maximum number of apply parallel workers per subscription.\"),\n+ NULL,\n+ },\n+ &max_apply_parallel_workers_per_subscription,\n\nLet's model this to max_parallel_workers_per_gather and name this\nmax_parallel_apply_workers_per_subscription.\n\n\n+typedef struct ApplyParallelWorkerEntry\n+{\n+ TransactionId xid; /* Hash key -- must be first */\n+ ApplyParallelWorkerInfo *winfo;\n+} ApplyParallelWorkerEntry;\n+\n+/* Apply parallel workers hash table (initialized on first use). */\n+static HTAB *ApplyParallelWorkersHash = NULL;\n+static List *ApplyParallelWorkersFreeList = NIL;\n+static List *ApplyParallelWorkersList = NIL;\n\nSimilarly, for above let's name them as ParallelApply*. I think in\ncomments/doc changes it is better to refer as parallel apply worker.\nwe can keep filename as it is.\n\n\n2.\n+ * If there are enough apply parallel workers(reache half of the\n+ * max_apply_parallel_workers_per_subscription)\n\n/reache/reached. 
There should be a space before (.\n\n3.\n+ * The dynamic shared memory segment will contain (1) a shm_mq that can be used\n+ * to transport errors (and other messages reported via elog/ereport) from the\n+ * apply parallel worker to leader apply worker (2) another shm_mq that can\n+ * be used to transport changes in the transaction from leader apply worker to\n+ * apply parallel worker (3) necessary information to be shared among apply\n+ * parallel workers to leader apply worker\n\nI think it is better to use send instead of transport in above\nparagraph. In (3), /apply parallel workers to leader apply\nworker/apply parallel workers and leader apply worker\n\n4.\nhandle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n{\n...\n...\n+ else if (apply_action == TA_SEND_TO_PARALLEL_WORKER)\n+ {\n+ parallel_apply_send_data(winfo, s->len, s->data);\n\n\nIt is better to have an Assert for winfo being non-null here and other\nsimilar usages.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Aug 2022 12:12:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Aug 30, 2022 at 12:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few other comments on v25-0001*\n> ============================\n>\n\nSome more comments on v25-0001*:\n=============================\n1.\n+static void\n+apply_handle_stream_abort(StringInfo s)\n...\n...\n+ else if (apply_action == TA_SEND_TO_PARALLEL_WORKER)\n+ {\n+ if (subxid == xid)\n+ parallel_apply_replorigin_reset();\n+\n+ /* Send STREAM ABORT message to the apply parallel worker. */\n+ parallel_apply_send_data(winfo, s->len, s->data);\n+\n+ /*\n+ * After sending the data to the apply parallel worker, wait for\n+ * that worker to finish. 
This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ if (subxid == xid)\n+ {\n+ parallel_apply_wait_for_free(winfo);\n...\n...\n\n From this code, it appears that we are waiting for rollbacks to finish\nbut not doing the same in the rollback to savepoint cases. Is there a\nreason for the same? I think we need to wait for rollbacks to avoid\ntransaction dependency and deadlock issues. Consider the below case:\n\nConsider table t1 (c1 primary key, c2, c3) has a row (1, 2, 3) on both\npublisher and subscriber.\n\nPublisher\nSession-1\n==========\nBegin;\n...\nDelete from t1 where c1 = 1;\n\nSession-2\nBegin;\n...\ninsert into t1 values(1, 4, 5); --This will wait for Session-1's\nDelete to finish.\n\nSession-1\nRollback;\n\nSession-2\n-- The wait will be finished and the insert will be successful.\nCommit;\n\nNow, assume both these transactions get streamed and if we didn't wait\nfor rollback/rollback to savepoint, it is possible that the insert\ngets executed before and leads to a constraint violation. This won't\nhappen in non-parallel mode, so we should wait for rollbacks to\nfinish.\n\n2. I think we don't need to wait at Rollback Prepared/Commit Prepared\nbecause we wait for prepare to finish in *_stream_prepare function.\nThat will ensure all the operations in that transaction have happened\nin the subscriber, so no concurrent transaction can create deadlock or\ntransaction dependency issues. If so, I think it is better to explain\nthis in the comments.\n\n3.\n+/* What action to take for the transaction. */\n+typedef enum\n {\n- LogicalRepMsgType command; /* 0 if invalid */\n- LogicalRepRelMapEntry *rel;\n+ /* The action for non-streaming transactions. 
*/\n+ TA_APPLY_IN_LEADER_WORKER,\n\n- /* Remote node information */\n- int remote_attnum; /* -1 if invalid */\n- TransactionId remote_xid;\n- XLogRecPtr finish_lsn;\n- char *origin_name;\n-} ApplyErrorCallbackArg;\n+ /* Actions for streaming transactions. */\n+ TA_SERIALIZE_TO_FILE,\n+ TA_APPLY_IN_PARALLEL_WORKER,\n+ TA_SEND_TO_PARALLEL_WORKER\n+} TransactionApplyAction;\n\nI think each action needs explanation atop this enum typedef.\n\n4.\n@@ -1149,24 +1315,14 @@ static void\n apply_handle_stream_start(StringInfo s)\n{\n...\n+ else if (apply_action == TA_SERIALIZE_TO_FILE)\n+ {\n+ /*\n+ * For the first stream start, check if there is any free apply\n+ * parallel worker we can use to process this transaction.\n+ */\n+ if (first_segment)\n+ winfo = parallel_apply_start_worker(stream_xid);\n\n- /* open the spool file for this transaction */\n- stream_open_file(MyLogicalRepWorker->subid, stream_xid, first_segment);\n+ if (winfo)\n+ {\n+ /*\n+ * If we have found a free worker, then we pass the data to that\n+ * worker.\n+ */\n+ parallel_apply_send_data(winfo, s->len, s->data);\n\n- /* if this is not the first segment, open existing subxact file */\n- if (!first_segment)\n- subxact_info_read(MyLogicalRepWorker->subid, stream_xid);\n+ nchanges = 0;\n\n- pgstat_report_activity(STATE_RUNNING, NULL);\n+ /* Cache the apply parallel worker for this transaction. */\n+ stream_apply_worker = winfo;\n+ }\n...\n\nThis looks odd to me in the sense that even if the action is\nTA_SERIALIZE_TO_FILE, we still send the information to the parallel\nworker. Won't it be better if we call parallel_apply_start_worker()\nfor first_segment before checking apply_action with\nget_transaction_apply_action(). That way we can avoid this special\ncase handling.\n\n5.\n+/*\n+ * Struct for sharing information between apply leader apply worker and apply\n+ * parallel workers.\n+ */\n+typedef struct ApplyParallelWorkerShared\n+{\n+ slock_t mutex;\n+\n+ bool in_use;\n+\n+ /* Logical protocol version. 
*/\n+ uint32 proto_version;\n+\n+ TransactionId stream_xid;\n\nAre we using stream_xid passed by the leader in parallel worker? If\nso, how? If not, then can we do without this?\n\n6.\n+void\n+HandleParallelApplyMessages(void)\n{\n...\n+ /* OK to process messages. Reset the flag saying there are more to do. */\n+ ParallelApplyMessagePending = false;\n\nI don't understand the meaning of the second part of the comment.\nShouldn't we say: \"Reset the flag saying there is nothing more to\ndo.\"? I know you have copied from the other part of the code but there\nalso I am not sure if it is correct.\n\n7.\n+static List *ApplyParallelWorkersFreeList = NIL;\n+static List *ApplyParallelWorkersList = NIL;\n\nDo we really need to maintain two different workers' lists? If so,\nwhat is the advantage? I think there won't be many parallel apply\nworkers, so even if maintain one list and search it, there shouldn't\nbe any performance impact. I feel maintaining two lists for this\npurpose is a bit complex and has more chances of bugs, so we should\ntry to avoid it if possible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Aug 2022 17:21:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, August 30, 2022 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Aug 30, 2022 at 12:12 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > Few other comments on v25-0001*\r\n> > ============================\r\n> >\r\n> \r\n> Some more comments on v25-0001*:\r\n> =============================\r\n> 1.\r\n> +static void\r\n> +apply_handle_stream_abort(StringInfo s)\r\n> ...\r\n> ...\r\n> + else if (apply_action == TA_SEND_TO_PARALLEL_WORKER) { if (subxid ==\r\n> + xid) parallel_apply_replorigin_reset();\r\n> +\r\n> + /* Send STREAM ABORT message to the apply parallel worker. 
*/\r\n> + parallel_apply_send_data(winfo, s->len, s->data);\r\n> +\r\n> + /*\r\n> + * After sending the data to the apply parallel worker, wait for\r\n> + * that worker to finish. This is necessary to maintain commit\r\n> + * order which avoids failures due to transaction dependencies and\r\n> + * deadlocks.\r\n> + */\r\n> + if (subxid == xid)\r\n> + {\r\n> + parallel_apply_wait_for_free(winfo);\r\n> ...\r\n> ...\r\n> \r\n> From this code, it appears that we are waiting for rollbacks to finish but not\r\n> doing the same in the rollback to savepoint cases. Is there a reason for the\r\n> same? I think we need to wait for rollbacks to avoid transaction dependency\r\n> and deadlock issues. Consider the below case:\r\n> \r\n> Consider table t1 (c1 primary key, c2, c3) has a row (1, 2, 3) on both publisher and\r\n> subscriber.\r\n> \r\n> Publisher\r\n> Session-1\r\n> ==========\r\n> Begin;\r\n> ...\r\n> Delete from t1 where c1 = 1;\r\n> \r\n> Session-2\r\n> Begin;\r\n> ...\r\n> insert into t1 values(1, 4, 5); --This will wait for Session-1's Delete to finish.\r\n> \r\n> Session-1\r\n> Rollback;\r\n> \r\n> Session-2\r\n> -- The wait will be finished and the insert will be successful.\r\n> Commit;\r\n> \r\n> Now, assume both these transactions get streamed and if we didn't wait for\r\n> rollback/rollback to savepoint, it is possible that the insert gets executed\r\n> before and leads to a constraint violation. This won't happen in non-parallel\r\n> mode, so we should wait for rollbacks to finish.\r\n\r\nAgreed and changed.\r\n\r\n> 2. I think we don't need to wait at Rollback Prepared/Commit Prepared\r\n> because we wait for prepare to finish in *_stream_prepare function.\r\n> That will ensure all the operations in that transaction have happened in the\r\n> subscriber, so no concurrent transaction can create deadlock or transaction\r\n> dependency issues. 
If so, I think it is better to explain this in the comments.\r\n\r\nAdded some comments about this.\r\n\r\n> 3.\r\n> +/* What action to take for the transaction. */ typedef enum\r\n> {\r\n> - LogicalRepMsgType command; /* 0 if invalid */\r\n> - LogicalRepRelMapEntry *rel;\r\n> + /* The action for non-streaming transactions. */\r\n> + TA_APPLY_IN_LEADER_WORKER,\r\n> \r\n> - /* Remote node information */\r\n> - int remote_attnum; /* -1 if invalid */\r\n> - TransactionId remote_xid;\r\n> - XLogRecPtr finish_lsn;\r\n> - char *origin_name;\r\n> -} ApplyErrorCallbackArg;\r\n> + /* Actions for streaming transactions. */ TA_SERIALIZE_TO_FILE,\r\n> +TA_APPLY_IN_PARALLEL_WORKER, TA_SEND_TO_PARALLEL_WORKER }\r\n> +TransactionApplyAction;\r\n> \r\n> I think each action needs explanation atop this enum typedef.\r\n\r\nAdded.\r\n\r\n> 4.\r\n> @@ -1149,24 +1315,14 @@ static void\r\n> apply_handle_stream_start(StringInfo s) { ...\r\n> + else if (apply_action == TA_SERIALIZE_TO_FILE) {\r\n> + /*\r\n> + * For the first stream start, check if there is any free apply\r\n> + * parallel worker we can use to process this transaction.\r\n> + */\r\n> + if (first_segment)\r\n> + winfo = parallel_apply_start_worker(stream_xid);\r\n> \r\n> - /* open the spool file for this transaction */\r\n> - stream_open_file(MyLogicalRepWorker->subid, stream_xid, first_segment);\r\n> + if (winfo)\r\n> + {\r\n> + /*\r\n> + * If we have found a free worker, then we pass the data to that\r\n> + * worker.\r\n> + */\r\n> + parallel_apply_send_data(winfo, s->len, s->data);\r\n> \r\n> - /* if this is not the first segment, open existing subxact file */\r\n> - if (!first_segment)\r\n> - subxact_info_read(MyLogicalRepWorker->subid, stream_xid);\r\n> + nchanges = 0;\r\n> \r\n> - pgstat_report_activity(STATE_RUNNING, NULL);\r\n> + /* Cache the apply parallel worker for this transaction. 
*/\r\n> + stream_apply_worker = winfo; }\r\n> ...\r\n> \r\n> This looks odd to me in the sense that even if the action is\r\n> TA_SERIALIZE_TO_FILE, we still send the information to the parallel\r\n> worker. Won't it be better if we call parallel_apply_start_worker()\r\n> for first_segment before checking apply_action with\r\n> get_transaction_apply_action(). That way we can avoid this special\r\n> case handling.\r\n\r\nChanged as suggested.\r\n\r\n> 5.\r\n> +/*\r\n> + * Struct for sharing information between apply leader apply worker and apply\r\n> + * parallel workers.\r\n> + */\r\n> +typedef struct ApplyParallelWorkerShared\r\n> +{\r\n> + slock_t mutex;\r\n> +\r\n> + bool in_use;\r\n> +\r\n> + /* Logical protocol version. */\r\n> + uint32 proto_version;\r\n> +\r\n> + TransactionId stream_xid;\r\n> \r\n> Are we using stream_xid passed by the leader in parallel worker? If\r\n> so, how? If not, then can we do without this?\r\n\r\nNo, it seems we don't need this. Removed.\r\n\r\n> 6.\r\n> +void\r\n> +HandleParallelApplyMessages(void)\r\n> {\r\n> ...\r\n> + /* OK to process messages. Reset the flag saying there are more to do. */\r\n> + ParallelApplyMessagePending = false;\r\n> \r\n> I don't understand the meaning of the second part of the comment.\r\n> Shouldn't we say: \"Reset the flag saying there is nothing more to\r\n> do.\"? I know you have copied from the other part of the code but there\r\n> also I am not sure if it is correct.\r\n\r\nI feel the comment here is not very helpful, so I removed this.\r\n\r\n> 7.\r\n> +static List *ApplyParallelWorkersFreeList = NIL;\r\n> +static List *ApplyParallelWorkersList = NIL;\r\n> \r\n> Do we really need to maintain two different workers' lists? If so,\r\n> what is the advantage? I think there won't be many parallel apply\r\n> workers, so even if maintain one list and search it, there shouldn't\r\n> be any performance impact. 
I feel maintaining two lists for this\r\n> purpose is a bit complex and has more chances of bugs, so we should\r\n> try to avoid it if possible.\r\n\r\nAgreed, I removed the ApplyParallelWorkersFreeList and reused\r\nApplyParallelWorkersList in other places.\r\n\r\nAttach the new version patch set which addressed the above comments\r\nand comments from [1].\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1%2Be8JsiC8uMZPU25xQRyxNvVS24M4%3DZy-xD18jzX%2BvrmA%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 31 Aug 2022 09:55:45 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, August 31, 2022 5:56 PM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Tuesday, August 30, 2022 7:51 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Aug 30, 2022 at 12:12 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > Few other comments on v25-0001*\r\n> > > ============================\r\n> > >\r\n> >\r\n> > Some more comments on v25-0001*:\r\n> > =============================\r\n> > 1.\r\n> > +static void\r\n> > +apply_handle_stream_abort(StringInfo s)\r\n> > ...\r\n> > ...\r\n> > + else if (apply_action == TA_SEND_TO_PARALLEL_WORKER) { if (subxid ==\r\n> > + xid) parallel_apply_replorigin_reset();\r\n> > +\r\n> > + /* Send STREAM ABORT message to the apply parallel worker. */\r\n> > + parallel_apply_send_data(winfo, s->len, s->data);\r\n> > +\r\n> > + /*\r\n> > + * After sending the data to the apply parallel worker, wait for\r\n> > + * that worker to finish.
This is necessary to maintain commit\r\n> > + * order which avoids failures due to transaction dependencies and\r\n> > + * deadlocks.\r\n> > + */\r\n> > + if (subxid == xid)\r\n> > + {\r\n> > + parallel_apply_wait_for_free(winfo);\r\n> > ...\r\n> > ...\r\n> >\r\n> > From this code, it appears that we are waiting for rollbacks to finish\r\n> > but not doing the same in the rollback to savepoint cases. Is there a\r\n> > reason for the same? I think we need to wait for rollbacks to avoid\r\n> > transaction dependency and deadlock issues. Consider the below case:\r\n> >\r\n> > Consider table t1 (c1 primary key, c2, c3) has a row (1, 2, 3) on both\r\n> > publisher and subscriber.\r\n> >\r\n> > Publisher\r\n> > Session-1\r\n> > ==========\r\n> > Begin;\r\n> > ...\r\n> > Delete from t1 where c1 = 1;\r\n> >\r\n> > Session-2\r\n> > Begin;\r\n> > ...\r\n> > insert into t1 values(1, 4, 5); --This will wait for Session-1's Delete to finish.\r\n> >\r\n> > Session-1\r\n> > Rollback;\r\n> >\r\n> > Session-2\r\n> > -- The wait will be finished and the insert will be successful.\r\n> > Commit;\r\n> >\r\n> > Now, assume both these transactions get streamed and if we didn't wait\r\n> > for rollback/rollback to savepoint, it is possible that the insert\r\n> > gets executed before and leads to a constraint violation. This won't\r\n> > happen in non-parallel mode, so we should wait for rollbacks to finish.\r\n> \r\n> Agreed and changed.\r\n> \r\n> > 2. I think we don't need to wait at Rollback Prepared/Commit Prepared\r\n> > because we wait for prepare to finish in *_stream_prepare function.\r\n> > That will ensure all the operations in that transaction have happened\r\n> > in the subscriber, so no concurrent transaction can create deadlock or\r\n> > transaction dependency issues. If so, I think it is better to explain this in the\r\n> comments.\r\n> \r\n> Added some comments about this.\r\n> \r\n> > 3.\r\n> > +/* What action to take for the transaction. 
*/ typedef enum\r\n> > {\r\n> > - LogicalRepMsgType command; /* 0 if invalid */\r\n> > - LogicalRepRelMapEntry *rel;\r\n> > + /* The action for non-streaming transactions. */\r\n> > + TA_APPLY_IN_LEADER_WORKER,\r\n> >\r\n> > - /* Remote node information */\r\n> > - int remote_attnum; /* -1 if invalid */\r\n> > - TransactionId remote_xid;\r\n> > - XLogRecPtr finish_lsn;\r\n> > - char *origin_name;\r\n> > -} ApplyErrorCallbackArg;\r\n> > + /* Actions for streaming transactions. */ TA_SERIALIZE_TO_FILE,\r\n> > +TA_APPLY_IN_PARALLEL_WORKER, TA_SEND_TO_PARALLEL_WORKER }\r\n> > +TransactionApplyAction;\r\n> >\r\n> > I think each action needs explanation atop this enum typedef.\r\n> \r\n> Added.\r\n> \r\n> > 4.\r\n> > @@ -1149,24 +1315,14 @@ static void\r\n> > apply_handle_stream_start(StringInfo s) { ...\r\n> > + else if (apply_action == TA_SERIALIZE_TO_FILE) {\r\n> > + /*\r\n> > + * For the first stream start, check if there is any free apply\r\n> > + * parallel worker we can use to process this transaction.\r\n> > + */\r\n> > + if (first_segment)\r\n> > + winfo = parallel_apply_start_worker(stream_xid);\r\n> >\r\n> > - /* open the spool file for this transaction */\r\n> > - stream_open_file(MyLogicalRepWorker->subid, stream_xid,\r\n> > first_segment);\r\n> > + if (winfo)\r\n> > + {\r\n> > + /*\r\n> > + * If we have found a free worker, then we pass the data to that\r\n> > + * worker.\r\n> > + */\r\n> > + parallel_apply_send_data(winfo, s->len, s->data);\r\n> >\r\n> > - /* if this is not the first segment, open existing subxact file */\r\n> > - if (!first_segment)\r\n> > - subxact_info_read(MyLogicalRepWorker->subid, stream_xid);\r\n> > + nchanges = 0;\r\n> >\r\n> > - pgstat_report_activity(STATE_RUNNING, NULL);\r\n> > + /* Cache the apply parallel worker for this transaction. 
*/\r\n> > + stream_apply_worker = winfo; }\r\n> > ...\r\n> >\r\n> > This looks odd to me in the sense that even if the action is\r\n> > TA_SERIALIZE_TO_FILE, we still send the information to the parallel\r\n> > worker. Won't it be better if we call parallel_apply_start_worker()\r\n> > for first_segment before checking apply_action with\r\n> > get_transaction_apply_action(). That way we can avoid this special\r\n> > case handling.\r\n> \r\n> Changed as suggested.\r\n> \r\n> > 5.\r\n> > +/*\r\n> > + * Struct for sharing information between apply leader apply worker\r\n> > +and apply\r\n> > + * parallel workers.\r\n> > + */\r\n> > +typedef struct ApplyParallelWorkerShared { slock_t mutex;\r\n> > +\r\n> > + bool in_use;\r\n> > +\r\n> > + /* Logical protocol version. */\r\n> > + uint32 proto_version;\r\n> > +\r\n> > + TransactionId stream_xid;\r\n> >\r\n> > Are we using stream_xid passed by the leader in parallel worker? If\r\n> > so, how? If not, then can we do without this?\r\n> \r\n> No, it seems we don't need this. Removed.\r\n> \r\n> > 6.\r\n> > +void\r\n> > +HandleParallelApplyMessages(void)\r\n> > {\r\n> > ...\r\n> > + /* OK to process messages. Reset the flag saying there are more to\r\n> > + do. */ ParallelApplyMessagePending = false;\r\n> >\r\n> > I don't understand the meaning of the second part of the comment.\r\n> > Shouldn't we say: \"Reset the flag saying there is nothing more to\r\n> > do.\"? I know you have copied from the other part of the code but there\r\n> > also I am not sure if it is correct.\r\n> \r\n> I feel the comment here is not very helpful, so I removed this.\r\n> \r\n> > 7.\r\n> > +static List *ApplyParallelWorkersFreeList = NIL; static List\r\n> > +*ApplyParallelWorkersList = NIL;\r\n> >\r\n> > Do we really need to maintain two different workers' lists? If so,\r\n> > what is the advantage? 
I think there won't be many parallel apply\r\n> > workers, so even if maintain one list and search it, there shouldn't\r\n> > be any performance impact. I feel maintaining two lists for this\r\n> > purpose is a bit complex and has more chances of bugs, so we should\r\n> > try to avoid it if possible.\r\n> \r\n> Agreed, I removed the ApplyParallelWorkersFreeList and reused\r\n> ApplyParallelWorkersList in other places.\r\n> \r\n> Attach the new version patch set which addressed the above comments and\r\n> comments from [1].\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/CAA4eK1%2Be8JsiC8uMZPU25xQRy\r\n> xNvVS24M4%3DZy-xD18jzX%2BvrmA%40mail.gmail.com\r\n\r\nAttach a new version patch set which fixes some typos and some cosmetic things.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 1 Sep 2022 11:23:22 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 1, 2022 at 4:53 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nReview of v27-0001*:\n================\n1. I feel the usage of in_remote_transaction and in_use flags is\nslightly complex. IIUC, the patch uses in_use flag to ensure commit\nordering by waiting for it to become false before proceeding in\ntransaction finish commands in leader apply worker. If so, I think it\nis better to name it in_parallel_apply_xact and set it to true only\nwhen we start applying xact in parallel apply worker and set it to\nfalse when we finish the xact in parallel apply worker. It can be\ninitialized to false while setting up DSM. Also, accordingly change\nthe function parallel_apply_wait_for_free() to\nparallel_apply_wait_for_xact_finish and parallel_apply_set_idle to\nparallel_apply_set_xact_finish.
We can change the name of the\nin_remote_transaction flag to in_use.\n\nPlease explain about these flags in the struct where they are declared.\n\n2. The worker_id in ParallelApplyWorkerShared struct could have wrong\ninformation after the worker is reused from the pool. Because we could\nhave removed some other worker from the ParallelApplyWorkersList which\nwill make the value of worker_id wrong. For error/debug messages, we\ncan probably use LSN if available or can oid of subscription if\nrequired. I thought of using xid as well but I think it is better to\navoid that in messages as it can wraparound. See, if the patch uses\nxid in other messages, it is better to either use it along with LSN or\ntry to use only LSN.\n\n3.\nelog(ERROR, \"[Parallel Apply Worker #%u] unexpected message \\\"%c\\\"\",\n+ shared->worker_id, c);\n\nAlso, I am not sure whether the above style (use of []) of messages is\ngood. Did you follow the usage from some other place?\n\n4.\napply_handle_stream_stop(StringInfo s)\n{\n...\n+ if (apply_action == TA_APPLY_IN_PARALLEL_WORKER)\n+ {\n+ elog(DEBUG1, \"[Parallel Apply Worker #%u] ended processing streaming chunk, \"\n+ \"waiting on shm_mq_receive\", MyParallelShared->worker_id);\n...\n\nI don't understand the relevance of \"waiting on shm_mq_receive\" in the\nabove message because AFAICS, here we are not waiting on any receive\ncall.\n\n5. I suggest you please go through all the ERROR/LOG/DEBUG messages in\nthe patch and try to improve them based on the above comments.\n\n6.\n+ * The dynamic shared memory segment will contain (1) a shm_mq that can be used\n+ * to send errors (and other messages reported via elog/ereport) from the\n+ * parallel apply worker to leader apply worker (2) another shm_mq that can be\n+ * used to send changes in the transaction from leader apply worker to parallel\n+ * apply worker\n\nHere, it would be better to switch (1) and (2). 
I feel it is better to\nexplain first about how the main apply information is exchanged among\nworkers.\n\n7.\n+ /* Try to get a free parallel apply worker. */\n+ foreach(lc, ParallelApplyWorkersList)\n+ {\n+ ParallelApplyWorkerInfo *tmp_winfo;\n+\n+ tmp_winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n+\n+ if (tmp_winfo->error_mq_handle == NULL)\n+ {\n+ /*\n+ * Release the worker information and try next one if the parallel\n+ * apply worker exited cleanly.\n+ */\n+ ParallelApplyWorkersList =\nforeach_delete_current(ParallelApplyWorkersList, lc);\n+ shm_mq_detach(tmp_winfo->mq_handle);\n+ dsm_detach(tmp_winfo->dsm_seg);\n+ pfree(tmp_winfo);\n+\n+ continue;\n+ }\n+\n+ if (!tmp_winfo->in_remote_transaction)\n+ {\n+ winfo = tmp_winfo;\n+ break;\n+ }\n+ }\n\nCan we write it as if ... else if? If so, then we don't need to\ncontinue in the first loop. And, can we add some more comments to\nexplain these cases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 2 Sep 2022 11:39:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, September 2, 2022 2:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Sep 1, 2022 at 4:53 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> Review of v27-0001*:\r\n\r\nThanks for the comments.\r\n\r\n> ================\r\n> 1. I feel the usage of in_remote_transaction and in_use flags is slightly complex.\r\n> IIUC, the patch uses in_use flag to ensure commit ordering by waiting for it to\r\n> become false before proceeding in transaction finish commands in leader\r\n> apply worker. If so, I think it is better to name it in_parallel_apply_xact and set it\r\n> to true only when we start applying xact in parallel apply worker and set it to\r\n> false when we finish the xact in parallel apply worker. 
It can be initialized to false\r\n> while setting up DSM. Also, accordingly change the function\r\n> parallel_apply_wait_for_free() to parallel_apply_wait_for_xact_finish and\r\n> parallel_apply_set_idle to parallel_apply_set_xact_finish. We can change the\r\n> name of the in_remote_transaction flag to in_use.\r\n\r\nAgreed. One thing I found when addressing this is that there could be a race\r\ncondition if we want to set the flag in parallel apply worker:\r\n\r\nwhere the leader has already started waiting for the parallel apply worker to\r\nfinish processing the transaction(set the in_parallel_apply_xact to false)\r\nwhile the child process has not yet processed the first STREAM_START and has\r\nnot set the in_parallel_apply_xact to true.\r\n\r\n> Please explain about these flags in the struct where they are declared.\r\n> \r\n> 2. The worker_id in ParallelApplyWorkerShared struct could have wrong\r\n> information after the worker is reused from the pool. Because we could have\r\n> removed some other worker from the ParallelApplyWorkersList which will\r\n> make the value of worker_id wrong. For error/debug messages, we can\r\n> probably use LSN if available or can oid of subscription if required. I thought of\r\n> using xid as well but I think it is better to avoid that in messages as it can\r\n> wraparound. See, if the patch uses xid in other messages, it is better to either\r\n> use it along with LSN or try to use only LSN.\r\n> 3.\r\n> elog(ERROR, \"[Parallel Apply Worker #%u] unexpected message \\\"%c\\\"\",\r\n> + shared->worker_id, c);\r\n> \r\n> Also, I am not sure whether the above style (use of []) of messages is good. 
Did\r\n> you follow the usage from some other place?\r\n> 4.\r\n> apply_handle_stream_stop(StringInfo s)\r\n> {\r\n> ...\r\n> + if (apply_action == TA_APPLY_IN_PARALLEL_WORKER) { elog(DEBUG1,\r\n> + \"[Parallel Apply Worker #%u] ended processing streaming chunk, \"\r\n> + \"waiting on shm_mq_receive\", MyParallelShared->worker_id);\r\n> ...\r\n> \r\n> I don't understand the relevance of \"waiting on shm_mq_receive\" in the\r\n> above message because AFAICS, here we are not waiting on any receive\r\n> call.\r\n> \r\n> 5. I suggest you please go through all the ERROR/LOG/DEBUG messages in\r\n> the patch and try to improve them based on the above comments.\r\n\r\nI removed the worker_id and also removed and improved some DEBUG/ERROR\r\nmessages which I think are not clear or for which we don't have a similar message in existing code.\r\n\r\n> 6.\r\n> + * The dynamic shared memory segment will contain (1) a shm_mq that can be\r\n> used\r\n> + * to send errors (and other messages reported via elog/ereport) from the\r\n> + * parallel apply worker to leader apply worker (2) another shm_mq that can\r\n> be\r\n> + * used to send changes in the transaction from leader apply worker to parallel\r\n> + * apply worker\r\n> \r\n> Here, it would be better to switch (1) and (2). I feel it is better to\r\n> explain first about how the main apply information is exchanged among\r\n> workers.\r\n\r\nExchanged.\r\n\r\n> 7.\r\n> + /* Try to get a free parallel apply worker. 
*/\r\n> + foreach(lc, ParallelApplyWorkersList)\r\n> + {\r\n> + ParallelApplyWorkerInfo *tmp_winfo;\r\n> +\r\n> + tmp_winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\r\n> +\r\n> + if (tmp_winfo->error_mq_handle == NULL)\r\n> + {\r\n> + /*\r\n> + * Release the worker information and try next one if the parallel\r\n> + * apply worker exited cleanly.\r\n> + */\r\n> + ParallelApplyWorkersList =\r\n> foreach_delete_current(ParallelApplyWorkersList, lc);\r\n> + shm_mq_detach(tmp_winfo->mq_handle);\r\n> + dsm_detach(tmp_winfo->dsm_seg);\r\n> + pfree(tmp_winfo);\r\n> +\r\n> + continue;\r\n> + }\r\n> +\r\n> + if (!tmp_winfo->in_remote_transaction)\r\n> + {\r\n> + winfo = tmp_winfo;\r\n> + break;\r\n> + }\r\n> + }\r\n> \r\n> Can we write it as if ... else if? If so, then we don't need to\r\n> continue in the first loop. And, can we add some more comments to\r\n> explain these cases?\r\n\r\nChanged.\r\n\r\n\r\nAttach the new version patch set which addressed the above comments and\r\nalso fixed another problem when subscribing to a low-version publisher.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 5 Sep 2022 12:40:34 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, September 5, 2022 8:41 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Friday, September 2, 2022 2:10 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Sep 1, 2022 at 4:53 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> >\r\n> > Review of v27-0001*:\r\n> \r\n> Thanks for the comments.\r\n> \r\n> > ================\r\n> > 1.
I feel the usage of in_remote_transaction and in_use flags is slightly complex.\r\n> > IIUC, the patch uses in_use flag to ensure commit ordering by waiting\r\n> > for it to become false before proceeding in transaction finish\r\n> > commands in leader apply worker. If so, I think it is better to name\r\n> > it in_parallel_apply_xact and set it to true only when we start\r\n> > applying xact in parallel apply worker and set it to false when we\r\n> > finish the xact in parallel apply worker. It can be initialized to\r\n> > false while setting up DSM. Also, accordingly change the function\r\n> > parallel_apply_wait_for_free() to parallel_apply_wait_for_xact_finish\r\n> > and parallel_apply_set_idle to parallel_apply_set_xact_finish. We can\r\n> > change the name of the in_remote_transaction flag to in_use.\r\n> \r\n> Agreed. One thing I found when addressing this is that there could be a race\r\n> condition if we want to set the flag in parallel apply worker:\r\n> \r\n> where the leader has already started waiting for the parallel apply worker to\r\n> finish processing the transaction(set the in_parallel_apply_xact to false) while the\r\n> child process has not yet processed the first STREAM_START and has not set the\r\n> in_parallel_apply_xact to true.\r\n\r\nSorry, I didn’t complete this sentence. 
I meant it's safer to set this flag in apply leader,\r\nSo I changed the code like that and added some comments to explain the same.\r\n\r\n...\r\n> \r\n> Attach the new version patch set which addressed above comments and also\r\n> fixed another problem while subscriber to a low version publisher.\r\n\r\nAttach the correct patch set this time.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 5 Sep 2022 13:04:33 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Sep 5, 2022 at 6:34 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the correct patch set this time.\n>\n\nFew comments on v28-0001*:\n=======================\n1.\n+ /* Whether the worker is processing a transaction. */\n+ bool in_use;\n\nI think this same comment applies to in_parallel_apply_xact flag as\nwell. How about: \"Indicates whether the worker is available to be used\nfor parallel apply transaction?\"?\n\n2.\n+ /*\n+ * Set this flag in the leader instead of the parallel apply worker to\n+ * avoid the race condition where the leader has already started waiting\n+ * for the parallel apply worker to finish processing the transaction(set\n+ * the in_parallel_apply_xact to false) while the child process has not yet\n+ * processed the first STREAM_START and has not set the\n+ * in_parallel_apply_xact to true.\n\nI think part of this comment \"(set the in_parallel_apply_xact to\nfalse)\" is not necessary. It will be clear without that.\n\n3.\n+ /* Create entry for requested transaction. */\n+ entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_ENTER, &found);\n+ if (found)\n+ elog(ERROR, \"hash table corrupted\");\n...\n...\n+ hash_search(ParallelApplyWorkersHash, &xid, HASH_REMOVE, NULL);\n\nIt is better to have a similar elog for HASH_REMOVE case as well. 
We\nnormally seem to have such elog for HASH_REMOVE.\n\n4.\n* Parallel apply is not supported when subscribing to a publisher which\n+ * cannot provide the abort_time, abort_lsn and the column information used\n+ * to verify the parallel apply safety.\n\n\nIn this comment, which column information are you referring to?\n\n5.\n+ /*\n+ * Set in_parallel_apply_xact to true again as we only aborted the\n+ * subtransaction and the top transaction is still in progress. No\n+ * need to lock here because currently only the apply leader are\n+ * accessing this flag.\n+ */\n+ winfo->shared->in_parallel_apply_xact = true;\n\nThis theory sounds good to me but I think it is better to update/read\nthis flag under spinlock as the patch is doing at a few other places.\nI think that will make the code easier to follow without worrying too\nmuch about such special cases. There are a few asserts as well which\nread this without lock, it would be better to change those as well.\n\n6.\n+ * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol version\n+ * with support for streaming large transactions using parallel apply\n+ * workers. Introduced in PG16.\n\nHow about changing it to something like:\n\"LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum protocol\nversion where we support applying large streaming transactions in\nparallel. 
Introduced in PG16.\"\n\n7.\n+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ bool write_abort_lsn = (data->protocol_version >=\n+ LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM);\n\n /*\n * The abort should happen outside streaming block, even for streamed\n@@ -1856,7 +1859,8 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n Assert(rbtxn_is_streamed(toptxn));\n\n OutputPluginPrepareWrite(ctx, true);\n- logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn->xid);\n+ logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn, abort_lsn,\n+ write_abort_lsn);\n\nI think we need to send additional information if the client has used\nthe parallel streaming option. Also, let's keep sending subxid as we\nwere doing previously and add additional parameters required. It may\nbe better to name write_abort_lsn as abort_info.\n\n8.\n+ /*\n+ * Check whether the publisher sends abort_lsn and abort_time.\n+ *\n+ * Note that the paralle apply worker is only started when the publisher\n+ * sends abort_lsn and abort_time.\n+ */\n+ if (am_parallel_apply_worker() ||\n+ walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n+ read_abort_lsn = true;\n+\n+ logicalrep_read_stream_abort(s, &abort_data, read_abort_lsn);\n\nThis check should match with the check for the write operation where\nwe are checking the protocol version as well. 
There is a typo as well\nin the comments (/paralle/parallel).\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Sep 2022 12:21:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 8, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 5, 2022 at 6:34 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Attach the correct patch set this time.\n> >\n>\n> Few comments on v28-0001*:\n> =======================\n>\n\nSome suggestions for comments in v28-0001*\n1.\n+/*\n+ * Entry for a hash table we use to map from xid to the parallel apply worker\n+ * state.\n+ */\n+typedef struct ParallelApplyWorkerEntry\n\nLet's change this comment to: \"Hash table entry to map xid to the\nparallel apply worker state.\"\n\n2.\n+/*\n+ * List that stores the information of parallel apply workers that were\n+ * started. Newly added worker information will be removed from the list at the\n+ * end of the transaction when there are enough workers in the pool. Besides,\n+ * exited workers will be removed from the list after being detected.\n+ */\n+static List *ParallelApplyWorkersList = NIL;\n\nCan we change this to: \"A list to maintain the active parallel apply\nworkers. The information for the new worker is added to the list after\nsuccessfully launching it. The list entry is removed at the end of the\ntransaction if there are already enough workers in the worker pool.\nFor more information about the worker pool, see comments atop\nworker.c. We also remove the entry from the list if the worker is\nexited due to some error.\"\n\nApart from this, I have added/changed a few other comments in\nv28-0001*. 
Kindly check the attached, if you are fine with it then\nplease include it in the next version.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 8 Sep 2022 16:54:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for the v28-0001 patch:\n\n(There may be some overlap with other people's review comments and/or\nsome fixes already made).\n\n======\n\n1. Commit Message\n\nIn addition, the patch extends the logical replication STREAM_ABORT message so\nthat abort_time and abort_lsn can also be sent which can be used to update the\nreplication origin in parallel apply worker when the streaming transaction is\naborted.\n\n~\n\nShould this also mention that because this message extension is needed\nto support parallel streaming, meaning that parallel streaming is not\nsupported for publications on servers < PG16?\n\n======\n\n2. doc/src/sgml/config.sgml\n\n <para>\n Specifies maximum number of logical replication workers. This includes\n- both apply workers and table synchronization workers.\n+ apply leader workers, parallel apply workers, and table synchronization\n+ workers.\n </para>\n\"apply leader workers\" -> \"leader apply workers\"\n\n~~~\n\n3.\n\nmax_logical_replication_workers (integer)\n Specifies maximum number of logical replication workers. This\nincludes apply leader workers, parallel apply workers, and table\nsynchronization workers.\n Logical replication workers are taken from the pool defined by\nmax_worker_processes.\n The default value is 4. This parameter can only be set at server start.\n\n~\n\nI did not really understand why the default is 4. Because the default\ntablesync workers is 2, and the default parallel workers is 2, but\nwhat about accounting for the apply worker? 
Therefore, shouldn't
max_logical_replication_workers default be 5 instead of 4?

======

4. src/backend/commands/subscriptioncmds.c - defGetStreamingMode

+ }
+ ereport(ERROR,
+ (errcode(ERRCODE_SYNTAX_ERROR),
+ errmsg(\"%s requires a Boolean value or \\\"parallel\\\"\",
+ def->defname)));
+ return SUBSTREAM_OFF; /* keep compiler quiet */
+}

Some whitespace before the ereport and the return might be tidier.

======

5. src/backend/libpq/pqmq.c

+ {
+ if (IsParallelWorker())
+ SendProcSignal(pq_mq_parallel_leader_pid,
+ PROCSIG_PARALLEL_MESSAGE,
+ pq_mq_parallel_leader_backend_id);
+ else
+ {
+ Assert(IsLogicalParallelApplyWorker());
+ SendProcSignal(pq_mq_parallel_leader_pid,
+ PROCSIG_PARALLEL_APPLY_MESSAGE,
+ pq_mq_parallel_leader_backend_id);
+ }
+ }

This code can be simplified if you want to. For example,

{
ProcSignalReason reason;
Assert(IsParallelWorker() || IsLogicalParallelApplyWorker());
reason = IsParallelWorker() ? PROCSIG_PARALLEL_MESSAGE :
PROCSIG_PARALLEL_APPLY_MESSAGE;
SendProcSignal(pq_mq_parallel_leader_pid, reason,
 pq_mq_parallel_leader_backend_id);
}

======

6. src/backend/replication/logical/applyparallelworker.c

Is there a reason why this file is called applyparallelworker.c
instead of parallelapplyworker.c? Now this name is out of step with
names of all the new typedefs etc.

~~~

7.

+/*
+ * There are three fields in each message received by parallel apply worker:
+ * start_lsn, end_lsn and send_time. Because we have updated these statistics
+ * in leader apply worker, we could ignore these fields in parallel apply
+ * worker (see function LogicalRepApplyLoop).
+ */
+#define SIZE_STATS_MESSAGE (2 * sizeof(XLogRecPtr) + sizeof(TimestampTz))

SUGGESTION (Just added the word \"the\" and change \"could\" -> \"can\")
There are three fields in each message received by the parallel apply
worker: start_lsn, end_lsn and send_time. 
Because we have updated\nthese statistics in the leader apply worker, we can ignore these\nfields in the parallel apply worker (see function\nLogicalRepApplyLoop).\n\n~~~\n\n8.\n\n+/*\n+ * List that stores the information of parallel apply workers that were\n+ * started. Newly added worker information will be removed from the list at the\n+ * end of the transaction when there are enough workers in the pool. Besides,\n+ * exited workers will be removed from the list after being detected.\n+ */\n+static List *ParallelApplyWorkersList = NIL;\n\nPerhaps this comment can give more explanation of what is meant by the\npart that says \"when there are enough workers in the pool\".\n\n~~~\n\n9. src/backend/replication/logical/applyparallelworker.c -\nparallel_apply_can_start\n\n+ /*\n+ * Don't start a new parallel worker if not in streaming parallel mode.\n+ */\n+ if (MySubscription->stream != SUBSTREAM_PARALLEL)\n+ return false;\n\n\"streaming parallel mode.\" -> \"parallel streaming mode.\"\n\n~~~\n\n10.\n\n+ /*\n+ * For streaming transactions that are being applied using parallel apply\n+ * worker, we cannot decide whether to apply the change for a relation that\n+ * is not in the READY state (see should_apply_changes_for_rel) as we won't\n+ * know remote_final_lsn by that time. 
So, we don't start the new parallel\n+ * apply worker in this case.\n+ */\n+ if (!AllTablesyncsReady())\n+ return false;\n\n\"using parallel apply worker\" -> \"using a parallel apply worker\"\n\n~~~\n\n11.\n\n+ /*\n+ * Do not allow parallel apply worker to be started in the parallel apply\n+ * worker.\n+ */\n+ if (am_parallel_apply_worker())\n+ return false;\n\nI guess the comment is valid but it sounds strange.\n\nSUGGESTION\nOnly leader apply workers can start parallel apply workers.\n\n~~~\n\n12.\n\n+ if (am_parallel_apply_worker())\n+ return false;\n\nMaybe this code should be earlier in this function, because surely\nthis is a less costly test than the test for !AllTablesyncsReady()?\n\n~~~\n\n13. src/backend/replication/logical/applyparallelworker.c -\nparallel_apply_start_worker\n\n+/*\n+ * Start a parallel apply worker that will be used for the specified xid.\n+ *\n+ * If a parallel apply worker is not in use then re-use it, otherwise start a\n+ * fresh one. Cache the worker information in ParallelApplyWorkersHash keyed by\n+ * the specified xid.\n+ */\n\n\"is not in use\" -> \"is found but not in use\" ?\n\n~~~\n\n14.\n\n+ /* Failed to start a new parallel apply worker. */\n+ if (winfo == NULL)\n+ return;\n\nThere seem to be quite a lot of places (like this example) where\nsomething may go wrong and the behaviour apparently will just silently\nfall-back to using the non-parallel streaming. Maybe that is OK, but I\nam just wondering how can the user ever know this has happened? 
Maybe\nthe docs can mention that this could happen and give some description\nof what processes users can look for (or some other strategy) so they\ncan just confirm that the parallel streaming is really working like\nthey assume it to be?\n\n~~~\n\n15.\n\n+ * Set this flag in the leader instead of the parallel apply worker to\n+ * avoid the race condition where the leader has already started waiting\n+ * for the parallel apply worker to finish processing the transaction(set\n+ * the in_parallel_apply_xact to false) while the child process has not yet\n+ * processed the first STREAM_START and has not set the\n+ * in_parallel_apply_xact to true.\n\nMissing whitespace before \"(\"\n\n~~~\n\n16. src/backend/replication/logical/applyparallelworker.c -\nparallel_apply_find_worker\n\n+ /* Return the cached parallel apply worker if valid. */\n+ if (stream_apply_worker != NULL)\n+ return stream_apply_worker;\n\nPerhaps 'cur_stream_parallel_apply_winfo' is a better name for this var?\n\n~~~\n\n17. src/backend/replication/logical/applyparallelworker.c -\nparallel_apply_free_worker\n\n+/*\n+ * Remove the parallel apply worker entry from the hash table. 
And stop the\n+ * worker if there are enough workers in the pool.\n+ */\n+void\n+parallel_apply_free_worker(ParallelApplyWorkerInfo *winfo, TransactionId xid)\n\nI think the reason for doing the \"enough workers in the pool\" logic\nneeds some more explanation.\n\n~~~\n\n18.\n\n+ if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n+ {\n+ logicalrep_worker_stop_by_slot(winfo->shared->logicalrep_worker_slot_no,\n+ winfo->shared->logicalrep_worker_generation);\n+\n+ ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList, winfo);\n+\n+ shm_mq_detach(winfo->mq_handle);\n+ shm_mq_detach(winfo->error_mq_handle);\n+ dsm_detach(winfo->dsm_seg);\n+ pfree(winfo);\n+ }\n+ else\n+ winfo->in_use = false;\n\nMaybe it is easier to remove this \"else\" and just unconditionally set\nwinfo->in_use = false BEFORE the check to free the entire winfo.\n\n~~~\n\n19. src/backend/replication/logical/applyparallelworker.c -\nLogicalParallelApplyLoop\n\n+ ApplyMessageContext = AllocSetContextCreate(ApplyContext,\n+ \"ApplyMessageContext\",\n+ ALLOCSET_DEFAULT_SIZES);\n\nShould the name of this context be \"ParallelApplyMessageContext\"?\n\n~~~\n\n20. src/backend/replication/logical/applyparallelworker.c -\nHandleParallelApplyMessage\n\n+ default:\n+ {\n+ elog(ERROR, \"unrecognized message type received from parallel apply\nworker: %c (message length %d bytes)\",\n+ msgtype, msg->len);\n+ }\n\n\"received from\" -> \"received by\"\n\n~~~\n\n\n21. src/backend/replication/logical/applyparallelworker.c -\nHandleParallelApplyMessages\n\n+/*\n+ * Handle any queued protocol messages received from parallel apply workers.\n+ */\n+void\n+HandleParallelApplyMessages(void)\n\n21a.\n\"received from\" -> \"received by\"\n\n~\n\n21b.\nI wonder if this comment should give some credit to the function in\nparallel.c - because this seems almost a copy of all that code.\n\n~~~\n\n22. 
src/backend/replication/logical/applyparallelworker.c -\nparallel_apply_set_xact_finish\n\n+/*\n+ * Set the in_parallel_apply_xact flag for the current parallel apply worker.\n+ */\n+void\n+parallel_apply_set_xact_finish(void)\n\nShould that \"Set\" really be saying \"Reset\" or \"Clear\"?\n\n======\n\n23. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\n\n+ nparallelapplyworkers = logicalrep_parallel_apply_worker_count(subid);\n+\n+ /*\n+ * Return silently if the number of parallel apply workers reached the\n+ * limit per subscription.\n+ */\n+ if (is_subworker && nparallelapplyworkers >=\nmax_parallel_apply_workers_per_subscription)\n+ {\n+ LWLockRelease(LogicalRepWorkerLock);\n+ return false;\n }\nI’m not sure if this is a good idea to be so silent. How will the user\nknow if they should increase the GUC parameter or not if it never\ntells them that the value is too low?\n\n~~~\n\n24.\n\n /* Now wait until it attaches. */\n- WaitForReplicationWorkerAttach(worker, generation, bgw_handle);\n+ return WaitForReplicationWorkerAttach(worker, generation, bgw_handle);\n\nThe comment feels a tiny bit misleading, because there is a chance\nthat this might not attach at all and return false if something goes\nwrong.\n\n~~~\n\n25. src/backend/replication/logical/launcher.c - logicalrep_worker_stop\n\n+void\n+logicalrep_worker_stop_by_slot(int slot_no, uint16 generation)\n+{\n+ LogicalRepWorker *worker = &LogicalRepCtx->workers[slot_no];\n+\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+\n+ /* Return if the generation doesn't match or the worker is not alive. 
*/\n+ if (worker->generation != generation ||\n+ worker->proc == NULL)\n+ return;\n+\n+ logicalrep_worker_stop_internal(worker);\n+\n+ LWLockRelease(LogicalRepWorkerLock);\n+}\n\nI think this condition should be changed and reversed, otherwise you\nmight return before releasing the lock (??)\n\nSUGGESTION\n\n{\nLWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n\n/* Stop only if the worker is alive and the generation matches. */\nif (worker && worker->proc && worker->generation == generation)\nlogicalrep_worker_stop_internal(worker);\n\nLWLockRelease(LogicalRepWorkerLock);\n}\n\n~~~\n\n26 src/backend/replication/logical/launcher.c - logicalrep_worker_stop_internal\n\n+/*\n+ * Workhorse for logicalrep_worker_stop() and logicalrep_worker_detach(). Stop\n+ * the worker and wait for it to die.\n+ */\n\n... and logicalrep_worker_stop_by_slot()\n\n~~~\n\n27. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n\n+ /*\n+ * This is the leader apply worker; stop all the parallel apply workers\n+ * previously started from here.\n+ */\n+ if (!isParallelApplyWorker(MyLogicalRepWorker))\n\n27a.\nThe comment does not match the code. If this *is* the leader apply\nworker then why do we have the condition to check that?\n\nMaybe only needs a comment update like\n\nSUGGESTION\nIf this is the leader apply worker then stop all the parallel...\n\n~\n\n27b.\nCode seems also assuming it cannot be a tablesync worker but it is not\nchecking that. I am wondering if it will be better to have yet another\nmacro/inline to do isLeaderApplyWorker() that will make sure this\nreally is the leader apply worker. (This review comment suggestion is\nrepeated later below).\n\n======\n\n28. 
src/backend/replication/logical/worker.c - STREAMED TRANSACTIONS comment

+ * If no worker is available to handle the streamed transaction, the data is
+ * written to temporary files and then applied at once when the final commit
+ * arrives.

SUGGESTION
If streaming = true, or if streaming = parallel but there are no
parallel apply workers available to handle the streamed transaction,
the data is written to...

~~~

29. src/backend/replication/logical/worker.c - TransactionApplyAction

/*
 * What action to take for the transaction.
 *
 * TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply worker and
 * changes of the transaction are applied directly in the worker.
 *
 * TA_SERIALIZE_TO_FILE means that we are in leader apply worker and changes
 * are written to temporary files and then applied when the final commit
 * arrives.
 *
 * TA_APPLY_IN_PARALLEL_WORKER means that we are in the parallel apply worker
 * and changes of the transaction are applied directly in the worker.
 *
 * TA_SEND_TO_PARALLEL_WORKER means that we are in the leader apply worker and
 * need to send the changes to the parallel apply worker.
 */
typedef enum
{
/* The action for non-streaming transactions. */
TA_APPLY_IN_LEADER_WORKER,

/* Actions for streaming transactions. */
TA_SERIALIZE_TO_FILE,
TA_APPLY_IN_PARALLEL_WORKER,
TA_SEND_TO_PARALLEL_WORKER
} TransactionApplyAction;

~

29a.
I think if you change all those enum names slightly (e.g. like below)
then they can be more self-explanatory:

TA_NOT_STREAMING_LEADER_APPLY
TA_STREAMING_LEADER_SERIALIZE
TA_STREAMING_LEADER_SEND_TO_PARALLEL
TA_STREAMING_PARALLEL_APPLY

~

29b.
 * TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply worker and
 * changes of the transaction are applied directly in the worker.

Maybe that should mention this is for the non-streaming case, or if
you change all the enum names like in #29a, then there is no need
then there is no need\nbecause it is more self-explanatory.\n\n~~~\n\n30. src/backend/replication/logical/worker.c - should_apply_changes_for_rel\n\n * Note that for streaming transactions that are being applied in parallel\n+ * apply worker, we disallow applying changes on a table that is not in\n+ * the READY state, because we cannot decide whether to apply the change as we\n+ * won't know remote_final_lsn by that time.\n\n\"applied in parallel apply worker\" -> \"applied in the parallel apply worker\"\n\n~~~\n\n31.\n\n+ errdetail(\"Cannot handle streamed replication transaction by parallel \"\n+ \"apply workers until all tables are synchronized.\")));\n\n\"by parallel apply workers\" -> \"using parallel apply workers\" (?)\n\n~~~\n\n32. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. same\nas the suggested enums value order of #29a).\n\n~~~\n\n33. src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n(same as comment #32)\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. same\nas the suggested enums value order of #29a).\n\n~~~\n\n34. src/backend/replication/logical/worker.c - apply_handle_stream_start\n\n(same as comment #32)\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. 
same\nas the suggested enums value order of #29a).\n\n~~~\n\n35.\n\n+ else if (apply_action == TA_SERIALIZE_TO_FILE)\n+ {\n+ /*\n+ * Notify handle methods we're processing a remote in-progress\n+ * transaction.\n+ */\n+ in_streamed_transaction = true;\n+\n+ /*\n+ * Since no parallel apply worker is available for the first\n+ * stream start, serialize all the changes of the transaction.\n+ *\n\n\"Since no parallel apply worker is available\".\n\nI don't think the comment is quite correct. Maybe it is doing the\nserialization because the user simply did not request to use the\nparallel mode at all?\n\n~~~\n\n36. src/backend/replication/logical/worker.c - apply_handle_stream_stop\n\n(same as comment #32)\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. same\nas the suggested enums value order of #29a).\n\n~~~\n\n37. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+ /*\n+ * Check whether the publisher sends abort_lsn and abort_time.\n+ *\n+ * Note that the paralle apply worker is only started when the publisher\n+ * sends abort_lsn and abort_time.\n+ */\n\ntypo \"paralle\"\n\n~~~\n\n38.\n\n(same as comment #32)\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. same\nas the suggested enums value order of #29a).\n\n~~~\n\n39.\n\n+ /*\n+ * Set in_parallel_apply_xact to true again as we only aborted the\n+ * subtransaction and the top transaction is still in progress. No\n+ * need to lock here because currently only the apply leader are\n+ * accessing this flag.\n+ */\n\n\"are accessing\" -> \"is accessing\"\n\n~~~\n\n40. 
src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\n(same as comment #32)\n\nNow that there is an apply_action enum I felt it is better for this\ncode to be using a switch instead of all the if/else. Furthermore, it\nmight be better to put the switch case in a logical order (e.g. same\nas the suggested enums value order of #29a).\n\n~~~\n\n41. src/backend/replication/logical/worker.c - store_flush_position\n\n+ /* Skip if not the leader apply worker */\n+ if (am_parallel_apply_worker())\n+ return;\n+\n\nCode might be better to implement/use a new function so it can check\nsomething like !am_leader_apply_worker()\n\n~~~\n\n42. src/backend/replication/logical/worker.c - InitializeApplyWorker\n\n+/*\n+ * Initialize the database connection, in-memory subscription and necessary\n+ * config options.\n+ */\n\nI still think this should mention that this is common initialization\ncode for \"both leader apply workers, and parallel apply workers\"\n\n~~~\n\n43. src/backend/replication/logical/worker.c - ApplyWorkerMain\n\n- /* This is main apply worker */\n+ /* This is leader apply worker */\n\n\"is leader\" -> \"is the leader\"\n\n~~~\n\n44. src/backend/replication/logical/worker.c - IsLogicalParallelApplyWorker\n\n+/*\n+ * Is current process a logical replication parallel apply worker?\n+ */\n+bool\n+IsLogicalParallelApplyWorker(void)\n+{\n+ return am_parallel_apply_worker();\n+}\n+\n\nIt seems a bit strange to have this function\nIsLogicalParallelApplyWorker, and also am_parallel_apply_worker()\nwhich are basically identical except one of them is static and one is\nnot.\n\nI wonder if there should be just one function. And if you really do\nneed 2 names for consistency then you can just define a synonym like\n\n#define am_parallel_apply_worker IsLogicalParallelApplyWorker\n\n~~~\n\n45. src/backend/replication/logical/worker.c - get_transaction_apply_action\n\n+/*\n+ * Return the action to take for the given transaction. 
Also return the\n+ * parallel apply worker information if the action is\n+ * TA_SEND_TO_PARALLEL_WORKER.\n+ */\n+static TransactionApplyAction\n+get_transaction_apply_action(TransactionId xid,\nParallelApplyWorkerInfo **winfo)\n\nI think this should be slightly more clear to say that *winfo is\nassigned to the destination parallel worker info (if the action is\nTA_SEND_TO_PARALLEL_WORKER), otherwise *winfo is assigned NULL (see\nalso #46 below)\n\n~~~\n\n46.\n\n+static TransactionApplyAction\n+get_transaction_apply_action(TransactionId xid,\nParallelApplyWorkerInfo **winfo)\n+{\n+ if (am_parallel_apply_worker())\n+ return TA_APPLY_IN_PARALLEL_WORKER;\n+ else if (in_remote_transaction)\n+ return TA_APPLY_IN_LEADER_WORKER;\n+\n+ /*\n+ * Check if we are processing this transaction using a parallel apply\n+ * worker and if so, send the changes to that worker.\n+ */\n+ else if ((*winfo = parallel_apply_find_worker(xid)))\n+ return TA_SEND_TO_PARALLEL_WORKER;\n+ else\n+ return TA_SERIALIZE_TO_FILE;\n+}\n\nThe code is a bit quirky at the moment because sometimes the *winfo\nwill be assigned NULL and sometimes it will be assigned valid value,\nand sometimes it will still be unassigned.\n\nI suggest always assigning it either NULL or valid.\n\nSUGGESTIONS\nstatic TransactionApplyAction\nget_transaction_apply_action(TransactionId xid, ParallelApplyWorkerInfo **winfo)\n{\n*winfo = NULL; <== add this default assignment\n...\n\n======\n\n47. src/backend/storage/ipc/procsignal.c - procsignal_sigusr1_handler\n\n@@ -657,6 +658,9 @@ procsignal_sigusr1_handler(SIGNAL_ARGS)\n if (CheckProcSignal(PROCSIG_LOG_MEMORY_CONTEXT))\n HandleLogMemoryContextInterrupt();\n\n+ if (CheckProcSignal(PROCSIG_PARALLEL_APPLY_MESSAGE))\n+ HandleParallelApplyMessageInterrupt();\n+\n\nI wasn’t sure about the placement of this new code because those\nCheckProcSignal don’t seem to have any particular order. 
I think this\nbelongs adjacent to the PROCSIG_PARALLEL_MESSAGE since it has the most\nin common with that one.\n\n======\n\n48. src/backend/tcop/postgres.c\n\n@@ -3377,6 +3377,9 @@ ProcessInterrupts(void)\n\n if (LogMemoryContextPending)\n ProcessLogMemoryContextInterrupt();\n+\n+ if (ParallelApplyMessagePending)\n+ HandleParallelApplyMessages();\n\n(like #47)\n\nI think this belongs adjacent to the ParallelMessagePending check\nsince it has most in common with that one.\n\n======\n\n49. src/include/replication/worker_internal.h\n\n@@ -60,6 +64,12 @@ typedef struct LogicalRepWorker\n */\n FileSet *stream_fileset;\n\n+ /*\n+ * PID of leader apply worker if this slot is used for a parallel apply\n+ * worker, InvalidPid otherwise.\n+ */\n+ pid_t apply_leader_pid;\n+\n /* Stats. */\n XLogRecPtr last_lsn;\n TimestampTz last_send_time;\nWhitespace indent of the new member ok?\n\n\n~~~\n\n50.\n\n+typedef struct ParallelApplyWorkerShared\n+{\n+ slock_t mutex;\n+\n+ /*\n+ * Flag used to ensure commit ordering.\n+ *\n+ * The parallel apply worker will set it to false after handling the\n+ * transaction finish commands while the apply leader will wait for it to\n+ * become false before proceeding in transaction finish commands (e.g.\n+ * STREAM_COMMIT/STREAM_ABORT/STREAM_PREPARE).\n+ */\n+ bool in_parallel_apply_xact;\n+\n+ /* Information from the corresponding LogicalRepWorker slot. */\n+ uint16 logicalrep_worker_generation;\n+\n+ int logicalrep_worker_slot_no;\n+} ParallelApplyWorkerShared;\n\nWhitespace indents of the new members ok?\n\n~~~\n\n51.\n\n /* Main memory context for apply worker. Permanent during worker lifetime. */\n extern PGDLLIMPORT MemoryContext ApplyContext;\n+extern PGDLLIMPORT MemoryContext ApplyMessageContext;\n\nMaybe there should be a blank line between those externs, because the\ncomment applies only to the first one, right? Alternatively modify the\ncomment.\n\n~~~\n\n52. 
src/include/replication/worker_internal.h - am_parallel_apply_worker\n\nI thought it might be worthwhile to also add another function like\nam_leader_apply_worker(). I noticed at least one place in this patch\nwhere it could have been called.\n\nSUGGESTION\nstatic inline bool\nam_parallel_apply_worker(void)\n{\nreturn !isParallelApplyWorker(MyLogicalRepWorker) && !am_tablesync_worker();\n}\n\n======\n\n53. src/include/storage/procsignal.h\n\n@@ -35,6 +35,7 @@ typedef enum\n PROCSIG_WALSND_INIT_STOPPING, /* ask walsenders to prepare for shutdown */\n PROCSIG_BARRIER, /* global barrier interrupt */\n PROCSIG_LOG_MEMORY_CONTEXT, /* ask backend to log the memory contexts */\n+ PROCSIG_PARALLEL_APPLY_MESSAGE, /* Message from parallel apply workers */\n\n(like #47)\n\nI think this new enum belongs adjacent to the PROCSIG_PARALLEL_MESSAGE\nsince it has most in common with that one\n\n======\n\n54. src/tools/pgindent/typedefs.list\n\nMissing TransactionApplyAction?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 9 Sep 2022 17:02:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, September 9, 2022 3:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for the v28-0001 patch:\r\n> \r\n> (There may be some overlap with other people's review comments and/or\r\n> some fixes already made).\r\n> \r\n\r\nThanks for the comments.\r\n\r\n\r\n> 3.\r\n> \r\n> max_logical_replication_workers (integer)\r\n> Specifies maximum number of logical replication workers. This\r\n> includes apply leader workers, parallel apply workers, and table\r\n> synchronization workers.\r\n> Logical replication workers are taken from the pool defined by\r\n> max_worker_processes.\r\n> The default value is 4. 
This parameter can only be set at server start.\r\n> \r\n> ~\r\n> \r\n> I did not really understand why the default is 4. Because the default\r\n> tablesync workers is 2, and the default parallel workers is 2, but\r\n> what about accounting for the apply worker? Therefore, shouldn't\r\n> max_logical_replication_workers default be 5 instead of 4?\r\n\r\nThe parallel apply is disabled by default, so it's not a must to increase this\r\nglobal default value, as discussed in [1].\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoCwaU8SqjmC7UkKWNjDg3Uz4FDGurMpis3zw5SEC%2B27jQ%40mail.gmail.com\r\n\r\n\r\n> 6. src/backend/replication/logical/applyparallelworker.c\r\n> \r\n> Is there a reason why this file is called applyparallelworker.c\r\n> instead of parallelapplyworker.c? Now this name is out of step with\r\n> names of all the new typedefs etc.\r\n\r\nIt was suggested as it is consistent with \"vacuumparallel.c\", but I am fine\r\nwith either name. I can change this if more people think parallelapplyworker.c\r\nis better.\r\n\r\n\r\n> 16. src/backend/replication/logical/applyparallelworker.c -\r\n> parallel_apply_find_worker\r\n> \r\n> + /* Return the cached parallel apply worker if valid. */\r\n> + if (stream_apply_worker != NULL)\r\n> + return stream_apply_worker;\r\n> \r\n> Perhaps 'cur_stream_parallel_apply_winfo' is a better name for this var?\r\n\r\nThis looks a bit long to me.\r\n\r\n> /* Now wait until it attaches. */\r\n> - WaitForReplicationWorkerAttach(worker, generation, bgw_handle);\r\n> + return WaitForReplicationWorkerAttach(worker, generation, bgw_handle);\r\n> \r\n> The comment feels a tiny bit misleading, because there is a chance\r\n> that this might not attach at all and return false if something goes\r\n> wrong.\r\n\r\nI feel it might be better to fix this via a separate patch.\r\n\r\n\r\n> Now that there is an apply_action enum I felt it is better for this\r\n> code to be using a switch instead of all the if/else.
Furthermore, it\r\n> might be better to put the switch case in a logical order (e.g. same\r\n> as the suggested enums value order of #29a).\r\n\r\nI'm not sure whether switch case is better than if/else here. But if more\r\npeople prefer, I can change this.\r\n\r\n\r\n> 23. src/backend/replication/logical/launcher.c - logicalrep_worker_launch\r\n> \r\n> + nparallelapplyworkers = logicalrep_parallel_apply_worker_count(subid);\r\n> +\r\n> + /*\r\n> + * Return silently if the number of parallel apply workers reached the\r\n> + * limit per subscription.\r\n> + */\r\n> + if (is_subworker && nparallelapplyworkers >=\r\n> max_parallel_apply_workers_per_subscription)\r\n> + {\r\n> + LWLockRelease(LogicalRepWorkerLock);\r\n> + return false;\r\n> }\r\n> I’m not sure if this is a good idea to be so silent. How will the user\r\n> know if they should increase the GUC parameter or not if it never\r\n> tells them that the value is too low ?\r\n\r\nIt's like what we do for the table sync worker. Besides, I think the user is\r\nlikely to intentionally limit the parallel apply worker number to leave free\r\nworkers for other purposes. And we do report a WARNING later if there are no\r\nfree worker slots: errmsg(\"out of logical replication worker slots\").\r\n\r\n\r\n> 41. src/backend/replication/logical/worker.c - store_flush_position\r\n> \r\n> + /* Skip if not the leader apply worker */\r\n> + if (am_parallel_apply_worker())\r\n> + return;\r\n> +\r\n> \r\n> Code might be better to implement/use a new function so it can check\r\n> something like !am_leader_apply_worker()\r\n\r\nBased on the existing code, both the leader and the table sync worker could enter this\r\nfunction. Using !am_leader_apply_worker() would also disallow the table sync worker\r\nfrom entering this function, which might not be good.\r\n\r\n\r\n> 47.
src/backend/storage/ipc/procsignal.c - procsignal_sigusr1_handler\r\n> \r\n> @@ -657,6 +658,9 @@ procsignal_sigusr1_handler(SIGNAL_ARGS)\r\n> if (CheckProcSignal(PROCSIG_LOG_MEMORY_CONTEXT))\r\n> HandleLogMemoryContextInterrupt();\r\n> \r\n> + if (CheckProcSignal(PROCSIG_PARALLEL_APPLY_MESSAGE))\r\n> + HandleParallelApplyMessageInterrupt();\r\n> +\r\n> \r\n> I wasn’t sure about the placement of this new code because those\r\n> CheckProcSignal don’t seem to have any particular order. I think this\r\n> belongs adjacent to the PROCSIG_PARALLEL_MESSAGE since it has the most\r\n> in common with that one.\r\n\r\nI'm not very sure, I just followed the way we used to add new SignalReason\r\n(e.g. add the new reason at the last but before the Recovery conflict reasons).\r\nAnd the parallel apply is not very similar to parallel query in detail.\r\n\r\n\r\n> I thought it might be worthwhile to also add another function like\r\n> am_leader_apply_worker(). I noticed at least one place in this patch\r\n> where it could have been called.\r\n\r\nIt seems a bit unnecessary to introduce a new macro where we already can use\r\nam_parallel_apply_worker to check.\r\n\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 9 Sep 2022 09:01:07 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou-san,\r\n\r\nThank you for updating the patch! Followings are comments for v28-0001.\r\nI will dig your patch more, but I send partially to keep the activity of the thread.\r\n\r\n===\r\nFor applyparallelworker.c\r\n\r\n01. filename\r\nThe word-ordering of filename seems not good\r\nbecause you defined the new worker as \"parallel apply worker\".\r\n\r\n02. global variable\r\n\r\n```\r\n+/* Parallel apply workers hash table (initialized on first use). 
*/\r\n+static HTAB *ParallelApplyWorkersHash = NULL;\r\n+\r\n+/*\r\n+ * List that stores the information of parallel apply workers that were\r\n+ * started. Newly added worker information will be removed from the list at the\r\n+ * end of the transaction when there are enough workers in the pool. Besides,\r\n+ * exited workers will be removed from the list after being detected.\r\n+ */\r\n+static List *ParallelApplyWorkersList = NIL;\r\n```\r\n\r\nCould you add descriptions about the difference between the list and the hash table?\r\nIIUC the Hash stores the parallel workers that\r\nare assigned to transactions, and the list stores all alive ones.\r\n\r\n\r\n03. parallel_apply_find_worker\r\n\r\n```\r\n+ /* Return the cached parallel apply worker if valid. */\r\n+ if (stream_apply_worker != NULL)\r\n+ return stream_apply_worker;\r\n```\r\n\r\nThis is just a question -\r\nWhy are the given xid and the xid assigned to the worker not checked here?\r\nIs there a chance of finding the wrong worker?\r\n\r\n\r\n04. parallel_apply_start_worker\r\n\r\n```\r\n+/*\r\n+ * Start a parallel apply worker that will be used for the specified xid.\r\n+ *\r\n+ * If a parallel apply worker is not in use then re-use it, otherwise start a\r\n+ * fresh one. Cache the worker information in ParallelApplyWorkersHash keyed by\r\n+ * the specified xid.\r\n+ */\r\n+void\r\n+parallel_apply_start_worker(TransactionId xid)\r\n```\r\n\r\n\"parallel_apply_start_worker\" should be \"start_parallel_apply_worker\", I think.\r\n\r\n\r\n05. parallel_apply_stream_abort\r\n\r\n```\r\n\t\tfor (i = list_length(subxactlist) - 1; i >= 0; i--)\r\n\t\t{\r\n\t\t\txid = list_nth_xid(subxactlist, i);\r\n\t\t\tif (xid == subxid)\r\n\t\t\t{\r\n\t\t\t\tfound = true;\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\t\t}\r\n```\r\n\r\nPlease do not reuse the xid; declare and use another variable in the else block or something.\r\n\r\n06.
parallel_apply_free_worker\r\n\r\n```\r\n+ if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n+ {\r\n```\r\n\r\nPlease add a comment like: \"Do we have enough workers in the pool?\" or something.\r\n\r\n===\r\nFor worker.c\r\n\r\n07. general\r\n\r\nIn many lines an if-else statement is used for apply_action, but I think they should be rewritten as switch-case statements.\r\n\r\n08. global variable\r\n\r\n```\r\n-static bool in_streamed_transaction = false;\r\n+bool in_streamed_transaction = false;\r\n```\r\n\r\na.\r\n\r\nIt seems that in_streamed_transaction is used only in worker.c, so we can change it to a static variable.\r\n\r\nb.\r\n\r\nThat flag is set only when an apply worker spills the transaction to the disk.\r\nHow about \"in_streamed_transaction\" -> \"in_spilled_transaction\"?\r\n\r\n09. apply_handle_stream_prepare\r\n\r\n```\r\n- elog(DEBUG1, \"received prepare for streamed transaction %u\", prepare_data.xid);\r\n```\r\n\r\nI think this debug message is still useful.\r\n\r\n10. apply_handle_stream_stop\r\n\r\n```\r\n+ if (apply_action == TA_APPLY_IN_PARALLEL_WORKER)\r\n+ {\r\n+ pgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);\r\n+ }\r\n+ else if (apply_action == TA_SEND_TO_PARALLEL_WORKER)\r\n+ {\r\n```\r\n\r\nThe ordering of the STREAM {STOP, START} is checked only when an apply worker spills the transaction to the disk.\r\n(This is done via in_streamed_transaction)\r\nI think checks should be added here, like if (!stream_apply_worker) or something.\r\n\r\n11. apply_handle_stream_abort\r\n\r\n```\r\n+ if (in_streamed_transaction)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n+ errmsg_internal(\"STREAM ABORT message without STREAM STOP\")));\r\n```\r\n\r\nI think the check by stream_apply_worker should be added.\r\n\r\n12.
apply_handle_stream_commit\r\n\r\na.\r\n\r\n```\r\n\tif (in_streamed_transaction)\r\n\t\tereport(ERROR,\r\n\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n\t\t\t\t errmsg_internal(\"STREAM COMMIT message without STREAM STOP\")));\r\n```\r\n\r\nI think the check by stream_apply_worker should be added.\r\n\r\nb. \r\n\r\n```\r\n- elog(DEBUG1, \"received commit for streamed transaction %u\", xid);\r\n```\r\n\r\nI think this debug message is still useful.\r\n\r\n===\r\nFor launcher.c\r\n\r\n13. logicalrep_worker_stop_by_slot\r\n\r\n```\r\n+ LogicalRepWorker *worker = &LogicalRepCtx->workers[slot_no];\r\n+\r\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n+\r\n+ /* Return if the generation doesn't match or the worker is not alive. */\r\n+ if (worker->generation != generation ||\r\n+ worker->proc == NULL)\r\n+ return;\r\n+\r\n```\r\n\r\na.\r\n\r\nLWLockAcquire(LogicalRepWorkerLock) is needed before reading slots.\r\n\r\nb. \r\n\r\nLWLockRelease(LogicalRepWorkerLock) is needed even if worker is not found.\r\n\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 12 Sep 2022 10:57:42 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Sep 9, 2022 at 2:31 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, September 9, 2022 3:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> > 3.\n> >\n> > max_logical_replication_workers (integer)\n> > Specifies maximum number of logical replication workers. This\n> > includes apply leader workers, parallel apply workers, and table\n> > synchronization workers.\n> > Logical replication workers are taken from the pool defined by\n> > max_worker_processes.\n> > The default value is 4. 
This parameter can only be set at server start.\n> >\n> > ~\n> >\n> > I did not really understand why the default is 4. Because the default\n> > tablesync workers is 2, and the default parallel workers is 2, but\n> > what about accounting for the apply worker? Therefore, shouldn't\n> > max_logical_replication_workers default be 5 instead of 4?\n>\n> The parallel apply is disabled by default, so it's not a must to increase this\n> global default value as discussed[1]\n>\n> [1] https://www.postgresql.org/message-id/CAD21AoCwaU8SqjmC7UkKWNjDg3Uz4FDGurMpis3zw5SEC%2B27jQ%40mail.gmail.com\n>\n\nOkay, but can we document to increase this value when the parallel\napply is enabled?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Sep 2022 15:18:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Sep 9, 2022 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 29. src/backend/replication/logical/worker.c - TransactionApplyAction\n>\n> /*\n> * What action to take for the transaction.\n> *\n> * TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply worker and\n> * changes of the transaction are applied directly in the worker.\n> *\n> * TA_SERIALIZE_TO_FILE means that we are in leader apply worker and changes\n> * are written to temporary files and then applied when the final commit\n> * arrives.\n> *\n> * TA_APPLY_IN_PARALLEL_WORKER means that we are in the parallel apply worker\n> * and changes of the transaction are applied directly in the worker.\n> *\n> * TA_SEND_TO_PARALLEL_WORKER means that we are in the leader apply worker and\n> * need to send the changes to the parallel apply worker.\n> */\n> typedef enum\n> {\n> /* The action for non-streaming transactions. */\n> TA_APPLY_IN_LEADER_WORKER,\n>\n> /* Actions for streaming transactions. 
*/\n> TA_SERIALIZE_TO_FILE,\n> TA_APPLY_IN_PARALLEL_WORKER,\n> TA_SEND_TO_PARALLEL_WORKER\n> } TransactionApplyAction;\n>\n> ~\n>\n> 29a.\n> I think if you change all those enum names slightly (e.g. like below)\n> then they can be more self-explanatory:\n>\n> TA_NOT_STREAMING_LEADER_APPLY\n> TA_STREAMING_LEADER_SERIALIZE\n> TA_STREAMING_LEADER_SEND_TO_PARALLEL\n> TA_STREAMING_PARALLEL_APPLY\n>\n> ~\n>\n\nI also think we can improve naming but adding streaming in the names\nmakes them slightly difficult to read. As you have suggested, it will\nbe better to add comments for streaming and non-streaming cases. How\nabout naming them as below:\n\ntypedef enum\n{\nTRANS_LEADER_APPLY\nTRANS_LEADER_SERIALIZE\nTRANS_LEADER_SEND_TO_PARALLEL\nTRANS_PARALLEL_APPLY\n} TransApplyAction;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Sep 2022 15:55:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Sep 12, 2022 at 4:27 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Hou-san,\n>\n> Thank you for updating the patch! Followings are comments for v28-0001.\n> I will dig your patch more, but I send partially to keep the activity of the thread.\n>\n> ===\n> For applyparallelworker.c\n>\n> 01. filename\n> The word-ordering of filename seems not good\n> because you defined the new worker as \"parallel apply worker\".\n>\n\nI think in the future we may have more files for apply work (like\napplyddl.c for DDL apply work), so it seems okay to name all apply\nrelated files in a similar way.\n\n>\n> ===\n> For worker.c\n>\n> 07. general\n>\n> In many lines if-else statement is used for apply_action, but I think they should rewrite as switch-case statement.\n>\n\nSounds reasonable to me.\n\n> 08. 
global variable\n>\n> ```\n> -static bool in_streamed_transaction = false;\n> +bool in_streamed_transaction = false;\n> ```\n>\n> a.\n>\n> It seems that in_streamed_transaction is used only in the worker.c, so we can change to stati variable.\n>\n\nYeah, I don't know why it has been changed in the first place.\n\n> b.\n>\n> That flag is set only when an apply worker spill the transaction to the disk.\n> How about \"in_streamed_transaction\" -> \"in_spilled_transaction\"?\n>\n\nIsn't this an existing variable? If so, it doesn't seem like a good\nidea to change the name unless we are changing its meaning.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:19:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou-san,\r\n\r\n> I will dig your patch more, but I send partially to keep the activity of the thread.\r\n\r\nMore minor comments about v28.\r\n\r\n===\r\nAbout 0002 \r\n\r\nFor 015_stream.pl\r\n\r\n14. check_parallel_log\r\n\r\n```\r\n+# Check the log that the streamed transaction was completed successfully\r\n+# reported by parallel apply worker.\r\n+sub check_parallel_log\r\n+{\r\n+ my ($node_subscriber, $offset, $is_parallel)= @_;\r\n+ my $parallel_message = 'finished processing the transaction finish command';\r\n+\r\n+ if ($is_parallel)\r\n+ {\r\n+ $node_subscriber->wait_for_log(qr/$parallel_message/, $offset);\r\n+ }\r\n+}\r\n```\r\n\r\nI think check_parallel_log() should be called only when streaming = 'parallel' and if-statement is not needed\r\n\r\n===\r\nFor 016_stream_subxact.pl\r\n\r\n15. test_streaming\r\n\r\n```\r\n+ INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series( 3, 500) s(i);\r\n```\r\n\r\n\" 3\" should be \"3\".\r\n\r\n===\r\nAbout 0003\r\n\r\nFor applyparallelworker.c\r\n\r\n16. 
parallel_apply_relation_check()\r\n\r\n```\r\n+ if (rel->parallel_apply_safe == PARALLEL_APPLY_SAFETY_UNKNOWN)\r\n+ logicalrep_rel_mark_parallel_apply(rel);\r\n```\r\n\r\nIt was not clear to me when logicalrep_rel_mark_parallel_apply() is called here.\r\nIIUC parallel_apply_relation_check() is called when the parallel apply worker handles changes,\r\nbut before that the relation is opened via logicalrep_rel_open() and parallel_apply_safe is set here.\r\nIf it guards against some protocol violation, we may use Assert().\r\n\r\n===\r\nFor create_subscription.sgml\r\n\r\n17.\r\nThe restriction about foreign keys does not seem to be documented.\r\n\r\n===\r\nAbout 0004\r\n\r\nFor 015_stream.pl\r\n\r\n18. check_parallel_log\r\n\r\nI heard that the removal has been reverted, but in the patch\r\ncheck_parallel_log() is removed again... :-(\r\n\r\n\r\n===\r\nAbout throughout\r\n\r\nI checked the test coverage via `make coverage`. About applyparallelworker.c and worker.c, both function coverage is 100%, and\r\nline coverages are 86.2% and 94.5%. Generally it's good.\r\nBut I read the report and the following parts seem not to be tested.\r\n\r\nIn parallel_apply_start_worker():\r\n\r\n```\r\n\t\tif (tmp_winfo->error_mq_handle == NULL)\r\n\t\t{\r\n\t\t\t/*\r\n\t\t\t * Release the worker information and try next one if the parallel\r\n\t\t\t * apply worker exited cleanly.\r\n\t\t\t */\r\n\t\t\tParallelApplyWorkersList = foreach_delete_current(ParallelApplyWorkersList, lc);\r\n\t\t\tshm_mq_detach(tmp_winfo->mq_handle);\r\n\t\t\tdsm_detach(tmp_winfo->dsm_seg);\r\n\t\t\tpfree(tmp_winfo);\r\n\t\t}\r\n```\r\n\r\nIn HandleParallelApplyMessage():\r\n\r\n```\r\n\t\tcase 'X':\t\t\t\t/* Terminate, indicating clean exit */\r\n\t\t\t{\r\n\t\t\t\tshm_mq_detach(winfo->error_mq_handle);\r\n\t\t\t\twinfo->error_mq_handle = NULL;\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n```\r\n\r\nDoes it mean that we do not test the termination of the parallel apply worker?
If so I think it should be tested.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 13 Sep 2022 12:02:26 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi,\r\n\r\n> > 01. filename\r\n> > The word-ordering of filename seems not good\r\n> > because you defined the new worker as \"parallel apply worker\".\r\n> >\r\n> \r\n> I think in the future we may have more files for apply work (like\r\n> applyddl.c for DDL apply work), so it seems okay to name all apply\r\n> related files in a similar way.\r\n\r\n> > That flag is set only when an apply worker spill the transaction to the disk.\r\n> > How about \"in_streamed_transaction\" -> \"in_spilled_transaction\"?\r\n> >\r\n> \r\n> Isn't this an existing variable? If so, it doesn't seem like a good\r\n> idea to change the name unless we are changing its meaning.\r\n\r\nBoth of you said are reasonable. They do not have to be modified.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 13 Sep 2022 12:05:32 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Sep 8, 2022 at 14:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Sep 5, 2022 at 6:34 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the correct patch set this time.\r\n> >\r\n> \r\n> Few comments on v28-0001*:\r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> + /* Whether the worker is processing a transaction. */\r\n> + bool in_use;\r\n> \r\n> I think this same comment applies to in_parallel_apply_xact flag as\r\n> well. 
How about: \"Indicates whether the worker is available to be used\r\n> for parallel apply transaction?\"?\r\n> \r\n> 2.\r\n> + /*\r\n> + * Set this flag in the leader instead of the parallel apply worker to\r\n> + * avoid the race condition where the leader has already started waiting\r\n> + * for the parallel apply worker to finish processing the transaction(set\r\n> + * the in_parallel_apply_xact to false) while the child process has not yet\r\n> + * processed the first STREAM_START and has not set the\r\n> + * in_parallel_apply_xact to true.\r\n> \r\n> I think part of this comment \"(set the in_parallel_apply_xact to\r\n> false)\" is not necessary. It will be clear without that.\r\n> \r\n> 3.\r\n> + /* Create entry for requested transaction. */\r\n> + entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_ENTER, &found);\r\n> + if (found)\r\n> + elog(ERROR, \"hash table corrupted\");\r\n> ...\r\n> ...\r\n> + hash_search(ParallelApplyWorkersHash, &xid, HASH_REMOVE, NULL);\r\n> \r\n> It is better to have a similar elog for HASH_REMOVE case as well. We\r\n> normally seem to have such elog for HASH_REMOVE.\r\n> \r\n> 4.\r\n> * Parallel apply is not supported when subscribing to a publisher which\r\n> + * cannot provide the abort_time, abort_lsn and the column information\r\n> used\r\n> + * to verify the parallel apply safety.\r\n> \r\n> \r\n> In this comment, which column information are you referring to?\r\n> \r\n> 5.\r\n> + /*\r\n> + * Set in_parallel_apply_xact to true again as we only aborted the\r\n> + * subtransaction and the top transaction is still in progress. 
No\r\n> + * need to lock here because currently only the apply leader are\r\n> + * accessing this flag.\r\n> + */\r\n> + winfo->shared->in_parallel_apply_xact = true;\r\n> \r\n> This theory sounds good to me but I think it is better to update/read\r\n> this flag under spinlock as the patch is doing at a few other places.\r\n> I think that will make the code easier to follow without worrying too\r\n> much about such special cases. There are a few asserts as well which\r\n> read this without lock, it would be better to change those as well.\r\n> \r\n> 6.\r\n> + * LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum\r\n> protocol version\r\n> + * with support for streaming large transactions using parallel apply\r\n> + * workers. Introduced in PG16.\r\n> \r\n> How about changing it to something like:\r\n> \"LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is the minimum\r\n> protocol\r\n> version where we support applying large streaming transactions in\r\n> parallel. Introduced in PG16.\"\r\n> \r\n> 7.\r\n> + PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\r\n> + bool write_abort_lsn = (data->protocol_version >=\r\n> + LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM);\r\n> \r\n> /*\r\n> * The abort should happen outside streaming block, even for streamed\r\n> @@ -1856,7 +1859,8 @@ pgoutput_stream_abort(struct\r\n> LogicalDecodingContext *ctx,\r\n> Assert(rbtxn_is_streamed(toptxn));\r\n> \r\n> OutputPluginPrepareWrite(ctx, true);\r\n> - logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn->xid);\r\n> + logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn, abort_lsn,\r\n> + write_abort_lsn);\r\n> \r\n> I think we need to send additional information if the client has used\r\n> the parallel streaming option. Also, let's keep sending subxid as we\r\n> were doing previously and add additional parameters required. 
It may\r\n> be better to name write_abort_lsn as abort_info.\r\n> \r\n> 8.\r\n> + /*\r\n> + * Check whether the publisher sends abort_lsn and abort_time.\r\n> + *\r\n> + * Note that the paralle apply worker is only started when the publisher\r\n> + * sends abort_lsn and abort_time.\r\n> + */\r\n> + if (am_parallel_apply_worker() ||\r\n> + walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\r\n> + read_abort_lsn = true;\r\n> +\r\n> + logicalrep_read_stream_abort(s, &abort_data, read_abort_lsn);\r\n> \r\n> This check should match with the check for the write operation where\r\n> we are checking the protocol version as well. There is a typo as well\r\n> in the comments (/paralle/parallel).\r\n\r\nImproved as suggested.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 15 Sep 2022 05:15:24 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Sep 8, 2022 at 19:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Sep 8, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Mon, Sep 5, 2022 at 6:34 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Attach the correct patch set this time.\r\n> > >\r\n> >\r\n> > Few comments on v28-0001*:\r\n> > =======================\r\n> >\r\n> \r\n> Some suggestions for comments in v28-0001*\r\n\r\nThanks for your comments and patch!\r\n\r\n> 1.\r\n> +/*\r\n> + * Entry for a hash table we use to map from xid to the parallel apply worker\r\n> + * state.\r\n> + */\r\n> +typedef struct ParallelApplyWorkerEntry\r\n> \r\n> Let's change this comment to: \"Hash table entry to map xid to the\r\n> parallel apply worker state.\"\r\n> \r\n> 2.\r\n> +/*\r\n> + * List that stores the information of parallel apply workers that were\r\n> + * started. 
Newly added worker information will be removed from the list at\r\n> the\r\n> + * end of the transaction when there are enough workers in the pool. Besides,\r\n> + * exited workers will be removed from the list after being detected.\r\n> + */\r\n> +static List *ParallelApplyWorkersList = NIL;\r\n> \r\n> Can we change this to: \"A list to maintain the active parallel apply\r\n> workers. The information for the new worker is added to the list after\r\n> successfully launching it. The list entry is removed at the end of the\r\n> transaction if there are already enough workers in the worker pool.\r\n> For more information about the worker pool, see comments atop\r\n> worker.c. We also remove the entry from the list if the worker is\r\n> exited due to some error.\"\r\n> \r\n> Apart from this, I have added/changed a few other comments in\r\n> v28-0001*. Kindly check the attached, if you are fine with it then\r\n> please include it in the next version.\r\n\r\nImproved as suggested.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 15 Sep 2022 05:17:20 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Sep 9, 2022 at 15:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for the v28-0001 patch:\r\n> \r\n> (There may be some overlap with other people's review comments and/or\r\n> some fixes already made).\r\n\r\nThanks for your comments.\r\n\r\n> 5. 
src/backend/libpq/pqmq.c\r\n> \r\n> + {\r\n> + if (IsParallelWorker())\r\n> + SendProcSignal(pq_mq_parallel_leader_pid,\r\n> + PROCSIG_PARALLEL_MESSAGE,\r\n> + pq_mq_parallel_leader_backend_id);\r\n> + else\r\n> + {\r\n> + Assert(IsLogicalParallelApplyWorker());\r\n> + SendProcSignal(pq_mq_parallel_leader_pid,\r\n> + PROCSIG_PARALLEL_APPLY_MESSAGE,\r\n> + pq_mq_parallel_leader_backend_id);\r\n> + }\r\n> + }\r\n> \r\n> This code can be simplified if you want to. For example,\r\n> \r\n> {\r\n> ProcSignalReason reason;\r\n> Assert(IsParallelWorker() || IsLogicalParallelApplyWorker());\r\n> reason = IsParallelWorker() ? PROCSIG_PARALLEL_MESSAGE :\r\n> PROCSIG_PARALLEL_APPLY_MESSAGE;\r\n> SendProcSignal(pq_mq_parallel_leader_pid, reason,\r\n> pq_mq_parallel_leader_backend_id);\r\n> }\r\n\r\nNot sure this would be better.\r\n\r\n> 14.\r\n> \r\n> + /* Failed to start a new parallel apply worker. */\r\n> + if (winfo == NULL)\r\n> + return;\r\n> \r\n> There seem to be quite a lot of places (like this example) where\r\n> something may go wrong and the behaviour apparently will just silently\r\n> fall-back to using the non-parallel streaming. Maybe that is OK, but I\r\n> am just wondering how can the user ever know this has happened? Maybe\r\n> the docs can mention that this could happen and give some description\r\n> of what processes users can look for (or some other strategy) so they\r\n> can just confirm that the parallel streaming is really working like\r\n> they assume it to be?\r\n\r\nI think the user could refer to the view pg_stat_subscription to check whether\r\nthe parallel apply worker has started.\r\nBTW, we have documented the case where no parallel workers are available.\r\n\r\n> 17. src/backend/replication/logical/applyparallelworker.c -\r\n> parallel_apply_free_worker\r\n> \r\n> +/*\r\n> + * Remove the parallel apply worker entry from the hash table. 
And stop the\r\n> + * worker if there are enough workers in the pool.\r\n> + */\r\n> +void\r\n> +parallel_apply_free_worker(ParallelApplyWorkerInfo *winfo, TransactionId\r\n> xid)\r\n> \r\n> I think the reason for doing the \"enough workers in the pool\" logic\r\n> needs some more explanation.\r\n\r\nBecause the worker process keeps running, we stop it to reduce the waste of\r\nresources.\r\n\r\n> 19. src/backend/replication/logical/applyparallelworker.c -\r\n> LogicalParallelApplyLoop\r\n> \r\n> + ApplyMessageContext = AllocSetContextCreate(ApplyContext,\r\n> + \"ApplyMessageContext\",\r\n> + ALLOCSET_DEFAULT_SIZES);\r\n> \r\n> Should the name of this context be \"ParallelApplyMessageContext\"?\r\n\r\nI think it is okay to use \"ApplyMessageContext\" here just like \"ApplyContext\".\r\nI will change this if more people have the same idea as you.\r\n\r\n> 20. src/backend/replication/logical/applyparallelworker.c -\r\n> HandleParallelApplyMessage\r\n> \r\n> + default:\r\n> + {\r\n> + elog(ERROR, \"unrecognized message type received from parallel apply\r\n> worker: %c (message length %d bytes)\",\r\n> + msgtype, msg->len);\r\n> + }\r\n> \r\n> \"received from\" -> \"received by\"\r\n> \r\n> ~~~\r\n> \r\n> \r\n> 21. src/backend/replication/logical/applyparallelworker.c -\r\n> HandleParallelApplyMessages\r\n> \r\n> +/*\r\n> + * Handle any queued protocol messages received from parallel apply workers.\r\n> + */\r\n> +void\r\n> +HandleParallelApplyMessages(void)\r\n> \r\n> 21a.\r\n> \"received from\" -> \"received by\"\r\n> \r\n> ~\r\n> \r\n> 21b.\r\n> I wonder if this comment should give some credit to the function in\r\n> parallel.c - because this seems almost a copy of all that code.\r\n\r\nSince the message is from the parallel apply worker to the main apply worker, I\r\nthink \"from\" looks a little better.\r\n\r\n> 27. 
src/backend/replication/logical/launcher.c - logicalrep_worker_detach\r\n> \r\n> + /*\r\n> + * This is the leader apply worker; stop all the parallel apply workers\r\n> + * previously started from here.\r\n> + */\r\n> + if (!isParallelApplyWorker(MyLogicalRepWorker))\r\n> \r\n> 27a.\r\n> The comment does not match the code. If this *is* the leader apply\r\n> worker then why do we have the condition to check that?\r\n> \r\n> Maybe only needs a comment update like\r\n> \r\n> SUGGESTION\r\n> If this is the leader apply worker then stop all the parallel...\r\n> \r\n> ~\r\n> \r\n> 27b.\r\n> Code seems also assuming it cannot be a tablesync worker but it is not\r\n> checking that. I am wondering if it will be better to have yet another\r\n> macro/inline to do isLeaderApplyWorker() that will make sure this\r\n> really is the leader apply worker. (This review comment suggestion is\r\n> repeated later below).\r\n\r\n=>27a.\r\nImproved as suggested.\r\n\r\n=>27b.\r\nChanged the if-statement to \r\n`if (!am_parallel_apply_worker() && !am_tablesync_worker())`.\r\n\r\n> 42. src/backend/replication/logical/worker.c - InitializeApplyWorker\r\n> \r\n> +/*\r\n> + * Initialize the database connection, in-memory subscription and necessary\r\n> + * config options.\r\n> + */\r\n> \r\n> I still think this should mention that this is common initialization\r\n> code for \"both leader apply workers, and parallel apply workers\"\r\n\r\nI'm not sure about this. I will change this if more people have the same idea\r\nas you.\r\n\r\n> 44. 
src/backend/replication/logical/worker.c - IsLogicalParallelApplyWorker\r\n> \r\n> +/*\r\n> + * Is current process a logical replication parallel apply worker?\r\n> + */\r\n> +bool\r\n> +IsLogicalParallelApplyWorker(void)\r\n> +{\r\n> + return am_parallel_apply_worker();\r\n> +}\r\n> +\r\n> \r\n> It seems a bit strange to have this function\r\n> IsLogicalParallelApplyWorker, and also am_parallel_apply_worker()\r\n> which are basically identical except one of them is static and one is\r\n> not.\r\n> \r\n> I wonder if there should be just one function. And if you really do\r\n> need 2 names for consistency then you can just define a synonym like\r\n> \r\n> #define am_parallel_apply_worker IsLogicalParallelApplyWorker\r\n\r\nI am not sure whether this will be better. But I can change this if more people\r\nprefer.\r\n\r\n> 49. src/include/replication/worker_internal.h\r\n> \r\n> @@ -60,6 +64,12 @@ typedef struct LogicalRepWorker\r\n> */\r\n> FileSet *stream_fileset;\r\n> \r\n> + /*\r\n> + * PID of leader apply worker if this slot is used for a parallel apply\r\n> + * worker, InvalidPid otherwise.\r\n> + */\r\n> + pid_t apply_leader_pid;\r\n> +\r\n> /* Stats. */\r\n> XLogRecPtr last_lsn;\r\n> TimestampTz last_send_time;\r\n> Whitespace indent of the new member ok?\r\n\r\nI will run pgindent later.\r\n\r\nThe rest of the comments are changed as suggested.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 15 Sep 2022 05:18:28 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Sep 12, 2022 at 18:58 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Hou-san,\r\n> \r\n> Thank you for updating the patch! 
Followings are comments for v28-0001.\r\n> I will dig your patch more, but I send partially to keep the activity of the thread.\r\n\r\nThanks for your comments.\r\n\r\n> ===\r\n> For applyparallelworker.c\r\n> \r\n> 01. filename\r\n> The word-ordering of filename seems not good\r\n> because you defined the new worker as \"parallel apply worker\".\r\n\r\nAs Amit said, we keep it consistent with the format of other file names.\r\n\r\n> 02. global variable\r\n> \r\n> ```\r\n> +/* Parallel apply workers hash table (initialized on first use). */\r\n> +static HTAB *ParallelApplyWorkersHash = NULL;\r\n> +\r\n> +/*\r\n> + * List that stores the information of parallel apply workers that were\r\n> + * started. Newly added worker information will be removed from the list at\r\n> the\r\n> + * end of the transaction when there are enough workers in the pool. Besides,\r\n> + * exited workers will be removed from the list after being detected.\r\n> + */\r\n> +static List *ParallelApplyWorkersList = NIL;\r\n> ```\r\n> \r\n> Could you add descriptions about difference between the list and hash table?\r\n> IIUC the Hash stores the parallel workers that\r\n> are assigned to transacitons, and the list stores all alive ones.\r\n\r\nI made some modifications to the comments above ParallelApplyWorkersList.\r\nAnd I think we could know the difference between these two variables by\r\nreferring to the functions parallel_apply_start_worker and\r\nparallel_apply_free_worker.\r\n\r\n> 03. parallel_apply_find_worker\r\n> \r\n> ```\r\n> + /* Return the cached parallel apply worker if valid. 
*/\r\n> + if (stream_apply_worker != NULL)\r\n> + return stream_apply_worker;\r\n> ```\r\n> \r\n> This is just a question -\r\n> Why the given xid and the assigned xid to the worker are not checked here?\r\n> Is there chance to find wrong worker?\r\n\r\nI think it is okay to not check the worker's xid here.\r\nPlease refer to the comments above `stream_apply_worker`.\r\n\"stream_apply_worker\" will only be returned during a stream block, which means\r\nthe xid is the same as the xid in the STREAM_START message.\r\n\r\n> 04. parallel_apply_start_worker\r\n> \r\n> ```\r\n> +/*\r\n> + * Start a parallel apply worker that will be used for the specified xid.\r\n> + *\r\n> + * If a parallel apply worker is not in use then re-use it, otherwise start a\r\n> + * fresh one. Cache the worker information in ParallelApplyWorkersHash\r\n> keyed by\r\n> + * the specified xid.\r\n> + */\r\n> +void\r\n> +parallel_apply_start_worker(TransactionId xid)\r\n> ```\r\n> \r\n> \"parallel_apply_start_worker\" should be \"start_parallel_apply_worker\", I think\r\n\r\nFor code readability, similar functions are named in this format:\r\n`parallel_apply_.*_worker`.\r\n\r\n> 05. parallel_apply_stream_abort\r\n> \r\n> ```\r\n> \t\tfor (i = list_length(subxactlist) - 1; i >= 0; i--)\r\n> \t\t{\r\n> \t\t\txid = list_nth_xid(subxactlist, i);\r\n> \t\t\tif (xid == subxid)\r\n> \t\t\t{\r\n> \t\t\t\tfound = true;\r\n> \t\t\t\tbreak;\r\n> \t\t\t}\r\n> \t\t}\r\n> ```\r\n> \r\n> Please not reuse the xid, declare and use another variable in the else block or\r\n> something.\r\n\r\nAdded a temporary variable \"xid_tmp\" inside the for-statement.\r\n\r\n> 06. 
parallel_apply_free_worker\r\n> \r\n> ```\r\n> + if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n> + {\r\n> ```\r\n> \r\n> Please add a comment like: \"Do we have enough workers in the pool?\" or\r\n> something.\r\n\r\nAdded the following comment according to your suggestion:\r\n`Are there enough workers in the pool?`\r\n\r\n> For worker.c\r\n> \r\n> 07. general\r\n> \r\n> In many lines if-else statement is used for apply_action, but I think they should\r\n> rewrite as switch-case statement.\r\n\r\nChanged.\r\n\r\n> 08. global variable\r\n> \r\n> ```\r\n> -static bool in_streamed_transaction = false;\r\n> +bool in_streamed_transaction = false;\r\n> ```\r\n> \r\n> a.\r\n> \r\n> It seems that in_streamed_transaction is used only in the worker.c, so we can\r\n> change to stati variable.\r\n> \r\n> b.\r\n> \r\n> That flag is set only when an apply worker spill the transaction to the disk.\r\n> How about \"in_streamed_transaction\" -> \"in_spilled_transaction\"?\r\n\r\n=>8a.\r\nImproved.\r\n\r\n=>8b.\r\nI am not sure if we could rename this existing variable for this. So I kept the\r\nname.\r\n\r\n> 09. apply_handle_stream_prepare\r\n> \r\n> ```\r\n> - elog(DEBUG1, \"received prepare for streamed transaction %u\",\r\n> prepare_data.xid);\r\n> ```\r\n> \r\n> I think this debug message is still useful.\r\n\r\nSince I think it is not appropriate to log the xid here, added back the\r\nfollowing message: `finished processing the transaction finish command`.\r\n\r\n> 10. 
apply_handle_stream_stop\r\n> \r\n> ```\r\n> + if (apply_action == TA_APPLY_IN_PARALLEL_WORKER)\r\n> + {\r\n> + pgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);\r\n> + }\r\n> + else if (apply_action == TA_SEND_TO_PARALLEL_WORKER)\r\n> + {\r\n> ```\r\n> \r\n> The ordering of the STREAM {STOP, START} is checked only when an apply\r\n> worker spill the transaction to the disk.\r\n> (This is done via in_streamed_transaction)\r\n> I think checks should be added here, like if (!stream_apply_worker) or\r\n> something.\r\n>\r\n> 11. apply_handle_stream_abort\r\n> \r\n> ```\r\n> + if (in_streamed_transaction)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> + errmsg_internal(\"STREAM ABORT message without STREAM\r\n> STOP\")));\r\n> ```\r\n> \r\n> I think the check by stream_apply_worker should be added.\r\n\r\nBecause \"in_streamed_transaction\" is only used for non-parallel apply,\r\nI used stream_apply_worker to confirm the ordering of the STREAM {STOP,\r\nSTART}.\r\nBTW, I moved the reset of in_streamed_transaction into the block of\r\n`else if (apply_action == TA_SERIALIZE_TO_FILE)`.\r\n\r\n> 12. apply_handle_stream_commit\r\n> \r\n> a.\r\n> \r\n> ```\r\n> \tif (in_streamed_transaction)\r\n> \t\tereport(ERROR,\r\n> \t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> \t\t\t\t errmsg_internal(\"STREAM COMMIT message\r\n> without STREAM STOP\")));\r\n> ```\r\n> \r\n> I think the check by stream_apply_worker should be added.\r\n> \r\n> b.\r\n> \r\n> ```\r\n> - elog(DEBUG1, \"received commit for streamed transaction %u\", xid);\r\n> ```\r\n> \r\n> I think this debug message is still useful.\r\n\r\n=>12a.\r\nSee the reply to #10 && #11.\r\n\r\n=>12b.\r\nSee the reply to #09.\r\n\r\n> ===\r\n> For launcher.c\r\n> \r\n> 13. 
logicalrep_worker_stop_by_slot\r\n> \r\n> ```\r\n> + LogicalRepWorker *worker = &LogicalRepCtx->workers[slot_no];\r\n> +\r\n> + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> +\r\n> + /* Return if the generation doesn't match or the worker is not alive. */\r\n> + if (worker->generation != generation ||\r\n> + worker->proc == NULL)\r\n> + return;\r\n> +\r\n> ```\r\n> \r\n> a.\r\n> \r\n> LWLockAcquire(LogicalRepWorkerLock) is needed before reading slots.\r\n> \r\n> b.\r\n> \r\n> LWLockRelease(LogicalRepWorkerLock) is needed even if worker is not found.\r\n\r\nFixed.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 15 Sep 2022 05:20:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Sep 13, 2022 at 17:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments.\r\n\r\n> On Fri, Sep 9, 2022 at 2:31 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, September 9, 2022 3:02 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > >\r\n> >\r\n> > > 3.\r\n> > >\r\n> > > max_logical_replication_workers (integer)\r\n> > > Specifies maximum number of logical replication workers. This\r\n> > > includes apply leader workers, parallel apply workers, and table\r\n> > > synchronization workers.\r\n> > > Logical replication workers are taken from the pool defined by\r\n> > > max_worker_processes.\r\n> > > The default value is 4. This parameter can only be set at server start.\r\n> > >\r\n> > > ~\r\n> > >\r\n> > > I did not really understand why the default is 4. 
Because the default\r\n> > > tablesync workers is 2, and the default parallel workers is 2, but\r\n> > > what about accounting for the apply worker? Therefore, shouldn't\r\n> > > max_logical_replication_workers default be 5 instead of 4?\r\n> >\r\n> > The parallel apply is disabled by default, so it's not a must to increase this\r\n> > global default value as discussed[1]\r\n> >\r\n> > [1] https://www.postgresql.org/message-\r\n> id/CAD21AoCwaU8SqjmC7UkKWNjDg3Uz4FDGurMpis3zw5SEC%2B27jQ%40mail\r\n> .gmail.com\r\n> >\r\n> \r\n> Okay, but can we document to increase this value when the parallel\r\n> apply is enabled?\r\n\r\nAdd the following sentence in the chapter [31.10. Configuration Settings]:\r\n```\r\nIn addition, if the subscription parameter <literal>streaming</literal> is set\r\nto <literal>parallel</literal>, please increase\r\n<literal>max_logical_replication_workers</literal> according to the desired\r\nnumber of parallel apply workers.\r\n```\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 15 Sep 2022 05:20:54 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Sep 13, 2022 at 18:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments.\r\n\r\n> On Fri, Sep 9, 2022 at 12:32 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > 29. 
src/backend/replication/logical/worker.c - TransactionApplyAction\r\n> >\r\n> > /*\r\n> > * What action to take for the transaction.\r\n> > *\r\n> > * TA_APPLY_IN_LEADER_WORKER means that we are in the leader apply\r\n> worker and\r\n> > * changes of the transaction are applied directly in the worker.\r\n> > *\r\n> > * TA_SERIALIZE_TO_FILE means that we are in leader apply worker and\r\n> changes\r\n> > * are written to temporary files and then applied when the final commit\r\n> > * arrives.\r\n> > *\r\n> > * TA_APPLY_IN_PARALLEL_WORKER means that we are in the parallel apply\r\n> worker\r\n> > * and changes of the transaction are applied directly in the worker.\r\n> > *\r\n> > * TA_SEND_TO_PARALLEL_WORKER means that we are in the leader apply\r\n> worker and\r\n> > * need to send the changes to the parallel apply worker.\r\n> > */\r\n> > typedef enum\r\n> > {\r\n> > /* The action for non-streaming transactions. */\r\n> > TA_APPLY_IN_LEADER_WORKER,\r\n> >\r\n> > /* Actions for streaming transactions. */\r\n> > TA_SERIALIZE_TO_FILE,\r\n> > TA_APPLY_IN_PARALLEL_WORKER,\r\n> > TA_SEND_TO_PARALLEL_WORKER\r\n> > } TransactionApplyAction;\r\n> >\r\n> > ~\r\n> >\r\n> > 29a.\r\n> > I think if you change all those enum names slightly (e.g. like below)\r\n> > then they can be more self-explanatory:\r\n> >\r\n> > TA_NOT_STREAMING_LEADER_APPLY\r\n> > TA_STREAMING_LEADER_SERIALIZE\r\n> > TA_STREAMING_LEADER_SEND_TO_PARALLEL\r\n> > TA_STREAMING_PARALLEL_APPLY\r\n> >\r\n> > ~\r\n> >\r\n> \r\n> I also think we can improve naming but adding streaming in the names\r\n> makes them slightly difficult to read. As you have suggested, it will\r\n> be better to add comments for streaming and non-streaming cases. 
How\r\n> about naming them as below:\r\n> \r\n> typedef enum\r\n> {\r\n> TRANS_LEADER_APPLY\r\n> TRANS_LEADER_SERIALIZE\r\n> TRANS_LEADER_SEND_TO_PARALLEL\r\n> TRANS_PARALLEL_APPLY\r\n> } TransApplyAction;\r\n\r\nI think your suggestion looks good.\r\nImproved as suggested.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Thu, 15 Sep 2022 05:22:45 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Sep 13, 2022 at 20:02 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Hou-san,\r\n> \r\n> > I will dig your patch more, but I send partially to keep the activity of the thread.\r\n> \r\n> More minor comments about v28.\r\n\r\nThanks for your comments.\r\n\r\n> ===\r\n> About 0002\r\n> \r\n> For 015_stream.pl\r\n> \r\n> 14. check_parallel_log\r\n> \r\n> ```\r\n> +# Check the log that the streamed transaction was completed successfully\r\n> +# reported by parallel apply worker.\r\n> +sub check_parallel_log\r\n> +{\r\n> + my ($node_subscriber, $offset, $is_parallel)= @_;\r\n> + my $parallel_message = 'finished processing the transaction finish\r\n> command';\r\n> +\r\n> + if ($is_parallel)\r\n> + {\r\n> + $node_subscriber->wait_for_log(qr/$parallel_message/, $offset);\r\n> + }\r\n> +}\r\n> ```\r\n> \r\n> I think check_parallel_log() should be called only when streaming = 'parallel' and\r\n> if-statement is not needed\r\n\r\nI wanted to make the function test_streaming look simpler, so I put the\r\nchecking of the streaming option inside the function check_parallel_log.\r\n\r\n> For 016_stream_subxact.pl\r\n> \r\n> 15. 
test_streaming\r\n> \r\n> ```\r\n> + INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series( 3,\r\n> 500) s(i);\r\n> ```\r\n> \r\n> \" 3\" should be \"3\".\r\n\r\nImproved.\r\n\r\n> About 0003\r\n> \r\n> For applyparallelworker.c\r\n> \r\n> 16. parallel_apply_relation_check()\r\n> \r\n> ```\r\n> + if (rel->parallel_apply_safe == PARALLEL_APPLY_SAFETY_UNKNOWN)\r\n> + logicalrep_rel_mark_parallel_apply(rel);\r\n> ```\r\n> \r\n> I was not clear when logicalrep_rel_mark_parallel_apply() is called here.\r\n> IIUC parallel_apply_relation_check() is called when parallel apply worker\r\n> handles changes,\r\n> but before that relation is opened via logicalrep_rel_open() and\r\n> parallel_apply_safe is set here.\r\n> If it guards some protocol violation, we may use Assert().\r\n\r\nCompared with the flag \"localrelvalid\", we also need to additionally reset the\r\nflag \"safety\" when a function or type is changed (see function\r\nlogicalrep_relmap_init). So I think for these two cases, we just need to reset\r\nthe flag \"safety\" to avoid rebuilding too much cache (see function\r\nlogicalrep_relmap_reset_parallel_cb).\r\n\r\n> For create_subscription.sgml\r\n> \r\n> 17.\r\n> The restriction about foreign key does not seem to be documented.\r\n\r\nI removed the check for the foreign key.\r\n\r\nSince foreign keys do not take effect in the subscriber's apply worker by\r\ndefault, it seems that foreign keys would not hit this ERROR frequently.\r\nIf we set a foreign-key-related trigger to \"REPLICA\", then I think this flag\r\nwill be set to \"unsafe\" when checking the non-immutable function used by the\r\ntrigger.\r\n\r\nBTW, I only documented this reason in the commit message and kept the foreign\r\nkey related tests.\r\n\r\n> ===\r\n> About 0004\r\n> \r\n> For 015_stream.pl\r\n> \r\n> 18. check_parallel_log\r\n> \r\n> I heard that the removal has been reverted, but in the patch\r\n> check_parallel_log() is removed again... 
:-(\r\n\r\nYes, I removed it.\r\nI think this will make the test unstable, because after applying patch\r\n0004, we cannot be sure whether the transaction is completed in a parallel\r\napply worker. If any unexpected error occurs, the test will fail because the\r\nlog cannot be found, even if the transaction completed successfully.\r\n\r\n> ===\r\n> About throughout\r\n> \r\n> I checked the test coverage via `make coverage`. About appluparallelworker.c\r\n> and worker.c, both function coverage is 100%, and\r\n> line coverages are 86.2 % and 94.5 %. Generally it's good.\r\n> But I read the report and following parts seems not tested.\r\n> \r\n> In parallel_apply_start_worker():\r\n> \r\n> ```\r\n> \t\tif (tmp_winfo->error_mq_handle == NULL)\r\n> \t\t{\r\n> \t\t\t/*\r\n> \t\t\t * Release the worker information and try next one if\r\n> the parallel\r\n> \t\t\t * apply worker exited cleanly.\r\n> \t\t\t */\r\n> \t\t\tParallelApplyWorkersList =\r\n> foreach_delete_current(ParallelApplyWorkersList, lc);\r\n> \t\t\tshm_mq_detach(tmp_winfo->mq_handle);\r\n> \t\t\tdsm_detach(tmp_winfo->dsm_seg);\r\n> \t\t\tpfree(tmp_winfo);\r\n> \t\t}\r\n> ```\r\n> \r\n> In HandleParallelApplyMessage():\r\n> \r\n> ```\r\n> \t\tcase 'X':\t\t\t\t/* Terminate, indicating\r\n> clean exit */\r\n> \t\t\t{\r\n> \t\t\t\tshm_mq_detach(winfo->error_mq_handle);\r\n> \t\t\t\twinfo->error_mq_handle = NULL;\r\n> \t\t\t\tbreak;\r\n> \t\t\t}\r\n> ```\r\n> \r\n> Does it mean that we do not test the termination of parallel apply worker? 
If so I\r\n> think it should be tested.\r\n\r\nSince this is an unexpected situation that cannot be reproduced 100%, we did\r\nnot add tests related to this part of the code to improve coverage.\r\n\r\nThe new patches were attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275F145878B4A44586C46CE9E499%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n\r\n", "msg_date": "Thu, 15 Sep 2022 05:23:34 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 15, 2022 1:15 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> Attach the new patch set.\r\n> \r\n\r\nHi,\r\n\r\nI did some performance tests for \"rollback to savepoint\" cases, based on v28\r\npatch.\r\n\r\nThis test used synchronous logical replication, and compared SQL execution times\r\nbefore and after applying the patch. It tested different percentage of changes\r\nin the transaction are rolled back (use \"rollback to savepoint\"), when using\r\ndifferent logical_decoding_work_mem.\r\n\r\nThe test was performed ten times, and the average of the middle eight was taken.\r\n\r\nThe results are as follows. The bar charts and the scripts of the test are\r\nattached. 
The steps to reproduce performance test are at the beginning of\r\n`start_pub.sh`.\r\n\r\nRESULT - rollback 10% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 43.752 43.463 42.667\r\npatched 32.646 30.941 31.491\r\nCompare with HEAD -25.39% -28.81% -26.19%\r\n\r\n\r\nRESULT - rollback 20% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 40.974 40.214 39.930\r\npatched 28.114 28.055 27.550\r\nCompare with HEAD -31.39% -30.23% -31.00%\r\n\r\n\r\nRESULT - rollback 30% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 37.648 37.785 36.969\r\npatched 29.554 29.389 27.398\r\nCompare with HEAD -21.50% -22.22% -25.89%\r\n\r\n\r\nRESULT - rollback 50% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 32.312 32.201 32.533\r\npatched 30.238 30.244 27.903\r\nCompare with HEAD -6.42% -6.08% -14.23%\r\n\r\n(If \"Compare with HEAD\" is a positive number, it means worse than HEAD; if it is\r\na negative number, it means better than HEAD.)\r\n\r\nSummary:\r\nIn general, when using \"rollback to savepoint\", the more the amount of data we\r\nneed to rollback, the smaller the improvement compared to HEAD. 
But as such\r\ncases won't happen often, this should be okay.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Thu, 15 Sep 2022 06:03:13 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 15, 2022 at 10:45 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patch set.\n>\n\nReview of v29-0001*\n==================\n1.\n+parallel_apply_find_worker(TransactionId xid)\n{\n...\n+ entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_FIND, &found);\n+ if (found)\n+ {\n+ /* If any workers (or the postmaster) have died, we have failed. */\n+ if (entry->winfo->error_mq_handle == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"lost connection to parallel apply worker\")));\n...\n}\n\nI think the above comment is incorrect because if the postmaster would\nhave died then you wouldn't have found the entry in the hash table.\nHow about something like: \"We can't proceed if the parallel streaming\nworker has already exited.\"\n\n2.\n+/*\n+ * Find the previously assigned worker for the given transaction, if any.\n+ */\n+ParallelApplyWorkerInfo *\n+parallel_apply_find_worker(TransactionId xid)\n\nNo need to use word 'previously' in the above sentence.\n\n3.\n+ * We need one key to register the location of the header, and we need\n+ * another key to track the location of the message queue.\n+ */\n+ shm_toc_initialize_estimator(&e);\n+ shm_toc_estimate_chunk(&e, sizeof(ParallelApplyWorkerShared));\n+ shm_toc_estimate_chunk(&e, queue_size);\n+ shm_toc_estimate_chunk(&e, error_queue_size);\n+\n+ shm_toc_estimate_keys(&e, 3);\n\nOverall, three keys are used but the comment indicates two. 
You forgot\nto mention about error_queue.\n\n4.\n+ if (launched)\n+ ParallelApplyWorkersList = lappend(ParallelApplyWorkersList, winfo);\n+ else\n+ {\n+ shm_mq_detach(winfo->mq_handle);\n+ shm_mq_detach(winfo->error_mq_handle);\n+ dsm_detach(winfo->dsm_seg);\n+ pfree(winfo);\n+\n+ winfo = NULL;\n+ }\n\nA. The code used in the else part to free worker info is the same as\nwhat is used in parallel_apply_free_worker. Can we move this to a\nseparate function say parallel_apply_free_worker_info()?\nB. I think it will be better if you use {} for if branch to make it\nlook consistent with else branch.\n\n5.\n+ * case define a named savepoint, so that we are able to commit/rollback it\n+ * separately later.\n+ */\n+void\n+parallel_apply_subxact_info_add(TransactionId current_xid)\n\nI don't see the need of commit in the above message. So, we can\nslightly modify it to: \"... so that we are able to rollback to it\nseparately later.\"\n\n6.\n+ for (i = list_length(subxactlist) - 1; i >= 0; i--)\n+ {\n+ xid = list_nth_xid(subxactlist, i);\n...\n...\n\n+/*\n+ * Return the TransactionId value contained in the n'th element of the\n+ * specified list.\n+ */\n+static inline TransactionId\n+list_nth_xid(const List *list, int n)\n+{\n+ Assert(IsA(list, XidList));\n+ return lfirst_xid(list_nth_cell(list, n));\n+}\n\nI am not really sure that we need a new list function to use for this\nplace. Can't we directly use lfirst_xid(list_nth_cell) instead?\n\n7.\n+void\n+parallel_apply_replorigin_setup(void)\n+{\n+ RepOriginId originid;\n+ char originname[NAMEDATALEN];\n+ bool started_tx = false;\n+\n+ /* This function might be called inside or outside of transaction. */\n+ if (!IsTransactionState())\n+ {\n+ StartTransactionCommand();\n+ started_tx = true;\n+ }\n\nIs there a place in the patch where this function will be called\nwithout having an active transaction state? If so, then this coding is\nfine but if not, then I suggest keeping an assert for transaction\nstate here. 
The same thing applies to\nparallel_apply_replorigin_reset() as well.\n\n8.\n+ *\n+ * If write_abort_lsn is true, send the abort_lsn and abort_time fields,\n+ * otherwise don't.\n */\n void\n logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\n- TransactionId subxid)\n+ TransactionId subxid, XLogRecPtr abort_lsn,\n+ TimestampTz abort_time, bool abort_info)\n\nIn the comment, the name of the variable needs to be updated.\n\n9.\n+TransactionId stream_xid = InvalidTransactionId;\n\n-static TransactionId stream_xid = InvalidTransactionId;\n...\n...\n+void\n+parallel_apply_subxact_info_add(TransactionId current_xid)\n+{\n+ if (current_xid != stream_xid &&\n+ !list_member_xid(subxactlist, current_xid))\n\nIt seems you have changed the scope of stream_xid to use it in\nparallel_apply_subxact_info_add(). Won't it be better to pass it as a\nparameter (say top_xid)?\n\n10.\n--- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n+++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n@@ -20,6 +20,7 @@\n #include <sys/time.h>\n\n #include \"access/xlog.h\"\n+#include \"catalog/pg_subscription.h\"\n #include \"catalog/pg_type.h\"\n #include \"common/connect.h\"\n #include \"funcapi.h\"\n@@ -443,9 +444,14 @@ libpqrcv_startstreaming(WalReceiverConn *conn,\n appendStringInfo(&cmd, \"proto_version '%u'\",\n options->proto.logical.proto_version);\n\n- if (options->proto.logical.streaming &&\n- PQserverVersion(conn->streamConn) >= 140000)\n- appendStringInfoString(&cmd, \", streaming 'on'\");\n+ if (options->proto.logical.streaming != SUBSTREAM_OFF)\n+ {\n+ if (PQserverVersion(conn->streamConn) >= 160000 &&\n+ options->proto.logical.streaming == SUBSTREAM_PARALLEL)\n+ appendStringInfoString(&cmd, \", streaming 'parallel'\");\n+ else if (PQserverVersion(conn->streamConn) >= 140000)\n+ appendStringInfoString(&cmd, \", streaming 'on'\");\n+ }\n\nIt doesn't seem like a good idea to expose subscription options here.\nCan we think of having char 
*streaming_option instead of the current\nstreaming parameter which is filled by the caller and used here\ndirectly?\n\n11. The error message used in pgoutput_startup() seems to be better\nthan the current messages used in that function but it is better to be\nconsistent with other messages. There is a discussion in the email\nthread [1] on improving those messages, so kindly suggest there.\n\n12. In addition to the above, I have changed/added a few comments in\nthe attached patch.\n\n[1] - https://www.postgresql.org/message-id/20220914.111507.13049297635620898.horikyota.ntt%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 15 Sep 2022 17:09:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 15, 2022 at 19:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Sep 15, 2022 at 10:45 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new patch set.\r\n> >\r\n> \r\n> Review of v29-0001*\r\n\r\nThanks for your comments and patch!\r\n\r\n> ==================\r\n> 1.\r\n> +parallel_apply_find_worker(TransactionId xid)\r\n> {\r\n> ...\r\n> + entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_FIND, &found);\r\n> + if (found)\r\n> + {\r\n> + /* If any workers (or the postmaster) have died, we have failed. 
*/\r\n> + if (entry->winfo->error_mq_handle == NULL)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"lost connection to parallel apply worker\")));\r\n> ...\r\n> }\r\n> \r\n> I think the above comment is incorrect because if the postmaster would\r\n> have died then you wouldn't have found the entry in the hash table.\r\n> How about something like: \"We can't proceed if the parallel streaming\r\n> worker has already exited.\"\r\n\r\nFixed.\r\n\r\n> 2.\r\n> +/*\r\n> + * Find the previously assigned worker for the given transaction, if any.\r\n> + */\r\n> +ParallelApplyWorkerInfo *\r\n> +parallel_apply_find_worker(TransactionId xid)\r\n> \r\n> No need to use word 'previously' in the above sentence.\r\n\r\nImproved.\r\n\r\n> 3.\r\n> + * We need one key to register the location of the header, and we need\r\n> + * another key to track the location of the message queue.\r\n> + */\r\n> + shm_toc_initialize_estimator(&e);\r\n> + shm_toc_estimate_chunk(&e, sizeof(ParallelApplyWorkerShared));\r\n> + shm_toc_estimate_chunk(&e, queue_size);\r\n> + shm_toc_estimate_chunk(&e, error_queue_size);\r\n> +\r\n> + shm_toc_estimate_keys(&e, 3);\r\n> \r\n> Overall, three keys are used but the comment indicates two. You forgot\r\n> to mention about error_queue.\r\n\r\nFixed.\r\n\r\n> 4.\r\n> + if (launched)\r\n> + ParallelApplyWorkersList = lappend(ParallelApplyWorkersList, winfo);\r\n> + else\r\n> + {\r\n> + shm_mq_detach(winfo->mq_handle);\r\n> + shm_mq_detach(winfo->error_mq_handle);\r\n> + dsm_detach(winfo->dsm_seg);\r\n> + pfree(winfo);\r\n> +\r\n> + winfo = NULL;\r\n> + }\r\n> \r\n> A. The code used in the else part to free worker info is the same as\r\n> what is used in parallel_apply_free_worker. Can we move this to a\r\n> separate function say parallel_apply_free_worker_info()?\r\n> B. 
I think it will be better if you use {} for if branch to make it\r\n> look consistent with else branch.\r\n\r\nImproved.\r\n\r\n> 5.\r\n> + * case define a named savepoint, so that we are able to commit/rollback it\r\n> + * separately later.\r\n> + */\r\n> +void\r\n> +parallel_apply_subxact_info_add(TransactionId current_xid)\r\n> \r\n> I don't see the need of commit in the above message. So, we can\r\n> slightly modify it to: \"... so that we are able to rollback to it\r\n> separately later.\"\r\n\r\nImproved.\r\n\r\n> 6.\r\n> + for (i = list_length(subxactlist) - 1; i >= 0; i--)\r\n> + {\r\n> + xid = list_nth_xid(subxactlist, i);\r\n> ...\r\n> ...\r\n> \r\n> +/*\r\n> + * Return the TransactionId value contained in the n'th element of the\r\n> + * specified list.\r\n> + */\r\n> +static inline TransactionId\r\n> +list_nth_xid(const List *list, int n)\r\n> +{\r\n> + Assert(IsA(list, XidList));\r\n> + return lfirst_xid(list_nth_cell(list, n));\r\n> +}\r\n> \r\n> I am not really sure that we need a new list function to use for this\r\n> place. Can't we directly use lfirst_xid(list_nth_cell) instead?\r\n\r\nImproved.\r\n \r\n> 7.\r\n> +void\r\n> +parallel_apply_replorigin_setup(void)\r\n> +{\r\n> + RepOriginId originid;\r\n> + char originname[NAMEDATALEN];\r\n> + bool started_tx = false;\r\n> +\r\n> + /* This function might be called inside or outside of transaction. */\r\n> + if (!IsTransactionState())\r\n> + {\r\n> + StartTransactionCommand();\r\n> + started_tx = true;\r\n> + }\r\n> \r\n> Is there a place in the patch where this function will be called\r\n> without having an active transaction state? If so, then this coding is\r\n> fine but if not, then I suggest keeping an assert for transaction\r\n> state here. The same thing applies to\r\n> parallel_apply_replorigin_reset() as well.\r\n\r\nWhen using parallel apply, only the parallel apply worker is in a transaction\r\nwhile the leader apply worker is not. 
So when invoking function\r\nparallel_apply_replorigin_setup() in the leader apply worker, we need to start\r\na transaction block.\r\n\r\n> 8.\r\n> + *\r\n> + * If write_abort_lsn is true, send the abort_lsn and abort_time fields,\r\n> + * otherwise don't.\r\n> */\r\n> void\r\n> logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\r\n> - TransactionId subxid)\r\n> + TransactionId subxid, XLogRecPtr abort_lsn,\r\n> + TimestampTz abort_time, bool abort_info)\r\n> \r\n> In the comment, the name of the variable needs to be updated.\r\n\r\nFixed.\r\n\r\n> 9.\r\n> +TransactionId stream_xid = InvalidTransactionId;\r\n> \r\n> -static TransactionId stream_xid = InvalidTransactionId;\r\n> ...\r\n> ...\r\n> +void\r\n> +parallel_apply_subxact_info_add(TransactionId current_xid)\r\n> +{\r\n> + if (current_xid != stream_xid &&\r\n> + !list_member_xid(subxactlist, current_xid))\r\n> \r\n> It seems you have changed the scope of stream_xid to use it in\r\n> parallel_apply_subxact_info_add(). 
Won't it be better to pass it as a\r\n> parameter (say top_xid)?\r\n\r\nImproved.\r\n\r\n> 10.\r\n> --- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\r\n> +++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\r\n> @@ -20,6 +20,7 @@\r\n> #include <sys/time.h>\r\n> \r\n> #include \"access/xlog.h\"\r\n> +#include \"catalog/pg_subscription.h\"\r\n> #include \"catalog/pg_type.h\"\r\n> #include \"common/connect.h\"\r\n> #include \"funcapi.h\"\r\n> @@ -443,9 +444,14 @@ libpqrcv_startstreaming(WalReceiverConn *conn,\r\n> appendStringInfo(&cmd, \"proto_version '%u'\",\r\n> options->proto.logical.proto_version);\r\n> \r\n> - if (options->proto.logical.streaming &&\r\n> - PQserverVersion(conn->streamConn) >= 140000)\r\n> - appendStringInfoString(&cmd, \", streaming 'on'\");\r\n> + if (options->proto.logical.streaming != SUBSTREAM_OFF)\r\n> + {\r\n> + if (PQserverVersion(conn->streamConn) >= 160000 &&\r\n> + options->proto.logical.streaming == SUBSTREAM_PARALLEL)\r\n> + appendStringInfoString(&cmd, \", streaming 'parallel'\");\r\n> + else if (PQserverVersion(conn->streamConn) >= 140000)\r\n> + appendStringInfoString(&cmd, \", streaming 'on'\");\r\n> + }\r\n> \r\n> It doesn't seem like a good idea to expose subscription options here.\r\n> Can we think of having char *streaming_option instead of the current\r\n> streaming parameter which is filled by the caller and used here\r\n> directly?\r\n\r\nImproved.\r\n\r\n> 11. The error message used in pgoutput_startup() seems to be better\r\n> than the current messages used in that function but it is better to be\r\n> consistent with other messages. There is a discussion in the email\r\n> thread [1] on improving those messages, so kindly suggest there.\r\n\r\nOkay, I will try to modify the two messages and share them in the thread you\r\nmentioned.\r\n\r\n> 12. 
In addition to the above, I have changed/added a few comments in\r\n> the attached patch.\r\n\r\nImproved as suggested.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 19 Sep 2022 03:25:31 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Sept 19, 2022 11:26 AM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> \r\n> Improved as suggested.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments on 0001 patch.\r\n\r\n1.\r\n+\t\tcase TRANS_LEADER_SERIALIZE:\r\n \r\n-\t\toldctx = MemoryContextSwitchTo(ApplyContext);\r\n+\t\t\t/*\r\n+\t\t\t * Notify handle methods we're processing a remote in-progress\r\n+\t\t\t * transaction.\r\n+\t\t\t */\r\n+\t\t\tin_streamed_transaction = true;\r\n \r\n-\t\tMyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n-\t\tFileSetInit(MyLogicalRepWorker->stream_fileset);\r\n+\t\t\t/*\r\n+\t\t\t * Since no parallel apply worker is used for the first stream\r\n+\t\t\t * start, serialize all the changes of the transaction.\r\n+\t\t\t *\r\n+\t\t\t * Start a transaction on stream start, this transaction will be\r\n\r\n\r\nIt seems that the following comment can be removed after using switch case.\r\n+\t\t\t * Since no parallel apply worker is used for the first stream\r\n+\t\t\t * start, serialize all the changes of the transaction.\r\n\r\n2.\r\n+\tswitch (apply_action)\r\n+\t{\r\n+\t\tcase TRANS_LEADER_SERIALIZE:\r\n+\t\t\tif (!in_streamed_transaction)\r\n+\t\t\t\tereport(ERROR,\r\n+\t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n+\t\t\t\t\t\t errmsg_internal(\"STREAM STOP message without STREAM START\")));\r\n\r\nIn apply_handle_stream_stop(), I think we can move this check to the beginning of\r\nthis function, to be consistent to other functions.\r\n\r\n3. 
I think some of the changes in 0005 patch can be merged to 0001 patch,\r\n0005 patch can only contain the changes about new column 'apply_leader_pid'.\r\n\r\n4.\r\n+ * ParallelApplyWorkersList. After successfully, launching a new worker it's\r\n+ * information is added to the ParallelApplyWorkersList. Once the worker\r\n\r\nShould `it's` be `its` ?\r\n\r\nRegards\r\nShi yu\r\n", "msg_date": "Tue, 20 Sep 2022 03:40:43 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "FYI -\n\nThe latest patch 30-0001 fails to apply, it seems due to a recent commit [1].\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v30-0001-Perform-streaming-logical-transactions-by-parall.patch\nerror: patch failed: src/include/replication/logicalproto.h:246\nerror: src/include/replication/logicalproto.h: patch does not apply\n\n------\n[1] https://github.com/postgres/postgres/commit/bfcf1b34805f70df48eedeec237230d0cc1154a6\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Sep 2022 11:57:46 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Sep 20, 2022 at 11:41 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\r\n> On Mon, Sept 19, 2022 11:26 AM Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> >\r\n> > Improved as suggested.\r\n> >\r\n> \r\n> Thanks for updating the patch. 
Here are some comments on 0001 patch.\r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> +\t\tcase TRANS_LEADER_SERIALIZE:\r\n> \r\n> -\t\toldctx = MemoryContextSwitchTo(ApplyContext);\r\n> +\t\t\t/*\r\n> +\t\t\t * Notify handle methods we're processing a remote in-\r\n> progress\r\n> +\t\t\t * transaction.\r\n> +\t\t\t */\r\n> +\t\t\tin_streamed_transaction = true;\r\n> \r\n> -\t\tMyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n> -\t\tFileSetInit(MyLogicalRepWorker->stream_fileset);\r\n> +\t\t\t/*\r\n> +\t\t\t * Since no parallel apply worker is used for the first\r\n> stream\r\n> +\t\t\t * start, serialize all the changes of the transaction.\r\n> +\t\t\t *\r\n> +\t\t\t * Start a transaction on stream start, this transaction will\r\n> be\r\n> \r\n> \r\n> It seems that the following comment can be removed after using switch case.\r\n> +\t\t\t * Since no parallel apply worker is used for the first\r\n> stream\r\n> +\t\t\t * start, serialize all the changes of the transaction.\r\n\r\nRemoved.\r\n\r\n> 2.\r\n> +\tswitch (apply_action)\r\n> +\t{\r\n> +\t\tcase TRANS_LEADER_SERIALIZE:\r\n> +\t\t\tif (!in_streamed_transaction)\r\n> +\t\t\t\tereport(ERROR,\r\n> +\r\n> \t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> +\t\t\t\t\t\t errmsg_internal(\"STREAM STOP\r\n> message without STREAM START\")));\r\n> \r\n> In apply_handle_stream_stop(), I think we can move this check to the beginning\r\n> of\r\n> this function, to be consistent to other functions.\r\n\r\nImproved as suggested.\r\n\r\n> 3. I think the some of the changes in 0005 patch can be merged to 0001 patch,\r\n> 0005 patch can only contain the changes about new column 'apply_leader_pid'.\r\n\r\nMerged changes not related to 'apply_leader_pid' into patch 0001.\r\n\r\n> 4.\r\n> + * ParallelApplyWorkersList. After successfully, launching a new worker it's\r\n> + * information is added to the ParallelApplyWorkersList. 
Once the worker\r\n> \r\n> Should `it's` be `its` ?\r\n\r\nFixed.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 21 Sep 2022 02:09:27 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "> FYI -\r\n> \r\n> The latest patch 30-0001 fails to apply, it seems due to a recent commit [1].\r\n> \r\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\r\n> ../patches_misc/v30-0001-Perform-streaming-logical-transactions-by-\r\n> parall.patch\r\n> error: patch failed: src/include/replication/logicalproto.h:246\r\n> error: src/include/replication/logicalproto.h: patch does not apply\r\n\r\nThanks for your kindly reminder.\r\n\r\nI rebased the patch set and attached them in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275298521AE1BBEF5A055EE9E4F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Wed, 21 Sep 2022 02:14:35 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Sep 21, 2022 at 10:09 AM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the new patch set.\r\n\r\nBecause of the changes in HEAD (a932824), the patch set could not be applied\r\ncleanly, so I rebase them.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 21 Sep 2022 08:27:58 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for patch v30-0001.\n\n======\n\n1. 
Commit message\n\nIn addition, the patch extends the logical replication STREAM_ABORT message so\nthat abort_time and abort_lsn can also be sent which can be used to update the\nreplication origin in parallel apply worker when the streaming transaction is\naborted. Because this message extension is needed to support parallel\nstreaming, meaning that parallel streaming is not supported for publications on\nservers < PG16.\n\n\"meaning that parallel streaming is not supported\" -> \"parallel\nstreaming is not supported\"\n\n======\n\n2. doc/src/sgml/logical-replication.sgml\n\n@@ -1611,8 +1622,12 @@ CONTEXT: processing remote data for\nreplication origin \"pg_16395\" during \"INSER\n to the subscriber, plus some reserve for table synchronization.\n <varname>max_logical_replication_workers</varname> must be set to at least\n the number of subscriptions, again plus some reserve for the table\n- synchronization. Additionally the <varname>max_worker_processes</varname>\n- may need to be adjusted to accommodate for replication workers, at least\n+ synchronization. In addition, if the subscription parameter\n+ <literal>streaming</literal> is set to <literal>parallel</literal>, please\n+ increase <literal>max_logical_replication_workers</literal> according to\n+ the desired number of parallel apply workers. Additionally the\n+ <varname>max_worker_processes</varname> may need to be adjusted to\n+ accommodate for replication workers, at least\n (<varname>max_logical_replication_workers</varname>\n + <literal>1</literal>). Note that some extensions and parallel queries\n also take worker slots from <varname>max_worker_processes</varname>.\n\nIMO it looks a bit strange to have \"In addition\" followed by \"Additionally\".\n\nAlso, \"to accommodate for replication workers\"? 
seems like a typo (but\nit is not caused by your patch)\n\nBEFORE\nIn addition, if the subscription parameter streaming is set to\nparallel, please increase max_logical_replication_workers according to\nthe desired number of parallel apply workers.\n\nAFTER (???)\nIf the subscription parameter streaming is set to parallel,\nmax_logical_replication_workers should be increased according to the\ndesired number of parallel apply workers.\n\n======\n\n3. .../replication/logical/applyparallelworker.c - parallel_apply_can_start\n\n+/*\n+ * Returns true, if it is allowed to start a parallel apply worker, false,\n+ * otherwise.\n+ */\n+static bool\n+parallel_apply_can_start(TransactionId xid)\n\nSeems a slightly complicated comment for a simple boolean function.\n\nSUGGESTION\nReturns true/false if it is OK to start a parallel apply worker.\n\n======\n\n4. .../replication/logical/applyparallelworker.c - parallel_apply_free_worker\n\n+ winfo->in_use = false;\n+\n+ /* Are there enough workers in the pool? */\n+ if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n+ {\n\nI felt the comment/logic about \"enough\" needs a bit more description.\nAt least it should say to refer to the more detailed explanation atop\nworker.c\n\n======\n\n5. .../replication/logical/applyparallelworker.c - parallel_apply_setup_dsm\n\n+ /*\n+ * Estimate how much shared memory we need.\n+ *\n+ * Because the TOC machinery may choose to insert padding of oddly-sized\n+ * requests, we must estimate each chunk separately.\n+ *\n+ * We need one key to register the location of the header, and we need two\n+ * other keys to track of the locations of the message queue and the error\n+ * message queue.\n+ */\n\n\"track of\" -> \"keep track of\" ?\n\n======\n\n6. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n\n logicalrep_worker_detach(void)\n {\n+ /* Stop the parallel apply workers. 
*/\n+ if (!am_parallel_apply_worker() && !am_tablesync_worker())\n+ {\n+ List *workers;\n+ ListCell *lc;\n\nThe condition is not very obvious. This is why I previously suggested\nadding another macro/function like 'isLeaderApplyWorker'. In the\nabsence of that, then I think the comment needs to be more\ndescriptive.\n\nSUGGESTION\nIf this is the leader apply worker then stop the parallel apply workers.\n\n======\n\n7. src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\n\n void\n logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\n- TransactionId subxid)\n+ TransactionId subxid, XLogRecPtr abort_lsn,\n+ TimestampTz abort_time, bool abort_info)\n {\n pq_sendbyte(out, LOGICAL_REP_MSG_STREAM_ABORT);\n\n@@ -1175,19 +1179,40 @@ logicalrep_write_stream_abort(StringInfo out,\nTransactionId xid,\n /* transaction ID */\n pq_sendint32(out, xid);\n pq_sendint32(out, subxid);\n+\n+ if (abort_info)\n+ {\n+ pq_sendint64(out, abort_lsn);\n+ pq_sendint64(out, abort_time);\n+ }\n\n\nThe new param name 'abort_info' seems misleading.\n\nMaybe a name like 'write_abort_info' is better?\n\n~~~\n\n8. src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\n\n+logicalrep_read_stream_abort(StringInfo in,\n+ LogicalRepStreamAbortData *abort_data,\n+ bool read_abort_lsn)\n {\n- Assert(xid && subxid);\n+ Assert(abort_data);\n+\n+ abort_data->xid = pq_getmsgint(in, 4);\n+ abort_data->subxid = pq_getmsgint(in, 4);\n\n- *xid = pq_getmsgint(in, 4);\n- *subxid = pq_getmsgint(in, 4);\n+ if (read_abort_lsn)\n+ {\n+ abort_data->abort_lsn = pq_getmsgint64(in);\n+ abort_data->abort_time = pq_getmsgint64(in);\n+ }\n\nThis name 'read_abort_lsn' is inconsistent with the 'abort_info' of\nthe logicalrep_write_stream_abort.\n\nI suggest change these to 'read_abort_info/write_abort_info'\n\n======\n\n9. src/backend/replication/logical/worker.c - file header comment\n\n+ * information is added to the ParallelApplyWorkersList. 
Once the worker\n+ * finishes applying the transaction, we mark it available for use. Now,\n+ * before starting a new worker to apply the streaming transaction, we check\n+ * the list and use any worker, if available. Note that we maintain a maximum\n\n9a.\n\"available for use.\" -> \"available for re-use.\"\n\n~\n\n9b.\n\"we check the list and use any worker, if available\" -> \"we check the\nlist for any available worker\"\n\n~~~\n\n10. src/backend/replication/logical/worker.c - handle_streamed_transaction\n\n+ /* write the change to the current file */\n+ stream_write_change(action, s);\n+ return true;\n\nUppercase the comment.\n\n~~~\n\n11. src/backend/replication/logical/worker.c - apply_handle_stream_abort\n\n+static void\n+apply_handle_stream_abort(StringInfo s)\n+{\n+ TransactionId xid;\n+ TransactionId subxid;\n+ LogicalRepStreamAbortData abort_data;\n+ bool read_abort_lsn = false;\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+ TransApplyAction apply_action;\n\nThe variable 'read_abort_lsn' name ought to be changed to match\nconsistently the parameter name.\n\n======\n\n12. src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_abort\n\n@@ -1843,6 +1850,8 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n XLogRecPtr abort_lsn)\n {\n ReorderBufferTXN *toptxn;\n+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ bool abort_info = (data->streaming == SUBSTREAM_PARALLEL);\n\nThe variable 'abort_info' name ought to be changed to be\n'write_abort_info' (as suggested above) to match consistently the\nparameter name.\n\n======\n\n13. src/include/replication/worker_internal.h\n\n+ /*\n+ * Indicates whether the worker is available to be used for parallel apply\n+ * transaction?\n+ */\n+ bool in_use;\n\nThis comment seems backward for this member's name.\n\nSUGGESTION (something like...)\nIndicates whether this ParallelApplyWorkerInfo is currently being used\nby a parallel apply worker processing a transaction. 
(If this flag is\nfalse then it means the ParallelApplyWorkerInfo is available for\nre-use by another parallel apply worker.)\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 21 Sep 2022 19:25:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Sep 21, 2022 at 2:55 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ======\n>\n> 3. .../replication/logical/applyparallelworker.c - parallel_apply_can_start\n>\n> +/*\n> + * Returns true, if it is allowed to start a parallel apply worker, false,\n> + * otherwise.\n> + */\n> +static bool\n> +parallel_apply_can_start(TransactionId xid)\n>\n> Seems a slightly complicated comment for a simple boolean function.\n>\n> SUGGESTION\n> Returns true/false if it is OK to start a parallel apply worker.\n>\n\nI think this is the style followed at some other places as well. So,\nwe can leave it.\n\n>\n> 6. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\n>\n> logicalrep_worker_detach(void)\n> {\n> + /* Stop the parallel apply workers. */\n> + if (!am_parallel_apply_worker() && !am_tablesync_worker())\n> + {\n> + List *workers;\n> + ListCell *lc;\n>\n> The condition is not very obvious. This is why I previously suggested\n> adding another macro/function like 'isLeaderApplyWorker'.\n>\n\nHow about having a function am_leader_apply_worker() { ...\nreturn OidIsValid(MyLogicalRepWorker->relid) &&\n(MyLogicalRepWorker->apply_leader_pid == InvalidPid) ...}?\n\n>\n> 13. 
src/include/replication/worker_internal.h\n>\n> + /*\n> + * Indicates whether the worker is available to be used for parallel apply\n> + * transaction?\n> + */\n> + bool in_use;\n>\n> This comment seems backward for this member's name.\n>\n> SUGGESTION (something like...)\n> Indicates whether this ParallelApplyWorkerInfo is currently being used\n> by a parallel apply worker processing a transaction. (If this flag is\n> false then it means the ParallelApplyWorkerInfo is available for\n> re-use by another parallel apply worker.)\n>\n\nI am not sure if this is an improvement over the current. The current\ncomment appears reasonable to me as it is easy to follow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Sep 2022 17:10:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Sep 21, 2022 at 17:25 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch v30-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 1. Commit message\r\n> \r\n> In addition, the patch extends the logical replication STREAM_ABORT message\r\n> so\r\n> that abort_time and abort_lsn can also be sent which can be used to update the\r\n> replication origin in parallel apply worker when the streaming transaction is\r\n> aborted. Because this message extension is needed to support parallel\r\n> streaming, meaning that parallel streaming is not supported for publications on\r\n> servers < PG16.\r\n> \r\n> \"meaning that parallel streaming is not supported\" -> \"parallel\r\n> streaming is not supported\"\r\n\r\nImproved as suggested.\r\n\r\n> ======\r\n> \r\n> 2. 
doc/src/sgml/logical-replication.sgml\r\n> \r\n> @@ -1611,8 +1622,12 @@ CONTEXT: processing remote data for\r\n> replication origin \"pg_16395\" during \"INSER\r\n> to the subscriber, plus some reserve for table synchronization.\r\n> <varname>max_logical_replication_workers</varname> must be set to at\r\n> least\r\n> the number of subscriptions, again plus some reserve for the table\r\n> - synchronization. Additionally the\r\n> <varname>max_worker_processes</varname>\r\n> - may need to be adjusted to accommodate for replication workers, at least\r\n> + synchronization. In addition, if the subscription parameter\r\n> + <literal>streaming</literal> is set to <literal>parallel</literal>, please\r\n> + increase <literal>max_logical_replication_workers</literal> according to\r\n> + the desired number of parallel apply workers. Additionally the\r\n> + <varname>max_worker_processes</varname> may need to be adjusted to\r\n> + accommodate for replication workers, at least\r\n> (<varname>max_logical_replication_workers</varname>\r\n> + <literal>1</literal>). Note that some extensions and parallel queries\r\n> also take worker slots from <varname>max_worker_processes</varname>.\r\n> \r\n> IMO it looks a bit strange to have \"In addition\" followed by \"Additionally\".\r\n> \r\n> Also, \"to accommodate for replication workers\"? seems like a typo (but\r\n> it is not caused by your patch)\r\n> \r\n> BEFORE\r\n> In addition, if the subscription parameter streaming is set to\r\n> parallel, please increase max_logical_replication_workers according to\r\n> the desired number of parallel apply workers.\r\n> \r\n> AFTER (???)\r\n> If the subscription parameter streaming is set to parallel,\r\n> max_logical_replication_workers should be increased according to the\r\n> desired number of parallel apply workers.\r\n\r\n=> Reword\r\nImproved as suggested.\r\n\r\n=> typo?\r\nSorry, I am not sure. 
Do you mean\r\ns/replication workers/workers for subscriptions/ or something else?\r\nI think we should improve it in a new thread.\r\n\r\n> ======\r\n> \r\n> 4. .../replication/logical/applyparallelworker.c - parallel_apply_free_worker\r\n> \r\n> + winfo->in_use = false;\r\n> +\r\n> + /* Are there enough workers in the pool? */\r\n> + if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n> + {\r\n> \r\n> I felt the comment/logic about \"enough\" needs a bit more description.\r\n> At least it should say to refer to the more detailed explanation atop\r\n> worker.c\r\n\r\nAdded related comment atop this function.\r\n\r\n> ======\r\n> \r\n> 5. .../replication/logical/applyparallelworker.c - parallel_apply_setup_dsm\r\n> \r\n> + /*\r\n> + * Estimate how much shared memory we need.\r\n> + *\r\n> + * Because the TOC machinery may choose to insert padding of oddly-sized\r\n> + * requests, we must estimate each chunk separately.\r\n> + *\r\n> + * We need one key to register the location of the header, and we need two\r\n> + * other keys to track of the locations of the message queue and the error\r\n> + * message queue.\r\n> + */\r\n> \r\n> \"track of\" -> \"keep track of\" ?\r\n\r\nImproved.\r\n\r\n> ======\r\n> \r\n> 6. src/backend/replication/logical/launcher.c - logicalrep_worker_detach\r\n> \r\n> logicalrep_worker_detach(void)\r\n> {\r\n> + /* Stop the parallel apply workers. */\r\n> + if (!am_parallel_apply_worker() && !am_tablesync_worker())\r\n> + {\r\n> + List *workers;\r\n> + ListCell *lc;\r\n> \r\n> The condition is not very obvious. This is why I previously suggested\r\n> adding another macro/function like 'isLeaderApplyWorker'. In the\r\n> absence of that, then I think the comment needs to be more\r\n> descriptive.\r\n> \r\n> SUGGESTION\r\n> If this is the leader apply worker then stop the parallel apply workers.\r\n\r\nAdded the new function am_leader_apply_worker.\r\n\r\n> ======\r\n> \r\n> 7. 
src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\r\n> \r\n> void\r\n> logicalrep_write_stream_abort(StringInfo out, TransactionId xid,\r\n> - TransactionId subxid)\r\n> + TransactionId subxid, XLogRecPtr abort_lsn,\r\n> + TimestampTz abort_time, bool abort_info)\r\n> {\r\n> pq_sendbyte(out, LOGICAL_REP_MSG_STREAM_ABORT);\r\n> \r\n> @@ -1175,19 +1179,40 @@ logicalrep_write_stream_abort(StringInfo out,\r\n> TransactionId xid,\r\n> /* transaction ID */\r\n> pq_sendint32(out, xid);\r\n> pq_sendint32(out, subxid);\r\n> +\r\n> + if (abort_info)\r\n> + {\r\n> + pq_sendint64(out, abort_lsn);\r\n> + pq_sendint64(out, abort_time);\r\n> + }\r\n> \r\n> \r\n> The new param name 'abort_info' seems misleading.\r\n> \r\n> Maybe a name like 'write_abort_info' is better?\r\n\r\nImproved as suggested.\r\n\r\n> ~~~\r\n> \r\n> 8. src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\r\n> \r\n> +logicalrep_read_stream_abort(StringInfo in,\r\n> + LogicalRepStreamAbortData *abort_data,\r\n> + bool read_abort_lsn)\r\n> {\r\n> - Assert(xid && subxid);\r\n> + Assert(abort_data);\r\n> +\r\n> + abort_data->xid = pq_getmsgint(in, 4);\r\n> + abort_data->subxid = pq_getmsgint(in, 4);\r\n> \r\n> - *xid = pq_getmsgint(in, 4);\r\n> - *subxid = pq_getmsgint(in, 4);\r\n> + if (read_abort_lsn)\r\n> + {\r\n> + abort_data->abort_lsn = pq_getmsgint64(in);\r\n> + abort_data->abort_time = pq_getmsgint64(in);\r\n> + }\r\n> \r\n> This name 'read_abort_lsn' is inconsistent with the 'abort_info' of\r\n> the logicalrep_write_stream_abort.\r\n> \r\n> I suggest change these to 'read_abort_info/write_abort_info'\r\n\r\nImproved as suggested.\r\n\r\n> ======\r\n> \r\n> 9. src/backend/replication/logical/worker.c - file header comment\r\n> \r\n> + * information is added to the ParallelApplyWorkersList. Once the worker\r\n> + * finishes applying the transaction, we mark it available for use. 
Now,\r\n> + * before starting a new worker to apply the streaming transaction, we check\r\n> + * the list and use any worker, if available. Note that we maintain a maximum\r\n> \r\n> 9a.\r\n> \"available for use.\" -> \"available for re-use.\"\r\n> \r\n> ~\r\n> \r\n> 9b.\r\n> \"we check the list and use any worker, if available\" -> \"we check the\r\n> list for any available worker\"\r\n\r\nImproved as suggested.\r\n\r\n> ~~~\r\n> \r\n> 10. src/backend/replication/logical/worker.c - handle_streamed_transaction\r\n> \r\n> + /* write the change to the current file */\r\n> + stream_write_change(action, s);\r\n> + return true;\r\n> \r\n> Uppercase the comment.\r\n\r\nImproved as suggested.\r\n\r\n> ~~~\r\n> \r\n> 11. src/backend/replication/logical/worker.c - apply_handle_stream_abort\r\n> \r\n> +static void\r\n> +apply_handle_stream_abort(StringInfo s)\r\n> +{\r\n> + TransactionId xid;\r\n> + TransactionId subxid;\r\n> + LogicalRepStreamAbortData abort_data;\r\n> + bool read_abort_lsn = false;\r\n> + ParallelApplyWorkerInfo *winfo = NULL;\r\n> + TransApplyAction apply_action;\r\n> \r\n> The variable 'read_abort_lsn' name ought to be changed to match\r\n> consistently the parameter name.\r\n\r\nImproved as suggested.\r\n\r\n> ======\r\n> \r\n> 12. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_abort\r\n> \r\n> @@ -1843,6 +1850,8 @@ pgoutput_stream_abort(struct\r\n> LogicalDecodingContext *ctx,\r\n> XLogRecPtr abort_lsn)\r\n> {\r\n> ReorderBufferTXN *toptxn;\r\n> + PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\r\n> + bool abort_info = (data->streaming == SUBSTREAM_PARALLEL);\r\n> \r\n> The variable 'abort_info' name ought to be changed to be\r\n> 'write_abort_info' (as suggested above) to match consistently the\r\n> parameter name.\r\n\r\nImproved as suggested.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Thu, 22 Sep 2022 03:29:13 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang,\r\n\r\nThanks for updating the patch! Followings are comments for v33-0001.\r\n\r\n===\r\nlibpqwalreceiver.c\r\n\r\n01. inclusion\r\n\r\n```\r\n+#include \"catalog/pg_subscription.h\"\r\n```\r\n\r\nWe don't have to include it because the analysis of parameters is done at caller.\r\n\r\n===\r\nlauncher.c\r\n\r\n02. logicalrep_worker_launch()\r\n\r\n```\r\n+ /*\r\n+ * Return silently if the number of parallel apply workers reached the\r\n+ * limit per subscription.\r\n+ */\r\n+ if (is_subworker && nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)\r\n```\r\n\r\na. \r\nI felt that it might be kind if we output some debug messages.\r\n\r\nb.\r\nThe if statement seems to be more than 80 characters. You can move to new line around \"nparallelapplyworkers >= ...\".\r\n\r\n\r\n===\r\napplyparallelworker.c\r\n\r\n03. 
declaration\r\n\r\n```\r\n+/*\r\n+ * Is there a message pending in parallel apply worker which we need to\r\n+ * receive?\r\n+ */\r\n+volatile bool ParallelApplyMessagePending = false;\r\n```\r\n\r\nI checked other flags that are set by signal handlers, and their datatype seemed to be sig_atomic_t.\r\nIs there any reason to use a normal bool? It should be changed if not.\r\n\r\n04. HandleParallelApplyMessages()\r\n\r\n```\r\n+ if (winfo->error_mq_handle == NULL)\r\n+ continue;\r\n```\r\n\r\na.\r\nI was not sure when the cell should be cleaned. Currently we clean up ParallelApplyWorkersList() only in parallel_apply_start_worker(),\r\nbut we have chances to remove such a cell in places like HandleParallelApplyMessages() or HandleParallelApplyMessage(). What do you think?\r\n\r\nb.\r\nComments should be added even if we keep this, like \"exited worker, skipped\".\r\n\r\n```\r\n+ else\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"lost connection to the leader apply worker\")));\r\n```\r\n\r\nc.\r\nThis function is called on the leader apply worker, so the hint should be \"lost\r\nconnection to the parallel apply worker\".\r\n\r\n05. parallel_apply_setup_worker()\r\n\r\n```\r\n+ if (launched)\r\n+ {\r\n+ ParallelApplyWorkersList = lappend(ParallelApplyWorkersList, winfo);\r\n+ }\r\n```\r\n\r\n{} should be removed.\r\n\r\n\r\n06. parallel_apply_wait_for_xact_finish()\r\n\r\n```\r\n+ /* If any workers have died, we have failed. */\r\n```\r\n\r\nThis function checks only a single parallel apply worker, so should the comment be \"if worker has...\"?\r\n\r\n===\r\nworker.c\r\n\r\n07. handle_streamed_transaction()\r\n\r\n```\r\n+ * For non-streamed transactions, returns false;\r\n```\r\n\r\n\"returns false;\" -> \"returns false\"\r\n\r\napply_handle_commit_prepared(), apply_handle_abort_prepared()\r\n\r\nThese functions are not expected to be called by a parallel worker,\r\nso I think Assert() should be added.\r\n\r\n08. 
UpdateWorkerStats()\r\n\r\n```\r\n-static void\r\n+void\r\n UpdateWorkerStats(XLogRecPtr last_lsn, TimestampTz send_time, bool reply)\r\n```\r\n\r\nThis function is called only in worker.c, so it should be static.\r\n\r\n09. subscription_change_cb()\r\n\r\n```\r\n-static void\r\n+void\r\n subscription_change_cb(Datum arg, int cacheid, uint32 hashvalue)\r\n```\r\n\r\nThis function is called only in worker.c, so it should be static.\r\n\r\n10. InitializeApplyWorker()\r\n\r\n```\r\n+/*\r\n+ * Initialize the database connection, in-memory subscription and necessary\r\n+ * config options.\r\n+ */\r\n void\r\n-ApplyWorkerMain(Datum main_arg)\r\n+InitializeApplyWorker(void)\r\n```\r\n\r\nSome comments should be added noting that this is a common part shared by the\r\nleader and parallel apply workers.\r\n\r\n===\r\nlogicalrepworker.h\r\n\r\n11. declaration\r\n\r\n```\r\nextern PGDLLIMPORT volatile bool ParallelApplyMessagePending;\r\n```\r\n\r\nPlease refer to the above comment.\r\n\r\n===\r\nguc_tables.c\r\n\r\n12. ConfigureNamesInt\r\n\r\n```\r\n+ {\r\n+ {\"max_parallel_apply_workers_per_subscription\",\r\n+ PGC_SIGHUP,\r\n+ REPLICATION_SUBSCRIBERS,\r\n+ gettext_noop(\"Maximum number of parallel apply workers per subscription.\"),\r\n+ NULL,\r\n+ },\r\n+ &max_parallel_apply_workers_per_subscription,\r\n+ 2, 0, MAX_BACKENDS,\r\n+ NULL, NULL, NULL\r\n+ },\r\n```\r\n\r\nThis parameter can be changed by pg_ctl reload, so the following corner case may occur.\r\nShould we add an assign hook to handle this? Or, can we ignore it?\r\n\r\n1. Set max_parallel_apply_workers_per_subscription to 4.\r\n2. Start replicating two streaming transactions.\r\n3. Commit the transactions.\r\n=== Two parallel workers will remain ===\r\n4. Change max_parallel_apply_workers_per_subscription to 3.\r\n5. We expected only one worker to remain, but two parallel workers remained. 
\r\n It will not be stopped until another streamed transaction is started and committed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 22 Sep 2022 08:07:49 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, September 22, 2022 4:08 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Thanks for updating the patch! Followings are comments for v33-0001.\r\n\r\nThanks for the comments.\r\n\r\n> 04. HandleParallelApplyMessages()\r\n> \r\n> ```\r\n> + if (winfo->error_mq_handle == NULL)\r\n> + continue;\r\n> ```\r\n> \r\n> a.\r\n> I was not sure when the cell should be cleaned. Currently we clean up\r\n> ParallelApplyWorkersList() only in the parallel_apply_start_worker(), but we\r\n> have chances to remove such a cell like HandleParallelApplyMessages() or\r\n> HandleParallelApplyMessage(). How do you think?\r\n\r\nHandleParallelApplyxx functions are signal callback functions, so I think it\r\nis unsafe for them to clean up list cells that may have been in use before\r\nentering these signal callback functions.\r\n\r\n\r\n> \r\n> 05. parallel_apply_setup_worker()\r\n> \r\n> ``\r\n> + if (launched)\r\n> + {\r\n> + ParallelApplyWorkersList = lappend(ParallelApplyWorkersList,\r\n> winfo);\r\n> + }\r\n> ```\r\n> \r\n> {} should be removed.\r\n\r\nI think this style is fine, and it was also suggested to be consistent with the\r\nelse{} part.\r\n\r\n\r\n> \r\n> 06. parallel_apply_wait_for_xact_finish()\r\n> \r\n> ```\r\n> + /* If any workers have died, we have failed. 
*/\r\n> ```\r\n> \r\n> This function checked only about a parallel apply worker, so the comment\r\n> should be \"if worker has...\"?\r\n\r\nThe comments seem clear to me as it's a general comment.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Thu, 22 Sep 2022 08:57:24 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 22, 2022 at 1:37 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> ===\n> applyparallelworker.c\n>\n> 03. declaration\n>\n> ```\n> +/*\n> + * Is there a message pending in parallel apply worker which we need to\n> + * receive?\n> + */\n> +volatile bool ParallelApplyMessagePending = false;\n> ```\n>\n> I checked other flags that are set by signal handlers, their datatype seemed to be sig_atomic_t.\n> Is there any reasons that you use normal bool? It should be changed if not.\n>\n\nIt follows the logic similar to ParallelMessagePending. Do you see any\nproblem with it?\n\n> 04. HandleParallelApplyMessages()\n>\n> ```\n> + if (winfo->error_mq_handle == NULL)\n> + continue;\n> ```\n>\n> a.\n> I was not sure when the cell should be cleaned. Currently we clean up ParallelApplyWorkersList() only in the parallel_apply_start_worker(),\n> but we have chances to remove such a cell like HandleParallelApplyMessages() or HandleParallelApplyMessage(). How do you think?\n>\n\nNote that HandleParallelApply* are invoked during interrupt handling,\nso it may not be advisable to remove it there.\n\n>\n> 12. 
ConfigureNamesInt\n>\n> ```\n> + {\n> + {\"max_parallel_apply_workers_per_subscription\",\n> + PGC_SIGHUP,\n> + REPLICATION_SUBSCRIBERS,\n> + gettext_noop(\"Maximum number of parallel apply workers per subscription.\"),\n> + NULL,\n> + },\n> + &max_parallel_apply_workers_per_subscription,\n> + 2, 0, MAX_BACKENDS,\n> + NULL, NULL, NULL\n> + },\n> ```\n>\n> This parameter can be changed by pg_ctl reload, so the following corner case may be occurred.\n> Should we add a assign hook to handle this? Or, can we ignore it?\n>\n\nI think we can ignore this as it will eventually start respecting the threshold.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Sep 2022 14:30:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n\nFew comments on v33-0001\n=======================\n1.\n+ else if (data->streaming == SUBSTREAM_PARALLEL &&\n+ data->protocol_version < LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"requested proto_version=%d does not support\nstreaming=parallel mode, need %d or higher\",\n+ data->protocol_version, LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));\n\nI think we can improve this error message as: \"requested\nproto_version=%d does not support parallel streaming mode, need %d or\nhigher\".\n\n2.\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -3184,7 +3184,7 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n </para>\n <para>\n OID of the relation that the worker is synchronizing; null for the\n- main apply worker\n+ main apply worker and the apply parallel worker\n </para></entry>\n </row>\n\nThis and other changes in monitoring.sgml refers the workers as 
\"apply\nparallel worker\". Isn't it better to use parallel apply worker as we\nare using at other places in the patch? But, I have another question,\ndo we really need to display entries for parallel apply workers in\npg_stat_subscription if it doesn't have any meaningful information? I\nthink we can easily avoid it in pg_stat_get_subscription by checking\napply_leader_pid.\n\n3.\nApplyWorkerMain()\n{\n...\n...\n+\n+ if (server_version >= 160000 &&\n+ MySubscription->stream == SUBSTREAM_PARALLEL)\n+ options.proto.logical.streaming = pstrdup(\"parallel\");\n\nAfter deciding here whether the parallel streaming mode is enabled or\nnot, we recheck the same thing in apply_handle_stream_abort() and\nparallel_apply_can_start(). In parallel_apply_can_start(), we do it\nvia two different checks. How about storing this information say in\nstructure MyLogicalRepWorker in ApplyWorkerMain() and then use it at\nother places?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Sep 2022 15:41:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi Amit,\r\n\r\n> > I checked other flags that are set by signal handlers, their datatype seemed to\r\n> be sig_atomic_t.\r\n> > Is there any reasons that you use normal bool? It should be changed if not.\r\n> >\r\n> \r\n> It follows the logic similar to ParallelMessagePending. 
Do you see any\r\n> problem with it?\r\n\r\nHmm, one consideration is:\r\nwhat will happen if the signal handler HandleParallelApplyMessageInterrupt() is fired during \"ParallelApplyMessagePending = false;\"?\r\nIIUC, sig_atomic_t is needed to avoid writing to the same data at the same time.\r\n\r\nAccording to the C99 standard (I grepped draft version [1]), the behavior seems to be undefined if the signal handler\r\naccesses data that is not declared as \"volatile sig_atomic_t\".\r\n...But I'm not sure whether this is really problematic in the current system, sorry...\r\n\r\n```\r\nIf the signal occurs other than as the result of calling the abort or raise function,\r\nthe behavior is undefined if the signal handler refers to any object with static storage duration other than by assigning a value to an object declared as volatile sig_atomic_t,\r\nor the signal handler calls any function in the standard library other than the abort function,\r\nthe _Exit function, or the signal function with the first argument equal to the signal number corresponding to the signal that caused the invocation of the handler.\r\n```\r\n\r\n> > a.\r\n> > I was not sure when the cell should be cleaned. Currently we clean up\r\n> ParallelApplyWorkersList() only in the parallel_apply_start_worker(),\r\n> > but we have chances to remove such a cell like HandleParallelApplyMessages()\r\n> or HandleParallelApplyMessage(). How do you think?\r\n> >\r\n> \r\n> Note that HandleParallelApply* are invoked during interrupt handling,\r\n> so it may not be advisable to remove it there.\r\n> \r\n> >\r\n> > 12. 
ConfigureNamesInt\r\n> >\r\n> > ```\r\n> > + {\r\n> > + {\"max_parallel_apply_workers_per_subscription\",\r\n> > + PGC_SIGHUP,\r\n> > + REPLICATION_SUBSCRIBERS,\r\n> > + gettext_noop(\"Maximum number of parallel apply\r\n> workers per subscription.\"),\r\n> > + NULL,\r\n> > + },\r\n> > + &max_parallel_apply_workers_per_subscription,\r\n> > + 2, 0, MAX_BACKENDS,\r\n> > + NULL, NULL, NULL\r\n> > + },\r\n> > ```\r\n> >\r\n> > This parameter can be changed by pg_ctl reload, so the following corner case\r\n> may be occurred.\r\n> > Should we add a assign hook to handle this? Or, can we ignore it?\r\n> >\r\n> \r\n> I think we can ignore this as it will eventually start respecting the threshold.\r\n\r\nBoth of what you said sound reasonable to me.\r\n\r\n[1]: https://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 22 Sep 2022 11:20:08 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 22, 2022 at 4:50 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi Amit,\n>\n> > > I checked other flags that are set by signal handlers, their datatype seemed to\n> > be sig_atomic_t.\n> > > Is there any reasons that you use normal bool? It should be changed if not.\n> > >\n> >\n> > It follows the logic similar to ParallelMessagePending. 
Do you see any\n> > problem with it?\n>\n> Hmm, one consideration is:\n> what will happen if the signal handler HandleParallelApplyMessageInterrupt() is fired during \"ParallelApplyMessagePending = false;\"?\n> IIUC sig_atomic_t has been needed to avoid writing to same data at the same time.\n>\n\nBut we do call HOLD_INTERRUPTS() before we do\n\"ParallelApplyMessagePending = false;\", so that should not happen.\nHowever, I think it would be better to use sig_atomic_t here for the\nsake of consistency.\n\nI think you can start a separate thread to check if we can change\nParallelMessagePending to make it consistent with other such\nvariables.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Sep 2022 09:17:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 22, 2022 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n>\n> Few comments on v33-0001\n> =======================\n>\n\nSome more comments on v33-0001\n=============================\n1.\n+ /* Information from the corresponding LogicalRepWorker slot. */\n+ uint16 logicalrep_worker_generation;\n+\n+ int logicalrep_worker_slot_no;\n+} ParallelApplyWorkerShared;\n\nBoth these variables are read/changed by leader/parallel workers\nwithout using any lock (mutex). It seems currently there is no problem\nbecause of the way the patch is using in_parallel_apply_xact but I\nthink it won't be a good idea to rely on it. 
I suggest using mutex to\noperate on these variables and also check if the slot_no is in a valid\nrange after reading it in parallel_apply_free_worker, otherwise error\nout using elog.\n\n2.\n static void\n apply_handle_stream_stop(StringInfo s)\n {\n- if (!in_streamed_transaction)\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+ TransApplyAction apply_action;\n+\n+ if (!am_parallel_apply_worker() &&\n+ (!in_streamed_transaction && !stream_apply_worker))\n ereport(ERROR,\n (errcode(ERRCODE_PROTOCOL_VIOLATION),\n errmsg_internal(\"STREAM STOP message without STREAM START\")));\n\nThis check won't be able to detect missing stream start messages for\nparallel apply workers apart from the first pair of start/stop. I\nthought of adding in_remote_transaction check along with\nam_parallel_apply_worker() to detect the same but that also won't work\nbecause the parallel worker doesn't reset it at the stop message.\nAnother possibility is to introduce yet another variable for this but\nthat doesn't seem worth it. I would like to keep this check simple.\nCan you think of any better way?\n\n3. I think we can skip sending start/stop messages from the leader to\nthe parallel worker because unlike apply worker it will process only\none transaction-at-a-time. However, it is not clear whether that is\nworth the effort because it is sent after logical_decoding_work_mem\nchanges. For now, I have added a comment for this in the attached\npatch but let me if I am missing something or if I am wrong.\n\n4.\npostgres=# select pid, leader_pid, application_name, backend_type from\npg_stat_activity;\n pid | leader_pid | application_name | backend_type\n-------+------------+------------------+------------------------------\n 27624 | | | logical replication launcher\n 17336 | | psql | client backend\n 26312 | | | logical replication worker\n 26376 | | psql | client backend\n 14004 | | | logical replication worker\n\nHere, the second worker entry is for the parallel worker. 
Isn't it\nbetter if we distinguish this by keeping type as a logical replication\nparallel worker? I think for this you need to change bgw_type in\nlogicalrep_worker_launch().\n\n5. Can we name parallel_apply_subxact_info_add() as\nparallel_apply_start_subtrans()?\n\nApart from the above, I have added/edited a few comments and made a\nfew other cosmetic changes in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 24 Sep 2022 17:10:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Sep 22, 2022 at 16:08 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Thanks for updating the patch! Followings are comments for v33-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ===\r\n> libpqwalreceiver.c\r\n> \r\n> 01. inclusion\r\n> \r\n> ```\r\n> +#include \"catalog/pg_subscription.h\"\r\n> ```\r\n> \r\n> We don't have to include it because the analysis of parameters is done at caller.\r\n> \r\n> ===\r\n> launcher.c\r\n\r\nImproved.\r\n\r\n> 02. logicalrep_worker_launch()\r\n> \r\n> ```\r\n> + /*\r\n> + * Return silently if the number of parallel apply workers reached the\r\n> + * limit per subscription.\r\n> + */\r\n> + if (is_subworker && nparallelapplyworkers >=\r\n> max_parallel_apply_workers_per_subscription)\r\n> ```\r\n> \r\n> a.\r\n> I felt that it might be kind if we output some debug messages.\r\n> \r\n> b.\r\n> The if statement seems to be more than 80 characters. You can move to new\r\n> line around \"nparallelapplyworkers >= ...\".\r\n\r\nImproved.\r\n\r\n> ===\r\n> applyparallelworker.c\r\n> \r\n> 03. 
declaration\r\n> \r\n> ```\r\n> +/*\r\n> + * Is there a message pending in parallel apply worker which we need to\r\n> + * receive?\r\n> + */\r\n> +volatile bool ParallelApplyMessagePending = false;\r\n> ```\r\n> \r\n> I checked other flags that are set by signal handlers, their datatype seemed to\r\n> be sig_atomic_t.\r\n> Is there any reasons that you use normal bool? It should be changed if not.\r\n\r\nImproved.\r\n\r\n> 04. HandleParallelApplyMessages()\r\n> \r\n> ```\r\n> + if (winfo->error_mq_handle == NULL)\r\n> + continue;\r\n> ```\r\n> \r\n> a.\r\n> I was not sure when the cell should be cleaned. Currently we clean up\r\n> ParallelApplyWorkersList() only in the parallel_apply_start_worker(),\r\n> but we have chances to remove such a cell like HandleParallelApplyMessages()\r\n> or HandleParallelApplyMessage(). How do you think?\r\n> \r\n> b.\r\n> Comments should be added even if we keep this, like \"exited worker, skipped\".\r\n> \r\n> ```\r\n> + else\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"lost connection to the leader apply worker\")));\r\n> ```\r\n> \r\n> c.\r\n> This function is called on the leader apply worker, so the hint should be \"lost\r\n> connection to the parallel apply worker\".\r\n\r\n=>b.\r\nAdded the following comment according to your suggestion.\r\n`Skip if worker has exited`\r\n\r\n=>c.\r\nFixed.\r\n\r\n> ===\r\n> worker.c\r\n> \r\n> 07. handle_streamed_transaction()\r\n> \r\n> ```\r\n> + * For non-streamed transactions, returns false;\r\n> ```\r\n> \r\n> \"returns false;\" -> \"returns false\"\r\n\r\nImproved. 
I changed the semicolon to a period\r\n\r\n> apply_handle_commit_prepared(), apply_handle_abort_prepared()\r\n> \r\n> These functions are not expected that parallel worker calls\r\n> so I think Assert() should be added.\r\n\r\nI am not sure if this modification is necessary since we do not modify the\r\nnon-streamed transaction related message like \"COMMIT PREPARED\" or \"ROLLBACK\r\nPREPARED\".\r\n\r\n> 08. UpdateWorkerStats()\r\n> \r\n> ```\r\n> -static void\r\n> +void\r\n> UpdateWorkerStats(XLogRecPtr last_lsn, TimestampTz send_time, bool reply)\r\n> ```\r\n> \r\n> This function is called only in worker.c, should be static.\r\n> \r\n> 09. subscription_change_cb()\r\n> \r\n> ```\r\n> -static void\r\n> +void\r\n> subscription_change_cb(Datum arg, int cacheid, uint32 hashvalue)\r\n> ```\r\n> \r\n> This function is called only in worker.c, should be static.\r\n\r\nImproved.\r\n\r\n> 10. InitializeApplyWorker()\r\n> \r\n> ```\r\n> +/*\r\n> + * Initialize the database connection, in-memory subscription and necessary\r\n> + * config options.\r\n> + */\r\n> void\r\n> -ApplyWorkerMain(Datum main_arg)\r\n> +InitializeApplyWorker(void)\r\n> ```\r\n> \r\n> Some comments should be added about this is a common part of leader and\r\n> parallel apply worker.\r\n\r\nAdded the following comment:\r\n`The common initialization for leader apply worker and parallel apply worker.`\r\n\r\n> ===\r\n> logicalrepworker.h\r\n> \r\n> 11. 
declaration\r\n> \r\n> ```\r\n> extern PGDLLIMPORT volatile bool ParallelApplyMessagePending;\r\n> ```\r\n> \r\n> Please refer above comment.\r\n> \r\n> ===\r\n> guc_tables.c\r\n\r\nImproved.\r\n\r\nAlso rebased the patch set based on the changes in HEAD (26f7802).\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Mon, 26 Sep 2022 03:09:55 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Sep 22, 2022 at 18:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Few comments on v33-0001\r\n> =======================\r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> + else if (data->streaming == SUBSTREAM_PARALLEL &&\r\n> + data->protocol_version <\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"requested proto_version=%d does not support\r\n> streaming=parallel mode, need %d or higher\",\r\n> + data->protocol_version,\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));\r\n> \r\n> I think we can improve this error message as: \"requested\r\n> proto_version=%d does not support parallel streaming mode, need %d or\r\n> higher\".\r\n\r\nImproved.\r\n\r\n> 2.\r\n> --- a/doc/src/sgml/monitoring.sgml\r\n> +++ b/doc/src/sgml/monitoring.sgml\r\n> @@ -3184,7 +3184,7 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> </para>\r\n> <para>\r\n> OID of the relation that the worker is synchronizing; null for the\r\n> - main apply worker\r\n> + main apply worker and the apply parallel worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> This and other changes in monitoring.sgml refers the workers as \"apply\r\n> parallel worker\". Isn't it better to use parallel apply worker as we\r\n> are using at other places in the patch? 
But, I have another question,\r\n> do we really need to display entries for parallel apply workers in\r\n> pg_stat_subscription if it doesn't have any meaningful information? I\r\n> think we can easily avoid it in pg_stat_get_subscription by checking\r\n> apply_leader_pid.\r\n\r\nMake sense. Improved as suggested.\r\nDo not display parallel apply worker related information in this view after\r\napplying 0001 patch. But display entries for parallel apply worker after\r\napplying 0005 patch.\r\n\r\n> 3.\r\n> ApplyWorkerMain()\r\n> {\r\n> ...\r\n> ...\r\n> +\r\n> + if (server_version >= 160000 &&\r\n> + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> + options.proto.logical.streaming = pstrdup(\"parallel\");\r\n> \r\n> After deciding here whether the parallel streaming mode is enabled or\r\n> not, we recheck the same thing in apply_handle_stream_abort() and\r\n> parallel_apply_can_start(). In parallel_apply_can_start(), we do it\r\n> via two different checks. How about storing this information say in\r\n> structure MyLogicalRepWorker in ApplyWorkerMain() and then use it at\r\n> other places?\r\n\r\nImproved as suggested.\r\nAdded a new flag \"in_parallel_apply\" to structure MyLogicalRepWorker.\r\n\r\nBecause the patch set could not be applied cleanly, I rebased and shared them\r\nfor review.\r\nI have not addressed the comment you posted in [1]. 
I will share the new patch\r\nset when I finish them.\r\n\r\nThe new patches were attached in [2].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1KjGNA8T8O77rRhkv6bRT6OsdQaEy--2hNrJFCc80bN0A%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/OS3PR01MB6275F4A7CA186412E2FF2ED29E529%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Mon, 26 Sep 2022 03:11:38 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang, \r\n\r\nThanks for updating patch!... but cfbot says that it cannot be accepted [1].\r\nI thought the header <signal.h> should be included, like miscadmin.h.\r\n\r\n[1]: https://cirrus-ci.com/task/5909508684775424\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 26 Sep 2022 08:27:55 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Sep 26, 2022 at 8:41 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thur, Sep 22, 2022 at 18:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > 3.\n> > ApplyWorkerMain()\n> > {\n> > ...\n> > ...\n> > +\n> > + if (server_version >= 160000 &&\n> > + MySubscription->stream == SUBSTREAM_PARALLEL)\n> > + options.proto.logical.streaming = pstrdup(\"parallel\");\n> >\n> > After deciding here whether the parallel streaming mode is enabled or\n> > not, we recheck the same thing in apply_handle_stream_abort() and\n> > parallel_apply_can_start(). In parallel_apply_can_start(), we do it\n> > via two different checks. 
How about storing this information say in\n> > structure MyLogicalRepWorker in ApplyWorkerMain() and then use it at\n> > other places?\n>\n> Improved as suggested.\n> Added a new flag \"in_parallel_apply\" to structure MyLogicalRepWorker.\n>\n\nCan we name the variable in_parallel_apply as parallel_apply and set\nit in logicalrep_worker_launch() instead of in\nParallelApplyWorkerMain()?\n\nFew other comments:\n==================\n1.\n+ if (is_subworker &&\n+ nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)\n+ {\n+ LWLockRelease(LogicalRepWorkerLock);\n+\n+ ereport(DEBUG1,\n+ (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\n+ errmsg(\"out of parallel apply workers\"),\n+ errhint(\"You might need to increase\nmax_parallel_apply_workers_per_subscription.\")));\n\nI think it is better to keep the level of this as LOG. Similar\nmessages at other places use WARNING or LOG. Here, I prefer LOG\nbecause the system can still proceed without blocking anything.\n\n2.\n+/* Reset replication origin tracking. */\n+void\n+parallel_apply_replorigin_reset(void)\n+{\n+ bool started_tx = false;\n+\n+ /* This function might be called inside or outside of transaction. */\n+ if (!IsTransactionState())\n+ {\n+ StartTransactionCommand();\n+ started_tx = true;\n+ }\n\nWhy do we need a transaction in this function?\n\n3. Few suggestions to improve in the patch:\ndiff --git a/src/backend/replication/logical/worker.c\nb/src/backend/replication/logical/worker.c\nindex 1623c9e2fa..d9c519dfab 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -1264,6 +1264,10 @@ apply_handle_stream_prepare(StringInfo s)\n case TRANS_LEADER_SEND_TO_PARALLEL:\n Assert(winfo);\n\n+ /*\n+ * The origin can be active only in one process. See\n+ * apply_handle_stream_commit.\n+ */\n parallel_apply_replorigin_reset();\n\n /* Send STREAM PREPARE message to the parallel apply worker. 
*/\n@@ -1623,12 +1627,7 @@ apply_handle_stream_abort(StringInfo s)\n (errcode(ERRCODE_PROTOCOL_VIOLATION),\n errmsg_internal(\"STREAM ABORT message without STREAM STOP\")));\n\n- /*\n- * Check whether the publisher sends abort_lsn and abort_time.\n- *\n- * Note that the parallel apply worker is only started when the publisher\n- * sends abort_lsn and abort_time.\n- */\n+ /* We receive abort information only when we can apply in parallel. */\n if (MyLogicalRepWorker->in_parallel_apply)\n read_abort_info = true;\n\n@@ -1656,7 +1655,13 @@ apply_handle_stream_abort(StringInfo s)\n Assert(winfo);\n\n if (subxid == xid)\n+ {\n+ /*\n+ * The origin can be active only in one process. See\n+ * apply_handle_stream_commit.\n+ */\n parallel_apply_replorigin_reset();\n+ }\n\n /* Send STREAM ABORT message to the parallel apply worker. */\n parallel_apply_send_data(winfo, s->len, s->data);\n@@ -1858,6 +1863,12 @@ apply_handle_stream_commit(StringInfo s)\n case TRANS_LEADER_SEND_TO_PARALLEL:\n Assert(winfo);\n\n+ /*\n+ * We need to reset the replication origin before sending the commit\n+ * message and set it up again after confirming that parallel worker\n+ * has processed the message. This is required because origin can be\n+ * active only in one process at-a-time.\n+ */\n parallel_apply_replorigin_reset();\n\n /* Send STREAM COMMIT message to the parallel apply worker. */\ndiff --git a/src/include/replication/worker_internal.h\nb/src/include/replication/worker_internal.h\nindex 4cbfb43492..2bd9664f86 100644\n--- a/src/include/replication/worker_internal.h\n+++ b/src/include/replication/worker_internal.h\n@@ -70,11 +70,7 @@ typedef struct LogicalRepWorker\n */\n pid_t apply_leader_pid;\n\n- /*\n- * Indicates whether to use parallel apply workers.\n- *\n- * Determined based on streaming parameter and publisher version.\n- */\n+ /* Indicates whether apply can be performed parallelly. 
*/\n bool in_parallel_apply;\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 16:28:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Wang,\r\n\r\nFollowings are comments for your patchset.\r\n\r\n====\r\n0001\r\n\r\n\r\n01. launcher.c - logicalrep_worker_stop_internal()\r\n\r\n```\r\n+\r\n+ Assert(LWLockHeldByMe(LogicalRepWorkerLock));\r\n+\r\n```\r\n\r\nI think it should be Assert(LWLockHeldByMeInMode(LogicalRepWorkerLock, LW_SHARED))\r\nbecause the lock is released one and acquired again as LW_SHARED.\r\nIf newer function has been acquired lock as LW_EXCLUSIVE and call logicalrep_worker_stop_internal(),\r\nits lock may become weaker after calling it.\r\n\r\n02. launcher.c - apply_handle_stream_start()\r\n\r\n```\r\n+ /*\r\n+ * Notify handle methods we're processing a remote in-progress\r\n+ * transaction.\r\n+ */\r\n+ in_streamed_transaction = true;\r\n \r\n- MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n- FileSetInit(MyLogicalRepWorker->stream_fileset);\r\n+ /*\r\n+ * Start a transaction on stream start, this transaction will be\r\n+ * committed on the stream stop unless it is a tablesync worker in\r\n+ * which case it will be committed after processing all the\r\n+ * messages. We need the transaction for handling the buffile,\r\n+ * used for serializing the streaming data and subxact info.\r\n+ */\r\n+ begin_replication_step();\r\n```\r\n\r\nPreviously in_streamed_transaction was set after the begin_replication_step(),\r\nbut the ordering is modified. Maybe we don't have to modify it if there is no particular reason.\r\n\r\n03. 
launcher.c - apply_handle_stream_stop()\r\n\r\n```\r\n+ /* Commit the per-stream transaction */\r\n+ CommitTransactionCommand();\r\n+\r\n+ /* Reset per-stream context */\r\n+ MemoryContextReset(LogicalStreamingContext);\r\n+\r\n+ pgstat_report_activity(STATE_IDLE, NULL);\r\n+\r\n+ in_streamed_transaction = false;\r\n```\r\n\r\nPreviously in_streamed_transaction was set after the MemoryContextReset(), but the ordering is modified.\r\nMaybe we don't have to modify it if there is no particular reason.\r\n\r\n04. applyparallelworker.c - LogicalParallelApplyLoop()\r\n\r\n```\r\n+ shmq_res = shm_mq_receive(mqh, &len, &data, false);\r\n...\r\n+ if (ConfigReloadPending)\r\n+ {\r\n+ ConfigReloadPending = false;\r\n+ ProcessConfigFile(PGC_SIGHUP);\r\n+ }\r\n```\r\n\r\n\r\nHere the parallel apply worker waits to receive messages and after dispatching it ProcessConfigFile() is called.\r\nIt means that .conf will be not read until the parallel apply worker receives new messages and apply them.\r\n\r\nIt may be problematic when users change log_min_message to debugXXX for debugging but the streamed transaction rarely come.\r\nThey expected that detailed description appears on the log from next streaming chunk, but it does not.\r\n\r\nThis does not occur in leader worker when it waits messages from publisher, because it uses libpqrcv_receive(), which works asynchronously.\r\n\r\nI 'm not sure whether it should be documented that the evaluation of GUCs may be delayed, how do you think?\r\n\r\n===\r\n0004\r\n\r\n05. 
logical-replication.sgml\r\n\r\n```\r\n...\r\nIn that case, it may be necessary to change the streaming mode to on or off and cause\r\nthe same conflicts again so the finish LSN of the failed transaction will be written to the server log.\r\n ...\r\n```\r\n\r\nAbove sentence is added by 0001, but it is not modified by 0004.\r\nSuch transactions will be retried as streaming=on mode, so some descriptions related with it should be added.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 27 Sep 2022 06:32:13 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, September 24, 2022 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Sep 22, 2022 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\r\n> > <wangw.fnst@fujitsu.com> wrote:\r\n> > >\r\n> >\r\n> > Few comments on v33-0001\r\n> > =======================\r\n> >\r\n> \r\n> Some more comments on v33-0001\r\n> =============================\r\n> 1.\r\n> + /* Information from the corresponding LogicalRepWorker slot. */\r\n> + uint16 logicalrep_worker_generation;\r\n> +\r\n> + int logicalrep_worker_slot_no;\r\n> +} ParallelApplyWorkerShared;\r\n> \r\n> Both these variables are read/changed by leader/parallel workers without\r\n> using any lock (mutex). It seems currently there is no problem because of the\r\n> way the patch is using in_parallel_apply_xact but I think it won't be a good idea\r\n> to rely on it. 
I suggest using mutex to operate on these variables and also check\r\n> if the slot_no is in a valid range after reading it in parallel_apply_free_worker,\r\n> otherwise error out using elog.\r\n\r\nChanged.\r\n\r\n> 2.\r\n> static void\r\n> apply_handle_stream_stop(StringInfo s)\r\n> {\r\n> - if (!in_streamed_transaction)\r\n> + ParallelApplyWorkerInfo *winfo = NULL; TransApplyAction apply_action;\r\n> +\r\n> + if (!am_parallel_apply_worker() &&\r\n> + (!in_streamed_transaction && !stream_apply_worker))\r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> errmsg_internal(\"STREAM STOP message without STREAM START\")));\r\n> \r\n> This check won't be able to detect missing stream start messages for parallel\r\n> apply workers apart from the first pair of start/stop. I thought of adding\r\n> in_remote_transaction check along with\r\n> am_parallel_apply_worker() to detect the same but that also won't work\r\n> because the parallel worker doesn't reset it at the stop message.\r\n> Another possibility is to introduce yet another variable for this but that doesn't\r\n> seem worth it. I would like to keep this check simple.\r\n> Can you think of any better way?\r\n\r\nI feel we can reuse in_streamed_transaction in the parallel apply worker to\r\nsimplify the check there. I tried setting this flag in the parallel apply worker\r\nwhen a stream starts and resetting it on stream stop, so that we can directly check\r\nthis flag for duplicate stream start messages and other related things.\r\n\r\n> 3. I think we can skip sending start/stop messages from the leader to the\r\n> parallel worker because unlike apply worker it will process only one\r\n> transaction-at-a-time. However, it is not clear whether that is worth the effort\r\n> because it is sent after logical_decoding_work_mem changes. For now, I have\r\n> added a comment for this in the attached patch but let me if I am missing\r\n> something or if I am wrong.\r\n\r\nI think the suggested comments look good. 
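By the way, here is a small self-contained model of the in_streamed_transaction reuse described above. It is illustrative only: the names are hypothetical and the functions return false on a protocol violation, whereas the real worker raises an error via ereport.

```c
#include <stdbool.h>

/*
 * Toy model of the parallel apply worker's stream-state tracking.
 * A single flag is enough to catch both a duplicate STREAM START and
 * a STREAM STOP that arrives without a preceding STREAM START.
 */
static bool in_streamed_transaction = false;

/* Returns false on a duplicate STREAM START. */
static bool
model_handle_stream_start(void)
{
	if (in_streamed_transaction)
		return false;		/* protocol violation */
	in_streamed_transaction = true;
	return true;
}

/* Returns false on a STREAM STOP without a STREAM START. */
static bool
model_handle_stream_stop(void)
{
	if (!in_streamed_transaction)
		return false;		/* protocol violation */
	in_streamed_transaction = false;
	return true;
}
```

A start/stop pair simply toggles the flag, so any out-of-order message is detected immediately without extra state.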
\r\n\r\n> 4.\r\n> postgres=# select pid, leader_pid, application_name, backend_type from\r\n> pg_stat_activity;\r\n> pid | leader_pid | application_name | backend_type\r\n> -------+------------+------------------+------------------------------\r\n> 27624 | | | logical replication launcher\r\n> 17336 | | psql | client backend\r\n> 26312 | | | logical replication worker\r\n> 26376 | | psql | client backend\r\n> 14004 | | | logical replication worker\r\n> \r\n> Here, the second worker entry is for the parallel worker. Isn't it better if we\r\n> distinguish this by keeping type as a logical replication parallel worker? I think\r\n> for this you need to change bgw_type in logicalrep_worker_launch().\r\n\r\nChanged.\r\n\r\n> 5. Can we name parallel_apply_subxact_info_add() as\r\n> parallel_apply_start_subtrans()?\r\n> \r\n> Apart from the above, I have added/edited a few comments and made a few\r\n> other cosmetic changes in the attached.\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 27 Sep 2022 12:26:44 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, September 26, 2022 6:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Sep 26, 2022 at 8:41 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thur, Sep 22, 2022 at 18:12 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > > 3.\r\n> > > ApplyWorkerMain()\r\n> > > {\r\n> > > ...\r\n> > > ...\r\n> > > +\r\n> > > + if (server_version >= 160000 &&\r\n> > > + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> > > + options.proto.logical.streaming = pstrdup(\"parallel\");\r\n> > >\r\n> > > After deciding here whether the parallel streaming mode is enabled\r\n> > > or not, we recheck the same thing in apply_handle_stream_abort() and\r\n> > > 
parallel_apply_can_start(). In parallel_apply_can_start(), we do it\r\n> > > via two different checks. How about storing this information say in\r\n> > > structure MyLogicalRepWorker in ApplyWorkerMain() and then use it at\r\n> > > other places?\r\n> >\r\n> > Improved as suggested.\r\n> > Added a new flag \"in_parallel_apply\" to structure MyLogicalRepWorker.\r\n> >\r\n> \r\n> Can we name the variable in_parallel_apply as parallel_apply and set it in\r\n> logicalrep_worker_launch() instead of in ParallelApplyWorkerMain()?\r\n\r\nChanged.\r\n\r\n> Few other comments:\r\n> ==================\r\n> 1.\r\n> + if (is_subworker &&\r\n> + nparallelapplyworkers >= max_parallel_apply_workers_per_subscription)\r\n> + {\r\n> + LWLockRelease(LogicalRepWorkerLock);\r\n> +\r\n> + ereport(DEBUG1,\r\n> + (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\r\n> + errmsg(\"out of parallel apply workers\"), errhint(\"You might need to\r\n> + increase\r\n> max_parallel_apply_workers_per_subscription.\")));\r\n> \r\n> I think it is better to keep the level of this as LOG. Similar messages at other\r\n> places use WARNING or LOG. Here, I prefer LOG because the system can still\r\n> proceed without blocking anything.\r\n\r\nChanged.\r\n\r\n> 2.\r\n> +/* Reset replication origin tracking. */ void\r\n> +parallel_apply_replorigin_reset(void)\r\n> +{\r\n> + bool started_tx = false;\r\n> +\r\n> + /* This function might be called inside or outside of transaction. */\r\n> + if (!IsTransactionState()) { StartTransactionCommand(); started_tx =\r\n> + true; }\r\n> \r\n> Why do we need a transaction in this function?\r\n\r\nI think we don't need it and removed this in the new version patch.\r\n\r\n> 3. 
Few suggestions to improve in the patch:\r\n> diff --git a/src/backend/replication/logical/worker.c\r\n> b/src/backend/replication/logical/worker.c\r\n> index 1623c9e2fa..d9c519dfab 100644\r\n> --- a/src/backend/replication/logical/worker.c\r\n> +++ b/src/backend/replication/logical/worker.c\r\n> @@ -1264,6 +1264,10 @@ apply_handle_stream_prepare(StringInfo s)\r\n> case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> Assert(winfo);\r\n> \r\n> + /*\r\n> + * The origin can be active only in one process. See\r\n> + * apply_handle_stream_commit.\r\n> + */\r\n> parallel_apply_replorigin_reset();\r\n> \r\n> /* Send STREAM PREPARE message to the parallel apply worker. */ @@\r\n> -1623,12 +1627,7 @@ apply_handle_stream_abort(StringInfo s)\r\n> (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> errmsg_internal(\"STREAM ABORT message without STREAM STOP\")));\r\n> \r\n> - /*\r\n> - * Check whether the publisher sends abort_lsn and abort_time.\r\n> - *\r\n> - * Note that the parallel apply worker is only started when the publisher\r\n> - * sends abort_lsn and abort_time.\r\n> - */\r\n> + /* We receive abort information only when we can apply in parallel. */\r\n> if (MyLogicalRepWorker->in_parallel_apply)\r\n> read_abort_info = true;\r\n> \r\n> @@ -1656,7 +1655,13 @@ apply_handle_stream_abort(StringInfo s)\r\n> Assert(winfo);\r\n> \r\n> if (subxid == xid)\r\n> + {\r\n> + /*\r\n> + * The origin can be active only in one process. See\r\n> + * apply_handle_stream_commit.\r\n> + */\r\n> parallel_apply_replorigin_reset();\r\n> + }\r\n> \r\n> /* Send STREAM ABORT message to the parallel apply worker. */\r\n> parallel_apply_send_data(winfo, s->len, s->data); @@ -1858,6 +1863,12 @@\r\n> apply_handle_stream_commit(StringInfo s)\r\n> case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> Assert(winfo);\r\n> \r\n> + /*\r\n> + * We need to reset the replication origin before sending the commit\r\n> + * message and set it up again after confirming that parallel worker\r\n> + * has processed the message. 
This is required because origin can be\r\n> + * active only in one process at-a-time.\r\n> + */\r\n> parallel_apply_replorigin_reset();\r\n> \r\n> /* Send STREAM COMMIT message to the parallel apply worker. */ diff --git\r\n> a/src/include/replication/worker_internal.h\r\n> b/src/include/replication/worker_internal.h\r\n> index 4cbfb43492..2bd9664f86 100644\r\n> --- a/src/include/replication/worker_internal.h\r\n> +++ b/src/include/replication/worker_internal.h\r\n> @@ -70,11 +70,7 @@ typedef struct LogicalRepWorker\r\n> */\r\n> pid_t apply_leader_pid;\r\n> \r\n> - /*\r\n> - * Indicates whether to use parallel apply workers.\r\n> - *\r\n> - * Determined based on streaming parameter and publisher version.\r\n> - */\r\n> + /* Indicates whether apply can be performed parallelly. */\r\n> bool in_parallel_apply;\r\n> \r\n\r\nMerged, thanks.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 27 Sep 2022 12:27:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, September 27, 2022 2:32 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com>\r\n> \r\n> Dear Wang,\r\n> \r\n> Followings are comments for your patchset.\r\n\r\nThanks for the comments.\r\n\r\n> ====\r\n> 0001\r\n> \r\n> \r\n> 01. launcher.c - logicalrep_worker_stop_internal()\r\n> \r\n> ```\r\n> +\r\n> + Assert(LWLockHeldByMe(LogicalRepWorkerLock));\r\n> +\r\n> ```\r\n\r\nChanged.\r\n\r\n> I think it should be Assert(LWLockHeldByMeInMode(LogicalRepWorkerLock,\r\n> LW_SHARED)) because the lock is released one and acquired again as\r\n> LW_SHARED.\r\n> If newer function has been acquired lock as LW_EXCLUSIVE and call\r\n> logicalrep_worker_stop_internal(),\r\n> its lock may become weaker after calling it.\r\n> \r\n> 02. 
launcher.c - apply_handle_stream_start()\r\n> \r\n> ```\r\n> + /*\r\n> + * Notify handle methods we're processing a remote\r\n> in-progress\r\n> + * transaction.\r\n> + */\r\n> + in_streamed_transaction = true;\r\n> \r\n> - MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));\r\n> - FileSetInit(MyLogicalRepWorker->stream_fileset);\r\n> + /*\r\n> + * Start a transaction on stream start, this transaction\r\n> will be\r\n> + * committed on the stream stop unless it is a\r\n> tablesync worker in\r\n> + * which case it will be committed after processing all\r\n> the\r\n> + * messages. We need the transaction for handling the\r\n> buffile,\r\n> + * used for serializing the streaming data and subxact\r\n> info.\r\n> + */\r\n> + begin_replication_step();\r\n> ```\r\n> \r\n> Previously in_streamed_transaction was set after the begin_replication_step(),\r\n> but the ordering is modified. Maybe we don't have to modify it if there is no\r\n> particular reason.\r\n> \r\n> 03. launcher.c - apply_handle_stream_stop()\r\n> \r\n> ```\r\n> + /* Commit the per-stream transaction */\r\n> + CommitTransactionCommand();\r\n> +\r\n> + /* Reset per-stream context */\r\n> + MemoryContextReset(LogicalStreamingContext);\r\n> +\r\n> + pgstat_report_activity(STATE_IDLE, NULL);\r\n> +\r\n> + in_streamed_transaction = false;\r\n> ```\r\n> \r\n> Previously in_streamed_transaction was set after the MemoryContextReset(),\r\n> but the ordering is modified.\r\n> Maybe we don't have to modify it if there is no particular reason.\r\n\r\nI adjusted the position of this due to some other improvements this time.\r\n\r\n> \r\n> 04. 
applyparallelworker.c - LogicalParallelApplyLoop()\r\n> \r\n> ```\r\n> + shmq_res = shm_mq_receive(mqh, &len, &data, false);\r\n> ...\r\n> + if (ConfigReloadPending)\r\n> + {\r\n> + ConfigReloadPending = false;\r\n> + ProcessConfigFile(PGC_SIGHUP);\r\n> + }\r\n> ```\r\n> \r\n> \r\n> Here the parallel apply worker waits to receive messages and after dispatching\r\n> it ProcessConfigFile() is called.\r\n> It means that .conf will be not read until the parallel apply worker receives new\r\n> messages and apply them.\r\n> \r\n> It may be problematic when users change log_min_message to debugXXX for\r\n> debugging but the streamed transaction rarely come.\r\n> They expected that detailed description appears on the log from next\r\n> streaming chunk, but it does not.\r\n> \r\n> This does not occur in leader worker when it waits messages from publisher,\r\n> because it uses libpqrcv_receive(), which works asynchronously.\r\n> \r\n> I 'm not sure whether it should be documented that the evaluation of GUCs may\r\n> be delayed, how do you think?\r\n\r\nI changed the shm_mq_receive to asynchronous mode which is also consistent with\r\nwhat we did for Gather node when reading data from parallel query workers.\r\n\r\n> \r\n> ===\r\n> 0004\r\n> \r\n> 05. 
logical-replication.sgml\r\n> \r\n> ```\r\n> ...\r\n> In that case, it may be necessary to change the streaming mode to on or off and\r\n> cause the same conflicts again so the finish LSN of the failed transaction will be\r\n> written to the server log.\r\n> ...\r\n> ```\r\n> \r\n> Above sentence is added by 0001, but it is not modified by 0004.\r\n> Such transactions will be retried as streaming=on mode, so some descriptions\r\n> related with it should be added.\r\n\r\nAdded.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 27 Sep 2022 12:31:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for updating patch. I will review yours soon, but I reply to your comment.\r\n\r\n> > 04. applyparallelworker.c - LogicalParallelApplyLoop()\r\n> >\r\n> > ```\r\n> > + shmq_res = shm_mq_receive(mqh, &len, &data, false);\r\n> > ...\r\n> > + if (ConfigReloadPending)\r\n> > + {\r\n> > + ConfigReloadPending = false;\r\n> > + ProcessConfigFile(PGC_SIGHUP);\r\n> > + }\r\n> > ```\r\n> >\r\n> >\r\n> > Here the parallel apply worker waits to receive messages and after dispatching\r\n> > it ProcessConfigFile() is called.\r\n> > It means that .conf will be not read until the parallel apply worker receives new\r\n> > messages and apply them.\r\n> >\r\n> > It may be problematic when users change log_min_message to debugXXX for\r\n> > debugging but the streamed transaction rarely come.\r\n> > They expected that detailed description appears on the log from next\r\n> > streaming chunk, but it does not.\r\n> >\r\n> > This does not occur in leader worker when it waits messages from publisher,\r\n> > because it uses libpqrcv_receive(), which works asynchronously.\r\n> >\r\n> > I 'm not sure whether it should be documented that the evaluation of GUCs may\r\n> > be delayed, how do you think?\r\n> 
\r\n> I changed the shm_mq_receive to asynchronous mode which is also consistent\r\n> with\r\n> what we did for Gather node when reading data from parallel query workers.\r\n\r\nI checked your implementation, but it seems that the parallel apply worker will not sleep\r\neven if there are no messages or signals. That could be very inefficient.\r\n\r\nIn the gather node's gather_readnext(), the same approach is used, but there is a premise\r\nthat the wait time is short because it is related to only one gather node.\r\nFor a parallel apply worker, however, we cannot predict the wait time because\r\nit depends on the streamed transactions. If such transactions rarely arrive, parallel apply workers may waste a lot of CPU time.\r\n\r\nI think we should wait for a short time, or until the leader notifies us, when shmq_res == SHM_MQ_WOULD_BLOCK.\r\nWhat do you think?\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 29 Sep 2022 09:50:01 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for the v35-0001 patch:\n\n======\n\n1. Commit message\n\nCurrently, for large transactions, the publisher sends the data in multiple\nstreams (changes divided into chunks depending upon logical_decoding_work_mem),\nand then on the subscriber-side, the apply worker writes the changes into\ntemporary files and once it receives the commit, it reads from the file and\napplies the entire transaction.\n\n~\n\nThere is a mix of plural and singular.\n\n\"reads from the file\" -> \"reads from those files\" ?\n\n~~~\n\n2.\n\nThis preserves commit ordering and avoids\nwriting to and reading from file in most cases. We still need to spill if there\nis no worker available.\n\n2a.
We still need to spill\" -> \"in most cases, although we\nstill need to spill\"\n\n======\n\n3. GENERAL\n\n(this comment was written after I wrote all the other ones below so\nthere might be some unintended overlaps...)\n\nI found the mixed use of the same member names having different\nmeanings to be quite confusing.\n\ne.g.1\nPGOutputData 'streaming' is now a single char internal representation\nthe subscription parameter streaming mode ('f','t','p')\n- bool streaming;\n+ char streaming;\n\ne.g.2\nWalRcvStreamOptions 'streaming' is a C string version of the\nsubscription streaming mode (\"on\", \"parallel\")\n- bool streaming; /* Streaming of large transactions */\n+ char *streaming; /* Streaming of large transactions */\n\ne.g.3\nSubOpts 'streaming' is again like the first example - a single char\nfor the mode.\n- bool streaming;\n+ char streaming;\n\n\nIMO everything would become much simpler if you did:\n\n3a.\nRename \"char streaming;\" -> \"char streaming_mode;\"\n\n3b.\nRe-designed the \"char *streaming;\" code to also use the single char\nnotation, then also call that member 'streaming_mode'. Then everything\nwill be consistent.\n\n\n======\n\ndoc/src/sgml/config.sgml\n\n4. - max_parallel_apply_workers_per_subscription\n\n+ <varlistentry\nid=\"guc-max-parallel-apply-workers-per-subscription\"\nxreflabel=\"max_parallel_apply_workers_per_subscription\">\n+ <term><varname>max_parallel_apply_workers_per_subscription</varname>\n(<type>integer</type>)\n+ <indexterm>\n+ <primary><varname>max_parallel_apply_workers_per_subscription</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Maximum number of parallel apply workers per subscription. 
This\n+ parameter controls the amount of parallelism for streaming of\n+ in-progress transactions with subscription parameter\n+ <literal>streaming = parallel</literal>.\n+ </para>\n+ <para>\n+ The parallel apply workers are taken from the pool defined by\n+ <varname>max_logical_replication_workers</varname>.\n+ </para>\n+ <para>\n+ The default value is 2. This parameter can only be set in the\n+ <filename>postgresql.conf</filename> file or on the server command\n+ line.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nI felt that maybe this should also xref to the\ndoc/src/sgml/logical-replication.sgml section where you say about\n\"max_logical_replication_workers should be increased according to the\ndesired number of parallel apply workers.\"\n\n=====\n\n5. doc/src/sgml/protocol.sgml\n\n+ <para>\n+ Version <literal>4</literal> is supported only for server version 16\n+ and above, and it allows applying streams of large in-progress\n+ transactions in parallel.\n+ </para>\n\nSUGGESTION\n... and it allows streams of large in-progress transactions to be\napplied in parallel.\n\n======\n\n6. doc/src/sgml/ref/create_subscription.sgml\n\n+ <para>\n+ If set to <literal>parallel</literal>, incoming changes are directly\n+ applied via one of the parallel apply workers, if available. If no\n+ parallel worker is free to handle streaming transactions then the\n+ changes are written to temporary files and applied after the\n+ transaction is committed. Note that if an error happens when\n+ applying changes in a parallel worker, the finish LSN of the\n+ remote transaction might not be reported in the server log.\n </para>\n\n6a.\n\"parallel worker is free\" -> \"parallel apply worker is free\"\n\n~\n\n6b.\n\"Note that if an error happens when applying changes in a parallel\nworker,\" --> \"Note that if an error happens in a parallel apply\nworker,\"\n\n======\n\n7. 
src/backend/access/transam/xact.c - RecordTransactionAbort\n\n\n+ /*\n+ * Are we using the replication origins feature? Or, in other words, are\n+ * we replaying remote actions?\n+ */\n+ replorigin = (replorigin_session_origin != InvalidRepOriginId &&\n+ replorigin_session_origin != DoNotReplicateId);\n\n\"Or, in other words,\" -> \"In other words,\"\n\n======\n\nsrc/backend/replication/logical/applyparallelworker.c\n\n8. - file header comment\n\n+ * Refer to the comments in file header of logical/worker.c to see more\n+ * information about parallel apply worker.\n\n8a.\n\"in file header\" -> \"in the file header\"\n\n~\n\n8b.\n\"about parallel apply worker.\" -> \"about parallel apply workers.\"\n\n~~~\n\n9. - parallel_apply_can_start\n\n+/*\n+ * Returns true, if it is allowed to start a parallel apply worker, false,\n+ * otherwise.\n+ */\n+static bool\n+parallel_apply_can_start(TransactionId xid)\n\n(The commas are strange)\n\nSUGGESTION\nReturns true if it is OK to start a parallel apply worker, false otherwise.\n\nor just SUGGESTION\nReturns true if it is OK to start a parallel apply worker.\n\n~~~\n\n10.\n\n+ /*\n+ * Don't start a new parallel worker if not in parallel streaming mode or\n+ * the publisher does not support parallel apply.\n+ */\n+ if (!MyLogicalRepWorker->parallel_apply)\n+ return false;\n\n10a.\nSUGGESTION\nDon't start a new parallel apply worker if the subscription is not\nusing parallel streaming mode, or if the publisher does not support\nparallel apply.\n\n~\n\n10b.\nIMO this flag might be better to be called 'parallel_apply_enabled' or\nsomething similar.\n(see also review comment #55b.)\n\n~~~\n\n11. - parallel_apply_start_worker\n\n+ /* Try to start a new parallel apply worker. */\n+ if (winfo == NULL)\n+ winfo = parallel_apply_setup_worker();\n+\n+ /* Failed to start a new parallel apply worker. */\n+ if (winfo == NULL)\n+ return;\n\nIMO might be cleaner to write that code like below. 
And now the 2nd\ncomment is not really adding anything so it can be removed too.\n\nSUGGESTION\nif (winfo == NULL)\n{\n/* Try to start a new parallel apply worker. */\nwinfo = parallel_apply_setup_worker();\n\nif (winfo == NULL)\nreturn;\n}\n\n~~~\n\n12. - parallel_apply_free_worker\n\n+ SpinLockAcquire(&winfo->shared->mutex);\n+ slot_no = winfo->shared->logicalrep_worker_slot_no;\n+ generation = winfo->shared->logicalrep_worker_generation;\n+ SpinLockRelease(&winfo->shared->mutex);\n\nI know there are not many places doing this, but do you think it might\nbe worth introducing some new set/get function to encapsulate the\nset/get of the generation/slot so it does the mutex spin-locks in\ncommon code?\n\n~~~\n\n13. - LogicalParallelApplyLoop\n\n+ /*\n+ * Init the ApplyMessageContext which we clean up after each replication\n+ * protocol message.\n+ */\n+ ApplyMessageContext = AllocSetContextCreate(ApplyContext,\n+ \"ApplyMessageContext\",\n+ ALLOCSET_DEFAULT_SIZES);\n\nBecause this is in the parallel apply worker should the name (e.g. the\n2nd param) be changed to \"ParallelApplyMessageContext\"?\n\n~~~\n\n14.\n\n+ else if (shmq_res == SHM_MQ_DETACHED)\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"lost connection to the leader apply worker\")));\n+ }\n+ /* SHM_MQ_WOULD_BLOCK is purposefully ignored */\n\nInstead of that comment sort of floating in space I wonder if this\ncode would be better written as a switch, so then you can write this\ncomment in the 'default' case.\n\nOR, maybe the \"else if (shmq_res == SHM_MQ_DETACHED)\" should be changed to\nSUGGESTION\nelse if (shmq_res != SHM_MQ_WOULD_BLOCK)\n\nOR, just having an empty code block would be better than just a code\ncomment all by itself.\nSUGGESTION\nelse\n{\n/* SHM_MQ_WOULD_BLOCK is purposefully ignored */\n}\n\n~~~\n\n15. 
- ParallelApplyWorkerMain\n\n+ /*\n+ * Allocate the origin name in long-lived context for error context\n+ * message.\n+ */\n+ snprintf(originname, sizeof(originname), \"pg_%u\", MySubscription->oid);\n\n15a.\n\"in long-lived\" -> \"in a long-lived\"\n\n~\n\n15b.\nPlease watch my other thread [1] where I am hoping to push a patch to\nwill replace these snprintf's with a common function to do the same.\nIf/when my patch is pushed then this code needs to be changed to call\nthat new function.\n\n~~~\n\n16. - HandleParallelApplyMessages\n\n+ res = shm_mq_receive(winfo->error_mq_handle, &nbytes,\n+ &data, true);\n\nSeems to have unnecessary wrapping.\n\n~~~\n\n17. - parallel_apply_setup_dsm\n\n+/*\n+ * Set up a dynamic shared memory segment.\n+ *\n+ * We set up a control region that contains a fixed worker info\n+ * (ParallelApplyWorkerShared), a message queue, and an error queue.\n+ *\n+ * Returns true on success, false on failure.\n+ */\n+static bool\n+parallel_apply_setup_dsm(ParallelApplyWorkerInfo *winfo)\n\n\"fixed worker info\" -> \"fixed size worker info\" ?\n\n~~~\n\n18.\n\n+ * We need one key to register the location of the header, and we need two\n+ * other keys to track the locations of the message queue and the error\n+ * message queue.\n\n\"and we need two other\" -> \"and two other\"\n\n~~~\n\n19. - parallel_apply_wait_for_xact_finish\n\n+void\n+parallel_apply_wait_for_xact_finish(ParallelApplyWorkerInfo *winfo)\n+{\n+ for (;;)\n+ {\n+ if (!parallel_apply_get_in_xact(winfo->shared))\n+ break;\n\nShould that condition have a comment? All the others do.\n\n~~~\n\n20. 
- parallel_apply_savepoint_name\n\nThe only callers that I could find are from\nparallel_apply_start_subtrans and parallel_apply_stream_abort so...\n\n20a.\nWhy is there an extern in worker_internal.h?\n\n~\n\n20b.\nWhy is this not declared static?\n\n~~~\n\n21.\nThe callers to parallel_apply_start_subtrans are both allocating a\nname buffer sized like:\nchar spname[MAXPGPATH];\n\nIs that right?\n\nI thought that PG names were limited by NAMEDATALEN.\n\n~~~\n\n22. - parallel_apply_replorigin_setup\n\n+ snprintf(originname, sizeof(originname), \"pg_%u\", MySubscription->oid);\n\nPlease watch my other thread [1] where I am hoping to push a patch that\nwill replace these snprintf's with a common function to do the same.\nIf/when my patch is pushed then this code needs to be changed to call\nthat new function.\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n23. - GUCs\n\n@@ -54,6 +54,7 @@\n\n int max_logical_replication_workers = 4;\n int max_sync_workers_per_subscription = 2;\n+int max_parallel_apply_workers_per_subscription = 2;\n\nPlease watch my other thread [2] where I am hoping to push a patch to\nclean up some of these GUC C variable declarations. It is not really\nrecommended to assign default values to the C variables like this -\nthey are kind of misleading because they will be overwritten by the\nGUC default value when the GUC mechanism starts up.\n\n~~~\n\n24. - logicalrep_worker_launch\n\n+ /* Sanity check: we don't support table sync in subworker. */\n+ Assert(!(is_subworker && OidIsValid(relid)));\n\nIMO \"we don't support\" makes it sound like this is something that\nmaybe is intended for the future. In fact, I think just this\ncombination is not possible so it is just a plain sanity check.
I\nthink it might be better to just say like below\n\n/* Sanity check - tablesync worker cannot be a subworker */\n\n~~~\n\n25.\n\n+ worker->parallel_apply = is_subworker;\n\nIt seems kind of strange to assign one boolean to another when they have\ncompletely different names. I wondered if 'is_subworker' should be\ncalled 'is_parallel_apply_worker'?\n\n~~~\n\n26.\n\n if (OidIsValid(relid))\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u sync %u\", subid, relid);\n+ else if (is_subworker)\n+ snprintf(bgw.bgw_name, BGW_MAXLEN,\n+ \"logical replication parallel apply worker for subscription %u\", subid);\n else\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n \"logical replication worker for subscription %u\", subid);\n\nI think that *last* text should now be changed like below:\n\nBEFORE\n\"logical replication worker for subscription %u\"\nAFTER\n\"logical replication apply worker for subscription %u\"\n\n~~~\n\n27. - logicalrep_worker_stop_internal\n\n+/*\n+ * Workhorse for logicalrep_worker_stop(), logicalrep_worker_detach() and\n+ * logicalrep_worker_stop_by_slot(). Stop the worker and wait for it to die.\n+ */\n+static void\n+logicalrep_worker_stop_internal(LogicalRepWorker *worker)\n\nIMO it would be better to define this static function *before* all the\ncallers of it.\n\n~~~\n\n28. - logicalrep_worker_detach\n\n+ /* Stop the parallel apply workers. */\n+ if (am_leader_apply_worker())\n+ {\n\nShould that comment rather say like below?\n\n/* If this is the leader apply worker then stop all of its parallel\napply workers. */\n\n~~~\n\n29. - pg_stat_get_subscription\n\n+ /* Skip if this is parallel apply worker */\n+ if (worker.apply_leader_pid != InvalidPid)\n+ continue;\n\n29a.\n\"is parallel apply\" -> \"is a parallel apply\"\n\n~\n\n29b.\nIMO this condition should be using your macro isParallelApplyWorker(worker).\n\n======\n\n30.
src/backend/replication/logical/proto.c - logicalrep_read_stream_abort\n\n+ *\n+ * If read_abort_info is true, try to read the abort_lsn and abort_time fields,\n+ * otherwise don't.\n */\n void\n-logicalrep_read_stream_abort(StringInfo in, TransactionId *xid,\n- TransactionId *subxid)\n+logicalrep_read_stream_abort(StringInfo in,\n+ LogicalRepStreamAbortData *abort_data,\n+ bool read_abort_info)\n\n\"try to read\" -> \"read\"\n\n======\n\n31. src/backend/replication/logical/tablesync.c - process_syncing_tables\n\n process_syncing_tables(XLogRecPtr current_lsn)\n {\n+ if (am_parallel_apply_worker())\n+ return;\n+\n\nMaybe should have some comment here like:\n\n/* Skip for parallel apply workers. */\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n32. - file header comment\n\n+ * the list for any available worker. Note that we maintain a maximum of half\n+ * the max_parallel_apply_workers_per_subscription workers in the pool and\n+ * after that, we simply exit the worker after applying the transaction. This\n+ * worker pool threshold is a bit arbitrary and we can provide a guc for this\n+ * in the future if required.\n\nIMO that sentence beginning with \"This worker pool\" should be written\nas an XXX-style comment.\n\nAlso \"guc\" -> \"GUC variable\"\n\ne.g.\n\n* the list for any available worker. Note that we maintain a maximum of half\n* the max_parallel_apply_workers_per_subscription workers in the pool and\n* after that, we simply exit the worker after applying the transaction.\n*\n* XXX This worker pool threshold is a bit arbitrary and we can provide a GUC\n* variable for this in the future if required.\n\n~~~\n\n33.\n\n * we cannot count how many workers will be started. 
It may be possible to\n * allocate enough shared memory in one segment based on the maximum number of\n * parallel apply workers\n(max_parallel_apply_workers_per_subscription), but this\n * may waste some memory if no process is actually started.\n\n \"may waste some memory\" -> \"would waste memory\"\n\n~~~\n\n34.\n\n+ * In case, no worker is available to handle the streamed transaction, we\n+ * follow approach 2.\n\nSUGGESTION\nIf no parallel apply worker is available to handle the streamed\ntransaction we follow approach 2.\n\n~~~\n\n35. - TransApplyAction\n\n+ * TRANS_LEADER_SERIALIZE means that we are in leader apply worker and changes\n+ * are written to temporary files and then applied when the final commit\n+ * arrives.\n\n\"in leader apply\" -> \"in the leader apply\"\n\n~~~\n\n36 - should_apply_changes_for_rel\n\n should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)\n {\n if (am_tablesync_worker())\n return MyLogicalRepWorker->relid == rel->localreloid;\n+ else if (am_parallel_apply_worker())\n+ {\n+ if (rel->state != SUBREL_STATE_READY)\n+ ereport(ERROR,\n+ (errmsg(\"logical replication apply workers for subscription \\\"%s\\\"\nwill restart\",\n+ MySubscription->name),\n+ errdetail(\"Cannot handle streamed replication transaction using parallel \"\n+ \"apply workers until all tables are synchronized.\")));\n+\n+ return true;\n+ }\n else\n return (rel->state == SUBREL_STATE_READY ||\n (rel->state == SUBREL_STATE_SYNCDONE &&\n@@ -427,43 +519,87 @@ end_replication_step(void)\n\nThis function can be made tidier just by removing all the 'else' ...\n\nSUGGESTION\nif (am_tablesync_worker())\nreturn ...\nif (am_parallel_apply_worker())\n{\n...\nreturn true;\n}\n\nAssert(am_leader_apply_worker());\nreturn ...\n\n~~~\n\n37. - handle_streamed_transaction\n\n+ /*\n+ * XXX The publisher side doesn't always send relation/type update\n+ * messages after the streaming transaction, so also update the\n+ * relation/type in leader apply worker here. 
See function\n+ * cleanup_rel_sync_cache.\n+ */\n+ if (action == LOGICAL_REP_MSG_RELATION ||\n+ action == LOGICAL_REP_MSG_TYPE)\n+ return false;\n+ return true;\n\n37.\n\"so also update the relation/type in leader apply worker here\"\n\nIs that comment worded correctly? There is nothing being updated \"here\".\n\n~\n\n37.\nThat code is the same as:\n\nreturn (action != LOGICAL_REP_MSG_RELATION && action != LOGICAL_REP_MSG_TYPE);\n\n~~~\n\n38. - apply_handle_commit_prepared\n\n+ *\n+ * Note that we don't need to wait here if the transaction was prepared in a\n+ * parallel apply worker. Because we have already waited for the prepare to\n+ * finish in apply_handle_stream_prepare() which will ensure all the operations\n+ * in that transaction have happened in the subscriber and no concurrent\n+ * transaction can create deadlock or transaction dependency issues.\n */\n static void\n apply_handle_commit_prepared(StringInfo s)\n\n\"worker. Because\" -> \"worker because\"\n\n~~~\n\n39. - apply_handle_rollback_prepared\n\n+ *\n+ * Note that we don't need to wait here if the transaction was prepared in a\n+ * parallel apply worker. Because we have already waited for the prepare to\n+ * finish in apply_handle_stream_prepare() which will ensure all the operations\n+ * in that transaction have happened in the subscriber and no concurrent\n+ * transaction can create deadlock or transaction dependency issues.\n */\n static void\n apply_handle_rollback_prepared(StringInfo s)\n\nSee previous review comment #38 above.\n\n~~~\n\n40. - apply_handle_stream_prepare\n\n+ case TRANS_LEADER_SERIALIZE:\n\n- /* Mark the transaction as prepared. 
*/\n- apply_handle_prepare_internal(&prepare_data);\n+ /*\n+ * The transaction has been serialized to file, so replay all the\n+ * spooled operations.\n+ */\n\nSpurious blank line after the 'case'.\n\nFYI - this same blank line is also in all the other switch/case that\nlooked like this one, so if you will fix it then please check all\nthose other places too...\n\n~~~\n\n41. - apply_handle_stream_start\n\n+ *\n+ * XXX We can avoid sending pair of the START/STOP messages to the parallel\n+ * worker because unlike apply worker it will process only one\n+ * transaction-at-a-time. However, it is not clear whether that is worth the\n+ * effort because it is sent after logical_decoding_work_mem changes.\n */\n static void\n apply_handle_stream_start(StringInfo s)\n\n\"sending pair\" -> \"sending pairs\"\n\n~~~\n\n42.\n\n- /* notify handle methods we're processing a remote transaction */\n+ /* Notify handle methods we're processing a remote transaction. */\n in_streamed_transaction = true;\nChanging this comment seemed unrelated to this patch, so maybe don't do this.\n\n~~~\n\n43.\n\n /*\n- * Initialize the worker's stream_fileset if we haven't yet. This will be\n- * used for the entire duration of the worker so create it in a permanent\n- * context. We create this on the very first streaming message from any\n- * transaction and then use it for this and other streaming transactions.\n- * Now, we could create a fileset at the start of the worker as well but\n- * then we won't be sure that it will ever be used.\n+ * For the first stream start, check if there is any free parallel apply\n+ * worker we can use to process this transaction.\n */\n- if (MyLogicalRepWorker->stream_fileset == NULL)\n+ if (first_segment)\n+ parallel_apply_start_worker(stream_xid);\n\nThis comment update seems misleading. The\nparallel_apply_start_worker() isn't just checking if there is a free\nworker. 
All that free worker logic stuff is *inside* the\nparallel_apply_start_worker() function, so maybe no need to mention\nabout it here at the caller.\n\n~~~\n\n44.\n\n+ case TRANS_PARALLEL_APPLY:\n+ break;\n\nShould this include a comment explaining why there is nothing to do?\n\n~~~\n\n39. - apply_handle_stream_abort\n\n+ /* We receive abort information only when we can apply in parallel. */\n+ if (MyLogicalRepWorker->parallel_apply)\n+ read_abort_info = true;\n\n44a.\nSUGGESTION\nWe receive abort information only when the publisher can support parallel apply.\n\n~\n\n44b.\nWhy not remove the assignment in the declaration, and just write this code as:\nread_abort_info = MyLogicalRepWorker->parallel_apply;\n\n~~~\n\n45.\n\n+ /*\n+ * We are in leader apply worker and the transaction has been\n+ * serialized to file.\n+ */\n+ serialize_stream_abort(xid, subxid);\n\n\"in leader apply worker\" -> \"in the leader apply worker\"\n\n~~~\n\n46. - store_flush_position\n\n/* Skip if not the leader apply worker */\nif (am_parallel_apply_worker())\nreturn;\nI previously wrote something about this and Hou-san gave a reason [3]\nwhy not to change the condition.\n\nBut the comment still does not match the code, because a tablesync\nworker would get past here.\n\nMaybe the comment is wrong?\n\n~~~\n\n47. - InitializeApplyWorker\n\n+/*\n+ * The common initialization for leader apply worker and parallel apply worker.\n+ *\n+ * Initialize the database connection, in-memory subscription and necessary\n+ * config options.\n+ */\n void\n-ApplyWorkerMain(Datum main_arg)\n+InitializeApplyWorker(void)\n\n\"The common initialization\" -> \"Common initialization\"\n\n~~~\n\n48. 
- ApplyWorkerMain\n\n+/* Logical Replication Apply worker entry point */\n+void\n+ApplyWorkerMain(Datum main_arg)\n\n\"Apply worker\" -> \"apply worker\"\n\n~~~\n\n49.\n\n+ /*\n+ * We don't currently need any ResourceOwner in a walreceiver process, but\n+ * if we did, we could call CreateAuxProcessResourceOwner here.\n+ */\n\nI think this comment should have \"XXX\" prefix.\n\n~~~\n\n50.\n\n+ if (server_version >= 160000 &&\n+ MySubscription->stream == SUBSTREAM_PARALLEL)\n+ {\n+ options.proto.logical.streaming = pstrdup(\"parallel\");\n+ MyLogicalRepWorker->parallel_apply = true;\n+ }\n+ else if (server_version >= 140000 &&\n+ MySubscription->stream != SUBSTREAM_OFF)\n+ options.proto.logical.streaming = pstrdup(\"on\");\n+ else\n+ options.proto.logical.streaming = NULL;\n\nIMO it might make more sense for these conditions to be checking the\n'options.proto.logical.proto_version' here instead of checking the\nhardwired server versions. Also, I suggest may be better (for clarity)\nto always assign the parallel_apply member.\n\nSUGGESTION\n\nif (options.proto.logical.proto_version >=\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM &&\nMySubscription->stream == SUBSTREAM_PARALLEL)\n{\noptions.proto.logical.streaming = pstrdup(\"parallel\");\nMyLogicalRepWorker->parallel_apply = true;\n}\nelse if (options.proto.logical.proto_version >=\nLOGICALREP_PROTO_STREAM_VERSION_NUM &&\nMySubscription->stream != SUBSTREAM_OFF)\n{\noptions.proto.logical.streaming = pstrdup(\"on\");\nMyLogicalRepWorker->parallel_apply = false;\n}\nelse\n{\noptions.proto.logical.streaming = NULL;\nMyLogicalRepWorker->parallel_apply = false;\n}\n\n~~~\n\n51. - clear_subscription_skip_lsn\n\n- if (likely(XLogRecPtrIsInvalid(myskiplsn)))\n+ if (likely(XLogRecPtrIsInvalid(myskiplsn)) ||\n+ am_parallel_apply_worker())\n return;\n\nUnnecessary wrapping.\n\n~~~\n\n52. 
- get_transaction_apply_action\n\n+static TransApplyAction\n+get_transaction_apply_action(TransactionId xid,\nParallelApplyWorkerInfo **winfo)\n+{\n+ *winfo = NULL;\n+\n+ if (am_parallel_apply_worker())\n+ {\n+ return TRANS_PARALLEL_APPLY;\n+ }\n+ else if (in_remote_transaction)\n+ {\n+ return TRANS_LEADER_APPLY;\n+ }\n+\n+ /*\n+ * Check if we are processing this transaction using a parallel apply\n+ * worker and if so, send the changes to that worker.\n+ */\n+ else if ((*winfo = parallel_apply_find_worker(xid)))\n+ {\n+ return TRANS_LEADER_SEND_TO_PARALLEL;\n+ }\n+ else\n+ {\n+ return TRANS_LEADER_SERIALIZE;\n+ }\n+}\n\n52a.\nAll these if/else and code blocks seem excessive. It can be simplified\nas follows:\n\nSUGGESTION\n\nstatic TransApplyAction\nget_transaction_apply_action(TransactionId xid, ParallelApplyWorkerInfo **winfo)\n{\n*winfo = NULL;\n\nif (am_parallel_apply_worker())\nreturn TRANS_PARALLEL_APPLY;\n\nif (in_remote_transaction)\nreturn TRANS_LEADER_APPLY;\n\n/*\n* Check if we are processing this transaction using a parallel apply\n* worker and if so, send the changes to that worker.\n*/\nif ((*winfo = parallel_apply_find_worker(xid)))\nreturn TRANS_LEADER_SEND_TO_PARALLEL;\n\nreturn TRANS_LEADER_SERIALIZE;\n}\n\n~\n\n52b.\nCan a tablesync worker ever get here? It might be better to\nAssert(!am_tablesync_worker()); at top of this function?\n\n======\n\nsrc/backend/replication/pgoutput/pgoutput.c\n\n53. 
- pgoutput_startup\n\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"requested proto_version=%d does not support streaming, need\n%d or higher\",\n data->protocol_version, LOGICALREP_PROTO_STREAM_VERSION_NUM)));\n+ else if (data->streaming == SUBSTREAM_PARALLEL &&\n+ data->protocol_version < LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"requested proto_version=%d does not support parallel\nstreaming mode, need %d or higher\",\n+ data->protocol_version, LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));\n\nThe previous error message just says \"streaming\", not \"streaming mode\",\nso for consistency it is better to remove that word \"mode\" IMO.\n\n~~~\n\n54. - pgoutput_stream_abort\n\n- logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn->xid);\n+ logicalrep_write_stream_abort(ctx->out, toptxn->xid, txn->xid,\nabort_lsn, txn->xact_time.abort_time, write_abort_info);\n+\n\nWrapping is needed here.\n\n======\n\nsrc/include/replication/worker_internal.h\n\n55. - LogicalRepWorker\n\n+ /* Indicates whether apply can be performed parallelly. */\n+ bool parallel_apply;\n+\n\n55a.\n\"parallelly\" - ?? is there a better way to phrase this? IMO that is an\nuncommon word.\n\n~\n\n55b.\nIMO this member should be named slightly differently to give a\nbetter feel for what it really means.\n\nMaybe something like one of:\n\"parallel_apply_ok\"\n\"parallel_apply_enabled\"\n\"use_parallel_apply\"\netc?\n\n~~~\n\n56. - ParallelApplyWorkerInfo\n\n+ /*\n+ * Indicates whether the worker is available to be used for parallel apply\n+ * transaction?\n+ */\n+ bool in_use;\n\nAs previously posted [4], this member comment is describing the\nopposite of the member name. (e.g. the comment would be correct if the\nmember was called 'is_available', but it isn't)\n\nSUGGESTION\nTrue if the worker is being used to process a parallel apply\ntransaction.
False indicates this worker is available for re-use.\n\n~~~\n\n57. - am_leader_apply_worker\n\n+static inline bool\n+am_leader_apply_worker(void)\n+{\n+ return (!OidIsValid(MyLogicalRepWorker->relid) &&\n+ !isParallelApplyWorker(MyLogicalRepWorker));\n+}\n\nI wondered if it would be tidier/easier to define this function like\nbelow. The others are inline functions anyhow so it should end up as\nthe same thing, right?\n\nstatic inline bool\nam_leader_apply_worker(void)\n{\nreturn (!am_tablesync_worker() && !am_parallel_apply_worker());\n}\n\n======\n\n58.\n\n--- fail - streaming must be boolean\n+-- fail - streaming must be boolean or 'parallel'\n CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, streaming = foo);\n\nI think there are already tests for explicitly creating/setting the\nsubscription parameter streaming = on/off/parallel\n\nBut what about when there is no value explicitly specified? Shouldn't\nthere also be tests like below to check that *implied* boolean true\nstill works for this enum?\n\nCREATE SUBSCRIPTION ... WITH (streaming)\nALTER SUBSCRIPTION ...
SET (streaming)\n\n------\n[1] My patch snprintfs -\nhttps://www.postgresql.org/message-id/flat/CAHut%2BPsB9hEEU-JHqTUBL3bv--vesUvThYr1-95ZyG5PkF9PQQ%40mail.gmail.com#17abe65e826f48d3d5a1cf5b83ce5271\n[2] My patch GUC C vars -\nhttps://www.postgresql.org/message-id/flat/CAHut%2BPsWxJgmrAvPsw9smFVAvAoyWstO7ttAkAq8NKDhsVNa3Q%40mail.gmail.com#1526a180383a3374ae4d701f25799926\n[3] Houz reply comment #41 -\nhttps://www.postgresql.org/message-id/OS0PR01MB5716E7E5798625AE9437CD6F94439%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[4] Previous review comment #13 -\nhttps://www.postgresql.org/message-id/CAHut%2BPuVjRgGr4saN7qwq0oB8DANHVR7UfDiciB1Q3cYN54F6A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 30 Sep 2022 18:26:37 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Sep 27, 2022 at 9:26 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, September 24, 2022 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Sep 22, 2022 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > >\n> > > On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\n> > > <wangw.fnst@fujitsu.com> wrote:\n> > > >\n> > >\n> > > Few comments on v33-0001\n> > > =======================\n> > >\n> >\n> > Some more comments on v33-0001\n> > =============================\n> > 1.\n> > + /* Information from the corresponding LogicalRepWorker slot. */\n> > + uint16 logicalrep_worker_generation;\n> > +\n> > + int logicalrep_worker_slot_no;\n> > +} ParallelApplyWorkerShared;\n> >\n> > Both these variables are read/changed by leader/parallel workers without\n> > using any lock (mutex). It seems currently there is no problem because of the\n> > way the patch is using in_parallel_apply_xact but I think it won't be a good idea\n> > to rely on it. 
I suggest using mutex to operate on these variables and also check\n> > if the slot_no is in a valid range after reading it in parallel_apply_free_worker,\n> > otherwise error out using elog.\n>\n> Changed.\n>\n> > 2.\n> > static void\n> > apply_handle_stream_stop(StringInfo s)\n> > {\n> > - if (!in_streamed_transaction)\n> > + ParallelApplyWorkerInfo *winfo = NULL; TransApplyAction apply_action;\n> > +\n> > + if (!am_parallel_apply_worker() &&\n> > + (!in_streamed_transaction && !stream_apply_worker))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > errmsg_internal(\"STREAM STOP message without STREAM START\")));\n> >\n> > This check won't be able to detect missing stream start messages for parallel\n> > apply workers apart from the first pair of start/stop. I thought of adding\n> > in_remote_transaction check along with\n> > am_parallel_apply_worker() to detect the same but that also won't work\n> > because the parallel worker doesn't reset it at the stop message.\n> > Another possibility is to introduce yet another variable for this but that doesn't\n> > seem worth it. I would like to keep this check simple.\n> > Can you think of any better way?\n>\n> I feel we can reuse the in_streamed_transaction in parallel apply worker to\n> simplify the check there. I tried to set this flag in parallel apply worker\n> when stream starts and reset it when stream stop so that we can directly check\n> this flag for duplicate stream start message and other related things.\n>\n> > 3. I think we can skip sending start/stop messages from the leader to the\n> > parallel worker because unlike apply worker it will process only one\n> > transaction-at-a-time. However, it is not clear whether that is worth the effort\n> > because it is sent after logical_decoding_work_mem changes. 
For now, I have\n> > added a comment for this in the attached patch but let me know if I am missing\n> > something or if I am wrong.\n>\n> I think the suggested comments look good.\n>\n> > 4.\n> > postgres=# select pid, leader_pid, application_name, backend_type from\n> > pg_stat_activity;\n> >   pid  | leader_pid | application_name |         backend_type\n> > -------+------------+------------------+------------------------------\n> >  27624 |            |                  | logical replication launcher\n> >  17336 |            | psql             | client backend\n> >  26312 |            |                  | logical replication worker\n> >  26376 |            | psql             | client backend\n> >  14004 |            |                  | logical replication worker\n> >\n> > Here, the second worker entry is for the parallel worker. Isn't it better if we\n> > distinguish this by keeping type as a logical replication parallel worker? I think\n> > for this you need to change bgw_type in logicalrep_worker_launch().\n>\n> Changed.\n>\n> > 5. Can we name parallel_apply_subxact_info_add() as\n> > parallel_apply_start_subtrans()?\n> >\n> > Apart from the above, I have added/edited a few comments and made a few\n> > other cosmetic changes in the attached.\n>\n\nWhile looking at the v35 patch, I realized that there are some cases where\nthe logical replication gets stuck depending on the partitioned table\nstructure.
For instance, there are the following tables, publication, and\nsubscription:\n\n* On publisher\ncreate table p (c int) partition by list (c);\ncreate table c1 partition of p for values in (1);\ncreate table c2 (c int);\ncreate publication test_pub for table p, c1, c2 with\n(publish_via_partition_root = 'true');\n\n* On subscriber\ncreate table p (c int) partition by list (c);\ncreate table c1 partition of p for values in (2);\ncreate table c2 partition of p for values in (1);\ncreate subscription test_sub connection 'port=5551 dbname=postgres'\npublication test_pub with (streaming = 'parallel', copy_data =\n'false');\n\nNote that while both the publisher and the subscriber have tables with\nthe same names, the partition structure is different and rows go to a\ndifferent table on the subscriber (e.g., row c=1 will go to the c2 table on\nthe subscriber). If two concurrent transactions are executed as follows,\nthe apply worker (i.e., the leader apply worker) waits for a lock on c2\nheld by its parallel apply worker:\n\n* TX-1\nBEGIN;\nINSERT INTO p SELECT 1 FROM generate_series(1, 10000); --- changes are streamed\n\n * TX-2\n BEGIN;\n TRUNCATE c2; --- wait for a lock on c2\n\n* TX-1\nINSERT INTO p SELECT 1 FROM generate_series(1, 10000);\nCOMMIT;\n\nThis might not be a common case in practice but it could mean that\nthere is a restriction on how partitioned tables should be structured\non the publisher and the subscriber when using streaming = 'parallel'.\nWhen this happens, since the logical replication cannot move forward,\nthe users need to disable parallel-apply mode or increase\nlogical_decoding_work_mem.
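FWIW, when reproducing this, the stuck state can be observed from the subscriber side with standard catalog queries like the ones below (only a sketch; the exact pids and wait events shown will differ per run):

```sql
-- The leader apply worker shows up as a "logical replication" backend
-- blocked waiting on a lock.
SELECT pid, wait_event_type, wait_event, backend_type
FROM pg_stat_activity
WHERE backend_type LIKE 'logical replication%';

-- The ungranted lock on c2, stuck behind the parallel apply worker.
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;
```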
We could describe this limitation in the\ndoc but it would be hard for users to detect a problematic table\nstructure.\n\nBTW, when the leader apply worker waits for a lock on c2 in the above\nexample, the parallel apply worker is in a busy-loop, which should be\nfixed.\n\nRegards,\n\n--\nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Oct 2022 17:06:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Sep 29, 2022 at 3:20 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Hou,\n>\n> Thanks for updating the patch. I will review yours soon, but first I reply to your comment.\n>\n> > > 04. applyparallelworker.c - LogicalParallelApplyLoop()\n> > >\n> > > ```\n> > > + shmq_res = shm_mq_receive(mqh, &len, &data, false);\n> > > ...\n> > > + if (ConfigReloadPending)\n> > > + {\n> > > + ConfigReloadPending = false;\n> > > + ProcessConfigFile(PGC_SIGHUP);\n> > > + }\n> > > ```\n> > >\n> > >\n> > > Here the parallel apply worker waits to receive messages and after dispatching\n> > > it ProcessConfigFile() is called.\n> > > It means that the .conf will not be read until the parallel apply worker receives new\n> > > messages and applies them.\n> > >\n> > > It may be problematic when users change log_min_messages to debugXXX for\n> > > debugging but streamed transactions rarely come.\n> > > They would expect the detailed descriptions to appear in the log from the next\n> > > streaming chunk, but they do not.\n> > >\n> > > This does not occur in the leader worker when it waits for messages from the publisher,\n> > > because it uses libpqrcv_receive(), which works asynchronously.\n> > >\n> > > I'm not sure whether it should be documented that the evaluation of GUCs may\n> > > be delayed, what do you think?\n> >\n> > I changed the
shm_mq_receive to asynchronous mode which is also consistent\n> > with\n> > what we did for Gather node when reading data from parallel query workers.\n>\n> I checked your implementation, but it seemed that the parallel apply worker will not sleep\n> even if there are no messages or signals. It might be very inefficient.\n>\n> In gather node - gather_readnext(), the same way is used, but I think there is a premise\n> that the wait-time is short because it is related to only one gather node.\n> In terms of the parallel apply worker, however, we cannot predict the wait-time because\n> it is related to the streamed transactions. If such transactions rarely come, parallel apply workers may consume a lot of CPU time.\n>\n> I think we should wait for a short time or until the leader notifies us, if shmq_res == SHM_MQ_WOULD_BLOCK.\n> What do you think?\n>\n\nCan't we use WaitLatch in the case of SHM_MQ_WOULD_BLOCK as we are\nusing it for the same case at some other place in the code?
We can use\r\n> the same nap time as we are using in the leader apply worker.\r\n\r\nI'm not sure whether such a short nap time is needed or not.\r\nBecause unlike leader apply worker, parallel apply workers do not have timeout like wal_receiver_timeout,\r\nso they do not have to check so frequently and send feedback to publisher.\r\nBut basically I agree that we can use same logic as leader.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 6 Oct 2022 10:54:23 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Sep 30, 2022 at 1:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for the v35-0001 patch:\n>\n> ======\n>\n> 3. GENERAL\n>\n> (this comment was written after I wrote all the other ones below so\n> there might be some unintended overlaps...)\n>\n> I found the mixed use of the same member names having different\n> meanings to be quite confusing.\n>\n> e.g.1\n> PGOutputData 'streaming' is now a single char internal representation\n> the subscription parameter streaming mode ('f','t','p')\n> - bool streaming;\n> + char streaming;\n>\n> e.g.2\n> WalRcvStreamOptions 'streaming' is a C string version of the\n> subscription streaming mode (\"on\", \"parallel\")\n> - bool streaming; /* Streaming of large transactions */\n> + char *streaming; /* Streaming of large transactions */\n>\n> e.g.3\n> SubOpts 'streaming' is again like the first example - a single char\n> for the mode.\n> - bool streaming;\n> + char streaming;\n>\n>\n> IMO everything would become much simpler if you did:\n>\n> 3a.\n> Rename \"char streaming;\" -> \"char streaming_mode;\"\n>\n> 3b.\n> Re-designed the \"char *streaming;\" code to also use the single char\n> notation, then also call that member 'streaming_mode'. 
Then everything\n> will be consistent.\n>\n\nWon't this impact the previous version publisher which already uses\non/off? We may need to maintain multiple values which would be\nconfusing.\n\n>\n> 9. - parallel_apply_can_start\n>\n> +/*\n> + * Returns true, if it is allowed to start a parallel apply worker, false,\n> + * otherwise.\n> + */\n> +static bool\n> +parallel_apply_can_start(TransactionId xid)\n>\n> (The commas are strange)\n>\n> SUGGESTION\n> Returns true if it is OK to start a parallel apply worker, false otherwise.\n>\n\n+1 for this.\n>\n> 28. - logicalrep_worker_detach\n>\n> + /* Stop the parallel apply workers. */\n> + if (am_leader_apply_worker())\n> + {\n>\n> Should that comment rather say like below?\n>\n> /* If this is the leader apply worker then stop all of its parallel\n> apply workers. */\n>\n\nI think this would be just saying what is apparent from the code, so\nnot sure if it is an improvement.\n\n>\n> 38. - apply_handle_commit_prepared\n>\n> + *\n> + * Note that we don't need to wait here if the transaction was prepared in a\n> + * parallel apply worker. Because we have already waited for the prepare to\n> + * finish in apply_handle_stream_prepare() which will ensure all the operations\n> + * in that transaction have happened in the subscriber and no concurrent\n> + * transaction can create deadlock or transaction dependency issues.\n> */\n> static void\n> apply_handle_commit_prepared(StringInfo s)\n>\n> \"worker. Because\" -> \"worker because\"\n>\n\nI think this will make this line too long. Can we think of breaking it\nin some way?\n\n>\n> 43.\n>\n> /*\n> - * Initialize the worker's stream_fileset if we haven't yet. This will be\n> - * used for the entire duration of the worker so create it in a permanent\n> - * context. 
We create this on the very first streaming message from any\n> - * transaction and then use it for this and other streaming transactions.\n> - * Now, we could create a fileset at the start of the worker as well but\n> - * then we won't be sure that it will ever be used.\n> + * For the first stream start, check if there is any free parallel apply\n> + * worker we can use to process this transaction.\n> */\n> - if (MyLogicalRepWorker->stream_fileset == NULL)\n> + if (first_segment)\n> + parallel_apply_start_worker(stream_xid);\n>\n> This comment update seems misleading. The\n> parallel_apply_start_worker() isn't just checking if there is a free\n> worker. All that free worker logic stuff is *inside* the\n> parallel_apply_start_worker() function, so maybe no need to mention\n> about it here at the caller.\n>\n\nIt will be good to have some comments here instead of completely removing it.\n\n>\n> 39. - apply_handle_stream_abort\n>\n> + /* We receive abort information only when we can apply in parallel. */\n> + if (MyLogicalRepWorker->parallel_apply)\n> + read_abort_info = true;\n>\n> 44a.\n> SUGGESTION\n> We receive abort information only when the publisher can support parallel apply.\n>\n\nThe existing comment seems better to me in this case.\n\n>\n> 55. - LogicalRepWorker\n>\n> + /* Indicates whether apply can be performed parallelly. */\n> + bool parallel_apply;\n> +\n>\n> 55a.\n> \"parallelly\" - ?? is there a better way to phrase this? IMO that is an\n> uncommon word.\n>\n\nHow about \".. 
can be performed in parallel.\"?\n\n> ~\n>\n> 55b.\n> IMO this member name should be named slightly different to give a\n> better feel for what it really means.\n>\n> Maybe something like one of:\n> \"parallel_apply_ok\"\n> \"parallel_apply_enabled\"\n> \"use_parallel_apply\"\n> etc?\n>\n\nThe extra word doesn't seem to be useful here.\n\n> 58.\n>\n> --- fail - streaming must be boolean\n> +-- fail - streaming must be boolean or 'parallel'\n> CREATE SUBSCRIPTION regress_testsub CONNECTION\n> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\n> false, streaming = foo);\n>\n> I think there are tests already for explicitly create/set the\n> subscription parameter streaming = on/off/parallel\n>\n> But what about when there is no value explicitly specified? Shouldn't\n> there also be tests like below to check that *implied* boolean true\n> still works for this enum?\n>\n> CREATE SUBSCRIPTION ... WITH (streaming)\n> ALTER SUBSCRIPTION ... SET (streaming)\n>\n\nI think before adding new tests for this, please check if we have any\nsimilar tests for other boolean options.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 Oct 2022 17:07:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> Sent: Thursday, October 6, 2022 4:07 PM\r\n> To: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\r\n> Cc: Amit Kapila <amit.kapila16@gmail.com>; Wang, Wei/王 威\r\n> <wangw.fnst@fujitsu.com>; Peter Smith <smithpb2250@gmail.com>; Dilip\r\n> Kumar <dilipbalaut@gmail.com>; Shi, Yu/侍 雨 <shiy.fnst@fujitsu.com>;\r\n> PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\n> Subject: Re: Perform streaming logical transactions by background workers and\r\n> parallel apply\r\n> \r\n> On Tue, Sep 27, 2022 at 9:26 PM 
houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Saturday, September 24, 2022 7:40 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Sep 22, 2022 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\r\n> > > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > > >\r\n> > > >\r\n> > > > Few comments on v33-0001\r\n> > > > =======================\r\n> > > >\r\n> > >\r\n> > > Some more comments on v33-0001\r\n> > > =============================\r\n> > > 1.\r\n> > > + /* Information from the corresponding LogicalRepWorker slot. */\r\n> > > + uint16 logicalrep_worker_generation;\r\n> > > +\r\n> > > + int logicalrep_worker_slot_no;\r\n> > > +} ParallelApplyWorkerShared;\r\n> > >\r\n> > > Both these variables are read/changed by leader/parallel workers without\r\n> > > using any lock (mutex). It seems currently there is no problem because of\r\n> the\r\n> > > way the patch is using in_parallel_apply_xact but I think it won't be a good\r\n> idea\r\n> > > to rely on it. I suggest using mutex to operate on these variables and also\r\n> check\r\n> > > if the slot_no is in a valid range after reading it in parallel_apply_free_worker,\r\n> > > otherwise error out using elog.\r\n> >\r\n> > Changed.\r\n> >\r\n> > > 2.\r\n> > > static void\r\n> > > apply_handle_stream_stop(StringInfo s)\r\n> > > {\r\n> > > - if (!in_streamed_transaction)\r\n> > > + ParallelApplyWorkerInfo *winfo = NULL; TransApplyAction apply_action;\r\n> > > +\r\n> > > + if (!am_parallel_apply_worker() &&\r\n> > > + (!in_streamed_transaction && !stream_apply_worker))\r\n> > > ereport(ERROR,\r\n> > > (errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> > > errmsg_internal(\"STREAM STOP message without STREAM START\")));\r\n> > >\r\n> > > This check won't be able to detect missing stream start messages for parallel\r\n> > > apply workers apart from the first pair of start/stop. 
I thought of adding\r\n> > > in_remote_transaction check along with\r\n> > > am_parallel_apply_worker() to detect the same but that also won't work\r\n> > > because the parallel worker doesn't reset it at the stop message.\r\n> > > Another possibility is to introduce yet another variable for this but that\r\n> doesn't\r\n> > > seem worth it. I would like to keep this check simple.\r\n> > > Can you think of any better way?\r\n> >\r\n> > I feel we can reuse the in_streamed_transaction in parallel apply worker to\r\n> > simplify the check there. I tried to set this flag in parallel apply worker\r\n> > when stream starts and reset it when stream stop so that we can directly check\r\n> > this flag for duplicate stream start message and other related things.\r\n> >\r\n> > > 3. I think we can skip sending start/stop messages from the leader to the\r\n> > > parallel worker because unlike apply worker it will process only one\r\n> > > transaction-at-a-time. However, it is not clear whether that is worth the\r\n> effort\r\n> > > because it is sent after logical_decoding_work_mem changes. For now, I have\r\n> > > added a comment for this in the attached patch but let me if I am missing\r\n> > > something or if I am wrong.\r\n> >\r\n> > I the suggested comments look good.\r\n> >\r\n> > > 4.\r\n> > > postgres=# select pid, leader_pid, application_name, backend_type from\r\n> > > pg_stat_activity;\r\n> > > pid | leader_pid | application_name | backend_type\r\n> > > -------+------------+------------------+------------------------------\r\n> > > 27624 | | | logical replication launcher\r\n> > > 17336 | | psql | client backend\r\n> > > 26312 | | | logical replication worker\r\n> > > 26376 | | psql | client backend\r\n> > > 14004 | | | logical replication worker\r\n> > >\r\n> > > Here, the second worker entry is for the parallel worker. Isn't it better if we\r\n> > > distinguish this by keeping type as a logical replication parallel worker? 
I\r\n> think\r\n> > > for this you need to change bgw_type in logicalrep_worker_launch().\r\n> >\r\n> > Changed.\r\n> >\r\n> > > 5. Can we name parallel_apply_subxact_info_add() as\r\n> > > parallel_apply_start_subtrans()?\r\n> > >\r\n> > > Apart from the above, I have added/edited a few comments and made a few\r\n> > > other cosmetic changes in the attached.\r\n> >\r\n> \r\n> While looking at v35 patch, I realized that there are some cases where\r\n> the logical replication gets stuck depending on partitioned table\r\n> structure. For instance, there are following tables, publication, and\r\n> subscription:\r\n> \r\n> * On publisher\r\n> create table p (c int) partition by list (c);\r\n> create table c1 partition of p for values in (1);\r\n> create table c2 (c int);\r\n> create publication test_pub for table p, c1, c2 with\r\n> (publish_via_partition_root = 'true');\r\n> \r\n> * On subscriber\r\n> create table p (c int) partition by list (c);\r\n> create table c1 partition of p for values In (2);\r\n> create table c2 partition of p for values In (1);\r\n> create subscription test_sub connection 'port=5551 dbname=postgres'\r\n> publication test_pub with (streaming = 'parallel', copy_data =\r\n> 'false');\r\n> \r\n> Note that while both the publisher and the subscriber have the same\r\n> name tables the partition structure is different and rows go to a\r\n> different table on the subscriber (eg, row c=1 will go to c2 table on\r\n> the subscriber). 
If two current transactions are executed as follows,\r\n> the apply worker (ig, the leader apply worker) waits for a lock on c2\r\n> held by its parallel apply worker:\r\n> \r\n> * TX-1\r\n> BEGIN;\r\n> INSERT INTO p SELECT 1 FROM generate_series(1, 10000); --- changes are\r\n> streamed\r\n> \r\n> * TX-2\r\n> BEGIN;\r\n> TRUNCATE c2; --- wait for a lock on c2\r\n> \r\n> * TX-1\r\n> INSERT INTO p SELECT 1 FROM generate_series(1, 10000);\r\n> COMMIT;\r\n> \r\n> This might not be a common case in practice but it could mean that\r\n> there is a restriction on how partitioned tables should be structured\r\n> on the publisher and the subscriber when using streaming = 'parallel'.\r\n> When this happens, since the logical replication cannot move forward\r\n> the users need to disable parallel-apply mode or increase\r\n> logical_decoding_work_mem. We could describe this limitation in the\r\n> doc but it would be hard for users to detect problematic table\r\n> structure.\r\n\r\nThanks for testing this!\r\n\r\nI think the root reason for this kind of deadlock problem is the table\r\nstructure difference between publisher and subscriber (similar to the unique\r\ndifference reported earlier[1]). So, I think we'd better disallow this case. For\r\nexample, to avoid the reported problem, we could only support parallel apply if\r\npubviaroot is false on the publisher and the replicated tables' types (relkind) are the\r\nsame between publisher and subscriber.\r\n\r\nAlthough it might restrict some use cases, I think it only restricts the\r\ncases when the partitioned table's structure is different between publisher and\r\nsubscriber. Users can still use parallel apply for cases when the table\r\nstructure is the same between publisher and subscriber, which seems acceptable\r\nto me. And we can also document that the feature is expected to be used for the\r\ncase when the tables' structure is the same. 
Thoughts?\r\n\r\nBTW, to achieve this, we could send the publisher's relkind along with the\r\nRELATION message and compare it with the relkind on the subscriber. We could report an\r\nerror if the publisher's or subscriber's table is a partitioned table.\r\n\r\n> BTW, when the leader apply worker waits for a lock on c2 in the above\r\n> example, the parallel apply worker is in a busy-loop, which should be\r\n> fixed.\r\n\r\nYeah, it seems we used async mode when receiving messages, which caused this.\r\nI plan to improve that part soon.\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoDPHstj%2BjD3ODS-bd1uM%2BZE%3DcpDKf8npeNFZD%2BYdM28fA%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n", "msg_date": "Thu, 6 Oct 2022 12:04:31 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, October 6, 2022 6:54 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Dear Amit,\r\n> \r\n> > Can't we use WaitLatch in the case of SHM_MQ_WOULD_BLOCK as we are\r\n> > using it for the same case at some other place in the code? 
We can use\r\n> > the same nap time as we are using in the leader apply worker.\r\n> \r\n> I'm not sure whether such a short nap time is needed or not.\r\n> Because unlike leader apply worker, parallel apply workers do not have timeout\r\n> like wal_receiver_timeout, so they do not have to check so frequently and send\r\n> feedback to publisher.\r\n> But basically I agree that we can use same logic as leader.\r\n\r\nThanks for the suggestion.\r\n\r\nI tried to add a WaitLatch, but it seems to affect the performance\r\nbecause the Latch might not be set when the leader sends some\r\nmessage to the parallel apply worker, which means it will wait until\r\nthe timeout.\r\n\r\nI feel we'd better change it back to sync mode and do the ProcessConfigFile()\r\nafter receiving the message and before applying the change, which also seems to\r\naddress the problem.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 6 Oct 2022 12:11:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nI put comments for v35-0001.\r\n\r\n01. catalog.sgml\r\n\r\n```\r\n+ Controls how to handle the streaming of in-progress transactions:\r\n+ <literal>f</literal> = disallow streaming of in-progress transactions,\r\n+ <literal>t</literal> = spill the changes of in-progress transactions to\r\n+ disk and apply at once after the transaction is committed on the\r\n+ publisher,\r\n+ <literal>p</literal> = apply changes directly using a parallel apply\r\n+ worker if available (same as 't' if no worker is available)\r\n```\r\n\r\nI'm not sure why 't' means \"spill the changes to file\". Is it a compatibility issue?\r\n\r\n~~~\r\n02. applyworker.c - parallel_apply_stream_abort\r\n\r\nThe argument abort_data is not modified in the function. 
Maybe \"const\" modifier should be added.\r\n(Other functions should also be checked...)\r\n\r\n~~~\r\n03. applyparallelworker.c - parallel_apply_find_worker\r\n\r\n```\r\n+ ParallelApplyWorkerEntry *entry = NULL;\r\n```\r\n\r\nThis may not have to be initialized here.\r\n\r\n~~~\r\n04. applyparallelworker.c - HandleParallelApplyMessages\r\n\r\n```\r\n+ static MemoryContext hpm_context = NULL;\r\n```\r\n\r\nI think \"hpm\" means \"handle parallel message\", so it should be \"hpam\".\r\n\r\n~~~\r\n05. launcher.c - logicalrep_worker_launch()\r\n\r\n```\r\n\tif (is_subworker)\r\n\t\tsnprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication parallel worker\");\r\n\telse\r\n\t\tsnprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication worker\");\r\n```\r\n\r\nI'm not sure why there are only two bgw_type values even though there are three types of apply workers. Is it for compatibility?\r\n\r\n~~~\r\n06. launcher.c - logicalrep_worker_stop_by_slot\r\n\r\nAn assertion like Assert(slot_no >=0 && slot_no < max_logical_replication_workers) should be added at the top of this function.\r\n\r\n~~~\r\n07. launcher.c - logicalrep_worker_stop_internal\r\n\r\n```\r\n+/*\r\n+ * Workhorse for logicalrep_worker_stop(), logicalrep_worker_detach() and\r\n+ * logicalrep_worker_stop_by_slot(). Stop the worker and wait for it to die.\r\n+ */\r\n+static void\r\n+logicalrep_worker_stop_internal(LogicalRepWorker *worker)\r\n```\r\n\r\nI think logicalrep_worker_stop_internal() may not be the \"Workhorse\" for logicalrep_worker_detach(). In that function, the internal function is called only for parallel apply workers, and it does not perform the main part of the detach work. \r\n\r\n~~~\r\n08. worker.c - handle_streamed_transaction()\r\n\r\n```\r\n+ TransactionId current_xid = InvalidTransactionId;\r\n```\r\n\r\nThis initialization is not needed; the variable is not used in non-streaming mode, and otherwise it is assigned before being used.\r\n\r\n~~~\r\n09. 
worker.c - handle_streamed_transaction()\r\n\r\n```\r\n+ case TRANS_PARALLEL_APPLY:\r\n+ /* Define a savepoint for a subxact if needed. */\r\n+ parallel_apply_start_subtrans(current_xid, stream_xid);\r\n+ return false;\r\n```\r\n\r\nBased on the other case blocks, Assert(am_parallel_apply_worker()) may be added at the top of this part.\r\nThe same suggestion applies to other switch-case statements.\r\n\r\n~~~\r\n10. worker.c - apply_handle_stream_start\r\n\r\n```\r\n+ *\r\n+ * XXX We can avoid sending pair of the START/STOP messages to the parallel\r\n+ * worker because unlike apply worker it will process only one\r\n+ * transaction-at-a-time. However, it is not clear whether that is worth the\r\n+ * effort because it is sent after logical_decoding_work_mem changes.\r\n```\r\n\r\nI can understand that the START message is not needed, but is STOP really removable? If the leader does not send STOP to its child, does it lose a chance to change the worker state to IDLE_IN_TRANSACTION? \r\n\r\n~~~\r\n11. worker.c - apply_handle_stream_start\r\n\r\nCurrently the number of received chunks is not counted, but it could be if a variable \"nchunks\" is defined and incremented in apply_handle_stream_start(). This info may be useful for determining an appropriate logical_decoding_work_mem for workloads. What do you think?\r\n\r\n~~~\r\n12. 
worker.c - get_transaction_apply_action\r\n\r\n{} are not needed.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 6 Oct 2022 12:39:35 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\n> Thanks for the suggestion.\r\n> \r\n> I tried to add a WaitLatch, but it seems affect the performance\r\n> because the Latch might not be set when leader send some\r\n> message to parallel apply worker which means it will wait until\r\n> timeout.\r\n\r\nYes, currently the leader does not notify anything.\r\nTo handle that, the leader must set a latch in parallel_apply_send_data().\r\nIt can be done if the leader accesses winfo->shared->logicalrep_worker_slot_no,\r\nand sets a latch for LogicalRepCtxStruct->worker[slot_no].\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n", "msg_date": "Thu, 6 Oct 2022 13:00:05 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, October 6, 2022 9:00 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Dear Hou,\r\n> \r\n> > Thanks for the suggestion.\r\n> >\r\n> > I tried to add a WaitLatch, but it seems affect the performance\r\n> > because the Latch might not be set when leader send some message to\r\n> > parallel apply worker which means it will wait until timeout.\r\n> \r\n> Yes, currently the leader does not notify anything.\r\n> To handle that, the leader must set a latch in parallel_apply_send_data().\r\n> It can be done if the leader accesses winfo->shared->logicalrep_worker_slot_no,\r\n> and sets a latch for LogicalRepCtxStruct->worker[slot_no].\r\n\r\nThanks for the suggestion. 
I think we could do that, but I feel it's not great\r\nto set latch frequently. Besides, to access the LogicalRepCtxStruct->worker[]\r\nwe would need to hold a lock which might also bring some overhead.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 6 Oct 2022 14:21:10 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 6, 2022 at 10:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 30, 2022 at 1:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for the v35-0001 patch:\n> >\n> > ======\n> >\n> > 3. GENERAL\n> >\n> > (this comment was written after I wrote all the other ones below so\n> > there might be some unintended overlaps...)\n> >\n> > I found the mixed use of the same member names having different\n> > meanings to be quite confusing.\n> >\n> > e.g.1\n> > PGOutputData 'streaming' is now a single char internal representation\n> > the subscription parameter streaming mode ('f','t','p')\n> > - bool streaming;\n> > + char streaming;\n> >\n> > e.g.2\n> > WalRcvStreamOptions 'streaming' is a C string version of the\n> > subscription streaming mode (\"on\", \"parallel\")\n> > - bool streaming; /* Streaming of large transactions */\n> > + char *streaming; /* Streaming of large transactions */\n> >\n> > e.g.3\n> > SubOpts 'streaming' is again like the first example - a single char\n> > for the mode.\n> > - bool streaming;\n> > + char streaming;\n> >\n> >\n> > IMO everything would become much simpler if you did:\n> >\n> > 3a.\n> > Rename \"char streaming;\" -> \"char streaming_mode;\"\n> >\n> > 3b.\n> > Re-designed the \"char *streaming;\" code to also use the single char\n> > notation, then also call that member 'streaming_mode'. 
Then everything\n> > will be consistent.\n> >\n>\n> Won't this impact the previous version publisher which already uses\n> on/off? We may need to maintain multiple values which would be\n> confusing.\n>\n\nI only meant that the *internal* struct member names mentioned could\nchange - not anything exposed as user-visible parameter names or\ncolumn names etc. Or were you referring to it as causing unnecessary\ntroubles for back-patching? Anyway, the main point of this review\ncomment was #3b. Unless I am mistaken, there is no reason why that one\ncannot be changed to use 'char' instead of 'char *', for consistency\nacross all the same named members.\n\n> >\n> > 9. - parallel_apply_can_start\n> >\n> > +/*\n> > + * Returns true, if it is allowed to start a parallel apply worker, false,\n> > + * otherwise.\n> > + */\n> > +static bool\n> > +parallel_apply_can_start(TransactionId xid)\n> >\n> > (The commas are strange)\n> >\n> > SUGGESTION\n> > Returns true if it is OK to start a parallel apply worker, false otherwise.\n> >\n>\n> +1 for this.\n> >\n> > 28. - logicalrep_worker_detach\n> >\n> > + /* Stop the parallel apply workers. */\n> > + if (am_leader_apply_worker())\n> > + {\n> >\n> > Should that comment rather say like below?\n> >\n> > /* If this is the leader apply worker then stop all of its parallel\n> > apply workers. */\n> >\n>\n> I think this would be just saying what is apparent from the code, so\n> not sure if it is an improvement.\n>\n> >\n> > 38. - apply_handle_commit_prepared\n> >\n> > + *\n> > + * Note that we don't need to wait here if the transaction was prepared in a\n> > + * parallel apply worker. 
Because we have already waited for the prepare to\n> > + * finish in apply_handle_stream_prepare() which will ensure all the operations\n> > + * in that transaction have happened in the subscriber and no concurrent\n> > + * transaction can create deadlock or transaction dependency issues.\n> > */\n> > static void\n> > apply_handle_commit_prepared(StringInfo s)\n> >\n> > \"worker. Because\" -> \"worker because\"\n> >\n>\n> I think this will make this line too long. Can we think of breaking it\n> in some way?\n\nOK, how about below:\n\nNote that we don't need to wait here if the transaction was prepared\nin a parallel apply worker. In that case, we have already waited for\nthe prepare to finish in apply_handle_stream_prepare() which will\nensure all the operations in that transaction have happened in the\nsubscriber, so no concurrent transaction can cause deadlock or\ntransaction dependency issues.\n\n>\n> >\n> > 43.\n> >\n> > /*\n> > - * Initialize the worker's stream_fileset if we haven't yet. This will be\n> > - * used for the entire duration of the worker so create it in a permanent\n> > - * context. We create this on the very first streaming message from any\n> > - * transaction and then use it for this and other streaming transactions.\n> > - * Now, we could create a fileset at the start of the worker as well but\n> > - * then we won't be sure that it will ever be used.\n> > + * For the first stream start, check if there is any free parallel apply\n> > + * worker we can use to process this transaction.\n> > */\n> > - if (MyLogicalRepWorker->stream_fileset == NULL)\n> > + if (first_segment)\n> > + parallel_apply_start_worker(stream_xid);\n> >\n> > This comment update seems misleading. The\n> > parallel_apply_start_worker() isn't just checking if there is a free\n> > worker. 
All that free worker logic stuff is *inside* the\n> > parallel_apply_start_worker() function, so maybe no need to mention\n> > about it here at the caller.\n> >\n>\n> It will be good to have some comments here instead of completely removing it.\n>\n> >\n> > 39. - apply_handle_stream_abort\n> >\n> > + /* We receive abort information only when we can apply in parallel. */\n> > + if (MyLogicalRepWorker->parallel_apply)\n> > + read_abort_info = true;\n> >\n> > 44a.\n> > SUGGESTION\n> > We receive abort information only when the publisher can support parallel apply.\n> >\n>\n> The existing comment seems better to me in this case.\n>\n> >\n> > 55. - LogicalRepWorker\n> >\n> > + /* Indicates whether apply can be performed parallelly. */\n> > + bool parallel_apply;\n> > +\n> >\n> > 55a.\n> > \"parallelly\" - ?? is there a better way to phrase this? IMO that is an\n> > uncommon word.\n> >\n>\n> How about \".. can be performed in parallel.\"?\n>\n> > ~\n> >\n> > 55b.\n> > IMO this member name should be named slightly different to give a\n> > better feel for what it really means.\n> >\n> > Maybe something like one of:\n> > \"parallel_apply_ok\"\n> > \"parallel_apply_enabled\"\n> > \"use_parallel_apply\"\n> > etc?\n> >\n>\n> The extra word doesn't seem to be useful here.\n>\n> > 58.\n> >\n> > --- fail - streaming must be boolean\n> > +-- fail - streaming must be boolean or 'parallel'\n> > CREATE SUBSCRIPTION regress_testsub CONNECTION\n> > 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\n> > false, streaming = foo);\n> >\n> > I think there are tests already for explicitly create/set the\n> > subscription parameter streaming = on/off/parallel\n> >\n> > But what about when there is no value explicitly specified? Shouldn't\n> > there also be tests like below to check that *implied* boolean true\n> > still works for this enum?\n> >\n> > CREATE SUBSCRIPTION ... WITH (streaming)\n> > ALTER SUBSCRIPTION ... 
SET (streaming)\n> >\n>\n> I think before adding new tests for this, please check if we have any\n> similar tests for other boolean options.\n\nIMO this one is a bit different because it's not really a boolean\noption anymore - it's a kind of a hybrid boolean/enum. That's why I\nthought this ought to be tested regardless if there are existing tests\nfor the (normal) boolean options.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 7 Oct 2022 14:08:07 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 6, 2022 at 9:04 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n>\n> > -----Original Message-----\n> > From: Masahiko Sawada <sawada.mshk@gmail.com>\n> > Sent: Thursday, October 6, 2022 4:07 PM\n> > To: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\n> > Cc: Amit Kapila <amit.kapila16@gmail.com>; Wang, Wei/王 威\n> > <wangw.fnst@fujitsu.com>; Peter Smith <smithpb2250@gmail.com>; Dilip\n> > Kumar <dilipbalaut@gmail.com>; Shi, Yu/侍 雨 <shiy.fnst@fujitsu.com>;\n> > PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\n> > Subject: Re: Perform streaming logical transactions by background workers and\n> > parallel apply\n> >\n> > On Tue, Sep 27, 2022 at 9:26 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Saturday, September 24, 2022 7:40 PM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 22, 2022 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > On Thu, Sep 22, 2022 at 8:59 AM wangw.fnst@fujitsu.com\n> > > > > <wangw.fnst@fujitsu.com> wrote:\n> > > > > >\n> > > > >\n> > > > > Few comments on v33-0001\n> > > > > =======================\n> > > > >\n> > > >\n> > > > Some more comments on v33-0001\n> > > > =============================\n> > > > 1.\n> > > > + /* 
Information from the corresponding LogicalRepWorker slot. */\n> > > > + uint16 logicalrep_worker_generation;\n> > > > +\n> > > > + int logicalrep_worker_slot_no;\n> > > > +} ParallelApplyWorkerShared;\n> > > >\n> > > > Both these variables are read/changed by leader/parallel workers without\n> > > > using any lock (mutex). It seems currently there is no problem because of\n> > the\n> > > > way the patch is using in_parallel_apply_xact but I think it won't be a good\n> > idea\n> > > > to rely on it. I suggest using mutex to operate on these variables and also\n> > check\n> > > > if the slot_no is in a valid range after reading it in parallel_apply_free_worker,\n> > > > otherwise error out using elog.\n> > >\n> > > Changed.\n> > >\n> > > > 2.\n> > > > static void\n> > > > apply_handle_stream_stop(StringInfo s)\n> > > > {\n> > > > - if (!in_streamed_transaction)\n> > > > + ParallelApplyWorkerInfo *winfo = NULL; TransApplyAction apply_action;\n> > > > +\n> > > > + if (!am_parallel_apply_worker() &&\n> > > > + (!in_streamed_transaction && !stream_apply_worker))\n> > > > ereport(ERROR,\n> > > > (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > > errmsg_internal(\"STREAM STOP message without STREAM START\")));\n> > > >\n> > > > This check won't be able to detect missing stream start messages for parallel\n> > > > apply workers apart from the first pair of start/stop. I thought of adding\n> > > > in_remote_transaction check along with\n> > > > am_parallel_apply_worker() to detect the same but that also won't work\n> > > > because the parallel worker doesn't reset it at the stop message.\n> > > > Another possibility is to introduce yet another variable for this but that\n> > doesn't\n> > > > seem worth it. I would like to keep this check simple.\n> > > > Can you think of any better way?\n> > >\n> > > I feel we can reuse the in_streamed_transaction in parallel apply worker to\n> > > simplify the check there. 
I tried to set this flag in parallel apply worker\n> > when stream starts and reset it when stream stop so that we can directly check\n> > this flag for duplicate stream start message and other related things.\n> >\n> > > 3. I think we can skip sending start/stop messages from the leader to the\n> > > parallel worker because unlike apply worker it will process only one\n> > > transaction-at-a-time. However, it is not clear whether that is worth the\n> > effort\n> > > because it is sent after logical_decoding_work_mem changes. For now, I have\n> > > added a comment for this in the attached patch but let me know if I am missing\n> > > something or if I am wrong.\n> >\n> > I think the suggested comments look good.\n> >\n> > > 4.\n> > > postgres=# select pid, leader_pid, application_name, backend_type from\n> > > pg_stat_activity;\n> > > pid | leader_pid | application_name | backend_type\n> > > -------+------------+------------------+------------------------------\n> > > 27624 | | | logical replication launcher\n> > > 17336 | | psql | client backend\n> > > 26312 | | | logical replication worker\n> > > 26376 | | psql | client backend\n> > > 14004 | | | logical replication worker\n> > >\n> > > Here, the second worker entry is for the parallel worker. Isn't it better if we\n> > > distinguish this by keeping type as a logical replication parallel worker? I\n> > think\n> > > for this you need to change bgw_type in logicalrep_worker_launch().\n> >\n> > Changed.\n> >\n> > > 5. Can we name parallel_apply_subxact_info_add() as\n> > > parallel_apply_start_subtrans()?\n> > >\n> > > Apart from the above, I have added/edited a few comments and made a few\n> > > other cosmetic changes in the attached.\n> >\n> >\n> > While looking at v35 patch, I realized that there are some cases where\n> > the logical replication gets stuck depending on partitioned table\n> > structure. 
For instance, there are following tables, publication, and\n> > subscription:\n> >\n> > * On publisher\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values in (1);\n> > create table c2 (c int);\n> > create publication test_pub for table p, c1, c2 with\n> > (publish_via_partition_root = 'true');\n> >\n> > * On subscriber\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values In (2);\n> > create table c2 partition of p for values In (1);\n> > create subscription test_sub connection 'port=5551 dbname=postgres'\n> > publication test_pub with (streaming = 'parallel', copy_data =\n> > 'false');\n> >\n> > Note that while both the publisher and the subscriber have the same\n> > name tables the partition structure is different and rows go to a\n> > different table on the subscriber (eg, row c=1 will go to c2 table on\n> > the subscriber). If two concurrent transactions are executed as follows,\n> > the apply worker (i.e., the leader apply worker) waits for a lock on c2\n> > held by its parallel apply worker:\n> >\n> > * TX-1\n> > BEGIN;\n> > INSERT INTO p SELECT 1 FROM generate_series(1, 10000); --- changes are\n> > streamed\n> >\n> > * TX-2\n> > BEGIN;\n> > TRUNCATE c2; --- wait for a lock on c2\n> >\n> > * TX-1\n> > INSERT INTO p SELECT 1 FROM generate_series(1, 10000);\n> > COMMIT;\n> >\n> > This might not be a common case in practice but it could mean that\n> > there is a restriction on how partitioned tables should be structured\n> > on the publisher and the subscriber when using streaming = 'parallel'.\n> > When this happens, since the logical replication cannot move forward\n> > the users need to disable parallel-apply mode or increase\n> > logical_decoding_work_mem. 
We could describe this limitation in the\n> > doc but it would be hard for users to detect problematic table\n> > structure.\n>\n> Thanks for testing this!\n>\n> I think the root reason for this kind of deadlock problems is the table\n> structure difference between publisher and subscriber(similar to the unique\n> difference reported earlier[1]). So, I think we'd better disallow this case. For\n> example to avoid the reported problem, we could only support parallel apply if\n> pubviaroot is false on publisher and replicated tables' types(relkind) are the\n> same between publisher and subscriber.\n>\n> Although it might restrict some use cases, but I think it only restrict the\n> cases when the partitioned table's structure is different between publisher and\n> subscriber. User can still use parallel apply for cases when the table\n> structure is the same between publisher and subscriber which seems acceptable\n> to me. And we can also document that the feature is expected to be used for the\n> case when tables' structure are the same. Thoughts ?\n\nI'm concerned that it could be a big restriction for users. Having\ndifferent partitioned table's structures on the publisher and the\nsubscriber is quite common use cases.\n\n From the feature perspective, the root cause seems to be the fact that\nthe apply worker does both receiving and applying changes. Since it\ncannot receive the subsequent messages while waiting for a lock on a\ntable, the parallel apply worker also cannot move forward. If we have\na dedicated receiver process, it can off-load the messages to the\nworker while another process waiting for a lock. 
So I think that\nseparating receiver and apply worker could be a building block for\nparallel-apply.\n\nRegards,\n\n--\nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 12:17:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 7, 2022 at 8:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 6, 2022 at 9:04 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > I think the root reason for this kind of deadlock problems is the table\n> > structure difference between publisher and subscriber(similar to the unique\n> > difference reported earlier[1]). So, I think we'd better disallow this case. For\n> > example to avoid the reported problem, we could only support parallel apply if\n> > pubviaroot is false on publisher and replicated tables' types(relkind) are the\n> > same between publisher and subscriber.\n> >\n> > Although it might restrict some use cases, but I think it only restrict the\n> > cases when the partitioned table's structure is different between publisher and\n> > subscriber. User can still use parallel apply for cases when the table\n> > structure is the same between publisher and subscriber which seems acceptable\n> > to me. And we can also document that the feature is expected to be used for the\n> > case when tables' structure are the same. Thoughts ?\n>\n> I'm concerned that it could be a big restriction for users. Having\n> different partitioned table's structures on the publisher and the\n> subscriber is quite common use cases.\n>\n> From the feature perspective, the root cause seems to be the fact that\n> the apply worker does both receiving and applying changes. 
Since it\n> cannot receive the subsequent messages while waiting for a lock on a\n> table, the parallel apply worker also cannot move forward. If we have\n> a dedicated receiver process, it can off-load the messages to the\n> worker while another process waiting for a lock. So I think that\n> separating receiver and apply worker could be a building block for\n> parallel-apply.\n>\n\nI think the disadvantage that comes to mind is the overhead of passing\nmessages between receiver and applier processes even for non-parallel\ncases. Now, I don't think it is advisable to have separate handling\nfor non-parallel cases. The other thing is that we need to someway\ndeal with feedback messages which helps to move synchronous replicas\nand update subscriber's progress which in turn helps to keep the\nrestart point updated. These messages also act as heartbeat messages\nbetween walsender and walapply process.\n\nTo deal with this, one idea is that we can have two connections to\nwalsender process, one with walreceiver and the other with walapply\nprocess which according to me could lead to a big increase in resource\nconsumption and it will bring another set of complexities in the\nsystem. Now, in this, I think we have two possibilities, (a) The first\none is that we pass all messages to the leader apply worker and then\nit decides whether to execute serially or pass it to the parallel\napply worker. However, that can again deadlock in the truncate\nscenario we discussed because the main apply worker won't be able to\nreceive new messages once it is blocked at the truncate command. 
(b)\nThe second one is walreceiver process itself takes care of passing\nstreaming transactions to parallel apply workers but if we do that\nthen walreceiver needs to wait at the transaction end to maintain\ncommit order which means it can also lead to deadlock in case the\ntruncate happens in a streaming xact.\n\nThe other alternative is that we allow walreceiver process to wait for\napply process to finish transaction and send the feedback but that\nseems to be again an overhead if we have to do it even for small\ntransactions, especially it can delay sync replication cases. Even, if\nwe don't consider overhead, it can still lead to a deadlock because\nwalreceiver won't be able to move in the scenario we are discussing.\n\nAbout your point that having different partition structures for\npublisher and subscriber, I don't know how common it will be once we\nhave DDL replication. Also, the default value of\npublish_via_partition_root is false which doesn't seem to indicate\nthat this is a quite common case.\n\nWe have fixed quite a few issues in this area in the last release or\ntwo which were found during development, so not sure if these are used\nquite often in the field but it could just be a coincidence. 
Also, it\nwill only matter if there are large transactions that perform on such\ntables which I don't think will be easy to predict whether those are\ncommon or not.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Oct 2022 10:30:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 7, 2022 at 8:38 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Oct 6, 2022 at 10:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 30, 2022 at 1:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Here are my review comments for the v35-0001 patch:\n> > >\n> > > ======\n> > >\n> > > 3. GENERAL\n> > >\n> > > (this comment was written after I wrote all the other ones below so\n> > > there might be some unintended overlaps...)\n> > >\n> > > I found the mixed use of the same member names having different\n> > > meanings to be quite confusing.\n> > >\n> > > e.g.1\n> > > PGOutputData 'streaming' is now a single char internal representation\n> > > the subscription parameter streaming mode ('f','t','p')\n> > > - bool streaming;\n> > > + char streaming;\n> > >\n> > > e.g.2\n> > > WalRcvStreamOptions 'streaming' is a C string version of the\n> > > subscription streaming mode (\"on\", \"parallel\")\n> > > - bool streaming; /* Streaming of large transactions */\n> > > + char *streaming; /* Streaming of large transactions */\n> > >\n> > > e.g.3\n> > > SubOpts 'streaming' is again like the first example - a single char\n> > > for the mode.\n> > > - bool streaming;\n> > > + char streaming;\n> > >\n> > >\n> > > IMO everything would become much simpler if you did:\n> > >\n> > > 3a.\n> > > Rename \"char streaming;\" -> \"char streaming_mode;\"\n> > >\n> > > 3b.\n> > > Re-designed the \"char *streaming;\" code to also use the single char\n> > > notation, then also call that 
member 'streaming_mode'. Then everything\n> > > will be consistent.\n> > >\n> >\n> > Won't this impact the previous version publisher which already uses\n> > on/off? We may need to maintain multiple values which would be\n> > confusing.\n> >\n>\n> I only meant that the *internal* struct member names mentioned could\n> change - not anything exposed as user-visible parameter names or\n> column names etc. Or were you referring to it as causing unnecessary\n> troubles for back-patching? Anyway, the main point of this review\n> comment was #3b.\n>\n\nMy response was for 3b only.\n\n> Unless I am mistaken, there is no reason why that one\n> cannot be changed to use 'char' instead of 'char *', for consistency\n> across all the same named members.\n>\n\nI feel this will bring more complexity to the code if you have to keep\nit working with old-version publishers.\n\n> > >\n> > > 9. - parallel_apply_can_start\n> > >\n> > > +/*\n> > > + * Returns true, if it is allowed to start a parallel apply worker, false,\n> > > + * otherwise.\n> > > + */\n> > > +static bool\n> > > +parallel_apply_can_start(TransactionId xid)\n> > >\n> > > (The commas are strange)\n> > >\n> > > SUGGESTION\n> > > Returns true if it is OK to start a parallel apply worker, false otherwise.\n> > >\n> >\n> > +1 for this.\n> > >\n> > > 28. - logicalrep_worker_detach\n> > >\n> > > + /* Stop the parallel apply workers. */\n> > > + if (am_leader_apply_worker())\n> > > + {\n> > >\n> > > Should that comment rather say like below?\n> > >\n> > > /* If this is the leader apply worker then stop all of its parallel\n> > > apply workers. */\n> > >\n> >\n> > I think this would be just saying what is apparent from the code, so\n> > not sure if it is an improvement.\n> >\n> > >\n> > > 38. - apply_handle_commit_prepared\n> > >\n> > > + *\n> > > + * Note that we don't need to wait here if the transaction was prepared in a\n> > > + * parallel apply worker. 
Because we have already waited for the prepare to\n> > > + * finish in apply_handle_stream_prepare() which will ensure all the operations\n> > > + * in that transaction have happened in the subscriber and no concurrent\n> > > + * transaction can create deadlock or transaction dependency issues.\n> > > */\n> > > static void\n> > > apply_handle_commit_prepared(StringInfo s)\n> > >\n> > > \"worker. Because\" -> \"worker because\"\n> > >\n> >\n> > I think this will make this line too long. Can we think of breaking it\n> > in some way?\n>\n> OK, how about below:\n>\n> Note that we don't need to wait here if the transaction was prepared\n> in a parallel apply worker. In that case, we have already waited for\n> the prepare to finish in apply_handle_stream_prepare() which will\n> ensure all the operations in that transaction have happened in the\n> subscriber, so no concurrent transaction can cause deadlock or\n> transaction dependency issues.\n>\n\nYeah, this looks better.\n\n> >\n> > > 58.\n> > >\n> > > --- fail - streaming must be boolean\n> > > +-- fail - streaming must be boolean or 'parallel'\n> > > CREATE SUBSCRIPTION regress_testsub CONNECTION\n> > > 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\n> > > false, streaming = foo);\n> > >\n> > > I think there are tests already for explicitly create/set the\n> > > subscription parameter streaming = on/off/parallel\n> > >\n> > > But what about when there is no value explicitly specified? Shouldn't\n> > > there also be tests like below to check that *implied* boolean true\n> > > still works for this enum?\n> > >\n> > > CREATE SUBSCRIPTION ... WITH (streaming)\n> > > ALTER SUBSCRIPTION ... SET (streaming)\n> > >\n> >\n> > I think before adding new tests for this, please check if we have any\n> > similar tests for other boolean options.\n>\n> IMO this one is a bit different because it's not really a boolean\n> option anymore - it's a kind of a hybrid boolean/enum. 
That's why I\n> thought this ought to be tested regardless if there are existing tests\n> for the (normal) boolean options.\n>\n\nI am not really sure if adding such tests are valuable but if Hou-San\nand you feel it is good to have it then I am fine with it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Oct 2022 10:44:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, October 6, 2022 8:40 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Dear Hou,\r\n> \r\n> I put comments for v35-0001.\r\n\r\nThanks for the comments.\r\n\r\n> 01. catalog.sgml\r\n> \r\n> ```\r\n> + Controls how to handle the streaming of in-progress transactions:\r\n> + <literal>f</literal> = disallow streaming of in-progress transactions,\r\n> + <literal>t</literal> = spill the changes of in-progress transactions to\r\n> + disk and apply at once after the transaction is committed on the\r\n> + publisher,\r\n> + <literal>p</literal> = apply changes directly using a parallel apply\r\n> + worker if available (same as 't' if no worker is available)\r\n> ```\r\n> \r\n> I'm not sure why 't' means \"spill the changes to file\". Is it compatibility issue?\r\n\r\nYes, I think it would be better to be consistent with previous version.\r\n\r\n> ~~~\r\n> 02. applyworker.c - parallel_apply_stream_abort\r\n> \r\n> The argument abort_data is not modified in the function. Maybe \"const\"\r\n> modifier should be added.\r\n> (Other functions should be also checked...)\r\n\r\nI am not sure is it necessary to add the const here as I didn’t\r\nfind many similar style codes.\r\n\r\n> ~~~\r\n> 03. applyparallelworker.c - parallel_apply_find_worker\r\n> \r\n> ```\r\n> + ParallelApplyWorkerEntry *entry = NULL;\r\n> ```\r\n> \r\n> This may not have to be initialized here.\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> 04. 
applyparallelworker.c - HandleParallelApplyMessages\r\n> \r\n> ```\r\n> + static MemoryContext hpm_context = NULL;\r\n> ```\r\n> \r\n> I think \"hpm\" means \"handle parallel message\", so it should be \"hpam\".\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> 05. launcher.c - logicalrep_worker_launch()\r\n> \r\n> ```\r\n> \tif (is_subworker)\r\n> \t\tsnprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication\r\n> parallel worker\");\r\n> \telse\r\n> \t\tsnprintf(bgw.bgw_type, BGW_MAXLEN, \"logical replication\r\n> worker\"); ```\r\n> \r\n> I'm not sure why there are only bgw_type even if there are three types of apply\r\n> workers. Is it for compatibility?\r\n\r\nYeah, It's for compatibility.\r\n\r\n> ~~~\r\n> 06. launcher.c - logicalrep_worker_stop_by_slot\r\n> \r\n> An assertion like Assert(slot_no >=0 && slot_no <\r\n> max_logical_replication_workers) should be added at the top of this function.\r\n>\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> 07. launcher.c - logicalrep_worker_stop_internal\r\n> \r\n> ```\r\n> +/*\r\n> + * Workhorse for logicalrep_worker_stop(), logicalrep_worker_detach()\r\n> +and\r\n> + * logicalrep_worker_stop_by_slot(). Stop the worker and wait for it to die.\r\n> + */\r\n> +static void\r\n> +logicalrep_worker_stop_internal(LogicalRepWorker *worker)\r\n> ```\r\n> \r\n> I think logicalrep_worker_stop_internal() may be not \"Workhorse\" for\r\n> logicalrep_worker_detach(). In the function internal function is called for\r\n> parallel apply worker, and it does not main part of the detach function.\r\n> \r\n> ~~~\r\n> 08. worker.c - handle_streamed_transaction()\r\n> \r\n> ```\r\n> + TransactionId current_xid = InvalidTransactionId;\r\n> ```\r\n> \r\n> This initialization is not needed. This is not used in non-streaming mode,\r\n> otherwise it is substituted before used.\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> 09. worker.c - handle_streamed_transaction()\r\n> \r\n> ```\r\n> + case TRANS_PARALLEL_APPLY:\r\n> + /* Define a savepoint for a subxact if needed. 
*/\r\n> + parallel_apply_start_subtrans(current_xid, stream_xid);\r\n> + return false;\r\n> ```\r\n> \r\n> Based on other case-block, Assert(am_parallel_apply_worker()) may be added\r\n> at the top of this part.\r\n> This suggestion can be said for other swith-case statements.\r\n\r\nI feel the apply_action is returned by the nearby\r\nget_transaction_apply_action() function call which means it can only be in\r\nparallel apply worker here. So, I am not sure if the assert is necessary or not.\r\n\r\n> ~~~\r\n> 10. worker.c - apply_handle_stream_start\r\n> \r\n> ```\r\n> + *\r\n> + * XXX We can avoid sending pair of the START/STOP messages to the\r\n> + parallel\r\n> + * worker because unlike apply worker it will process only one\r\n> + * transaction-at-a-time. However, it is not clear whether that is\r\n> + worth the\r\n> + * effort because it is sent after logical_decoding_work_mem changes.\r\n> ```\r\n> \r\n> I can understand that START message is not needed, but is STOP really\r\n> removable? If leader does not send STOP to its child, does it lose a chance to\r\n> change the worker-state to IDLE_IN_TRANSACTION?\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> 11. worker.c - apply_handle_stream_start\r\n> \r\n> Currently the number of received chunks have not counted, but it can do if a\r\n> variable \"nchunks\" is defined and incremented in apply_handle_stream_start().\r\n> This this info may be useful to determine appropriate\r\n> logical_decoding_work_mem for workloads. How do you think?\r\n\r\nSince we don't have similar DEBUG message for \"streaming=on\" mode, so I feel\r\nmaybe we can leave this for now and add them later as a separate patch if needed.\r\n\r\n> ~~~\r\n> 12. 
worker.c - get_transaction_apply_action\r\n> \r\n> {} are not needed.\r\n\r\nI am fine with either style here, so I didn’t change this.\r\n\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 7 Oct 2022 06:15:06 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, September 30, 2022 4:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for the v35-0001 patch:\r\n\r\nThanks for the comments.\r\n\r\n\r\n> 3. GENERAL\r\n> I found the mixed use of the same member names having different meanings to be quite confusing.\r\n> \r\n> e.g.1\r\n> PGOutputData 'streaming' is now a single char internal representation the subscription parameter streaming mode ('f','t','p')\r\n> - bool streaming;\r\n> + char streaming;\r\n> \r\n> e.g.2\r\n> WalRcvStreamOptions 'streaming' is a C string version of the subscription streaming mode (\"on\", \"parallel\")\r\n> - bool streaming; /* Streaming of large transactions */\r\n> + char *streaming; /* Streaming of large transactions */\r\n> \r\n> e.g.3\r\n> SubOpts 'streaming' is again like the first example - a single char for the mode.\r\n> - bool streaming;\r\n> + char streaming;\r\n> \r\n> \r\n> IMO everything would become much simpler if you did:\r\n> \r\n> 3a.\r\n> Rename \"char streaming;\" -> \"char streaming_mode;\"\r\n\r\nThe word 'streaming' is the same as the actual option name, so personally I think it's fine.\r\nBut if others also agreed that the name can be improved, I can change it.\r\n\r\n> \r\n> 3b. Re-designed the \"char *streaming;\" code to also use the single char\r\n> notation, then also call that member 'streaming_mode'. 
Then everything will\r\n> be > consistent.\r\n\r\nIf we use single byte(char) here we would need to compare it with the standard\r\nstreaming option value in libpqwalreceiver.c which was suggested not to do[1].\r\n\r\n\r\n> 4. - max_parallel_apply_workers_per_subscription\r\n> + </para>\r\n> + <para>\r\n> + The parallel apply workers are taken from the pool defined by\r\n> + <varname>max_logical_replication_workers</varname>.\r\n> + </para>\r\n> + <para>\r\n> + The default value is 2. This parameter can only be set in the\r\n> + <filename>postgresql.conf</filename> file or on the server command\r\n> + line.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> \r\n> I felt that maybe this should also xref to the\r\n> doc/src/sgml/logical-replication.sgml section where you say about\r\n> \"max_logical_replication_workers should be increased according to the\r\n> desired number of parallel apply workers.\"\r\n\r\nNot sure about this as we don't have similar thing in the document of\r\nmax_logical_replication_workers and max_sync_workers_per_subscription.\r\n\r\n\r\n> ======\r\n> \r\n> 7. src/backend/access/transam/xact.c - RecordTransactionAbort\r\n> \r\n> \r\n> + /*\r\n> + * Are we using the replication origins feature? Or, in other words, \r\n> + are\r\n> + * we replaying remote actions?\r\n> + */\r\n> + replorigin = (replorigin_session_origin != InvalidRepOriginId &&\r\n> + replorigin_session_origin != DoNotReplicateId);\r\n> \r\n> \"Or, in other words,\" -> \"In other words,\"\r\n\r\nI think it is better to keep consistent with the comments in function\r\nRecordTransactionCommit.\r\n\r\n\r\n> 10b.\r\n> IMO this flag might be better to be called 'parallel_apply_enabled' or something similar.\r\n> (see also review comment #55b.)\r\n\r\nNot sure about this.\r\n\r\n> 12. 
- parallel_apply_free_worker\r\n> \r\n> + SpinLockAcquire(&winfo->shared->mutex);\r\n> + slot_no = winfo->shared->logicalrep_worker_slot_no;\r\n> + generation = winfo->shared->logicalrep_worker_generation;\r\n> + SpinLockRelease(&winfo->shared->mutex);\r\n> \r\n> I know there are not many places doing this, but do you think it might be\r\n> worth introducing some new set/get function to encapsulate the set/get of the\r\n> >generation/slot so it does the mutex spin-locks in common code?\r\n\r\nNot sure about this.\r\n\r\n> 13. - LogicalParallelApplyLoop\r\n> \r\n> + /*\r\n> + * Init the ApplyMessageContext which we clean up after each \r\n> + replication\r\n> + * protocol message.\r\n> + */\r\n> + ApplyMessageContext = AllocSetContextCreate(ApplyContext,\r\n> + \"ApplyMessageContext\",\r\n> + ALLOCSET_DEFAULT_SIZES);\r\n> \r\n> Because this is in the parallel apply worker should the name (e.g. the 2nd\r\n> param) be changed to \"ParallelApplyMessageContext\"?\r\n\r\nNot sure about this, because ApplyMessageContext is used in both worker.c and\r\napplyparallelworker.c.\r\n\r\n\r\n> + else if (is_subworker)\r\n> + snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n> + \"logical replication parallel apply worker for subscription %u\", \r\n> + subid);\r\n> else\r\n> snprintf(bgw.bgw_name, BGW_MAXLEN,\r\n> \"logical replication worker for subscription %u\", subid);\r\n> \r\n> I think that *last* text now be changed like below:\r\n> \r\n> BEFORE\r\n> \"logical replication worker for subscription %u\"\r\n> AFTER\r\n> \"logical replication apply worker for subscription %u\"\r\n\r\nI am not sure if it's a good idea to change existing process description.\r\n\r\n\r\n> 36 - should_apply_changes_for_rel\r\n> should_apply_changes_for_rel(LogicalRepRelMapEntry *rel) {\r\n> if (am_tablesync_worker())\r\n> return MyLogicalRepWorker->relid == rel->localreloid;\r\n> + else if (am_parallel_apply_worker())\r\n> + {\r\n> + if (rel->state != SUBREL_STATE_READY)\r\n> + ereport(ERROR,\r\n> + 
(errmsg(\"logical replication apply workers for subscription \\\"%s\\\"\r\n> will restart\",\r\n> + MySubscription->name),\r\n> + errdetail(\"Cannot handle streamed replication transaction using parallel \"\r\n> + \"apply workers until all tables are synchronized.\")));\r\n> +\r\n> + return true;\r\n> + }\r\n> else\r\n> return (rel->state == SUBREL_STATE_READY ||\r\n> (rel->state == SUBREL_STATE_SYNCDONE && @@ -427,43 +519,87 @@ end_replication_step(void)\r\n> \r\n> This function can be made tidier just by removing all the 'else' ...\r\n\r\nI feel the current style looks better.\r\n\r\n\r\n> 40. - apply_handle_stream_prepare\r\n> \r\n> + case TRANS_LEADER_SERIALIZE:\r\n> \r\n> - /* Mark the transaction as prepared. */\r\n> - apply_handle_prepare_internal(&prepare_data);\r\n> + /*\r\n> + * The transaction has been serialized to file, so replay all the\r\n> + * spooled operations.\r\n> + */\r\n> \r\n> Spurious blank line after the 'case'.\r\n\r\nPersonally, I think this style is fine.\r\n\r\n\r\n> 48. 
- ApplyWorkerMain\r\n> \r\n> +/* Logical Replication Apply worker entry point */ void \r\n> +ApplyWorkerMain(Datum main_arg)\r\n> \r\n> \"Apply worker\" -> \"apply worker\"\r\n\r\nSince it's the existing comment, I feel we can leave this.\r\n\r\n\r\n> + /*\r\n> + * We don't currently need any ResourceOwner in a walreceiver process, \r\n> + but\r\n> + * if we did, we could call CreateAuxProcessResourceOwner here.\r\n> + */\r\n> \r\n> I think this comment should have \"XXX\" prefix.\r\n\r\nI am not sure as this comment is just a reminder.\r\n\r\n\r\n> 50.\r\n> \r\n> + if (server_version >= 160000 &&\r\n> + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> + {\r\n> + options.proto.logical.streaming = pstrdup(\"parallel\");\r\n> + MyLogicalRepWorker->parallel_apply = true;\r\n> + }\r\n> + else if (server_version >= 140000 &&\r\n> + MySubscription->stream != SUBSTREAM_OFF)\r\n> + options.proto.logical.streaming = pstrdup(\"on\"); else \r\n> + options.proto.logical.streaming = NULL;\r\n> \r\n> IMO it might make more sense for these conditions to be checking the\r\n> 'options.proto.logical.proto_version' here instead of checking the hardwired\r\n> server > versions. Also, I suggest may be better (for clarity) to always\r\n> assign the parallel_apply member.\r\n\r\nCurrently, the proto_version is only checked at publisher, I am not sure if\r\nit's a good idea to check it here.\r\n\r\n> 52. - get_transaction_apply_action\r\n> \r\n> + /*\r\n> + * Check if we are processing this transaction using a parallel apply\r\n> + * worker and if so, send the changes to that worker.\r\n> + */\r\n> + else if ((*winfo = parallel_apply_find_worker(xid))) { return \r\n> +TRANS_LEADER_SEND_TO_PARALLEL; } else { return \r\n> +TRANS_LEADER_SERIALIZE; } }\r\n> \r\n> 52a.\r\n> All these if/else and code blocks seem excessive. It can be simplified as follows:\r\n\r\nI feel this style is fine.\r\n\r\n> 52b.\r\n> Can a tablesync worker ever get here? 
It might be better to\r\n> Assert(!am_tablesync_worker()); at top of this function?\r\n\r\nNot sure if it's necessary or not.\r\n\r\n\r\n> 55b.\r\n> IMO this member name should be named slightly different to give a better feel\r\n> for what it really means.\r\n> \r\n> Maybe something like one of:\r\n> \"parallel_apply_ok\"\r\n> \"parallel_apply_enabled\"\r\n> \"use_parallel_apply\"\r\n> etc?\r\n\r\nI feel the current name is fine. But if others also feel it should be renamed, I can try to\r\nrename it.\r\n\r\n> 57. - am_leader_apply_worker\r\n> \r\n> +static inline bool\r\n> +am_leader_apply_worker(void)\r\n> +{\r\n> + return (!OidIsValid(MyLogicalRepWorker->relid) && \r\n> +!isParallelApplyWorker(MyLogicalRepWorker));\r\n> +}\r\n> \r\n> I wondered if it would be tidier/easier to define this function like below.\r\n> The others are inline functions anyhow so it should end up as the same\r\n> thing, right?\r\n> \r\n> static inline bool\r\n> am_leader_apply_worker(void)\r\n> {\r\n> return (!am_tablesync_worker() && !am_parallel_apply_worker); }\r\n\r\nI feel the current style is fine.\r\n\r\n>--- fail - streaming must be boolean\r\n>+-- fail - streaming must be boolean or 'parallel'\r\n> CREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false, streaming = foo);\r\n>\r\n>I think there are tests already for explicitly create/set the subscription\r\n>parameter streaming = on/off/parallel\r\n>\r\n>But what about when there is no value explicitly specified? 
Shouldn't there\r\n>also be tests like below to check that *implied* boolean true still works for\r\n>this enum?\r\n\r\nI didn't find similar tests for no value explicitly specified cases, so I didn't add this\r\nfor now.\r\n\r\nAttach the new version patch set which addressed most of the comments.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1LMVdS6uM7Tw7ANL0BetAd76TKkmAXNNQa0haTe2tax6g%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 7 Oct 2022 06:18:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 7, 2022 at 8:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 6, 2022 at 9:04 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > I think the root reason for this kind of deadlock problems is the table\n> > > structure difference between publisher and subscriber(similar to the unique\n> > > difference reported earlier[1]). So, I think we'd better disallow this case. For\n> > > example to avoid the reported problem, we could only support parallel apply if\n> > > pubviaroot is false on publisher and replicated tables' types(relkind) are the\n> > > same between publisher and subscriber.\n> > >\n> > > Although it might restrict some use cases, but I think it only restrict the\n> > > cases when the partitioned table's structure is different between publisher and\n> > > subscriber. User can still use parallel apply for cases when the table\n> > > structure is the same between publisher and subscriber which seems acceptable\n> > > to me. And we can also document that the feature is expected to be used for the\n> > > case when tables' structure are the same. 
Thoughts ?\n> >\n> > I'm concerned that it could be a big restriction for users. Having\n> > different partitioned table's structures on the publisher and the\n> > subscriber is quite common use cases.\n> >\n> > From the feature perspective, the root cause seems to be the fact that\n> > the apply worker does both receiving and applying changes. Since it\n> > cannot receive the subsequent messages while waiting for a lock on a\n> > table, the parallel apply worker also cannot move forward. If we have\n> > a dedicated receiver process, it can off-load the messages to the\n> > worker while another process waiting for a lock. So I think that\n> > separating receiver and apply worker could be a building block for\n> > parallel-apply.\n> >\n>\n> I think the disadvantage that comes to mind is the overhead of passing\n> messages between receiver and applier processes even for non-parallel\n> cases. Now, I don't think it is advisable to have separate handling\n> for non-parallel cases. The other thing is that we need to someway\n> deal with feedback messages which helps to move synchronous replicas\n> and update subscriber's progress which in turn helps to keep the\n> restart point updated. These messages also act as heartbeat messages\n> between walsender and walapply process.\n>\n> To deal with this, one idea is that we can have two connections to\n> walsender process, one with walreceiver and the other with walapply\n> process which according to me could lead to a big increase in resource\n> consumption and it will bring another set of complexities in the\n> system. Now, in this, I think we have two possibilities, (a) The first\n> one is that we pass all messages to the leader apply worker and then\n> it decides whether to execute serially or pass it to the parallel\n> apply worker. However, that can again deadlock in the truncate\n> scenario we discussed because the main apply worker won't be able to\n> receive new messages once it is blocked at the truncate command. 
(b)\n> The second one is walreceiver process itself takes care of passing\n> streaming transactions to parallel apply workers but if we do that\n> then walreceiver needs to wait at the transaction end to maintain\n> commit order which means it can also lead to deadlock in case the\n> truncate happens in a streaming xact.\n\nI imagined (b) but I had missed the point of preserving the commit\norder. Separating the receiver and apply worker cannot resolve this\nproblem.\n\n>\n> The other alternative is that we allow walreceiver process to wait for\n> apply process to finish transaction and send the feedback but that\n> seems to be again an overhead if we have to do it even for small\n> transactions, especially it can delay sync replication cases. Even, if\n> we don't consider overhead, it can still lead to a deadlock because\n> walreceiver won't be able to move in the scenario we are discussing.\n>\n> About your point that having different partition structures for\n> publisher and subscriber, I don't know how common it will be once we\n> have DDL replication. Also, the default value of\n> publish_via_partition_root is false which doesn't seem to indicate\n> that this is a quite common case.\n\nSo how can we consider these concurrent issues that could happen only\nwhen streaming = 'parallel'? Can we restrict some use cases to avoid\nthe problem or can we have a safeguard against these conflicts? 
We\ncould find a new problematic scenario in the future and if it happens,\nlogical replication gets stuck, it cannot be resolved only by apply\nworkers themselves.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 09:22:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 7, 2022 at 14:18 PM Hou, Zhijie/侯 志杰 <houzj.fnst@cn.fujitsu.com> wrote:\r\n> Attach the new version patch set which addressed most of the comments.\r\n\r\nRebased the patch set because the new change in HEAD (776e1c8).\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 12 Oct 2022 02:10:59 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > About your point that having different partition structures for\n> > publisher and subscriber, I don't know how common it will be once we\n> > have DDL replication. Also, the default value of\n> > publish_via_partition_root is false which doesn't seem to indicate\n> > that this is a quite common case.\n>\n> So how can we consider these concurrent issues that could happen only\n> when streaming = 'parallel'? 
Can we restrict some use cases to avoid\n> the problem or can we have a safeguard against these conflicts?\n>\n\nYeah, right now the strategy is to disallow parallel apply for such\ncases as you can see in the *0003* patch.\n\n> We\n> could find a new problematic scenario in the future and if it happens,\n> logical replication gets stuck, it cannot be resolved only by apply\n> workers themselves.\n>\n\nI think users can change the streaming option to on/off and internally the\nparallel apply worker can detect and restart to allow replication to\nproceed. Having said that, I think that would be a bug in the code and\nwe should try to fix it. We may need to disable parallel apply in the\nproblematic case.\n\nThe other ideas that occurred to me in this regard are (a) provide a\nreloption (say parallel_apply) at table level and we can use that to\nbypass various checks like different Unique Key between\npublisher/subscriber, constraints/expressions having mutable\nfunctions, Foreign Key (when enabled on subscriber), operations on\nPartitioned Table. We can't detect whether those are safe or not\n(primarily because of a different structure in publisher and\nsubscriber) so we prohibit parallel apply but if users use this\noption, we can allow it even in those cases. (b) While enabling the\nparallel option in the subscription, we can try to match all the\ntable(s) information of the publisher/subscriber. It will be tricky to\nmake this work because, say, even if we match some trigger function name,\nwe won't be able to match the function body. 
The other thing is that when,\nat a later point, the table definition is changed on the subscriber, we\nneed to again validate the information between publisher and\nsubscriber, which I think would be difficult as we would already be in\nthe middle of processing some message, and getting information from the\npublisher at that stage won't be possible.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Oct 2022 11:34:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v36-0001.\n\n======\n\n1. GENERAL\n\nHouzj wrote ([1] #3a):\nThe word 'streaming' is the same as the actual option name, so\npersonally I think it's fine. But if others also agreed that the name\ncan be improved, I can change it.\n\n~\n\nSure, I was not really complaining that the name is \"wrong\". Only I\ndid not think it was a good idea to have multiple struct members\ncalled 'streaming' when they don't have the same meaning. e.g. one is\nthe internal character mode equivalent of the parameter, and one is\nthe parameter value as a string. That's why I thought they should be\ndifferent names. e.g. Make the 2nd one 'streaming_valstr' or\nsomething.\n\n======\n\n2. doc/src/sgml/config.sgml\n\nPreviously I suggested there should be xrefs to the \"Configuration\nSettings\" page but Houzj wrote ([1] #4):\nNot sure about this as we don't have similar thing in the document of\nmax_logical_replication_workers and max_sync_workers_per_subscription.\n\n~\n\nFair enough, but IMO perhaps all those others should also xref to the\n\"Configuration Settings\" chapter. So if such a change does not belong\nin this patch, then how about if I make another independent thread to\npost this suggestion?\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n\n3. 
parallel_apply_find_worker\n\n+parallel_apply_find_worker(TransactionId xid)\n+{\n+ bool found;\n+ ParallelApplyWorkerEntry *entry = NULL;\n+\n+ if (!TransactionIdIsValid(xid))\n+ return NULL;\n+\n+ if (ParallelApplyWorkersHash == NULL)\n+ return NULL;\n+\n+ /* Return the cached parallel apply worker if valid. */\n+ if (stream_apply_worker != NULL)\n+ return stream_apply_worker;\n+\n+ /*\n+ * Find entry for requested transaction.\n+ */\n+ entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_FIND, &found);\n\nIn function parallel_apply_start_worker() you removed the entry\nassignment to NULL because it is never needed. Can do the same here\ntoo.\n\n~~~\n\n4. parallel_apply_free_worker\n\n+/*\n+ * Remove the parallel apply worker entry from the hash table. And stop the\n+ * worker if there are enough workers in the pool. For more information about\n+ * the worker pool, see comments atop worker.c.\n+ */\n+void\n+parallel_apply_free_worker(ParallelApplyWorkerInfo *winfo, TransactionId xid)\n\n\"And stop\" -> \"Stop\"\n\n~~~\n\n5. parallel_apply_free_worker\n\n+ * Although some error messages may be lost in rare scenarios, but\n+ * since the parallel apply worker has finished processing the\n+ * transaction, and error messages may be lost even if we detach the\n+ * error queue after terminating the process. So it should be ok.\n+ */\n\nSUGGESTION (minor rewording)\nSome error messages may be lost in rare scenarios, but it should be OK\nbecause the parallel apply worker has finished processing the\ntransaction, and error messages may be lost even if we detached the\nerror queue after terminating the process.\n\n~~~\n\n6. LogicalParallelApplyLoop\n\n+ for (;;)\n+ {\n+ void *data;\n+ Size len;\n+ int c;\n+ StringInfoData s;\n+ MemoryContext oldctx;\n+\n+ CHECK_FOR_INTERRUPTS();\n+\n+ /* Ensure we are reading the data into our memory context. 
*/\n+ oldctx = MemoryContextSwitchTo(ApplyMessageContext);\n+\n...\n+\n+ MemoryContextSwitchTo(oldctx);\n+ MemoryContextReset(ApplyMessageContext);\n+ }\n\nDo those memory context switches need to happen inside the for(;;)\nloop like that? I thought perhaps those can be done *outside* of the\nloop instead of always switching and switching back on the next\niteration.\n\n~~~\n\n7. LogicalParallelApplyLoop\n\nPrevious I suggested maybe the name (e.g. the 2nd param) should be\nchanged to \"ParallelApplyMessageContext\"? Houzj wrote ([1] #13): Not\nsure about this, because ApplyMessageContext is used in both worker.c\nand applyparallelworker.c.\n\n~\n\nBut I thought those are completely independent ApplyMessageContext's\nin different processes that happen to have the same name. Shouldn't\nthey have a name appropriate to who owns them?\n\n~~~\n\n8. ParallelApplyWorkerMain\n\n+ /*\n+ * Allocate the origin name in a long-lived context for error context\n+ * message.\n+ */\n+ snprintf(originname, sizeof(originname), \"pg_%u\", MySubscription->oid);\n\nNow that ReplicationOriginNameForLogicalRep patch is pushed [2] please\nmake use of this common function.\n\n~~~\n\n9. HandleParallelApplyMessage\n\n+ case 'X': /* Terminate, indicating clean exit */\n+ {\n+ shm_mq_detach(winfo->error_mq_handle);\n+ winfo->error_mq_handle = NULL;\n+ break;\n+ }\n+\n+ /*\n+ * Don't need to do anything about NoticeResponse and\n+ * NotifyResponse as the logical replication worker doesn't need\n+ * to send messages to the client.\n+ */\n+ case 'N':\n+ case 'A':\n+ break;\n+ default:\n+ {\n+ elog(ERROR, \"unrecognized message type received from parallel apply\nworker: %c (message length %d bytes)\",\n+ msgtype, msg->len);\n+ }\n\n9a. case 'X':\nThere are no variable declarations here so the statement block {} is not needed\n\n~\n\n9b. default:\nThere are no variable declarations here so the statement block {} is not needed\n\n~~~\n\n10. 
parallel_apply_stream_abort\n\n+ int i;\n+ bool found = false;\n+ char spname[MAXPGPATH];\n+\n+ parallel_apply_savepoint_name(MySubscription->oid, subxid, spname,\n+ sizeof(spname));\n\nI posted about using NAMEDATALEN in a previous review ([3] #21) but I\nthink only one place was fixed and this one was missed.\n\n~~~\n\n11. parallel_apply_replorigin_setup\n\n+ snprintf(originname, sizeof(originname), \"pg_%u\", MySubscription->oid);\n+ originid = replorigin_by_name(originname, false);\n+ replorigin_session_setup(originid);\n+ replorigin_session_origin = originid;\n\nSame as #8. Please call the new ReplicationOriginNameForLogicalRep function.\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n12. logicalrep_worker_launch\n\nPreviously I suggested maybe the apply process name should change\n\nFROM\n\"logical replication worker for subscription %u\"\nTO\n\"logical replication apply worker for subscription %u\"\n\nand Houzj wrote ([1] #13):\nI am not sure if it's a good idea to change existing process description.\n\n~\n\nBut that seems inconsistent to me because elsewhere this patch is\nalready exposing the name to the user (like when it says \"logical\nreplication apply worker for subscription \\\"%s\\\" has started\").\nShouldn’t the process name match these logs?\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n13. apply_handle_stream_start\n\n+ *\n+ * XXX We can avoid sending pairs of the START messages to the parallel worker\n+ * because unlike apply worker it will process only one transaction-at-a-time.\n+ * However, it is not clear whether that is worth the effort because it is sent\n+ * after logical_decoding_work_mem changes.\n */\n static void\n apply_handle_stream_start(StringInfo s)\n\n13a.\n\"transaction-at-a-time.\" -> \"transaction at a time.\"\n\n~\n\n13b.\nI was not sure what that last sentence means. 
Does it mean something like:\n\"However, it is not clear whether doing this is worth the effort\nbecause pairs of START messages occur only after\nlogical_decoding_work_mem changes.\"\n\n~~~\n\n14. apply_handle_stream_start\n\n+ ParallelApplyWorkerInfo *winfo = NULL;\n\nThe declaration *winfo assignment to NULL is not needed because\nget_transaction_apply_action will always do this anyway.\n\n~~~\n\n15. apply_handle_stream_start\n\n+\n+ case TRANS_PARALLEL_APPLY:\n+ break;\n\nI had previously suggested this include a comment explaining why there\nis nothing to do ([3] #44), but I think there was no reply.\n\n~~~\n\n16. apply_handle_stream_stop\n\n apply_handle_stream_stop(StringInfo s)\n {\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+ TransApplyAction apply_action\n\nThe declaration *winfo assignment to NULL is not needed because\nget_transaction_apply_action will always do this anyway.\n\n~~~\n\n17. serialize_stream_abort\n\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+ TransApplyAction apply_action;\n\nThe declaration *winfo assignment to NULL is not needed because\nget_transaction_apply_action will always do this anyway.\n\n~~~\n\n18. apply_handle_stream_commit\n\n LogicalRepCommitData commit_data;\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+ TransApplyAction apply_action;\n\nThe declaration *winfo assignment to NULL is not needed because\nget_transaction_apply_action will always do this anyway.\n\n~~~\n\n19. ApplyWorkerMain\n\n+\n+/* Logical Replication Apply worker entry point */\n+void\n+ApplyWorkerMain(Datum main_arg)\n\nPreviously I suggested changing \"Apply worker\" to \"apply worker\", and\nHouzj ([1] #48) replied:\nSince it's the existing comment, I feel we can leave this.\n\n~\n\nNormally I agree we shouldn't change original code unrelated to the\npatch, but in practice, I think no patch would be accepted that\nchanges just \"A\" to \"a\", so if you don't change it here in this patch\nto be consistent then it will never happen. 
That's why I think it should\nbe part of this patch.\n\n~~~\n\n20. ApplyWorkerMain\n\n+ /*\n+ * We don't currently need any ResourceOwner in a walreceiver process, but\n+ * if we did, we could call CreateAuxProcessResourceOwner here.\n+ */\n\nPreviously I suggested prefixing this as \"XXX\" and Houzj replied ([1] #48):\nI am not sure as this comment is just a reminder.\n\n~\n\nOK, then since it is a reminder, maybe it should be changed like:\n\"We don't currently...\" -> \"Note: We don't currently...\"\n\n~~~\n\n21. ApplyWorkerMain\n\n+ if (server_version >= 160000 &&\n+ MySubscription->stream == SUBSTREAM_PARALLEL)\n+ {\n+ options.proto.logical.streaming = pstrdup(\"parallel\");\n+ MyLogicalRepWorker->parallel_apply = true;\n+ }\n+ else if (server_version >= 140000 &&\n+ MySubscription->stream != SUBSTREAM_OFF)\n+ {\n+ options.proto.logical.streaming = pstrdup(\"on\");\n+ MyLogicalRepWorker->parallel_apply = false;\n+ }\n+ else\n+ {\n+ options.proto.logical.streaming = NULL;\n+ MyLogicalRepWorker->parallel_apply = false;\n+ }\n\nI think the block of if/else is only for assigning the\nstreaming/parallel members so it should have some comment to say that:\n\nSUGGESTION\nAssign the appropriate streaming flag according to the 'streaming'\nmode and the publisher's ability to support that mode.\n\n~~~\n\n22. 
get_transaction_apply_action\n\n+static TransApplyAction\n+get_transaction_apply_action(TransactionId xid,\nParallelApplyWorkerInfo **winfo)\n+{\n+ *winfo = NULL;\n+\n+ if (am_parallel_apply_worker())\n+ {\n+ return TRANS_PARALLEL_APPLY;\n+ }\n+ else if (in_remote_transaction)\n+ {\n+ return TRANS_LEADER_APPLY;\n+ }\n+\n+ /*\n+ * Check if we are processing this transaction using a parallel apply\n+ * worker and if so, send the changes to that worker.\n+ */\n+ else if ((*winfo = parallel_apply_find_worker(xid)))\n+ {\n+ return TRANS_LEADER_SEND_TO_PARALLEL;\n+ }\n+ else\n+ {\n+ return TRANS_LEADER_SERIALIZE;\n+ }\n+}\n\n22a.\n\nPreviously I suggested the statement blocks are overkill and all the\n{} should be removed, and Houzj ([1] #52a) wrote:\nI feel this style is fine.\n\n~\n\nSure, it is fine, but FWIW I thought it is not the normal PG coding\nconvention to use unnecessary {} unless it would seem strange to omit\nthem.\n\n~~\n\n22b.\nAlso previously I had suggested\n\n> Can a tablesync worker ever get here? It might be better to\n> Assert(!am_tablesync_worker()); at top of this function?\n\nand Houzj ([1] #52b) replied:\nNot sure if it's necessary or not.\n\n~\n\nOTOH you could say no Assert is ever really necessary, but IMO adding\none here would at least be a sanity check and help to document the\nfunction better.\n\n======\n\n23. src/test/regress/sql/subscription.sql\n\nPreviously I mentioned testing the 'streaming' option with no value.\nHouzj replied ([1]\nI didn't find similar tests for no value explicitly specified cases,\nso I didn't add this for now.\n\nBut as I also responded ([4] #58) already to Amit:\nIMO this one is a bit different because it's not really a boolean\noption anymore - it's a kind of a hybrid boolean/enum. 
That's why I\nthought this ought to be tested regardless if there are existing tests\nfor the (normal) boolean options.\n\nAnyway, you can decide what you want.\n\n------\n[1] Houzj replies to my v35 review\nhttps://www.postgresql.org/message-id/OS0PR01MB5716B400CD81565E868616DB945F9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2] ReplicationOriginNameForLogicalRep\nhttps://github.com/postgres/postgres/commit/776e1c8a5d1494e345e5e1b16a5eba5e98aaddca\n[3] My review v35\nhttps://www.postgresql.org/message-id/CAHut%2BPvFENKb5fcMko5HHtNEAaZyNwGhu3PASrcBt%2BHFoFL%3DFw%40mail.gmail.com\n[4] Explaining some v35 review comments\nhttps://www.postgresql.org/message-id/CAHut%2BPscac%2BipFSFx89ACmacjPe4Dn%3DqVq8T0V%3DnQkv38QgnBw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 12 Oct 2022 21:10:47 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 6, 2022 at 6:09 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> ~~~\n> 10. worker.c - apply_handle_stream_start\n>\n> ```\n> + *\n> + * XXX We can avoid sending pair of the START/STOP messages to the parallel\n> + * worker because unlike apply worker it will process only one\n> + * transaction-at-a-time. However, it is not clear whether that is worth the\n> + * effort because it is sent after logical_decoding_work_mem changes.\n> ```\n>\n> I can understand that START message is not needed, but is STOP really removable? If leader does not send STOP to its child, does it lose a chance to change the worker-state to IDLE_IN_TRANSACTION?\n>\n\nI think if we want we can set that state before we went to sleep in\nparallel apply worker. 
So, I guess ideally we don't need both of these\nmessages but for now, it is fine as mentioned in the comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:19:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Oct 12, 2022 at 3:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for v36-0001.\n>\n>\n> 6. LogicalParallelApplyLoop\n>\n> + for (;;)\n> + {\n> + void *data;\n> + Size len;\n> + int c;\n> + StringInfoData s;\n> + MemoryContext oldctx;\n> +\n> + CHECK_FOR_INTERRUPTS();\n> +\n> + /* Ensure we are reading the data into our memory context. */\n> + oldctx = MemoryContextSwitchTo(ApplyMessageContext);\n> +\n> ...\n> +\n> + MemoryContextSwitchTo(oldctx);\n> + MemoryContextReset(ApplyMessageContext);\n> + }\n>\n> Do those memory context switches need to happen inside the for(;;)\n> loop like that? I thought perhaps those can be done *outside* of the\n> loop instead of always switching and switching back on the next\n> iteration.\n>\n\nI think we need to reset the ApplyMessageContext each time after\nprocessing a message and also don't want to process the config file in\nthe applymessagecontext.\n\n> ======\n>\n> src/backend/replication/logical/launcher.c\n>\n> 12. 
logicalrep_worker_launch\n>\n> Previously I suggested may the apply process name should change\n>\n> FROM\n> \"logical replication worker for subscription %u\"\n> TO\n> \"logical replication apply worker for subscription %u\"\n>\n> and Houz wrote ([1] #13)\n> I am not sure if it's a good idea to change existing process description.\n>\n> ~\n>\n> But that seems inconsistent to me because elsewhere this patch is\n> already exposing the name to the user (like when it says \"logical\n> replication apply worker for subscription \\\"%s\\\" has started\".\n> Shouldn’t the process name match these logs?\n>\n\nI think it is okay to change the name here for the sake of consistency.\n\n>\n> 19. ApplyWorkerMain\n>\n> +\n> +/* Logical Replication Apply worker entry point */\n> +void\n> +ApplyWorkerMain(Datum main_arg)\n>\n> Previously I suugested changing \"Apply worker\" to \"apply worker\", and\n> Houzj ([1] #48) replied:\n> Since it's the existing comment, I feel we can leave this.\n>\n> ~\n>\n> Normally I agree don't change the original code unrelated to the\n> patch, but in practice, I think no patch would be accepted that just\n> changes just \"A\" to \"a\", so if you don't change it here in this patch\n> to be consistent then it will never happen. That's why I think should\n> be part of this patch.\n>\n\nHmm, I think one might then extend this to many other similar cosmetic\nstuff in the nearby areas. It sometimes distracts the reviewer if\nthere are unrelated changes, so better to avoid it.\n\n>\n> 22. 
get_transaction_apply_action\n>\n> +static TransApplyAction\n> +get_transaction_apply_action(TransactionId xid,\n> ParallelApplyWorkerInfo **winfo)\n> +{\n> + *winfo = NULL;\n> +\n> + if (am_parallel_apply_worker())\n> + {\n> + return TRANS_PARALLEL_APPLY;\n> + }\n> + else if (in_remote_transaction)\n> + {\n> + return TRANS_LEADER_APPLY;\n> + }\n> +\n> + /*\n> + * Check if we are processing this transaction using a parallel apply\n> + * worker and if so, send the changes to that worker.\n> + */\n> + else if ((*winfo = parallel_apply_find_worker(xid)))\n> + {\n> + return TRANS_LEADER_SEND_TO_PARALLEL;\n> + }\n> + else\n> + {\n> + return TRANS_LEADER_SERIALIZE;\n> + }\n> +}\n>\n> 22a.\n>\n> Previously I suggested the statement blocks are overkill and all the\n> {} should be removed, and Houzj ([1] #52a) wrote:\n> I feel this style is fine.\n>\n> ~\n>\n> Sure, it is fine, but FWIW I thought it is not the normal PG coding\n> convention to use unnecessary {} unless it would seem strange to omit\n> them.\n>\n\nYeah, but here we are using comments in between the else if construct\ndue to which using {} makes it look better. I agree that this is\nmostly a question of personal preference and we can go either way but\nmy preference would be to use the style patch has currently used.\n\n>\n> 23. src/test/regress/sql/subscription.sql\n>\n> Previously I mentioned testing the 'streaming' option with no value.\n> Houzj replied ([1]\n> I didn't find similar tests for no value explicitly specified cases,\n> so I didn't add this for now.\n>\n> But as I also responded ([4] #58) already to Amit:\n> IMO this one is a bit different because it's not really a boolean\n> option anymore - it's a kind of a hybrid boolean/enum. That's why I\n> thought this ought to be tested regardless if there are existing tests\n> for the (normal) boolean options.\n>\n\nI still feel this is not required. 
I think we have to be cautious\nabout not adding too many tests in this area that are of less or no\nvalue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:33:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Oct 12, 2022 at 7:41 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Oct 7, 2022 at 14:18 PM Hou, Zhijie/侯 志杰 <houzj.fnst@cn.fujitsu.com> wrote:\n> > Attach the new version patch set which addressed most of the comments.\n>\n> Rebased the patch set because the new change in HEAD (776e1c8).\n>\n> Attach the new patch set.\n>\n\n+static void\n+HandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo msg)\n{\n...\n+ case 'X': /* Terminate, indicating clean exit */\n+ {\n+ shm_mq_detach(winfo->error_mq_handle);\n+ winfo->error_mq_handle = NULL;\n+ break;\n+ }\n...\n}\n\nI don't see the use of this message in the patch. If this is not\nrequired by the latest version then we can remove it and its\ncorresponding handling in parallel_apply_start_worker(). 
I am\nreferring to the below code in parallel_apply_start_worker():\n\n+ if (tmp_winfo->error_mq_handle == NULL)\n+ {\n+ /*\n+ * Release the worker information and try next one if the parallel\n+ * apply worker exited cleanly.\n+ */\n+ ParallelApplyWorkersList =\nforeach_delete_current(ParallelApplyWorkersList, lc);\n+ shm_mq_detach(tmp_winfo->mq_handle);\n+ dsm_detach(tmp_winfo->dsm_seg);\n+ pfree(tmp_winfo);\n+ }\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 14 Oct 2022 09:59:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, October 14, 2022 12:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Oct 12, 2022 at 7:41 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Fri, Oct 7, 2022 at 14:18 PM Hou, Zhijie/侯 志杰\r\n> <houzj.fnst@cn.fujitsu.com> wrote:\r\n> > > Attach the new version patch set which addressed most of the comments.\r\n> >\r\n> > Rebased the patch set because the new change in HEAD (776e1c8).\r\n> >\r\n> > Attach the new patch set.\r\n> >\r\n> \r\n> +static void\r\n> +HandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo\r\n> +msg)\r\n> {\r\n> ...\r\n> + case 'X': /* Terminate, indicating clean exit */ {\r\n> + shm_mq_detach(winfo->error_mq_handle);\r\n> + winfo->error_mq_handle = NULL;\r\n> + break;\r\n> + }\r\n> ...\r\n> }\r\n> \r\n> I don't see the use of this message in the patch. If this is not required by the\r\n> latest version then we can remove it and its corresponding handling in\r\n> parallel_apply_start_worker(). 
I am referring to the below code in\r\n> parallel_apply_start_worker():\r\n\r\nThanks for the comments, I removed this code in the new version patch set.\r\n\r\nI also did the following changes in the new version patch:\r\n\r\n[0001] \r\n* Teach the parallel apply worker to catch the subscription parameter change in\r\nthe main loop so that the user can change the streaming option to \"on\" to stop\r\nthe parallel apply workers in case the leader apply workers get stuck because of\r\nsome deadlock problems discussed in [1].\r\n\r\n* Some cosmetic changes.\r\n\r\n* Address comments from Peter [2].\r\n\r\n[0004]\r\n* Disallow replicating from or to a partitioned table in parallel streaming\r\nmode. This is to avoid the deadlock cases when the partitioned table's\r\ninheritance structure is different between publisher and subscriber as\r\ndiscussed in [1].\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1JYFXEoFhJAvg1qU%3DnZrZLw_87X%3D2YWQGFBbcBGirAUwA%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAHut%2BPvxL8tJ2ZUpEjkbRFe6qKSH%2Br54BQ7wM8p%3D335tUbuXbg%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 14 Oct 2022 09:08:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Oct 12, 2022 at 18:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v36-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 1. GENERAL\r\n> \r\n> Houzj wrote ([1] #3a):\r\n> The word 'streaming' is the same as the actual option name, so \r\n> personally I think it's fine. But if others also agreed that the name \r\n> can be improved, I can change it.\r\n> \r\n> ~\r\n> \r\n> Sure, I was not really complaining that the name is \"wrong\". 
Only I \r\n> did not think it was a good idea to have multiple struct members \r\n> called 'streaming' when they don't have the same meaning. e.g. one is \r\n> the internal character mode equivalent of the parameter, and one is \r\n> the parameter value as a string. That's why I thought they should be \r\n> different names. e.g. Make the 2nd one 'streaming_valstr' or \r\n> something.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> 2. doc/src/sgml/config.sgml\r\n> \r\n> Previously I suggested there should be xrefs to the \"Configuration \r\n> Settings\" page but Houzj wrote ([1] #4):\r\n> Not sure about this as we don't have similar thing in the document of \r\n> max_logical_replication_workers and max_sync_workers_per_subscription.\r\n> \r\n> ~\r\n> \r\n> Fair enough, but IMO perhaps all those others should also xref to the \r\n> \"Configuration Settings\" chapter. So if such a change does not belong \r\n> in this patch, then how about if I make another independent thread to \r\n> post this suggestion?\r\n\r\nSure, I feel it would be better to do it in a separate thread.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 3. parallel_apply_find_worker\r\n> \r\n> +parallel_apply_find_worker(TransactionId xid) { bool found; \r\n> +ParallelApplyWorkerEntry *entry = NULL;\r\n> +\r\n> + if (!TransactionIdIsValid(xid))\r\n> + return NULL;\r\n> +\r\n> + if (ParallelApplyWorkersHash == NULL) return NULL;\r\n> +\r\n> + /* Return the cached parallel apply worker if valid. */ if \r\n> + (stream_apply_worker != NULL) return stream_apply_worker;\r\n> +\r\n> + /*\r\n> + * Find entry for requested transaction.\r\n> + */\r\n> + entry = hash_search(ParallelApplyWorkersHash, &xid, HASH_FIND, \r\n> + &found);\r\n> \r\n> In function parallel_apply_start_worker() you removed the entry \r\n> assignment to NULL because it is never needed. Can do the same here \r\n> too.\r\n\r\nChanged.\r\n\r\n> 4. 
parallel_apply_free_worker\r\n> \r\n> +/*\r\n> + * Remove the parallel apply worker entry from the hash table. And \r\n> +stop the\r\n> + * worker if there are enough workers in the pool. For more \r\n> +information about\r\n> + * the worker pool, see comments atop worker.c.\r\n> + */\r\n> +void\r\n> +parallel_apply_free_worker(ParallelApplyWorkerInfo *winfo, \r\n> +TransactionId\r\n> xid)\r\n> \r\n> \"And stop\" -> \"Stop\"\r\n\r\nChanged.\r\n\r\n> 5. parallel_apply_free_worker\r\n> \r\n> + * Although some error messages may be lost in rare scenarios, but\r\n> + * since the parallel apply worker has finished processing the\r\n> + * transaction, and error messages may be lost even if we detach the\r\n> + * error queue after terminating the process. So it should be ok.\r\n> + */\r\n> \r\n> SUGGESTION (minor rewording)\r\n> Some error messages may be lost in rare scenarios, but it should be OK \r\n> because the parallel apply worker has finished processing the \r\n> transaction, and error messages may be lost even if we detached the \r\n> error queue after terminating the process.\r\n\r\nChanged.\r\n\r\n\r\n> ~~~\r\n> \r\n> 7. LogicalParallelApplyLoop\r\n> \r\n> Previously I suggested maybe the name (e.g. the 2nd param) should be \r\n> changed to \"ParallelApplyMessageContext\"? Houzj wrote ([1] #13): Not \r\n> sure about this, because ApplyMessageContext is used in both worker.c \r\n> and applyparallelworker.c.\r\n> \r\n> ~\r\n> \r\n> But I thought those are completely independent ApplyMessageContext's \r\n> in different processes that happen to have the same name. Shouldn't \r\n> they have a name appropriate to who owns them?\r\n\r\nApplyMessageContext is used by the begin_replication_step() function which will\r\nbe invoked in both the leader and parallel apply workers. 
So, we need to name the\r\nmemory context the same as ApplyMessageContext, otherwise we would need to\r\nmodify the logic of begin_replication_step() to use another memory context if\r\nin parallel apply worker.\r\n\r\n\r\n> ~~~\r\n> \r\n> 8. ParallelApplyWorkerMain\r\n> \r\n> + /*\r\n> + * Allocate the origin name in a long-lived context for error context\r\n> + * message.\r\n> + */\r\n> + snprintf(originname, sizeof(originname), \"pg_%u\", \r\n> + MySubscription->oid);\r\n> \r\n> Now that ReplicationOriginNameForLogicalRep patch is pushed [2] please \r\n> make use of this common function.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 9. HandleParallelApplyMessage\r\n> \r\n> + case 'X': /* Terminate, indicating clean exit */ { \r\n> + shm_mq_detach(winfo->error_mq_handle);\r\n> + winfo->error_mq_handle = NULL;\r\n> + break;\r\n> + }\r\n> +\r\n> + /*\r\n> + * Don't need to do anything about NoticeResponse and\r\n> + * NotifyResponse as the logical replication worker doesn't need\r\n> + * to send messages to the client.\r\n> + */\r\n> + case 'N':\r\n> + case 'A':\r\n> + break;\r\n> + default:\r\n> + {\r\n> + elog(ERROR, \"unrecognized message type received from parallel apply\r\n> worker: %c (message length %d bytes)\",\r\n> + msgtype, msg->len);\r\n> + }\r\n> \r\n> 9a. case 'X':\r\n> There are no variable declarations here so the statement block {} is \r\n> not needed\r\n> \r\n> ~\r\n> \r\n> 9b. default:\r\n> There are no variable declarations here so the statement block {} is \r\n> not needed\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 10. 
parallel_apply_stream_abort\r\n> \r\n> + int i;\r\n> + bool found = false;\r\n> + char spname[MAXPGPATH];\r\n> +\r\n> + parallel_apply_savepoint_name(MySubscription->oid, subxid, spname,\r\n> + sizeof(spname));\r\n> \r\n> I posted about using NAMEDATALEN in a previous review ([3] #21) but I \r\n> think only one place was fixed and this one was missed.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/launcher.c\r\n> \r\n> 12. logicalrep_worker_launch\r\n> \r\n> Previously I suggested maybe the apply process name should change\r\n> \r\n> FROM\r\n> \"logical replication worker for subscription %u\"\r\n> TO\r\n> \"logical replication apply worker for subscription %u\"\r\n> \r\n> and Houzj wrote ([1] #13)\r\n> I am not sure if it's a good idea to change existing process description.\r\n> \r\n> ~\r\n> \r\n> But that seems inconsistent to me because elsewhere this patch is \r\n> already exposing the name to the user (like when it says \"logical \r\n> replication apply worker for subscription \\\"%s\\\" has started\".\r\n> Shouldn’t the process name match these logs?\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 13. apply_handle_stream_start\r\n> \r\n> + *\r\n> + * XXX We can avoid sending pairs of the START messages to the \r\n> + parallel\r\n> worker\r\n> + * because unlike apply worker it will process only one transaction-at-a-time.\r\n> + * However, it is not clear whether that is worth the effort because \r\n> + it is sent\r\n> + * after logical_decoding_work_mem changes.\r\n> */\r\n> static void\r\n> apply_handle_stream_start(StringInfo s)\r\n> \r\n> 13a.\r\n> \"transaction-at-a-time.\" -> \"transaction at a time.\"\r\n> \r\n> ~\r\n> \r\n> 13b.\r\n> I was not sure what that last sentence means. 
Does it mean something like:\r\n> \"However, it is not clear whether doing this is worth the effort \r\n> because pairs of START messages occur only after \r\n> logical_decoding_work_mem changes.\"\r\n\r\n=>13a.\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 14. apply_handle_stream_start\r\n> \r\n> + ParallelApplyWorkerInfo *winfo = NULL;\r\n> \r\n> The declaration *winfo assignment to NULL is not needed because \r\n> get_transaction_apply_action will always do this anyway.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 15. apply_handle_stream_start\r\n> \r\n> +\r\n> + case TRANS_PARALLEL_APPLY:\r\n> + break;\r\n> \r\n> I had previously suggested this include a comment explaining why there \r\n> is nothing to do ([3] #44), but I think there was no reply.\r\n\r\nThe parallel apply worker doesn't need special handling for STREAM START,\r\nit only needs to run some common code path that is shared by leader.\r\nI added a small comment about this.\r\n\r\n> ~~~\r\n> \r\n> 20. ApplyWorkerMain\r\n> \r\n> + /*\r\n> + * We don't currently need any ResourceOwner in a walreceiver \r\n> + process, but\r\n> + * if we did, we could call CreateAuxProcessResourceOwner here.\r\n> + */\r\n> \r\n> Previously I suggested prefixing this as \"XXX\" and Houzj replied ([1] #48):\r\n> I am not sure as this comment is just a reminder.\r\n> \r\n> ~\r\n> \r\n> OK, then maybe since it is a reminder \"Note\" then it should be changed:\r\n> \"We don't currently...\" -> \"Note: We don't currently...\"\r\n\r\nI feel it's fine to leave the comment as that's the existing comment\r\nin ApplyWorkerMain().\r\n\r\n> ~~~\r\n> \r\n> 21. 
ApplyWorkerMain\r\n> \r\n> + if (server_version >= 160000 &&\r\n> + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> + {\r\n> + options.proto.logical.streaming = pstrdup(\"parallel\");\r\n> + MyLogicalRepWorker->parallel_apply = true;\r\n> + }\r\n> + else if (server_version >= 140000 &&\r\n> + MySubscription->stream != SUBSTREAM_OFF)\r\n> + {\r\n> + options.proto.logical.streaming = pstrdup(\"on\");\r\n> + MyLogicalRepWorker->parallel_apply = false;\r\n> + }\r\n> + else\r\n> + {\r\n> + options.proto.logical.streaming = NULL;\r\n> + MyLogicalRepWorker->parallel_apply = false;\r\n> + }\r\n> \r\n> I think the block of if/else is only for assigning the \r\n> streaming/parallel members so should have some comment to say that:\r\n> \r\n> SUGGESTION\r\n> Assign the appropriate streaming flag according to the 'streaming'\r\n> mode and the publisher's ability to support that mode.\r\n\r\nAdded the comments as suggested.\r\n\r\n> ~~~\r\n> \r\n> 22. get_transaction_apply_action\r\n> \r\n> +static TransApplyAction\r\n> +get_transaction_apply_action(TransactionId xid,\r\n> ParallelApplyWorkerInfo **winfo)\r\n> +{\r\n> + *winfo = NULL;\r\n> +\r\n> + if (am_parallel_apply_worker())\r\n> + {\r\n> + return TRANS_PARALLEL_APPLY;\r\n> + }\r\n> + else if (in_remote_transaction)\r\n> + {\r\n> + return TRANS_LEADER_APPLY;\r\n> + }\r\n> +\r\n> + /*\r\n> + * Check if we are processing this transaction using a parallel apply\r\n> + * worker and if so, send the changes to that worker.\r\n> + */\r\n> + else if ((*winfo = parallel_apply_find_worker(xid))) { return \r\n> +TRANS_LEADER_SEND_TO_PARALLEL; } else { return \r\n> +TRANS_LEADER_SERIALIZE; } }\r\n> \r\n> 22b.\r\n> Also previously I had suggested\r\n> \r\n> > Can a tablesync worker ever get here? 
It might be better to \r\n> > Assert(!am_tablesync_worker()); at top of this function?\r\n> \r\n> and Houzj ([1] #52b) replied:\r\n> Not sure if it's necessary or not.\r\n> \r\n> ~\r\n> \r\n> OTOH you could say no Assert is ever really necessary, but IMO adding \r\n> one here would at least be a sanity check and help to document the \r\n> function better.\r\n\r\nget_transaction_apply_action might also be invoked in table sync worker in some\r\nrare cases when some streaming transaction comes while doing the table sync.\r\nAnd the function works fine in that case, so I don't think we should add the\r\nAssert() here.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Fri, 14 Oct 2022 09:29:32 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, here are my review comments for patch v38-0001.\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n1. parallel_apply_start_worker\n\n+ /* Try to get a free parallel apply worker. */\n+ foreach(lc, ParallelApplyWorkersList)\n+ {\n+ ParallelApplyWorkerInfo *tmp_winfo;\n+\n+ tmp_winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n+\n+ if (!tmp_winfo->in_use)\n+ {\n+ /* Found a worker that has not been assigned a transaction. */\n+ winfo = tmp_winfo;\n+ break;\n+ }\n+ }\n\nThe \"Found a worker...\" comment seems redundant because it's already\nclear from the prior comment and the 'in_use' member what this code is\ndoing.\n\n~~~\n\n2. LogicalParallelApplyLoop\n\n+ void *data;\n+ Size len;\n+ int c;\n+ int rc;\n+ StringInfoData s;\n+ MemoryContext oldctx;\n\nSeveral of these vars (like 'c', 'rc', 's') can be declared deeper -\ne.g. only in the scope where they are actually used.\n\n~~~\n\n3.\n\n+ /* Ensure we are reading the data into our memory context. 
*/\n+ oldctx = MemoryContextSwitchTo(ApplyMessageContext);\n\nDoesn't something need to switch back to this 'oldctx' prior to\nbreaking out of the for(;;) loop?\n\n~~~\n\n4.\n\n+ apply_dispatch(&s);\n+\n+ MemoryContextReset(ApplyMessageContext);\n\nIsn't this broken now? Since you've removed the\nMemoryContextSwitchTo(oldctx), so next iteration will switch to\nApplyMessageContext again which will overwrite and lose knowledge of\nthe original 'oldctx' (??)\n\n~~\n\n5.\n\nMaybe this is a silly idea, I'm not sure. Because this is an infinite\nloop, then instead of the multiple calls to\nMemoryContextReset(ApplyMessageContext) maybe there can be just a\nsingle call to it immediately before you switch to that context in the\nfirst place. The effect will be the same, won't it?\n\ne.g.\n+ /* Ensure we are reading the data into our memory context. */\n+ MemoryContextReset(ApplyMessageContext); <=== THIS\n+ oldctx = MemoryContextSwitchTo(ApplyMessageContext);\n\n~~~\n\n6.\n\nThe code logic keeps flip-flopping for several versions. I think if\nyou are going to check all the return types of shm_mq_receive then\nusing a switch(shmq_res) might be a better way than having multiple\nif/else with some Asserts.\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n7. logicalrep_worker_launch\n\nPreviously I'd suggested ([1] #12) that the process name should change\nfor consistency, and AFAIK Amit also said [2] that would be OK, but\nthis change is still not done in the current patch.\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n8. should_apply_changes_for_rel\n\n * Should this worker apply changes for given relation.\n *\n * This is mainly needed for initial relation data sync as that runs in\n * separate worker process running in parallel and we need some way to skip\n * changes coming to the main apply worker during the sync of a table.\n\nThis existing comment refers to the \"main apply worker\". 
IMO it should\nsay \"leader apply worker\" to keep all the terminology consistent.\n\n~~~\n\n9. apply_handle_stream_start\n\n+ *\n+ * XXX We can avoid sending pairs of the START/STOP messages to the parallel\n+ * worker because unlike apply worker it will process only one transaction at a\n+ * time. However, it is not clear whether that is worth the effort because it\n+ * is sent after logical_decoding_work_mem changes.\n */\n static void\n apply_handle_stream_start(StringInfo s)\n\nAs previously mentioned ([1] #13b) it's not obvious to me what that\nlast sentence means. e.g. \"because it is sent\" - what is \"it\"?\n\n~~~\n\n10. ApplyWorkerMain\n\nelse\n{\n/* This is main apply worker */\nRepOriginId originid;\nTimeLineID startpointTLI;\nchar *err;\n\nSame as #8. IMO it should now say \"leader apply worker\" to keep all\nthe terminology consistent.\n\n~~~\n\n11.\n\n+ /*\n+ * Assign the appropriate streaming flag according to the 'streaming' mode\n+ * and the publisher's ability to support that mode.\n+ */\n\nMaybe \"streaming flag\" -> \"streaming string/flag\". (sorry, it was my\nbad suggestion last time)\n\n~~~\n\n12. get_transaction_apply_action\n\nI still felt like there should be some tablesync checks/comments in\nthis function, just for sanity, even if it works as-is now.\n\nFor example, are you saying ([3] #22b) that there might be rare cases\nwhere a Tablesync would call to parallel_apply_find_worker? That seems\nstrange, given that \"for streaming transactions that are being applied\nin the parallel ... 
we disallow applying changes on a table that is\nnot in the READY state\".\n\n------\n[1] My v36 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPvxL8tJ2ZUpEjkbRFe6qKSH%2Br54BQ7wM8p%3D335tUbuXbg%40mail.gmail.com\n[2] Amit's feedback for my v36 review -\nhttps://www.postgresql.org/message-id/CAA4eK1%2BOyQ8-psruZZ0sYff5KactTHZneR-cfsHd%2Bn%2BN7khEKQ%40mail.gmail.com\n[3] Hou's feedback for my v36 review -\nhttps://www.postgresql.org/message-id/OS0PR01MB57162232BF51A09F4BD13C7594249%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 Oct 2022 13:35:47 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, October 18, 2022 10:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Hi, here are my review comments for patch v38-0001.\r\n\r\nThanks for the comments.\r\n\r\n> ~~~\r\n> \r\n> 12. get_transaction_apply_action\r\n> \r\n> I still felt like there should be some tablesync checks/comments in\r\n> this function, just for sanity, even if it works as-is now.\r\n> \r\n> For example, are you saying ([3] #22b) that there might be rare cases\r\n> where a Tablesync would call to parallel_apply_find_worker? That seems\r\n> strange, given that \"for streaming transactions that are being applied\r\n> in the parallel ... we disallow applying changes on a table that is\r\n> not in the READY state\".\r\n> \r\n> ------\r\n\r\nI think because we won't try to start a parallel apply worker in the table\r\nsync worker (see the check in parallel_apply_can_start()), we won't find any\r\nworker in parallel_apply_find_worker(), which means get_transaction_apply_action\r\nwill return TRANS_LEADER_SERIALIZE. 
And get_transaction_apply_action is a\r\nfunction which can be invoked for all kinds of workers (same is true for all\r\napply_handle_xxx functions), so I am not sure if a table sync check/comment is\r\nnecessary.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 18 Oct 2022 04:23:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 18, 2022 at 8:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are my review comments for patch v38-0001.\n>\n> 3.\n>\n> + /* Ensure we are reading the data into our memory context. */\n> + oldctx = MemoryContextSwitchTo(ApplyMessageContext);\n>\n> Doesn't something need to switch back to this 'oldctx' prior to\n> breaking out of the for(;;) loop?\n>\n> ~~~\n>\n> 4.\n>\n> + apply_dispatch(&s);\n> +\n> + MemoryContextReset(ApplyMessageContext);\n>\n> Isn't this broken now? Since you've removed the\n> MemoryContextSwitchTo(oldctx), so next iteration will switch to\n> ApplyMessageContext again which will overwrite and lose knowledge of\n> the original 'oldctx' (??)\n>\n> ~~\n>\n> 5.\n>\n> Maybe this is a silly idea, I'm not sure. Because this is an infinite\n> loop, then instead of the multiple calls to\n> MemoryContextReset(ApplyMessageContext) maybe there can be just a\n> single call to it immediately before you switch to that context in the\n> first place. The effect will be the same, won't it?\n>\n\nI think so but I think it will look a bit odd, especially for the\nfirst time. If the purpose is to just do it once, won't it be better\nto do it at the end of the for loop?\n\n>\n> 9. apply_handle_stream_start\n>\n> + *\n> + * XXX We can avoid sending pairs of the START/STOP messages to the parallel\n> + * worker because unlike apply worker it will process only one transaction at a\n> + * time. 
However, it is not clear whether that is worth the effort because it\n> + * is sent after logical_decoding_work_mem changes.\n> */\n> static void\n> apply_handle_stream_start(StringInfo s)\n>\n> As previously mentioned ([1] #13b) it's not obvious to me what that\n> last sentence means. e.g. \"because it is sent\" - what is \"it\"?\n>\n\nHere, it refers to START/STOP messages, so I think we should say \"...\nbecause these messages are sent ..\" instead of \"... because it is sent\n...\". Does that make sense to you?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Oct 2022 15:26:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 6, 2022 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n\n> While looking at v35 patch, I realized that there are some cases where\n> the logical replication gets stuck depending on partitioned table\n> structure. For instance, there are following tables, publication, and\n> subscription:\n>\n> * On publisher\n> create table p (c int) partition by list (c);\n> create table c1 partition of p for values in (1);\n> create table c2 (c int);\n> create publication test_pub for table p, c1, c2 with\n> (publish_via_partition_root = 'true');\n>\n> * On subscriber\n> create table p (c int) partition by list (c);\n> create table c1 partition of p for values in (2);\n> create table c2 partition of p for values in (1);\n> create subscription test_sub connection 'port=5551 dbname=postgres'\n> publication test_pub with (streaming = 'parallel', copy_data =\n> 'false');\n>\n> Note that while both the publisher and the subscriber have the same\n> name tables the partition structure is different and rows go to a\n> different table on the subscriber (eg, row c=1 will go to c2 table on\n> the subscriber). 
If two concurrent transactions are executed as follows,\n> the apply worker (i.e., the leader apply worker) waits for a lock on c2\n> held by its parallel apply worker:\n>\n> * TX-1\n> BEGIN;\n> INSERT INTO p SELECT 1 FROM generate_series(1, 10000); --- changes are streamed\n>\n> * TX-2\n> BEGIN;\n> TRUNCATE c2; --- wait for a lock on c2\n>\n> * TX-1\n> INSERT INTO p SELECT 1 FROM generate_series(1, 10000);\n> COMMIT;\n>\n> This might not be a common case in practice but it could mean that\n> there is a restriction on how partitioned tables should be structured\n> on the publisher and the subscriber when using streaming = 'parallel'.\n> When this happens, since the logical replication cannot move forward\n> the users need to disable parallel-apply mode or increase\n> logical_decoding_work_mem. We could describe this limitation in the\n> doc but it would be hard for users to detect problematic table\n> structure.\n\nInteresting case. So I think the root of the problem is the same as\nwhat we have when a column is marked unique on the subscriber but not\non the publisher. In short, two transactions which are independent of\neach other on the publisher are dependent on each other on the\nsubscriber side because the table definition is different on the\nsubscriber. So can't we handle this case in the same way by marking\nthis table unsafe for parallel-apply?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:22:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 18, 2022 at 5:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 6, 2022 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> > While looking at v35 patch, I realized that there are some cases where\n> > the logical replication gets stuck depending on partitioned table\n> > structure. For instance, there are following tables, publication, and\n> > subscription:\n> >\n> > * On publisher\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values in (1);\n> > create table c2 (c int);\n> > create publication test_pub for table p, c1, c2 with\n> > (publish_via_partition_root = 'true');\n> >\n> > * On subscriber\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values in (2);\n> > create table c2 partition of p for values in (1);\n> > create subscription test_sub connection 'port=5551 dbname=postgres'\n> > publication test_pub with (streaming = 'parallel', copy_data =\n> > 'false');\n> >\n> > Note that while both the publisher and the subscriber have the same\n> > name tables the partition structure is different and rows go to a\n> > different table on the subscriber (eg, row c=1 will go to c2 table on\n> > the subscriber). 
So can't we handle this case in the same way by marking\nthis table unsafe for parallel-apply?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:22:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 18, 2022 at 5:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 6, 2022 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> > While looking at v35 patch, I realized that there are some cases where\n> > the logical replication gets stuck depending on partitioned table\n> > structure. For instance, there are following tables, publication, and\n> > subscription:\n> >\n> > * On publisher\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values in (1);\n> > create table c2 (c int);\n> > create publication test_pub for table p, c1, c2 with\n> > (publish_via_partition_root = 'true');\n> >\n> > * On subscriber\n> > create table p (c int) partition by list (c);\n> > create table c1 partition of p for values In (2);\n> > create table c2 partition of p for values In (1);\n> > create subscription test_sub connection 'port=5551 dbname=postgres'\n> > publication test_pub with (streaming = 'parallel', copy_data =\n> > 'false');\n> >\n> > Note that while both the publisher and the subscriber have the same\n> > name tables the partition structure is different and rows go to a\n> > different table on the subscriber (eg, row c=1 will go to c2 table on\n> > the subscriber). 
If two current transactions are executed as follows,\n> > the apply worker (ig, the leader apply worker) waits for a lock on c2\n> > held by its parallel apply worker:\n> >\n> > * TX-1\n> > BEGIN;\n> > INSERT INTO p SELECT 1 FROM generate_series(1, 10000); --- changes are streamed\n> >\n> > * TX-2\n> > BEGIN;\n> > TRUNCATE c2; --- wait for a lock on c2\n> >\n> > * TX-1\n> > INSERT INTO p SELECT 1 FROM generate_series(1, 10000);\n> > COMMIT;\n> >\n> > This might not be a common case in practice but it could mean that\n> > there is a restriction on how partitioned tables should be structured\n> > on the publisher and the subscriber when using streaming = 'parallel'.\n> > When this happens, since the logical replication cannot move forward\n> > the users need to disable parallel-apply mode or increase\n> > logical_decoding_work_mem. We could describe this limitation in the\n> > doc but it would be hard for users to detect problematic table\n> > structure.\n>\n> Interesting case. So I think the root of the problem is the same as\n> what we have for a column is marked unique to the subscriber but not\n> to the publisher. In short, two transactions which are independent of\n> each other on the publisher are dependent on each other on the\n> subscriber side because table definition is different on the\n> subscriber. So can't we handle this case in the same way by marking\n> this table unsafe for parallel-apply?\n>\n\nYes, we can do that. I think Hou-San has already dealt that way in his\nlatest patch [1]. 
See his response in the email [1]: \"Disallow\nreplicating from or to a partitioned table in parallel streaming\nmode\".\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB57160760B34E1655718F4D1994249%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Oct 2022 18:24:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, October 18, 2022 10:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi, here are my review comments for patch v38-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 1. parallel_apply_start_worker\r\n> \r\n> + /* Try to get a free parallel apply worker. */ foreach(lc, \r\n> + ParallelApplyWorkersList) { ParallelApplyWorkerInfo *tmp_winfo;\r\n> +\r\n> + tmp_winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\r\n> +\r\n> + if (!tmp_winfo->in_use)\r\n> + {\r\n> + /* Found a worker that has not been assigned a transaction. */ winfo \r\n> + = tmp_winfo; break; } }\r\n> \r\n> The \"Found a worker...\" comment seems redundant because it's already \r\n> clear from the prior comment and the 'in_use' member what this code is \r\n> doing.\r\n\r\nRemoved.\r\n\r\n> ~~~\r\n> \r\n> 2. LogicalParallelApplyLoop\r\n> \r\n> + void *data;\r\n> + Size len;\r\n> + int c;\r\n> + int rc;\r\n> + StringInfoData s;\r\n> + MemoryContext oldctx;\r\n> \r\n> Several of these vars (like 'c', 'rc', 's') can be declared deeper - \r\n> e.g. only in the scope where they are actually used.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 3.\r\n> \r\n> + /* Ensure we are reading the data into our memory context. 
*/ oldctx \r\n> + = MemoryContextSwitchTo(ApplyMessageContext);\r\n> \r\n> Doesn't something need to switch back to this 'oldctx' prior to \r\n> breaking out of the for(;;) loop?\r\n> \r\n> ~~~\r\n> \r\n> 4.\r\n> \r\n> + apply_dispatch(&s);\r\n> +\r\n> + MemoryContextReset(ApplyMessageContext);\r\n> \r\n> Isn't this broken now? Since you've removed the \r\n> MemoryContextSwitchTo(oldctx), so next iteration will switch to \r\n> ApplyMessageContext again which will overwrite and lose knowledge of \r\n> the original 'oldctx' (??)\r\n\r\nSorry for the miss, fixed.\r\n\r\n> ~~\r\n> \r\n> 5.\r\n> \r\n> Maybe this is a silly idea, I'm not sure. Because this is an infinite \r\n> loop, then instead of the multiple calls to\r\n> MemoryContextReset(ApplyMessageContext) maybe there can be just a \r\n> single call to it immediately before you switch to that context in the \r\n> first place. The effect will be the same, won't it?\r\n> \r\n> e.g.\r\n> + /* Ensure we are reading the data into our memory context. */ \r\n> + MemoryContextReset(ApplyMessageContext); <=== THIS oldctx = \r\n> + MemoryContextSwitchTo(ApplyMessageContext);\r\n\r\nIn SHM_MQ_WOULD_BLOCK branch, we would invoke WaitLatch, so I feel we'd better\r\nreset the memory context before waiting to avoid keeping no longer useful\r\nmemory context for more time (although it doesn’t matter too much in practice).\r\nSo, I didn't change this for now.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> \r\n> The code logic keeps flip-flopping for several versions. I think if \r\n> you are going to check all the return types of shm_mq_receive then \r\n> using a switch(shmq_res) might be a better way than having multiple \r\n> if/else with some Asserts.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/launcher.c\r\n> \r\n> 7. 
logicalrep_worker_launch\r\n> \r\n> Previously I'd suggested ([1] #12) that the process name should change \r\n> for consistency, and AFAIK Amit also said [2] that would be OK, but \r\n> this change is still not done in the current patch.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 8. should_apply_changes_for_rel\r\n> \r\n> * Should this worker apply changes for given relation.\r\n> *\r\n> * This is mainly needed for initial relation data sync as that runs \r\n> in\r\n> * separate worker process running in parallel and we need some way to \r\n> skip\r\n> * changes coming to the main apply worker during the sync of a table.\r\n> \r\n> This existing comment refers to the \"main apply worker\". IMO it should \r\n> say \"leader apply worker\" to keep all the terminology consistent.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 9. apply_handle_stream_start\r\n> \r\n> + *\r\n> + * XXX We can avoid sending pairs of the START/STOP messages to the \r\n> + parallel\r\n> + * worker because unlike apply worker it will process only one \r\n> + transaction at a\r\n> + * time. However, it is not clear whether that is worth the effort \r\n> + because it\r\n> + * is sent after logical_decoding_work_mem changes.\r\n> */\r\n> static void\r\n> apply_handle_stream_start(StringInfo s)\r\n> \r\n> As previously mentioned ([1] #13b) it's not obvious to me what that \r\n> last sentence means. e.g. \"because it is sent\" - what is \"it\"?\r\n\r\nChanged as per Amit's suggestion in [1].\r\n\r\n> ~~~\r\n> \r\n> 11.\r\n> \r\n> + /*\r\n> + * Assign the appropriate streaming flag according to the 'streaming' \r\n> + mode\r\n> + * and the publisher's ability to support that mode.\r\n> + */\r\n> \r\n> Maybe \"streaming flag\" -> \"streaming string/flag\". 
(sorry, it was my \r\n> bad suggestion last time)\r\n\r\nImproved.\r\n\r\nAttached is the new version of the patch set.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BqwbD419%3DKgRTLRVj5zQhbM%3Dbfi-cvWG3HkORktb4-YA%40mail.gmail.com\r\n\r\nBest Regards\r\nHou Zhijie", "msg_date": "Wed, 19 Oct 2022 06:46:46 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for updating the patch! Following are my comments.\r\n\r\n===\r\n01. applyparallelworker.c - SIZE_STATS_MESSAGE\r\n\r\n```\r\n/*\r\n * There are three fields in each message received by the parallel apply\r\n * worker: start_lsn, end_lsn and send_time. Because we have updated these\r\n * statistics in the leader apply worker, we can ignore these fields in the\r\n * parallel apply worker (see function LogicalRepApplyLoop).\r\n */\r\n#define SIZE_STATS_MESSAGE (2 * sizeof(XLogRecPtr) + sizeof(TimestampTz))\r\n```\r\n\r\nAccording to other comment styles, it seems that the first sentence of the comment should\r\nrepresent the datatype and usage, not the detailed reason.\r\nFor example, about ParallelApplyWorkersList, you said \"A list ...\". How about adding something like the following:\r\nThe message size that can be skipped by parallel apply worker\r\n\r\n\r\n~~~\r\n02. applyparallelworker.c - parallel_apply_start_subtrans\r\n\r\n```\r\n\tif (current_xid != top_xid &&\r\n\t\t!list_member_xid(subxactlist, current_xid))\r\n```\r\n\r\nA macro TransactionIdEquals is defined in access/transam.h. Should we use it, or is it too trivial?\r\n\r\n\r\n~~~\r\n03. 
applyparallelworker.c - LogicalParallelApplyLoop\r\n\r\n```\r\n\t\t\tcase SHM_MQ_WOULD_BLOCK:\r\n\t\t\t\t{\r\n\t\t\t\t\tint\t\t\trc;\r\n\r\n\t\t\t\t\tif (!in_streamed_transaction)\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\t/*\r\n\t\t\t\t\t\t * If we didn't get any transactions for a while there might be\r\n\t\t\t\t\t\t * unconsumed invalidation messages in the queue, consume them\r\n\t\t\t\t\t\t * now.\r\n\t\t\t\t\t\t */\r\n\t\t\t\t\t\tAcceptInvalidationMessages();\r\n\t\t\t\t\t\tmaybe_reread_subscription();\r\n\t\t\t\t\t}\r\n\r\n\t\t\t\t\tMemoryContextReset(ApplyMessageContext);\r\n```\r\n\r\nIs MemoryContextReset() needed? IIUC no one uses ApplyMessageContext if we reach here.\r\n\r\n\r\n~~~\r\n04. applyparallelworker.c - HandleParallelApplyMessages\r\n\r\n```\r\n\t\telse if (res == SHM_MQ_SUCCESS)\r\n\t\t{\r\n\t\t\tStringInfoData msg;\r\n\r\n\t\t\tinitStringInfo(&msg);\r\n\t\t\tappendBinaryStringInfo(&msg, data, nbytes);\r\n\t\t\tHandleParallelApplyMessage(winfo, &msg);\r\n\t\t\tpfree(msg.data);\r\n\t\t}\r\n```\r\n\r\nIn LogicalParallelApplyLoop(), appendBinaryStringInfo() is not used\r\nbut the StringInfoData is initialized directly. Why is there a difference?\r\nThe function will do repalloc() and memcpy(), so it may be inefficient.\r\n\r\n\r\n~~~\r\n05. applyparallelworker.c - parallel_apply_send_data\r\n\r\n```\r\n\tif (result != SHM_MQ_SUCCESS)\r\n\t\tereport(ERROR,\r\n\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n\t\t\t\t errmsg(\"could not send data to shared-memory queue\")));\r\n\r\n```\r\n\r\nI checked the enumeration of shm_mq_result, and I felt that shm_mq_send(nowait = false) fails\r\nonly when the opposite process has exited.\r\nHow about adding a hint or detailed message like \"lost connection to parallel apply worker\"?\r\n\r\n\r\n===\r\n06. 
worker.c - nchanges\r\n\r\n```\r\n/*\r\n * The number of changes sent to parallel apply workers during one streaming\r\n * block.\r\n */\r\nstatic uint32 nchanges = 0;\r\n```\r\n\r\nI found that the name \"nchanges\" has already been used in apply_spooled_messages().\r\nIt works well because the local variable is always used\r\nwhen a name collision between local and global variables occurs, but I think it may be confusing.\r\n\r\n\r\n~~~\r\n07. worker.c - apply_handle_commit_internal\r\n\r\nI think we can add an assertion like Assert(replorigin_session_origin_lsn != InvalidXLogRecPtr && replorigin_session_origin != InvalidRepOriginId),\r\nto avoid missing replorigin_session_setup. Previously it was set at the entry point and never reset.\r\n\r\n\r\n~~~\r\n08. worker.c - apply_handle_prepare_internal\r\n\r\nSame as above.\r\n\r\n\r\n~~~\r\n09. worker.c - maybe_reread_subscription\r\n\r\n```\r\n\t/*\r\n\t * Exit if any parameter that affects the remote connection was changed.\r\n\t * The launcher will start a new worker.\r\n\t */\r\n\tif (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||\r\n\t\tstrcmp(newsub->name, MySubscription->name) != 0 ||\r\n\t\tstrcmp(newsub->slotname, MySubscription->slotname) != 0 ||\r\n\t\tnewsub->binary != MySubscription->binary ||\r\n\t\tnewsub->stream != MySubscription->stream ||\r\n\t\tstrcmp(newsub->origin, MySubscription->origin) != 0 ||\r\n\t\tnewsub->owner != MySubscription->owner ||\r\n\t\t!equal(newsub->publications, MySubscription->publications))\r\n\t{\r\n\t\tereport(LOG,\r\n\t\t\t\t(errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will restart because of a parameter change\",\r\n\t\t\t\t\t\tMySubscription->name)));\r\n\r\n\t\tproc_exit(0);\r\n\t}\r\n```\r\n\r\nWhen the parallel apply worker has been launched and then the subscription option has been modified,\r\nthe same message will appear twice.\r\nBut if the option \"streaming\" is changed from \"parallel\" to \"on\", one of them will not 
restart again.\r\nShould we modify the message?\r\n\r\n\r\n===\r\n10. general\r\n\r\nIIUC parallel apply workers cannot detect a deadlock automatically, right?\r\nI thought we might be able to use a heartbeat protocol between the leader worker and parallel workers.\r\n \r\nYou have already implemented a mechanism to send and receive messages between workers.\r\nMy idea is that each parallel apply worker records a timestamp when it gets a message from the leader\r\nand if a certain time (30s?) has passed it sends a heartbeat message like 'H'.\r\nThe leader consumes 'H' and sends a reply like LOGICAL_REP_MSG_KEEPALIVE in HandleParallelApplyMessage().\r\nIf the parallel apply worker does not receive any message for more than one minute,\r\nit assumes that a deadlock has occurred and can change the retry flag to on and exit.\r\n\r\nThe above assumes that the leader cannot reply to the message while waiting for the lock.\r\nMoreover, it may have notable overhead and we must use a new logical replication message type.\r\n\r\nWhat do you think? Have you already considered this?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 19 Oct 2022 12:49:52 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 18, 2022 at 6:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Interesting case. So I think the root of the problem is the same as\n> > what we have for a column is marked unique to the subscriber but not\n> > to the publisher. In short, two transactions which are independent of\n> > each other on the publisher are dependent on each other on the\n> > subscriber side because table definition is different on the\n> > subscriber. 
So can't we handle this case in the same way by marking\n> > this table unsafe for parallel-apply?\n> >\n>\n> Yes, we can do that. I think Hou-San has already dealt that way in his\n> latest patch [1]. See his response in the email [1]: \"Disallow\n> replicating from or to a partitioned table in parallel streaming\n> mode\".\n>\n> [1] - https://www.postgresql.org/message-id/OS0PR01MB57160760B34E1655718F4D1994249%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nOkay, somehow I missed the latest email. I will look into it soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Oct 2022 08:48:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, here are my review comments for the patch v39-0001\n\n======\n\nsrc/backend/libpq/pqmq.c\n\n1. mq_putmessage\n\n+ if (IsParallelWorker())\n+ SendProcSignal(pq_mq_parallel_leader_pid,\n+ PROCSIG_PARALLEL_MESSAGE,\n+ pq_mq_parallel_leader_backend_id);\n+ else\n+ {\n+ Assert(IsLogicalParallelApplyWorker());\n+ SendProcSignal(pq_mq_parallel_leader_pid,\n+ PROCSIG_PARALLEL_APPLY_MESSAGE,\n+ pq_mq_parallel_leader_backend_id);\n+ }\n\nThe generically named macro (IsParallelWorker) makes it seem like a\nparallel apply worker is NOT a kind of parallel worker (e.g. it is in\nthe 'else'), which seems odd. But I am not sure what you can do to\nimprove this... e.g. reversing the if/else might look logically saner,\nbut might also be less efficient for the IsParallelWorker case (??)\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n2. LogicalParallelApplyLoop\n\n+ /* Ensure we are reading the data into our memory context. 
*/\n+ (void) MemoryContextSwitchTo(ApplyMessageContext);\n\nWhy did you use the (void) cast for this MemoryContextSwitchTo but not\nfor the next one later in the same function?\n\n~~~\n\n3.\n\n+ if (len == 0)\n+ break;\n\nAs mentioned in my previous review ([1] #3), we are still in the\nApplyMessageContext here. Shouldn't the code be switching to the\nprevious context before escaping from the loop?\n\n~~~\n\n4.\n\n+ switch (shmq_res)\n+ {\n+ case SHM_MQ_SUCCESS:\n+ {\n+ StringInfoData s;\n+ int c;\n+\n+ if (len == 0)\n+ break;\n\nI think this introduces a subtle bug.\n\nIIUC the intent of the \"break\" when len == 0 is to escape from the\nloop. But now, this will only break from the switch case. So, it looks\nlike you need some kind of loop \"done\" flag, or maybe have to revert\nback to using if/else to fix this.\n\n~~~\n\n5.\n\n+ /*\n+ * The first byte of message for additional communication between\n+ * leader apply worker and parallel apply workers can only be 'w'.\n+ */\n+ c = pq_getmsgbyte(&s);\n\nWhy does it refer to \"additional communication\"? Isn’t it enough just\nto say something like below:\n\nSUGGESTION\nThe first byte of messages sent from leader apply worker to parallel\napply workers can only be 'w'.\n\n~~~\n\nsrc/backend/replication/logical/worker.c\n\n6. apply_handle_stream_start\n\n+ *\n+ * XXX We can avoid sending pairs of the START/STOP messages to the parallel\n+ * worker because unlike apply worker it will process only one transaction at a\n+ * time. However, it is not clear whether that is worth the effort because\n+ * these messages are sent after logical_decoding_work_mem changes.\n */\n static void\n apply_handle_stream_start(StringInfo s)\n\n\nI don't know what the \"changes\" part means. IIUC, the meaning of the\nlast sentence is like below:\n\nSUGGESTION\nHowever, it is not clear whether any optimization is worthwhile\nbecause these messages are sent only when the\nlogical_decoding_work_mem threshold is exceeded.\n\n~~~\n\n7. 
get_transaction_apply_action\n\n> 12. get_transaction_apply_action\n>\n> I still felt like there should be some tablesync checks/comments in\n> this function, just for sanity, even if it works as-is now.\n>\n> For example, are you saying ([3] #22b) that there might be rare cases\n> where a Tablesync would call to parallel_apply_find_worker? That seems\n> strange, given that \"for streaming transactions that are being applied\n> in the parallel ... we disallow applying changes on a table that is\n> not in the READY state\".\n>\n> ------\n\nHouz wrote [2] -\n\nI think because we won't try to start parallel apply worker in table sync\nworker(see the check in parallel_apply_can_start()), so we won't find any\nworker in parallel_apply_find_worker() which means get_transaction_apply_action\nwill return TRANS_LEADER_SERIALIZE. And get_transaction_apply_action is a\nfunction which can be invoked for all kinds of workers(same is true for all\napply_handle_xxx functions), so not sure if table sync check/comment is\nnecessary.\n\n~\n\nSure, and I believe you when you say it all works OK - but IMO there\nis something still not quite right with this current code. 
For\nexample,\n\ne.g.1 the function will return TRANS_LEADER_SERIALIZE for Tablesync\nworker, and yet the comment for TRANS_LEADER_SERIALIZE says \"means\nthat we are in the leader apply worker\" (except we are not)\n\ne.g.2 we know for a fact that Tablesync workers cannot start their own\nparallel apply workers, so then why do we even let the Tablesync\nworker make a call to parallel_apply_find_worker() looking for\nsomething we know will not be found?\n\n------\n[1] My review of v38-0001 -\nhttps://www.postgresql.org/message-id/CAHut%2BPsY0aevdVqeCUJOrRQMrwpg5Wz3-Mo%2BbU%3DmCxW2%2B9EBTg%40mail.gmail.com\n[2] Houz reply for my review v38 -\nhttps://www.postgresql.org/message-id/OS0PR01MB5716D738A8F27968806957B194289%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 20 Oct 2022 19:38:02 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, October 19, 2022 8:50 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n\r\nThanks for the comments.\r\n\r\n> 03. applyparallelworker.c - LogicalParallelApplyLoop\r\n> \r\n> ```\r\n> \t\t\tcase SHM_MQ_WOULD_BLOCK:\r\n> \t\t\t\t{\r\n> \t\t\t\t\tint\t\t\trc;\r\n> \r\n> \t\t\t\t\tif (!in_streamed_transaction)\r\n> \t\t\t\t\t{\r\n> \t\t\t\t\t\t/*\r\n> \t\t\t\t\t\t * If we didn't get any transactions for a while there might be\r\n> \t\t\t\t\t\t * unconsumed invalidation messages in the queue, consume them\r\n> \t\t\t\t\t\t * now.\r\n> \t\t\t\t\t\t */\r\n> \t\t\t\t\t\tAcceptInvalidationMessages();\r\n> \t\t\t\t\t\tmaybe_reread_subscription();\r\n> \t\t\t\t\t}\r\n> \r\n> \t\t\t\t\tMemoryContextReset(ApplyMessageContext);\r\n> ```\r\n> \r\n> Is MemoryContextReset() needed? 
IIUC no one uses ApplyMessageContext if we reach here.\r\n\r\nI was concerned that some code at a deeper level might allocate some memory, as\r\nthere are lots of functions that could be invoked in the loop (for example,\r\nthe functions in ProcessInterrupts()). Although it might not matter in\r\npractice, I think it is better to reset here to make it robust. Besides,\r\nthe code seems consistent with the logic in LogicalRepApplyLoop.\r\n\r\n> 04. applyparallelworker.c - HandleParallelApplyMessages\r\n> \r\n> ```\r\n> \t\telse if (res == SHM_MQ_SUCCESS)\r\n> \t\t{\r\n> \t\t\tStringInfoData msg;\r\n> \r\n> \t\t\tinitStringInfo(&msg);\r\n> \t\t\tappendBinaryStringInfo(&msg, data, nbytes);\r\n> \t\t\tHandleParallelApplyMessage(winfo, &msg);\r\n> \t\t\tpfree(msg.data);\r\n> \t\t}\r\n> ```\r\n> \r\n> In LogicalParallelApplyLoop(), appendBinaryStringInfo() is not used but\r\n> the StringInfoData is initialized directly. Why is there a difference? The\r\n> function will do repalloc() and memcpy(), so it may be inefficient.\r\n\r\nI think both styles are fine; the code in HandleParallelApplyMessages is kept\r\nconsistent with the similar function HandleParallelMessages(), which is not a\r\nperformance-sensitive function.\r\n\r\n\r\n> 05. applyparallelworker.c - parallel_apply_send_data\r\n> \r\n> ```\r\n> \tif (result != SHM_MQ_SUCCESS)\r\n> \t\tereport(ERROR,\r\n> \t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> \t\t\t\t errmsg(\"could not send data to shared-memory queue\")));\r\n> \r\n> ```\r\n> \r\n> I checked the enumeration of shm_mq_result, and I felt that shm_mq_send(nowait\r\n> = false) fails only when the opposite process has exited. How about adding a\r\n> hint or detailed message like \"lost connection to parallel apply worker\"?\r\n\r\nThanks for analyzing, but I am not sure if \"lost connection to xx worker\" is an\r\nappropriate errhint or detail. The current error message looks clear to me.\r\n\r\n\r\n> 07. 
worker.c - apply_handle_commit_internal\r\n> \r\n> I think we can add an assertion like Assert(replorigin_session_origin_lsn !=\r\n> InvalidXLogRecPtr && replorigin_session_origin != InvalidRepOriginId), to\r\n> avoid missing replorigin_session_setup. Previously it was set at the entry\r\n> point and never reset.\r\n\r\nI feel adding the assert for replorigin_session_origin is fine here. For\r\nreplorigin_session_origin_lsn, I am not sure if it looks better to check here as\r\nwe need to distinguish the case for streaming=on and streaming=parallel if we\r\nwant to check that.\r\n\r\n\r\n> 10. general\r\n> \r\n> IIUC parallel apply workers cannot detect a deadlock automatically,\r\n> right? I thought we might be able to use a heartbeat protocol between the\r\n> leader worker and parallel workers.\r\n> \r\n> You have already implemented a mechanism to send and receive messages between\r\n> workers. My idea is that each parallel apply worker records a timestamp when\r\n> it gets a message from the leader and if a certain time (30s?) has passed it\r\n> sends a heartbeat message like 'H'. The leader consumes 'H' and sends a reply\r\n> like LOGICAL_REP_MSG_KEEPALIVE in HandleParallelApplyMessage(). If the\r\n> parallel apply worker does not receive any message for more than one minute,\r\n> it assumes that a deadlock has occurred and can change the retry flag to on\r\n> and exit.\r\n> \r\n> The above assumes that the leader cannot reply to the message while waiting\r\n> for the lock. Moreover, it may have notable overhead and we must use a new\r\n> logical replication message type.\r\n> \r\n> What do you think? Have you already considered this?\r\n\r\nThanks for the suggestion. But we are trying to detect this kind of problem before\r\nthis problematic case happens and disallow parallelism in these cases by\r\nchecking the unique/constr/partitioned... in 0003/0004 patch.\r\n\r\nAbout the keepalive design. 
We could do that, but the leader could also be\r\nblocked by some other user backend, so this design might cause the worker to\r\nerror out in some unexpected cases which seems not great.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 20 Oct 2022 09:38:23 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 20, 2022 at 2:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 7. get_transaction_apply_action\n>\n> > 12. get_transaction_apply_action\n> >\n> > I still felt like there should be some tablesync checks/comments in\n> > this function, just for sanity, even if it works as-is now.\n> >\n> > For example, are you saying ([3] #22b) that there might be rare cases\n> > where a Tablesync would call to parallel_apply_find_worker? That seems\n> > strange, given that \"for streaming transactions that are being applied\n> > in the parallel ... we disallow applying changes on a table that is\n> > not in the READY state\".\n> >\n> > ------\n>\n> Houz wrote [2] -\n>\n> I think because we won't try to start parallel apply worker in table sync\n> worker(see the check in parallel_apply_can_start()), so we won't find any\n> worker in parallel_apply_find_worker() which means get_transaction_apply_action\n> will return TRANS_LEADER_SERIALIZE. And get_transaction_apply_action is a\n> function which can be invoked for all kinds of workers(same is true for all\n> apply_handle_xxx functions), so not sure if table sync check/comment is\n> necessary.\n>\n> ~\n>\n> Sure, and I believe you when you say it all works OK - but IMO there\n> is something still not quite right with this current code. 
For\n> example,\n>\n> e.g.1 the functional will return TRANS_LEADER_SERIALIZE for Tablesync\n> worker, and yet the comment for TRANS_LEADER_SERIALIZE says \"means\n> that we are in the leader apply worker\" (except we are not)\n>\n> e.g.2 we know for a fact that Tablesync workers cannot start their own\n> parallel apply workers, so then why do we even let the Tablesync\n> worker make a call to parallel_apply_find_worker() looking for\n> something we know will not be found?\n>\n\nI don't see much benefit in adding an additional check for tablesync\nworkers here. It will unnecessarily make this part of the code look\nbit ugly.\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Oct 2022 15:19:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, October 19, 2022 8:50 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> ===\r\n> 01. applyparallelworker.c - SIZE_STATS_MESSAGE\r\n> \r\n> ```\r\n> /*\r\n> * There are three fields in each message received by the parallel apply\r\n> * worker: start_lsn, end_lsn and send_time. Because we have updated these\r\n> * statistics in the leader apply worker, we can ignore these fields in the\r\n> * parallel apply worker (see function LogicalRepApplyLoop).\r\n> */\r\n> #define SIZE_STATS_MESSAGE (2 * sizeof(XLogRecPtr) + sizeof(TimestampTz))\r\n> ```\r\n> \r\n> According to other comment styles, it seems that the first sentence of the\r\n> comment should represent the datatype and usage, not the detailed reason.\r\n> For example, about ParallelApplyWorkersList, you said \"A list ...\". How about\r\n> adding like following message:\r\n> The message size that can be skipped by parallel apply worker\r\n\r\nThanks for the comments, but the current description seems enough to me.\r\n\r\n> ~~~\r\n> 02. 
applyparallelworker.c - parallel_apply_start_subtrans\r\n> \r\n> ```\r\n> \tif (current_xid != top_xid &&\r\n> \t\t!list_member_xid(subxactlist, current_xid)) ```\r\n> \r\n> A macro TransactionIdEquals is defined in access/transam.h. Should we use it,\r\n> or is it too trivial?\r\n\r\nI checked the existing codes, it seems both style are being used.\r\nMaybe we can post a separate patch to replace them later.\r\n\r\n> ~~~\r\n> 08. worker.c - apply_handle_prepare_internal\r\n> \r\n> Same as above.\r\n> \r\n> \r\n> ~~~\r\n> 09. worker.c - maybe_reread_subscription\r\n> \r\n> ```\r\n> \t/*\r\n> \t * Exit if any parameter that affects the remote connection was\r\n> changed.\r\n> \t * The launcher will start a new worker.\r\n> \t */\r\n> \tif (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||\r\n> \t\tstrcmp(newsub->name, MySubscription->name) != 0 ||\r\n> \t\tstrcmp(newsub->slotname, MySubscription->slotname) != 0 ||\r\n> \t\tnewsub->binary != MySubscription->binary ||\r\n> \t\tnewsub->stream != MySubscription->stream ||\r\n> \t\tstrcmp(newsub->origin, MySubscription->origin) != 0 ||\r\n> \t\tnewsub->owner != MySubscription->owner ||\r\n> \t\t!equal(newsub->publications, MySubscription->publications))\r\n> \t{\r\n> \t\tereport(LOG,\r\n> \t\t\t\t(errmsg(\"logical replication apply worker for\r\n> subscription \\\"%s\\\" will restart because of a parameter change\",\r\n> \t\t\t\t\t\tMySubscription->name)));\r\n> \r\n> \t\tproc_exit(0);\r\n> \t}\r\n> ```\r\n> \r\n> When the parallel apply worker has been launched and then the subscription\r\n> option has been modified, the same message will appear twice.\r\n> But if the option \"streaming\" is changed from \"parallel\" to \"on\", one of them\r\n> will not restart again.\r\n> Should we modify message?\r\n\r\nThanks, it seems a timing problem, if the leader catch the change first and stop\r\nthe parallel workers, the message will only appear once. But I agree we'd\r\nbetter make the message clear. 
I changed the message in parallel apply worker.\r\nWhile on it, I also adjusted some other message to include \"parallel apply\r\nworker\" if they are in parallel apply worker.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Fri, 21 Oct 2022 09:31:42 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, October 20, 2022 5:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Oct 20, 2022 at 2:08 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > 7. get_transaction_apply_action\r\n> >\r\n> > > 12. get_transaction_apply_action\r\n> > >\r\n> > > I still felt like there should be some tablesync checks/comments in\r\n> > > this function, just for sanity, even if it works as-is now.\r\n> > >\r\n> > > For example, are you saying ([3] #22b) that there might be rare\r\n> > > cases where a Tablesync would call to parallel_apply_find_worker?\r\n> > > That seems strange, given that \"for streaming transactions that are\r\n> > > being applied in the parallel ... we disallow applying changes on a\r\n> > > table that is not in the READY state\".\r\n> > >\r\n> > > ------\r\n> >\r\n> > Houz wrote [2] -\r\n> >\r\n> > I think because we won't try to start parallel apply worker in table\r\n> > sync worker(see the check in parallel_apply_can_start()), so we won't\r\n> > find any worker in parallel_apply_find_worker() which means\r\n> > get_transaction_apply_action will return TRANS_LEADER_SERIALIZE. And\r\n> > get_transaction_apply_action is a function which can be invoked for\r\n> > all kinds of workers(same is true for all apply_handle_xxx functions),\r\n> > so not sure if table sync check/comment is necessary.\r\n> >\r\n> > ~\r\n> >\r\n> > Sure, and I believe you when you say it all works OK - but IMO there\r\n> > is something still not quite right with this current code. 
For\r\n> > example,\r\n> >\r\n> > e.g.1 the functional will return TRANS_LEADER_SERIALIZE for Tablesync\r\n> > worker, and yet the comment for TRANS_LEADER_SERIALIZE says \"means\r\n> > that we are in the leader apply worker\" (except we are not)\r\n> >\r\n> > e.g.2 we know for a fact that Tablesync workers cannot start their own\r\n> > parallel apply workers, so then why do we even let the Tablesync\r\n> > worker make a call to parallel_apply_find_worker() looking for\r\n> > something we know will not be found?\r\n> >\r\n> \r\n> I don't see much benefit in adding an additional check for tablesync workers\r\n> here. It will unnecessarily make this part of the code look bit ugly.\r\n\r\nThanks for the review, here is the new version patch set which addressed Peter[1]\r\nand Kuroda-san[2]'s comments.\r\n\r\n[1] https://www.postgresql.org/message-id/CAHut%2BPs0HXawMD%3DzQ5YUncc9kjGy%2Bmd_39Y4Fdf%3DsKjt-LE92g%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/TYAPR01MB586674C1EE91C06DBACE7728F52B9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 21 Oct 2022 09:32:20 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v40-0001.\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n\n1. 
should_apply_changes_for_rel\n\n+ else if (am_parallel_apply_worker())\n+ {\n+ if (rel->state != SUBREL_STATE_READY)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will stop\",\n+ MySubscription->name),\n+ errdetail(\"Cannot handle streamed replication transaction using parallel \"\n+ \"apply workers until all tables are synchronized.\")));\n\n1a.\n\"transaction\" -> \"transactions\"\n\n1b.\n\"are synchronized\" -> \"have been synchronized.\"\n\ne.g. \"Cannot handle streamed replication transactions using parallel\napply workers until all tables have been synchronized.\"\n\n~~~\n\n2. maybe_reread_subscription\n\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will \"\n+ \"stop because the subscription was removed\",\n+ MySubscription->name)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\n+ \"stop because the subscription was removed\",\n+ MySubscription->name)));\n\nMaybe there is an easier way to code this instead of if/else and\ncut/paste message text:\n\nSUGGESTION\n\nereport(LOG,\n(errmsg(\"logical replication %s for subscription \\\"%s\\\" will stop\nbecause the subscription was removed\",\nam_parallel_apply_worker() ? 
\"parallel apply worker\" : \"apply worker\",\nMySubscription->name)));\n~~~\n\n3.\n\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will \"\n+ \"stop because the subscription was disabled\",\n+ MySubscription->name)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\n+ \"stop because the subscription was disabled\",\n+ MySubscription->name)));\n\nThese can be combined like comment #2 above\n\nSUGGESTION\n\nereport(LOG,\n(errmsg(\"logical replication %s for subscription \\\"%s\\\" will stop\nbecause the subscription was disabled\",\nam_parallel_apply_worker() ? \"parallel apply worker\" : \"apply worker\",\nMySubscription->name)));\n\n~~~\n\n4.\n\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will stop because of a parameter change\",\n+ MySubscription->name)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply worker for subscription \\\"%s\\\"\nwill restart because of a parameter change\",\n+ MySubscription->name)));\n\nThese can be combined like comment #2 above\n\nSUGGESTION\n\nereport(LOG,\n(errmsg(\"logical replication %s for subscription \\\"%s\\\" will restart\nbecause of a parameter change\",\nam_parallel_apply_worker() ? \"parallel apply worker\" : \"apply worker\",\nMySubscription->name)));\n\n~~~~\n\n4. 
InitializeApplyWorker\n\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel apply worker for subscription\n%u will not \"\n+ \"start because the subscription was removed during startup\",\n+ MyLogicalRepWorker->subid)));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"logical replication apply worker for subscription %u will not \"\n+ \"start because the subscription was removed during startup\",\n+ MyLogicalRepWorker->subid)));\n\nThese can be combined like comment #2 above\n\nSUGGESTION\n\nereport(LOG,\n(errmsg(\"logical replication %s for subscription %u will not start\nbecause the subscription was removed during startup\",\nam_parallel_apply_worker() ? \"parallel apply worker\" : \"apply worker\",\nMyLogicalRepWorker->subid)));\n\n~~~\n\n5.\n\n+ else if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" has started\",\n+ MySubscription->name)));\n else\n ereport(LOG,\n (errmsg(\"logical replication apply worker for subscription \\\"%s\\\"\nhas started\",\n MySubscription->name)));\n\n\nThe last if/else can be combined same as comment #2 above\n\nSUGGESTION\n\n else\n ereport(LOG,\n (errmsg(\"logical replication %s for subscription \\\"%s\\\" has started\",\nam_parallel_apply_worker() ? \"parallel apply worker\" : \"apply worker\",\nMySubscription->name)));\n\n~~~\n\n6. IsLogicalParallelApplyWorker\n\n+bool\n+IsLogicalParallelApplyWorker(void)\n+{\n+ return IsLogicalWorker() && am_parallel_apply_worker();\n+}\n\nPatch v40 added the IsLogicalWorker() to the condition, but why is\nthat extra check necessary?\n\n======\n\n7. 
src/include/replication/worker_internal.h\n\n+typedef struct ParallelApplyWorkerInfo\n+{\n+ shm_mq_handle *mq_handle;\n+\n+ /*\n+ * The queue used to transfer messages from the parallel apply worker to\n+ * the leader apply worker.\n+ */\n+ shm_mq_handle *error_mq_handle;\n\nIn patch v40 the comment about the NULL error_mq_handle is removed,\nbut since the code still explicitly set/checks NULL in different\nplaces isn't it still better to have some comment here to describe\nwhat NULL means?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 24 Oct 2022 17:11:38 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 21, 2022 at 3:02 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nFew comments on the 0001 and 0003 patches:\n\nv40-0001*\n==========\n1.\n+ /*\n+ * The queue used to transfer messages from the parallel apply worker to\n+ * the leader apply worker.\n+ */\n+ shm_mq_handle *error_mq_handle;\n\nShall we say error messages instead of messages?\n\n2.\n+/*\n+ * Is there a message pending in parallel apply worker which we need to\n+ * receive?\n+ */\n+volatile sig_atomic_t ParallelApplyMessagePending = false;\n\nCan we slightly change above comment to: \"Is there a message sent by\nparallel apply worker which we need to receive?\"\n\n3.\n+\n+ ThrowErrorData(&edata);\n+\n+ /* Should not reach here after rethrowing an error. */\n+ error_context_stack = save_error_context_stack;\n\nShould we instead do Assert(false) after ThrowErrorData?\n\n4.\n+ * apply worker (c) necessary information to be shared among parallel apply\n+ * workers and leader apply worker (i.e. in_parallel_apply_xact flag and the\n+ * corresponding LogicalRepWorker slot information).\n\nI don't think here the comment needs to exactly say which variables\nare shared. 
necessary information to synchronize between parallel\napply workers and leader apply worker.\n\n5.\n+ * The dynamic shared memory segment will contain (a) a shm_mq that can be\n+ * used to send changes in the transaction from leader apply worker to parallel\n+ * apply worker (b) another shm_mq that can be used to send errors\n\nIn both (a) and (b), instead of \"can be\", we can use \"is\".\n\n6.\nNote that we cannot skip the streaming transactions when using\n+ * parallel apply workers because we cannot get the finish LSN before\n+ * applying the changes.\n\nThis comment is unclear about the action of parallel apply worker when\nfinish LSN is set. We can add something like: \"So, we don't start\nparallel apply worker when finish LSN is set by the user.\"\n\nv40-0003\n==========\n7. The function RelationGetUniqueKeyBitmap() should be defined in\nrelcache.c next to RelationGetIdentityKeyBitmap().\n\n8.\n+RelationGetUniqueKeyBitmap(Relation rel)\n{\n...\n+ if (!rel->rd_rel->relhasindex)\n+ return NULL;\n\nIt would be better to use \"if\n(!RelationGetForm(relation)->relhasindex)\" so as to be consistent with\nsimilar usage in RelationGetIdentityKeyBitmap.\n\n9. 
In RelationGetUniqueKeyBitmap(), we must assert here that the\nhistoric snapshot is set as we are not taking a lock on index rels.\nThe same is already ensured in RelationGetIdentityKeyBitmap(), is\nthere a reason to be different here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Oct 2022 17:04:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > About your point that having different partition structures for\n> > > publisher and subscriber, I don't know how common it will be once we\n> > > have DDL replication. Also, the default value of\n> > > publish_via_partition_root is false which doesn't seem to indicate\n> > > that this is a quite common case.\n> >\n> > So how can we consider these concurrent issues that could happen only\n> > when streaming = 'parallel'? Can we restrict some use cases to avoid\n> > the problem or can we have a safeguard against these conflicts?\n> >\n>\n> Yeah, right now the strategy is to disallow parallel apply for such\n> cases as you can see in *0003* patch.\n\nTightening the restrictions could work in some cases but there might\nstill be corner cases and it could reduce the usability. I'm not really\nsure that we can ensure such a deadlock won't happen with the current\nrestrictions. I think we need some safeguard just in case. For\nexample, if the leader apply worker is waiting for a lock acquired by\nits parallel worker, it cancels the parallel worker's transaction,\ncommits its transaction, and restarts logical replication. 
Or the\nleader can log the deadlock to let the user know.\n\n>\n> > We\n> > could find a new problematic scenario in the future and if it happens,\n> > logical replication gets stuck, it cannot be resolved only by apply\n> > workers themselves.\n> >\n>\n> I think users can change streaming option to on/off and internally the\n> parallel apply worker can detect and restart to allow replication to\n> proceed. Having said that, I think that would be a bug in the code and\n> we should try to fix it. We may need to disable parallel apply in the\n> problematic case.\n>\n> The other ideas that occurred to me in this regard are (a) provide a\n> reloption (say parallel_apply) at table level and we can use that to\n> bypass various checks like different Unique Key between\n> publisher/subscriber, constraints/expressions having mutable\n> functions, Foreign Key (when enabled on subscriber), operations on\n> Partitioned Table. We can't detect whether those are safe or not\n> (primarily because of a different structure in publisher and\n> subscriber) so we prohibit parallel apply but if users use this\n> option, we can allow it even in those cases.\n\nThe parallel apply worker is assigned per transaction, right? If so,\nhow can we know which tables are modified in the transaction in\nadvance? And what if two tables whose reloptions are true and false\nare modified in the same transaction?\n\n> (b) While enabling the\n> parallel option in the subscription, we can try to match all the\n> table(s) information of the publisher/subscriber. It will be tricky to\n> make this work because say even if we match some trigger function name,\n> we won't be able to match the function body. 
The other thing is when\n> at a later point the table definition is changed on the subscriber, we\n> need to again validate the information between publisher and\n> subscriber which I think would be difficult as we would be already in\n> between processing some message and getting information from the\n> publisher at that stage won't be possible.\n\nIndeed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 24 Oct 2022 20:42:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Oct 24, 2022 at 11:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v40-0001.\n>\n> ======\n>\n> src/backend/replication/logical/worker.c\n>\n>\n> 1. should_apply_changes_for_rel\n>\n> + else if (am_parallel_apply_worker())\n> + {\n> + if (rel->state != SUBREL_STATE_READY)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"logical replication parallel apply worker for subscription\n> \\\"%s\\\" will stop\",\n> + MySubscription->name),\n> + errdetail(\"Cannot handle streamed replication transaction using parallel \"\n> + \"apply workers until all tables are synchronized.\")));\n>\n> 1a.\n> \"transaction\" -> \"transactions\"\n>\n> 1b.\n> \"are synchronized\" -> \"have been synchronized.\"\n>\n> e.g. \"Cannot handle streamed replication transactions using parallel\n> apply workers until all tables have been synchronized.\"\n>\n> ~~~\n>\n> 2. 
maybe_reread_subscription\n>\n> + if (am_parallel_apply_worker())\n> + ereport(LOG,\n> + (errmsg(\"logical replication parallel apply worker for subscription\n> \\\"%s\\\" will \"\n> + \"stop because the subscription was removed\",\n> + MySubscription->name)));\n> + else\n> + ereport(LOG,\n> + (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\n> + \"stop because the subscription was removed\",\n> + MySubscription->name)));\n>\n> Maybe there is an easier way to code this instead of if/else and\n> cut/paste message text:\n>\n> SUGGESTION\n>\n> ereport(LOG,\n> (errmsg(\"logical replication %s for subscription \\\"%s\\\" will stop\n> because the subscription was removed\",\n> am_parallel_apply_worker() ? \"parallel apply worker\" : \"apply worker\",\n> MySubscription->name)));\n> ~~~\n>\n\nIf we want to go this way then it may be better to record the\nappropriate string beforehand and use that here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Oct 2022 17:28:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 21, 2022 at 6:32 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, October 20, 2022 5:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Oct 20, 2022 at 2:08 PM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > >\n> > > 7. get_transaction_apply_action\n> > >\n> > > > 12. 
get_transaction_apply_action\n> > > >\n> > > > I still felt like there should be some tablesync checks/comments in\n> > > > this function, just for sanity, even if it works as-is now.\n> > > >\n> > > > For example, are you saying ([3] #22b) that there might be rare\n> > > > cases where a Tablesync would call to parallel_apply_find_worker?\n> > > > That seems strange, given that \"for streaming transactions that are\n> > > > being applied in the parallel ... we disallow applying changes on a\n> > > > table that is not in the READY state\".\n> > > >\n> > > > ------\n> > >\n> > > Houz wrote [2] -\n> > >\n> > > I think because we won't try to start parallel apply worker in table\n> > > sync worker(see the check in parallel_apply_can_start()), so we won't\n> > > find any worker in parallel_apply_find_worker() which means\n> > > get_transaction_apply_action will return TRANS_LEADER_SERIALIZE. And\n> > > get_transaction_apply_action is a function which can be invoked for\n> > > all kinds of workers(same is true for all apply_handle_xxx functions),\n> > > so not sure if table sync check/comment is necessary.\n> > >\n> > > ~\n> > >\n> > > Sure, and I believe you when you say it all works OK - but IMO there\n> > > is something still not quite right with this current code. For\n> > > example,\n> > >\n> > > e.g.1 the function will return TRANS_LEADER_SERIALIZE for Tablesync\n> > > worker, and yet the comment for TRANS_LEADER_SERIALIZE says \"means\n> > > that we are in the leader apply worker\" (except we are not)\n> > >\n> > > e.g.2 we know for a fact that Tablesync workers cannot start their own\n> > > parallel apply workers, so then why do we even let the Tablesync\n> > > worker make a call to parallel_apply_find_worker() looking for\n> > > something we know will not be found?\n> > >\n> >\n> > I don't see much benefit in adding an additional check for tablesync workers\n> > here. 
It will unnecessarily make this part of the code look a bit ugly.\n>\n> Thanks for the review, here is the new version patch set which addressed Peter[1]\n> and Kuroda-san[2]'s comments.\n\nI've started to review this patch. I tested v40-0001 patch and have\none question:\n\nIIUC even when most of the changes in the transaction are filtered out\nin pgoutput (eg., by relation filter or row filter), the walsender\nsends STREAM_START. This means that the subscriber could end up\nlaunching parallel apply workers also for almost empty (and streamed)\ntransactions. For example, I created three subscriptions each of which\nsubscribes to a different table. When I loaded a large amount of data\ninto one table, all three (leader) apply workers received START_STREAM\nand launched their parallel apply workers. However, two of them\nfinished without applying any data. I think this behaviour looks\nproblematic since it wastes workers and rather decreases the apply\nperformance if the changes are not large. Is it worth considering a\nway to delay launching a parallel apply worker until we find out the\namount of changes is actually large? For example, the leader worker\nwrites the streamed changes to files as usual and launches a parallel\nworker if the amount of changes exceeds a threshold or the leader\nreceives the second segment. 
After that, the leader worker switches to\nsend the streamed changes to parallel workers via shm_mq instead of\nfiles.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Oct 2022 12:08:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "FYI - After a recent push, the v40-0001 patch no longer applies on the\nlatest HEAD.\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v40-0001-Perform-streaming-logical-transactions-by-parall.patch\nerror: patch failed: src/backend/replication/logical/launcher.c:54\nerror: src/backend/replication/logical/launcher.c: patch does not apply\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 Oct 2022 17:28:06 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, Oct 25, 2022 at 14:28 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> FYI - After a recent push, the v40-0001 patch no longer applies on the\r\n> latest HEAD.\r\n> \r\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\r\n> ../patches_misc/v40-0001-Perform-streaming-logical-transactions-by-\r\n> parall.patch\r\n> error: patch failed: src/backend/replication/logical/launcher.c:54\r\n> error: src/backend/replication/logical/launcher.c: patch does not apply\r\n\r\nThanks for your reminder.\r\n\r\nI just rebased the patch set for review.\r\nThe new patch set will be shared later when the comments in this thread are\r\naddressed.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 25 Oct 2022 06:55:35 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical 
transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 25, 2022 at 8:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 21, 2022 at 6:32 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>\n> I've started to review this patch. I tested v40-0001 patch and have\n> one question:\n>\n> IIUC even when most of the changes in the transaction are filtered out\n> in pgoutput (eg., by relation filter or row filter), the walsender\n> sends STREAM_START. This means that the subscriber could end up\n> launching parallel apply workers also for almost empty (and streamed)\n> transactions. For example, I created three subscriptions each of which\n> subscribes to a different table. When I loaded a large amount of data\n> into one table, all three (leader) apply workers received START_STREAM\n> and launched their parallel apply workers.\n>\n\nThe apply workers will be launched just the first time then we\nmaintain a pool so that we don't need to restart them.\n\n> However, two of them\n> finished without applying any data. I think this behaviour looks\n> problematic since it wastes workers and rather decreases the apply\n> performance if the changes are not large. Is it worth considering a\n> way to delay launching a parallel apply worker until we find out the\n> amount of changes is actually large?\n>\n\nI think even if changes are less there may not be much difference\nbecause we have observed that the performance improvement comes from\nnot writing to file.\n\n> For example, the leader worker\n> writes the streamed changes to files as usual and launches a parallel\n> worker if the amount of changes exceeds a threshold or the leader\n> receives the second segment. 
After that, the leader worker switches to\n> send the streamed changes to parallel workers via shm_mq instead of\n> files.\n>\n\nI think writing to file won't be a good idea as that can hamper the\nperformance benefit in some cases and not sure if it is worth.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Oct 2022 16:49:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Oct 26, 2022 7:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Oct 25, 2022 at 8:38 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Fri, Oct 21, 2022 at 6:32 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I've started to review this patch. I tested v40-0001 patch and have\r\n> > one question:\r\n> >\r\n> > IIUC even when most of the changes in the transaction are filtered out\r\n> > in pgoutput (eg., by relation filter or row filter), the walsender\r\n> > sends STREAM_START. This means that the subscriber could end up\r\n> > launching parallel apply workers also for almost empty (and streamed)\r\n> > transactions. For example, I created three subscriptions each of which\r\n> > subscribes to a different table. When I loaded a large amount of data\r\n> > into one table, all three (leader) apply workers received START_STREAM\r\n> > and launched their parallel apply workers.\r\n> >\r\n> \r\n> The apply workers will be launched just the first time then we\r\n> maintain a pool so that we don't need to restart them.\r\n> \r\n> > However, two of them\r\n> > finished without applying any data. I think this behaviour looks\r\n> > problematic since it wastes workers and rather decreases the apply\r\n> > performance if the changes are not large. 
Is it worth considering a\r\n> > way to delay launching a parallel apply worker until we find out the\r\n> > amount of changes is actually large?\r\n> >\r\n> \r\n> I think even if changes are less there may not be much difference\r\n> because we have observed that the performance improvement comes from\r\n> not writing to file.\r\n> \r\n> > For example, the leader worker\r\n> > writes the streamed changes to files as usual and launches a parallel\r\n> > worker if the amount of changes exceeds a threshold or the leader\r\n> > receives the second segment. After that, the leader worker switches to\r\n> > send the streamed changes to parallel workers via shm_mq instead of\r\n> > files.\r\n> >\r\n> \r\n> I think writing to file won't be a good idea as that can hamper the\r\n> performance benefit in some cases and not sure if it is worth.\r\n> \r\n\r\nI tried to test some cases that only a small part of the transaction or an empty\r\ntransaction is sent to subscriber, to see if using streaming parallel will bring\r\nperformance degradation.\r\n\r\nThe test was performed ten times, and the average was taken.\r\nThe results are as follows. 
The details and the script of the test are attached.\r\n\r\n10% of rows are sent\r\n----------------------------------\r\nHEAD 24.4595\r\npatched 18.4545\r\n\r\n5% of rows are sent\r\n----------------------------------\r\nHEAD 21.244\r\npatched 17.9655\r\n\r\n0% of rows are sent\r\n----------------------------------\r\nHEAD 18.0605\r\npatched 17.893\r\n\r\n\r\nIt shows that when only 5% or 10% of rows are sent to subscriber, using parallel\r\napply takes less time than HEAD, and even if all rows are filtered there's no\r\nperformance degradation.\r\n\r\n\r\nRegards\r\nShi yu", "msg_date": "Thu, 27 Oct 2022 02:34:24 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Oct 27, 2022 at 11:34 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Oct 26, 2022 7:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 25, 2022 at 8:38 AM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 21, 2022 at 6:32 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > I've started to review this patch. I tested v40-0001 patch and have\n> > > one question:\n> > >\n> > > IIUC even when most of the changes in the transaction are filtered out\n> > > in pgoutput (eg., by relation filter or row filter), the walsender\n> > > sends STREAM_START. This means that the subscriber could end up\n> > > launching parallel apply workers also for almost empty (and streamed)\n> > > transactions. For example, I created three subscriptions each of which\n> > > subscribes to a different table. 
When I loaded a large amount of data\n> > > into one table, all three (leader) apply workers received START_STREAM\n> > > and launched their parallel apply workers.\n> > >\n> >\n> > The apply workers will be launched just the first time then we\n> > maintain a pool so that we don't need to restart them.\n> >\n> > > However, two of them\n> > > finished without applying any data. I think this behaviour looks\n> > > problematic since it wastes workers and rather decreases the apply\n> > > performance if the changes are not large. Is it worth considering a\n> > > way to delay launching a parallel apply worker until we find out the\n> > > amount of changes is actually large?\n> > >\n> >\n> > I think even if changes are less there may not be much difference\n> > because we have observed that the performance improvement comes from\n> > not writing to file.\n> >\n> > > For example, the leader worker\n> > > writes the streamed changes to files as usual and launches a parallel\n> > > worker if the amount of changes exceeds a threshold or the leader\n> > > receives the second segment. After that, the leader worker switches to\n> > > send the streamed changes to parallel workers via shm_mq instead of\n> > > files.\n> > >\n> >\n> > I think writing to file won't be a good idea as that can hamper the\n> > performance benefit in some cases and not sure if it is worth.\n> >\n>\n> I tried to test some cases that only a small part of the transaction or an empty\n> transaction is sent to subscriber, to see if using streaming parallel will bring\n> performance degradation.\n>\n> The test was performed ten times, and the average was taken.\n> The results are as follows. 
The details and the script of the test are attached.\n>\n> 10% of rows are sent\n> ----------------------------------\n> HEAD 24.4595\n> patched 18.4545\n>\n> 5% of rows are sent\n> ----------------------------------\n> HEAD 21.244\n> patched 17.9655\n>\n> 0% of rows are sent\n> ----------------------------------\n> HEAD 18.0605\n> patched 17.893\n>\n>\n> It shows that when only 5% or 10% of rows are sent to subscriber, using parallel\n> apply takes less time than HEAD, and even if all rows are filtered there's no\n> performance degradation.\n\nThank you for the testing!\n\nI think this performance improvement comes from both applying changes\nin parallel to receiving changes and avoiding writing a file. I'm\nhappy to know there is also a benefit for small streaming\ntransactions. I've also measured the overhead when processing\nstreaming empty transactions and confirmed the overhead is negligible.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Oct 2022 09:47:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Oct 25, 2022 2:56 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tues, Oct 25, 2022 at 14:28 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > FYI - After a recent push, the v40-0001 patch no longer applies on the\r\n> > latest HEAD.\r\n> >\r\n> > [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\r\n> > ../patches_misc/v40-0001-Perform-streaming-logical-transactions-by-\r\n> > parall.patch\r\n> > error: patch failed: src/backend/replication/logical/launcher.c:54\r\n> > error: src/backend/replication/logical/launcher.c: patch does not apply\r\n> \r\n> Thanks for your reminder.\r\n> \r\n> I just rebased the patch set for review.\r\n> The new patch set will be shared later when the comments in this thread 
are\r\n> addressed.\r\n> \r\n\r\nI tried to write a draft patch to force streaming every change instead of\r\nwaiting until logical_decoding_work_mem is exceeded, which could help to test\r\nstreaming parallel. Attach the patch. This is based on v41-0001 patch.\r\n\r\nWith this patch, I saw a problem that the subscription option \"origin\" doesn't\r\nwork when using streaming parallel. That's because when the parallel apply\r\nworker writing the WAL for the changes, replorigin_session_origin is\r\nInvalidRepOriginId. In current patch, origin can be active only in one process\r\nat-a-time.\r\n\r\nTo fix it, maybe we need to remove this restriction, like what we did in the old\r\nversion of patch.\r\n\r\nRegards\r\nShi yu", "msg_date": "Fri, 28 Oct 2022 09:34:16 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Oct 28, 2022 at 3:04 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Oct 25, 2022 2:56 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\n>\n> I tried to write a draft patch to force streaming every change instead of\n> waiting until logical_decoding_work_mem is exceeded, which could help to test\n> streaming parallel. Attach the patch. This is based on v41-0001 patch.\n>\n\nThanks, I think this is quite useful for testing.\n\n> With this patch, I saw a problem that the subscription option \"origin\" doesn't\n> work when using streaming parallel. That's because when the parallel apply\n> worker writing the WAL for the changes, replorigin_session_origin is\n> InvalidRepOriginId. 
In current patch, origin can be active only in one process\n> at-a-time.\n>\n> To fix it, maybe we need to remove this restriction, like what we did in the old\n> version of patch.\n>\n\nAgreed, we need to allow using origins for writing all the changes by\nthe parallel worker.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 31 Oct 2022 16:14:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Oct 24, 2022 at 8:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > About your point that having different partition structures for\n> > > > publisher and subscriber, I don't know how common it will be once we\n> > > > have DDL replication. Also, the default value of\n> > > > publish_via_partition_root is false which doesn't seem to indicate\n> > > > that this is a quite common case.\n> > >\n> > > So how can we consider these concurrent issues that could happen only\n> > > when streaming = 'parallel'? Can we restrict some use cases to avoid\n> > > the problem or can we have a safeguard against these conflicts?\n> > >\n> >\n> > Yeah, right now the strategy is to disallow parallel apply for such\n> > cases as you can see in *0003* patch.\n>\n> Tightening the restrictions could work in some cases but there might\n> still be coner cases and it could reduce the usability. I'm not really\n> sure that we can ensure such a deadlock won't happen with the current\n> restrictions. I think we need something safeguard just in case. 
For\n> example, if the leader apply worker is waiting for a lock acquired by\n> its parallel worker, it cancels the parallel worker's transaction,\n> commits its transaction, and restarts logical replication. Or the\n> leader can log the deadlock to let the user know.\n>\n\nAs another direction, we could make the parallel apply feature robust\nif we can detect deadlocks that happen among the leader worker and\nparallel workers. I'd like to summarize the idea discussed off-list\n(with Amit, Hou-San, and Kuroda-San) for discussion. The basic idea is\nthat when the leader worker or parallel worker needs to wait for\nsomething (eg. transaction completion, messages) we use lmgr\nfunctionality so that we can create wait-for edges and detect\ndeadlocks in lmgr.\n\nFor example, a scenario where a deadlock occurs is the following:\n\n[Publisher]\ncreate table tab1(a int);\ncreate publication pub for table tab1;\n\n[Subscriber]\ncreate table tab1(a int primary key);\ncreate subscription sub connection 'port=10000 dbname=postgres'\npublication pub with (streaming = parallel);\n\nTX1:\nBEGIN;\nINSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\n Tx2:\n BEGIN;\n INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\n COMMIT;\nCOMMIT;\n\nSuppose a parallel apply worker (PA-1) is executing TX-1 and the\nleader apply worker (LA) is executing TX-2 concurrently on the\nsubscriber. Now, LA is waiting for PA-1 because of the unique key of\ntab1 while PA-1 is waiting for LA to send further messages. There is a\ndeadlock between PA-1 and LA but lmgr cannot detect it.\n\nOne idea to resolve this issue is that we have LA acquire a session\nlock on a shared object (by LockSharedObjectForSession()) and have\nPA-1 wait on the lock before trying to receive messages. IOW, LA\nacquires the lock before sending STREAM_STOP and releases it if\nalready acquired before sending STREAM_START, STREAM_PREPARE and\nSTREAM_COMMIT. 
For PA-1, it always needs to acquire the lock after\nprocessing STREAM_STOP and then release immediately after acquiring\nit. That way, when PA-1 is waiting for LA, we can have a wait-edge\nfrom PA-1 to LA in lmgr, which will make a deadlock in lmgr like:\n\nLA (waiting to acquire lock) -> PA-1 (waiting to acquire the shared\nobject) -> LA\n\nWe would need the shared objects per parallel apply worker.\n\nAfter detecting a deadlock, we can restart logical replication with\ntemporarily disabling the parallel apply, which is done by 0005 patch.\n\nAnother scenario is similar to the previous case but TX-1 and TX-2 are\nexecuted by two parallel apply workers (PA-1 and PA-2 respectively).\nIn this scenario, PA-2 is waiting for PA-1 to complete its transaction\nwhile PA-1 is waiting for subsequent input from LA. Also, LA is\nwaiting for PA-2 to complete its transaction in order to preserve the\ncommit order. There is a deadlock among three processes but it cannot\nbe detected in lmgr because the fact that LA is waiting for PA-2 to\ncomplete its transaction doesn't appear in lmgr (see\nparallel_apply_wait_for_xact_finish()). To fix it, we can use\nXactLockTableWait() instead.\n\nHowever, since XactLockTableWait() considers PREPARED TRANSACTION as\nstill in progress, probably we need a similar trick as above in case\nwhere a transaction is prepared. For example, suppose that TX-2 was\nprepared instead of committed in the above scenario, PA-2 acquires\nanother shared lock at START_STREAM and releases it at\nSTREAM_COMMIT/PREPARE. LA can wait on the lock.\n\nYet another scenario where LA has to wait is the case where the shm_mq\nbuffer is full. In the above scenario (ie. PA-1 and PA-2 are executing\ntransactions concurrently), if the shm_mq buffer between LA and PA-2\nis full, LA has to wait to send messages, and this wait doesn't appear\nin lmgr. To fix it, probably we have to use non-blocking write and\nwait with a timeout. 
If timeout is exceeded, the LA will write to a file\nand indicate to PA-2 that it needs to read the file for remaining messages.\nThen LA will start waiting for commit which will detect deadlock if\nany.\n\nIf we can detect deadlocks by having such a functionality or some\nother way then we don't need to tighten the restrictions of subscribed\ntables' schemas etc.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Nov 2022 11:50:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, November 2, 2022 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Mon, Oct 24, 2022 at 8:42 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > About your point that having different partition structures for\r\n> > > > > publisher and subscriber, I don't know how common it will be once we\r\n> > > > > have DDL replication. Also, the default value of\r\n> > > > > publish_via_partition_root is false which doesn't seem to indicate\r\n> > > > > that this is a quite common case.\r\n> > > >\r\n> > > > So how can we consider these concurrent issues that could happen only\r\n> > > > when streaming = 'parallel'? 
Can we restrict some use cases to avoid\r\n> > > > the problem or can we have a safeguard against these conflicts?\r\n> > > >\r\n> > >\r\n> > > Yeah, right now the strategy is to disallow parallel apply for such\r\n> > > cases as you can see in *0003* patch.\r\n> >\r\n> > Tightening the restrictions could work in some cases but there might\r\n> > still be coner cases and it could reduce the usability. I'm not really\r\n> > sure that we can ensure such a deadlock won't happen with the current\r\n> > restrictions. I think we need something safeguard just in case. For\r\n> > example, if the leader apply worker is waiting for a lock acquired by\r\n> > its parallel worker, it cancels the parallel worker's transaction,\r\n> > commits its transaction, and restarts logical replication. Or the\r\n> > leader can log the deadlock to let the user know.\r\n> >\r\n> \r\n> As another direction, we could make the parallel apply feature robust\r\n> if we can detect deadlocks that happen among the leader worker and\r\n> parallel workers. I'd like to summarize the idea discussed off-list\r\n> (with Amit, Hou-San, and Kuroda-San) for discussion. The basic idea is\r\n> that when the leader worker or parallel worker needs to wait for\r\n> something (eg. 
transaction completion, messages) we use lmgr\r\n> functionality so that we can create wait-for edges and detect\r\n> deadlocks in lmgr.\r\n> \r\n> For example, a scenario where a deadlock occurs is the following:\r\n> \r\n> [Publisher]\r\n> create table tab1(a int);\r\n> create publication pub for table tab1;\r\n> \r\n> [Subscriber]\r\n> create table tab1(a int primary key);\r\n> create subscription sub connection 'port=10000 dbname=postgres'\r\n> publication pub with (streaming = parallel);\r\n> \r\n> TX1:\r\n> BEGIN;\r\n> INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\r\n> TX2:\r\n> BEGIN;\r\n> INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\r\n> COMMIT;\r\n> COMMIT;\r\n> \r\n> Suppose a parallel apply worker (PA-1) is executing TX-1 and the\r\n> leader apply worker (LA) is executing TX-2 concurrently on the\r\n> subscriber. Now, LA is waiting for PA-1 because of the unique key of\r\n> tab1 while PA-1 is waiting for LA to send further messages. There is a\r\n> deadlock between PA-1 and LA but lmgr cannot detect it.\r\n> \r\n> One idea to resolve this issue is that we have LA acquire a session\r\n> lock on a shared object (by LockSharedObjectForSession()) and have\r\n> PA-1 wait on the lock before trying to receive messages. IOW, LA\r\n> acquires the lock before sending STREAM_STOP and releases it if\r\n> already acquired before sending STREAM_START, STREAM_PREPARE and\r\n> STREAM_COMMIT. For PA-1, it always needs to acquire the lock after\r\n> processing STREAM_STOP and then release immediately after acquiring\r\n> it. 
That way, when PA-1 is waiting for LA, we can have a wait-edge\r\n> from PA-1 to LA in lmgr, which will make a deadlock in lmgr like:\r\n> \r\n> LA (waiting to acquire lock) -> PA-1 (waiting to acquire the shared\r\n> object) -> LA\r\n> \r\n> We would need the shared objects per parallel apply worker.\r\n> \r\n> After detecting a deadlock, we can restart logical replication with\r\n> temporarily disabling the parallel apply, which is done by 0005 patch.\r\n> \r\n> Another scenario is similar to the previous case but TX-1 and TX-2 are\r\n> executed by two parallel apply workers (PA-1 and PA-2 respectively).\r\n> In this scenario, PA-2 is waiting for PA-1 to complete its transaction\r\n> while PA-1 is waiting for subsequent input from LA. Also, LA is\r\n> waiting for PA-2 to complete its transaction in order to preserve the\r\n> commit order. There is a deadlock among three processes but it cannot\r\n> be detected in lmgr because the fact that LA is waiting for PA-2 to\r\n> complete its transaction doesn't appear in lmgr (see\r\n> parallel_apply_wait_for_xact_finish()). To fix it, we can use\r\n> XactLockTableWait() instead.\r\n> \r\n> However, since XactLockTableWait() considers PREPARED TRANSACTION as\r\n> still in progress, probably we need a similar trick as above in case\r\n> where a transaction is prepared. For example, suppose that TX-2 was\r\n> prepared instead of committed in the above scenario, PA-2 acquires\r\n> another shared lock at START_STREAM and releases it at\r\n> STREAM_COMMIT/PREPARE. LA can wait on the lock.\r\n> \r\n> Yet another scenario where LA has to wait is the case where the shm_mq\r\n> buffer is full. In the above scenario (ie. PA-1 and PA-2 are executing\r\n> transactions concurrently), if the shm_mq buffer between LA and PA-2\r\n> is full, LA has to wait to send messages, and this wait doesn't appear\r\n> in lmgr. To fix it, probably we have to use non-blocking write and\r\n> wait with a timeout. 
If timeout is exceeded, the LA will write to file\r\n> and indicate PA-2 that it needs to read file for remaining messages.\r\n> Then LA will start waiting for commit which will detect deadlock if\r\n> any.\r\n> \r\n> If we can detect deadlocks by having such a functionality or some\r\n> other way then we don't need to tighten the restrictions of subscribed\r\n> tables' schemas etc.\r\n\r\nThanks for the analysis and summary !\r\n\r\nI tried to implement the above idea and here is the patch set. I have done some\r\nbasic tests for the new codes and it works fine. But I am going to test some\r\ncorner cases to make sure all the codes work fine. I removed the old 0003 patch\r\nwhich was used to check the parallel apply safety because now we can detect the\r\ndeadlock problem.\r\n\r\nBesides, there are a few tasks left which I will handle soon and update the patch set:\r\n\r\n* Address previous comments from Amit[1], Shi-san[2] and Peter[3] (Already done but haven't merged them).\r\n* Rebase the original 0005 patch which is \"retry to apply streaming xact only in leader apply worker\".\r\n* Adjust some comments and documentation related to new codes.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1Lsn%3D_gz1-3LqZ-wEDQDmChUsOX8LvHS8WV39wC1iRR%3DQ%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/OSZPR01MB631042582805A8E8615BC413FD329%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n[3] https://www.postgresql.org/message-id/CAHut%2BPsJWHRoRzXtMrJ1RaxmkS2LkiMR_4S2pSionxXmYsyOww%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 3 Nov 2022 13:06:35 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThank you for updating the patch!\r\nWhile testing yours, I found that the leader apply worker crashed in the following case.\r\nI will dig the failure 
more, but I reported here for records.\r\n\r\n\r\n1. Change macros for forcing to write a temporary file.\r\n\r\n```\r\n-#define CHANGES_THRESHOLD 1000\r\n-#define SHM_SEND_TIMEOUT_MS 10000\r\n+#define CHANGES_THRESHOLD 10\r\n+#define SHM_SEND_TIMEOUT_MS 100\r\n```\r\n\r\n2. Set logical_decoding_work_mem to 64kB on publisher\r\n\r\n3. Insert huge data on publisher\r\n\r\n```\r\npublisher=# \\d tbl \r\n Table \"public.tbl\"\r\n Column | Type | Collation | Nullable | Default \r\n--------+---------+-----------+----------+---------\r\n c | integer | | | \r\nPublications:\r\n \"pub\"\r\n\r\n\r\npublisher=# BEGIN;\r\nBEGIN\r\npublisher=*# INSERT INTO tbl SELECT i FROM generate_series(1, 5000000) s(i);\r\nINSERT 0 5000000\r\npublisher=*# COMMIT;\r\n```\r\n\r\n-> LA crashes on subscriber! Followings are the backtrace.\r\n\r\n\r\n```\r\n(gdb) bt\r\n#0 0x00007f2663ae4387 in raise () from /lib64/libc.so.6\r\n#1 0x00007f2663ae5a78 in abort () from /lib64/libc.so.6\r\n#2 0x0000000000ad0a95 in ExceptionalCondition (conditionName=0xcabdd0 \"mqh->mqh_partial_bytes <= nbytes\", \r\n fileName=0xcabc30 \"../src/backend/storage/ipc/shm_mq.c\", lineNumber=420) at ../src/backend/utils/error/assert.c:66\r\n#3 0x00000000008eaeb7 in shm_mq_sendv (mqh=0x271ebd8, iov=0x7ffc664a2690, iovcnt=1, nowait=false, force_flush=true)\r\n at ../src/backend/storage/ipc/shm_mq.c:420\r\n#4 0x00000000008eac5a in shm_mq_send (mqh=0x271ebd8, nbytes=1, data=0x271f3c0, nowait=false, force_flush=true)\r\n at ../src/backend/storage/ipc/shm_mq.c:338\r\n#5 0x0000000000880e18 in parallel_apply_free_worker (winfo=0x271f270, xid=735, stop_worker=true)\r\n at ../src/backend/replication/logical/applyparallelworker.c:368\r\n#6 0x00000000008a3638 in apply_handle_stream_commit (s=0x7ffc664a2790) at ../src/backend/replication/logical/worker.c:2081\r\n#7 0x00000000008a54da in apply_dispatch (s=0x7ffc664a2790) at ../src/backend/replication/logical/worker.c:3195\r\n#8 0x00000000008a5a76 in LogicalRepApplyLoop 
(last_received=378674872) at ../src/backend/replication/logical/worker.c:3431\r\n#9 0x00000000008a72ac in start_apply (origin_startpos=0) at ../src/backend/replication/logical/worker.c:4245\r\n#10 0x00000000008a7d77 in ApplyWorkerMain (main_arg=0) at ../src/backend/replication/logical/worker.c:4555\r\n#11 0x000000000084983c in StartBackgroundWorker () at ../src/backend/postmaster/bgworker.c:861\r\n#12 0x0000000000854192 in do_start_bgworker (rw=0x26c0d20) at ../src/backend/postmaster/postmaster.c:5801\r\n#13 0x000000000085457c in maybe_start_bgworkers () at ../src/backend/postmaster/postmaster.c:6025\r\n#14 0x000000000085350b in sigusr1_handler (postgres_signal_arg=10) at ../src/backend/postmaster/postmaster.c:5182\r\n#15 <signal handler called>\r\n#16 0x00007f2663ba3b23 in __select_nocancel () from /lib64/libc.so.6\r\n#17 0x000000000084edbc in ServerLoop () at ../src/backend/postmaster/postmaster.c:1768\r\n#18 0x000000000084e737 in PostmasterMain (argc=3, argv=0x2690f60) at ../src/backend/postmaster/postmaster.c:1476\r\n#19 0x000000000074adfb in main (argc=3, argv=0x2690f60) at ../src/backend/main/main.c:197\r\n``` \r\n\r\nPSA the script that can reproduce the failure on my environment. 
\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Fri, 4 Nov 2022 07:45:18 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Thanks for the analysis and summary !\n>\n> I tried to implement the above idea and here is the patch set.\n>\n\nFew comments on v42-0001\n===========================\n1.\n+ /*\n+ * Set the xact_state flag in the leader instead of the\n+ * parallel apply worker to avoid the race condition where the leader has\n+ * already started waiting for the parallel apply worker to finish\n+ * processing the transaction while the child process has not yet\n+ * processed the first STREAM_START and has not set the\n+ * xact_state to true.\n+ */\n+ SpinLockAcquire(&winfo->shared->mutex);\n+ winfo->shared->xact_state = PARALLEL_TRANS_UNKNOWN;\n\nThe comments and code for xact_state doesn't seem to match.\n\n2.\n+ * progress. This could happend as we don't wait for transaction rollback\n+ * to finish.\n+ */\n\n/happend/happen\n\n3.\n+/* Helper function to release a lock with lockid */\n+void\n+parallel_apply_lock(uint16 lockid)\n...\n...\n+/* Helper function to take a lock with lockid */\n+void\n+parallel_apply_unlock(uint16 lockid)\n\nHere, the comments seems to be reversed.\n\n4.\n+parallel_apply_lock(uint16 lockid)\n+{\n+ MemoryContext oldcontext;\n+\n+ if (list_member_int(ParallelApplyLockids, lockid))\n+ return;\n+\n+ LockSharedObjectForSession(SubscriptionRelationId, MySubscription->oid,\n+ lockid, am_leader_apply_worker() ?\n+ AccessExclusiveLock:\n+ AccessShareLock);\n\nThis appears odd to me because this forecloses the option the parallel\napply worker can ever acquire this lock in exclusive mode. 
I think it\nwould be better to have lock_mode as one of the parameters in this\nAPI.\n\n5.\n+ * Inintialize fileset if not yet and open the file.\n+ */\n+void\n+serialize_stream_start(TransactionId xid, bool first_segment)\n\nTypo. /Inintialize/Initialize\n\n6.\nparallel_apply_setup_dsm()\n{\n...\n+ shared->xact_state = false;\n\nxact_state should be set with one of the values of ParallelTransState.\n\n7.\n/*\n+ * Don't use SharedFileSet here because the fileset is shared by the leader\n+ * worker and the fileset in leader need to survive after releasing the\n+ * shared memory\n\nThis comment seems a bit unclear to me. Should there be and between\nleader worker? If so, then the following 'and' won't make sense.\n\n8.\n+apply_handle_stream_stop(StringInfo s)\n{\n...\n+ case TRANS_PARALLEL_APPLY:\n+\n+ /*\n+ * If there is no message left, wait for the leader to release the\n+ * lock and send more messages.\n+ */\n+ if (pg_atomic_sub_fetch_u32(&(MyParallelShared->left_message), 1) == 0)\n+ parallel_apply_lock(MyParallelShared->stream_lock_id);\n\nAs per Sawada-San's email [1], this lock should be released\nimmediately after we acquire it. 
If we do so, then we don't need to\nunlock separately in apply_handle_stream_start() in the below code and\nat similar places in stream_prepare, stream_commit, and stream_abort.\nIs there a reason for doing it differently?\n\napply_handle_stream_start(StringInfo s)\n{\n...\n+ case TRANS_PARALLEL_APPLY:\n...\n+ /*\n+ * Unlock the shared object lock so that the leader apply worker\n+ * can continue to send changes.\n+ */\n+ parallel_apply_unlock(MyParallelShared->stream_lock_id);\n\n\n9.\n+parallel_apply_spooled_messages(void)\n{\n...\n+ if (fileset_valid)\n+ {\n+ in_streamed_transaction = false;\n+\n+ parallel_apply_lock(MyParallelShared->transaction_lock_id);\n\nIs there a reason to acquire this lock here if the parallel apply\nworker will acquire it at stream_start?\n\n10.\n+ winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\n+ winfo->shared->transaction_lock_id = parallel_apply_get_unique_id();\n\nWhy can't we use xid (remote_xid) for one of these and local_xid (one\ngenerated by parallel apply) for the other? I was a bit worried about\nthe local_xid because it will be generated only after applying the\nfirst message but the patch already seems to be waiting for it in\nparallel_apply_wait_for_xact_finish as seen in the below code.\n\n+void\n+parallel_apply_wait_for_xact_finish(ParallelApplyWorkerShared *wshared)\n+{\n+ /*\n+ * Wait until the parallel apply worker handles the first message and\n+ * set the flag to true.\n+ */\n+ parallel_apply_wait_for_in_xact(wshared, PARALLEL_TRANS_STARTED);\n+\n+ /* Wait for the transaction lock to be released. 
*/\n+ parallel_apply_lock(wshared->transaction_lock_id);\n\n[1] - https://www.postgresql.org/message-id/CAD21AoCWovvhGBD2uKcQqbk6px6apswuBrs6dR9%2BWhP1j2LdsQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 4 Nov 2022 13:36:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "> While testing yours, I found that the leader apply worker has been crashed in the\r\n> following case.\r\n> I will dig the failure more, but I reported here for records.\r\n\r\nI found the reason why the leader apply worker crashes.\r\nIn parallel_apply_free_worker() the leader sends the pending message to parallel apply worker:\r\n\r\n```\r\n+ /*\r\n+ * Resend the pending message to parallel apply worker to cleanup the\r\n+ * queue. Note that parallel apply worker will just ignore this message\r\n+ * as it has already handled this message while applying spooled\r\n+ * messages.\r\n+ */\r\n+ result = shm_mq_send(winfo->mq_handle, strlen(winfo->pending_msg),\r\n+ winfo->pending_msg, false, true);\r\n```\r\n\r\n...but the message length should not be calculated by strlen() because the logicalrep message has '\\0'.\r\nPSA the patch to fix it. 
It can be applied on v42 patch set.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Fri, 4 Nov 2022 09:46:55 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 4, 2022 at 1:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Thanks for the analysis and summary !\n> >\n> > I tried to implement the above idea and here is the patch set.\n> >\n>\n> Few comments on v42-0001\n> ===========================\n>\n\nFew more comments on v42-0001\n===============================\n1. In parallel_apply_send_data(), it seems winfo->serialize_changes\nand switching_to_serialize are set to indicate that we have changed\nparallel to serialize mode. Isn't using just the\nswitching_to_serialize sufficient? Also, it would be better to name\nswitching_to_serialize as parallel_to_serialize or something like\nthat.\n\n2. In parallel_apply_send_data(), the patch has already initialized\nthe fileset, and then again in apply_handle_stream_start(), it will do\nthe same if we fail while sending stream_start message to the parallel\nworker. It seems we don't need to initialize fileset again for\nTRANS_LEADER_PARTIAL_SERIALIZE state in apply_handle_stream_start()\nunless I am missing something.\n\n3.\napply_handle_stream_start(StringInfo s)\n{\n...\n+ if (!first_segment)\n+ {\n+ /*\n+ * Unlock the shared object lock so that parallel apply worker\n+ * can continue to receive and apply changes.\n+ */\n+ parallel_apply_unlock(winfo->shared->stream_lock_id);\n...\n}\n\nCan we have an assert before this unlock call that the lock must be\nheld? Similarly, if there are other places then we can have assert\nthere as well.\n\n4. 
It is not very clear to me how maintaining ParallelApplyLockids\nlist is helpful.\n\n5.\n/*\n+ * Handle STREAM START message when the transaction was spilled to disk.\n+ *\n+ * Inintialize fileset if not yet and open the file.\n+ */\n+void\n+serialize_stream_start(TransactionId xid, bool first_segment)\n+{\n+ /*\n+ * Start a transaction on stream start,\n\nThis function's name and comments seem to indicate that it is to\nhandle stream_start message. Is that really the case? It is being\ncalled from parallel_apply_send_data() which made me think it can be\nused from other places as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 4 Nov 2022 17:14:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, November 4, 2022 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Thanks for the analysis and summary !\r\n> >\r\n> > I tried to implement the above idea and here is the patch set.\r\n> >\r\n> \r\n> Few comments on v42-0001\r\n> ===========================\r\n\r\nThanks for the comments.\r\n\r\n> \r\n> 10.\r\n> + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\r\n> + winfo->shared->transaction_lock_id = parallel_apply_get_unique_id();\r\n> \r\n> Why can't we use xid (remote_xid) for one of these and local_xid (one generated\r\n> by parallel apply) for the other? 
I was a bit worried about the local_xid because it\r\n> will be generated only after applying the first message but the patch already\r\n> seems to be waiting for it in parallel_apply_wait_for_xact_finish as seen in the\r\n> below code.\r\n> \r\n> +void\r\n> +parallel_apply_wait_for_xact_finish(ParallelApplyWorkerShared *wshared)\r\n> +{\r\n> + /*\r\n> + * Wait until the parallel apply worker handles the first message and\r\n> + * set the flag to true.\r\n> + */\r\n> + parallel_apply_wait_for_in_xact(wshared, PARALLEL_TRANS_STARTED);\r\n> +\r\n> + /* Wait for the transaction lock to be released. */\r\n> + parallel_apply_lock(wshared->transaction_lock_id);\r\n\r\nI also considered using xid for these locks, but it seems the objsubid for the\r\nshared object lock is 16bit while xid is 32 bit. So, I tried to generate a unique 16bit id\r\nhere. I will think more on this and maybe I need to add some comments to\r\nexplain this.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 4 Nov 2022 14:05:21 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 4, 2022 at 7:35 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, November 4, 2022 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Thanks for the analysis and summary !\n> > >\n> > > I tried to implement the above idea and here is the patch set.\n> > >\n> >\n> > Few comments on v42-0001\n> > ===========================\n>\n> Thanks for the comments.\n>\n> >\n> > 10.\n> > + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\n> > + winfo->shared->transaction_lock_id = parallel_apply_get_unique_id();\n> >\n> > Why can't we use xid (remote_xid) for one of these and local_xid (one 
generated\n> > by parallel apply) for the other?\n...\n...\n>\n> I also considered using xid for these locks, but it seems the objsubid for the\n> shared object lock is 16bit while xid is 32 bit. So, I tried to generate a unique 16bit id\n> here.\n>\n\nOkay, I see your point. Can we think of having a new lock tag for this\nwith classid, objid, objsubid for the first three fields of locktag\nfield? We can use a new macro SET_LOCKTAG_APPLY_TRANSACTION and a\ncommon function to set the tag and acquire the lock. One more point\nrelated to this is that I am suggesting classid by referring to\nSET_LOCKTAG_OBJECT as that is used in the current patch but do you\nthink we need it for our purpose, won't subscription id and xid can\nuniquely identify the tag?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 5 Nov 2022 11:13:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, November 5, 2022 1:43 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Fri, Nov 4, 2022 at 7:35 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, November 4, 2022 4:07 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Thanks for the analysis and summary !\r\n> > > >\r\n> > > > I tried to implement the above idea and here is the patch set.\r\n> > > >\r\n> > >\r\n> > > Few comments on v42-0001\r\n> > > ===========================\r\n> >\r\n> > Thanks for the comments.\r\n> >\r\n> > >\r\n> > > 10.\r\n> > > + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\r\n> > > + winfo->shared->transaction_lock_id =\r\n> > > + winfo->shared->parallel_apply_get_unique_id();\r\n> > >\r\n> > > Why can't we use xid (remote_xid) for one of these and 
local_xid\r\n> > > (one generated by parallel apply) for the other?\r\n> ...\r\n> ...\r\n> >\r\n> > I also considered using xid for these locks, but it seems the objsubid\r\n> > for the shared object lock is 16bit while xid is 32 bit. So, I tried\r\n> > to generate a unique 16bit id here.\r\n> >\r\n> \r\n> Okay, I see your point. Can we think of having a new lock tag for this with classid,\r\n> objid, objsubid for the first three fields of locktag field? We can use a new\r\n> macro SET_LOCKTAG_APPLY_TRANSACTION and a common function to set the\r\n> tag and acquire the lock. One more point related to this is that I am suggesting\r\n> classid by referring to SET_LOCKTAG_OBJECT as that is used in the current\r\n> patch but do you think we need it for our purpose, won't subscription id and\r\n> xid can uniquely identify the tag?\r\n\r\nI agree that it could be better to have a new lock tag. Another point is that\r\nthe remote xid and Local xid could be the same in some rare cases, so I think\r\nwe might need to add another identifier to make it unique.\r\n\r\nMaybe :\r\nlocktag_field1 : subscription oid\r\nlocktag_field2 : xid(remote or local)\r\nlocktag_field3 : 0(lock for stream block)/1(lock for transaction)\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 6 Nov 2022 06:40:30 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Nov 6, 2022 at 3:40 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, November 5, 2022 1:43 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Fri, Nov 4, 2022 at 7:35 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Friday, November 4, 2022 4:07 PM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> 
wrote:\n> > > > >\n> > > > > Thanks for the analysis and summary !\n> > > > >\n> > > > > I tried to implement the above idea and here is the patch set.\n> > > > >\n> > > >\n> > > > Few comments on v42-0001\n> > > > ===========================\n> > >\n> > > Thanks for the comments.\n> > >\n> > > >\n> > > > 10.\n> > > > + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\n> > > > + winfo->shared->transaction_lock_id =\n> > > > + winfo->shared->parallel_apply_get_unique_id();\n> > > >\n> > > > Why can't we use xid (remote_xid) for one of these and local_xid\n> > > > (one generated by parallel apply) for the other?\n> > ...\n> > ...\n> > >\n> > > I also considered using xid for these locks, but it seems the objsubid\n> > > for the shared object lock is 16bit while xid is 32 bit. So, I tried\n> > > to generate a unique 16bit id here.\n> > >\n> >\n> > Okay, I see your point. Can we think of having a new lock tag for this with classid,\n> > objid, objsubid for the first three fields of locktag field? We can use a new\n> > macro SET_LOCKTAG_APPLY_TRANSACTION and a common function to set the\n> > tag and acquire the lock. One more point related to this is that I am suggesting\n> > classid by referring to SET_LOCKTAG_OBJECT as that is used in the current\n> > patch but do you think we need it for our purpose, won't subscription id and\n> > xid can uniquely identify the tag?\n>\n> I agree that it could be better to have a new lock tag. 
Another point is that\n> the remote xid and Local xid could be the same in some rare cases, so I think\n> we might need to add another identifier to make it unique.\n>\n> Maybe :\n> locktag_field1 : subscription oid\n> locktag_field2 : xid(remote or local)\n> locktag_field3 : 0(lock for stream block)/1(lock for transaction)\n\nOr I think we can use locktag_field2 for remote xid and locktag_field3\nfor local xid.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Nov 2022 11:56:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 7, 2022 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sun, Nov 6, 2022 at 3:40 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Saturday, November 5, 2022 1:43 PM Amit Kapila <amit.kapila16@gmail.com>\n> > >\n> > > On Fri, Nov 4, 2022 at 7:35 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Friday, November 4, 2022 4:07 PM Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n> > > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > > >\n> > > > > > Thanks for the analysis and summary !\n> > > > > >\n> > > > > > I tried to implement the above idea and here is the patch set.\n> > > > > >\n> > > > >\n> > > > > Few comments on v42-0001\n> > > > > ===========================\n> > > >\n> > > > Thanks for the comments.\n> > > >\n> > > > >\n> > > > > 10.\n> > > > > + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\n> > > > > + winfo->shared->transaction_lock_id =\n> > > > > + winfo->shared->parallel_apply_get_unique_id();\n> > > > >\n> > > > > Why can't we use xid (remote_xid) for one of these and local_xid\n> > > > > (one generated by parallel apply) for 
the other?\n> > > ...\n> > > ...\n> > > >\n> > > > I also considered using xid for these locks, but it seems the objsubid\n> > > > for the shared object lock is 16bit while xid is 32 bit. So, I tried\n> > > > to generate a unique 16bit id here.\n> > > >\n> > >\n> > > Okay, I see your point. Can we think of having a new lock tag for this with classid,\n> > > objid, objsubid for the first three fields of locktag field? We can use a new\n> > > macro SET_LOCKTAG_APPLY_TRANSACTION and a common function to set the\n> > > tag and acquire the lock. One more point related to this is that I am suggesting\n> > > classid by referring to SET_LOCKTAG_OBJECT as that is used in the current\n> > > patch but do you think we need it for our purpose, won't subscription id and\n> > > xid can uniquely identify the tag?\n> >\n> > I agree that it could be better to have a new lock tag. Another point is that\n> > the remote xid and Local xid could be the same in some rare cases, so I think\n> > we might need to add another identifier to make it unique.\n> >\n> > Maybe :\n> > locktag_field1 : subscription oid\n> > locktag_field2 : xid(remote or local)\n> > locktag_field3 : 0(lock for stream block)/1(lock for transaction)\n>\n> Or I think we can use locktag_field2 for remote xid and locktag_field3\n> for local xid.\n>\n\nWe can do that way as well but OTOH, I think for the local\ntransactions we don't need subscription oid, so field1 could be\nInvalidOid and field2 will be xid of local xact. 
Won't that be better?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Nov 2022 09:28:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 7, 2022 at 12:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 7, 2022 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sun, Nov 6, 2022 at 3:40 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Saturday, November 5, 2022 1:43 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > >\n> > > > On Fri, Nov 4, 2022 at 7:35 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > On Friday, November 4, 2022 4:07 PM Amit Kapila\n> > > > <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\n> > > > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > > > >\n> > > > > > > Thanks for the analysis and summary !\n> > > > > > >\n> > > > > > > I tried to implement the above idea and here is the patch set.\n> > > > > > >\n> > > > > >\n> > > > > > Few comments on v42-0001\n> > > > > > ===========================\n> > > > >\n> > > > > Thanks for the comments.\n> > > > >\n> > > > > >\n> > > > > > 10.\n> > > > > > + winfo->shared->stream_lock_id = parallel_apply_get_unique_id();\n> > > > > > + winfo->shared->transaction_lock_id =\n> > > > > > + winfo->shared->parallel_apply_get_unique_id();\n> > > > > >\n> > > > > > Why can't we use xid (remote_xid) for one of these and local_xid\n> > > > > > (one generated by parallel apply) for the other?\n> > > > ...\n> > > > ...\n> > > > >\n> > > > > I also considered using xid for these locks, but it seems the objsubid\n> > > > > for the shared object lock is 16bit while xid is 32 bit. 
So, I tried\n> > > > > to generate a unique 16bit id here.\n> > > > >\n> > > >\n> > > > Okay, I see your point. Can we think of having a new lock tag for this with classid,\n> > > > objid, objsubid for the first three fields of locktag field? We can use a new\n> > > > macro SET_LOCKTAG_APPLY_TRANSACTION and a common function to set the\n> > > > tag and acquire the lock. One more point related to this is that I am suggesting\n> > > > classid by referring to SET_LOCKTAG_OBJECT as that is used in the current\n> > > > patch but do you think we need it for our purpose, won't subscription id and\n> > > > xid can uniquely identify the tag?\n> > >\n> > > I agree that it could be better to have a new lock tag. Another point is that\n> > > the remote xid and Local xid could be the same in some rare cases, so I think\n> > > we might need to add another identifier to make it unique.\n> > >\n> > > Maybe :\n> > > locktag_field1 : subscription oid\n> > > locktag_field2 : xid(remote or local)\n> > > locktag_field3 : 0(lock for stream block)/1(lock for transaction)\n> >\n> > Or I think we can use locktag_field2 for remote xid and locktag_field3\n> > for local xid.\n> >\n>\n> We can do that way as well but OTOH, I think for the local\n> transactions we don't need subscription oid, so field1 could be\n> InvalidOid and field2 will be xid of local xact. Won't that be better?\n\nThis would work. 
But I'm a bit concerned that we cannot identify which\nsubscription the lock belongs to when checking the pg_locks view.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Nov 2022 13:32:17 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 7, 2022 at 10:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 7, 2022 at 12:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > > I agree that it could be better to have a new lock tag. Another point is that\n> > > > the remote xid and Local xid could be the same in some rare cases, so I think\n> > > > we might need to add another identifier to make it unique.\n> > > >\n> > > > Maybe :\n> > > > locktag_field1 : subscription oid\n> > > > locktag_field2 : xid(remote or local)\n> > > > locktag_field3 : 0(lock for stream block)/1(lock for transaction)\n> > >\n> > > Or I think we can use locktag_field2 for remote xid and locktag_field3\n> > > for local xid.\n> > >\n> >\n> > We can do that way as well but OTOH, I think for the local\n> > transactions we don't need subscription oid, so field1 could be\n> > InvalidOid and field2 will be xid of local xact. Won't that be better?\n>\n> This would work. But I'm a bit concerned that we cannot identify which\n> subscription the lock belongs to when checking the pg_locks view.\n>\n\nFair point. I think if the user wants, she can join with\npg_stat_subscription based on PID and find the corresponding\nsubscription. 
However, if we want to identify everything via pg_locks\nthen I think we should also mention classid or database id as field1.\nSo, it would look like: field1: (pg_subscription's oid or current db\nid); field2: OID of subscription in pg_subscription; field3: local or\nremote xid; field4: 0/1 to differentiate between remote and local xid.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Nov 2022 10:32:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v42-0001\n\n======\n\n1. General.\n\nPlease take the time to process all new code comments using a\ngrammar/spelling checker (e.g. simply cut/paste them into MSWord or\nGrammarly or any other tool of your choice as a quick double-check)\n*before* posting the patches; too many of my review comments are about\ncode comments and it's taking a long time to keep cycling through\nreporting/fixing/confirming comments for every patch version -\nwhereas it probably would take hardly any time to make the same\nspelling/grammar corrections up-front.\n\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n2. ParallelApplyLockids\n\nThis seems like a bogus name. Code is using this in a way that means\nthe subset of lockED ids. Not the list of all the lock ids.\n\nOTOH, having another list of ALL lock-ids might be useful (for\ndetecting unique ids) if you are able to maintain such a list safely.\n\n~~~\n\n3. parallel_apply_can_start\n\n+\n+ if (switching_to_serialize)\n+ return false;\n\nThis should have an explanatory comment.\n\n~~~\n\n4. parallel_apply_start_worker\n\n+ /* Check if the transaction in that worker has been finished. 
*/\n+ xact_state = parallel_apply_get_xact_state(tmp_winfo->shared);\n+ if (xact_state == PARALLEL_TRANS_FINISHED)\n\n\"has been finished.\" -> \"has finished.\"\n\n~~~\n\n5.\n\n+ /*\n+ * Set the xact_state flag in the leader instead of the\n+ * parallel apply worker to avoid the race condition where the leader has\n+ * already started waiting for the parallel apply worker to finish\n+ * processing the transaction while the child process has not yet\n+ * processed the first STREAM_START and has not set the\n+ * xact_state to true.\n+ */\n+ SpinLockAcquire(&winfo->shared->mutex);\n+ winfo->shared->xact_state = PARALLEL_TRANS_UNKNOWN;\n+ winfo->shared->xid = xid;\n+ winfo->shared->fileset_valid = false;\n+ winfo->shared->partial_sent_message = false;\n+ SpinLockRelease(&winfo->shared->mutex);\n\nThis code comment is stale, because xact_state is no longer a \"flag\",\nnor does \"set the xact_state to true.\" make sense anymore.\n\n~~~\n\n6. parallel_apply_free_worker\n\n+ /*\n+ * Don't free the worker if the transaction in the worker is still in\n+ * progress. This could happend as we don't wait for transaction rollback\n+ * to finish.\n+ */\n+ if (parallel_apply_get_xact_state(winfo->shared) < PARALLEL_TRANS_FINISHED)\n+ return;\n\n6a.\ntypo \"happend\"\n\n~\n\n6b.\nSaying \"< PARALLEL_TRANS_FINISHED\" seems kind of risky because now it\nis assuming a specific ordering of those enums which has never been\nmentioned before. I think it will be safer to say \"!=\nPARALLEL_TRANS_FINISHED\" instead. Alternatively, if the enum order is\nimportant then it must be documented with the typedef so that nobody\nchanges it.\n\n~~~\n\n7.\n\n+ ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList,\n+ winfo);\n\nUnnecessary wrapping\n\n~~~\n\n8.\n\n+ /*\n+ * Resend the pending message to parallel apply worker to cleanup the\n+ * queue. 
Note that parallel apply worker will just ignore this message\n+ * as it has already handled this message while applying spooled\n+ * messages.\n+ */\n+ result = shm_mq_send(winfo->mq_handle, strlen(winfo->pending_msg),\n+ winfo->pending_msg, false, true);\n\nIf I understand this logic it seems a bit hacky. From the comment, it\nseems you are resending a message that you know/expect to be ignored\nsimply to make it disappear. (??). Isn't there some other way to clear\nthe pending message without requiring a bogus send?\n\n~~~\n\n9. parallel_apply_spooled_messages\n\n+\n+static void\n+parallel_apply_spooled_messages(void)\n\nMissing function comment\n\n~~~\n\n10.\n\n+parallel_apply_spooled_messages(void)\n+{\n+ bool fileset_valid = false;\n+\n+ /*\n+ * Check if changes has been serialized to disk. if so, read and\n+ * apply them.\n+ */\n+ SpinLockAcquire(&MyParallelShared->mutex);\n+ fileset_valid = MyParallelShared->fileset_valid;\n+ SpinLockRelease(&MyParallelShared->mutex);\n\nThe variable assignment in the declaration seems unnecessary.\n\n~~~\n\n11.\n\n+ /*\n+ * Check if changes has been serialized to disk. if so, read and\n+ * apply them.\n+ */\n+ SpinLockAcquire(&MyParallelShared->mutex);\n+ fileset_valid = MyParallelShared->fileset_valid;\n+ SpinLockRelease(&MyParallelShared->mutex);\n\n\"has been\" -> \"have been\"\n\n~~~\n\n12.\n\n+ apply_spooled_messages(&MyParallelShared->fileset,\n+ MyParallelShared->xid,\n+ InvalidXLogRecPtr);\n+ parallel_apply_set_fileset(MyParallelShared, false);\n\nparallel_apply_set_fileset() is a confusing function name. IMO this\nlogic would be better split into 2 smaller functions:\n- parallel_apply_set_fileset_valid()\n- parallel_apply_set_fileset_invalid()\n\n~~~\n\n13. parallel_apply_get_unique_id\n\n+/*\n+ * Returns the unique id among all parallel apply workers in the subscriber.\n+ */\n+static uint16\n+parallel_apply_get_unique_id()\n\nThe meaning of that comment and the purpose of this function are not\nentirely clear... 
e.g. I had to read the code to figure out what the\ncomment is describing.\n\n~~~\n\n14.\n\nThe function seems to be written in some way that scans all known ids\nlooking for one that does not match. I wonder if it might be easier to\njust assign some auto-incrementing static instead of having to scan\nfor uniqueness always. Since the pool of apply workers is limited, is\nthat kind of ID ever going to come close to running out?\n\nAlternatively, see also comment #2 for a different way to know what\nlockids are present.\n\n~~~\n\n15.\n\nwinfo->shared->stream_lock_id = parallel_apply_get_unique_id();\nwinfo->shared->transaction_lock_id = parallel_apply_get_unique_id();\n\nIt somehow feels clunky to be calling this\nparallel_apply_get_unique_id() like this to scan all the same things 2\ntimes. If you are going to keep this scanning logic then at least the\nfunction should be changed to return a PAIR of lock-ids so you only\nneed to do 1x scan instead of 2x scan.\n\n~~~\n\n16. parallel_apply_send_data\n\n+/*\n+ * Send the data to the specified parallel apply worker via\nshared-memory queue.\n+ */\n+void\n+parallel_apply_send_data(ParallelApplyWorkerInfo *winfo, Size nbytes,\n+ const void *data)\n\nThe function comment needs more detail to explain the purpose of, and\nhow the thresholds work.\n\n~~~\n\n17. parallel_apply_wait_for_xact_finish\n\n+/*\n+ * Wait until the parallel apply worker's transaction finishes.\n+ */\n+void\n+parallel_apply_wait_for_xact_finish(ParallelApplyWorkerShared *wshared)\n\nI think this comment needs lots more details because the\nimplementation seems to be doing a lot more than just waiting for the\nstate to become \"finished\" - e.g. 
it seems to be waiting for it to\ntransition through the other stages as well...\n\n~~~\n\n18.\n\nThe boolean flag was changed to enum states so all these comments\nmentioning \"flag\" are stale and need to be reworded/rewritten.\n\n18a.\n+ /*\n+ * Wait until the parallel apply worker handles the first message and\n+ * set the flag to true.\n+ */\n\nUpdate this comment\n\n~\n\n18b.\n+ /*\n+ * Wait until the flag becomes false in case the lock was released because\n+ * of failure while applying.\n+ */\n\nUpdate this comment\n\n~~~\n\n19. parallel_apply_wait_for_in_xact\n\n+/*\n+ * Wait until the parallel apply worker's xact_state flag becomes\n+ * the same as in_xact.\n+ */\n+static void\n+parallel_apply_wait_for_in_xact(ParallelApplyWorkerShared *wshared,\n+ ParallelTransState xact_state)\n\nSUGGESTION\nWait until the parallel apply worker's transaction state becomes the\nsame as in_xact.\n\n~~~\n\n20.\n\n+ /* Stop if the flag becomes the same as in_xact. */\n+ if (parallel_apply_get_xact_state(wshared) >= xact_state)\n+ break;\n\n20a.\n\"flag\" -> \"transaction state\",\n\n~\n\n20b.\nThis code uses >= comparison which means a strict order of the enum\nvalues is assumed. So this order MUST be documented in the enum\ntypedef.\n\n~~~\n\n21. parallel_apply_set_xact_state\n\n+/*\n+ * Set the xact_state flag for the given parallel apply worker.\n+ */\n+void\n+parallel_apply_set_xact_state(ParallelApplyWorkerShared *wshared,\n+ ParallelTransState xact_state)\n\nSUGGESTION\nSet an enum indicating the transaction state for the given parallel\napply worker.\n\n~~~\n\n22. parallel_apply_get_xact_state\n\n/*\n * Get the xact_state flag for the given parallel apply worker.\n */\nstatic ParallelTransState\nparallel_apply_get_xact_state(ParallelApplyWorkerShared *wshared)\n\nSUGGESTION\nGet an enum indicating the transaction state for the given parallel\napply worker.\n\n~~~\n\n23. 
parallel_apply_set_fileset\n\n\n+/*\n+ * Set the fileset_valid flag and fileset for the given parallel apply worker.\n+ */\n+void\n+parallel_apply_set_fileset(ParallelApplyWorkerShared *wshared, bool\nfileset_valid)\n\nAs mentioned elsewhere (#12 above) I think it would be better to split\nthis into 2 functions.\n\n~~~\n\n24. parallel_apply_lock/unlock\n\n24a.\n+/* Helper function to release a lock with lockid */\nSUGGESTION\nHelper function to release a lock identified by lockid.\n\n~\n\n24b.\n+/* Helper function to take a lock with lockid */\nSUGGESTION\nHelper function to acquire a lock identified by lockid.\n\n~\n\n24c.\n+/* Helper function to release a lock with lockid */\n+void\n+parallel_apply_lock(uint16 lockid)\n...\n+/* Helper function to take a lock with lockid */\n+void\n+parallel_apply_unlock(uint16 lockid)\n\nAren't those function comments around the wrong way?\n\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n25. File header comment\n\n+ * The dynamic shared memory segment will contain (a) a shm_mq that can be used\n+ * to send changes in the transaction from leader apply worker to parallel\n+ * apply worker (b) another shm_mq that can be used to send errors (and other\n+ * messages reported via elog/ereport) from the parallel apply worker to leader\n+ * apply worker (c) necessary information to be shared among parallel apply\n+ * workers and leader apply worker (i.e. 
TransApplyAction\n\n+typedef enum\n {\n- LogicalRepMsgType command; /* 0 if invalid */\n- LogicalRepRelMapEntry *rel;\n-\n- /* Remote node information */\n- int remote_attnum; /* -1 if invalid */\n- TransactionId remote_xid;\n- XLogRecPtr finish_lsn;\n- char *origin_name;\n-} ApplyErrorCallbackArg;\n-\n-static ApplyErrorCallbackArg apply_error_callback_arg =\n+ /* The action for non-streaming transactions. */\n+ TRANS_LEADER_APPLY,\n+\n+ /* Actions for streaming transactions. */\n+ TRANS_LEADER_SERIALIZE,\n+ TRANS_LEADER_PARTIAL_SERIALIZE,\n+ TRANS_LEADER_SEND_TO_PARALLEL,\n+ TRANS_PARALLEL_APPLY\n+} TransApplyAction;\n\n27a.\nA new enum TRANS_LEADER_PARTIAL_SERIALIZE was added, but the\nexplanatory comment for it is missing\n\n~\n\n27b.\nIn fact, this new TRANS_LEADER_PARTIAL_SERIALIZE is used in many\nplaces with no comments to explain what it is for.\n\n~~~\n\n28. handle_streamed_transaction\n\n static bool\n handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)\n {\n- TransactionId xid;\n+ TransactionId current_xid;\n+ ParallelApplyWorkerInfo *winfo;\n+ TransApplyAction apply_action;\n+ StringInfoData origin_msg;\n+\n+ apply_action = get_transaction_apply_action(stream_xid, &winfo);\n\n /* not in streaming mode */\n- if (!in_streamed_transaction)\n+ if (apply_action == TRANS_LEADER_APPLY)\n return false;\n\n- Assert(stream_fd != NULL);\n Assert(TransactionIdIsValid(stream_xid));\n\n+ origin_msg = *s;\n\n28a.\nThere are no comments explaining what this\nTRANS_LEADER_PARTIAL_SERIALIZE is doing. SO I cannot tell if\n'origin_msg' is a meaningful name, or does that mean to say\n'original_msg' ?\n\n~\n\n28b.\nWhy not assign it at the declaration, the same as\napply_handle_stream_prepare does?\n\n~~~\n\n29. 
apply_handle_stream_prepare\n\n+ case TRANS_LEADER_PARTIAL_SERIALIZE:\n\nSeems like there is a missing explanation of what this partial\nserialize logic is doing.\n\n~~~\n\n30.\n\n+ case TRANS_PARALLEL_APPLY:\n+ parallel_apply_replorigin_setup();\n+\n+ /* Unlock all the shared object lock at transaction end. */\n+ parallel_apply_unlock(MyParallelShared->stream_lock_id);\n+\n+ if (stream_fd)\n+ BufFileClose(stream_fd);\n\nShould be some explanatory comment, on what's going on here with the\nstream_fd. E.g. how does it get to be non-NULL and why you do not set\nit again to NULL after the BufFileClose.\n\n~~~\n\n31.\n\n /*\n+ * Handle STREAM START message when the transaction was spilled to disk.\n+ *\n+ * Inintialize fileset if not yet and open the file.\n+ */\n+void\n+serialize_stream_start(TransactionId xid, bool first_segment)\n\nTypo \"Inintialize\" -> \"Initialize\"\n\nLooks like missing words in the comment.\n\nSUGGESTION\nInitialize fileset (if not already done), and open the file.\n\n~~~\n\n\n32. apply_handle_stream_start\n\n- if (in_streamed_transaction)\n+ if (!switching_to_serialize && in_streamed_transaction)\n ereport(ERROR,\n (errcode(ERRCODE_PROTOCOL_VIOLATION),\n errmsg_internal(\"duplicate STREAM START message\")));\n\nSomehow, I think this condition seems more natural if written the\nother way around:\n\nSUGGESTION\nif (in_streamed_transaction && !switching_to_serialize)\n\n~~~\n\n33.\n\n+ /*\n+ * Increment the number of message waiting to be processed by\n+ * parallel apply worker.\n+ */\n+ pg_atomic_add_fetch_u32(&(winfo->shared->left_message), 1);\n\n33a.\n\"of message\" -> \"of messages\".\n\n~\n\n33b.\nThe extra &() parens are not useful.\n\nThis same syntax is repeated in all the calls to that atomic function\nso please search/fix all the others too...\n\n~\n\n33c.\nThe member name 'left_message' seems not a very good name. 
How about\n'pending_message_count' or 'n_unprocessed_messages' or\n'n_messages_remaining' or anything else more informative?\n\n~~~\n\n34. apply_handle_stream_abort\n\n+static void\n+apply_handle_stream_abort(StringInfo s)\n+{\n+ TransactionId xid;\n+ TransactionId subxid;\n+ LogicalRepStreamAbortData abort_data;\n+ ParallelApplyWorkerInfo *winfo;\n+ TransApplyAction apply_action;\n+ StringInfoData origin_msg = *s;\n\nI'm unsure about that 'origin_msg' variable. Should that be called\n'original_msg'?\n\n~~~\n\n35.\n\n+ if (subxid == xid)\n\nThere are multiple parts of this logic that are doing (subxid == xid),\nso it might be better to assign that to a meaningful variable name\ninstead of the repeated comparisons.\n\n36.\n\n+ * The file was deleted if aborted the whole transaction, so\n+ * create it again in this case.\n\nEnglish? Missing words?\n\n~~~\n\n37.\n\n+ /*\n+ * Increment the number of message waiting to be processed by\n+ * parallel apply worker.\n+ */\n\n\"message\" -> \"messages\"\n\n~~~\n\n38.\n\n+ /*\n+ * If there is no message left, wait for the leader to release the\n+ * lock and send more messages.\n+ */\n+ if (xid != subxid &&\n+ pg_atomic_sub_fetch_u32(&(MyParallelShared->left_message), 1) == 0)\n+ parallel_apply_lock(MyParallelShared->stream_lock_id);\n\nThe comment says \"wait for the leader\"... but the comment seems\nmisleading - there is no waiting happening here.\n\n~~~\n\n39. 
apply_spooled_messages\n\n+\n /*\n * Common spoolfile processing.\n */\n-static void\n-apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\n+void\n+apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n+ XLogRecPtr lsn)\n\nSpurious extra blank line above this function.\n\n~~~\n\n40.\n\n- fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDONLY,\n+ fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY,\n false);\n\nUnnecessary wrapping.\n\n~~~\n\n41.\n\n+ fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY,\n false);\n+ stream_fd = fd;\n\nIs it still meaningful to have the local 'fd' variable? Might as well\njust use 'stream_fd' instead now, right?\n\n~~~\n\n42.\n\n+ /*\n+ * Break the loop if parallel apply worker have finished applying the\n+ * transaction. The parallel apply worker should have close the file\n+ * before committing.\n+ */\n\nEnglish?\n\n\"if parallel\" -> \"if the parallel\"\n\n\"have finished\" -> \"has finished\"\n\n\"should have close\" -> \"should have closed\"\n\n~~~\n\n43. apply_handle_stream_commit\n\n LogicalRepCommitData commit_data;\n+ ParallelApplyWorkerInfo *winfo;\n+ TransApplyAction apply_action;\n+ StringInfoData origin_msg = *s\n\nI'm unsure about that 'origin_msg' variable. Should that be called\n'original_msg' ?\n\n~~~\n\n\n44. stream_write_message\n\n+ * stream_write_message\n+ * Serialize the message that are not in a streaming block to a file.\n+ */\n+static void\n+stream_write_message(TransactionId xid, char action, StringInfo s,\n+ bool create_file)\n\n\n44a.\nThis logic seems new, but the function comment sounds strange\n(English/typos?) and it is not giving enough details about when is\nthis file, and for what purpose are we writing to it?\n\n~\n\n44b.\nIf this is always written to a file, then wouldn't a better function\nname be something including the word \"serialize\" - e.g.\nserialize_message()?\n\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n45. 
logicalrep_worker_onexit\n\n+ /*\n+ * Release all the session level lock that could be held in parallel apply\n+ * mode.\n+ */\n+ LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n\n\"the session level lock\" -> \"session level locks\"\n\n======\n\nsrc/include/replication/worker_internal.h\n\n46. ParallelApplyWorkerShared\n\n+ /*\n+ * Flag used to ensure commit ordering.\n+ *\n+ * The parallel apply worker will set it to false after handling the\n+ * transaction finish commands while the apply leader will wait for it to\n+ * become false before proceeding in transaction finish commands (e.g.\n+ * STREAM_COMMIT/STREAM_ABORT/STREAM_PREPARE).\n+ */\n+ ParallelTransState xact_state;\n\nThe comment has gone stale because this member is not a boolean flag\nanymore, so saying \"will set it to false\" is wrong...\n\n~~~\n\n47.\n\n+ /* Unique identifiers in the current subscription that used to lock. */\n+ uint16 stream_lock_id;\n+ uint16 transaction_lock_id;\n\nComment English?\n\n~~~\n\n48.\n\n+ pg_atomic_uint32 left_message;\n\nNeeds explanatory comment.\n\n~~~\n\n49.\n\n+ /* Whether there is partially sent message left in the queue. 
*/\n+ bool partial_sent_message;\n\nComment English?\n\n~~~\n\n50.\n\n+ /*\n+ * Don't use SharedFileSet here because the fileset is shared by the leader\n+ * worker and the fileset in leader need to survive after releasing the\n+ * shared memory so that the leader can re-use the fileset for next\n+ * streaming transaction.\n+ */\n+ bool fileset_valid;\n+ FileSet fileset;\n\nThe comment here seems to need some more work because it is saying\nmore about what it *isn't*, rather than what it *is*.\n\nSomething like:\n\nThe 'fileset' is used for....\nThe 'fileset' is only valid to use when the accompanying fileset_valid\nflag is true...\nNOTE - We cannot use a SharedFileSet here because....\n\nAlso, fix typos \"need to survive\" -> \"needs to survive\".\n\nAlso, it may be better to refer to the \"leader apply worker\" by its\nfull name instead of just \"leader\".\n\n~~~\n\n51. typedef struct ParallelApplyWorkerInfo\n\n+ bool serialize_changes;\n\nNeeds explanatory comment.\n\n~~\n\n52.\n\n+ /*\n+ * Used to save the message that was only partially sent to parallel apply\n+ * worker.\n+ */\n+ char *pending_msg;\n\n\nSome information seems missing because this comment does not have\nenough detail to know what it means - e.g. 
what is a partially sent\nmessage?\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 7 Nov 2022 19:16:37 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Nov 3, 2022 at 10:06 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, November 2, 2022 10:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Oct 24, 2022 at 8:42 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > > > On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > > > >\n> > > > > > About your point that having different partition structures for\n> > > > > > publisher and subscriber, I don't know how common it will be once we\n> > > > > > have DDL replication. Also, the default value of\n> > > > > > publish_via_partition_root is false which doesn't seem to indicate\n> > > > > > that this is a quite common case.\n> > > > >\n> > > > > So how can we consider these concurrent issues that could happen only\n> > > > > when streaming = 'parallel'? Can we restrict some use cases to avoid\n> > > > > the problem or can we have a safeguard against these conflicts?\n> > > > >\n> > > >\n> > > > Yeah, right now the strategy is to disallow parallel apply for such\n> > > > cases as you can see in *0003* patch.\n> > >\n> > > Tightening the restrictions could work in some cases but there might\n> > > still be coner cases and it could reduce the usability. I'm not really\n> > > sure that we can ensure such a deadlock won't happen with the current\n> > > restrictions. I think we need something safeguard just in case. 
For\n> > > example, if the leader apply worker is waiting for a lock acquired by\n> > > its parallel worker, it cancels the parallel worker's transaction,\n> > > commits its transaction, and restarts logical replication. Or the\n> > > leader can log the deadlock to let the user know.\n> > >\n> >\n> > As another direction, we could make the parallel apply feature robust\n> > if we can detect deadlocks that happen among the leader worker and\n> > parallel workers. I'd like to summarize the idea discussed off-list\n> > (with Amit, Hou-San, and Kuroda-San) for discussion. The basic idea is\n> > that when the leader worker or parallel worker needs to wait for\n> > something (eg. transaction completion, messages) we use lmgr\n> > functionality so that we can create wait-for edges and detect\n> > deadlocks in lmgr.\n> >\n> > For example, a scenario where a deadlock occurs is the following:\n> >\n> > [Publisher]\n> > create table tab1(a int);\n> > create publication pub for table tab1;\n> >\n> > [Subcriber]\n> > creat table tab1(a int primary key);\n> > create subscription sub connection 'port=10000 dbname=postgres'\n> > publication pub with (streaming = parallel);\n> >\n> > TX1:\n> > BEGIN;\n> > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\n> > Tx2:\n> > BEGIN;\n> > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\n> > COMMIT;\n> > COMMIT;\n> >\n> > Suppose a parallel apply worker (PA-1) is executing TX-1 and the\n> > leader apply worker (LA) is executing TX-2 concurrently on the\n> > subscriber. Now, LA is waiting for PA-1 because of the unique key of\n> > tab1 while PA-1 is waiting for LA to send further messages. There is a\n> > deadlock between PA-1 and LA but lmgr cannot detect it.\n> >\n> > One idea to resolve this issue is that we have LA acquire a session\n> > lock on a shared object (by LockSharedObjectForSession()) and have\n> > PA-1 wait on the lock before trying to receive messages. 
IOW, LA\n> > acquires the lock before sending STREAM_STOP and releases it if\n> > already acquired before sending STREAM_START, STREAM_PREPARE and\n> > STREAM_COMMIT. For PA-1, it always needs to acquire the lock after\n> > processing STREAM_STOP and then release immediately after acquiring\n> > it. That way, when PA-1 is waiting for LA, we can have a wait-edge\n> > from PA-1 to LA in lmgr, which will make a deadlock in lmgr like:\n> >\n> > LA (waiting to acquire lock) -> PA-1 (waiting to acquire the shared\n> > object) -> LA\n> >\n> > We would need the shared objects per parallel apply worker.\n> >\n> > After detecting a deadlock, we can restart logical replication with\n> > temporarily disabling the parallel apply, which is done by 0005 patch.\n> >\n> > Another scenario is similar to the previous case but TX-1 and TX-2 are\n> > executed by two parallel apply workers (PA-1 and PA-2 respectively).\n> > In this scenario, PA-2 is waiting for PA-1 to complete its transaction\n> > while PA-1 is waiting for subsequent input from LA. Also, LA is\n> > waiting for PA-2 to complete its transaction in order to preserve the\n> > commit order. There is a deadlock among three processes but it cannot\n> > be detected in lmgr because the fact that LA is waiting for PA-2 to\n> > complete its transaction doesn't appear in lmgr (see\n> > parallel_apply_wait_for_xact_finish()). To fix it, we can use\n> > XactLockTableWait() instead.\n> >\n> > However, since XactLockTableWait() considers PREPARED TRANSACTION as\n> > still in progress, probably we need a similar trick as above in case\n> > where a transaction is prepared. For example, suppose that TX-2 was\n> > prepared instead of committed in the above scenario, PA-2 acquires\n> > another shared lock at START_STREAM and releases it at\n> > STREAM_COMMIT/PREPARE. LA can wait on the lock.\n> >\n> > Yet another scenario where LA has to wait is the case where the shm_mq\n> > buffer is full. In the above scenario (ie. 
PA-1 and PA-2 are executing\n> > transactions concurrently), if the shm_mq buffer between LA and PA-2\n> > is full, LA has to wait to send messages, and this wait doesn't appear\n> > in lmgr. To fix it, probably we have to use non-blocking write and\n> > wait with a timeout. If timeout is exceeded, the LA will write to file\n> > and indicate PA-2 that it needs to read file for remaining messages.\n> > Then LA will start waiting for commit which will detect deadlock if\n> > any.\n> >\n> > If we can detect deadlocks by having such a functionality or some\n> > other way then we don't need to tighten the restrictions of subscribed\n> > tables' schemas etc.\n>\n> Thanks for the analysis and summary !\n>\n> I tried to implement the above idea and here is the patch set. I have done some\n> basic tests for the new codes and it work fine.\n\nThank you for updating the patches!\n\nHere are comments on v42-0001:\n\nWe have the following three similarly named functions related to\nstarting a new parallel apply worker:\n\nparallel_apply_start_worker()\nparallel_apply_setup_worker()\nparallel_apply_setup_dsm()\n\nIt seems to me that we can somewhat merge them since\nparallel_apply_setup_worker() and parallel_apply_setup_dsm() have only\none caller.\n\n---\n+/*\n+ * Extract the streaming mode value from a DefElem. This is like\n+ * defGetBoolean() but also accepts the special value of \"parallel\".\n+ */\n+char\n+defGetStreamingMode(DefElem *def)\n\nIt's a bit unnatural to have this function in define.c since the other\nfunctions in this file are for primitive data types. 
How about having it\nin subscription.c?\n\n---\n /*\n * Exit if any parameter that affects the remote connection\nwas changed.\n- * The launcher will start a new worker.\n+ * The launcher will start a new worker, but note that the\nparallel apply\n+ * worker may or may not restart depending on the value of\nthe streaming\n+ * option and whether there will be a streaming transaction.\n\nIn which case does the parallel apply worker not restart even if the\nstreaming option has been changed?\n\n---\nI think we should explain somewhere the idea of using locks for\nsynchronization between leader and worker. Maybe we can do that with a\nsample workload in a new README file?\n\n---\nin parallel_apply_send_data():\n\n+ result = shm_mq_send(winfo->mq_handle, nbytes, data,\ntrue, true);\n+\n+ if (result == SHM_MQ_SUCCESS)\n+ break;\n+ else if (result == SHM_MQ_DETACHED)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"could not send data\nto shared-memory queue\")))\n+\n+ Assert(result == SHM_MQ_WOULD_BLOCK);\n+\n+ if (++retry >= CHANGES_THRESHOLD)\n+ {\n+ MemoryContext oldcontext;\n+ StringInfoData msg;\n+ TimestampTz now = GetCurrentTimestamp();\n+\n+ if (startTime == 0)\n+ startTime = now;\n+\n+ if (!TimestampDifferenceExceeds(startTime,\nnow, SHM_SEND_TIMEOUT_MS))\n+ continue;\n\nIIUC since the parallel worker retries to send data without waiting, the\n'retry' will get larger than CHANGES_THRESHOLD in a very short time.\nBut the worker waits at least for SHM_SEND_TIMEOUT_MS to spool data\nregardless of 'retry' count. Don't we need to nap somewhat, and why do\nwe need CHANGES_THRESHOLD?\n\n---\n+/*\n+ * Wait until the parallel apply worker's xact_state flag becomes\n+ * the same as in_xact.\n+ */\n+static void\n+parallel_apply_wait_for_in_xact(ParallelApplyWorkerShared *wshared,\n+\nParallelTransState xact_state)\n+{\n+ for (;;)\n+ {\n+ /* Stop if the flag becomes the same as in_xact. 
*/\n\nWhat do you mean by 'in_xact' here?\n\n---\nI got the error \"ERROR: invalid logical replication message type \"\"\nwith the following scenario:\n\n1. Stop the PA by sending SIGSTOP signal.\n2. Stream a large transaction so that the LA spools changes to the file for PA.\n3. Resume the PA by sending SIGCONT signal.\n4. Stream another large transaction.\n\n---\n* On publisher (with logical_decoding_work_mem = 64kB)\nbegin;\ninsert into t select generate_series(1, 1000);\nrollback;\nbegin;\ninsert into t select generate_series(1, 1000);\nrollback;\n\nI got the following error:\n\nERROR: hash table corrupted\nCONTEXT: processing remote data for replication origin \"pg_16393\"\nduring message type \"STREAM START\" in transaction 734\n\n---\nIIUC the changes for worker.c in the 0001 patch include both changes:\n\n1. apply worker takes action based on the apply_action returned by\nget_transaction_apply_action() per message (or streamed chunk).\n2. apply worker supports handling parallel apply workers.\n\nIt seems to me that (1) is rather a refactoring patch, so probably we\ncan do that in a separate patch so that we can make the patches\nsmaller.\n\n---\npostgres(1:2831190)=# \\dRs+ test_sub1\nList of subscriptions\n-[ RECORD 1 ]------+--------------------------\nName | test_sub1\nOwner | masahiko\nEnabled | t\nPublication | {test_pub1}\nBinary | f\nStreaming | p\nTwo-phase commit | d\nDisable on error | f\nOrigin | any\nSynchronous commit | off\nConninfo | port=5551 dbname=postgres\nSkip LSN | 0/0\n\nIt's better to show 'on', 'off' or 'streaming' rather than one character.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Nov 2022 19:17:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThe following are my comments. 
I want to consider the patch more, but I am sending what I have for now.\r\n\r\n===\r\nworker.c\r\n\r\n01. typedef enum TransApplyAction\r\n\r\n```\r\n/*\r\n * What action to take for the transaction.\r\n *\r\n * TRANS_LEADER_APPLY means that we are in the leader apply worker and changes\r\n * of the transaction are applied directly in the worker.\r\n *\r\n * TRANS_LEADER_SERIALIZE means that we are in the leader apply worker or table\r\n * sync worker. Changes are written to temporary files and then applied when\r\n * the final commit arrives.\r\n *\r\n * TRANS_LEADER_SEND_TO_PARALLEL means that we are in the leader apply worker\r\n * and need to send the changes to the parallel apply worker.\r\n *\r\n * TRANS_PARALLEL_APPLY means that we are in the parallel apply worker and\r\n * changes of the transaction are applied directly in the worker.\r\n */\r\n```\r\n\r\nTRANS_LEADER_PARTIAL_SERIALIZE should be listed here as well.\r\n\r\n02. handle_streamed_transaction()\r\n\r\n```\r\n+ StringInfoData origin_msg;\r\n...\r\n+ origin_msg = *s;\r\n...\r\n+ /* Write the change to the current file */\r\n+ stream_write_change(action,\r\n+ apply_action == TRANS_LEADER_SERIALIZE ?\r\n+ s : &origin_msg);\r\n```\r\n\r\nI'm not sure why origin_msg is needed. Can we remove the conditional operator?\r\n\r\n\r\n03. apply_handle_stream_start()\r\n\r\n```\r\n+ * XXX We can avoid sending pairs of the START/STOP messages to the parallel\r\n+ * worker because unlike apply worker it will process only one transaction at a\r\n+ * time. However, it is not clear whether any optimization is worthwhile\r\n+ * because these messages are sent only when the logical_decoding_work_mem\r\n+ * threshold is exceeded.\r\n```\r\n\r\nThis comment should be modified because PA must acquire and release locks at that time.\r\n\r\n\r\n04. apply_handle_stream_prepare()\r\n\r\n```\r\n+ /*\r\n+ * After sending the data to the parallel apply worker, wait for\r\n+ * that worker to finish. 
This is necessary to maintain commit\r\n+ * order which avoids failures due to transaction dependencies and\r\n+ * deadlocks.\r\n+ */\r\n+ parallel_apply_wait_for_xact_finish(winfo->shared);\r\n```\r\n\r\nThis does not seem correct: LA may not send data but instead spill changes to file.\r\n\r\n05. apply_handle_stream_commit()\r\n\r\n```\r\n+ if (apply_action == TRANS_LEADER_PARTIAL_SERIALIZE)\r\n+ stream_cleanup_files(MyLogicalRepWorker->subid, xid);\r\n```\r\n\r\nI'm not sure whether the stream files should be removed by LA or PAs. Could you tell me the reason why you chose LA?\r\n\r\n===\r\napplyparallelworker.c\r\n\r\n05. parallel_apply_can_start()\r\n\r\n```\r\n+ if (switching_to_serialize)\r\n+ return false;\r\n```\r\n\r\nCould you add a comment like:\r\nDon't start a new parallel apply worker if the leader apply worker has been spilling changes to the disk temporarily.\r\n\r\n06. parallel_apply_start_worker()\r\n\r\n```\r\n+ /*\r\n+ * Set the xact_state flag in the leader instead of the\r\n+ * parallel apply worker to avoid the race condition where the leader has\r\n+ * already started waiting for the parallel apply worker to finish\r\n+ * processing the transaction while the child process has not yet\r\n+ * processed the first STREAM_START and has not set the\r\n+ * xact_state to true.\r\n+ */\r\n```\r\n\r\nI think the word \"flag\" should be used only for booleans, so the comment should be modified.\r\n(There are so many such code-comments; all of them should be modified.)\r\n\r\n\r\n07. parallel_apply_get_unique_id()\r\n\r\n```\r\n+/*\r\n+ * Returns the unique id among all parallel apply workers in the subscriber.\r\n+ */\r\n+static uint16\r\n+parallel_apply_get_unique_id()\r\n```\r\n\r\nI think this function is inefficient: the computational complexity increases linearly as the number of PAs increases. I think the Bitmapset data structure could be used instead.\r\n\r\n08. 
parallel_apply_send_data()\r\n\r\n```\r\n#define CHANGES_THRESHOLD\t1000\r\n#define SHM_SEND_TIMEOUT_MS\t10000\r\n```\r\n\r\nI think the timeout may be too long. Could you tell me the background behind it?\r\n\r\n\r\n09. parallel_apply_send_data()\r\n\r\n```\r\n\t\t\t/*\r\n\t\t\t * Close the stream file if not in a streaming block, the file will\r\n\t\t\t * be reopened later.\r\n\t\t\t */\r\n\t\t\tif (!stream_apply_worker)\r\n\t\t\t\tserialize_stream_stop(winfo->shared->xid);\r\n```\r\n\r\na.\r\nIIUC the timings when LA tries to send data but stream_apply_worker is NULL are:\r\n* apply_handle_stream_prepare, \r\n* apply_handle_stream_start, \r\n* apply_handle_stream_abort, and\r\n* apply_handle_stream_commit.\r\nAnd at that time the state of TransApplyAction may be TRANS_LEADER_SEND_TO_PARALLEL. When should the file be closed?\r\n\r\nb.\r\nEven if this is needed, I think the name of the called function should be modified. Here LA may not be handling a STREAM_STOP message. close_stream_file() or something?\r\n\r\n\r\n10. parallel_apply_send_data()\r\n\r\n```\r\n\t\t\t/* Initialize the stream fileset. */\r\n\t\t\tserialize_stream_start(winfo->shared->xid, true);\r\n```\r\n\r\nI think the name of the called function should be modified. Here LA may not be handling a STREAM_START message. open_stream_file() or something?\r\n\r\n11. parallel_apply_send_data()\r\n\r\n```\r\n\t\tif (++retry >= CHANGES_THRESHOLD)\r\n\t\t{\r\n\t\t\tMemoryContext oldcontext;\r\n\t\t\tStringInfoData msg;\r\n...\r\n\t\t\tinitStringInfo(&msg);\r\n\t\t\tappendBinaryStringInfo(&msg, data, nbytes);\r\n...\r\n\t\t\tswitching_to_serialize = true;\r\n\t\t\tapply_dispatch(&msg);\r\n\t\t\tswitching_to_serialize = false;\r\n\r\n\t\t\tbreak;\r\n\t\t}\r\n```\r\n\r\npfree(msg.data) may be needed.\r\n\r\n===\r\n12. worker_internal.h\r\n\r\n```\r\n+ pg_atomic_uint32 left_message;\r\n```\r\n\r\n\r\nParallelApplyWorkerShared is already protected by mutex locks. 
Why did you add an atomic variable to the data structure?\r\n\r\n===\r\n13. typedefs.list\r\n\r\nParallelTransState should be added.\r\n\r\n===\r\n14. General\r\n\r\nI have already mentioned this directly, but I point it out here to notify other members again.\r\nI have caused a deadlock with two PAs. Indeed it could be solved by the lmgr, but the output did not seem very helpful. The following was copied from the log, and we can see that the commands executed by the apply workers were not output. Can we extend it, or is it out of scope?\r\n\r\n\r\n```\r\n2022-11-07 11:11:27.449 UTC [11262] ERROR: deadlock detected\r\n2022-11-07 11:11:27.449 UTC [11262] DETAIL: Process 11262 waits for AccessExclusiveLock on object 16393 of class 6100 of database 0; blocked by process 11320.\r\n Process 11320 waits for ShareLock on transaction 742; blocked by process 11266.\r\n Process 11266 waits for AccessShareLock on object 16393 of class 6100 of database 0; blocked by process 11262.\r\n Process 11262: <command string not enabled>\r\n Process 11320: <command string not enabled>\r\n Process 11266: <command string not enabled>\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 7 Nov 2022 11:42:53 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, November 4, 2022 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Nov 4, 2022 at 1:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Thanks for the analysis and summary !\r\n> > >\r\n> > > I tried to implement the above idea and here is the patch set.\r\n> > >\r\n> >\r\n> > Few comments on v42-0001\r\n> > ===========================\r\n> >\r\n\r\nThanks for the 
comments.\r\n\r\n> Few more comments on v42-0001\r\n> ===============================\r\n> 1. In parallel_apply_send_data(), it seems winfo->serialize_changes\r\n> and switching_to_serialize are set to indicate that we have changed\r\n> parallel to serialize mode. Isn't using just the\r\n> switching_to_serialize sufficient? Also, it would be better to name\r\n> switching_to_serialize as parallel_to_serialize or something like\r\n> that.\r\n\r\nI slightly changed the logic to serialize the message directly on timeout\r\ninstead of invoking apply_dispatch again, so that we don't need\r\nswitching_to_serialize.\r\n\r\n> \r\n> 2. In parallel_apply_send_data(), the patch has already initialized\r\n> the fileset, and then again in apply_handle_stream_start(), it will do\r\n> the same if we fail while sending stream_start message to the parallel\r\n> worker. It seems we don't need to initialize fileset again for\r\n> TRANS_LEADER_PARTIAL_SERIALIZE state in apply_handle_stream_start()\r\n> unless I am missing something.\r\n\r\nFixed.\r\n\r\n> 3.\r\n> apply_handle_stream_start(StringInfo s)\r\n> {\r\n> ...\r\n> + if (!first_segment)\r\n> + {\r\n> + /*\r\n> + * Unlock the shared object lock so that parallel apply worker\r\n> + * can continue to receive and apply changes.\r\n> + */\r\n> + parallel_apply_unlock(winfo->shared->stream_lock_id);\r\n> ...\r\n> }\r\n> \r\n> Can we have an assert before this unlock call that the lock must be\r\n> held? Similarly, if there are other places then we can have assert\r\n> there as well.\r\n\r\nIt seems we don't have a standard API that can be used without a transaction.\r\nMaybe we can use the list ParallelApplyLockids to check that?\r\n\r\n> 4. 
It is not very clear to me how maintaining ParallelApplyLockids\r\n> list is helpful.\r\n\r\nI will think about this and remove it in the next version if possible.\r\n\r\n> \r\n> 5.\r\n> /*\r\n> + * Handle STREAM START message when the transaction was spilled to disk.\r\n> + *\r\n> + * Inintialize fileset if not yet and open the file.\r\n> + */\r\n> +void\r\n> +serialize_stream_start(TransactionId xid, bool first_segment)\r\n> +{\r\n> + /*\r\n> + * Start a transaction on stream start,\r\n> \r\n> This function's name and comments seem to indicate that it is to\r\n> handle stream_start message. Is that really the case? It is being\r\n> called from parallel_apply_send_data() which made me think it can be\r\n> used from other places as well.\r\n\r\nAdjusted the comment.\r\n\r\nHere is the new version patch set which addressed the comments as of last Friday.\r\nI also added some comments for the newly introduced code in this version.\r\n\r\nAnd thanks a lot for the comments that Sawada-san, Peter and Kuroda-san posted today.\r\nI will handle them in the next version soon.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 7 Nov 2022 13:19:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "> Fair point. I think if the user wants, she can join with\r\n> pg_stat_subscription based on PID and find the corresponding\r\n> subscription. 
However, if we want to identify everything via pg_locks\r\n> then I think we should also mention classid or database id as field1.\r\n> So, it would look like: field1: (pg_subscription's oid or current db\r\n> id); field2: OID of subscription in pg_subscription; field3: local or\r\n> remote xid; field4: 0/1 to differentiate between remote and local xid.\r\n\r\nSorry, I missed the discussion related to LOCKTAG.\r\n+1 for adding a new tag like LOCKTAG_PARALLEL_APPLY, and\r\nI prefer field1 to be dbid because it is more useful for reporting a lock in DescribeLockTag().\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 8 Nov 2022 03:51:23 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 7, 2022 9:19 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Friday, November 4, 2022 7:45 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Nov 4, 2022 at 1:36 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Nov 3, 2022 at 6:36 PM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Thanks for the analysis and summary !\r\n> > > >\r\n> > > > I tried to implement the above idea and here is the patch set.\r\n> > > >\r\n> > >\r\n> > > Few comments on v42-0001\r\n> > > ===========================\r\n> > >\r\n> \r\n> Thanks for the comments.\r\n> \r\n> > Few more comments on v42-0001\r\n> > ===============================\r\n> > 1. In parallel_apply_send_data(), it seems winfo->serialize_changes\r\n> > and switching_to_serialize are set to indicate that we have changed\r\n> > parallel to serialize mode. Isn't using just the\r\n> > switching_to_serialize sufficient? 
Also, it would be better to name\r\n> > switching_to_serialize as parallel_to_serialize or something like\r\n> > that.\r\n> \r\n> I slightly change the logic to let serialize the message directly when timeout\r\n> instead of invoking apply_dispatch again so that we don't need the\r\n> switching_to_serialize.\r\n> \r\n> >\r\n> > 2. In parallel_apply_send_data(), the patch has already initialized\r\n> > the fileset, and then again in apply_handle_stream_start(), it will do\r\n> > the same if we fail while sending stream_start message to the parallel\r\n> > worker. It seems we don't need to initialize fileset again for\r\n> > TRANS_LEADER_PARTIAL_SERIALIZE state in apply_handle_stream_start()\r\n> > unless I am missing something.\r\n> \r\n> Fixed.\r\n> \r\n> > 3.\r\n> > apply_handle_stream_start(StringInfo s) { ...\r\n> > + if (!first_segment)\r\n> > + {\r\n> > + /*\r\n> > + * Unlock the shared object lock so that parallel apply worker\r\n> > + * can continue to receive and apply changes.\r\n> > + */\r\n> > + parallel_apply_unlock(winfo->shared->stream_lock_id);\r\n> > ...\r\n> > }\r\n> >\r\n> > Can we have an assert before this unlock call that the lock must be\r\n> > held? Similarly, if there are other places then we can have assert\r\n> > there as well.\r\n> \r\n> It seems we don't have a standard API can be used without a transaction.\r\n> Maybe we can use the list ParallelApplyLockids to check that ?\r\n> \r\n> > 4. 
It is not very clear to me how maintaining ParallelApplyLockids\r\n> > list is helpful.\r\n> \r\n> I will think about this and remove this in next version list if possible.\r\n> \r\n> >\r\n> > 5.\r\n> > /*\r\n> > + * Handle STREAM START message when the transaction was spilled to disk.\r\n> > + *\r\n> > + * Inintialize fileset if not yet and open the file.\r\n> > + */\r\n> > +void\r\n> > +serialize_stream_start(TransactionId xid, bool first_segment) {\r\n> > + /*\r\n> > + * Start a transaction on stream start,\r\n> >\r\n> > This function's name and comments seem to indicate that it is to\r\n> > handle stream_start message. Is that really the case? It is being\r\n> > called from parallel_apply_send_data() which made me think it can be\r\n> > used from other places as well.\r\n> \r\n> Adjusted the comment.\r\n> \r\n> Here is the new version patch set which addressed comments as of last Friday.\r\n> I also added some comments for the newly introduced codes in this version.\r\n>\r\n\r\nSorry, I posted the wrong patch for V43, which lacked some changes.\r\nAttached is the correct patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 8 Nov 2022 03:56:43 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 7, 2022 at 6:49 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, November 4, 2022 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > 3.\n> > apply_handle_stream_start(StringInfo s)\n> > {\n> > ...\n> > + if (!first_segment)\n> > + {\n> > + /*\n> > + * Unlock the shared object lock so that parallel apply worker\n> > + * can continue to receive and apply changes.\n> > + */\n> > + parallel_apply_unlock(winfo->shared->stream_lock_id);\n> > ...\n> > }\n> >\n> > Can we have an assert before this unlock call that the lock must be\n> > held? 
Similarly, if there are other places then we can have assert\n> > there as well.\n>\n> It seems we don't have a standard API can be used without a transaction.\n> Maybe we can use the list ParallelApplyLockids to check that ?\n>\n\nYeah, that occurred to me as well, but I am not sure if it is a good\nidea to maintain this list just for an assertion; if it turns out that\nwe need to maintain it for a different purpose then we can probably\nuse it for the assert as well.\n\nFew other comments/questions:\n=========================\n1.\napply_handle_stream_start(StringInfo s)\n{\n...\n\n+ case TRANS_PARALLEL_APPLY:\n...\n...\n+ /*\n+ * Unlock the shared object lock so that the leader apply worker\n+ * can continue to send changes.\n+ */\n+ parallel_apply_unlock(MyParallelShared->stream_lock_id, AccessShareLock);\n\nAs per the design in the email [1], this lock needs to be released by\nthe leader worker during stream start which means it should be\nreleased under the state TRANS_LEADER_SEND_TO_PARALLEL. From the\ncomments as well, it is not clear to me why at this time the leader is\nsupposed to be blocked. Is there a reason for doing this differently than\nwhat is proposed in the original design?\n\n2. Similar to the above, it is not clear why the parallel worker needs to\nrelease the stream_lock_id lock at stream_commit and stream_prepare.\n\n3. Am I understanding correctly that you need to lock/unlock in\napply_handle_stream_abort() for the parallel worker because after\nrollback to savepoint, there could be another set of stream or\ntransaction end commands for which you want to wait? If so, maybe an\nadditional comment would serve the purpose.\n\n4.\nThe leader may have sent multiple streaming blocks in the queue\n+ * When the child is processing a streaming block. 
So only try to\n+ * lock if there is no message left in the queue.\n\nLet's slightly reword this to: \"By the time child is processing the\nchanges in the current streaming block, the leader may have sent\nmultiple streaming blocks. So, try to lock only if there is no message\nleft in the queue.\"\n\n5.\n+parallel_apply_unlock(uint16 lockid, LOCKMODE lockmode)\n+{\n+ if (!list_member_int(ParallelApplyLockids, lockid))\n+ return;\n+\n+ UnlockSharedObjectForSession(SubscriptionRelationId, MySubscription->oid,\n+ lockid, am_leader_apply_worker() ?\n+ AccessExclusiveLock:\n+ AccessShareLock);\n\nThis function should use lockmode argument passed rather than deciding\nbased on am_leader_apply_worker. I think this is anyway going to\nchange if we start using a different locktag as discussed in one of\nthe above emails.\n\n6.\n+\n /*\n * Common spoolfile processing.\n */\n-static void\n-apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\n+void\n+apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n\nSeems like a spurious line addition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Nov 2022 17:19:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi all,\r\n\r\nI have tested the patch set in two cases, so I want to share the result. \r\n\r\n====\r\nCase 1. deadlock caused by leader worker, parallel worker, and backend.\r\n\r\nCase 2. deadlock caused by non-immutable trigger\r\n===\r\n\r\nIt has worked well in both cases. 
PSA reports what I did.\r\nI want to investigate more if anyone wants to check.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Wed, 9 Nov 2022 02:54:19 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 7, 2022 at 1:46 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v42-0001\n...\n...\n>\n> 8.\n>\n> + /*\n> + * Resend the pending message to parallel apply worker to cleanup the\n> + * queue. Note that parallel apply worker will just ignore this message\n> + * as it has already handled this message while applying spooled\n> + * messages.\n> + */\n> + result = shm_mq_send(winfo->mq_handle, strlen(winfo->pending_msg),\n> + winfo->pending_msg, false, true);\n>\n> If I understand this logic it seems a bit hacky. From the comment, it\n> seems you are resending a message that you know/expect to be ignored\n> simply to make it disappear. (??). Isn't there some other way to clear\n> the pending message without requiring a bogus send?\n>\n\nIIUC, this handling is required for the case when we are not able to\nsend a message to the parallel apply worker and switch to serialize mode\n(write remaining data to file). Basically, it is possible that the\nmessage is only partially sent and there is no way to clean the queue. I\nfeel we can directly free the worker in this case even if there is\nspace in the worker pool. 
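Why a partial send is unrecoverable can be illustrated with a toy bounded queue (a standalone model, not the real shm_mq API): once the writer runs out of space mid-message, the bytes already copied stay in the buffer, so the only clean options are to finish or resend the same message, or abandon the queue altogether.

```c
#include <string.h>
#include <stddef.h>

typedef enum { TOY_MQ_SUCCESS, TOY_MQ_WOULD_BLOCK } ToyMqResult;

/* Toy single-writer queue with a fixed 8-byte buffer. */
typedef struct ToyMq
{
    char        buf[8];
    size_t      used;           /* bytes already written into buf */
} ToyMq;

/*
 * Copy as much of the message as fits.  On a full buffer the call
 * returns TOY_MQ_WOULD_BLOCK, but the bytes already copied remain in
 * the queue, which is why the sender cannot simply drop the pending
 * message and start sending something else.
 */
ToyMqResult
toy_mq_try_send(ToyMq *mq, const char *data, size_t len, size_t *sent)
{
    size_t      avail = sizeof(mq->buf) - mq->used;
    size_t      n = (len < avail) ? len : avail;

    memcpy(mq->buf + mq->used, data, n);
    mq->used += n;
    *sent = n;

    return (n == len) ? TOY_MQ_SUCCESS : TOY_MQ_WOULD_BLOCK;
}
```

In this model, a second message that does not fit leaves the queue half-filled with its prefix, mirroring the situation the bogus resend (or freeing the worker) is meant to resolve.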
The other idea could be that we detach from\nshm_mq and then invent a way to re-attach it after we try to reuse the\nsame worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Nov 2022 17:54:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 7, 2022 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> \r\n> On Thu, Nov 3, 2022 at 10:06 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, November 2, 2022 10:50 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Oct 24, 2022 at 8:42 PM Masahiko Sawada\r\n> > > <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > >\r\n> > > > > On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada\r\n> > > <sawada.mshk@gmail.com> wrote:\r\n> > > > > >\r\n> > > > > > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > > > >\r\n> > > > > > > About your point that having different partition structures for\r\n> > > > > > > publisher and subscriber, I don't know how common it will be once\r\n> we\r\n> > > > > > > have DDL replication. Also, the default value of\r\n> > > > > > > publish_via_partition_root is false which doesn't seem to indicate\r\n> > > > > > > that this is a quite common case.\r\n> > > > > >\r\n> > > > > > So how can we consider these concurrent issues that could happen\r\n> only\r\n> > > > > > when streaming = 'parallel'? 
Can we restrict some use cases to avoid\r\n> > > > > > the problem or can we have a safeguard against these conflicts?\r\n> > > > > >\r\n> > > > >\r\n> > > > > Yeah, right now the strategy is to disallow parallel apply for such\r\n> > > > > cases as you can see in *0003* patch.\r\n> > > >\r\n> > > > Tightening the restrictions could work in some cases but there might\r\n> > > > still be coner cases and it could reduce the usability. I'm not really\r\n> > > > sure that we can ensure such a deadlock won't happen with the current\r\n> > > > restrictions. I think we need something safeguard just in case. For\r\n> > > > example, if the leader apply worker is waiting for a lock acquired by\r\n> > > > its parallel worker, it cancels the parallel worker's transaction,\r\n> > > > commits its transaction, and restarts logical replication. Or the\r\n> > > > leader can log the deadlock to let the user know.\r\n> > > >\r\n> > >\r\n> > > As another direction, we could make the parallel apply feature robust\r\n> > > if we can detect deadlocks that happen among the leader worker and\r\n> > > parallel workers. I'd like to summarize the idea discussed off-list\r\n> > > (with Amit, Hou-San, and Kuroda-San) for discussion. The basic idea is\r\n> > > that when the leader worker or parallel worker needs to wait for\r\n> > > something (eg. 
transaction completion, messages) we use lmgr\r\n> > > functionality so that we can create wait-for edges and detect\r\n> > > deadlocks in lmgr.\r\n> > >\r\n> > > For example, a scenario where a deadlock occurs is the following:\r\n> > >\r\n> > > [Publisher]\r\n> > > create table tab1(a int);\r\n> > > create publication pub for table tab1;\r\n> > >\r\n> > > [Subcriber]\r\n> > > creat table tab1(a int primary key);\r\n> > > create subscription sub connection 'port=10000 dbname=postgres'\r\n> > > publication pub with (streaming = parallel);\r\n> > >\r\n> > > TX1:\r\n> > > BEGIN;\r\n> > > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\r\n> > > Tx2:\r\n> > > BEGIN;\r\n> > > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); --\r\n> streamed\r\n> > > COMMIT;\r\n> > > COMMIT;\r\n> > >\r\n> > > Suppose a parallel apply worker (PA-1) is executing TX-1 and the\r\n> > > leader apply worker (LA) is executing TX-2 concurrently on the\r\n> > > subscriber. Now, LA is waiting for PA-1 because of the unique key of\r\n> > > tab1 while PA-1 is waiting for LA to send further messages. There is a\r\n> > > deadlock between PA-1 and LA but lmgr cannot detect it.\r\n> > >\r\n> > > One idea to resolve this issue is that we have LA acquire a session\r\n> > > lock on a shared object (by LockSharedObjectForSession()) and have\r\n> > > PA-1 wait on the lock before trying to receive messages. IOW, LA\r\n> > > acquires the lock before sending STREAM_STOP and releases it if\r\n> > > already acquired before sending STREAM_START, STREAM_PREPARE and\r\n> > > STREAM_COMMIT. For PA-1, it always needs to acquire the lock after\r\n> > > processing STREAM_STOP and then release immediately after acquiring\r\n> > > it. 
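The effect of that extra lock can be illustrated with a minimal wait-for-graph walk (a standalone sketch, not lmgr's actual deadlock-check algorithm). Without the lock, PA-1's wait on the message queue contributes no edge, so the cycle through LA stays invisible.

```c
/*
 * Minimal wait-for-graph cycle check.  waits_for[i] holds the index of
 * the process that process i is blocked on, or -1 if that wait is not
 * visible to the lock manager.
 */
int
in_wait_cycle(const int *waits_for, int nprocs, int start)
{
    int         cur = waits_for[start];
    int         steps = 0;

    while (cur != -1 && steps++ < nprocs)
    {
        if (cur == start)
            return 1;           /* followed the edges back to start */
        cur = waits_for[cur];
    }
    return 0;
}
```

With process 0 as LA and process 1 as PA-1, the graph {LA -> PA-1, PA-1 -> nothing} reports no cycle, while adding the lock-induced edge PA-1 -> LA makes the deadlock detectable.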
That way, when PA-1 is waiting for LA, we can have a wait-edge\r\n> > > from PA-1 to LA in lmgr, which will make a deadlock in lmgr like:\r\n> > >\r\n> > > LA (waiting to acquire lock) -> PA-1 (waiting to acquire the shared\r\n> > > object) -> LA\r\n> > >\r\n> > > We would need the shared objects per parallel apply worker.\r\n> > >\r\n> > > After detecting a deadlock, we can restart logical replication with\r\n> > > temporarily disabling the parallel apply, which is done by 0005 patch.\r\n> > >\r\n> > > Another scenario is similar to the previous case but TX-1 and TX-2 are\r\n> > > executed by two parallel apply workers (PA-1 and PA-2 respectively).\r\n> > > In this scenario, PA-2 is waiting for PA-1 to complete its transaction\r\n> > > while PA-1 is waiting for subsequent input from LA. Also, LA is\r\n> > > waiting for PA-2 to complete its transaction in order to preserve the\r\n> > > commit order. There is a deadlock among three processes but it cannot\r\n> > > be detected in lmgr because the fact that LA is waiting for PA-2 to\r\n> > > complete its transaction doesn't appear in lmgr (see\r\n> > > parallel_apply_wait_for_xact_finish()). To fix it, we can use\r\n> > > XactLockTableWait() instead.\r\n> > >\r\n> > > However, since XactLockTableWait() considers PREPARED TRANSACTION as\r\n> > > still in progress, probably we need a similar trick as above in case\r\n> > > where a transaction is prepared. For example, suppose that TX-2 was\r\n> > > prepared instead of committed in the above scenario, PA-2 acquires\r\n> > > another shared lock at START_STREAM and releases it at\r\n> > > STREAM_COMMIT/PREPARE. LA can wait on the lock.\r\n> > >\r\n> > > Yet another scenario where LA has to wait is the case where the shm_mq\r\n> > > buffer is full. In the above scenario (ie. 
PA-1 and PA-2 are executing\r\n> > > transactions concurrently), if the shm_mq buffer between LA and PA-2\r\n> > > is full, LA has to wait to send messages, and this wait doesn't appear\r\n> > > in lmgr. To fix it, probably we have to use non-blocking write and\r\n> > > wait with a timeout. If timeout is exceeded, the LA will write to file\r\n> > > and indicate PA-2 that it needs to read file for remaining messages.\r\n> > > Then LA will start waiting for commit which will detect deadlock if\r\n> > > any.\r\n> > >\r\n> > > If we can detect deadlocks by having such a functionality or some\r\n> > > other way then we don't need to tighten the restrictions of subscribed\r\n> > > tables' schemas etc.\r\n> >\r\n> > Thanks for the analysis and summary !\r\n> >\r\n> > I tried to implement the above idea and here is the patch set. I have done some\r\n> > basic tests for the new codes and it work fine.\r\n> \r\n> Thank you for updating the patches!\r\n> \r\n> Here are comments on v42-0001:\r\n\r\nThanks for the comments.\r\n\r\n> We have the following three similar name functions regarding to\r\n> starting a new parallel apply worker:\r\n> \r\n> parallel_apply_start_worker()\r\n> parallel_apply_setup_worker()\r\n> parallel_apply_setup_dsm()\r\n> \r\n> It seems to me that we can somewhat merge them since\r\n> parallel_apply_setup_worker() and parallel_apply_setup_dsm() have only\r\n> one caller.\r\n\r\nSince these functions are doing different tasks (external function, launch, DSM), I\r\npersonally feel it's OK to split them. But if others also feel it's unnecessary, I will\r\nmerge them.\r\n\r\n> ---\r\n> +/*\r\n> + * Extract the streaming mode value from a DefElem. This is like\r\n> + * defGetBoolean() but also accepts the special value of \"parallel\".\r\n> + */\r\n> +char\r\n> +defGetStreamingMode(DefElem *def)\r\n> \r\n> It's a bit unnatural to have this function in define.c since other\r\n> functions in this file for primitive data types. 
How about having it\r\n> in subscription.c?\r\n\r\nChanged.\r\n\r\n> ---\r\n> /*\r\n> * Exit if any parameter that affects the remote connection\r\n> was changed.\r\n> - * The launcher will start a new worker.\r\n> + * The launcher will start a new worker, but note that the\r\n> parallel apply\r\n> + * worker may or may not restart depending on the value of\r\n> the streaming\r\n> + * option and whether there will be a streaming transaction.\r\n> \r\n> In which case does the parallel apply worker don't restart even if the\r\n> streaming option has been changed?\r\n> \r\n> ---\r\n> I think we should explain somewhere the idea of using locks for\r\n> synchronization between leader and worker. Maybe can we do that with\r\n> sample workload in new README file?\r\n\r\nHaving a README sounds like a good idea. I think that, beyond the lock design, we might\r\nneed to also move some other existing design comments atop worker.c into it. So maybe it's\r\nbetter to do that as a separate patch? For now, I added comments atop applyparallelworker.c.\r\n\r\n> ---\r\n> in parallel_apply_send_data():\r\n> \r\n> + result = shm_mq_send(winfo->mq_handle, nbytes, data,\r\n> true, true);\r\n> +\r\n> + if (result == SHM_MQ_SUCCESS)\r\n> + break;\r\n> + else if (result == SHM_MQ_DETACHED)\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"could not send data\r\n> to shared-memory queue\")))\r\n> +\r\n> + Assert(result == SHM_MQ_WOULD_BLOCK);\r\n> +\r\n> + if (++retry >= CHANGES_THRESHOLD)\r\n> + {\r\n> + MemoryContext oldcontext;\r\n> + StringInfoData msg;\r\n> + TimestampTz now = GetCurrentTimestamp();\r\n> +\r\n> + if (startTime == 0)\r\n> + startTime = now;\r\n> +\r\n> + if (!TimestampDifferenceExceeds(startTime,\r\n> now, SHM_SEND_TIMEOUT_MS))\r\n> + continue;\r\n> \r\n> IIUC since the parallel worker retries to send data without waits the\r\n> 'retry' will get larger than CHANGES_THRESHOLD in a very short time.\r\n> But the worker waits at 
least for SHM_SEND_TIMEOUT_MS to spool data\r\n> regardless of 'retry' count. Don't we need to nap somewhat and why do\r\n> we need CHANGES_THRESHOLD?\r\n\r\nOh, I intended to only check for timeout after continuously retrying XX times to\r\nreduce the cost of getting the system time and calculating the time difference.\r\nI added some comments in the code.\r\n\r\n> ---\r\n> +/*\r\n> + * Wait until the parallel apply worker's xact_state flag becomes\r\n> + * the same as in_xact.\r\n> + */\r\n> +static void\r\n> +parallel_apply_wait_for_in_xact(ParallelApplyWorkerShared *wshared,\r\n> +\r\n> ParallelTransState xact_state)\r\n> +{\r\n> + for (;;)\r\n> + {\r\n> + /* Stop if the flag becomes the same as in_xact. */\r\n> \r\n> What do you mean by 'in_xact' here?\r\n\r\nChanged.\r\n\r\n> ---\r\n> I got the error \"ERROR: invalid logical replication message type \"\"\r\n> with the following scenario:\r\n> \r\n> 1. Stop the PA by sending SIGSTOP signal.\r\n> 2. Stream a large transaction so that the LA spools changes to the file for PA.\r\n> 3. Resume the PA by sending SIGCONT signal.\r\n> 4. Stream another large transaction.\r\n> \r\n> ---\r\n> * On publisher (with logical_decoding_work_mem = 64kB)\r\n> begin;\r\n> insert into t select generate_series(1, 1000);\r\n> rollback;\r\n> begin;\r\n> insert into t select generate_series(1, 1000);\r\n> rollback;\r\n> \r\n> I got the following error:\r\n> \r\n> ERROR: hash table corrupted\r\n> CONTEXT: processing remote data for replication origin \"pg_16393\"\r\n> during message type \"STREAM START\" in transaction 734\r\n\r\nThanks! I think I have fixed them in the new version.\r\n\r\n> ---\r\n> IIUC the changes for worker.c in 0001 patch includes both changes:\r\n> \r\n> 1. apply worker takes action based on the apply_action returned by\r\n> get_transaction_apply_action() per message (or streamed chunk).\r\n> 2. 
apply worker supports handling parallel apply workers.\r\n> \r\n> It seems to me that (1) is a rather refactoring patch, so probably we\r\n> can do that in a separate patch so that we can make the patches\r\n> smaller.\r\n\r\nI tried it, but it seems the code size of the apply_action is quite small,\r\nBecause we only have two action(LEADER_APPLY/LEADER_SERIALIZE) on HEAD branch\r\nand only handle_streamed_transaction use it. I will think if there are other\r\nways to split the patch.\r\n\r\n> ---\r\n> postgres(1:2831190)=# \\dRs+ test_sub1\r\n> List of subscriptions\r\n> -[ RECORD 1 ]------+--------------------------\r\n> Name | test_sub1\r\n> Owner | masahiko\r\n> Enabled | t\r\n> Publication | {test_pub1}\r\n> Binary | f\r\n> Streaming | p\r\n> Two-phase commit | d\r\n> Disable on error | f\r\n> Origin | any\r\n> Synchronous commit | off\r\n> Conninfo | port=5551 dbname=postgres\r\n> Skip LSN | 0/0\r\n> \r\n> It's better to show 'on', 'off' or 'streaming' rather than one character.\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 10 Nov 2022 15:09:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 7, 2022 7:43 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Dear Hou,\r\n> \r\n> The followings are my comments. I want to consider the patch more, but I sent\r\n> it once.\r\n\r\nThanks for the comments.\r\n\r\n> \r\n> ===\r\n> worker.c\r\n> \r\n> 01. typedef enum TransApplyAction\r\n> \r\n> ```\r\n> /*\r\n> * What action to take for the transaction.\r\n> *\r\n> * TRANS_LEADER_APPLY means that we are in the leader apply worker and\r\n> changes\r\n> * of the transaction are applied directly in the worker.\r\n> *\r\n> * TRANS_LEADER_SERIALIZE means that we are in the leader apply worker or\r\n> table\r\n> * sync worker. 
Changes are written to temporary files and then applied when\r\n> * the final commit arrives.\r\n> *\r\n> * TRANS_LEADER_SEND_TO_PARALLEL means that we are in the leader apply\r\n> worker\r\n> * and need to send the changes to the parallel apply worker.\r\n> *\r\n> * TRANS_PARALLEL_APPLY means that we are in the parallel apply worker and\r\n> * changes of the transaction are applied directly in the worker.\r\n> */\r\n> ```\r\n> \r\n> TRANS_LEADER_PARTIAL_SERIALIZE should be listed in.\r\n> \r\n\r\nAdded.\r\n\r\n> 02. handle_streamed_transaction()\r\n> \r\n> ```\r\n> + StringInfoData origin_msg;\r\n> ...\r\n> + origin_msg = *s;\r\n> ...\r\n> + /* Write the change to the current file */\r\n> + stream_write_change(action,\r\n> +\r\n> apply_action == TRANS_LEADER_SERIALIZE ?\r\n> +\r\n> + s : &origin_msg);\r\n> ```\r\n> \r\n> I'm not sure why origin_msg is needed. Can we remove the conditional\r\n> operator?\r\n\r\nCurrently, the parallel apply worker would need the transaction xid of this change to\r\ndefine savepoint. So, it need to write the original message to file.\r\n\r\n> \r\n> 03. apply_handle_stream_start()\r\n> \r\n> ```\r\n> + * XXX We can avoid sending pairs of the START/STOP messages to the\r\n> + parallel\r\n> + * worker because unlike apply worker it will process only one\r\n> + transaction at a\r\n> + * time. However, it is not clear whether any optimization is\r\n> + worthwhile\r\n> + * because these messages are sent only when the\r\n> + logical_decoding_work_mem\r\n> + * threshold is exceeded.\r\n> ```\r\n> \r\n> This comment should be modified because PA must acquire and release locks at\r\n> that time.\r\n> \r\n> \r\n> 04. apply_handle_stream_prepare()\r\n> \r\n> ```\r\n> + /*\r\n> + * After sending the data to the parallel apply worker,\r\n> wait for\r\n> + * that worker to finish. 
This is necessary to maintain\r\n> commit\r\n> + * order which avoids failures due to transaction\r\n> dependencies and\r\n> + * deadlocks.\r\n> + */\r\n> +\r\n> + parallel_apply_wait_for_xact_finish(winfo->shared);\r\n> ```\r\n> \r\n> Here seems not to be correct. LA may not send data but spill changes to file.\r\n\r\nChanged.\r\n\r\n> 05. apply_handle_stream_commit()\r\n> \r\n> ```\r\n> + if (apply_action ==\r\n> TRANS_LEADER_PARTIAL_SERIALIZE)\r\n> +\r\n> + stream_cleanup_files(MyLogicalRepWorker->subid, xid);\r\n> ```\r\n> \r\n> I'm not sure whether the stream files should be removed by LA or PAs. Could\r\n> you tell me the reason why you choose LA?\r\n\r\nI think the logic would be natural that only LA can write/delete/create the file and\r\nPA only need to read from it.\r\n\r\n> ===\r\n> applyparallelworker.c\r\n> \r\n> 05. parallel_apply_can_start()\r\n> \r\n> ```\r\n> + if (switching_to_serialize)\r\n> + return false;\r\n> ```\r\n> \r\n> Could you add a comment like:\r\n> Don't start a new parallel apply worker if the leader apply worker has been\r\n> spilling changes to the disk temporarily.\r\n\r\nThese codes have been removed.\r\n\r\n> 06. parallel_apply_start_worker()\r\n> \r\n> ```\r\n> + /*\r\n> + * Set the xact_state flag in the leader instead of the\r\n> + * parallel apply worker to avoid the race condition where the leader\r\n> has\r\n> + * already started waiting for the parallel apply worker to finish\r\n> + * processing the transaction while the child process has not yet\r\n> + * processed the first STREAM_START and has not set the\r\n> + * xact_state to true.\r\n> + */\r\n> ```\r\n> \r\n> I thinkg the word \"flag\" should be used for boolean, so the comment should be\r\n> modified.\r\n> (There are so many such code-comments, all of them should be modified.)\r\n\r\nChanged.\r\n\r\n> \r\n> 07. 
parallel_apply_get_unique_id()\r\n> \r\n> ```\r\n> +/*\r\n> + * Returns the unique id among all parallel apply workers in the subscriber.\r\n> + */\r\n> +static uint16\r\n> +parallel_apply_get_unique_id()\r\n> ```\r\n> \r\n> I think this function is inefficient: the computational complexity will be increased\r\n> linearly when the number of PAs is increased. I think the Bitmapset data\r\n> structure may be used.\r\n\r\nThis function is removed.\r\n\r\n> 08. parallel_apply_send_data()\r\n> \r\n> ```\r\n> #define CHANGES_THRESHOLD\t1000\r\n> #define SHM_SEND_TIMEOUT_MS\t10000\r\n> ```\r\n> \r\n> I think the timeout may be too long. Could you tell me the background about it?\r\n\r\nSerializing data to file would affect the performance, so I tried to make it difficult to happen unless the\r\nPA is really blocked by another PA or BA.\r\n\r\n> 09. parallel_apply_send_data()\r\n> \r\n> ```\r\n> \t\t\t/*\r\n> \t\t\t * Close the stream file if not in a streaming block, the\r\n> file will\r\n> \t\t\t * be reopened later.\r\n> \t\t\t */\r\n> \t\t\tif (!stream_apply_worker)\r\n> \t\t\t\tserialize_stream_stop(winfo->shared->xid);\r\n> ```\r\n> \r\n> a.\r\n> IIUC the timings when LA tries to send data but stream_apply_worker is NULL\r\n> are:\r\n> * apply_handle_stream_prepare,\r\n> * apply_handle_stream_start,\r\n> * apply_handle_stream_abort, and\r\n> * apply_handle_stream_commit.\r\n> And at that time the state of TransApplyAction may be\r\n> TRANS_LEADER_SEND_TO_PARALLEL. When should be close the file?\r\n\r\nChanged to use another condition to check.\r\n\r\n> b.\r\n> Even if this is needed, I think the name of the called function should be modified.\r\n> Here LA may not handle STREAM_STOP message. close_stream_file() or\r\n> something?\r\n> \r\n> \r\n> 10. parallel_apply_send_data()\r\n> \r\n> ```\r\n> \t\t\t/* Initialize the stream fileset. 
*/\r\n> \t\t\tserialize_stream_start(winfo->shared->xid, true); ```\r\n> \r\n> I think the name of the called function should be modified. Here LA may not\r\n> handle STREAM_START message. open_stream_file() or something?\r\n> \r\n> 11. parallel_apply_send_data()\r\n> \r\n> ```\r\n> \t\tif (++retry >= CHANGES_THRESHOLD)\r\n> \t\t{\r\n> \t\t\tMemoryContext oldcontext;\r\n> \t\t\tStringInfoData msg;\r\n> ...\r\n> \t\t\tinitStringInfo(&msg);\r\n> \t\t\tappendBinaryStringInfo(&msg, data, nbytes); ...\r\n> \t\t\tswitching_to_serialize = true;\r\n> \t\t\tapply_dispatch(&msg);\r\n> \t\t\tswitching_to_serialize = false;\r\n> \r\n> \t\t\tbreak;\r\n> \t\t}\r\n> ```\r\n> \r\n> pfree(msg.data) may be needed.\r\n> \r\n> ===\r\n> 12. worker_internal.h\r\n> \r\n> ```\r\n> + pg_atomic_uint32 left_message;\r\n> ```\r\n> \r\n> \r\n> ParallelApplyWorkerShared has been already controlled by mutex locks. Why\r\n> did you add an atomic variable to the data structure?\r\n\r\nI personally feel this value is modified more frequently, so use an atomic\r\nvariable here.\r\n\r\n> ===\r\n> 13. typedefs.list\r\n> \r\n> ParallelTransState should be added.\r\n\r\nAdded.\r\n\r\n> ===\r\n> 14. General\r\n> \r\n> I have already said old about it directly, but I point it out to notify other members\r\n> again.\r\n> I have caused a deadlock with two PAs. Indeed it could be solved by the lmgr, but\r\n> the output seemed not to be kind. Followings were copied from the log and we\r\n> could see that commands executed by apply workers were not output. 
Can we\r\n> extend it, or is it the out of scope?\r\n> \r\n> \r\n> ```\r\n> 2022-11-07 11:11:27.449 UTC [11262] ERROR: deadlock detected\r\n> 2022-11-07 11:11:27.449 UTC [11262] DETAIL: Process 11262 waits for\r\n> AccessExclusiveLock on object 16393 of class 6100 of database 0; blocked by\r\n> process 11320.\r\n> Process 11320 waits for ShareLock on transaction 742; blocked by\r\n> process 11266.\r\n> Process 11266 waits for AccessShareLock on object 16393 of class 6100 of\r\n> database 0; blocked by process 11262.\r\n> Process 11262: <command string not enabled>\r\n> Process 11320: <command string not enabled>\r\n> Process 11266: <command string not enabled> ```\r\n\r\nOn HEAD, a apply worker could also cause a deadlock with a user backend. Like:\r\nTx1 (backend)\r\nbegin;\r\ninsert into tbl1 values (100);\r\n Tx2 (replaying streaming transaction)\r\n begin;\r\n insert into tbl1 values (1);\r\n delete from tbl2;\r\ninsert into tbl1 values (1);\r\n insert into tbl1 values (100);\r\n\r\nlogical replication worker ERROR: deadlock detected\r\nlogical replication worker DETAIL: Process 2158391 waits for ShareLock on transaction 749; blocked by process 2158410.\r\n Process 2158410 waits for ShareLock on transaction 750; blocked by process 2158391.\r\n Process 2158391: <command string not enabled>\r\n Process 2158410: insert into tbl1 values (1);\r\n\r\nSo, it looks like the existing behavior. I agree that it would be better to\r\nshow something, but maybe we can do that as a separate patch.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 10 Nov 2022 15:14:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 7, 2022 4:17 PM Peter Smith <smithpb2250@gmail.com>\r\n> \r\n> Here are my review comments for v42-0001\r\n\r\nThanks for the comments.\r\n> ======\r\n> \r\n> 28. 
handle_streamed_transaction\r\n> \r\n> static bool\r\n> handle_streamed_transaction(LogicalRepMsgType action, StringInfo s) {\r\n> - TransactionId xid;\r\n> + TransactionId current_xid;\r\n> + ParallelApplyWorkerInfo *winfo;\r\n> + TransApplyAction apply_action;\r\n> + StringInfoData origin_msg;\r\n> +\r\n> + apply_action = get_transaction_apply_action(stream_xid, &winfo);\r\n> \r\n> /* not in streaming mode */\r\n> - if (!in_streamed_transaction)\r\n> + if (apply_action == TRANS_LEADER_APPLY)\r\n> return false;\r\n> \r\n> - Assert(stream_fd != NULL);\r\n> Assert(TransactionIdIsValid(stream_xid));\r\n> \r\n> + origin_msg = *s;\r\n> \r\n> ~\r\n> \r\n> 28b.\r\n> Why not assign it at the declaration, the same as apply_handle_stream_prepare\r\n> does?\r\n\r\nThe assignment is unnecessary for non-streaming transaction, so I delayed it.\r\n> ~\r\n> \r\n> 44b.\r\n> If this is always written to a file, then wouldn't a better function name be\r\n> something including the word \"serialize\" - e.g.\r\n> serialize_message()?\r\n\r\nI feel it would be better to be consistent with the existing style stream_xxx_xx().\r\n\r\nI think I have addressed all the comments, but since quite a few logics are\r\nchanged in the new version so I might missed something. 
And dome code wrapping need to\r\nbe adjusted, I plan to run pg_indent for next version.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Thu, 10 Nov 2022 15:15:11 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, November 8, 2022 7:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Nov 7, 2022 at 6:49 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, November 4, 2022 7:45 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > 3.\r\n> > > apply_handle_stream_start(StringInfo s) { ...\r\n> > > + if (!first_segment)\r\n> > > + {\r\n> > > + /*\r\n> > > + * Unlock the shared object lock so that parallel apply worker\r\n> > > + * can continue to receive and apply changes.\r\n> > > + */\r\n> > > + parallel_apply_unlock(winfo->shared->stream_lock_id);\r\n> > > ...\r\n> > > }\r\n> > >\r\n> > > Can we have an assert before this unlock call that the lock must be\r\n> > > held? 
Similarly, if there are other places then we can have assert\r\n> > > there as well.\r\n> >\r\n> > It seems we don't have a standard API can be used without a transaction.\r\n> > Maybe we can use the list ParallelApplyLockids to check that ?\r\n> >\r\n> \r\n> Yeah, that occurred to me as well but I am not sure if it is a good\r\n> idea to maintain this list just for assertion but if it turns out that\r\n> we need to maintain it for a different purpose then we can probably\r\n> use it for assert as well.\r\n> \r\n> Few other comments/questions:\r\n> =========================\r\n> 1.\r\n> apply_handle_stream_start(StringInfo s)\r\n> {\r\n> ...\r\n> \r\n> + case TRANS_PARALLEL_APPLY:\r\n> ...\r\n> ...\r\n> + /*\r\n> + * Unlock the shared object lock so that the leader apply worker\r\n> + * can continue to send changes.\r\n> + */\r\n> + parallel_apply_unlock(MyParallelShared->stream_lock_id,\r\n> AccessShareLock);\r\n> \r\n> As per the design in the email [1], this lock needs to be released by\r\n> the leader worker during stream start which means it should be\r\n> released under the state TRANS_LEADER_SEND_TO_PARALLEL. From the\r\n> comments as well, it is not clear to me why at this time leader is\r\n> supposed to be blocked. Is there a reason for doing differently than\r\n> what is proposed in the original design?\r\n> 2. Similar to above, it is not clear why the parallel worker needs to\r\n> release the stream_lock_id lock at stream_commit and stream_prepare?\r\n\r\nSorry, these were due to my miss. Changed.\r\n\r\n> 3. Am, I understanding correctly that you need to lock/unlock in\r\n> apply_handle_stream_abort() for the parallel worker because after\r\n> rollback to savepoint, there could be another set of stream or\r\n> transaction end commands for which you want to wait? If so, maybe an\r\n> additional comment would serve the purpose.\r\n\r\nI think you are right. 
I will think about this in case I missed something and\r\nadd some comments in next version.\r\n\r\n> 4.\r\n> The leader may have sent multiple streaming blocks in the queue\r\n> + * When the child is processing a streaming block. So only try to\r\n> + * lock if there is no message left in the queue.\r\n> \r\n> Let's slightly reword this to: \"By the time child is processing the\r\n> changes in the current streaming block, the leader may have sent\r\n> multiple streaming blocks. So, try to lock only if there is no message\r\n> left in the queue.\"\r\n\r\nChanged.\r\n\r\n> 5.\r\n> +parallel_apply_unlock(uint16 lockid, LOCKMODE lockmode)\r\n> +{\r\n> + if (!list_member_int(ParallelApplyLockids, lockid))\r\n> + return;\r\n> +\r\n> + UnlockSharedObjectForSession(SubscriptionRelationId,\r\n> MySubscription->oid,\r\n> + lockid, am_leader_apply_worker() ?\r\n> + AccessExclusiveLock:\r\n> + AccessShareLock);\r\n> \r\n> This function should use lockmode argument passed rather than deciding\r\n> based on am_leader_apply_worker. I think this is anyway going to\r\n> change if we start using a different locktag as discussed in one of\r\n> the above emails.\r\n\r\nChanged.\r\n\r\n> 6.\r\n> +\r\n> /*\r\n> * Common spoolfile processing.\r\n> */\r\n> -static void\r\n> -apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\r\n> +void\r\n> +apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\r\n> \r\n> Seems like a spurious line addition.\r\n\r\nRemoved.\r\n\r\n> Fair point. I think if the user wants, she can join with pg_stat_subscription\r\n> based on PID and find the corresponding subscription. However, if we want to\r\n> identify everything via pg_locks then I think we should also mention classid\r\n> or database id as field1. 
So, it would look like: field1: (pg_subscription's\r\n> oid or current db id); field2: OID of subscription in pg_subscription;\r\n> field3: local or remote xid; field4: 0/1 to differentiate between remote and\r\n> local xid.\r\n\r\nI tried to use local xid to lock the transaction, but we currently can only get\r\nthe local xid after applying the first change. And it's possible that the first\r\nchange in parallel apply worker is blocked by other parallel apply worker which\r\nmeans the parallel apply worker might not have a chance to share the local xid\r\nwith the leader.\r\n\r\nTo resolve this, I tried to use remote_xid for both stream and transaction lock\r\nand use field4: 0/1 to differentiate between stream and transaction lock. Like:\r\n\r\nfield1: (current db id); field2: OID of subscription in pg_subscription;\r\nfield3: remote xid; field4: 0/1 to differentiate between stream_lock and\r\ntransaction_lock.\r\n\r\n\r\n> IIUC, this handling is required for the case when we are not able to send a\r\n> message to parallel apply worker and switch to serialize mode (write\r\n> remaining data to file). Basically, it is possible that the message is only\r\n> partially sent and there is no way clean the queue. I feel we can directly\r\n> free the worker in this case even if there is a space in the worker pool. The\r\n> other idea could be that we detach from shm_mq and then invent a way to\r\n> re-attach it after we try to reuse the same worker.\r\n\r\nFor now, I directly stop the worker in this case. 
But I will think more about\r\nthis.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 11 Nov 2022 02:26:33 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 7, 2022 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Thu, Nov 3, 2022 at 10:06 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, November 2, 2022 10:50 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Oct 24, 2022 at 8:42 PM Masahiko Sawada\r\n> > > <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > On Wed, Oct 12, 2022 at 3:04 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > >\r\n> > > > > On Tue, Oct 11, 2022 at 5:52 AM Masahiko Sawada\r\n> > > <sawada.mshk@gmail.com> wrote:\r\n> > > > > >\r\n> > > > > > On Fri, Oct 7, 2022 at 2:00 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > > > >\r\n> > > > > > > About your point that having different partition structures for\r\n> > > > > > > publisher and subscriber, I don't know how common it will be once\r\n> we\r\n> > > > > > > have DDL replication. Also, the default value of\r\n> > > > > > > publish_via_partition_root is false which doesn't seem to indicate\r\n> > > > > > > that this is a quite common case.\r\n> > > > > >\r\n> > > > > > So how can we consider these concurrent issues that could happen\r\n> only\r\n> > > > > > when streaming = 'parallel'? 
Can we restrict some use cases to avoid\r\n> > > > > > the problem or can we have a safeguard against these conflicts?\r\n> > > > > >\r\n> > > > >\r\n> > > > > Yeah, right now the strategy is to disallow parallel apply for such\r\n> > > > > cases as you can see in *0003* patch.\r\n> > > >\r\n> > > > Tightening the restrictions could work in some cases but there might\r\n> > > > still be coner cases and it could reduce the usability. I'm not really\r\n> > > > sure that we can ensure such a deadlock won't happen with the current\r\n> > > > restrictions. I think we need something safeguard just in case. For\r\n> > > > example, if the leader apply worker is waiting for a lock acquired by\r\n> > > > its parallel worker, it cancels the parallel worker's transaction,\r\n> > > > commits its transaction, and restarts logical replication. Or the\r\n> > > > leader can log the deadlock to let the user know.\r\n> > > >\r\n> > >\r\n> > > As another direction, we could make the parallel apply feature robust\r\n> > > if we can detect deadlocks that happen among the leader worker and\r\n> > > parallel workers. I'd like to summarize the idea discussed off-list\r\n> > > (with Amit, Hou-San, and Kuroda-San) for discussion. The basic idea is\r\n> > > that when the leader worker or parallel worker needs to wait for\r\n> > > something (eg. 
transaction completion, messages) we use lmgr\r\n> > > functionality so that we can create wait-for edges and detect\r\n> > > deadlocks in lmgr.\r\n> > >\r\n> > > For example, a scenario where a deadlock occurs is the following:\r\n> > >\r\n> > > [Publisher]\r\n> > > create table tab1(a int);\r\n> > > create publication pub for table tab1;\r\n> > >\r\n> > > [Subcriber]\r\n> > > creat table tab1(a int primary key);\r\n> > > create subscription sub connection 'port=10000 dbname=postgres'\r\n> > > publication pub with (streaming = parallel);\r\n> > >\r\n> > > TX1:\r\n> > > BEGIN;\r\n> > > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); -- streamed\r\n> > > Tx2:\r\n> > > BEGIN;\r\n> > > INSERT INTO tab1 SELECT i FROM generate_series(1, 5000) s(i); --\r\n> streamed\r\n> > > COMMIT;\r\n> > > COMMIT;\r\n> > >\r\n> > > Suppose a parallel apply worker (PA-1) is executing TX-1 and the\r\n> > > leader apply worker (LA) is executing TX-2 concurrently on the\r\n> > > subscriber. Now, LA is waiting for PA-1 because of the unique key of\r\n> > > tab1 while PA-1 is waiting for LA to send further messages. There is a\r\n> > > deadlock between PA-1 and LA but lmgr cannot detect it.\r\n> > >\r\n> > > One idea to resolve this issue is that we have LA acquire a session\r\n> > > lock on a shared object (by LockSharedObjectForSession()) and have\r\n> > > PA-1 wait on the lock before trying to receive messages. IOW, LA\r\n> > > acquires the lock before sending STREAM_STOP and releases it if\r\n> > > already acquired before sending STREAM_START, STREAM_PREPARE and\r\n> > > STREAM_COMMIT. For PA-1, it always needs to acquire the lock after\r\n> > > processing STREAM_STOP and then release immediately after acquiring\r\n> > > it. 
That way, when PA-1 is waiting for LA, we can have a wait-edge\r\n> > > from PA-1 to LA in lmgr, which will make a deadlock in lmgr like:\r\n> > >\r\n> > > LA (waiting to acquire lock) -> PA-1 (waiting to acquire the shared\r\n> > > object) -> LA\r\n> > >\r\n> > > We would need the shared objects per parallel apply worker.\r\n> > >\r\n> > > After detecting a deadlock, we can restart logical replication with\r\n> > > temporarily disabling the parallel apply, which is done by 0005 patch.\r\n> > >\r\n> > > Another scenario is similar to the previous case but TX-1 and TX-2 are\r\n> > > executed by two parallel apply workers (PA-1 and PA-2 respectively).\r\n> > > In this scenario, PA-2 is waiting for PA-1 to complete its transaction\r\n> > > while PA-1 is waiting for subsequent input from LA. Also, LA is\r\n> > > waiting for PA-2 to complete its transaction in order to preserve the\r\n> > > commit order. There is a deadlock among three processes but it cannot\r\n> > > be detected in lmgr because the fact that LA is waiting for PA-2 to\r\n> > > complete its transaction doesn't appear in lmgr (see\r\n> > > parallel_apply_wait_for_xact_finish()). To fix it, we can use\r\n> > > XactLockTableWait() instead.\r\n> > >\r\n> > > However, since XactLockTableWait() considers PREPARED TRANSACTION\r\n> as\r\n> > > still in progress, probably we need a similar trick as above in case\r\n> > > where a transaction is prepared. For example, suppose that TX-2 was\r\n> > > prepared instead of committed in the above scenario, PA-2 acquires\r\n> > > another shared lock at START_STREAM and releases it at\r\n> > > STREAM_COMMIT/PREPARE. LA can wait on the lock.\r\n> > >\r\n> > > Yet another scenario where LA has to wait is the case where the shm_mq\r\n> > > buffer is full. In the above scenario (ie. 
PA-1 and PA-2 are executing\r\n> > > transactions concurrently), if the shm_mq buffer between LA and PA-2\r\n> > > is full, LA has to wait to send messages, and this wait doesn't appear\r\n> > > in lmgr. To fix it, probably we have to use non-blocking write and\r\n> > > wait with a timeout. If timeout is exceeded, the LA will write to file\r\n> > > and indicate PA-2 that it needs to read file for remaining messages.\r\n> > > Then LA will start waiting for commit which will detect deadlock if\r\n> > > any.\r\n> > >\r\n> > > If we can detect deadlocks by having such a functionality or some\r\n> > > other way then we don't need to tighten the restrictions of subscribed\r\n> > > tables' schemas etc.\r\n> >\r\n> > Thanks for the analysis and summary !\r\n> >\r\n> > I tried to implement the above idea and here is the patch set. I have done\r\n> some\r\n> > basic tests for the new codes and it work fine.\r\n> \r\n> Thank you for updating the patches!\r\n> \r\n> Here are comments on v42-0001:\r\n> \r\n> We have the following three similar name functions regarding to\r\n> starting a new parallel apply worker:\r\n> ---\r\n> /*\r\n> * Exit if any parameter that affects the remote connection\r\n> was changed.\r\n> - * The launcher will start a new worker.\r\n> + * The launcher will start a new worker, but note that the\r\n> parallel apply\r\n> + * worker may or may not restart depending on the value of\r\n> the streaming\r\n> + * option and whether there will be a streaming transaction.\r\n> \r\n> In which case does the parallel apply worker don't restart even if the\r\n> streaming option has been changed?\r\n\r\nSorry, I forgot to reply to this comment. 
If user change the streaming option from\r\n'parallel' to 'on' or 'off', the parallel apply workers won't be restarted.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 11 Nov 2022 02:27:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 11, 2022 at 7:57 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, November 7, 2022 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Here are comments on v42-0001:\n> >\n> > We have the following three similar name functions regarding to\n> > starting a new parallel apply worker:\n> > ---\n> > /*\n> > * Exit if any parameter that affects the remote connection\n> > was changed.\n> > - * The launcher will start a new worker.\n> > + * The launcher will start a new worker, but note that the\n> > parallel apply\n> > + * worker may or may not restart depending on the value of\n> > the streaming\n> > + * option and whether there will be a streaming transaction.\n> >\n> > In which case does the parallel apply worker don't restart even if the\n> > streaming option has been changed?\n>\n> Sorry, I forgot to reply to this comment. 
If user change the streaming option from\n> 'parallel' to 'on' or 'off', the parallel apply workers won't be restarted.\n>\n\nHow about something like the below so as to be more explicit about\nthis in the comments?\ndiff --git a/src/backend/replication/logical/worker.c\nb/src/backend/replication/logical/worker.c\nindex bfe326bf0c..74cd5565bd 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -3727,9 +3727,10 @@ maybe_reread_subscription(void)\n\n /*\n * Exit if any parameter that affects the remote connection was changed.\n- * The launcher will start a new worker, but note that the\nparallel apply\n- * worker may or may not restart depending on the value of the streaming\n- * option and whether there will be a streaming transaction.\n+ * The launcher will start a new worker but note that the parallel apply\n+ * worker won't restart if the streaming option's value is changed from\n+ * 'parallel' to any other value or the server decides not to stream the\n+ * in-progress transaction.\n */\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Nov 2022 12:51:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi,\r\n\r\nI noticed a CFbot failure and here is the new version patch set which should fix that.\r\nI also ran pgindent and made some cosmetic changes in the new version patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 11 Nov 2022 08:42:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Nov 10, 2022 at 8:41 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, November 7, 2022 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> >\n> 
> Here are comments on v42-0001:\n>\n> Thanks for the comments.\n>\n> > We have the following three similar name functions regarding to\n> > starting a new parallel apply worker:\n> >\n> > parallel_apply_start_worker()\n> > parallel_apply_setup_worker()\n> > parallel_apply_setup_dsm()\n> >\n> > It seems to me that we can somewhat merge them since\n> > parallel_apply_setup_worker() and parallel_apply_setup_dsm() have only\n> > one caller.\n>\n> Since these functions are doing different tasks(external function, Launch, DSM), so I\n> personally feel it's OK to split them. But if others also feel it's unnecessary I will\n> merge them.\n>\n\nI think it is fine either way but if you want to keep the\nfunctionality of parallel_apply_setup_worker() separate then let's\nname it something like parallel_apply_init_and_launch_worker which\nwill make the function name a bit long but it will be clear. I am\nthinking that instead of using parallel_apply in front of each\nfunction, shall we use PA? Then we can name this function as\nPAInitializeAndLaunchWorker().\n\nI feel you can even move the functionality to get the worker from the pool\nin parallel_apply_start_worker() to a separate function.\n\nAnother related comment:\n+ /* Try to get a free parallel apply worker. */\n+ foreach(lc, ParallelApplyWorkersList)\n+ {\n+ ParallelApplyWorkerInfo *tmp_winfo;\n+\n+ tmp_winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n+\n+ /* Check if the transaction in the worker has finished. */\n+ if (parallel_apply_free_worker(tmp_winfo, tmp_winfo->shared->xid, false))\n+ {\n+ /*\n+ * Clean up the worker information if the parallel apply worker has\n+ * been stopped.\n+ */\n+ ParallelApplyWorkersList =\nforeach_delete_current(ParallelApplyWorkersList, lc);\n+ parallel_apply_free_worker_info(tmp_winfo);\n+ continue;\n+ }\n\nI find it a bit odd that even though parallel_apply_free_worker() has the\nfunctionality to free the worker info, we are still doing it outside.\nIs there a specific reason for the same? 
I think we can add a comment\natop parallel_apply_free_worker() that on success, it will free the\npassed winfo. In addition to that, we can write some comments before\ntrying to free worker suggesting that it would be possible for\nrollback cases because after rollback we don't wait for workers to\nfinish so can't perform the cleanup.\n\n> > ---\n> > in parallel_apply_send_data():\n> >\n> > + result = shm_mq_send(winfo->mq_handle, nbytes, data,\n> > true, true);\n> > +\n> > + if (result == SHM_MQ_SUCCESS)\n> > + break;\n> > + else if (result == SHM_MQ_DETACHED)\n> > + ereport(ERROR,\n> > +\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"could not send data\n> > to shared-memory queue\")))\n> > +\n> > + Assert(result == SHM_MQ_WOULD_BLOCK);\n> > +\n> > + if (++retry >= CHANGES_THRESHOLD)\n> > + {\n> > + MemoryContext oldcontext;\n> > + StringInfoData msg;\n> > + TimestampTz now = GetCurrentTimestamp();\n> > +\n> > + if (startTime == 0)\n> > + startTime = now;\n> > +\n> > + if (!TimestampDifferenceExceeds(startTime,\n> > now, SHM_SEND_TIMEOUT_MS))\n> > + continue;\n> >\n> > IIUC since the parallel worker retries to send data without waits the\n> > 'retry' will get larger than CHANGES_THRESHOLD in a very short time.\n> > But the worker waits at least for SHM_SEND_TIMEOUT_MS to spool data\n> > regardless of 'retry' count. Don't we need to nap somewhat and why do\n> > we need CHANGES_THRESHOLD?\n>\n> Oh, I intended to only check for timeout after continuously retrying XX times to\n> reduce the cost of getting the system time and calculating the time difference.\n> I added some comments in the code.\n>\n\nSure, but the patch assumes that immediate retry will help which I am\nnot sure is correct. 
IIUC, the patch has overall wait time 10s, if so,\nI guess you can retry after 1s, that will ameliorate the cost of\ngetting the system time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Nov 2022 17:07:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 11, 2022 at 2:12 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nFew comments on v46-0001:\n======================\n1.\n+static void\n+apply_handle_stream_abort(StringInfo s)\n{\n...\n+ /* Send STREAM ABORT message to the parallel apply worker. */\n+ parallel_apply_send_data(winfo, s->len, s->data);\n+\n+ if (abort_toplevel_transaction)\n+ {\n+ parallel_apply_unlock_stream(xid, AccessExclusiveLock);\n\nShouldn't we need to release this lock before sending the message as\nwe are doing for stream_prepare and stream_commit? If there is a\nreason for doing it differently here then let's add some comments for\nthe same.\n\n2. It seems once the patch makes the file state as busy\n(LEADER_FILESET_BUSY), it will only be accessible after the leader\napply worker receives a transaction end message like stream_commit. Is\nmy understanding correct? If yes, then why can't we make it accessible\nafter the stream_stop message? Are you worried about the concurrency\nhandling for reading and writing the file? If so, we can probably deal\nwith it via some lock for reading and writing to file for each change.\nI think after this we may not need additional stream level lock/unlock\nin parallel_apply_spooled_messages. I understand that you probably\nwant to keep the code simple so I am not suggesting changing it\nimmediately but just wanted to know whether you have considered\nalternatives here.\n\n3. Don't we need to release the transaction lock at stream_abort in\nparallel apply worker? 
I understand that we are not waiting for it in\nthe leader worker but still parallel apply worker should release it if\nacquired at stream_start by it.\n\n4. A minor comment change as below:\ndiff --git a/src/backend/replication/logical/worker.c\nb/src/backend/replication/logical/worker.c\nindex 43f09b7e9a..c771851d1f 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -1851,6 +1851,9 @@ apply_handle_stream_abort(StringInfo s)\n parallel_apply_stream_abort(&abort_data);\n\n /*\n+ * We need to wait after processing rollback\nto savepoint for the next set\n+ * of changes.\n+ *\n * By the time parallel apply worker is\nprocessing the changes in\n * the current streaming block, the leader\napply worker may have\n * sent multiple streaming blocks. So, try to\nlock only if there\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Nov 2022 16:35:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, November 12, 2022 7:06 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Fri, Nov 11, 2022 at 2:12 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> Few comments on v46-0001:\r\n> ======================\r\n>\r\n\r\nThanks for the comments.\r\n\r\n> 1.\r\n> +static void\r\n> +apply_handle_stream_abort(StringInfo s)\r\n> {\r\n> ...\r\n> + /* Send STREAM ABORT message to the parallel apply worker. */\r\n> + parallel_apply_send_data(winfo, s->len, s->data);\r\n> +\r\n> + if (abort_toplevel_transaction)\r\n> + {\r\n> + parallel_apply_unlock_stream(xid, AccessExclusiveLock);\r\n> \r\n> Shouldn't we need to release this lock before sending the message as\r\n> we are doing for streap_prepare and stream_commit? 
If there is a\r\n> reason for doing it differently here then let's add some comments for\r\n> the same.\r\n\r\nChanged.\r\n\r\n> 2. It seems once the patch makes the file state as busy\r\n> (LEADER_FILESET_BUSY), it will only be accessible after the leader\r\n> apply worker receives a transaction end message like stream_commit. Is\r\n> my understanding correct? If yes, then why can't we make it accessible\r\n> after the stream_stop message? Are you worried about the concurrency\r\n> handling for reading and writing the file? If so, we can probably deal\r\n> with it via some lock for reading and writing to file for each change.\r\n> I think after this we may not need additional stream level lock/unlock\r\n> in parallel_apply_spooled_messages. I understand that you probably\r\n> want to keep the code simple so I am not suggesting changing it\r\n> immediately but just wanted to know whether you have considered\r\n> alternatives here.\r\n\r\nI thought about this, but it seems the current buffile design doesn't allow two\r\nprocesses to open the same buffile at the same time(refer to the comment atop\r\nof BufFileOpenFileSet()). This means the LA needs to make sure the PA has\r\nclosed the buffile before writing more changes into it. Although we could let\r\nthe LA wait for that, but it could cause another kind of deadlock. Suppose the\r\nPA opened the file and is blocked when applying the just read change. And the\r\nLA starts to wait when trying to write the next set of streaming changes into\r\nfile because the file is still opened by PA. 
Then the lock edge is like:\r\n\r\nLA (wait for file to be closed) -> PA1 (wait for unique lock in PA2) -> PA2\r\n(wait for stream lock held in LA)\r\n\r\nWe could introduce another lock for this, but that seems not very great as we\r\nalready have two kinds of locks here.\r\n\r\nAnother solution which doesn't need a new lock could be that we create a\r\ndifferent filename for each streaming block so that the leader doesn't need to\r\nreopen the same file after writing changes into it, but that would largely\r\nincrease the number of temp files and looks a bit hacky. Or we could let PA\r\nopen the file, then read and close the file for each change, but that seems to\r\nbring some overhead of opening and closing the file.\r\n\r\nBased on the above, how about keeping the current approach? (i.e. PA\r\nwill open the file only after the leader apply worker receives a transaction\r\nend message like stream_commit). Ideally, it will enter partial serialize mode\r\nonly when PA is blocked by a backend or another PA, which seems not that common.\r\n\r\n> 3. Don't we need to release the transaction lock at stream_abort in\r\n> parallel apply worker? I understand that we are not waiting for it in\r\n> the leader worker but still parallel apply worker should release it if\r\n> acquired at stream_start by it.\r\n\r\nI thought that the lock would be automatically released on rollback, but after testing, I find\r\nit's possible that the lock won't be released if it's an empty streaming transaction. So, I\r\nadded the code to release the lock in the new version patch.\r\n\r\n> \r\n> 4. 
A minor comment change as below:\r\n> diff --git a/src/backend/replication/logical/worker.c\r\n> b/src/backend/replication/logical/worker.c\r\n> index 43f09b7e9a..c771851d1f 100644\r\n> --- a/src/backend/replication/logical/worker.c\r\n> +++ b/src/backend/replication/logical/worker.c\r\n> @@ -1851,6 +1851,9 @@ apply_handle_stream_abort(StringInfo s)\r\n> parallel_apply_stream_abort(&abort_data);\r\n> \r\n> /*\r\n> + * We need to wait after processing rollback\r\n> to savepoint for the next set\r\n> + * of changes.\r\n> + *\r\n> * By the time parallel apply worker is\r\n> processing the changes in\r\n> * the current streaming block, the leader\r\n> apply worker may have\r\n> * sent multiple streaming blocks. So, try to\r\n> lock only if there\r\n\r\nMerged.\r\n\r\nAttach the new version patch set which addressed above comments and comments from [1].\r\n\r\nIn the new version patch, I renamed parallel_apply_xxx functions to pa_xxx to\r\nmake the name shorter according to the suggestion in [1]. 
Besides, I split the\r\ncodes related to partial serialize to 0002 patch to make the patch easier to\r\nreview.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1LGyQ%2BS-jCMnYSz_hvoqiNA0Of%3D%2BMksY%3DXTUaRc5XzXJQ%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 15 Nov 2022 11:57:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, November 15, 2022 7:58 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Saturday, November 12, 2022 7:06 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> >\r\n> > On Fri, Nov 11, 2022 at 2:12 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> >\r\n> > Few comments on v46-0001:\r\n> > ======================\r\n> >\r\n> \r\n> Thanks for the comments.\r\n> \r\n> > 1.\r\n> > +static void\r\n> > +apply_handle_stream_abort(StringInfo s)\r\n> > {\r\n> > ...\r\n> > + /* Send STREAM ABORT message to the parallel apply worker. */\r\n> > + parallel_apply_send_data(winfo, s->len, s->data);\r\n> > +\r\n> > + if (abort_toplevel_transaction)\r\n> > + {\r\n> > + parallel_apply_unlock_stream(xid, AccessExclusiveLock);\r\n> >\r\n> > Shouldn't we need to release this lock before sending the message as\r\n> > we are doing for streap_prepare and stream_commit? If there is a\r\n> > reason for doing it differently here then let's add some comments for\r\n> > the same.\r\n> \r\n> Changed.\r\n> \r\n> > 2. It seems once the patch makes the file state as busy\r\n> > (LEADER_FILESET_BUSY), it will only be accessible after the leader\r\n> > apply worker receives a transaction end message like stream_commit. Is\r\n> > my understanding correct? If yes, then why can't we make it accessible\r\n> > after the stream_stop message? 
Are you worried about the concurrency\r\n> > handling for reading and writing the file? If so, we can probably deal\r\n> > with it via some lock for reading and writing to file for each change.\r\n> > I think after this we may not need additional stream level lock/unlock\r\n> > in parallel_apply_spooled_messages. I understand that you probably\r\n> > want to keep the code simple so I am not suggesting changing it\r\n> > immediately but just wanted to know whether you have considered\r\n> > alternatives here.\r\n> \r\n> I thought about this, but it seems the current buffile design doesn't allow two\r\n> processes to open the same buffile at the same time(refer to the comment\r\n> atop of BufFileOpenFileSet()). This means the LA needs to make sure the PA has\r\n> closed the buffile before writing more changes into it. Although we could let\r\n> the LA wait for that, but it could cause another kind of deadlock. Suppose the\r\n> PA opened the file and is blocked when applying the just read change. And the\r\n> LA starts to wait when trying to write the next set of streaming changes into file\r\n> because the file is still opened by PA. Then the lock edge is like:\r\n> \r\n> LA (wait for file to be closed) -> PA1 (wait for unique lock in PA2) -> PA2 (wait\r\n> for stream lock held in LA)\r\n> \r\n> We could introduce another lock for this, but that seems not very great as we\r\n> already had two kinds of locks here.\r\n> \r\n> Another solution could be we create different filename for each streaming\r\n> block so that the leader don't need to reopen the same file after writing\r\n> changes into it, but that seems largely increase the number of temp files and\r\n> looks a bit hacky. 
Or we could let PA open the file, then read and close the file\r\n> for each change, but it seems bring some overhead of opening and closing file.\r\n> \r\n> Another solution which doesn't need a new lock could be that we create\r\n> different filename for each streaming block so that the leader doesn't need to\r\n> reopen the same file after writing changes into it, but that seems largely\r\n> increase the number of temp files and looks a bit hacky. Or we could let PA\r\n> open the file, then read and close the file for each change, but it seems bring\r\n> some overhead of opening and closing file.\r\n> \r\n> Based on above, how about keep the current approach ?(i.e. PA will open the\r\n> file only after the leader apply worker receives a transaction end message like\r\n> stream_commit). Ideally, it will enter partial serialize mode only when PA is\r\n> blocked by a backend or another PA which seems not that common.\r\n> \r\n> > 3. Don't we need to release the transaction lock at stream_abort in\r\n> > parallel apply worker? I understand that we are not waiting for it in\r\n> > the leader worker but still parallel apply worker should release it if\r\n> > acquired at stream_start by it.\r\n> \r\n> I thought that the lock will be automatically released on rollback. But after\r\n> testing, I find It’s possible that the lock won't be released if it's a empty\r\n> streaming transaction. So, I add the code to release the lock in the new version\r\n> patch.\r\n> \r\n> >\r\n> > 4. 
A minor comment change as below:\r\n> > diff --git a/src/backend/replication/logical/worker.c\r\n> > b/src/backend/replication/logical/worker.c\r\n> > index 43f09b7e9a..c771851d1f 100644\r\n> > --- a/src/backend/replication/logical/worker.c\r\n> > +++ b/src/backend/replication/logical/worker.c\r\n> > @@ -1851,6 +1851,9 @@ apply_handle_stream_abort(StringInfo s)\r\n> > parallel_apply_stream_abort(&abort_data);\r\n> >\r\n> > /*\r\n> > + * We need to wait after processing rollback\r\n> > to savepoint for the next set\r\n> > + * of changes.\r\n> > + *\r\n> > * By the time parallel apply worker is\r\n> > processing the changes in\r\n> > * the current streaming block, the leader\r\n> > apply worker may have\r\n> > * sent multiple streaming blocks. So, try to\r\n> > lock only if there\r\n> \r\n> Merged.\r\n> \r\n> Attach the new version patch set which addressed above comments and\r\n> comments from [1].\r\n> \r\n> In the new version patch, I renamed parallel_apply_xxx functions to pa_xxx to\r\n> make the name shorter according to the suggestion in [1]. Besides, I split the\r\n> codes related to partial serialize to 0002 patch to make the patch easier to\r\n> review.\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/CAA4eK1LGyQ%2BS-jCMnYSz_hvoq\r\n> iNA0Of%3D%2BMksY%3DXTUaRc5XzXJQ%40mail.gmail.com\r\n\r\nI noticed that I didn't add CHECK_FOR_INTERRUPTS while retrying send message.\r\nSo, attach the new version which adds that. 
Also attach the 0004 patch that\r\nrestarts logical replication with temporarily disabling the parallel apply if\r\nfailed to apply a transaction in parallel apply worker.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 16 Nov 2022 08:19:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:57 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, November 12, 2022 7:06 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Fri, Nov 11, 2022 at 2:12 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> >\n> > Few comments on v46-0001:\n> > ======================\n> >\n>\n> Thanks for the comments.\n>\n> > 1.\n> > +static void\n> > +apply_handle_stream_abort(StringInfo s)\n> > {\n> > ...\n> > + /* Send STREAM ABORT message to the parallel apply worker. */\n> > + parallel_apply_send_data(winfo, s->len, s->data);\n> > +\n> > + if (abort_toplevel_transaction)\n> > + {\n> > + parallel_apply_unlock_stream(xid, AccessExclusiveLock);\n> >\n> > Shouldn't we need to release this lock before sending the message as\n> > we are doing for streap_prepare and stream_commit? If there is a\n> > reason for doing it differently here then let's add some comments for\n> > the same.\n>\n> Changed.\n>\n> > 2. It seems once the patch makes the file state as busy\n> > (LEADER_FILESET_BUSY), it will only be accessible after the leader\n> > apply worker receives a transaction end message like stream_commit. Is\n> > my understanding correct? If yes, then why can't we make it accessible\n> > after the stream_stop message? Are you worried about the concurrency\n> > handling for reading and writing the file? 
If so, we can probably deal\n> > with it via some lock for reading and writing to file for each change.\n> > I think after this we may not need additional stream level lock/unlock\n> > in parallel_apply_spooled_messages. I understand that you probably\n> > want to keep the code simple so I am not suggesting changing it\n> > immediately but just wanted to know whether you have considered\n> > alternatives here.\n>\n> I thought about this, but it seems the current buffile design doesn't allow two\n> processes to open the same buffile at the same time(refer to the comment atop\n> of BufFileOpenFileSet()). This means the LA needs to make sure the PA has\n> closed the buffile before writing more changes into it. Although we could let\n> the LA wait for that, but it could cause another kind of deadlock. Suppose the\n> PA opened the file and is blocked when applying the just read change. And the\n> LA starts to wait when trying to write the next set of streaming changes into\n> file because the file is still opened by PA. Then the lock edge is like:\n>\n> LA (wait for file to be closed) -> PA1 (wait for unique lock in PA2) -> PA2\n> (wait for stream lock held in LA)\n>\n> We could introduce another lock for this, but that seems not very great as we\n> already had two kinds of locks here.\n>\n> Another solution could be we create different filename for each streaming block\n> so that the leader don't need to reopen the same file after writing changes\n> into it, but that seems largely increase the number of temp files and looks a\n> bit hacky. 
Or we could let PA open the file, then read and close the file for\n> each change, but it seems bring some overhead of opening and closing file.\n>\n> Another solution which doesn't need a new lock could be that we create\n> different filename for each streaming block so that the leader doesn't need to\n> reopen the same file after writing changes into it, but that seems largely\n> increase the number of temp files and looks a bit hacky. Or we could let PA\n> open the file, then read and close the file for each change, but it seems bring\n> some overhead of opening and closing file.\n>\n> Based on above, how about keep the current approach ?(i.e. PA\n> will open the file only after the leader apply worker receives a transaction\n> end message like stream_commit). Ideally, it will enter partial serialize mode\n> only when PA is blocked by a backend or another PA which seems not that common.\n\n+1. We can improve this area later in a separate patch.\n\nHere are review comments on v47-0001 and v47-0002 patches:\n\nWhen the parallel apply worker exited, I got the following server log.\nI think this log is not appropriate since the worker was not\nterminated by administrator command but exited by itself. 
Also,\nprobably it should exit with exit code 0?\n\nFATAL: terminating logical replication worker due to administrator command\nLOG: background worker \"logical replication parallel worker\" (PID\n3594918) exited with exit code 1\n\n---\n/*\n * Stop the worker if there are enough workers in the pool or the leader\n * apply worker serialized part of the transaction data to a file due to\n * send timeout.\n */\nif (winfo->serialize_changes ||\nnapplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n\nWhy do we need to stop the worker if the leader serializes changes?\n\n---\n+ /*\n+ * Release all session level locks that could be held in parallel apply\n+ * mode.\n+ */\n+ LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n+\n\nI think we call LockReleaseAll() at the process exit (in ProcKill()),\nbut do we really need to do LockReleaseAll() here too?\n\n---\n\n+ elog(ERROR, \"could not find replication state slot\nfor replication\"\n+ \"origin with OID %u which was acquired by\n%d\", node, acquired_by);\n\nLet's not break the error log message in the middle so that the user\ncan search the message by grep easily.\n\n---\n+ {\n+ {\"max_parallel_apply_workers_per_subscription\",\n+ PGC_SIGHUP,\n+ REPLICATION_SUBSCRIBERS,\n+ gettext_noop(\"Maximum number of parallel\napply workers per subscription.\"),\n+ NULL,\n+ },\n+ &max_parallel_apply_workers_per_subscription,\n+ 2, 0, MAX_BACKENDS,\n+ NULL, NULL, NULL\n+ },\n+\n\nI think we should use MAX_PARALLEL_WORKER_LIMIT as the max value\ninstead. MAX_BACKENDS is too high.\n\n---\n+ /*\n+ * Indicates whether there are pending messages in the queue.\nThe parallel\n+ * apply worker will check it before starting to wait.\n+ */\n+ pg_atomic_uint32 pending_message_count;\n\nThe \"pending messages\" sounds like individual logical replication\nmessages such as LOGICAL_REP_MSG_INSERT. 
But IIUC what this value\nactually shows is how many streamed chunks are pending to process,\nright?\n\n---\nThe streaming parameter has the new value \"parallel\" for \"streaming\"\noption to enable the parallel apply. It fits so far but I think the\nparallel apply feature doesn't necessarily need to be tied up with\nstreaming replication. For example, we might want to support parallel\napply also for non-streaming transactions in the future. It might be\nbetter to have another option, say \"parallel\", to control parallel\napply behavior. The \"parallel\" option can be a boolean option and\nsetting parallel = on requires streaming = on.\n\nAnother variant is to have a new subscription parameter for example\n\"parallel_workers\" parameter that specifies the number of parallel\nworkers. That way, users can specify the number of parallel workers\nper subscription.\n\n---\nWhen the parallel apply worker raises an error, I got the same error\ntwice from the leader worker and parallel worker as follows. 
Can we\nsuppress either one?\n\n2022-11-17 17:30:23.490 JST [3814552] LOG: logical replication\nparallel apply worker for subscription \"test_sub1\" has started\n2022-11-17 17:30:23.490 JST [3814552] ERROR: duplicate key value\nviolates unique constraint \"test1_c_idx\"\n2022-11-17 17:30:23.490 JST [3814552] DETAIL: Key (c)=(1) already exists.\n2022-11-17 17:30:23.490 JST [3814552] CONTEXT: processing remote data\nfor replication origin \"pg_16390\" during message type \"INSERT\" for\nreplication target relatio\nn \"public.test1\" in transaction 731\n2022-11-17 17:30:23.490 JST [3814550] ERROR: duplicate key value\nviolates unique constraint \"test1_c_idx\"\n2022-11-17 17:30:23.490 JST [3814550] DETAIL: Key (c)=(1) already exists.\n2022-11-17 17:30:23.490 JST [3814550] CONTEXT: processing remote data\nfor replication origin \"pg_16390\" during message type \"INSERT\" for\nreplication target relatio\nn \"public.test1\" in transaction 731\n parallel apply worker\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Nov 2022 09:35:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Nov 16, 2022 at 1:50 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, November 15, 2022 7:58 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>\n> I noticed that I didn't add CHECK_FOR_INTERRUPTS while retrying send message.\n> So, attach the new version which adds that. Also attach the 0004 patch that\n> restarts logical replication with temporarily disabling the parallel apply if\n> failed to apply a transaction in parallel apply worker.\n>\n\nFew comments on v48-0001\n======================\n1. 
The variable name pending_message_count seems to indicate a number\nof pending messages but normally it is pending start/stop streams\nexcept for probably rollback to savepoint case. Shall we name it\npending_stream_count and change the comments accordingly?\n\n2. The variable name abort_toplevel_transaction seems unnecessarily\nlong. Shall we rename it to toplevel_xact or something like that?\n\n3.\n+ /*\n+ * Increment the number of messages waiting to be processed by\n+ * parallel apply worker.\n+ */\n+ if (!abort_toplevel_transaction)\n+ pg_atomic_add_fetch_u32(&(winfo->shared->pending_message_count), 1);\n+ else\n+ pa_unlock_stream(xid, AccessExclusiveLock);\n\nIt is better to explain here why different actions are required for\nsubtransaction and transaction rather than the current comment.\n\n4.\n+\n+ if (abort_toplevel_transaction)\n+ {\n+ (void) pa_free_worker(winfo, xid);\n+ }\n\n{} is not required here.\n\n5.\n/*\n+ * Although the lock can be automatically released during transaction\n+ * rollback, but we still release the lock here as we may not in a\n+ * transaction.\n+ */\n+ pa_unlock_transaction(xid, AccessShareLock);\n+\n\nIt is better to explain for which case (I think it is for empty xacts)\nit will be useful to release it explicitly.\n\n6.\n+ *\n+ * XXX We can avoid sending pairs of the START/STOP messages to the parallel\n+ * worker because unlike apply worker it will process only one transaction at a\n+ * time. However, it is not clear whether any optimization is worthwhile\n+ * because these messages are sent only when the logical_decoding_work_mem\n+ * threshold is exceeded.\n */\n static void\n apply_handle_stream_start(StringInfo s)\n\nI think this comment is no longer valid as now we need to wait for the\nnext stream at stream_stop message and also need to acquire the lock\nin stream_start message. So, I think it is better to remove it unless\nI am missing something.\n\n7. 
I am able to compile applyparallelworker.c by commenting few of the\nheader includes. Please check if those are really required.\n#include \"libpq/pqformat.h\"\n#include \"libpq/pqmq.h\"\n//#include \"mb/pg_wchar.h\"\n#include \"pgstat.h\"\n#include \"postmaster/interrupt.h\"\n#include \"replication/logicallauncher.h\"\n//#include \"replication/logicalworker.h\"\n#include \"replication/origin.h\"\n//#include \"replication/walreceiver.h\"\n#include \"replication/worker_internal.h\"\n#include \"storage/ipc.h\"\n#include \"storage/lmgr.h\"\n//#include \"storage/procarray.h\"\n#include \"tcop/tcopprot.h\"\n#include \"utils/inval.h\"\n#include \"utils/memutils.h\"\n//#include \"utils/resowner.h\"\n#include \"utils/syscache.h\"\n\n8.\n+/*\n+ * Is there a message sent by parallel apply worker which we need to receive?\n+ */\n+volatile sig_atomic_t ParallelApplyMessagePending = false;\n\nThis comment and variable are placed in applyparallelworker.c, so 'we'\nin the above sentence is not clear. I think you need to use leader\napply worker instead.\n\n9.\n+static ParallelApplyWorkerInfo *pa_get_free_worker(void);\n\nWill it be better if we name this function pa_get_available_worker()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Nov 2022 07:56:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n...\n> ---\n> The streaming parameter has the new value \"parallel\" for \"streaming\"\n> option to enable the parallel apply. It fits so far but I think the\n> parallel apply feature doesn't necessarily need to be tied up with\n> streaming replication. For example, we might want to support parallel\n> apply also for non-streaming transactions in the future. 
It might be\n> better to have another option, say \"parallel\", to control parallel\n> apply behavior. The \"parallel\" option can be a boolean option and\n> setting parallel = on requires streaming = on.\n>\n\nFWIW, I tend to agree with this idea but for a different reason. In\nthis patch, the 'streaming' parameter had become a kind of hybrid\nboolean/enum. AFAIK there are no other parameters anywhere that use a\nhybrid pattern like this so I was thinking it may be better not to be\ndifferent.\n\nBut I didn't think that parallel_apply=on should *require*\nstreaming=on. It might be better for parallel_apply=on is just the\n*default*, but it simply achieves nothing unless streaming=on too.\nThat way users would not need to change anything at all to get the\nbenefits of parallel streaming.\n\n> Another variant is to have a new subscription parameter for example\n> \"parallel_workers\" parameter that specifies the number of parallel\n> workers. That way, users can specify the number of parallel workers\n> per subscription.\n>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 18 Nov 2022 13:30:43 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 8:01 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Nov 18, 2022 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> ...\n> > ---\n> > The streaming parameter has the new value \"parallel\" for \"streaming\"\n> > option to enable the parallel apply. It fits so far but I think the\n> > parallel apply feature doesn't necessarily need to be tied up with\n> > streaming replication. For example, we might want to support parallel\n> > apply also for non-streaming transactions in the future. It might be\n> > better to have another option, say \"parallel\", to control parallel\n> > apply behavior. 
The \"parallel\" option can be a boolean option and\n> > setting parallel = on requires streaming = on.\n> >\n\nIf we do that then how will the user be able to use streaming\nserialize mode (write to file for streaming transactions) as we have\nnow? Because after we introduce parallelism for non-streaming\ntransactions, the user would want parallel = on irrespective of the\nstreaming mode. Also, users may wish to only parallelize large\ntransactions because of additional overhead for non-streaming\ntransactions for transaction dependency tracking, etc. So, the user\nmay wish to have a separate knob for large transactions as the patch\nhas now.\n\n>\n> FWIW, I tend to agree with this idea but for a different reason. In\n> this patch, the 'streaming' parameter had become a kind of hybrid\n> boolean/enum. AFAIK there are no other parameters anywhere that use a\n> hybrid pattern like this so I was thinking it may be better not to be\n> different.\n>\n\nI think we have a similar pattern for GUC parameters like\nconstraint_exclusion (see constraint_exclusion_options),\nbackslash_quote (see backslash_quote_options), etc.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Nov 2022 10:17:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 18, 2022 at 8:01 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Nov 18, 2022 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > ...\n> > > ---\n> > > The streaming parameter has the new value \"parallel\" for \"streaming\"\n> > > option to enable the parallel apply. It fits so far but I think the\n> > > parallel apply feature doesn't necessarily need to be tied up with\n> > > streaming replication. 
For example, we might want to support parallel\n> > > apply also for non-streaming transactions in the future. It might be\n> > > better to have another option, say \"parallel\", to control parallel\n> > > apply behavior. The \"parallel\" option can be a boolean option and\n> > > setting parallel = on requires streaming = on.\n> > >\n>\n> If we do that then how will the user be able to use streaming\n> serialize mode (write to file for streaming transactions) as we have\n> now? Because after we introduce parallelism for non-streaming\n> transactions, the user would want parallel = on irrespective of the\n> streaming mode. Also, users may wish to only parallelize large\n> transactions because of additional overhead for non-streaming\n> transactions for transaction dependency tracking, etc. So, the user\n> may wish to have a separate knob for large transactions as the patch\n> has now.\n\nOne idea for that would be to make it enum. For example, setting\nparallel = \"streaming\" works for that.\n\n>\n> >\n> > FWIW, I tend to agree with this idea but for a different reason. In\n> > this patch, the 'streaming' parameter had become a kind of hybrid\n> > boolean/enum. AFAIK there are no other parameters anywhere that use a\n> > hybrid pattern like this so I was thinking it may be better not to be\n> > different.\n> >\n>\n> I think we have a similar pattern for GUC parameters like\n> constraint_exclusion (see constraint_exclusion_options),\n> backslash_quote (see backslash_quote_options), etc.\n\nRight. 
vacuum_index_cleanup and buffering storage parameters that\naccept 'on', 'off', or 'auto' are other examples.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Nov 2022 14:00:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 10:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Nov 18, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Nov 18, 2022 at 8:01 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Fri, Nov 18, 2022 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > ...\n> > > > ---\n> > > > The streaming parameter has the new value \"parallel\" for \"streaming\"\n> > > > option to enable the parallel apply. It fits so far but I think the\n> > > > parallel apply feature doesn't necessarily need to be tied up with\n> > > > streaming replication. For example, we might want to support parallel\n> > > > apply also for non-streaming transactions in the future. It might be\n> > > > better to have another option, say \"parallel\", to control parallel\n> > > > apply behavior. The \"parallel\" option can be a boolean option and\n> > > > setting parallel = on requires streaming = on.\n> > > >\n> >\n> > If we do that then how will the user be able to use streaming\n> > serialize mode (write to file for streaming transactions) as we have\n> > now? Because after we introduce parallelism for non-streaming\n> > transactions, the user would want parallel = on irrespective of the\n> > streaming mode. Also, users may wish to only parallelize large\n> > transactions because of additional overhead for non-streaming\n> > transactions for transaction dependency tracking, etc. 
So, the user\n> > may wish to have a separate knob for large transactions as the patch\n> > has now.\n>\n> One idea for that would be to make it enum. For example, setting\n> parallel = \"streaming\" works for that.\n>\n\nYeah, but then we will have two different parameters (parallel and\nstreaming) to control streaming behavior. This will be confusing, say,\nwhen the user sets parallel = 'streaming' and streaming = off; we would\nprobably need to disallow such settings, but I'm not sure that would be\nany better than allowing parallelism for large xacts via the streaming\nparameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Nov 2022 10:37:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v47-0001\n\n(This review is a WIP - I will post more comments for this patch next week)\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n1.\n\n\n+ * Copyright (c) 2022, PostgreSQL Global Development Group\n+ *\n+ * IDENTIFICATION src/backend/replication/logical/applyparallelworker.c\n+ *\n\nThis IDENTIFICATION should be on 2 lines like it previously was\ninstead of wrapped into one line. For consistency with all other file\nheaders.\n\n~~~\n\n2. File header comment\n\n+ * Since the database structure (schema of subscription tables, etc.) of\n+ * publisher and subscriber may be different.\n\nIncomplete sentence?\n\n~~~\n\n3.\n\n+ * When the following two scenarios occur, a deadlock occurs.\n\nActually, you described three scenarios in this comment. 
Not two.\n\nSUGGESTION\nThe following scenarios can cause a deadlock.\n\n~~~\n\n4.\n\n+ * LA (waiting to acquire the local transaction lock) -> PA1 (waiting to\n+ * acquire the lock on the unique index) -> PA2 (waiting to acquire the lock on\n+ * the remote transaction) -> LA\n\n\"PA1\" -> \"PA-1\"\n\"PA2\" -> \"PA-2\"\n\n~~~\n\n5.\n\n+ * To resolve this issue, we use non-blocking write and wait with a timeout. If\n+ * timeout is exceeded, the LA report an error and restart logical replication.\n\n\"report\" --> \"reports\"\n\"restart\" -> \"restarts\"\n\nOR\n\n\"LA report\" -> \"LA will report\"\n\n~~~\n\n6. pa_wait_for_xact_state\n\n+/*\n+ * Wait until the parallel apply worker's transaction state reach or exceed the\n+ * given xact_state.\n+ */\n+static void\n+pa_wait_for_xact_state(ParallelApplyWorkerShared *wshared,\n+ ParallelTransState xact_state)\n\n\"reach or exceed\" -> \"reaches or exceeds\"\n\n~~~\n\n7. pa_stream_abort\n\n+ /*\n+ * Although the lock can be automatically released during transaction\n+ * rollback, but we still release the lock here as we may not in a\n+ * transaction.\n+ */\n+ pa_unlock_transaction(xid, AccessShareLock);\n\n\"but we still\" -> \"we still\"\n\"we may not in a\" -> \"we may not be in a\"\n\n~~~\n\n8.\n\n+ pa_savepoint_name(MySubscription->oid, subxid, spname,\n+ sizeof(spname));\n+\n\nUnnecessary wrapping\n\n~~~\n\n9.\n\n+ for (i = list_length(subxactlist) - 1; i >= 0; i--)\n+ {\n+ TransactionId xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\n+\n+ if (xid_tmp == subxid)\n+ {\n+ found = true;\n+ break;\n+ }\n+ }\n+\n+ if (found)\n+ {\n+ RollbackToSavepoint(spname);\n+ CommitTransactionCommand();\n+ subxactlist = list_truncate(subxactlist, i + 1);\n+ }\n\nThis code logic does not seem to require the 'found' flag. You can do\nthe RollbackToSavepoint/CommitTransactionCommand/list_truncate before\nthe break.\n\n~~~\n\n10. 
pa_lock/unlock _stream/_transaction\n\n+/*\n+ * Helper functions to acquire and release a lock for each stream block.\n+ *\n+ * Set locktag_field4 to 0 to indicate that it's a stream lock.\n+ */\n\n+/*\n+ * Helper functions to acquire and release a lock for each local transaction.\n+ *\n+ * Set locktag_field4 to 1 to indicate that it's a transaction lock.\n\nShould constants/defines/enums replace those magic numbers 0 and 1?\n\n~~~\n\n11. pa_lock_transaction\n\n+ * Note that all the callers are passing remote transaction ID instead of local\n+ * transaction ID as xid. This is because the local transaction ID will only be\n+ * assigned while applying the first change in the parallel apply, but it's\n+ * possible that the first change in parallel apply worker is blocked by a\n+ * concurrently executing transaction in another parallel apply worker causing\n+ * the leader cannot get local transaction ID.\n\n\"causing the leader cannot\" -> \"which means the leader cannot\" (??)\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n12. 
TransApplyAction\n\n+/*\n+ * What action to take for the transaction.\n+ *\n+ * TRANS_LEADER_APPLY:\n+ * The action means that we are in the leader apply worker and changes of the\n+ * transaction are applied directly in the worker.\n+ *\n+ * TRANS_LEADER_SERIALIZE:\n+ * It means that we are in the leader apply worker or table sync worker.\n+ * Changes are written to temporary files and then applied when the final\n+ * commit arrives.\n+ *\n+ * TRANS_LEADER_SEND_TO_PARALLEL:\n+ * The action means that we are in the leader apply worker and need to send the\n+ * changes to the parallel apply worker.\n+ *\n+ * TRANS_PARALLEL_APPLY:\n+ * The action that we are in the parallel apply worker and changes of the\n+ * transaction are applied directly in the worker.\n+ */\n+typedef enum\n\n12a\nToo many various ways of saying the same thing:\n\n\"The action means that we...\"\n\"It means that we...\"\n\"The action that we...\" (typo?)\n\nPlease word all these comments consistently\n\n~\n\n12b.\n\"directly in the worker\" -> \"directly by the worker\" (??) 2x\n\n~~~\n\n13. 
get_worker_name\n\n+/*\n+ * Return the name of the logical replication worker.\n+ */\n+static const char *\n+get_worker_name(void)\n+{\n+ if (am_tablesync_worker())\n+ return _(\"logical replication table synchronization worker\");\n+ else if (am_parallel_apply_worker())\n+ return _(\"logical replication parallel apply worker\");\n+ else\n+ return _(\"logical replication apply worker\");\n+}\n\nThis function belongs nearer the top of the module (above all the\nerror messages that are using it).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 18 Nov 2022 18:03:20 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 7:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 1:50 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, November 15, 2022 7:58 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> >\n> > I noticed that I didn't add CHECK_FOR_INTERRUPTS while retrying send message.\n> > So, attach the new version which adds that. Also attach the 0004 patch that\n> > restarts logical replication with temporarily disabling the parallel apply if\n> > failed to apply a transaction in parallel apply worker.\n> >\n>\n> Few comments on v48-0001\n> ======================\n>\n\nI have made quite a few changes in the comments, added some new\ncomments, and made other cosmetic changes in the attached patch. This\nis atop v48-0001*. If these look okay to you, please include them in\nthe next version. 
Apart from these, I have a few more comments on\nv48-0001*\n\n1.\n+static bool\n+pa_can_start(TransactionId xid)\n+{\n+ if (!TransactionIdIsValid(xid))\n+ return false;\n\nThe caller (see caller of pa_start_worker) already has a check that\nxid passed here is valid, so I think this should be an Assert unless I\nam missing something in which case it is better to add a comment here.\n\n2. Will it be better to rename pa_start_worker() as\npa_allocate_worker() because it sometimes gets the worker from the\npool and also allocate the hash entry for worker info? That will even\nmatch the corresponding pa_free_worker().\n\n3.\n+pa_start_subtrans(TransactionId current_xid, TransactionId top_xid)\n{\n...\n+\n+ oldctx = MemoryContextSwitchTo(ApplyContext);\n+ subxactlist = lappend_xid(subxactlist, current_xid);\n+ MemoryContextSwitchTo(oldctx);\n...\n\nWhy do we need to allocate this list in a permanent context? IIUC, we\nneed to use this to maintain subxacts so that it can be later used to\nfind the given subxact at the time of rollback to savepoint in the\ncurrent in-progress transaction, so why do we need it beyond the\ntransaction being applied? If there is a reason for the same, it would\nbe better to add some comments for the same.\n\n4.\n+pa_stream_abort(LogicalRepStreamAbortData *abort_data)\n{\n...\n+\n+ for (i = list_length(subxactlist) - 1; i >= 0; i--)\n+ {\n+ TransactionId xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\n+\n+ if (xid_tmp == subxid)\n+ {\n+ found = true;\n+ break;\n+ }\n+ }\n+\n+ if (found)\n+ {\n+ RollbackToSavepoint(spname);\n+ CommitTransactionCommand();\n+ subxactlist = list_truncate(subxactlist, i + 1);\n+ }\n\nI was thinking whether we can have an Assert(false) for the not found\ncase but it seems if all the changes of a subxact have been skipped\nthen probably subxid corresponding to \"rollback to savepoint\" won't be\nfound in subxactlist and we don't need to do anything for it. 
If that\nis the case, then probably adding a comment for it would be a good\nidea, otherwise, we can probably have Assert(false) in the else case.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 19 Nov 2022 16:18:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Nov 18, 2022 at 6:03 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for v47-0001\n>\n> (This review is a WIP - I will post more comments for this patch next week)\n>\n\nHere are the rest of my comments for v47-0001\n\n======\n\ndoc/src/sgml/monitoring.\n\n1.\n\n@@ -1851,6 +1851,11 @@ postgres 27093 0.0 0.0 30096 2752 ?\n Ss 11:34 0:00 postgres: ser\n <entry>Waiting to acquire an advisory user lock.</entry>\n </row>\n <row>\n+ <entry><literal>applytransaction</literal></entry>\n+ <entry>Waiting to acquire acquire a lock on a remote transaction being\n+ applied on the subscriber side.</entry>\n+ </row>\n+ <row>\n\n1a.\nTypo \"acquire acquire\"\n\n~\n\n1b.\nMaybe \"on the subscriber side\" does not mean much without any context.\nMaybe better to word it as below.\n\nSUGGESTION\nWaiting to acquire a lock on a remote transaction being applied by a\nlogical replication subscriber.\n\n======\n\ndoc/src/sgml/system-views.sgml\n\n2.\n\n@@ -1361,8 +1361,9 @@\n <literal>virtualxid</literal>,\n <literal>spectoken</literal>,\n <literal>object</literal>,\n- <literal>userlock</literal>, or\n- <literal>advisory</literal>.\n+ <literal>userlock</literal>,\n+ <literal>advisory</literal> or\n+ <literal>applytransaction</literal>.\n\nThis change removed the Oxford comma that was there before. I assume\nit was unintended.\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n3. 
globals\n\nThe parallel_apply_XXX functions were all shortened to pa_XXX.\n\nI wondered if the same simplification should be done also to the\nglobal statics...\n\ne.g.\nParallelApplyWorkersHash -> PAWorkerHash\nParallelApplyWorkersList -> PAWorkerList\nParallelApplyMessagePending -> PAMessagePending\netc...\n\n~~~\n\n4. pa_get_free_worker\n\n+ foreach(lc, active_workers)\n+ {\n+ ParallelApplyWorkerInfo *winfo = NULL;\n+\n+ winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n\nNo need to assign NULL because the next line just overwrites that anyhow.\n\n~\n\n5.\n\n+ /*\n+ * Try to free the worker first, because we don't wait for the rollback\n+ * command to finish so the worker may not be freed at the end of the\n+ * transaction.\n+ */\n+ if (pa_free_worker(winfo, winfo->shared->xid))\n+ continue;\n+\n+ if (!winfo->in_use)\n+ return winfo;\n\nShouldn't the (!winfo->in_use) check be done first as well -- e.g. why\nare we trying to free a worker which is maybe not even in_use?\n\nSUGGESTION (this will need some comment to explain what it is doing)\nif (!winfo->in_use || !pa_free_worker(winfo, winfo->shared->xid) &&\n!winfo->in_use)\nreturn winfo;\n\n~~~\n\n6. pa_free_worker\n\n+/*\n+ * Remove the parallel apply worker entry from the hash table. Stop the work if\n+ * there are enough workers in the pool.\n+ *\n\nTypo? \"work\" -> \"worker\"\n\n~\n\n7.\n\n+ /* Are there enough workers in the pool? */\n+ if (napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n+ {\n\nIMO that comment should be something more like \"Don't detach/stop the\nworker unless...\"\n\n~~~\n\n8. pa_send_data\n\n+ /*\n+ * Retry after 1s to reduce the cost of getting the system time and\n+ * calculating the time difference.\n+ */\n+ (void) WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ 1000L,\n+ WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE);\n\n8a.\nI am not sure you need to explain the reason in the comment. 
Just\nsaying \"Wait before retrying.\" seems sufficient to me.\n\n~\n\n8b.\nInstead of the hardwired \"1s\" in the comment, and 1000L in the code,\nmaybe better to just have another constant.\n\nSUGGESTION\n#define SHM_SEND_RETRY_INTERVAL_MS 1000\n#define SHM_SEND_TIMEOUT_MS 10000\n\n~\n\n9.\n\n+ if (startTime == 0)\n+ startTime = GetCurrentTimestamp();\n+ else if (TimestampDifferenceExceeds(startTime, GetCurrentTimestamp(),\n\nIMO the initial startTime should be at top of the function otherwise\nthe timeout calculation seems wrong.\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n10. handle_streamed_transaction\n\n+ * In streaming case (receiving a block of streamed transaction), for\n+ * SUBSTREAM_ON mode, simply redirect it to a file for the proper toplevel\n+ * transaction, and for SUBSTREAM_PARALLEL mode, send the changes to parallel\n+ * apply workers (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE changes\n+ * will be applied by both leader apply worker and parallel apply workers).\n\nI'm not sure this function comment should be referring to SUBSTREAM_ON\nand SUBSTREAM_PARALLEL because the function body does not use those\nanywhere in the logic.\n\n~~~\n\n11. apply_handle_stream_start\n\n+ /*\n+ * Increment the number of messages waiting to be processed by\n+ * parallel apply worker.\n+ */\n+ pg_atomic_add_fetch_u32(&(winfo->shared->pending_message_count), 1);\n+\n\nThe &() parens are not needed. Just write &winfo->shared->pending_message_count.\n\nAlso, search/replace others like this -- there are a few of them.\n\n~~~\n\n12. 
apply_handle_stream_stop\n\n+ if (!abort_toplevel_transaction &&\n+ pg_atomic_sub_fetch_u32(&(MyParallelShared->pending_message_count), 1) == 0)\n+ {\n+ pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n+ }\n\nThat lock/unlock seems like it is done just as a way of\ntesting/waiting for an exclusive lock held on the xid to be released.\nBut the code is too tricky -- IMO it needs a big comment saying how\nthis trick works, or maybe better to have a wrapper function for this\nfor clarity. e.g. pa_wait_nolock_stream(xid); (or some better name)\n\n~~~\n\n13. apply_handle_stream_abort\n\n+ if (abort_toplevel_transaction)\n+ {\n+ (void) pa_free_worker(winfo, xid);\n+ }\n\nUnnecessary { }\n\n~~~\n\n14. maybe_reread_subscription\n\n@@ -3083,8 +3563,9 @@ maybe_reread_subscription(void)\n if (!newsub)\n {\n ereport(LOG,\n- (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\n- \"stop because the subscription was removed\",\n+ /* translator: first %s is the name of logical replication worker */\n+ (errmsg(\"%s for subscription \\\"%s\\\" will stop because the \"\n+ \"subscription was removed\", get_worker_name(),\n MySubscription->name)));\n\n proc_exit(0);\n@@ -3094,8 +3575,9 @@ maybe_reread_subscription(void)\n if (!newsub->enabled)\n {\n ereport(LOG,\n- (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\n- \"stop because the subscription was disabled\",\n+ /* translator: first %s is the name of logical replication worker */\n+ (errmsg(\"%s for subscription \\\"%s\\\" will stop because the \"\n+ \"subscription was disabled\", get_worker_name(),\n MySubscription->name)));\n\nIMO better to avoid splitting the string literals over multiple line like this.\n\nPlease check the rest of the patch too -- there may be many more just like this.\n\n~~~\n\n15. 
ApplyWorkerMain\n\n@@ -3726,7 +4236,7 @@ ApplyWorkerMain(Datum main_arg)\n }\n else\n {\n- /* This is main apply worker */\n+ /* This is leader apply worker */\n RepOriginId originid;\n\"This is leader\" -> \"This is the leader\"\n\n======\n\nsrc/bin/psql/describe.c\n\n16. describeSubscriptions\n\n+ if (pset.sversion >= 160000)\n+ appendPQExpBuffer(&buf,\n+ \", (CASE substream\\n\"\n+ \" WHEN 'f' THEN 'off'\\n\"\n+ \" WHEN 't' THEN 'on'\\n\"\n+ \" WHEN 'p' THEN 'parallel'\\n\"\n+ \" END) AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Streaming\"));\n+ else\n+ appendPQExpBuffer(&buf,\n+ \", substream AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Streaming\"));\n\nI'm not sure it is an improvement to change the output \"t/f/p\" to\n\"on/off/parallel\"\n\nIMO \"t/f/parallel\" would be better. Then the t/f is consistent with\n- how it used to display, and\n- all the other boolean fields\n\n======\n\nsrc/include/replication/worker_internal.h\n\n17. ParallelTransState\n\n+/*\n+ * State of the transaction in parallel apply worker.\n+ *\n+ * These enum values are ordered by the order of transaction state changes in\n+ * parallel apply worker.\n+ */\n+typedef enum ParallelTransState\n\n\"ordered by the order\" ??\n\nSUGGESTION\nThe enum values must have the same order as the transaction state transitions.\n\n======\n\nsrc/include/storage/lock.h\n\n18.\n\n@@ -149,10 +149,12 @@ typedef enum LockTagType\n LOCKTAG_SPECULATIVE_TOKEN, /* speculative insertion Xid and token */\n LOCKTAG_OBJECT, /* non-relation database object */\n LOCKTAG_USERLOCK, /* reserved for old contrib/userlock code */\n- LOCKTAG_ADVISORY /* advisory user locks */\n+ LOCKTAG_ADVISORY, /* advisory user locks */\n+ LOCKTAG_APPLY_TRANSACTION /* transaction being applied on the subscriber\n+ * side */\n } LockTagType;\n\n-#define LOCKTAG_LAST_TYPE LOCKTAG_ADVISORY\n+#define LOCKTAG_LAST_TYPE LOCKTAG_APPLY_TRANSACTION\n\n extern PGDLLIMPORT const char *const LockTagTypeNames[];\n\n@@ -278,6 +280,17 @@ typedef struct LOCKTAG\n 
(locktag).locktag_type = LOCKTAG_ADVISORY, \\\n (locktag).locktag_lockmethodid = USER_LOCKMETHOD)\n\n+/*\n+ * ID info for a remote transaction on the subscriber side is:\n+ * DB OID + SUBSCRIPTION OID + TRANSACTION ID + OBJID\n+ */\n+#define SET_LOCKTAG_APPLY_TRANSACTION(locktag,dboid,suboid,xid,objid) \\\n+ ((locktag).locktag_field1 = (dboid), \\\n+ (locktag).locktag_field2 = (suboid), \\\n+ (locktag).locktag_field3 = (xid), \\\n+ (locktag).locktag_field4 = (objid), \\\n+ (locktag).locktag_type = LOCKTAG_APPLY_TRANSACTION, \\\n+ (locktag).locktag_lockmethodid = DEFAULT_LOCKMETHOD)\n\nMaybe \"on the subscriber side\" (2 places above) has no meaning here\nbecause there is no context this is talking about logical replication.\nMaybe those comments need to say something more like \"on a logical\nreplication subscriber\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 21 Nov 2022 17:26:03 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, November 19, 2022 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Nov 18, 2022 at 7:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Nov 16, 2022 at 1:50 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tuesday, November 15, 2022 7:58 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > I noticed that I didn't add CHECK_FOR_INTERRUPTS while retrying send\r\n> message.\r\n> > > So, attach the new version which adds that. 
Also attach the 0004\r\n> > > patch that restarts logical replication with temporarily disabling\r\n> > > the parallel apply if failed to apply a transaction in parallel apply worker.\r\n> > >\r\n> >\r\n> > Few comments on v48-0001\r\n\r\nThanks for the comments!\r\n\r\n> > ======================\r\n> >\r\n> \r\n> I have made quite a few changes in the comments, added some new comments,\r\n> and made other cosmetic changes in the attached patch. This is atop v48-0001*.\r\n> If these look okay to you, please include them in the next version. Apart from\r\n> these, I have a few more comments on\r\n> v48-0001*\r\n\r\nThanks, I have checked and merged them.\r\n\r\n> 1.\r\n> +static bool\r\n> +pa_can_start(TransactionId xid)\r\n> +{\r\n> + if (!TransactionIdIsValid(xid))\r\n> + return false;\r\n> \r\n> The caller (see caller of pa_start_worker) already has a check that xid passed\r\n> here is valid, so I think this should be an Assert unless I am missing something in\r\n> which case it is better to add a comment here.\r\n\r\nChanged to an Assert().\r\n\r\n> 2. Will it be better to rename pa_start_worker() as\r\n> pa_allocate_worker() because it sometimes gets the worker from the pool and\r\n> also allocate the hash entry for worker info? That will even\r\n> match the corresponding pa_free_worker().\r\n\r\nAgreed and changed.\r\n\r\n> 3.\r\n> +pa_start_subtrans(TransactionId current_xid, TransactionId top_xid)\r\n> {\r\n> ...\r\n> +\r\n> + oldctx = MemoryContextSwitchTo(ApplyContext);\r\n> + subxactlist = lappend_xid(subxactlist, current_xid);\r\n> + MemoryContextSwitchTo(oldctx);\r\n> ...\r\n> \r\n> Why do we need to allocate this list in a permanent context? IIUC, we need to\r\n> use this to maintain subxacts so that it can be later used to find the given\r\n> subxact at the time of rollback to savepoint in the current in-progress\r\n> transaction, so why do we need it beyond the transaction being applied? If\r\n> 
If\r\n> there is a reason for the same, it would be better to add some comments for\r\n> the same.\r\n\r\nI think you are right, I changed to use TopTransactionContext here.\r\n\r\n> 4.\r\n> +pa_stream_abort(LogicalRepStreamAbortData *abort_data)\r\n> {\r\n> ...\r\n> +\r\n> + for (i = list_length(subxactlist) - 1; i >= 0; i--) { TransactionId\r\n> + xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\r\n> +\r\n> + if (xid_tmp == subxid)\r\n> + {\r\n> + found = true;\r\n> + break;\r\n> + }\r\n> + }\r\n> +\r\n> + if (found)\r\n> + {\r\n> + RollbackToSavepoint(spname);\r\n> + CommitTransactionCommand();\r\n> + subxactlist = list_truncate(subxactlist, i + 1); }\r\n> \r\n> I was thinking whether we can have an Assert(false) for the not found case but it\r\n> seems if all the changes of a subxact have been skipped then probably subxid\r\n> corresponding to \"rollback to savepoint\" won't be found in subxactlist and we\r\n> don't need to do anything for it. If that is the case, then probably adding a\r\n> comment for it would be a good idea, otherwise, we can probably have\r\n> Assert(false) in the else case.\r\n\r\nYes, we might not find the xid for an empty subtransaction. I added some comments\r\nhere for the same.\r\n\r\nApart from above, I also addressed the comments in [1] and fixed a bug that\r\nparallel worker exits silently while the leader cannot detect that. 
In the\r\nlatest patch, the parallel apply worker will send a notify('X') message to the\r\nleader so that the leader can detect the exit.\r\n\r\nHere is the new version patch.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1KWgReYbpwEMh1H1ohHoYirv4Aa%3D6v13MutCF9NvHTc5A%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 21 Nov 2022 12:34:02 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, November 18, 2022 8:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Here are review comments on v47-0001 and v47-0002 patches:\r\n\r\nThanks for the comments!\r\n\r\n> When the parallel apply worker exited, I got the following server log.\r\n> I think this log is not appropriate since the worker was not terminated by\r\n> administrator command but exited by itself. Also, probably it should exit with\r\n> exit code 0?\r\n> \r\n> FATAL: terminating logical replication worker due to administrator command\r\n> LOG: background worker \"logical replication parallel worker\" (PID\r\n> 3594918) exited with exit code 1\r\n\r\nChanged to report a LOG and exit with code 0.\r\n\r\n> ---\r\n> /*\r\n> * Stop the worker if there are enough workers in the pool or the leader\r\n> * apply worker serialized part of the transaction data to a file due to\r\n> * send timeout.\r\n> */\r\n> if (winfo->serialize_changes ||\r\n> napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n> \r\n> Why do we need to stop the worker if the leader serializes changes?\r\n\r\nBecause there might be a partially sent message left in the memory queue if a send timeout occurs.\r\nAnd we need to either re-send the same message until it succeeds or detach from the memory\r\nqueue. 
To make the logic simple, the patch directly stops the worker in this case.\r\n\r\n\r\n> ---\r\n> + /*\r\n> + * Release all session level locks that could be held in parallel apply\r\n> + * mode.\r\n> + */\r\n> + LockReleaseAll(DEFAULT_LOCKMETHOD, true);\r\n> +\r\n> \r\n> I think we call LockReleaseAll() at the process exit (in ProcKill()), but do we\r\n> really need to do LockReleaseAll() here too?\r\n\r\nIf we don't release locks before ProcKill, we might break an Assert check at\r\nthe beginning of ProcKill which is used to ensure all the locks are released.\r\nAnd it seems ProcKill doesn't release session level locks after the assert\r\ncheck. So I think we'd better release them here.\r\n\r\n> ---\r\n> \r\n> + elog(ERROR, \"could not find replication state slot\r\n> for replication\"\r\n> + \"origin with OID %u which was acquired by\r\n> %d\", node, acquired_by);\r\n> \r\n> Let's not break the error log message in the middle so that the user can search\r\n> the message by grep easily.\r\n\r\nChanged.\r\n\r\n> ---\r\n> + {\r\n> + {\"max_parallel_apply_workers_per_subscription\",\r\n> + PGC_SIGHUP,\r\n> + REPLICATION_SUBSCRIBERS,\r\n> + gettext_noop(\"Maximum number of parallel\r\n> apply workers per subscription.\"),\r\n> + NULL,\r\n> + },\r\n> + &max_parallel_apply_workers_per_subscription,\r\n> + 2, 0, MAX_BACKENDS,\r\n> + NULL, NULL, NULL\r\n> + },\r\n> +\r\n> \r\n> I think we should use MAX_PARALLEL_WORKER_LIMIT as the max value instead.\r\n> MAX_BACKENDS is too high.\r\n\r\nChanged.\r\n\r\n> ---\r\n> + /*\r\n> + * Indicates whether there are pending messages in the queue.\r\nThe parallel\r\n> + * apply worker will check it before starting to wait.\r\n> + */\r\n> + pg_atomic_uint32 pending_message_count;\r\n> \r\n> The \"pending messages\" sounds like individual logical replication messages\r\n> such as LOGICAL_REP_MSG_INSERT. 
But IIUC what this value actually shows is\r\n> how many streamed chunks are pending to process, right?\r\n\r\nYes, renamed this.\r\n\r\n> ---\r\n> When the parallel apply worker raises an error, I got the same error twice from\r\n> the leader worker and parallel worker as follows. Can we suppress either one?\r\n> \r\n> 2022-11-17 17:30:23.490 JST [3814552] LOG: logical replication parallel apply\r\n> worker for subscription \"test_sub1\" has started\r\n> 2022-11-17 17:30:23.490 JST [3814552] ERROR: duplicate key value violates\r\n> unique constraint \"test1_c_idx\"\r\n> 2022-11-17 17:30:23.490 JST [3814552] DETAIL: Key (c)=(1) already exists.\r\n> 2022-11-17 17:30:23.490 JST [3814552] CONTEXT: processing remote data for\r\n> replication origin \"pg_16390\" during message type \"INSERT\" for replication\r\n> target relatio n \"public.test1\" in transaction 731\r\n> 2022-11-17 17:30:23.490 JST [3814550] ERROR: duplicate key value violates\r\n> unique constraint \"test1_c_idx\"\r\n> 2022-11-17 17:30:23.490 JST [3814550] DETAIL: Key (c)=(1) already exists.\r\n> 2022-11-17 17:30:23.490 JST [3814550] CONTEXT: processing remote data for\r\n> replication origin \"pg_16390\" during message type \"INSERT\" for replication\r\n> target relatio n \"public.test1\" in transaction 731\r\n> parallel apply worker\r\n\r\nIt seems similar to the behavior of parallel query which will report the same\r\nerror twice. But I agree it might be better for the leader to report something\r\ndifferent. 
So, I changed the error message reported by leader in the new\r\nversion patch.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Mon, 21 Nov 2022 12:34:35 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 21, 2022 2:26 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Fri, Nov 18, 2022 at 6:03 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > Here are some review comments for v47-0001\r\n> >\r\n> > (This review is a WIP - I will post more comments for this patch next\r\n> > week)\r\n> >\r\n> \r\n> Here are the rest of my comments for v47-0001\r\n\r\nThanks for the comments!\r\n\r\n> ======\r\n> \r\n> doc/src/sgml/monitoring.\r\n> \r\n> 1.\r\n> \r\n> @@ -1851,6 +1851,11 @@ postgres 27093 0.0 0.0 30096 2752 ?\r\n> Ss 11:34 0:00 postgres: ser\r\n> <entry>Waiting to acquire an advisory user lock.</entry>\r\n> </row>\r\n> <row>\r\n> + <entry><literal>applytransaction</literal></entry>\r\n> + <entry>Waiting to acquire acquire a lock on a remote transaction being\r\n> + applied on the subscriber side.</entry>\r\n> + </row>\r\n> + <row>\r\n> \r\n> 1a.\r\n> Typo \"acquire acquire\"\r\n\r\nFixed.\r\n\r\n> ~\r\n> \r\n> 1b.\r\n> Maybe \"on the subscriber side\" does not mean much without any context.\r\n> Maybe better to word it as below.\r\n> \r\n> SUGGESTION\r\n> Waiting to acquire a lock on a remote transaction being applied by a logical\r\n> replication subscriber.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> doc/src/sgml/system-views.sgml\r\n> \r\n> 2.\r\n> \r\n> @@ -1361,8 +1361,9 @@\r\n> <literal>virtualxid</literal>,\r\n> <literal>spectoken</literal>,\r\n> <literal>object</literal>,\r\n> - <literal>userlock</literal>, or\r\n> - <literal>advisory</literal>.\r\n> + <literal>userlock</literal>,\r\n> + <literal>advisory</literal> or\r\n> + 
<literal>applytransaction</literal>.\r\n> \r\n> This change removed the Oxford comma that was there before. I assume it was\r\n> unintended.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 3. globals\r\n> \r\n> The parallel_apply_XXX functions were all shortened to pa_XXX.\r\n> \r\n> I wondered if the same simplification should be done also to the global\r\n> statics...\r\n> \r\n> e.g.\r\n> ParallelApplyWorkersHash -> PAWorkerHash ParallelApplyWorkersList ->\r\n> PAWorkerList ParallelApplyMessagePending -> PAMessagePending etc...\r\n\r\nI personally feel these names look fine to me.\r\n\r\n> ~~~\r\n> \r\n> 4. pa_get_free_worker\r\n> \r\n> + foreach(lc, active_workers)\r\n> + {\r\n> + ParallelApplyWorkerInfo *winfo = NULL;\r\n> +\r\n> + winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\r\n\r\n> No need to assign NULL because the next line just overwrites that anyhow.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 5.\r\n> \r\n> + /*\r\n> + * Try to free the worker first, because we don't wait for the rollback\r\n> + * command to finish so the worker may not be freed at the end of the\r\n> + * transaction.\r\n> + */\r\n> + if (pa_free_worker(winfo, winfo->shared->xid)) continue;\r\n> +\r\n> + if (!winfo->in_use)\r\n> + return winfo;\r\n> \r\n> Shouldn't the (!winfo->in_use) check be done first as well -- e.g. why are we\r\n> trying to free a worker which is maybe not even in_use?\r\n> \r\n> SUGGESTION (this will need some comment to explain what it is doing) if\r\n> (!winfo->in_use || !pa_free_worker(winfo, winfo->shared->xid) &&\r\n> !winfo->in_use)\r\n> return winfo;\r\n\r\nSince pa_free_worker will check the in_use flag as well and\r\nthe current style looks clean to me, I didn't change this.\r\n\r\nBut it seems we need to first call pa_free_worker for every worker and then\r\nchoose a free one, otherwise a stopped worker's info (shared memory or ...)\r\nmight be left for a long time. 
I will think about this and try to fix it in the\r\nnext version.\r\n\r\n> ~~~\r\n> \r\n> 6. pa_free_worker\r\n> \r\n> +/*\r\n> + * Remove the parallel apply worker entry from the hash table. Stop the\r\n> +work if\r\n> + * there are enough workers in the pool.\r\n> + *\r\n> \r\n> Typo? \"work\" -> \"worker\"\r\n> \r\n\r\nFixed.\r\n\r\n> \r\n> 7.\r\n> \r\n> + /* Are there enough workers in the pool? */ if (napplyworkers >\r\n> + (max_parallel_apply_workers_per_subscription / 2)) {\r\n> \r\n> IMO that comment should be something more like \"Don't detach/stop the\r\n> worker unless...\"\r\n> \r\n\r\nImproved.\r\n\r\n> \r\n> 8. pa_send_data\r\n> \r\n> + /*\r\n> + * Retry after 1s to reduce the cost of getting the system time and\r\n> + * calculating the time difference.\r\n> + */\r\n> + (void) WaitLatch(MyLatch,\r\n> + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, 1000L,\r\n> + WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE);\r\n> \r\n> 8a.\r\n> I am not sure you need to explain the reason in the comment. Just saying \"Wait\r\n> before retrying.\" seems sufficient to me.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 8b.\r\n> Instead of the hardwired \"1s\" in the comment, and 1000L in the code, maybe\r\n> better to just have another constant.\r\n> \r\n> SUGGESTION\r\n> #define SHM_SEND_RETRY_INTERVAL_MS 1000\r\n> #define SHM_SEND_TIMEOUT_MS 10000\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 9.\r\n> \r\n> + if (startTime == 0)\r\n> + startTime = GetCurrentTimestamp();\r\n> + else if (TimestampDifferenceExceeds(startTime, GetCurrentTimestamp(),\r\n> \r\n> IMO the initial startTime should be at top of the function otherwise the timeout\r\n> calculation seems wrong.\r\n\r\nSetting startTime at the beginning will bring unnecessary cost if we don't need to retry.\r\nAnd start counting from the first failure looks fine to me.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 10. 
handle_streamed_transaction\r\n> \r\n> + * In streaming case (receiving a block of streamed transaction), for\r\n> + * SUBSTREAM_ON mode, simply redirect it to a file for the proper\r\n> + toplevel\r\n> + * transaction, and for SUBSTREAM_PARALLEL mode, send the changes to\r\n> + parallel\r\n> + * apply workers (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE\r\n> + changes\r\n> + * will be applied by both leader apply worker and parallel apply workers).\r\n> \r\n> I'm not sure this function comment should be referring to SUBSTREAM_ON\r\n> and SUBSTREAM_PARALLEL because the function body does not use those\r\n> anywhere in the logic.\r\n\r\nImproved.\r\n\r\n> ~~~\r\n> \r\n> 11. apply_handle_stream_start\r\n> \r\n> + /*\r\n> + * Increment the number of messages waiting to be processed by\r\n> + * parallel apply worker.\r\n> + */\r\n> + pg_atomic_add_fetch_u32(&(winfo->shared->pending_message_count), 1);\r\n> +\r\n> \r\n> The &() parens are not needed. Just write\r\n> &winfo->shared->pending_message_count.\r\n> \r\n> Also, search/replace others like this -- there are a few of them.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 12. apply_handle_stream_stop\r\n> \r\n> + if (!abort_toplevel_transaction &&\r\n> + pg_atomic_sub_fetch_u32(&(MyParallelShared->pending_message_count),\r\n> 1)\r\n> + == 0) { pa_lock_stream(MyParallelShared->xid, AccessShareLock);\r\n> + pa_unlock_stream(MyParallelShared->xid, AccessShareLock); }\r\n> \r\n> That lock/unlock seems like it is done just as a way of testing/waiting for an\r\n> exclusive lock held on the xid to be released.\r\n> But the code is too tricky -- IMO it needs a big comment saying how this trick\r\n> works, or maybe better to have a wrapper function for this for clarity. 
e.g.\r\n> pa_wait_nolock_stream(xid); (or some better name)\r\n\r\nI think the comments atop applyparallelworker.c explained the usage of\r\nstream/transaction lock.\r\n\r\n```\r\n...\r\n* In order for lmgr to detect this, we have LA acquire a session lock on the\r\n * remote transaction (by pa_lock_stream()) and have PA wait on the lock before\r\n * trying to receive messages. In other words, LA acquires the lock before\r\n * sending STREAM_STOP and releases it if already acquired before sending\r\n * STREAM_START, STREAM_ABORT(for toplevel transaction), STREAM_PREPARE and\r\n * STREAM_COMMIT. For PA, it always needs to acquire the lock after processing\r\n * STREAM_STOP and then release immediately after acquiring it. That way, when\r\n * PA is waiting for LA, we can have a wait-edge from PA to LA in lmgr, which\r\n * will make a deadlock in lmgr like:\r\n...\r\n```\r\n\r\n> ~~~\r\n> \r\n> 13. apply_handle_stream_abort\r\n> \r\n> + if (abort_toplevel_transaction)\r\n> + {\r\n> + (void) pa_free_worker(winfo, xid);\r\n> + }\r\n> \r\n> Unnecessary { }\r\n\r\nRemoved.\r\n\r\n> ~~~\r\n> \r\n> 14. 
maybe_reread_subscription\r\n> \r\n> @@ -3083,8 +3563,9 @@ maybe_reread_subscription(void)\r\n> if (!newsub)\r\n> {\r\n> ereport(LOG,\r\n> - (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\r\n> - \"stop because the subscription was removed\",\r\n> + /* translator: first %s is the name of logical replication worker */\r\n> + (errmsg(\"%s for subscription \\\"%s\\\" will stop because the \"\r\n> + \"subscription was removed\", get_worker_name(),\r\n> MySubscription->name)));\r\n> \r\n> proc_exit(0);\r\n> @@ -3094,8 +3575,9 @@ maybe_reread_subscription(void)\r\n> if (!newsub->enabled)\r\n> {\r\n> ereport(LOG,\r\n> - (errmsg(\"logical replication apply worker for subscription \\\"%s\\\" will \"\r\n> - \"stop because the subscription was disabled\",\r\n> + /* translator: first %s is the name of logical replication worker */\r\n> + (errmsg(\"%s for subscription \\\"%s\\\" will stop because the \"\r\n> + \"subscription was disabled\", get_worker_name(),\r\n> MySubscription->name)));\r\n> \r\n> IMO better to avoid splitting the string literals over multiple line like this.\r\n> \r\n> Please check the rest of the patch too -- there may be many more just like this.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 15. ApplyWorkerMain\r\n> \r\n> @@ -3726,7 +4236,7 @@ ApplyWorkerMain(Datum main_arg)\r\n> }\r\n> else\r\n> {\r\n> - /* This is main apply worker */\r\n> + /* This is leader apply worker */\r\n> RepOriginId originid;\r\n> \"This is leader\" -> \"This is the leader\"\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/bin/psql/describe.c\r\n> \r\n> 16. 
describeSubscriptions\r\n> \r\n> + if (pset.sversion >= 160000)\r\n> + appendPQExpBuffer(&buf,\r\n> + \", (CASE substream\\n\"\r\n> + \" WHEN 'f' THEN 'off'\\n\"\r\n> + \" WHEN 't' THEN 'on'\\n\"\r\n> + \" WHEN 'p' THEN 'parallel'\\n\"\r\n> + \" END) AS \\\"%s\\\"\\n\",\r\n> + gettext_noop(\"Streaming\"));\r\n> + else\r\n> + appendPQExpBuffer(&buf,\r\n> + \", substream AS \\\"%s\\\"\\n\",\r\n> + gettext_noop(\"Streaming\"));\r\n> \r\n> I'm not sure it is an improvement to change the output \"t/f/p\" to\r\n> \"on/off/parallel\"\r\n> \r\n> IMO \"t/f/parallel\" would be better. Then the t/f is consistent with\r\n> - how it used to display, and\r\n> - all the other boolean fields\r\n\r\nI think the current style is consistent with the \"Synchronous commit\" parameter which\r\nalso shows \"on/off/remote_apply/...\", so I didn't change this.\r\n\r\nName | ... | Synchronous commit\r\n------+-----+-------------------\r\nsub | ... | on \r\n\r\n> ======\r\n> \r\n> src/include/replication/worker_internal.h\r\n> \r\n> 17. 
ParallelTransState\r\n> \r\n> +/*\r\n> + * State of the transaction in parallel apply worker.\r\n> + *\r\n> + * These enum values are ordered by the order of transaction state\r\n> +changes in\r\n> + * parallel apply worker.\r\n> + */\r\n> +typedef enum ParallelTransState\r\n> \r\n> \"ordered by the order\" ??\r\n> \r\n> SUGGESTION\r\n> The enum values must have the same order as the transaction state transitions.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/include/storage/lock.h\r\n> \r\n> 18.\r\n> \r\n> @@ -149,10 +149,12 @@ typedef enum LockTagType\r\n> LOCKTAG_SPECULATIVE_TOKEN, /* speculative insertion Xid and token */\r\n> LOCKTAG_OBJECT, /* non-relation database object */\r\n> LOCKTAG_USERLOCK, /* reserved for old contrib/userlock code */\r\n> - LOCKTAG_ADVISORY /* advisory user locks */\r\n> + LOCKTAG_ADVISORY, /* advisory user locks */\r\n> LOCKTAG_APPLY_TRANSACTION\r\n> + /* transaction being applied on the subscriber\r\n> + * side */\r\n> } LockTagType;\r\n> \r\n> -#define LOCKTAG_LAST_TYPE LOCKTAG_ADVISORY\r\n> +#define LOCKTAG_LAST_TYPE LOCKTAG_APPLY_TRANSACTION\r\n> \r\n> extern PGDLLIMPORT const char *const LockTagTypeNames[];\r\n> \r\n> @@ -278,6 +280,17 @@ typedef struct LOCKTAG\r\n> (locktag).locktag_type = LOCKTAG_ADVISORY, \\\r\n> (locktag).locktag_lockmethodid = USER_LOCKMETHOD)\r\n> \r\n> +/*\r\n> + * ID info for a remote transaction on the subscriber side is:\r\n> + * DB OID + SUBSCRIPTION OID + TRANSACTION ID + OBJID */ #define\r\n> +SET_LOCKTAG_APPLY_TRANSACTION(locktag,dboid,suboid,xid,objid) \\\r\n> + ((locktag).locktag_field1 = (dboid), \\\r\n> + (locktag).locktag_field2 = (suboid), \\\r\n> + (locktag).locktag_field3 = (xid), \\\r\n> + (locktag).locktag_field4 = (objid), \\\r\n> + (locktag).locktag_type = LOCKTAG_APPLY_TRANSACTION, \\\r\n> +(locktag).locktag_lockmethodid = DEFAULT_LOCKMETHOD)\r\n> \r\n> Maybe \"on the subscriber side\" (2 places above) has no meaning here because\r\n> there is no context this is talking about logical 
replication.\r\n> Maybe those comments need to say something more like \"on a logical\r\n> replication subscriber\"\r\n> \r\nChanged.\r\n\r\nI also addressed all the comments from [1]\r\n\r\n[1] https://www.postgresql.org/message-id/CAHut%2BPs7TzqqDnuH8r_ct1W_zSBCnuo3wodMt4Y8_Gw7rSRAaw%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Mon, 21 Nov 2022 12:36:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, November 21, 2022 8:34 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Saturday, November 19, 2022 6:49 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Nov 18, 2022 at 7:56 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Nov 16, 2022 at 1:50 PM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Tuesday, November 15, 2022 7:58 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > I noticed that I didn't add CHECK_FOR_INTERRUPTS while retrying\r\n> > > > send\r\n> > message.\r\n> > > > So, attach the new version which adds that. Also attach the 0004\r\n> > > > patch that restarts logical replication with temporarily disabling\r\n> > > > the parallel apply if failed to apply a transaction in parallel apply worker.\r\n> > > >\r\n> > >\r\n> > > Few comments on v48-0001\r\n> \r\n> Thanks for the comments !\r\n> \r\n> > > ======================\r\n> > >\r\n> >\r\n> > I have made quite a few changes in the comments, added some new\r\n> > comments, and made other cosmetic changes in the attached patch. 
The is\r\n> atop v48-0001*.\r\n> > If these look okay to you, please include them in the next version.\r\n> > Apart from these, I have a few more comments on\r\n> > v48-0001*\r\n> \r\n> Thanks, I have checked and merge them.\r\n> \r\n> > 1.\r\n> > +static bool\r\n> > +pa_can_start(TransactionId xid)\r\n> > +{\r\n> > + if (!TransactionIdIsValid(xid))\r\n> > + return false;\r\n> >\r\n> > The caller (see caller of pa_start_worker) already has a check that\r\n> > xid passed here is valid, so I think this should be an Assert unless I\r\n> > am missing something in which case it is better to add a comment here.\r\n> \r\n> Changed to an Assert().\r\n> \r\n> > 2. Will it be better to rename pa_start_worker() as\r\n> > pa_allocate_worker() because it sometimes gets the worker from the\r\n> > pool and also allocate the hash entry for worker info? That will even\r\n> > match the corresponding pa_free_worker().\r\n> \r\n> Agreed and changed.\r\n> \r\n> > 3.\r\n> > +pa_start_subtrans(TransactionId current_xid, TransactionId top_xid)\r\n> > {\r\n> > ...\r\n> > +\r\n> > + oldctx = MemoryContextSwitchTo(ApplyContext);\r\n> > + subxactlist = lappend_xid(subxactlist, current_xid);\r\n> > + MemoryContextSwitchTo(oldctx);\r\n> > ...\r\n> >\r\n> > Why do we need to allocate this list in a permanent context? IIUC, we\r\n> > need to use this to maintain subxacts so that it can be later used to\r\n> > find the given subxact at the time of rollback to savepoint in the\r\n> > current in-progress transaction, so why do we need it beyond the\r\n> > transaction being applied? 
If there is a reason for the same, it would\r\n> > be better to add some comments for the same.\r\n> \r\n> I think you are right, I changed to use TopTransactionContext here.\r\n> \r\n> > 4.\r\n> > +pa_stream_abort(LogicalRepStreamAbortData *abort_data)\r\n> > {\r\n> > ...\r\n> > +\r\n> > + for (i = list_length(subxactlist) - 1; i >= 0; i--) { TransactionId\r\n> > + xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\r\n> > +\r\n> > + if (xid_tmp == subxid)\r\n> > + {\r\n> > + found = true;\r\n> > + break;\r\n> > + }\r\n> > + }\r\n> > +\r\n> > + if (found)\r\n> > + {\r\n> > + RollbackToSavepoint(spname);\r\n> > + CommitTransactionCommand();\r\n> > + subxactlist = list_truncate(subxactlist, i + 1); }\r\n> >\r\n> > I was thinking whether we can have an Assert(false) for the not found\r\n> > case but it seems if all the changes of a subxact have been skipped\r\n> > then probably subxid corresponding to \"rollback to savepoint\" won't be\r\n> > found in subxactlist and we don't need to do anything for it. If that\r\n> > is the case, then probably adding a comment for it would be a good\r\n> > idea, otherwise, we can probably have\r\n> > Assert(false) in the else case.\r\n> \r\n> Yes, we might not find the xid for an empty subtransaction. I added some\r\n> comments here for the same.\r\n> \r\n> Apart from above, I also addressed the comments in [1] and fixed a bug that\r\n> parallel worker exits silently while the leader cannot detect that. 
In the latest\r\n> patch, the parallel apply worker will send a notify('X') message to leader so that\r\n> leader can detect the exit.\r\n> \r\n> Here is the new version patch.\r\n\r\nI noticed that I missed a header file causing CFbot to complain.\r\nAttached a new version patch set which fixes that.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 22 Nov 2022 02:00:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Thanks for addressing my review comments on v47-0001.\n\nHere are my review comments for v49-0001.\n\n======\n\nsrc/backend/replication/logical/applyparallelworker.c\n\n1. GENERAL - NULL checks\n\nThere is inconsistent NULL checking in the patch.\n\nSometimes it is like (!winfo)\nSometimes explicit NULL checks like (winfo->mq_handle != NULL)\n\n(That is just one example -- there are differences in many places)\n\nIt would be better to use a consistent style everywhere.\n\n~\n\n2. GENERAL - Error message worker name\n\n2a.\nIn worker.c all the messages are now \"logical replication apply\nworker\" or \"logical replication parallel apply worker\" etc, but in the\napplyparallel.c sometimes the \"logical replication\" part is missing.\nIMO all the messages in this patch should be consistently worded.\n\nI've reported some of them in the following comment below, but please\nsearch the whole patch for any I might have missed.\n\n2b.\nConsider if maybe all of these ought to be calling get_worker_name()\nwhich is currently static in worker.c. Doing this means any future\nchanges to get_worker_name won't cause more inconsistencies.\n\n~~~\n\n3. 
File header comment\n\n+ * IDENTIFICATION src/backend/replication/logical/applyparallelworker.c\n\nThe word \"IDENTIFICATION\" should be on a separate line (for\nconsistency with every other PG source file)\n\n~\n\n4.\n\n+ * In order for lmgr to detect this, we have LA acquire a session lock on the\n+ * remote transaction (by pa_lock_stream()) and have PA wait on the lock before\n+ * trying to receive messages. In other words, LA acquires the lock before\n+ * sending STREAM_STOP and releases it if already acquired before sending\n+ * STREAM_START, STREAM_ABORT(for toplevel transaction), STREAM_PREPARE and\n+ * STREAM_COMMIT. For PA, it always needs to acquire the lock after processing\n+ * STREAM_STOP and STREAM_ABORT(for subtransaction) and then release\n+ * immediately after acquiring it. That way, when PA is waiting for LA, we can\n+ * have a wait-edge from PA to LA in lmgr, which will make a deadlock in lmgr\n+ * like:\n\nMissing spaces before '(' deliberate?\n\n~~~\n\n5. globals\n\n+/*\n+ * Is there a message sent by parallel apply worker which the leader apply\n+ * worker need to receive?\n+ */\n+volatile sig_atomic_t ParallelApplyMessagePending = false;\n\nSUGGESTION\nIs there a message sent by a parallel apply worker that the leader\napply worker needs to receive?\n\n~~~\n\n6. pa_get_available_worker\n\n+/*\n+ * get an available parallel apply worker from the worker pool.\n+ */\n+static ParallelApplyWorkerInfo *\n+pa_get_available_worker(void)\n\nUppercase comment\n\n~\n\n7.\n\n+ /*\n+ * We first try to free the worker to improve our chances of getting\n+ * the worker. 
Normally, we free the worker after ensuring that the\n+ * transaction is committed by parallel worker but for rollbacks, we\n+ * don't wait for the transaction to finish so can't free the worker\n+ * information immediately.\n+ */\n\n7a.\n\"We first try to free the worker to improve our chances of getting the worker.\"\n\nSUGGESTION\nWe first try to free the worker to improve our chances of finding one\nthat is not in use.\n\n~\n\n7b.\n\"parallel worker\" -> \"the parallel worker\"\n\n~~~\n\n8. pa_allocate_worker\n\n+ /* Try to get a free parallel apply worker. */\n+ winfo = pa_get_available_worker();\n+\n\nSUGGESTION\nFirst, try to get a parallel apply worker from the pool.\n\n~~~\n\n9. pa_free_worker\n\n+ * This removes the parallel apply worker entry from the hash table so that it\n+ * can't be used. This either stops the worker and free the corresponding info,\n+ * if there are enough workers in the pool or just marks it available for\n+ * reuse.\n\nBEFORE\nThis either stops the worker and free the corresponding info, if there\nare enough workers in the pool or just marks it available for reuse.\n\nSUGGESTION\nIf there are enough workers in the pool it stops the worker and frees\nthe corresponding info, otherwise it just marks the worker as\navailable for reuse.\n\n~\n\n10.\n\n+ /* Free the corresponding info if the worker exited cleanly. */\n+ if (winfo->error_mq_handle == NULL)\n+ {\n+ pa_free_worker_info(winfo);\n+ return true;\n+ }\n\nIs it correct that this bypasses the removal from the hash table?\n\n~\n\n11.\n\n+\n+ /* Worker is already available for reuse. */\n+ if (!winfo->in_use)\n+ return false;\n\nShould this quick-exit check for in_use come first?\n\n~~\n\n12. 
HandleParallelApplyMessage\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"parallel apply worker exited abnormally\"),\n+ errcontext(\"%s\", edata.context)));\n\nMaybe \"parallel apply worker\" -> \"logical replication parallel apply\nworker\" (for consistency with the other error messages)\n\n~\n\n13.\n\n\n+ default:\n+ elog(ERROR, \"unrecognized message type received from parallel apply\nworker: %c (message length %d bytes)\",\n+ msgtype, msg->len);\n+ }\n\nditto #12 above.\n\n~\n\n14.\n\n+ case 'X': /* Terminate, indicating clean exit. */\n+ {\n+ shm_mq_detach(winfo->error_mq_handle);\n+ winfo->error_mq_handle = NULL;\n+ break;\n+ }\n+ default:\n\n\nNo need for the { } here.\n\n~~~\n\n15. HandleParallelApplyMessage\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"lost connection to the parallel apply worker\")));\n+ }\n\n\"parallel apply worker\" -> \"logical replication parallel apply worker\"\n\n~~~\n\n16. pa_init_and_launch_worker\n\n+ /* Setup shared memory. */\n+ if (!pa_setup_dsm(winfo))\n+ {\n+ MemoryContextSwitchTo(oldcontext);\n+ pfree(winfo);\n+ return NULL;\n+ }\n\n\nWouldn't it be better to do the pfree before switching back to the oldcontext?\n\n~~~\n\n17. pa_send_data\n\n+ /* Wait before retrying. 
*/\n+ rc = WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ SHM_SEND_RETRY_INTERVAL_MS,\n+ WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE);\n+\n+ if (rc & WL_LATCH_SET)\n+ {\n+ ResetLatch(MyLatch);\n+ CHECK_FOR_INTERRUPTS();\n+ }\n\n\nInstead of CHECK_FOR_INTERRUPTS, should this be calling your new\nfunction ProcessParallelApplyInterrupts?\n\n~\n\n18.\n\n+ if (startTime == 0)\n+ startTime = GetCurrentTimestamp();\n+ else if (TimestampDifferenceExceeds(startTime, GetCurrentTimestamp(),\n+ SHM_SEND_TIMEOUT_MS))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONNECTION_FAILURE),\n+ errmsg(\"terminating logical replication parallel apply worker due to\ntimeout\")));\n\n\nI'd previously commented that the timeout calculation seemed wrong.\nHou-san replied [1,#9] \"start counting from the first failure looks\nfine to me.\" but I am not so sure - e.g. If the timeout is 10s then I\nexpect it to fail ~10s after the function is called, not 11s after. I\nknow it's pedantic, but where's the harm in making the calculation\nright instead of just nearly right?\n\nIMO probably an easy fix for this is like:\n\n#define SHM_SEND_RETRY_INTERVAL_MS 1000\n#define SHM_SEND_TIMEOUT_MS (10000 - SHM_SEND_RETRY_INTERVAL_MS)\n\n~~~\n\n19. pa_wait_for_xact_state\n\n+ /* An interrupt may have occurred while we were waiting. */\n+ CHECK_FOR_INTERRUPTS();\n\nInstead of CHECK_FOR_INTERRUPTS, should this be calling your new\nfunction ProcessParallelApplyInterrupts?\n\n~~~\n\n20. pa_savepoint_name\n\n+static void\n+pa_savepoint_name(Oid suboid, TransactionId xid, char *spname,\n+ Size szsp)\n\nUnnecessary wrapping?\n\n======\n\nsrc/backend/replication/logical/origin.c\n\n21. replorigin_session_setup\n\n+ * However, we do allow multiple processes to point to the same origin slot\n+ * if requested by the caller by passing PID of the process that has already\n+ * acquired it. 
This is to allow using the same origin by multiple parallel\n+ * apply processes the provided they maintain commit order, for example, by\n+ * allowing only one process to commit at a time.\n\n21a.\nI thought the comment should mention this is optional and the special\nvalue acquired_by=0 means don't do this.\n\n~\n\n21b.\n\"the provided they\" ?? typo?\n\n======\n\nsrc/backend/replication/logical/tablesync.c\n\n22. process_syncing_tables\n\n process_syncing_tables(XLogRecPtr current_lsn)\n {\n+ /*\n+ * Skip for parallel apply workers as they don't operate on tables that\n+ * are not in ready state. See pa_can_start() and\n+ * should_apply_changes_for_rel().\n+ */\n+ if (am_parallel_apply_worker())\n+ return;\n\nSUGGESTION (remove the double negative)\nSkip for parallel apply workers because they only operate on tables\nthat are in a READY state. See pa_can_start() and\nshould_apply_changes_for_rel().\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n23. apply_handle_stream_stop\n\n\nPreviously I suggested that this lock/unlock seems too tricky and\nneeded a comment. The reply [1,#12] was that this is already described\natop parallelapplyworker.c. OK, but in that case maybe here the\ncomment can just refer to that explanation:\n\nSUGGESTION\nRefer to the comments atop applyparallelworker.c for what this lock\nand immediate unlock is doing.\n\n~~~\n\n24. apply_handle_stream_abort\n\n+ if (pg_atomic_sub_fetch_u32(&(MyParallelShared->pending_stream_count),\n1) == 0)\n+ {\n+ pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n+ }\n\nditto comment #23\n\n~~~\n\n25. apply_worker_clean_exit\n\n+void\n+apply_worker_clean_exit(void)\n+{\n+ /* Notify the leader apply worker that we have exited cleanly. 
*/\n+ if (am_parallel_apply_worker())\n+ pq_putmessage('X', NULL, 0);\n+\n+ proc_exit(0);\n+}\n\nSomehow it doesn't seem right that the PA worker sending 'X' is here\nin worker.c, while the LA worker receipt of this 'X' is in the other\napplyparallelworker.c module. Maybe that other function\nHandleParallelApplyMessage should also be here in worker.c?\n\n======\n\nsrc/backend/utils/misc/guc_tables.c\n\n26.\n\n@@ -2957,6 +2957,18 @@ struct config_int ConfigureNamesInt[] =\n NULL,\n },\n &max_sync_workers_per_subscription,\n+ 2, 0, MAX_PARALLEL_WORKER_LIMIT,\n+ NULL, NULL, NULL\n+ },\n+\n+ {\n+ {\"max_parallel_apply_workers_per_subscription\",\n+ PGC_SIGHUP,\n+ REPLICATION_SUBSCRIBERS,\n+ gettext_noop(\"Maximum number of parallel apply workers per subscription.\"),\n+ NULL,\n+ },\n+ &max_parallel_apply_workers_per_subscription,\n 2, 0, MAX_BACKENDS,\n NULL, NULL, NULL\n\nIs this correct? Did you mean to change\nmax_sync_workers_per_subscription, My 1st impression is that there has\nbeen some mixup with the MAX_PARALLEL_WORKER_LIMIT and MAX_BACKENDS or\nthat this change was accidentally made to the wrong GUC.\n\n======\n\nsrc/include/replication/worker_internal.h\n\n27. ParallelApplyWorkerShared\n\n+ /*\n+ * Indicates whether there are pending streaming blocks in the queue. The\n+ * parallel apply worker will check it before starting to wait.\n+ */\n+ pg_atomic_uint32 pending_stream_count;\n\nA better name might be 'n_pending_stream_blocks'.\n\n~\n\n28. 
function names\n\n extern void logicalrep_worker_stop(Oid subid, Oid relid);\n+extern void logicalrep_parallel_apply_worker_stop(int slot_no, uint16\ngeneration);\n extern void logicalrep_worker_wakeup(Oid subid, Oid relid);\n extern void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);\n\n extern int logicalrep_sync_worker_count(Oid subid);\n+extern int logicalrep_parallel_apply_worker_count(Oid subid);\n\nWould it be better to call those new functions using similar shorter\nnames as done elsewhere?\n\nlogicalrep_parallel_apply_worker_stop -> logicalrep_pa_worker_stop\nlogicalrep_parallel_apply_worker_count -> logicalrep_pa_worker_count\n\n------\n[1] Hou-san's reply to my review v47-0001.\nhttps://www.postgresql.org/message-id/OS0PR01MB571680391393F3CB63469F3E940A9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 22 Nov 2022 16:19:45 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tues, November 22, 2022 13:20 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Thanks for addressing my review comments on v47-0001.\r\n> \r\n> Here are my review comments for v49-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/applyparallelworker.c\r\n> \r\n> 1. GENERAL - NULL checks\r\n> \r\n> There is inconsistent NULL checking in the patch.\r\n> \r\n> Sometimes it is like (!winfo)\r\n> Sometimes explicit NULL checks like (winfo->mq_handle != NULL)\r\n> \r\n> (That is just one example -- there are differences in many places)\r\n> \r\n> It would be better to use a consistent style everywhere.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 2. 
GENERAL - Error message worker name\r\n> \r\n> 2a.\r\n> In worker.c all the messages are now \"logical replication apply \r\n> worker\" or \"logical replication parallel apply worker\" etc, but in the \r\n> applyparallel.c sometimes the \"logical replication\" part is missing.\r\n> IMO all the messages in this patch should be consistently worded.\r\n> \r\n> I've reported some of them in the following comment below, but please \r\n> search the whole patch for any I might have missed.\r\n\r\nRenamed LA and PA to the following styles:\r\n```\r\nLA -> logical replication apply worker\r\nPA -> logical replication parallel apply worker\r\n```\r\n\r\n> 2b.\r\n> Consider if maybe all of these ought to be calling get_worker_name() \r\n> which is currently static in worker.c. Doing this means any future \r\n> changes to get_worker_name won't cause more inconsistencies.\r\n\r\nMost error messages in applyparallelxx.c can only use \"xx parallel worker\",\r\nso I think it's fine not to call get_worker_name.\r\n\r\n> ~~~\r\n> \r\n> 3. File header comment\r\n> \r\n> + * IDENTIFICATION \r\n> + src/backend/replication/logical/applyparallelworker.c\r\n> \r\n> The word \"IDENTIFICATION\" should be on a separate line (for \r\n> consistency with every other PG source file)\r\n\r\nFixed.\r\n\r\n> ~\r\n> \r\n> 4.\r\n> \r\n> + * In order for lmgr to detect this, we have LA acquire a session \r\n> + lock on the\r\n> + * remote transaction (by pa_lock_stream()) and have PA wait on the \r\n> + lock\r\nbefore\r\n> + * trying to receive messages. In other words, LA acquires the lock \r\n> + before\r\n> + * sending STREAM_STOP and releases it if already acquired before \r\n> + sending\r\n> + * STREAM_START, STREAM_ABORT(for toplevel transaction),\r\nSTREAM_PREPARE and\r\n> + * STREAM_COMMIT. For PA, it always needs to acquire the lock after\r\nprocessing\r\n> + * STREAM_STOP and STREAM_ABORT(for subtransaction) and then release\r\n> + * immediately after acquiring it. 
That way, when PA is waiting for \r\n> + LA, we can\r\n> + * have a wait-edge from PA to LA in lmgr, which will make a deadlock \r\n> + in lmgr\r\n> + * like:\r\n> \r\n> Missing spaces before '(' deliberate?\r\n\r\nAdded.\r\n\r\n> ~~~\r\n> \r\n> 5. globals\r\n> \r\n> +/*\r\n> + * Is there a message sent by parallel apply worker which the leader \r\n> +apply\r\n> + * worker need to receive?\r\n> + */\r\n> +volatile sig_atomic_t ParallelApplyMessagePending = false;\r\n> \r\n> SUGGESTION\r\n> Is there a message sent by a parallel apply worker that the leader \r\n> apply worker needs to receive?\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 6. pa_get_available_worker\r\n> \r\n> +/*\r\n> + * get an available parallel apply worker from the worker pool.\r\n> + */\r\n> +static ParallelApplyWorkerInfo *\r\n> +pa_get_available_worker(void)\r\n> \r\n> Uppercase comment\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 7.\r\n> \r\n> + /*\r\n> + * We first try to free the worker to improve our chances of getting\r\n> + * the worker. Normally, we free the worker after ensuring that the\r\n> + * transaction is committed by parallel worker but for rollbacks, we\r\n> + * don't wait for the transaction to finish so can't free the worker\r\n> + * information immediately.\r\n> + */\r\n> \r\n> 7a.\r\n> \"We first try to free the worker to improve our chances of getting the worker.\"\r\n> \r\n> SUGGESTION\r\n> We first try to free the worker to improve our chances of finding one \r\n> that is not in use.\r\n> \r\n> ~\r\n> \r\n> 7b.\r\n> \"parallel worker\" -> \"the parallel worker\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 8. pa_allocate_worker\r\n> \r\n> + /* Try to get a free parallel apply worker. */ winfo = \r\n> + pa_get_available_worker();\r\n> +\r\n> \r\n> SUGGESTION\r\n> First, try to get a parallel apply worker from the pool.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 9. 
pa_free_worker\r\n> \r\n> + * This removes the parallel apply worker entry from the hash table \r\n> + so that it\r\n> + * can't be used. This either stops the worker and free the \r\n> + corresponding info,\r\n> + * if there are enough workers in the pool or just marks it available \r\n> + for\r\n> + * reuse.\r\n> \r\n> BEFORE\r\n> This either stops the worker and free the corresponding info, if there \r\n> are enough workers in the pool or just marks it available for reuse.\r\n> \r\n> SUGGESTION\r\n> If there are enough workers in the pool it stops the worker and frees \r\n> the corresponding info, otherwise it just marks the worker as \r\n> available for reuse.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 10.\r\n> \r\n> + /* Free the corresponding info if the worker exited cleanly. */ if \r\n> + (winfo->error_mq_handle == NULL) { pa_free_worker_info(winfo); \r\n> + return true; }\r\n> \r\n> Is it correct that this bypasses the removal from the hash table?\r\n\r\nI rethought about this; it seems unnecessary to free the information here as\r\nwe don't expect the worker to stop unless the leader asks it to stop.\r\nSo, I temporarily removed this and will think about it in the next version.\r\n\r\n> ~\r\n> \r\n> 14.\r\n> \r\n> + case 'X': /* Terminate, indicating clean exit. */ { \r\n> + shm_mq_detach(winfo->error_mq_handle);\r\n> + winfo->error_mq_handle = NULL;\r\n> + break;\r\n> + }\r\n> + default:\r\n> \r\n> \r\n> No need for the { } here.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 16. pa_init_and_launch_worker\r\n> \r\n> + /* Setup shared memory. */\r\n> + if (!pa_setup_dsm(winfo))\r\n> + {\r\n> + MemoryContextSwitchTo(oldcontext);\r\n> + pfree(winfo);\r\n> + return NULL;\r\n> + }\r\n> \r\n> \r\n> Wouldn't it be better to do the pfree before switching back to the oldcontext?\r\n\r\nI think either style seems fine.\r\n\r\n> ~~~\r\n> \r\n> 17. pa_send_data\r\n> \r\n> + /* Wait before retrying. 
*/\r\n> + rc = WaitLatch(MyLatch,\r\n> + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\r\n> + SHM_SEND_RETRY_INTERVAL_MS,\r\n> + WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE);\r\n> +\r\n> + if (rc & WL_LATCH_SET)\r\n> + {\r\n> + ResetLatch(MyLatch);\r\n> + CHECK_FOR_INTERRUPTS();\r\n> + }\r\n> \r\n> \r\n> Instead of CHECK_FOR_INTERRUPTS, should this be calling your new \r\n> function ProcessParallelApplyInterrupts?\r\n\r\nI thought the ProcessParallelApplyInterrupts is intended to be invoked only in main\r\nloop(LogicalParallelApplyLoop) to make the parallel apply worker exit cleanly.\r\n\r\n> ~\r\n> \r\n> 18.\r\n> \r\n> + if (startTime == 0)\r\n> + startTime = GetCurrentTimestamp();\r\n> + else if (TimestampDifferenceExceeds(startTime, \r\n> + GetCurrentTimestamp(),\r\n> + SHM_SEND_TIMEOUT_MS))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_CONNECTION_FAILURE),\r\n> + errmsg(\"terminating logical replication parallel apply worker due to\r\n> timeout\")));\r\n> \r\n> \r\n> I'd previously commented that the timeout calculation seemed wrong.\r\n> Hou-san replied [1,#9] \"start counting from the first failure looks \r\n> fine to me.\" but I am not so sure - e.g. If the timeout is 10s then I \r\n> expect it to fail ~10s after the function is called, not 11s after. I \r\n> know it's pedantic, but where's the harm in making the calculation \r\n> right instead of just nearly right?\r\n> \r\n> IMO probably an easy fix for this is like:\r\n> \r\n> #define SHM_SEND_RETRY_INTERVAL_MS 1000 #define SHM_SEND_TIMEOUT_MS \r\n> (10000 - SHM_SEND_RETRY_INTERVAL_MS)\r\n\r\nOK, I moved the place of setting startTime before the WaitLatch.\r\n\r\n> ~~~\r\n> \r\n> 20. pa_savepoint_name\r\n> \r\n> +static void\r\n> +pa_savepoint_name(Oid suboid, TransactionId xid, char *spname,\r\n> + Size szsp)\r\n> \r\n> Unnecessary wrapping?\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/origin.c\r\n> \r\n> 21. 
replorigin_session_setup\r\n> \r\n> + * However, we do allow multiple processes to point to the same \r\n> + origin slot\r\n> + * if requested by the caller by passing PID of the process that has \r\n> + already\r\n> + * acquired it. This is to allow using the same origin by multiple \r\n> + parallel\r\n> + * apply processes the provided they maintain commit order, for \r\n> + example, by\r\n> + * allowing only one process to commit at a time.\r\n> \r\n> 21a.\r\n> I thought the comment should mention this is optional and the special \r\n> value acquired_by=0 means don't do this.\r\n\r\nAdded.\r\n\r\n> ~\r\n> \r\n> 21b.\r\n> \"the provided they\" ?? typo?\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/tablesync.c\r\n> \r\n> 22. process_syncing_tables\r\n> \r\n> process_syncing_tables(XLogRecPtr current_lsn) {\r\n> + /*\r\n> + * Skip for parallel apply workers as they don't operate on tables \r\n> + that\r\n> + * are not in ready state. See pa_can_start() and\r\n> + * should_apply_changes_for_rel().\r\n> + */\r\n> + if (am_parallel_apply_worker())\r\n> + return;\r\n> \r\n> SUGGESTION (remove the double negative) Skip for parallel apply \r\n> workers because they only operate on tables that are in a READY state. \r\n> See pa_can_start() and should_apply_changes_for_rel().\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 23. apply_handle_stream_stop\r\n> \r\n> \r\n> Previously I suggested that this lock/unlock seems too tricky and \r\n> needed a comment. The reply [1,#12] was that this is already described \r\n> atop parallelapplyworker.c. OK, but in that case maybe here the \r\n> comment can just refer to that explanation:\r\n> \r\n> SUGGESTION\r\n> Refer to the comments atop applyparallelworker.c for what this lock \r\n> and immediate unlock is doing.\r\n> \r\n> ~~~\r\n> \r\n> 24. 
apply_handle_stream_abort\r\n> \r\n> + if \r\n> + (pg_atomic_sub_fetch_u32(&(MyParallelShared->pending_stream_count),\r\n> 1) == 0)\r\n> + {\r\n> + pa_lock_stream(MyParallelShared->xid, AccessShareLock); \r\n> + pa_unlock_stream(MyParallelShared->xid, AccessShareLock); }\r\n> \r\n> ditto comment #23\r\n\r\nI feel the place atop the definition of pa_lock_xxx function is a better place to\r\nput the comments, so added there. User can check it when reading the lock\r\nfunctions.\r\n\r\n> ~~~\r\n> \r\n> 25. apply_worker_clean_exit\r\n> \r\n> +void\r\n> +apply_worker_clean_exit(void)\r\n> +{\r\n> + /* Notify the leader apply worker that we have exited cleanly. */ \r\n> +if (am_parallel_apply_worker()) pq_putmessage('X', NULL, 0);\r\n> +\r\n> + proc_exit(0);\r\n> +}\r\n> \r\n> Somehow it doesn't seem right that the PA worker sending 'X' is here \r\n> in worker.c, while the LA worker receipt of this 'X' is in the other \r\n> applyparallelworker.c module. Maybe that other function \r\n> HandleParallelApplyMessage should also be here in worker.c?\r\n\r\nI thought the function apply_worker_clean_exit is widely used in worker.c and\r\nis a common function for both leader/parallel apply workers, so I put it in\r\nworker.c. 
But HandleParallelApplyMessage is a function only for parallel\r\nworker, so it would be better to put it in applyparallelworker.c.\r\n\r\n> ======\r\n> \r\n> src/backend/utils/misc/guc_tables.c\r\n> \r\n> 26.\r\n> \r\n> @@ -2957,6 +2957,18 @@ struct config_int ConfigureNamesInt[] =\r\n> NULL,\r\n> },\r\n> &max_sync_workers_per_subscription,\r\n> + 2, 0, MAX_PARALLEL_WORKER_LIMIT,\r\n> + NULL, NULL, NULL\r\n> + },\r\n> +\r\n> + {\r\n> + {\"max_parallel_apply_workers_per_subscription\",\r\n> + PGC_SIGHUP,\r\n> + REPLICATION_SUBSCRIBERS,\r\n> + gettext_noop(\"Maximum number of parallel apply workers per \r\n> + subscription.\"), NULL, }, \r\n> + &max_parallel_apply_workers_per_subscription,\r\n> 2, 0, MAX_BACKENDS,\r\n> NULL, NULL, NULL\r\n> \r\n> Is this correct? Did you mean to change \r\n> max_sync_workers_per_subscription, My 1st impression is that there has \r\n> been some mixup with the MAX_PARALLEL_WORKER_LIMIT and MAX_BACKENDS or \r\n> that this change was accidentally made to the wrong GUC.\r\n\r\nFixed.\r\n\r\n> ======\r\n> \r\n> src/include/replication/worker_internal.h\r\n> \r\n> 27. ParallelApplyWorkerShared\r\n> \r\n> + /*\r\n> + * Indicates whether there are pending streaming blocks in the queue. \r\n> + The\r\n> + * parallel apply worker will check it before starting to wait.\r\n> + */\r\n> + pg_atomic_uint32 pending_stream_count;\r\n> \r\n> A better name might be 'n_pending_stream_blocks'.\r\n\r\nI am not sure if the name looks better, so didn’t change this.\r\n\r\n> ~\r\n> \r\n> 28. 
function names\r\n> \r\n> extern void logicalrep_worker_stop(Oid subid, Oid relid);\r\n> +extern void logicalrep_parallel_apply_worker_stop(int slot_no, uint16\r\n> generation);\r\n> extern void logicalrep_worker_wakeup(Oid subid, Oid relid); extern \r\n> void logicalrep_worker_wakeup_ptr(LogicalRepWorker *worker);\r\n> \r\n> extern int logicalrep_sync_worker_count(Oid subid);\r\n> +extern int logicalrep_parallel_apply_worker_count(Oid subid);\r\n> \r\n> Would it be better to call those new functions using similar shorter \r\n> names as done elsewhere?\r\n> \r\n> logicalrep_parallel_apply_worker_stop -> logicalrep_pa_worker_stop \r\n> logicalrep_parallel_apply_worker_count -> logicalrep_pa_worker_count\r\n\r\nChanged.\r\n\r\nAttached is a new version patch which also fixes an invalid shared memory access bug\r\nin the 0002 patch, reported by Kuroda-San offlist. \r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 22 Nov 2022 12:42:24 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for updating the patch!\r\nI tested whether a deadlock caused by a foreign key constraint could be\r\ndetected, and it worked well.\r\n\r\nThe following are my review comments. They are basically related to 0001, but\r\nsome may not be. It takes time to understand 0002 correctly...\r\n\r\n01. typedefs.list\r\n\r\nLeaderFileSetState should be added to typedefs.list.\r\n\r\n\r\n02. 032_streaming_parallel_apply.pl\r\n\r\nAs I said in [1]: the test name may not match. Do you have reasons to\r\nrevert the change?\r\n\r\n\r\n03. 032_streaming_parallel_apply.pl\r\n\r\nThe test does not cover the case where a backend process is involved in the\r\ndeadlock. IIUC this is another motivation to use a stream/transaction lock.\r\nI think it should be added.\r\n\r\n04. 
log output\r\n\r\nWhile spooled changes are being applied by PA, there are many messages like\r\n\"replayed %d changes from file...\" and \"applied %u changes...\". They come from\r\napply_handle_stream_stop() and apply_spooled_messages(). They have the same meaning,\r\nso I think one of them can be removed.\r\n\r\n05. system_views.sql\r\n\r\nIn the previous version you modified the pg_stat_subscription system view. Why did you revert that?\r\n\r\n06. interrupt.c - SignalHandlerForShutdownRequest()\r\n\r\nIn the comment atop SignalHandlerForShutdownRequest(), the processes that install\r\nthis handler for signals other than SIGTERM are listed. We may be able to add the parallel apply worker.\r\n\r\n07. proto.c - logicalrep_write_stream_abort()\r\n\r\nWe may be able to add assertions for abort_lsn and abort_time, like xid and subxid.\r\n\r\n\r\n08. guc_tables.c - ConfigureNamesInt\r\n\r\n```\r\n &max_sync_workers_per_subscription,\r\n+ 2, 0, MAX_PARALLEL_WORKER_LIMIT,\r\n+ NULL, NULL, NULL\r\n+ },\r\n```\r\n\r\nThe upper limit for max_sync_workers_per_subscription seems to be wrong; it should\r\nbe used for max_parallel_apply_workers_per_subscription.\r\n\r\n\r\n10. worker.c - maybe_reread_subscription()\r\n\r\n\r\n```\r\n+ if (am_parallel_apply_worker())\r\n+ ereport(LOG,\r\n+ /* translator: first %s is the name of logical replication worker */\r\n+ (errmsg(\"%s for subscription \\\"%s\\\" will stop because of a parameter change\",\r\n+ get_worker_name(), MySubscription->name)));\r\n```\r\n\r\nI was not sure whether get_worker_name() is needed. I think \"logical replication apply worker\"\r\nshould be embedded.\r\n\r\n\r\n11. 
worker.c - ApplyWorkerMain()\r\n\r\n```\r\n+ (errmsg_internal(\"%s for subscription \\\"%s\\\" two_phase is %s\",\r\n+ get_worker_name(),\r\n```\r\n\r\nThe message for translator is needed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58666A97D40AB8919D106AD5F5709%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 22 Nov 2022 13:53:04 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 22, 2022 at 7:23 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n>\n> 07. proto.c - logicalrep_write_stream_abort()\n>\n> We may able to add assertions for abort_lsn and abort_time, like xid and subxid.\n>\n\nIf you see logicalrep_write_stream_commit(), we have an assertion for\nxid but not for LSN and other parameters. I think the current coding\nin the patch is consistent with that.\n\n>\n> 08. 
guc_tables.c - ConfigureNamesInt\n>\n> ```\n> &max_sync_workers_per_subscription,\n> + 2, 0, MAX_PARALLEL_WORKER_LIMIT,\n> + NULL, NULL, NULL\n> + },\n> ```\n>\n> The upper limit for max_sync_workers_per_subscription seems to be wrong, it should\n> be used for max_parallel_apply_workers_per_subscription.\n>\n\nRight, I don't know why this needs to be changed in the first place.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Nov 2022 15:55:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 22, 2022 at 7:30 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nFew minor comments and questions:\n============================\n1.\n+static void\n+LogicalParallelApplyLoop(shm_mq_handle *mqh)\n{\n+ for (;;)\n+ {\n+ void *data;\n+ Size len;\n+\n+ ProcessParallelApplyInterrupts();\n...\n...\n+ if (rc & WL_LATCH_SET)\n+ {\n+ ResetLatch(MyLatch);\n+ ProcessParallelApplyInterrupts();\n+ }\n...\n}\n\nWhy ProcessParallelApplyInterrupts() is called twice in\nLogicalParallelApplyLoop()?\n\n2.\n+ * This scenario is similar to the first case but TX-1 and TX-2 are executed by\n+ * two parallel apply workers (PA-1 and PA-2 respectively). In this scenario,\n+ * PA-2 is waiting for PA-1 to complete its transaction while PA-1 is waiting\n+ * for subsequent input from LA. Also, LA is waiting for PA-2 to complete its\n+ * transaction in order to preserve the commit order. 
There is a deadlock among\n+ * three processes.\n+ *\n...\n...\n+ *\n+ * LA (waiting to acquire the local transaction lock) -> PA-1 (waiting to\n+ * acquire the lock on the unique index) -> PA-2 (waiting to acquire the lock\n+ * on the remote transaction) -> LA\n+ *\n\nIsn't the order of PA-1 and PA-2 different in the second paragraph as\ncompared to the first one.\n\n3.\n+ * Deadlock-detection\n+ * ------------------\n\nIt may be better to keep the title of this section as Locking Considerations.\n\n4. In the section mentioned in Point 3, it would be better to\nseparately explain why we need session-level locks instead of\ntransaction level.\n\n5. Add the below comments in the code:\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 9385afb6d2..56f00defcf 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -431,6 +431,9 @@ pa_free_worker_info(ParallelApplyWorkerInfo *winfo)\n if (winfo->dsm_seg != NULL)\n dsm_detach(winfo->dsm_seg);\n\n+ /*\n+ * Ensure this worker information won't be reused during\nworker allocation.\n+ */\n ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList,\n\n winfo);\n\n@@ -762,6 +765,10 @@\nHandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo\nmsg)\n */\n error_context_stack = apply_error_context_stack;\n\n+ /*\n+ * The actual error must be already\nreported by parallel apply\n+ * worker.\n+ */\n ereport(ERROR,\n\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"parallel\napply worker exited abnormally\"),\n\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Nov 2022 19:10:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for 
v51-0001.\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n1. General - Error messages, get_worker_name()\n\nI previously wrote a comment to ask if the get_worker_name() should be\nused in more places but the reply [1, #2b] was:\n\n> 2b.\n> Consider if maybe all of these ought to be calling get_worker_name()\n> which is currently static in worker.c. Doing this means any future\n> changes to get_worker_name won't cause more inconsistencies.\n\nThe most error message in applyparallelxx.c can only use \"xx parallel\nworker\", so I think it's fine not to call get_worker_name\n\n~\n\nI thought the reply missed the point I was trying to make -- I meant\nif it was arranged now so *every* message would go via\nget_worker_name() then in future somebody wanted to change the names\n(e.g. from \"logical replication parallel apply worker\" to \"LR PA\nworker\") then it would only need to be changed in one central place\ninstead of hunting down every hardwired error message.\n\nAnyway, you can do it how you want -- I just was not sure you'd got my\noriginal point.\n\n~~~\n\n2. HandleParallelApplyMessage\n\n+ case 'X': /* Terminate, indicating clean exit. */\n+ shm_mq_detach(winfo->error_mq_handle);\n+ winfo->error_mq_handle = NULL;\n+ break;\n+ default:\n+ elog(ERROR, \"unrecognized message type received from logical\nreplication parallel apply worker: %c (message length %d bytes)\",\n+ msgtype, msg->len);\n\nThe case 'X' code indentation is too much.\n\n======\n\nsrc/backend/replication/logical/origin.c\n\n3. 
replorigin_session_setup(RepOriginId node, int acquired_by)\n\n@@ -1075,12 +1075,20 @@ ReplicationOriginExitCleanup(int code, Datum arg)\n * array doesn't have to be searched when calling\n * replorigin_session_advance().\n *\n- * Obviously only one such cached origin can exist per process and the current\n+ * Normally only one such cached origin can exist per process and the current\n * cached value can only be set again after the previous value is torn down\n * with replorigin_session_reset().\n+ *\n+ * However, we do allow multiple processes to point to the same origin slot if\n+ * requested by the caller by passing PID of the process that has already\n+ * acquired it as acquired_by. This is to allow multiple parallel apply\n+ * processes to use the same origin, provided they maintain commit order, for\n+ * example, by allowing only one process to commit at a time. For the first\n+ * process requesting this origin, the acquired_by parameter needs to be set to\n+ * 0.\n */\n void\n-replorigin_session_setup(RepOriginId node)\n+replorigin_session_setup(RepOriginId node, int acquired_by)\n\nI think the meaning of the acquired_by=0 is not fully described here:\n\"For the first process requesting this origin, the acquired_by\nparameter needs to be set to 0.\"\nIMO that seems to be describing it only from POV that you are always\ngoing to want to allow multiple processes. But really this is an\noptional feature so you might pass acquired_by=0, not just because\nthis is the first of multiple, but also because you *never* want to\nallow multiple at all. The comment does not convey this meaning.\n\nMaybe something worded like below is better?\n\nSUGGESTION\nNormally only one such cached origin can exist per process so the\ncached value can only be set again after the previous value is torn\ndown with replorigin_session_reset(). 
For this normal case pass\nacquired_by=0 (meaning the slot is not allowed to be already acquired\nby another process).\n\nHowever, sometimes multiple processes can safely re-use the same\norigin slot (for example, multiple parallel apply processes can safely\nuse the same origin, provided they maintain commit order by allowing\nonly one process to commit at a time). For this case the first process\nmust pass acquired_by=0, and then the other processes sharing that\nsame origin can pass acquired_by=PID of the first process.\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n4. GENERAL - get_worker_name()\n\nIf you decide it is OK to hardwire some error messages instead of\nunconditionally calling the get_worker_name() -- see my #1 review\ncomment in this post -- then there are some other messages in this\nfile that also seem like they can be also hardwired because the type\nof worker is already known.\n\nHere are some examples:\n\n4a.\n\n+ else if (am_parallel_apply_worker())\n+ {\n+ if (rel->state != SUBREL_STATE_READY)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ /* translator: first %s is the name of logical replication worker */\n+ errmsg(\"%s for subscription \\\"%s\\\" will stop\",\n+ get_worker_name(), MySubscription->name),\n+ errdetail(\"Cannot handle streamed replication transactions using\nparallel apply workers until all tables have been synchronized.\")));\n+\n+ return true;\n+ }\n\nIn the above code from should_apply_changes_for_rel we already know\nthis is a parallel apply worker.\n\n~\n\n4b.\n\n+ if (am_parallel_apply_worker())\n+ ereport(LOG,\n+ /* translator: first %s is the name of logical replication worker */\n+ (errmsg(\"%s for subscription \\\"%s\\\" will stop because of a parameter change\",\n+ get_worker_name(), MySubscription->name)));\n+ else\n\nIn the above code from maybe_reread_subscription we already know this\nis a parallel apply worker.\n\n4c.\n\n if (am_tablesync_worker())\n ereport(LOG,\n- 
(errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has started\",\n- MySubscription->name, get_rel_name(MyLogicalRepWorker->relid))));\n+ /* translator: first %s is the name of logical replication worker */\n+ (errmsg(\"%s for subscription \\\"%s\\\", table \\\"%s\\\" has started\",\n+ get_worker_name(), MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid))));\n\nIn the above code from ApplyWorkerMain we already know this is a\ntablesync worker\n\n~~~\n\n5. get_transaction_apply_action\n\n+\n+/*\n+ * Return the action to take for the given transaction. *winfo is assigned to\n+ * the destination parallel worker info (if the action is\n+ * TRANS_LEADER_SEND_TO_PARALLEL, otherwise *winfo is assigned NULL.\n+ */\n+static TransApplyAction\n+get_transaction_apply_action(TransactionId xid,\nParallelApplyWorkerInfo **winfo)\n\nThere is no closing ')' in the function comment.\n\n~~~\n\n6. apply_worker_clean_exit\n\n+ /* Notify the leader apply worker that we have exited cleanly. */\n+ if (am_parallel_apply_worker())\n+ pq_putmessage('X', NULL, 0);\n\nIMO the comment would be better inside the if block\n\nSUGGESTION\nif (am_parallel_apply_worker())\n{\n /* Notify the leader apply worker that we have exited cleanly. 
*/\n pq_putmessage('X', NULL, 0);\n}\n\n------\n\n[1] Hou-san's reply to my v49-0001 review.\nhttps://www.postgresql.org/message-id/OS0PR01MB5716339FF7CB759E751492CB940D9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 25 Nov 2022 13:53:37 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, November 23, 2022 9:40 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Tue, Nov 22, 2022 at 7:30 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> Few minor comments and questions:\r\n> ============================\r\n> 1.\r\n> +static void\r\n> +LogicalParallelApplyLoop(shm_mq_handle *mqh)\r\n> {\r\n> + for (;;)\r\n> + {\r\n> + void *data;\r\n> + Size len;\r\n> +\r\n> + ProcessParallelApplyInterrupts();\r\n> ...\r\n> ...\r\n> + if (rc & WL_LATCH_SET)\r\n> + {\r\n> + ResetLatch(MyLatch);\r\n> + ProcessParallelApplyInterrupts();\r\n> + }\r\n> ...\r\n> }\r\n> \r\n> Why ProcessParallelApplyInterrupts() is called twice in\r\n> LogicalParallelApplyLoop()?\r\n\r\nI think the second call is unnecessary, so removed it.\r\n\r\n> 2.\r\n> + * This scenario is similar to the first case but TX-1 and TX-2 are\r\n> + executed by\r\n> + * two parallel apply workers (PA-1 and PA-2 respectively). In this\r\n> + scenario,\r\n> + * PA-2 is waiting for PA-1 to complete its transaction while PA-1 is\r\n> + waiting\r\n> + * for subsequent input from LA. Also, LA is waiting for PA-2 to\r\n> + complete its\r\n> + * transaction in order to preserve the commit order. 
There is a\r\n> + deadlock among\r\n> + * three processes.\r\n> + *\r\n> ...\r\n> ...\r\n> + *\r\n> + * LA (waiting to acquire the local transaction lock) -> PA-1 (waiting\r\n> + to\r\n> + * acquire the lock on the unique index) -> PA-2 (waiting to acquire\r\n> + the lock\r\n> + * on the remote transaction) -> LA\r\n> + *\r\n> \r\n> Isn't the order of PA-1 and PA-2 different in the second paragraph as compared\r\n> to the first one.\r\n\r\nFixed.\r\n\r\n> 3.\r\n> + * Deadlock-detection\r\n> + * ------------------\r\n> \r\n> It may be better to keep the title of this section as Locking Considerations.\r\n> \r\n> 4. In the section mentioned in Point 3, it would be better to separately explain\r\n> why we need session-level locks instead of transaction level.\r\n\r\nAdded.\r\n\r\n> 5. Add the below comments in the code:\r\n> diff --git a/src/backend/replication/logical/applyparallelworker.c\r\n> b/src/backend/replication/logical/applyparallelworker.c\r\n> index 9385afb6d2..56f00defcf 100644\r\n> --- a/src/backend/replication/logical/applyparallelworker.c\r\n> +++ b/src/backend/replication/logical/applyparallelworker.c\r\n> @@ -431,6 +431,9 @@ pa_free_worker_info(ParallelApplyWorkerInfo *winfo)\r\n> if (winfo->dsm_seg != NULL)\r\n> dsm_detach(winfo->dsm_seg);\r\n> \r\n> + /*\r\n> + * Ensure this worker information won't be reused during\r\n> worker allocation.\r\n> + */\r\n> ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList,\r\n> \r\n> winfo);\r\n> \r\n> @@ -762,6 +765,10 @@\r\n> HandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo\r\n> msg)\r\n> */\r\n> error_context_stack =\r\n> apply_error_context_stack;\r\n> \r\n> + /*\r\n> + * The actual error must be already\r\n> reported by parallel apply\r\n> + * worker.\r\n> + */\r\n> ereport(ERROR,\r\n> \r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"parallel apply worker\r\n> exited abnormally\"),\r\n\r\nAdded.\r\n\r\nAttached is the new version patch which addresses all comments so far.\r\n\r\nBesides, I let the PA send a different 
message to LA when it exits due to a\r\nsubscription information change. The LA will report a more meaningful message\r\nand restart replication after catching the new message to prevent the LA from\r\nsending messages to the exited PA.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sun, 27 Nov 2022 04:13:34 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, November 25, 2022 10:54 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are some review comments for v51-0001.\r\n\r\nThanks for the comments!\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 1. General - Error messages, get_worker_name()\r\n> \r\n> I previously wrote a comment to ask if the get_worker_name() should be used\r\n> in more places but the reply [1, #2b] was:\r\n> \r\n> > 2b.\r\n> > Consider if maybe all of these ought to be calling get_worker_name()\r\n> > which is currently static in worker.c. Doing this means any future\r\n> > changes to get_worker_name won't cause more inconsistencies.\r\n> \r\n> The most error message in applyparallelxx.c can only use \"xx parallel worker\",\r\n> so I think it's fine not to call get_worker_name\r\n> \r\n> ~\r\n> \r\n> I thought the reply missed the point I was trying to make -- I meant if it was\r\n> arranged now so *every* message would go via\r\n> get_worker_name() then in future somebody wanted to change the names (e.g.\r\n> from \"logical replication parallel apply worker\" to \"LR PA\r\n> worker\") then it would only need to be changed in one central place instead of\r\n> hunting down every hardwired error message.\r\n> \r\n\r\nThanks for the suggestion. 
I understand your point, but I feel that using\r\nget_worker_name() at some places where the worker type is decided could make\r\ndevelopers think that any kind of worker can enter this code, which I am not sure\r\nis better. So I didn't change this.\r\n\r\n> \r\n> 2. HandleParallelApplyMessage\r\n> \r\n> + case 'X': /* Terminate, indicating clean exit. */\r\n> + shm_mq_detach(winfo->error_mq_handle);\r\n> + winfo->error_mq_handle = NULL;\r\n> + break;\r\n> + default:\r\n> + elog(ERROR, \"unrecognized message type received from logical\r\n> replication parallel apply worker: %c (message length %d bytes)\",\r\n> + msgtype, msg->len);\r\n> \r\n> The case 'X' code indentation is too much.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/origin.c\r\n> \r\n> 3. replorigin_session_setup(RepOriginId node, int acquired_by)\r\n> \r\n> @@ -1075,12 +1075,20 @@ ReplicationOriginExitCleanup(int code, Datum arg)\r\n> * array doesn't have to be searched when calling\r\n> * replorigin_session_advance().\r\n> *\r\n> - * Obviously only one such cached origin can exist per process and the current\r\n> + * Normally only one such cached origin can exist per process and the\r\n> + current\r\n> * cached value can only be set again after the previous value is torn down\r\n> * with replorigin_session_reset().\r\n> + *\r\n> + * However, we do allow multiple processes to point to the same origin\r\n> + slot if\r\n> + * requested by the caller by passing PID of the process that has\r\n> + already\r\n> + * acquired it as acquired_by. This is to allow multiple parallel apply\r\n> + * processes to use the same origin, provided they maintain commit\r\n> + order, for\r\n> + * example, by allowing only one process to commit at a time. 
For the\r\n> + first\r\n> + * process requesting this origin, the acquired_by parameter needs to\r\n> + be set to\r\n> + * 0.\r\n> */\r\n> void\r\n> -replorigin_session_setup(RepOriginId node)\r\n> +replorigin_session_setup(RepOriginId node, int acquired_by)\r\n> \r\n> I think the meaning of the acquired_by=0 is not fully described here:\r\n> \"For the first process requesting this origin, the acquired_by parameter needs\r\n> to be set to 0.\"\r\n> IMO that seems to be describing it only from POV that you are always going to\r\n> want to allow multiple processes. But really this is an optional feature so you\r\n> might pass acquired_by=0, not just because this is the first of multiple, but also\r\n> because you *never* want to allow multiple at all. The comment does not\r\n> convey this meaning.\r\n> \r\n> Maybe something worded like below is better?\r\n> \r\n> SUGGESTION\r\n> Normally only one such cached origin can exist per process so the cached value\r\n> can only be set again after the previous value is torn down with\r\n> replorigin_session_reset(). For this normal case pass\r\n> acquired_by=0 (meaning the slot is not allowed to be already acquired by\r\n> another process).\r\n> \r\n> However, sometimes multiple processes can safely re-use the same origin slot\r\n> (for example, multiple parallel apply processes can safely use the same origin,\r\n> provided they maintain commit order by allowing only one process to commit\r\n> at a time). For this case the first process must pass acquired_by=0, and then the\r\n> other processes sharing that same origin can pass acquired_by=PID of the first\r\n> process.\r\n\r\nChanges as suggested.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 4. 
GENERAL - get_worker_name()\r\n> \r\n> If you decide it is OK to hardwire some error messages instead of\r\n> unconditionally calling the get_worker_name() -- see my #1 review comment in\r\n> this post -- then there are some other messages in this file that also seem like\r\n> they can be also hardwired because the type of worker is already known.\r\n> \r\n> Here are some examples:\r\n> \r\n> 4a.\r\n> \r\n> + else if (am_parallel_apply_worker())\r\n> + {\r\n> + if (rel->state != SUBREL_STATE_READY)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + /* translator: first %s is the name of logical replication worker */\r\n> + errmsg(\"%s for subscription \\\"%s\\\" will stop\", get_worker_name(),\r\n> + MySubscription->name), errdetail(\"Cannot handle streamed replication\r\n> + transactions using\r\n> parallel apply workers until all tables have been synchronized.\")));\r\n> +\r\n> + return true;\r\n> + }\r\n> \r\n> In the above code from should_apply_changes_for_rel we already know this is a\r\n> parallel apply worker.\r\n> \r\n> ~\r\n> \r\n> 4b.\r\n> \r\n> + if (am_parallel_apply_worker())\r\n> + ereport(LOG,\r\n> + /* translator: first %s is the name of logical replication worker */\r\n> + (errmsg(\"%s for subscription \\\"%s\\\" will stop because of a parameter\r\n> + change\", get_worker_name(), MySubscription->name))); else\r\n> \r\n> In the above code from maybe_reread_subscription we already know this is a\r\n> parallel apply worker.\r\n> \r\n> 4c.\r\n> \r\n> if (am_tablesync_worker())\r\n> ereport(LOG,\r\n> - (errmsg(\"logical replication table synchronization worker for subscription\r\n> \\\"%s\\\", table \\\"%s\\\" has started\",\r\n> - MySubscription->name, get_rel_name(MyLogicalRepWorker->relid))));\r\n> + /* translator: first %s is the name of logical replication worker */\r\n> + (errmsg(\"%s for subscription \\\"%s\\\", table \\\"%s\\\" has started\",\r\n> + get_worker_name(), MySubscription->name,\r\n> + 
get_rel_name(MyLogicalRepWorker->relid))));\r\n> \r\n> In the above code from ApplyWorkerMain we already know this is a tablesync\r\n> worker\r\n\r\nThanks for checking these, changed.\r\n\r\n> ~~~\r\n> \r\n> 5. get_transaction_apply_action\r\n> \r\n> +\r\n> +/*\r\n> + * Return the action to take for the given transaction. *winfo is\r\n> +assigned to\r\n> + * the destination parallel worker info (if the action is\r\n> + * TRANS_LEADER_SEND_TO_PARALLEL, otherwise *winfo is assigned NULL.\r\n> + */\r\n> +static TransApplyAction\r\n> +get_transaction_apply_action(TransactionId xid,\r\n> ParallelApplyWorkerInfo **winfo)\r\n> \r\n> There is no closing ')' in the function comment.\r\n\r\nAdded.\r\n\r\n> ~~~\r\n> \r\n> 6. apply_worker_clean_exit\r\n> \r\n> + /* Notify the leader apply worker that we have exited cleanly. */ if\r\n> + (am_parallel_apply_worker()) pq_putmessage('X', NULL, 0);\r\n> \r\n> IMO the comment would be better inside the if block\r\n> \r\n> SUGGESTION\r\n> if (am_parallel_apply_worker())\r\n> {\r\n> /* Notify the leader apply worker that we have exited cleanly. */\r\n> pq_putmessage('X', NULL, 0);\r\n> }\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 27 Nov 2022 04:15:26 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, November 22, 2022 9:53 PM Kuroda, Hayato <kuroda.hayato@fujitsu.com> wroteL\r\n> \r\n> Thanks for updating the patch!\r\n> I tested the case whether the deadlock caused by foreign key constraint could\r\n> be detected, and it worked well.\r\n> \r\n> Followings are my review comments. They are basically related with 0001, but\r\n> some contents may be not. It takes time to understand 0002 correctly...\r\n\r\nThanks for the comments!\r\n\r\n> 01. 
typedefs.list\r\n> \r\n> LeaderFileSetState should be added to typedefs.list.\r\n> \r\n> \r\n> 02. 032_streaming_parallel_apply.pl\r\n> \r\n> As I said in [1]: the test name may be not matched. Do you have reasons to\r\n> revert the change?\r\n\r\nThe original parallel safety check has been removed, so I changed the name.\r\nAfter rethinking about this, I renamed it to stream_parallel_conflict.\r\n\r\n> \r\n> 03. 032_streaming_parallel_apply.pl\r\n> \r\n> The test does not cover the case that the backend process relates with the\r\n> deadlock. IIUC this is another motivation to use a stream/transaction lock.\r\n> I think it should be added.\r\n\r\nThe main deadlock cases that the stream/transaction lock can detect are 1) LA->PA 2)\r\nLA->PA->PA, as explained atop applyparallelworker.c. So I think the backend-process\r\nrelated case is a variant which has been covered by the existing\r\ntests in the patch.\r\n\r\n> 04. log output\r\n> \r\n> While being applied spooled changes by PA, there are so many messages like\r\n> \"replayed %d changes from file...\" and \"applied %u changes...\". They comes\r\n> from\r\n> apply_handle_stream_stop() and apply_spooled_messages(). They have same\r\n> meaning, so I think one of them can be removed.\r\n\r\nChanged.\r\n\r\n> 05. system_views.sql\r\n> \r\n> In the previous version you modified pg_stat_subscription system view. Why\r\n> do you revert that?\r\n\r\nI was not sure whether we should include that in the main patch set.\r\nI added a top-up patch that changes the view.\r\n\r\n> 06. interrupt.c - SignalHandlerForShutdownRequest()\r\n> \r\n> In the comment atop SignalHandlerForShutdownRequest(), some processes\r\n> that assign the function except SIGTERM are clarified. We may be able to add\r\n> the parallel apply worker.\r\n\r\nChanged.\r\n\r\n\r\n> 08. 
guc_tables.c - ConfigureNamesInt\r\n> \r\n> ```\r\n> &max_sync_workers_per_subscription,\r\n> + 2, 0, MAX_PARALLEL_WORKER_LIMIT,\r\n> + NULL, NULL, NULL\r\n> + },\r\n> ```\r\n> \r\n> The upper limit for max_sync_workers_per_subscription seems to be wrong, it\r\n> should be used for max_parallel_apply_workers_per_subscription.\r\n\r\nThat's my miss, sorry for that.\r\n\r\n> 10. worker.c - maybe_reread_subscription()\r\n> \r\n> \r\n> ```\r\n> + if (am_parallel_apply_worker())\r\n> + ereport(LOG,\r\n> + /* translator: first %s is the name of logical replication\r\n> worker */\r\n> + (errmsg(\"%s for subscription \\\"%s\\\"\r\n> will stop because of a parameter change\",\r\n> +\r\n> + get_worker_name(), MySubscription->name)));\r\n> ```\r\n> \r\n> I was not sure get_worker_name() is needed. I think \"logical replication apply\r\n> worker\"\r\n> should be embedded.\r\n\r\nChanged.\r\n\r\n> \r\n> 11. worker.c - ApplyWorkerMain()\r\n> \r\n> ```\r\n> + (errmsg_internal(\"%s for subscription \\\"%s\\\"\r\n> two_phase is %s\",\r\n> +\r\n> + get_worker_name(),\r\n> ```\r\n\r\nChanged\r\n\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 27 Nov 2022 04:16:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for patch v51-0002\n\n======\n\n1.\n\nGENERAL - terminology: spool/serialize and data/changes/message\n\nThe terminology seems to be used at random. IMO it might be worthwhile\nrechecking at least that terms are used consistently in all the\ncomments. e.g \"serialize message data to disk\" ... 
and later ...
"apply the spooled messages".

Also for places where it says "Write the message to file" maybe
consider using consistent terminology like "serialize the message to a
file".

Also, try to standardize the way things are described by using
consistent (if they really are the same) terminology for "writing
data" VS "writing changes" VS "writing messages" etc. It is confusing
trying to know if the different wording has some intended meaning or
is it just random.

======

Commit message

2.
When the leader apply worker times out while sending a message to the parallel
apply worker. Instead of erroring out, switch to partial serialize mode and let
the leader serialize all remaining changes to the file and notify the parallel
apply workers to read and apply them at the end of the transaction.

~

The first sentence seems incomplete.

SUGGESTION.
In patch 0001 if the leader apply worker times out while attempting to
send a message to the parallel apply worker it results in an ERROR.

This patch (0002) modifies that behaviour, so instead of erroring it
will switch to "partial serialize" mode - in this mode the leader
serializes all remaining changes to a file and notifies the parallel
apply workers to read and apply them at the end of the transaction.

~~~

3.

This patch 0002 is called “Serialize partial changes to disk if the
shm_mq buffer is full”, but the commit message is saying nothing about
the buffer filling up. I think the Commit message should be mentioning
something that makes the commit patch name more relevant. Otherwise
change the patch name.

======

.../replication/logical/applyparallelworker.c

4. File header comment

+ * timeout is exceeded, the LA will write to file and indicate PA-2 that it
+ * needs to read file for remaining messages. Then LA will start waiting for
+ * commit which will detect deadlock if any. 
(See pa_send_data() and typedef\n+ * enum TransApplyAction)\n\n\"needs to read file for remaining messages\" -> \"needs to read that\nfile for the remaining messages\"\n\n~~~\n\n5. pa_free_worker\n\n+ /*\n+ * Stop the worker if there are enough workers in the pool.\n+ *\n+ * XXX we also need to stop the worker if the leader apply worker\n+ * serialized part of the transaction data to a file due to send timeout.\n+ * This is because the message could be partially written to the queue due\n+ * to send timeout and there is no way to clean the queue other than\n+ * resending the message until it succeeds. To avoid complexity, we\n+ * directly stop the worker in this case.\n+ */\n+ if (winfo->serialize_changes ||\n+ napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n\n5a.\n\n+ * XXX we also need to stop the worker if the leader apply worker\n+ * serialized part of the transaction data to a file due to send timeout.\n\nSUGGESTION\nXXX The worker is also stopped if the leader apply worker needed to\nserialize part of the transaction data due to a send timeout.\n\n~\n\n5b.\n\n+ /* Unlink the files with serialized changes. */\n+ if (winfo->serialize_changes)\n+ stream_cleanup_files(MyLogicalRepWorker->subid, winfo->shared->xid);\n\nA better comment might be\n\nSUGGESTION\nUnlink any files that were needed to serialize partial changes.\n\n~~~\n\n6. 
pa_spooled_messages\n\n/*\n * Replay the spooled messages in the parallel apply worker if leader apply\n * worker has finished serializing changes to the file.\n */\nstatic void\npa_spooled_messages(void)\n\n6a.\nIMO a better name for this function would be pa_apply_spooled_messages();\n\n~\n\n6b.\n\"if leader apply\" -> \"if the leader apply\"\n\n~\n\n7.\n\n+ /*\n+ * Acquire the stream lock if the leader apply worker is serializing\n+ * changes to the file, because the parallel apply worker will no longer\n+ * have a chance to receive a STREAM_STOP and acquire the lock until the\n+ * leader serialize all changes to the file.\n+ */\n+ if (fileset_state == LEADER_FILESET_BUSY)\n+ {\n+ pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n+ }\n\nSUGGESTION (rearranged comment - please check, I am not sure if I got\nthis right)\n\nIf the leader apply worker is still (busy) serializing partial changes\nthen the parallel apply worker acquires the stream lock now.\nOtherwise, it would not have a chance to receive a STREAM_STOP (and\nacquire the stream lock) until the leader had serialized all changes.\n\n~~~\n\n8. pa_send_data\n\n+ *\n+ * When sending data times out, data will be serialized to disk. And the\n+ * current streaming transaction will enter PARTIAL_SERIALIZE mode, which means\n+ * that subsequent data will also be serialized to disk.\n */\n void\n pa_send_data(ParallelApplyWorkerInfo *winfo, Size nbytes, const void *data)\n\nSUGGESTION (minor comment change)\n\nIf the attempt to send data via shared memory times out, then we will\nswitch to \"PARTIAL_SERIALIZE mode\" for the current transaction. 
This\nmeans that the current data and any subsequent data for this\ntransaction will be serialized to disk.\n\n~\n\n9.\n\n Assert(!IsTransactionState());\n+ Assert(!winfo->serialize_changes);\n\nHow about also asserting that this must be the LA worker?\n\n~\n\n10.\n\n+ /*\n+ * The parallel apply worker might be stuck for some reason, so\n+ * stop sending data to parallel worker and start to serialize\n+ * data to files.\n+ */\n+ winfo->serialize_changes = true;\n\nSUGGESTION (minor reword)\nThe parallel apply worker might be stuck for some reason, so stop\nsending data directly to it and start to serialize data to files\ninstead.\n\n~\n\n11.\n+ /* Skip first byte and statistics fields. */\n+ msg.cursor += SIZE_STATS_MESSAGE + 1;\n\nIMO it would be better for the comment order and the code calculation\norder to be the same.\n\nSUGGESTION\n/* Skip first byte and statistics fields. */\nmsg.cursor += 1 + SIZE_STATS_MESSAGE;\n\n~\n\n12. pa_stream_abort\n\n+ /*\n+ * If the parallel apply worker is applying the spooled\n+ * messages, we save the current file position and close the\n+ * file to prevent the file from being accidentally closed on\n+ * rollback.\n+ */\n+ if (stream_fd)\n+ {\n+ BufFileTell(stream_fd, &fileno, &offset);\n+ BufFileClose(stream_fd);\n+ reopen_stream_fd = true;\n+ }\n+\n RollbackToSavepoint(spname);\n CommitTransactionCommand();\n subxactlist = list_truncate(subxactlist, i + 1);\n+\n+ /*\n+ * Reopen the file and set the file position to the saved\n+ * position.\n+ */\n+ if (reopen_stream_fd)\n\nIt seems a bit vague to just refer to \"close the file\" and \"reopen the\nfile\" in these comments. IMO it would be better to call this file by a\nname like \"the message spool file\" or similar. Please check all other\nsimilar comments.\n\n~~~\n\n13. pa_set_fileset_state\n\n /*\n+ * Set the fileset_state flag for the given parallel apply worker. 
The
+ * stream_fileset of the leader apply worker will be written into the shared
+ * memory if the fileset_state is LEADER_FILESET_ACCESSIBLE.
+ */
+void
+pa_set_fileset_state(ParallelApplyWorkerShared *wshared,
+ LeaderFileSetState fileset_state)
+{

13a.

It is an enum -- not a "flag", so:

"fileset_state flag" -> "fileset state"

~~

13b.

It seemed strange to me that the comment/code says this state is only
written to shm when it is "ACCESSIBLE".... IIUC this same fileset state
lingers around to be reused for other workers so I expected the state
should *always* be written whenever the LA changes it. (I mean even if
the PA is not needing to look at this member, I still think it should
have the current/correct value in it).

======

src/backend/replication/logical/worker.c

14. TRANS_LEADER_SEND_TO_PARALLEL

+ * TRANS_LEADER_PARTIAL_SERIALIZE:
+ * The action means that we are in the leader apply worker and have sent some
+ * changes to the parallel apply worker, but the remaining changes need to be
+ * serialized to disk due to timeout while sending data, and the parallel apply
+ * worker will apply these changes when the final commit arrives.
+ *
+ * One might think we can use LEADER_SERIALIZE directly. But in partial
+ * serialize mode, in addition to serializing changes to file, the leader
+ * worker needs to write the STREAM_XXX message to disk, and needs to wait for
+ * parallel apply worker to finish the transaction when processing the
+ * transaction finish command. So a new action was introduced to make the logic
+ * clearer.
+ *
 * TRANS_LEADER_SEND_TO_PARALLEL:


SUGGESTION (Minor wording changes)
The action means that we are in the leader apply worker and have sent
some changes directly to the parallel apply worker, due to timeout
while sending data the remaining changes need to be serialized to
disk. 
The parallel apply worker will apply these serialized changes\nwhen the final commit arrives.\n\nLEADER_SERIALIZE could not be used for this case because, in addition\nto serializing changes, the leader worker also needs to write the\nSTREAM_XXX message to disk, and wait for the parallel apply worker to\nfinish the transaction when processing the transaction finish command.\nSo this new action was introduced to make the logic clearer.\n\n~\n\n15.\n /* Actions for streaming transactions. */\n TRANS_LEADER_SERIALIZE,\n+ TRANS_LEADER_PARTIAL_SERIALIZE,\n TRANS_LEADER_SEND_TO_PARALLEL,\n TRANS_PARALLEL_APPLY\n\nAlthough it makes no difference I felt it would be better to put\nTRANS_LEADER_PARTIAL_SERIALIZE *after* TRANS_LEADER_SEND_TO_PARALLEL\nbecause that would be the order that these mode changes occur in the\nlogic...\n\n~~~\n\n16.\n\n@@ -375,7 +388,7 @@ typedef struct ApplySubXactData\n static ApplySubXactData subxact_data = {0, 0, InvalidTransactionId, NULL};\n\n static inline void subxact_filename(char *path, Oid subid, TransactionId xid);\n-static inline void changes_filename(char *path, Oid subid, TransactionId xid);\n+inline void changes_filename(char *path, Oid subid, TransactionId xid);\n\nIIUC (see [1]) when this function was made non-static the \"inline\"\nshould have been put into the header file.\n\n~\n\n17.\n@@ -388,10 +401,9 @@ static inline void cleanup_subxact_info(void);\n /*\n * Serialize and deserialize changes for a toplevel transaction.\n */\n-static void stream_cleanup_files(Oid subid, TransactionId xid);\n static void stream_open_file(Oid subid, TransactionId xid,\n bool first_segment);\n-static void stream_write_change(char action, StringInfo s);\n+static void stream_write_message(TransactionId xid, char action, StringInfo s);\n static void stream_close_file(void);\n\n17a.\n\nI felt just saying \"file/files\" is too vague. 
All the references to\nthe file should be consistent, so IMO everything would be better named\nlike:\n\n\"stream_cleanup_files\" -> \"stream_msg_spoolfile_cleanup()\"\n\"stream_open_file\" -> \"stream_msg_spoolfile_open()\"\n\"stream_close_file\" -> \"stream_msg_spoolfile_close()\"\n\"stream_write_message\" -> \"stream_msg_spoolfile_write_msg()\"\n\n~\n\n17b.\nIMO there is not enough distinction here between function names\nstream_write_message and stream_write_change. e.g. You cannot really\ntell from their names what might be the difference.\n\n~~~\n\n18.\n\n@@ -586,6 +595,7 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n TransactionId current_xid;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg;\n\n apply_action = get_transaction_apply_action(stream_xid, &winfo);\n\n@@ -595,6 +605,8 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n\n Assert(TransactionIdIsValid(stream_xid));\n\n+ original_msg = *s;\n+\n /*\n * We should have received XID of the subxact as the first part of the\n * message, so extract it.\n@@ -618,10 +630,14 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n stream_write_change(action, s);\n return true;\n\n+ case TRANS_LEADER_PARTIAL_SERIALIZE:\n case TRANS_LEADER_SEND_TO_PARALLEL:\n Assert(winfo);\n\n- pa_send_data(winfo, s->len, s->data);\n+ if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)\n+ pa_send_data(winfo, s->len, s->data);\n+ else\n+ stream_write_change(action, &original_msg);\n\nThe original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE\ncase so I think it should only be declared/assigned in the scope of\nthat 'else'\n\n~~\n\n19. apply_handle_stream_prepare\n\n@@ -1316,13 +1335,21 @@ apply_handle_stream_prepare(StringInfo s)\n pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);\n\n /* Send STREAM PREPARE message to the parallel apply worker. 
*/\n- pa_send_data(winfo, s->len, s->data);\n+ if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)\n+ pa_send_data(winfo, s->len, s->data);\n+ else\n+ stream_write_message(prepare_data.xid,\n+ LOGICAL_REP_MSG_STREAM_PREPARE,\n+ &original_msg);\n\n\nThe original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE\ncase so I think it should only be declared/assigned in the scope of\nthat 'else'\n\n~\n\n20.\n\n+ /*\n+ * Close the file before committing if the parallel apply is\n+ * applying spooled changes.\n+ */\n+ if (stream_fd)\n+ BufFileClose(stream_fd);\n\nI found this a bit confusing because there is already a\nstream_close_file() wrapper function which does almost the same as\nthis. So either this code should be calling that function, or the\ncomment here should be explaining why this code is NOT calling that\nfunction.\n\n~~~\n\n21. serialize_stream_start\n\n+/*\n+ * Initialize fileset (if not already done).\n+ *\n+ * Create a new file when first_segment is true, otherwise open the existing\n+ * file.\n+ */\n+void\n+serialize_stream_start(TransactionId xid, bool first_segment)\n\nIMO this function should be called stream_msg_spoolfile_init() or\nstream_msg_spoolfile_begin() to match the pattern for function names\nof the message spool file that I previously suggested. (see review\ncomment #17a)\n\n~\n\n22.\n\n+ /*\n+ * Initialize the worker's stream_fileset if we haven't yet. This will be\n+ * used for the entire duration of the worker so create it in a permanent\n+ * context. We create this on the very first streaming message from any\n+ * transaction and then use it for this and other streaming transactions.\n+ * Now, we could create a fileset at the start of the worker as well but\n+ * then we won't be sure that it will ever be used.\n+ */\n+ if (!MyLogicalRepWorker->stream_fileset)\n\nI assumed this is a typo \"Now,\" --> \"Note,\" ?\n\n~~~\n\n23. 
apply_handle_stream_start\n\n@@ -1404,6 +1478,7 @@ apply_handle_stream_start(StringInfo s)\n bool first_segment;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg = *s;\n\nThe original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE\ncase so I think it should only be declared/assigned in the scope of\nthat 'else'\n\n~\n\n24.\n\n /*\n- * Start a transaction on stream start, this transaction will be\n- * committed on the stream stop unless it is a tablesync worker in\n- * which case it will be committed after processing all the\n- * messages. We need the transaction for handling the buffile,\n- * used for serializing the streaming data and subxact info.\n+ * serialize_stream_start will start a transaction, this\n+ * transaction will be committed on the stream stop unless it is a\n+ * tablesync worker in which case it will be committed after\n+ * processing all the messages. We need the transaction for\n+ * handling the buffile, used for serializing the streaming data\n+ * and subxact info.\n */\n- begin_replication_step();\n+ serialize_stream_start(stream_xid, first_segment);\n+ break;\n\nMake the comment a bit more natural.\n\nSUGGESTION\n\nFunction serialize_stream_start starts a transaction. This transaction\nwill be committed on the stream stop unless it is a tablesync worker\nin which case it will be committed after processing all the messages.\nWe need this transaction for handling the BufFile, used for\nserializing the streaming data and subxact info.\n\n~\n\n25.\n\n+ case TRANS_LEADER_PARTIAL_SERIALIZE:\n /*\n- * Initialize the worker's stream_fileset if we haven't yet. This\n- * will be used for the entire duration of the worker so create it\n- * in a permanent context. We create this on the very first\n- * streaming message from any transaction and then use it for this\n- * and other streaming transactions. 
Now, we could create a\n- * fileset at the start of the worker as well but then we won't be\n- * sure that it will ever be used.\n+ * The file should have been created when entering\n+ * PARTIAL_SERIALIZE mode so no need to create it again. The\n+ * transaction started in serialize_stream_start will be committed\n+ * on the stream stop.\n */\n- if (!MyLogicalRepWorker->stream_fileset)\n\nBEFORE\nThe file should have been created when entering PARTIAL_SERIALIZE mode\nso no need to create it again.\n\nSUGGESTION\nThe message spool file was already created when entering PARTIAL_SERIALIZE mode.\n\n~~~\n\n26. serialize_stream_stop\n\n /*\n+ * Update the information about subxacts and close the file.\n+ *\n+ * This function should be called when the serialize_stream_start function has\n+ * been called.\n+ */\n+void\n+serialize_stream_stop(TransactionId xid)\n\nMaybe 2nd part of that comment should be something more like\n\nSUGGESTION\nThis function ends what was started by the function serialize_stream_start().\n\n~\n\n27.\n\n+ /*\n+ * Close the file with serialized changes, and serialize information about\n+ * subxacts for the toplevel transaction.\n+ */\n+ subxact_info_write(MyLogicalRepWorker->subid, xid);\n+ stream_close_file();\n\nShould the comment and the code be in the same order?\n\nSUGGESTION\nSerialize information about subxacts for the toplevel transaction,\nthen close the stream messages spool file.\n\n~~~\n\n28. 
handle_stream_abort\n\n+ case TRANS_LEADER_PARTIAL_SERIALIZE:\n+ Assert(winfo);\n+\n+ /*\n+ * Parallel apply worker might have applied some changes, so write\n+ * the STREAM_ABORT message so that the parallel apply worker can\n+ * rollback the subtransaction if needed.\n+ */\n+ stream_write_message(xid, LOGICAL_REP_MSG_STREAM_ABORT,\n+ &original_msg);\n+\n\n28a.\nThe original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE\ncase so I think it should only be declared/assigned in the scope of\nthat case.\n\n~\n\n28b.\n\"so that the parallel apply worker can\" -> \"so that it can\"\n\n\n~~~\n\n29. apply_spooled_messages\n\n+void\n+apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n+ XLogRecPtr lsn)\n {\n StringInfoData s2;\n int nchanges;\n char path[MAXPGPATH];\n char *buffer = NULL;\n MemoryContext oldcxt;\n- BufFile *fd;\n\n- maybe_start_skipping_changes(lsn);\n+ if (!am_parallel_apply_worker())\n+ maybe_start_skipping_changes(lsn);\n\n /* Make sure we have an open transaction */\n begin_replication_step();\n@@ -1810,8 +1913,8 @@ apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\n changes_filename(path, MyLogicalRepWorker->subid, xid);\n elog(DEBUG1, \"replaying changes from file \\\"%s\\\"\", path);\n\n- fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDONLY,\n- false);\n+ stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);\n+ stream_xid = xid;\n\nIMO it seems strange to me that the fileset is passed as a parameter\nbut then the resulting fd is always assigned to a single global\nvariable (regardless of what the fileset was passed).\n\n~\n\n30.\n\n- BufFileClose(fd);\n-\n+ BufFileClose(stream_fd);\n pfree(buffer);\n pfree(s2.data);\n\n+done:\n+ stream_fd = NULL;\n+ stream_xid = InvalidTransactionId;\n+\n\nThis code fragment seems to be doing almost the same as what function\nstream_close_file() is doing. Should you just call that instead?\n\n~~~\n\n31. 
apply_handle_stream_commit

+ if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)
+ pa_send_data(winfo, s->len, s->data);
+ else
+ stream_write_message(xid, LOGICAL_REP_MSG_STREAM_COMMIT,
+ &original_msg);

The original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE
case so I think it should only be declared/assigned in the scope of
that 'else'

~

32.

 case TRANS_PARALLEL_APPLY:
+
+ /*
+ * Close the file before committing if the parallel apply is
+ * applying spooled changes.
+ */
+ if (stream_fd)
+ BufFileClose(stream_fd);

(Same as earlier review comment #20)

IMO this is confusing because there is already a stream_close_file()
wrapper function that does almost the same. So either this code should
be calling that function, or the comment here should explain why this
code is NOT calling that function.


======

src/include/replication/worker_internal.h

33. LeaderFileSetState

+/* State of fileset in leader apply worker. */
+typedef enum LeaderFileSetState
+{
+ LEADER_FILESET_UNKNOWN,
+ LEADER_FILESET_BUSY,
+ LEADER_FILESET_ACCESSIBLE
+} LeaderFileSetState;

33a.

Missing from typedefs.list?

~

33b.

I thought some more explanatory comments for the meaning of
BUSY/ACCESSIBLE should be here.

~

33c.

READY might be a better value than ACCESSIBLE.

~

33d.
I'm not sure what usefulness the "LEADER_" and "Leader" prefixes
give here. Maybe a name like PartialFileSetState is more meaningful?

e.g. like this?

typedef enum PartialFileSetState
{
FS_UNKNOWN,
FS_BUSY,
FS_READY
} PartialFileSetState;

~


~~~

34. ParallelApplyWorkerShared

+ /*
+ * The leader apply worker will serialize changes to the file after
+ * entering PARTIAL_SERIALIZE mode and share the fileset with the parallel
+ * apply worker when processing the transaction finish command. 
And then\n+ * the parallel apply worker will apply all the spooled messages.\n+ *\n+ * Don't use SharedFileSet here as we need the fileset to survive after\n+ * releasing the shared memory so that the leader apply worker can re-use\n+ * the fileset for next streaming transaction.\n+ */\n+ LeaderFileSetState fileset_state;\n+ FileSet fileset;\n\nMinor rewording of that comment\n\nSUGGESTION\nAfter entering PARTIAL_SERIALIZE mode, the leader apply worker will\nserialize changes to the file, and share the fileset with the parallel\napply worker when processing the transaction finish command. Then the\nparallel apply worker will apply all the spooled messages.\n\nFileSet is used here instead of SharedFileSet because we need it to\nsurvive after releasing the shared memory so that the leader apply\nworker can re-use the same fileset for the next streaming transaction.\n\n~~~\n\n35. globals\n\n /*\n+ * Indicates whether the leader apply worker needs to serialize the\n+ * remaining changes to disk due to timeout when sending data to the\n+ * parallel apply worker.\n+ */\n+ bool serialize_changes;\n\n35a.\nI wonder if the comment would be better to also mention \"via shared memory\".\n\nSUGGESTION\n\nIndicates whether the leader apply worker needs to serialize the\nremaining changes to disk due to timeout when attempting to send data\nto the parallel apply worker via shared memory.\n\n~\n\n35b.\nI wonder if a more informative variable name might be\nserialize_remaining_changes?\n\n------\n[1] https://stackoverflow.com/questions/17504316/what-happens-with-an-extern-inline-function\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 28 Nov 2022 18:19:01 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Nov 27, 2022 at 9:43 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> 
Attach the new version patch which addressed all comments so far.\n>\n\nFew comments on v52-0001*\n========================\n1.\npa_free_worker()\n{\n...\n+ /* Free the worker information if the worker exited cleanly. */\n+ if (!winfo->error_mq_handle)\n+ {\n+ pa_free_worker_info(winfo);\n+\n+ if (winfo->in_use &&\n+ !hash_search(ParallelApplyWorkersHash, &xid, HASH_REMOVE, NULL))\n+ elog(ERROR, \"hash table corrupted\");\n\npa_free_worker_info() pfrees the winfo, so how is it legal to\nwinfo->in_use in the above check?\n\nAlso, why is this check (!winfo->error_mq_handle) required in the\nfirst place in the patch? The worker exits cleanly only when the\nleader apply worker sends a SIGINT signal and in that case, we already\ndetach from the error queue and clean up other worker information.\n\n2.\n+HandleParallelApplyMessages(void)\n+{\n...\n...\n+ foreach(lc, ParallelApplyWorkersList)\n+ {\n+ shm_mq_result res;\n+ Size nbytes;\n+ void *data;\n+ ParallelApplyWorkerInfo *winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n+\n+ if (!winfo->error_mq_handle)\n+ continue;\n\nSimilar to the previous comment, it is not clear whether we need this\ncheck. If required, can we add a comment to indicate the case where it\nhappens to be true?\n\nNote, there is a similar check for winfo->error_mq_handle in\npa_wait_for_xact_state(). Please add some comments if that is\nrequired.\n\n3. Why is there apply_worker_clean_exit() at the end of\nParallelApplyWorkerMain()? Normally either the leader worker stops\nparallel apply, or parallel apply gets stopped because of a parameter\nchange, or exits because of error, and in none of those cases it can\nhit this code path unless I am missing something.\n\nAdditionally, I think in LogicalParallelApplyLoop, we will never\nreceive zero-length messages so that is also wrong and should be\nconverted to elog(ERROR,..).\n\n4. 
I think in logicalrep_worker_detach(), we should detach from the\nshm error queue so that the parallel apply worker won't try to send a\ntermination message back to the leader worker.\n\n5.\npa_send_data()\n{\n...\n+ if (startTime == 0)\n+ startTime = GetCurrentTimestamp();\n...\n\nWhat is the use of getting the current timestamp before waitlatch\nlogic, if it is not used before that? It seems that is for the time\nlogic to look correct. We can probably reduce the 10s interval to 9s\nfor that.\n\nIn this function, we need to add some comments to indicate why the\ncurrent logic is used, and also probably we can refer to the comments\natop this file.\n\n6. I think it will be better if we keep stream_apply_worker local to\napplyparallelworker.c by exposing functions to cache/resetting the\nrequired info.\n\n7. Apart from the above, I have made a few changes in the comments and\nsome miscellaneous cosmetic changes in the attached. Kindly include\nthese in the next version unless you see a problem with any change.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 28 Nov 2022 17:55:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Nov 28, 2022 at 12:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n...\n>\n> 17.\n> @@ -388,10 +401,9 @@ static inline void cleanup_subxact_info(void);\n> /*\n> * Serialize and deserialize changes for a toplevel transaction.\n> */\n> -static void stream_cleanup_files(Oid subid, TransactionId xid);\n> static void stream_open_file(Oid subid, TransactionId xid,\n> bool first_segment);\n> -static void stream_write_change(char action, StringInfo s);\n> +static void stream_write_message(TransactionId xid, char action, StringInfo s);\n> static void stream_close_file(void);\n>\n> 17a.\n>\n> I felt just saying \"file/files\" is too vague. 
All the references to\n> the file should be consistent, so IMO everything would be better named\n> like:\n>\n> \"stream_cleanup_files\" -> \"stream_msg_spoolfile_cleanup()\"\n> \"stream_open_file\" -> \"stream_msg_spoolfile_open()\"\n> \"stream_close_file\" -> \"stream_msg_spoolfile_close()\"\n> \"stream_write_message\" -> \"stream_msg_spoolfile_write_msg()\"\n>\n> ~\n>\n> 17b.\n> IMO there is not enough distinction here between function names\n> stream_write_message and stream_write_change. e.g. You cannot really\n> tell from their names what might be the difference.\n>\n> ~~~\n>\n\nI think the only new function needed by this patch is\nstream_write_message so don't see why to change all others for that. I\nsee two possibilities to make name better (a) name function as\nstream_open_and_write_change, or (b) pass a new argument (boolean\nopen) to stream_write_change\n\n...\n>\n> src/include/replication/worker_internal.h\n>\n> 33. LeaderFileSetState\n>\n> +/* State of fileset in leader apply worker. */\n> +typedef enum LeaderFileSetState\n> +{\n> + LEADER_FILESET_UNKNOWN,\n> + LEADER_FILESET_BUSY,\n> + LEADER_FILESET_ACCESSIBLE\n> +} LeaderFileSetState;\n>\n> 33a.\n>\n> Missing from typedefs.list?\n>\n> ~\n>\n> 33b.\n>\n> I thought some more explanatory comments for the meaning of\n> BUSY/ACCESSIBLE should be here.\n>\n> ~\n>\n> 33c.\n>\n> READY might be a better value than ACCESSIBLE\n>\n> ~\n>\n> 33d.\n> I'm not sure what usefulness does the \"LEADER_\" and \"Leader\" prefixes\n> give here. Maybe a name like PartialFileSetStat is more meaningful?\n>\n> e.g. like this?\n>\n> typedef enum PartialFileSetState\n> {\n> FS_UNKNOWN,\n> FS_BUSY,\n> FS_READY\n> } PartialFileSetState;\n>\n> ~\n>\n\nAll your suggestions in this point look good to me.\n\n>\n> ~~~\n>\n>\n> 35. 
globals\n>\n> /*\n> + * Indicates whether the leader apply worker needs to serialize the\n> + * remaining changes to disk due to timeout when sending data to the\n> + * parallel apply worker.\n> + */\n> + bool serialize_changes;\n>\n> 35a.\n> I wonder if the comment would be better to also mention \"via shared memory\".\n>\n> SUGGESTION\n>\n> Indicates whether the leader apply worker needs to serialize the\n> remaining changes to disk due to timeout when attempting to send data\n> to the parallel apply worker via shared memory.\n>\n> ~\n>\n\nI think the comment should say \" .. the leader apply worker serialized\nremaining changes ...\"\n\n> 35b.\n> I wonder if a more informative variable name might be\n> serialize_remaining_changes?\n>\n\nI think this needlessly makes the variable name long.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Nov 2022 18:40:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, November 28, 2022 20:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sun, Nov 27, 2022 at 9:43 AM houzj.fnst@fujitsu.com \r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch which addressed all comments so far.\r\n> >\r\n> \r\n> Few comments on v52-0001*\r\n> ========================\r\n> 1.\r\n> pa_free_worker()\r\n> {\r\n> ...\r\n> + /* Free the worker information if the worker exited cleanly. */ if \r\n> + (!winfo->error_mq_handle) { pa_free_worker_info(winfo);\r\n> +\r\n> + if (winfo->in_use &&\r\n> + !hash_search(ParallelApplyWorkersHash, &xid, HASH_REMOVE, NULL)) \r\n> + elog(ERROR, \"hash table corrupted\");\r\n> \r\n> pa_free_worker_info() pfrees the winfo, so how is it legal to\r\n> winfo->in_use in the above check?\r\n> \r\n> Also, why is this check (!winfo->error_mq_handle) required in the \r\n> first place in the patch? 
The worker exits cleanly only when the \r\n> leader apply worker sends a SIGINT signal and in that case, we already \r\n> detach from the error queue and clean up other worker information.\r\n\r\nIt was intended for the case when a user sends a signal, but that doesn't seem like a standard way to do it.\r\nSo, I removed this check (!winfo->error_mq_handle).\r\n\r\n> 2.\r\n> +HandleParallelApplyMessages(void)\r\n> +{\r\n> ...\r\n> ...\r\n> + foreach(lc, ParallelApplyWorkersList) { shm_mq_result res; Size \r\n> + nbytes;\r\n> + void *data;\r\n> + ParallelApplyWorkerInfo *winfo = (ParallelApplyWorkerInfo *) \r\n> + lfirst(lc);\r\n> +\r\n> + if (!winfo->error_mq_handle)\r\n> + continue;\r\n> \r\n> Similar to the previous comment, it is not clear whether we need this \r\n> check. If required, can we add a comment to indicate the case where it \r\n> happens to be true?\r\n> Note, there is a similar check for winfo->error_mq_handle in \r\n> pa_wait_for_xact_state(). Please add some comments if that is \r\n> required.\r\n\r\nRemoved this check in these two functions.\r\n\r\n> 3. Why is there apply_worker_clean_exit() at the end of \r\n> ParallelApplyWorkerMain()? Normally either the leader worker stops \r\n> parallel apply, or parallel apply gets stopped because of a parameter \r\n> change, or exits because of error, and in none of those cases it can \r\n> hit this code path unless I am missing something.\r\n> \r\n> Additionally, I think in LogicalParallelApplyLoop, we will never \r\n> receive zero-length messages so that is also wrong and should be \r\n> converted to elog(ERROR,..).\r\n\r\nAgreed and changed. \r\n\r\n> 4. 
I think in logicalrep_worker_detach(), we should detach from the \r\n> shm error queue so that the parallel apply worker won't try to send a \r\n> termination message back to the leader worker.\r\n\r\nAgreed and changed.\r\n\r\n> 5.\r\n> pa_send_data()\r\n> {\r\n> ...\r\n> + if (startTime == 0)\r\n> + startTime = GetCurrentTimestamp();\r\n> ...\r\n> \r\n> What is the use of getting the current timestamp before waitlatch \r\n> logic, if it is not used before that? It seems that is for the time \r\n> logic to look correct. We can probably reduce the 10s interval to 9s \r\n> for that.\r\n\r\nChanged.\r\n\r\n> In this function, we need to add some comments to indicate why the \r\n> current logic is used, and also probably we can refer to the comments \r\n> atop this file.\r\n\r\nAdded some comments.\r\n\r\n> 6. I think it will be better if we keep stream_apply_worker local to \r\n> applyparallelworker.c by exposing functions to cache/resetting the \r\n> required info.\r\n\r\nAgree. Added a new function to set the stream_apply_worker.\r\n\r\n> 7. Apart from the above, I have made a few changes in the comments and \r\n> some miscellaneous cosmetic changes in the attached. 
Kindly include \r\n> these in the next version unless you see a problem with any change.\r\n\r\nThanks, I have checked and merge them.\r\n\r\nAttach the new version patch which addressed all comments.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 29 Nov 2022 04:48:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, November 28, 2022 15:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch v51-0002\r\n\r\nThanks for your comments!\r\n\r\n> ======\r\n> \r\n> 1.\r\n> \r\n> GENERAL - terminology: spool/serialize and data/changes/message\r\n> \r\n> The terminology seems to be used at random. IMO it might be worthwhile \r\n> rechecking at least that terms are used consistently in all the \r\n> comments. e.g \"serialize message data to disk\" ... and later ...\r\n> \"apply the spooled messages\".\r\n> \r\n> Also for places where it says \"Write the message to file\" maybe \r\n> consider using consistent terminology like \"serialize the message to a \r\n> file\".\r\n> \r\n> Also, try to standardize the way things are described by using \r\n> consistent (if they really are the same) terminology for \"writing \r\n> data\" VS \"writing data\" VS \"writing messages\" etc. It is confusing \r\n> trying to know if the different wording has some intended meaning or \r\n> is it just random.\r\n\r\nI changes some of them, but I think there some things left which I will recheck in next version.\r\nAnd I think we'd better not change comments that refer to existing comments or functions or variables.\r\nFor example, it’s fine for comments that refer to apply_spooled_message to use \"spool\" \"message\".\r\n\r\n\r\n> ======\r\n> \r\n> Commit message\r\n> \r\n> 2.\r\n> When the leader apply worker times out while sending a message to the \r\n> parallel apply worker. 
Instead of erroring out, switch to partial \r\n> serialize mode and let the leader serialize all remaining changes to \r\n> the file and notify the parallel apply workers to read and apply them at the end of the transaction.\r\n> \r\n> ~\r\n> \r\n> The first sentence seems incomplete\r\n> \r\n> SUGGESTION.\r\n> In patch 0001 if the leader apply worker times out while attempting to \r\n> send a message to the parallel apply worker it results in an ERROR.\r\n> \r\n> This patch (0002) modifies that behaviour, so instead of erroring it \r\n> will switch to \"partial serialize\" mode - in this mode the leader \r\n> serializes all remaining changes to a file and notifies the parallel \r\n> apply workers to read and apply them at the end of the transaction.\r\n> \r\n> ~~~\r\n> \r\n> 3.\r\n> \r\n> This patch 0002 is called “Serialize partial changes to disk if the \r\n> shm_mq buffer is full”, but the commit message is saying nothing about \r\n> the buffer filling up. I think the Commit message should be mentioning \r\n> something that makes the commit patch name more relevant. Otherwise \r\n> change the patch name.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 4. File header comment\r\n> \r\n> + * timeout is exceeded, the LA will write to file and indicate PA-2 \r\n> + that it\r\n> + * needs to read file for remaining messages. Then LA will start \r\n> + waiting for\r\n> + * commit which will detect deadlock if any. (See pa_send_data() and \r\n> + typedef\r\n> + * enum TransApplyAction)\r\n> \r\n> \"needs to read file for remaining messages\" -> \"needs to read that \r\n> file for the remaining messages\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 5. 
pa_free_worker\r\n> \r\n> + /*\r\n> + * Stop the worker if there are enough workers in the pool.\r\n> + *\r\n> + * XXX we also need to stop the worker if the leader apply worker\r\n> + * serialized part of the transaction data to a file due to send timeout.\r\n> + * This is because the message could be partially written to the \r\n> + queue due\r\n> + * to send timeout and there is no way to clean the queue other than\r\n> + * resending the message until it succeeds. To avoid complexity, we\r\n> + * directly stop the worker in this case.\r\n> + */\r\n> + if (winfo->serialize_changes ||\r\n> + napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n> \r\n> 5a.\r\n> \r\n> + * XXX we also need to stop the worker if the leader apply worker\r\n> + * serialized part of the transaction data to a file due to send timeout.\r\n> \r\n> SUGGESTION\r\n> XXX The worker is also stopped if the leader apply worker needed to \r\n> serialize part of the transaction data due to a send timeout.\r\n> \r\n> ~\r\n> \r\n> 5b.\r\n> \r\n> + /* Unlink the files with serialized changes. */ if\r\n> + (winfo->serialize_changes)\r\n> + stream_cleanup_files(MyLogicalRepWorker->subid, winfo->shared->xid);\r\n> \r\n> A better comment might be\r\n> \r\n> SUGGESTION\r\n> Unlink any files that were needed to serialize partial changes.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 6. 
pa_spooled_messages\r\n> \r\n> /*\r\n> * Replay the spooled messages in the parallel apply worker if leader \r\n> apply\r\n> * worker has finished serializing changes to the file.\r\n> */\r\n> static void\r\n> pa_spooled_messages(void)\r\n> \r\n> 6a.\r\n> IMO a better name for this function would be \r\n> pa_apply_spooled_messages();\r\n\r\nNot sure about this.\r\n\r\n> ~\r\n> \r\n> 6b.\r\n> \"if leader apply\" -> \"if the leader apply\"\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 7.\r\n> \r\n> + /*\r\n> + * Acquire the stream lock if the leader apply worker is serializing\r\n> + * changes to the file, because the parallel apply worker will no \r\n> + longer\r\n> + * have a chance to receive a STREAM_STOP and acquire the lock until \r\n> + the\r\n> + * leader serialize all changes to the file.\r\n> + */\r\n> + if (fileset_state == LEADER_FILESET_BUSY) { \r\n> + pa_lock_stream(MyParallelShared->xid, AccessShareLock); \r\n> + pa_unlock_stream(MyParallelShared->xid, AccessShareLock); }\r\n> \r\n> SUGGESTION (rearranged comment - please check, I am not sure if I got \r\n> this right)\r\n> \r\n> If the leader apply worker is still (busy) serializing partial changes \r\n> then the parallel apply worker acquires the stream lock now.\r\n> Otherwise, it would not have a chance to receive a STREAM_STOP (and \r\n> acquire the stream lock) until the leader had serialized all changes.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 8. pa_send_data\r\n> \r\n> + *\r\n> + * When sending data times out, data will be serialized to disk. 
And \r\n> + the\r\n> + * current streaming transaction will enter PARTIAL_SERIALIZE mode, \r\n> + which\r\n> means\r\n> + * that subsequent data will also be serialized to disk.\r\n> */\r\n> void\r\n> pa_send_data(ParallelApplyWorkerInfo *winfo, Size nbytes, const void\r\n> *data)\r\n> \r\n> SUGGESTION (minor comment change)\r\n> \r\n> If the attempt to send data via shared memory times out, then we will \r\n> switch to \"PARTIAL_SERIALIZE mode\" for the current transaction. This \r\n> means that the current data and any subsequent data for this \r\n> transaction will be serialized to disk.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 9.\r\n> \r\n> Assert(!IsTransactionState());\r\n> + Assert(!winfo->serialize_changes);\r\n> \r\n> How about also asserting that this must be the LA worker?\r\n\r\nNot sure about this as I think the parallel apply worker won't have a winfo.\r\n\r\n> ~\r\n> \r\n> 10.\r\n> \r\n> + /*\r\n> + * The parallel apply worker might be stuck for some reason, so\r\n> + * stop sending data to parallel worker and start to serialize\r\n> + * data to files.\r\n> + */\r\n> + winfo->serialize_changes = true;\r\n> \r\n> SUGGESTION (minor reword)\r\n> The parallel apply worker might be stuck for some reason, so stop \r\n> sending data directly to it and start to serialize data to files \r\n> instead.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 11.\r\n> + /* Skip first byte and statistics fields. */ msg.cursor += \r\n> + SIZE_STATS_MESSAGE + 1;\r\n> \r\n> IMO it would be better for the comment order and the code calculation \r\n> order to be the same.\r\n> \r\n> SUGGESTION\r\n> /* Skip first byte and statistics fields. */ msg.cursor += 1 + \r\n> SIZE_STATS_MESSAGE;\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 12. 
pa_stream_abort\r\n> \r\n> + /*\r\n> + * If the parallel apply worker is applying the spooled\r\n> + * messages, we save the current file position and close the\r\n> + * file to prevent the file from being accidentally closed on\r\n> + * rollback.\r\n> + */\r\n> + if (stream_fd)\r\n> + {\r\n> + BufFileTell(stream_fd, &fileno, &offset); BufFileClose(stream_fd); \r\n> + reopen_stream_fd = true; }\r\n> +\r\n> RollbackToSavepoint(spname);\r\n> CommitTransactionCommand();\r\n> subxactlist = list_truncate(subxactlist, i + 1);\r\n> +\r\n> + /*\r\n> + * Reopen the file and set the file position to the saved\r\n> + * position.\r\n> + */\r\n> + if (reopen_stream_fd)\r\n> \r\n> It seems a bit vague to just refer to \"close the file\" and \"reopen the \r\n> file\" in these comments. IMO it would be better to call this file by a \r\n> name like \"the message spool file\" or similar. Please check all other \r\n> similar comments.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 13. pa_set_fileset_state\r\n> \r\n> /*\r\n> + * Set the fileset_state flag for the given parallel apply worker. \r\n> +The\r\n> + * stream_fileset of the leader apply worker will be written into the \r\n> +shared\r\n> + * memory if the fileset_state is LEADER_FILESET_ACCESSIBLE.\r\n> + */\r\n> +void\r\n> +pa_set_fileset_state(ParallelApplyWorkerShared *wshared, \r\n> +LeaderFileSetState fileset_state) {\r\n> \r\n> 13a.\r\n> \r\n> It is an enum -- not a \"flag\", so:\r\n> \r\n> \"fileset_state flag\" -> \"fileste state\"\r\n\r\nChanged.\r\n\r\n> ~~\r\n> \r\n> 13b.\r\n> \r\n> It seemed strange to me that the comment/code says this state is only \r\n> written to shm when it is \"ACCESSIBLE\".... IIUC this same filestate \r\n> lingers around to be reused for other workers so I expected the state \r\n> should *always* be written whenever the LA changes it. 
(I mean even if \r\n> the PA is not needing to look at this member, I still think it should \r\n> have the current/correct value in it).\r\n\r\nI think we will always change the state.\r\nOr do you mean the fileset is only written(not the state) when it is ACCESSIBLE?\r\nThe fileset cannot be used before it's READY, so I didn't write that fileset into\r\nshared memory before that.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 14. TRANS_LEADER_SEND_TO_PARALLEL\r\n> \r\n> + * TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> + * The action means that we are in the leader apply worker and have \r\n> + sent\r\n> some\r\n> + * changes to the parallel apply worker, but the remaining changes \r\n> + need to be\r\n> + * serialized to disk due to timeout while sending data, and the \r\n> + parallel apply\r\n> + * worker will apply these changes when the final commit arrives.\r\n> + *\r\n> + * One might think we can use LEADER_SERIALIZE directly. But in \r\n> + partial\r\n> + * serialize mode, in addition to serializing changes to file, the \r\n> + leader\r\n> + * worker needs to write the STREAM_XXX message to disk, and needs to \r\n> + wait\r\n> for\r\n> + * parallel apply worker to finish the transaction when processing \r\n> + the\r\n> + * transaction finish command. So a new action was introduced to make \r\n> + the\r\n> logic\r\n> + * clearer.\r\n> + *\r\n> * TRANS_LEADER_SEND_TO_PARALLEL:\r\n> \r\n> \r\n> SUGGESTION (Minor wording changes)\r\n> The action means that we are in the leader apply worker and have sent \r\n> some changes directly to the parallel apply worker, due to timeout \r\n> while sending data the remaining changes need to be serialized to \r\n> disk. 
The parallel apply worker will apply these serialized changes \r\n> when the final commit arrives.\r\n> \r\n> LEADER_SERIALIZE could not be used for this case because, in addition \r\n> to serializing changes, the leader worker also needs to write the \r\n> STREAM_XXX message to disk, and wait for the parallel apply worker to \r\n> finish the transaction when processing the transaction finish command.\r\n> So this new action was introduced to make the logic clearer.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 15.\r\n> /* Actions for streaming transactions. */\r\n> TRANS_LEADER_SERIALIZE,\r\n> + TRANS_LEADER_PARTIAL_SERIALIZE,\r\n> TRANS_LEADER_SEND_TO_PARALLEL,\r\n> TRANS_PARALLEL_APPLY\r\n> \r\n> Although it makes no difference I felt it would be better to put \r\n> TRANS_LEADER_PARTIAL_SERIALIZE *after* TRANS_LEADER_SEND_TO_PARALLEL \r\n> because that would be the order that these mode changes occur in the \r\n> logic...\r\n\r\nI thought that it is fine as it follows LEADER_SERIALIZE which is similar to\r\nLEADER_PARTIAL_SERIALIZE.\r\n\r\n> ~~~\r\n> \r\n> 16.\r\n> \r\n> @@ -375,7 +388,7 @@ typedef struct ApplySubXactData static \r\n> ApplySubXactData subxact_data = {0, 0, InvalidTransactionId, NULL};\r\n> \r\n> static inline void subxact_filename(char *path, Oid subid, \r\n> TransactionId xid); -static inline void changes_filename(char *path, \r\n> Oid subid, TransactionId xid);\r\n> +inline void changes_filename(char *path, Oid subid, TransactionId \r\n> +xid);\r\n> \r\n> IIUC (see [1]) when this function was made non-static the \"inline\"\r\n> should have been put into the header file.\r\n\r\nChanged this function from \"inline void\" to \"void\" as I am not sure whether it is better to put\r\nthis function's definition in the header file.\r\n\r\n> ~\r\n> \r\n> 17.\r\n> @@ -388,10 +401,9 @@ static inline void cleanup_subxact_info(void);\r\n> /*\r\n> * Serialize and deserialize changes for a toplevel transaction.\r\n> */\r\n> -static void stream_cleanup_files(Oid subid, 
TransactionId xid); \r\n> static void stream_open_file(Oid subid, TransactionId xid,\r\n> bool first_segment);\r\n> -static void stream_write_change(char action, StringInfo s);\r\n> +static void stream_write_message(TransactionId xid, char action, \r\n> +StringInfo s);\r\n> static void stream_close_file(void);\r\n> \r\n> 17a.\r\n> \r\n> I felt just saying \"file/files\" is too vague. All the references to \r\n> the file should be consistent, so IMO everything would be better named\r\n> like:\r\n> \r\n> \"stream_cleanup_files\" -> \"stream_msg_spoolfile_cleanup()\"\r\n> \"stream_open_file\" -> \"stream_msg_spoolfile_open()\"\r\n> \"stream_close_file\" -> \"stream_msg_spoolfile_close()\"\r\n> \"stream_write_message\" -> \"stream_msg_spoolfile_write_msg()\"\r\n\r\nRenamed the function stream_write_message to stream_open_and_write_change.\r\n\r\n> ~\r\n> \r\n> 17b.\r\n> IMO there is not enough distinction here between function names \r\n> stream_write_message and stream_write_change. e.g. You cannot really \r\n> tell from their names what might be the difference.\r\n\r\nChanged the name.\r\n\r\n> ~~~\r\n> \r\n> 18.\r\n> \r\n> @@ -586,6 +595,7 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> TransactionId current_xid;\r\n> ParallelApplyWorkerInfo *winfo;\r\n> TransApplyAction apply_action;\r\n> + StringInfoData original_msg;\r\n> \r\n> apply_action = get_transaction_apply_action(stream_xid, &winfo);\r\n> \r\n> @@ -595,6 +605,8 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> \r\n> Assert(TransactionIdIsValid(stream_xid));\r\n> \r\n> + original_msg = *s;\r\n> +\r\n> /*\r\n> * We should have received XID of the subxact as the first part of the\r\n> * message, so extract it.\r\n> @@ -618,10 +630,14 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> stream_write_change(action, s);\r\n> return true;\r\n> \r\n> + case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> case 
TRANS_LEADER_SEND_TO_PARALLEL:\r\n> Assert(winfo);\r\n> \r\n> - pa_send_data(winfo, s->len, s->data);\r\n> + if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL) \r\n> + pa_send_data(winfo, s->len, s->data); else \r\n> + stream_write_change(action, &original_msg);\r\n> \r\n> The original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE \r\n> case so I think it should only be declared/assigned in the scope of \r\n> that 'else'\r\n\r\nThe member 'cursor' of 's' is changed after invoking the function pq_getmsgint.\r\nSo 'original_msg' is assigned before invoking the function pq_getmsgint.\r\n\r\n> ~\r\n> \r\n> 20.\r\n> \r\n> + /*\r\n> + * Close the file before committing if the parallel apply is\r\n> + * applying spooled changes.\r\n> + */\r\n> + if (stream_fd)\r\n> + BufFileClose(stream_fd);\r\n> \r\n> I found this a bit confusing because there is already a\r\n> stream_close_file() wrapper function which does almost the same as \r\n> this. So either this code should be calling that function, or the \r\n> comment here should be explaining why this code is NOT calling that \r\n> function.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 21. serialize_stream_start\r\n> \r\n> +/*\r\n> + * Initialize fileset (if not already done).\r\n> + *\r\n> + * Create a new file when first_segment is true, otherwise open the \r\n> +existing\r\n> + * file.\r\n> + */\r\n> +void\r\n> +serialize_stream_start(TransactionId xid, bool first_segment)\r\n> \r\n> IMO this function should be called stream_msg_spoolfile_init() or\r\n> stream_msg_spoolfile_begin() to match the pattern for function names \r\n> of the message spool file that I previously suggested. (see review \r\n> comment #17a)\r\n\r\nI am not sure the name is better. I will think it over and adjust in the next version.\r\n\r\n> ~\r\n> \r\n> 22.\r\n> \r\n> + /*\r\n> + * Initialize the worker's stream_fileset if we haven't yet. This 
This \r\n> + will be\r\n> + * used for the entire duration of the worker so create it in a \r\n> + permanent\r\n> + * context. We create this on the very first streaming message from \r\n> + any\r\n> + * transaction and then use it for this and other streaming transactions.\r\n> + * Now, we could create a fileset at the start of the worker as well \r\n> + but\r\n> + * then we won't be sure that it will ever be used.\r\n> + */\r\n> + if (!MyLogicalRepWorker->stream_fileset)\r\n> \r\n> I assumed this is a typo \"Now,\" --> \"Note,\" ?\r\n\r\nThat seems the existing comments, I am not sure it's a typo or not.\r\n\r\n> ~\r\n> \r\n> 24.\r\n> \r\n> /*\r\n> - * Start a transaction on stream start, this transaction will be\r\n> - * committed on the stream stop unless it is a tablesync worker in\r\n> - * which case it will be committed after processing all the\r\n> - * messages. We need the transaction for handling the buffile,\r\n> - * used for serializing the streaming data and subxact info.\r\n> + * serialize_stream_start will start a transaction, this\r\n> + * transaction will be committed on the stream stop unless it is a\r\n> + * tablesync worker in which case it will be committed after\r\n> + * processing all the messages. We need the transaction for\r\n> + * handling the buffile, used for serializing the streaming data\r\n> + * and subxact info.\r\n> */\r\n> - begin_replication_step();\r\n> + serialize_stream_start(stream_xid, first_segment); break;\r\n> \r\n> Make the comment a bit more natural.\r\n> \r\n> SUGGESTION\r\n> \r\n> Function serialize_stream_start starts a transaction. 
This transaction \r\n> will be committed on the stream stop unless it is a tablesync worker \r\n> in which case it will be committed after processing all the messages.\r\n> We need this transaction for handling the BufFile, used for \r\n> serializing the streaming data and subxact info.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 25.\r\n> \r\n> + case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> /*\r\n> - * Initialize the worker's stream_fileset if we haven't yet. This\r\n> - * will be used for the entire duration of the worker so create it\r\n> - * in a permanent context. We create this on the very first\r\n> - * streaming message from any transaction and then use it for this\r\n> - * and other streaming transactions. Now, we could create a\r\n> - * fileset at the start of the worker as well but then we won't be\r\n> - * sure that it will ever be used.\r\n> + * The file should have been created when entering\r\n> + * PARTIAL_SERIALIZE mode so no need to create it again. The\r\n> + * transaction started in serialize_stream_start will be committed\r\n> + * on the stream stop.\r\n> */\r\n> - if (!MyLogicalRepWorker->stream_fileset)\r\n> \r\n> BEFORE\r\n> The file should have been created when entering PARTIAL_SERIALIZE mode \r\n> so no need to create it again.\r\n> \r\n> SUGGESTION\r\n> The message spool file was already created when entering \r\n> PARTIAL_SERIALIZE mode.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 26. 
serialize_stream_stop\r\n> \r\n> /*\r\n> + * Update the information about subxacts and close the file.\r\n> + *\r\n> + * This function should be called when the serialize_stream_start \r\n> +function has\r\n> + * been called.\r\n> + */\r\n> +void\r\n> +serialize_stream_stop(TransactionId xid)\r\n> \r\n> Maybe 2nd part of that comment should be something more like\r\n> \r\n> SUGGESTION\r\n> This function ends what was started by the function serialize_stream_start().\r\n\r\nI am thinking about a new function name and will adjust this in next version.\r\n\r\n> ~\r\n> \r\n> 27.\r\n> \r\n> + /*\r\n> + * Close the file with serialized changes, and serialize information \r\n> + about\r\n> + * subxacts for the toplevel transaction.\r\n> + */\r\n> + subxact_info_write(MyLogicalRepWorker->subid, xid); \r\n> + stream_close_file();\r\n> \r\n> Should the comment and the code be in the same order?\r\n> \r\n> SUGGESTION\r\n> Serialize information about subxacts for the toplevel transaction, \r\n> then close the stream messages spool file.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 28. handle_stream_abort\r\n> \r\n> + case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> + Assert(winfo);\r\n> +\r\n> + /*\r\n> + * Parallel apply worker might have applied some changes, so write\r\n> + * the STREAM_ABORT message so that the parallel apply worker can\r\n> + * rollback the subtransaction if needed.\r\n> + */\r\n> + stream_write_message(xid, LOGICAL_REP_MSG_STREAM_ABORT, \r\n> + &original_msg);\r\n> +\r\n> \r\n> 28a.\r\n> The original_msg is not used except for TRANS_LEADER_PARTIAL_SERIALIZE \r\n> case so I think it should only be declared/assigned in the scope of \r\n> that case.\r\n> \r\n> ~\r\n> \r\n> 28b.\r\n> \"so that the parallel apply worker can\" -> \"so that it can\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 29. 
apply_spooled_messages\r\n> \r\n> +void\r\n> +apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\r\n> + XLogRecPtr lsn)\r\n> {\r\n> StringInfoData s2;\r\n> int nchanges;\r\n> char path[MAXPGPATH];\r\n> char *buffer = NULL;\r\n> MemoryContext oldcxt;\r\n> - BufFile *fd;\r\n> \r\n> - maybe_start_skipping_changes(lsn);\r\n> + if (!am_parallel_apply_worker())\r\n> + maybe_start_skipping_changes(lsn);\r\n> \r\n> /* Make sure we have an open transaction */\r\n> begin_replication_step();\r\n> @@ -1810,8 +1913,8 @@ apply_spooled_messages(TransactionId xid, \r\n> XLogRecPtr lsn)\r\n> changes_filename(path, MyLogicalRepWorker->subid, xid);\r\n> elog(DEBUG1, \"replaying changes from file \\\"%s\\\"\", path);\r\n> \r\n> - fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, \r\n> O_RDONLY,\r\n> - false);\r\n> + stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, \r\n> + false); stream_xid = xid;\r\n> \r\n> IMO it seems strange to me that the fileset is passed as a parameter \r\n> but then the resulting fd is always assigned to a single global \r\n> variable (regardless of what the fileset was passed).\r\n\r\nI am not sure about this as we already have similar code in stream_open_file().\r\n\r\n> ~\r\n> \r\n> 30.\r\n> \r\n> - BufFileClose(fd);\r\n> -\r\n> + BufFileClose(stream_fd);\r\n> pfree(buffer);\r\n> pfree(s2.data);\r\n> \r\n> +done:\r\n> + stream_fd = NULL;\r\n> + stream_xid = InvalidTransactionId;\r\n> +\r\n> \r\n> This code fragment seems to be doing almost the same as what function\r\n> stream_close_file() is doing. Should you just call that instead?\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/include/replication/worker_internal.h\r\n> \r\n> 33. LeaderFileSetState\r\n> \r\n> +/* State of fileset in leader apply worker. 
*/ typedef enum \r\n> +LeaderFileSetState { LEADER_FILESET_UNKNOWN, LEADER_FILESET_BUSY, \r\n> +LEADER_FILESET_ACCESSIBLE } LeaderFileSetState;\r\n> \r\n> 33a.\r\n> \r\n> Missing from typedefs.list?\r\n> \r\n> ~\r\n> \r\n> 33b.\r\n> \r\n> I thought some more explanatory comments for the meaning of \r\n> BUSY/ACCESSIBLE should be here.\r\n>\r\n> ~\r\n> \r\n> 33c.\r\n> \r\n> READY might be a better value than ACCESSIBLE\r\n> \r\n> ~\r\n> \r\n> 33d.\r\n> I'm not sure what usefulness does the \"LEADER_\" and \"Leader\" prefixes \r\n> give here. Maybe a name like PartialFileSetStat is more meaningful?\r\n> \r\n> e.g. like this?\r\n> \r\n> typedef enum PartialFileSetState\r\n> {\r\n> FS_UNKNOWN,\r\n> FS_BUSY,\r\n> FS_READY\r\n> } PartialFileSetState;\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 34. ParallelApplyWorkerShared\r\n> \r\n> + /*\r\n> + * The leader apply worker will serialize changes to the file after\r\n> + * entering PARTIAL_SERIALIZE mode and share the fileset with the \r\n> + parallel\r\n> + * apply worker when processing the transaction finish command. And \r\n> + then\r\n> + * the parallel apply worker will apply all the spooled messages.\r\n> + *\r\n> + * Don't use SharedFileSet here as we need the fileset to survive \r\n> + after\r\n> + * releasing the shared memory so that the leader apply worker can \r\n> + re-use\r\n> + * the fileset for next streaming transaction.\r\n> + */\r\n> + LeaderFileSetState fileset_state;\r\n> + FileSet fileset;\r\n> \r\n> Minor rewording of that comment\r\n> \r\n> SUGGESTION\r\n> After entering PARTIAL_SERIALIZE mode, the leader apply worker will \r\n> serialize changes to the file, and share the fileset with the parallel \r\n> apply worker when processing the transaction finish command. 
Then the \r\n> parallel apply worker will apply all the spooled messages.\r\n> \r\n> FileSet is used here instead of SharedFileSet because we need it to \r\n> survive after releasing the shared memory so that the leader apply \r\n> worker can re-use the same fileset for the next streaming transaction.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 35. globals\r\n> \r\n> /*\r\n> + * Indicates whether the leader apply worker needs to serialize the\r\n> + * remaining changes to disk due to timeout when sending data to the\r\n> + * parallel apply worker.\r\n> + */\r\n> + bool serialize_changes;\r\n> \r\n> 35a.\r\n> I wonder if the comment would be better to also mention \"via shared memory\".\r\n> \r\n> SUGGESTION\r\n> \r\n> Indicates whether the leader apply worker needs to serialize the \r\n> remaining changes to disk due to timeout when attempting to send data \r\n> to the parallel apply worker via shared memory.\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 29 Nov 2022 05:11:34 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 29, 2022 at 10:18 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed all comments.\n>\n\nReview comments on v53-0001*\n==========================\n1.\n Subscription *MySubscription = NULL;\n-static bool MySubscriptionValid = false;\n+bool MySubscriptionValid = false;\n\nIt seems still this variable is used in worker.c, so why it's scope changed?\n\n2.\n/* fields valid only when processing streamed transaction */\n-static bool in_streamed_transaction = false;\n+bool in_streamed_transaction = false;\n\nIs it really required to change the scope of this variable? 
Can we\nthink of exposing a macro or inline function to check it in\napplyparallelworker.c?\n\n3.\nshould_apply_changes_for_rel(LogicalRepRelMapEntry *rel)\n {\n if (am_tablesync_worker())\n return MyLogicalRepWorker->relid == rel->localreloid;\n+ else if (am_parallel_apply_worker())\n+ {\n+ if (rel->state != SUBREL_STATE_READY)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"logical replication parallel apply worker for subscription\n\\\"%s\\\" will stop\",\n\nIs this check sufficient? What if the rel->state is\nSUBREL_STATE_UNKNOWN? I think that will be possible when the refresh\npublication has not been yet performed after adding a new relation to\nthe publication. If that is true then won't we need to simply ignore\nthat change and continue instead of erroring out? Can you please once\ntest and check this case?\n\n4.\n+\n+ case TRANS_PARALLEL_APPLY:\n+ list_free(subxactlist);\n+ subxactlist = NIL;\n+\n+ apply_handle_commit_internal(&commit_data);\n\nI don't think we need to retail pfree subxactlist as this is allocated\nin TopTransactionContext and will be freed at commit/prepare. This way\nfreeing looks a bit adhoc to me and you need to expose this list\noutside applyparallelworker.c which doesn't seem like a good idea to\nme either.\n\n5.\n+ apply_handle_commit_internal(&commit_data);\n+\n+ pa_set_xact_state(MyParallelShared, PARALLEL_TRANS_FINISHED);\n+ pa_unlock_transaction(xid, AccessShareLock);\n+\n+ elog(DEBUG1, \"finished processing the transaction finish command\");\n\nI think in this and similar DEBUG logs, we can tell the exact command\ninstead of writing 'finish'.\n\n6.\napply_handle_stream_commit()\n{\n...\n+ /*\n+ * After sending the data to the parallel apply worker, wait for\n+ * that worker to finish. 
This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ pa_wait_for_xact_finish(winfo);\n+\n+ pgstat_report_stat(false);\n+ store_flush_position(commit_data.end_lsn);\n+ stop_skipping_changes();\n+\n+ (void) pa_free_worker(winfo, xid);\n...\n}\n\napply_handle_stream_prepare(StringInfo s)\n{\n+\n+ /*\n+ * After sending the data to the parallel apply worker, wait for\n+ * that worker to finish. This is necessary to maintain commit\n+ * order which avoids failures due to transaction dependencies and\n+ * deadlocks.\n+ */\n+ pa_wait_for_xact_finish(winfo);\n+ (void) pa_free_worker(winfo, prepare_data.xid);\n\n- /* unlink the files with serialized changes and subxact info. */\n- stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid);\n+ in_remote_transaction = false;\n+\n+ store_flush_position(prepare_data.end_lsn);\n\n\nIn both of the above functions, we should be consistent in calling\npa_free_worker() function which I think should be immediately after\npa_wait_for_xact_finish(). Is there a reason for not being consistent\nhere?\n\n7.\n+ res = shm_mq_receive(winfo->error_mq_handle, &nbytes, &data, true);\n+\n+ /*\n+ * The leader will detach from the error queue and set it to NULL\n+ * before preparing to stop all parallel apply workers, so we don't\n+ * need to handle error messages anymore.\n+ */\n+ if (!winfo->error_mq_handle)\n+ continue;\n\nThis check must be done before calling shm_mq_receive. So, changed it\nin the attached patch.\n\n8.\n@@ -2675,6 +3156,10 @@ store_flush_position(XLogRecPtr remote_lsn)\n {\n FlushPosition *flushpos;\n\n+ /* Skip for parallel apply workers. 
*/\n+ if (am_parallel_apply_worker())\n+ return;\n\nIt is okay to always update the flush position by leader apply worker\nbut I think the leader won't have updated value for XactLastCommitEnd\nas the local transaction is committed by parallel apply worker.\n\n9.\n@@ -3831,11 +4366,11 @@ ApplyWorkerMain(Datum main_arg)\n\n ereport(DEBUG1,\n (errmsg_internal(\"logical replication apply worker for subscription\n\\\"%s\\\" two_phase is %s\",\n- MySubscription->name,\n- MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_DISABLED\n? \"DISABLED\" :\n- MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING ?\n\"PENDING\" :\n- MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED ?\n\"ENABLED\" :\n- \"?\")));\n+ MySubscription->name,\n+ MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_DISABLED\n? \"DISABLED\" :\n+ MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING ?\n\"PENDING\" :\n+ MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED ?\n\"ENABLED\" :\n+ \"?\")));\n\nIs this change related to this patch?\n\n10. What is the reason to expose ApplyErrorCallbackArg via worker_internal.h?\n\n11. The order to declare pa_set_stream_apply_worker() in\nworker_internal.h and define in applyparallelworker.c is not the same.\nSimilarly, please check all other functions.\n\n12. Apart from the above, I have made a few changes in the comments\nand some other cosmetic changes in the attached patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 29 Nov 2022 18:03:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 29, 2022 at 6:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 12. 
Apart from the above, I have made a few changes in the comments\n> and some other cosmetic changes in the attached patch.\n>\n\nI have made some additional changes in the comments at various places.\nKindly check the attached and let me know your thoughts.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 30 Nov 2022 12:20:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Nov 29, 2022 at 10:18 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed all comments.\n>\n\nSome comments on v53-0002*\n========================\n1. I think testing the scenario where the shm_mq buffer is full\nbetween the leader and parallel apply worker would require a large\namount of data and then also there is no guarantee. How about having a\ndeveloper GUC [1] force_apply_serialize which allows us to serialize\nthe changes and only after commit the parallel apply worker would be\nallowed to apply it?\n\nI am not sure if we can reliably test the serialization of partial\nchanges (like some changes have been already sent to parallel apply\nworker and then serialization happens) but at least we can test the\nserialization of complete xacts and their execution via parallel apply\nworker.\n\n2.\n+ /*\n+ * The stream lock is released when processing changes in a\n+ * streaming block, so the leader needs to acquire the lock here\n+ * before entering PARTIAL_SERIALIZE mode to ensure that the\n+ * parallel apply worker will wait for the leader to release the\n+ * stream lock.\n+ */\n+ if (in_streamed_transaction &&\n+ action != LOGICAL_REP_MSG_STREAM_STOP)\n+ {\n+ pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);\n\nThis comment is not completely correct because we can even acquire the\nlock for the very streaming chunk. 
This check will work but doesn't\nappear future-proof or at least not very easy to understand though I\ndon't have a better suggestion at this stage. Can we think of a better\ncheck here?\n\n3. I have modified a few comments in v53-0002* patch and the\nincremental patch for the same is attached.\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 30 Nov 2022 16:23:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear hackers,\r\n\r\n> 1. I think testing the scenario where the shm_mq buffer is full\r\n> between the leader and parallel apply worker would require a large\r\n> amount of data and then also there is no guarantee. How about having a\r\n> developer GUC [1] force_apply_serialize which allows us to serialize\r\n> the changes and only after commit the parallel apply worker would be\r\n> allowed to apply it?\r\n> \r\n> I am not sure if we can reliably test the serialization of partial\r\n> changes (like some changes have been already sent to parallel apply\r\n> worker and then serialization happens) but at least we can test the\r\n> serialization of complete xacts and their execution via parallel apply\r\n> worker.\r\n\r\nI agree with adding the developer option, because the part where the LA serializes\r\nchanges and the PAs read and apply them might be complex. I have reported some\r\nbugs around here.\r\n\r\nOne idea: a threshold (integer) could be introduced as the developer GUC.\r\nThe LA stops sending data to the PA via shm_mq_send() and jumps to the serialization part once\r\nit has sent more than (threshold) messages. 
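A rough sketch of that decision in the leader, just to illustrate the idea (the GUC name, the counter, and the helper below are all hypothetical, not taken from the current patch set):

```c
#include <stdbool.h>

/*
 * Hypothetical developer GUC: -1 means never force serialization (no-op),
 * 0 means serialize every change, and N > 0 means switch to serialization
 * after N changes have been sent to the parallel apply worker.
 */
static int	force_serialize_threshold = -1;

/* Changes already sent via shm_mq for the current streamed transaction. */
static int	changes_sent = 0;

/*
 * Return true if the leader should serialize the next change to a file
 * instead of sending it through the shared memory queue.
 */
static bool
should_force_serialize(void)
{
	if (force_serialize_threshold < 0)
		return false;			/* GUC disabled, normal behavior */
	return changes_sent >= force_serialize_threshold;
}
```

The leader would consult such a check just before each shm_mq_send() call, incrementing the counter on every successful send and resetting it at stream start.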
This may be able to test the partial-serialization case.\r\nDefault(-1) means no-op, and 0 means all changes must be serialized.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 30 Nov 2022 11:35:58 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, November 29, 2022 8:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Nov 29, 2022 at 10:18 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch which addressed all comments.\r\n> >\r\n> \r\n> Review comments on v53-0001*\r\n\r\nThanks for the comments!\r\n> ==========================\r\n> 1.\r\n> Subscription *MySubscription = NULL;\r\n> -static bool MySubscriptionValid = false;\r\n> +bool MySubscriptionValid = false;\r\n> \r\n> It seems still this variable is used in worker.c, so why it's scope changed?\r\n\r\nI think it's not needed. Removed.\r\n\r\n> 2.\r\n> /* fields valid only when processing streamed transaction */ -static bool\r\n> in_streamed_transaction = false;\r\n> +bool in_streamed_transaction = false;\r\n> \r\n> Is it really required to change the scope of this variable? Can we think of\r\n> exposing a macro or inline function to check it in applyparallelworker.c?\r\n\r\nIntroduced a new function.\r\n\r\n> 3.\r\n> should_apply_changes_for_rel(LogicalRepRelMapEntry *rel) {\r\n> if (am_tablesync_worker())\r\n> return MyLogicalRepWorker->relid == rel->localreloid;\r\n> + else if (am_parallel_apply_worker())\r\n> + {\r\n> + if (rel->state != SUBREL_STATE_READY)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"logical replication parallel apply worker for subscription\r\n> \\\"%s\\\" will stop\",\r\n> \r\n> Is this check sufficient? 
What if the rel->state is SUBREL_STATE_UNKNOWN? I\r\n> think that will be possible when the refresh publication has not been yet\r\n> performed after adding a new relation to the publication. If that is true then\r\n> won't we need to simply ignore that change and continue instead of erroring\r\n> out? Can you please once test and check this case?\r\n\r\nYou are right. Changed to not report an ERROR for SUBREL_STATE_UNKNOWN.\r\n\r\n> 4.\r\n> +\r\n> + case TRANS_PARALLEL_APPLY:\r\n> + list_free(subxactlist);\r\n> + subxactlist = NIL;\r\n> +\r\n> + apply_handle_commit_internal(&commit_data);\r\n> \r\n> I don't think we need to retail pfree subxactlist as this is allocated in\r\n> TopTransactionContext and will be freed at commit/prepare. This way freeing\r\n> looks a bit adhoc to me and you need to expose this list outside\r\n> applyparallelworker.c which doesn't seem like a good idea to me either.\r\n\r\nRemoved the list_free.\r\n\r\n> 5.\r\n> + apply_handle_commit_internal(&commit_data);\r\n> +\r\n> + pa_set_xact_state(MyParallelShared, PARALLEL_TRANS_FINISHED);\r\n> + pa_unlock_transaction(xid, AccessShareLock);\r\n> +\r\n> + elog(DEBUG1, \"finished processing the transaction finish command\");\r\n> \r\n> I think in this and similar DEBUG logs, we can tell the exact command instead of\r\n> writing 'finish'.\r\n\r\nChanged.\r\n\r\n> 6.\r\n> apply_handle_stream_commit()\r\n> {\r\n> ...\r\n> + /*\r\n> + * After sending the data to the parallel apply worker, wait for\r\n> + * that worker to finish. 
This is necessary to maintain commit\r\n> + * order which avoids failures due to transaction dependencies and\r\n> + * deadlocks.\r\n> + */\r\n> + pa_wait_for_xact_finish(winfo);\r\n> +\r\n> + pgstat_report_stat(false);\r\n> + store_flush_position(commit_data.end_lsn);\r\n> + stop_skipping_changes();\r\n> +\r\n> + (void) pa_free_worker(winfo, xid);\r\n> ...\r\n> }\r\n\r\n> apply_handle_stream_prepare(StringInfo s) {\r\n> +\r\n> + /*\r\n> + * After sending the data to the parallel apply worker, wait for\r\n> + * that worker to finish. This is necessary to maintain commit\r\n> + * order which avoids failures due to transaction dependencies and\r\n> + * deadlocks.\r\n> + */\r\n> + pa_wait_for_xact_finish(winfo);\r\n> + (void) pa_free_worker(winfo, prepare_data.xid);\r\n> \r\n> - /* unlink the files with serialized changes and subxact info. */\r\n> - stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid);\r\n> + in_remote_transaction = false;\r\n> +\r\n> + store_flush_position(prepare_data.end_lsn);\r\n> \r\n> \r\n> In both of the above functions, we should be consistent in calling\r\n> pa_free_worker() function which I think should be immediately after\r\n> pa_wait_for_xact_finish(). Is there a reason for not being consistent here?\r\n\r\nChanged the order to make them consistent.\r\n\r\n> 7.\r\n> + res = shm_mq_receive(winfo->error_mq_handle, &nbytes, &data, true);\r\n> +\r\n> + /*\r\n> + * The leader will detach from the error queue and set it to NULL\r\n> + * before preparing to stop all parallel apply workers, so we don't\r\n> + * need to handle error messages anymore.\r\n> + */\r\n> + if (!winfo->error_mq_handle)\r\n> + continue;\r\n> \r\n> This check must be done before calling shm_mq_receive. So, changed it in the\r\n> attached patch.\r\n\r\nThanks, merged.\r\n\r\n> 8.\r\n> @@ -2675,6 +3156,10 @@ store_flush_position(XLogRecPtr remote_lsn) {\r\n> FlushPosition *flushpos;\r\n> \r\n> + /* Skip for parallel apply workers. 
*/ if (am_parallel_apply_worker())\r\n> + return;\r\n> \r\n> It is okay to always update the flush position by leader apply worker but I think\r\n> the leader won't have updated value for XactLastCommitEnd as the local\r\n> transaction is committed by parallel apply worker.\r\n\r\nI added a field in shared memory so that the parallel apply worker can pass\r\nthe XactLastCommitEnd to the leader, and then the leader will store that.\r\n\r\n> 9.\r\n> @@ -3831,11 +4366,11 @@ ApplyWorkerMain(Datum main_arg)\r\n> \r\n> ereport(DEBUG1,\r\n> (errmsg_internal(\"logical replication apply worker for subscription \\\"%s\\\"\r\n> two_phase is %s\",\r\n> - MySubscription->name,\r\n> - MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_DISABLED\r\n> ? \"DISABLED\" :\r\n> - MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_PENDING ?\r\n> \"PENDING\" :\r\n> - MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_ENABLED ?\r\n> \"ENABLED\" :\r\n> - \"?\")));\r\n> + MySubscription->name,\r\n> + MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_DISABLED\r\n> ? \"DISABLED\" :\r\n> + MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_PENDING ?\r\n> \"PENDING\" :\r\n> + MySubscription->twophasestate ==\r\n> LOGICALREP_TWOPHASE_STATE_ENABLED ?\r\n> \"ENABLED\" :\r\n> + \"?\")));\r\n> \r\n> Is this change related to this patch?\r\n\r\nI think it was accidentally changed due to pgindent. Reverted.\r\n\r\n> 10. What is the reason to expose ApplyErrorCallbackArg via worker_internal.h?\r\n\r\nThe parallel apply worker needs to set the origin name into this. I introduced another function\r\nto set this.\r\n\r\n> 11. The order to declare pa_set_stream_apply_worker() in worker_internal.h and\r\n> define in applyparallelworker.c is not the same.\r\n> Similarly, please check all other functions.\r\n\r\nChanged.\r\n\r\n> 12. 
Apart from the above, I have made a few changes in the comments and\r\n> some other cosmetic changes in the attached patch.\r\n\r\nThanks, I have checked and merged them.\r\n\r\nAttach the new version patch set.\r\n\r\nI haven't addressed comment #1 and #2 from [1], I need to think about it and\r\nwill handle it soon. Besides, I haven't renamed serialize_stream_start/stop and\r\nhaven't finished the word consistency check for comments, I think I will handle\r\nthem soon.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1LGKYUDFZ_jFPrU497wQf2HNvt5a%2BtCTpqSeWSG6kfpSA%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 30 Nov 2022 13:40:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, November 30, 2022 9:41 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tuesday, November 29, 2022 8:34 PM Amit Kapila\r\n> > Review comments on v53-0001*\r\n> \r\n> Attach the new version patch set.\r\n\r\nSorry, there were some mistakes in the previous patch set.\r\nHere is the correct V54 patch set. I also ran pgindent for the patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 30 Nov 2022 13:51:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Nov 30, 2022 at 7:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 29, 2022 at 10:18 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Attach the new version patch which addressed all comments.\n> >\n>\n> Some comments on v53-0002*\n> ========================\n> 1. 
I think testing the scenario where the shm_mq buffer is full\n> between the leader and parallel apply worker would require a large\n> amount of data and then also there is no guarantee. How about having a\n> developer GUC [1] force_apply_serialize which allows us to serialize\n> the changes and only after commit the parallel apply worker would be\n> allowed to apply it?\n\n+1\n\nThe code coverage report shows that we don't cover the partial\nserialization codes. This GUC would improve the code coverage.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 1 Dec 2022 15:13:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Nov 30, 2022 at 10:51 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, November 30, 2022 9:41 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, November 29, 2022 8:34 PM Amit Kapila\n> > > Review comments on v53-0001*\n> >\n> > Attach the new version patch set.\n>\n> Sorry, there were some mistakes in the previous patch set.\n> Here is the correct V54 patch set. I also ran pgindent for the patch set.\n>\n\nThank you for updating the patches. Here are random review comments\nfor 0001 and 0002 patches.\n\nereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"logical replication parallel apply worker\nexited abnormally\"),\n errcontext(\"%s\", edata.context)));\nand\n\nereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"logical replication parallel apply worker\nexited because of subscription information change\")));\n\nI'm not sure ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is appropriate\nhere. 
Given that parallel apply worker has already reported the error\nmessage with the error code, I think we don't need to set the\nerrorcode for the logs from the leader process.\n\nAlso, I'm not sure the term \"exited abnormally\" is appropriate since\nwe use it when the server crashes for example. I think ERRORs reported\nhere don't mean that in general.\n\n---\nif (am_parallel_apply_worker() && on_subinfo_change)\n{\n /*\n * If a parallel apply worker exits due to the subscription\n * information change, we notify the leader apply worker so that the\n * leader can report more meaningful message in time and restart the\n * logical replication.\n */\n pq_putmessage('X', NULL, 0);\n}\n\nand\n\nereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"logical replication parallel apply worker\nexited because of subscription information change\")));\n\nDo we really need an additional message in case of 'X'? When we call\napply_worker_clean_exit with on_subinfo_change = true, we have\nreported the error message such as:\n\nereport(LOG,\n (errmsg(\"logical replication parallel apply worker for\nsubscription \\\"%s\\\" will stop because of a parameter change\",\n MySubscription->name)));\n\nI think that reporting a similar message from the leader might not be\nmeaningful for users.\n\n---\n- if (options->proto.logical.streaming &&\n- PQserverVersion(conn->streamConn) >= 140000)\n- appendStringInfoString(&cmd, \", streaming 'on'\");\n+ if (options->proto.logical.streaming_str)\n+ appendStringInfo(&cmd, \", streaming '%s'\",\n+\noptions->proto.logical.streaming_str);\n\nand\n\n+ /*\n+ * Assign the appropriate option value for streaming option\naccording to\n+ * the 'streaming' mode and the publisher's ability to\nsupport that mode.\n+ */\n+ if (server_version >= 160000 &&\n+ MySubscription->stream == SUBSTREAM_PARALLEL)\n+ {\n+ options.proto.logical.streaming_str = pstrdup(\"parallel\");\n+ MyLogicalRepWorker->parallel_apply = true;\n+ }\n+ else if 
(server_version >= 140000 &&\n+ MySubscription->stream != SUBSTREAM_OFF)\n+ {\n+ options.proto.logical.streaming_str = pstrdup(\"on\");\n+ MyLogicalRepWorker->parallel_apply = false;\n+ }\n+ else\n+ {\n+ options.proto.logical.streaming_str = NULL;\n+ MyLogicalRepWorker->parallel_apply = false;\n+ }\n\nThis change moves the code of adjustment of the streaming option based\non the publisher server version from libpqwalreceiver.c to worker.c.\nOn the other hand, the similar logic for other parameters such as\n\"two_phase\" and \"origin\" are still done in libpqwalreceiver.c. How\nabout passing MySubscription->stream via WalRcvStreamOptions and\nconstructing a streaming option string in libpqrcv_startstreaming()?\nIn ApplyWorkerMain(), we just need to set\nMyLogicalRepWorker->parallel_apply = true if (server_version >= 160000\n&& MySubscription->stream == SUBSTREAM_PARALLEL). We won't need\npstrdup for \"parallel\" and \"on\", and it's more consistent with other\nparameters.\n\n---\n+ * We maintain a worker pool to avoid restarting workers for each streaming\n+ * transaction. We maintain each worker's information in the\n\nDo we need to describe the pool in the doc?\n\n---\n+ * in AccessExclusive mode at transaction finish commands (STREAM_COMMIT and\n+ * STREAM_PREAPRE) and release it immediately.\n\ntypo, s/STREAM_PREAPRE/STREAM_PREPARE/\n\n---\n+/* Parallel apply workers hash table (initialized on first use). */\n+static HTAB *ParallelApplyWorkersHash = NULL;\n+\n+/*\n+ * A list to maintain the active parallel apply workers. The information for\n+ * the new worker is added to the list after successfully launching it. The\n+ * list entry is removed if there are already enough workers in the worker\n+ * pool either at the end of the transaction or while trying to find a free\n+ * worker for applying the transaction. 
For more information about the worker
+ pool, see comments atop this file.
+ */
+static List *ParallelApplyWorkersList = NIL;

The names ParallelApplyWorkersHash and ParallelApplyWorkersList are very
similar but the usages are completely different. Probably we can find
better names such as ParallelApplyTxnHash and ParallelApplyWorkerPool.
And probably we can add more comments for ParallelApplyWorkersHash.

---
if (winfo->serialize_changes ||
 napplyworkers > (max_parallel_apply_workers_per_subscription / 2))
{
 int slot_no;
 uint16 generation;

 SpinLockAcquire(&winfo->shared->mutex);
 generation = winfo->shared->logicalrep_worker_generation;
 slot_no = winfo->shared->logicalrep_worker_slot_no;
 SpinLockRelease(&winfo->shared->mutex);

 logicalrep_pa_worker_stop(slot_no, generation);

 pa_free_worker_info(winfo);

 return true;
}

/* Unlink any files that were needed to serialize partial changes. */
if (winfo->serialize_changes)
 stream_cleanup_files(MyLogicalRepWorker->subid, winfo->shared->xid);

If winfo->serialize_changes is true, we return true in the first if
statement. So stream_cleanup_files in the second if statement is never
executed.

---
+ /*
+ * First, try to get a parallel apply worker from the pool,
if available.
+ * Otherwise, try to start a new parallel apply worker.
+ */
+ winfo = pa_get_available_worker();
+ if (!winfo)
+ {
+ winfo = pa_init_and_launch_worker();
+ if (!winfo)
+ return;
+ }

I think we don't necessarily need two separate functions for
getting a worker from the pool and launching a new worker. It seems to
reduce the readability. Instead, I think that we can have one function
that returns winfo if there is a free worker in the worker pool or it
launches a worker. That way, we can simply do something like:

winfo = pg_launch_parallel_worker()
if (!winfo)
 return;

---
+ /* Setup replication origin tracking. 
*/\n+ StartTransactionCommand();\n+ ReplicationOriginNameForLogicalRep(MySubscription->oid, InvalidOid,\n+\n originname, sizeof(originname));\n+ originid = replorigin_by_name(originname, true);\n+ if (!OidIsValid(originid))\n+ originid = replorigin_create(originname);\n\nThis code looks to allow parallel workers to use different origins in\ncases where the origin doesn't exist, but is that okay? Shouldn't we\npass missing_ok = false in this case?\n\n---\ncfbot seems to fail:\n\nhttps://cirrus-ci.com/task/6264595342426112\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 1 Dec 2022 16:57:50 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 1, 2022 at 11:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 30, 2022 at 7:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 29, 2022 at 10:18 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Attach the new version patch which addressed all comments.\n> > >\n> >\n> > Some comments on v53-0002*\n> > ========================\n> > 1. I think testing the scenario where the shm_mq buffer is full\n> > between the leader and parallel apply worker would require a large\n> > amount of data and then also there is no guarantee. How about having a\n> > developer GUC [1] force_apply_serialize which allows us to serialize\n> > the changes and only after commit the parallel apply worker would be\n> > allowed to apply it?\n>\n> +1\n>\n> The code coverage report shows that we don't cover the partial\n> serialization codes. This GUC would improve the code coverage.\n>\n\nShall we keep it as a boolean or an integer? 
Keeping it as an integer\nas suggested by Kuroda-San [1] would have an added advantage that we\ncan easily test the cases where serialization would be triggered after\nsending some changes.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB5866160DE81FA2D88B8F22DEF5159%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Dec 2022 14:16:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, December 1, 2022 3:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Wed, Nov 30, 2022 at 10:51 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, November 30, 2022 9:41 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tuesday, November 29, 2022 8:34 PM Amit Kapila\r\n> > > > Review comments on v53-0001*\r\n> > >\r\n> > > Attach the new version patch set.\r\n> >\r\n> > Sorry, there were some mistakes in the previous patch set.\r\n> > Here is the correct V54 patch set. I also ran pgindent for the patch set.\r\n> >\r\n> \r\n> Thank you for updating the patches. Here are random review comments for\r\n> 0001 and 0002 patches.\r\n\r\nThanks for the comments!\r\n\r\n> \r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"logical replication parallel apply worker exited\r\n> abnormally\"),\r\n> errcontext(\"%s\", edata.context))); and\r\n> \r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"logical replication parallel apply worker exited\r\n> because of subscription information change\")));\r\n> \r\n> I'm not sure ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is appropriate\r\n> here. 
Given that parallel apply worker has already reported the error message\r\n> with the error code, I think we don't need to set the errorcode for the logs\r\n> from the leader process.\r\n> \r\n> Also, I'm not sure the term \"exited abnormally\" is appropriate since we use it\r\n> when the server crashes for example. I think ERRORs reported here don't mean\r\n> that in general.\r\n\r\nHow about reporting \"xxx worker exited due to error\" ?\r\n\r\n> ---\r\n> if (am_parallel_apply_worker() && on_subinfo_change) {\r\n> /*\r\n> * If a parallel apply worker exits due to the subscription\r\n> * information change, we notify the leader apply worker so that the\r\n> * leader can report more meaningful message in time and restart the\r\n> * logical replication.\r\n> */\r\n> pq_putmessage('X', NULL, 0);\r\n> }\r\n> \r\n> and\r\n> \r\n> ereport(ERROR,\r\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"logical replication parallel apply worker exited\r\n> because of subscription information change\")));\r\n> \r\n> Do we really need an additional message in case of 'X'? When we call\r\n> apply_worker_clean_exit with on_subinfo_change = true, we have reported the\r\n> error message such as:\r\n> \r\n> ereport(LOG,\r\n> (errmsg(\"logical replication parallel apply worker for subscription\r\n> \\\"%s\\\" will stop because of a parameter change\",\r\n> MySubscription->name)));\r\n> \r\n> I think that reporting a similar message from the leader might not be\r\n> meaningful for users.\r\n\r\nThe intention is to let leader report more meaningful message if a worker\r\nexited due to subinfo change. Otherwise, the leader is likely to report an\r\nerror like \" lost connection ... to parallel apply worker\" when trying to send\r\ndata via shared memory if the worker exited. 
What do you think ?\r\n\r\n> ---\r\n> - if (options->proto.logical.streaming &&\r\n> - PQserverVersion(conn->streamConn) >= 140000)\r\n> - appendStringInfoString(&cmd, \", streaming 'on'\");\r\n> + if (options->proto.logical.streaming_str)\r\n> + appendStringInfo(&cmd, \", streaming '%s'\",\r\n> +\r\n> options->proto.logical.streaming_str);\r\n> \r\n> and\r\n> \r\n> + /*\r\n> + * Assign the appropriate option value for streaming option\r\n> according to\r\n> + * the 'streaming' mode and the publisher's ability to\r\n> support that mode.\r\n> + */\r\n> + if (server_version >= 160000 &&\r\n> + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> + {\r\n> + options.proto.logical.streaming_str = pstrdup(\"parallel\");\r\n> + MyLogicalRepWorker->parallel_apply = true;\r\n> + }\r\n> + else if (server_version >= 140000 &&\r\n> + MySubscription->stream != SUBSTREAM_OFF)\r\n> + {\r\n> + options.proto.logical.streaming_str = pstrdup(\"on\");\r\n> + MyLogicalRepWorker->parallel_apply = false;\r\n> + }\r\n> + else\r\n> + {\r\n> + options.proto.logical.streaming_str = NULL;\r\n> + MyLogicalRepWorker->parallel_apply = false;\r\n> + }\r\n> \r\n> This change moves the code of adjustment of the streaming option based on\r\n> the publisher server version from libpqwalreceiver.c to worker.c.\r\n> On the other hand, the similar logic for other parameters such as \"two_phase\"\r\n> and \"origin\" are still done in libpqwalreceiver.c. How about passing\r\n> MySubscription->stream via WalRcvStreamOptions and constructing a\r\n> streaming option string in libpqrcv_startstreaming()?\r\n> In ApplyWorkerMain(), we just need to set\r\n> MyLogicalRepWorker->parallel_apply = true if (server_version >= 160000\r\n> && MySubscription->stream == SUBSTREAM_PARALLEL). We won't need\r\n> pstrdup for \"parallel\" and \"on\", and it's more consistent with other parameters.\r\n\r\nThanks for the suggestion. 
I thought about the same idea before, but it seems\r\nwe would need to introduce \" pg_subscription.h \" into libpqwalreceiver.c. The\r\nlibpqwalreceiver.c looks like a common place. So I am not sure it looks\r\nbetter to expose the detail of streaming option to it.\r\n\r\n> ---\r\n> + * We maintain a worker pool to avoid restarting workers for each\r\n> + streaming\r\n> + * transaction. We maintain each worker's information in the\r\n> \r\n> Do we need to describe the pool in the doc?\r\n\r\nI thought the worker pool is kind of internal information.\r\nMaybe we can add it later if we receive some feedback about this\r\nafter pushing the main patch.\r\n\r\n> ---\r\n> + * in AccessExclusive mode at transaction finish commands\r\n> + (STREAM_COMMIT and\r\n> + * STREAM_PREAPRE) and release it immediately.\r\n> \r\n> typo, s/STREAM_PREAPRE/STREAM_PREPARE/\r\n\r\nWill change.\r\n\r\n> ---\r\n> +/* Parallel apply workers hash table (initialized on first use). */\r\n> +static HTAB *ParallelApplyWorkersHash = NULL;\r\n> +\r\n> +/*\r\n> + * A list to maintain the active parallel apply workers. The\r\n> +information for\r\n> + * the new worker is added to the list after successfully launching it.\r\n> +The\r\n> + * list entry is removed if there are already enough workers in the\r\n> +worker\r\n> + * pool either at the end of the transaction or while trying to find a\r\n> +free\r\n> + * worker for applying the transaction. For more information about the\r\n> +worker\r\n> + * pool, see comments atop this file.\r\n> + */\r\n> +static List *ParallelApplyWorkersList = NIL;\r\n> \r\n> The names ParallelApplyWorkersHash and ParallelWorkersList are very similar\r\n> but the usages are completely different. 
Probably we can find better names\r\n> such as ParallelApplyTxnHash and ParallelApplyWorkerPool.\r\n> And probably we can add more comments for ParallelApplyWorkersHash.\r\n\r\nWill change.\r\n\r\n> ---\r\n> if (winfo->serialize_changes ||\r\n> napplyworkers > (max_parallel_apply_workers_per_subscription / 2)) {\r\n> int slot_no;\r\n> uint16 generation;\r\n> \r\n> SpinLockAcquire(&winfo->shared->mutex);\r\n> generation = winfo->shared->logicalrep_worker_generation;\r\n> slot_no = winfo->shared->logicalrep_worker_slot_no;\r\n> SpinLockRelease(&winfo->shared->mutex);\r\n> \r\n> logicalrep_pa_worker_stop(slot_no, generation);\r\n> \r\n> pa_free_worker_info(winfo);\r\n> \r\n> return true;\r\n> }\r\n> \r\n> /* Unlink any files that were needed to serialize partial changes. */ if\r\n> (winfo->serialize_changes)\r\n> stream_cleanup_files(MyLogicalRepWorker->subid, winfo->shared->xid);\r\n> \r\n> If winfo->serialize_changes is true, we return true in the first if statement. So\r\n> stream_cleanup_files in the second if statement is never executed.\r\n\r\npa_free_worker_info will also cleanup the fileset. But I think I can move that\r\nstream_cleanup_files before the \"... napplyworkers >\r\n(max_parallel_apply_workers_per_subscription / 2))\" check so that it would be\r\nmore clear.\r\n\r\n> ---\r\n> + /*\r\n> + * First, try to get a parallel apply worker from the pool,\r\n> if available.\r\n> + * Otherwise, try to start a new parallel apply worker.\r\n> + */\r\n> + winfo = pa_get_available_worker();\r\n> + if (!winfo)\r\n> + {\r\n> + winfo = pa_init_and_launch_worker();\r\n> + if (!winfo)\r\n> + return;\r\n> + }\r\n> \r\n> I think we don't necessarily need to separate two functions for getting a worker\r\n> from the pool and launching a new worker. It seems to reduce the readability.\r\n> Instead, I think that we can have one function that returns winfo if there is a free\r\n> worker in the worker pool or it launches a worker. 
That way, we can simply do\r\n> like:\r\n> \r\n> winfo = pg_launch_parallel_worker()\r\n> if (!winfo)\r\n> return;\r\n\r\nWill change\r\n\r\n> ---\r\n> + /* Setup replication origin tracking. */\r\n> + StartTransactionCommand();\r\n> + ReplicationOriginNameForLogicalRep(MySubscription->oid,\r\n> + InvalidOid,\r\n> +\r\n> originname, sizeof(originname));\r\n> + originid = replorigin_by_name(originname, true);\r\n> + if (!OidIsValid(originid))\r\n> + originid = replorigin_create(originname);\r\n> \r\n> This code looks to allow parallel workers to use different origins in cases where\r\n> the origin doesn't exist, but is that okay? Shouldn't we pass miassing_ok = false\r\n> in this case?\r\n>\r\n\r\nWill change\r\n\r\n> ---\r\n> cfbot seems to fails:\r\n> \r\n> https://cirrus-ci.com/task/6264595342426112\r\n\r\nThanks for reporting, it's due to a testcase problem, I will fix that test soon.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Thu, 1 Dec 2022 10:16:58 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Nov 30, 2022 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 2.\n> + /*\n> + * The stream lock is released when processing changes in a\n> + * streaming block, so the leader needs to acquire the lock here\n> + * before entering PARTIAL_SERIALIZE mode to ensure that the\n> + * parallel apply worker will wait for the leader to release the\n> + * stream lock.\n> + */\n> + if (in_streamed_transaction &&\n> + action != LOGICAL_REP_MSG_STREAM_STOP)\n> + {\n> + pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);\n>\n> This comment is not completely correct because we can even acquire the\n> lock for the very streaming chunk. 
This check will work but doesn't\n> appear future-proof or at least not very easy to understand though I\n> don't have a better suggestion at this stage. Can we think of a better\n> check here?\n>\n\nOne idea is that we acquire this lock every time and callers like\nstream_commit are responsible to release it. Also, we can handle the\nclose of stream file in the respective callers. I think that will make\nthis part of the patch easier to follow.\n\nSome other comments:\n=====================\n1. The handling of buffile inside pa_stream_abort() looks a bit ugly to\nme. I think you primarily required it because the buffile opened by\nparallel apply worker is in CurrentResourceOwner. Can we think of\nhaving a new resource owner to apply spooled messages? I think that\nwill avoid the need to have a special purpose code to handle buffiles\nin parallel apply worker.\n\n2.\n@@ -564,6 +571,7 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n TransactionId current_xid;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg;\n\n apply_action = get_transaction_apply_action(stream_xid, &winfo);\n\n@@ -573,6 +581,8 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n\n Assert(TransactionIdIsValid(stream_xid));\n\n+ original_msg = *s;\n+\n /*\n * We should have received XID of the subxact as the first part of the\n * message, so extract it.\n@@ -596,10 +606,14 @@ handle_streamed_transaction(LogicalRepMsgType\naction, StringInfo s)\n stream_write_change(action, s);\n return true;\n\n+ case TRANS_LEADER_PARTIAL_SERIALIZE:\n case TRANS_LEADER_SEND_TO_PARALLEL:\n Assert(winfo);\n\n- pa_send_data(winfo, s->len, s->data);\n+ if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)\n+ pa_send_data(winfo, s->len, s->data);\n+ else\n+ stream_write_change(action, &original_msg);\n\nPlease add the comment to specify the reason to remember the original string.\n\n3.\n@@ -1797,8 +1907,8 @@ 
apply_spooled_messages(TransactionId xid, XLogRecPtr lsn)\n changes_filename(path, MyLogicalRepWorker->subid, xid);\n elog(DEBUG1, \"replaying changes from file \\\"%s\\\"\", path);\n\n- fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDONLY,\n- false);\n+ stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);\n+ stream_xid = xid;\n\nWhy do we need stream_xid here? I think we can avoid having global\nstream_fd if the comment #1 is feasible.\n\n4.\n+ * TRANS_LEADER_APPLY:\n+ * The action means that we\n\n/The/This. Please make a similar change for other actions.\n\n5. Apart from the above, please find a few changes to the comments for\n0001 and 0002 patches in the attached patches.\n\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 1 Dec 2022 18:09:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 2, 2022 at 2:29 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 3. pa_setup_dsm\n>\n> +/*\n> + * Set up a dynamic shared memory segment.\n> + *\n> + * We set up a control region that contains a fixed-size worker info\n> + * (ParallelApplyWorkerShared), a message queue, and an error queue.\n> + *\n> + * Returns true on success, false on failure.\n> + */\n> +static bool\n> +pa_setup_dsm(ParallelApplyWorkerInfo *winfo)\n>\n> IMO that's confusing to say \"fixed-sized worker info\" when it's\n> referring to the ParallelApplyWorkerShared structure and not the other\n> ParallelApplyWorkerInfo.\n>\n> Might be better to say:\n>\n> \"a fixed-size worker info (ParallelApplyWorkerShared)\" -> \"a\n> fixed-size struct (ParallelApplyWorkerShared)\"\n>\n> ~~~\n>\n\nI find the existing wording better than what you are proposing. We can\nremove the structure name if you think that is better but IMO, current\nwording is good.\n\n>\n> 6. 
pa_free_worker_info\n>\n> + /*\n> + * Ensure this worker information won't be reused during worker\n> + * allocation.\n> + */\n> + ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList,\n> + winfo);\n>\n> SUGGESTION 1\n> Removing from the worker pool ensures this information won't be reused\n> during worker allocation.\n>\n> SUGGESTION 2 (more simply)\n> Remove from the worker pool.\n>\n\n+1 for the second suggestion.\n\n> ~~~\n>\n> 7. HandleParallelApplyMessage\n>\n> + /*\n> + * The actual error must have been reported by the parallel\n> + * apply worker.\n> + */\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"logical replication parallel apply worker exited abnormally\"),\n> + errcontext(\"%s\", edata.context)));\n>\n> Maybe it's better to remove the comment, but replace it with an\n> errhint that tells the user \"For the cause of this error see the error\n> logged by the logical replication parallel apply worker.\"\n>\n\nI am not sure if such an errhint is a good idea, anyway, I think both\nthe errors will be adjacent in the LOGs unless there is some other\nerror in the short span.\n> ~~~\n>\n> 17. apply_handle_stream_stop\n>\n> + case TRANS_PARALLEL_APPLY:\n> + elog(DEBUG1, \"applied %u changes in the streaming chunk\",\n> + parallel_stream_nchanges);\n> +\n> + /*\n> + * By the time parallel apply worker is processing the changes in\n> + * the current streaming block, the leader apply worker may have\n> + * sent multiple streaming blocks. This can lead to parallel apply\n> + * worker start waiting even when there are more chunk of streams\n> + * in the queue. So, try to lock only if there is no message left\n> + * in the queue. See Locking Considerations atop\n> + * applyparallelworker.c.\n> + */\n>\n> SUGGESTION (minor rewording)\n>\n> By the time the parallel apply worker is processing the changes in the\n> current streaming block, the leader apply worker may have sent\n> multiple streaming blocks. 
To prevent the parallel apply worker from waiting\n> unnecessarily, try to lock only if there is no message left in the\n> queue. See Locking Considerations atop applyparallelworker.c.\n>\n\nI have proposed the additional line (This can lead to parallel apply\nworker start waiting even when there are more chunk of streams in the\nqueue.) because it took me some time to understand this particular\nscenario.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 2 Dec 2022 15:27:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for making the patch. Following are my comments for v54-0003 and 0004.\r\n\r\n0003\r\n\r\npa_free_worker()\r\n\r\n+ /* Unlink any files that were needed to serialize partial changes. */\r\n+ if (winfo->serialize_changes)\r\n+ stream_cleanup_files(MyLogicalRepWorker->subid, winfo->shared->xid);\r\n+\r\n\r\nI think this part is not needed, because the LA cannot reach here if winfo->serialize_changes is true. Moreover stream_cleanup_files() is done in pa_free_worker_info().\r\n\r\nLogicalParallelApplyLoop()\r\n\r\nThe parallel apply worker wakes up every 0.1s even if we are in the PARTIAL_SERIALIZE mode. 
Do you have an idea to reduce that?\r\n\r\n```\r\n+ pa_spooled_messages();\r\n```\r\n\r\nComments are needed here, like \"Changes may be serialize...\".\r\n\r\npa_stream_abort()\r\n\r\n```\r\n+ /*\r\n+ * Reopen the file and set the file position to the saved\r\n+ * position.\r\n+ */\r\n+ if (reopen_stream_fd)\r\n+ {\r\n+ char path[MAXPGPATH];\r\n+\r\n+ changes_filename(path, MyLogicalRepWorker->subid, xid);\r\n+ stream_fd = BufFileOpenFileSet(&MyParallelShared->fileset,\r\n+ path, O_RDONLY, false);\r\n+ BufFileSeek(stream_fd, fileno, offset, SEEK_SET);\r\n+ }\r\n```\r\n\r\nMyParallelShared->serialize_changes may be used instead of reopen_stream_fd.\r\n\r\n\r\nworker.c\r\n\r\n```\r\n-#include \"storage/buffile.h\"\r\n```\r\n\r\nI think this include should not be removed.\r\n\r\n\r\nhandle_streamed_transaction()\r\n\r\n```\r\n+ if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)\r\n+ pa_send_data(winfo, s->len, s->data);\r\n+ else\r\n+ stream_write_change(action, &original_msg);\r\n```\r\n\r\nComments are needed here, 0001 has that but removed in 0002.\r\nThere are some similar lines.\r\n\r\n\r\n```\r\n+ /*\r\n+ * It is possible that while sending this change to parallel apply\r\n+ * worker we need to switch to serialize mode.\r\n+ */\r\n+ if (winfo->serialize_changes)\r\n+ pa_set_fileset_state(winfo->shared, FS_READY);\r\n```\r\n\r\nThere are three same parts in the code, can we combine them into a common part?\r\n\r\napply_spooled_messages()\r\n\r\n```\r\n+ /*\r\n+ * Break the loop if the parallel apply worker has finished applying\r\n+ * the transaction. 
The parallel apply worker should have closed the\r\n+ * file before committing.\r\n+ */\r\n+ if (am_parallel_apply_worker() &&\r\n+ MyParallelShared->xact_state == PARALLEL_TRANS_FINISHED)\r\n+ goto done;\r\n```\r\n\r\nI think pfree(buffer) and pfree(s2.data) should not be skipped.\r\nAnd this part should be below \"nchanges++;\"\r\n\r\n\r\n0004\r\n\r\nset_subscription_retry()\r\n\r\n```\r\n+ LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,\r\n+ AccessShareLock);\r\n+\r\n```\r\n\r\nI think AccessExclusiveLock should be acquired instead of AccessShareLock.\r\nIn AlterSubscription(), LockSharedObject(AccessExclusiveLock) seems to be used.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Fri, 2 Dec 2022 11:27:19 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 2, 2022 at 4:57 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> handle_streamed_transaction()\n>\n> ```\n> + if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL)\n> + pa_send_data(winfo, s->len, s->data);\n> + else\n> + stream_write_change(action, &original_msg);\n> ```\n>\n> Comments are needed here, 0001 has that but removed in 0002.\n> There are some similar lines.\n>\n\nI have suggested removing it because they were just saying what is\nevident from the code and doesn't seem to be adding any value. 
I would\nsay they were rather confusing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 2 Dec 2022 17:05:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "---------- Forwarded message ---------\nFrom: Peter Smith <smithpb2250@gmail.com>\nDate: Sat, Dec 3, 2022 at 8:03 AM\nSubject: Re: Perform streaming logical transactions by background\nworkers and parallel apply\nTo: Amit Kapila <amit.kapila16@gmail.com>\n\n\nOn Fri, Dec 2, 2022 at 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 2, 2022 at 2:29 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 3. pa_setup_dsm\n> >\n> > +/*\n> > + * Set up a dynamic shared memory segment.\n> > + *\n> > + * We set up a control region that contains a fixed-size worker info\n> > + * (ParallelApplyWorkerShared), a message queue, and an error queue.\n> > + *\n> > + * Returns true on success, false on failure.\n> > + */\n> > +static bool\n> > +pa_setup_dsm(ParallelApplyWorkerInfo *winfo)\n> >\n> > IMO that's confusing to say \"fixed-sized worker info\" when it's\n> > referring to the ParallelApplyWorkerShared structure and not the other\n> > ParallelApplyWorkerInfo.\n> >\n> > Might be better to say:\n> >\n> > \"a fixed-size worker info (ParallelApplyWorkerShared)\" -> \"a\n> > fixed-size struct (ParallelApplyWorkerShared)\"\n> >\n> > ~~~\n> >\n>\n> I find the existing wording better than what you are proposing. We can\n> remove the structure name if you think that is better but IMO, current\n> wording is good.\n>\n\nIncluding the structure name was helpful, but \"worker info\" made me\nwrongly think it was talking about ParallelApplyWorkerInfo (e.g.\n\"worker info\" was too much like WorkerInfo). 
So any different way to\nsay \"worker info\" might avoid that confusion.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sat, 3 Dec 2022 08:43:15 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Fwd: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "(Resending this because somehow my previous post did not appear in the\nmail archives)\n\n---------- Forwarded message ---------\nFrom: Peter Smith <smithpb2250@gmail.com>\nDate: Fri, Dec 2, 2022 at 7:59 PM\nSubject: Re: Perform streaming logical transactions by background\nworkers and parallel apply\nTo: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>, Masahiko Sawada\n<sawada.mshk@gmail.com>, wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com>, Dilip Kumar <dilipbalaut@gmail.com>,\nshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, PostgreSQL Hackers\n<pgsql-hackers@lists.postgresql.org>\n\n\nHere are my review comments for patch v54-0001.\n\n======\n\nFILE: .../replication/logical/applyparallelworker.c\n\n1. 
File header comment\n\n1a.\n\n+ * This file contains the code to launch, set up, and teardown parallel apply\n+ * worker which receives the changes from the leader worker and\ninvokes routines\n+ * to apply those on the subscriber database.\n\n\"parallel apply worker\" -> \"a parallel apply worker\"\n\n~\n\n1b.\n\n+ *\n+ * This file contains routines that are intended to support setting up, using\n+ * and tearing down a ParallelApplyWorkerInfo which is required to communicate\n+ * among leader and parallel apply workers.\n\n\"that are intended to support\" -> \"for\"\n\n\"required to communicate among leader and parallel apply workers.\" ->\n\"required so the leader worker and parallel apply workers can\ncommunicate with each other.\"\n\n~\n\n1c.\n\n+ *\n+ * The parallel apply workers are assigned (if available) as soon as xact's\n+ * first stream is received for subscriptions that have set their 'streaming'\n+ * option as parallel. The leader apply worker will send changes to this new\n+ * worker via shared memory. We keep this worker assigned till the transaction\n+ * commit is received and also wait for the worker to finish at commit. This\n+ * preserves commit ordering and avoid file I/O in most cases, although we\n+ * still need to spill to a file if there is no worker available. See comments\n+ * atop logical/worker to know more about streamed xacts whose changes are\n+ * spilled to disk. It is important to maintain commit order to avoid failures\n+ * due to (a) transaction dependencies, say if we insert a row in the first\n+ * transaction and update it in the second transaction on publisher then\n+ * allowing the subscriber to apply both in parallel can lead to failure in the\n+ * update. 
(b) deadlocks, allowing transactions that update the same set of\n+ * rows/tables in the opposite order to be applied in parallel can lead to\n+ * deadlocks.\n\n\"due to (a)\" -> \"due to: \"\n\n\"(a) transaction dependencies, \" -> \"(a) transaction dependencies - \"\n\n\". (b) deadlocks, \" => \"; (b) deadlocks - \"\n\n~\n\n1d.\n\n+ *\n+ * We maintain a worker pool to avoid restarting workers for each streaming\n+ * transaction. We maintain each worker's information in the\n+ * ParallelApplyWorkersList. After successfully launching a new worker, its\n+ * information is added to the ParallelApplyWorkersList. Once the worker\n+ * finishes applying the transaction, we mark it available for re-use. Now,\n+ * before starting a new worker to apply the streaming transaction, we check\n+ * the list for any available worker. Note that we maintain a maximum of half\n+ * the max_parallel_apply_workers_per_subscription workers in the pool and\n+ * after that, we simply exit the worker after applying the transaction.\n+ *\n\n\"We maintain a worker pool\" -> \"A worker pool is used\"\n\n\"We maintain each worker's information\" -> \"We maintain each worker's\ninformation (ParallelApplyWorkerInfo)\"\n\n\"we mark it available for re-use\" -> \"it is marked as available for re-use\"\n\n\"Note that we maintain a maximum of half\" -> \"Note that we retain a\nmaximum of half\"\n\n~\n\n1e.\n\n+ * XXX This worker pool threshold is a bit arbitrary and we can provide a GUC\n+ * variable for this in the future if required.\n\n\"a bit arbitrary\" -> \"arbitrary\"\n\n~\n\n1f.\n\n+ *\n+ * The leader apply worker will create a separate dynamic shared memory segment\n+ * when each parallel apply worker starts. The reason for this design is that\n+ * we cannot count how many workers will be started. 
It may be possible to\n+ * allocate enough shared memory in one segment based on the maximum number of\n+ * parallel apply workers (max_parallel_apply_workers_per_subscription), but\n+ * this would waste memory if no process is actually started.\n+ *\n\n\"we cannot count how many workers will be started.\" -> \"we cannot\npredict how many workers will be needed.\"\n\n~\n\n1g.\n\n+ * The dynamic shared memory segment will contain (a) a shm_mq that is used to\n+ * send changes in the transaction from leader apply worker to parallel apply\n+ * worker (b) another shm_mq that is used to send errors (and other messages\n+ * reported via elog/ereport) from the parallel apply worker to leader apply\n+ * worker (c) necessary information to be shared among parallel apply workers\n+ * and leader apply worker (i.e. members of ParallelApplyWorkerShared).\n\n\"will contain (a)\" => \"contains: (a)\"\n\n\"worker (b)\" -> \"worker; (b)\n\n\"worker (c)\" -> \"worker; (c)\"\n\n\"and leader apply worker\" -> \"and the leader apply worker\"\n\n~\n\n1h.\n\n+ *\n+ * Locking Considerations\n+ * ----------------------\n+ * Since the database structure (schema of subscription tables, constraints,\n+ * etc.) of the publisher and subscriber could be different, applying\n+ * transactions in parallel mode on the subscriber side can cause some\n+ * deadlocks that do not occur on the publisher side which is expected and can\n+ * happen even without parallel mode. In order to detect the deadlocks among\n+ * leader and parallel apply workers, we need to ensure that we wait using lmgr\n+ * locks, otherwise, such deadlocks won't be detected. The other approach was\n+ * to not allow parallelism when the schema of tables is different between the\n+ * publisher and subscriber but that would be too restrictive and would require\n+ * the publisher to send much more information than it is currently sending.\n+ *\n\n\"side which is expected and can happen even without parallel mode.\" =>\n\"side. 
This can happen even without parallel mode.\"\n\n\", otherwise, such deadlocks won't be detected.\" -> remove this\nbecause the beginning of the sentence says the same thing.\n\n\"The other approach was to not allow\" -> \"An alternative approach\ncould be to not allow\"\n\n~\n\n1i.\n\n+ *\n+ * 4) Lock types\n+ *\n+ * Both the stream lock and the transaction lock mentioned above are\n+ * session-level locks because both locks could be acquired outside the\n+ * transaction, and the stream lock in the leader need to persist across\n+ * transaction boundaries i.e. until the end of the streaming transaction.\n+ *-------------------------------------------------------------------------\n+ */\n\n\"need to persist\" -> \"needs to persist\"\n\n~~~\n\n2. ParallelApplyWorkersList\n\n+/*\n+ * A list to maintain the active parallel apply workers. The information for\n+ * the new worker is added to the list after successfully launching it. The\n+ * list entry is removed if there are already enough workers in the worker\n+ * pool either at the end of the transaction or while trying to find a free\n+ * worker for applying the transaction. For more information about the worker\n+ * pool, see comments atop this file.\n+ */\n+static List *ParallelApplyWorkersList = NIL;\n\n\"A list to maintain the active parallel apply workers.\" -> \"A list\n(pool) of active parallel apply workers.\"\n\n~~~\n\n3. 
pa_setup_dsm\n\n+/*\n+ * Set up a dynamic shared memory segment.\n+ *\n+ * We set up a control region that contains a fixed-size worker info\n+ * (ParallelApplyWorkerShared), a message queue, and an error queue.\n+ *\n+ * Returns true on success, false on failure.\n+ */\n+static bool\n+pa_setup_dsm(ParallelApplyWorkerInfo *winfo)\n\nIMO that's confusing to say \"fixed-sized worker info\" when it's\nreferring to the ParallelApplyWorkerShared structure and not the other\nParallelApplyWorkerInfo.\n\nMight be better to say:\n\n\"a fixed-size worker info (ParallelApplyWorkerShared)\" -> \"a\nfixed-size struct (ParallelApplyWorkerShared)\"\n\n~~~\n\n4. pa_init_and_launch_worker\n\n+ /*\n+ * The worker info can be used for the entire duration of the worker so\n+ * create it in a permanent context.\n+ */\n+ oldcontext = MemoryContextSwitchTo(ApplyContext);\n\nSUGGESTION\nThe worker info can be used for the lifetime of the worker process, so\ncreate it in a permanent context.\n\n~~~\n\n5. pa_allocate_worker\n\n+ /*\n+ * First, try to get a parallel apply worker from the pool, if available.\n+ * Otherwise, try to start a new parallel apply worker.\n+ */\n+ winfo = pa_get_available_worker();\n+ if (!winfo)\n+ {\n+ winfo = pa_init_and_launch_worker();\n+ if (!winfo)\n+ return;\n+ }\n\nSUGGESTION\nTry to get a parallel apply worker from the pool. If none is available\nthen start a new one.\n\n~~~\n\n6. pa_free_worker_info\n\n+ /*\n+ * Ensure this worker information won't be reused during worker\n+ * allocation.\n+ */\n+ ParallelApplyWorkersList = list_delete_ptr(ParallelApplyWorkersList,\n+ winfo);\n\nSUGGESTION 1\nRemoving from the worker pool ensures this information won't be reused\nduring worker allocation.\n\nSUGGESTION 2 (more simply)\nRemove from the worker pool.\n\n~~~\n\n7. 
HandleParallelApplyMessage\n\n+ /*\n+ * The actual error must have been reported by the parallel\n+ * apply worker.\n+ */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg("logical replication parallel apply worker exited abnormally"),\n+ errcontext("%s", edata.context)));\n\nMaybe it's better to remove the comment, but replace it with an\nerrhint that tells the user "For the cause of this error see the error\nlogged by the logical replication parallel apply worker."\n\n~\n\n8.\n\n+ case 'X':\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg("logical replication parallel apply worker exited because of\nsubscription information change")));\n+ break; /* Silence compiler warning. */\n+ default:\n\nAdd a blank line before the default:\n\n~\n\n9.\n\n+ /*\n+ * Don't need to do anything about NoticeResponse and\n+ * NotifyResponse as the logical replication worker doesn't need\n+ * to send messages to the client.\n+ */\n+ case 'N':\n+ case 'A':\n+ break;\n+\n+ /*\n+ * Restart replication if a parallel apply worker exited because\n+ * of subscription information change.\n+ */\n+ case 'X':\n\n\nIMO the comments describing the logic to take for each case should be\n*inside* the case. The comment above (if any) should only say what the\nmessage type means.\n\nSUGGESTION\n\n/* Notification, NotifyResponse. */\ncase 'N':\ncase 'A':\n/*\n* Don't need to do anything about these message types as the logical replication\n* worker doesn't need to send messages to the client.\n*/\nbreak;\n\n/* Parallel apply worker exited because of subscription information change. */\ncase 'X':\n/* Restart replication */\n\n~~~\n\n10. pa_send_data\n\n+ /*\n+ * If the attempt to send data via shared memory times out, we restart\n+ * the logical replication to prevent possible deadlocks with another\n+ * parallel apply worker. 
Refer to the comments atop\n+ * applyparallelworker.c for details.\n+ */\n+ if (startTime == 0)\n+ startTime = GetCurrentTimestamp();\n\nSometimes (like here) you say \"Refer to the comments atop\napplyparallelworker.c\". In other places, the comments say \"Refer to\nthe comments atop this file.\". IMO the wording should be consistent\neverywhere.\n\n~~~\n\n11. pa_set_stream_apply_worker\n\n+/*\n+ * Set the worker that required for applying the current streaming transaction.\n+ */\n+void\n+pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo)\n+{\n+ stream_apply_worker = winfo;\n+}\n\n\"the worker that required for\" ?? English ??\n\n~~~\n\n12. pa_clean_subtrans\n\n+/* Reset the list that maintains subtransactions. */\n+void\n+pa_clean_subtrans(void)\n+{\n+ subxactlist = NIL;\n+}\n\nMaybe a more informative function name would be pa_reset_subxactlist()?\n\n~~~\n\n13. pa_stream_abort\n\n+ subxactlist = NIL;\n\nSince you created a new function pa_clean_subtrans which does exactly\nthis same NIL assignment I was not expecting to see this global being\nexplicitly set like this in other code -- It's confusing to have\nmultiple ways to do the same thing.\n\nPlease check the rest of the patch in case the same is done elsewhere.\n\n======\n\nFILE: src/backend/replication/logical/launcher.c\n\n14. logicalrep_worker_detach\n\n+ /*\n+ * Detach from the error_mq_handle for all parallel apply workers\n+ * before terminating them to prevent the leader apply worker from\n+ * receiving the worker termination messages and sending it to logs\n+ * when the same is already done by individual parallel worker.\n+ */\n+ pa_detach_all_error_mq();\n\n\"before terminating them to prevent\" -> \"before terminating them. This prevents\"\n\n\"termination messages\" -> \"termination message\"\n\n\"by individual\" -> \"by the\"\n\n======\n\nFILE: src/backend/replication/logical/worker.c\n\n15. 
File header comment\n\n+ * 1) Write to temporary files and apply when the final commit arrives\n+ *\n+ * This approach is used when user has set subscription's streaming option as\n+ * on.\n\n"when user has set" -> "when the user has set the"\n\n~\n\n16.\n\n+ * 2) Parallel apply workers.\n+ *\n+ * This approach is used when user has set subscription's streaming option as\n+ * parallel. See logical/applyparallelworker.c for information about this\n+ * approach.\n\n"when user has set" -> "when the user has set the "\n\n\n~~~\n\n17. apply_handle_stream_stop\n\n+ case TRANS_PARALLEL_APPLY:\n+ elog(DEBUG1, "applied %u changes in the streaming chunk",\n+ parallel_stream_nchanges);\n+\n+ /*\n+ * By the time parallel apply worker is processing the changes in\n+ * the current streaming block, the leader apply worker may have\n+ * sent multiple streaming blocks. This can lead to parallel apply\n+ * worker start waiting even when there are more chunk of streams\n+ * in the queue. So, try to lock only if there is no message left\n+ * in the queue. See Locking Considerations atop\n+ * applyparallelworker.c.\n+ */\n\nSUGGESTION (minor rewording)\n\nBy the time the parallel apply worker is processing the changes in the\ncurrent streaming block, the leader apply worker may have sent\nmultiple streaming blocks. To prevent the parallel apply worker from\nwaiting unnecessarily, try to lock only if there is no message left in the\nqueue. See Locking Considerations atop applyparallelworker.c.\n\n~~~\n\n18. apply_handle_stream_abort\n\n+ case TRANS_PARALLEL_APPLY:\n+ pa_stream_abort(&abort_data);\n+\n+ /*\n+ * We need to wait after processing rollback to savepoint for the\n+ * next set of changes.\n+ *\n+ * By the time parallel apply worker is processing the changes in\n+ * the current streaming block, the leader apply worker may have\n+ * sent multiple streaming blocks. This can lead to parallel apply\n+ * worker start waiting even when there are more chunk of streams\n+ * in the queue. 
So, try to lock only if there is no message left\n+ * in the queue. See Locking Considerations atop\n+ * applyparallelworker.c.\n+ */\n\nSecond paragraph (\"By the time...\") same review comment as the\nprevious one (#17)\n\n~~~\n\n19. store_flush_position\n\n+ /*\n+ * Skip for parallel apply workers. The leader apply worker will ensure to\n+ * update it as the lsn_mapping is maintained by it.\n+ */\n+ if (am_parallel_apply_worker())\n+ return;\n\nSUGGESTION (comment multiple \"it\" was confusing)\nSkip for parallel apply workers, because the lsn_mapping is maintained\nby the leader apply worker.\n\n~~~\n\n20. set_apply_error_context_origin\n\n+\n+/* Set the origin name of apply error callback. */\n+void\n+set_apply_error_context_origin(char *originname)\n+{\n+ /*\n+ * Allocate the origin name in long-lived context for error context\n+ * message.\n+ */\n+ apply_error_callback_arg.origin_name = MemoryContextStrdup(ApplyContext,\n+ originname);\n+}\n\nIMO that \"Allocate ...\" comment should just replace the function header comment.\n\n~~~\n\n21. apply_worker_clean_exit\n\nI wasn't sure if calling this a 'clean' exit meant anything much.\n\nHow about:\n- apply_worker_proc_exit, or\n- apply_worker_exit\n\n~\n\n22.\n\n+apply_worker_clean_exit(bool on_subinfo_change)\n+{\n+ if (am_parallel_apply_worker() && on_subinfo_change)\n+ {\n+ /*\n+ * If a parallel apply worker exits due to the subscription\n+ * information change, we notify the leader apply worker so that the\n+ * leader can report more meaningful message in time and restart the\n+ * logical replication.\n+ */\n+ pq_putmessage('X', NULL, 0);\n+ }\n+\n+ proc_exit(0);\n+}\n\nSUGGESTION (for comment)\nIf this is a parallel apply worker exiting due to a subscription\ninformation change, we notify the leader apply worker so that it can\nreport a more meaningful message before restarting the logical\nreplication.\n\n======\n\nFILE: src/include/commands/subscriptioncmds.h\n\n23. 
externs\n\n@@ -26,4 +26,6 @@ extern void DropSubscription(DropSubscriptionStmt\n*stmt, bool isTopLevel);\n extern ObjectAddress AlterSubscriptionOwner(const char *name, Oid newOwnerId);\n extern void AlterSubscriptionOwner_oid(Oid subid, Oid newOwnerId);\n\n+extern char defGetStreamingMode(DefElem *def);\n\nThe extern is not in the same order as the functions of subscriptioncmds.c\n\n======\n\nFILE: src/include/replication/worker_internal.h\n\n24. externs\n\n24a.\n\n+extern void apply_dispatch(StringInfo s);\n+\n+extern void InitializeApplyWorker(void);\n+\n+extern void maybe_reread_subscription(void);\n\nThe above externs are not in the same order as the functions of worker.c\n\n~\n\n24b.\n\n+extern void pa_lock_stream(TransactionId xid, LOCKMODE lockmode);\n+extern void pa_lock_transaction(TransactionId xid, LOCKMODE lockmode);\n+\n+extern void pa_unlock_stream(TransactionId xid, LOCKMODE lockmode);\n+extern void pa_unlock_transaction(TransactionId xid, LOCKMODE lockmode);\n\nThe above externs are not in the same order as the functions of\napplyparallelworker.c\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sat, 3 Dec 2022 08:49:46 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Fwd: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Thursday, December 1, 2022 8:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Nov 30, 2022 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > 2.\r\n> > + /*\r\n> > + * The stream lock is released when processing changes in a\r\n> > + * streaming block, so the leader needs to acquire the lock here\r\n> > + * before entering PARTIAL_SERIALIZE mode to ensure that the\r\n> > + * parallel apply worker will wait for the leader to release the\r\n> > + * stream lock.\r\n> > + */\r\n> > + if (in_streamed_transaction &&\r\n> > + action != LOGICAL_REP_MSG_STREAM_STOP) 
{\r\n> > + pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);\r\n> >\r\n> > This comment is not completely correct because we can even acquire the\r\n> > lock for the very streaming chunk. This check will work but doesn't\r\n> > appear future-proof or at least not very easy to understand though I\r\n> > don't have a better suggestion at this stage. Can we think of a better\r\n> > check here?\r\n> >\r\n> \r\n> One idea is that we acquire this lock every time and callers like stream_commit\r\n> are responsible to release it. Also, we can handle the close of stream file in the\r\n> respective callers. I think that will make this part of the patch easier to follow.\r\n\r\nChanged.\r\n\r\n> Some other comments:\r\n> =====================\r\n> 1. The handling of buffile inside pa_stream_abort() looks bit ugly to me. I think\r\n> you primarily required it because the buffile opened by parallel apply worker is\r\n> in CurrentResourceOwner. \r\n\r\nChanged to use toplevel transaction's resource.\r\n\r\n> Can we think of having a new resource owner to\r\n> apply spooled messages? 
I think that will avoid the need to have a special\r\n> purpose code to handle buffiles in parallel apply worker.\r\n\r\nI am thinking about this and will address this in next version.\r\n\r\n> 2.\r\n> @@ -564,6 +571,7 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> TransactionId current_xid;\r\n> ParallelApplyWorkerInfo *winfo;\r\n> TransApplyAction apply_action;\r\n> + StringInfoData original_msg;\r\n> \r\n> apply_action = get_transaction_apply_action(stream_xid, &winfo);\r\n> \r\n> @@ -573,6 +581,8 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> \r\n> Assert(TransactionIdIsValid(stream_xid));\r\n> \r\n> + original_msg = *s;\r\n> +\r\n> /*\r\n> * We should have received XID of the subxact as the first part of the\r\n> * message, so extract it.\r\n> @@ -596,10 +606,14 @@ handle_streamed_transaction(LogicalRepMsgType\r\n> action, StringInfo s)\r\n> stream_write_change(action, s);\r\n> return true;\r\n> \r\n> + case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> Assert(winfo);\r\n> \r\n> - pa_send_data(winfo, s->len, s->data);\r\n> + if (apply_action == TRANS_LEADER_SEND_TO_PARALLEL) pa_send_data(winfo,\r\n> + s->len, s->data); else stream_write_change(action, &original_msg);\r\n> \r\n> Please add the comment to specify the reason to remember the original string.\r\n\r\nAdded.\r\n\r\n> 3.\r\n> @@ -1797,8 +1907,8 @@ apply_spooled_messages(TransactionId xid,\r\n> XLogRecPtr lsn)\r\n> changes_filename(path, MyLogicalRepWorker->subid, xid);\r\n> elog(DEBUG1, \"replaying changes from file \\\"%s\\\"\", path);\r\n> \r\n> - fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path,\r\n> O_RDONLY,\r\n> - false);\r\n> + stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);\r\n> + stream_xid = xid;\r\n> \r\n> Why do we need stream_xid here? 
I think we can avoid having global stream_fd\r\n> if the comment #1 is feasible.\r\n\r\nI think we don't need it anymore, I have removed it.\r\n\r\n> 4.\r\n> + * TRANS_LEADER_APPLY:\r\n> + * The action means that we\r\n> \r\n> /The/This. Please make a similar change for other actions.\r\n> \r\n> 5. Apart from the above, please find a few changes to the comments for\r\n> 0001 and 0002 patches in the attached patches.\r\n\r\nMerged.\r\n\r\nAttach the new version patch set which addressed most of the comments received so\r\nfar except some comments being discussed[1].\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sun, 4 Dec 2022 11:17:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 2, 2022 7:27 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wroteL\r\n> \r\n> Dear Hou,\r\n> \r\n> Thanks for making the patch. Followings are my comments for v54-0003 and\r\n> 0004.\r\n\r\nThanks for the comments!\r\n\r\n> \r\n> 0003\r\n> \r\n> pa_free_worker()\r\n> \r\n> + /* Unlink any files that were needed to serialize partial changes. */\r\n> + if (winfo->serialize_changes)\r\n> + stream_cleanup_files(MyLogicalRepWorker->subid,\r\n> winfo->shared->xid);\r\n> +\r\n> \r\n> I think this part is not needed, because the LA cannot reach here if\r\n> winfo->serialize_changes is true. Moreover stream_cleanup_files() is done in\r\n> pa_free_worker_info().\r\n\r\nRemoved.\r\n\r\n> LogicalParallelApplyLoop()\r\n> \r\n> The parallel apply worker wakes up every 0.1s even if we are in the\r\n> PARTIAL_SERIALIZE mode. 
Do you have idea to reduce that?\r\n\r\nThe parallel apply worker usually will wait on the stream lock after entering\r\nPARTIAL_SERIALIZE mode.\r\n\r\n> ```\r\n> + pa_spooled_messages();\r\n> ```\r\n> \r\n> Comments are needed here, like \"Changes may be serialize...\".\r\n\r\nAdded.\r\n\r\n> pa_stream_abort()\r\n> \r\n> ```\r\n> + /*\r\n> + * Reopen the file and set the file position to\r\n> the saved\r\n> + * position.\r\n> + */\r\n> + if (reopen_stream_fd)\r\n> + {\r\n> + char path[MAXPGPATH];\r\n> +\r\n> + changes_filename(path,\r\n> MyLogicalRepWorker->subid, xid);\r\n> + stream_fd =\r\n> BufFileOpenFileSet(&MyParallelShared->fileset,\r\n> +\r\n> path, O_RDONLY, false);\r\n> + BufFileSeek(stream_fd, fileno, offset,\r\n> SEEK_SET);\r\n> + }\r\n> ```\r\n> \r\n> MyParallelShared->serialize_changes may be used instead of reopen_stream_fd.\r\n\r\nThese codes have been removed.\r\n\r\n> \r\n> ```\r\n> + /*\r\n> + * It is possible that while sending this change to\r\n> parallel apply\r\n> + * worker we need to switch to serialize mode.\r\n> + */\r\n> + if (winfo->serialize_changes)\r\n> + pa_set_fileset_state(winfo->shared,\r\n> FS_READY);\r\n> ```\r\n> \r\n> There are three same parts in the code, can we combine them to common part?\r\n\r\nThese codes have been slightly refactored.\r\n\r\n> apply_spooled_messages()\r\n> \r\n> ```\r\n> + /*\r\n> + * Break the loop if the parallel apply worker has finished\r\n> applying\r\n> + * the transaction. 
The parallel apply worker should have closed\r\n> the\r\n> + * file before committing.\r\n> + */\r\n> + if (am_parallel_apply_worker() &&\r\n> + MyParallelShared->xact_state ==\r\n> PARALLEL_TRANS_FINISHED)\r\n> + goto done;\r\n> ```\r\n> \r\n> I thnk pfree(buffer) and pfree(s2.data) should not be skippied.\r\n> And this part should be at below \"nchanges++;\"\r\n\r\nbuffer, s2.data were allocated in the toplevel transaction's context and it\r\nwill be automatically freed soon when handling STREAM COMMIT.\r\n\r\n> \r\n> 0004\r\n> \r\n> set_subscription_retry()\r\n> \r\n> ```\r\n> + LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,\r\n> + AccessShareLock);\r\n> +\r\n> ```\r\n> \r\n> I think AccessExclusiveLock should be aquired instead of AccessShareLock.\r\n> In AlterSubscription(), LockSharedObject(AccessExclusiveLock) seems to be\r\n> used.\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 4 Dec 2022 11:18:02 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 2, 2022 4:59 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for patch v54-0001.\r\n\r\nThanks for the comments!\r\n\r\n> ======\r\n> \r\n> FILE: .../replication/logical/applyparallelworker.c\r\n> \r\n> 1b.\r\n> \r\n> + *\r\n> + * This file contains routines that are intended to support setting up,\r\n> + using\r\n> + * and tearing down a ParallelApplyWorkerInfo which is required to\r\n> + communicate\r\n> + * among leader and parallel apply workers.\r\n> \r\n> \"that are intended to support\" -> \"for\"\r\n\r\nI find the current word is consistent with the comments atop vacuumparallel.c and\r\nexecParallel.c. So didn't change this one.\r\n\r\n> 3. 
pa_setup_dsm\r\n> \r\n> +/*\r\n> + * Set up a dynamic shared memory segment.\r\n> + *\r\n> + * We set up a control region that contains a fixed-size worker info\r\n> + * (ParallelApplyWorkerShared), a message queue, and an error queue.\r\n> + *\r\n> + * Returns true on success, false on failure.\r\n> + */\r\n> +static bool\r\n> +pa_setup_dsm(ParallelApplyWorkerInfo *winfo)\r\n> \r\n> IMO that's confusing to say "fixed-sized worker info" when it's referring to the\r\n> ParallelApplyWorkerShared structure and not the other\r\n> ParallelApplyWorkerInfo.\r\n> \r\n> Might be better to say:\r\n> \r\n> "a fixed-size worker info (ParallelApplyWorkerShared)" -> "a fixed-size struct\r\n> (ParallelApplyWorkerShared)"\r\n\r\nThe ParallelApplyWorkerShared is also a kind of information that is shared\r\nbetween workers. So, I am fine with the current wording. Or maybe just "fixed-size info"?\r\n\r\n> ~~~\r\n> \r\n> 12. pa_clean_subtrans\r\n> \r\n> +/* Reset the list that maintains subtransactions. */ void\r\n> +pa_clean_subtrans(void)\r\n> +{\r\n> + subxactlist = NIL;\r\n> +}\r\n> \r\n> Maybe a more informative function name would be pa_reset_subxactlist()?\r\n\r\nI thought the current name is more consistent with pa_start_subtrans.\r\n\r\n> ~~~\r\n> \r\n> 17. apply_handle_stream_stop\r\n> \r\n> + case TRANS_PARALLEL_APPLY:\r\n> + elog(DEBUG1, "applied %u changes in the streaming chunk",\r\n> + parallel_stream_nchanges);\r\n> +\r\n> + /*\r\n> + * By the time parallel apply worker is processing the changes in\r\n> + * the current streaming block, the leader apply worker may have\r\n> + * sent multiple streaming blocks. This can lead to parallel apply\r\n> + * worker start waiting even when there are more chunk of streams\r\n> + * in the queue. 
See Locking Considerations atop\r\n> + * applyparallelworker.c.\r\n> + */\r\n> \r\n> SUGGESTION (minor rewording)\r\n> \r\n> By the time the parallel apply worker is processing the changes in the current\r\n> streaming block, the leader apply worker may have sent multiple streaming\r\n> blocks. To the parallel apply from waiting unnecessarily, try to lock only if there\r\n> is no message left in the queue. See Locking Considerations atop\r\n> applyparallelworker.c.\r\n> \r\n\r\nDidn't change this one according to Amit's comment.\r\n\r\n> \r\n> 21. apply_worker_clean_exit\r\n> \r\n> I wasn't sure if calling this a 'clean' exit meant anything much.\r\n> \r\n> How about:\r\n> - apply_worker_proc_exit, or\r\n> - apply_worker_exit\r\n\r\nI thought the clean means the exit number is 0(proc_exit(0)) and is\r\nnot due to any ERROR, I am not sure If proc_exit or exit is better.\r\n\r\nI have addressed other comments in the new version patch.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 4 Dec 2022 11:18:29 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sunday, December 4, 2022 7:17 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> \r\n> Thursday, December 1, 2022 8:40 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > Some other comments:\r\n> ...\r\n> Attach the new version patch set which addressed most of the comments\r\n> received so far except some comments being discussed[1].\r\n> [1] https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nAttach a new version patch set which fixed a testcase failure on CFbot.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 5 Dec 2022 04:29:30 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: 
Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Dec 4, 2022 at 4:48 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, December 2, 2022 4:59 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> > ~~~\n> >\n> > 12. pa_clean_subtrans\n> >\n> > +/* Reset the list that maintains subtransactions. */ void\n> > +pa_clean_subtrans(void)\n> > +{\n> > + subxactlist = NIL;\n> > +}\n> >\n> > Maybe a more informative function name would be pa_reset_subxactlist()?\n>\n> I thought the current name is more consistent with pa_start_subtrans.\n>\n\nThen how about changing the name to pa_reset_subtrans()?\n\n>\n> >\n> > 21. apply_worker_clean_exit\n> >\n> > I wasn't sure if calling this a 'clean' exit meant anything much.\n> >\n> > How about:\n> > - apply_worker_proc_exit, or\n> > - apply_worker_exit\n>\n> I thought the clean means the exit number is 0(proc_exit(0)) and is\n> not due to any ERROR, I am not sure If proc_exit or exit is better.\n>\n> I have addressed other comments in the new version patch.\n>\n\n+1 for apply_worker_exit.\n\nOne minor suggestion for a recent change in v56-0001*:\n /*\n- * A hash table used to cache streaming transactions being applied and the\n- * parallel application workers required to apply transactions.\n+ * A hash table used to cache the state of streaming transactions being\n+ * applied by the parallel apply workers.\n */\n static HTAB *ParallelApplyTxnHash = NULL;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Dec 2022 14:53:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for patch v55-0002\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n1. 
pa_can_start\n\n@@ -276,9 +278,9 @@ pa_can_start(TransactionId xid)\n /*\n * Don't start a new parallel worker if user has set skiplsn as it's\n * possible that user want to skip the streaming transaction. For\n- * streaming transaction, we need to spill the transaction to disk so that\n- * we can get the last LSN of the transaction to judge whether to skip\n- * before starting to apply the change.\n+ * streaming transaction, we need to serialize the transaction to a file\n+ * so that we can get the last LSN of the transaction to judge whether to\n+ * skip before starting to apply the change.\n */\n if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\n return false;\n\nI think the wording change may belong in patch 0001 because it has\nnothing to do with partial serializing.\n\n~~~\n\n2. pa_free_worker\n\n+ /*\n+ * Stop the worker if there are enough workers in the pool.\n+ *\n+ * XXX The worker is also stopped if the leader apply worker needed to\n+ * serialize part of the transaction data due to a send timeout. This is\n+ * because the message could be partially written to the queue due to send\n+ * timeout and there is no way to clean the queue other than resending the\n+ * message until it succeeds. To avoid complexity, we directly stop the\n+ * worker in this case.\n+ */\n+ if (winfo->serialize_changes ||\n+ napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\n\nDon't need to say \"due to send timeout\" 2 times in 2 sentences.\n\nSUGGESTION\nXXX The worker is also stopped if the leader apply worker needed to\nserialize part of the transaction data due to a send timeout. This is\nbecause the message could be partially written to the queue but there\nis no way to clean the queue other than resending the message until it\nsucceeds. Directly stopping the worker avoids needing this complexity.\n\n~~~\n\n3. 
pa_spooled_messages\n\nPreviously I suggested this function name should be changed but that\nwas rejected (see [1] #6a)\n\n> 6a.\n> IMO a better name for this function would be\n> pa_apply_spooled_messages();\nNot sure about this.\n\n~\n\nFYI the reason for the previous suggestion is that there is no verb\nin the current function name, so the reader is left thinking\npa_spooled_messages "what"?\n\nIt means the caller has to have extra comments like:\n/* Check if changes have been serialized to a file. */\npa_spooled_messages();\n\nOTOH, if the function was called something better -- e.g.\npa_check_for_spooled_messages() or similar -- then it would be\nself-explanatory.\n\n~\n\n4.\n\n /*\n+ * Replay the spooled messages in the parallel apply worker if the leader apply\n+ * worker has finished serializing changes to the file.\n+ */\n+static void\n+pa_spooled_messages(void)\n\nI'm not 100% sure of the logic, so IMO maybe the comment should say a\nbit more about how this works:\n\nSpecifically, let's say there was some timeout and the LA needed to\nwrite the spool file, then let's say the PA timed out and found itself\ninside this function. Now, let's say the LA is still busy writing the\nfile -- so what happens next?\n\nDoes this function simply return, then the main PA loop waits again,\nthen it times out again, then PA finds itself back inside this\nfunction again... and that keeps happening over and over until\neventually the spool file is found FS_READY? Some explanatory comments\nmight help.\n\n~\n\n5.\n\n+ /*\n+ * Check if changes have been serialized to a file. if so, read and apply\n+ * them.\n+ */\n+ SpinLockAcquire(&MyParallelShared->mutex);\n+ fileset_state = MyParallelShared->fileset_state;\n+ SpinLockRelease(&MyParallelShared->mutex);\n\n"if so" -> "If so"\n\n~~~\n\n6. 
pa_send_data\n\n+ *\n+ * If the attempt to send data via shared memory times out, then we will switch\n+ * to \"PARTIAL_SERIALIZE mode\" for the current transaction to prevent possible\n+ * deadlocks with another parallel apply worker (refer to the comments atop\n+ * applyparallelworker.c for details). This means that the current data and any\n+ * subsequent data for this transaction will be serialized to a file.\n */\n void\n pa_send_data(ParallelApplyWorkerInfo *winfo, Size nbytes, const void *data)\n\nSUGGESTION (minor comment rearranging)\n\nIf the attempt to send data via shared memory times out, then we will\nswitch to \"PARTIAL_SERIALIZE mode\" for the current transaction -- this\nmeans that the current data and any subsequent data for this\ntransaction will be serialized to a file. This is done to prevent\npossible deadlocks with another parallel apply worker (refer to the\ncomments atop applyparallelworker.c for details).\n\n~\n\n7.\n\n+ /*\n+ * Take the stream lock to make sure that the parallel apply worker\n+ * will wait for the leader to release the stream lock until the\n+ * end of the transaction.\n+ */\n+ pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);\n\nThe comment doesn't sound right.\n\n\"until the end\" -> \"at the end\" (??)\n\n~~~\n\n8. pa_stream_abort\n\n@@ -1374,6 +1470,7 @@ pa_stream_abort(LogicalRepStreamAbortData *abort_data)\n RollbackToSavepoint(spname);\n CommitTransactionCommand();\n subxactlist = list_truncate(subxactlist, i + 1);\n+\n break;\n }\n }\nSpurious whitespace unrelated to this patch?\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n9. handle_streamed_transaction\n\n /*\n+ * The parallel apply worker needs the xid in this message to decide\n+ * whether to define a savepoint, so save the original message that has not\n+ * moved the cursor after the xid. We will serailize this message to a file\n+ * in PARTIAL_SERIALIZE mode.\n+ */\n+ original_msg = *s;\n\n\"serailize\" -> \"serialize\"\n\n~~~\n\n10. 
apply_handle_stream_prepare\n\n@@ -1245,6 +1265,7 @@ apply_handle_stream_prepare(StringInfo s)\n LogicalRepPreparedTxnData prepare_data;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg = *s;\n\nShould this include a longer explanation of why this copy is needed\n(same as was done in handle_streamed_transaction)?\n\n~\n\n11.\n\n case TRANS_PARALLEL_APPLY:\n+\n+ /*\n+ * Close the file before committing if the parallel apply worker\n+ * is applying spooled messages.\n+ */\n+ if (stream_fd)\n+ stream_close_file();\n\n11a.\n\nThis comment seems worded backwards.\n\nSUGGESTION\nIf the parallel apply worker is applying spooled messages then close\nthe file before committing.\n\n~\n\n11b.\n\nI'm confused - isn't there code doing exactly this (close file before\ncommit) already in the apply_handle_stream_commit\nTRANS_PARALLEL_APPLY?\n\n~~~\n\n12. apply_handle_stream_start\n\n@@ -1383,6 +1493,7 @@ apply_handle_stream_start(StringInfo s)\n bool first_segment;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg = *s;\n\nShould this include a longer explanation of why this copy is needed\n(same as was done in handle_streamed_transaction)?\n\n~\n\n13.\n\n+ serialize_stream_start(stream_xid, false);\n+ stream_write_change(LOGICAL_REP_MSG_STREAM_START, &original_msg);\n\n- end_replication_step();\n break;\n\nA spurious blank line is left before the break;\n\n~~~\n\n14. serialize_stream_stop\n\n+ /* We must be in a valid transaction state */\n+ Assert(IsTransactionState());\n\nThe comment seems redundant. The code says the same.\n\n~~~\n\n15. 
apply_handle_stream_abort\n\n@@ -1676,6 +1794,7 @@ apply_handle_stream_abort(StringInfo s)\n LogicalRepStreamAbortData abort_data;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg = *s;\n bool toplevel_xact;\n\nShould this include a longer explanation of why this copy is needed\n(same as was done in handle_streamed_transaction)?\n\n~~~\n\n16. apply_spooled_messages\n\n+ stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);\n\nSomething still seems a bit odd about this to me (previously also\nmentioned in review [1] #29) but I cannot quite put my finger on it...\n\nAFAIK the 'stream_fd' is the global the LA is using to remember the\nsingle stream spool file; It corresponds to the LogicalRepWorker's\n'stream_fileset'. So using that same global on the PA side somehow\nseemed strange to me. The fileset at PA comes from a different place\n(MyParallelShared->fileset).\n\nBasically, I felt that whenever use are using 'stream_fd' and\n'stream_fileset' etc. then it should be safe to assume you are looking\nat the worker.c from the leader apply worker POV. Otherwise, IMO it\nshould just use some fd/fs passed around as parameters. Sure, there\nmight be a few places like stream_close_file (etc) which need some\nsmall refactoring to pass as a parameter instead of always using\n'stream_fd' but IMO the end result will be tidier.\n\n~\n\n17.\n\n+ /*\n+ * No need to output the DEBUG message here in the parallel apply\n+ * worker as similar messages will be output when handling STREAM_STOP\n+ * message.\n+ */\n+ if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\n elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\n nchanges, path);\n\nInstead of saying what you are not doing (\"No need to... 
in output\napply worker\") wouldn't it make more sense to reverse it and say what\nyou are doing (\"Only log DEBUG messages for the leader apply worker\nbecause ...\") and then the condition also becomes positive:\n\nif (am_leader_apply_worker())\n{\n...\n}\n\n~\n\n18.\n\n+ if (am_parallel_apply_worker() &&\n+ MyParallelShared->xact_state == PARALLEL_TRANS_FINISHED)\n+ goto done;\n+\n+ /*\n+ * No need to output the DEBUG message here in the parallel apply\n+ * worker as similar messages will be output when handling STREAM_STOP\n+ * message.\n+ */\n+ if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\n elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\n nchanges, path);\n }\n\n- BufFileClose(fd);\n-\n+ stream_close_file();\n pfree(buffer);\n pfree(s2.data);\n\n+done:\n elog(DEBUG1, \"replayed %d (all) changes from file \\\"%s\\\"\",\n nchanges, path);\n\nShouldn't that \"done:\" label be *above* the pfree's. Otherwise, those\nare going to be skipped over by the \"goto done;\".\n\n~~~\n\n19. apply_handle_stream_commit\n\n@@ -1898,6 +2072,7 @@ apply_handle_stream_commit(StringInfo s)\n LogicalRepCommitData commit_data;\n ParallelApplyWorkerInfo *winfo;\n TransApplyAction apply_action;\n+ StringInfoData original_msg = *s;\n\nShould this include a longer explanation of why this copy is needed\n(same as was done in handle_streamed_transaction)?\n\n~\n\n20.\n\n+ /*\n+ * Close the file before committing if the parallel apply worker\n+ * is applying spooled messages.\n+ */\n+ if (stream_fd)\n+ stream_close_file();\n\n(same as previous review comment - see #11)\n\nThis comment seems worded backwards.\n\nSUGGESTION\nIf the parallel apply worker is applying spooled messages then close\nthe file before committing.\n\n======\n\nsrc/include/replication/worker_internal.h\n\n21. PartialFileSetState\n\n\n+ * State of fileset in leader apply worker.\n+ *\n+ * FS_BUSY means that the leader is serializing changes to the file. 
FS_READY\n+ * means that the leader has serialized all changes to the file and the file is\n+ * ready to be read by a parallel apply worker.\n+ */\n+typedef enum PartialFileSetState\n\n\"ready to be read\" sounded a bit strange.\n\nSUGGESTION\n... to the file so it is now OK for a parallel apply worker to read it.\n\n\n------\n[1] Houz reply to my review v51-0002 --\nhttps://www.postgresql.org/message-id/OS0PR01MB5716350729D8C67AA8CE333194129%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 6 Dec 2022 10:56:48 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 6, 2022 at 5:27 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for patch v55-0002\n>\n...\n>\n> 3. pa_spooled_messages\n>\n> Previously I suggested this function name should be changed but that\n> was rejected (see [1] #6a)\n>\n> > 6a.\n> > IMO a better name for this function would be\n> > pa_apply_spooled_messages();\n> Not sure about this.\n>\n> ~\n>\n> FYI the reason for the previous suggestion is because there is no verb\n> in the current function name, so the reader is left thinking\n> pa_spooled_messages \"what\"?\n>\n> It means the caller has to have extra comments like:\n> /* Check if changes have been serialized to a file. 
*/\n> pa_spooled_messages();\n>\n> OTOH, if the function was called something better -- e.g.\n> pa_check_for_spooled_messages() or similar -- then it would be\n> self-explanatory.\n>\n\nI think pa_check_for_spooled_messages() could be misleading because we\ndo apply the changes in that function, so probably a comment as\nsuggested by you is a better option.\n\n> ~\n>\n> 4.\n>\n> /*\n> + * Replay the spooled messages in the parallel apply worker if the leader apply\n> + * worker has finished serializing changes to the file.\n> + */\n> +static void\n> +pa_spooled_messages(void)\n>\n> I'm not 100% sure of the logic, so IMO maybe the comment should say a\n> bit more about how this works:\n>\n> Specifically, let's say there was some timeout and the LA needed to\n> write the spool file, then let's say the PA timed out and found itself\n> inside this function. Now, let's say the LA is still busy writing the\n> file -- so what happens next?\n>\n> Does this function simply return, then the main PA loop waits again,\n> then the times out again, then PA finds itself back inside this\n> function again... and that keeps happening over and over until\n> eventually the spool file is found FS_READY? Some explanatory comments\n> might help.\n>\n\nNo, PA will simply wait for LA to finish. See the code handling for\nFS_BUSY state. We might want to slightly improve part of the current\ncomment to: \"If the leader apply worker is busy serializing the\npartial changes then acquire the stream lock now and wait for the\nleader worker to finish serializing the changes\".\n\n>\n> 16. apply_spooled_messages\n>\n> + stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);\n>\n> Something still seems a bit odd about this to me (previously also\n> mentioned in review [1] #29) but I cannot quite put my finger on it...\n>\n> AFAIK the 'stream_fd' is the global the LA is using to remember the\n> single stream spool file; It corresponds to the LogicalRepWorker's\n> 'stream_fileset'. 
So using that same global on the PA side somehow\n> seemed strange to me. The fileset at PA comes from a different place\n> (MyParallelShared->fileset).\n>\n\nI think 'stream_fd' is specific to apply module which can be used by\napply, tablesync, or parallel worker. Unfortunately, now, the code in\nworker.c is a mix of worker and apply module. At some point, we should\nseparate apply logic to a separate file.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 6 Dec 2022 09:21:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 5, 2022 at 9:59 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach a new version patch set which fixed a testcase failure on CFbot.\n>\n\nFew comments:\n============\n1.\n+ /*\n+ * Break the loop if the parallel apply worker has finished applying\n+ * the transaction. The parallel apply worker should have closed the\n+ * file before committing.\n+ */\n+ if (am_parallel_apply_worker() &&\n+ MyParallelShared->xact_state == PARALLEL_TRANS_FINISHED)\n+ goto done;\n\nThis looks hackish to me because ideally, this API should exit after\nreading and applying all the messages in the spool file. This check is\nprimarily based on the knowledge that once we reach some state, the\nfile won't have more data. I think it would be better to explicitly\nensure the same.\n\n2.\n+ /*\n+ * No need to output the DEBUG message here in the parallel apply\n+ * worker as similar messages will be output when handling STREAM_STOP\n+ * message.\n+ */\n+ if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\n elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\n nchanges, path);\n }\n\nI think this check appeared a bit ugly to me. 
I think it is okay to\nget a similar DEBUG message at another place (on stream_stop) because\n(a) this is logged every 1000 messages whereas stream_stop can be\nafter many more messages, so there doesn't appear to be a direct\ncorrelation; (b) due to this, we can identify whether it is due to\nspooled messages or due to direct apply; ideally we can use another\nDEBUG message to differentiate but this doesn't appear bad to me.\n\n3. The function names for serialize_stream_start(),\nserialize_stream_stop(), and serialize_stream_abort() don't seem to\nmatch the functionality they provide because none of these\nwrite/serialize changes to the file. Can we rename these? Some\npossible options could be stream_start_internal or stream_start_guts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 6 Dec 2022 13:20:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 6, 2022 at 2:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 5:27 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for patch v55-0002\n> >\n> ...\n\n> > 4.\n> >\n> > /*\n> > + * Replay the spooled messages in the parallel apply worker if the leader apply\n> > + * worker has finished serializing changes to the file.\n> > + */\n> > +static void\n> > +pa_spooled_messages(void)\n> >\n> > I'm not 100% sure of the logic, so IMO maybe the comment should say a\n> > bit more about how this works:\n> >\n> > Specifically, let's say there was some timeout and the LA needed to\n> > write the spool file, then let's say the PA timed out and found itself\n> > inside this function. 
Now, let's say the LA is still busy writing the\n> > file -- so what happens next?\n> >\n> > Does this function simply return, then the main PA loop waits again,\n> > then the times out again, then PA finds itself back inside this\n> > function again... and that keeps happening over and over until\n> > eventually the spool file is found FS_READY? Some explanatory comments\n> > might help.\n> >\n>\n> No, PA will simply wait for LA to finish. See the code handling for\n> FS_BUSY state. We might want to slightly improve part of the current\n> comment to: \"If the leader apply worker is busy serializing the\n> partial changes then acquire the stream lock now and wait for the\n> leader worker to finish serializing the changes\".\n>\n\nSure, \"PA will simply wait for LA to finish\".\n\nExcept I think it's not quite that simple because IIUC when LA *does*\nfinish, the PA (this function) will continue and just drop out the\nbottom -- it cannot apply those spooled messages yet until it cycles\nall the way back around the main loop and times out again and gets\nback into pa_spooled_messages function again to get to the FS_READY\nblock of code where it can finally call the\n'apply_spooled_messages'...\n\nIf my understanding is correct, then It's that extra looping that I\nthought maybe warrants some mention in a comment here.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 6 Dec 2022 19:22:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, December 6, 2022 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Dec 5, 2022 at 9:59 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach a new version patch set which fixed a testcase failure on CFbot.\r\n> >\r\n> \r\n> Few comments:\r\n> ============\r\n> 1.\r\n> + /*\r\n> + * Break 
the loop if the parallel apply worker has finished applying\r\n> + * the transaction. The parallel apply worker should have closed the\r\n> + * file before committing.\r\n> + */\r\n> + if (am_parallel_apply_worker() &&\r\n> + MyParallelShared->xact_state == PARALLEL_TRANS_FINISHED)\r\n> + goto done;\r\n> \r\n> This looks hackish to me because ideally, this API should exit after reading and\r\n> applying all the messages in the spool file. This check is primarily based on the\r\n> knowledge that once we reach some state, the file won't have more data. I\r\n> think it would be better to explicitly ensure the same.\r\n\r\nI added a function to ensure that there is no message left after committing\r\nthe transaction.\r\n\r\n\r\n> 2.\r\n> + /*\r\n> + * No need to output the DEBUG message here in the parallel apply\r\n> + * worker as similar messages will be output when handling STREAM_STOP\r\n> + * message.\r\n> + */\r\n> + if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\r\n> elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\r\n> nchanges, path);\r\n> }\r\n> \r\n> I think this check appeared a bit ugly to me. I think it is okay to get a similar\r\n> DEBUG message at another place (on stream_stop) because\r\n> (a) this is logged every 1000 messages whereas stream_stop can be after many\r\n> more messages, so there doesn't appear to be a direct correlation; (b) due to\r\n> this, we can identify whether it is due to spooled messages or due to direct\r\n> apply; ideally we can use another DEBUG message to differentiate but this\r\n> doesn't appear bad to me.\r\n\r\nOK, I removed this check.\r\n\r\n> 3. The function names for serialize_stream_start(), serialize_stream_stop(), and\r\n> serialize_stream_abort() don't seem to match the functionality they provide\r\n> because none of these write/serialize changes to the file. Can we rename\r\n> these? 
Some possible options could be stream_start_internal or\r\n> stream_start_guts.\r\n\r\nRenamed to stream_start_internal().\r\n\r\nAttach the new version patch set which addressed the above comments.\r\nI also attach a new patch to force stream change (provided by Shi-san) and\r\nanother one that introduces a GUC stream_serialize_threshold (provided by\r\nKuroda-san and Shi-san) which can help testing the patch set.\r\n\r\nBesides, I fixed a bug where there could still be messages left in memory\r\nqueue and the PA has started to apply spooled messages.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 7 Dec 2022 02:58:19 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 6, 2022 7:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for patch v55-0002\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 1. pa_can_start\r\n> \r\n> @@ -276,9 +278,9 @@ pa_can_start(TransactionId xid)\r\n> /*\r\n> * Don't start a new parallel worker if user has set skiplsn as it's\r\n> * possible that user want to skip the streaming transaction. 
For\r\n> - * streaming transaction, we need to spill the transaction to disk so \r\n> that\r\n> - * we can get the last LSN of the transaction to judge whether to \r\n> skip\r\n> - * before starting to apply the change.\r\n> + * streaming transaction, we need to serialize the transaction to a \r\n> + file\r\n> + * so that we can get the last LSN of the transaction to judge \r\n> + whether to\r\n> + * skip before starting to apply the change.\r\n> */\r\n> if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\r\n> return false;\r\n> \r\n> I think the wording change may belong in patch 0001 because it has \r\n> nothing to do with partial serializing.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 2. pa_free_worker\r\n> \r\n> + /*\r\n> + * Stop the worker if there are enough workers in the pool.\r\n> + *\r\n> + * XXX The worker is also stopped if the leader apply worker needed \r\n> + to\r\n> + * serialize part of the transaction data due to a send timeout. This \r\n> + is\r\n> + * because the message could be partially written to the queue due to \r\n> + send\r\n> + * timeout and there is no way to clean the queue other than \r\n> + resending the\r\n> + * message until it succeeds. To avoid complexity, we directly stop \r\n> + the\r\n> + * worker in this case.\r\n> + */\r\n> + if (winfo->serialize_changes ||\r\n> + napplyworkers > (max_parallel_apply_workers_per_subscription / 2))\r\n> \r\n> Don't need to say \"due to send timeout\" 2 times in 2 sentences.\r\n> \r\n> SUGGESTION\r\n> XXX The worker is also stopped if the leader apply worker needed to \r\n> serialize part of the transaction data due to a send timeout. This is \r\n> because the message could be partially written to the queue but there \r\n> is no way to clean the queue other than resending the message until it \r\n> succeeds. 
Directly stopping the worker avoids needing this complexity.\r\n\r\nChanged.\r\n\r\n> 4.\r\n> \r\n> /*\r\n> + * Replay the spooled messages in the parallel apply worker if the \r\n> +leader apply\r\n> + * worker has finished serializing changes to the file.\r\n> + */\r\n> +static void\r\n> +pa_spooled_messages(void)\r\n> \r\n> I'm not 100% sure of the logic, so IMO maybe the comment should say a \r\n> bit more about how this works:\r\n> \r\n> Specifically, let's say there was some timeout and the LA needed to \r\n> write the spool file, then let's say the PA timed out and found itself \r\n> inside this function. Now, let's say the LA is still busy writing the \r\n> file -- so what happens next?\r\n> \r\n> Does this function simply return, then the main PA loop waits again, \r\n> then the times out again, then PA finds itself back inside this \r\n> function again... and that keeps happening over and over until \r\n> eventually the spool file is found FS_READY? Some explanatory comments \r\n> might help.\r\n\r\nSlightly changed the logic and comment here.\r\n\r\n> ~\r\n> \r\n> 5.\r\n> \r\n> + /*\r\n> + * Check if changes have been serialized to a file. if so, read and \r\n> + apply\r\n> + * them.\r\n> + */\r\n> + SpinLockAcquire(&MyParallelShared->mutex);\r\n> + fileset_state = MyParallelShared->fileset_state; \r\n> + SpinLockRelease(&MyParallelShared->mutex);\r\n> \r\n> \"if so\" -> \"If so\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> \r\n> 6. pa_send_data\r\n> \r\n> + *\r\n> + * If the attempt to send data via shared memory times out, then we \r\n> + will\r\n> switch\r\n> + * to \"PARTIAL_SERIALIZE mode\" for the current transaction to prevent\r\n> possible\r\n> + * deadlocks with another parallel apply worker (refer to the \r\n> + comments atop\r\n> + * applyparallelworker.c for details). 
This means that the current \r\n> + data and any\r\n> + * subsequent data for this transaction will be serialized to a file.\r\n> */\r\n> void\r\n> pa_send_data(ParallelApplyWorkerInfo *winfo, Size nbytes, const void \r\n> *data)\r\n> \r\n> SUGGESTION (minor comment rearranging)\r\n> \r\n> If the attempt to send data via shared memory times out, then we will \r\n> switch to \"PARTIAL_SERIALIZE mode\" for the current transaction -- this \r\n> means that the current data and any subsequent data for this \r\n> transaction will be serialized to a file. This is done to prevent \r\n> possible deadlocks with another parallel apply worker (refer to the \r\n> comments atop applyparallelworker.c for details).\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 7.\r\n> \r\n> + /*\r\n> + * Take the stream lock to make sure that the parallel apply worker\r\n> + * will wait for the leader to release the stream lock until the\r\n> + * end of the transaction.\r\n> + */\r\n> + pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);\r\n> \r\n> The comment doesn't sound right.\r\n> \r\n> \"until the end\" -> \"at the end\" (??)\r\n\r\nI think it means \"PA wait ... until the end of transaction\".\r\n\r\n> ~~~\r\n> \r\n> 8. pa_stream_abort\r\n> \r\n> @@ -1374,6 +1470,7 @@ pa_stream_abort(LogicalRepStreamAbortData\r\n> *abort_data)\r\n> RollbackToSavepoint(spname);\r\n> CommitTransactionCommand();\r\n> subxactlist = list_truncate(subxactlist, i + 1);\r\n> +\r\n> break;\r\n> }\r\n> }\r\n> Spurious whitespace unrelated to this patch?\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 9. handle_streamed_transaction\r\n> \r\n> /*\r\n> + * The parallel apply worker needs the xid in this message to decide\r\n> + * whether to define a savepoint, so save the original message that \r\n> + has not\r\n> + * moved the cursor after the xid. 
We will serailize this message to \r\n> + a file\r\n> + * in PARTIAL_SERIALIZE mode.\r\n> + */\r\n> + original_msg = *s;\r\n> \r\n> \"serailize\" -> \"serialize\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 10. apply_handle_stream_prepare\r\n> \r\n> @@ -1245,6 +1265,7 @@ apply_handle_stream_prepare(StringInfo s)\r\n> LogicalRepPreparedTxnData prepare_data;\r\n> ParallelApplyWorkerInfo *winfo;\r\n> TransApplyAction apply_action;\r\n> + StringInfoData original_msg = *s;\r\n> \r\n> Should this include a longer explanation of why this copy is needed \r\n> (same as was done in handle_streamed_transaction)?\r\n\r\nAdded the below comment atop this variable.\r\n```\r\nSave the message before it is consumed.\r\n```\r\n\r\n> ~\r\n> \r\n> 11.\r\n> \r\n> case TRANS_PARALLEL_APPLY:\r\n> +\r\n> + /*\r\n> + * Close the file before committing if the parallel apply worker\r\n> + * is applying spooled messages.\r\n> + */\r\n> + if (stream_fd)\r\n> + stream_close_file();\r\n> \r\n> 11a.\r\n> \r\n> This comment seems worded backwards.\r\n> \r\n> SUGGESTION\r\n> If the parallel apply worker is applying spooled messages then close \r\n> the file before committing.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 11b.\r\n> \r\n> I'm confused - isn't there code doing exactly this (close file before\r\n> commit) already in the apply_handle_stream_commit \r\n> TRANS_PARALLEL_APPLY?\r\n\r\nI think there is a typo here.\r\nChanged the action in the comment. (committing -> preparing)\r\n\r\n> ~\r\n> \r\n> 13.\r\n> \r\n> + serialize_stream_start(stream_xid, false); \r\n> + stream_write_change(LOGICAL_REP_MSG_STREAM_START, &original_msg);\r\n> \r\n> - end_replication_step();\r\n> break;\r\n> \r\n> A spurious blank line is left before the break;\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 14. serialize_stream_stop\r\n> \r\n> + /* We must be in a valid transaction state */ \r\n> + Assert(IsTransactionState());\r\n> \r\n> The comment seems redundant. 
The code says the same.\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 17.\r\n> \r\n> + /*\r\n> + * No need to output the DEBUG message here in the parallel apply\r\n> + * worker as similar messages will be output when handling \r\n> + STREAM_STOP\r\n> + * message.\r\n> + */\r\n> + if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\r\n> elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\r\n> nchanges, path);\r\n> \r\n> Instead of saying what you are not doing (\"No need to... in output \r\n> apply worker\") wouldn't it make more sense to reverse it and say what \r\n> you are doing (\"Only log DEBUG messages for the leader apply worker \r\n> because ...\") and then the condition also becomes positive:\r\n> \r\n> if (am_leader_apply_worker())\r\n> {\r\n> ...\r\n> }\r\n\r\nRemoved this condition according to Amit's comment.\r\n\r\n> ~\r\n> \r\n> 18.\r\n> \r\n> + if (am_parallel_apply_worker() &&\r\n> + MyParallelShared->xact_state == PARALLEL_TRANS_FINISHED)\r\n> + goto done;\r\n> +\r\n> + /*\r\n> + * No need to output the DEBUG message here in the parallel apply\r\n> + * worker as similar messages will be output when handling \r\n> + STREAM_STOP\r\n> + * message.\r\n> + */\r\n> + if (!am_parallel_apply_worker() && nchanges % 1000 == 0)\r\n> elog(DEBUG1, \"replayed %d changes from file \\\"%s\\\"\",\r\n> nchanges, path);\r\n> }\r\n> \r\n> - BufFileClose(fd);\r\n> -\r\n> + stream_close_file();\r\n> pfree(buffer);\r\n> pfree(s2.data);\r\n> \r\n> +done:\r\n> elog(DEBUG1, \"replayed %d (all) changes from file \\\"%s\\\"\",\r\n> nchanges, path);\r\n> \r\n> Shouldn't that \"done:\" label be *above* the pfree's. 
Otherwise, those \r\n> are going to be skipped over by the \"goto done;\".\r\n\r\nAfter reconsidering, I think there is no need to 'pfree' these two variables here,\r\nbecause they are allocated in toplevel transaction's context and will be freed very soon.\r\nSo, I just removed these pfree().\r\n\r\n> ======\r\n> \r\n> src/include/replication/worker_internal.h\r\n> \r\n> 21. PartialFileSetState\r\n> \r\n> \r\n> + * State of fileset in leader apply worker.\r\n> + *\r\n> + * FS_BUSY means that the leader is serializing changes to the file. \r\n> +FS_READY\r\n> + * means that the leader has serialized all changes to the file and \r\n> +the file is\r\n> + * ready to be read by a parallel apply worker.\r\n> + */\r\n> +typedef enum PartialFileSetState\r\n> \r\n> \"ready to be read\" sounded a bit strange.\r\n> \r\n> SUGGESTION\r\n> ... to the file so it is now OK for a parallel apply worker to read it.\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 7 Dec 2022 03:01:35 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 1, 2022 at 7:17 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, December 1, 2022 3:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Nov 30, 2022 at 10:51 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, November 30, 2022 9:41 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Tuesday, November 29, 2022 8:34 PM Amit Kapila\n> > > > > Review comments on v53-0001*\n> > > >\n> > > > Attach the new version patch set.\n> > >\n> > > Sorry, there were some mistakes in the previous patch set.\n> > > Here is the correct V54 patch set. I also ran pgindent for the patch set.\n> > >\n> >\n> > Thank you for updating the patches. 
Here are random review comments for\n> > 0001 and 0002 patches.\n>\n> Thanks for the comments!\n>\n> >\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"logical replication parallel apply worker exited\n> > abnormally\"),\n> > errcontext(\"%s\", edata.context))); and\n> >\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"logical replication parallel apply worker exited\n> > because of subscription information change\")));\n> >\n> > I'm not sure ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is appropriate\n> > here. Given that parallel apply worker has already reported the error message\n> > with the error code, I think we don't need to set the errorcode for the logs\n> > from the leader process.\n> >\n> > Also, I'm not sure the term \"exited abnormally\" is appropriate since we use it\n> > when the server crashes for example. I think ERRORs reported here don't mean\n> > that in general.\n>\n> How about reporting \"xxx worker exited due to error\" ?\n\nSounds better to me.\n\n>\n> > ---\n> > if (am_parallel_apply_worker() && on_subinfo_change) {\n> > /*\n> > * If a parallel apply worker exits due to the subscription\n> > * information change, we notify the leader apply worker so that the\n> > * leader can report more meaningful message in time and restart the\n> > * logical replication.\n> > */\n> > pq_putmessage('X', NULL, 0);\n> > }\n> >\n> > and\n> >\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"logical replication parallel apply worker exited\n> > because of subscription information change\")));\n> >\n> > Do we really need an additional message in case of 'X'? 
When we call\n> > apply_worker_clean_exit with on_subinfo_change = true, we have reported the\n> > error message such as:\n> >\n> > ereport(LOG,\n> > (errmsg(\"logical replication parallel apply worker for subscription\n> > \\\"%s\\\" will stop because of a parameter change\",\n> > MySubscription->name)));\n> >\n> > I think that reporting a similar message from the leader might not be\n> > meaningful for users.\n>\n> The intention is to let leader report more meaningful message if a worker\n> exited due to subinfo change. Otherwise, the leader is likely to report an\n> error like \" lost connection ... to parallel apply worker\" when trying to send\n> data via shared memory if the worker exited. What do you think ?\n\nAgreed. But do we need to have the leader exit with an error in spite\nof the fact that the worker cleanly exits? If the leader exits with an\nerror, the subscription will be disabled if disable_on_error is true,\nright?\n\nAnd what do you think about the error code?\n\n>\n> > ---\n> > - if (options->proto.logical.streaming &&\n> > - PQserverVersion(conn->streamConn) >= 140000)\n> > - appendStringInfoString(&cmd, \", streaming 'on'\");\n> > + if (options->proto.logical.streaming_str)\n> > + appendStringInfo(&cmd, \", streaming '%s'\",\n> > +\n> > options->proto.logical.streaming_str);\n> >\n> > and\n> >\n> > + /*\n> > + * Assign the appropriate option value for streaming option\n> > according to\n> > + * the 'streaming' mode and the publisher's ability to\n> > support that mode.\n> > + */\n> > + if (server_version >= 160000 &&\n> > + MySubscription->stream == SUBSTREAM_PARALLEL)\n> > + {\n> > + options.proto.logical.streaming_str = pstrdup(\"parallel\");\n> > + MyLogicalRepWorker->parallel_apply = true;\n> > + }\n> > + else if (server_version >= 140000 &&\n> > + MySubscription->stream != SUBSTREAM_OFF)\n> > + {\n> > + options.proto.logical.streaming_str = pstrdup(\"on\");\n> > + MyLogicalRepWorker->parallel_apply = false;\n> > + }\n> > + else\n> > + 
{\n> > + options.proto.logical.streaming_str = NULL;\n> > + MyLogicalRepWorker->parallel_apply = false;\n> > + }\n> >\n> > This change moves the code of adjustment of the streaming option based on\n> > the publisher server version from libpqwalreceiver.c to worker.c.\n> > On the other hand, the similar logic for other parameters such as \"two_phase\"\n> > and \"origin\" are still done in libpqwalreceiver.c. How about passing\n> > MySubscription->stream via WalRcvStreamOptions and constructing a\n> > streaming option string in libpqrcv_startstreaming()?\n> > In ApplyWorkerMain(), we just need to set\n> > MyLogicalRepWorker->parallel_apply = true if (server_version >= 160000\n> > && MySubscription->stream == SUBSTREAM_PARALLEL). We won't need\n> > pstrdup for \"parallel\" and \"on\", and it's more consistent with other parameters.\n>\n> Thanks for the suggestion. I thought about the same idea before, but it seems\n> we would weed to introduce \" pg_subscription.h \" into libpqwalreceiver.c. The\n> libpqwalreceiver.c looks a like a common place. So I am not sure is it looks\n> better to expose the detail of streaming option to it.\n\nRight. It means that all enum parameters of WalRcvStreamOptions needs\nto be handled in the caller (e.g. worker.c etc) whereas other\nparameters are handled in libpqwalreceiver.c. 
It's not elegant but I\nhave no better idea for that.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Dec 2022 12:29:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 1, 2022 at 7:17 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > > ---\n> > > if (am_parallel_apply_worker() && on_subinfo_change) {\n> > > /*\n> > > * If a parallel apply worker exits due to the subscription\n> > > * information change, we notify the leader apply worker so that the\n> > > * leader can report more meaningful message in time and restart the\n> > > * logical replication.\n> > > */\n> > > pq_putmessage('X', NULL, 0);\n> > > }\n> > >\n> > > and\n> > >\n> > > ereport(ERROR,\n> > > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > errmsg(\"logical replication parallel apply worker exited\n> > > because of subscription information change\")));\n> > >\n> > > Do we really need an additional message in case of 'X'? When we call\n> > > apply_worker_clean_exit with on_subinfo_change = true, we have reported the\n> > > error message such as:\n> > >\n> > > ereport(LOG,\n> > > (errmsg(\"logical replication parallel apply worker for subscription\n> > > \\\"%s\\\" will stop because of a parameter change\",\n> > > MySubscription->name)));\n> > >\n> > > I think that reporting a similar message from the leader might not be\n> > > meaningful for users.\n> >\n> > The intention is to let leader report more meaningful message if a worker\n> > exited due to subinfo change. Otherwise, the leader is likely to report an\n> > error like \" lost connection ... to parallel apply worker\" when trying to send\n> > data via shared memory if the worker exited. 
What do you think ?\n>\n> Agreed. But do we need to have the leader exit with an error in spite\n> of the fact that the worker cleanly exits? If the leader exits with an\n> error, the subscription will be disabled if disable_on_error is true,\n> right?\n>\n\nRight, but the leader will anyway exit at some point either due to an\nERROR like \"lost connection ... to parallel worker\" or with a LOG\nlike: \"... will restart because of a parameter change\" but I see your\npoint. So, will it be better if we have a LOG message here and then\nproc_exit()? Do you have something else in mind for this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Dec 2022 09:58:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 1, 2022 at 7:17 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > > ---\n> > > > if (am_parallel_apply_worker() && on_subinfo_change) {\n> > > > /*\n> > > > * If a parallel apply worker exits due to the subscription\n> > > > * information change, we notify the leader apply worker so that the\n> > > > * leader can report more meaningful message in time and restart the\n> > > > * logical replication.\n> > > > */\n> > > > pq_putmessage('X', NULL, 0);\n> > > > }\n> > > >\n> > > > and\n> > > >\n> > > > ereport(ERROR,\n> > > > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > > > errmsg(\"logical replication parallel apply worker exited\n> > > > because of subscription information change\")));\n> > > >\n> > > > Do we really need an additional message in case of 'X'? 
When we call\n> > > > apply_worker_clean_exit with on_subinfo_change = true, we have reported the\n> > > > error message such as:\n> > > >\n> > > > ereport(LOG,\n> > > > (errmsg(\"logical replication parallel apply worker for subscription\n> > > > \\\"%s\\\" will stop because of a parameter change\",\n> > > > MySubscription->name)));\n> > > >\n> > > > I think that reporting a similar message from the leader might not be\n> > > > meaningful for users.\n> > >\n> > > The intention is to let leader report more meaningful message if a worker\n> > > exited due to subinfo change. Otherwise, the leader is likely to report an\n> > > error like \" lost connection ... to parallel apply worker\" when trying to send\n> > > data via shared memory if the worker exited. What do you think ?\n> >\n> > Agreed. But do we need to have the leader exit with an error in spite\n> > of the fact that the worker cleanly exits? If the leader exits with an\n> > error, the subscription will be disabled if disable_on_error is true,\n> > right?\n> >\n>\n> Right, but the leader will anyway exit at some point either due to an\n> ERROR like \"lost connection ... to parallel worker\" or with a LOG\n> like: \"... will restart because of a parameter change\" but I see your\n> point. So, will it be better if we have a LOG message here and then\n> proc_exit()? Do you have something else in mind for this?\n\nNo, I was thinking that too. It's better to write a LOG message and do\nproc_exit().\n\nRegarding the error \"lost connection ... to parallel worker\", it could\nstill happen depending on the timing even if the parallel worker\ncleanly exits due to parameter changes, right? 
If so, I'm concerned\nthat it could lead to disable the subscription unexpectedly if\ndisable_on_error is enabled.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Dec 2022 13:39:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Right, but the leader will anyway exit at some point either due to an\n> > ERROR like \"lost connection ... to parallel worker\" or with a LOG\n> > like: \"... will restart because of a parameter change\" but I see your\n> > point. So, will it be better if we have a LOG message here and then\n> > proc_exit()? Do you have something else in mind for this?\n>\n> No, I was thinking that too. It's better to write a LOG message and do\n> proc_exit().\n>\n> Regarding the error \"lost connection ... to parallel worker\", it could\n> still happen depending on the timing even if the parallel worker\n> cleanly exits due to parameter changes, right? 
If so, I'm concerned\n> that it could lead to disable the subscription unexpectedly if\n> disable_on_error is enabled.\n>\n\nIf we want to avoid this then I think we have the following options\n(a) parallel apply skips checking parameter change (b) parallel worker\nwon't exit on parameter change but will silently absorb the parameter\nand continue its processing; anyway, the leader will detect it and\nstop the worker for the parameter change\n\nAmong these, the advantage of (b) is that it will allow reflecting the\nparameter change (that doesn't need restart) in the parallel worker.\nDo you have any better idea to deal with this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Dec 2022 13:01:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 8:28 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Besides, I fixed a bug where there could still be messages left in memory\n> queue and the PA has started to apply spooled message.\n>\n\nFew comments on the recent changes in the patch:\n========================================\n1. It seems you need to set FS_SERIALIZE_DONE in\nstream_prepare/commit/abort. They are still directly setting the state\nas READY. Am, I missing something or you forgot to change it?\n\n2.\n case TRANS_PARALLEL_APPLY:\n pa_stream_abort(&abort_data);\n\n+ /*\n+ * Reset the stream_fd after aborting the toplevel transaction in\n+ * case the parallel apply worker is applying spooled messages\n+ */\n+ if (toplevel_xact)\n+ stream_fd = NULL;\n\nI think we can keep the handling of stream file the same in\nabort/commit/prepare code path.\n\n3. 
It is already pointed out by Peter that it is better to add some\ncomments in pa_spooled_messages() function that we won't be\nimmediately able to apply changes after the lock is released, it will\nbe done in the next cycle.\n\n4. Shall we rename FS_SERIALIZE as FS_SERIALIZE_IN_PROGRESS? That will\nappear consistent with FS_SERIALIZE_DONE.\n\n5. Comment improvements:\ndiff --git a/src/backend/replication/logical/worker.c\nb/src/backend/replication/logical/worker.c\nindex b26d587ae4..921d973863 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -1934,8 +1934,7 @@ apply_handle_stream_abort(StringInfo s)\n }\n\n /*\n- * Check if the passed fileno and offset are the last fileno and position of\n- * the fileset, and report an ERROR if not.\n+ * Ensure that the passed location is fileset's end.\n */\n static void\n ensure_last_message(FileSet *stream_fileset, TransactionId xid, int fileno,\n@@ -2084,9 +2083,9 @@ apply_spooled_messages(FileSet *stream_fileset,\nTransactionId xid,\n nchanges++;\n\n /*\n- * Break the loop if stream_fd is set to NULL which\nmeans the parallel\n- * apply worker has finished applying the transaction.\nThe parallel\n- * apply worker should have closed the file before committing.\n+ * It is possible the file has been closed because we\nhave processed\n+ * some transaction end message like stream_commit in\nwhich case that\n+ * must be the last message.\n */\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Dec 2022 16:18:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 5, 2022 at 1:29 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Sunday, December 4, 2022 7:17 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\n> >\n> > Thursday, December 1, 2022 8:40 PM Amit Kapila 
<amit.kapila16@gmail.com>\n> > wrote:\n> > > Some other comments:\n> > ...\n> > Attach the new version patch set which addressed most of the comments\n> > received so far except some comments being discussed[1].\n> > [1] https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n>\n> Attach a new version patch set which fixed a testcase failure on CFbot.\n\nHere are some comments on v56 0001, 0002 patches. Please ignore\ncomments if you already incorporated them in v57.\n\n+static void\n+ProcessParallelApplyInterrupts(void)\n+{\n+ CHECK_FOR_INTERRUPTS();\n+\n+ if (ShutdownRequestPending)\n+ {\n+ ereport(LOG,\n+ (errmsg(\"logical replication parallel\napply worker for subscrip\ntion \\\"%s\\\" has finished\",\n+ MySubscription->name)));\n+\n+ apply_worker_clean_exit(false);\n+ }\n+\n+ if (ConfigReloadPending)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ }\n+}\n\nI personally think that we don't need to have a function to do only\nthese few things.\n\n---\n+/* Disallow streaming in-progress transactions. 
*/\n+#define SUBSTREAM_OFF 'f'\n+\n+/*\n+ * Streaming in-progress transactions are written to a temporary file and\n+ * applied only after the transaction is committed on upstream.\n+ */\n+#define SUBSTREAM_ON 't'\n+\n+/*\n+ * Streaming in-progress transactions are applied immediately via a parallel\n+ * apply worker.\n+ */\n+#define SUBSTREAM_PARALLEL 'p'\n+\n\nWhile these names look good to me, we already have the following\nexisting values:\n\n*/\n#define LOGICALREP_TWOPHASE_STATE_DISABLED 'd'\n#define LOGICALREP_TWOPHASE_STATE_PENDING 'p'\n#define LOGICALREP_TWOPHASE_STATE_ENABLED 'e'\n\n/*\n* The subscription will request the publisher to\n* have any origin.\n*/\n#define LOGICALREP_ORIGIN_NONE \"none\"\n\n/*\n* The subscription will request the publisher to\n* of their origin.\n*/\n#define LOGICALREP_ORIGIN_ANY \"any\"\n\nShould we change the names to something like LOGICALREP_STREAM_PARALLEL?\n\n---\n+ * The lock graph for the above example will look as follows:\n+ * LA (waiting to acquire the lock on the unique index) -> PA (waiting to\n+ * acquire the lock on the remote transaction) -> LA\n\nand\n\n+ * The lock graph for the above example will look as follows:\n+ * LA (waiting to acquire the transaction lock) -> PA-2 (waiting to acquire the\n+ * lock due to unique index constraint) -> PA-1 (waiting to acquire the stream\n+ * lock) -> LA\n\n\"(waiting to acquire the lock on the remote transaction)\" in the first\nexample and \"(waiting to acquire the stream lock)\" in the second\nexample is the same meaning, right? 
If so, I think we should use\neither term for consistency.\n\n---\n+ bool write_abort_info = (data->streaming ==\nSUBSTREAM_PARALLEL);\n\nI think that instead of setting write_abort_info every time when\npgoutput_stream_abort() is called, we can set it once, probably in\nPGOutputData, at startup.\n\n---\nserver_version = walrcv_server_version(LogRepWorkerWalRcvConn);\noptions.proto.logical.proto_version =\n+ server_version >= 160000 ?\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n server_version >= 140000 ? LOGICALREP_PROTO_STREAM_VERSION_NUM :\n LOGICALREP_PROTO_VERSION_NUM;\n\nInstead of always using the new protocol version, I think we can use\nLOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n'parallel'. That way, we don't need to change protocol version check\nlogic in pgoutput.c and don't need to expose defGetStreamingMode().\nWhat do you think?\n\n---\nWhen max_parallel_apply_workers_per_subscription is changed to a value\nlower than the number of parallel workers running at that time, do we\nneed to stop extra workers?\n\n---\nIf a value of max_parallel_apply_workers_per_subscription is not\nsufficient, we get the LOG \"out of parallel apply workers\" every time\nwhen the apply worker doesn't launch a worker. But do we really need\nthis log? It seems not consistent with\nmax_sync_workers_per_subscription behavior. I think we can check if\nthe number of running parallel workers is less than\nmax_parallel_apply_workers_per_subscription before calling\nlogicalrep_worker_launch(). 
What do you think?\n\n---\n+ if (server_version >= 160000 &&\n+ MySubscription->stream == SUBSTREAM_PARALLEL)\n+ {\n+ options.proto.logical.streaming_str = pstrdup(\"parallel\");\n+ MyLogicalRepWorker->parallel_apply = true;\n+ }\n+ else if (server_version >= 140000 &&\n+ MySubscription->stream != SUBSTREAM_OFF)\n+ {\n+ options.proto.logical.streaming_str = pstrdup(\"on\");\n+ MyLogicalRepWorker->parallel_apply = false;\n+ }\n\nI think we don't need to use pstrdup().\n\n---\n- BeginTransactionBlock();\n- CommitTransactionCommand(); /* Completes the preceding Begin command. */\n+ if (!IsTransactionBlock())\n+ {\n+ BeginTransactionBlock();\n+ CommitTransactionCommand(); /* Completes the preceding\nBegin command. */\n+ }\n\nDo we need this change? In my environment, 'make check-world' passes\nwithout this change.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Dec 2022 20:51:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 7, 2022 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Right, but the leader will anyway exit at some point either due to an\n> > > ERROR like \"lost connection ... to parallel worker\" or with a LOG\n> > > like: \"... will restart because of a parameter change\" but I see your\n> > > point. So, will it be better if we have a LOG message here and then\n> > > proc_exit()? Do you have something else in mind for this?\n> >\n> > No, I was thinking that too. It's better to write a LOG message and do\n> > proc_exit().\n> >\n> > Regarding the error \"lost connection ... 
to parallel worker\", it could\n> > still happen depending on the timing even if the parallel worker\n> > cleanly exits due to parameter changes, right? If so, I'm concerned\n> > that it could lead to disable the subscription unexpectedly if\n> > disable_on_error is enabled.\n> >\n>\n> If we want to avoid this then I think we have the following options\n> (a) parallel apply skips checking parameter change (b) parallel worker\n> won't exit on parameter change but will silently absorb the parameter\n> and continue its processing; anyway, the leader will detect it and\n> stop the worker for the parameter change\n>\n> Among these, the advantage of (b) is that it will allow reflecting the\n> parameter change (that doesn't need restart) in the parallel worker.\n> Do you have any better idea to deal with this?\n\nI think (b) is better. We need to reflect the synchronous_commit\nparameter also in parallel workers in the worker pool.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Dec 2022 21:48:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Mon, Dec 5, 2022 at 1:29 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Sunday, December 4, 2022 7:17 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com>\r\n> > >\r\n> > > Thursday, December 1, 2022 8:40 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > Some other comments:\r\n> > > ...\r\n> > > Attach the new version patch set which addressed most of the comments\r\n> > > received so far except some comments being discussed[1].\r\n> > > [1]\r\n> https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C\r\n> 
8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n> >\r\n> > Attach a new version patch set which fixed a testcase failure on CFbot.\r\n> \r\n> Here are some comments on v56 0001, 0002 patches. Please ignore\r\n> comments if you already incorporated them in v57.\r\n\r\nThanks for the comments!\r\n\r\n> +static void\r\n> +ProcessParallelApplyInterrupts(void)\r\n> +{\r\n> + CHECK_FOR_INTERRUPTS();\r\n> +\r\n> + if (ShutdownRequestPending)\r\n> + {\r\n> + ereport(LOG,\r\n> + (errmsg(\"logical replication parallel\r\n> apply worker for subscrip\r\n> tion \\\"%s\\\" has finished\",\r\n> + MySubscription->name)));\r\n> +\r\n> + apply_worker_clean_exit(false);\r\n> + }\r\n> +\r\n> + if (ConfigReloadPending)\r\n> + {\r\n> + ConfigReloadPending = false;\r\n> + ProcessConfigFile(PGC_SIGHUP);\r\n> + }\r\n> +}\r\n> \r\n> I personally think that we don't need to have a function to do only\r\n> these few things.\r\n\r\nI thought that introduce a new function make the handling of worker specific\r\nInterrupts logic similar to other existing ones. Like:\r\nProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\r\npgarch.c ...\r\n\r\n> \r\n> Should we change the names to something like\r\n> LOGICALREP_STREAM_PARALLEL?\r\n\r\nAgreed, will change.\r\n\r\n> ---\r\n> + * The lock graph for the above example will look as follows:\r\n> + * LA (waiting to acquire the lock on the unique index) -> PA (waiting to\r\n> + * acquire the lock on the remote transaction) -> LA\r\n> \r\n> and\r\n> \r\n> + * The lock graph for the above example will look as follows:\r\n> + * LA (waiting to acquire the transaction lock) -> PA-2 (waiting to acquire the\r\n> + * lock due to unique index constraint) -> PA-1 (waiting to acquire the stream\r\n> + * lock) -> LA\r\n> \r\n> \"(waiting to acquire the lock on the remote transaction)\" in the first\r\n> example and \"(waiting to acquire the stream lock)\" in the second\r\n> example is the same meaning, right? 
If so, I think we should use\r\n> either term for consistency.\r\n\r\nWill change.\r\n\r\n> ---\r\n> + bool write_abort_info = (data->streaming ==\r\n> SUBSTREAM_PARALLEL);\r\n> \r\n> I think that instead of setting write_abort_info every time when\r\n> pgoutput_stream_abort() is called, we can set it once, probably in\r\n> PGOutputData, at startup.\r\n\r\nI thought that since we already have a \"stream\" flag in PGOutputData, I am not\r\nsure if it would be better to introduce another flag for the same option.\r\n\r\n\r\n> ---\r\n> server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\r\n> options.proto.logical.proto_version =\r\n> + server_version >= 160000 ?\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\r\n> server_version >= 150000 ?\r\n> LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\r\n> server_version >= 140000 ?\r\n> LOGICALREP_PROTO_STREAM_VERSION_NUM :\r\n> LOGICALREP_PROTO_VERSION_NUM;\r\n> \r\n> Instead of always using the new protocol version, I think we can use\r\n> LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\r\n> 'parallel'. That way, we don't need to change protocol version check\r\n> logic in pgoutput.c and don't need to expose defGetStreamingMode().\r\n> What do you think?\r\n\r\nI think that some users can also use the new version number when trying to get\r\nchanges (via pg_logical_slot_peek_binary_changes or other functions), so I feel\r\nleaving the check for the new version number seems fine.\r\n\r\nBesides, I feel even if we don't use the new version number, we still need to use\r\ndefGetStreamingMode to check if parallel mode is in use as we need to send\r\nabort_lsn when parallel is in use. I might be missing something, sorry for\r\nthat. 
Can you please explain the idea a bit ?\r\n\r\n> ---\r\n> When max_parallel_apply_workers_per_subscription is changed to a value\r\n> lower than the number of parallel workers running at that time, do we\r\n> need to stop extra workers?\r\n\r\nI think we can do this, like adding a check in the main loop of leader worker, and\r\ncheck every time after reloading the conf. OTOH, we will also stop the worker after\r\nfinishing a transaction, so I am slightly not sure whether we need to add another check logic here.\r\nBut I am fine to add it if you think it would be better.\r\n\r\n\r\n> ---\r\n> If a value of max_parallel_apply_workers_per_subscription is not\r\n> sufficient, we get the LOG \"out of parallel apply workers\" every time\r\n> when the apply worker doesn't launch a worker. But do we really need\r\n> this log? It seems not consistent with\r\n> max_sync_workers_per_subscription behavior. I think we can check if\r\n> the number of running parallel workers is less than\r\n> max_parallel_apply_workers_per_subscription before calling\r\n> logicalrep_worker_launch(). What do you think?\r\n> \r\n> ---\r\n> + if (server_version >= 160000 &&\r\n> + MySubscription->stream == SUBSTREAM_PARALLEL)\r\n> + {\r\n> + options.proto.logical.streaming_str = pstrdup(\"parallel\");\r\n> + MyLogicalRepWorker->parallel_apply = true;\r\n> + }\r\n> + else if (server_version >= 140000 &&\r\n> + MySubscription->stream != SUBSTREAM_OFF)\r\n> + {\r\n> + options.proto.logical.streaming_str = pstrdup(\"on\");\r\n> + MyLogicalRepWorker->parallel_apply = false;\r\n> + }\r\n> \r\n> I think we don't need to use pstrdup().\r\n\r\nWill remove.\r\n\r\n> ---\r\n> - BeginTransactionBlock();\r\n> - CommitTransactionCommand(); /* Completes the preceding Begin\r\n> command. */\r\n> + if (!IsTransactionBlock())\r\n> + {\r\n> + BeginTransactionBlock();\r\n> + CommitTransactionCommand(); /* Completes the preceding\r\n> Begin command. */\r\n> + }\r\n> \r\n> Do we need this change? 
In my environment, 'make check-world' passes\r\n> without this change.\r\n\r\nWe will start a transaction block when defining the savepoint and we will get\r\na warning[1] if enter this function later. I think there would be some WARNs in\r\nthe log of \" 022_twophase_cascade\" test if we remove this check.\r\n\r\n[1] WARN: there is already a transaction in progress\"\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 7 Dec 2022 13:03:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Mon, Dec 5, 2022 at 1:29 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Sunday, December 4, 2022 7:17 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com>\r\n> > >\r\n> > > Thursday, December 1, 2022 8:40 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > Some other comments:\r\n> > > ...\r\n> > > Attach the new version patch set which addressed most of the comments\r\n> > > received so far except some comments being discussed[1].\r\n> > > [1]\r\n> https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C\r\n> 8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n> >\r\n> > Attach a new version patch set which fixed a testcase failure on CFbot.\r\n> \r\n> ---\r\n> If a value of max_parallel_apply_workers_per_subscription is not\r\n> sufficient, we get the LOG \"out of parallel apply workers\" every time\r\n> when the apply worker doesn't launch a worker. But do we really need\r\n> this log? It seems not consistent with\r\n> max_sync_workers_per_subscription behavior. I think we can check if\r\n> the number of running parallel workers is less than\r\n> max_parallel_apply_workers_per_subscription before calling\r\n> logicalrep_worker_launch(). 
What do you think?\r\n\r\n(Sorry, I missed this comment in last email)\r\n\r\nI personally feel giving a hint might help users to realize that the\r\nmax_parallel_applyxxx is not enough for the current workload and then they can\r\nadjust the parameter. Otherwise, users might not have an easy way to check if more\r\nworkers are needed.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 7 Dec 2022 13:13:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 6:33 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> > ---\n> > When max_parallel_apply_workers_per_subscription is changed to a value\n> > lower than the number of parallel workers running at that time, do we\n> > need to stop extra workers?\n>\n> I think we can do this, like adding a check in the main loop of leader worker, and\n> check every time after reloading the conf. OTOH, we will also stop the worker after\n> finishing a transaction, so I am slightly not sure whether we need to add another check logic here.\n> But I am fine to add it if you think it would be better.\n>\n\nI think this is tricky because it is possible that all active workers\nare busy with long-running transactions, so, I think stopping them\ndoesn't make sense. I think as long as we are freeing them after use\nit seems okay to me. OTOH, each time after finishing the transaction,\nwe can stop the workers, if the workers in the free pool exceed\n'max_parallel_apply_workers_per_subscription'. 
I don't know if it is\nworth.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Dec 2022 10:21:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, December 7, 2022 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Dec 7, 2022 at 8:28 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Besides, I fixed a bug where there could still be messages left in\r\n> > memory queue and the PA has started to apply spooled message.\r\n> >\r\n> \r\n> Few comments on the recent changes in the patch:\r\n> ========================================\r\n> 1. It seems you need to set FS_SERIALIZE_DONE in\r\n> stream_prepare/commit/abort. They are still directly setting the state as\r\n> READY. Am, I missing something or you forgot to change it?\r\n\r\nIt's my miss, changed.\r\n\r\n> 2.\r\n> case TRANS_PARALLEL_APPLY:\r\n> pa_stream_abort(&abort_data);\r\n> \r\n> + /*\r\n> + * Reset the stream_fd after aborting the toplevel transaction in\r\n> + * case the parallel apply worker is applying spooled messages */ if\r\n> + (toplevel_xact) stream_fd = NULL;\r\n> \r\n> I think we can keep the handling of stream file the same in\r\n> abort/commit/prepare code path.\r\n\r\nChanged.\r\n\r\n> 3. It is already pointed out by Peter that it is better to add some comments in\r\n> pa_spooled_messages() function that we won't be immediately able to apply\r\n> changes after the lock is released, it will be done in the next cycle.\r\n\r\nAdded.\r\n\r\n> 4. Shall we rename FS_SERIALIZE as FS_SERIALIZE_IN_PROGRESS? That will\r\n> appear consistent with FS_SERIALIZE_DONE.\r\n\r\nAgreed, changed.\r\n\r\n> 5. 
Comment improvements:\r\n> diff --git a/src/backend/replication/logical/worker.c\r\n> b/src/backend/replication/logical/worker.c\r\n> index b26d587ae4..921d973863 100644\r\n> --- a/src/backend/replication/logical/worker.c\r\n> +++ b/src/backend/replication/logical/worker.c\r\n> @@ -1934,8 +1934,7 @@ apply_handle_stream_abort(StringInfo s) }\r\n> \r\n> /*\r\n> - * Check if the passed fileno and offset are the last fileno and position of\r\n> - * the fileset, and report an ERROR if not.\r\n> + * Ensure that the passed location is fileset's end.\r\n> */\r\n> static void\r\n> ensure_last_message(FileSet *stream_fileset, TransactionId xid, int fileno, @@\r\n> -2084,9 +2083,9 @@ apply_spooled_messages(FileSet *stream_fileset,\r\n> TransactionId xid,\r\n> nchanges++;\r\n> \r\n> /*\r\n> - * Break the loop if stream_fd is set to NULL which\r\n> means the parallel\r\n> - * apply worker has finished applying the transaction.\r\n> The parallel\r\n> - * apply worker should have closed the file before committing.\r\n> + * It is possible the file has been closed because we\r\n> have processed\r\n> + * some transaction end message like stream_commit in\r\n> which case that\r\n> + * must be the last message.\r\n> */\r\n\r\nMerged, thanks.\r\n\r\nAttach the new version patch which addressed all above comments and part of\r\ncomments from[1] except some comments that are being discussed.\r\n\r\nApart from above, according to the comment from Amit and Sawada-san[2], the new\r\nversion patch won't stop the parallel worker due to subscription parameter\r\nchange, it will absorb the change instead, and the leader will anyway detect\r\nthe parameter change and stop all workers later.\r\n\r\nBased on this, I also removed the maybe_reread_subscription() call in parallel\r\napply worker's main loop, because we need to make sure we won't update the local\r\nsubscription parameter in the middle of the transaction. 
And we will call\r\nmaybe_reread_subscription() before starting a transaction in the parallel apply\r\nworker anyway (in maybe_reread_subscription()), so removing that check is fine and\r\ncan save some code.\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoCZ3i9w1Rz-81Lv1QB%2BJGP60Ypiom4%2BwM9eP3aQTx0STQ%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAD21AoAzYstJVM0nMVnXZoeYamqD2j92DkWVH%3DYbGtA4yzy19A%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 8 Dec 2022 07:07:15 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 8, 2022 at 1:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 6:33 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > > ---\n> > > When max_parallel_apply_workers_per_subscription is changed to a value\n> > > lower than the number of parallel workers running at that time, do we\n> > > need to stop extra workers?\n> >\n> > I think we can do this, like adding a check in the main loop of leader worker, and\n> > check every time after reloading the conf. OTOH, we will also stop the worker after\n> > finishing a transaction, so I am slightly not sure whether we need to add another check logic here.\n> > But I am fine to add it if you think it would be better.\n> >\n>\n> I think this is tricky because it is possible that all active workers\n> are busy with long-running transactions, so, I think stopping them\n> doesn't make sense.\n\nRight, we should not stop running parallel workers.\n\n> I think as long as we are freeing them after use\n> it seems okay to me. 
OTOH, each time after finishing the transaction,\n> we can stop the workers, if the workers in the free pool exceed\n> 'max_parallel_apply_workers_per_subscription'.\n\nOr the apply leader worker can check that after reloading the config file.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Dec 2022 16:10:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 at 1:29 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Sunday, December 4, 2022 7:17 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com>\n> > > >\n> > > > Thursday, December 1, 2022 8:40 PM Amit Kapila\n> > <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > > Some other comments:\n> > > > ...\n> > > > Attach the new version patch set which addressed most of the comments\n> > > > received so far except some comments being discussed[1].\n> > > > [1]\n> > https://www.postgresql.org/message-id/OS0PR01MB57167BF64FC0891734C\n> > 8E81A94149%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n> > >\n> > > Attach a new version patch set which fixed a testcase failure on CFbot.\n> >\n> > Here are some comments on v56 0001, 0002 patches. 
Please ignore\n> > comments if you already incorporated them in v57.\n>\n> Thanks for the comments!\n>\n> > +static void\n> > +ProcessParallelApplyInterrupts(void)\n> > +{\n> > + CHECK_FOR_INTERRUPTS();\n> > +\n> > + if (ShutdownRequestPending)\n> > + {\n> > + ereport(LOG,\n> > + (errmsg(\"logical replication parallel\n> > apply worker for subscrip\n> > tion \\\"%s\\\" has finished\",\n> > + MySubscription->name)));\n> > +\n> > + apply_worker_clean_exit(false);\n> > + }\n> > +\n> > + if (ConfigReloadPending)\n> > + {\n> > + ConfigReloadPending = false;\n> > + ProcessConfigFile(PGC_SIGHUP);\n> > + }\n> > +}\n> >\n> > I personally think that we don't need to have a function to do only\n> > these few things.\n>\n> I thought that introduce a new function make the handling of worker specific\n> Interrupts logic similar to other existing ones. Like:\n> ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> pgarch.c ...\n\nI think the difference from them is that there is only one place to\ncall ProcessParallelApplyInterrupts().\n\n>\n> >\n> > Should we change the names to something like\n> > LOGICALREP_STREAM_PARALLEL?\n>\n> Agreed, will change.\n>\n> > ---\n> > + * The lock graph for the above example will look as follows:\n> > + * LA (waiting to acquire the lock on the unique index) -> PA (waiting to\n> > + * acquire the lock on the remote transaction) -> LA\n> >\n> > and\n> >\n> > + * The lock graph for the above example will look as follows:\n> > + * LA (waiting to acquire the transaction lock) -> PA-2 (waiting to acquire the\n> > + * lock due to unique index constraint) -> PA-1 (waiting to acquire the stream\n> > + * lock) -> LA\n> >\n> > \"(waiting to acquire the lock on the remote transaction)\" in the first\n> > example and \"(waiting to acquire the stream lock)\" in the second\n> > example is the same meaning, right? 
If so, I think we should use\n> > either term for consistency.\n>\n> Will change.\n>\n> > ---\n> > + bool write_abort_info = (data->streaming ==\n> > SUBSTREAM_PARALLEL);\n> >\n> > I think that instead of setting write_abort_info every time when\n> > pgoutput_stream_abort() is called, we can set it once, probably in\n> > PGOutputData, at startup.\n>\n> I thought that since we already have a \"stream\" flag in PGOutputData, I am not\n> sure if it would be better to introduce another flag for the same option.\n\nI see your point. Another way is to have it as a static variable like\npublish_no_origin. But since It's trivial change I'm fine also with\nthe current code.\n\n>\n> > ---\n> > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > options.proto.logical.proto_version =\n> > + server_version >= 160000 ?\n> > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > server_version >= 150000 ?\n> > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > server_version >= 140000 ?\n> > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > LOGICALREP_PROTO_VERSION_NUM;\n> >\n> > Instead of always using the new protocol version, I think we can use\n> > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > 'parallel'. That way, we don't need to change protocl version check\n> > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > What do you think?\n>\n> I think that some user can also use the new version number when trying to get\n> changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> leave the check for new version number seems fine.\n>\n> Besides, I feel even if we don't use new version number, we still need to use\n> defGetStreamingMode to check if parallel mode in used as we need to send\n> abort_lsn when parallel is in used. I might be missing something, sorry for\n> that. 
Can you please explain the idea a bit ?\n\nMy idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n(server_version >= 160000 && MySubscription->stream ==\nSUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\nLOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n160000. That way, in pgoutput.c, we can send abort_lsn if the protocol\nversion is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\nto send \"streaming = parallel\" to the publisher since the publisher\ncan decide whether or not to send abort_lsn based on the protocol\nversion (still needs to send \"streaming = on\" though). I might be\nmissing something.\n\nMy question came from the fact that the difference between\nLOGICALREP_PROTO_TWOPHASE_VERSION_NUM and\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM is just whether or not to\nsend abort_lsn and there are two knobs to control that. IIUC even if\nwe use the new protocol version, the data actually sent during logical\nreplication are the same as the previous protocol version if streaming\nis not 'parallel'. So I thought that we do either not send 'parallel'\nto the publisher (i.e., send abort_lsn based on the protocol version)\nor not introduce a new protocol version (i.e. send abort_lsn based on\nthe streaming option).\n\n>\n> > ---\n> > When max_parallel_apply_workers_per_subscription is changed to a value\n> > lower than the number of parallel worker running at that time, do we\n> > need to stop extra workers?\n>\n> I think we can do this, like adding a check in the main loop of leader worker, and\n> check every time after reloading the conf. 
OTOH, we will also stop the worker after\n> finishing a transaction, so I am slightly not sure do we need to add another check logic here.\n> But I am fine to add it if you think it would be better.\n>\n>\n> > ---\n> > If a value of max_parallel_apply_workers_per_subscription is not\n> > sufficient, we get the LOG \"out of parallel apply workers\" every time\n> > when the apply worker doesn't launch a worker. But do we really need\n> > this log? It seems not consistent with\n> > max_sync_workers_per_subscription behavior. I think we can check if\n> > the number of running parallel workers is less than\n> > max_parallel_apply_workers_per_subscription before calling\n> > logicalrep_worker_launch(). What do you think?\n> >\n> > ---\n> > + if (server_version >= 160000 &&\n> > + MySubscription->stream == SUBSTREAM_PARALLEL)\n> > + {\n> > + options.proto.logical.streaming_str = pstrdup(\"parallel\");\n> > + MyLogicalRepWorker->parallel_apply = true;\n> > + }\n> > + else if (server_version >= 140000 &&\n> > + MySubscription->stream != SUBSTREAM_OFF)\n> > + {\n> > + options.proto.logical.streaming_str = pstrdup(\"on\");\n> > + MyLogicalRepWorker->parallel_apply = false;\n> > + }\n> >\n> > I think we don't need to use pstrdup().\n>\n> Will remove.\n>\n> > ---\n> > - BeginTransactionBlock();\n> > - CommitTransactionCommand(); /* Completes the preceding Begin\n> > command. */\n> > + if (!IsTransactionBlock())\n> > + {\n> > + BeginTransactionBlock();\n> > + CommitTransactionCommand(); /* Completes the preceding\n> > Begin command. */\n> > + }\n> >\n> > Do we need this change? In my environment, 'make check-world' passes\n> > without this change.\n>\n> We will start a transaction block when defining the savepoint and we will get\n> a warning[1] if enter this function later. 
I think there would be some WARNs in\n> the log of \" 022_twophase_cascade\" test if we remove this check.\n\nThanks, I understood.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Dec 2022 16:11:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 8, 2022 at 12:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> >\n> > > +static void\n> > > +ProcessParallelApplyInterrupts(void)\n> > > +{\n> > > + CHECK_FOR_INTERRUPTS();\n> > > +\n> > > + if (ShutdownRequestPending)\n> > > + {\n> > > + ereport(LOG,\n> > > + (errmsg(\"logical replication parallel\n> > > apply worker for subscrip\n> > > tion \\\"%s\\\" has finished\",\n> > > + MySubscription->name)));\n> > > +\n> > > + apply_worker_clean_exit(false);\n> > > + }\n> > > +\n> > > + if (ConfigReloadPending)\n> > > + {\n> > > + ConfigReloadPending = false;\n> > > + ProcessConfigFile(PGC_SIGHUP);\n> > > + }\n> > > +}\n> > >\n> > > I personally think that we don't need to have a function to do only\n> > > these few things.\n> >\n> > I thought that introduce a new function make the handling of worker specific\n> > Interrupts logic similar to other existing ones. 
Like:\n> > ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> > pgarch.c ...\n>\n> I think the difference from them is that there is only one place to\n> call ProcessParallelApplyInterrupts().\n>\n\nBut I feel it is better to isolate this code in a separate function.\nWhat if we decide to extend it further by having some logic to stop\nworkers after reloading of config?\n\n> >\n> > > ---\n> > > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > > options.proto.logical.proto_version =\n> > > + server_version >= 160000 ?\n> > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > > server_version >= 150000 ?\n> > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > > server_version >= 140000 ?\n> > > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > > LOGICALREP_PROTO_VERSION_NUM;\n> > >\n> > > Instead of always using the new protocol version, I think we can use\n> > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > > 'parallel'. That way, we don't need to change protocl version check\n> > > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > > What do you think?\n> >\n> > I think that some user can also use the new version number when trying to get\n> > changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> > leave the check for new version number seems fine.\n> >\n> > Besides, I feel even if we don't use new version number, we still need to use\n> > defGetStreamingMode to check if parallel mode in used as we need to send\n> > abort_lsn when parallel is in used. I might be missing something, sorry for\n> > that. Can you please explain the idea a bit ?\n>\n> My idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n> (server_version >= 160000 && MySubscription->stream ==\n> SUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\n> LOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n> 160000. 
That way, in pgoutput.c, we can send abort_lsn if the protocol\n> version is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\n> to send \"streaming = parallel\" to the publisher since the publisher\n> can decide whether or not to send abort_lsn based on the protocol\n> version (still needs to send \"streaming = on\" though). I might be\n> missing something.\n>\n\nWhat if we decide to send some more additional information as part of\nanother patch like we are discussing in the thread [1]? Now, we won't\nbe able to decide the version number based on just the streaming\noption. Also, in such a case, even for\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, it may not be a good\nidea to send additional abort information unless the user has used the\nstreaming=parallel option.\n\n[1] - https://www.postgresql.org/message-id/CAGPVpCRWEVhXa7ovrhuSQofx4to7o22oU9iKtrOgAOtz_%3DY6vg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Dec 2022 13:12:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 8, 2022 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 8, 2022 at 12:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > >\n> > > > +static void\n> > > > +ProcessParallelApplyInterrupts(void)\n> > > > +{\n> > > > + CHECK_FOR_INTERRUPTS();\n> > > > +\n> > > > + if (ShutdownRequestPending)\n> > > > + {\n> > > > + ereport(LOG,\n> > > > + (errmsg(\"logical replication parallel\n> > > > apply worker for subscrip\n> > > > tion \\\"%s\\\" has finished\",\n> > > > + MySubscription->name)));\n> > > > +\n> > > > + apply_worker_clean_exit(false);\n> > > > + }\n> > > > +\n> > > > + if (ConfigReloadPending)\n> > > > + {\n> > > > + ConfigReloadPending 
= false;\n> > > > + ProcessConfigFile(PGC_SIGHUP);\n> > > > + }\n> > > > +}\n> > > >\n> > > > I personally think that we don't need to have a function to do only\n> > > > these few things.\n> > >\n> > > I thought that introduce a new function make the handling of worker specific\n> > > Interrupts logic similar to other existing ones. Like:\n> > > ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> > > pgarch.c ...\n> >\n> > I think the difference from them is that there is only one place to\n> > call ProcessParallelApplyInterrupts().\n> >\n>\n> But I feel it is better to isolate this code in a separate function.\n> What if we decide to extend it further by having some logic to stop\n> workers after reloading of config?\n\nI think we can separate the function at that time. But let's keep the\ncurrent code as you and Hou agree with the current code. I'm not going\nto insist on that.\n\n>\n> > >\n> > > > ---\n> > > > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > > > options.proto.logical.proto_version =\n> > > > + server_version >= 160000 ?\n> > > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > > > server_version >= 150000 ?\n> > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > > > server_version >= 140000 ?\n> > > > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > > > LOGICALREP_PROTO_VERSION_NUM;\n> > > >\n> > > > Instead of always using the new protocol version, I think we can use\n> > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > > > 'parallel'. 
That way, we don't need to change protocl version check\n> > > > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > > > What do you think?\n> > >\n> > > I think that some user can also use the new version number when trying to get\n> > > changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> > > leave the check for new version number seems fine.\n> > >\n> > > Besides, I feel even if we don't use new version number, we still need to use\n> > > defGetStreamingMode to check if parallel mode in used as we need to send\n> > > abort_lsn when parallel is in used. I might be missing something, sorry for\n> > > that. Can you please explain the idea a bit ?\n> >\n> > My idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n> > (server_version >= 160000 && MySubscription->stream ==\n> > SUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\n> > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n> > 160000. That way, in pgoutput.c, we can send abort_lsn if the protocol\n> > version is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\n> > to send \"streaming = parallel\" to the publisher since the publisher\n> > can decide whether or not to send abort_lsn based on the protocol\n> > version (still needs to send \"streaming = on\" though). I might be\n> > missing something.\n> >\n>\n> What if we decide to send some more additional information as part of\n> another patch like we are discussing in the thread [1]? Now, we won't\n> be able to decide the version number based on just the streaming\n> option. Also, in such a case, even for\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, it may not be a good\n> idea to send additional abort information unless the user has used the\n> streaming=parallel option.\n\nIf we're going to send the additional information, it makes sense to\nsend streaming=parallel. 
But the next question came to me is why do we\nneed to increase the protocol version for parallel apply feature? If\nsending the additional information is also controlled by an option\nlike \"streaming\", we can decide what we send based on these options,\nno?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Dec 2022 17:43:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 8, 2022 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 8, 2022 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 8, 2022 at 12:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > >\n> > > > > +static void\n> > > > > +ProcessParallelApplyInterrupts(void)\n> > > > > +{\n> > > > > + CHECK_FOR_INTERRUPTS();\n> > > > > +\n> > > > > + if (ShutdownRequestPending)\n> > > > > + {\n> > > > > + ereport(LOG,\n> > > > > + (errmsg(\"logical replication parallel\n> > > > > apply worker for subscrip\n> > > > > tion \\\"%s\\\" has finished\",\n> > > > > + MySubscription->name)));\n> > > > > +\n> > > > > + apply_worker_clean_exit(false);\n> > > > > + }\n> > > > > +\n> > > > > + if (ConfigReloadPending)\n> > > > > + {\n> > > > > + ConfigReloadPending = false;\n> > > > > + ProcessConfigFile(PGC_SIGHUP);\n> > > > > + }\n> > > > > +}\n> > > > >\n> > > > > I personally think that we don't need to have a function to do only\n> > > > > these few things.\n> > > >\n> > > > I thought that introduce a new function make the handling of worker specific\n> > > > Interrupts logic similar to other existing ones. 
Like:\n> > > > ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> > > > pgarch.c ...\n> > >\n> > > I think the difference from them is that there is only one place to\n> > > call ProcessParallelApplyInterrupts().\n> > >\n> >\n> > But I feel it is better to isolate this code in a separate function.\n> > What if we decide to extend it further by having some logic to stop\n> > workers after reloading of config?\n>\n> I think we can separate the function at that time. But let's keep the\n> current code as you and Hou agree with the current code. I'm not going\n> to insist on that.\n>\n> >\n> > > >\n> > > > > ---\n> > > > > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > > > > options.proto.logical.proto_version =\n> > > > > + server_version >= 160000 ?\n> > > > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > > > > server_version >= 150000 ?\n> > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > > > > server_version >= 140000 ?\n> > > > > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > > > > LOGICALREP_PROTO_VERSION_NUM;\n> > > > >\n> > > > > Instead of always using the new protocol version, I think we can use\n> > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > > > > 'parallel'. That way, we don't need to change protocl version check\n> > > > > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > > > > What do you think?\n> > > >\n> > > > I think that some user can also use the new version number when trying to get\n> > > > changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> > > > leave the check for new version number seems fine.\n> > > >\n> > > > Besides, I feel even if we don't use new version number, we still need to use\n> > > > defGetStreamingMode to check if parallel mode in used as we need to send\n> > > > abort_lsn when parallel is in used. I might be missing something, sorry for\n> > > > that. 
Can you please explain the idea a bit ?\n> > >\n> > > My idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n> > > (server_version >= 160000 && MySubscription->stream ==\n> > > SUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\n> > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n> > > 160000. That way, in pgoutput.c, we can send abort_lsn if the protocol\n> > > version is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\n> > > to send \"streaming = parallel\" to the publisher since the publisher\n> > > can decide whether or not to send abort_lsn based on the protocol\n> > > version (still needs to send \"streaming = on\" though). I might be\n> > > missing something.\n> > >\n> >\n> > What if we decide to send some more additional information as part of\n> > another patch like we are discussing in the thread [1]? Now, we won't\n> > be able to decide the version number based on just the streaming\n> > option. Also, in such a case, even for\n> > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, it may not be a good\n> > idea to send additional abort information unless the user has used the\n> > streaming=parallel option.\n>\n> If we're going to send the additional information, it makes sense to\n> send streaming=parallel. But the next question came to me is why do we\n> need to increase the protocol version for parallel apply feature? If\n> sending the additional information is also controlled by an option\n> like \"streaming\", we can decide what we send based on these options,\n> no?\n>\n\nAFAIK the protocol version defines what protocol message bytes are\ntransmitted on the wire. So I thought the protocol version should\n*always* be updated whenever the message format changes. 
In other\nwords, I don't think we ought to be transmitting different protocol\nmessage formats unless it is a different protocol version.\n\nWhether the pub/sub implementation actually needs to check that\nprotocol version or whether we happen to have some alternative knob we\ncan check doesn't change what the protocol version is supposed to\nmean. And the PGDOCS [1] and [2] currently have clear field notes\nabout when those fields are present (e.g. \"This field is available\nsince protocol version XXX\"), but if hypothetically you don't change\nthe protocol version for some new fields then now the message format\nbecomes tied to the built-in implementation of pub/sub -- now what\nfield note will you say instead to explain that?\n\n------\n[1] https://www.postgresql.org/docs/current/protocol-logical-replication.html\n[2] https://www.postgresql.org/docs/current/protocol-logicalrep-message-formats.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 9 Dec 2022 13:15:38 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 9, 2022 at 7:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Dec 8, 2022 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 8, 2022 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 8, 2022 at 12:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > >\n> > > > > > +static void\n> > > > > > +ProcessParallelApplyInterrupts(void)\n> > > > > > +{\n> > > > > > + CHECK_FOR_INTERRUPTS();\n> > > > > > +\n> > > > > > + if (ShutdownRequestPending)\n> > > > > > + {\n> > > > > > + ereport(LOG,\n> > > > > > + (errmsg(\"logical replication parallel\n> > > > > > apply 
worker for subscrip\n> > > > > > tion \\\"%s\\\" has finished\",\n> > > > > > + MySubscription->name)));\n> > > > > > +\n> > > > > > + apply_worker_clean_exit(false);\n> > > > > > + }\n> > > > > > +\n> > > > > > + if (ConfigReloadPending)\n> > > > > > + {\n> > > > > > + ConfigReloadPending = false;\n> > > > > > + ProcessConfigFile(PGC_SIGHUP);\n> > > > > > + }\n> > > > > > +}\n> > > > > >\n> > > > > > I personally think that we don't need to have a function to do only\n> > > > > > these few things.\n> > > > >\n> > > > > I thought that introduce a new function make the handling of worker specific\n> > > > > Interrupts logic similar to other existing ones. Like:\n> > > > > ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> > > > > pgarch.c ...\n> > > >\n> > > > I think the difference from them is that there is only one place to\n> > > > call ProcessParallelApplyInterrupts().\n> > > >\n> > >\n> > > But I feel it is better to isolate this code in a separate function.\n> > > What if we decide to extend it further by having some logic to stop\n> > > workers after reloading of config?\n> >\n> > I think we can separate the function at that time. But let's keep the\n> > current code as you and Hou agree with the current code. I'm not going\n> > to insist on that.\n> >\n> > >\n> > > > >\n> > > > > > ---\n> > > > > > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > > > > > options.proto.logical.proto_version =\n> > > > > > + server_version >= 160000 ?\n> > > > > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > > > > > server_version >= 150000 ?\n> > > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > > > > > server_version >= 140000 ?\n> > > > > > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > > > > > LOGICALREP_PROTO_VERSION_NUM;\n> > > > > >\n> > > > > > Instead of always using the new protocol version, I think we can use\n> > > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > > > > > 'parallel'. 
That way, we don't need to change protocl version check\n> > > > > > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > > > > > What do you think?\n> > > > >\n> > > > > I think that some user can also use the new version number when trying to get\n> > > > > changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> > > > > leave the check for new version number seems fine.\n> > > > >\n> > > > > Besides, I feel even if we don't use new version number, we still need to use\n> > > > > defGetStreamingMode to check if parallel mode in used as we need to send\n> > > > > abort_lsn when parallel is in used. I might be missing something, sorry for\n> > > > > that. Can you please explain the idea a bit ?\n> > > >\n> > > > My idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n> > > > (server_version >= 160000 && MySubscription->stream ==\n> > > > SUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\n> > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n> > > > 160000. That way, in pgoutput.c, we can send abort_lsn if the protocol\n> > > > version is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\n> > > > to send \"streaming = parallel\" to the publisher since the publisher\n> > > > can decide whether or not to send abort_lsn based on the protocol\n> > > > version (still needs to send \"streaming = on\" though). I might be\n> > > > missing something.\n> > > >\n> > >\n> > > What if we decide to send some more additional information as part of\n> > > another patch like we are discussing in the thread [1]? Now, we won't\n> > > be able to decide the version number based on just the streaming\n> > > option. 
Also, in such a case, even for\n> > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, it may not be a good\n> > > idea to send additional abort information unless the user has used the\n> > > streaming=parallel option.\n> >\n> > If we're going to send the additional information, it makes sense to\n> > send streaming=parallel. But the next question came to me is why do we\n> > need to increase the protocol version for parallel apply feature? If\n> > sending the additional information is also controlled by an option\n> > like \"streaming\", we can decide what we send based on these options,\n> > no?\n> >\n>\n> AFAIK the protocol version defines what protocol message bytes are\n> transmitted on the wire. So I thought the protocol version should\n> *always* be updated whenever the message format changes. In other\n> words, I don't think we ought to be transmitting different protocol\n> message formats unless it is a different protocol version.\n>\n> Whether the pub/sub implementation actually needs to check that\n> protocol version or whether we happen to have some alternative knob we\n> can check doesn't change what the protocol version is supposed to\n> mean. And the PGDOCS [1] and [2] currently have clear field notes\n> about when those fields are present (e.g. \"This field is available\n> since protocol version XXX\"), but if hypothetically you don't change\n> the protocol version for some new fields then now the message format\n> becomes tied to the built-in implementation of pub/sub -- now what\n> field note will you say instead to explain that?\n>\n\nI think the protocol version acts as a backstop to not send some\ninformation which clients don't understand. Now, the other way is to\nbelieve the client when it sends a particular option (say streaming =\non (aka allow sending in-progress transactions)) that it will\nunderstand additional information for that feature but the protocol\nversion acts as a backstop in that case. 
As Peter mentioned, it will\nbe easier to explain the additional information we are sending across\ndifferent versions without relying on additional options for pub/sub.\nHaving said that, we can send additional required information based on\njust the new option but I felt it is better to bump the protocol\nversion along with it unless we see any downside to it. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Dec 2022 11:35:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 8, 2022 at 12:37 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nReview comments\n==============\n1. Currently, we don't release the stream lock in LA (leade apply\nworker) for \"rollback to savepoint\" and the reason is mentioned in\ncomments of apply_handle_stream_abort() in the patch. But, today,\nwhile testing, I found that can lead to deadlock which otherwise,\nwon't happen on the publisher. The key point is rollback to savepoint\nreleases the locks acquired by the particular subtransaction, so\nparallel apply worker should also do the same. Consider the following\nexample where the transaction in session-1 is being performed by the\nparallel apply worker and the transaction in session-2 is being\nperformed by the leader apply worker. 
I have simulated it by using GUC\nforce_stream_mode.\n\nPublisher\n==========\nSession-1\npostgres=# begin;\nBEGIN\npostgres=*# savepoint s1;\nSAVEPOINT\npostgres=*# truncate t1;\nTRUNCATE TABLE\n\nSession-2\npostgres=# begin;\nBEGIN\npostgres=*# insert into t1 values(4);\n\nSession-1\npostgres=*# rollback to savepoint s1;\nROLLBACK\n\nSession-2\nCommit;\n\nWith or without commit of Session-2, this scenario will lead to\ndeadlock on the subscriber because PA (parallel apply worker) is\nwaiting for LA to send the next command, and LA is blocked by the\nExclusive lock of PA. There is no deadlock on the publisher because\nrollback to savepoint will release the lock acquired by truncate.\n\nTo solve this, how about if we do three things before sending abort of\nsub-transaction (a) unlock the stream lock, (b) increment\npending_stream_count, (c) take the stream lock again?\n\nNow, if the PA is not already waiting on the stop, it will not wait at\nstream_stop but will wait after applying abort of sub-transaction and\nif it is already waiting at stream_stop, the wait will be released. If\nthis works then probably we should try to do (b) before (a) to match\nthe steps with stream_start.\n\n2. There seems to be another general problem in the way the patch\nwaits for stream_stop in PA (parallel apply worker). Currently, PA\nchecks if there are no more pending streams then it tries to wait for\nthe next stream by waiting on a stream lock. However, it is possible\nafter PA checks there is no pending stream and before it actually\nstarts waiting on a lock, the LA sends another stream for which even\nstream_stop is sent, in this case, PA will start waiting for the next\nstream whereas there is actually a pending stream available. In this\ncase, it won't lead to any problem apart from delay in applying the\nchanges in such cases but for the case mentioned in the previous point\n(Point 1), it can lead to deadlock even after we implement the solution\nproposed to solve it.\n\n3. 
The other point to consider is that for\nstream_commit/prepare/abort, in LA, we release the stream lock after\nsending the message whereas for stream_start we release it before\nsending the message. I think for the earlier cases\n(stream_commit/prepare/abort), the patch has done like this because\npa_send_data() may need to require the lock again when it times out\nand start serializing, so there will be no sense in first releasing\nit, then re-acquiring it, and then again releasing it. Can't we also\nrelease the lock for stream_start after pa_send_data() only if it is\nnot switched to serialize mode?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Dec 2022 12:44:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 9, 2022 at 3:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 9, 2022 at 7:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Dec 8, 2022 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 8, 2022 at 4:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 8, 2022 at 12:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Dec 7, 2022 at 10:03 PM houzj.fnst@fujitsu.com\n> > > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > > +static void\n> > > > > > > +ProcessParallelApplyInterrupts(void)\n> > > > > > > +{\n> > > > > > > + CHECK_FOR_INTERRUPTS();\n> > > > > > > +\n> > > > > > > + if (ShutdownRequestPending)\n> > > > > > > + {\n> > > > > > > + ereport(LOG,\n> > > > > > > + (errmsg(\"logical replication parallel\n> > > > > > > apply worker for subscrip\n> > > > > > > tion \\\"%s\\\" has finished\",\n> > > > > > > + MySubscription->name)));\n> > > > > > > +\n> > > > > > > + apply_worker_clean_exit(false);\n> > > > > > > + }\n> 
> > > > > > +\n> > > > > > > + if (ConfigReloadPending)\n> > > > > > > + {\n> > > > > > > + ConfigReloadPending = false;\n> > > > > > > + ProcessConfigFile(PGC_SIGHUP);\n> > > > > > > + }\n> > > > > > > +}\n> > > > > > >\n> > > > > > > I personally think that we don't need to have a function to do only\n> > > > > > > these few things.\n> > > > > >\n> > > > > > I thought that introduce a new function make the handling of worker specific\n> > > > > > Interrupts logic similar to other existing ones. Like:\n> > > > > > ProcessWalRcvInterrupts () in walreceiver.c and HandlePgArchInterrupts() in\n> > > > > > pgarch.c ...\n> > > > >\n> > > > > I think the difference from them is that there is only one place to\n> > > > > call ProcessParallelApplyInterrupts().\n> > > > >\n> > > >\n> > > > But I feel it is better to isolate this code in a separate function.\n> > > > What if we decide to extend it further by having some logic to stop\n> > > > workers after reloading of config?\n> > >\n> > > I think we can separate the function at that time. But let's keep the\n> > > current code as you and Hou agree with the current code. I'm not going\n> > > to insist on that.\n> > >\n> > > >\n> > > > > >\n> > > > > > > ---\n> > > > > > > server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n> > > > > > > options.proto.logical.proto_version =\n> > > > > > > + server_version >= 160000 ?\n> > > > > > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :\n> > > > > > > server_version >= 150000 ?\n> > > > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :\n> > > > > > > server_version >= 140000 ?\n> > > > > > > LOGICALREP_PROTO_STREAM_VERSION_NUM :\n> > > > > > > LOGICALREP_PROTO_VERSION_NUM;\n> > > > > > >\n> > > > > > > Instead of always using the new protocol version, I think we can use\n> > > > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM if the streaming is not\n> > > > > > > 'parallel'. 
That way, we don't need to change protocl version check\n> > > > > > > logic in pgoutput.c and don't need to expose defGetStreamingMode().\n> > > > > > > What do you think?\n> > > > > >\n> > > > > > I think that some user can also use the new version number when trying to get\n> > > > > > changes (via pg_logical_slot_peek_binary_changes or other functions), so I feel\n> > > > > > leave the check for new version number seems fine.\n> > > > > >\n> > > > > > Besides, I feel even if we don't use new version number, we still need to use\n> > > > > > defGetStreamingMode to check if parallel mode in used as we need to send\n> > > > > > abort_lsn when parallel is in used. I might be missing something, sorry for\n> > > > > > that. Can you please explain the idea a bit ?\n> > > > >\n> > > > > My idea is that we use LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM if\n> > > > > (server_version >= 160000 && MySubscription->stream ==\n> > > > > SUBSTREAM_PARALLEL). If the stream is SUBSTREAM_ON, we use\n> > > > > LOGICALREP_PROTO_TWOPHASE_VERSION_NUM even if server_version is\n> > > > > 160000. That way, in pgoutput.c, we can send abort_lsn if the protocol\n> > > > > version is LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM. We don't need\n> > > > > to send \"streaming = parallel\" to the publisher since the publisher\n> > > > > can decide whether or not to send abort_lsn based on the protocol\n> > > > > version (still needs to send \"streaming = on\" though). I might be\n> > > > > missing something.\n> > > > >\n> > > >\n> > > > What if we decide to send some more additional information as part of\n> > > > another patch like we are discussing in the thread [1]? Now, we won't\n> > > > be able to decide the version number based on just the streaming\n> > > > option. 
Also, in such a case, even for\n> > > > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, it may not be a good\n> > > > idea to send additional abort information unless the user has used the\n> > > > streaming=parallel option.\n> > >\n> > > If we're going to send the additional information, it makes sense to\n> > > send streaming=parallel. But the next question came to me is why do we\n> > > need to increase the protocol version for parallel apply feature? If\n> > > sending the additional information is also controlled by an option\n> > > like \"streaming\", we can decide what we send based on these options,\n> > > no?\n> > >\n> >\n> > AFAIK the protocol version defines what protocol message bytes are\n> > transmitted on the wire. So I thought the protocol version should\n> > *always* be updated whenever the message format changes. In other\n> > words, I don't think we ought to be transmitting different protocol\n> > message formats unless it is a different protocol version.\n> >\n> > Whether the pub/sub implementation actually needs to check that\n> > protocol version or whether we happen to have some alternative knob we\n> > can check doesn't change what the protocol version is supposed to\n> > mean. And the PGDOCS [1] and [2] currently have clear field notes\n> > about when those fields are present (e.g. \"This field is available\n> > since protocol version XXX\"), but if hypothetically you don't change\n> > the protocol version for some new fields then now the message format\n> > becomes tied to the built-in implementation of pub/sub -- now what\n> > field note will you say instead to explain that?\n> >\n>\n> I think the protocol version acts as a backstop to not send some\n> information which clients don't understand. 
Now, the other way is to\n> believe the client when it sends a particular option (say streaming =\n> on (aka allow sending in-progress transactions)) that it will\n> understand additional information for that feature but the protocol\n> version acts as a backstop in that case.\n\nYeah, it seems that this is how the logical replication protocol has\nbeen working. New logical replication protocol versions have backward\ncompatibility. I was thinking that the protocol version needs to bump\nif there is no compatibility, i.g. if most clients need to change to\nsupport new protocols.\n\n> As Peter mentioned, it will\n> be easier to explain the additional information we are sending across\n> different versions without relying on additional options for pub/sub.\n> Having said that, we can send additional required information based on\n> just the new option but I felt it is better to bump the protocol\n> version along with it unless we see any downside to it. What do you\n> think?\n\nI agree to bump the protocol version.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Dec 2022 17:00:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 9, 2022 3:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Dec 8, 2022 at 12:37 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> Review comments\r\n\r\nThanks for the comments!\r\n\r\n> ==============\r\n> 1. Currently, we don't release the stream lock in LA (leade apply\r\n> worker) for \"rollback to savepoint\" and the reason is mentioned in comments of\r\n> apply_handle_stream_abort() in the patch. But, today, while testing, I found that\r\n> can lead to deadlock which otherwise, won't happen on the publisher. 
The key\r\n> point is rollback to savepoint releases the locks acquired by the particular\r\n> subtransaction, so parallel apply worker should also do the same. Consider the\r\n> following example where the transaction in session-1 is being performed by the\r\n> parallel apply worker and the transaction in session-2 is being performed by the\r\n> leader apply worker. I have simulated it by using GUC force_stream_mode.\r\n> Publisher\r\n> ==========\r\n> Session-1\r\n> postgres=# begin;\r\n> BEGIN\r\n> postgres=*# savepoint s1;\r\n> SAVEPOINT\r\n> postgres=*# truncate t1;\r\n> TRUNCATE TABLE\r\n> \r\n> Session-2\r\n> postgres=# begin;\r\n> BEGIN\r\n> postgres=*# insert into t1 values(4);\r\n> \r\n> Session-1\r\n> postgres=*# rollback to savepoint s1;\r\n> ROLLBACK\r\n> \r\n> Session-2\r\n> Commit;\r\n> \r\n> With or without commit of Session-2, this scenario will lead to deadlock on the\r\n> subscriber because PA (parallel apply worker) is waiting for LA to send the next\r\n> command, and LA is blocked by Exclusive of PA. There is no deadlock on the\r\n> publisher because rollback to savepoint will release the lock acquired by\r\n> truncate.\r\n> \r\n> To solve this, How about if we do three things before sending abort of\r\n> sub-transaction (a) unlock the stream lock, (b) increment pending_stream_count,\r\n> (c) take the stream lock again?\r\n> \r\n> Now, if the PA is not already waiting on the stop, it will not wait at stream_stop\r\n> but will wait after applying abort of sub-transaction and if it is already waiting at\r\n> stream_stop, the wait will be released. If this works then probably we should try\r\n> to do (b) before (a) to match the steps with stream_start.\r\n\r\nThe solution works for me, I have changed the code as suggested.\r\n\r\n\r\n> 2. There seems to be another general problem in the way the patch waits for\r\n> stream_stop in PA (parallel apply worker). 
Currently, PA checks if there are no\r\n> more pending streams then it tries to wait for the next stream by waiting on a\r\n> stream lock. However, it is possible after PA checks there is no pending stream\r\n> and before it actually starts waiting on a lock, the LA sends another stream for\r\n> which even stream_stop is sent, in this case, PA will start waiting for the next\r\n> stream whereas there is actually a pending stream available. In this case, it won't\r\n> lead to any problem apart from delay in applying the changes in such cases but\r\n> for the case mentioned in the previous point (Point 1), it can lead to deadlock\r\n> even after we implement the solution proposed to solve it.\r\n\r\nThanks for reporting, I have introduced another flag in shared memory and use it to\r\nprevent the leader from incrementing the pending_stream_count if the parallel\r\napply worker is trying to lock the stream lock.\r\n\r\n\r\n
Can't we also release the lock for stream_start after\r\n> pa_send_data() only if it is not switched to serialize mode?\r\n\r\nChanged.\r\n\r\nAttach the new version patch set which addressed above comments.\r\nBesides, the new version patch will try to stop extra parallel workers if user\r\nsets the max_parallel_apply_workers_per_subscription to a lower number.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sun, 11 Dec 2022 11:44:55 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "FYI - a rebase is needed.\n\nThis patch is currently failing in cfbot [1], probably due to recent\nlogical replication documentation updates [2].\n\n------\n[1] cfbot failing for v59 - http://cfbot.cputube.org/patch_41_3621.log\n[2] PGDOCS updated -\nhttps://github.com/postgres/postgres/commit/a8500750ca0acf6bb95cf9d1ac7f421749b22db7\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 13 Dec 2022 09:21:25 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Some minor review comments for v58-0001\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n1. pa_can_start\n\n+ /*\n+ * Don't start a new parallel worker if user has set skiplsn as it's\n+ * possible that user want to skip the streaming transaction. For streaming\n+ * transaction, we need to serialize the transaction to a file so that we\n+ * can get the last LSN of the transaction to judge whether to skip before\n+ * starting to apply the change.\n+ */\n+ if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\n+ return false;\n\n\n\"that user want\" -> \"that they want\"\n\n\"For streaming transaction,\" -> \"For streaming transactions,\"\n\n~~~\n\n2. 
pa_free_worker_info\n\n+ /* Remove from the worker pool. */\n+ ParallelApplyWorkerPool = list_delete_ptr(ParallelApplyWorkerPool,\n+ winfo);\n\nUnnecessary wrapping\n\n~~~\n\n3. pa_set_stream_apply_worker\n\n+/*\n+ * Set the worker that required to apply the current streaming transaction.\n+ */\n+void\n+pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo)\n+{\n+ stream_apply_worker = winfo;\n+}\n\nComment wording seems wrong.\n\n======\n\nsrc/include/replication/worker_internal.h\n\n4. ParallelApplyWorkerShared\n\n+ * XactLastCommitEnd from the parallel apply worker. This is required to\n+ * update the lsn_mappings by leader worker.\n+ */\n+ XLogRecPtr last_commit_end;\n+} ParallelApplyWorkerShared;\n\n\n\"This is required to update the lsn_mappings by leader worker.\" -->\ndid you mean \"This is required by the leader worker so it can update\nthe lsn_mappings.\" ??\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 13 Dec 2022 10:05:57 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 13, 2022 at 4:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ~~~\n>\n> 3. 
pa_set_stream_apply_worker\n>\n> +/*\n> + * Set the worker that required to apply the current streaming transaction.\n> + */\n> +void\n> +pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo)\n> +{\n> + stream_apply_worker = winfo;\n> +}\n>\n> Comment wording seems wrong.\n>\n\nI think something like \"Cache the parallel apply worker information.\"\nmay be more suitable here.\n\nFew more similar cosmetic comments:\n1.\n+ /*\n+ * Unlock the shared object lock so that the parallel apply worker\n+ * can continue to receive changes.\n+ */\n+ if (!first_segment)\n+ pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);\n\nThis comment is missing in the new (0002) patch.\n\n2.\n+ if (!winfo->serialize_changes)\n+ {\n+ if (!first_segment)\n+ pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);\n\nI think we should write some comments on why we are not unlocking when\nserializing changes.\n\n3. Please add a comment like below in the patch to make it clear why\nin stream_abort case we perform locking before sending the message.\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -1858,6 +1858,13 @@ apply_handle_stream_abort(StringInfo s)\n * worker will wait on the lock for the next\nset of changes after\n * processing the STREAM_ABORT message if it\nis not already waiting\n * for STREAM_STOP message.\n+ *\n+ * It is important to perform this locking\nbefore sending the\n+ * STREAM_ABORT message so that the leader can\nhold the lock first\n+ * and the parallel apply worker will wait for\nthe leader to release\n+ * the lock. This is the same as what we do in\n+ * apply_handle_stream_stop. 
See Locking\nConsiderations atop\n+ * applyparallelworker.c.\n */\n if (!toplevel_xact)\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Dec 2022 16:10:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, December 13, 2022 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Dec 13, 2022 at 4:36 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > ~~~\r\n> >\r\n> > 3. pa_set_stream_apply_worker\r\n> >\r\n> > +/*\r\n> > + * Set the worker that required to apply the current streaming transaction.\r\n> > + */\r\n> > +void\r\n> > +pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo) {\r\n> > +stream_apply_worker = winfo; }\r\n> >\r\n> > Comment wording seems wrong.\r\n> >\r\n> \r\n> I think something like \"Cache the parallel apply worker information.\"\r\n> may be more suitable here.\r\n\r\nChanged.\r\n\r\n> Few more similar cosmetic comments:\r\n> 1.\r\n> + /*\r\n> + * Unlock the shared object lock so that the parallel apply worker\r\n> + * can continue to receive changes.\r\n> + */\r\n> + if (!first_segment)\r\n> + pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);\r\n> \r\n> This comment is missing in the new (0002) patch.\r\n\r\nAdded.\r\n\r\n> 2.\r\n> + if (!winfo->serialize_changes)\r\n> + {\r\n> + if (!first_segment)\r\n> + pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);\r\n> \r\n> I think we should write some comments on why we are not unlocking when\r\n> serializing changes.\r\n\r\nAdded.\r\n\r\n> 3. 
Please add a comment like below in the patch to make it clear why in\r\n> stream_abort case we perform locking before sending the message.\r\n> --- a/src/backend/replication/logical/worker.c\r\n> +++ b/src/backend/replication/logical/worker.c\r\n> @@ -1858,6 +1858,13 @@ apply_handle_stream_abort(StringInfo s)\r\n> * worker will wait on the lock for the next set of\r\n> changes after\r\n> * processing the STREAM_ABORT message if it is not\r\n> already waiting\r\n> * for STREAM_STOP message.\r\n> + *\r\n> + * It is important to perform this locking\r\n> before sending the\r\n> + * STREAM_ABORT message so that the leader can\r\n> hold the lock first\r\n> + * and the parallel apply worker will wait for\r\n> the leader to release\r\n> + * the lock. This is the same as what we do in\r\n> + * apply_handle_stream_stop. See Locking\r\n> Considerations atop\r\n> + * applyparallelworker.c.\r\n> */\r\n> if (!toplevel_xact)\r\n\r\nMerged.\r\n\r\nAttach the new version patch which addressed above comments.\r\nI also slightly refactored logic related to pa_spooled_messages() so that\r\nIt doesn't need to wait for a timeout if there are pending spooled messages.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 13 Dec 2022 13:07:06 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 13, 2022 7:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Some minor review comments for v58-0001\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 1. pa_can_start\r\n> \r\n> + /*\r\n> + * Don't start a new parallel worker if user has set skiplsn as it's\r\n> + * possible that user want to skip the streaming transaction. 
For \r\n> + streaming\r\n> + * transaction, we need to serialize the transaction to a file so \r\n> + that we\r\n> + * can get the last LSN of the transaction to judge whether to skip \r\n> + before\r\n> + * starting to apply the change.\r\n> + */\r\n> + if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\r\n> + return false;\r\n> \r\n> \r\n> \"that user want\" -> \"that they want\"\r\n> \r\n> \"For streaming transaction,\" -> \"For streaming transactions,\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 2. pa_free_worker_info\r\n> \r\n> + /* Remove from the worker pool. */\r\n> + ParallelApplyWorkerPool = list_delete_ptr(ParallelApplyWorkerPool,\r\n> + winfo);\r\n> \r\n> Unnecessary wrapping\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 3. pa_set_stream_apply_worker\r\n> \r\n> +/*\r\n> + * Set the worker that required to apply the current streaming transaction.\r\n> + */\r\n> +void\r\n> +pa_set_stream_apply_worker(ParallelApplyWorkerInfo *winfo) { \r\n> +stream_apply_worker = winfo; }\r\n> \r\n> Comment wording seems wrong.\r\n\r\nTried to improve this comment.\r\n\r\n> ======\r\n> \r\n> src/include/replication/worker_internal.h\r\n> \r\n> 4. ParallelApplyWorkerShared\r\n> \r\n> + * XactLastCommitEnd from the parallel apply worker. 
This is required \r\n> +to\r\n> + * update the lsn_mappings by leader worker.\r\n> + */\r\n> + XLogRecPtr last_commit_end;\r\n> +} ParallelApplyWorkerShared;\r\n> \r\n> \r\n> \"This is required to update the lsn_mappings by leader worker.\" --> \r\n> did you mean \"This is required by the leader worker so it can update \r\n> the lsn_mappings.\" ??\r\n\r\nChanged.\r\n\r\nAlso thanks for the kind reminder in [1], rebased the patch set.\r\nAttach the new patch set.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPt4qv7xfJUmwdn6Vy47L5mqzKtkPr31%3DDmEayJWXetvYg%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 13 Dec 2022 13:07:09 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Dec 11, 2022 at 8:45 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, December 9, 2022 3:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 8, 2022 at 12:37 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> >\n> > Review comments\n>\n> Thanks for the comments!\n>\n> > ==============\n> > 1. Currently, we don't release the stream lock in LA (leade apply\n> > worker) for \"rollback to savepoint\" and the reason is mentioned in comments of\n> > apply_handle_stream_abort() in the patch. But, today, while testing, I found that\n> > can lead to deadlock which otherwise, won't happen on the publisher. The key\n> > point is rollback to savepoint releases the locks acquired by the particular\n> > subtransaction, so parallel apply worker should also do the same. Consider the\n> > following example where the transaction in session-1 is being performed by the\n> > parallel apply worker and the transaction in session-2 is being performed by the\n> > leader apply worker. 
I have simulated it by using GUC force_stream_mode.\n> > Publisher\n> > ==========\n> > Session-1\n> > postgres=# begin;\n> > BEGIN\n> > postgres=*# savepoint s1;\n> > SAVEPOINT\n> > postgres=*# truncate t1;\n> > TRUNCATE TABLE\n> >\n> > Session-2\n> > postgres=# begin;\n> > BEGIN\n> > postgres=*# insert into t1 values(4);\n> >\n> > Session-1\n> > postgres=*# rollback to savepoint s1;\n> > ROLLBACK\n> >\n> > Session-2\n> > Commit;\n> >\n> > With or without commit of Session-2, this scenario will lead to deadlock on the\n> > subscriber because PA (parallel apply worker) is waiting for LA to send the next\n> > command, and LA is blocked by Exclusive of PA. There is no deadlock on the\n> > publisher because rollback to savepoint will release the lock acquired by\n> > truncate.\n> >\n> > To solve this, How about if we do three things before sending abort of\n> > sub-transaction (a) unlock the stream lock, (b) increment pending_stream_count,\n> > (c) take the stream lock again?\n> >\n> > Now, if the PA is not already waiting on the stop, it will not wait at stream_stop\n> > but will wait after applying abort of sub-transaction and if it is already waiting at\n> > stream_stop, the wait will be released. If this works then probably we should try\n> > to do (b) before (a) to match the steps with stream_start.\n>\n> The solution works for me, I have changed the code as suggested.\n>\n>\n> > 2. There seems to be another general problem in the way the patch waits for\n> > stream_stop in PA (parallel apply worker). Currently, PA checks, if there are no\n> > more pending streams then it tries to wait for the next stream by waiting on a\n> > stream lock. However, it is possible after PA checks there is no pending stream\n> > and before it actually starts waiting on a lock, the LA sends another stream for\n> > which even stream_stop is sent, in this case, PA will start waiting for the next\n> > stream whereas there is actually a pending stream available. 
In this case, it won't\n> > lead to any problem apart from delay in applying the changes in such cases but\n> > for the case mentioned in the previous point (Pont 1), it can lead to deadlock\n> > even after we implement the solution proposed to solve it.\n>\n> Thanks for reporting, I have introduced another flag in shared memory and use it to\n> prevent the leader from incrementing the pending_stream_count if the parallel\n> apply worker is trying to lock the stream lock.\n>\n>\n> > 3. The other point to consider is that for stream_commit/prepare/abort, in LA, we\n> > release the stream lock after sending the message whereas for stream_start we\n> > release it before sending the message. I think for the earlier cases\n> > (stream_commit/prepare/abort), the patch has done like this because\n> > pa_send_data() may need to require the lock again when it times out and start\n> > serializing, so there will be no sense in first releasing it, then re-acquiring it, and\n> > then again releasing it. Can't we also release the lock for stream_start after\n> > pa_send_data() only if it is not switched to serialize mode?\n>\n> Changed.\n>\n> Attach the new version patch set which addressed above comments.\n\nHere are comments on v59 0001, 0002 patches:\n\n+void\n+pa_increment_stream_block(ParallelApplyWorkerShared *wshared)\n+{\n+ while (1)\n+ {\n+ SpinLockAcquire(&wshared->mutex);\n+\n+ /*\n+ * Don't try to increment the count if the parallel\napply worker is\n+ * taking the stream lock. Otherwise, there would be\na race condition\n+ * that the parallel apply worker checks there is no\npending streaming\n+ * block and before it actually starts waiting on a\nlock, the leader\n+ * sends another streaming block and take the stream\nlock again. 
In\n+ * this case, the parallel apply worker will start\nwaiting for the next\n+ * streaming block whereas there is actually a\npending streaming block\n+ * available.\n+ */\n+ if (!wshared->pa_wait_for_stream)\n+ {\n+ wshared->pending_stream_count++;\n+ SpinLockRelease(&wshared->mutex);\n+ break;\n+ }\n+\n+ SpinLockRelease(&wshared->mutex);\n+ }\n+}\n\nI think we should add an assertion to check if we don't hold the stream lock.\n\nI think that waiting for pa_wait_for_stream to be false in a busy loop\nis not a good idea. It's not interruptible and there is no guarantee\nthat we can break from this loop in a short time. For instance, if PA\nexecutes pa_decr_and_wait_stream_block() a bit earlier than LA\nexecutes pa_increment_stream_block(), LA has to wait for PA to acquire\nand release the stream lock in a busy loop. It should not be long in\nnormal cases but the duration LA needs to wait for PA depends on PA,\nwhich could be long. Also what if PA raises an error in\npa_lock_stream() for some reason? I think LA won't be able to\ndetect the failure.\n\nI think we should at least make it interruptible and maybe need to add\nsome sleep. Or perhaps we can use the condition variable for this\ncase.\n\n---\nIn worker.c, we have the following common pattern:\n\ncase TRANS_LEADER_PARTIAL_SERIALIZE:\n write change to the file;\n do some work;\n break;\n\ncase TRANS_LEADER_SEND_TO_PARALLEL:\n pa_send_data();\n\n if (winfo->serialize_changes)\n {\n do some work required after writing changes to the file.\n }\n :\n break;\n\nIIUC there are two different paths for partial serialization: (a)\nwhere apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\napply_action is TRANS_LEADER_SEND_TO_PARALLEL and\nwinfo->serialize_changes became true. And we need to match what we do\nin (a) and (b). Rather than having two different paths for the same\ncase, how about falling through to TRANS_LEADER_PARTIAL_SERIALIZE when we\ncould not send the changes? 
That is, pa_send_data() just returns false\nwhen the timeout exceeds and we need to switch to serialize changes,\notherwise returns true. If it returns false, we prepare for switching\nto serialize changes such as initializing fileset, and fall through\nTRANS_LEADER_PARTIAL_SERIALIZE case. The code would be like:\n\ncase TRANS_LEADER_SEND_TO_PARALLEL:\n ret = pa_send_data();\n\n if (ret)\n {\n do work for sending changes to PA.\n break;\n }\n\n /* prepare for switching to serialize changes */\n winfo->serialize_changes = true;\n initialize fileset;\n acquire stream lock if necessary;\n\n /* FALLTHROUGH */\ncase TRANS_LEADER_PARTIAL_SERIALIZE:\n do work for serializing changes;\n break;\n\n---\n/*\n- * Unlock the shared object lock so that\nparallel apply worker can\n- * continue to receive and apply changes.\n+ * Parallel apply worker might have applied\nsome changes, so write\n+ * the STREAM_ABORT message so that it can rollback the\n+ * subtransaction if needed.\n */\n- pa_unlock_stream(xid, AccessExclusiveLock);\n+ stream_open_and_write_change(xid,\nLOGICAL_REP_MSG_STREAM_ABORT,\n+\n &original_msg);\n+\n+ if (toplevel_xact)\n+ {\n+ pa_unlock_stream(xid, AccessExclusiveLock);\n+ pa_set_fileset_state(winfo->shared,\nFS_SERIALIZE_DONE);\n+ (void) pa_free_worker(winfo, xid);\n+ }\n\nAt every place except for the above code, we set the fileset state\nFS_SERIALIZE_DONE first then unlock the stream lock. 
Is there any\nreason for that?\n\n---\n+ case TRANS_LEADER_SEND_TO_PARALLEL:\n+ Assert(winfo);\n+\n+ /*\n+ * Unlock the shared object lock so that\nparallel apply worker can\n+ * continue to receive and apply changes.\n+ */\n+ pa_unlock_stream(xid, AccessExclusiveLock);\n+\n+ /*\n+ * For the case of aborting the\nsubtransaction, we increment the\n+ * number of streaming blocks and take the\nlock again before\n+ * sending the STREAM_ABORT to ensure that the\nparallel apply\n+ * worker will wait on the lock for the next\nset of changes after\n+ * processing the STREAM_ABORT message if it\nis not already waiting\n+ * for STREAM_STOP message.\n+ */\n+ if (!toplevel_xact)\n+ {\n+ pa_increment_stream_block(winfo->shared);\n+ pa_lock_stream(xid, AccessExclusiveLock);\n+ }\n+\n+ /* Send STREAM ABORT message to the parallel\napply worker. */\n+ pa_send_data(winfo, s->len, s->data);\n+\n+ if (toplevel_xact)\n+ (void) pa_free_worker(winfo, xid);\n+\n+ break;\n\nIn apply_handle_stream_abort(), it's better to add the comment why we\ndon't need to wait for PA to finish.\n\n\nAlso, given that we don't wait for PA to finish in this case, does it\nreally make sense to call pa_free_worker() immediately after sending\nSTREAM_ABORT?\n\n---\nPA acquires the transaction lock in AccessShare mode whereas LA\nacquires it in AccessExclusiveMode. 
Is it better to do the opposite?\nLike a backend process acquires a lock on its XID in Exclusive mode,\nwe can have PA acquire the lock on its XID in Exclusive mode whereas\nother attempts to acquire it in Share mode to wait.\n\n---\n void\npa_lock_stream(TransactionId xid, LOCKMODE lockmode)\n{\n LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\n PARALLEL_APPLY_LOCK_STREAM, lockmode);\n}\n\nI think since we don't need to let the caller to specify the lock mode\nbut need only shared and exclusive modes, we can make it simple by\nhaving a boolean argument say shared instead of lockmode.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 00:25:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Sun, Dec 11, 2022 at 8:45 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, December 9, 2022 3:14 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Dec 8, 2022 at 12:37 PM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > >\r\n> > > Review comments\r\n> >\r\n> > Thanks for the comments!\r\n> >\r\n> > > ==============\r\n> > > 1. Currently, we don't release the stream lock in LA (leade apply\r\n> > > worker) for \"rollback to savepoint\" and the reason is mentioned in\r\n> > > comments of\r\n> > > apply_handle_stream_abort() in the patch. But, today, while testing,\r\n> > > I found that can lead to deadlock which otherwise, won't happen on\r\n> > > the publisher. The key point is rollback to savepoint releases the\r\n> > > locks acquired by the particular subtransaction, so parallel apply\r\n> > > worker should also do the same. 
Consider the following example where\r\n> > > the transaction in session-1 is being performed by the parallel\r\n> > > apply worker and the transaction in session-2 is being performed by the\r\n> leader apply worker. I have simulated it by using GUC force_stream_mode.\r\n> > > Publisher\r\n> > > ==========\r\n> > > Session-1\r\n> > > postgres=# begin;\r\n> > > BEGIN\r\n> > > postgres=*# savepoint s1;\r\n> > > SAVEPOINT\r\n> > > postgres=*# truncate t1;\r\n> > > TRUNCATE TABLE\r\n> > >\r\n> > > Session-2\r\n> > > postgres=# begin;\r\n> > > BEGIN\r\n> > > postgres=*# insert into t1 values(4);\r\n> > >\r\n> > > Session-1\r\n> > > postgres=*# rollback to savepoint s1; ROLLBACK\r\n> > >\r\n> > > Session-2\r\n> > > Commit;\r\n> > >\r\n> > > With or without commit of Session-2, this scenario will lead to\r\n> > > deadlock on the subscriber because PA (parallel apply worker) is\r\n> > > waiting for LA to send the next command, and LA is blocked by\r\n> > > Exclusive of PA. There is no deadlock on the publisher because\r\n> > > rollback to savepoint will release the lock acquired by truncate.\r\n> > >\r\n> > > To solve this, How about if we do three things before sending abort\r\n> > > of sub-transaction (a) unlock the stream lock, (b) increment\r\n> > > pending_stream_count,\r\n> > > (c) take the stream lock again?\r\n> > >\r\n> > > Now, if the PA is not already waiting on the stop, it will not wait\r\n> > > at stream_stop but will wait after applying abort of sub-transaction\r\n> > > and if it is already waiting at stream_stop, the wait will be\r\n> > > released. If this works then probably we should try to do (b) before (a) to\r\n> match the steps with stream_start.\r\n> >\r\n> > The solution works for me, I have changed the code as suggested.\r\n> >\r\n> >\r\n> > > 2. There seems to be another general problem in the way the patch\r\n> > > waits for stream_stop in PA (parallel apply worker). 
Currently, PA\r\n> > > checks, if there are no more pending streams then it tries to wait\r\n> > > for the next stream by waiting on a stream lock. However, it is\r\n> > > possible after PA checks there is no pending stream and before it\r\n> > > actually starts waiting on a lock, the LA sends another stream for\r\n> > > which even stream_stop is sent, in this case, PA will start waiting\r\n> > > for the next stream whereas there is actually a pending stream\r\n> > > available. In this case, it won't lead to any problem apart from\r\n> > > delay in applying the changes in such cases but for the case mentioned in\r\n> the previous point (Pont 1), it can lead to deadlock even after we implement the\r\n> solution proposed to solve it.\r\n> >\r\n> > Thanks for reporting, I have introduced another flag in shared memory\r\n> > and use it to prevent the leader from incrementing the\r\n> > pending_stream_count if the parallel apply worker is trying to lock the stream\r\n> lock.\r\n> >\r\n> >\r\n> > > 3. The other point to consider is that for\r\n> > > stream_commit/prepare/abort, in LA, we release the stream lock after\r\n> > > sending the message whereas for stream_start we release it before\r\n> > > sending the message. I think for the earlier cases\r\n> > > (stream_commit/prepare/abort), the patch has done like this because\r\n> > > pa_send_data() may need to require the lock again when it times out\r\n> > > and start serializing, so there will be no sense in first releasing\r\n> > > it, then re-acquiring it, and then again releasing it. 
Can't we also\r\n> > > release the lock for stream_start after\r\n> > > pa_send_data() only if it is not switched to serialize mode?\r\n> >\r\n> > Changed.\r\n> >\r\n> > Attach the new version patch set which addressed above comments.\r\n> \r\n> Here are comments on v59 0001, 0002 patches:\r\n\r\nThanks for the comments!\r\n\r\n> +void\r\n> +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\r\n> + while (1)\r\n> + {\r\n> + SpinLockAcquire(&wshared->mutex);\r\n> +\r\n> + /*\r\n> + * Don't try to increment the count if the parallel\r\n> apply worker is\r\n> + * taking the stream lock. Otherwise, there would be\r\n> a race condition\r\n> + * that the parallel apply worker checks there is no\r\n> pending streaming\r\n> + * block and before it actually starts waiting on a\r\n> lock, the leader\r\n> + * sends another streaming block and take the stream\r\n> lock again. In\r\n> + * this case, the parallel apply worker will start\r\n> waiting for the next\r\n> + * streaming block whereas there is actually a\r\n> pending streaming block\r\n> + * available.\r\n> + */\r\n> + if (!wshared->pa_wait_for_stream)\r\n> + {\r\n> + wshared->pending_stream_count++;\r\n> + SpinLockRelease(&wshared->mutex);\r\n> + break;\r\n> + }\r\n> +\r\n> + SpinLockRelease(&wshared->mutex);\r\n> + }\r\n> +}\r\n> \r\n> I think we should add an assertion to check if we don't hold the stream lock.\r\n> \r\n> I think that waiting for pa_wait_for_stream to be false in a busy loop is not a\r\n> good idea. It's not interruptible and there is not guarantee that we can break\r\n> from this loop in a short time. For instance, if PA executes\r\n> pa_decr_and_wait_stream_block() a bit earlier than LA executes\r\n> pa_increment_stream_block(), LA has to wait for PA to acquire and release the\r\n> stream lock in a busy loop. It should not be long in normal cases but the\r\n> duration LA needs to wait for PA depends on PA, which could be long. 
Also\r\n> what if PA raises an error in\r\n> pa_lock_stream() due to some reasons? I think LA won't be able to detect the\r\n> failure.\r\n> \r\n> I think we should at least make it interruptible and maybe need to add some\r\n> sleep. Or perhaps we can use the condition variable for this case.\r\n\r\nThanks for the analysis, I will research this part.\r\n\r\n> ---\r\n> In worker.c, we have the following common pattern:\r\n> \r\n> case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> write change to the file;\r\n> do some work;\r\n> break;\r\n> \r\n> case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> pa_send_data();\r\n> \r\n> if (winfo->serialize_changes)\r\n> {\r\n> do some worker required after writing changes to the file.\r\n> }\r\n> :\r\n> break;\r\n> \r\n> IIUC there are two different paths for partial serialization: (a) where\r\n> apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\r\n> apply_action is TRANS_LEADER_PARTIAL_SERIALIZE and\r\n> winfo->serialize_changes became true. And we need to match what we do\r\n> in (a) and (b). Rather than having two different paths for the same case, how\r\n> about falling through TRANS_LEADER_PARTIAL_SERIALIZE when we could not\r\n> send the changes? That is, pa_send_data() just returns false when the timeout\r\n> exceeds and we need to switch to serialize changes, otherwise returns true. If it\r\n> returns false, we prepare for switching to serialize changes such as initializing\r\n> fileset, and fall through TRANS_LEADER_PARTIAL_SERIALIZE case. 
The code\r\n> would be like:\r\n> \r\n> case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> ret = pa_send_data();\r\n> \r\n> if (ret)\r\n> {\r\n> do work for sending changes to PA.\r\n> break;\r\n> }\r\n> \r\n> /* prepare for switching to serialize changes */\r\n> winfo->serialize_changes = true;\r\n> initialize fileset;\r\n> acquire stream lock if necessary;\r\n> \r\n> /* FALLTHROUGH */\r\n> case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> do work for serializing changes;\r\n> break;\r\n\r\nI think that the suggestion is to extract the code that switch to serialize\r\nmode out of the pa_send_data(), and then we need to add that logic in all the\r\nfunctions which call pa_send_data(), I am not sure if it looks better as it\r\nmight introduce some more codes in each handling function.\r\n\r\n> ---\r\n> /*\r\n> - * Unlock the shared object lock so that\r\n> parallel apply worker can\r\n> - * continue to receive and apply changes.\r\n> + * Parallel apply worker might have applied\r\n> some changes, so write\r\n> + * the STREAM_ABORT message so that it can rollback\r\n> the\r\n> + * subtransaction if needed.\r\n> */\r\n> - pa_unlock_stream(xid, AccessExclusiveLock);\r\n> + stream_open_and_write_change(xid,\r\n> LOGICAL_REP_MSG_STREAM_ABORT,\r\n> +\r\n> &original_msg);\r\n> +\r\n> + if (toplevel_xact)\r\n> + {\r\n> + pa_unlock_stream(xid, AccessExclusiveLock);\r\n> + pa_set_fileset_state(winfo->shared,\r\n> FS_SERIALIZE_DONE);\r\n> + (void) pa_free_worker(winfo, xid);\r\n> + }\r\n> \r\n> At every place except for the above code, we set the fileset state\r\n> FS_SERIALIZE_DONE first then unlock the stream lock. 
Is there any reason for\r\n> that?\r\n\r\nNo, I think we should make them consistent, will change this.\r\n\r\n> ---\r\n> + case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> + Assert(winfo);\r\n> +\r\n> + /*\r\n> + * Unlock the shared object lock so that\r\n> parallel apply worker can\r\n> + * continue to receive and apply changes.\r\n> + */\r\n> + pa_unlock_stream(xid, AccessExclusiveLock);\r\n> +\r\n> + /*\r\n> + * For the case of aborting the\r\n> subtransaction, we increment the\r\n> + * number of streaming blocks and take the\r\n> lock again before\r\n> + * sending the STREAM_ABORT to ensure that the\r\n> parallel apply\r\n> + * worker will wait on the lock for the next\r\n> set of changes after\r\n> + * processing the STREAM_ABORT message if it\r\n> is not already waiting\r\n> + * for STREAM_STOP message.\r\n> + */\r\n> + if (!toplevel_xact)\r\n> + {\r\n> + pa_increment_stream_block(winfo->shared);\r\n> + pa_lock_stream(xid, AccessExclusiveLock);\r\n> + }\r\n> +\r\n> + /* Send STREAM ABORT message to the parallel\r\n> apply worker. */\r\n> + pa_send_data(winfo, s->len, s->data);\r\n> +\r\n> + if (toplevel_xact)\r\n> + (void) pa_free_worker(winfo, xid);\r\n> +\r\n> + break;\r\n> \r\n> In apply_handle_stream_abort(), it's better to add the comment why we don't\r\n> need to wait for PA to finish.\r\n\r\nWill add.\r\n\r\n> \r\n> Also, given that we don't wait for PA to finish in this case, does it really make\r\n> sense to call pa_free_worker() immediately after sending STREAM_ABORT?\r\n\r\nI think it's possible that the PA finish the ROLLBACK quickly and the LA can\r\nfree the worker here in time.\r\n\r\n> ---\r\n> PA acquires the transaction lock in AccessShare mode whereas LA acquires it in\r\n> AccessExclusiveMode. 
Is it better to do the opposite?\r\n> Like a backend process acquires a lock on its XID in Exclusive mode, we can\r\n> have PA acquire the lock on its XID in Exclusive mode whereas other attempts\r\n> to acquire it in Share mode to wait.\r\n\r\nAgreed, will improve.\r\n\r\n> ---\r\n> void\r\n> pa_lock_stream(TransactionId xid, LOCKMODE lockmode) {\r\n> LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\r\n> PARALLEL_APPLY_LOCK_STREAM,\r\n> lockmode); }\r\n> \r\n> I think since we don't need to let the caller to specify the lock mode but need\r\n> only shared and exclusive modes, we can make it simple by having a boolean\r\n> argument say shared instead of lockmode.\r\n\r\nI personally think passing the lockmode would make the code more clear\r\nthan passing a Boolean value.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 14 Dec 2022 04:19:58 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi,\r\n\r\nI did some performance tests for this patch, based on v59-0001 and v59-0002\r\npatch.\r\n\r\nThis test used synchronous logical replication, and compared SQL execution times\r\nbefore and after applying the patch.\r\n\r\nTwo cases are tested by varying logical_decoding_work_mem:\r\na) Bulk insert.\r\nb) Rollback to savepoint. (Different percentage of changes in the transaction\r\nare rolled back).\r\n\r\nThe test was performed ten times, and the average of the middle eight was taken.\r\n\r\nThe results are as follows. 
The bar charts are attached.\r\n(The steps are the same as before.[1])\r\n\r\nRESULT - bulk insert (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 51.655 51.694 51.262\r\npatched 31.104 31.234 31.711\r\nCompare with HEAD -39.79% -39.58% -38.14%\r\n\r\nRESULT - rollback 10% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 43.908 43.358 42.874\r\npatched 31.924 31.343 29.102\r\nCompare with HEAD -27.29% -27.71% -32.12%\r\n\r\nRESULT - rollback 20% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 40.561 40.599 40.015\r\npatched 31.562 32.116 29.680\r\nCompare with HEAD -22.19% -20.89% -25.83%\r\n\r\nRESULT - rollback 30% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 38.092 37.756 37.142\r\npatched 31.631 31.236 28.783\r\nCompare with HEAD -16.96% -17.27% -22.50%\r\n\r\nRESULT - rollback 50% (5kk)\r\n---------------------------------------------------------------\r\nlogical_decoding_work_mem 64kB 256kB 64MB\r\nHEAD 33.387 33.056 32.638\r\npatched 31.272 31.279 29.876\r\nCompare with HEAD -6.34% -5.38% -8.46%\r\n\r\n(If \"Compare with HEAD\" is a positive number, it means worse than HEAD; if it is\r\na negative number, it means better than HEAD.)\r\n\r\nSummary:\r\nIn the case of bulk insert, it takes about 30% ~ 40% less time, which looks good\r\nto me.\r\nIn the case of rollback to savepoint, the larger the amount of data rolled back,\r\nthe smaller the improvement compared to HEAD. 
But as such cases won't be often,\r\nthis should be okay.\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB63103AA97349BBB858E27DEAFD499%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nShi yu", "msg_date": "Wed, 14 Dec 2022 06:34:19 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 14, 2022 at 9:50 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Here are comments on v59 0001, 0002 patches:\n>\n> Thanks for the comments!\n>\n> > +void\n> > +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\n> > + while (1)\n> > + {\n> > + SpinLockAcquire(&wshared->mutex);\n> > +\n> > + /*\n> > + * Don't try to increment the count if the parallel\n> > apply worker is\n> > + * taking the stream lock. Otherwise, there would be\n> > a race condition\n> > + * that the parallel apply worker checks there is no\n> > pending streaming\n> > + * block and before it actually starts waiting on a\n> > lock, the leader\n> > + * sends another streaming block and take the stream\n> > lock again. In\n> > + * this case, the parallel apply worker will start\n> > waiting for the next\n> > + * streaming block whereas there is actually a\n> > pending streaming block\n> > + * available.\n> > + */\n> > + if (!wshared->pa_wait_for_stream)\n> > + {\n> > + wshared->pending_stream_count++;\n> > + SpinLockRelease(&wshared->mutex);\n> > + break;\n> > + }\n> > +\n> > + SpinLockRelease(&wshared->mutex);\n> > + }\n> > +}\n> >\n> > I think we should add an assertion to check if we don't hold the stream lock.\n> >\n> > I think that waiting for pa_wait_for_stream to be false in a busy loop is not a\n> > good idea. 
It's not interruptible and there is not guarantee that we can break\n> > from this loop in a short time. For instance, if PA executes\n> > pa_decr_and_wait_stream_block() a bit earlier than LA executes\n> > pa_increment_stream_block(), LA has to wait for PA to acquire and release the\n> > stream lock in a busy loop. It should not be long in normal cases but the\n> > duration LA needs to wait for PA depends on PA, which could be long. Also\n> > what if PA raises an error in\n> > pa_lock_stream() due to some reasons? I think LA won't be able to detect the\n> > failure.\n> >\n> > I think we should at least make it interruptible and maybe need to add some\n> > sleep. Or perhaps we can use the condition variable for this case.\n>\n\nOr we can leave this while (true) logic altogether for the first\nversion and have a comment to explain this race. Anyway, after\nrestarting, it will probably be solved. We can always change this part\nof the code later if this really turns out to be problematic.\n\n> Thanks for the analysis, I will research this part.\n>\n> > ---\n> > In worker.c, we have the following common pattern:\n> >\n> > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > write change to the file;\n> > do some work;\n> > break;\n> >\n> > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > pa_send_data();\n> >\n> > if (winfo->serialize_changes)\n> > {\n> > do some worker required after writing changes to the file.\n> > }\n> > :\n> > break;\n> >\n> > IIUC there are two different paths for partial serialization: (a) where\n> > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\n> > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE and\n> > winfo->serialize_changes became true. And we need to match what we do\n> > in (a) and (b). Rather than having two different paths for the same case, how\n> > about falling through TRANS_LEADER_PARTIAL_SERIALIZE when we could not\n> > send the changes? 
That is, pa_send_data() just returns false when the timeout\n> > exceeds and we need to switch to serialize changes, otherwise returns true. If it\n> > returns false, we prepare for switching to serialize changes such as initializing\n> > fileset, and fall through TRANS_LEADER_PARTIAL_SERIALIZE case. The code\n> > would be like:\n> >\n> > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > ret = pa_send_data();\n> >\n> > if (ret)\n> > {\n> > do work for sending changes to PA.\n> > break;\n> > }\n> >\n> > /* prepare for switching to serialize changes */\n> > winfo->serialize_changes = true;\n> > initialize fileset;\n> > acquire stream lock if necessary;\n> >\n> > /* FALLTHROUGH */\n> > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > do work for serializing changes;\n> > break;\n>\n> I think that the suggestion is to extract the code that switch to serialize\n> mode out of the pa_send_data(), and then we need to add that logic in all the\n> functions which call pa_send_data(), I am not sure if it looks better as it\n> might introduce some more codes in each handling function.\n>\n\nHow about extracting the common code from apply_handle_stream_commit\nand apply_handle_stream_prepare to a separate function say\npa_xact_finish_common()? 
I see there is a lot of common code (unlock\nthe stream, wait for the finish, store flush location, free worker\ninfo) in both the functions for TRANS_LEADER_PARTIAL_SERIALIZE and\nTRANS_LEADER_SEND_TO_PARALLEL cases.\n\n>\n> > ---\n> > void\n> > pa_lock_stream(TransactionId xid, LOCKMODE lockmode) {\n> > LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\n> > PARALLEL_APPLY_LOCK_STREAM,\n> > lockmode); }\n> >\n> > I think since we don't need to let the caller to specify the lock mode but need\n> > only shared and exclusive modes, we can make it simple by having a boolean\n> > argument say shared instead of lockmode.\n>\n> I personally think passing the lockmode would make the code more clear\n> than passing a Boolean value.\n>\n\n+1.\n\nI have made a few changes in the newly added comments and function\nname in the attached patch. Kindly include this if you find the\nchanges okay.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 14 Dec 2022 12:18:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 14, 2022 at 1:20 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sun, Dec 11, 2022 at 8:45 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Friday, December 9, 2022 3:14 PM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 8, 2022 at 12:37 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > >\n> > > > Review comments\n> > >\n> > > Thanks for the comments!\n> > >\n> > > > ==============\n> > > > 1. 
Currently, we don't release the stream lock in LA (leade apply\n> > > > worker) for \"rollback to savepoint\" and the reason is mentioned in\n> > > > comments of\n> > > > apply_handle_stream_abort() in the patch. But, today, while testing,\n> > > > I found that can lead to deadlock which otherwise, won't happen on\n> > > > the publisher. The key point is rollback to savepoint releases the\n> > > > locks acquired by the particular subtransaction, so parallel apply\n> > > > worker should also do the same. Consider the following example where\n> > > > the transaction in session-1 is being performed by the parallel\n> > > > apply worker and the transaction in session-2 is being performed by the\n> > leader apply worker. I have simulated it by using GUC force_stream_mode.\n> > > > Publisher\n> > > > ==========\n> > > > Session-1\n> > > > postgres=# begin;\n> > > > BEGIN\n> > > > postgres=*# savepoint s1;\n> > > > SAVEPOINT\n> > > > postgres=*# truncate t1;\n> > > > TRUNCATE TABLE\n> > > >\n> > > > Session-2\n> > > > postgres=# begin;\n> > > > BEGIN\n> > > > postgres=*# insert into t1 values(4);\n> > > >\n> > > > Session-1\n> > > > postgres=*# rollback to savepoint s1; ROLLBACK\n> > > >\n> > > > Session-2\n> > > > Commit;\n> > > >\n> > > > With or without commit of Session-2, this scenario will lead to\n> > > > deadlock on the subscriber because PA (parallel apply worker) is\n> > > > waiting for LA to send the next command, and LA is blocked by\n> > > > Exclusive of PA. 
There is no deadlock on the publisher because\n> > > > rollback to savepoint will release the lock acquired by truncate.\n> > > >\n> > > > To solve this, How about if we do three things before sending abort\n> > > > of sub-transaction (a) unlock the stream lock, (b) increment\n> > > > pending_stream_count,\n> > > > (c) take the stream lock again?\n> > > >\n> > > > Now, if the PA is not already waiting on the stop, it will not wait\n> > > > at stream_stop but will wait after applying abort of sub-transaction\n> > > > and if it is already waiting at stream_stop, the wait will be\n> > > > released. If this works then probably we should try to do (b) before (a) to\n> > match the steps with stream_start.\n> > >\n> > > The solution works for me, I have changed the code as suggested.\n> > >\n> > >\n> > > > 2. There seems to be another general problem in the way the patch\n> > > > waits for stream_stop in PA (parallel apply worker). Currently, PA\n> > > > checks, if there are no more pending streams then it tries to wait\n> > > > for the next stream by waiting on a stream lock. However, it is\n> > > > possible after PA checks there is no pending stream and before it\n> > > > actually starts waiting on a lock, the LA sends another stream for\n> > > > which even stream_stop is sent, in this case, PA will start waiting\n> > > > for the next stream whereas there is actually a pending stream\n> > > > available. In this case, it won't lead to any problem apart from\n> > > > delay in applying the changes in such cases but for the case mentioned in\n> > the previous point (Pont 1), it can lead to deadlock even after we implement the\n> > solution proposed to solve it.\n> > >\n> > > Thanks for reporting, I have introduced another flag in shared memory\n> > > and use it to prevent the leader from incrementing the\n> > > pending_stream_count if the parallel apply worker is trying to lock the stream\n> > lock.\n> > >\n> > >\n> > > > 3. 
The other point to consider is that for\n> > > > stream_commit/prepare/abort, in LA, we release the stream lock after\n> > > > sending the message whereas for stream_start we release it before\n> > > > sending the message. I think for the earlier cases\n> > > > (stream_commit/prepare/abort), the patch has done like this because\n> > > > pa_send_data() may need to require the lock again when it times out\n> > > > and start serializing, so there will be no sense in first releasing\n> > > > it, then re-acquiring it, and then again releasing it. Can't we also\n> > > > release the lock for stream_start after\n> > > > pa_send_data() only if it is not switched to serialize mode?\n> > >\n> > > Changed.\n> > >\n> > > Attach the new version patch set which addressed above comments.\n> >\n> > Here are comments on v59 0001, 0002 patches:\n>\n> Thanks for the comments!\n>\n> > +void\n> > +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\n> > + while (1)\n> > + {\n> > + SpinLockAcquire(&wshared->mutex);\n> > +\n> > + /*\n> > + * Don't try to increment the count if the parallel\n> > apply worker is\n> > + * taking the stream lock. Otherwise, there would be\n> > a race condition\n> > + * that the parallel apply worker checks there is no\n> > pending streaming\n> > + * block and before it actually starts waiting on a\n> > lock, the leader\n> > + * sends another streaming block and take the stream\n> > lock again. 
In\n> > + * this case, the parallel apply worker will start\n> > waiting for the next\n> > + * streaming block whereas there is actually a\n> > pending streaming block\n> > + * available.\n> > + */\n> > + if (!wshared->pa_wait_for_stream)\n> > + {\n> > + wshared->pending_stream_count++;\n> > + SpinLockRelease(&wshared->mutex);\n> > + break;\n> > + }\n> > +\n> > + SpinLockRelease(&wshared->mutex);\n> > + }\n> > +}\n> >\n> > I think we should add an assertion to check if we don't hold the stream lock.\n> >\n> > I think that waiting for pa_wait_for_stream to be false in a busy loop is not a\n> > good idea. It's not interruptible and there is not guarantee that we can break\n> > from this loop in a short time. For instance, if PA executes\n> > pa_decr_and_wait_stream_block() a bit earlier than LA executes\n> > pa_increment_stream_block(), LA has to wait for PA to acquire and release the\n> > stream lock in a busy loop. It should not be long in normal cases but the\n> > duration LA needs to wait for PA depends on PA, which could be long. Also\n> > what if PA raises an error in\n> > pa_lock_stream() due to some reasons? I think LA won't be able to detect the\n> > failure.\n> >\n> > I think we should at least make it interruptible and maybe need to add some\n> > sleep. 
Or perhaps we can use the condition variable for this case.\n>\n> Thanks for the analysis, I will research this part.\n>\n> > ---\n> > In worker.c, we have the following common pattern:\n> >\n> > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > write change to the file;\n> > do some work;\n> > break;\n> >\n> > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > pa_send_data();\n> >\n> > if (winfo->serialize_changes)\n> > {\n> > do some worker required after writing changes to the file.\n> > }\n> > :\n> > break;\n> >\n> > IIUC there are two different paths for partial serialization: (a) where\n> > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\n> > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE and\n> > winfo->serialize_changes became true. And we need to match what we do\n> > in (a) and (b). Rather than having two different paths for the same case, how\n> > about falling through TRANS_LEADER_PARTIAL_SERIALIZE when we could not\n> > send the changes? That is, pa_send_data() just returns false when the timeout\n> > exceeds and we need to switch to serialize changes, otherwise returns true. If it\n> > returns false, we prepare for switching to serialize changes such as initializing\n> > fileset, and fall through TRANS_LEADER_PARTIAL_SERIALIZE case. 
The code\n> > would be like:\n> >\n> > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > ret = pa_send_data();\n> >\n> > if (ret)\n> > {\n> > do work for sending changes to PA.\n> > break;\n> > }\n> >\n> > /* prepare for switching to serialize changes */\n> > winfo->serialize_changes = true;\n> > initialize fileset;\n> > acquire stream lock if necessary;\n> >\n> > /* FALLTHROUGH */\n> > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > do work for serializing changes;\n> > break;\n>\n> I think that the suggestion is to extract the code that switch to serialize\n> mode out of the pa_send_data(), and then we need to add that logic in all the\n> functions which call pa_send_data(), I am not sure if it looks better as it\n> might introduce some more codes in each handling function.\n\nI think we can have a common function to prepare for switching to\nserialize changes. With the current code, I'm concerned that we have\nto check if what we do in both cases are matched whenever we change\nthe code for the partial serialization case.\n\n> > ---\n> > void\n> > pa_lock_stream(TransactionId xid, LOCKMODE lockmode) {\n> > LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\n> > PARALLEL_APPLY_LOCK_STREAM,\n> > lockmode); }\n> >\n> > I think since we don't need to let the caller to specify the lock mode but need\n> > only shared and exclusive modes, we can make it simple by having a boolean\n> > argument say shared instead of lockmode.\n>\n> I personally think passing the lockmode would make the code more clear\n> than passing a Boolean value.\n\nOkay, agreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 18:19:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, December 14, 2022 2:49 PM Amit Kapila <amit.kapila16@gmail.com> 
wrote:\r\n\r\n> \r\n> On Wed, Dec 14, 2022 at 9:50 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > Here are comments on v59 0001, 0002 patches:\r\n> >\r\n> > Thanks for the comments!\r\n> >\r\n> > > +void\r\n> > > +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\r\n> > > + while (1)\r\n> > > + {\r\n> > > + SpinLockAcquire(&wshared->mutex);\r\n> > > +\r\n> > > + /*\r\n> > > + * Don't try to increment the count if the parallel\r\n> > > apply worker is\r\n> > > + * taking the stream lock. Otherwise, there would\r\n> > > + be\r\n> > > a race condition\r\n> > > + * that the parallel apply worker checks there is\r\n> > > + no\r\n> > > pending streaming\r\n> > > + * block and before it actually starts waiting on a\r\n> > > lock, the leader\r\n> > > + * sends another streaming block and take the\r\n> > > + stream\r\n> > > lock again. In\r\n> > > + * this case, the parallel apply worker will start\r\n> > > waiting for the next\r\n> > > + * streaming block whereas there is actually a\r\n> > > pending streaming block\r\n> > > + * available.\r\n> > > + */\r\n> > > + if (!wshared->pa_wait_for_stream)\r\n> > > + {\r\n> > > + wshared->pending_stream_count++;\r\n> > > + SpinLockRelease(&wshared->mutex);\r\n> > > + break;\r\n> > > + }\r\n> > > +\r\n> > > + SpinLockRelease(&wshared->mutex);\r\n> > > + }\r\n> > > +}\r\n> > >\r\n> > > I think we should add an assertion to check if we don't hold the stream lock.\r\n> > >\r\n> > > I think that waiting for pa_wait_for_stream to be false in a busy\r\n> > > loop is not a good idea. It's not interruptible and there is not\r\n> > > guarantee that we can break from this loop in a short time. 
For\r\n> > > instance, if PA executes\r\n> > > pa_decr_and_wait_stream_block() a bit earlier than LA executes\r\n> > > pa_increment_stream_block(), LA has to wait for PA to acquire and\r\n> > > release the stream lock in a busy loop. It should not be long in\r\n> > > normal cases but the duration LA needs to wait for PA depends on PA,\r\n> > > which could be long. Also what if PA raises an error in\r\n> > > pa_lock_stream() due to some reasons? I think LA won't be able to\r\n> > > detect the failure.\r\n> > >\r\n> > > I think we should at least make it interruptible and maybe need to\r\n> > > add some sleep. Or perhaps we can use the condition variable for this case.\r\n> >\r\n> \r\n> Or we can leave this while (true) logic altogether for the first version and have a\r\n> comment to explain this race. Anyway, after restarting, it will probably be\r\n> solved. We can always change this part of the code later if this really turns out\r\n> to be problematic.\r\n\r\nAgreed, and reverted this part.\r\n\r\n> \r\n> > Thanks for the analysis, I will research this part.\r\n> >\r\n> > > ---\r\n> > > In worker.c, we have the following common pattern:\r\n> > >\r\n> > > case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> > > write change to the file;\r\n> > > do some work;\r\n> > > break;\r\n> > >\r\n> > > case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> > > pa_send_data();\r\n> > >\r\n> > > if (winfo->serialize_changes)\r\n> > > {\r\n> > > do some worker required after writing changes to the file.\r\n> > > }\r\n> > > :\r\n> > > break;\r\n> > >\r\n> > > IIUC there are two different paths for partial serialization: (a)\r\n> > > where apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\r\n> > > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE and\r\n> > > winfo->serialize_changes became true. And we need to match what we\r\n> > > winfo->do\r\n> > > in (a) and (b). 
Rather than having two different paths for the same\r\n> > > case, how about falling through TRANS_LEADER_PARTIAL_SERIALIZE when\r\n> > > we could not send the changes? That is, pa_send_data() just returns\r\n> > > false when the timeout exceeds and we need to switch to serialize\r\n> > > changes, otherwise returns true. If it returns false, we prepare for\r\n> > > switching to serialize changes such as initializing fileset, and\r\n> > > fall through TRANS_LEADER_PARTIAL_SERIALIZE case. The code would be\r\n> like:\r\n> > >\r\n> > > case TRANS_LEADER_SEND_TO_PARALLEL:\r\n> > > ret = pa_send_data();\r\n> > >\r\n> > > if (ret)\r\n> > > {\r\n> > > do work for sending changes to PA.\r\n> > > break;\r\n> > > }\r\n> > >\r\n> > > /* prepare for switching to serialize changes */\r\n> > > winfo->serialize_changes = true;\r\n> > > initialize fileset;\r\n> > > acquire stream lock if necessary;\r\n> > >\r\n> > > /* FALLTHROUGH */\r\n> > > case TRANS_LEADER_PARTIAL_SERIALIZE:\r\n> > > do work for serializing changes;\r\n> > > break;\r\n> >\r\n> > I think that the suggestion is to extract the code that switch to\r\n> > serialize mode out of the pa_send_data(), and then we need to add that\r\n> > logic in all the functions which call pa_send_data(), I am not sure if\r\n> > it looks better as it might introduce some more codes in each handling\r\n> function.\r\n> >\r\n> \r\n> How about extracting the common code from apply_handle_stream_commit\r\n> and apply_handle_stream_prepare to a separate function say\r\n> pa_xact_finish_common()? I see there is a lot of common code (unlock the\r\n> stream, wait for the finish, store flush location, free worker\r\n> info) in both the functions for TRANS_LEADER_PARTIAL_SERIALIZE and\r\n> TRANS_LEADER_SEND_TO_PARALLEL cases.\r\n\r\nAgreed, changed. 
I also addressed Sawada-san comment by extracting the\r\ncode that switch to serialize out of pa_send_data().\r\n\r\n> >\r\n> > > ---\r\n> > > void\r\n> > > pa_lock_stream(TransactionId xid, LOCKMODE lockmode) {\r\n> > > LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\r\n> > > PARALLEL_APPLY_LOCK_STREAM,\r\n> > > lockmode); }\r\n> > >\r\n> > > I think since we don't need to let the caller to specify the lock\r\n> > > mode but need only shared and exclusive modes, we can make it simple\r\n> > > by having a boolean argument say shared instead of lockmode.\r\n> >\r\n> > I personally think passing the lockmode would make the code more clear\r\n> > than passing a Boolean value.\r\n> >\r\n> \r\n> +1.\r\n> \r\n> I have made a few changes in the newly added comments and function name in\r\n> the attached patch. Kindly include this if you find the changes okay.\r\n\r\nThanks, I have checked and merged it.\r\n\r\nAttach the new version patch set which addressed all comments so far.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 15 Dec 2022 03:28:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 14, 2022 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 9:50 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Here are comments on v59 0001, 0002 patches:\n> >\n> > Thanks for the comments!\n> >\n> > > +void\n> > > +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\n> > > + while (1)\n> > > + {\n> > > + SpinLockAcquire(&wshared->mutex);\n> > > +\n> > > + /*\n> > > + * Don't try to increment the count if the parallel\n> > > apply worker is\n> > > + * taking the stream lock. 
Otherwise, there would be\n> > > a race condition\n> > > + * that the parallel apply worker checks there is no\n> > > pending streaming\n> > > + * block and before it actually starts waiting on a\n> > > lock, the leader\n> > > + * sends another streaming block and take the stream\n> > > lock again. In\n> > > + * this case, the parallel apply worker will start\n> > > waiting for the next\n> > > + * streaming block whereas there is actually a\n> > > pending streaming block\n> > > + * available.\n> > > + */\n> > > + if (!wshared->pa_wait_for_stream)\n> > > + {\n> > > + wshared->pending_stream_count++;\n> > > + SpinLockRelease(&wshared->mutex);\n> > > + break;\n> > > + }\n> > > +\n> > > + SpinLockRelease(&wshared->mutex);\n> > > + }\n> > > +}\n> > >\n> > > I think we should add an assertion to check if we don't hold the stream lock.\n> > >\n> > > I think that waiting for pa_wait_for_stream to be false in a busy loop is not a\n> > > good idea. It's not interruptible and there is not guarantee that we can break\n> > > from this loop in a short time. For instance, if PA executes\n> > > pa_decr_and_wait_stream_block() a bit earlier than LA executes\n> > > pa_increment_stream_block(), LA has to wait for PA to acquire and release the\n> > > stream lock in a busy loop. It should not be long in normal cases but the\n> > > duration LA needs to wait for PA depends on PA, which could be long. Also\n> > > what if PA raises an error in\n> > > pa_lock_stream() due to some reasons? I think LA won't be able to detect the\n> > > failure.\n> > >\n> > > I think we should at least make it interruptible and maybe need to add some\n> > > sleep. Or perhaps we can use the condition variable for this case.\n> >\n>\n> Or we can leave this while (true) logic altogether for the first\n> version and have a comment to explain this race. Anyway, after\n> restarting, it will probably be solved. We can always change this part\n> of the code later if this really turns out to be problematic.\n>\n\n+1. 
Thank you Hou-san for adding this comment in the latest version (v61) patch!\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 15 Dec 2022 14:21:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 15, 2022 at 8:58 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nFew minor comments:\n=================\n1.\n+ for (i = list_length(subxactlist) - 1; i >= 0; i--)\n+ {\n+ TransactionId xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\n+\n+ if (xid_tmp == subxid)\n+ {\n+ RollbackToSavepoint(spname);\n+ CommitTransactionCommand();\n+ subxactlist = list_truncate(subxactlist, i + 1);\n\nI find that there is always one element extra in the list after\nrollback to savepoint. Don't we need to truncate the list to 'i' as\nshown in the diff below?\n\n2.\n* Note that If it's an empty sub-transaction then we will not find\n* the subxid here.\n\nThe word 'If' in the above comment seems to be in the wrong case. 
Anyway, I have slightly\nmodified it as you can see in the diff below.\n\n$ git diff\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 11695c75fa..c809b1fd01 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -1516,8 +1516,8 @@ pa_stream_abort(LogicalRepStreamAbortData *abort_data)\n * Search the subxactlist, determine the offset tracked for the\n * subxact, and truncate the list.\n *\n- * Note that If it's an empty sub-transaction then we\nwill not find\n- * the subxid here.\n+ * Note that for an empty sub-transaction we won't\nfind the subxid\n+ * here.\n */\n for (i = list_length(subxactlist) - 1; i >= 0; i--)\n {\n@@ -1527,7 +1527,7 @@ pa_stream_abort(LogicalRepStreamAbortData *abort_data)\n {\n RollbackToSavepoint(spname);\n CommitTransactionCommand();\n- subxactlist = list_truncate(subxactlist, i + 1);\n+ subxactlist = list_truncate(subxactlist, i);\n break;\n }\n }\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Dec 2022 18:29:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 15, 2022 at 12:28 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, December 14, 2022 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > On Wed, Dec 14, 2022 at 9:50 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, December 13, 2022 11:25 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Here are comments on v59 0001, 0002 patches:\n> > >\n> > > Thanks for the comments!\n> > >\n> > > > +void\n> > > > +pa_increment_stream_block(ParallelApplyWorkerShared *wshared) {\n> > > > + while (1)\n> > > > + {\n> > > > + 
SpinLockAcquire(&wshared->mutex);\n> > > > +\n> > > > + /*\n> > > > + * Don't try to increment the count if the parallel\n> > > > apply worker is\n> > > > + * taking the stream lock. Otherwise, there would\n> > > > + be\n> > > > a race condition\n> > > > + * that the parallel apply worker checks there is\n> > > > + no\n> > > > pending streaming\n> > > > + * block and before it actually starts waiting on a\n> > > > lock, the leader\n> > > > + * sends another streaming block and take the\n> > > > + stream\n> > > > lock again. In\n> > > > + * this case, the parallel apply worker will start\n> > > > waiting for the next\n> > > > + * streaming block whereas there is actually a\n> > > > pending streaming block\n> > > > + * available.\n> > > > + */\n> > > > + if (!wshared->pa_wait_for_stream)\n> > > > + {\n> > > > + wshared->pending_stream_count++;\n> > > > + SpinLockRelease(&wshared->mutex);\n> > > > + break;\n> > > > + }\n> > > > +\n> > > > + SpinLockRelease(&wshared->mutex);\n> > > > + }\n> > > > +}\n> > > >\n> > > > I think we should add an assertion to check if we don't hold the stream lock.\n> > > >\n> > > > I think that waiting for pa_wait_for_stream to be false in a busy\n> > > > loop is not a good idea. It's not interruptible and there is not\n> > > > guarantee that we can break from this loop in a short time. For\n> > > > instance, if PA executes\n> > > > pa_decr_and_wait_stream_block() a bit earlier than LA executes\n> > > > pa_increment_stream_block(), LA has to wait for PA to acquire and\n> > > > release the stream lock in a busy loop. It should not be long in\n> > > > normal cases but the duration LA needs to wait for PA depends on PA,\n> > > > which could be long. Also what if PA raises an error in\n> > > > pa_lock_stream() due to some reasons? I think LA won't be able to\n> > > > detect the failure.\n> > > >\n> > > > I think we should at least make it interruptible and maybe need to\n> > > > add some sleep. 
Or perhaps we can use the condition variable for this case.\n> > >\n> >\n> > Or we can leave this while (true) logic altogether for the first version and have a\n> > comment to explain this race. Anyway, after restarting, it will probably be\n> > solved. We can always change this part of the code later if this really turns out\n> > to be problematic.\n>\n> Agreed, and reverted this part.\n>\n> >\n> > > Thanks for the analysis, I will research this part.\n> > >\n> > > > ---\n> > > > In worker.c, we have the following common pattern:\n> > > >\n> > > > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > > > write change to the file;\n> > > > do some work;\n> > > > break;\n> > > >\n> > > > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > > > pa_send_data();\n> > > >\n> > > > if (winfo->serialize_changes)\n> > > > {\n> > > > do some worker required after writing changes to the file.\n> > > > }\n> > > > :\n> > > > break;\n> > > >\n> > > > IIUC there are two different paths for partial serialization: (a)\n> > > > where apply_action is TRANS_LEADER_PARTIAL_SERIALIZE, and (b) where\n> > > > apply_action is TRANS_LEADER_PARTIAL_SERIALIZE and\n> > > > winfo->serialize_changes became true. And we need to match what we\n> > > > winfo->do\n> > > > in (a) and (b). Rather than having two different paths for the same\n> > > > case, how about falling through TRANS_LEADER_PARTIAL_SERIALIZE when\n> > > > we could not send the changes? That is, pa_send_data() just returns\n> > > > false when the timeout exceeds and we need to switch to serialize\n> > > > changes, otherwise returns true. If it returns false, we prepare for\n> > > > switching to serialize changes such as initializing fileset, and\n> > > > fall through TRANS_LEADER_PARTIAL_SERIALIZE case. 
The code would be\n> > like:\n> > > >\n> > > > case TRANS_LEADER_SEND_TO_PARALLEL:\n> > > > ret = pa_send_data();\n> > > >\n> > > > if (ret)\n> > > > {\n> > > > do work for sending changes to PA.\n> > > > break;\n> > > > }\n> > > >\n> > > > /* prepare for switching to serialize changes */\n> > > > winfo->serialize_changes = true;\n> > > > initialize fileset;\n> > > > acquire stream lock if necessary;\n> > > >\n> > > > /* FALLTHROUGH */\n> > > > case TRANS_LEADER_PARTIAL_SERIALIZE:\n> > > > do work for serializing changes;\n> > > > break;\n> > >\n> > > I think that the suggestion is to extract the code that switch to\n> > > serialize mode out of the pa_send_data(), and then we need to add that\n> > > logic in all the functions which call pa_send_data(), I am not sure if\n> > > it looks better as it might introduce some more codes in each handling\n> > function.\n> > >\n> >\n> > How about extracting the common code from apply_handle_stream_commit\n> > and apply_handle_stream_prepare to a separate function say\n> > pa_xact_finish_common()? I see there is a lot of common code (unlock the\n> > stream, wait for the finish, store flush location, free worker\n> > info) in both the functions for TRANS_LEADER_PARTIAL_SERIALIZE and\n> > TRANS_LEADER_SEND_TO_PARALLEL cases.\n>\n> Agreed, changed. 
I also addressed Sawada-san comment by extracting the\n> code that switch to serialize out of pa_send_data().\n>\n> > >\n> > > > ---\n> > > > void\n> > > > pa_lock_stream(TransactionId xid, LOCKMODE lockmode) {\n> > > > LockApplyTransactionForSession(MyLogicalRepWorker->subid, xid,\n> > > > PARALLEL_APPLY_LOCK_STREAM,\n> > > > lockmode); }\n> > > >\n> > > > I think since we don't need to let the caller to specify the lock\n> > > > mode but need only shared and exclusive modes, we can make it simple\n> > > > by having a boolean argument say shared instead of lockmode.\n> > >\n> > > I personally think passing the lockmode would make the code more clear\n> > > than passing a Boolean value.\n> > >\n> >\n> > +1.\n> >\n> > I have made a few changes in the newly added comments and function name in\n> > the attached patch. Kindly include this if you find the changes okay.\n>\n> Thanks, I have checked and merged it.\n>\n> Attach the new version patch set which addressed all comments so far.\n\nThank you for updating the patches! Here are some minor comments:\n\n@@ -100,7 +100,6 @@ static void check_duplicates_in_publist(List\n*publist, Datum *datums);\n static List *merge_publications(List *oldpublist, List *newpublist,\nbool addpub, const char *subname);\n static void ReportSlotConnectionError(List *rstates, Oid subid, char\n*slotname, char *err);\n\n-\n /*\n * Common option parsing function for CREATE and ALTER SUBSCRIPTION commands.\n *\n\nUnnecessary line removal.\n\n---\n+ * Swtich to PARTIAL_SERIALIZE mode for the current transaction -- this means\n\ntypo\n\ns/Swtich/Switch/\n\n---\n+pa_has_spooled_message_pending()\n+{\n+ PartialFileSetState fileset_state;\n+\n+ fileset_state = pa_get_fileset_state();\n+\n+ if (fileset_state != FS_UNKNOWN)\n+ return true;\n+ else\n+ return false;\n+}\n\nI think we can simply do:\n\nreturn (fileset_state != FS_UNKNOWN);\n\nOr do we need this function in the first place? 
I think we can do in\nLogicalParallelApplyLoop() like:\n\nelse if (shmq_res == SHM_MQ_WOULD_BLOCK)\n{\n /* Check if changes have been serialized to a file. */\n if (pa_get_fileset_state != FS_UNKNOWN)\n {\n pa_spooled_messages();\n }\n\nAlso, I think the name FS_UNKNOWN doesn't mean anything. It sounds\nrather we don't expect this state but it's not true. How about\nFS_INITIAL or FS_EMPTY? It sounds more understandable.\n\n---\n+/*\n+ * Wait until the parallel apply worker's transaction finishes.\n+ */\n+void\n+pa_wait_for_xact_finish(ParallelApplyWorkerInfo *winfo)\n\nI think we no longer need to expose pa_wait_for_exact_finish().\n\n---\n+ active_workers = list_copy(ParallelApplyWorkerPool);\n+\n+ foreach(lc, active_workers)\n+ {\n+ int slot_no;\n+ uint16 generation;\n+ ParallelApplyWorkerInfo *winfo =\n(ParallelApplyWorkerInfo *) lfirst(lc);\n+\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+ napplyworkers =\nlogicalrep_pa_worker_count(MyLogicalRepWorker->subid);\n+ LWLockRelease(LogicalRepWorkerLock);\n+\n+ if (napplyworkers <=\nmax_parallel_apply_workers_per_subscription / 2)\n+ return;\n+\n\nCalling logicalrep_pa_worker_count() with lwlock for each worker seems\nnot efficient to me. I think we can get the number of workers once at\nthe top of this function and return if it's already lower than the\nmaximum pool size. Otherwise, we attempt to stop extra workers.\n\n---\n+bool\n+pa_free_worker(ParallelApplyWorkerInfo *winfo, TransactionId xid)\n+{\n\n\nIs there any reason why this function has the XID as a separate\nargument? It seems to me that since we always call this function with\n'winfo' and 'winfo->shared->xid', we can remove xid from the function\nargument.\n\n---\n+ /* Initialize shared memory area. 
*/\n+ SpinLockAcquire(&winfo->shared->mutex);\n+ winfo->shared->xact_state = PARALLEL_TRANS_UNKNOWN;\n+ winfo->shared->xid = xid;\n+ SpinLockRelease(&winfo->shared->mutex);\n\nIt's practically no problem but is there any reason why some fields of\nParallelApplyWorkerInfo are initialized in pa_setup_dsm() whereas some\nfields are done here?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 16 Dec 2022 16:08:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 16, 2022 3:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> \r\n>Here are some minor comments:\r\n\r\nThanks for the comments!\r\n\r\n> ---\r\n> +pa_has_spooled_message_pending()\r\n> +{\r\n> + PartialFileSetState fileset_state;\r\n> +\r\n> + fileset_state = pa_get_fileset_state();\r\n> +\r\n> + if (fileset_state != FS_UNKNOWN)\r\n> + return true;\r\n> + else\r\n> + return false;\r\n> +}\r\n> \r\n> I think we can simply do:\r\n> \r\n> return (fileset_state != FS_UNKNOWN);\r\n\r\nWill change.\r\n\r\n> \r\n> Or do we need this function in the first place? 
I think we can do in\r\n> LogicalParallelApplyLoop() like:\r\n\r\nMy intention was to not expose the file state in the main loop, so it seems better\r\nto keep this function.\r\n\r\n> ---\r\n> + active_workers = list_copy(ParallelApplyWorkerPool);\r\n> +\r\n> + foreach(lc, active_workers)\r\n> + {\r\n> + int slot_no;\r\n> + uint16 generation;\r\n> + ParallelApplyWorkerInfo *winfo =\r\n> (ParallelApplyWorkerInfo *) lfirst(lc);\r\n> +\r\n> + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> + napplyworkers =\r\n> logicalrep_pa_worker_count(MyLogicalRepWorker->subid);\r\n> + LWLockRelease(LogicalRepWorkerLock);\r\n> +\r\n> + if (napplyworkers <=\r\n> max_parallel_apply_workers_per_subscription / 2)\r\n> + return;\r\n> +\r\n> \r\n> Calling logicalrep_pa_worker_count() with lwlock for each worker seems\r\n> not efficient to me. I think we can get the number of workers once at\r\n> the top of this function and return if it's already lower than the\r\n> maximum pool size. 
*/\r\n> + SpinLockAcquire(&winfo->shared->mutex);\r\n> + winfo->shared->xact_state = PARALLEL_TRANS_UNKNOWN;\r\n> + winfo->shared->xid = xid;\r\n> + SpinLockRelease(&winfo->shared->mutex);\r\n> \r\n> It's practically no problem but is there any reason why some fields of\r\n> ParallelApplyWorkerInfo are initialized in pa_setup_dsm() whereas some\r\n> fields are done here?\r\n\r\nWe could be using old worker in the pool here in which case we need to update\r\nthese fields with the new streaming transaction information.\r\n\r\nI will address other comments except above ones which are being discussed.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Fri, 16 Dec 2022 09:17:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 15, 2022 at 6:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\nI have noticed that the origin information of the rollback is not\nrestored after restart of the server. So, the apply worker will send\nthe old origin information in that case. It seems we need the below\nchange in XactLogAbortRecord(). What do you think?\n\ndiff --git a/src/backend/access/transam/xact.c\nb/src/backend/access/transam/xact.c\nindex 419fac5d6f..1b047133db 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -5880,11 +5880,10 @@ XactLogAbortRecord(TimestampTz abort_time,\n }\n\n /*\n- * Dump transaction origin information only for abort prepared. We need\n- * this during recovery to update the replication origin progress.\n+ * Dump transaction origin information. 
We need this during recovery to\n+ * update the replication origin progress.\n */\n- if ((replorigin_session_origin != InvalidRepOriginId) &&\n- TransactionIdIsValid(twophase_xid))\n+ if (replorigin_session_origin != InvalidRepOriginId)\n {\n xl_xinfo.xinfo |= XACT_XINFO_HAS_ORIGIN;\n\n@@ -5941,8 +5940,8 @@ XactLogAbortRecord(TimestampTz abort_time,\n if (xl_xinfo.xinfo & XACT_XINFO_HAS_ORIGIN)\n XLogRegisterData((char *) (&xl_origin), sizeof(xl_xact_origin));\n\n- if (TransactionIdIsValid(twophase_xid))\n- XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);\n+ /* include the replication origin */\n+ XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Dec 2022 16:09:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 16, 2022 at 2:47 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > ---\n> > + active_workers = list_copy(ParallelApplyWorkerPool);\n> > +\n> > + foreach(lc, active_workers)\n> > + {\n> > + int slot_no;\n> > + uint16 generation;\n> > + ParallelApplyWorkerInfo *winfo =\n> > (ParallelApplyWorkerInfo *) lfirst(lc);\n> > +\n> > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > + napplyworkers =\n> > logicalrep_pa_worker_count(MyLogicalRepWorker->subid);\n> > + LWLockRelease(LogicalRepWorkerLock);\n> > +\n> > + if (napplyworkers <=\n> > max_parallel_apply_workers_per_subscription / 2)\n> > + return;\n> > +\n> >\n> > Calling logicalrep_pa_worker_count() with lwlock for each worker seems\n> > not efficient to me. I think we can get the number of workers once at\n> > the top of this function and return if it's already lower than the\n> > maximum pool size. 
Otherwise, we attempt to stop extra workers.\n>\n> How about we directly check the length of worker pool list here which\n> seems simpler and don't need to lock ?\n>\n\nI don't see any problem with that. Also, if such a check is safe then\ncan't we use the same in pa_free_worker() as well? BTW, shouldn't\npa_stop_idle_workers() try to free/stop workers unless the active\nnumber reaches below max_parallel_apply_workers_per_subscription?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Dec 2022 16:34:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 16, 2022 at 4:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 16, 2022 at 2:47 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > > ---\n> > > + active_workers = list_copy(ParallelApplyWorkerPool);\n> > > +\n> > > + foreach(lc, active_workers)\n> > > + {\n> > > + int slot_no;\n> > > + uint16 generation;\n> > > + ParallelApplyWorkerInfo *winfo =\n> > > (ParallelApplyWorkerInfo *) lfirst(lc);\n> > > +\n> > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > > + napplyworkers =\n> > > logicalrep_pa_worker_count(MyLogicalRepWorker->subid);\n> > > + LWLockRelease(LogicalRepWorkerLock);\n> > > +\n> > > + if (napplyworkers <=\n> > > max_parallel_apply_workers_per_subscription / 2)\n> > > + return;\n> > > +\n> > >\n> > > Calling logicalrep_pa_worker_count() with lwlock for each worker seems\n> > > not efficient to me. I think we can get the number of workers once at\n> > > the top of this function and return if it's already lower than the\n> > > maximum pool size. Otherwise, we attempt to stop extra workers.\n> >\n> > How about we directly check the length of worker pool list here which\n> > seems simpler and don't need to lock ?\n> >\n>\n> I don't see any problem with that. 
Also, if such a check is safe then\n> can't we use the same in pa_free_worker() as well? BTW, shouldn't\n> pa_stop_idle_workers() try to free/stop workers unless the active\n> number reaches below max_parallel_apply_workers_per_subscription?\n>\n\nBTW, can we move pa_stop_idle_workers() functionality to a later patch\n(say into v61-0006*)? That way we can focus on it separately once the\nmain patch is committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 17 Dec 2022 17:45:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, December 17, 2022 8:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Dec 16, 2022 at 4:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Dec 16, 2022 at 2:47 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > > ---\r\n> > > > + active_workers = list_copy(ParallelApplyWorkerPool);\r\n> > > > +\r\n> > > > + foreach(lc, active_workers)\r\n> > > > + {\r\n> > > > + int slot_no;\r\n> > > > + uint16 generation;\r\n> > > > + ParallelApplyWorkerInfo *winfo =\r\n> > > > (ParallelApplyWorkerInfo *) lfirst(lc);\r\n> > > > +\r\n> > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> > > > + napplyworkers =\r\n> > > > logicalrep_pa_worker_count(MyLogicalRepWorker->subid);\r\n> > > > + LWLockRelease(LogicalRepWorkerLock);\r\n> > > > +\r\n> > > > + if (napplyworkers <=\r\n> > > > max_parallel_apply_workers_per_subscription / 2)\r\n> > > > + return;\r\n> > > > +\r\n> > > >\r\n> > > > Calling logicalrep_pa_worker_count() with lwlock for each worker\r\n> > > > seems not efficient to me. I think we can get the number of\r\n> > > > workers once at the top of this function and return if it's\r\n> > > > already lower than the maximum pool size. 
Otherwise, we attempt to stop\r\n> extra workers.\r\n> > >\r\n> > > How about we directly check the length of worker pool list here\r\n> > > which seems simpler and don't need to lock ?\r\n> > >\r\n> >\r\n> > I don't see any problem with that. Also, if such a check is safe then\r\n> > can't we use the same in pa_free_worker() as well? BTW, shouldn't\r\n> > pa_stop_idle_workers() try to free/stop workers unless the active\r\n> > number reaches below max_parallel_apply_workers_per_subscription?\r\n> >\r\n> \r\n> BTW, can we move pa_stop_idle_workers() functionality to a later patch (say into\r\n> v61-0006*)? That way we can focus on it separately once the main patch is\r\n> committed.\r\n\r\nAgreed. I have addressed all the comments and did some cosmetic changes.\r\nAttach the new version patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sat, 17 Dec 2022 14:04:14 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sat, Dec 17, 2022 at 7:34 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Agreed. I have addressed all the comments and did some cosmetic changes.\n> Attach the new version patch set.\n>\n\nFew comments:\n============\n1.\n+ if (fileset_state == FS_SERIALIZE_IN_PROGRESS)\n+ {\n+ pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n+ }\n+\n+ /*\n+ * We cannot read the file immediately after the leader has serialized all\n+ * changes to the file because there may still be messages in the memory\n+ * queue. 
We will apply all spooled messages the next time we call this\n+ * function, which should ensure that there are no messages left in the\n+ * memory queue.\n+ */\n+ else if (fileset_state == FS_SERIALIZE_DONE)\n+ {\n\nOnce we have waited in the FS_SERIALIZE_IN_PROGRESS, the file state\ncan be FS_SERIALIZE_DONE immediately after that. So, won't it be\nbetter to have a separate if block for FS_SERIALIZE_DONE state? If you\nagree to do so then we can probably remove the comment: \"* XXX It is\npossible that immediately after we have waited for a lock in ...\".\n\n2.\n+void\n+pa_decr_and_wait_stream_block(void)\n+{\n+ Assert(am_parallel_apply_worker());\n+\n+ if (pg_atomic_sub_fetch_u32(&MyParallelShared->pending_stream_count, 1) == 0)\n\nI think here the count can go negative when we are in serialize mode\nbecause we don't increase it for serialize mode. I can't see any\nproblem due to that but OTOH, this doesn't seem to be intended because\nin the future if we decide to implement the functionality of switching\nback to non-serialize mode, this could be a problem. Also, I guess we\ndon't even need to try locking/unlocking the stream lock in that case.\nOne idea to avoid this is to check if the pending count is zero then\nif file_set is not available raise an error (elog ERROR), otherwise,\nsimply return from here.\n\n3. In apply_handle_stream_stop(), we are setting backendstate as idle\nfor cases TRANS_LEADER_SEND_TO_PARALLEL and TRANS_PARALLEL_APPLY. For\nother cases, it is set by stream_stop_internal. I think it would be\nbetter to set the state explicitly for all cases to make the code look\nconsistent and remove it from stream_stop_internal(). The other reason\nto remove setting the state from stream_stop_internal() is that when\nthat function is invoked from other places like\napply_handle_stream_commit(), it seems to be setting the idle before\nactually we reach the idle state.\n\n4. 
Apart from the above, I have made a few changes in the comments,\nsee attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 19 Dec 2022 18:17:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, I have done some testing for this patch. This post describes my\ntests so far and the results observed.\n\nBackground - Testing multiple PA workers:\n---------------------------------------\n\nThe \"parallel apply\" feature allocates the PA workers (if it can) upon\nreceiving STREAM_START replication protocol msg. This means that if\nthere are replication messages for overlapping streaming transactions\nyou should see multiple PA workers processing them (assuming the PA\npool size is configured appropriately).\n\nBut AFAIK the only way to cause replication protocol messages to\narrive and be applied in a particular order is by manual testing (e.g\nuse 2x psql sessions and manually arrange for there to be overlapping\ntransactions for the published table). I have tried to make this kind\nof (regression) testing easier -- in order to test many overlapping\ncombinations in a repeatable and semi-automated way I have posted a\nsmall enhancement to the isolationtester spec grammar [1]. 
Using this,\nnow we can just press a button to test lots of different streaming\ntransaction combinations and then observe the parallel apply message\ndispatching in action...\n\nTest message combinations (from specs/pub-sub.spec):\n----------------------------------------------------\n\n# single tx\npermutation ps1_begin ps1_ins ps1_commit ps1_sel ps2_sel sub_sleep sub_sel\npermutation ps2_begin ps2_ins ps2_commit ps1_sel ps2_sel sub_sleep sub_sel\n\n# rollback\npermutation ps1_begin ps1_ins ps1_rollback ps1_sel sub_sleep sub_sel\n\n# overlapping tx rollback and commit\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps1_rollback\nps2_commit sub_sleep sub_sel\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps1_commit\nps2_rollback sub_sleep sub_sel\n\n# overlapping tx commits\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps2_commit ps1_commit\nsub_sleep sub_sel\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps1_commit ps2_commit\nsub_sleep sub_sel\n\npermutation ps1_begin ps2_begin ps1_ins ps2_ins ps2_commit ps1_commit\nsub_sleep sub_sel\npermutation ps1_begin ps2_begin ps1_ins ps2_ins ps1_commit ps2_commit\nsub_sleep sub_sel\n\npermutation ps1_begin ps2_begin ps2_ins ps1_ins ps2_commit ps1_commit\nsub_sleep sub_sel\npermutation ps1_begin ps2_begin ps2_ins ps1_ins ps1_commit ps2_commit\nsub_sleep sub_sel\n\nTest setup:\n-----------\n\n1. Setup publisher and subscriber servers\n\n1a. Publisher server is configured to use new GUC 'force_stream_mode =\ntrue' [2]. This means even single-row inserts cause replication\nSTREAM_START messages which will trigger the PA workers.\n\n1b. Subscriber server is configured to use new GUC\n'max_parallel_apply_workers_per_subscription'. Set this value to\nchange how many PA workers can be allocated.\n\n2. isolation/specs/pub-test.spec (defines the publisher sessions being tested)\n\n\nHow verified:\n-------------\n\n1. 
Running the isolationtester pub-sub.spec test gives the expected\ntable results (so data was replicated OK)\n- any new permutations can be added as required.\n- more overlapping sessions (e.g. 3 or 4...) can be added as required.\n\n2. Changing the publisher GUC 'force_stream_mode' to be true/false\n- we can see if PA workers being used or not being used -- (ps -eaf |\ngrep 'logical replication')\n\n3. Changing the subscriber GUC 'max_parallel_apply_workers_per_subscription'\n- set to high value or low value so we can see the PA worker (pool)\nbeing used or filling to capacity\n\n4. I have also patched some temporary logging into code for both \"LA\"\nand \"PA\" workers\n- now the subscriber logfile leaves a trail of evidence about which\nworker did what (for apply_dispatch and for locking calls)\n\nObserved Results:\n-----------------\n\n1. From the user's POV everything is normal - data gets replicated as\nexpected regardless of GUC settings (force_streaming /\nmax_parallel_apply_workers_per_subscription).\n\n[postgres@CentOS7-x64 isolation]$ make check-pub-sub\n...\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 61696 with PID 11822\n============== creating database \"isolation_regression\" ==============\nCREATE DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\n============== running regression test queries ==============\ntest pub-sub ... ok 33424 ms\n============== shutting down postmaster ==============\n============== removing temporary instance ==============\n\n=====================\n All 1 tests passed.\n=====================\n\n\n2. Confirmation multiple PA workers were used (force_streaming=true /\nmax_parallel_apply_workers_per_subscription=99)\n\n[postgres@CentOS7-x64 isolation]$ ps -eaf | grep 'logical replication'\npostgres 5298 5293 0 Dec19 ? 
00:00:00 postgres: logical\nreplication launcher\npostgres 5306 5301 0 Dec19 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 17301 5301 0 10:31 ? 00:00:00 postgres: logical\nreplication parallel apply worker for subscription 16387\npostgres 17524 5301 0 10:31 ? 00:00:00 postgres: logical\nreplication parallel apply worker for subscription 16387\npostgres 21134 5301 0 08:08 ? 00:00:01 postgres: logical\nreplication apply worker for subscription 16387\npostgres 22377 13260 0 10:34 pts/0 00:00:00 grep --color=auto\nlogical replication\n\n3. Confirmation no PA workers were used when not streaming\n(force_streaming=false /\nmax_parallel_apply_workers_per_subscription=99)\n\n[postgres@CentOS7-x64 isolation]$ ps -eaf | grep 'logical replication'\npostgres 26857 26846 0 10:37 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 26875 26864 0 10:37 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 26889 26864 0 10:37 ? 00:00:00 postgres: logical\nreplication apply worker for subscription 16387\npostgres 29901 13260 0 10:39 pts/0 00:00:00 grep --color=auto\nlogical replication\n\n4. Confirmation only one PA worker gets used when the pool is limited\n(force_streaming=true / max_parallel_apply_workers_per_subscription=1)\n\n4a. (processes)\n[postgres@CentOS7-x64 isolation]$ ps -eaf | grep 'logical replication'\npostgres 2484 13260 0 10:42 pts/0 00:00:00 grep --color=auto\nlogical replication\npostgres 32500 32495 0 10:40 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 32508 32503 0 10:40 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 32514 32503 0 10:41 ? 00:00:00 postgres: logical\nreplication apply worker for subscription 16387\n\n4b. 
(logs)\n2022-12-20 10:41:43.551 AEDT [32514] LOG: out of parallel apply workers\n2022-12-20 10:41:43.551 AEDT [32514] HINT: You might need to increase\nmax_parallel_apply_workers_per_subscription.\n2022-12-20 10:41:43.551 AEDT [32514] CONTEXT: processing remote data\nfor replication origin \"pg_16387\" during message type \"STREAM START\"\nin transaction 756\n\n5. Confirmation no PA workers get used when there is none available\n(force_streaming=true / max_parallel_apply_workers_per_subscription=0)\n\n5a. (processes)\n[postgres@CentOS7-x64 isolation]$ ps -eaf | grep 'logical replication'\npostgres 10026 10021 0 10:47 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 10034 10029 0 10:47 ? 00:00:00 postgres: logical\nreplication launcher\npostgres 10041 10029 0 10:47 ? 00:00:00 postgres: logical\nreplication apply worker for subscription 16387\npostgres 13068 13260 0 10:48 pts/0 00:00:00 grep --color=auto\nlogical replication\n\n5b. (logs)\n2022-12-20 10:47:50.216 AEDT [10041] LOG: out of parallel apply workers\n2022-12-20 10:47:50.216 AEDT [10041] HINT: You might need to increase\nmax_parallel_apply_workers_per_subscription.\n..\nAlso, there are no \"PA\" log messages present\n\n\nSummary\n-------\n\nIn summary, everything I have tested so far appeared to be working\nproperly. 
In other words, for overlapping streamed transactions of\ndifferent kinds, and regardless of whether zero/some/all of those\ntransactions are getting processed by a PA worker, the resulting\nreplicated data looked consistently OK.\n\n\nPSA some files\n- test_init.sh - sample test script for setup publisher/subscriber\nrequired by spec test.\n- spec/pub-sub.spec = spec combinations for causing overlapping\nstreaming transactions\n- pub-sub.out = output from successful isolationtester (make check-pub-sub) run\n- SUB.log = subscriber logs augmented with my \"LA\" and \"PA\" extra\nlogging for showing locking/dispatching.\n\n(I can also post my logging patch if anyone is interested to try using\nit to see the output like in SUB.log).\n\nNOTE - all testing described in this post above was using v58-0001\nonly. However, the point of implementing these as a .spec test was to\nbe able to repeat these same regression tests on newer versions with\nminimal manual steps required. Later I plan to fetch/apply the most\nrecent patch version and repeat these same tests.\n\n------\n[1] My isolationtester conninfo enhancement v2 -\nhttps://www.postgresql.org/message-id/CAHut%2BPv_1Mev0709uj_OjyNCzfBjENE3RD9%3Dd9RZYfcqUKfG%3DA%40mail.gmail.com\n[2] Shi-san's GUC 'force_streaming_mode' -\nhttps://www.postgresql.org/message-id/flat/OSZPR01MB63104E7449DBE41932DB19F1FD1B9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 20 Dec 2022 13:47:18 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 20, 2022 at 8:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Summary\n> -------\n>\n> In summary, everything I have tested so far appeared to be working\n> properly. 
In other words, for overlapping streamed transactions of\n> different kinds, and regardless of whether zero/some/all of those\n> transactions are getting processed by a PA worker, the resulting\n> replicated data looked consistently OK.\n>\n\nThanks for doing the detailed testing of this patch. I think the one\narea where we can focus more is the switch-to-serialization mode while\nsending changes to the parallel worker.\n\n>\n> NOTE - all testing described in this post above was using v58-0001\n> only. However, the point of implementing these as a .spec test was to\n> be able to repeat these same regression tests on newer versions with\n> minimal manual steps required. Later I plan to fetch/apply the most\n> recent patch version and repeat these same tests.\n>\n\nThat would be really helpful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Dec 2022 08:49:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 20, 2022 at 2:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 20, 2022 at 8:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Summary\n> > -------\n> >\n> > In summary, everything I have tested so far appeared to be working\n> > properly. In other words, for overlapping streamed transactions of\n> > different kinds, and regardless of whether zero/some/all of those\n> > transactions are getting processed by a PA worker, the resulting\n> > replicated data looked consistently OK.\n> >\n>\n> Thanks for doing the detailed testing of this patch. I think the one\n> area where we can focus more is the switch-to-serialization mode while\n> sending changes to the parallel worker.\n>\n> >\n> > NOTE - all testing described in this post above was using v58-0001\n> > only. 
However, the point of implementing these as a .spec test was to\n> > be able to repeat these same regression tests on newer versions with\n> > minimal manual steps required. Later I plan to fetch/apply the most\n> > recent patch version and repeat these same tests.\n> >\n>\n> That would be really helpful.\n>\n\nFYI, my pub-sub.spec tests gave the same result (i.e. pass) when\nre-run against the latest v62-0001 (parallel apply base patch) and\nv62-0004 (GUC 'force_stream_mode' patch).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 20 Dec 2022 17:22:32 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 19, 2022 at 6:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Dec 17, 2022 at 7:34 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Agreed. I have addressed all the comments and did some cosmetic changes.\n> > Attach the new version patch set.\n> >\n>\n> Few comments:\n> ============\n>\n\nFew more minor points:\n1.\n-static inline void\n+void\n changes_filename(char *path, Oid subid, TransactionId xid)\n {\n\nThis function seems to be used only in worker.c. So, what is the need\nto make it extern?\n\n2. I have made a few changes in the comments. See attached. 
This is\natop my yesterday's top-up patch.\n\nI think we should merge the 0001 and 0002 patches as they need to be\ncommitted together.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 20 Dec 2022 14:41:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "\r\nOn Monday, December 19, 2022 8:47 PMs Amit Kapila <amit.kapila16@gmail.com>:\r\n> \r\n> On Sat, Dec 17, 2022 at 7:34 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Agreed. I have addressed all the comments and did some cosmetic changes.\r\n> > Attach the new version patch set.\r\n> >\r\n> \r\n> Few comments:\r\n> ============\r\n> 1.\r\n> + if (fileset_state == FS_SERIALIZE_IN_PROGRESS) {\r\n> + pa_lock_stream(MyParallelShared->xid, AccessShareLock);\r\n> + pa_unlock_stream(MyParallelShared->xid, AccessShareLock); }\r\n> +\r\n> + /*\r\n> + * We cannot read the file immediately after the leader has serialized\r\n> + all\r\n> + * changes to the file because there may still be messages in the\r\n> + memory\r\n> + * queue. We will apply all spooled messages the next time we call this\r\n> + * function, which should ensure that there are no messages left in the\r\n> + * memory queue.\r\n> + */\r\n> + else if (fileset_state == FS_SERIALIZE_DONE) {\r\n> \r\n> Once we have waited in the FS_SERIALIZE_IN_PROGRESS, the file state can be\r\n> FS_SERIALIZE_DONE immediately after that. So, won't it be better to have a\r\n> separate if block for FS_SERIALIZE_DONE state? 
If you agree to do so then we\r\n> can probably remove the comment: \"* XXX It is possible that immediately after\r\n> we have waited for a lock in ...\".\r\n\r\nChanged and slightly adjusted the comments.\r\n\r\n> 2.\r\n> +void\r\n> +pa_decr_and_wait_stream_block(void)\r\n> +{\r\n> + Assert(am_parallel_apply_worker());\r\n> +\r\n> + if (pg_atomic_sub_fetch_u32(&MyParallelShared->pending_stream_count,\r\n> + 1) == 0)\r\n> \r\n> I think here the count can go negative when we are in serialize mode because\r\n> we don't increase it for serialize mode. I can't see any problem due to that but\r\n> OTOH, this doesn't seem to be intended because in the future if we decide to\r\n> implement the functionality of switching back to non-serialize mode, this could\r\n> be a problem. Also, I guess we don't even need to try locking/unlocking the\r\n> stream lock in that case.\r\n> One idea to avoid this is to check if the pending count is zero then if file_set is\r\n> not available raise an error (elog ERROR), otherwise, simply return from here.\r\n\r\nAdded the check.\r\n\r\n> \r\n> 3. In apply_handle_stream_stop(), we are setting backendstate as idle for cases\r\n> TRANS_LEADER_SEND_TO_PARALLEL and TRANS_PARALLEL_APPLY. For other\r\n> cases, it is set by stream_stop_internal. I think it would be better to set the state\r\n> explicitly for all cases to make the code look consistent and remove it from\r\n> stream_stop_internal(). The other reason to remove setting the state from\r\n> stream_stop_internal() is that when that function is invoked from other places\r\n> like apply_handle_stream_commit(), it seems to be setting the idle before\r\n> actually we reach the idle state.
Apart from the above, I have made a few changes in the comments, see\r\n> attached.\r\n\r\nThanks, I have merged the patch.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 20 Dec 2022 10:14:49 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, December 20, 2022 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Dec 19, 2022 at 6:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Sat, Dec 17, 2022 at 7:34 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Agreed. I have addressed all the comments and did some cosmetic changes.\r\n> > > Attach the new version patch set.\r\n> > >\r\n> >\r\n> > Few comments:\r\n> > ============\r\n> >\r\n> \r\n> Few more minor points:\r\n> 1.\r\n> -static inline void\r\n> +void\r\n> changes_filename(char *path, Oid subid, TransactionId xid) {\r\n> \r\n> This function seems to be used only in worker.c. So, what is the need to make it\r\n> extern?\r\n\r\nOh, I forgot to revert this change after removing the one caller outside of worker.c.\r\nChanged.\r\n\r\n> \r\n> 2. I have made a few changes in the comments. See attached. 
This is atop my\r\n> yesterday's top-up patch.\r\n\r\nThanks, I have checked and merged this.\r\n\r\n> I think we should merge the 0001 and 0002 patches as they need to be\r\n> committed together.\r\n\r\nMerged and ran the pgident for the patch set.\r\n\r\nAttach the new version patch set which addressed all comments so far.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 20 Dec 2022 10:16:23 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "FYI - applying v63-0001 using the latest master does not work.\n\ngit apply ../patches_misc/v63-0001-Perform-streaming-logical-transactions-by-parall.patch\nerror: patch failed: src/backend/replication/logical/meson.build:1\nerror: src/backend/replication/logical/meson.build: patch does not apply\n\nLooks like a recent commit [1] to add copyrights broke the patch\n\n------\n[1] https://github.com/postgres/postgres/commit/8284cf5f746f84303eda34d213e89c8439a83a42\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Dec 2022 12:07:02 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 20, 2022 at 5:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Dec 20, 2022 at 2:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 20, 2022 at 8:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Summary\n> > > -------\n> > >\n> > > In summary, everything I have tested so far appeared to be working\n> > > properly. 
In other words, for overlapping streamed transactions of\n> > > different kinds, and regardless of whether zero/some/all of those\n> > > transactions are getting processed by a PA worker, the resulting\n> > > replicated data looked consistently OK.\n> > >\n> >\n> > Thanks for doing the detailed testing of this patch. I think the one\n> > area where we can focus more is the switch-to-serialization mode while\n> > sending changes to the parallel worker.\n> >\n> > >\n> > > NOTE - all testing described in this post above was using v58-0001\n> > > only. However, the point of implementing these as a .spec test was to\n> > > be able to repeat these same regression tests on newer versions with\n> > > minimal manual steps required. Later I plan to fetch/apply the most\n> > > recent patch version and repeat these same tests.\n> > >\n> >\n> > That would be really helpful.\n> >\n>\n\nFYI, my pub-sub.spec tests gave the same result (i.e. pass) when\nre-run with the latest v63 (0001,0002,0003) applied.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Dec 2022 13:01:41 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 21, 2022 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> FYI - applying v63-0001 using the latest master does not work.\r\n> \r\n> git apply ../patches_misc/v63-0001-Perform-streaming-logical-transactions-by-\r\n> parall.patch\r\n> error: patch failed: src/backend/replication/logical/meson.build:1\r\n> error: src/backend/replication/logical/meson.build: patch does not apply\r\n> \r\n> Looks like a recent commit [1] to add copyrights broke the patch\r\n\r\nThanks for your reminder.\r\nRebased the patch set.\r\n\r\nAttach the new patch set which also includes some\r\ncosmetic comment changes.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 21 Dec 2022 
05:32:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 21, 2022 at 11:02 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Attach the new patch set which also includes some\n> cosmetic comment changes.\n>\n\nI noticed one problem with the recent change in the patch.\n\n+ * The fileset state should become FS_SERIALIZE_DONE once we have waited\n+ * for a lock in the FS_SERIALIZE_IN_PROGRESS state, so we get the state\n+ * again and recheck it later.\n+ */\n+ if (fileset_state == FS_SERIALIZE_IN_PROGRESS)\n+ {\n+ pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n+\n+ fileset_state = pa_get_fileset_state();\n+ Assert(fileset_state == FS_SERIALIZE_DONE);\n\nThis is not always true because say due to deadlock, this lock is\nreleased by the leader worker, in that case, the file state will be\nstill in progress. 
So, I think we need a change like the below:\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 45faa74596..8076786f0d 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -686,8 +686,8 @@ pa_spooled_messages(void)\n * the leader had serialized all changes which can lead to undetected\n * deadlock.\n *\n- * The fileset state must be FS_SERIALIZE_DONE once the leader\nworker has\n- * finished serializing the changes.\n+ * Note that the fileset state can be FS_SERIALIZE_DONE once the leader\n+ * worker has finished serializing the changes.\n */\n if (fileset_state == FS_SERIALIZE_IN_PROGRESS)\n {\n@@ -695,7 +695,6 @@ pa_spooled_messages(void)\n pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n\n fileset_state = pa_get_fileset_state();\n- Assert(fileset_state == FS_SERIALIZE_DONE);\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Dec 2022 17:29:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 21, 2022 at 2:32 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wed, Dec 21, 2022 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > FYI - applying v63-0001 using the latest master does not work.\n> >\n> > git apply ../patches_misc/v63-0001-Perform-streaming-logical-transactions-by-\n> > parall.patch\n> > error: patch failed: src/backend/replication/logical/meson.build:1\n> > error: src/backend/replication/logical/meson.build: patch does not apply\n> >\n> > Looks like a recent commit [1] to add copyrights broke the patch\n>\n> Thanks for your reminder.\n> Rebased the patch set.\n>\n> Attach the new patch set which also includes some\n> cosmetic comment changes.\n>\n\nThank you for updating the patch. 
Here are some comments on v64 patches:\n\nWhile testing the patch, I realized that if all streamed transactions\nare handled by parallel workers, there is no chance for the leader to\ncall maybe_reread_subscription() except for when waiting for the next\nmessage. Due to this, the leader didn't stop for a while even if the\nsubscription gets disabled. It's an extreme case since my test was\nthat pgbench runs 30 concurrent transactions and logical_decoding_mode\n= 'immediate', but we might want to make sure to call\nmaybe_reread_subscription() at least after committing/preparing a\ntransaction.\n\n---\n+ if (pg_atomic_read_u32(&MyParallelShared->pending_stream_count) == 0)\n+ {\n+ if (pa_has_spooled_message_pending())\n+ return;\n+\n+ elog(ERROR, \"invalid pending streaming block number\");\n+ }\n\nI think it's helpful if the error message shows the invalid block number.\n\n---\nOn Wed, Dec 7, 2022 at 10:13 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > ---\n> > If a value of max_parallel_apply_workers_per_subscription is not\n> > sufficient, we get the LOG \"out of parallel apply workers\" every time\n> > when the apply worker doesn't launch a worker. But do we really need\n> > this log? It seems not consistent with\n> > max_sync_workers_per_subscription behavior. I think we can check if\n> > the number of running parallel workers is less than\n> > max_parallel_apply_workers_per_subscription before calling\n> > logicalrep_worker_launch(). What do you think?\n>\n> (Sorry, I missed this comment in last email)\n>\n> I personally feel giving a hint might help user to realize that the\n> max_parallel_applyxxx is not enough for the current workload and then they can\n> adjust the parameter. 
Otherwise, user might have an easy way to check if more\n> workers are needed.\n>\n\nSorry, I missed this comment.\n\nI think the number of concurrent transactions on the publisher could\nbe several hundreds, and the number of streamed transactions among\nthem could be several tens. I agree setting\nmax_parallel_apply_workers_per_subscription to a value high enough is\nideal but I'm not sure we want to inform users immediately that the\nsetting value is not enough. I think that with the default value\n(i.e., 2), it will not be enough for many systems and the server logs\ncould be flood with the LOG \"out of parallel apply workers\". If we\nwant to give a hint to users, we can probably show the statistics on\npg_stat_subscription_stats view such as the number of streamed\ntransactions that are handled by the leader and parallel workers.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 15:08:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 22, 2022 at 11:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Thank you for updating the patch. Here are some comments on v64 patches:\n>\n> While testing the patch, I realized that if all streamed transactions\n> are handled by parallel workers, there is no chance for the leader to\n> call maybe_reread_subscription() except for when waiting for the next\n> message. Due to this, the leader didn't stop for a while even if the\n> subscription gets disabled. 
It's an extreme case since my test was\n> that pgbench runs 30 concurrent transactions and logical_decoding_mode\n> = 'immediate', but we might want to make sure to call\n> maybe_reread_subscription() at least after committing/preparing a\n> transaction.\n>\n\nWon't it be better to call it only if we handle the transaction by the\nparallel worker?\n\n> ---\n> + if (pg_atomic_read_u32(&MyParallelShared->pending_stream_count) == 0)\n> + {\n> + if (pa_has_spooled_message_pending())\n> + return;\n> +\n> + elog(ERROR, \"invalid pending streaming block number\");\n> + }\n>\n> I think it's helpful if the error message shows the invalid block number.\n>\n\n+1. Additionally, I suggest changing the message to \"invalid pending\nstreaming chunk\"?\n\n> ---\n> On Wed, Dec 7, 2022 at 10:13 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > ---\n> > > If a value of max_parallel_apply_workers_per_subscription is not\n> > > sufficient, we get the LOG \"out of parallel apply workers\" every time\n> > > when the apply worker doesn't launch a worker. But do we really need\n> > > this log? It seems not consistent with\n> > > max_sync_workers_per_subscription behavior. I think we can check if\n> > > the number of running parallel workers is less than\n> > > max_parallel_apply_workers_per_subscription before calling\n> > > logicalrep_worker_launch(). What do you think?\n> >\n> > (Sorry, I missed this comment in last email)\n> >\n> > I personally feel giving a hint might help user to realize that the\n> > max_parallel_applyxxx is not enough for the current workload and then they can\n> > adjust the parameter. 
Otherwise, user might have an easy way to check if more\n> > workers are needed.\n> >\n>\n> Sorry, I missed this comment.\n>\n> I think the number of concurrent transactions on the publisher could\n> be several hundreds, and the number of streamed transactions among\n> them could be several tens. I agree setting\n> max_parallel_apply_workers_per_subscription to a value high enough is\n> ideal but I'm not sure we want to inform users immediately that the\n> setting value is not enough. I think that with the default value\n> (i.e., 2), it will not be enough for many systems and the server logs\n> could be flood with the LOG \"out of parallel apply workers\".\n>\n\nIt seems currently we give a similar message when the logical\nreplication worker slots are finished \"out of logical replication\nworker slots\" or when we are not able to register background workers\n\"out of background worker slots\". Now, OTOH, when we exceed the limit\nof sync workers \"max_sync_workers_per_subscription\", we don't display\nany message. Personally, I think if any user has used the streaming\noption as \"parallel\" she wants all large transactions to be performed\nin parallel and if the system is not able to deal with it, displaying\na LOG message will be useful for users. 
This is because the\nperformance difference for large transactions between parallel and\nnon-parallel is big (30-40%) and it is better for users to know as\nsoon as possible instead of expecting them to run some monitoring\nquery to notice the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Dec 2022 15:34:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 21, 2022 at 11:02 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new patch set which also includes some\n> cosmetic comment changes.\n>\n\nFew minor comments:\n=================\n1.\n+ <literal>t</literal> = spill the changes of in-progress\n+ transactions to disk and apply at once after the transaction is\n+ committed on the publisher,\n\nCan we change this description to: \"spill the changes of in-progress\ntransactions to disk and apply at once after the transaction is\ncommitted on the publisher and received by the subscriber,\"\n\n2.\n table is in progress, there will be additional workers for the tables\n- being synchronized.\n+ being synchronized. Moreover, if the streaming transaction is applied in\n+ parallel, there will be additional workers.\n\nDo we need this change in the first patch? We skip parallel apply\nworkers from view for the first patch. Am I missing something?\n\n3.\nI think we would need a catversion bump for parallel apply feature\nbecause of below change:\n@@ -7913,11 +7913,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\ncount&gt;</replaceable>:<replaceable>&l\n\n <row>\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n- <structfield>substream</structfield> <type>bool</type>\n+ <structfield>substream</structfield> <type>char</type>\n </para>\n\nAm I missing something? 
If not, then I think we can note that in the\ncommit message to avoid forgetting it before commit.\n\n4. Kindly change the below comments:\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 97f4a3037c..02bb608188 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -9,11 +9,10 @@\n *\n * This file contains the code to launch, set up, and teardown a parallel apply\n * worker which receives the changes from the leader worker and\ninvokes routines\n- * to apply those on the subscriber database.\n- *\n- * This file contains routines that are intended to support setting up, using\n- * and tearing down a ParallelApplyWorkerInfo which is required so the leader\n- * worker and parallel apply workers can communicate with each other.\n+ * to apply those on the subscriber database. Additionally, this file contains\n+ * routines that are intended to support setting up, using, and tearing down a\n+ * ParallelApplyWorkerInfo which is required so the leader worker and parallel\n+ * apply workers can communicate with each other.\n *\n * The parallel apply workers are assigned (if available) as soon as xact's\n * first stream is received for subscriptions that have set their 'streaming'\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:34:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 22, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 11:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for updating the patch. 
Here are some comments on v64 patches:\n> >\n> > While testing the patch, I realized that if all streamed transactions\n> > are handled by parallel workers, there is no chance for the leader to\n> > call maybe_reread_subscription() except for when waiting for the next\n> > message. Due to this, the leader didn't stop for a while even if the\n> > subscription gets disabled. It's an extreme case since my test was\n> > that pgbench runs 30 concurrent transactions and logical_decoding_mode\n> > = 'immediate', but we might want to make sure to call\n> > maybe_reread_subscription() at least after committing/preparing a\n> > transaction.\n> >\n>\n> Won't it be better to call it only if we handle the transaction by the\n> parallel worker?\n\nAgreed. And we won't need to do that after handling stream_prepare as\nwe don't do that now.\n\n>\n> > ---\n> > + if (pg_atomic_read_u32(&MyParallelShared->pending_stream_count) == 0)\n> > + {\n> > + if (pa_has_spooled_message_pending())\n> > + return;\n> > +\n> > + elog(ERROR, \"invalid pending streaming block number\");\n> > + }\n> >\n> > I think it's helpful if the error message shows the invalid block number.\n> >\n>\n> +1. Additionally, I suggest changing the message to \"invalid pending\n> streaming chunk\"?\n>\n> > ---\n> > On Wed, Dec 7, 2022 at 10:13 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, December 7, 2022 7:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > ---\n> > > > If a value of max_parallel_apply_workers_per_subscription is not\n> > > > sufficient, we get the LOG \"out of parallel apply workers\" every time\n> > > > when the apply worker doesn't launch a worker. But do we really need\n> > > > this log? It seems not consistent with\n> > > > max_sync_workers_per_subscription behavior. 
I think we can check if\n> > > > the number of running parallel workers is less than\n> > > > max_parallel_apply_workers_per_subscription before calling\n> > > > logicalrep_worker_launch(). What do you think?\n> > >\n> > > (Sorry, I missed this comment in last email)\n> > >\n> > > I personally feel giving a hint might help user to realize that the\n> > > max_parallel_applyxxx is not enough for the current workload and then they can\n> > > adjust the parameter. Otherwise, user might have an easy way to check if more\n> > > workers are needed.\n> > >\n> >\n> > Sorry, I missed this comment.\n> >\n> > I think the number of concurrent transactions on the publisher could\n> > be several hundreds, and the number of streamed transactions among\n> > them could be several tens. I agree setting\n> > max_parallel_apply_workers_per_subscription to a value high enough is\n> > ideal but I'm not sure we want to inform users immediately that the\n> > setting value is not enough. I think that with the default value\n> > (i.e., 2), it will not be enough for many systems and the server logs\n> > could be flood with the LOG \"out of parallel apply workers\".\n> >\n>\n> It seems currently we give a similar message when the logical\n> replication worker slots are finished \"out of logical replication\n> worker slots\" or when we are not able to register background workers\n> \"out of background worker slots\". Now, OTOH, when we exceed the limit\n> of sync workers \"max_sync_workers_per_subscription\", we don't display\n> any message. Personally, I think if any user has used the streaming\n> option as \"parallel\" she wants all large transactions to be performed\n> in parallel and if the system is not able to deal with it, displaying\n> a LOG message will be useful for users. 
This is because the\n> performance difference for large transactions between parallel and\n> non-parallel is big (30-40%) and it is better for users to know as\n> soon as possible instead of expecting them to run some monitoring\n> query to notice the same.\n\nI see your point. But looking at other parallel features such as\nparallel queries, parallel vacuum and parallel index creation, we\ndon't give such messages even if the number of parallel workers\nactually launched is lower than the ideal. They also bring a big\nperformance benefit.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 21:47:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Dec 22, 2022 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 22, 2022 at 11:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Thank you for updating the patch. Here are some comments on v64 patches:\n> > >\n> > > While testing the patch, I realized that if all streamed transactions\n> > > are handled by parallel workers, there is no chance for the leader to\n> > > call maybe_reread_subscription() except for when waiting for the next\n> > > message. Due to this, the leader didn't stop for a while even if the\n> > > subscription gets disabled. It's an extreme case since my test was\n> > > that pgbench runs 30 concurrent transactions and logical_decoding_mode\n> > > = 'immediate', but we might want to make sure to call\n> > > maybe_reread_subscription() at least after committing/preparing a\n> > > transaction.\n> > >\n> >\n> > Won't it be better to call it only if we handle the transaction by the\n> > parallel worker?\n>\n> Agreed. 
And we won't need to do that after handling stream_prepare as\n> we don't do that now.\n>\n\nI think we do this for both prepare and non-prepare cases via\nbegin_replication_step(). Here, in both cases, as the changes are sent\nto the parallel apply worker, we missed calling it. So, I think it\nis better to do it in both cases.\n\n> >\n> > It seems currently we give a similar message when the logical\n> > replication worker slots are finished \"out of logical replication\n> > worker slots\" or when we are not able to register background workers\n> > \"out of background worker slots\". Now, OTOH, when we exceed the limit\n> > of sync workers \"max_sync_workers_per_subscription\", we don't display\n> > any message. Personally, I think if any user has used the streaming\n> > option as \"parallel\" she wants all large transactions to be performed\n> > in parallel and if the system is not able to deal with it, displaying\n> > a LOG message will be useful for users. This is because the\n> > performance difference for large transactions between parallel and\n> > non-parallel is big (30-40%) and it is better for users to know as\n> > soon as possible instead of expecting them to run some monitoring\n> > query to notice the same.\n>\n> I see your point. But looking at other parallel features such as\n> parallel queries, parallel vacuum and parallel index creation, we\n> don't give such messages even if the number of parallel workers\n> actually launched is lower than the ideal. They also bring a big\n> performance benefit.\n>\n\nFair enough. 
Let's remove this LOG message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Dec 2022 08:50:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 23, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 22, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 22, 2022 at 11:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Thank you for updating the patch. Here are some comments on v64 patches:\n> > > >\n> > > > While testing the patch, I realized that if all streamed transactions\n> > > > are handled by parallel workers, there is no chance for the leader to\n> > > > call maybe_reread_subscription() except for when waiting for the next\n> > > > message. Due to this, the leader didn't stop for a while even if the\n> > > > subscription gets disabled. It's an extreme case since my test was\n> > > > that pgbench runs 30 concurrent transactions and logical_decoding_mode\n> > > > = 'immediate', but we might want to make sure to call\n> > > > maybe_reread_subscription() at least after committing/preparing a\n> > > > transaction.\n> > > >\n> > >\n> > > Won't it be better to call it only if we handle the transaction by the\n> > > parallel worker?\n> >\n> > Agreed. And we won't need to do that after handling stream_prepare as\n> > we don't do that now.\n> >\n>\n> I think we do this for both prepare and non-prepare cases via\n> begin_replication_step(). Here, in both cases, as the changes are sent\n> to the parallel apply worker, we missed in both cases. So, I think it\n> is better to do in both cases.\n\nAgreed. 
I missed that we call maybe_reread_subscription() even in the\nprepare case.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 23 Dec 2022 12:41:48 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, December 22, 2022 8:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Dec 21, 2022 at 11:02 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new patch set which also includes some cosmetic comment\r\n> > changes.\r\n> >\r\n> \r\n> Few minor comments:\r\n> =================\r\n> 1.\r\n> + <literal>t</literal> = spill the changes of in-progress\r\n> transactions to+ disk and apply at once after the transaction is\r\n> committed on the+ publisher,\r\n> \r\n> Can we change this description to: \"spill the changes of in-progress transactions\r\n> to disk and apply at once after the transaction is committed on the publisher and\r\n> received by the subscriber,\"\r\n\r\nChanged.\r\n\r\n> 2.\r\n> table is in progress, there will be additional workers for the tables\r\n> - being synchronized.\r\n> + being synchronized. Moreover, if the streaming transaction is applied in\r\n> + parallel, there will be additional workers.\r\n> \r\n> Do we need this change in the first patch? We skip parallel apply workers from\r\n> view for the first patch. 
Am, I missing something?\r\n\r\nNo, I moved this to 0007 which include parallel apply workers in the view.\r\n\r\n> 3.\r\n> I think we would need a catversion bump for parallel apply feature because of\r\n> below change:\r\n> @@ -7913,11 +7913,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\r\n> count&gt;</replaceable>:<replaceable>&l\r\n> \r\n> <row>\r\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> - <structfield>substream</structfield> <type>bool</type>\r\n> + <structfield>substream</structfield> <type>char</type>\r\n> </para>\r\n> \r\n> Am, I missing something? If not, then I think we can note that in the commit\r\n> message to avoid forgetting it before commit.\r\n\r\nAdded.\r\n\r\n> \r\n> 4. Kindly change the below comments:\r\n> diff --git a/src/backend/replication/logical/applyparallelworker.c\r\n> b/src/backend/replication/logical/applyparallelworker.c\r\n> index 97f4a3037c..02bb608188 100644\r\n> --- a/src/backend/replication/logical/applyparallelworker.c\r\n> +++ b/src/backend/replication/logical/applyparallelworker.c\r\n> @@ -9,11 +9,10 @@\r\n> *\r\n> * This file contains the code to launch, set up, and teardown a parallel apply\r\n> * worker which receives the changes from the leader worker and invokes\r\n> routines\r\n> - * to apply those on the subscriber database.\r\n> - *\r\n> - * This file contains routines that are intended to support setting up, using\r\n> - * and tearing down a ParallelApplyWorkerInfo which is required so the leader\r\n> - * worker and parallel apply workers can communicate with each other.\r\n> + * to apply those on the subscriber database. 
Additionally, this file\r\n> + contains\r\n> + * routines that are intended to support setting up, using, and tearing\r\n> + down a\r\n> + * ParallelApplyWorkerInfo which is required so the leader worker and\r\n> + parallel\r\n> + * apply workers can communicate with each other.\r\n> *\r\n> * The parallel apply workers are assigned (if available) as soon as xact's\r\n> * first stream is received for subscriptions that have set their 'streaming'\r\n\r\nMerged.\r\n\r\nBesides, I also did the following changes:\r\n1. Added maybe_reread_subscription_info in leader before assigning the\r\n transaction to parallel apply worker (Sawada-san's comments[1])\r\n2. Removed the \"out of parallel apply workers\" LOG ( Sawada-san's comments[1])\r\n3. Improved a elog message (Sawada-san's comments[1]).\r\n4. Moved the testcases from 032_xx into existing 015_stream.pl which can save\r\nthe initialization time. Since we introduced quite a few testcases in this\r\npatch set, so I did this to try to reduce the testing time that increased after\r\napplying these patches.\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoDWd2pXau%2BpkYWOi87VGYrDD%3DOxakEDgOyUS%2BqV9XuAGA%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 23 Dec 2022 05:52:00 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 23, 2022 1:52 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Thursday, December 22, 2022 8:05 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Dec 21, 2022 at 11:02 AM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Attach the new patch set which also includes some cosmetic comment\r\n> > > changes.\r\n> > >\r\n> >\r\n> > Few minor comments:\r\n> > =================\r\n> > 1.\r\n> > + <literal>t</literal> = 
spill the changes of in-progress\r\n> > transactions to+ disk and apply at once after the transaction is\r\n> > committed on the+ publisher,\r\n> >\r\n> > Can we change this description to: \"spill the changes of in-progress\r\n> > transactions to disk and apply at once after the transaction is\r\n> > committed on the publisher and received by the subscriber,\"\r\n> \r\n> Changed.\r\n> \r\n> > 2.\r\n> > table is in progress, there will be additional workers for the tables\r\n> > - being synchronized.\r\n> > + being synchronized. Moreover, if the streaming transaction is applied in\r\n> > + parallel, there will be additional workers.\r\n> >\r\n> > Do we need this change in the first patch? We skip parallel apply\r\n> > workers from view for the first patch. Am, I missing something?\r\n> \r\n> No, I moved this to 0007 which include parallel apply workers in the view.\r\n> \r\n> > 3.\r\n> > I think we would need a catversion bump for parallel apply feature\r\n> > because of below change:\r\n> > @@ -7913,11 +7913,16 @@ SCRAM-SHA-256$<replaceable>&lt;iteration\r\n> > count&gt;</replaceable>:<replaceable>&l\r\n> >\r\n> > <row>\r\n> > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> > - <structfield>substream</structfield> <type>bool</type>\r\n> > + <structfield>substream</structfield> <type>char</type>\r\n> > </para>\r\n> >\r\n> > Am, I missing something? If not, then I think we can note that in the\r\n> > commit message to avoid forgetting it before commit.\r\n> \r\n> Added.\r\n> \r\n> >\r\n> > 4. 
Kindly change the below comments:\r\n> > diff --git a/src/backend/replication/logical/applyparallelworker.c\r\n> > b/src/backend/replication/logical/applyparallelworker.c\r\n> > index 97f4a3037c..02bb608188 100644\r\n> > --- a/src/backend/replication/logical/applyparallelworker.c\r\n> > +++ b/src/backend/replication/logical/applyparallelworker.c\r\n> > @@ -9,11 +9,10 @@\r\n> > *\r\n> > * This file contains the code to launch, set up, and teardown a parallel apply\r\n> > * worker which receives the changes from the leader worker and\r\n> > invokes routines\r\n> > - * to apply those on the subscriber database.\r\n> > - *\r\n> > - * This file contains routines that are intended to support setting\r\n> > up, using\r\n> > - * and tearing down a ParallelApplyWorkerInfo which is required so\r\n> > the leader\r\n> > - * worker and parallel apply workers can communicate with each other.\r\n> > + * to apply those on the subscriber database. Additionally, this file\r\n> > + contains\r\n> > + * routines that are intended to support setting up, using, and\r\n> > + tearing down a\r\n> > + * ParallelApplyWorkerInfo which is required so the leader worker and\r\n> > + parallel\r\n> > + * apply workers can communicate with each other.\r\n> > *\r\n> > * The parallel apply workers are assigned (if available) as soon as xact's\r\n> > * first stream is received for subscriptions that have set their 'streaming'\r\n> \r\n> Merged.\r\n> \r\n> Besides, I also did the following changes:\r\n> 1. Added maybe_reread_subscription_info in leader before assigning the\r\n> transaction to parallel apply worker (Sawada-san's comments[1]) 2. Removed\r\n> the \"out of parallel apply workers\" LOG ( Sawada-san's comments[1]) 3.\r\n> Improved a elog message (Sawada-san's comments[1]).\r\n> 4. Moved the testcases from 032_xx into existing 015_stream.pl which can save\r\n> the initialization time. 
Since we introduced quite a few testcases in this patch set,\r\n> so I did this to try to reduce the testing time that increased after applying these\r\n> patches.\r\n\r\nI noticed a CFbot failure in one of the new testcases in 015_stream.pl which\r\ncomes from old 032_xx.pl. It's because I slightly adjusted the change size in a\r\ntransaction in the last version which caused the transaction's size not to exceed the\r\ndecoding work mem, so the transaction is not being applied as expected as a\r\nstreaming transaction (it is applied as a non-streaming transaction) which\r\ncaused the failure. Attach the new version patch which fixed this miss.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 23 Dec 2022 09:20:01 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, December 23, 2022 5:20 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> I noticed a CFbot failure in one of the new testcases in 015_stream.pl which\r\n> comes from old 032_xx.pl. It's because I slightly adjusted the change size in a\r\n> transaction in the last version which caused the transaction's size not to exceed the\r\n> decoding work mem, so the transaction is not being applied as expected as a\r\n> streaming transaction (it is applied as a non-streaming transaction) which caused\r\n> the failure. 
Attach the new version patch which fixed this miss.\r\n> \r\n\r\nSince the GUC used to force stream changes has been committed, I removed that\r\npatch from the patch set here and rebased the testcases based on that commit.\r\nHere is the rebased patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 26 Dec 2022 04:22:41 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 9:52 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, December 23, 2022 5:20 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>\n> Since the GUC used to force stream changes has been committed, I removed that\n> patch from the patch set here and rebased the testcases based on that commit.\n> Here is the rebased patch set.\n>\n\nFew comments on 0002 and 0001 patches\n=================================\n1.\n+ if ($is_parallel)\n+ {\n+ $node_subscriber->append_conf('postgresql.conf',\n+ \"log_min_messages = debug1\");\n+ $node_subscriber->reload;\n+ }\n+\n+ # Check the subscriber log from now on.\n+ $offset = -s $node_subscriber->logfile;\n+\n+ $in .= q{\n+ BEGIN;\n+ INSERT INTO test_tab SELECT i, md5(i::text) FROM\ngenerate_series(3, 5000) s(i);\n\nHow can we guarantee that reload would have taken place before this\nnext test? I see that 020_archive_status.pl is executing a query to\nensure the reload has been taken into consideration. Can we do the\nsame?\n\n2. It is not very clear whether converting 017_stream_ddl and\n019_stream_subxact_ddl_abort adds much value. They seem to be mostly\ntesting DDL/DML interaction of publisher side. We can probably check\nthe code coverage by removing the parallel version for these two files\nand remove them unless it covers some extra code. 
If we decide to\nremove parallel version for these two files then we can probably add a\ncomment atop these files indicating why we don't have a version that\nparallel option for these tests.\n\n3.\n+# Drop the unique index on the subscriber, now it works.\n+$node_subscriber->safe_psql('postgres', \"DROP INDEX idx_tab\");\n+\n+# Wait for this streaming transaction to be applied in the apply worker.\n $node_publisher->wait_for_catchup($appname);\n\n $result =\n- $node_subscriber->safe_psql('postgres',\n- \"SELECT count(*), count(c), count(d = 999) FROM test_tab\");\n-is($result, qq(3334|3334|3334), 'check extra columns contain local defaults');\n+ $node_subscriber->safe_psql('postgres', \"SELECT count(*) FROM test_tab_2\");\n+is($result, qq(5001), 'data replicated to subscriber after dropping index');\n\n-# Test the streaming in binary mode\n+# Clean up test data from the environment.\n+$node_publisher->safe_psql('postgres', \"TRUNCATE TABLE test_tab_2\");\n+$node_publisher->wait_for_catchup($appname);\n $node_subscriber->safe_psql('postgres',\n- \"ALTER SUBSCRIPTION tap_sub SET (binary = on)\");\n+ \"CREATE UNIQUE INDEX idx_tab on test_tab_2(a)\");\n\nWhat is the need to first Drop the index and then recreate it after a few lines?\n\n4. Attached, find some comment improvements atop v67-0002* patch.\nSimilar comments need to be changed in other test files.\n\n5. 
Attached, find some comment improvements atop v67-0001* patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 26 Dec 2022 17:21:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 1:22 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, December 23, 2022 5:20 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> >\n> > I noticed a CFbot failure in one of the new testcases in 015_stream.pl which\n> > comes from old 032_xx.pl. It's because I slightly adjusted the change size in a\n> > transaction in last version which cause the transaction's size not to exceed the\n> > decoding work mem, so the transaction is not being applied as expected as\n> > streaming transactions(it is applied as a non-stremaing transaction) which cause\n> > the failure. Attach the new version patch which fixed this miss.\n> >\n>\n> Since the GUC used to force stream changes has been committed, I removed that\n> patch from the patch set here and rebased the testcases based on that commit.\n> Here is the rebased patch set.\n>\n\nThank you for updating the patches. Here are some comments for 0001\nand 0002 patches:\n\n\nI think it'd be better to write logs when the leader enters the\nserialization mode. It would be helpful for investigating issues.\n\n---\n+ if (!pa_can_start(xid))\n+ return;\n+\n+ /* First time through, initialize parallel apply worker state\nhashtable. 
*/\n+ if (!ParallelApplyTxnHash)\n+ {\n+ HASHCTL ctl;\n+\n+ MemSet(&ctl, 0, sizeof(ctl));\n+ ctl.keysize = sizeof(TransactionId);\n+ ctl.entrysize = sizeof(ParallelApplyWorkerEntry);\n+ ctl.hcxt = ApplyContext;\n+\n+ ParallelApplyTxnHash = hash_create(\"logical replication parallel apply workers hash\",\n+ 16, &ctl,\n+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\n+ }\n+\n+ /*\n+ * It's necessary to reread the subscription information before assigning\n+ * the transaction to a parallel apply worker. Otherwise, the leader may\n+ * not be able to reread the subscription information if streaming\n+ * transactions keep coming and are handled by parallel apply workers.\n+ */\n+ maybe_reread_subscription();\n\npa_can_start() checks if the skiplsn is an invalid xid or not, and\nthen maybe_reread_subscription() could update the skiplsn to a valid\nvalue. As the comments in pa_can_start() say, it won't work. I think\nwe should call maybe_reread_subscription() in\napply_handle_stream_start() before calling pa_allocate_worker().\n\n---\n+static inline bool\n+am_leader_apply_worker(void)\n+{\n+ return (!OidIsValid(MyLogicalRepWorker->relid) &&\n+ !isParallelApplyWorker(MyLogicalRepWorker));\n+}\n\nHow about using !am_tablesync_worker() instead of\n!OidIsValid(MyLogicalRepWorker->relid) for better readability?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 26 Dec 2022 22:02:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 6:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> ---\n> + if (!pa_can_start(xid))\n> + return;\n> +\n> + /* First time through, initialize parallel apply worker state\n> hashtable. 
*/\n> + if (!ParallelApplyTxnHash)\n> + {\n> + HASHCTL ctl;\n> +\n> + MemSet(&ctl, 0, sizeof(ctl));\n> + ctl.keysize = sizeof(TransactionId);\n> + ctl.entrysize = sizeof(ParallelApplyWorkerEntry);\n> + ctl.hcxt = ApplyContext;\n> +\n> + ParallelApplyTxnHash = hash_create(\"logical\n> replication parallel apply workershash\",\n> +\n> 16, &ctl,\n> +\n> HASH_ELEM |HASH_BLOBS | HASH_CONTEXT);\n> + }\n> +\n> + /*\n> + * It's necessary to reread the subscription information\n> before assigning\n> + * the transaction to a parallel apply worker. Otherwise, the\n> leader may\n> + * not be able to reread the subscription information if streaming\n> + * transactions keep coming and are handled by parallel apply workers.\n> + */\n> + maybe_reread_subscription();\n>\n> pa_can_start() checks if the skiplsn is an invalid xid or not, and\n> then maybe_reread_subscription() could update the skiplsn to a valid\n> value. As the comments in pa_can_start() says, it won't work. I think\n> we should call maybe_reread_subscription() in\n> apply_handle_stream_start() before calling pa_allocate_worker().\n>\n\nBut I think a similar thing can happen when we start the worker and\nthen before the transaction ends, we do maybe_reread_subscription(). 
I\nthink we should try to call maybe_reread_subscription() when we are\nreasonably sure that we are going to enter parallel mode, otherwise,\nanyway, it will be later called by the leader worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Dec 2022 18:59:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n\nIn the commit message, there is a statement like this\n\n\"However, if the leader apply worker times out while attempting to\nsend a message to the\nparallel apply worker, it will switch to \"partial serialize\" mode - in this\nmode the leader serializes all remaining changes to a file and notifies the\nparallel apply workers to read and apply them at the end of the transaction.\"\n\nI think it is a good idea to serialize the change to the file in this\ncase to avoid deadlocks, but why does the parallel worker need to wait\ntill the transaction commits to reading the file? I mean we can\nswitch the serialize state and make a parallel worker pull changes\nfrom the file and if the parallel worker has caught up with the\nchanges then it can again change the state to \"share memory\" and now\nthe apply worker can again start sending through shared memory.\n\nI think generally streaming transactions are large and it is possible\nthat the shared memory queue gets full because of a lot of changes for\na particular transaction but later when the load switches to the other\ntransactions then it would be quite common for the worker to catch up\nwith the changes then it better to again take advantage of using\nmemory. 
Otherwise, in this case, we are just wasting resources\n(worker/shared memory queue) but still writing in the file.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Dec 2022 19:35:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 7:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> In the commit message, there is a statement like this\n>\n> \"However, if the leader apply worker times out while attempting to\n> send a message to the\n> parallel apply worker, it will switch to \"partial serialize\" mode - in this\n> mode the leader serializes all remaining changes to a file and notifies the\n> parallel apply workers to read and apply them at the end of the transaction.\"\n>\n> I think it is a good idea to serialize the change to the file in this\n> case to avoid deadlocks, but why does the parallel worker need to wait\n> till the transaction commits to reading the file? I mean we can\n> switch the serialize state and make a parallel worker pull changes\n> from the file and if the parallel worker has caught up with the\n> changes then it can again change the state to \"share memory\" and now\n> the apply worker can again start sending through shared memory.\n>\n> I think generally streaming transactions are large and it is possible\n> that the shared memory queue gets full because of a lot of changes for\n> a particular transaction but later when the load switches to the other\n> transactions then it would be quite common for the worker to catch up\n> with the changes then it better to again take advantage of using\n> memory. 
Otherwise, in this case, we are just wasting resources\n> (worker/shared memory queue) but still writing in the file.\n>\n\nNote that there is a certain threshold timeout for which we wait\nbefore switching to serialize mode and normally it happens only when\nPA starts waiting on some lock acquired by the backend. Now, apart\nfrom that even if we decide to switch modes, the current BufFile\nmechanism doesn't have a good way for that. It doesn't allow two\nprocesses to open the same buffile at the same time which means we\nneed to maintain multiple files to achieve the mode where we can\nswitch back from serialize mode. We cannot let LA wait for PA to close\nthe file as that could introduce another kind of deadlock. For\ndetails, see the discussion in the email [1]. The other problem is\nthat we have no way to deal with partially sent data via a shared\nmemory queue. Say, if we timeout while sending the data, we have to\nresend the same message until it succeeds which will be tricky because\nwe can't keep retrying as that can lead to deadlock. I think if we try\nto build this new mode, it will be a lot of effort without equivalent\nreturns. In common cases, we didn't see that we time out and switch to\nserialize mode. It is mostly in cases where PA starts to wait for the\nlock acquired by other backend or the machine is slow enough to deal\nwith the number of parallel apply workers. 
So, it doesn't seem worth\nadding more complexity to the first version but we don't rule out the\npossibility of the same in the future if we really see such cases are\ncommon.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoDScLvLT8JBfu5WaGCPQs_qhxsybMT%2BsMXJ%3DQrDMTyr9w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Dec 2022 09:15:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 19:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Few comments on 0002 and 0001 patches\r\n> =================================\r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> + if ($is_parallel)\r\n> + {\r\n> + $node_subscriber->append_conf('postgresql.conf',\r\n> + \"log_min_messages = debug1\");\r\n> + $node_subscriber->reload;\r\n> + }\r\n> +\r\n> + # Check the subscriber log from now on.\r\n> + $offset = -s $node_subscriber->logfile;\r\n> +\r\n> + $in .= q{\r\n> + BEGIN;\r\n> + INSERT INTO test_tab SELECT i, md5(i::text) FROM\r\n> generate_series(3, 5000) s(i);\r\n> \r\n> How can we guarantee that reload would have taken place before this\r\n> next test? I see that 020_archive_status.pl is executing a query to\r\n> ensure the reload has been taken into consideration. Can we do the\r\n> same?\r\n\r\nAgree. Improved as suggested.\r\n\r\n> 2. It is not very clear whether converting 017_stream_ddl and\r\n> 019_stream_subxact_ddl_abort adds much value. They seem to be mostly\r\n> testing DDL/DML interaction of publisher side. We can probably check\r\n> the code coverage by removing the parallel version for these two files\r\n> and remove them unless it covers some extra code. 
If we decide to\r\n> remove parallel version for these two files then we can probably add a\r\n> comment atop these files indicating why we don't have a version that\r\n> parallel option for these tests.\r\n\r\nI have checked this and removed the parallel version for these two files.\r\nAlso added some comments atop these two test files to explain this.\r\n\r\n> 3.\r\n> +# Drop the unique index on the subscriber, now it works.\r\n> +$node_subscriber->safe_psql('postgres', \"DROP INDEX idx_tab\");\r\n> +\r\n> +# Wait for this streaming transaction to be applied in the apply worker.\r\n> $node_publisher->wait_for_catchup($appname);\r\n> \r\n> $result =\r\n> - $node_subscriber->safe_psql('postgres',\r\n> - \"SELECT count(*), count(c), count(d = 999) FROM test_tab\");\r\n> -is($result, qq(3334|3334|3334), 'check extra columns contain local defaults');\r\n> + $node_subscriber->safe_psql('postgres', \"SELECT count(*) FROM\r\n> test_tab_2\");\r\n> +is($result, qq(5001), 'data replicated to subscriber after dropping index');\r\n> \r\n> -# Test the streaming in binary mode\r\n> +# Clean up test data from the environment.\r\n> +$node_publisher->safe_psql('postgres', \"TRUNCATE TABLE test_tab_2\");\r\n> +$node_publisher->wait_for_catchup($appname);\r\n> $node_subscriber->safe_psql('postgres',\r\n> - \"ALTER SUBSCRIPTION tap_sub SET (binary = on)\");\r\n> + \"CREATE UNIQUE INDEX idx_tab on test_tab_2(a)\");\r\n> \r\n> What is the need to first Drop the index and then recreate it after a few lines?\r\n\r\nSince we want the two transactions to complete normally without conflicts due\r\nto the unique index, we temporarily drop the index.\r\nI added some new comments to explain this.\r\n\r\n> 4. Attached, find some comment improvements atop v67-0002* patch.\r\n> Similar comments need to be changed in other test files.\r\n\r\nThanks, I have checked and merged them. And also changed similar comments in\r\nother test files.\r\n\r\n> 5. 
Attached, find some comment improvements atop v67-0001* patch.\r\n\r\nThanks, I have checked and merged them.\r\n\r\nAttach the new version patch which addressed all above comments and part of\r\ncomments from [1] except one comment that is being discussed.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAD21AoDvT%2BTv3auBBShk19EkKLj6ByQtnAzfMjh49BhyT7f4Nw%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 27 Dec 2022 04:54:02 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 21:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Thank you for updating the patches. Here are some comments for 0001\r\n> and 0002 patches:\r\n\r\nThanks for your comments.\r\n\r\n> I think it'd be better to write logs when the leader enters the\r\n> serialization mode. It would be helpful for investigating issues.\r\n\r\nAgree. Added the log about this in the function pa_send_data().\r\n\r\n> ---\r\n> +static inline bool\r\n> +am_leader_apply_worker(void)\r\n> +{\r\n> + return (!OidIsValid(MyLogicalRepWorker->relid) &&\r\n> + !isParallelApplyWorker(MyLogicalRepWorker));\r\n> +}\r\n> \r\n> How about using !am_tablesync_worker() instead of\r\n> !OidIsValid(MyLogicalRepWorker->relid) for better readability?\r\n\r\nAgree. 
Improved this as suggested.\r\n\r\nThe new patch set was attached in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275B61076717E4CE9E079D19EED9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Tue, 27 Dec 2022 04:57:00 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 27, 2022 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 7:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > In the commit message, there is a statement like this\n> >\n> > \"However, if the leader apply worker times out while attempting to\n> > send a message to the\n> > parallel apply worker, it will switch to \"partial serialize\" mode - in this\n> > mode the leader serializes all remaining changes to a file and notifies the\n> > parallel apply workers to read and apply them at the end of the transaction.\"\n> >\n> > I think it is a good idea to serialize the change to the file in this\n> > case to avoid deadlocks, but why does the parallel worker need to wait\n> > till the transaction commits to reading the file? I mean we can\n> > switch the serialize state and make a parallel worker pull changes\n> > from the file and if the parallel worker has caught up with the\n> > changes then it can again change the state to \"share memory\" and now\n> > the apply worker can again start sending through shared memory.\n> >\n> > I think generally streaming transactions are large and it is possible\n> > that the shared memory queue gets full because of a lot of changes for\n> > a particular transaction but later when the load switches to the other\n> > transactions then it would be quite common for the worker to catch up\n> > with the changes then it better to again take advantage of using\n> > memory. 
Otherwise, in this case, we are just wasting resources\n> > (worker/shared memory queue) but still writing in the file.\n> >\n>\n> Note that there is a certain threshold timeout for which we wait\n> before switching to serialize mode and normally it happens only when\n> PA starts waiting on some lock acquired by the backend. Now, apart\n> from that even if we decide to switch modes, the current BufFile\n> mechanism doesn't have a good way for that. It doesn't allow two\n> processes to open the same buffile at the same time which means we\n> need to maintain multiple files to achieve the mode where we can\n> switch back from serialize mode. We cannot let LA wait for PA to close\n> the file as that could introduce another kind of deadlock. For\n> details, see the discussion in the email [1]. The other problem is\n> that we have no way to deal with partially sent data via a shared\n> memory queue. Say, if we timeout while sending the data, we have to\n> resend the same message until it succeeds which will be tricky because\n> we can't keep retrying as that can lead to deadlock. I think if we try\n> to build this new mode, it will be a lot of effort without equivalent\n> returns. In common cases, we didn't see that we time out and switch to\n> serialize mode. It is mostly in cases where PA starts to wait for the\n> lock acquired by other backend or the machine is slow enough to deal\n> with the number of parallel apply workers. So, it doesn't seem worth\n> adding more complexity to the first version but we don't rule out the\n> possibility of the same in the future if we really see such cases are\n> common.\n>\n> [1] - https://www.postgresql.org/message-id/CAD21AoDScLvLT8JBfu5WaGCPQs_qhxsybMT%2BsMXJ%3DQrDMTyr9w%40mail.gmail.com\n\nOkay, I see. And once we change to serialize mode we can't release\nthe worker as well because we have already applied partial changes\nunder some transaction from a PA so we can not apply remaining from\nthe LA. 
I understand it might introduce a lot of complex design to\nchange it back to parallel apply mode but my only worry is that in\nsuch cases we will be holding on to the parallel worker just to wait\ntill commit to reading from the spool file. But as you said it should\nnot be very common case so maybe this is fine.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Dec 2022 10:36:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 27, 2022 at 10:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 27, 2022 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 26, 2022 at 7:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > In the commit message, there is a statement like this\n> > >\n> > > \"However, if the leader apply worker times out while attempting to\n> > > send a message to the\n> > > parallel apply worker, it will switch to \"partial serialize\" mode - in this\n> > > mode the leader serializes all remaining changes to a file and notifies the\n> > > parallel apply workers to read and apply them at the end of the transaction.\"\n> > >\n> > > I think it is a good idea to serialize the change to the file in this\n> > > case to avoid deadlocks, but why does the parallel worker need to wait\n> > > till the transaction commits to reading the file? 
I mean we can\n> > > switch the serialize state and make a parallel worker pull changes\n> > > from the file and if the parallel worker has caught up with the\n> > > changes then it can again change the state to \"share memory\" and now\n> > > the apply worker can again start sending through shared memory.\n> > >\n> > > I think generally streaming transactions are large and it is possible\n> > > that the shared memory queue gets full because of a lot of changes for\n> > > a particular transaction but later when the load switches to the other\n> > > transactions then it would be quite common for the worker to catch up\n> > > with the changes then it better to again take advantage of using\n> > > memory. Otherwise, in this case, we are just wasting resources\n> > > (worker/shared memory queue) but still writing in the file.\n> > >\n> >\n> > Note that there is a certain threshold timeout for which we wait\n> > before switching to serialize mode and normally it happens only when\n> > PA starts waiting on some lock acquired by the backend. Now, apart\n> > from that even if we decide to switch modes, the current BufFile\n> > mechanism doesn't have a good way for that. It doesn't allow two\n> > processes to open the same buffile at the same time which means we\n> > need to maintain multiple files to achieve the mode where we can\n> > switch back from serialize mode. We cannot let LA wait for PA to close\n> > the file as that could introduce another kind of deadlock. For\n> > details, see the discussion in the email [1]. The other problem is\n> > that we have no way to deal with partially sent data via a shared\n> > memory queue. Say, if we timeout while sending the data, we have to\n> > resend the same message until it succeeds which will be tricky because\n> > we can't keep retrying as that can lead to deadlock. I think if we try\n> > to build this new mode, it will be a lot of effort without equivalent\n> > returns. 
In common cases, we didn't see that we time out and switch to\n> > serialize mode. It is mostly in cases where PA starts to wait for the\n> > lock acquired by other backend or the machine is slow enough to deal\n> > with the number of parallel apply workers. So, it doesn't seem worth\n> > adding more complexity to the first version but we don't rule out the\n> > possibility of the same in the future if we really see such cases are\n> > common.\n> >\n> > [1] - https://www.postgresql.org/message-id/CAD21AoDScLvLT8JBfu5WaGCPQs_qhxsybMT%2BsMXJ%3DQrDMTyr9w%40mail.gmail.com\n>\n> Okay, I see. And once we change to serialize mode we can't release\n> the worker as well because we have already applied partial changes\n> under some transaction from a PA so we can not apply remaining from\n> the LA. I understand it might introduce a lot of complex design to\n> change it back to parallel apply mode but my only worry is that in\n> such cases we will be holding on to the parallel worker just to wait\n> till commit to reading from the spool file. But as you said it should\n> not be very common case so maybe this is fine.\n>\n\nRight and as said previously if required (which is not clear at this\nstage) we can develop it in the later version as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Dec 2022 10:46:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Dec 26, 2022 at 10:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 6:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > ---\n> > + if (!pa_can_start(xid))\n> > + return;\n> > +\n> > + /* First time through, initialize parallel apply worker state\n> > hashtable. 
*/\n> > + if (!ParallelApplyTxnHash)\n> > + {\n> > + HASHCTL ctl;\n> > +\n> > + MemSet(&ctl, 0, sizeof(ctl));\n> > + ctl.keysize = sizeof(TransactionId);\n> > + ctl.entrysize = sizeof(ParallelApplyWorkerEntry);\n> > + ctl.hcxt = ApplyContext;\n> > +\n> > + ParallelApplyTxnHash = hash_create(\"logical\n> > replication parallel apply workershash\",\n> > +\n> > 16, &ctl,\n> > +\n> > HASH_ELEM |HASH_BLOBS | HASH_CONTEXT);\n> > + }\n> > +\n> > + /*\n> > + * It's necessary to reread the subscription information\n> > before assigning\n> > + * the transaction to a parallel apply worker. Otherwise, the\n> > leader may\n> > + * not be able to reread the subscription information if streaming\n> > + * transactions keep coming and are handled by parallel apply workers.\n> > + */\n> > + maybe_reread_subscription();\n> >\n> > pa_can_start() checks if the skiplsn is an invalid xid or not, and\n> > then maybe_reread_subscription() could update the skiplsn to a valid\n> > value. As the comments in pa_can_start() says, it won't work. I think\n> > we should call maybe_reread_subscription() in\n> > apply_handle_stream_start() before calling pa_allocate_worker().\n> >\n>\n> But I think a similar thing can happen when we start the worker and\n> then before the transaction ends, we do maybe_reread_subscription().\n\nWhere do we do maybe_reread_subscription() in this case? IIUC if the\nleader sends all changes to the worker, there is no chance for the\nleader to do maybe_reread_subscription except for when waiting for the\ninput. On reflection, adding maybe_reread_subscription() to\napply_handle_stream_start() adds one extra call of it so it's not\ngood. Alternatively, we can do that in pa_can_start() before checking\nthe skiplsn. 
I think we do a similar thing in AllTablesyncsRead() --\nupdate the information before the check if necessary.\n\n> I think we should try to call maybe_reread_subscription() when we are\n> reasonably sure that we are going to enter parallel mode, otherwise,\n> anyway, it will be later called by the leader worker.\n\nIt isn't a big problem even if we update the skiplsn after launching a\nworker since we will skip the transaction the next time. But it would\nbe more consistent with the current behavior. As I mentioned above,\ndoing it in pa_can_start() seems to be reasonable to me. What do you\nthink?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 27 Dec 2022 14:57:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 27, 2022 at 11:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 10:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 26, 2022 at 6:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > ---\n> > > + if (!pa_can_start(xid))\n> > > + return;\n> > > +\n> > > + /* First time through, initialize parallel apply worker state\n> > > hashtable. */\n> > > + if (!ParallelApplyTxnHash)\n> > > + {\n> > > + HASHCTL ctl;\n> > > +\n> > > + MemSet(&ctl, 0, sizeof(ctl));\n> > > + ctl.keysize = sizeof(TransactionId);\n> > > + ctl.entrysize = sizeof(ParallelApplyWorkerEntry);\n> > > + ctl.hcxt = ApplyContext;\n> > > +\n> > > + ParallelApplyTxnHash = hash_create(\"logical\n> > > replication parallel apply workershash\",\n> > > +\n> > > 16, &ctl,\n> > > +\n> > > HASH_ELEM |HASH_BLOBS | HASH_CONTEXT);\n> > > + }\n> > > +\n> > > + /*\n> > > + * It's necessary to reread the subscription information\n> > > before assigning\n> > > + * the transaction to a parallel apply worker. 
Otherwise, the\n> > > leader may\n> > > + * not be able to reread the subscription information if streaming\n> > > + * transactions keep coming and are handled by parallel apply workers.\n> > > + */\n> > > + maybe_reread_subscription();\n> > >\n> > > pa_can_start() checks if the skiplsn is an invalid xid or not, and\n> > > then maybe_reread_subscription() could update the skiplsn to a valid\n> > > value. As the comments in pa_can_start() says, it won't work. I think\n> > > we should call maybe_reread_subscription() in\n> > > apply_handle_stream_start() before calling pa_allocate_worker().\n> > >\n> >\n> > But I think a similar thing can happen when we start the worker and\n> > then before the transaction ends, we do maybe_reread_subscription().\n>\n> Where do we do maybe_reread_subscription() in this case? IIUC if the\n> leader sends all changes to the worker, there is no chance for the\n> leader to do maybe_reread_subscription except for when waiting for the\n> input.\n\nYes, this is the point where it can happen. IT can happen when there\nis some delay between different streaming chunks.\n\n> On reflection, adding maybe_reread_subscription() to\n> apply_handle_stream_start() adds one extra call of it so it's not\n> good. Alternatively, we can do that in pa_can_start() before checking\n> the skiplsn. I think we do a similar thing in AllTablesyncsRead() --\n> update the information before the check if necessary.\n>\n> > I think we should try to call maybe_reread_subscription() when we are\n> > reasonably sure that we are going to enter parallel mode, otherwise,\n> > anyway, it will be later called by the leader worker.\n>\n> It isn't a big problem even if we update the skiplsn after launching a\n> worker since we will skip the transaction the next time. But it would\n> be more consistent with the current behavior. As I mentioned above,\n> doing it in pa_can_start() seems to be reasonable to me. 
What do you\n> think?\n>\n\nOkay, we can do it in pa_can_start but then let's do it before we\ncheck the parallel_apply flag as that can also be changed if the\nstreaming mode is changed. Please see the changes in the attached\npatch which is atop the 0001 and 0002 patches. I have made a few\ncomment improvements as well.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 27 Dec 2022 12:13:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 27, 2022 at 10:24 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed all above comments and part of\n> comments from [1] except one comment that are being discussed.\n>\n\n1.\n+# Test that the deadlock is detected among leader and parallel apply workers.\n+\n+$node_subscriber->append_conf('postgresql.conf', \"deadlock_timeout = 1ms\");\n+$node_subscriber->reload;\n+\n\nA. I see that the other existing tests have deadlock_timeout set as\n10ms, 100ms, 100s, etc. Is there a reason to keep so low here? Shall\nwe keep it as 10ms?\nB. /among leader/among the leader\n\n2. Can we leave having tests in 022_twophase_cascade to be covered by\nparallel mode? The two-phase and parallel apply will be covered by\n023_twophase_stream, so not sure if we get any extra coverage by\n022_twophase_cascade.\n\n3. 
Let's combine 0001 and 0002 as both have got reviewed independently.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Dec 2022 17:07:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Dec 27, 2022 19:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Dec 27, 2022 at 10:24 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch which addressed all above comments and part\r\n> of\r\n> > comments from [1] except one comment that are being discussed.\r\n> >\r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> +# Test that the deadlock is detected among leader and parallel apply workers.\r\n> +\r\n> +$node_subscriber->append_conf('postgresql.conf', \"deadlock_timeout =\r\n> 1ms\");\r\n> +$node_subscriber->reload;\r\n> +\r\n> \r\n> A. I see that the other existing tests have deadlock_timeout set as\r\n> 10ms, 100ms, 100s, etc. Is there a reason to keep so low here? Shall\r\n> we keep it as 10ms?\r\n\r\nNo, I think you are right. Keep it as 10ms.\r\n\r\n> B. /among leader/among the leader\r\n\r\nFixed.\r\n\r\n> 2. Can we leave having tests in 022_twophase_cascade to be covered by\r\n> parallel mode? The two-phase and parallel apply will be covered by\r\n> 023_twophase_stream, so not sure if we get any extra coverage by\r\n> 022_twophase_cascade.\r\n\r\nCompared with 023_twophase_stream, there is \"rollback a subtransaction\" in\r\n022_twophase_cascade, but since this part of the code can be covered by tests\r\nin 018_stream_subxact_abort, I think we can remove parallel version for\r\n022_twophase_cascade. So I reverted changes in 022_twophase_cascade for\r\nparallel mode and added some comments atop this file.\r\n\r\n> 3. 
Let's combine 0001 and 0002 as both have got reviewed independently.\r\n\r\nCombined them into one patch.\r\n\r\nAnd I also checked and merged the diff patch in [1].\r\n\r\nBesides, also fixed the below problem:\r\nIn previous versions, we didn't wait for STREAM_ABORT transactions to complete.\r\nBut in extreme cases, this can cause problems if the STREAM_ABORT transaction\r\ndoesn't complete and xid wraparound occurs on the publisher-side. Fixed this by\r\nwaiting for the STREAM_ABORT transaction to complete.\r\n\r\nAttach the new patch set.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2B5gTjHzWovkbUj%2BxsQ9yO9jVcKsS-3c5ZXLFy8JmfT%3DA%40mail.gmail.com\r\n\r\nRegards,\r\nWang wei", "msg_date": "Wed, 28 Dec 2022 04:38:58 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Dec 28, 2022 at 10:09 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n\nI have made a number of changes in the comments, removed extra list\ncopy in pa_launch_parallel_worker(), and removed unnecessary include\nin worker. Please see the attached and let me know what you think.\nFeel free to rebase and send the remaining patches.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 29 Dec 2022 18:54:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thur, Dec 29, 2022 21:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Dec 28, 2022 at 10:09 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> I have made a number of changes in the comments, removed extra list\r\n> copy in pa_launch_parallel_worker(), and removed unnecessary include\r\n> in worker. 
Please see the attached and let me know what you think.\r\n> Feel free to rebase and send the remaining patches.\r\n\r\nThanks for your improvement.\r\n\r\nI've checked it and it looks good to me.\r\nRebased the other patches and ran the pgident for the patch set.\r\n\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Fri, 30 Dec 2022 10:25:32 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> I've checked it and it looks good to me.\n> Rebased the other patches and ran the pgident for the patch set.\n>\n> Attach the new patch set.\n>\n\nI have added a few DEBUG messages and changed a few comments in the\n0001 patch. With that v71-0001* looks good to me and I'll commit it\nlater this week (by Thursday or Friday) unless there are any major\ncomments or objections.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 2 Jan 2023 16:23:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 2, 2023 at 18:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I've checked it and it looks good to me.\r\n> > Rebased the other patches and ran the pgident for the patch set.\r\n> >\r\n> > Attach the new patch set.\r\n> >\r\n> \r\n> I have added a few DEBUG messages and changed a few comments in the\r\n> 0001 patch. 
With that v71-0001* looks good to me and I'll commit it\r\n> later this week (by Thursday or Friday) unless there are any major\r\n> comments or objections.\r\n\r\nThanks for your improvement.\r\n\r\nRebased the patch set because of the new change in HEAD (c8e1ba7).\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei", "msg_date": "Tue, 3 Jan 2023 05:40:22 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 3, 2023 at 11:10 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 2, 2023 at 18:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > I've checked it and it looks good to me.\n> > > Rebased the other patches and ran the pgident for the patch set.\n> > >\n> > > Attach the new patch set.\n> > >\n> >\n> > I have added a few DEBUG messages and changed a few comments in the\n> > 0001 patch. With that v71-0001* looks good to me and I'll commit it\n> > later this week (by Thursday or Friday) unless there are any major\n> > comments or objections.\n>\n> Thanks for your improvement.\n>\n> Rebased the patch set because the new change in HEAD (c8e1ba7).\n> Attach the new patch set.\n>\n> Regards,\n> Wang wei\n\nHi,\nIn continuation with [1] and [2], I did some performance testing on\nv70-0001 patch.\n\nThis test used synchronous logical replication and compared SQL\nexecution times before and after applying the patch.\n\nThe following cases are tested by varying logical_decoding_work_mem:\na) Bulk insert.\nb) Bulk delete.\nc) Bulk update.\nd) Rollback to savepoint. (different percentages of changes in the\ntransaction are rolled back).\n\nThe tests are performed ten times, and the average of the middle eight is taken.\n\nThe scripts are the same as before [1]. 
The scripts for additional\nupdate and delete testing are attached.\n\nThe results are as follows:\n\nRESULT - bulk insert (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 34.475 34.222 34.400\npatched 20.168 20.181 20.510\nCompare with HEAD -41.49% -41.029% -40.377%\n\n\nRESULT - bulk delete (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 40.286 41.312 41.312\npatched 23.749 23.759 23.480\nCompare with HEAD -41.04% -42.48% -43.16%\n\n\nRESULT - bulk update (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 63.650 65.260 65.459\npatched 46.692 46.275 48.281\nCompare with HEAD -26.64% -29.09% -26.24%\n\n\nRESULT - rollback 10% (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 33.386 33.213 31.990\npatched 20.540 19.295 18.139\nCompare with HEAD -38.47% -41.90% -43.29%\n\n\nRESULT - rollback 20% (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 32.150 31.871 30.825\npatched 19.331 19.366 18.285\nCompare with HEAD -39.87% -39.23% -40.68%\n\n\nRESULT - rollback 30% (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 28.611 30.139 29.433\npatched 19.632 19.838 18.374\nCompare with HEAD -31.38% -34.17% -37.57%\n\n\nRESULT - rollback 50% (5kk)\n---------------------------------------------------------------\nlogical_decoding_work_mem 64kB 256kB 64MB\nHEAD 27.410 27.167 25.990\npatched 19.982 18.749 18.048\nCompare with HEAD -27.099% -30.98% -30.55%\n\n(if \"Compare with HEAD\" is a positive number, it means worse than\nHEAD; if it is a negative number, it means better than HEAD.)\n\nSummary:\nUpdate shows 26-29% improvement, while insert and delete shows ~40% 
improvement.\nIn the case of rollback, the improvement ranges between 27% and 42%.\nThe improvement slightly decreases with larger amounts of data being\nrolled back.\n\n\n[1] https://www.postgresql.org/message-id/OSZPR01MB63103AA97349BBB858E27DEAFD499%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/OSZPR01MB6310174063C9144D2081F657FDE09%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nthanks\nShveta", "msg_date": "Tue, 3 Jan 2023 14:39:08 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 3, 2023 at 2:40 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 2, 2023 at 18:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > I've checked it and it looks good to me.\n> > > Rebased the other patches and ran the pgident for the patch set.\n> > >\n> > > Attach the new patch set.\n> > >\n> >\n> > I have added a few DEBUG messages and changed a few comments in the\n> > 0001 patch. With that v71-0001* looks good to me and I'll commit it\n> > later this week (by Thursday or Friday) unless there are any major\n> > comments or objections.\n>\n> Thanks for your improvement.\n>\n> Rebased the patch set because the new change in HEAD (c8e1ba7).\n> Attach the new patch set.\n\nThere are some unused parameters in v72 patches:\n\n+static bool\n+pa_can_start(TransactionId xid)\n+{\n+ Assert(TransactionIdIsValid(xid));\n\n'xid' is used only for the assertion check but I don't think it's necessary.\n\n---\n+/*\n+ * Make sure the leader apply worker tries to read from our error\nqueue one more\n+ * time. 
This guards against the case where we exit uncleanly without sending\n+ * an ErrorResponse, for example because some code calls proc_exit directly.\n+ */\n+static void\n+pa_shutdown(int code, Datum arg)\n\nSimilarly, we don't use 'code' here.\n\n---\n+/*\n+ * Handle a single protocol message received from a single parallel apply\n+ * worker.\n+ */\n+static void\n+HandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo msg)\n\nIn addition, the same is true for 'winfo'.\n\nThe rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 14:31:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 4, 2023 at 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 2:40 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Mon, Jan 2, 2023 at 18:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com\n> > > <wangw.fnst@fujitsu.com> wrote:\n> > > >\n> > > > I've checked it and it looks good to me.\n> > > > Rebased the other patches and ran the pgident for the patch set.\n> > > >\n> > > > Attach the new patch set.\n> > > >\n> > >\n> > > I have added a few DEBUG messages and changed a few comments in the\n> > > 0001 patch. With that v71-0001* looks good to me and I'll commit it\n> > > later this week (by Thursday or Friday) unless there are any major\n> > > comments or objections.\n> >\n> > Thanks for your improvement.\n> >\n> > Rebased the patch set because the new change in HEAD (c8e1ba7).\n> > Attach the new patch set.\n>\n> There are some unused parameters in v72 patches:\n>\n> ---\n> +/*\n> + * Make sure the leader apply worker tries to read from our error\n> queue one more\n> + * time. 
This guards against the case where we exit uncleanly without sending\n> + * an ErrorResponse, for example because some code calls proc_exit directly.\n> + */\n> +static void\n> +pa_shutdown(int code, Datum arg)\n>\n> Similarly, we don't use 'code' here.\n\nThis is necessary. Sorry for the noise.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 15:39:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 4, 2023 at 13:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Jan 3, 2023 at 2:40 PM wangw.fnst@fujitsu.com \r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Mon, Jan 2, 2023 at 18:54 PM Amit Kapila \r\n> > <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > On Fri, Dec 30, 2022 at 3:55 PM wangw.fnst@fujitsu.com \r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > I've checked it and it looks good to me.\r\n> > > > Rebased the other patches and ran the pgident for the patch set.\r\n> > > >\r\n> > > > Attach the new patch set.\r\n> > > >\r\n> > >\r\n> > > I have added a few DEBUG messages and changed a few comments in \r\n> > > the\r\n> > > 0001 patch. With that v71-0001* looks good to me and I'll commit \r\n> > > it later this week (by Thursday or Friday) unless there are any \r\n> > > major comments or objections.\r\n> >\r\n> > Thanks for your improvement.\r\n> >\r\n> > Rebased the patch set because the new change in HEAD (c8e1ba7).\r\n> > Attach the new patch set.\r\n> \r\n> There are some unused parameters in v72 patches:\r\n\r\nThanks for your comments!\r\n\r\n> +static bool\r\n> +pa_can_start(TransactionId xid)\r\n> +{\r\n> + Assert(TransactionIdIsValid(xid));\r\n> \r\n> 'xid' is used only for the assertion check but I don't think it's necessary.\r\n\r\nAgree. 
Removed this check.\r\n\r\n> ---\r\n> +/*\r\n> + * Handle a single protocol message received from a single parallel \r\n> +apply\r\n> + * worker.\r\n> + */\r\n> +static void\r\n> +HandleParallelApplyMessage(ParallelApplyWorkerInfo *winfo, StringInfo \r\n> +msg)\r\n> \r\n> In addition, the same is true for 'winfo'.\r\n\r\nAgree. Removed this parameter.\r\n\r\nAttach the new patch set.\r\nApart from addressing Sawada-San's comments, I also did some other minor\r\nchanges in the patch:\r\n\r\n* Adjusted a testcase about crash restart in 023_twophase_stream.pl, I\r\n skipped the check for DEBUG msg as the msg might not be output if the crash happens\r\n before that.\r\n* Adjusted the code in pg_lock_status() to make the fields of\r\n applytransaction lock display in more appropriate places.\r\n* Add a comment to explain why we unlock the transaction before aborting the\r\n transaction in parallel apply worker.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 4 Jan 2023 10:55:34 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 4, 2023 at 4:25 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\n> Attach the new patch set.\n> Apart from addressing Sawada-San's comments, I also did some other minor\n> changes in the patch:\n\nI have done a high-level review of 0001, and later I will do a\ndetailed review of this while reading through the patch I think some\nof the comments need some changes..\n\n1.\n+ The deadlock can happen in\n+ * the following ways:\n+ *\n\n+ * 4) Lock types\n+ *\n+ * Both the stream lock and the transaction lock mentioned above are\n+ * session-level locks because both locks could be acquired outside the\n+ * transaction, and the stream lock in the leader needs to persist across\n+ * transaction boundaries i.e. 
until the end of the streaming transaction.\n\nI think the Lock types should not be listed with the number 4).\nBecause point number 1,2 and 3 are explaining the way how deadlocks\ncan happen but 4) doesn't fall under that category.\n\n\n2.\n+ * Since the database structure (schema of subscription tables, constraints,\n+ * etc.) of the publisher and subscriber could be different, applying\n+ * transactions in parallel mode on the subscriber side can cause some\n+ * deadlocks that do not occur on the publisher side.\n\nI think this paragraph needs to be rephrased a bit. It is saying that\nsome deadlock can occur on subscribers which did not occur on the\npublisher. I think what it should be conveying is that the deadlock\ncan occur due to concurrently applying the conflicting/dependent\ntransactions which are not conflicting/dependent on the publisher due\nto <explain reason>. Because if we create the same schema on the\npublisher it might not have ended up in a deadlock instead it would\nhave been executed in sequence (due to lock waiting). So the main\npoint we are conveying is that the transaction which was independent\nof each other on the publisher could be dependent on the subscriber\nand they can end up in deadlock due to parallel apply.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 16:52:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 4, 2023 at 4:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> 2.\n> + * Since the database structure (schema of subscription tables, constraints,\n> + * etc.) 
of the publisher and subscriber could be different, applying\n> + * transactions in parallel mode on the subscriber side can cause some\n> + * deadlocks that do not occur on the publisher side.\n>\n> I think this paragraph needs to be rephrased a bit. It is saying that\n> some deadlock can occur on subscribers which did not occur on the\n> publisher. I think what it should be conveying is that the deadlock\n> can occur due to concurrently applying the conflicting/dependent\n> transactions which are not conflicting/dependent on the publisher due\n> to <explain reason>. Because if we create the same schema on the\n> publisher it might not have ended up in a deadlock instead it would\n> have been executed in sequence (due to lock waiting). So the main\n> point we are conveying is that the transaction which was independent\n> of each other on the publisher could be dependent on the subscriber\n> and they can end up in deadlock due to parallel apply.\n>\n\nHow about changing it to: \"We have a risk of deadlock due to\nparallelly applying the transactions that were independent on the\npublisher side but became dependent on the subscriber side due to the\ndifferent database structures (like schema of subscription tables,\nconstraints, etc.) on each side.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 Jan 2023 18:40:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 4, 2023 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 4, 2023 at 4:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > 2.\n> > + * Since the database structure (schema of subscription tables, constraints,\n> > + * etc.) 
of the publisher and subscriber could be different, applying\n> > + * transactions in parallel mode on the subscriber side can cause some\n> > + * deadlocks that do not occur on the publisher side.\n> >\n> > I think this paragraph needs to be rephrased a bit. It is saying that\n> > some deadlock can occur on subscribers which did not occur on the\n> > publisher. I think what it should be conveying is that the deadlock\n> > can occur due to concurrently applying the conflicting/dependent\n> > transactions which are not conflicting/dependent on the publisher due\n> > to <explain reason>. Because if we create the same schema on the\n> > publisher it might not have ended up in a deadlock instead it would\n> > have been executed in sequence (due to lock waiting). So the main\n> > point we are conveying is that the transaction which was independent\n> > of each other on the publisher could be dependent on the subscriber\n> > and they can end up in deadlock due to parallel apply.\n> >\n>\n> How about changing it to: \"We have a risk of deadlock due to\n> parallelly applying the transactions that were independent on the\n> publisher side but became dependent on the subscriber side due to the\n> different database structures (like schema of subscription tables,\n> constraints, etc.) 
on each side.\n\nI think this looks good to me.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 18:59:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, January 4, 2023 9:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Wed, Jan 4, 2023 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Jan 4, 2023 at 4:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> > >\r\n> > > 2.\r\n> > > + * Since the database structure (schema of subscription tables,\r\n> > > + constraints,\r\n> > > + * etc.) of the publisher and subscriber could be different,\r\n> > > + applying\r\n> > > + * transactions in parallel mode on the subscriber side can cause\r\n> > > + some\r\n> > > + * deadlocks that do not occur on the publisher side.\r\n> > >\r\n> > > I think this paragraph needs to be rephrased a bit. It is saying\r\n> > > that some deadlock can occur on subscribers which did not occur on\r\n> > > the publisher. I think what it should be conveying is that the\r\n> > > deadlock can occur due to concurrently applying the\r\n> > > conflicting/dependent transactions which are not\r\n> > > conflicting/dependent on the publisher due to <explain reason>.\r\n> > > Because if we create the same schema on the publisher it might not\r\n> > > have ended up in a deadlock instead it would have been executed in\r\n> > > sequence (due to lock waiting). 
So the main point we are conveying\r\n> > > is that the transaction which was independent of each other on the\r\n> > > publisher could be dependent on the subscriber and they can end up in\r\n> deadlock due to parallel apply.\r\n> > >\r\n> >\r\n> > How about changing it to: \"We have a risk of deadlock due to\r\n> > parallelly applying the transactions that were independent on the\r\n> > publisher side but became dependent on the subscriber side due to the\r\n> > different database structures (like schema of subscription tables,\r\n> > constraints, etc.) on each side.\r\n> \r\n> I think this looks good to me.\r\n\r\nThanks for the comments.\r\nAttach the new version patch set which changed the comments as suggested.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 5 Jan 2023 03:37:46 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 5, 2023 at 9:07 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, January 4, 2023 9:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > I think this looks good to me.\n>\n> Thanks for the comments.\n> Attach the new version patch set which changed the comments as suggested.\n\nThanks for the updated patch, while testing this I see one strange\nbehavior which seems like bug to me, here is the step to reproduce\n\n1. start 2 servers(config: logical_decoding_work_mem=64kB)\n./pg_ctl -D data/ -c -l pub_logs start\n./pg_ctl -D data1/ -c -l sub_logs start\n\n2. 
Publisher:\ncreate table t(a int PRIMARY KEY ,b text);\nCREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\ncreate publication test_pub for table t\nwith(PUBLISH='insert,delete,update,truncate');\nalter table t replica identity FULL ;\ninsert into t values (generate_series(1,2000),large_val()) ON CONFLICT\n(a) DO UPDATE SET a=EXCLUDED.a*300;\n\n3. Subscription Server:\ncreate table t(a int,b text);\ncreate subscription test_sub CONNECTION 'host=localhost port=5432\ndbname=postgres' PUBLICATION test_pub WITH ( slot_name =\ntest_slot_sub1,streaming=parallel);\n\n4. Publication Server:\nbegin ;\nsavepoint a;\ndelete from t;\nsavepoint b;\ninsert into t values (generate_series(1,5000),large_val()) ON CONFLICT\n(a) DO UPDATE SET a=EXCLUDED.a*30000; -- (while executing this start\npublisher in 2-3 secs)\n\nRestart the publication server, while the transaction is still in an\nuncommitted state.\n./pg_ctl -D data/ -c -l pub_logs stop -mi\n./pg_ctl -D data/ -c -l pub_logs start -mi\n\nafter this, the parallel apply worker stuck in waiting on stream lock\nforever (even after 10 mins) -- see below, from subscriber logs I can\nsee one of the parallel apply worker [75677] started but never\nfinished [no error], after that I have performed more operation [same\ninsert] which got applied by new parallel apply worked which got\nstarted and finished within 1 second.\n\ndilipku+ 75660 1 0 13:39 ? 00:00:00\n/home/dilipkumar/work/PG/install/bin/postgres -D data\ndilipku+ 75661 75660 0 13:39 ? 00:00:00 postgres: checkpointer\ndilipku+ 75662 75660 0 13:39 ? 00:00:00 postgres: background writer\ndilipku+ 75664 75660 0 13:39 ? 00:00:00 postgres: walwriter\ndilipku+ 75665 75660 0 13:39 ? 00:00:00 postgres: autovacuum launcher\ndilipku+ 75666 75660 0 13:39 ? 00:00:00 postgres: logical\nreplication launcher\ndilipku+ 75675 75595 0 13:39 ? 
00:00:00 postgres: logical\nreplication apply worker for subscription 16389\ndilipku+ 75676 75660 0 13:39 ? 00:00:00 postgres: walsender\ndilipkumar postgres ::1(42192) START_REPLICATION\ndilipku+ 75677 75595 0 13:39 ? 00:00:00 postgres: logical\nreplication parallel apply worker for subscription 16389 waiting\n\n\nSubscriber logs:\n2023-01-05 13:39:07.261 IST [75595] LOG: background worker \"logical\nreplication worker\" (PID 75649) exited with exit code 1\n2023-01-05 13:39:12.272 IST [75675] LOG: logical replication apply\nworker for subscription \"test_sub\" has started\n2023-01-05 13:39:12.307 IST [75677] LOG: logical replication parallel\napply worker for subscription \"test_sub\" has started\n2023-01-05 13:43:31.003 IST [75596] LOG: checkpoint starting: time\n2023-01-05 13:46:32.045 IST [76337] LOG: logical replication parallel\napply worker for subscription \"test_sub\" has started\n2023-01-05 13:46:35.214 IST [76337] LOG: logical replication parallel\napply worker for subscription \"test_sub\" has finished\n2023-01-05 13:46:50.241 IST [76384] LOG: logical replication parallel\napply worker for subscription \"test_sub\" has started\n2023-01-05 13:46:53.676 IST [76384] LOG: logical replication parallel\napply worker for subscription \"test_sub\" has finished\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Jan 2023 13:51:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, January 5, 2023 4:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Thu, Jan 5, 2023 at 9:07 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, January 4, 2023 9:29 PM Dilip Kumar\r\n> <dilipbalaut@gmail.com> wrote:\r\n> \r\n> > > I think this looks good to me.\r\n> >\r\n> > Thanks for the comments.\r\n> > Attach the new 
version patch set which changed the comments as\r\n> suggested.\r\n> \r\n> Thanks for the updated patch, while testing this I see one strange\r\n> behavior which seems like bug to me, here is the step to reproduce\r\n> \r\n> 1. start 2 servers(config: logical_decoding_work_mem=64kB)\r\n> ./pg_ctl -D data/ -c -l pub_logs start\r\n> ./pg_ctl -D data1/ -c -l sub_logs start\r\n> \r\n> 2. Publisher:\r\n> create table t(a int PRIMARY KEY ,b text);\r\n> CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\r\n> 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';\r\n> create publication test_pub for table t\r\n> with(PUBLISH='insert,delete,update,truncate');\r\n> alter table t replica identity FULL ;\r\n> insert into t values (generate_series(1,2000),large_val()) ON CONFLICT\r\n> (a) DO UPDATE SET a=EXCLUDED.a*300;\r\n> \r\n> 3. Subscription Server:\r\n> create table t(a int,b text);\r\n> create subscription test_sub CONNECTION 'host=localhost port=5432\r\n> dbname=postgres' PUBLICATION test_pub WITH ( slot_name =\r\n> test_slot_sub1,streaming=parallel);\r\n> \r\n> 4. 
Publication Server:\r\n> begin ;\r\n> savepoint a;\r\n> delete from t;\r\n> savepoint b;\r\n> insert into t values (generate_series(1,5000),large_val()) ON CONFLICT\r\n> (a) DO UPDATE SET a=EXCLUDED.a*30000; -- (while executing this start\r\n> publisher in 2-3 secs)\r\n> \r\n> Restart the publication server, while the transaction is still in an\r\n> uncommitted state.\r\n> ./pg_ctl -D data/ -c -l pub_logs stop -mi\r\n> ./pg_ctl -D data/ -c -l pub_logs start -mi\r\n> \r\n> after this, the parallel apply worker stuck in waiting on stream lock\r\n> forever (even after 10 mins) -- see below, from subscriber logs I can\r\n> see one of the parallel apply worker [75677] started but never\r\n> finished [no error], after that I have performed more operation [same\r\n> insert] which got applied by new parallel apply worked which got\r\n> started and finished within 1 second.\r\n> \r\n\r\nThanks for reporting the problem.\r\n\r\nAfter analyzing the behavior, I think it's a bug on publisher side which\r\nis not directly related to parallel apply.\r\n\r\nI think the root reason is that we didn't try to send a stream end(stream\r\nabort) message to subscriber for the crashed transaction which was streamed\r\nbefore.\r\n\r\nThe behavior is that, after restarting, the publisher will start to decode the\r\ntransaction that aborted due to crash, and when try to stream the first change\r\nof that transaction, it will send a stream start message but then it realizes\r\nthat the transaction was aborted, so it will enter the PG_CATCH block of\r\nReorderBufferProcessTXN() and call ReorderBufferResetTXN() which send the\r\nstream stop message. 
And in this case, there would be a parallel apply worker\r\nstarted on subscriber waiting for stream end message which will never come.\r\n\r\nI think the same behavior happens for the non-parallel mode, which will cause\r\na stream file to be left on the subscriber that will not be cleaned until the apply worker is\r\nrestarted.\r\n\r\nTo fix it, I think we need to send a stream abort message when we are cleaning\r\nup the crashed transaction on the publisher (e.g., in ReorderBufferAbortOld()). And here\r\nis a tiny patch which changes the same. I have confirmed that the bug is fixed\r\nand all regression tests pass.\r\n\r\nWhat do you think?\r\nI will start a new thread and try to write a testcase if possible\r\nafter reaching a consensus.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 5 Jan 2023 11:33:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 5, 2023 at 5:03 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, January 5, 2023 4:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n\n> Thanks for reporting the problem.\n>\n> After analyzing the behavior, I think it's a bug on publisher side which\n> is not directly related to parallel apply.\n>\n> I think the root reason is that we didn't try to send a stream end(stream\n> abort) message to subscriber for the crashed transaction which was streamed\n> before.\n> The behavior is that, after restarting, the publisher will start to decode the\n> transaction that aborted due to crash, and when try to stream the first change\n> of that transaction, it will send a stream start message but then it realizes\n> that the transaction was aborted, so it will enter the PG_CATCH block of\n> ReorderBufferProcessTXN() and call ReorderBufferResetTXN() which send the\n> stream stop message. 
And in this case, there would be a parallel apply worker\n> started on subscriber waiting for stream end message which will never come.\n\nI suspected it but didn't analyze this.\n\n> I think the same behavior happens for the non-parallel mode which will cause\n> a stream file left on subscriber and will not be cleaned until the apply worker is\n> restarted.\n> To fix it, I think we need to send a stream abort message when we are cleaning\n> up crashed transaction on publisher(e.g., in ReorderBufferAbortOld()). And here\n> is a tiny patch which change the same. I have confirmed that the bug is fixed\n> and all regression tests pass.\n>\n> What do you think ?\n> I will start a new thread and try to write a testcase if possible\n> after reaching a consensus.\n\nI think your analysis looks correct and we can raise this in a new thread.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Jan 2023 17:23:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, January 5, 2023 7:54 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Thu, Jan 5, 2023 at 5:03 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, January 5, 2023 4:22 PM Dilip Kumar <dilipbalaut@gmail.com>\r\n> wrote:\r\n> > >\r\n> \r\n> > Thanks for reporting the problem.\r\n> >\r\n> > After analyzing the behavior, I think it's a bug on publisher side\r\n> > which is not directly related to parallel apply.\r\n> >\r\n> > I think the root reason is that we didn't try to send a stream\r\n> > end(stream\r\n> > abort) message to subscriber for the crashed transaction which was\r\n> > streamed before.\r\n> > The behavior is that, after restarting, the publisher will start to\r\n> > decode the transaction that aborted due to crash, and when try to\r\n> > 
stream the first change of that transaction, it will send a stream\r\n> > start message but then it realizes that the transaction was aborted,\r\n> > so it will enter the PG_CATCH block of\r\n> > ReorderBufferProcessTXN() and call ReorderBufferResetTXN() which send\r\n> > the stream stop message. And in this case, there would be a parallel\r\n> > apply worker started on subscriber waiting for stream end message which\r\n> will never come.\r\n> \r\n> I suspected it but didn't analyze this.\r\n> \r\n> > I think the same behavior happens for the non-parallel mode which will\r\n> > cause a stream file left on subscriber and will not be cleaned until\r\n> > the apply worker is restarted.\r\n> > To fix it, I think we need to send a stream abort message when we are\r\n> > cleaning up crashed transaction on publisher(e.g., in\r\n> > ReorderBufferAbortOld()). And here is a tiny patch which change the\r\n> > same. I have confirmed that the bug is fixed and all regression tests pass.\r\n> >\r\n> > What do you think ?\r\n> > I will start a new thread and try to write a testcase if possible\r\n> > after reaching a consensus.\r\n> \r\n> I think your analysis looks correct and we can raise this in a new thread.\r\n\r\nThanks, I have started another thread[1]\r\n\r\nAttach the parallel apply patch set here again. 
I didn't change the patch set,\r\nattach it here just to let the CFbot keep testing it.\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716A773F46768A1B75BE24394FB9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 6 Jan 2023 04:07:49 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 6, 2023 at 9:37 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, January 5, 2023 7:54 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thanks, I have started another thread[1]\n>\n> Attach the parallel apply patch set here again. I didn't change the patch set,\n> attach it here just to let the CFbot keep testing it.\n\nI have completed the review and some basic testing and it mostly looks\nfine to me. Here is my last set of comments/suggestions.\n\n1.\n /*\n * Don't start a new parallel worker if user has set skiplsn as it's\n * possible that they want to skip the streaming transaction. For\n * streaming transactions, we need to serialize the transaction to a file\n * so that we can get the last LSN of the transaction to judge whether to\n * skip before starting to apply the change.\n */\n if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\n return false;\n\n\nI think this is fine to block parallelism in this case, but it is also\npossible to make it less restrictive, basically, only if the first lsn\nof the transaction is <= skiplsn, then only it is possible that the\nfinal_lsn might match with skiplsn otherwise that is not possible. And\nif we want then we can allow parallelism in that case.\n\nI understand that currently we do not have first_lsn of the\ntransaction in stream start message but I think that should be easy to\ndo? 
Although I am not sure if it is worth it, it's good to make a\nnote at least.\n\n2.\n\n+ * XXX Additionally, we also stop the worker if the leader apply worker\n+ * serialize part of the transaction data due to a send timeout. This is\n+ * because the message could be partially written to the queue and there\n+ * is no way to clean the queue other than resending the message until it\n+ * succeeds. Instead of trying to send the data which anyway would have\n+ * been serialized and then letting the parallel apply worker deal with\n+ * the spurious message, we stop the worker.\n+ */\n+ if (winfo->serialize_changes ||\n+ list_length(ParallelApplyWorkerPool) >\n+ (max_parallel_apply_workers_per_subscription / 2))\n\nIMHO this reason (XXX Additionally, we also stop the worker if the\nleader apply worker serialize part of the transaction data due to a\nsend timeout) for stopping the worker looks a bit hackish to me. It\nmay be a rare case so I am not talking about the performance but the\nreasoning behind stopping is not good. Ideally we should be able to\nclean up the message queue and reuse the worker.\n\n3.\n+ else if (shmq_res == SHM_MQ_WOULD_BLOCK)\n+ {\n+ /* Replay the changes from the file, if any. */\n+ if (pa_has_spooled_message_pending())\n+ {\n+ pa_spooled_messages();\n+ }\n\nI think we do not need this pa_has_spooled_message_pending() function.\nBecause this function is just calling pa_get_fileset_state() which is\nacquiring mutex and getting filestate then if the filestate is not\nFS_EMPTY then we call pa_spooled_messages() that will again call\npa_get_fileset_state() which will again acquire mutex. I think when\nthe state is FS_SERIALIZE_IN_PROGRESS it will frequently call\npa_get_fileset_state() consecutively 2 times, and I think we can\neasily achieve the same behavior with just one call.\n\n4.\n\n+ * leader, or when there there is an error. 
None of these cases will allow\n+ * the code to reach here.\n\n/when there there is an error/when there is an error\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 11:24:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 6, 2023 at 11:24 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jan 6, 2023 at 9:37 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Thursday, January 5, 2023 7:54 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Thanks, I have started another thread[1]\n> >\n> > Attach the parallel apply patch set here again. I didn't change the patch set,\n> > attach it here just to let the CFbot keep testing it.\n>\n> I have completed the review and some basic testing and it mostly looks\n> fine to me. Here is my last set of comments/suggestions.\n>\n> 1.\n> /*\n> * Don't start a new parallel worker if user has set skiplsn as it's\n> * possible that they want to skip the streaming transaction. For\n> * streaming transactions, we need to serialize the transaction to a file\n> * so that we can get the last LSN of the transaction to judge whether to\n> * skip before starting to apply the change.\n> */\n> if (!XLogRecPtrIsInvalid(MySubscription->skiplsn))\n> return false;\n>\n>\n> I think this is fine to block parallelism in this case, but it is also\n> possible to make it less restrictive, basically, only if the first lsn\n> of the transaction is <= skiplsn, then only it is possible that the\n> final_lsn might match with skiplsn otherwise that is not possible. And\n> if we want then we can allow parallelism in that case.\n>\n> I understand that currently we do not have first_lsn of the\n> transaction in stream start message but I think that should be easy to\n> do? 
Although I am not sure if it is worth it, it's good to make a\n> note at least.\n>\n\nYeah, I also don't think sending extra eight bytes with stream_start\nmessage is worth it. But it is fine to mention the same in the\ncomments.\n\n> 2.\n>\n> + * XXX Additionally, we also stop the worker if the leader apply worker\n> + * serialize part of the transaction data due to a send timeout. This is\n> + * because the message could be partially written to the queue and there\n> + * is no way to clean the queue other than resending the message until it\n> + * succeeds. Instead of trying to send the data which anyway would have\n> + * been serialized and then letting the parallel apply worker deal with\n> + * the spurious message, we stop the worker.\n> + */\n> + if (winfo->serialize_changes ||\n> + list_length(ParallelApplyWorkerPool) >\n> + (max_parallel_apply_workers_per_subscription / 2))\n>\n> IMHO this reason (XXX Additionally, we also stop the worker if the\n> leader apply worker serialize part of the transaction data due to a\n> send timeout) for stopping the worker looks a bit hackish to me. It\n> may be a rare case so I am not talking about the performance but the\n> reasoning behind stopping is not good. Ideally we should be able to\n> clean up the message queue and reuse the worker.\n>\n\nTBH, I don't know what is the better way to deal with this with the\ncurrent infrastructure. I thought we can do this as a separate\nenhancement in the future.\n\n> 3.\n> + else if (shmq_res == SHM_MQ_WOULD_BLOCK)\n> + {\n> + /* Replay the changes from the file, if any. 
*/\n> + if (pa_has_spooled_message_pending())\n> + {\n> + pa_spooled_messages();\n> + }\n>\n> I think we do not need this pa_has_spooled_message_pending() function.\n> Because this function is just calling pa_get_fileset_state() which is\n> acquiring mutex and getting filestate then if the filestate is not\n> FS_EMPTY then we call pa_spooled_messages() that will again call\n> pa_get_fileset_state() which will again acquire mutex. I think when\n> the state is FS_SERIALIZE_IN_PROGRESS it will frequently call\n> pa_get_fileset_state() consecutively 2 times, and I think we can\n> easily achieve the same behavior with just one call.\n>\n\nThis is just to keep the code easy to follow. As this would be a rare\ncase, so thought of giving preference to code clarity.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 6 Jan 2023 12:05:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 6, 2023 at 12:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n>\n> Yeah, I also don't think sending extra eight bytes with stream_start\n> message is worth it. But it is fine to mention the same in the\n> comments.\n\nRight.\n\n> > 2.\n> >\n> > + * XXX Additionally, we also stop the worker if the leader apply worker\n> > + * serialize part of the transaction data due to a send timeout. This is\n> > + * because the message could be partially written to the queue and there\n> > + * is no way to clean the queue other than resending the message until it\n> > + * succeeds. 
Instead of trying to send the data which anyway would have\n> > + * been serialized and then letting the parallel apply worker deal with\n> > + * the spurious message, we stop the worker.\n> > + */\n> > + if (winfo->serialize_changes ||\n> > + list_length(ParallelApplyWorkerPool) >\n> > + (max_parallel_apply_workers_per_subscription / 2))\n> >\n> > IMHO this reason (XXX Additionally, we also stop the worker if the\n> > leader apply worker serialize part of the transaction data due to a\n> > send timeout) for stopping the worker looks a bit hackish to me. It\n> > may be a rare case so I am not talking about the performance but the\n> > reasoning behind stopping is not good. Ideally we should be able to\n> > clean up the message queue and reuse the worker.\n> >\n>\n> TBH, I don't know what is the better way to deal with this with the\n> current infrastructure. I thought we can do this as a separate\n> enhancement in the future.\n\nOkay.\n\n> > 3.\n> > + else if (shmq_res == SHM_MQ_WOULD_BLOCK)\n> > + {\n> > + /* Replay the changes from the file, if any. */\n> > + if (pa_has_spooled_message_pending())\n> > + {\n> > + pa_spooled_messages();\n> > + }\n> >\n> > I think we do not need this pa_has_spooled_message_pending() function.\n> > Because this function is just calling pa_get_fileset_state() which is\n> > acquiring mutex and getting filestate then if the filestate is not\n> > FS_EMPTY then we call pa_spooled_messages() that will again call\n> > pa_get_fileset_state() which will again acquire mutex. I think when\n> > the state is FS_SERIALIZE_IN_PROGRESS it will frequently call\n> > pa_get_fileset_state() consecutively 2 times, and I think we can\n> > easily achieve the same behavior with just one call.\n> >\n>\n> This is just to keep the code easy to follow. As this would be a rare\n> case, so thought of giving preference to code clarity.\n\nI think the code will be simpler with just one function no? 
I mean\ninstead of calling pa_has_spooled_message_pending() in if condition\nwhat if we directly call pa_spooled_messages();, this is anyway\nfetching the file_state and if the filestate is EMPTY then it can\nreturn false, and if it returns false we can execute the code which is\nthere in else condition. We might need to change the name of the\nfunction though.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 12:59:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 6, 2023 at 12:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > 3.\n> > > + else if (shmq_res == SHM_MQ_WOULD_BLOCK)\n> > > + {\n> > > + /* Replay the changes from the file, if any. */\n> > > + if (pa_has_spooled_message_pending())\n> > > + {\n> > > + pa_spooled_messages();\n> > > + }\n> > >\n> > > I think we do not need this pa_has_spooled_message_pending() function.\n> > > Because this function is just calling pa_get_fileset_state() which is\n> > > acquiring mutex and getting filestate then if the filestate is not\n> > > FS_EMPTY then we call pa_spooled_messages() that will again call\n> > > pa_get_fileset_state() which will again acquire mutex. I think when\n> > > the state is FS_SERIALIZE_IN_PROGRESS it will frequently call\n> > > pa_get_fileset_state() consecutively 2 times, and I think we can\n> > > easily achieve the same behavior with just one call.\n> > >\n> >\n> > This is just to keep the code easy to follow. As this would be a rare\n> > case, so thought of giving preference to code clarity.\n>\n> I think the code will be simpler with just one function no? 
I mean\n> instead of calling pa_has_spooled_message_pending() in if condition\n> what if we directly call pa_spooled_messages();, this is anyway\n> fetching the file_state and if the filestate is EMPTY then it can\n> return false, and if it returns false we can execute the code which is\n> there in else condition. We might need to change the name of the\n> function though.\n>\nBut anyway it is not a performance-critical path so if you think the\ncurrent way looks cleaner then I am fine with that too.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 13:01:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, January 6, 2023 3:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n\r\nHi,\r\n\r\nThanks for your comments.\r\n\r\n> On Fri, Jan 6, 2023 at 12:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> \r\n> >\r\n> > Yeah, I also don't think sending extra eight bytes with stream_start\r\n> > message is worth it. But it is fine to mention the same in the\r\n> > comments.\r\n> \r\n> Right.\r\n\r\nAdded some comment.\r\n\r\n> \r\n> > > 2.\r\n> > >\r\n> > > + * XXX Additionally, we also stop the worker if the leader apply\r\n> worker\r\n> > > + * serialize part of the transaction data due to a send timeout. This is\r\n> > > + * because the message could be partially written to the queue and\r\n> there\r\n> > > + * is no way to clean the queue other than resending the message\r\n> until it\r\n> > > + * succeeds. 
Instead of trying to send the data which anyway would\r\n> have\r\n> > > + * been serialized and then letting the parallel apply worker deal with\r\n> > > + * the spurious message, we stop the worker.\r\n> > > + */\r\n> > > + if (winfo->serialize_changes ||\r\n> > > + list_length(ParallelApplyWorkerPool) >\r\n> > > + (max_parallel_apply_workers_per_subscription / 2))\r\n> > >\r\n> > > IMHO this reason (XXX Additionally, we also stop the worker if the\r\n> > > leader apply worker serialize part of the transaction data due to a\r\n> > > send timeout) for stopping the worker looks a bit hackish to me. It\r\n> > > may be a rare case so I am not talking about the performance but the\r\n> > > reasoning behind stopping is not good. Ideally we should be able to\r\n> > > clean up the message queue and reuse the worker.\r\n> > >\r\n> >\r\n> > TBH, I don't know what is the better way to deal with this with the\r\n> > current infrastructure. I thought we can do this as a separate\r\n> > enhancement in the future.\r\n> \r\n> Okay.\r\n> \r\n> > > 3.\r\n> > > + else if (shmq_res == SHM_MQ_WOULD_BLOCK)\r\n> > > + {\r\n> > > + /* Replay the changes from the file, if any. */\r\n> > > + if (pa_has_spooled_message_pending())\r\n> > > + {\r\n> > > + pa_spooled_messages();\r\n> > > + }\r\n> > >\r\n> > > I think we do not need this pa_has_spooled_message_pending() function.\r\n> > > Because this function is just calling pa_get_fileset_state() which\r\n> > > is acquiring mutex and getting filestate then if the filestate is\r\n> > > not FS_EMPTY then we call pa_spooled_messages() that will again call\r\n> > > pa_get_fileset_state() which will again acquire mutex. I think when\r\n> > > the state is FS_SERIALIZE_IN_PROGRESS it will frequently call\r\n> > > pa_get_fileset_state() consecutively 2 times, and I think we can\r\n> > > easily achieve the same behavior with just one call.\r\n> > >\r\n> >\r\n> > This is just to keep the code easy to follow. 
As this would be a rare\r\n> > case, so thought of giving preference to code clarity.\r\n> \r\n> I think the code will be simpler with just one function no? I mean instead of\r\n> calling pa_has_spooled_message_pending() in if condition what if we directly\r\n> call pa_spooled_messages();, this is anyway fetching the file_state and if the\r\n> filestate is EMPTY then it can return false, and if it returns false we can execute\r\n> the code which is there in else condition. We might need to change the name\r\n> of the function though.\r\n\r\nChanged as suggested.\r\n\r\nI have addressed all the comments and here is the new version patch set.\r\nI also added some documents about the new lock and fixed some typos.\r\n\r\nAttach the new version patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 6 Jan 2023 10:07:57 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 6, 2023 at 3:38 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\nLooks good, but I feel in pa_process_spooled_messages_if_required()\nfunction after getting the filestate the first check should be if\n(filestate== FS_EMPTY) return false. I mean why to process through\nall the states if it is empty and we can directly exit. 
It is not a\nbig deal so if you prefer the way it is then I have no objection to\nit.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 7 Jan 2023 10:20:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Saturday, January 7, 2023 12:50 PM Dilip Kumar <dilipbalaut@gmail.com>\r\n> \r\n> On Fri, Jan 6, 2023 at 3:38 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> \r\n> Looks good, but I feel in pa_process_spooled_messages_if_required()\r\n> function after getting the filestate the first check should be if (filestate==\r\n> FS_EMPTY) return false. I mean why to process through all the states if it is\r\n> empty and we can directly exit. It is not a big deal so if you prefer the way it is\r\n> then I have no objection to it.\r\n\r\nI think your suggestion looks good, I have adjusted the code.\r\nI also rebase the patch set due to the recent commit c6e1f6.\r\nAnd here is the new version patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sat, 7 Jan 2023 05:42:59 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sat, Jan 7, 2023 at 11:13 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, January 7, 2023 12:50 PM Dilip Kumar <dilipbalaut@gmail.com>\n> >\n> > On Fri, Jan 6, 2023 at 3:38 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\n> > wrote:\n> > >\n> >\n> > Looks good, but I feel in pa_process_spooled_messages_if_required()\n> > function after getting the filestate the first check should be if (filestate==\n> > FS_EMPTY) return false. I mean why to process through all the states if it is\n> > empty and we can directly exit. 
It is not a big deal so if you prefer the way it is\n> > then I have no objection to it.\n>\n> I think your suggestion looks good, I have adjusted the code.\n> I also rebase the patch set due to the recent commit c6e1f6.\n> And here is the new version patch set.\n>\n\nLGTM\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 7 Jan 2023 14:25:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sat, Jan 7, 2023 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n\nToday, I was analyzing this patch w.r.t recent commit c6e1f62e2c and\nfound that pa_set_xact_state() should set the latch (wake up) for the\nleader worker as the leader could be waiting in\npa_wait_for_xact_state(). What do you think? But otherwise, it should\nbe okay w.r.t DDLs because this patch allows the leader worker to\nrestart logical replication for subscription parameter change which\nwill in turn stop/restart parallel workers if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 8 Jan 2023 07:44:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sunday, January 8, 2023 10:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Sat, Jan 7, 2023 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> \r\n> Today, I was analyzing this patch w.r.t recent commit c6e1f62e2c and found that\r\n> pa_set_xact_state() should set the latch (wake up) for the leader worker as the\r\n> leader could be waiting in pa_wait_for_xact_state(). What do you think? 
But\r\n> otherwise, it should be okay w.r.t DDLs because this patch allows the leader\r\n> worker to restart logical replication for subscription parameter change which will\r\n> in turn stop/restart parallel workers if required.\r\n\r\nThanks for the analysis. I agree that it would be better to signal the leader\r\nwhen setting the state to PARALLEL_TRANS_STARTED, otherwise it might slightly\r\ndelay the timing of catch the state change in pa_wait_for_xact_state(), so I\r\nhave updated the patch for the same. Besides, I also checked commit c6e1f62e2c,\r\nI think DDL operation doesn't need to wake up the parallel apply worker\r\ndirectly as the parallel apply worker doesn't start table sync and only\r\ncommunicate with the leader, so I didn't find some other places that need to be\r\nchanged.\r\n\r\nAttach the updated patch set.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Sun, 8 Jan 2023 03:58:50 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> On Sunday, January 8, 2023 10:14 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Sat, Jan 7, 2023 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > Today, I was analyzing this patch w.r.t recent commit c6e1f62e2c and\r\n> > found that\r\n> > pa_set_xact_state() should set the latch (wake up) for the leader\r\n> > worker as the leader could be waiting in pa_wait_for_xact_state().\r\n> > What do you think? But otherwise, it should be okay w.r.t DDLs because\r\n> > this patch allows the leader worker to restart logical replication for\r\n> > subscription parameter change which will in turn stop/restart parallel workers\r\n> if required.\r\n> \r\n> Thanks for the analysis. 
I agree that it would be better to signal the leader when\r\n> setting the state to PARALLEL_TRANS_STARTED, otherwise it might slightly delay\r\n> the timing of catch the state change in pa_wait_for_xact_state(), so I have\r\n> updated the patch for the same. Besides, I also checked commit c6e1f62e2c, I\r\n> think DDL operation doesn't need to wake up the parallel apply worker directly\r\n> as the parallel apply worker doesn't start table sync and only communicate with\r\n> the leader, so I didn't find some other places that need to be changed.\r\n> \r\n> Attach the updated patch set.\r\n\r\nSorry, the commit message of 0001 was accidentally deleted, just attach\r\nthe same patch set again with commit message.", "msg_date": "Sun, 8 Jan 2023 06:02:46 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Jan 8, 2023 at 11:32 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > Attach the updated patch set.\n>\n> Sorry, the commit message of 0001 was accidentally deleted, just attach\n> the same patch set again with commit message.\n>\n\nPushed the first (0001) patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Jan 2023 14:21:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, Thanks for the great new feature.\r\n\r\nApplied patches include adding wait events LogicalParallelApplyMain, LogicalParallelApplyStateChange. \r\nHowever, it seems that monitoring.sgml only contains descriptions for pg_locks. 
The attached patch adds relevant wait event information.\r\nPlease update if you have a better description.\r\n\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Amit Kapila <amit.kapila16@gmail.com> \r\nSent: Monday, January 9, 2023 5:51 PM\r\nTo: houzj.fnst@fujitsu.com\r\nCc: Masahiko Sawada <sawada.mshk@gmail.com>; wangw.fnst@fujitsu.com; Peter Smith <smithpb2250@gmail.com>; shiy.fnst@fujitsu.com; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Dilip Kumar <dilipbalaut@gmail.com>\r\nSubject: Re: Perform streaming logical transactions by background workers and parallel apply\r\n\r\nOn Sun, Jan 8, 2023 at 11:32 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n>\r\n> On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> > Attach the updated patch set.\r\n>\r\n> Sorry, the commit message of 0001 was accidentally deleted, just \r\n> attach the same patch set again with commit message.\r\n>\r\n\r\nPushed the first (0001) patch.\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.", "msg_date": "Mon, 9 Jan 2023 09:32:01 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 9, 2023 5:32 PM Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\r\n> \r\n> Hi, Thanks for the great new feature.\r\n> \r\n> Applied patches include adding wait events LogicalParallelApplyMain,\r\n> LogicalParallelApplyStateChange.\r\n> However, it seems that monitoring.sgml only contains descriptions for\r\n> pg_locks. The attached patch adds relevant wait event information.\r\n> Please update if you have a better description.\r\n\r\nThanks for reporting. 
I think for LogicalParallelApplyStateChange we'd better\r\ndocument it in a consistent style with LogicalSyncStateChange, so I have\r\nslightly adjusted the patch for the same.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 9 Jan 2023 10:15:20 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Thanks for the reply.\r\n\r\n> Thanks for reporting. I think for LogicalParallelApplyStateChange we'd better document it in a consistent style with LogicalSyncStateChange, \r\n> so I have slightly adjusted the patch for the same.\r\n\r\nI think the description in the patch you attached is better.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> \r\nSent: Monday, January 9, 2023 7:15 PM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; Amit Kapila <amit.kapila16@gmail.com>\r\nCc: Masahiko Sawada <sawada.mshk@gmail.com>; wangw.fnst@fujitsu.com; Peter Smith <smithpb2250@gmail.com>; shiy.fnst@fujitsu.com; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Dilip Kumar <dilipbalaut@gmail.com>\r\nSubject: RE: Perform streaming logical transactions by background workers and parallel apply\r\n\r\nOn Monday, January 9, 2023 5:32 PM Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\r\n> \r\n> Hi, Thanks for the great new feature.\r\n> \r\n> Applied patches include adding wait events LogicalParallelApplyMain, \r\n> LogicalParallelApplyStateChange.\r\n> However, it seems that monitoring.sgml only contains descriptions for \r\n> pg_locks. The attached patch adds relevant wait event information.\r\n> Please update if you have a better description.\r\n\r\nThanks for reporting. 
I think for LogicalParallelApplyStateChange we'd better document it in a consistent style with LogicalSyncStateChange, so I have slightly adjusted the patch for the same.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Mon, 9 Jan 2023 12:44:12 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 9, 2023 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Sun, Jan 8, 2023 at 11:32 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > > Attach the updated patch set.\r\n> >\r\n> > Sorry, the commit message of 0001 was accidentally deleted, just\r\n> > attach the same patch set again with commit message.\r\n> >\r\n> \r\n> Pushed the first (0001) patch.\r\n\r\nThanks for pushing, here are the remaining patches.\r\nI reordered the patch number to put patches that are easier to\r\ncommit in the front of others.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 10 Jan 2023 04:55:55 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hello.\n\nAt Mon, 9 Jan 2023 14:21:03 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Pushed the first (0001) patch.\n\nIt added the following error message.\n\n+\tseg = dsm_attach(handle);\n+\tif (!seg)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+\t\t\t\t errmsg(\"unable to map dynamic shared memory segment\")));\n\nOn the other hand we already have the following one in parallel.c\n(another in pg_prewarm)\n\n\tseg = dsm_attach(DatumGetUInt32(main_arg));\n\tif (seg == 
NULL)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t errmsg(\"could not map dynamic shared memory segment\")));\n\nAlthough I don't see a technical difference between the two, all the\nother occurances including the just above (except test_shm_mq) use\n\"could not\". A faint memory in my non-durable memory tells me that we\nhave a policy that we use \"can/could not\" than \"unable\".\n\n(Mmm. I find ones in StartBackgroundWorker and sepgsql_client_auth.)\n\nShouldn't we use the latter than the former? If that's true, it seems\nto me that test_shm_mq also needs the same amendment to avoid the same\nmistake in future.\n\n=====\nindex 2e5914d5d9..a2d7474ed4 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -891,7 +891,7 @@ ParallelApplyWorkerMain(Datum main_arg)\n if (!seg)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n- errmsg(\"unable to map dynamic shared memory segment\")));\n+ errmsg(\"could not map dynamic shared memory segment\")));\n \n toc = shm_toc_attach(PG_LOGICAL_APPLY_SHM_MAGIC, dsm_segment_address(seg));\n if (!toc)\ndiff --git a/src/test/modules/test_shm_mq/worker.c b/src/test/modules/test_shm_mq/worker.c\nindex 8807727337..005b56023b 100644\n--- a/src/test/modules/test_shm_mq/worker.c\n+++ b/src/test/modules/test_shm_mq/worker.c\n@@ -81,7 +81,7 @@ test_shm_mq_main(Datum main_arg)\n if (seg == NULL)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n- errmsg(\"unable to map dynamic shared memory segment\")));\n+ errmsg(\"could not map dynamic shared memory segment\")));\n toc = shm_toc_attach(PG_TEST_SHM_MQ_MAGIC, dsm_segment_address(seg));\n if (toc == NULL)\n ereport(ERROR,\n=====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Jan 2023 14:46:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 9 Jan 2023 14:21:03 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > Pushed the first (0001) patch.\n>\n> It added the following error message.\n>\n> + seg = dsm_attach(handle);\n> + if (!seg)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"unable to map dynamic shared memory segment\")));\n>\n> On the other hand we already have the following one in parallel.c\n> (another in pg_prewarm)\n>\n> seg = dsm_attach(DatumGetUInt32(main_arg));\n> if (seg == NULL)\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"could not map dynamic shared memory segment\")));\n>\n> Although I don't see a technical difference between the two, all the\n> other occurances including the just above (except test_shm_mq) use\n> \"could not\". A faint memory in my non-durable memory tells me that we\n> have a policy that we use \"can/could not\" than \"unable\".\n>\n\nRight, it is mentioned in docs [1] (see section \"Tricky Words to Avoid\").\n\n> (Mmm. I find ones in StartBackgroundWorker and sepgsql_client_auth.)\n>\n> Shouldn't we use the latter than the former? 
If that's true, it seems\n> to me that test_shm_mq also needs the same amendment to avoid the same\n> mistake in future.\n>\n> =====\n> index 2e5914d5d9..a2d7474ed4 100644\n> --- a/src/backend/replication/logical/applyparallelworker.c\n> +++ b/src/backend/replication/logical/applyparallelworker.c\n> @@ -891,7 +891,7 @@ ParallelApplyWorkerMain(Datum main_arg)\n> if (!seg)\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> - errmsg(\"unable to map dynamic shared memory segment\")));\n> + errmsg(\"could not map dynamic shared memory segment\")));\n>\n> toc = shm_toc_attach(PG_LOGICAL_APPLY_SHM_MAGIC, dsm_segment_address(seg));\n> if (!toc)\n> diff --git a/src/test/modules/test_shm_mq/worker.c b/src/test/modules/test_shm_mq/worker.c\n> index 8807727337..005b56023b 100644\n> --- a/src/test/modules/test_shm_mq/worker.c\n> +++ b/src/test/modules/test_shm_mq/worker.c\n> @@ -81,7 +81,7 @@ test_shm_mq_main(Datum main_arg)\n> if (seg == NULL)\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> - errmsg(\"unable to map dynamic shared memory segment\")));\n> + errmsg(\"could not map dynamic shared memory segment\")));\n> toc = shm_toc_attach(PG_TEST_SHM_MQ_MAGIC, dsm_segment_address(seg));\n> if (toc == NULL)\n> ereport(ERROR,\n> =====\n>\n\nCan you please start a new thread and post these changes as we are\nproposing to change existing message as well?\n\n\n[1] - https://www.postgresql.org/docs/devel/error-style-guide.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Jan 2023 12:01:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 10, 2023 at 10:26 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 9, 2023 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Jan 8, 2023 at 11:32 AM 
houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > > Attach the updated patch set.\n> > >\n> > > Sorry, the commit message of 0001 was accidentally deleted, just\n> > > attach the same patch set again with commit message.\n> > >\n> >\n> > Pushed the first (0001) patch.\n>\n> Thanks for pushing, here are the remaining patches.\n> I reordered the patch number to put patches that are easier to\n> commit in the front of others.\n\nI was looking into 0001, IMHO the pid should continue to represent the\nmain apply worker. So the pid will always show the main apply worker\nwhich is actually receiving all the changes for the subscription (in\nshort working as logical receiver) and if it is applying changes\nthrough a parallel worker then it should put the parallel worker pid\nin a new column called 'parallel_worker_pid' or\n'parallel_apply_worker_pid' otherwise NULL. Thoughts?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 17:17:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 10, 2023 7:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 10, 2023 at 10:26 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, January 9, 2023 4:51 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Sun, Jan 8, 2023 at 11:32 AM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > > > Attach the updated patch set.\r\n> > > >\r\n> > > > Sorry, the commit message of 0001 was accidentally deleted, just\r\n> > > > 
attach the same patch set again with commit message.\r\n> > > >\r\n> > >\r\n> > > Pushed the first (0001) patch.\r\n> >\r\n> > Thanks for pushing, here are the remaining patches.\r\n> > I reordered the patch number to put patches that are easier to commit\r\n> > in the front of others.\r\n> \r\n> I was looking into 0001, IMHO the pid should continue to represent the main\r\n> apply worker. So the pid will always show the main apply worker which is\r\n> actually receiving all the changes for the subscription (in short working as\r\n> logical receiver) and if it is applying changes through a parallel worker then it\r\n> should put the parallel worker pid in a new column called 'parallel_worker_pid'\r\n> or 'parallel_apply_worker_pid' otherwise NULL. Thoughts?\r\n\r\nThanks for the comment.\r\n\r\nIIRC, you mean something like the following, right?\r\n(sorry if I misunderstood)\r\n--\r\nFor parallel apply worker:\r\n'pid' column shows the pid of the leader, new column parallel_worker_pid shows its own pid\r\n\r\nFor leader apply worker:\r\n'pid' column shows its own pid, new column parallel_worker_pid shows 0\r\n--\r\n\r\nIf so, I am not sure if the above is better, because it changes the meaning of the\r\nexisting 'pid' column: the 'pid' would no longer represent the pid of\r\nthe worker itself. Besides, it seems inconsistent with what we have for\r\nparallel query workers in pg_stat_activity. 
What do you think ?\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n", "msg_date": "Wed, 11 Jan 2023 04:04:04 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 11, 2023 at 9:34 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 10, 2023 7:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I was looking into 0001, IMHO the pid should continue to represent the main\n> > apply worker. So the pid will always show the main apply worker which is\n> > actually receiving all the changes for the subscription (in short working as\n> > logical receiver) and if it is applying changes through a parallel worker then it\n> > should put the parallel worker pid in a new column called 'parallel_worker_pid'\n> > or 'parallel_apply_worker_pid' otherwise NULL. Thoughts?\n>\n> Thanks for the comment.\n>\n> IIRC, you mean something like following, right ?\n> (sorry if I misunderstood)\n> --\n> For parallel apply worker:\n> 'pid' column shows the pid of the leader, new column parallel_worker_pid shows its own pid\n>\n> For leader apply worker:\n> 'pid' column shows its own pid, new column parallel_worker_pid shows 0\n> --\n>\n> If so, I am not sure if the above is better, because it is changing the\n> existing column's('pid') meaning, the 'pid' will no longer represent the pid of\n> the worker itself. Besides, it seems not consistent with what we have for\n> parallel query workers in pg_stat_activity. What do you think ?\n>\n\n+1. I think it makes sense to keep it similar to pg_stat_activity.\n\n+ <para>\n+ Process ID of the leader apply worker, if this process is a apply\n+ parallel worker. 
NULL if this process is a leader apply worker or a\n+ synchronization worker.\n\nCan we change the above description to something like: \"Process ID of\nthe leader apply worker, if this process is a parallel apply worker.\nNULL if this process is a leader apply worker or does not participate\nin parallel apply, or a synchronization worker.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:51:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 11, 2023 at 9:34 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n\n> > I was looking into 0001, IMHO the pid should continue to represent the main\n> > apply worker. So the pid will always show the main apply worker which is\n> > actually receiving all the changes for the subscription (in short working as\n> > logical receiver) and if it is applying changes through a parallel worker then it\n> > should put the parallel worker pid in a new column called 'parallel_worker_pid'\n> > or 'parallel_apply_worker_pid' otherwise NULL. Thoughts?\n>\n> Thanks for the comment.\n>\n> IIRC, you mean something like following, right ?\n> (sorry if I misunderstood)\n> --\n> For parallel apply worker:\n> 'pid' column shows the pid of the leader, new column parallel_worker_pid shows its own pid\n>\n> For leader apply worker:\n> 'pid' column shows its own pid, new column parallel_worker_pid shows 0\n> --\n>\n> If so, I am not sure if the above is better, because it is changing the\n> existing column's('pid') meaning, the 'pid' will no longer represent the pid of\n> the worker itself. Besides, it seems not consistent with what we have for\n> parallel query workers in pg_stat_activity. What do you think ?\n\nActually, I always imagined the pid is the process id of the worker\nwhich is actually receiving the changes for the subscriber. 
Keeping\nthe pid to represent the leader makes more sense. But as you said,\nparallel workers for backends are already following the terminology\nused in your patch (showing the pid as the pid of the applying\nworker), so I am fine with the way you have it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:33:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, here are some review comments for patch v78-0001.\n\n======\n\nGeneral\n\n1. (terminology)\n\nAFAIK until now we’ve been referring everywhere\n(docs/comments/code) to the parent apply worker as the \"leader apply\nworker\". Not the \"main apply worker\". Not the \"apply leader worker\".\nNot any other variations...\n\nFrom this POV I think the worker member \"apply_leader_pid\" would be\nbetter named \"leader_apply_pid\", but I see that this was already\ncommitted to HEAD differently.\n\nMaybe it is not possible (or you don't want) to change that internal\nmember name but IMO at least all the new code and docs should try to\nbe using consistent terminology (e.g. leader_apply_XXX) where\npossible.\n\n======\n\nCommit message\n\n2.\n\nmain_worker_pid is Process ID of the leader apply worker, if this process is a\napply parallel worker. NULL if this process is a leader apply worker or a\nsynchronization worker.\n\nIIUC, this text is just cut/paste from the monitoring.sgml. 
In a\nreview comment below I suggest some changes to that text, so then this\ncommit message should also change to be the same.\n\n~~\n\n3.\n\nThe new column can make it easier to distinguish leader apply worker and apply\nparallel worker which is also similar to the 'leader_pid' column in\npg_stat_activity.\n\nSUGGESTION\nThe new column makes it easier to distinguish parallel apply workers\nfrom other kinds of workers. It is implemented this way to be similar\nto the 'leader_pid' column in pg_stat_activity.\n\n======\n\ndoc/src/sgml/logical-replication.sgml\n\n4.\n\n+ being synchronized. Moreover, if the streaming transaction is applied in\n+ parallel, there will be additional workers.\n\nSUGGESTION\nthere will be additional workers -> there may be additional parallel\napply workers\n\n======\n\ndoc/src/sgml/monitoring.sgml\n\n5. pg_stat_subscription\n\n@@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n\n <row>\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>apply_leader_pid</structfield> <type>integer</type>\n+ </para>\n+ <para>\n+ Process ID of the leader apply worker, if this process is a apply\n+ parallel worker. NULL if this process is a leader apply worker or a\n+ synchronization worker.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n <structfield>relid</structfield> <type>oid</type>\n </para>\n <para>\n OID of the relation that the worker is synchronizing; null for the\n- main apply worker\n+ main apply worker and the parallel apply worker\n </para></entry>\n </row>\n\n5a.\n\n(Same as general comment #1 about terminology)\n\n\"apply_leader_pid\" --> \"leader_apply_pid\"\n\n~~\n\n5b.\n\nThe current text feels awkward. 
I see it was copied from the similar\ntext of 'pg_stat_activity' but perhaps it can be simplified a bit.\n\nSUGGESTION\nProcess ID of the leader apply worker if this process is a parallel\napply worker; otherwise NULL.\n\n~~\n\n5c.\nBEFORE\nnull for the main apply worker and the parallel apply worker\n\nAFTER\nnull for the leader apply worker and parallel apply workers\n\n~~\n\n5c.\n\n <structfield>relid</structfield> <type>oid</type>\n </para>\n <para>\n OID of the relation that the worker is synchronizing; null for the\n- main apply worker\n+ main apply worker and the parallel apply worker\n </para></entry>\n\n\nmain apply worker -> leader apply worker\n\n~~~\n\n6.\n\n@@ -3212,7 +3223,7 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n </para>\n <para>\n Last write-ahead log location received, the initial value of\n- this field being 0\n+ this field being 0; null for the parallel apply worker\n </para></entry>\n </row>\n\nBEFORE\nnull for the parallel apply worker\n\nAFTER\nnull for parallel apply workers\n\n~~~\n\n7.\n\n@@ -3221,7 +3232,8 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n <structfield>last_msg_send_time</structfield> <type>timestamp\nwith time zone</type>\n </para>\n <para>\n- Send time of last message received from origin WAL sender\n+ Send time of last message received from origin WAL sender; null for the\n+ parallel apply worker\n </para></entry>\n </row>\n\n(same as #6)\n\nBEFORE\nnull for the parallel apply worker\n\nAFTER\nnull for parallel apply workers\n\n~~~\n\n8.\n\n@@ -3230,7 +3242,8 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n <structfield>last_msg_receipt_time</structfield>\n<type>timestamp with time zone</type>\n </para>\n <para>\n- Receipt time of last message received from origin WAL sender\n+ Receipt time of last message received from origin WAL sender; null for\n+ the parallel apply worker\n </para></entry>\n 
</row>\n\n(same as #6)\n\nBEFORE\nnull for the parallel apply worker\n\nAFTER\nnull for parallel apply workers\n\n~~~\n\n9.\n\n@@ -3239,7 +3252,8 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n <structfield>latest_end_lsn</structfield> <type>pg_lsn</type>\n </para>\n <para>\n- Last write-ahead log location reported to origin WAL sender\n+ Last write-ahead log location reported to origin WAL sender; null for\n+ the parallel apply worker\n </para></entry>\n </row>\n\n(same as #6)\n\nBEFORE\nnull for the parallel apply worker\n\nAFTER\nnull for parallel apply workers\n\n~~~\n\n10.\n\n@@ -3249,7 +3263,7 @@ SELECT pid, wait_event_type, wait_event FROM\npg_stat_activity WHERE wait_event i\n </para>\n <para>\n Time of last write-ahead log location reported to origin WAL\n- sender\n+ sender; null for the parallel apply worker\n </para></entry>\n </row>\n </tbody>\n\n(same as #6)\n\nBEFORE\nnull for the parallel apply worker\n\nAFTER\nnull for parallel apply workers\n\n======\n\nsrc/backend/catalog/system_views.sql\n\n11.\n\n@@ -949,6 +949,7 @@ CREATE VIEW pg_stat_subscription AS\n su.oid AS subid,\n su.subname,\n st.pid,\n+ st.apply_leader_pid,\n st.relid,\n st.received_lsn,\n st.last_msg_send_time,\n\n(Same as general comment #1 about terminology)\n\n\"apply_leader_pid\" --> \"leader_apply_pid\"\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n12.\n\n+ if (worker.apply_leader_pid == InvalidPid)\n nulls[3] = true;\n else\n- values[3] = LSNGetDatum(worker.last_lsn);\n- if (worker.last_send_time == 0)\n+ values[3] = Int32GetDatum(worker.apply_leader_pid);\n+\n\n12a.\n\n(Same as general comment #1 about terminology)\n\n\"apply_leader_pid\" --> \"leader_apply_pid\"\n\n~~\n\n12b.\n\nI wondered if here the code should be using the\nisParallelApplyWorker(worker) macro here for readability.\n\ne.g.\n\nif (isParallelApplyWorker(worker))\nvalues[3] = Int32GetDatum(worker.apply_leader_pid);\nelse\n nulls[3] = 
true;\n\n======\n\nsrc/include/catalog/pg_proc.dat\n\n13.\n\n+ proallargtypes =>\n'{oid,oid,oid,int4,int4,pg_lsn,timestamptz,timestamptz,pg_lsn,timestamptz}',\n+ proargmodes => '{i,o,o,o,o,o,o,o,o,o}',\n+ proargnames =>\n'{subid,subid,relid,pid,apply_leader_pid,received_lsn,last_msg_send_time,last_msg_receipt_time,latest_end_lsn,latest_end_time}',\n\n(Same as general comment #1 about terminology)\n\n\"apply_leader_pid\" --> \"leader_apply_pid\"\n\n======\n\nsrc/test/regress/expected/rules.out\n\n14.\n\n@@ -2094,6 +2094,7 @@ pg_stat_ssl| SELECT s.pid,\n pg_stat_subscription| SELECT su.oid AS subid,\n su.subname,\n st.pid,\n+ st.apply_leader_pid,\n st.relid,\n st.received_lsn,\n st.last_msg_send_time,\n@@ -2101,7 +2102,7 @@ pg_stat_subscription| SELECT su.oid AS subid,\n st.latest_end_lsn,\n st.latest_end_time\n FROM (pg_subscription su\n- LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\npid, received_lsn, last_msg_send_time, last_msg_receipt_time,\nlatest_end_lsn, latest_end_time) ON ((st.subid = su.oid)));\n+ LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\npid, apply_leader_pid, received_lsn, last_msg_send_time,\nlast_msg_receipt_time, latest_end_lsn, latest_end_time) ON ((st.subid\n= su.oid)));\n pg_stat_subscription_stats| SELECT ss.subid,\n s.subname,\n ss.apply_error_count,\n\n(Same comment as elsewhere)\n\n\"apply_leader_pid\" --> \"leader_apply_pid\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 12 Jan 2023 15:23:49 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 12, 2023 at 9:54 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> doc/src/sgml/monitoring.sgml\n>\n> 5. 
pg_stat_subscription\n>\n> @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event FROM\n> pg_stat_activity WHERE wait_event i\n>\n> <row>\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>apply_leader_pid</structfield> <type>integer</type>\n> + </para>\n> + <para>\n> + Process ID of the leader apply worker, if this process is a apply\n> + parallel worker. NULL if this process is a leader apply worker or a\n> + synchronization worker.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> <structfield>relid</structfield> <type>oid</type>\n> </para>\n> <para>\n> OID of the relation that the worker is synchronizing; null for the\n> - main apply worker\n> + main apply worker and the parallel apply worker\n> </para></entry>\n> </row>\n>\n> 5a.\n>\n> (Same as general comment #1 about terminology)\n>\n> \"apply_leader_pid\" --> \"leader_apply_pid\"\n>\n\nHow about naming this as just leader_pid? I think it could be helpful\nin the future if we decide to parallelize initial sync (aka parallel\ncopy) because then we could use this for the leader PID of parallel\nsync workers as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:34:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 12, 2023 at 9:54 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > doc/src/sgml/monitoring.sgml\n> >\n> > 5. 
pg_stat_subscription\n> >\n> > @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event FROM\n> > pg_stat_activity WHERE wait_event i\n> >\n> > <row>\n> > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>apply_leader_pid</structfield> <type>integer</type>\n> > + </para>\n> > + <para>\n> > + Process ID of the leader apply worker, if this process is a apply\n> > + parallel worker. NULL if this process is a leader apply worker or a\n> > + synchronization worker.\n> > + </para></entry>\n> > + </row>\n> > +\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > <structfield>relid</structfield> <type>oid</type>\n> > </para>\n> > <para>\n> > OID of the relation that the worker is synchronizing; null for the\n> > - main apply worker\n> > + main apply worker and the parallel apply worker\n> > </para></entry>\n> > </row>\n> >\n> > 5a.\n> >\n> > (Same as general comment #1 about terminology)\n> >\n> > \"apply_leader_pid\" --> \"leader_apply_pid\"\n> >\n>\n> How about naming this as just leader_pid? I think it could be helpful\n> in the future if we decide to parallelize initial sync (aka parallel\n> copy) because then we could use this for the leader PID of parallel\n> sync workers as well.\n>\n> --\n\nI still prefer leader_apply_pid.\nleader_pid does not tell which 'operation' it belongs to. 'apply'\ngives the clarity that it is apply related process.\n\nThe terms used in patch look very confusing. I had to read a few lines\nmultiple times to understand it.\n\n1.\nSummary says 'main_worker_pid' to be added but I do not see\n'main_worker_pid' added in pg_stat_subscription, instead I see\n'apply_leader_pid'. Am I missing something? 
Also, as stated above\n'leader_apply_pid' makes more sense.\nit is better to correct it everywhere (apply leader-->leader apply).\nOnce that is done, it can be reviewed again.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 12 Jan 2023 16:21:09 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 12, 2023 at 4:21 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Thu, Jan 12, 2023 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jan 12, 2023 at 9:54 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > >\n> > > doc/src/sgml/monitoring.sgml\n> > >\n> > > 5. pg_stat_subscription\n> > >\n> > > @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event FROM\n> > > pg_stat_activity WHERE wait_event i\n> > >\n> > > <row>\n> > > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>apply_leader_pid</structfield> <type>integer</type>\n> > > + </para>\n> > > + <para>\n> > > + Process ID of the leader apply worker, if this process is a apply\n> > > + parallel worker. NULL if this process is a leader apply worker or a\n> > > + synchronization worker.\n> > > + </para></entry>\n> > > + </row>\n> > > +\n> > > + <row>\n> > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > <structfield>relid</structfield> <type>oid</type>\n> > > </para>\n> > > <para>\n> > > OID of the relation that the worker is synchronizing; null for the\n> > > - main apply worker\n> > > + main apply worker and the parallel apply worker\n> > > </para></entry>\n> > > </row>\n> > >\n> > > 5a.\n> > >\n> > > (Same as general comment #1 about terminology)\n> > >\n> > > \"apply_leader_pid\" --> \"leader_apply_pid\"\n> > >\n> >\n> > How about naming this as just leader_pid? 
I think it could be helpful\n> > in the future if we decide to parallelize initial sync (aka parallel\n> > copy) because then we could use this for the leader PID of parallel\n> > sync workers as well.\n> >\n> > --\n>\n> I still prefer leader_apply_pid.\n> leader_pid does not tell which 'operation' it belongs to. 'apply'\n> gives the clarity that it is apply related process.\n>\n\nBut then do you suggest that tomorrow if we allow parallel sync\nworkers then we have a separate column leader_sync_pid? I think that\ndoesn't sound like a good idea and moreover one can refer to docs for\nclarification.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Jan 2023 16:37:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, January 12, 2023 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Jan 12, 2023 at 4:21 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> >\r\n> > On Thu, Jan 12, 2023 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Jan 12, 2023 at 9:54 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > >\r\n> > > > doc/src/sgml/monitoring.sgml\r\n> > > >\r\n> > > > 5. pg_stat_subscription\r\n> > > >\r\n> > > > @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event\r\n> > > > FROM pg_stat_activity WHERE wait_event i\r\n> > > >\r\n> > > > <row>\r\n> > > > <entry role=\"catalog_table_entry\"><para\r\n> > > > role=\"column_definition\">\r\n> > > > + <structfield>apply_leader_pid</structfield>\r\n> <type>integer</type>\r\n> > > > + </para>\r\n> > > > + <para>\r\n> > > > + Process ID of the leader apply worker, if this process is a apply\r\n> > > > + parallel worker. 
NULL if this process is a leader apply worker or a\r\n> > > > + synchronization worker.\r\n> > > > + </para></entry>\r\n> > > > + </row>\r\n> > > > +\r\n> > > > + <row>\r\n> > > > + <entry role=\"catalog_table_entry\"><para\r\n> > > > + role=\"column_definition\">\r\n> > > > <structfield>relid</structfield> <type>oid</type>\r\n> > > > </para>\r\n> > > > <para>\r\n> > > > OID of the relation that the worker is synchronizing; null for the\r\n> > > > - main apply worker\r\n> > > > + main apply worker and the parallel apply worker\r\n> > > > </para></entry>\r\n> > > > </row>\r\n> > > >\r\n> > > > 5a.\r\n> > > >\r\n> > > > (Same as general comment #1 about terminology)\r\n> > > >\r\n> > > > \"apply_leader_pid\" --> \"leader_apply_pid\"\r\n> > > >\r\n> > >\r\n> > > How about naming this as just leader_pid? I think it could be\r\n> > > helpful in the future if we decide to parallelize initial sync (aka\r\n> > > parallel\r\n> > > copy) because then we could use this for the leader PID of parallel\r\n> > > sync workers as well.\r\n> > >\r\n> > > --\r\n> >\r\n> > I still prefer leader_apply_pid.\r\n> > leader_pid does not tell which 'operation' it belongs to. 'apply'\r\n> > gives the clarity that it is apply related process.\r\n> >\r\n> \r\n> But then do you suggest that tomorrow if we allow parallel sync workers then\r\n> we have a separate column leader_sync_pid? 
I think that doesn't sound like a\r\n> good idea and moreover one can refer to docs for clarification.\r\n\r\nI agree that leader_pid would be better not only for future parallel copy sync feature,\r\nbut also it's more consistent with the leader_pid column in pg_stat_activity.\r\n\r\nAnd here is the version patch which addressed Peter's comments and renamed all\r\nthe related stuff to leader_pid.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 12 Jan 2023 12:34:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, January 12, 2023 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Hi, here are some review comments for patch v78-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> General\r\n> \r\n> 1. (terminology)\r\n> \r\n> AFAIK everywhere until now we’ve been referring everywhere\r\n> (docs/comments/code) to the parent apply worker as the \"leader apply\r\n> worker\". Not the \"main apply worker\". Not the \"apply leader worker\".\r\n> Not any other variations...\r\n> \r\n> From this POV I think the worker member \"apply_leader_pid\" would be better\r\n> named \"leader_apply_pid\", but I see that this was already committed to\r\n> HEAD differently.\r\n> \r\n> Maybe it is not possible (or you don't want) to change that internal member\r\n> name but IMO at least all the new code and docs should try to be using\r\n> consistent terminology (e.g. leader_apply_XXX) where possible.\r\n> \r\n> ======\r\n> \r\n> Commit message\r\n> \r\n> 2.\r\n> \r\n> main_worker_pid is Process ID of the leader apply worker, if this process is a\r\n> apply parallel worker. NULL if this process is a leader apply worker or a\r\n> synchronization worker.\r\n> \r\n> IIUC, this text is just cut/paste from the monitoring.sgml. 
In a review comment\r\n> below I suggest some changes to that text, so then this commit message\r\n> should also change to be the same.\r\n\r\nChanged.\r\n\r\n> ~~\r\n> \r\n> 3.\r\n> \r\n> The new column can make it easier to distinguish leader apply worker and\r\n> apply parallel worker which is also similar to the 'leader_pid' column in\r\n> pg_stat_activity.\r\n> \r\n> SUGGESTION\r\n> The new column makes it easier to distinguish parallel apply workers from\r\n> other kinds of workers. It is implemented this way to be similar to the\r\n> 'leader_pid' column in pg_stat_activity.\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> doc/src/sgml/logical-replication.sgml\r\n> \r\n> 4.\r\n> \r\n> + being synchronized. Moreover, if the streaming transaction is applied in\r\n> + parallel, there will be additional workers.\r\n> \r\n> SUGGESTION\r\n> there will be additional workers -> there may be additional parallel apply\r\n> workers\r\n\r\nChanged.\r\n\r\n> ======\r\n> \r\n> doc/src/sgml/monitoring.sgml\r\n> \r\n> 5. pg_stat_subscription\r\n> \r\n> @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> \r\n> <row>\r\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>apply_leader_pid</structfield> <type>integer</type>\r\n> + </para>\r\n> + <para>\r\n> + Process ID of the leader apply worker, if this process is a apply\r\n> + parallel worker. 
NULL if this process is a leader apply worker or a\r\n> + synchronization worker.\r\n> + </para></entry>\r\n> + </row>\r\n> +\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> <structfield>relid</structfield> <type>oid</type>\r\n> </para>\r\n> <para>\r\n> OID of the relation that the worker is synchronizing; null for the\r\n> - main apply worker\r\n> + main apply worker and the parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> 5a.\r\n> \r\n> (Same as general comment #1 about terminology)\r\n> \r\n> \"apply_leader_pid\" --> \"leader_apply_pid\"\r\n\r\nI changed this and all related stuff to \"leader_pid\" as I agree with Amit that\r\nthis might be useful for future features and is more consistent with the\r\nleader_pid in pg_stat_activity.\r\n\r\n> \r\n> ~~\r\n> \r\n> 5b.\r\n> \r\n> The current text feels awkward. I see it was copied from the similar text of\r\n> 'pg_stat_activity' but perhaps it can be simplified a bit.\r\n> \r\n> SUGGESTION\r\n> Process ID of the leader apply worker if this process is a parallel apply worker;\r\n> otherwise NULL.\r\n\r\nI slightly adjusted this according Amit's suggestion which I think would provide\r\nmore information.\r\n\r\n\"Process ID of the leader apply worker, if this process is a parallel apply worker.\r\nNULL if this process is a leader apply worker or does not participate in parallel apply, or a synchronization worker.\"\r\n\"\r\n\r\n> ~~\r\n> \r\n> 5c.\r\n> BEFORE\r\n> null for the main apply worker and the parallel apply worker\r\n> \r\n> AFTER\r\n> null for the leader apply worker and parallel apply workers\r\n\r\nChanged.\r\n\r\n> ~~\r\n> \r\n> 5c.\r\n> \r\n> <structfield>relid</structfield> <type>oid</type>\r\n> </para>\r\n> <para>\r\n> OID of the relation that the worker is synchronizing; null for the\r\n> - main apply worker\r\n> + main apply worker and the parallel apply worker\r\n> </para></entry>\r\n> \r\n> \r\n> main apply worker -> leader apply 
worker\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> \r\n> @@ -3212,7 +3223,7 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> </para>\r\n> <para>\r\n> Last write-ahead log location received, the initial value of\r\n> - this field being 0\r\n> + this field being 0; null for the parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> BEFORE\r\n> null for the parallel apply worker\r\n> \r\n> AFTER\r\n> null for parallel apply workers\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 7.\r\n> \r\n> @@ -3221,7 +3232,8 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> <structfield>last_msg_send_time</structfield> <type>timestamp\r\n> with time zone</type>\r\n> </para>\r\n> <para>\r\n> - Send time of last message received from origin WAL sender\r\n> + Send time of last message received from origin WAL sender; null for\r\n> the\r\n> + parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> (same as #6)\r\n> \r\n> BEFORE\r\n> null for the parallel apply worker\r\n> \r\n> AFTER\r\n> null for parallel apply workers\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 8.\r\n> \r\n> @@ -3230,7 +3242,8 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> <structfield>last_msg_receipt_time</structfield>\r\n> <type>timestamp with time zone</type>\r\n> </para>\r\n> <para>\r\n> - Receipt time of last message received from origin WAL sender\r\n> + Receipt time of last message received from origin WAL sender; null for\r\n> + the parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> (same as #6)\r\n> \r\n> BEFORE\r\n> null for the parallel apply worker\r\n> \r\n> AFTER\r\n> null for parallel apply workers\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 9.\r\n> \r\n> @@ -3239,7 +3252,8 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> <structfield>latest_end_lsn</structfield> 
<type>pg_lsn</type>\r\n> </para>\r\n> <para>\r\n> - Last write-ahead log location reported to origin WAL sender\r\n> + Last write-ahead log location reported to origin WAL sender; null for\r\n> + the parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> \r\n> (same as #6)\r\n> \r\n> BEFORE\r\n> null for the parallel apply worker\r\n> \r\n> AFTER\r\n> null for parallel apply workers\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 10.\r\n> \r\n> @@ -3249,7 +3263,7 @@ SELECT pid, wait_event_type, wait_event FROM\r\n> pg_stat_activity WHERE wait_event i\r\n> </para>\r\n> <para>\r\n> Time of last write-ahead log location reported to origin WAL\r\n> - sender\r\n> + sender; null for the parallel apply worker\r\n> </para></entry>\r\n> </row>\r\n> </tbody>\r\n> \r\n> (same as #6)\r\n> \r\n> BEFORE\r\n> null for the parallel apply worker\r\n> \r\n> AFTER\r\n> null for parallel apply workers\r\n> \r\n\r\nChanged.\r\n\r\n> 12b.\r\n> \r\n> I wondered if here the code should be using the\r\n> isParallelApplyWorker(worker) macro here for readability.\r\n> \r\n> e.g.\r\n> \r\n> if (isParallelApplyWorker(worker))\r\n> values[3] = Int32GetDatum(worker.apply_leader_pid);\r\n> else\r\n> nulls[3] = true;\r\n\r\nChanged.\r\n\r\nBest Regards,\r\nHou Zhijie\r\n", "msg_date": "Thu, 12 Jan 2023 12:34:08 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v79-0001.\n\n======\n\nGeneral\n\n1.\n\nWhen Amit suggested [1] changing the name just to \"leader_pid\" instead\nof \"leader_apply_pid\" I thought he was only referring to changing the\nview column name, not also the internal member names of the worker\nstructure. 
Maybe it is OK anyway, but please check if that was the\nintention.\n\n======\n\nCommit message\n\n2.\n\nleader_pid is the process ID of the leader apply worker if this process is a\nparallel apply worker. If this field is NULL, it indicates that the process is\na leader apply worker or does not participate in parallel apply, or a\nsynchronization worker.\n\n~\n\nThis text is just cut/paste from the monitoring.sgml. In a review\ncomment below I suggest some changes to that text, so then this commit\nmessage should also change to be the same.\n\n======\n\ndoc/src/sgml/monitoring.sgml\n\n3.\n\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>leader_pid</structfield> <type>integer</type>\n+ </para>\n+ <para>\n+ Process ID of the leader apply worker if this process is a parallel\n+ apply worker; NULL if this process is a leader apply worker or does not\n+ participate in parallel apply, or a synchronization worker\n+ </para></entry>\n\nI felt this change is giving too many details and ended up just\nmuddying the water.\n\nE.g. Now this says basically \"NULL if AAA or BBB, or CCC\" but that\nmakes it sound like there are 3 other things the process could be\ninstead of a parallel worker. But that is not really true unless\nyou are making some distinction between the main \"apply worker\" which\nis a leader versus a main apply worker which is not a leader. IMO we\nshould not be making any distinction at all - the leader apply worker\nand the main (not leader) apply worker are one-and-the-same process.\n\nSo, I still prefer my previous suggestion (see [2] #5b)\n\n======\n\nsrc/backend/catalog/system_views.sql\n\n4.\n\n@@ -949,6 +949,7 @@ CREATE VIEW pg_stat_subscription AS\n su.oid AS subid,\n su.subname,\n st.pid,\n+ st.leader_pid,\n st.relid,\n st.received_lsn,\n st.last_msg_send_time,\n\nIMO it would be very useful to have an additional \"kind\" attribute for\nthis view. 
This will save the user from needing to do mental\ngymnastics every time just to recognise what kind of process they are\nlooking at.\n\nFor example, I tried this:\n\nCREATE VIEW pg_stat_subscription AS\n SELECT\n su.oid AS subid,\n su.subname,\n CASE\n WHEN st.relid IS NOT NULL THEN 'tablesync'\n WHEN st.leader_pid IS NOT NULL THEN 'parallel apply'\n ELSE 'leader apply'\n END AS kind,\n st.pid,\n st.leader_pid,\n st.relid,\n st.received_lsn,\n st.last_msg_send_time,\n st.last_msg_receipt_time,\n st.latest_end_lsn,\n st.latest_end_time\n FROM pg_subscription su\n LEFT JOIN pg_stat_get_subscription(NULL) st\n ON (st.subid = su.oid);\n\n\nand it results in much more readable output IMO:\n\ntest_sub=# select * from pg_stat_subscription;\n subid | subname | kind | pid | leader_pid | relid |\nreceived_lsn | last_msg_send_time |\nlast_msg_receipt_time | lat\nest_end_lsn | latest_end_time\n-------+---------+--------------+------+------------+-------+--------------+-------------------------------+-------------------------------+----\n------------+-------------------------------\n 16388 | sub1 | leader apply | 5281 | | |\n0/1901378 | 2023-01-13 12:39:03.984249+11 | 2023-01-13\n12:39:03.986157+11 | 0/1\n901378 | 2023-01-13 12:39:03.984249+11\n(1 row)\n\nThoughts?\n\n\n------\n[1] Amit - https://www.postgresql.org/message-id/CAA4eK1KYUbnthSPyo4VjnhMygB0c1DZtp0XC-V2-GSETQ743ww%40mail.gmail.com\n[2] My v78-0001 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPvA10Bp9Jaw9OS2%2BpuKHr7ry_xB3Tf2-bbv5gyxD5E_gw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 13 Jan 2023 13:25:43 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 12, 2023 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> But then do you suggest that tomorrow if we allow parallel sync\n> 
workers then we have a separate column leader_sync_pid? I think that\n> doesn't sound like a good idea and moreover one can refer to docs for\n> clarification.\n>\n> --\nokay, leader_pid is fine I think.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 13 Jan 2023 08:22:22 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 7:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v79-0001.\n>\n> ======\n>\n> General\n>\n> 1.\n>\n> When Amit suggested [1] changing the name just to \"leader_pid\" instead\n> of \"leader_apply_pid\" I thought he was only referring to changing the\n> view column name, not also the internal member names of the worker\n> structure. Maybe it is OK anyway, but please check if that was the\n> intention.\n>\n\nYes, that was the intention.\n\n>\n> 3.\n>\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>leader_pid</structfield> <type>integer</type>\n> + </para>\n> + <para>\n> + Process ID of the leader apply worker if this process is a parallel\n> + apply worker; NULL if this process is a leader apply worker or does not\n> + participate in parallel apply, or a synchronization worker\n> + </para></entry>\n>\n> I felt this change is giving too many details and ended up just\n> muddying the water.\n>\n\nI see that we give a similar description for other parameters as well.\nFor example leader_pid in pg_stat_activity, see client_dn,\nclient_serial in pg_stat_ssl. 
It is better to be consistent here and\nthis gives the reader a bit more information when the value is NULL\nfor the new column.\n\n>\n> 4.\n>\n> @@ -949,6 +949,7 @@ CREATE VIEW pg_stat_subscription AS\n> su.oid AS subid,\n> su.subname,\n> st.pid,\n> + st.leader_pid,\n> st.relid,\n> st.received_lsn,\n> st.last_msg_send_time,\n>\n> IMO it would be very useful to have an additional \"kind\" attribute for\n> this view. This will save the user from needing to do mental\n> gymnastics every time just to recognise what kind of process they are\n> looking at.\n>\n\nThis could be a separate enhancement as the same should be true for\nsync workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Jan 2023 09:06:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 13, 2023 at 7:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> >\n> > 3.\n> >\n> > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>leader_pid</structfield> <type>integer</type>\n> > + </para>\n> > + <para>\n> > + Process ID of the leader apply worker if this process is a parallel\n> > + apply worker; NULL if this process is a leader apply worker or does not\n> > + participate in parallel apply, or a synchronization worker\n> > + </para></entry>\n> >\n> > I felt this change is giving too many details and ended up just\n> > muddying the water.\n> >\n>\n> I see that we give a similar description for other parameters as well.\n> For example leader_pid in pg_stat_activity,\n>\n\nBTW, shouldn't we update leader_pid column in pg_stat_activity as well\nto display apply leader PID for parallel apply workers? 
It will\ncurrently display for other parallel operations like a parallel\nvacuum, so I don't see a reason to not do the same for parallel apply\nworkers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Jan 2023 09:58:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 1:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 13, 2023 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 13, 2023 at 7:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> >\n> > >\n> > > 3.\n> > >\n> > > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>leader_pid</structfield> <type>integer</type>\n> > > + </para>\n> > > + <para>\n> > > + Process ID of the leader apply worker if this process is a parallel\n> > > + apply worker; NULL if this process is a leader apply worker or does not\n> > > + participate in parallel apply, or a synchronization worker\n> > > + </para></entry>\n> > >\n> > > I felt this change is giving too many details and ended up just\n> > > muddying the water.\n> > >\n> >\n> > I see that we give a similar description for other parameters as well.\n> > For example leader_pid in pg_stat_activity,\n> >\n>\n> BTW, shouldn't we update leader_pid column in pg_stat_activity as well\n> to display apply leader PID for parallel apply workers? It will\n> currently display for other parallel operations like a parallel\n> vacuum, so I don't see a reason to not do the same for parallel apply\n> workers.\n\n+1\n\nThe parallel apply workers have different properties than the parallel\nquery workers since they execute different transactions and don't use\ngroup locking but it would be a good hint for users to show the leader\nand parallel apply worker processes are related. 
If users want to\ncheck only parallel query workers they can use the backend_type\ncolumn.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 13 Jan 2023 14:02:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 2:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > 3.\n> >\n> > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>leader_pid</structfield> <type>integer</type>\n> > + </para>\n> > + <para>\n> > + Process ID of the leader apply worker if this process is a parallel\n> > + apply worker; NULL if this process is a leader apply worker or does not\n> > + participate in parallel apply, or a synchronization worker\n> > + </para></entry>\n> >\n> > I felt this change is giving too many details and ended up just\n> > muddying the water.\n> >\n>\n> I see that we give a similar description for other parameters as well.\n> For example leader_pid in pg_stat_activity, see client_dn,\n> client_serial in pg_stat_ssl. It is better to be consistent here and\n> this gives the reader a bit more information when the value is NULL\n> for the new column.\n>\n\nIt is OK to give extra details as those other examples do, but my\npoint -- where I wrote \"the leader apply worker and the (not leader)\napply worker are one-and-the-same process\" -- was there are currently\nonly 3 kinds of workers possible (leader apply, parallel apply,\ntablesync). If it is not a \"parallel apply\" worker then it can only be\none of the other 2. 
So I think it is sufficient and less confusing to\nsay:\n\nProcess ID of the leader apply worker if this process is a parallel\napply worker; NULL if this process is a leader apply worker or a\nsynchronization worker.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 13 Jan 2023 16:05:39 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 12, 2023 at 9:34 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, January 12, 2023 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jan 12, 2023 at 4:21 PM shveta malik <shveta.malik@gmail.com> wrote:\n> > >\n> > > On Thu, Jan 12, 2023 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > > > On Thu, Jan 12, 2023 at 9:54 AM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > > > >\n> > > > >\n> > > > > doc/src/sgml/monitoring.sgml\n> > > > >\n> > > > > 5. pg_stat_subscription\n> > > > >\n> > > > > @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type, wait_event\n> > > > > FROM pg_stat_activity WHERE wait_event i\n> > > > >\n> > > > > <row>\n> > > > > <entry role=\"catalog_table_entry\"><para\n> > > > > role=\"column_definition\">\n> > > > > + <structfield>apply_leader_pid</structfield>\n> > <type>integer</type>\n> > > > > + </para>\n> > > > > + <para>\n> > > > > + Process ID of the leader apply worker, if this process is a apply\n> > > > > + parallel worker. 
NULL if this process is a leader apply worker or a\n> > > > > + synchronization worker.\n> > > > > + </para></entry>\n> > > > > + </row>\n> > > > > +\n> > > > > + <row>\n> > > > > + <entry role=\"catalog_table_entry\"><para\n> > > > > + role=\"column_definition\">\n> > > > > <structfield>relid</structfield> <type>oid</type>\n> > > > > </para>\n> > > > > <para>\n> > > > > OID of the relation that the worker is synchronizing; null for the\n> > > > > - main apply worker\n> > > > > + main apply worker and the parallel apply worker\n> > > > > </para></entry>\n> > > > > </row>\n> > > > >\n> > > > > 5a.\n> > > > >\n> > > > > (Same as general comment #1 about terminology)\n> > > > >\n> > > > > \"apply_leader_pid\" --> \"leader_apply_pid\"\n> > > > >\n> > > >\n> > > > How about naming this as just leader_pid? I think it could be\n> > > > helpful in the future if we decide to parallelize initial sync (aka\n> > > > parallel\n> > > > copy) because then we could use this for the leader PID of parallel\n> > > > sync workers as well.\n> > > >\n> > > > --\n> > >\n> > > I still prefer leader_apply_pid.\n> > > leader_pid does not tell which 'operation' it belongs to. 'apply'\n> > > gives the clarity that it is apply related process.\n> > >\n> >\n> > But then do you suggest that tomorrow if we allow parallel sync workers then\n> > we have a separate column leader_sync_pid? I think that doesn't sound like a\n> > good idea and moreover one can refer to docs for clarification.\n>\n> I agree that leader_pid would be better not only for future parallel copy sync feature,\n> but also it's more consistent with the leader_pid column in pg_stat_activity.\n>\n> And here is the version patch which addressed Peter's comments and renamed all\n> the related stuff to leader_pid.\n\nHere are two comments on v79-0003 patch.\n\n+ /* Force to serialize messages if stream_serialize_threshold\nis reached. 
*/\n+ if (stream_serialize_threshold != -1 &&\n+ (stream_serialize_threshold == 0 ||\n+ stream_serialize_threshold < parallel_stream_nchunks))\n+ {\n+ parallel_stream_nchunks = 0;\n+ return false;\n+ }\n\nI think it would be better if we show the log message \"logical\nreplication apply worker will serialize the remaining changes of\nremote transaction %u to a file\" even in stream_serialize_threshold\ncase.\n\nIIUC parallel_stream_nchunks won't be reset if pa_send_data() failed\ndue to the timeout.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 13 Jan 2023 14:43:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for patch v79-0002.\n\n======\n\nGeneral\n\n1.\n\nI saw that earlier in this thread Hou-san [1] and Amit [2] also seemed\nto say there is not much point for this patch.\n\nSo I wanted to +1 that same opinion.\n\nI feel this patch just adds more complexity for almost no gain:\n- reducing the 'max_parallel_apply_workers_per_subscription' seems not very\ncommon in the first place.\n- even when the GUC is reduced, at that point in time all the workers\nmight be in use so there may be nothing that can be immediately done.\n- IIUC the excess workers (for a reduced GUC) are going to get freed\nnaturally anyway over time as more transactions are completed so the\npool size will reduce accordingly.\n\n\n~\n\nOTOH some refactoring parts of this patch (e.g. the new pa_stop_worker\nfunction) look better to me. 
I would keep those ones but remove all\nthe pa_stop_idle_workers function/call.\n\n\n*** NOTE: The remainder of these review comments are maybe only\nrelevant if you are going to keep this pa_stop_idle_workers\nbehaviour...\n\n======\n\nCommit message\n\n2.\n\nIf the max_parallel_apply_workers_per_subscription is changed to a\nlower value, try to stop free workers in the pool to keep the number of\nworkers lower than half of the max_parallel_apply_workers_per_subscription\n\nSUGGESTION\n\nIf the GUC max_parallel_apply_workers_per_subscription is changed to a\nlower value, try to stop unused workers to keep the pool size lower\nthan half of max_parallel_apply_workers_per_subscription.\n\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n3. pa_free_worker\n\nif (winfo->serialize_changes ||\nlist_length(ParallelApplyWorkerPool) >\n(max_parallel_apply_workers_per_subscription / 2))\n{\npa_stop_worker(winfo);\nreturn;\n}\n\nwinfo->in_use = false;\nwinfo->serialize_changes = false;\n\n~\n\nIMO the above code can be more neatly written using if/else because\nthen there is only one return point, and there is a place to write the\nexplanatory comment about the else.\n\nSUGGESTION\n\nif (winfo->serialize_changes ||\nlist_length(ParallelApplyWorkerPool) >\n(max_parallel_apply_workers_per_subscription / 2))\n{\npa_stop_worker(winfo);\n}\nelse\n{\n/* Don't stop the worker. Only mark it available for re-use. */\nwinfo->in_use = false;\nwinfo->serialize_changes = false;\n}\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n4. 
pa_stop_idle_workers\n\n/*\n * Try to stop parallel apply workers that are not in use to keep the number of\n * workers lower than half of the max_parallel_apply_workers_per_subscription.\n */\nvoid\npa_stop_idle_workers(void)\n{\nList *active_workers;\nListCell *lc;\nint max_applyworkers = max_parallel_apply_workers_per_subscription / 2;\n\nif (list_length(ParallelApplyWorkerPool) <= max_applyworkers)\nreturn;\n\nactive_workers = list_copy(ParallelApplyWorkerPool);\n\nforeach(lc, active_workers)\n{\nParallelApplyWorkerInfo *winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n\npa_stop_worker(winfo);\n\n/* Recheck the number of workers. */\nif (list_length(ParallelApplyWorkerPool) <= max_applyworkers)\nbreak;\n}\n\nlist_free(active_workers);\n}\n\n~\n\n4a. function comment\n\nSUGGESTION\n\nTry to keep the worker pool size lower than half of the\nmax_parallel_apply_workers_per_subscription.\n\n~\n\n4b. function name\n\nThis is not stopping all idle workers, so maybe a more meaningful name\nfor this function is something more like \"pa_reduce_workerpool\"\n\n~\n\n4c.\n\nIMO the \"max_applyworkers\" var is a misleading name. 
Maybe something\nlike \"goal_poolsize\" is better?\n\n~\n\n4d.\n\nMaybe I misunderstand the logic for the pool, but shouldn't this be\nchecking the winfo->in_use flag before blindly stopping each worker?\n\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n5.\n\n@@ -3630,6 +3630,13 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n {\n ConfigReloadPending = false;\n ProcessConfigFile(PGC_SIGHUP);\n+\n+ /*\n+ * Try to stop free workers in the pool in case the\n+ * max_parallel_apply_workers_per_subscription is changed to a\n+ * lower value.\n+ */\n+ pa_stop_idle_workers();\n }\n5a.\n\nSUGGESTED COMMENT\nIf max_parallel_apply_workers_per_subscription is changed to a lower\nvalue, try to reduce the worker pool to match.\n\n~\n\n5b.\n\nInstead of unconditionally calling pa_stop_idle_workers, shouldn't\nthis code compare the value of\nmax_parallel_apply_workers_per_subscription before/after the\nProcessConfigFile so it only calls if the GUC was lowered?\n\n\n------\n[1] Hou-san - https://www.postgresql.org/message-id/OS0PR01MB5716E527412A3481F90B4397941A9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2] Amit - https://www.postgresql.org/message-id/CAA4eK1J%3D9m-VNRMHCqeG8jpX0CTn3Ciad2o4H-ogrZMDJ3tn4w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 13 Jan 2023 17:19:58 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, January 13, 2023 1:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Fri, Jan 13, 2023 at 1:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jan 13, 2023 at 9:06 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Fri, Jan 13, 2023 at 7:56 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > >\r\n> > > >\r\n> > > > 3.\r\n> > > >\r\n> > > > <entry 
role=\"catalog_table_entry\"><para\r\n> > > > role=\"column_definition\">\r\n> > > > + <structfield>leader_pid</structfield> <type>integer</type>\r\n> > > > + </para>\r\n> > > > + <para>\r\n> > > > + Process ID of the leader apply worker if this process is a parallel\r\n> > > > + apply worker; NULL if this process is a leader apply worker or\r\n> does not\r\n> > > > + participate in parallel apply, or a synchronization worker\r\n> > > > + </para></entry>\r\n> > > >\r\n> > > > I felt this change is giving too many details and ended up just\r\n> > > > muddying the water.\r\n> > > >\r\n> > >\r\n> > > I see that we give a similar description for other parameters as well.\r\n> > > For example leader_pid in pg_stat_activity,\r\n> > >\r\n> >\r\n> > BTW, shouldn't we update leader_pid column in pg_stat_activity as well\r\n> > to display apply leader PID for parallel apply workers? It will\r\n> > currently display for other parallel operations like a parallel\r\n> > vacuum, so I don't see a reason to not do the same for parallel apply\r\n> > workers.\r\n> \r\n> +1\r\n> \r\n> The parallel apply workers have different properties than the parallel query\r\n> workers since they execute different transactions and don't use group locking\r\n> but it would be a good hint for users to show the leader and parallel apply\r\n> worker processes are related. 
If users want to check only parallel query workers\r\n> they can use the backend_type column.\r\n\r\nAgreed, and changed as suggested.\r\n\r\nAttach the new version patch set which address the comments so far.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 13 Jan 2023 10:13:25 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, January 13, 2023 1:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Thu, Jan 12, 2023 at 9:34 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, January 12, 2023 7:08 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Jan 12, 2023 at 4:21 PM shveta malik <shveta.malik@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Thu, Jan 12, 2023 at 10:34 AM Amit Kapila\r\n> > > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > >\r\n> > > > > On Thu, Jan 12, 2023 at 9:54 AM Peter Smith\r\n> > > > > <smithpb2250@gmail.com>\r\n> > > wrote:\r\n> > > > > >\r\n> > > > > >\r\n> > > > > > doc/src/sgml/monitoring.sgml\r\n> > > > > >\r\n> > > > > > 5. pg_stat_subscription\r\n> > > > > >\r\n> > > > > > @@ -3198,11 +3198,22 @@ SELECT pid, wait_event_type,\r\n> > > > > > wait_event FROM pg_stat_activity WHERE wait_event i\r\n> > > > > >\r\n> > > > > > <row>\r\n> > > > > > <entry role=\"catalog_table_entry\"><para\r\n> > > > > > role=\"column_definition\">\r\n> > > > > > + <structfield>apply_leader_pid</structfield>\r\n> > > <type>integer</type>\r\n> > > > > > + </para>\r\n> > > > > > + <para>\r\n> > > > > > + Process ID of the leader apply worker, if this process is a\r\n> apply\r\n> > > > > > + parallel worker. 
NULL if this process is a leader apply worker\r\n> or a\r\n> > > > > > + synchronization worker.\r\n> > > > > > + </para></entry>\r\n> > > > > > + </row>\r\n> > > > > > +\r\n> > > > > > + <row>\r\n> > > > > > + <entry role=\"catalog_table_entry\"><para\r\n> > > > > > + role=\"column_definition\">\r\n> > > > > > <structfield>relid</structfield> <type>oid</type>\r\n> > > > > > </para>\r\n> > > > > > <para>\r\n> > > > > > OID of the relation that the worker is synchronizing; null for\r\n> the\r\n> > > > > > - main apply worker\r\n> > > > > > + main apply worker and the parallel apply worker\r\n> > > > > > </para></entry>\r\n> > > > > > </row>\r\n> > > > > >\r\n> > > > > > 5a.\r\n> > > > > >\r\n> > > > > > (Same as general comment #1 about terminology)\r\n> > > > > >\r\n> > > > > > \"apply_leader_pid\" --> \"leader_apply_pid\"\r\n> > > > > >\r\n> > > > >\r\n> > > > > How about naming this as just leader_pid? I think it could be\r\n> > > > > helpful in the future if we decide to parallelize initial sync\r\n> > > > > (aka parallel\r\n> > > > > copy) because then we could use this for the leader PID of\r\n> > > > > parallel sync workers as well.\r\n> > > > >\r\n> > > > > --\r\n> > > >\r\n> > > > I still prefer leader_apply_pid.\r\n> > > > leader_pid does not tell which 'operation' it belongs to. 'apply'\r\n> > > > gives the clarity that it is apply related process.\r\n> > > >\r\n> > >\r\n> > > But then do you suggest that tomorrow if we allow parallel sync\r\n> > > workers then we have a separate column leader_sync_pid? 
I think that\r\n> > > doesn't sound like a good idea and moreover one can refer to docs for\r\n> clarification.\r\n> >\r\n> > I agree that leader_pid would be better not only for future parallel\r\n> > copy sync feature, but also it's more consistent with the leader_pid column in\r\n> pg_stat_activity.\r\n> >\r\n> > And here is the version patch which addressed Peter's comments and\r\n> > renamed all the related stuff to leader_pid.\r\n> \r\n> Here are two comments on v79-0003 patch.\r\n\r\nThanks for the comments.\r\n\r\n> \r\n> + /* Force to serialize messages if stream_serialize_threshold\r\n> is reached. */\r\n> + if (stream_serialize_threshold != -1 &&\r\n> + (stream_serialize_threshold == 0 ||\r\n> + stream_serialize_threshold < parallel_stream_nchunks))\r\n> + {\r\n> + parallel_stream_nchunks = 0;\r\n> + return false;\r\n> + }\r\n> \r\n> I think it would be better if we show the log message \"\"logical replication apply\r\n> worker will serialize the remaining changes of remote transaction %u to a file\"\r\n> even in stream_serialize_threshold case.\r\n\r\nAgreed and changed.\r\n\r\n> \r\n> IIUC parallel_stream_nchunks won't be reset if pa_send_data() failed due to the\r\n> timeout.\r\n\r\nChanged.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Fri, 13 Jan 2023 10:13:31 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, January 13, 2023 2:20 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are some review comments for patch v79-0002.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> General\r\n> \r\n> 1.\r\n> \r\n> I saw that earlier in this thread Hou-san [1] and Amit [2] also seemed to say\r\n> there is not much point for this patch.\r\n> \r\n> So I wanted to +1 that same opinion.\r\n> \r\n> I feel this patch just adds more complexity for almost no 
gain:\r\n> - reducing the 'max_apply_workers_per_suibscription' seems not very\r\n> common in the first place.\r\n> - even when the GUC is reduced, at that point in time all the workers might be in\r\n> use so there may be nothing that can be immediately done.\r\n> - IIUC the excess workers (for a reduced GUC) are going to get freed naturally\r\n> anyway over time as more transactions are completed so the pool size will\r\n> reduce accordingly.\r\n\r\nI need to think over it, and we can have a detailed discussion after committing\r\nthe first patch. So I didn't address the comments for 0002 for now.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Fri, 13 Jan 2023 10:13:39 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 3:44 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, January 13, 2023 1:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Jan 12, 2023 at 9:34 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n\nIn GetLogicalLeaderApplyWorker(), we can use a shared lock instead of\nexclusive as we are just reading the workers array. Also, the function\nname looks a bit odd to me, so I changed it to\nGetLeaderApplyWorkerPid(). Also, it is better to use InvalidPid\ninstead of 0 when there is no valid value for leader_pid in\nGetLeaderApplyWorkerPid(). Apart from that, I have made minor changes\nin the comments, docs, and commit message. 
I am planning to push this\nnext week by Tuesday unless you or others have any major comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 14 Jan 2023 17:17:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi,\n\nI think there's a bug in how get_transaction_apply_action() interacts\nwith handle_streamed_transaction() to decide whether the transaction is\nstreamed or not. Originally, the code was simply:\n\n /* not in streaming mode */\n if (!in_streamed_transaction)\n return false;\n\nBut now this decision was moved to get_transaction_apply_action(), which\ndoes this:\n\n if (am_parallel_apply_worker())\n {\n return TRANS_PARALLEL_APPLY;\n }\n else if (in_remote_transaction)\n {\n return TRANS_LEADER_APPLY;\n }\n\nand handle_streamed_transaction() then uses the result like this:\n\n /* not in streaming mode */\n if (apply_action == TRANS_LEADER_APPLY)\n return false;\n\nNotice this is not equal to the original behavior, because the two flags\n(in_remote_transaction and in_streamed_transaction) are not inverse.\nThat is,\n\n in_remote_transaction=false\n\ndoes not imply we're processing streamed transaction. It's allowed both\nflags are false, i.e. a change may be \"non-transactional\" and not\nstreamed, though the only example of such thing in the protocol are\nlogical messages. 
Which are however ignored in the apply worker, so I'm\nnot surprised no existing test failed on this.\n\nSo I think get_transaction_apply_action() should do this:\n\n if (am_parallel_apply_worker())\n {\n return TRANS_PARALLEL_APPLY;\n }\n else if (!in_streamed_transaction)\n {\n return TRANS_LEADER_APPLY;\n }\n\nFWIW I've noticed this after rebasing the sequence decoding patch, which\nadds another type of protocol message with the transactional vs.\nnon-transactional behavior, similar to \"logical messages\" except that in\nthis case the worker does not ignore that.\n\nAlso, I think get_transaction_apply_action() would deserve better\ncomments explaining how/why it makes the decisions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 15 Jan 2023 18:09:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "At Tue, 10 Jan 2023 12:01:43 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Jan 10, 2023 at 11:16 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Although I don't see a technical difference between the two, all the\n> > other occurances including the just above (except test_shm_mq) use\n> > \"could not\". 
A faint memory in my non-durable memory tells me that we\n> > have a policy that we use \"can/could not\" than \"unable\".\n> >\n> \n> Right, it is mentioned in docs [1] (see section \"Tricky Words to Avoid\").\n\nThanks for confirmation.\n\n> Can you please start a new thread and post these changes as we are\n> proposing to change existing message as well?\n\nAll right.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 16 Jan 2023 12:03:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "Here are some review comments for v81-0001.\n\n======\n\nCommit Message\n\n1.\n\nAdditionally, update the leader_pid column in pg_stat_activity as well to\ndisplay the PID of the leader apply worker for parallel apply workers.\n\n~\n\nProbably it should not say both \"Additionally\" and \"as well\" in the\nsame sentence.\n\n======\n\nsrc/backend/replication/logical/launcher.c\n\n2.\n\n /*\n+ * Return the pid of the leader apply worker if the given pid is the pid of a\n+ * parallel apply worker, otherwise return InvalidPid.\n+ */\n+pid_t\n+GetLeaderApplyWorkerPid(pid_t pid)\n+{\n+ int leader_pid = InvalidPid;\n+ int i;\n+\n+ LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n+\n+ for (i = 0; i < max_logical_replication_workers; i++)\n+ {\n+ LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n+\n+ if (isParallelApplyWorker(w) && w->proc && pid == w->proc->pid)\n+ {\n+ leader_pid = w->leader_pid;\n+ break;\n+ }\n+ }\n+\n+ LWLockRelease(LogicalRepWorkerLock);\n+\n+ return leader_pid;\n+}\n\n2a.\nIIUC the IsParallelApplyWorker macro does nothing except check that\nthe leader_pid is not InvalidPid anyway, so AFAIK this algorithm does\nnot benefit from using this macro because we will want to return\nInvalidPid anyway if the given pid matches.\n\nSo the inner condition can 
just say:\n\nif (w->proc && w->proc->pid == pid)\n{\nleader_pid = w->leader_pid;\nbreak;\n}\n\n~\n\n2b.\nA possible alternative comment.\n\nBEFORE\nReturn the pid of the leader apply worker if the given pid is the pid\nof a parallel apply worker, otherwise return InvalidPid.\n\n\nAFTER\nIf the given pid has a leader apply worker then return the leader pid,\notherwise, return InvalidPid.\n\n======\n\nsrc/backend/utils/adt/pgstatfuncs.c\n\n3.\n\n@@ -434,6 +435,16 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n values[28] = Int32GetDatum(leader->pid);\n nulls[28] = false;\n }\n+ else\n+ {\n+ int leader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\n+\n+ if (leader_pid != InvalidPid)\n+ {\n+ values[28] = Int32GetDatum(leader_pid);\n+ nulls[28] = false;\n+ }\n+\n\n3a.\nThere is an existing comment preceding this if/else but it refers only\nto leaders of parallel groups. Should that comment be updated to\nmention the leader apply worker too?\n\n~\n\n3b.\nIt may be unrelated to this patch, but it seems strange to me that the\nnulls[28]/values[28] assignments are done where they are. Every other\nnulls/values assignment of this function here is pretty much in the\ncorrect numerical order except this one, so IMO this code ought to be\nrelocated to later in this same function.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 16 Jan 2023 15:54:00 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Sun, Jan 15, 2023 at 10:39 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I think there's a bug in how get_transaction_apply_action() interacts\n> with handle_streamed_transaction() to decide whether the transaction is\n> streamed or not. 
Originally, the code was simply:\n>\n> /* not in streaming mode */\n> if (!in_streamed_transaction)\n> return false;\n>\n> But now this decision was moved to get_transaction_apply_action(), which\n> does this:\n>\n> if (am_parallel_apply_worker())\n> {\n> return TRANS_PARALLEL_APPLY;\n> }\n> else if (in_remote_transaction)\n> {\n> return TRANS_LEADER_APPLY;\n> }\n>\n> and handle_streamed_transaction() then uses the result like this:\n>\n> /* not in streaming mode */\n> if (apply_action == TRANS_LEADER_APPLY)\n> return false;\n>\n> Notice this is not equal to the original behavior, because the two flags\n> (in_remote_transaction and in_streamed_transaction) are not inverse.\n> That is,\n>\n> in_remote_transaction=false\n>\n> does not imply we're processing streamed transaction. It's allowed both\n> flags are false, i.e. a change may be \"non-transactional\" and not\n> streamed, though the only example of such thing in the protocol are\n> logical messages. Which are however ignored in the apply worker, so I'm\n> not surprised no existing test failed on this.\n>\n\nRight, this is the reason we didn't catch it in our testing.\n\n> So I think get_transaction_apply_action() should do this:\n>\n> if (am_parallel_apply_worker())\n> {\n> return TRANS_PARALLEL_APPLY;\n> }\n> else if (!in_streamed_transaction)\n> {\n> return TRANS_LEADER_APPLY;\n> }\n>\n\nYeah, something like this would work but some of the callers other\nthan handle_streamed_transaction() also need to be changed. 
See\nattached.\n\n> FWIW I've noticed this after rebasing the sequence decoding patch, which\n> adds another type of protocol message with the transactional vs.\n> non-transactional behavior, similar to \"logical messages\" except that in\n> this case the worker does not ignore that.\n>\n> Also, I think get_transaction_apply_action() would deserve better\n> comments explaining how/why it makes the decisions.\n>\n\nOkay, I have added the comments in get_transaction_apply_action() and\nupdated the comments to refer to the enum TransApplyAction where all\nthe actions are explained.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 16 Jan 2023 11:49:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 16, 2023 at 10:24 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 2.\n>\n> /*\n> + * Return the pid of the leader apply worker if the given pid is the pid of a\n> + * parallel apply worker, otherwise return InvalidPid.\n> + */\n> +pid_t\n> +GetLeaderApplyWorkerPid(pid_t pid)\n> +{\n> + int leader_pid = InvalidPid;\n> + int i;\n> +\n> + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> +\n> + for (i = 0; i < max_logical_replication_workers; i++)\n> + {\n> + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> +\n> + if (isParallelApplyWorker(w) && w->proc && pid == w->proc->pid)\n> + {\n> + leader_pid = w->leader_pid;\n> + break;\n> + }\n> + }\n> +\n> + LWLockRelease(LogicalRepWorkerLock);\n> +\n> + return leader_pid;\n> +}\n>\n> 2a.\n> IIUC the IsParallelApplyWorker macro does nothing except check that\n> the leader_pid is not InvalidPid anyway, so AFAIK this algorithm does\n> not benefit from using this macro because we will want to return\n> InvalidPid anyway if the given pid matches.\n>\n> So the inner condition can just say:\n>\n> if (w->proc && w->proc->pid == pid)\n> {\n> leader_pid = 
w->leader_pid;\n> break;\n> }\n>\n\nYeah, this should also work but I feel the current one is explicit and\nmore clear.\n\n> ~\n>\n> 2b.\n> A possible alternative comment.\n>\n> BEFORE\n> Return the pid of the leader apply worker if the given pid is the pid\n> of a parallel apply worker, otherwise return InvalidPid.\n>\n>\n> AFTER\n> If the given pid has a leader apply worker then return the leader pid,\n> otherwise, return InvalidPid.\n>\n\nI don't think that is an improvement.\n\n> ======\n>\n> src/backend/utils/adt/pgstatfuncs.c\n>\n> 3.\n>\n> @@ -434,6 +435,16 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> values[28] = Int32GetDatum(leader->pid);\n> nulls[28] = false;\n> }\n> + else\n> + {\n> + int leader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\n> +\n> + if (leader_pid != InvalidPid)\n> + {\n> + values[28] = Int32GetDatum(leader_pid);\n> + nulls[28] = false;\n> + }\n> +\n>\n> 3a.\n> There is an existing comment preceding this if/else but it refers only\n> to leaders of parallel groups. Should that comment be updated to\n> mention the leader apply worker too?\n>\n\nYeah, we can slightly adjust the comments. How about something like the below:\nindex 415e711729..7eb668634a 100644\n--- a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -410,9 +410,9 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n\n /*\n * If a PGPROC entry was retrieved, display\nwait events and lock\n- * group leader information if any. To avoid\nextra overhead, no\n- * extra lock is being held, so there is no guarantee of\n- * consistency across multiple rows.\n+ * group leader or apply leader information if\nany. To avoid extra\n+ * overhead, no extra lock is being held, so\nthere is no guarantee\n+ * of consistency across multiple rows.\n */\n if (proc != NULL)\n {\n@@ -428,7 +428,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n /*\n * Show the leader only for active\nparallel workers. 
This\n * leaves the field as NULL for the\nleader of a parallel\n- * group.\n+ * group or the leader of a parallel apply.\n */\n if (leader && leader->pid !=\nbeentry->st_procpid)\n\n\n> ~\n>\n> 3b.\n> It may be unrelated to this patch, but it seems strange to me that the\n> nulls[28]/values[28] assignments are done where they are. Every other\n> nulls/values assignment of this function here is pretty much in the\n> correct numerical order except this one, so IMO this code ought to be\n> relocated to later in this same function.\n>\n\nThis is not related to the current patch but I see there is merit in\nthe current coding as it is better to retrieve all the fields of proc\ntogether.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 16 Jan 2023 12:10:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi Amit,\n\nThanks for the patch, the changes seem reasonable to me and it does fix\nthe issue in the sequence decoding patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 16 Jan 2023 17:33:32 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 16, 2023 at 10:24 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 2.\n> >\n> > /*\n> > + * Return the pid of the leader apply worker if the given pid is the pid of a\n> > + * parallel apply worker, otherwise return InvalidPid.\n> > + */\n> > +pid_t\n> > +GetLeaderApplyWorkerPid(pid_t pid)\n> > +{\n> > + int leader_pid = InvalidPid;\n> > + int i;\n> > +\n> > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> 
> +\n> > + for (i = 0; i < max_logical_replication_workers; i++)\n> > + {\n> > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> > +\n> > + if (isParallelApplyWorker(w) && w->proc && pid == w->proc->pid)\n> > + {\n> > + leader_pid = w->leader_pid;\n> > + break;\n> > + }\n> > + }\n> > +\n> > + LWLockRelease(LogicalRepWorkerLock);\n> > +\n> > + return leader_pid;\n> > +}\n> >\n> > 2a.\n> > IIUC the IsParallelApplyWorker macro does nothing except check that\n> > the leader_pid is not InvalidPid anyway, so AFAIK this algorithm does\n> > not benefit from using this macro because we will want to return\n> > InvalidPid anyway if the given pid matches.\n> >\n> > So the inner condition can just say:\n> >\n> > if (w->proc && w->proc->pid == pid)\n> > {\n> > leader_pid = w->leader_pid;\n> > break;\n> > }\n> >\n>\n> Yeah, this should also work but I feel the current one is explicit and\n> more clear.\n\nOK.\n\nBut, I have one last comment about this function -- I saw there are\nalready other functions that iterate max_logical_replication_workers\nlike this looking for things:\n- logicalrep_worker_find\n- logicalrep_workers_find\n- logicalrep_worker_launch\n- logicalrep_sync_worker_count\n\nSo I felt this new function (currently called GetLeaderApplyWorkerPid)\nought to be named similarly to those ones. e.g. 
call it something like\n \"logicalrep_worker_find_pa_leader_pid\".\n\n>\n> > ~\n> >\n> > 2b.\n> > A possible alternative comment.\n> >\n> > BEFORE\n> > Return the pid of the leader apply worker if the given pid is the pid\n> > of a parallel apply worker, otherwise return InvalidPid.\n> >\n> >\n> > AFTER\n> > If the given pid has a leader apply worker then return the leader pid,\n> > otherwise, return InvalidPid.\n> >\n>\n> I don't think that is an improvement.\n>\n> > ======\n> >\n> > src/backend/utils/adt/pgstatfuncs.c\n> >\n> > 3.\n> >\n> > @@ -434,6 +435,16 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> > values[28] = Int32GetDatum(leader->pid);\n> > nulls[28] = false;\n> > }\n> > + else\n> > + {\n> > + int leader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\n> > +\n> > + if (leader_pid != InvalidPid)\n> > + {\n> > + values[28] = Int32GetDatum(leader_pid);\n> > + nulls[28] = false;\n> > + }\n> > +\n> >\n> > 3a.\n> > There is an existing comment preceding this if/else but it refers only\n> > to leaders of parallel groups. Should that comment be updated to\n> > mention the leader apply worker too?\n> >\n>\n> Yeah, we can slightly adjust the comments. How about something like the below:\n> index 415e711729..7eb668634a 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -410,9 +410,9 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n>\n> /*\n> * If a PGPROC entry was retrieved, display\n> wait events and lock\n> - * group leader information if any. To avoid\n> extra overhead, no\n> - * extra lock is being held, so there is no guarantee of\n> - * consistency across multiple rows.\n> + * group leader or apply leader information if\n> any. 
To avoid extra\n> + * overhead, no extra lock is being held, so\n> there is no guarantee\n> + * of consistency across multiple rows.\n> */\n> if (proc != NULL)\n> {\n> @@ -428,7 +428,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> /*\n> * Show the leader only for active\n> parallel workers. This\n> * leaves the field as NULL for the\n> leader of a parallel\n> - * group.\n> + * group or the leader of a parallel apply.\n> */\n> if (leader && leader->pid !=\n> beentry->st_procpid)\n>\n\nThe updated comment LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 17 Jan 2023 08:42:57 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 17, 2023 5:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > 2.\r\n> > >\r\n> > > /*\r\n> > > + * Return the pid of the leader apply worker if the given pid is\r\n> > > +the pid of a\r\n> > > + * parallel apply worker, otherwise return InvalidPid.\r\n> > > + */\r\n> > > +pid_t\r\n> > > +GetLeaderApplyWorkerPid(pid_t pid)\r\n> > > +{\r\n> > > + int leader_pid = InvalidPid;\r\n> > > + int i;\r\n> > > +\r\n> > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> > > +\r\n> > > + for (i = 0; i < max_logical_replication_workers; i++) {\r\n> > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\r\n> > > +\r\n> > > + if (isParallelApplyWorker(w) && w->proc && pid == w->proc->pid) {\r\n> > > + leader_pid = w->leader_pid; break; } }\r\n> > > +\r\n> > > + LWLockRelease(LogicalRepWorkerLock);\r\n> > > +\r\n> > > + return leader_pid;\r\n> > > +}\r\n> > >\r\n> > > 2a.\r\n> > > IIUC the IsParallelApplyWorker macro does nothing except check 
that\r\n> > > the leader_pid is not InvalidPid anyway, so AFAIK this algorithm\r\n> > > does not benefit from using this macro because we will want to\r\n> > > return InvalidPid anyway if the given pid matches.\r\n> > >\r\n> > > So the inner condition can just say:\r\n> > >\r\n> > > if (w->proc && w->proc->pid == pid)\r\n> > > {\r\n> > > leader_pid = w->leader_pid;\r\n> > > break;\r\n> > > }\r\n> > >\r\n> >\r\n> > Yeah, this should also work but I feel the current one is explicit and\r\n> > more clear.\r\n> \r\n> OK.\r\n> \r\n> But, I have one last comment about this function -- I saw there are already\r\n> other functions that iterate max_logical_replication_workers like this looking\r\n> for things:\r\n> - logicalrep_worker_find\r\n> - logicalrep_workers_find\r\n> - logicalrep_worker_launch\r\n> - logicalrep_sync_worker_count\r\n> \r\n> So I felt this new function (currently called GetLeaderApplyWorkerPid) ought\r\n> to be named similarly to those ones. e.g. call it something like\r\n> \"logicalrep_worker_find_pa_leader_pid\".\r\n> \r\n\r\nI am not sure we can use the name, because currently all the API name in launcher that\r\nused by other module(not related to subscription) are like\r\nAxxBxx style(see the functions in logicallauncher.h).\r\nlogicalrep_worker_xxx style functions are currently only declared in\r\nworker_internal.h.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Tue, 17 Jan 2023 02:21:04 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 16, 2023 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 15, 2023 at 10:39 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > I think there's a bug in how get_transaction_apply_action() interacts\n> > with handle_streamed_transaction() to decide whether the transaction is\n> > 
streamed or not. Originally, the code was simply:\n> >\n> > /* not in streaming mode */\n> > if (!in_streamed_transaction)\n> > return false;\n> >\n> > But now this decision was moved to get_transaction_apply_action(), which\n> > does this:\n> >\n> > if (am_parallel_apply_worker())\n> > {\n> > return TRANS_PARALLEL_APPLY;\n> > }\n> > else if (in_remote_transaction)\n> > {\n> > return TRANS_LEADER_APPLY;\n> > }\n> >\n> > and handle_streamed_transaction() then uses the result like this:\n> >\n> > /* not in streaming mode */\n> > if (apply_action == TRANS_LEADER_APPLY)\n> > return false;\n> >\n> > Notice this is not equal to the original behavior, because the two flags\n> > (in_remote_transaction and in_streamed_transaction) are not inverse.\n> > That is,\n> >\n> > in_remote_transaction=false\n> >\n> > does not imply we're processing streamed transaction. It's allowed both\n> > flags are false, i.e. a change may be \"non-transactional\" and not\n> > streamed, though the only example of such thing in the protocol are\n> > logical messages. Which are however ignored in the apply worker, so I'm\n> > not surprised no existing test failed on this.\n> >\n>\n> Right, this is the reason we didn't catch it in our testing.\n>\n> > So I think get_transaction_apply_action() should do this:\n> >\n> > if (am_parallel_apply_worker())\n> > {\n> > return TRANS_PARALLEL_APPLY;\n> > }\n> > else if (!in_streamed_transaction)\n> > {\n> > return TRANS_LEADER_APPLY;\n> > }\n> >\n>\n> Yeah, something like this would work but some of the callers other\n> than handle_streamed_transaction() also need to be changed. 
See\n> attached.\n>\n> > FWIW I've noticed this after rebasing the sequence decoding patch, which\n> > adds another type of protocol message with the transactional vs.\n> > non-transactional behavior, similar to \"logical messages\" except that in\n> > this case the worker does not ignore that.\n> >\n> > Also, I think get_transaction_apply_action() would deserve better\n> > comments explaining how/why it makes the decisions.\n> >\n>\n> Okay, I have added the comments in get_transaction_apply_action() and\n> updated the comments to refer to the enum TransApplyAction where all\n> the actions are explained.\n\nThank you for the patch.\n\n@@ -1710,6 +1712,7 @@ apply_handle_stream_stop(StringInfo s)\n }\n\n in_streamed_transaction = false;\n+ stream_xid = InvalidTransactionId;\n\nWe reset stream_xid also in stream_close_file() but probably it's no\nlonger necessary?\n\nHow about adding an assertion in apply_handle_stream_start() to make\nsure the stream_xid is invalid?\n\n---\nIt's not related to this issue but I realized that if the action\nreturned by get_transaction_apply_action() is not handled in the\nswitch statement, we do only Assert(false). 
Is it better to raise an\nerror like \"unexpected apply action %d\" just in case in order to\ndetect failure cases also in the production environment?\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 12:05:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 8:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 16, 2023 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Okay, I have added the comments in get_transaction_apply_action() and\n> > updated the comments to refer to the enum TransApplyAction where all\n> > the actions are explained.\n>\n> Thank you for the patch.\n>\n> @@ -1710,6 +1712,7 @@ apply_handle_stream_stop(StringInfo s)\n> }\n>\n> in_streamed_transaction = false;\n> + stream_xid = InvalidTransactionId;\n>\n> We reset stream_xid also in stream_close_file() but probably it's no\n> longer necessary?\n>\n\nI think so.\n\n> How about adding an assertion in apply_handle_stream_start() to make\n> sure the stream_xid is invalid?\n>\n\nI think it would be better to add such an assert in\napply_handle_begin/apply_handle_begin_prepare because there won't be a\nproblem if we start_stream message even when stream_xid is valid.\nHowever, maybe it is better to add in all three functions\n(apply_handle_begin/apply_handle_begin_prepare/apply_handle_stream_start).\nWhat do you think?\n\n> ---\n> It's not related to this issue but I realized that if the action\n> returned by get_transaction_apply_action() is not handled in the\n> switch statement, we do only Assert(false). Is it better to raise an\n> error like \"unexpected apply action %d\" just in case in order to\n> detect failure cases also in the production environment?\n>\n\nYeah, that may be better. 
Shall we do that as part of this patch only\nor as a separate patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 17 Jan 2023 08:59:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 1:21 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 17, 2023 5:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > >\n> > > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > > >\n> > > > 2.\n> > > >\n> > > > /*\n> > > > + * Return the pid of the leader apply worker if the given pid is\n> > > > +the pid of a\n> > > > + * parallel apply worker, otherwise return InvalidPid.\n> > > > + */\n> > > > +pid_t\n> > > > +GetLeaderApplyWorkerPid(pid_t pid)\n> > > > +{\n> > > > + int leader_pid = InvalidPid;\n> > > > + int i;\n> > > > +\n> > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > > > +\n> > > > + for (i = 0; i < max_logical_replication_workers; i++) {\n> > > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> > > > +\n> > > > + if (isParallelApplyWorker(w) && w->proc && pid == w->proc->pid) {\n> > > > + leader_pid = w->leader_pid; break; } }\n> > > > +\n> > > > + LWLockRelease(LogicalRepWorkerLock);\n> > > > +\n> > > > + return leader_pid;\n> > > > +}\n> > > >\n> > > > 2a.\n> > > > IIUC the IsParallelApplyWorker macro does nothing except check that\n> > > > the leader_pid is not InvalidPid anyway, so AFAIK this algorithm\n> > > > does not benefit from using this macro because we will want to\n> > > > return InvalidPid anyway if the given pid matches.\n> > > >\n> > > > So the inner condition can just say:\n> > > >\n> > > > if (w->proc && w->proc->pid == pid)\n> > > > {\n> > > > leader_pid = w->leader_pid;\n> > > > 
break;\n> > > > }\n> > > >\n> > >\n> > > Yeah, this should also work but I feel the current one is explicit and\n> > > more clear.\n> >\n> > OK.\n> >\n> > But, I have one last comment about this function -- I saw there are already\n> > other functions that iterate max_logical_replication_workers like this looking\n> > for things:\n> > - logicalrep_worker_find\n> > - logicalrep_workers_find\n> > - logicalrep_worker_launch\n> > - logicalrep_sync_worker_count\n> >\n> > So I felt this new function (currently called GetLeaderApplyWorkerPid) ought\n> > to be named similarly to those ones. e.g. call it something like\n> > \"logicalrep_worker_find_pa_leader_pid\".\n> >\n>\n> I am not sure we can use the name, because currently all the API name in launcher that\n> used by other module(not related to subscription) are like\n> AxxBxx style(see the functions in logicallauncher.h).\n> logicalrep_worker_xxx style functions are currently only declared in\n> worker_internal.h.\n>\n\nOK. I didn't know there was another header convention that you were\nfollowing. 
In that case, it is fine to leave the name as-is.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:32:28 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 17, 2023 11:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 17, 2023 at 1:21 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, January 17, 2023 5:43 AM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith\r\n> > > > <smithpb2250@gmail.com>\r\n> > > wrote:\r\n> > > > >\r\n> > > > > 2.\r\n> > > > >\r\n> > > > > /*\r\n> > > > > + * Return the pid of the leader apply worker if the given pid\r\n> > > > > +is the pid of a\r\n> > > > > + * parallel apply worker, otherwise return InvalidPid.\r\n> > > > > + */\r\n> > > > > +pid_t\r\n> > > > > +GetLeaderApplyWorkerPid(pid_t pid) { int leader_pid =\r\n> > > > > +InvalidPid; int i;\r\n> > > > > +\r\n> > > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\r\n> > > > > +\r\n> > > > > + for (i = 0; i < max_logical_replication_workers; i++) {\r\n> > > > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\r\n> > > > > +\r\n> > > > > + if (isParallelApplyWorker(w) && w->proc && pid ==\r\n> > > > > + w->proc->pid) { leader_pid = w->leader_pid; break; } }\r\n> > > > > +\r\n> > > > > + LWLockRelease(LogicalRepWorkerLock);\r\n> > > > > +\r\n> > > > > + return leader_pid;\r\n> > > > > +}\r\n> > > > >\r\n> > > > > 2a.\r\n> > > > > IIUC the IsParallelApplyWorker macro does nothing except check\r\n> > > > > that the leader_pid is not InvalidPid anyway, so AFAIK this\r\n> > > > > algorithm does not benefit from using this macro 
because we will\r\n> > > > > want to return InvalidPid anyway if the given pid matches.\r\n> > > > >\r\n> > > > > So the inner condition can just say:\r\n> > > > >\r\n> > > > > if (w->proc && w->proc->pid == pid) { leader_pid =\r\n> > > > > w->leader_pid; break; }\r\n> > > > >\r\n> > > >\r\n> > > > Yeah, this should also work but I feel the current one is explicit\r\n> > > > and more clear.\r\n> > >\r\n> > > OK.\r\n> > >\r\n> > > But, I have one last comment about this function -- I saw there are\r\n> > > already other functions that iterate max_logical_replication_workers\r\n> > > like this looking for things:\r\n> > > - logicalrep_worker_find\r\n> > > - logicalrep_workers_find\r\n> > > - logicalrep_worker_launch\r\n> > > - logicalrep_sync_worker_count\r\n> > >\r\n> > > So I felt this new function (currently called\r\n> > > GetLeaderApplyWorkerPid) ought to be named similarly to those ones.\r\n> > > e.g. call it something like \"logicalrep_worker_find_pa_leader_pid\".\r\n> > >\r\n> >\r\n> > I am not sure we can use the name, because currently all the API name\r\n> > in launcher that used by other module(not related to subscription) are\r\n> > like AxxBxx style(see the functions in logicallauncher.h).\r\n> > logicalrep_worker_xxx style functions are currently only declared in\r\n> > worker_internal.h.\r\n> >\r\n> \r\n> OK. 
I didn't know there was another header convention that you were following.\r\n> In that case, it is fine to leave the name as-is.\r\n\r\nThanks for confirming!\r\n\r\nAttach the new version 0001 patch which addressed all other comments.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 17 Jan 2023 03:37:07 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 2:37 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 17, 2023 11:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 1:21 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, January 17, 2023 5:43 AM Peter Smith\n> > <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila\n> > > > <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith\n> > > > > <smithpb2250@gmail.com>\n> > > > wrote:\n> > > > > >\n> > > > > > 2.\n> > > > > >\n> > > > > > /*\n> > > > > > + * Return the pid of the leader apply worker if the given pid\n> > > > > > +is the pid of a\n> > > > > > + * parallel apply worker, otherwise return InvalidPid.\n> > > > > > + */\n> > > > > > +pid_t\n> > > > > > +GetLeaderApplyWorkerPid(pid_t pid) { int leader_pid =\n> > > > > > +InvalidPid; int i;\n> > > > > > +\n> > > > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > > > > > +\n> > > > > > + for (i = 0; i < max_logical_replication_workers; i++) {\n> > > > > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> > > > > > +\n> > > > > > + if (isParallelApplyWorker(w) && w->proc && pid ==\n> > > > > > + w->proc->pid) { leader_pid = w->leader_pid; break; } }\n> > > > > > +\n> > > > > > + LWLockRelease(LogicalRepWorkerLock);\n> > > > > > +\n> > > > > > + return 
leader_pid;\n> > > > > > +}\n> > > > > >\n> > > > > > 2a.\n> > > > > > IIUC the IsParallelApplyWorker macro does nothing except check\n> > > > > > that the leader_pid is not InvalidPid anyway, so AFAIK this\n> > > > > > algorithm does not benefit from using this macro because we will\n> > > > > > want to return InvalidPid anyway if the given pid matches.\n> > > > > >\n> > > > > > So the inner condition can just say:\n> > > > > >\n> > > > > > if (w->proc && w->proc->pid == pid) { leader_pid =\n> > > > > > w->leader_pid; break; }\n> > > > > >\n> > > > >\n> > > > > Yeah, this should also work but I feel the current one is explicit\n> > > > > and more clear.\n> > > >\n> > > > OK.\n> > > >\n> > > > But, I have one last comment about this function -- I saw there are\n> > > > already other functions that iterate max_logical_replication_workers\n> > > > like this looking for things:\n> > > > - logicalrep_worker_find\n> > > > - logicalrep_workers_find\n> > > > - logicalrep_worker_launch\n> > > > - logicalrep_sync_worker_count\n> > > >\n> > > > So I felt this new function (currently called\n> > > > GetLeaderApplyWorkerPid) ought to be named similarly to those ones.\n> > > > e.g. call it something like \"logicalrep_worker_find_pa_leader_pid\".\n> > > >\n> > >\n> > > I am not sure we can use the name, because currently all the API name\n> > > in launcher that used by other module(not related to subscription) are\n> > > like AxxBxx style(see the functions in logicallauncher.h).\n> > > logicalrep_worker_xxx style functions are currently only declared in\n> > > worker_internal.h.\n> > >\n> >\n> > OK. I didn't know there was another header convention that you were following.\n> > In that case, it is fine to leave the name as-is.\n>\n> Thanks for confirming!\n>\n> Attach the new version 0001 patch which addressed all other comments.\n>\n\nOK. 
I checked the differences between patches v81-0001/v82-0001 and\nfound everything I was expecting to see.\n\nI have no more review comments for v82-0001.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:48:34 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 9:07 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 17, 2023 11:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 1:21 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, January 17, 2023 5:43 AM Peter Smith\n> > <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila\n> > > > <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith\n> > > > > <smithpb2250@gmail.com>\n> > > > wrote:\n> > > > > >\n> > > > > > 2.\n> > > > > >\n> > > > > > /*\n> > > > > > + * Return the pid of the leader apply worker if the given pid\n> > > > > > +is the pid of a\n> > > > > > + * parallel apply worker, otherwise return InvalidPid.\n> > > > > > + */\n> > > > > > +pid_t\n> > > > > > +GetLeaderApplyWorkerPid(pid_t pid) { int leader_pid =\n> > > > > > +InvalidPid; int i;\n> > > > > > +\n> > > > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > > > > > +\n> > > > > > + for (i = 0; i < max_logical_replication_workers; i++) {\n> > > > > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> > > > > > +\n> > > > > > + if (isParallelApplyWorker(w) && w->proc && pid ==\n> > > > > > + w->proc->pid) { leader_pid = w->leader_pid; break; } }\n> > > > > > +\n> > > > > > + LWLockRelease(LogicalRepWorkerLock);\n> > > > > > +\n> > > > > > + return leader_pid;\n> > > > > > +}\n> > > > > >\n> > > > > > 2a.\n> > > > > > 
IIUC the IsParallelApplyWorker macro does nothing except check\n> > > > > > that the leader_pid is not InvalidPid anyway, so AFAIK this\n> > > > > > algorithm does not benefit from using this macro because we will\n> > > > > > want to return InvalidPid anyway if the given pid matches.\n> > > > > >\n> > > > > > So the inner condition can just say:\n> > > > > >\n> > > > > > if (w->proc && w->proc->pid == pid) { leader_pid =\n> > > > > > w->leader_pid; break; }\n> > > > > >\n> > > > >\n> > > > > Yeah, this should also work but I feel the current one is explicit\n> > > > > and more clear.\n> > > >\n> > > > OK.\n> > > >\n> > > > But, I have one last comment about this function -- I saw there are\n> > > > already other functions that iterate max_logical_replication_workers\n> > > > like this looking for things:\n> > > > - logicalrep_worker_find\n> > > > - logicalrep_workers_find\n> > > > - logicalrep_worker_launch\n> > > > - logicalrep_sync_worker_count\n> > > >\n> > > > So I felt this new function (currently called\n> > > > GetLeaderApplyWorkerPid) ought to be named similarly to those ones.\n> > > > e.g. call it something like \"logicalrep_worker_find_pa_leader_pid\".\n> > > >\n> > >\n> > > I am not sure we can use the name, because currently all the API name\n> > > in launcher that used by other module(not related to subscription) are\n> > > like AxxBxx style(see the functions in logicallauncher.h).\n> > > logicalrep_worker_xxx style functions are currently only declared in\n> > > worker_internal.h.\n> > >\n> >\n> > OK. I didn't know there was another header convention that you were following.\n> > In that case, it is fine to leave the name as-is.\n>\n> Thanks for confirming!\n>\n> Attach the new version 0001 patch which addressed all other comments.\n>\n> Best regards,\n> Hou zj\n\nHello Hou-san,\n\n1. Do we need to extend test-cases to review the leader_pid column in\npg_stats tables?\n2. 
Do we need to follow the naming convention for\n'GetLeaderApplyWorkerPid' like other functions in the same file which\nstart with 'logicalrep_'?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 17 Jan 2023 10:03:30 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 8:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jan 16, 2023 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Okay, I have added the comments in get_transaction_apply_action() and\n> > > updated the comments to refer to the enum TransApplyAction where all\n> > > the actions are explained.\n> >\n> > Thank you for the patch.\n> >\n> > @@ -1710,6 +1712,7 @@ apply_handle_stream_stop(StringInfo s)\n> > }\n> >\n> > in_streamed_transaction = false;\n> > + stream_xid = InvalidTransactionId;\n> >\n> > We reset stream_xid also in stream_close_file() but probably it's no\n> > longer necessary?\n> >\n>\n> I think so.\n>\n> > How about adding an assertion in apply_handle_stream_start() to make\n> > sure the stream_xid is invalid?\n> >\n>\n> I think it would be better to add such an assert in\n> apply_handle_begin/apply_handle_begin_prepare because there won't be a\n> problem if we get a start_stream message even when stream_xid is valid.\n> However, maybe it is better to add in all three functions\n> (apply_handle_begin/apply_handle_begin_prepare/apply_handle_stream_start).\n> What do you think?\n>\n> > ---\n> > It's not related to this issue but I realized that if the action\n> > returned by get_transaction_apply_action() is not handled in the\n> > switch statement, we do only Assert(false). 
Is it better to raise an\n> > error like \"unexpected apply action %d\" just in case in order to\n> > detect failure cases also in the production environment?\n> >\n>\n> Yeah, that may be better. Shall we do that as part of this patch only\n> or as a separate patch?\n>\n\nPlease find attached the updated patches to address the above\ncomments. I think we can combine and commit them as one patch as both\nare related.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 17 Jan 2023 10:25:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 8:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 16, 2023 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Okay, I have added the comments in get_transaction_apply_action() and\n> > > > updated the comments to refer to the enum TransApplyAction where all\n> > > > the actions are explained.\n> > >\n> > > Thank you for the patch.\n> > >\n> > > @@ -1710,6 +1712,7 @@ apply_handle_stream_stop(StringInfo s)\n> > > }\n> > >\n> > > in_streamed_transaction = false;\n> > > + stream_xid = InvalidTransactionId;\n> > >\n> > > We reset stream_xid also in stream_close_file() but probably it's no\n> > > longer necessary?\n> > >\n> >\n> > I think so.\n> >\n> > > How about adding an assertion in apply_handle_stream_start() to make\n> > > sure the stream_xid is invalid?\n> > >\n> >\n> > I think it would be better to add such an assert in\n> > apply_handle_begin/apply_handle_begin_prepare because there won't be a\n> > problem if we start_stream message even when stream_xid is valid.\n> > However, maybe it is better to add in all three functions\n> 
> (apply_handle_begin/apply_handle_begin_prepare/apply_handle_stream_start).\n> > What do you think?\n> >\n> > > ---\n> > > It's not related to this issue but I realized that if the action\n> > > returned by get_transaction_apply_action() is not handled in the\n> > > switch statement, we do only Assert(false). Is it better to raise an\n> > > error like \"unexpected apply action %d\" just in case in order to\n> > > detect failure cases also in the production environment?\n> > >\n> >\n> > Yeah, that may be better. Shall we do that as part of this patch only\n> > or as a separate patch?\n> >\n>\n> Please find attached the updated patches to address the above\n> comments. I think we can combine and commit them as one patch as both\n> are related.\n\nThank you for the patches! Looks good to me. And +1 to merge them.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:07:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 17, 2023 12:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 17, 2023 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Tue, Jan 17, 2023 at 8:35 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Jan 16, 2023 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > Okay, I have added the comments in get_transaction_apply_action()\r\n> > > > and updated the comments to refer to the enum TransApplyAction\r\n> > > > where all the actions are explained.\r\n> > >\r\n> > > Thank you for the patch.\r\n> > >\r\n> > > @@ -1710,6 +1712,7 @@ apply_handle_stream_stop(StringInfo s)\r\n> > > }\r\n> > >\r\n> > > in_streamed_transaction = false;\r\n> > > + stream_xid = InvalidTransactionId;\r\n> > >\r\n> > > We reset 
stream_xid also in stream_close_file() but probably it's no\r\n> > > longer necessary?\r\n> > >\r\n> >\r\n> > I think so.\r\n> >\r\n> > > How about adding an assertion in apply_handle_stream_start() to make\r\n> > > sure the stream_xid is invalid?\r\n> > >\r\n> >\r\n> > I think it would be better to add such an assert in\r\n> > apply_handle_begin/apply_handle_begin_prepare because there won't be a\r\n> > problem if we start_stream message even when stream_xid is valid.\r\n> > However, maybe it is better to add in all three functions\r\n> >\r\n> (apply_handle_begin/apply_handle_begin_prepare/apply_handle_stream_star\r\n> t).\r\n> > What do you think?\r\n> >\r\n> > > ---\r\n> > > It's not related to this issue but I realized that if the action\r\n> > > returned by get_transaction_apply_action() is not handled in the\r\n> > > switch statement, we do only Assert(false). Is it better to raise an\r\n> > > error like \"unexpected apply action %d\" just in case in order to\r\n> > > detect failure cases also in the production environment?\r\n> > >\r\n> >\r\n> > Yeah, that may be better. Shall we do that as part of this patch only\r\n> > or as a separate patch?\r\n> >\r\n> \r\n> Please find attached the updated patches to address the above comments. 
I\r\n> think we can combine and commit them as one patch as both are related.\r\n\r\nThanks for fixing these.\r\nI have confirmed that all regression tests passed after applying the patches.\r\nAnd the patches look good to me.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 17 Jan 2023 06:05:53 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 12:37 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 17, 2023 11:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 1:21 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, January 17, 2023 5:43 AM Peter Smith\n> > <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 16, 2023 at 5:41 PM Amit Kapila\n> > > > <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > On Mon, Jan 16, 2023 at 10:24 AM Peter Smith\n> > > > > <smithpb2250@gmail.com>\n> > > > wrote:\n> > > > > >\n> > > > > > 2.\n> > > > > >\n> > > > > > /*\n> > > > > > + * Return the pid of the leader apply worker if the given pid\n> > > > > > +is the pid of a\n> > > > > > + * parallel apply worker, otherwise return InvalidPid.\n> > > > > > + */\n> > > > > > +pid_t\n> > > > > > +GetLeaderApplyWorkerPid(pid_t pid) { int leader_pid =\n> > > > > > +InvalidPid; int i;\n> > > > > > +\n> > > > > > + LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > > > > > +\n> > > > > > + for (i = 0; i < max_logical_replication_workers; i++) {\n> > > > > > + LogicalRepWorker *w = &LogicalRepCtx->workers[i];\n> > > > > > +\n> > > > > > + if (isParallelApplyWorker(w) && w->proc && pid ==\n> > > > > > + w->proc->pid) { leader_pid = w->leader_pid; break; } }\n> > > > > > +\n> > > > > > + LWLockRelease(LogicalRepWorkerLock);\n> > > > > > +\n> > > > > > + return 
leader_pid;\n> > > > > > +}\n> > > > > >\n> > > > > > 2a.\n> > > > > > IIUC the IsParallelApplyWorker macro does nothing except check\n> > > > > > that the leader_pid is not InvalidPid anyway, so AFAIK this\n> > > > > > algorithm does not benefit from using this macro because we will\n> > > > > > want to return InvalidPid anyway if the given pid matches.\n> > > > > >\n> > > > > > So the inner condition can just say:\n> > > > > >\n> > > > > > if (w->proc && w->proc->pid == pid) { leader_pid =\n> > > > > > w->leader_pid; break; }\n> > > > > >\n> > > > >\n> > > > > Yeah, this should also work but I feel the current one is explicit\n> > > > > and more clear.\n> > > >\n> > > > OK.\n> > > >\n> > > > But, I have one last comment about this function -- I saw there are\n> > > > already other functions that iterate max_logical_replication_workers\n> > > > like this looking for things:\n> > > > - logicalrep_worker_find\n> > > > - logicalrep_workers_find\n> > > > - logicalrep_worker_launch\n> > > > - logicalrep_sync_worker_count\n> > > >\n> > > > So I felt this new function (currently called\n> > > > GetLeaderApplyWorkerPid) ought to be named similarly to those ones.\n> > > > e.g. call it something like \"logicalrep_worker_find_pa_leader_pid\".\n> > > >\n> > >\n> > > I am not sure we can use the name, because currently all the API name\n> > > in launcher that used by other module(not related to subscription) are\n> > > like AxxBxx style(see the functions in logicallauncher.h).\n> > > logicalrep_worker_xxx style functions are currently only declared in\n> > > worker_internal.h.\n> > >\n> >\n> > OK. I didn't know there was another header convention that you were following.\n> > In that case, it is fine to leave the name as-is.\n>\n> Thanks for confirming!\n>\n> Attach the new version 0001 patch which addressed all other comments.\n>\n\nThank you for updating the patch. 
Here is one comment:\n\n@@ -426,14 +427,24 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n\n /*\n * Show the leader only for active\nparallel workers. This\n- * leaves the field as NULL for the\nleader of a parallel\n- * group.\n+ * leaves the field as NULL for the\nleader of a parallel group\n+ * or the leader of parallel apply workers.\n */\n if (leader && leader->pid !=\nbeentry->st_procpid)\n {\n values[28] = Int32GetDatum(leader->pid);\n nulls[28] = false;\n }\n+ else\n+ {\n+ int\nleader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\n+\n+ if (leader_pid != InvalidPid)\n+ {\n+ values[28] =\nInt32GetDatum(leader_pid);\n+ nulls[28] = false;\n+ }\n+ }\n }\n\nI'm slightly concerned that there could be overhead of executing\nGetLeaderApplyWorkerPid () for every backend process except for\nparallel query workers. The number of such backends could be large and\nGetLeaderApplyWorkerPid() acquires the lwlock. For example, does it\nmake sense to check (st_backendType == B_BG_WORKER) before calling\nGetLeaderApplyWorkerPid()? Or it might not be a problem since it's\nLogicalRepWorkerLock which is not likely to be contended.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 15:46:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 17, 2023 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 17, 2023 at 12:37 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > Attach the new version 0001 patch which addressed all other comments.\r\n> >\r\n> \r\n> Thank you for updating the patch. Here is one comment:\r\n> \r\n> @@ -426,14 +427,24 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\r\n> \r\n> /*\r\n> * Show the leader only for active parallel\r\n> workers. 
This\r\n> - * leaves the field as NULL for the\r\n> leader of a parallel\r\n> - * group.\r\n> + * leaves the field as NULL for the\r\n> leader of a parallel group\r\n> + * or the leader of parallel apply workers.\r\n> */\r\n> if (leader && leader->pid !=\r\n> beentry->st_procpid)\r\n> {\r\n> values[28] =\r\n> Int32GetDatum(leader->pid);\r\n> nulls[28] = false;\r\n> }\r\n> + else\r\n> + {\r\n> + int\r\n> leader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\r\n> +\r\n> + if (leader_pid != InvalidPid)\r\n> + {\r\n> + values[28] =\r\n> Int32GetDatum(leader_pid);\r\n> + nulls[28] = false;\r\n> + }\r\n> + }\r\n> }\r\n> \r\n> I'm slightly concerned that there could be overhead of executing\r\n> GetLeaderApplyWorkerPid () for every backend process except for parallel\r\n> query workers. The number of such backends could be large and\r\n> GetLeaderApplyWorkerPid() acquires the lwlock. For example, does it make\r\n> sense to check (st_backendType == B_BG_WORKER) before calling\r\n> GetLeaderApplyWorkerPid()? Or it might not be a problem since it's\r\n> LogicalRepWorkerLock which is not likely to be contended.\r\n\r\nThanks for the comment and I think your suggestion makes sense.\r\nI have added the check before getting the leader pid. Here is the new version patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 17 Jan 2023 09:14:44 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 17, 2023 12:34 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> \r\n> On Tue, Jan 17, 2023 at 9:07 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, January 17, 2023 11:32 AM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> > > OK. 
I didn't know there was another header convention that you were\r\n> > > following.\r\n> > > In that case, it is fine to leave the name as-is.\r\n> >\r\n> > Thanks for confirming!\r\n> >\r\n> > Attach the new version 0001 patch which addressed all other comments.\r\n> >\r\n> > Best regards,\r\n> > Hou zj\r\n> \r\n> Hello Hou-san,\r\n> \r\n> 1. Do we need to extend test-cases to review the leader_pid column in pg_stats\r\n> tables?\r\n\r\nThanks for the comments.\r\n\r\nWe currently don't have any tests for the view, so I feel we can extend\r\nthem later as a separate patch.\r\n\r\n> 2. Do we need to follow the naming convention for\r\n> 'GetLeaderApplyWorkerPid' like other functions in the same file which starts\r\n> with 'logicalrep_'\r\n\r\nWe have agreed [1] to follow the naming convention for functions in logicallauncher.h\r\nwhich are mainly used for other modules.\r\n\r\n[1] https://www.postgresql.org/message-id/CAHut%2BPtgj%3DDY8F1cMBRUxsZtq2-faW%3D%3D5-dSuHSPJGx1a_vBFQ%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 17 Jan 2023 09:15:06 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 6:14 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, January 17, 2023 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 12:37 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > Attach the new version 0001 patch which addressed all other comments.\n> > >\n> >\n> > Thank you for updating the patch. Here is one comment:\n> >\n> > @@ -426,14 +427,24 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> >\n> > /*\n> > * Show the leader only for active parallel\n> > workers. 
This\n> > - * leaves the field as NULL for the\n> > leader of a parallel\n> > - * group.\n> > + * leaves the field as NULL for the\n> > leader of a parallel group\n> > + * or the leader of parallel apply workers.\n> > */\n> > if (leader && leader->pid !=\n> > beentry->st_procpid)\n> > {\n> > values[28] =\n> > Int32GetDatum(leader->pid);\n> > nulls[28] = false;\n> > }\n> > + else\n> > + {\n> > + int\n> > leader_pid = GetLeaderApplyWorkerPid(beentry->st_procpid);\n> > +\n> > + if (leader_pid != InvalidPid)\n> > + {\n> > + values[28] =\n> > Int32GetDatum(leader_pid);\n> > + nulls[28] = false;\n> > + }\n> > + }\n> > }\n> >\n> > I'm slightly concerned that there could be overhead of executing\n> > GetLeaderApplyWorkerPid () for every backend process except for parallel\n> > query workers. The number of such backends could be large and\n> > GetLeaderApplyWorkerPid() acquires the lwlock. For example, does it make\n> > sense to check (st_backendType == B_BG_WORKER) before calling\n> > GetLeaderApplyWorkerPid()? Or it might not be a problem since it's\n> > LogicalRepWorkerLock which is not likely to be contended.\n>\n> Thanks for the comment and I think your suggestion makes sense.\n> I have added the check before getting the leader pid. Here is the new version patch.\n\nThank you for updating the patch. 
Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 23:37:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 17, 2023 at 8:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 6:14 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, January 17, 2023 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 17, 2023 at 12:37 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > I'm slightly concerned that there could be overhead of executing\n> > > GetLeaderApplyWorkerPid () for every backend process except for parallel\n> > > query workers. The number of such backends could be large and\n> > > GetLeaderApplyWorkerPid() acquires the lwlock. For example, does it make\n> > > sense to check (st_backendType == B_BG_WORKER) before calling\n> > > GetLeaderApplyWorkerPid()? Or it might not be a problem since it's\n> > > LogicalRepWorkerLock which is not likely to be contended.\n> >\n> > Thanks for the comment and I think your suggestion makes sense.\n> > I have added the check before getting the leader pid. Here is the new version patch.\n>\n> Thank you for updating the patch. 
Looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:05:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 18, 2023 12:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jan 17, 2023 at 8:07 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jan 17, 2023 at 6:14 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tuesday, January 17, 2023 2:46 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > On Tue, Jan 17, 2023 at 12:37 PM houzj.fnst@fujitsu.com\r\n> > > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > > I'm slightly concerned that there could be overhead of executing\r\n> > > > GetLeaderApplyWorkerPid () for every backend process except for parallel\r\n> > > > query workers. The number of such backends could be large and\r\n> > > > GetLeaderApplyWorkerPid() acquires the lwlock. For example, does it\r\n> make\r\n> > > > sense to check (st_backendType == B_BG_WORKER) before calling\r\n> > > > GetLeaderApplyWorkerPid()? Or it might not be a problem since it's\r\n> > > > LogicalRepWorkerLock which is not likely to be contended.\r\n> > >\r\n> > > Thanks for the comment and I think your suggestion makes sense.\r\n> > > I have added the check before getting the leader pid. Here is the new\r\n> version patch.\r\n> >\r\n> > Thank you for updating the patch. 
Looks good to me.\r\n> >\r\n> \r\n> Pushed.\r\n\r\nRebased and attach remaining patches for reviewing.\r\n\r\nRegards,\r\nWang Wei", "msg_date": "Wed, 18 Jan 2023 04:48:01 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 13, 2023 at 11:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v79-0002.\n>\n\nSo, this is about the latest v84-0001-Stop-extra-worker-if-GUC-was-changed.\n\n> ======\n>\n> General\n>\n> 1.\n>\n> I saw that earlier in this thread Hou-san [1] and Amit [2] also seemed\n> to say there is not much point for this patch.\n>\n> So I wanted to +1 that same opinion.\n>\n> I feel this patch just adds more complexity for almost no gain:\n> - reducing the 'max_apply_workers_per_suibscription' seems not very\n> common in the first place.\n> - even when the GUC is reduced, at that point in time all the workers\n> might be in use so there may be nothing that can be immediately done.\n> - IIUC the excess workers (for a reduced GUC) are going to get freed\n> naturally anyway over time as more transactions are completed so the\n> pool size will reduce accordingly.\n>\n\nI am still not sure if it is worth pursuing this patch because of the\nabove reasons. I don't think it would be difficult to add this even at\na later point in time if we really see a use case for this.\nSawada-San, IIRC, you raised this point. What do you think?\n\nThe other point I am wondering is whether we can have a different way\nto test partial serialization apart from introducing another developer\nGUC (stream_serialize_threshold). One possibility could be that we can\nhave a subscription option (parallel_send_timeout or something like\nthat) with some default value (current_timeout used in the patch)\nwhich will be used only when streaming = parallel. 
Users may want to\nwait for more time before serialization starts depending on the\nworkload (say when resource usage is high on a subscriber-side\nmachine, or there are concurrent long-running transactions that can\nblock parallel apply for a bit longer time). I know with this as well\nit may not be straightforward to test the functionality because we\ncan't be sure how many changes would be required for a timeout to\noccur. This is just for brainstorming other options to test the\npartial serialization functionality.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 18 Jan 2023 12:09:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 18, 2023 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 13, 2023 at 11:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are some review comments for patch v79-0002.\n> >\n>\n> So, this is about the latest v84-0001-Stop-extra-worker-if-GUC-was-changed.\n>\n> >\n> > I feel this patch just adds more complexity for almost no gain:\n> > - reducing the 'max_apply_workers_per_suibscription' seems not very\n> > common in the first place.\n> > - even when the GUC is reduced, at that point in time all the workers\n> > might be in use so there may be nothing that can be immediately done.\n> > - IIUC the excess workers (for a reduced GUC) are going to get freed\n> > naturally anyway over time as more transactions are completed so the\n> > pool size will reduce accordingly.\n> >\n>\n> I am still not sure if it is worth pursuing this patch because of the\n> above reasons. I don't think it would be difficult to add this even at\n> a later point in time if we really see a use case for this.\n> Sawada-San, IIRC, you raised this point. 
What do you think?\n>\n> The other point I am wondering is whether we can have a different way\n> to test partial serialization apart from introducing another developer\n> GUC (stream_serialize_threshold). One possibility could be that we can\n> have a subscription option (parallel_send_timeout or something like\n> that) with some default value (current_timeout used in the patch)\n> which will be used only when streaming = parallel. Users may want to\n> wait for more time before serialization starts depending on the\n> workload (say when resource usage is high on a subscriber-side\n> machine, or there are concurrent long-running transactions that can\n> block parallel apply for a bit longer time). I know with this as well\n> it may not be straightforward to test the functionality because we\n> can't be sure how many changes would be required for a timeout to\n> occur. This is just for brainstorming other options to test the\n> partial serialization functionality.\n>\n\nApart from the above, we can also have a subscription option to\nspecify parallel_shm_queue_size (queue_size used to determine the\nqueue between the leader and parallel worker) and that can be used for\nthis purpose. Basically, configuring it to a smaller value can help in\nreducing the test time but still, it will not eliminate the need for\ndependency on timing we have to wait before switching to partial\nserialize mode. I think this can be used in production as well to tune\nthe performance depending on workload.\n\nYet another way is to use the existing parameter logical_decode_mode\n[1]. If the value of logical_decoding_mode is 'immediate', then we can\nimmediately switch to partial serialize mode. This will eliminate the\ndependency on timing. The one argument against using this is that it\nwon't be as clear as a separate parameter like\n'stream_serialize_threshold' proposed by the patch but OTOH we already\nhave a few parameters that serve a different purpose when used on the\nsubscriber. 
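Mechanically, reusing the GUC on the subscriber could amount to a short-circuit before the shared-memory send. A simplified, self-contained sketch follows (names are illustrative; the real logic would sit in the leader apply worker's send path):

```c
#include <stdbool.h>

typedef enum
{
    MODE_BUFFERED,
    MODE_IMMEDIATE
} ReplicationMode;

/* Stand-in for the GUC; in the backend this would be set by the GUC machinery. */
static ReplicationMode logical_replication_mode = MODE_BUFFERED;

/*
 * Returns true if the change is (notionally) sent to the parallel apply
 * worker's shared-memory queue, false if the caller must serialize it to
 * a file instead.  In immediate mode the queue is never used, which
 * forces the partial-serialization code path deterministically.
 */
static bool
sketch_send_data(bool queue_has_room)
{
    if (logical_replication_mode == MODE_IMMEDIATE)
        return false;           /* force serialization, for testing */
    return queue_has_room;
}
```

Because the check happens before any queue interaction, the serialization path is taken deterministically, which is what makes this attractive for tests.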
For example, 'max_replication_slots' is used to define the\nmaximum number of replication slots on the publisher and the maximum\nnumber of origins on subscribers. Similarly,\nwal_retrieve_retry_interval' is used for different purposes on\nsubscriber and standby nodes.\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Jan 2023 11:10:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 19, 2023 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 13, 2023 at 11:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Here are some review comments for patch v79-0002.\n> > >\n> >\n> > So, this is about the latest v84-0001-Stop-extra-worker-if-GUC-was-changed.\n> >\n> > >\n> > > I feel this patch just adds more complexity for almost no gain:\n> > > - reducing the 'max_apply_workers_per_suibscription' seems not very\n> > > common in the first place.\n> > > - even when the GUC is reduced, at that point in time all the workers\n> > > might be in use so there may be nothing that can be immediately done.\n> > > - IIUC the excess workers (for a reduced GUC) are going to get freed\n> > > naturally anyway over time as more transactions are completed so the\n> > > pool size will reduce accordingly.\n> > >\n> >\n> > I am still not sure if it is worth pursuing this patch because of the\n> > above reasons. I don't think it would be difficult to add this even at\n> > a later point in time if we really see a use case for this.\n> > Sawada-San, IIRC, you raised this point. 
What do you think?\n> >\n> > The other point I am wondering is whether we can have a different way\n> > to test partial serialization apart from introducing another developer\n> > GUC (stream_serialize_threshold). One possibility could be that we can\n> > have a subscription option (parallel_send_timeout or something like\n> > that) with some default value (current_timeout used in the patch)\n> > which will be used only when streaming = parallel. Users may want to\n> > wait for more time before serialization starts depending on the\n> > workload (say when resource usage is high on a subscriber-side\n> > machine, or there are concurrent long-running transactions that can\n> > block parallel apply for a bit longer time). I know with this as well\n> > it may not be straightforward to test the functionality because we\n> > can't be sure how many changes would be required for a timeout to\n> > occur. This is just for brainstorming other options to test the\n> > partial serialization functionality.\n> >\n>\n> Apart from the above, we can also have a subscription option to\n> specify parallel_shm_queue_size (queue_size used to determine the\n> queue between the leader and parallel worker) and that can be used for\n> this purpose. Basically, configuring it to a smaller value can help in\n> reducing the test time but still, it will not eliminate the need for\n> dependency on timing we have to wait before switching to partial\n> serialize mode. I think this can be used in production as well to tune\n> the performance depending on workload.\n>\n> Yet another way is to use the existing parameter logical_decode_mode\n> [1]. If the value of logical_decoding_mode is 'immediate', then we can\n> immediately switch to partial serialize mode. This will eliminate the\n> dependency on timing. 
The one argument against using this is that it\n> won't be as clear as a separate parameter like\n> 'stream_serialize_threshold' proposed by the patch but OTOH we already\n> have a few parameters that serve a different purpose when used on the\n> subscriber. For example, 'max_replication_slots' is used to define the\n> maximum number of replication slots on the publisher and the maximum\n> number of origins on subscribers. Similarly,\n> wal_retrieve_retry_interval' is used for different purposes on\n> subscriber and standby nodes.\n>\n> [1] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n>\n> --\n> With Regards,\n> Amit Kapila.\n\nHi Amit,\n\nOn rethinking the complete model, what I feel is that the name\nlogical_decoding_mode is not something which defines modes of logical\ndecoding. We, I think, picked it based on logical_decoding_work_mem.\nAs per current implementation, the parameter 'logical_decoding_mode'\ntells what happens when work-memory used by logical decoding reaches\nits limit. So it is in-fact 'logicalrep_workmem_vacate_mode' or\n'logicalrep_trans_eviction_mode'. So if it is set to immediate,\nmeaning vacate the work-memory immediately or evict transactions\nimmediately. Add buffered means the reverse (i.e. keep on buffering\ntransactions until we reach a limit). Now coming to subscribers, we\ncan reuse the same parameter. On subscriber as well, shared-memory\nqueue could be considered as its workmem and thus the name\n'logicalrep_workmem_vacate_mode' might look more relevant.\n\nOn publisher:\nlogicalrep_workmem_vacate_mode=immediate, buffered.\n\nOn subscriber:\nlogicalrep_workmem_vacate_mode=stream_serialize (or if we want to\nkeep immediate here too, that will also be fine).\n\nThoughts?\nAnd I am assuming it is possible to change the GUC name before the\ncoming release. 
If not, please let me know, we can brainstorm other\nideas.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:44:08 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 19, 2023 at 3:44 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 18, 2023 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 13, 2023 at 11:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > Here are some review comments for patch v79-0002.\n> > > >\n> > >\n> > > So, this is about the latest v84-0001-Stop-extra-worker-if-GUC-was-changed.\n> > >\n> > > >\n> > > > I feel this patch just adds more complexity for almost no gain:\n> > > > - reducing the 'max_apply_workers_per_suibscription' seems not very\n> > > > common in the first place.\n> > > > - even when the GUC is reduced, at that point in time all the workers\n> > > > might be in use so there may be nothing that can be immediately done.\n> > > > - IIUC the excess workers (for a reduced GUC) are going to get freed\n> > > > naturally anyway over time as more transactions are completed so the\n> > > > pool size will reduce accordingly.\n> > > >\n> > >\n> > > I am still not sure if it is worth pursuing this patch because of the\n> > > above reasons. I don't think it would be difficult to add this even at\n> > > a later point in time if we really see a use case for this.\n> > > Sawada-San, IIRC, you raised this point. What do you think?\n> > >\n> > > The other point I am wondering is whether we can have a different way\n> > > to test partial serialization apart from introducing another developer\n> > > GUC (stream_serialize_threshold). 
One possibility could be that we can\n> > > have a subscription option (parallel_send_timeout or something like\n> > > that) with some default value (current_timeout used in the patch)\n> > > which will be used only when streaming = parallel. Users may want to\n> > > wait for more time before serialization starts depending on the\n> > > workload (say when resource usage is high on a subscriber-side\n> > > machine, or there are concurrent long-running transactions that can\n> > > block parallel apply for a bit longer time). I know with this as well\n> > > it may not be straightforward to test the functionality because we\n> > > can't be sure how many changes would be required for a timeout to\n> > > occur. This is just for brainstorming other options to test the\n> > > partial serialization functionality.\n> > >\n> >\n> > Apart from the above, we can also have a subscription option to\n> > specify parallel_shm_queue_size (queue_size used to determine the\n> > queue between the leader and parallel worker) and that can be used for\n> > this purpose. Basically, configuring it to a smaller value can help in\n> > reducing the test time but still, it will not eliminate the need for\n> > dependency on timing we have to wait before switching to partial\n> > serialize mode. I think this can be used in production as well to tune\n> > the performance depending on workload.\n> >\n> > Yet another way is to use the existing parameter logical_decode_mode\n> > [1]. If the value of logical_decoding_mode is 'immediate', then we can\n> > immediately switch to partial serialize mode. This will eliminate the\n> > dependency on timing. The one argument against using this is that it\n> > won't be as clear as a separate parameter like\n> > 'stream_serialize_threshold' proposed by the patch but OTOH we already\n> > have a few parameters that serve a different purpose when used on the\n> > subscriber. 
For example, 'max_replication_slots' is used to define the\n> > maximum number of replication slots on the publisher and the maximum\n> > number of origins on subscribers. Similarly,\n> > wal_retrieve_retry_interval' is used for different purposes on\n> > subscriber and standby nodes.\n> >\n> > [1] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n> >\n> > --\n> > With Regards,\n> > Amit Kapila.\n>\n> Hi Amit,\n>\n> On rethinking the complete model, what I feel is that the name\n> logical_decoding_mode is not something which defines modes of logical\n> decoding. We, I think, picked it based on logical_decoding_work_mem.\n> As per current implementation, the parameter 'logical_decoding_mode'\n> tells what happens when work-memory used by logical decoding reaches\n> its limit. So it is in-fact 'logicalrep_workmem_vacate_mode' or\n\nMinor correction in what I said earlier:\nAs per current implementation, the parameter 'logical_decoding_mode'\nmore or less tells how to deal with workmem i.e. to keep it buffering\nwith txns until it reaches its limit or immediately vacate it.\n\n> 'logicalrep_trans_eviction_mode'. So if it is set to immediate,\n> meaning vacate the work-memory immediately or evict transactions\n> immediately. Add buffered means the reverse (i.e. keep on buffering\n> transactions until we reach a limit). Now coming to subscribers, we\n> can reuse the same parameter. On subscriber as well, shared-memory\n> queue could be considered as its workmem and thus the name\n> 'logicalrep_workmem_vacate_mode' might look more relevant.\n>\n> On publisher:\n> logicalrep_workmem_vacate_mode=immediate, buffered.\n>\n> On subscriber:\n> logicalrep_workmem_vacate_mode=stream_serialize (or if we want to\n> keep immediate here too, that will also be fine).\n>\n> Thoughts?\n> And I am assuming it is possible to change the GUC name before the\n> coming release. 
If not, please let me know, we can brainstorm other\n> ideas.\n>\n> thanks\n> Shveta\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:52:36 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Jan 19, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 13, 2023 at 11:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Here are some review comments for patch v79-0002.\n> > >\n> >\n> > So, this is about the latest v84-0001-Stop-extra-worker-if-GUC-was-changed.\n> >\n> > >\n> > > I feel this patch just adds more complexity for almost no gain:\n> > > - reducing the 'max_apply_workers_per_suibscription' seems not very\n> > > common in the first place.\n> > > - even when the GUC is reduced, at that point in time all the workers\n> > > might be in use so there may be nothing that can be immediately done.\n> > > - IIUC the excess workers (for a reduced GUC) are going to get freed\n> > > naturally anyway over time as more transactions are completed so the\n> > > pool size will reduce accordingly.\n> > >\n> >\n> > I am still not sure if it is worth pursuing this patch because of the\n> > above reasons. I don't think it would be difficult to add this even at\n> > a later point in time if we really see a use case for this.\n> > Sawada-San, IIRC, you raised this point. What do you think?\n> >\n> > The other point I am wondering is whether we can have a different way\n> > to test partial serialization apart from introducing another developer\n> > GUC (stream_serialize_threshold). 
One possibility could be that we can\n> > have a subscription option (parallel_send_timeout or something like\n> > that) with some default value (current_timeout used in the patch)\n> > which will be used only when streaming = parallel. Users may want to\n> > wait for more time before serialization starts depending on the\n> > workload (say when resource usage is high on a subscriber-side\n> > machine, or there are concurrent long-running transactions that can\n> > block parallel apply for a bit longer time). I know with this as well\n> > it may not be straightforward to test the functionality because we\n> > can't be sure how many changes would be required for a timeout to\n> > occur. This is just for brainstorming other options to test the\n> > partial serialization functionality.\n\nI can see the parallel_send_timeout idea could be somewhat useful, but I'm\nconcerned about whether users can tune this value properly. It's likely to\nindicate something abnormal or locking issues if LA waits to write data to the\nqueue for more than 10s. Also, I think it doesn't make sense to allow\nusers to set this timeout to a very low value. If switching to partial\nserialization mode early is useful in some cases, I think it's better\nto provide it as a new mode, such as streaming = 'parallel-file' etc.\n\n>\n> Apart from the above, we can also have a subscription option to\n> specify parallel_shm_queue_size (queue_size used to determine the\n> queue between the leader and parallel worker) and that can be used for\n> this purpose. Basically, configuring it to a smaller value can help in\n> reducing the test time but still, it will not eliminate the need for\n> dependency on timing we have to wait before switching to partial\n> serialize mode. 
I think this can be used in production as well to tune\n> the performance depending on workload.\n\nA parameter for the queue size is interesting but I agree it will not\neliminate the need for dependency on timing.\n\n>\n> Yet another way is to use the existing parameter logical_decode_mode\n> [1]. If the value of logical_decoding_mode is 'immediate', then we can\n> immediately switch to partial serialize mode. This will eliminate the\n> dependency on timing. The one argument against using this is that it\n> won't be as clear as a separate parameter like\n> 'stream_serialize_threshold' proposed by the patch but OTOH we already\n> have a few parameters that serve a different purpose when used on the\n> subscriber. For example, 'max_replication_slots' is used to define the\n> maximum number of replication slots on the publisher and the maximum\n> number of origins on subscribers. Similarly,\n> wal_retrieve_retry_interval' is used for different purposes on\n> subscriber and standby nodes.\n\nUsing the existing parameter makes sense to me. But if we use\nlogical_decoding_mode also on the subscriber, as Shveta Malik also\nsuggested, probably it's better to rename it so as not to confuse. For\nexample, logical_replication_mode or something.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 15:17:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Jan 20, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > Yet another way is to use the existing parameter logical_decode_mode\n> > [1]. If the value of logical_decoding_mode is 'immediate', then we can\n> > immediately switch to partial serialize mode. This will eliminate the\n> > dependency on timing. 
The one argument against using this is that it\n> > won't be as clear as a separate parameter like\n> > 'stream_serialize_threshold' proposed by the patch but OTOH we already\n> > have a few parameters that serve a different purpose when used on the\n> > subscriber. For example, 'max_replication_slots' is used to define the\n> > maximum number of replication slots on the publisher and the maximum\n> > number of origins on subscribers. Similarly,\n> > wal_retrieve_retry_interval' is used for different purposes on\n> > subscriber and standby nodes.\n>\n> Using the existing parameter makes sense to me. But if we use\n> logical_decoding_mode also on the subscriber, as Shveta Malik also\n> suggested, probably it's better to rename it so as not to confuse. For\n> example, logical_replication_mode or something.\n>\n\n+1. Among the options discussed, this sounds better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Jan 2023 08:47:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 23, 2023 at 8:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 20, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > Yet another way is to use the existing parameter logical_decode_mode\n> > > [1]. If the value of logical_decoding_mode is 'immediate', then we can\n> > > immediately switch to partial serialize mode. This will eliminate the\n> > > dependency on timing. The one argument against using this is that it\n> > > won't be as clear as a separate parameter like\n> > > 'stream_serialize_threshold' proposed by the patch but OTOH we already\n> > > have a few parameters that serve a different purpose when used on the\n> > > subscriber. 
For example, 'max_replication_slots' is used to define the\n> > > maximum number of replication slots on the publisher and the maximum\n> > > number of origins on subscribers. Similarly,\n> > > wal_retrieve_retry_interval' is used for different purposes on\n> > > subscriber and standby nodes.\n> >\n> > Using the existing parameter makes sense to me. But if we use\n> > logical_decoding_mode also on the subscriber, as Shveta Malik also\n> > suggested, probably it's better to rename it so as not to confuse. For\n> > example, logical_replication_mode or something.\n> >\n>\n> +1. Among the options discussed, this sounds better.\n\nYeah, this looks better option with the parameter name\n'logical_replication_mode'.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:28:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 23, 2023 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jan 20, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > >\r\n> > > Yet another way is to use the existing parameter logical_decode_mode\r\n> > > [1]. If the value of logical_decoding_mode is 'immediate', then we\r\n> > > can immediately switch to partial serialize mode. This will\r\n> > > eliminate the dependency on timing. The one argument against using\r\n> > > this is that it won't be as clear as a separate parameter like\r\n> > > 'stream_serialize_threshold' proposed by the patch but OTOH we\r\n> > > already have a few parameters that serve a different purpose when\r\n> > > used on the subscriber. For example, 'max_replication_slots' is used\r\n> > > to define the maximum number of replication slots on the publisher\r\n> > > and the maximum number of origins on subscribers. 
Similarly,\r\n> > > wal_retrieve_retry_interval' is used for different purposes on\r\n> > > subscriber and standby nodes.\r\n> >\r\n> > Using the existing parameter makes sense to me. But if we use\r\n> > logical_decoding_mode also on the subscriber, as Shveta Malik also\r\n> > suggested, probably it's better to rename it so as not to confuse. For\r\n> > example, logical_replication_mode or something.\r\n> >\r\n> \r\n> +1. Among the options discussed, this sounds better.\r\n\r\nOK, here is a patch set which does the same.\r\nThe first patch set only renames the GUC name, and the second patch uses\r\nthe GUC to test the partial serialization.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 23 Jan 2023 08:05:06 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThank you for updating the patch! Following are my comments.\r\n\r\n1. guc_tables.c\r\n\r\n```\r\n static const struct config_enum_entry logical_decoding_mode_options[] = {\r\n- {\"buffered\", LOGICAL_DECODING_MODE_BUFFERED, false},\r\n- {\"immediate\", LOGICAL_DECODING_MODE_IMMEDIATE, false},\r\n+ {\"buffered\", LOGICAL_REP_MODE_BUFFERED, false},\r\n+ {\"immediate\", LOGICAL_REP_MODE_IMMEDIATE, false},\r\n {NULL, 0, false}\r\n };\r\n```\r\n\r\nThis struct should also be modified.\r\n\r\n2. guc_tables.c\r\n\r\n\r\n```\r\n- {\"logical_decoding_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\r\n+ {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\r\n gettext_noop(\"Allows streaming or serializing each change in logical decoding.\"),\r\n NULL,\r\n```\r\n\r\nI felt the description does not seem suitable for the current behavior.\r\nA short description should be like \"Sets a behavior of logical replication\", and\r\nfurther descriptions can be added in the long description.\r\n\r\n3. 
config.sgml\r\n\r\n```\r\n <para>\r\n This parameter is intended to be used to test logical decoding and\r\n replication of large transactions for which otherwise we need to\r\n generate the changes till <varname>logical_decoding_work_mem</varname>\r\n is reached.\r\n </para>\r\n```\r\n\r\nI understood that this part described the usage of the parameter. How about adding\r\na statement like:\r\n\r\n\" Moreover, this can be also used to test the message passing between the leader\r\nand parallel apply workers.\"\r\n\r\n4. 015_stream.pl\r\n\r\n```\r\n+# Ensure that the messages are serialized.\r\n```\r\n\r\nIn other parts \"changes\" are used instead of \"messages\". Can you change the word?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 23 Jan 2023 12:34:00 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for patch v86-0001.\n\n======\nGeneral\n\n1.\n\nIIUC the GUC name was made generic 'logical_replication_mode' so that\nmultiple developer GUCs are not needed later.\n\nBut IMO those current option values (buffered/immediate) for that GUC\nare maybe a bit too generic. Perhaps in future, we might want more\ngranular control than that allows. e.g. I can imagine there might be\nmultiple different meanings for what \"buffered\" means. If there is any\nchance of the generic values being problematic later then maybe they\nshould be made more specific up-front.\n\ne.g. 
maybe like:\nlogical_replication_mode = buffered_decoding\nlogical_replication_mode = immediate_decoding\n\nThoughts?\n\n======\nCommit message\n\n2.\nSince we may extend the developer option logical_decoding_mode to to test the\nparallel apply of large transaction on subscriber, rename this option to\nlogical_replication_mode to make it easier to understand.\n\n~\n\n2a\ntypo \"to to\"\n\ntypo \"large transaction on subscriber\" --> \"large transactions on the\nsubscriber\"\n\n~\n\n2b.\nIMO better to rephrase the whole paragraph like shown below.\n\nSUGGESTION\n\nRename the developer option 'logical_decoding_mode' to the more\nflexible name 'logical_replication_mode' because doing so will make it\neasier to extend this option in future to help test other areas of\nlogical replication.\n\n======\ndoc/src/sgml/config.sgml\n\n3.\nAllows streaming or serializing changes immediately in logical\ndecoding. The allowed values of logical_replication_mode are buffered\nand immediate. When set to immediate, stream each change if streaming\noption (see optional parameters set by CREATE SUBSCRIPTION) is\nenabled, otherwise, serialize each change. When set to buffered, which\nis the default, decoding will stream or serialize changes when\nlogical_decoding_work_mem is reached.\n\n~\n\nIMO it's more clear to say the default when the options are first\nmentioned. So I suggested removing the \"which is the default\" part,\nand instead saying:\n\nBEFORE\nThe allowed values of logical_replication_mode are buffered and immediate.\n\nAFTER\nThe allowed values of logical_replication_mode are buffered and\nimmediate. 
The default is buffered.\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n4.\n@@ -396,8 +396,8 @@ static const struct config_enum_entry\nssl_protocol_versions_info[] = {\n };\n\n static const struct config_enum_entry logical_decoding_mode_options[] = {\n- {\"buffered\", LOGICAL_DECODING_MODE_BUFFERED, false},\n- {\"immediate\", LOGICAL_DECODING_MODE_IMMEDIATE, false},\n+ {\"buffered\", LOGICAL_REP_MODE_BUFFERED, false},\n+ {\"immediate\", LOGICAL_REP_MODE_IMMEDIATE, false},\n {NULL, 0, false}\n };\n\nI noticed this array is still called \"logical_decoding_mode_options\".\nWas that deliberate?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 24 Jan 2023 14:43:08 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 24, 2023 at 9:13 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 1.\n>\n> IIUC the GUC name was made generic 'logical_replication_mode' so that\n> multiple developer GUCs are not needed later.\n>\n> But IMO those current option values (buffered/immediate) for that GUC\n> are maybe a bit too generic. Perhaps in future, we might want more\n> granular control than that allows. e.g. I can imagine there might be\n> multiple different meanings for what \"buffered\" means. If there is any\n> chance of the generic values being problematic later then maybe they\n> should be made more specific up-front.\n>\n> e.g. maybe like:\n> logical_replication_mode = buffered_decoding\n> logical_replication_mode = immediate_decoding\n>\n\nFor now, it seems the meaning of buffered/immediate suits our\ndebugging and test needs for publisher/subscriber. This is somewhat\nexplained in Shveta's email [1]. 
I also think in the future this\nparameter could be extended for a different purpose but maybe it would\nbe better to invent some new values at that time as things would be\nmore clear. We could do what you are suggesting or in fact even use\ndifferent values for publisher and subscriber but not really sure\nwhether that will simplify the usage.\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uDzddK_ZUsB2qBJUbW_ZODYGoUHTaS5pVcYE2tzATCVXQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 Jan 2023 10:12:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are some review comments for v86-0002\n\n======\nCommit message\n\n1.\nUse the use the existing developer option logical_replication_mode to test the\nparallel apply of large transaction on subscriber.\n\n~\n\nTypo “Use the use the”\n\nSUGGESTION (rewritten)\nGive additional functionality to the existing developer option\n'logical_replication_mode' to help test parallel apply of large\ntransactions on the subscriber.\n\n~~~\n\n2.\nMaybe that commit message should also say extra TAP tests that have\nbeen added to exercise the serialization part of the parallel apply?\n\nBTW – I can see the TAP tests are testing full serialization (when the\nGUC is 'immediate') but I not sure how is \"partial\" serialization\n(when it has to switch halfway from shmem to files) being tested.\n\n======\ndoc/src/sgml/config.sgml\n\n3.\nAllows streaming or serializing changes immediately in logical\ndecoding. The allowed values of logical_replication_mode are buffered\nand immediate. When set to immediate, stream each change if streaming\noption (see optional parameters set by CREATE SUBSCRIPTION) is\nenabled, otherwise, serialize each change. 
When set to buffered, which\nis the default, decoding will stream or serialize changes when\nlogical_decoding_work_mem is reached.\nOn the subscriber side, if streaming option is set to parallel, this\nparameter also allows the leader apply worker to send changes to the\nshared memory queue or to serialize changes. When set to buffered, the\nleader sends changes to parallel apply workers via shared memory\nqueue. When set to immediate, the leader serializes all changes to\nfiles and notifies the parallel apply workers to read and apply them\nat the end of the transaction.\n\n~\n\nBecause now this same developer GUC affects both the publisher side\nand the subscriber side differently IMO this whole description should\nbe re-structured accordingly.\n\nSUGGESTION (something like)\n\nThe allowed values of logical_replication_mode are buffered and\nimmediate. The default is buffered.\n\nOn the publisher side, ...\n\nOn the subscriber side, ...\n\n~~~\n\n4.\nThis parameter is intended to be used to test logical decoding and\nreplication of large transactions for which otherwise we need to\ngenerate the changes till logical_decoding_work_mem is reached.\n\n~\n\nMaybe this paragraph needs rewording or moving. e.g. Isn't that\nmisleading now? 
Although this might be an explanation for the\npublisher side, it does not seem relevant to the subscriber side's\nbehaviour.\n\n======\n.../replication/logical/applyparallelworker.c\n\n5.\n@ -1149,6 +1149,9 @@ pa_send_data(ParallelApplyWorkerInfo *winfo, Size\nnbytes, const void *data)\n Assert(!IsTransactionState());\n Assert(!winfo->serialize_changes);\n\n+ if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE)\n+ return false;\n+\n\nI felt that code should have some comment, even if it is just\nsomething quite basic like \"/* For developer testing */\"\n\n======\n.../t/018_stream_subxact_abort.pl\n\n6.\n+# Clean up test data from the environment.\n+$node_publisher->safe_psql('postgres', \"TRUNCATE TABLE test_tab_2\");\n+$node_publisher->wait_for_catchup($appname);\n\nIs it necessary to TRUNCATE the table here? If everything is working\nshouldn't the data be rolled back anyway?\n\n~~~\n\n7.\n+$node_publisher->safe_psql(\n+ 'postgres', q{\n+ BEGIN;\n+ INSERT INTO test_tab_2 values(1);\n+ SAVEPOINT sp;\n+ INSERT INTO test_tab_2 values(1);\n+ ROLLBACK TO sp;\n+ COMMIT;\n+ });\n\nPerhaps this should insert 2 different values so then the verification\ncode can check the correct value remains instead of just checking\nCOUNT(*)?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 24 Jan 2023 18:18:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 24, 2023 3:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are some review comments for v86-0002\r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> Use the use the existing developer option logical_replication_mode to test the\r\n> parallel apply of large transaction on subscriber.\r\n> \r\n> ~\r\n> \r\n> Typo “Use the use the”\r\n> \r\n> SUGGESTION (rewritten)\r\n> Give additional 
functionality to the existing developer option\r\n> 'logical_replication_mode' to help test parallel apply of large transactions on the\r\n> subscriber.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 2.\r\n> Maybe that commit message should also say extra TAP tests that have been\r\n> added to exercise the serialization part of the parallel apply?\r\n\r\nAdded.\r\n\r\n> BTW – I can see the TAP tests are testing full serialization (when the GUC is\r\n> 'immediate') but I not sure how is \"partial\" serialization (when it has to switch\r\n> halfway from shmem to files) being tested.\r\n\r\nThe new tests are intended to test most of new code patch for partial\r\nserialization by doing it from the beginning. Later, if required, we can add\r\ndifferent tests for it.\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/config.sgml\r\n> \r\n> 3.\r\n> Allows streaming or serializing changes immediately in logical decoding. The\r\n> allowed values of logical_replication_mode are buffered and immediate. When\r\n> set to immediate, stream each change if streaming option (see optional\r\n> parameters set by CREATE SUBSCRIPTION) is enabled, otherwise, serialize each\r\n> change. When set to buffered, which is the default, decoding will stream or\r\n> serialize changes when logical_decoding_work_mem is reached.\r\n> On the subscriber side, if streaming option is set to parallel, this parameter also\r\n> allows the leader apply worker to send changes to the shared memory queue or\r\n> to serialize changes. When set to buffered, the leader sends changes to parallel\r\n> apply workers via shared memory queue. 
When set to immediate, the leader\r\n> serializes all changes to files and notifies the parallel apply workers to read and\r\n> apply them at the end of the transaction.\r\n> \r\n> ~\r\n> \r\n> Because now this same developer GUC affects both the publisher side and the\r\n> subscriber side differently IMO this whole description should be re-structured\r\n> accordingly.\r\n> \r\n> SUGGESTION (something like)\r\n> \r\n> The allowed values of logical_replication_mode are buffered and immediate. The\r\n> default is buffered.\r\n> \r\n> On the publisher side, ...\r\n> \r\n> On the subscriber side, ...\r\n\r\nChanged.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 4.\r\n> This parameter is intended to be used to test logical decoding and replication of\r\n> large transactions for which otherwise we need to generate the changes till\r\n> logical_decoding_work_mem is reached.\r\n> \r\n> ~\r\n> \r\n> Maybe this paragraph needs rewording or moving. e.g. Isn't that misleading\r\n> now? Although this might be an explanation for the publisher side, it does not\r\n> seem relevant to the subscriber side's behaviour.\r\n\r\nAdjusted the description here.\r\n\r\n> \r\n> ======\r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 5.\r\n> @ -1149,6 +1149,9 @@ pa_send_data(ParallelApplyWorkerInfo *winfo, Size\r\n> nbytes, const void *data)\r\n> Assert(!IsTransactionState());\r\n> Assert(!winfo->serialize_changes);\r\n> \r\n> + if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE) return\r\n> + false;\r\n> +\r\n> \r\n> I felt that code should have some comment, even if it is just something quite\r\n> basic like \"/* For developer testing */\"\r\n\r\nAdded.\r\n\r\n> \r\n> ======\r\n> .../t/018_stream_subxact_abort.pl\r\n> \r\n> 6.\r\n> +# Clean up test data from the environment.\r\n> +$node_publisher->safe_psql('postgres', \"TRUNCATE TABLE test_tab_2\");\r\n> +$node_publisher->wait_for_catchup($appname);\r\n> \r\n> Is it necessary to TRUNCATE the table here? 
If everything is working shouldn't\r\n> the data be rolled back anyway?\r\n\r\nI think it's unnecessary, so removed.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 7.\r\n> +$node_publisher->safe_psql(\r\n> + 'postgres', q{\r\n> + BEGIN;\r\n> + INSERT INTO test_tab_2 values(1);\r\n> + SAVEPOINT sp;\r\n> + INSERT INTO test_tab_2 values(1);\r\n> + ROLLBACK TO sp;\r\n> + COMMIT;\r\n> + });\r\n> \r\n> Perhaps this should insert 2 different values so then the verification code can\r\n> check the correct value remains instead of just checking COUNT(*)?\r\n\r\nI think testing the count should be ok as the nearby testcases are\r\nalso checking the count.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 24 Jan 2023 12:47:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 24, 2023 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n\r\n> \r\n> Here are my review comments for patch v86-0001.\r\n\r\nThanks for your comments.\r\n\r\n> \r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 2.\r\n> Since we may extend the developer option logical_decoding_mode to to test the\r\n> parallel apply of large transaction on subscriber, rename this option to\r\n> logical_replication_mode to make it easier to understand.\r\n> \r\n> ~\r\n> \r\n> 2a\r\n> typo \"to to\"\r\n> \r\n> typo \"large transaction on subscriber\" --> \"large transactions on the subscriber\"\r\n> \r\n> ~\r\n> \r\n> 2b.\r\n> IMO better to rephrase the whole paragraph like shown below.\r\n> \r\n> SUGGESTION\r\n> \r\n> Rename the developer option 'logical_decoding_mode' to the more flexible\r\n> name 'logical_replication_mode' because doing so will make it easier to extend\r\n> this option in future to help test other areas of logical replication.\r\n\r\nChanged.\r\n\r\n> ======\r\n> doc/src/sgml/config.sgml\r\n> \r\n> 3.\r\n> Allows streaming or 
serializing changes immediately in logical decoding. The\r\n> allowed values of logical_replication_mode are buffered and immediate. When\r\n> set to immediate, stream each change if streaming option (see optional\r\n> parameters set by CREATE SUBSCRIPTION) is enabled, otherwise, serialize each\r\n> change. When set to buffered, which is the default, decoding will stream or\r\n> serialize changes when logical_decoding_work_mem is reached.\r\n> \r\n> ~\r\n> \r\n> IMO it's more clear to say the default when the options are first mentioned. So I\r\n> suggested removing the \"which is the default\" part, and instead saying:\r\n> \r\n> BEFORE\r\n> The allowed values of logical_replication_mode are buffered and immediate.\r\n> \r\n> AFTER\r\n> The allowed values of logical_replication_mode are buffered and immediate. The\r\n> default is buffered.\r\n\r\nI included this change in the 0002 patch.\r\n\r\n> ======\r\n> src/backend/utils/misc/guc_tables.c\r\n> \r\n> 4.\r\n> @@ -396,8 +396,8 @@ static const struct config_enum_entry\r\n> ssl_protocol_versions_info[] = { };\r\n> \r\n> static const struct config_enum_entry logical_decoding_mode_options[] = {\r\n> - {\"buffered\", LOGICAL_DECODING_MODE_BUFFERED, false},\r\n> - {\"immediate\", LOGICAL_DECODING_MODE_IMMEDIATE, false},\r\n> + {\"buffered\", LOGICAL_REP_MODE_BUFFERED, false}, {\"immediate\",\r\n> + LOGICAL_REP_MODE_IMMEDIATE, false},\r\n> {NULL, 0, false}\r\n> };\r\n> \r\n> I noticed this array is still called \"logical_decoding_mode_options\".\r\n> Was that deliberate?\r\n\r\nNo, I didn't notice this one. 
Changed.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 24 Jan 2023 12:47:16 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 23, 2023 8:34 PM Kuroda, Hayato wrote:\r\n> \r\n> Followings are my comments.\r\n\r\nThanks for your comments.\r\n\r\n> \r\n> 1. guc_tables.c\r\n> \r\n> ```\r\n> static const struct config_enum_entry logical_decoding_mode_options[] = {\r\n> - {\"buffered\", LOGICAL_DECODING_MODE_BUFFERED, false},\r\n> - {\"immediate\", LOGICAL_DECODING_MODE_IMMEDIATE, false},\r\n> + {\"buffered\", LOGICAL_REP_MODE_BUFFERED, false},\r\n> + {\"immediate\", LOGICAL_REP_MODE_IMMEDIATE, false},\r\n> {NULL, 0, false}\r\n> };\r\n> ```\r\n> \r\n> This struct should be also modified.\r\n\r\nModified.\r\n\r\n> \r\n> 2. guc_tables.c\r\n> \r\n> \r\n> ```\r\n> - {\"logical_decoding_mode\", PGC_USERSET,\r\n> DEVELOPER_OPTIONS,\r\n> + {\"logical_replication_mode\", PGC_USERSET,\r\n> + DEVELOPER_OPTIONS,\r\n> gettext_noop(\"Allows streaming or serializing each\r\n> change in logical decoding.\"),\r\n> NULL,\r\n> ```\r\n> \r\n> I felt the description seems not to be suitable for current behavior.\r\n> A short description should be like \"Sets a behavior of logical replication\", and\r\n> further descriptions can be added in lond description.\r\n\r\nI adjusted the description here.\r\n\r\n> 3. config.sgml\r\n> \r\n> ```\r\n> <para>\r\n> This parameter is intended to be used to test logical decoding and\r\n> replication of large transactions for which otherwise we need to\r\n> generate the changes till\r\n> <varname>logical_decoding_work_mem</varname>\r\n> is reached.\r\n> </para>\r\n> ```\r\n> \r\n> I understood that this part described the usage of the parameter. 
How about\r\n> adding a statement like:\r\n> \r\n> \" Moreover, this can be also used to test the message passing between the\r\n> leader and parallel apply workers.\"\r\n\r\nAdded.\r\n\r\n> 4. 015_stream.pl\r\n> \r\n> ```\r\n> +# Ensure that the messages are serialized.\r\n> ```\r\n> \r\n> In other parts \"changes\" are used instead of \"messages\". Can you change the\r\n> word?\r\n\r\nChanged.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 24 Jan 2023 12:47:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 24, 2023 8:47 PM Hou, Zhijie wrote:\r\n> \r\n> On Tuesday, January 24, 2023 3:19 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > Here are some review comments for v86-0002\r\n> >\r\n\r\nSorry, the patch set was somehow attached twice. Here is the correct new version\r\npatch set which addressed all comments so far.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 24 Jan 2023 12:49:43 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\n> Sorry, the patch set was somehow attached twice. Here is the correct new version\r\n> patch set which addressed all comments so far.\r\n\r\nThank you for updating the patch! 
I confirmed that\r\nall of my comments are addressed.\r\n\r\nOne comment:\r\nIn this test the rollback-prepared seems not to be executed.\r\nThis is because serializations are finished while handling the PREPARE message\r\nand the final state of the transaction does not affect that, right?\r\nI think it may be helpful to add a one-line comment.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 24 Jan 2023 14:34:05 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 24, 2023 at 11:49 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n...\n>\n> Sorry, the patch set was somehow attached twice. Here is the correct new version\n> patch set which addressed all comments so far.\n>\n\nHere are my review comments for patch v87-0001.\n\n======\nsrc/backend/replication/logical/reorderbuffer.c\n\n1.\n@@ -210,7 +210,7 @@ int logical_decoding_work_mem;\n static const Size max_changes_in_memory = 4096; /* XXX for restore only */\n\n /* GUC variable */\n-int logical_decoding_mode = LOGICAL_DECODING_MODE_BUFFERED;\n+int logical_replication_mode = LOGICAL_REP_MODE_BUFFERED;\n\n\nI noticed that the comment /* GUC variable */ is currently only above\nthe logical_replication_mode, but actually logical_decoding_work_mem\nis a GUC variable too. 
Maybe this should be rearranged somehow then\nchange the comment \"GUC variable\" -> \"GUC variables\"?\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n@@ -4908,13 +4908,13 @@ struct config_enum ConfigureNamesEnum[] =\n },\n\n {\n- {\"logical_decoding_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n+ {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n gettext_noop(\"Allows streaming or serializing each change in logical\ndecoding.\"),\n NULL,\n GUC_NOT_IN_SAMPLE\n },\n- &logical_decoding_mode,\n- LOGICAL_DECODING_MODE_BUFFERED, logical_decoding_mode_options,\n+ &logical_replication_mode,\n+ LOGICAL_REP_MODE_BUFFERED, logical_replication_mode_options,\n NULL, NULL, NULL\n },\n\nThat gettext_noop string seems incorrect. I think Kuroda-san\npreviously reported the same, but then you replied it has been fixed\nalready [1]\n\n> I felt the description seems not to be suitable for current behavior.\n> A short description should be like \"Sets a behavior of logical replication\", and\n> further descriptions can be added in lond description.\nI adjusted the description here.\n\nBut this doesn't look fixed to me. 
(??)\n\n======\nsrc/include/replication/reorderbuffer.h\n\n3.\n@@ -18,14 +18,14 @@\n #include \"utils/timestamp.h\"\n\n extern PGDLLIMPORT int logical_decoding_work_mem;\n-extern PGDLLIMPORT int logical_decoding_mode;\n+extern PGDLLIMPORT int logical_replication_mode;\n\nProbably here should also be a comment to say \"/* GUC variables */\"\n\n------\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716AE9F095F9E7888987BC794C99%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 25 Jan 2023 08:45:34 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for patch v87-0002.\n\n======\ndoc/src/sgml/config.sgml\n\n1.\n <para>\n- Allows streaming or serializing changes immediately in\nlogical decoding.\n The allowed values of <varname>logical_replication_mode</varname> are\n- <literal>buffered</literal> and <literal>immediate</literal>. When set\n- to <literal>immediate</literal>, stream each change if\n+ <literal>buffered</literal> and <literal>immediate</literal>.\nThe default\n+ is <literal>buffered</literal>.\n+ </para>\n\nI didn't think it was necessary to say “of logical_replication_mode”.\nIMO that much is already obvious because this is the first sentence of\nthe description for logical_replication_mode.\n\n(see also review comment #4)\n\n~~~\n\n2.\n+ <para>\n+ On the publisher side, it allows streaming or serializing changes\n+ immediately in logical decoding. When set to\n+ <literal>immediate</literal>, stream each change if\n <literal>streaming</literal> option (see optional parameters set by\n <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n is enabled, otherwise, serialize each change. 
When set to\n- <literal>buffered</literal>, which is the default, decoding will stream\n- or serialize changes when <varname>logical_decoding_work_mem</varname>\n- is reached.\n+ <literal>buffered</literal>, decoding will stream or serialize changes\n+ when <varname>logical_decoding_work_mem</varname> is reached.\n </para>\n\n2a.\n\"it allows\" --> \"logical_replication_mode allows\"\n\n2b.\n\"decoding\" --> \"the decoding\"\n\n~~~\n\n3.\n+ <para>\n+ On the subscriber side, if <literal>streaming</literal> option is set\n+ to <literal>parallel</literal>, this parameter also allows the leader\n+ apply worker to send changes to the shared memory queue or to serialize\n+ changes. When set to <literal>buffered</literal>, the leader sends\n+ changes to parallel apply workers via shared memory queue. When set to\n+ <literal>immediate</literal>, the leader serializes all changes to\n+ files and notifies the parallel apply workers to read and apply them at\n+ the end of the transaction.\n+ </para>\n\n\"this parameter also allows\" --> \"logical_replication_mode also allows\"\n\n~~~\n\n4.\n <para>\n This parameter is intended to be used to test logical decoding and\n replication of large transactions for which otherwise we need to\n generate the changes till <varname>logical_decoding_work_mem</varname>\n- is reached.\n+ is reached. Moreover, this can also be used to test the transmission of\n+ changes between the leader and parallel apply workers.\n </para>\n\n\"Moreover, this can also\" --> \"It can also\"\n\nI am wondering would this sentence be better put at the top of the GUC\ndescription. So then the first paragraph becomes like this:\n\n\nSUGGESTION (I've also added another sentence \"The effect of...\")\n\nThe allowed values are buffered and immediate. The default is\nbuffered. 
This parameter is intended to be used to test logical\ndecoding and replication of large transactions for which otherwise we\nneed to generate the changes till logical_decoding_work_mem is\nreached. It can also be used to test the transmission of changes\nbetween the leader and parallel apply workers. The effect of\nlogical_replication_mode is different for the publisher and\nsubscriber:\n\nOn the publisher side...\n\nOn the subscriber side...\n======\n.../replication/logical/applyparallelworker.c\n\n5.\n+ /*\n+ * In immeidate mode, directly return false so that we can switch to\n+ * PARTIAL_SERIALIZE mode and serialize remaining changes to files.\n+ */\n+ if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE)\n+ return false;\n\nTypo \"immediate\"\n\nAlso, I felt \"directly\" is not needed. \"return false\" and \"directly\nreturn false\" is the same.\n\nSUGGESTION\nUsing ‘immediate’ mode returns false to cause a switch to\nPARTIAL_SERIALIZE mode so that the remaining changes will be\nserialized.\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n6.\n {\n {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n- gettext_noop(\"Allows streaming or serializing each change in logical\ndecoding.\"),\n- NULL,\n+ gettext_noop(\"Controls the behavior of logical replication publisher\nand subscriber\"),\n+ gettext_noop(\"If set to immediate, on the publisher side, it \"\n+ \"allows streaming or serializing each change in \"\n+ \"logical decoding. On the subscriber side, in \"\n+ \"parallel streaming mode, it allows the leader apply \"\n+ \"worker to serialize changes to files and notifies \"\n+ \"the parallel apply workers to read and apply them at \"\n+ \"the end of the transaction.\"),\n GUC_NOT_IN_SAMPLE\n },\n\n6a. short description\n\nUser PoV behaviour should be the same. Instead, maybe say \"controls\nthe internal behavior\" or something like that?\n\n~\n\n6b. 
long description\n\nIMO the long description shouldn’t mention ‘immediate’ mode first as it does.\n\nBEFORE\nIf set to immediate, on the publisher side, ...\n\nAFTER\nOn the publisher side, ...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:30:19 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 25, 2023 at 3:15 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 1.\n> @@ -210,7 +210,7 @@ int logical_decoding_work_mem;\n> static const Size max_changes_in_memory = 4096; /* XXX for restore only */\n>\n> /* GUC variable */\n> -int logical_decoding_mode = LOGICAL_DECODING_MODE_BUFFERED;\n> +int logical_replication_mode = LOGICAL_REP_MODE_BUFFERED;\n>\n>\n> I noticed that the comment /* GUC variable */ is currently only above\n> the logical_replication_mode, but actually logical_decoding_work_mem\n> is a GUC variable too. Maybe this should be rearranged somehow then\n> change the comment \"GUC variable\" -> \"GUC variables\"?\n>\n\nI think moving these variables together doesn't sound like a good idea\nbecause logical_decoding_work_mem variable is defined with other\nrelated variable. 
Also, if we are doing the last comment, I think that\nwill obviate the need for this.\n\n> ======\n> src/backend/utils/misc/guc_tables.c\n>\n> @@ -4908,13 +4908,13 @@ struct config_enum ConfigureNamesEnum[] =\n> },\n>\n> {\n> - {\"logical_decoding_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> + {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> gettext_noop(\"Allows streaming or serializing each change in logical\n> decoding.\"),\n> NULL,\n> GUC_NOT_IN_SAMPLE\n> },\n> - &logical_decoding_mode,\n> - LOGICAL_DECODING_MODE_BUFFERED, logical_decoding_mode_options,\n> + &logical_replication_mode,\n> + LOGICAL_REP_MODE_BUFFERED, logical_replication_mode_options,\n> NULL, NULL, NULL\n> },\n>\n> That gettext_noop string seems incorrect. I think Kuroda-san\n> previously reported the same, but then you replied it has been fixed\n> already [1]\n>\n> > I felt the description seems not to be suitable for current behavior.\n> > A short description should be like \"Sets a behavior of logical replication\", and\n> > further descriptions can be added in lond description.\n> I adjusted the description here.\n>\n> But this doesn't look fixed to me. (??)\n>\n\nOkay, so, how about the following for the 0001 patch:\nshort desc: Controls when to replicate each change.\nlong desc: On the publisher, it allows streaming or serializing each\nchange in logical decoding.\n\nThen we can extend it as follows for the 0002 patch:\nControls when to replicate or apply each change\nOn the publisher, it allows streaming or serializing each change in\nlogical decoding. 
On the subscriber, it allows serialization of all\nchanges to files and notifies the parallel apply workers to read and\napply them at the end of the transaction.\n\n> ======\n> src/include/replication/reorderbuffer.h\n>\n> 3.\n> @@ -18,14 +18,14 @@\n> #include \"utils/timestamp.h\"\n>\n> extern PGDLLIMPORT int logical_decoding_work_mem;\n> -extern PGDLLIMPORT int logical_decoding_mode;\n> +extern PGDLLIMPORT int logical_replication_mode;\n>\n> Probably here should also be a comment to say \"/* GUC variables */\"\n>\n\nOkay, we can do this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 25 Jan 2023 10:05:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 25, 2023 at 10:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 25, 2023 at 3:15 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 1.\n> > @@ -210,7 +210,7 @@ int logical_decoding_work_mem;\n> > static const Size max_changes_in_memory = 4096; /* XXX for restore only */\n> >\n> > /* GUC variable */\n> > -int logical_decoding_mode = LOGICAL_DECODING_MODE_BUFFERED;\n> > +int logical_replication_mode = LOGICAL_REP_MODE_BUFFERED;\n> >\n> >\n> > I noticed that the comment /* GUC variable */ is currently only above\n> > the logical_replication_mode, but actually logical_decoding_work_mem\n> > is a GUC variable too. Maybe this should be rearranged somehow then\n> > change the comment \"GUC variable\" -> \"GUC variables\"?\n> >\n>\n> I think moving these variables together doesn't sound like a good idea\n> because logical_decoding_work_mem variable is defined with other\n> related variable. 
Also, if we are doing the last comment, I think that\n> will obviate the need for this.\n>\n> > ======\n> > src/backend/utils/misc/guc_tables.c\n> >\n> > @@ -4908,13 +4908,13 @@ struct config_enum ConfigureNamesEnum[] =\n> > },\n> >\n> > {\n> > - {\"logical_decoding_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > + {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > gettext_noop(\"Allows streaming or serializing each change in logical\n> > decoding.\"),\n> > NULL,\n> > GUC_NOT_IN_SAMPLE\n> > },\n> > - &logical_decoding_mode,\n> > - LOGICAL_DECODING_MODE_BUFFERED, logical_decoding_mode_options,\n> > + &logical_replication_mode,\n> > + LOGICAL_REP_MODE_BUFFERED, logical_replication_mode_options,\n> > NULL, NULL, NULL\n> > },\n> >\n> > That gettext_noop string seems incorrect. I think Kuroda-san\n> > previously reported the same, but then you replied it has been fixed\n> > already [1]\n> >\n> > > I felt the description seems not to be suitable for current behavior.\n> > > A short description should be like \"Sets a behavior of logical replication\", and\n> > > further descriptions can be added in lond description.\n> > I adjusted the description here.\n> >\n> > But this doesn't look fixed to me. (??)\n> >\n>\n> Okay, so, how about the following for the 0001 patch:\n> short desc: Controls when to replicate each change.\n> long desc: On the publisher, it allows streaming or serializing each\n> change in logical decoding.\n>\n\nI have updated the patch accordingly and it looks good to me. I'll\npush this first patch early next week (Monday) unless there are more\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 25 Jan 2023 11:57:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Amit,\r\n\r\n> \r\n> I have updated the patch accordingly and it looks good to me. 
I'll\r\n> push this first patch early next week (Monday) unless there are more\r\n> comments.\r\n\r\nThanks for updating. I checked v88-0001 and I have no objection. LGTM.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 25 Jan 2023 06:51:26 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, January 25, 2023 7:30 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for patch v87-0002.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> doc/src/sgml/config.sgml\r\n> \r\n> 1.\r\n> <para>\r\n> - Allows streaming or serializing changes immediately in\r\n> logical decoding.\r\n> The allowed values of\r\n> <varname>logical_replication_mode</varname> are\r\n> - <literal>buffered</literal> and <literal>immediate</literal>. When\r\n> set\r\n> - to <literal>immediate</literal>, stream each change if\r\n> + <literal>buffered</literal> and <literal>immediate</literal>.\r\n> The default\r\n> + is <literal>buffered</literal>.\r\n> + </para>\r\n> \r\n> I didn't think it was necessary to say “of logical_replication_mode”.\r\n> IMO that much is already obvious because this is the first sentence of the\r\n> description for logical_replication_mode.\r\n> \r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 2.\r\n> + <para>\r\n> + On the publisher side, it allows streaming or serializing changes\r\n> + immediately in logical decoding. When set to\r\n> + <literal>immediate</literal>, stream each change if\r\n> <literal>streaming</literal> option (see optional parameters set by\r\n> <link linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> is enabled, otherwise, serialize each change. 
When set to\r\n> - <literal>buffered</literal>, which is the default, decoding will stream\r\n> - or serialize changes when\r\n> <varname>logical_decoding_work_mem</varname>\r\n> - is reached.\r\n> + <literal>buffered</literal>, decoding will stream or serialize changes\r\n> + when <varname>logical_decoding_work_mem</varname> is\r\n> reached.\r\n> </para>\r\n> \r\n> 2a.\r\n> \"it allows\" --> \"logical_replication_mode allows\"\r\n> \r\n> 2b.\r\n> \"decoding\" --> \"the decoding\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 3.\r\n> + <para>\r\n> + On the subscriber side, if <literal>streaming</literal> option is set\r\n> + to <literal>parallel</literal>, this parameter also allows the leader\r\n> + apply worker to send changes to the shared memory queue or to\r\n> serialize\r\n> + changes. When set to <literal>buffered</literal>, the leader sends\r\n> + changes to parallel apply workers via shared memory queue. When\r\n> set to\r\n> + <literal>immediate</literal>, the leader serializes all changes to\r\n> + files and notifies the parallel apply workers to read and apply them at\r\n> + the end of the transaction.\r\n> + </para>\r\n> \r\n> \"this parameter also allows\" --> \"logical_replication_mode also allows\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> <para>\r\n> This parameter is intended to be used to test logical decoding and\r\n> replication of large transactions for which otherwise we need to\r\n> generate the changes till\r\n> <varname>logical_decoding_work_mem</varname>\r\n> - is reached.\r\n> + is reached. Moreover, this can also be used to test the transmission of\r\n> + changes between the leader and parallel apply workers.\r\n> </para>\r\n> \r\n> \"Moreover, this can also\" --> \"It can also\"\r\n> \r\n> I am wondering would this sentence be better put at the top of the GUC\r\n> description. 
So then the first paragraph becomes like this:\r\n> \r\n> \r\n> SUGGESTION (I've also added another sentence \"The effect of...\")\r\n> \r\n> The allowed values are buffered and immediate. The default is buffered. This\r\n> parameter is intended to be used to test logical decoding and replication of large\r\n> transactions for which otherwise we need to generate the changes till\r\n> logical_decoding_work_mem is reached. It can also be used to test the\r\n> transmission of changes between the leader and parallel apply workers. The\r\n> effect of logical_replication_mode is different for the publisher and\r\n> subscriber:\r\n> \r\n> On the publisher side...\r\n> \r\n> On the subscriber side...\r\n\r\nI think your suggestion makes sense, so changed as suggested.\r\n\r\n> ======\r\n> .../replication/logical/applyparallelworker.c\r\n> \r\n> 5.\r\n> + /*\r\n> + * In immeidate mode, directly return false so that we can switch to\r\n> + * PARTIAL_SERIALIZE mode and serialize remaining changes to files.\r\n> + */\r\n> + if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE) return\r\n> + false;\r\n> \r\n> Typo \"immediate\"\r\n> \r\n> Also, I felt \"directly\" is not needed. \"return false\" and \"directly return false\" is the\r\n> same.\r\n> \r\n> SUGGESTION\r\n> Using ‘immediate’ mode returns false to cause a switch to PARTIAL_SERIALIZE\r\n> mode so that the remaining changes will be serialized.\r\n\r\nChanged.\r\n\r\n> ======\r\n> src/backend/utils/misc/guc_tables.c\r\n> \r\n> 6.\r\n> {\r\n> {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\r\n> - gettext_noop(\"Allows streaming or serializing each change in logical\r\n> decoding.\"),\r\n> - NULL,\r\n> + gettext_noop(\"Controls the behavior of logical replication publisher\r\n> and subscriber\"),\r\n> + gettext_noop(\"If set to immediate, on the publisher side, it \"\r\n> + \"allows streaming or serializing each change in \"\r\n> + \"logical decoding. 
On the subscriber side, in \"\r\n> + \"parallel streaming mode, it allows the leader apply \"\r\n> + \"worker to serialize changes to files and notifies \"\r\n> + \"the parallel apply workers to read and apply them at \"\r\n> + \"the end of the transaction.\"),\r\n> GUC_NOT_IN_SAMPLE\r\n> },\r\n> \r\n> 6a. short description\r\n> \r\n> User PoV behaviour should be the same. Instead, maybe say \"controls the\r\n> internal behavior\" or something like that?\r\n\r\nChanged to \"internal behavior xxx\"\r\n\r\n> ~\r\n> \r\n> 6b. long description\r\n> \r\n> IMO the long description shouldn’t mention ‘immediate’ mode first as it does.\r\n> \r\n> BEFORE\r\n> If set to immediate, on the publisher side, ...\r\n> \r\n> AFTER\r\n> On the publisher side, ...\r\n\r\nChanged.\r\n\r\nAttach the new version patch set.\r\nThe 0001 patch is the same as the v88-0001 posted by Amit[1],\r\nattach it here to make cfbot happy.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1JpWoaB63YULpQa1KDw_zBW-QoRMuNxuiP1KafPJzuVuw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 25 Jan 2023 14:24:50 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThank you for updating the patch! Followings are comments.\r\n\r\n1. config.sgml\r\n\r\n```\r\n+ the changes till logical_decoding_work_mem is reached. It can also be\r\n```\r\n\r\nI think it should be sandwiched by <varname>.\r\n\r\n2. config.sgml\r\n\r\n```\r\n+ On the publisher side, <varname>logical_replication_mode</varname> allows\r\n+ allows streaming or serializing changes immediately in logical decoding.\r\n```\r\n\r\nTypo \"allows allows\" -> \"allows\"\r\n\r\n3. 
test general\r\n\r\nYou confirmed that the leader started to serialize changes, but did not ensure the endpoint.\r\nIIUC the parallel apply worker exits after applying serialized changes, and it is not tested yet.\r\nCan we add polling the log somewhere?\r\n\r\n\r\n4. 015_stream.pl\r\n\r\n```\r\n+is($result, qq(15000), 'all changes are replayed from file')\r\n```\r\n\r\nThe statement may be unclear because changes can be also replicated when streaming = on.\r\nHow about: \"parallel apply worker replayed all changes from file\"?\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 26 Jan 2023 03:36:57 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Jan 25, 2023 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 25, 2023 at 10:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 25, 2023 at 3:15 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > 1.\n> > > @@ -210,7 +210,7 @@ int logical_decoding_work_mem;\n> > > static const Size max_changes_in_memory = 4096; /* XXX for restore only */\n> > >\n> > > /* GUC variable */\n> > > -int logical_decoding_mode = LOGICAL_DECODING_MODE_BUFFERED;\n> > > +int logical_replication_mode = LOGICAL_REP_MODE_BUFFERED;\n> > >\n> > >\n> > > I noticed that the comment /* GUC variable */ is currently only above\n> > > the logical_replication_mode, but actually logical_decoding_work_mem\n> > > is a GUC variable too. Maybe this should be rearranged somehow then\n> > > change the comment \"GUC variable\" -> \"GUC variables\"?\n> > >\n> >\n> > I think moving these variables together doesn't sound like a good idea\n> > because logical_decoding_work_mem variable is defined with other\n> > related variable. 
Also, if we are doing the last comment, I think that\n> > will obviate the need for this.\n> >\n> > > ======\n> > > src/backend/utils/misc/guc_tables.c\n> > >\n> > > @@ -4908,13 +4908,13 @@ struct config_enum ConfigureNamesEnum[] =\n> > > },\n> > >\n> > > {\n> > > - {\"logical_decoding_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > > + {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > > gettext_noop(\"Allows streaming or serializing each change in logical\n> > > decoding.\"),\n> > > NULL,\n> > > GUC_NOT_IN_SAMPLE\n> > > },\n> > > - &logical_decoding_mode,\n> > > - LOGICAL_DECODING_MODE_BUFFERED, logical_decoding_mode_options,\n> > > + &logical_replication_mode,\n> > > + LOGICAL_REP_MODE_BUFFERED, logical_replication_mode_options,\n> > > NULL, NULL, NULL\n> > > },\n> > >\n> > > That gettext_noop string seems incorrect. I think Kuroda-san\n> > > previously reported the same, but then you replied it has been fixed\n> > > already [1]\n> > >\n> > > > I felt the description seems not to be suitable for current behavior.\n> > > > A short description should be like \"Sets a behavior of logical replication\", and\n> > > > further descriptions can be added in lond description.\n> > > I adjusted the description here.\n> > >\n> > > But this doesn't look fixed to me. (??)\n> > >\n> >\n> > Okay, so, how about the following for the 0001 patch:\n> > short desc: Controls when to replicate each change.\n> > long desc: On the publisher, it allows streaming or serializing each\n> > change in logical decoding.\n> >\n>\n> I have updated the patch accordingly and it looks good to me. I'll\n> push this first patch early next week (Monday) unless there are more\n> comments.\n\nThe patch looks good to me too. 
Thank you for the patch.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 28 Jan 2023 01:36:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Patch v88-0001 LGTM.\n\nBelow are just some minor review comments about the commit message.\n\n======\nCommit message\n\n1.\nWe have discussed having this parameter as a subscription option but\nexposing a parameter that is primarily used for testing/debugging to users\ndidn't seem advisable and there is no other such parameter. The other\noption we have discussed is to have a separate GUC for subscriber-side\ntesting but it appears that for the current testing existing parameter is\nsufficient and avoids adding another GUC.\n\nSUGGESTION\nWe discussed exposing this parameter as a subscription option, but it\ndid not seem advisable since it is primarily used for\ntesting/debugging and there is no other such developer option.\n\nWe also discussed having separate GUCs for publisher/subscriber-side,\nbut for current testing/debugging requirements, one GUC is sufficient.\n\n~~\n\n2.\nReviewed-by: Pater Smith, Kuroda Hayato, Amit Kapila\n\n\"Pater\" --> \"Peter\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:10:39 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Here are my review comments for v88-0002.\n\n======\nGeneral\n\n1.\nThe test cases are checking the log content but they are not checking\nfor debug logs or untranslated elogs -- they are expecting a normal\nereport LOG that might be translated. 
I’m not sure if that is OK, or\nif it is a potential problem.\n\n======\ndoc/src/sgml/config.sgml\n\n2.\nOn the publisher side, logical_replication_mode allows allows\nstreaming or serializing changes immediately in logical decoding. When\nset to immediate, stream each change if streaming option (see optional\nparameters set by CREATE SUBSCRIPTION) is enabled, otherwise,\nserialize each change. When set to buffered, the decoding will stream\nor serialize changes when logical_decoding_work_mem is reached.\n\n2a.\ntypo \"allows allows\" (Kuroda-san reported same)\n\n2b.\n\"if streaming option\" --> \"if the streaming option\"\n\n~~~\n\n3.\nOn the subscriber side, if streaming option is set to parallel,\nlogical_replication_mode also allows the leader apply worker to send\nchanges to the shared memory queue or to serialize changes.\n\nSUGGESTION\nOn the subscriber side, if the streaming option is set to parallel,\nlogical_replication_mode can be used to direct the leader apply worker\nto send changes to the shared memory queue or to serialize changes.\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n4.\n {\n {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n- gettext_noop(\"Controls when to replicate each change.\"),\n- gettext_noop(\"On the publisher, it allows streaming or serializing\neach change in logical decoding.\"),\n+ gettext_noop(\"Controls the internal behavior of logical replication\npublisher and subscriber\"),\n+ gettext_noop(\"On the publisher, it allows streaming or \"\n+ \"serializing each change in logical decoding. 
On the \"\n+ \"subscriber, in parallel streaming mode, it allows \"\n+ \"the leader apply worker to serialize changes to \"\n+ \"files and notifies the parallel apply workers to \"\n+ \"read and apply them at the end of the transaction.\"),\n GUC_NOT_IN_SAMPLE\n },\nSuggest re-wording the long description (subscriber part) to be more\nlike the documentation text.\n\nBEFORE\nOn the subscriber, in parallel streaming mode, it allows the leader\napply worker to serialize changes to files and notifies the parallel\napply workers to read and apply them at the end of the transaction.\n\nSUGGESTION\nOn the subscriber, if the streaming option is set to parallel, it\ndirects the leader apply worker to send changes to the shared memory\nqueue or to serialize changes and apply them at the end of the\ntransaction.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Jan 2023 15:12:34 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 30, 2023 at 5:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Patch v88-0001 LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:41:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 30, 2023 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for v88-0002.\r\n\r\nThanks for your comments.\r\n\r\n> \r\n> ======\r\n> General\r\n> \r\n> 1.\r\n> The test cases are checking the log content but they are not checking for\r\n> debug logs or untranslated elogs -- they are expecting a normal ereport LOG\r\n> that might be translated. 
I’m not sure if that is OK, or if it is a potential problem.\r\n\r\nWe have tests that check the ereport ERROR and ereport WARNING message(by\r\nsearch for the ERROR or WARNING keyword for all the tap tests), so I think\r\nchecking the LOG should be fine.\r\n\r\n> ======\r\n> doc/src/sgml/config.sgml\r\n> \r\n> 2.\r\n> On the publisher side, logical_replication_mode allows allows streaming or\r\n> serializing changes immediately in logical decoding. When set to immediate,\r\n> stream each change if streaming option (see optional parameters set by\r\n> CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change. When set\r\n> to buffered, the decoding will stream or serialize changes when\r\n> logical_decoding_work_mem is reached.\r\n> \r\n> 2a.\r\n> typo \"allows allows\" (Kuroda-san reported same)\r\n> \r\n> 2b.\r\n> \"if streaming option\" --> \"if the streaming option\"\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 3.\r\n> On the subscriber side, if streaming option is set to parallel,\r\n> logical_replication_mode also allows the leader apply worker to send changes\r\n> to the shared memory queue or to serialize changes.\r\n> \r\n> SUGGESTION\r\n> On the subscriber side, if the streaming option is set to parallel,\r\n> logical_replication_mode can be used to direct the leader apply worker to\r\n> send changes to the shared memory queue or to serialize changes.\r\n\r\nChanged.\r\n\r\n> ======\r\n> src/backend/utils/misc/guc_tables.c\r\n> \r\n> 4.\r\n> {\r\n> {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\r\n> - gettext_noop(\"Controls when to replicate each change.\"),\r\n> - gettext_noop(\"On the publisher, it allows streaming or serializing each\r\n> change in logical decoding.\"),\r\n> + gettext_noop(\"Controls the internal behavior of logical replication\r\n> publisher and subscriber\"),\r\n> + gettext_noop(\"On the publisher, it allows streaming or \"\r\n> + \"serializing each change in logical decoding. 
On the \"\r\n> + \"subscriber, in parallel streaming mode, it allows \"\r\n> + \"the leader apply worker to serialize changes to \"\r\n> + \"files and notifies the parallel apply workers to \"\r\n> + \"read and apply them at the end of the transaction.\"),\r\n> GUC_NOT_IN_SAMPLE\r\n> },\r\n> Suggest re-wording the long description (subscriber part) to be more like the\r\n> documentation text.\r\n> \r\n> BEFORE\r\n> On the subscriber, in parallel streaming mode, it allows the leader apply worker\r\n> to serialize changes to files and notifies the parallel apply workers to read and\r\n> apply them at the end of the transaction.\r\n> \r\n> SUGGESTION\r\n> On the subscriber, if the streaming option is set to parallel, it directs the leader\r\n> apply worker to send changes to the shared memory queue or to serialize\r\n> changes and apply them at the end of the transaction.\r\n> \r\n\r\nChanged.\r\n\r\nAttach the new version patch which addressed all comments so far (the v88-0001\r\nhas been committed, so we only have one remaining patch this time).\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 30 Jan 2023 06:23:17 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thursday, January 26, 2023 11:37 AM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Followings are comments.\r\n\r\nThanks for the comments.\r\n\r\n> In this test the rollback-prepared seems not to be executed. This is because\r\n> serializations are finished while handling PREPARE message and the final\r\n> state of transaction does not affect that, right? I think it may be helpful\r\n> to add a one line comment.\r\n\r\nYes, but I am slightly unsure if it would be helpful to add this as we only test basic\r\ncases(mainly for code coverage) for partial serialization.\r\n\r\n> \r\n> 1. 
config.sgml\r\n> \r\n> ```\r\n> + the changes till logical_decoding_work_mem is reached. It can also\r\n> be\r\n> ```\r\n> \r\n> I think it should be sandwiched by <varname>.\r\n\r\nAdded.\r\n\r\n> \r\n> 2. config.sgml\r\n> \r\n> ```\r\n> + On the publisher side,\r\n> <varname>logical_replication_mode</varname> allows\r\n> + allows streaming or serializing changes immediately in logical\r\n> decoding.\r\n> ```\r\n> \r\n> Typo \"allows allows\" -> \"allows\"\r\n\r\nFixed.\r\n\r\n> 3. test general\r\n> \r\n> You confirmed that the leader started to serialize changes, but did not ensure\r\n> the endpoint.\r\n> IIUC the parallel apply worker exits after applying serialized changes, and it is\r\n> not tested yet.\r\n> Can we add polling the log somewhere?\r\n\r\nI checked other tests and didn't find any examples where we test the exit of\r\napply worker or table sync worker. And if the parallel apply worker doesn't stop in\r\nthis case, we will fail anyway when reusing this worker to handle the next\r\ntransaction because the queue is broken. So, I prefer to keep the tests short.\r\n\r\n> 4. 
015_stream.pl\r\n> \r\n> ```\r\n> +is($result, qq(15000), 'all changes are replayed from file')\r\n> ```\r\n> \r\n> The statement may be unclear because changes can be also replicated when\r\n> streaming = on.\r\n> How about: \"parallel apply worker replayed all changes from file\"?\r\n\r\nChanged.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Mon, 30 Jan 2023 06:24:36 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\nThank you for updating the patch!\r\nI checked your replies and new patch, and it seems good.\r\nCurrently I have no comments\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n", "msg_date": "Mon, 30 Jan 2023 12:57:34 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 30, 2023 at 3:23 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 30, 2023 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for v88-0002.\n>\n> Thanks for your comments.\n>\n> >\n> > ======\n> > General\n> >\n> > 1.\n> > The test cases are checking the log content but they are not checking for\n> > debug logs or untranslated elogs -- they are expecting a normal ereport LOG\n> > that might be translated. 
I’m not sure if that is OK, or if it is a potential problem.\n>\n> We have tests that check the ereport ERROR and ereport WARNING message(by\n> search for the ERROR or WARNING keyword for all the tap tests), so I think\n> checking the LOG should be fine.\n>\n> > ======\n> > doc/src/sgml/config.sgml\n> >\n> > 2.\n> > On the publisher side, logical_replication_mode allows allows streaming or\n> > serializing changes immediately in logical decoding. When set to immediate,\n> > stream each change if streaming option (see optional parameters set by\n> > CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change. When set\n> > to buffered, the decoding will stream or serialize changes when\n> > logical_decoding_work_mem is reached.\n> >\n> > 2a.\n> > typo \"allows allows\" (Kuroda-san reported same)\n> >\n> > 2b.\n> > \"if streaming option\" --> \"if the streaming option\"\n>\n> Changed.\n>\n> > ~~~\n> >\n> > 3.\n> > On the subscriber side, if streaming option is set to parallel,\n> > logical_replication_mode also allows the leader apply worker to send changes\n> > to the shared memory queue or to serialize changes.\n> >\n> > SUGGESTION\n> > On the subscriber side, if the streaming option is set to parallel,\n> > logical_replication_mode can be used to direct the leader apply worker to\n> > send changes to the shared memory queue or to serialize changes.\n>\n> Changed.\n>\n> > ======\n> > src/backend/utils/misc/guc_tables.c\n> >\n> > 4.\n> > {\n> > {\"logical_replication_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n> > - gettext_noop(\"Controls when to replicate each change.\"),\n> > - gettext_noop(\"On the publisher, it allows streaming or serializing each\n> > change in logical decoding.\"),\n> > + gettext_noop(\"Controls the internal behavior of logical replication\n> > publisher and subscriber\"),\n> > + gettext_noop(\"On the publisher, it allows streaming or \"\n> > + \"serializing each change in logical decoding. 
On the \"\n> > + \"subscriber, in parallel streaming mode, it allows \"\n> > + \"the leader apply worker to serialize changes to \"\n> > + \"files and notifies the parallel apply workers to \"\n> > + \"read and apply them at the end of the transaction.\"),\n> > GUC_NOT_IN_SAMPLE\n> > },\n> > Suggest re-wording the long description (subscriber part) to be more like the\n> > documentation text.\n> >\n> > BEFORE\n> > On the subscriber, in parallel streaming mode, it allows the leader apply worker\n> > to serialize changes to files and notifies the parallel apply workers to read and\n> > apply them at the end of the transaction.\n> >\n> > SUGGESTION\n> > On the subscriber, if the streaming option is set to parallel, it directs the leader\n> > apply worker to send changes to the shared memory queue or to serialize\n> > changes and apply them at the end of the transaction.\n> >\n>\n> Changed.\n>\n> Attach the new version patch which addressed all comments so far (the v88-0001\n> has been committed, so we only have one remaining patch this time).\n>\n\nI have one comment on v89 patch:\n\n+ /*\n+ * Using 'immediate' mode returns false to cause a switch to\n+ * PARTIAL_SERIALIZE mode so that the remaining changes will\nbe serialized.\n+ */\n+ if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE)\n+ return false;\n+\n\nProbably we might want to add unlikely() here since we could pass\nthrough this path very frequently?\n\nThe rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 23:19:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 30, 2023 at 5:23 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, January 30, 2023 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are 
my review comments for v88-0002.\n>\n> Thanks for your comments.\n>\n> >\n> > ======\n> > General\n> >\n> > 1.\n> > The test cases are checking the log content but they are not checking for\n> > debug logs or untranslated elogs -- they are expecting a normal ereport LOG\n> > that might be translated. I’m not sure if that is OK, or if it is a potential problem.\n>\n> We have tests that check the ereport ERROR and ereport WARNING message(by\n> search for the ERROR or WARNING keyword for all the tap tests), so I think\n> checking the LOG should be fine.\n>\n> > ======\n> > doc/src/sgml/config.sgml\n> >\n> > 2.\n> > On the publisher side, logical_replication_mode allows allows streaming or\n> > serializing changes immediately in logical decoding. When set to immediate,\n> > stream each change if streaming option (see optional parameters set by\n> > CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change. When set\n> > to buffered, the decoding will stream or serialize changes when\n> > logical_decoding_work_mem is reached.\n> >\n> > 2a.\n> > typo \"allows allows\" (Kuroda-san reported same)\n> >\n> > 2b.\n> > \"if streaming option\" --> \"if the streaming option\"\n>\n> Changed.\n\nAlthough you replied \"Changed\" for the above, AFAICT my review comment\n#2b. 
was accidentally missed.\n\nOtherwise, the patch LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 31 Jan 2023 11:22:57 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, January 30, 2023 10:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> \r\n> I have one comment on v89 patch:\r\n> \r\n> + /*\r\n> + * Using 'immediate' mode returns false to cause a switch to\r\n> + * PARTIAL_SERIALIZE mode so that the remaining changes will\r\n> be serialized.\r\n> + */\r\n> + if (logical_replication_mode == LOGICAL_REP_MODE_IMMEDIATE)\r\n> + return false;\r\n> +\r\n> \r\n> Probably we might want to add unlikely() here since we could pass through this\r\n> path very frequently?\r\n\r\nI think your comment makes sense, thanks.\r\nI updated the patch for the same.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 31 Jan 2023 03:34:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, January 31, 2023 8:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Mon, Jan 30, 2023 at 5:23 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, January 30, 2023 12:13 PM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> > >\r\n> > > Here are my review comments for v88-0002.\r\n> >\r\n> > Thanks for your comments.\r\n> >\r\n> > >\r\n> > > ======\r\n> > > General\r\n> > >\r\n> > > 1.\r\n> > > The test cases are checking the log content but they are not\r\n> > > checking for debug logs or untranslated elogs -- they are expecting\r\n> > > a normal ereport LOG that might be translated. 
I’m not sure if that is OK, or\r\n> if it is a potential problem.\r\n> >\r\n> > We have tests that check the ereport ERROR and ereport WARNING\r\n> > message(by search for the ERROR or WARNING keyword for all the tap\r\n> > tests), so I think checking the LOG should be fine.\r\n> >\r\n> > > ======\r\n> > > doc/src/sgml/config.sgml\r\n> > >\r\n> > > 2.\r\n> > > On the publisher side, logical_replication_mode allows allows\r\n> > > streaming or serializing changes immediately in logical decoding.\r\n> > > When set to immediate, stream each change if streaming option (see\r\n> > > optional parameters set by CREATE SUBSCRIPTION) is enabled,\r\n> > > otherwise, serialize each change. When set to buffered, the decoding\r\n> > > will stream or serialize changes when logical_decoding_work_mem is\r\n> reached.\r\n> > >\r\n> > > 2a.\r\n> > > typo \"allows allows\" (Kuroda-san reported same)\r\n> > >\r\n> > > 2b.\r\n> > > \"if streaming option\" --> \"if the streaming option\"\r\n> >\r\n> > Changed.\r\n> \r\n> Although you replied \"Changed\" for the above, AFAICT my review comment\r\n> #2b. 
was accidentally missed.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 31 Jan 2023 03:34:45 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Thanks for the updates to address all of my previous review comments.\n\nPatch v90-0001 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 31 Jan 2023 16:40:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Jan 31, 2023 at 9:04 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> I think your comment makes sense, thanks.\n> I updated the patch for the same.\n>\n\nThe patch looks mostly good to me. I have made a few changes in the\ncomments and docs, see attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 1 Feb 2023 17:30:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Some minor review comments for v91-0001\n\n======\ndoc/src/sgml/config.sgml\n\n1.\n <para>\n- Allows streaming or serializing changes immediately in\nlogical decoding.\n- The allowed values of <varname>logical_replication_mode</varname> are\n- <literal>buffered</literal> and <literal>immediate</literal>. When set\n- to <literal>immediate</literal>, stream each change if\n+ The allowed values are <literal>buffered</literal> and\n+ <literal>immediate</literal>. 
The default is\n<literal>buffered</literal>.\n+ This parameter is intended to be used to test logical decoding and\n+ replication of large transactions for which otherwise we need\nto generate\n+ the changes till <varname>logical_decoding_work_mem</varname> is\n+ reached. The effect of <varname>logical_replication_mode</varname> is\n+ different for the publisher and subscriber:\n+ </para>\n\nThe \"for which otherwise...\" part is only relevant for the\npublisher side. So it seemed slightly strange to give the reason why\nto use the GUC for one side but not the other side.\n\nMaybe we can just remove that \"for which otherwise...\" part, since\nthe logical_decoding_work_mem gets mentioned later in the \"On the\npublisher side,...\" paragraph anyway.\n\n~~~\n\n2.\n <para>\n- This parameter is intended to be used to test logical decoding and\n- replication of large transactions for which otherwise we need to\n- generate the changes till <varname>logical_decoding_work_mem</varname>\n- is reached.\n+ On the subscriber side, if the <literal>streaming</literal>\noption is set to\n+ <literal>parallel</literal>,\n<varname>logical_replication_mode</varname>\n+ can be used to direct the leader apply worker to send changes to the\n+ shared memory queue or to serialize changes to the file. When set to\n+ <literal>buffered</literal>, the leader sends changes to parallel apply\n+ workers via a shared memory queue. 
When set to\n+ <literal>immediate</literal>, the leader serializes all\nchanges to files\n+ and notifies the parallel apply workers to read and apply them at the\n+ end of the transaction.\n </para>\n\n\"or serialize changes to the file.\" --> \"or serialize all changes to\nfiles.\" (just to use same wording as later in this same paragraph, and\nalso same wording as the GUC hint text).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Feb 2023 10:22:06 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Some minor review comments for v91-0001\n>\n\nPushed this yesterday after addressing your comments!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 3 Feb 2023 08:34:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, February 3, 2023 11:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > Some minor review comments for v91-0001\r\n> >\r\n> \r\n> Pushed this yesterday after addressing your comments!\r\n\r\nThanks for pushing.\r\n\r\nCurrently, we have two remaining patches which we are not sure whether it's worth\r\ncommitting for now. Just share them here for reference.\r\n\r\n0001:\r\n\r\nBased on our discussion[1] on -hackers, it's not clear that if it's necessary\r\nto add the sub-feature to stop extra worker when\r\nmax_apply_workers_per_suibscription is reduced. 
Because:\r\n\r\n- it's not clear whether reducing the 'max_apply_workers_per_suibscription' is very\r\n common.\r\n- even when the GUC is reduced, at that point in time all the workers might be\r\n in use so there may be nothing that can be immediately done.\r\n- IIUC the excess workers (for a reduced GUC) are going to get freed naturally\r\n anyway over time as more transactions are completed so the pool size will\r\n reduce accordingly.\r\n\r\nAnd given that the logic of this patch is simple, it would be easy to add this\r\nat a later point if we really see a use case for this.\r\n\r\n0002:\r\n\r\nSince all the deadlock errors and other errors caused by parallel streaming\r\nwill be logged, the user can check this kind of ERROR and disable the parallel\r\nstreaming mode to resolve it. Besides, for this retry feature, it will\r\nbe hard to distinguish whether the ERROR is caused by parallel streaming, and we\r\nmight need to retry in serialize mode for all kinds of ERROR. So, it's not very\r\nclear if automatically using serialize mode to retry in case of any ERROR in parallel\r\nstreaming is necessary or not. 
And we can also add this when we see a use case.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1LotEuPsteuJMNpixxTj6R4B8k93q-6ruRmDzCxKzMNpA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 3 Feb 2023 03:29:27 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Feb 3, 2023 at 12:29 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, February 3, 2023 11:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > >\n> > > Some minor review comments for v91-0001\n> > >\n> >\n> > Pushed this yesterday after addressing your comments!\n>\n> Thanks for pushing.\n>\n> Currently, we have two remaining patches which we are not sure whether it's worth\n> committing for now. Just share them here for reference.\n>\n> 0001:\n>\n> Based on our discussion[1] on -hackers, it's not clear that if it's necessary\n> to add the sub-feature to stop extra worker when\n> max_apply_workers_per_suibscription is reduced. Because:\n>\n> - it's not clear whether reducing the 'max_apply_workers_per_suibscription' is very\n> common.\n\nA use case I'm concerned about is a temporarily intensive data load,\nfor example, a data loading batch job in a maintenance window. In this\ncase, the user might want to temporarily increase\nmax_parallel_workers_per_subscription in order to avoid a large\nreplication lag, and revert the change back to normal after the job.\nIf it's unlikely to stream the changes in the regular workload as\nlogical_decoding_work_mem is big enough to handle the regular\ntransaction data, the excess parallel workers won't exit. Another\nsubscription might want to use parallel workers but there might not be\nfree workers. 
That's why I thought we need to free the excess workers\nat some point.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Feb 2023 16:57:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Feb 3, 2023 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 12:29 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, February 3, 2023 11:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2250@gmail.com>\n> > > wrote:\n> > > >\n> > > > Some minor review comments for v91-0001\n> > > >\n> > >\n> > > Pushed this yesterday after addressing your comments!\n> >\n> > Thanks for pushing.\n> >\n> > Currently, we have two remaining patches which we are not sure whether it's worth\n> > committing for now. Just share them here for reference.\n> >\n> > 0001:\n> >\n> > Based on our discussion[1] on -hackers, it's not clear that if it's necessary\n> > to add the sub-feature to stop extra worker when\n> > max_apply_workers_per_suibscription is reduced. Because:\n> >\n> > - it's not clear whether reducing the 'max_apply_workers_per_suibscription' is very\n> > common.\n>\n> A use case I'm concerned about is a temporarily intensive data load,\n> for example, a data loading batch job in a maintenance window. 
In this\n> case, the user might want to temporarily increase\n> max_parallel_workers_per_subscription in order to avoid a large\n> replication lag, and revert the change back to normal after the job.\n> If it's unlikely to stream the changes in the regular workload as\n> logical_decoding_work_mem is big enough to handle the regular\n> transaction data, the excess parallel workers won't exit.\n>\n\nWon't in such a case, it would be better to just switch off the\nparallel option for a subscription? We need to think of a predictable\nway to test this path which may not be difficult. But I guess it would\nbe better to wait for some feedback from the field about this feature\nbefore adding more to it and anyway it shouldn't be a big deal to add\nthis later as well.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 3 Feb 2023 15:14:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hi, \r\n\r\nwhile reading the code, I noticed that in pa_send_data() we set wait event\r\nto WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending the\r\nmessage to the queue. Because this state is used in multiple places, user might\r\nnot be able to distinguish what they are waiting for. So it seems we'd better\r\nto use WAIT_EVENT_MQ_SEND here which will be easier to distinguish and\r\nunderstand. Here is a tiny patch for that.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 6 Feb 2023 10:13:36 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\n> while reading the code, I noticed that in pa_send_data() we set wait event\r\n> to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending\r\n> the\r\n> message to the queue. 
Because this state is used in multiple places, user might\r\n> not be able to distinguish what they are waiting for. So it seems we'd better\r\n> to use WAIT_EVENT_MQ_SEND here which will be easier to distinguish and\r\n> understand. Here is a tiny patch for that.\r\n\r\nIn LogicalParallelApplyLoop(), we introduced the new wait event\r\nWAIT_EVENT_LOGICAL_PARALLEL_APPLY_MAIN whereas it practically waits on a shared\r\nmessage queue and it seems to be the same as WAIT_EVENT_MQ_RECEIVE.\r\nDo you have a policy to reuse the event instead of adding a new event?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 6 Feb 2023 10:33:36 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, February 6, 2023 6:34 PM Kuroda, Hayato <kuroda.hayato@fujitsu.com> wrote:\r\n> > while reading the code, I noticed that in pa_send_data() we set wait\r\n> > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\r\n> sending\r\n> > the message to the queue. Because this state is used in multiple\r\n> > places, user might not be able to distinguish what they are waiting\r\n> > for. So it seems we'd better to use WAIT_EVENT_MQ_SEND here which will\r\n> > be easier to distinguish and understand. Here is a tiny patch for that.\r\n> \r\n> In LogicalParallelApplyLoop(), we introduced the new wait event\r\n> WAIT_EVENT_LOGICAL_PARALLEL_APPLY_MAIN whereas it practically waits on a\r\n> shared message queue and it seems to be the same as WAIT_EVENT_MQ_RECEIVE.\r\n> Do you have a policy to reuse the event instead of adding a new event?\r\n\r\nI think PARALLEL_APPLY_MAIN waits for two kinds of event: 1) wait for new\r\nmessage from the queue 2) wait for the partial file state to be set. 
So, I\r\nthink introducing a new general event for them is better and it is also\r\nconsistent with the WAIT_EVENT_LOGICAL_APPLY_MAIN which is used in the main\r\nloop of leader apply worker(LogicalRepApplyLoop). But the event in\r\npa_send_data() is only for message send, so it seems fine to use\r\nWAIT_EVENT_MQ_SEND, besides MQ_SEND is also unique in parallel apply worker and\r\nuser can distinguish without adding a new event.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 6 Feb 2023 11:25:03 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Dear Hou,\r\n\r\n> I think PARALLEL_APPLY_MAIN waits for two kinds of event: 1) wait for new\r\n> message from the queue 2) wait for the partial file state to be set. So, I\r\n> think introducing a new general event for them is better and it is also\r\n> consistent with the WAIT_EVENT_LOGICAL_APPLY_MAIN which is used in the\r\n> main\r\n> loop of leader apply worker(LogicalRepApplyLoop). But the event in\r\n> pa_send_data() is only for message send, so it seems fine to use\r\n> WAIT_EVENT_MQ_SEND, besides MQ_SEND is also unique in parallel apply\r\n> worker and\r\n> user can distinguish without adding a new event.\r\n\r\nThank you for your explanation. 
I think what both of you said is reasonable.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 7 Feb 2023 02:03:05 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> while reading the code, I noticed that in pa_send_data() we set wait event\n> to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending the\n> message to the queue. Because this state is used in multiple places, user might\n> not be able to distinguish what they are waiting for. So it seems we'd better\n> to use WAIT_EVENT_MQ_SEND here which will be easier to distinguish and\n> understand. Here is a tiny patch for that.\n>\n\nThanks for noticing this. The patch LGTM. I'll push this in some time.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 7 Feb 2023 08:46:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Feb 3, 2023 at 12:29 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Friday, February 3, 2023 11:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith <smithpb2250@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > Some minor review comments for v91-0001\n> > > > >\n> > > >\n> > > > Pushed this yesterday after addressing your comments!\n> > >\n> > > Thanks for pushing.\n> > >\n> > > Currently, we have two remaining patches which we are not sure whether it's 
worth\n> > > committing for now. Just share them here for reference.\n> > >\n> > > 0001:\n> > >\n> > > Based on our discussion[1] on -hackers, it's not clear that if it's necessary\n> > > to add the sub-feature to stop extra worker when\n> > > max_apply_workers_per_suibscription is reduced. Because:\n> > >\n> > > - it's not clear whether reducing the 'max_apply_workers_per_suibscription' is very\n> > > common.\n> >\n> > A use case I'm concerned about is a temporarily intensive data load,\n> > for example, a data loading batch job in a maintenance window. In this\n> > case, the user might want to temporarily increase\n> > max_parallel_workers_per_subscription in order to avoid a large\n> > replication lag, and revert the change back to normal after the job.\n> > If it's unlikely to stream the changes in the regular workload as\n> > logical_decoding_work_mem is big enough to handle the regular\n> > transaction data, the excess parallel workers won't exit.\n> >\n>\n> Won't in such a case, it would be better to just switch off the\n> parallel option for a subscription?\n\nNot sure. Changing the parameter would be easier since it doesn't\nrequire restarts.\n\n> We need to think of a predictable\n> way to test this path which may not be difficult. But I guess it would\n> be better to wait for some feedback from the field about this feature\n> before adding more to it and anyway it shouldn't be a big deal to add\n> this later as well.\n\nAgreed to hear some feedback before adding it. 
It's not an urgent feature.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 7 Feb 2023 16:11:17 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Feb 7, 2023 at 12:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > We need to think of a predictable\n> > way to test this path which may not be difficult. But I guess it would\n> > be better to wait for some feedback from the field about this feature\n> > before adding more to it and anyway it shouldn't be a big deal to add\n> > this later as well.\n>\n> Agreed to hear some feedback before adding it. It's not an urgent feature.\n>\n\nOkay, Thanks! AFAIK, there is no pending patch left in this proposal.\nIf so, I think it is better to close the corresponding CF entry.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Feb 2023 13:07:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Feb 7, 2023 15:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Feb 7, 2023 at 12:41 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > > We need to think of a predictable\r\n> > > way to test this path which may not be difficult. But I guess it would\r\n> > > be better to wait for some feedback from the field about this feature\r\n> > > before adding more to it and anyway it shouldn't be a big deal to add\r\n> > > this later as well.\r\n> >\r\n> > Agreed to hear some feedback before adding it. 
It's not an urgent feature.\r\n> >\r\n> \r\n> Okay, Thanks! AFAIK, there is no pending patch left in this proposal.\r\n> If so, I think it is better to close the corresponding CF entry.\r\n\r\nYes, I think so.\r\nClosed this CF entry.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Wed, 8 Feb 2023 03:01:03 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tuesday, February 7, 2023 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > while reading the code, I noticed that in pa_send_data() we set wait\r\n> > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\r\n> sending\r\n> > the message to the queue. Because this state is used in multiple\r\n> > places, user might not be able to distinguish what they are waiting\r\n> > for. So It seems we'd better to use WAIT_EVENT_MQ_SEND here which will\r\n> > be eaier to distinguish and understand. 
Here is a tiny patch for that.\r\n> >\r\n\r\nAs discussed[1], we'd better invent a new state for this purpose, so here is the patch\r\nthat does the same.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 10 Feb 2023 02:32:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Feb 10, 2023 at 1:32 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, February 7, 2023 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > while reading the code, I noticed that in pa_send_data() we set wait\n> > > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\n> > sending\n> > > the message to the queue. Because this state is used in multiple\n> > > places, user might not be able to distinguish what they are waiting\n> > > for. So It seems we'd better to use WAIT_EVENT_MQ_SEND here which will\n> > > be eaier to distinguish and understand. Here is a tiny patch for that.\n> > >\n>\n> As discussed[1], we'd better invent a new state for this purpose, so here is the patch\n> that does the same.\n>\n> [1] https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com\n>\n\nMy first impression was the\nWAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading\nbecause that makes it sound like the parallel apply worker is doing\nthe sending, but IIUC it's really the opposite.\n\nAnd since WAIT_EVENT_LOGICAL_PARALLEL_APPLY_LEADER_SEND_DATA seems too\nverbose, how about shortening the prefix for both events? 
E.g.\n\nBEFORE\nWAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA,\nWAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE,\n\nAFTER\nWAIT_EVENT_LOGICAL_PA_LEADER_SEND_DATA,\nWAIT_EVENT_LOGICAL_PA_STATE_CHANGE,\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 10 Feb 2023 14:26:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Feb 10, 2023 at 8:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Feb 10, 2023 at 1:32 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, February 7, 2023 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > while reading the code, I noticed that in pa_send_data() we set wait\n> > > > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\n> > > sending\n> > > > the message to the queue. Because this state is used in multiple\n> > > > places, user might not be able to distinguish what they are waiting\n> > > > for. So It seems we'd better to use WAIT_EVENT_MQ_SEND here which will\n> > > > be eaier to distinguish and understand. 
Here is a tiny patch for that.\n> > > >\n> >\n> > As discussed[1], we'd better invent a new state for this purpose, so here is the patch\n> > that does the same.\n> >\n> > [1] https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com\n> >\n>\n> My first impression was the\n> WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading\n> because that makes it sound like the parallel apply worker is doing\n> the sending, but IIUC it's really the opposite.\n>\n\nSo, how about WAIT_EVENT_LOGICAL_APPLY_SEND_DATA?\n\n> And since WAIT_EVENT_LOGICAL_PARALLEL_APPLY_LEADER_SEND_DATA seems too\n> verbose, how about shortening the prefix for both events? E.g.\n>\n> BEFORE\n> WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA,\n> WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE,\n>\n> AFTER\n> WAIT_EVENT_LOGICAL_PA_LEADER_SEND_DATA,\n> WAIT_EVENT_LOGICAL_PA_STATE_CHANGE,\n>\n\nI am not sure *_PA_LEADER_* is any better that what Hou-San has proposed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 14 Feb 2023 11:34:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Feb 14, 2023 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 10, 2023 at 8:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Feb 10, 2023 at 1:32 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, February 7, 2023 11:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > while reading the code, I noticed that in pa_send_data() we set wait\n> > > > > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\n> > > > sending\n> > > > > the message to the queue. 
Because this state is used in multiple\n> > > > > places, user might not be able to distinguish what they are waiting\n> > > > > for. So It seems we'd better to use WAIT_EVENT_MQ_SEND here which will\n> > > > > be eaier to distinguish and understand. Here is a tiny patch for that.\n> > > > >\n> > >\n> > > As discussed[1], we'd better invent a new state for this purpose, so here is the patch\n> > > that does the same.\n> > >\n> > > [1] https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com\n> > >\n> >\n> > My first impression was the\n> > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading\n> > because that makes it sound like the parallel apply worker is doing\n> > the sending, but IIUC it's really the opposite.\n> >\n>\n> So, how about WAIT_EVENT_LOGICAL_APPLY_SEND_DATA?\n>\n\nYes, IIUC all the LR events are named WAIT_EVENT_LOGICAL_xxx.\n\nSo names like the below seem correct format:\n\na) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA\nb) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA\nc) WAIT_EVENT_LOGICAL_LEADER_APPLY_SEND_DATA\n\nOf those, I prefer option c) because saying LEADER_APPLY_xxx matches\nthe name format of the existing\nWAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 14 Feb 2023 17:58:09 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Feb 14, 2023 at 3:58 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Feb 14, 2023 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 10, 2023 at 8:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 10, 2023 at 1:32 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Tuesday, February 7, 2023 11:17 AM Amit Kapila 
<amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Feb 6, 2023 at 3:43 PM houzj.fnst@fujitsu.com\n> > > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > > >\n> > > > > > while reading the code, I noticed that in pa_send_data() we set wait\n> > > > > > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while\n> > > > > sending\n> > > > > > the message to the queue. Because this state is used in multiple\n> > > > > > places, user might not be able to distinguish what they are waiting\n> > > > > > for. So It seems we'd better to use WAIT_EVENT_MQ_SEND here which will\n> > > > > > be eaier to distinguish and understand. Here is a tiny patch for that.\n> > > > > >\n> > > >\n> > > > As discussed[1], we'd better invent a new state for this purpose, so here is the patch\n> > > > that does the same.\n> > > >\n> > > > [1] https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com\n> > > >\n> > >\n> > > My first impression was the\n> > > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading\n> > > because that makes it sound like the parallel apply worker is doing\n> > > the sending, but IIUC it's really the opposite.\n> > >\n> >\n> > So, how about WAIT_EVENT_LOGICAL_APPLY_SEND_DATA?\n> >\n>\n> Yes, IIUC all the LR events are named WAIT_EVENT_LOGICAL_xxx.\n>\n> So names like the below seem correct format:\n>\n> a) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA\n> b) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA\n> c) WAIT_EVENT_LOGICAL_LEADER_APPLY_SEND_DATA\n\nPersonally I'm fine even without \"LEADER\" in the wait event name since\nwe don't have \"who is waiting\" in it. IIUC a row of pg_stat_activity\nshows who, and the wait event name shows \"what the process is\nwaiting\". 
So I prefer (a).\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 14 Feb 2023 23:14:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, Feb 14, 2023 at 7:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Feb 14, 2023 at 3:58 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Feb 14, 2023 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 10, 2023 at 8:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > My first impression was the\n> > > > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading\n> > > > because that makes it sound like the parallel apply worker is doing\n> > > > the sending, but IIUC it's really the opposite.\n> > > >\n> > >\n> > > So, how about WAIT_EVENT_LOGICAL_APPLY_SEND_DATA?\n> > >\n> >\n> > Yes, IIUC all the LR events are named WAIT_EVENT_LOGICAL_xxx.\n> >\n> > So names like the below seem correct format:\n> >\n> > a) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA\n> > b) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA\n> > c) WAIT_EVENT_LOGICAL_LEADER_APPLY_SEND_DATA\n>\n> Personally I'm fine even without \"LEADER\" in the wait event name since\n> we don't have \"who is waiting\" in it. IIUC a row of pg_stat_activity\n> shows who, and the wait event name shows \"what the process is\n> waiting\". So I prefer (a).\n>\n\nThis logic makes sense to me. 
So, let's go with (a).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Feb 2023 08:03:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, February 15, 2023 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Feb 14, 2023 at 7:45 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Feb 14, 2023 at 3:58 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Feb 14, 2023 at 5:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Fri, Feb 10, 2023 at 8:56 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > My first impression was the\r\n> > > > > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed\r\n> > > > > misleading because that makes it sound like the parallel apply\r\n> > > > > worker is doing the sending, but IIUC it's really the opposite.\r\n> > > > >\r\n> > > >\r\n> > > > So, how about WAIT_EVENT_LOGICAL_APPLY_SEND_DATA?\r\n> > > >\r\n> > >\r\n> > > Yes, IIUC all the LR events are named WAIT_EVENT_LOGICAL_xxx.\r\n> > >\r\n> > > So names like the below seem correct format:\r\n> > >\r\n> > > a) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA\r\n> > > b) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA\r\n> > > c) WAIT_EVENT_LOGICAL_LEADER_APPLY_SEND_DATA\r\n> >\r\n> > Personally I'm fine even without \"LEADER\" in the wait event name since\r\n> > we don't have \"who is waiting\" in it. IIUC a row of pg_stat_activity\r\n> > shows who, and the wait event name shows \"what the process is\r\n> > waiting\". So I prefer (a).\r\n> >\r\n> \r\n> This logic makes sense to me. 
So, let's go with (a).\r\n\r\nOK, here is the patch that changes the event name to WAIT_EVENT_LOGICAL_APPLY_SEND_DATA.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 15 Feb 2023 03:25:13 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Feb 15, 2023 at 8:55 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, February 15, 2023 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > So names like the below seem correct format:\n> > > >\n> > > > a) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA\n> > > > b) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA\n> > > > c) WAIT_EVENT_LOGICAL_LEADER_APPLY_SEND_DATA\n> > >\n> > > Personally I'm fine even without \"LEADER\" in the wait event name since\n> > > we don't have \"who is waiting\" in it. IIUC a row of pg_stat_activity\n> > > shows who, and the wait event name shows \"what the process is\n> > > waiting\". So I prefer (a).\n> > >\n> >\n> > This logic makes sense to me. So, let's go with (a).\n>\n> OK, here is the patch that changes the event name to WAIT_EVENT_LOGICAL_APPLY_SEND_DATA.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Feb 2023 14:46:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "LGTM. My only comment is about the commit message.\n\n======\nCommit message\n\nd9d7fe6 reuse existing wait event when sending data in apply worker. 
But we\nshould have invent a new wait state if we are waiting at a new place, so fix\nthis.\n\n~\n\nSUGGESTION\nd9d7fe6 made use of an existing wait event when sending data from the apply\nworker, but we should have invented a new wait state since the code was\nwaiting at a new place.\n\nThis patch corrects the mistake by using a new wait state\n\"LogicalApplySendData\".\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 16 Feb 2023 10:31:13 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Jan 9, 2023 at 5:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 8, 2023 at 11:32 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Sunday, January 8, 2023 11:59 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote: \n> > > Attach the updated patch set.\n> >\n> > Sorry, the commit message of 0001 was accidentally deleted, just attach\n> > the same patch set again with commit message.\n> >\n>\n> Pushed the first (0001) patch.\n\nWhile looking at the worker.c, I realized that we have the following\ncode in handle_streamed_transaction():\n\n default:\n Assert(false);\n return false; /* silence compiler warning */\n\nI think it's better to do elog(ERROR) instead of Assert() as it ends\nup returning false in non-assertion builds, which might cause a\nproblem. And it's more consistent with other code in worker.c. 
Please\nfind an attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 24 Apr 2023 10:55:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "At Mon, 24 Apr 2023 10:55:44 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> While looking at the worker.c, I realized that we have the following\n> code in handle_streamed_transaction():\n> \n> default:\n> Assert(false);\n> return false; /* silence compiler warning */\n> \n> I think it's better to do elog(ERROR) instead of Assert() as it ends\n> up returning false in non-assertion builds, which might cause a\n> problem. And it's more consistent with other code in worker.c. Please\n> find an attached patch.\n\nI concur that returning false is problematic.\n\nFor assertion builds, Assert typically provides more detailed\ninformation than elog. However, in this case, it wouldn't matter much\nsince the worker would repeatedly restart even after a server-restart\nfor the same reason unless cosmic rays are involved. Moreover, the\nsituation doesn't justify server-restarting, as it would unnecessarily\ninvolve other backends.\n\nIn my opinion, it is fine to replace the Assert with an ERROR.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 24 Apr 2023 11:50:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I concur that returning false is problematic.\n> \n> For assertion builds, Assert typically provides more detailed\n> information than elog. 
However, in this case, it wouldn't matter much\n> since the worker would repeatedly restart even after a server-restart\n> for the same reason unless cosmic rays are involved. Moreover, the\n\n> situation doesn't justify server-restarting, as it would unnecessarily\n> involve other backends.\n\nPlease disregard this part, as it's not relevant to non-assertion builds.\n\n> In my opinion, it is fine to replace the Assert with an ERROR.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 24 Apr 2023 11:55:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> In my opinion, it is fine to replace the Assert with an ERROR.\n\nSorry for posting multiple times in a row, but I'm a bit uncertain\nwhether we should use FATAL or ERROR for this situation. The stream is\nnot provided by user, and the session or process cannot continue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 24 Apr 2023 12:10:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "On Mon, Apr 24, 2023 at 8:40 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > In my opinion, it is fine to replace the Assert with an ERROR.\n>\n> Sorry for posting multiple times in a row, but I'm a bit uncertain\n> whether we should use FATAL or ERROR for this situation. 
The stream is\n> not provided by user, and the session or process cannot continue.\n>\n\nI think ERROR should be fine here similar to other cases in worker.c.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Apr 2023 08:59:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "At Mon, 24 Apr 2023 08:59:07 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > Sorry for posting multiple times in a row, but I'm a bit uncertain\n> > whether we should use FATAL or ERROR for this situation. The stream is\n> > not provided by user, and the session or process cannot continue.\n> >\n> \n> I think ERROR should be fine here similar to other cases in worker.c.\n\nSure, I don't have any issues with it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 24 Apr 2023 13:03:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers\n and parallel apply" }, { "msg_contents": "On Mon, Apr 24, 2023 at 7:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> While looking at the worker.c, I realized that we have the following\n> code in handle_streamed_transaction():\n>\n> default:\n> Assert(false);\n> return false; /* silence compiler warning */\n>\n> I think it's better to do elog(ERROR) instead of Assert() as it ends\n> up returning false in non-assertion builds, which might cause a\n> problem. And it's more consistent with other codes in worker.c. 
Please\n> find an attached patch.\n>\n\nI haven't tested it but otherwise, the changes look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 Apr 2023 10:54:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, Apr 24, 2023 at 2:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 24, 2023 at 7:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > While looking at the worker.c, I realized that we have the following\n> > code in handle_streamed_transaction():\n> >\n> > default:\n> > Assert(false);\n> > return false; /* silence compiler warning */\n> >\n> > I think it's better to do elog(ERROR) instead of Assert() as it ends\n> > up returning false in non-assertion builds, which might cause a\n> > problem. And it's more consistent with other codes in worker.c. Please\n> > find an attached patch.\n> >\n>\n> I haven't tested it but otherwise, the changes look good to me.\n\nThanks for checking! Pushed.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 24 Apr 2023 15:42:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hello hackers,\n\nPlease look at a new anomaly that can be observed starting from 216a7848.\n\nThe following script:\necho \"CREATE SUBSCRIPTION testsub CONNECTION 'dbname=nodb' PUBLICATION testpub WITH (connect = false);\nALTER SUBSCRIPTION testsub ENABLE;\" | psql\n\nsleep 1\nrm $PGINST/lib/libpqwalreceiver.so\nsleep 15\npg_ctl -D \"$PGDB\" stop -m immediate\ngrep 'TRAP:' server.log\n\nLeads to multiple assertion failures:\nCREATE SUBSCRIPTION\nALTER SUBSCRIPTION\nwaiting for server to shut down.... 
done\nserver stopped\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899323\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899416\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899427\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899439\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899538\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899547\n\nserver.log contains:\n2023-04-26 11:00:58.797 MSK [2899300] LOG:  database system is ready to accept connections\n2023-04-26 11:00:58.821 MSK [2899416] ERROR:  could not access file \"libpqwalreceiver\": No such file or directory\nTRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\", Line: 4439, PID: 2899416\npostgres: logical replication apply worker for subscription 16385 (ExceptionalCondition+0x69)[0x558b2ac06d41]\npostgres: logical replication apply worker for subscription 16385 (VirtualXactLockTableCleanup+0xa4)[0x558b2aa9fd74]\npostgres: logical replication apply worker for subscription 16385 (LockReleaseAll+0xbb)[0x558b2aa9fe7d]\npostgres: logical replication apply worker for subscription 16385 (+0x4588c6)[0x558b2aa2a8c6]\npostgres: logical replication apply worker for subscription 16385 (shmem_exit+0x6c)[0x558b2aa87eb1]\npostgres: logical replication apply worker for subscription 16385 (+0x4b5faa)[0x558b2aa87faa]\npostgres: logical replication apply worker for subscription 16385 (proc_exit+0xc)[0x558b2aa88031]\npostgres: logical replication apply worker for subscription 16385 (StartBackgroundWorker+0x147)[0x558b2aa0b4d9]\npostgres: logical replication apply worker for subscription 16385 (+0x43fdc1)[0x558b2aa11dc1]\npostgres: logical replication apply worker for 
subscription 16385 (+0x43ff3d)[0x558b2aa11f3d]\npostgres: logical replication apply worker for subscription 16385 (+0x440866)[0x558b2aa12866]\npostgres: logical replication apply worker for subscription 16385 (+0x440e12)[0x558b2aa12e12]\npostgres: logical replication apply worker for subscription 16385 (BackgroundWorkerInitializeConnection+0x0)[0x558b2aa14396]\npostgres: logical replication apply worker for subscription 16385 (main+0x21a)[0x558b2a932e21]\n\nI understand, that removing libpqwalreceiver.so (or whole pginst/) is not\nwhat happens in a production environment every day, but nonetheless it's a\nnew failure mode and it can produce many coredumps when testing.\n\nIIUC, that assert will fail in case of any error raised between\nApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\nApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeConnectionByOid()->InitPostgres().\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 26 Apr 2023 12:00:02 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\r\n> Please look at a new anomaly that can be observed starting from 216a7848.\r\n> \r\n> The following script:\r\n> echo \"CREATE SUBSCRIPTION testsub CONNECTION 'dbname=nodb'\r\n> PUBLICATION testpub WITH (connect = false);\r\n> ALTER SUBSCRIPTION testsub ENABLE;\" | psql\r\n> \r\n> sleep 1\r\n> rm $PGINST/lib/libpqwalreceiver.so\r\n> sleep 15\r\n> pg_ctl -D \"$PGDB\" stop -m immediate\r\n> grep 'TRAP:' server.log\r\n> \r\n> Leads to multiple assertion failures:\r\n> CREATE SUBSCRIPTION\r\n> ALTER SUBSCRIPTION\r\n> waiting for server to shut down.... 
done\r\n> server stopped\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899323\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899416\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899427\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899439\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899538\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899547\r\n> \r\n> server.log contains:\r\n> 2023-04-26 11:00:58.797 MSK [2899300] LOG:  database system is ready to\r\n> accept connections\r\n> 2023-04-26 11:00:58.821 MSK [2899416] ERROR:  could not access file\r\n> \"libpqwalreceiver\": No such file or directory\r\n> TRAP: failed Assert(\"MyProc->backendId != InvalidBackendId\"), File: \"lock.c\",\r\n> Line: 4439, PID: 2899416\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (ExceptionalCondition+0x69)[0x558b2ac06d41]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (VirtualXactLockTableCleanup+0xa4)[0x558b2aa9fd74]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (LockReleaseAll+0xbb)[0x558b2aa9fe7d]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x4588c6)[0x558b2aa2a8c6]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (shmem_exit+0x6c)[0x558b2aa87eb1]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x4b5faa)[0x558b2aa87faa]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (proc_exit+0xc)[0x558b2aa88031]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> 
(StartBackgroundWorker+0x147)[0x558b2aa0b4d9]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x43fdc1)[0x558b2aa11dc1]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x43ff3d)[0x558b2aa11f3d]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x440866)[0x558b2aa12866]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (+0x440e12)[0x558b2aa12e12]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (BackgroundWorkerInitializeConnection+0x0)[0x558b2aa14396]\r\n> postgres: logical replication apply worker for subscription 16385\r\n> (main+0x21a)[0x558b2a932e21]\r\n> \r\n> I understand, that removing libpqwalreceiver.so (or whole pginst/) is not\r\n> what happens in a production environment every day, but nonetheless it's a\r\n> new failure mode and it can produce many coredumps when testing.\r\n> \r\n> IIUC, that assert will fail in case of any error raised between\r\n> ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\r\n> ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\r\n> onnectionByOid()->InitPostgres().\r\n\r\nThanks for reporting the issue.\r\n\r\nI think the problem is that it tried to release locks in\r\nlogicalrep_worker_onexit() before the initialization of the process is complete\r\nbecause this callback function was registered before the init phase. So I think we\r\ncan add a conditional statement before releasing locks. 
Please find an attached\r\npatch.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 26 Apr 2023 10:41:22 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> Thanks for reporting the issue.\n>\n> I think the problem is that it tried to release locks in\n> logicalrep_worker_onexit() before the initialization of the process is complete\n> because this callback function was registered before the init phase. So I think we\n> can add a conditional statement before releasing locks. Please find an attached\n> patch.\n>\n\nYeah, this should work. Yet another possibility is to introduce a new\nvariable 'InitializingApplyWorker' similar to\n'InitializingParallelWorker' and use that to prevent releasing locks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Apr 2023 16:51:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> >\n> > IIUC, that assert will fail in case of any error raised between\n> > ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\n> > ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\n> > onnectionByOid()->InitPostgres().\n>\n> Thanks for reporting the issue.\n>\n> I think the problem is that it tried to release locks in\n> logicalrep_worker_onexit() before the initialization of the process is complete\n> because this callback 
function was registered before the init phase. So I think we\n> can add a conditional statement before releasing locks. Please find an attached\n> patch.\n>\n\nAlexander, does the proposed patch fix the problem you are facing?\nSawada-San, and others, do you see any better way to fix it than what\nhas been proposed?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 Apr 2023 08:21:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "Hello Amit and Zhijie,\n\n28.04.2023 05:51, Amit Kapila wrote:\n> On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n>> I think the problem is that it tried to release locks in\n>> logicalrep_worker_onexit() before the initialization of the process is complete\n>> because this callback function was registered before the init phase. So I think we\n>> can add a conditional statement before releasing locks. 
Please find an attached\n>> patch.\n> Alexander, does the proposed patch fix the problem you are facing?\n> Sawada-San, and others, do you see any better way to fix it than what\n> has been proposed?\n\nYes, the patch definitely fixes it.\nMaybe some other onexit actions can be skipped in the non-normal mode,\nbut the assert-triggering LockReleaseAll() is not called now.\n\nThank you!\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 28 Apr 2023 08:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > >\n> > > IIUC, that assert will fail in case of any error raised between\n> > > ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\n> > > ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\n> > > onnectionByOid()->InitPostgres().\n> >\n> > Thanks for reporting the issue.\n> >\n> > I think the problem is that it tried to release locks in\n> > logicalrep_worker_onexit() before the initialization of the process is complete\n> > because this callback function was registered before the init phase. So I think we\n> > can add a conditional statement before releasing locks. Please find an attached\n> > patch.\n> >\n>\n> Alexander, does the proposed patch fix the problem you are facing?\n> Sawada-San, and others, do you see any better way to fix it than what\n> has been proposed?\n\nI'm concerned that the idea of relying on IsNormalProcessingMode()\nmight not be robust since if we change the meaning of\nIsNormalProcessingMode() some day it would silently break again. 
So I\nprefer using something like InitializingApplyWorker, or another idea\nwould be to do cleanup work (e.g., fileset deletion and lock release)\nin a separate callback that is registered after connecting to the\ndatabase.\n\n\nWhile investigating this issue, I've reviewed the code around\ncallbacks and worker termination etc and I found a problem.\n\nA parallel apply worker calls the before_shmem_exit callbacks in the\nfollowing order:\n\n1. ShutdownPostgres()\n2. logicalrep_worker_onexit()\n3. pa_shutdown()\n\nSince the worker is detached during logicalrep_worker_onexit(),\nMyLogicalRepWorker->leader_pid is invalid when we call\npa_shutdown():\n\nstatic void\npa_shutdown(int code, Datum arg)\n{\n Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n SendProcSignal(MyLogicalRepWorker->leader_pid,\n PROCSIG_PARALLEL_APPLY_MESSAGE,\n InvalidBackendId);\n\nAlso, if the parallel apply worker fails shm_toc_lookup() during the\ninitialization, it raises an error (because of noError = false) but\nends up in a SEGV as MyLogicalRepWorker is still NULL.\n\nI think that we should not use MyLogicalRepWorker->leader_pid in\npa_shutdown() but instead store the leader's pid in a static variable\nbefore registering the pa_shutdown() callback. And probably we can\nremember the backend id of the leader apply worker to speed up\nSendProcSignal().\n\nFWIW, we might need to be careful about the timing when we call\nlogicalrep_worker_detach() in the worker's termination process. 
Since\nwe rely on IsLogicalParallelApplyWorker() for the parallel apply\nworker to send ERROR messages to the leader apply worker, if an ERROR\nhappens after logicalrep_worker_detach(), we will end up with the\nassertion failure.\n\n    if (IsLogicalParallelApplyWorker())\n        SendProcSignal(pq_mq_parallel_leader_pid,\n                       PROCSIG_PARALLEL_APPLY_MESSAGE,\n                       pq_mq_parallel_leader_backend_id);\n    else\n    {\n        Assert(IsParallelWorker());\n\nIt normally would be a should-not-happen case, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Apr 2023 15:18:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > > >\n> > > > IIUC, that assert will fail in case of any error raised between\n> > > > ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\n> > > > ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\n> > > > onnectionByOid()->InitPostgres().\n> > >\n> > > Thanks for reporting the issue.\n> > >\n> > > I think the problem is that it tried to release locks in\n> > > logicalrep_worker_onexit() before the initialization of the process is complete\n> > > because this callback function was registered before the init phase. So I think we\n> > > can add a conditional statement before releasing locks. 
Please find an attached\n> > > patch.\n> > >\n> >\n> > Alexander, does the proposed patch fix the problem you are facing?\n> > Sawada-San, and others, do you see any better way to fix it than what\n> > has been proposed?\n>\n> I'm concerned that the idea of relying on IsNormalProcessingMode()\n> might not be robust since if we change the meaning of\n> IsNormalProcessingMode() some day it would silently break again. So I\n> prefer using something like InitializingApplyWorker,\n>\n\nI think if we change the meaning of IsNormalProcessingMode() then it\ncould also break the other places the similar check is being used.\nHowever, I am fine with InitializingApplyWorker as that could be used\nat other places as well. I just want to avoid adding another variable\nby using IsNormalProcessingMode.\n\n> or another idea\n> would be to do cleanup work (e.g., fileset deletion and lock release)\n> in a separate callback that is registered after connecting to the\n> database.\n>\n\nYeah, but not sure if it's worth having multiple callbacks for cleanup work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 Apr 2023 14:31:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 28, 2023 at 6:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > > > >\n> > > > > IIUC, that assert will fail in case of any error raised between\n> > > > > ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\n> > > > > 
ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\n> > > > > onnectionByOid()->InitPostgres().\n> > > >\n> > > > Thanks for reporting the issue.\n> > > >\n> > > > I think the problem is that it tried to release locks in\n> > > > logicalrep_worker_onexit() before the initialization of the process is complete\n> > > > because this callback function was registered before the init phase. So I think we\n> > > > can add a conditional statement before releasing locks. Please find an attached\n> > > > patch.\n> > > >\n> > >\n> > > Alexander, does the proposed patch fix the problem you are facing?\n> > > Sawada-San, and others, do you see any better way to fix it than what\n> > > has been proposed?\n> >\n> > I'm concerned that the idea of relying on IsNormalProcessingMode()\n> > might not be robust since if we change the meaning of\n> > IsNormalProcessingMode() some day it would silently break again. So I\n> > prefer using something like InitializingApplyWorker,\n> >\n>\n> I think if we change the meaning of IsNormalProcessingMode() then it\n> could also break the other places the similar check is being used.\n\nRight, but I think it's unclear the relationship between the\nprocessing modes and releasing session locks. If non-normal-processing\nmode means we're still in the process initialization phase, why we\ndon't skip other cleanup works such as walrcv_disconnect() and\nFileSetDeleteAll()?\n\n> However, I am fine with InitializingApplyWorker as that could be used\n> at other places as well. 
I just want to avoid adding another variable\n> by using IsNormalProcessingMode.\n\nI think it's less confusing.\n\n>\n> > or another idea\n> > would be to do cleanup work (e.g., fileset deletion and lock release)\n> > in a separate callback that is registered after connecting to the\n> > database.\n> >\n>\n> Yeah, but not sure if it's worth having multiple callbacks for cleanup work.\n\nFair point.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 May 2023 12:52:06 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> While investigating this issue, I've reviewed the code around\n> callbacks and worker termination etc and I found a problem.\n>\n> A parallel apply worker calls the before_shmem_exit callbacks in the\n> following order:\n>\n> 1. ShutdownPostgres()\n> 2. logicalrep_worker_onexit()\n> 3. pa_shutdown()\n>\n> Since the worker is detached during logicalrep_worker_onexit(),\n> MyLogicalRepWorker->leader_pid is invalid when we call\n> pa_shutdown():\n>\n> static void\n> pa_shutdown(int code, Datum arg)\n> {\n> Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n> SendProcSignal(MyLogicalRepWorker->leader_pid,\n> PROCSIG_PARALLEL_APPLY_MESSAGE,\n> InvalidBackendId);\n>\n> Also, if the parallel apply worker fails shm_toc_lookup() during the\n> initialization, it raises an error (because of noError = false) but\n> ends up in a SEGV as MyLogicalRepWorker is still NULL.\n>\n> I think that we should not use MyLogicalRepWorker->leader_pid in\n> pa_shutdown() but instead store the leader's pid in a static variable\n> before registering the pa_shutdown() callback.\n>\n\nWhy not simply move the registration of pa_shutdown() to someplace\nafter logicalrep_worker_attach()? 
BTW, it seems we don't have access\nto MyLogicalRepWorker->leader_pid till we attach to the worker slot\nvia logicalrep_worker_attach(), so we anyway need to do what you are\nsuggesting after attaching to the worker slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 May 2023 08:51:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Friday, April 28, 2023 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin\r\n> <exclusion@gmail.com> wrote:\r\n> > > >\r\n> > > > IIUC, that assert will fail in case of any error raised between\r\n> > > >\r\n> ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and\r\n> > > >\r\n> ApplyWorkerMain()->InitializeApplyWorker()->BackgroundWorkerInitializeC\r\n> > > > onnectionByOid()->InitPostgres().\r\n> > >\r\n> > > Thanks for reporting the issue.\r\n> > >\r\n> > > I think the problem is that it tried to release locks in\r\n> > > logicalrep_worker_onexit() before the initialization of the process is\r\n> complete\r\n> > > because this callback function was registered before the init phase. So I\r\n> think we\r\n> > > can add a conditional statement before releasing locks. 
Please find an\r\n> attached\r\n> > > patch.\r\n> > >\r\n> >\r\n> > Alexander, does the proposed patch fix the problem you are facing?\r\n> > Sawada-San, and others, do you see any better way to fix it than what\r\n> > has been proposed?\r\n> \r\n> I'm concerned that the idea of relying on IsNormalProcessingMode()\r\n> might not be robust since if we change the meaning of\r\n> IsNormalProcessingMode() some day it would silently break again. So I\r\n> prefer using something like InitializingApplyWorker, or another idea\r\n> would be to do cleanup work (e.g., fileset deletion and lock release)\r\n> in a separate callback that is registered after connecting to the\r\n> database.\r\n\r\nThanks for the review. I agree that it’s better to use a new variable here.\r\nAttach the patch for the same.\r\n\r\n\r\n> \r\n> FWIW, we might need to be careful about the timing when we call\r\n> logicalrep_worker_detach() in the worker's termination process. Since\r\n> we rely on IsLogicalParallelApplyWorker() for the parallel apply\r\n> worker to send ERROR messages to the leader apply worker, if an ERROR\r\n> happens after logicalrep_worker_detach(), we will end up with the\r\n> assertion failure.\r\n> \r\n> if (IsLogicalParallelApplyWorker())\r\n> SendProcSignal(pq_mq_parallel_leader_pid,\r\n> PROCSIG_PARALLEL_APPLY_MESSAGE,\r\n> pq_mq_parallel_leader_backend_id);\r\n> else\r\n> {\r\n> Assert(IsParallelWorker());\r\n>\r\n> It normally would be a should-not-happen case, though.\r\n\r\nYes, I think currently PA sends an ERROR message before exiting,\r\nso the callback functions are always fired after the above code which\r\nlooks fine to me.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 2 May 2023 03:35:58 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 2, 2023 at 9:06 AM Zhijie Hou 
(Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, April 28, 2023 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > Alexander, does the proposed patch fix the problem you are facing?\n> > > Sawada-San, and others, do you see any better way to fix it than what\n> > > has been proposed?\n> >\n> > I'm concerned that the idea of relying on IsNormalProcessingMode()\n> > might not be robust since if we change the meaning of\n> > IsNormalProcessingMode() some day it would silently break again. So I\n> > prefer using something like InitializingApplyWorker, or another idea\n> > would be to do cleanup work (e.g., fileset deletion and lock release)\n> > in a separate callback that is registered after connecting to the\n> > database.\n>\n> Thanks for the review. I agree that it’s better to use a new variable here.\n> Attach the patch for the same.\n>\n\n+ *\n+ * However, if the worker is being initialized, there is no need to release\n+ * locks.\n */\n- LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n+ if (!InitializingApplyWorker)\n+ LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n\nCan we slightly reword this comment as: \"The locks will be acquired\nonce the worker is initialized.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 May 2023 09:46:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 2, 2023 at 9:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 2, 2023 at 9:06 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, April 28, 2023 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > Alexander, does the proposed patch fix the problem you are facing?\n> > > > Sawada-San, and others, do you see any better way to fix it than what\n> > > > has been proposed?\n> > >\n> > > I'm concerned that 
the idea of relying on IsNormalProcessingMode()\n> > > might not be robust since if we change the meaning of\n> > > IsNormalProcessingMode() some day it would silently break again. So I\n> > > prefer using something like InitializingApplyWorker, or another idea\n> > > would be to do cleanup work (e.g., fileset deletion and lock release)\n> > > in a separate callback that is registered after connecting to the\n> > > database.\n> >\n> > Thanks for the review. I agree that it’s better to use a new variable here.\n> > Attach the patch for the same.\n> >\n>\n> + *\n> + * However, if the worker is being initialized, there is no need to release\n> + * locks.\n> */\n> - LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n> + if (!InitializingApplyWorker)\n> + LockReleaseAll(DEFAULT_LOCKMETHOD, true);\n>\n> Can we slightly reword this comment as: \"The locks will be acquired\n> once the worker is initialized.\"?\n>\n\nAfter making this modification, I pushed your patch. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 May 2023 12:47:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Wednesday, May 3, 2023 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, May 2, 2023 at 9:46 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, May 2, 2023 at 9:06 AM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Friday, April 28, 2023 2:18 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > >\r\n> > > > > Alexander, does the proposed patch fix the problem you are facing?\r\n> > > > > Sawada-San, and others, do you see any better way to fix it than\r\n> > > > > what has been proposed?\r\n> > > >\r\n> > > > I'm concerned that the idea of relying on IsNormalProcessingMode()\r\n> > > > might not be robust since if we 
change the meaning of\r\n> > > > IsNormalProcessingMode() some day it would silently break again.\r\n> > > > So I prefer using something like InitializingApplyWorker, or\r\n> > > > another idea would be to do cleanup work (e.g., fileset deletion\r\n> > > > and lock release) in a separate callback that is registered after\r\n> > > > connecting to the database.\r\n> > >\r\n> > > Thanks for the review. I agree that it’s better to use a new variable here.\r\n> > > Attach the patch for the same.\r\n> > >\r\n> >\r\n> > + *\r\n> > + * However, if the worker is being initialized, there is no need to\r\n> > + release\r\n> > + * locks.\r\n> > */\r\n> > - LockReleaseAll(DEFAULT_LOCKMETHOD, true);\r\n> > + if (!InitializingApplyWorker)\r\n> > + LockReleaseAll(DEFAULT_LOCKMETHOD, true);\r\n> >\r\n> > Can we slightly reword this comment as: \"The locks will be acquired\r\n> > once the worker is initialized.\"?\r\n> >\r\n> \r\n> After making this modification, I pushed your patch. Thanks!\r\n\r\nThanks for pushing.\r\n\r\nAttach another patch to fix the problem that pa_shutdown will access invalid\r\nMyLogicalRepWorker. I personally want to avoid introducing new static variable,\r\nso I only reorder the callback registration in this version.\r\n\r\nWhen testing this, I notice a rare case that the leader is possible to receive\r\nthe worker termination message after the leader stops the parallel worker. This\r\nis unnecessary and have a risk that the leader would try to access the detached\r\nmemory queue. 
This is more likely to happen and sometimes cause the failure in\r\nregression tests after the registration reorder patch because the dsm is\r\ndetached earlier after applying the patch.\r\n\r\nSo, put the patch that detach the error queue before stopping worker as 0001\r\nand the registration reorder patch as 0002.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 5 May 2023 03:44:47 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 2, 2023 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > While investigating this issue, I've reviewed the code around\n> > callbacks and worker termination etc and I found a problem.\n> >\n> > A parallel apply worker calls the before_shmem_exit callbacks in the\n> > following order:\n> >\n> > 1. ShutdownPostgres()\n> > 2. logicalrep_worker_onexit()\n> > 3. 
pa_shutdown()\n> >\n> > Since the worker is detached during logicalrep_worker_onexit(),\n> > MyLogicalReplication->leader_pid is an invalid when we call\n> > pa_shutdown():\n> >\n> > static void\n> > pa_shutdown(int code, Datum arg)\n> > {\n> > Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n> > SendProcSignal(MyLogicalRepWorker->leader_pid,\n> > PROCSIG_PARALLEL_APPLY_MESSAGE,\n> > InvalidBackendId);\n> >\n> > Also, if the parallel apply worker fails shm_toc_lookup() during the\n> > initialization, it raises an error (because of noError = false) but\n> > ends up a SEGV as MyLogicalRepWorker is still NULL.\n> >\n> > I think that we should not use MyLogicalRepWorker->leader_pid in\n> > pa_shutdown() but instead store the leader's pid to a static variable\n> > before registering pa_shutdown() callback.\n> >\n>\n> Why not simply move the registration of pa_shutdown() to someplace\n> after logicalrep_worker_attach()?\n\nIf we do that, the worker won't call dsm_detach() if it raises an\nERROR in logicalrep_worker_attach(), is that okay? 
It seems that it's\nno practically problem since we call dsm_backend_shutdown() in\nshmem_exit(), but if so why do we need to call it in pa_shutdown()?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 12:07:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Monday, May 8, 2023 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n\r\nHi,\r\n\r\n> \r\n> On Tue, May 2, 2023 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > While investigating this issue, I've reviewed the code around\r\n> > > callbacks and worker termination etc and I found a problem.\r\n> > >\r\n> > > A parallel apply worker calls the before_shmem_exit callbacks in the\r\n> > > following order:\r\n> > >\r\n> > > 1. ShutdownPostgres()\r\n> > > 2. logicalrep_worker_onexit()\r\n> > > 3. 
pa_shutdown()\r\n> > >\r\n> > > Since the worker is detached during logicalrep_worker_onexit(),\r\n> > > MyLogicalReplication->leader_pid is an invalid when we call\r\n> > > pa_shutdown():\r\n> > >\r\n> > > static void\r\n> > > pa_shutdown(int code, Datum arg)\r\n> > > {\r\n> > > Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\r\n> > > SendProcSignal(MyLogicalRepWorker->leader_pid,\r\n> > > PROCSIG_PARALLEL_APPLY_MESSAGE,\r\n> > > InvalidBackendId);\r\n> > >\r\n> > > Also, if the parallel apply worker fails shm_toc_lookup() during the\r\n> > > initialization, it raises an error (because of noError = false) but\r\n> > > ends up a SEGV as MyLogicalRepWorker is still NULL.\r\n> > >\r\n> > > I think that we should not use MyLogicalRepWorker->leader_pid in\r\n> > > pa_shutdown() but instead store the leader's pid to a static variable\r\n> > > before registering pa_shutdown() callback.\r\n> > >\r\n> >\r\n> > Why not simply move the registration of pa_shutdown() to someplace\r\n> > after logicalrep_worker_attach()?\r\n> \r\n> If we do that, the worker won't call dsm_detach() if it raises an\r\n> ERROR in logicalrep_worker_attach(), is that okay? It seems that it's\r\n> no practically problem since we call dsm_backend_shutdown() in\r\n> shmem_exit(), but if so why do we need to call it in pa_shutdown()?\r\n\r\nI think the dsm_detach in pa_shutdown was intended to fire on_dsm_detach\r\ncallbacks to give callback a chance to report stat before the stat system is\r\nshutdown, following what we do in ParallelWorkerShutdown() (e.g.\r\nsharedfileset.c callbacks cause fd.c to do ReportTemporaryFileUsage(), so we\r\nneed to fire that earlier).\r\n\r\nBut for parallel apply, we currently only have one on_dsm_detach\r\ncallback(shm_mq_detach_callback) which doesn't report extra stats. 
So the\r\ndsm_detach in pa_shutdown is only used to make it a bit future-proof in case\r\nwe add some other on_dsm_detach callbacks in the future which need to report\r\nstats.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n", "msg_date": "Mon, 8 May 2023 03:52:44 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 8, 2023 at 12:52 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, May 8, 2023 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com>\n>\n> Hi,\n>\n> >\n> > On Tue, May 2, 2023 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > >\n> > > On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > While investigating this issue, I've reviewed the code around\n> > > > callbacks and worker termination etc and I found a problem.\n> > > >\n> > > > A parallel apply worker calls the before_shmem_exit callbacks in the\n> > > > following order:\n> > > >\n> > > > 1. ShutdownPostgres()\n> > > > 2. logicalrep_worker_onexit()\n> > > > 3. 
pa_shutdown()\n> > > >\n> > > > Since the worker is detached during logicalrep_worker_onexit(),\n> > > > MyLogicalReplication->leader_pid is an invalid when we call\n> > > > pa_shutdown():\n> > > >\n> > > > static void\n> > > > pa_shutdown(int code, Datum arg)\n> > > > {\n> > > > Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n> > > > SendProcSignal(MyLogicalRepWorker->leader_pid,\n> > > > PROCSIG_PARALLEL_APPLY_MESSAGE,\n> > > > InvalidBackendId);\n> > > >\n> > > > Also, if the parallel apply worker fails shm_toc_lookup() during the\n> > > > initialization, it raises an error (because of noError = false) but\n> > > > ends up a SEGV as MyLogicalRepWorker is still NULL.\n> > > >\n> > > > I think that we should not use MyLogicalRepWorker->leader_pid in\n> > > > pa_shutdown() but instead store the leader's pid to a static variable\n> > > > before registering pa_shutdown() callback.\n> > > >\n> > >\n> > > Why not simply move the registration of pa_shutdown() to someplace\n> > > after logicalrep_worker_attach()?\n> >\n> > If we do that, the worker won't call dsm_detach() if it raises an\n> > ERROR in logicalrep_worker_attach(), is that okay? It seems that it's\n> > no practically problem since we call dsm_backend_shutdown() in\n> > shmem_exit(), but if so why do we need to call it in pa_shutdown()?\n>\n> I think the dsm_detach in pa_shutdown was intended to fire on_dsm_detach\n> callbacks to give callback a chance to report stat before the stat system is\n> shutdown, following what we do in ParallelWorkerShutdown() (e.g.\n> sharedfileset.c callbacks cause fd.c to do ReportTemporaryFileUsage(), so we\n> need to fire that earlier).\n>\n> But for parallel apply, we currently only have one on_dsm_detach\n> callback(shm_mq_detach_callback) which doesn't report extra stats. So the\n> dsm_detach in pa_shutdown is only used to make it a bit future-proof in case\n> we add some other on_dsm_detach callbacks in the future which need to report\n> stats.\n\nMake sense . 
Given that it's possible that we add other callbacks that\nreport stats in the future, I think it's better not to move the\nposition to register pa_shutdown() callback.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 May 2023 14:38:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 8, 2023 at 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 8, 2023 at 12:52 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Monday, May 8, 2023 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> >\n> > Hi,\n> >\n> > >\n> > > On Tue, May 2, 2023 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > wrote:\n> > > >\n> > > > On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada\n> > > <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > While investigating this issue, I've reviewed the code around\n> > > > > callbacks and worker termination etc and I found a problem.\n> > > > >\n> > > > > A parallel apply worker calls the before_shmem_exit callbacks in the\n> > > > > following order:\n> > > > >\n> > > > > 1. ShutdownPostgres()\n> > > > > 2. logicalrep_worker_onexit()\n> > > > > 3. 
pa_shutdown()\n> > > > >\n> > > > > Since the worker is detached during logicalrep_worker_onexit(),\n> > > > > MyLogicalReplication->leader_pid is an invalid when we call\n> > > > > pa_shutdown():\n> > > > >\n> > > > > static void\n> > > > > pa_shutdown(int code, Datum arg)\n> > > > > {\n> > > > > Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n> > > > > SendProcSignal(MyLogicalRepWorker->leader_pid,\n> > > > > PROCSIG_PARALLEL_APPLY_MESSAGE,\n> > > > > InvalidBackendId);\n> > > > >\n> > > > > Also, if the parallel apply worker fails shm_toc_lookup() during the\n> > > > > initialization, it raises an error (because of noError = false) but\n> > > > > ends up a SEGV as MyLogicalRepWorker is still NULL.\n> > > > >\n> > > > > I think that we should not use MyLogicalRepWorker->leader_pid in\n> > > > > pa_shutdown() but instead store the leader's pid to a static variable\n> > > > > before registering pa_shutdown() callback.\n> > > > >\n> > > >\n> > > > Why not simply move the registration of pa_shutdown() to someplace\n> > > > after logicalrep_worker_attach()?\n> > >\n> > > If we do that, the worker won't call dsm_detach() if it raises an\n> > > ERROR in logicalrep_worker_attach(), is that okay? It seems that it's\n> > > no practically problem since we call dsm_backend_shutdown() in\n> > > shmem_exit(), but if so why do we need to call it in pa_shutdown()?\n> >\n> > I think the dsm_detach in pa_shutdown was intended to fire on_dsm_detach\n> > callbacks to give callback a chance to report stat before the stat system is\n> > shutdown, following what we do in ParallelWorkerShutdown() (e.g.\n> > sharedfileset.c callbacks cause fd.c to do ReportTemporaryFileUsage(), so we\n> > need to fire that earlier).\n> >\n> > But for parallel apply, we currently only have one on_dsm_detach\n> > callback(shm_mq_detach_callback) which doesn't report extra stats. 
So the\n> > dsm_detach in pa_shutdown is only used to make it a bit future-proof in case\n> > we add some other on_dsm_detach callbacks in the future which need to report\n> > stats.\n>\n> Make sense . Given that it's possible that we add other callbacks that\n> report stats in the future, I think it's better not to move the\n> position to register pa_shutdown() callback.\n>\n\nHmm, what kind of stats do we expect to be collected before we\nregister pa_shutdown? I think if required we can register such a\ncallback after pa_shutdown. I feel without reordering the callbacks,\nthe fix would be a bit complicated as explained in my previous email,\nso I don't think it is worth complicating this code unless really\nrequired.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 May 2023 12:03:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Fri, May 5, 2023 at 9:14 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, May 3, 2023 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Attach another patch to fix the problem that pa_shutdown will access invalid\n> MyLogicalRepWorker. I personally want to avoid introducing new static variable,\n> so I only reorder the callback registration in this version.\n>\n> When testing this, I notice a rare case that the leader is possible to receive\n> the worker termination message after the leader stops the parallel worker. This\n> is unnecessary and have a risk that the leader would try to access the detached\n> memory queue. 
This is more likely to happen and sometimes cause the failure in\n> regression tests after the registration reorder patch because the dsm is\n> detached earlier after applying the patch.\n>\n\nI think it is only possible for the leader apply can worker to try to\nreceive the error message from an error queue after your 0002 patch.\nBecause another place already detached from the queue before stopping\nthe parallel apply workers. So, I combined both the patches and\nchanged a few comments and a commit message. Let me know what you\nthink of the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 8 May 2023 16:39:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 8, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 8, 2023 at 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 8, 2023 at 12:52 PM Zhijie Hou (Fujitsu)\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Monday, May 8, 2023 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> > >\n> > > Hi,\n> > >\n> > > >\n> > > > On Tue, May 2, 2023 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > > >\n> > > > > On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada\n> > > > <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > While investigating this issue, I've reviewed the code around\n> > > > > > callbacks and worker termination etc and I found a problem.\n> > > > > >\n> > > > > > A parallel apply worker calls the before_shmem_exit callbacks in the\n> > > > > > following order:\n> > > > > >\n> > > > > > 1. ShutdownPostgres()\n> > > > > > 2. logicalrep_worker_onexit()\n> > > > > > 3. 
pa_shutdown()\n> > > > > >\n> > > > > > Since the worker is detached during logicalrep_worker_onexit(),\n> > > > > > MyLogicalReplication->leader_pid is an invalid when we call\n> > > > > > pa_shutdown():\n> > > > > >\n> > > > > > static void\n> > > > > > pa_shutdown(int code, Datum arg)\n> > > > > > {\n> > > > > > Assert(MyLogicalRepWorker->leader_pid != InvalidPid);\n> > > > > > SendProcSignal(MyLogicalRepWorker->leader_pid,\n> > > > > > PROCSIG_PARALLEL_APPLY_MESSAGE,\n> > > > > > InvalidBackendId);\n> > > > > >\n> > > > > > Also, if the parallel apply worker fails shm_toc_lookup() during the\n> > > > > > initialization, it raises an error (because of noError = false) but\n> > > > > > ends up a SEGV as MyLogicalRepWorker is still NULL.\n> > > > > >\n> > > > > > I think that we should not use MyLogicalRepWorker->leader_pid in\n> > > > > > pa_shutdown() but instead store the leader's pid to a static variable\n> > > > > > before registering pa_shutdown() callback.\n> > > > > >\n> > > > >\n> > > > > Why not simply move the registration of pa_shutdown() to someplace\n> > > > > after logicalrep_worker_attach()?\n> > > >\n> > > > If we do that, the worker won't call dsm_detach() if it raises an\n> > > > ERROR in logicalrep_worker_attach(), is that okay? It seems that it's\n> > > > no practically problem since we call dsm_backend_shutdown() in\n> > > > shmem_exit(), but if so why do we need to call it in pa_shutdown()?\n> > >\n> > > I think the dsm_detach in pa_shutdown was intended to fire on_dsm_detach\n> > > callbacks to give callback a chance to report stat before the stat system is\n> > > shutdown, following what we do in ParallelWorkerShutdown() (e.g.\n> > > sharedfileset.c callbacks cause fd.c to do ReportTemporaryFileUsage(), so we\n> > > need to fire that earlier).\n> > >\n> > > But for parallel apply, we currently only have one on_dsm_detach\n> > > callback(shm_mq_detach_callback) which doesn't report extra stats. 
So the\n> > > dsm_detach in pa_shutdown is only used to make it a bit future-proof in case\n> > > we add some other on_dsm_detach callbacks in the future which need to report\n> > > stats.\n> >\n> > Make sense . Given that it's possible that we add other callbacks that\n> > report stats in the future, I think it's better not to move the\n> > position to register pa_shutdown() callback.\n> >\n>\n> Hmm, what kind of stats do we expect to be collected before we\n> register pa_shutdown? I think if required we can register such a\n> callback after pa_shutdown. I feel without reordering the callbacks,\n> the fix would be a bit complicated as explained in my previous email,\n> so I don't think it is worth complicating this code unless really\n> required.\n\nFair point. I agree that the issue can be resolved by carefully\nordering the callback registration.\n\nAnother thing I'm concerned about is that since both the leader worker\nand parallel worker detach DSM before logicalrep_worker_onexit(),\ncleaning up work that touches DSM cannot be done in\nlogicalrep_worker_onexit(). If we need to do something in the future,\nwe would need to have another callback called before detaching DSM.\nBut I'm fine as it's not a problem for now.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 May 2023 11:19:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Mon, May 8, 2023 at 8:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 5, 2023 at 9:14 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, May 3, 2023 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > Attach another patch to fix the problem that pa_shutdown will access invalid\n> > MyLogicalRepWorker. 
I personally want to avoid introducing new static variable,\n> > so I only reorder the callback registration in this version.\n> >\n> > When testing this, I notice a rare case that the leader is possible to receive\n> > the worker termination message after the leader stops the parallel worker. This\n> > is unnecessary and have a risk that the leader would try to access the detached\n> > memory queue. This is more likely to happen and sometimes cause the failure in\n> > regression tests after the registration reorder patch because the dsm is\n> > detached earlier after applying the patch.\n> >\n>\n> I think it is only possible for the leader apply can worker to try to\n> receive the error message from an error queue after your 0002 patch.\n> Because another place already detached from the queue before stopping\n> the parallel apply workers. So, I combined both the patches and\n> changed a few comments and a commit message. Let me know what you\n> think of the attached.\n\nI have one comment on the detaching error queue part:\n\n+ /*\n+ * Detach from the error_mq_handle for the parallel apply worker before\n+ * stopping it. This prevents the leader apply worker from trying to\n+ * receive the message from the error queue that might already\nbe detached\n+ * by the parallel apply worker.\n+ */\n+ shm_mq_detach(winfo->error_mq_handle);\n+ winfo->error_mq_handle = NULL;\n\nIn pa_detach_all_error_mq(), we try to detach error queues of all\nworkers in the pool. I think we should check if the queue is already\ndetached (i.e. is NULL) there. 
Otherwise, we will end up a SEGV if an\nerror happens after detaching the error queue and before removing the\nworker from the pool.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 May 2023 11:19:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" }, { "msg_contents": "On Tue, May 9, 2023 at 7:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 8, 2023 at 8:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I think it is only possible for the leader apply can worker to try to\n> > receive the error message from an error queue after your 0002 patch.\n> > Because another place already detached from the queue before stopping\n> > the parallel apply workers. So, I combined both the patches and\n> > changed a few comments and a commit message. Let me know what you\n> > think of the attached.\n>\n> I have one comment on the detaching error queue part:\n>\n> + /*\n> + * Detach from the error_mq_handle for the parallel apply worker before\n> + * stopping it. This prevents the leader apply worker from trying to\n> + * receive the message from the error queue that might already\n> be detached\n> + * by the parallel apply worker.\n> + */\n> + shm_mq_detach(winfo->error_mq_handle);\n> + winfo->error_mq_handle = NULL;\n>\n> In pa_detach_all_error_mq(), we try to detach error queues of all\n> workers in the pool. I think we should check if the queue is already\n> detached (i.e. is NULL) there. 
Otherwise, we will end up a SEGV if an\n> error happens after detaching the error queue and before removing the\n> worker from the pool.\n>\n\nAgreed, I have made this change, added the same check at one other\nplace for the sake of consistency, and pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 May 2023 15:31:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Perform streaming logical transactions by background workers and\n parallel apply" } ]
[ { "msg_contents": "Collegues!\n\nI've tried to build all supported versions of PostgresSQL in the Ubuntu\n22.04 which is soon to be released.\n\nAnd found out that versions 10-13 produce a lot of warnings about\ndeprecated OpenSSL functions.\n\nI've found discussion about this problem \n\n\nhttps://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se\n\nand commit 4d3db13621be64fbac2faf which fixes problem in the 14\nversion and above. (it seems to me that declaring wish to use outdated\nAPI is ugly workaround, but anyway it works)\n\nBut this commit seems not to be backpatched to versions 13-10.\nIn 2020 it probably haven't be a problem, as OpenSSL 3.0.0 was in\nalpha stage, but this month first mainline Linux distribution which uses\nopenssl 3.0.x is going to be released.\n\nIt seems that we need to backpatch this commit into all supported\nversions of PostgreSQL, because there are more and more distributions\nswitched to OpenSSL 3.0.x. Apart from Ubuntu I can see at least RH 9\nbeta and experimental packages for Debian which probably would be ready\nin time for Debian 12.\n\n-- \n Victor Wagner <vitus@wagner.pp.ru>\n\n\n", "msg_date": "Wed, 6 Apr 2022 12:55:14 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "OpenSSL deprectation warnings in Postgres 10-13" }, { "msg_contents": "> On 6 Apr 2022, at 11:55, Victor Wagner <vitus@wagner.pp.ru> wrote:\n\n> I've tried to build all supported versions of PostgresSQL in the Ubuntu\n> 22.04 which is soon to be released.\n> \n> And found out that versions 10-13 produce a lot of warnings about\n> deprecated OpenSSL functions.\n\nThanks for testing. 
Does it cause any issues other than compiler warnings?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 12:53:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: OpenSSL deprectation warnings in Postgres 10-13" }, { "msg_contents": "On 06.04.22 11:55, Victor Wagner wrote:\n> And found out that versions 10-13 produce a lot of warnings about\n> deprecated OpenSSL functions.\n> \n> I've found discussion about this problem\n> \n> https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se\n> \n> and commit 4d3db13621be64fbac2faf which fixes problem in the 14\n> version and above. (it seems to me that declaring wish to use outdated\n> API is ugly workaround, but anyway it works)\n> \n> But this commit seems not to be backpatched to versions 13-10.\n\nAs was discussed in that thread, we don't actually have precise \nknowledge of what OpenSSL versions we support in versions before PG13. \nSo backpatching this could alter or break things that worked before.\n\nIt seems easier to just disable deprecation warnings if you don't want \nto see them.\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 15:01:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL deprectation warnings in Postgres 10-13" }, { "msg_contents": "On Wed, 6 Apr 2022 15:01:42 +0200\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 06.04.22 11:55, Victor Wagner wrote:\n> > And found out that versions 10-13 produce a lot of warnings about\n> > deprecated OpenSSL functions.\n> > \n> > I've found discussion about this problem\n> > \n> > https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se\n> > \n> > and commit 4d3db13621be64fbac2faf which fixes problem in the 14\n> > version and above. 
(it seems to me that declaring wish to use\n> > outdated API is ugly workaround, but anyway it works)\n> > \n> > But this commit seems not to be backpatched to versions 13-10. \n> \n> As was discussed in that thread, we don't actually have precise \n> knowledge of what OpenSSL versions we support in versions before\n> PG13. So backpatching this could alter or break things that worked\n> before.\n> \n> It seems easier to just disable deprecation warnings if you don't\n> want to see them.\n> \n\nAs far as my understanding goes, \"just disable deprication warning\" it\nis what commit 4d3db13621be64fbac2faf does.\n\nReal fix would be rewrite corresponding parts of code so they wouldn't\nuse deprecated API. (as it was done with sha2 code in PostgreSQL 14.\nwhich doesn't use openssl version of sha2 at all).\n\nIt is quite simple to rewrite digest code to use generic EVP API, but a\nbit more complicated with DH parameters, because if EVP API appeared in\n0.9.x series and we can rely on it in any version of OpenSSL,\nnon-deprecated replacement for PEM_read_DHparams is 3.0 only. So we\nhave to keep 1.1.x compatible version for a while until all major\ndistribution would change to 3.x. \n\nReally I think that PostgreSQL does something wrong with TLS if it need\nto use deprecated functions in be-secure-openssl.c. 
But it might be\nthat just OpenSSL team was too slow to catch up with new things in the\ntransport security world, so applications (including PostgreSQL) have to\nwork around it and use too low-level functions, which are now\ndeprecated.\n\n-- \n Victor Wagner <vitus@wagner.pp.ru>\n\n\n", "msg_date": "Wed, 6 Apr 2022 17:02:07 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: OpenSSL deprectation warnings in Postgres 10-13" }, { "msg_contents": ">> It seems easier to just disable deprecation warnings if you don't\n>> want to see them.\n> \n> As far as my understanding goes, \"just disable deprication warning\" it\n> is what commit 4d3db13621be64fbac2faf does.\n\nBut as stated above, there is no set of supported versions defined so there is no\ndefinitive value we can set for OPENSSL_API_COMPAT in the backbranches.\n\n> Real fix would be rewrite corresponding parts of code so they wouldn't\n> use deprecated API. (as it was done with sha2 code in PostgreSQL 14.\n> which doesn't use openssl version of sha2 at all).\n\nThat wouldn't help for 10-13 since that's a change unlikely to be back ported\ninto a stable release.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 13:38:27 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: OpenSSL deprectation warnings in Postgres 10-13" }, { "msg_contents": "On 06.04.22 16:02, Victor Wagner wrote:\n>> It seems easier to just disable deprecation warnings if you don't\n>> want to see them.\n>>\n> As far as my understanding goes, \"just disable deprication warning\" it\n> is what commit 4d3db13621be64fbac2faf does.\n\nI meant compile with -Wno-deprecated.\n\n\n", "msg_date": "Thu, 7 Apr 2022 13:56:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL deprectation warnings in Postgres 10-13" } ]
[ { "msg_contents": "Hi all,\nA few months ago a group of researchers published a paper about LZ77\nvulnerability[1]. And it also affects PGLZ. From my point of view, it could\nbe a really dangerous issue for some kind of application. If I understand\nit correctly there is a possibility of leaking approx. 24B secret data per\nhour(but it depends on HW configuration).\n\nI understand that there is no simple and easy solution. But I would like\nto know Your opinion on this. Or if you have any plan on how to deal with\nthis?\n\nThanks\n\n -Filip-\n\n[1] https://arxiv.org/abs/2111.08404\n\nHi all,A few months ago a group of researchers published a paper about LZ77 vulnerability[1]. And it also affects PGLZ. From my point of view, it could be a really dangerous issue for some kind of application. If I understand it correctly there is a possibility of leaking approx. 24B secret data per hour(but it depends on HW configuration). I understand that there is no simple and easy solution.  But I would like to know Your opinion on this. Or if you have any plan on how to deal with this?Thanks    -Filip-[1] https://arxiv.org/abs/2111.08404", "msg_date": "Wed, 6 Apr 2022 13:17:52 +0200", "msg_from": "Filip Janus <fjanus@redhat.com>", "msg_from_op": true, "msg_subject": "Practical Timing Side Channel Attacks on Memory Compression" }, { "msg_contents": "On Wed, Apr 6, 2022 at 7:18 AM Filip Janus <fjanus@redhat.com> wrote:\n> A few months ago a group of researchers published a paper about LZ77 vulnerability[1]. And it also affects PGLZ. From my point of view, it could be a really dangerous issue for some kind of application. If I understand it correctly there is a possibility of leaking approx. 24B secret data per hour(but it depends on HW configuration).\n>\n> I understand that there is no simple and easy solution. But I would like to know Your opinion on this. Or if you have any plan on how to deal with this?\n\nI hadn't heard of this before. 
It seems to be a real vulnerability in\nPGLZ. Fortunately, the attack relies on the presence of conditions\nthat may not always be present, and the rate of data leakage is pretty\nslow. Some threats of this kind are going to need to be addressed\noutside the database, perhaps. For example, you could rate-limit\nattempts to access your web application to make it harder to\naccumulate enough accesses to get any meaningful data leakage, and you\ncould store highly secret data in a different place than you store\ndata that the user has the ability to modify. It sounds like even just\nputting those things in separate jsonb columns rather than the same\none would block this particular attack. A user could also choose to\ndisable compression for a certain column entirely if they're worried\nabout this kind of thing.\n\nHowever, there are new attacks all the time, and it's going to be\nreally hard to block them all. Variable latency is extremely difficult\nto avoid, because pretty much every piece of code anyone writes is\ngoing to have if statements and loops that can iterate for different\nnumbers of iterations on different input, and then there are CPU\neffects like caching and branch prediction that add to the problem.\nThere are tons of attacks like this, and even if we could somehow, by\nmagic, secure PostgreSQL against this one completely, there will be\nlots more in the future. I think it's inevitable that there are going\nto be more and more papers demonstrating that a determined attacker\ncan leak information out of system A by very carefully measuring the\nlatency of operation X under different conditions, and there is no\nreal solution to that problem in general.\n\nOne thing that we could do internally to PostgreSQL is add more\npossible TOAST compression algorithms. In addition to PGLZ, which the\nattack in the paper targets, we now have LZ4 as an option. 
That's\nprobably vulnerable too, and probably zstd is as well, but if a state\nof the art algorithm emerges that somehow isn't vulnerable, we can\nconsider adding support for it. I don't think that as a project we\nreally ought to be in the business of trying to design our own\ncompression algorithms. PGLZ is a great job for something that was\nwritten by a PostgreSQL hacker, and many years ago at that, but not\nsurprisingly, people who spend all day thinking about compression are\nreally, really good at it. We should leave it up to them to figure out\nwhether there's something to be done here, and if the answer is yes,\nthen we can consider adopting whatever they come up with. Personally,\nI don't quite see how such a thing would be possible, but I'm not a\ncompression expert.\n\nOne last thought: I don't think it's right to suppose that every\nsecurity vulnerability is the result of some design flaw and every\nsecurity vulnerability must be patched. Imagine, for example, that\nsomeone posted a paper showing that they could break into your house.\nYour reaction to that paper would probably depend on how they did it.\nIf it turns out that the lock you have on your front door will unlock\nif you give it a hard bump with your fist, you'd probably want to\nreplace the lock with one that didn't have that design flaw. But if\nthe paper showed that they could break into your house by breaking one\nof the windows with a crowbar, would you replace all of those windows\nwith solid steel? Most people understand that a window is likely to be\nmade of a more breakable substance than whatever surrounds it, because\nit has an additional design constraint: it has to permit light to pass\nthrough it. We accept that as a trade-off when we choose to live in a\nhouse rather than a bunker. 
In the same way, without denying that\nthere's a real vulnerability here, I don't think that anyone who\nunderstands a little bit about how compression and decompression work\nwould expect decompression to take the same amount of time on every\ninput. Every compression algorithm pretty much has a mode where\nincompressible data is copied through byte for byte, and other modes\nthat take advantage of repeated byte sequences. It's only reasonable\nto suppose that those various code paths are not all going to run at\nthe same speed, and nobody would want them to. It would mean trying to\nslow down the fast paths through the code to the same speed as the\nslow paths, and because decompression speed is so important, that\nsounds like a thing that most people would not want.\n\nDo you have any suggestions on what we should do here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 09:58:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Practical Timing Side Channel Attacks on Memory Compression" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One last thought: I don't think it's right to suppose that every\n> security vulnerability is the result of some design flaw and every\n> security vulnerability must be patched.\n\nAs far as Postgres is concerned, I'm kind of unimpressed by timing-based\nattacks. There are enough layers between a hypothetical attacker and a\nparticular algorithm in the backend that it'd be really hard to get any\nreliable numbers. Length-based attacks are more realistic, since e.g.\nwe allow you to find out the compressed size of a data value. But as\nyou noted, those can be defeated by not storing sensitive data in the\nsame place as attacker-controlled data. Or turning off compression,\nbut that's largely throwing the baby out with the bathwater. 
In the\nend I think it's up to the DBA how concerned to be about this and\nwhat measures she should take to mitigate any risks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 10:14:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Practical Timing Side Channel Attacks on Memory Compression" }, { "msg_contents": "On Wed, Apr 6, 2022 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > One last thought: I don't think it's right to suppose that every\n> > security vulnerability is the result of some design flaw and every\n> > security vulnerability must be patched.\n>\n> As far as Postgres is concerned, I'm kind of unimpressed by timing-based\n> attacks. There are enough layers between a hypothetical attacker and a\n> particular algorithm in the backend that it'd be really hard to get any\n> reliable numbers. Length-based attacks are more realistic, since e.g.\n> we allow you to find out the compressed size of a data value. But as\n> you noted, those can be defeated by not storing sensitive data in the\n> same place as attacker-controlled data. Or turning off compression,\n> but that's largely throwing the baby out with the bathwater. In the\n> end I think it's up to the DBA how concerned to be about this and\n> what measures she should take to mitigate any risks.\n\nI think that the paper shows that, under the right set of\ncircumstances, a timing-based attack is possible here. How frequently\nthose circumstances will arise is debatable, but I don't find it hard\nto believe that there are production PostgreSQL clusters out there\nagainst which such an attack could be mounted. 
I think you're right\nwhen you say that length-based attacks might be practical, and perhaps\nsome of those might have more to do with PostgreSQL than than this\ndoes, since this is really mostly about the properties of compression\nalgorithms in general rather than PostgreSQL specifically. I also\ncompletely agree that it's really up to the DBA to decide how worried\nto be and what to do about it. I think that the fact that compression\ndoesn't always run at the same speed or produce an output of the same\nsize is pretty much intrinsic to the idea of a compression algorithm,\nand in a wide variety of circumstances that is absolutely fine and\nabsolutely desirable. When it permits this kind of attack, it's not,\nbut then don't use compression, or mitigate the problem some other\nway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 10:29:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Practical Timing Side Channel Attacks on Memory Compression" }, { "msg_contents": "On Wed, 6 Apr 2022 at 10:29, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I think that the paper shows that, under the right set of\n> circumstances, a timing-based attack is possible here.\n\nGenerally any argument that an attack is infeasible is risky and\nusually leads to security professionals showing that surprisingly\ndifficult attacks are entirely feasible.\n\nHowever I think the opposite argument is actually much more\ncompelling. There are *so many* timing attacks on a general purpose\ncomputing platform like Postgres that any defense to them can't\nconcentrate on just one code path and has to take a more comprehensive\napproach.\n\nSo for example a front-end can add some stochastic latency or perhaps\npadd latency to fixed amount.\n\nPerhaps postgres could offer some protection at that level by e.g.\noffering a function to do it. 
For most users I think they're better\noff implementing it at the application level but some people use\ndatabase stored functions as their application level so it might be\nuseful for them.\n\nSomething like pg_sleep_until_multiple_of('50ms') which looks at the\ntransaction start time and calculates the amount of time to sleep\nautomatically.\n\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:23:37 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Practical Timing Side Channel Attacks on Memory Compression" }, { "msg_contents": "Thanks for all of your opinions. I have almost the same feeling.\nThe best layer for mitigation should be probably a user application.\nThere can be arranged the correct data layout in the database, set up\naccess limit for the app, and many other mitigation mechanisms.\n\n\n -Filip-\n\n\nst 6. 4. 2022 v 17:24 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> On Wed, 6 Apr 2022 at 10:29, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I think that the paper shows that, under the right set of\n> > circumstances, a timing-based attack is possible here.\n>\n> Generally any argument that an attack is infeasible is risky and\n> usually leads to security professionals showing that surprisingly\n> difficult attacks are entirely feasible.\n>\n> However I think the opposite argument is actually much more\n> compelling. There are *so many* timing attacks on a general purpose\n> computing platform like Postgres that any defense to them can't\n> concentrate on just one code path and has to take a more comprehensive\n> approach.\n>\n> So for example a front-end can add some stochastic latency or perhaps\n> padd latency to fixed amount.\n>\n> Perhaps postgres could offer some protection at that level by e.g.\n> offering a function to do it. 
For most users I think they're better\n> off implementing it at the application level but some people use\n> database stored functions as their application level so it might be\n> useful for them.\n>\n> Something like pg_sleep_until_multiple_of('50ms') which looks at the\n> transaction start time and calculates the amount of time to sleep\n> automatically.\n>\n>\n> --\n> greg\n>\n>\n", "msg_date": "Mon, 11 Apr 2022 08:36:17 +0200", "msg_from": "Filip Janus <fjanus@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Practical Timing Side Channel Attacks on Memory Compression" } ]
[ { "msg_contents": "Identity sequences shouldn't be addressed directly by name in normal \nuse. Therefore, requiring them to be added directly to publications is \na faulty interface. I think they should be considered included in a \npublication automatically when their owning table is. See attached \npatch for a sketch. (It doesn't actually work quite yet, but it shows \nthe idea, I think.)\n\nIf we end up keeping the logical replication of sequences feature, I \nthink something like this should be added, too.", "msg_date": "Wed, 6 Apr 2022 15:07:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "logical replication of identity sequences" } ]
[ { "msg_contents": "Hello,\n\n From the documentation \n(https://www.postgresql.org/docs/current/sql-reindex.html#id-1.9.3.162.7), \nit sounds like REINDEX won't block read queries that don't need the \nindex. But it seems like the planner wants to take an ACCESS SHARE lock \non every indexes, regardless of the query, and so REINDEX actually \nblocks any queries but some prepared queries whose plan have been cached.\n\nI wonder if it is a bug, or if the documentation should be updated. What \ndo you think?\n\nHere is a simple demo (tested with postgres 10 and master):\n\nSession #1\n===========================================================\n\nsrcpg@postgres=# CREATE TABLE flights (id INT generated always as \nidentity, takeoff DATE);\nCREATE TABLE\n\nsrcpg@postgres=# INSERT INTO flights (takeoff) SELECT date '2022-03-01' \n+ interval '1 day' * i FROM generate_series(1,1000) i;\nINSERT 0 1000\n\nsrcpg@postgres=# CREATE INDEX ON flights(takeoff);\nCREATE INDEX\n\nsrcpg@postgres=# BEGIN;\nBEGIN\nsrcpg@postgres=# REINDEX INDEX flights_takeoff_idx ;\nREINDEX\n\n\n\nSession #2\n===========================================================\n\nsrcpg@postgres=# SELECT pg_backend_pid();\n pg_backend_pid\n----------------\n 4114695\n\nsrcpg@postgres=# EXPLAIN SELECT id FROM flights;\n--> it blocks\n\n\nSession #3\n===========================================================\n\nsrcpg@postgres=# SELECT locktype, relname, mode, granted FROM pg_locks \nLEFT JOIN pg_class ON (oid = relation) WHERE pid = 4114695;\n locktype | relname | mode | granted\n------------+---------------------+-----------------+---------\n virtualxid | ∅ | ExclusiveLock | t\n relation | flights_takeoff_idx | AccessShareLock | f\n relation | flights | AccessShareLock | t\n\n\n", "msg_date": "Wed, 6 Apr 2022 16:48:57 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "REINDEX blocks virtually any queries but some prepared queries." 
}, { "msg_contents": "On Wed, Apr 6, 2022 at 7:49 AM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n> From the documentation\n> (https://www.postgresql.org/docs/current/sql-reindex.html#id-1.9.3.162.7),\n> it sounds like REINDEX won't block read queries that don't need the\n> index. But it seems like the planner wants to take an ACCESS SHARE lock\n> on every indexes, regardless of the query, and so REINDEX actually\n> blocks any queries but some prepared queries whose plan have been cached.\n>\n> I wonder if it is a bug, or if the documentation should be updated. What\n> do you think?\n\nI've always thought that the docs for REINDEX, while technically\naccurate, are very misleading in practice.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Apr 2022 08:03:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "On 4/6/22 17:03, Peter Geoghegan wrote:\n> On Wed, Apr 6, 2022 at 7:49 AM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>> From the documentation\n>> (https://www.postgresql.org/docs/current/sql-reindex.html#id-1.9.3.162.7),\n>> it sounds like REINDEX won't block read queries that don't need the\n>> index. But it seems like the planner wants to take an ACCESS SHARE lock\n>> on every indexes, regardless of the query, and so REINDEX actually\n>> blocks any queries but some prepared queries whose plan have been cached.\n>>\n>> I wonder if it is a bug, or if the documentation should be updated. What\n>> do you think?\n> \n> I've always thought that the docs for REINDEX, while technically\n> accurate, are very misleading in practice.\n> \n\nMaybe something along this line? (patch attached)", "msg_date": "Thu, 7 Apr 2022 13:37:57 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." 
}, { "msg_contents": "On Thu, Apr 07, 2022 at 01:37:57PM +0200, Fr�d�ric Yhuel wrote:\n> Maybe something along this line? (patch attached)\n\nSome language fixes.\nI didn't verify the behavior, but +1 to document the practical consequences.\nI guess this is why someone invented REINDEX CONCURRENTLY.\n\n> From 4930bb8de182b78228d215bce1ab65263baabde7 Mon Sep 17 00:00:00 2001\n> From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Yhuel?= <frederic.yhuel@dalibo.com>\n> Date: Thu, 7 Apr 2022 13:30:59 +0200\n> Subject: [PATCH] Doc: Elaborate locking considerations for REINDEX\n> \n> ---\n> doc/src/sgml/ref/reindex.sgml | 6 +++++-\n> 1 file changed, 5 insertions(+), 1 deletion(-)\n> \n> diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml\n> index e6b25ee670..06c223d4a3 100644\n> --- a/doc/src/sgml/ref/reindex.sgml\n> +++ b/doc/src/sgml/ref/reindex.sgml\n> @@ -275,7 +275,11 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n> considerations are rather different. <command>REINDEX</command> locks out writes\n> but not reads of the index's parent table. It also takes an\n> <literal>ACCESS EXCLUSIVE</literal> lock on the specific index being processed,\n> - which will block reads that attempt to use that index. In contrast,\n> + which will block reads that attempt to use that index. In particular,\n> + the PostgreSQL query planner wants to take an <literal>ACCESS SHARE</literal>\n\ns/wants/tries/\n\n> + lock on every indexes of the table, regardless of the query, and so\n\nevery index\n\n> + <command>REINDEX</command> blocks virtually any queries but some prepared queries\n\nany query except for\n\n> + whose plan have been cached and which don't use this very index. In contrast,\n\nplan has\n\n> <command>DROP INDEX</command> momentarily takes an\n> <literal>ACCESS EXCLUSIVE</literal> lock on the parent table, blocking both\n> writes and reads. 
The subsequent <command>CREATE INDEX</command> locks out\n> -- \n> 2.30.2\n> \n\n\n", "msg_date": "Thu, 7 Apr 2022 07:40:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "On 4/7/22 14:40, Justin Pryzby wrote:\n> On Thu, Apr 07, 2022 at 01:37:57PM +0200, Frédéric Yhuel wrote:\n>> Maybe something along this line? (patch attached)\n> Some language fixes.\n\nThank you Justin! I applied your fixes in the v2 patch (attached).\n\n> I didn't verify the behavior, but +1 to document the practical consequences.\n> I guess this is why someone invented REINDEX CONCURRENTLY.\n> \n\nIndeed ;) That being said, REINDEX CONCURRENTLY could give you an \ninvalid index, so sometimes you may be tempted to go for a simpler \nREINDEX, especially if you believe that the SELECTs won't be blocked.", "msg_date": "Thu, 7 Apr 2022 15:43:57 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "Le jeu. 7 avr. 2022 à 15:44, Frédéric Yhuel <frederic.yhuel@dalibo.com> a\nécrit :\n\n>\n>\n> On 4/7/22 14:40, Justin Pryzby wrote:\n> > On Thu, Apr 07, 2022 at 01:37:57PM +0200, Frédéric Yhuel wrote:\n> >> Maybe something along this line? (patch attached)\n> > Some language fixes.\n>\n> Thank you Justin! I applied your fixes in the v2 patch (attached).\n>\n>\nv2 patch sounds good.\n\n\n> > I didn't verify the behavior, but +1 to document the practical\n> consequences.\n> > I guess this is why someone invented REINDEX CONCURRENTLY.\n> >\n>\n> Indeed ;) That being said, REINDEX CONCURRENTLY could give you an\n> invalid index, so sometimes you may be tempted to go for a simpler\n> REINDEX, especially if you believe that the SELECTs won't be blocked.\n\n\nAgreed.\n\n\n-- \nGuillaume.\n\nLe jeu. 7 avr. 
2022 à 15:44, Frédéric Yhuel <frederic.yhuel@dalibo.com> a écrit :\n\nOn 4/7/22 14:40, Justin Pryzby wrote:\n> On Thu, Apr 07, 2022 at 01:37:57PM +0200, Frédéric Yhuel wrote:\n>> Maybe something along this line? (patch attached)\n> Some language fixes.\n\nThank you Justin! I applied your fixes in the v2 patch (attached).\nv2 patch sounds good. \n> I didn't verify the behavior, but +1 to document the practical consequences.\n> I guess this is why someone invented REINDEX CONCURRENTLY.\n> \n\nIndeed ;) That being said, REINDEX CONCURRENTLY could give you an \ninvalid index, so sometimes you may be tempted to go for a simpler \nREINDEX, especially if you believe that the SELECTs won't be blocked.Agreed. -- Guillaume.", "msg_date": "Thu, 7 Apr 2022 17:29:36 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": false, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "On Thu, Apr 07, 2022 at 05:29:36PM +0200, Guillaume Lelarge a écrit :\n> Le jeu. 7 avr. 2022 à 15:44, Frédéric Yhuel <frederic.yhuel@dalibo.com> a écrit :\n>> On 4/7/22 14:40, Justin Pryzby wrote:\n>> Thank you Justin! I applied your fixes in the v2 patch (attached).\n>\n> v2 patch sounds good.\n\nThe location of the new sentence and its wording seem fine to me. So\nno objections from me to add what's suggested, as suggested. I'll\nwait for a couple of days first.\n\n>> Indeed ;) That being said, REINDEX CONCURRENTLY could give you an\n>> invalid index, so sometimes you may be tempted to go for a simpler\n>> REINDEX, especially if you believe that the SELECTs won't be blocked.\n> \n> Agreed.\n\nThere are many factors to take into account, one is more expensive\nthan the other in terms of resources and has downsides, downsides\ncompensated by the reduction in the lock requirements. 
There are\ncases where REINDEX is a must-have, as CONCURRENTLY does not support\ncatalog indexes, and these tend to be easily noticed when corruption\nspreads around.\n--\nMichael", "msg_date": "Fri, 8 Apr 2022 09:22:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "\n\nOn 4/8/22 02:22, Michael Paquier wrote:\n> On Thu, Apr 07, 2022 at 05:29:36PM +0200, Guillaume Lelarge a écrit :\n>> Le jeu. 7 avr. 2022 à 15:44, Frédéric Yhuel <frederic.yhuel@dalibo.com> a écrit :\n>>> On 4/7/22 14:40, Justin Pryzby wrote:\n>>> Thank you Justin! I applied your fixes in the v2 patch (attached).\n>>\n>> v2 patch sounds good.\n> \n> The location of the new sentence and its wording seem fine to me. So\n> no objections from me to add what's suggested, as suggested. I'll\n> wait for a couple of days first.\n> \n\nThank you Michael.\n\n>>> Indeed ;) That being said, REINDEX CONCURRENTLY could give you an\n>>> invalid index, so sometimes you may be tempted to go for a simpler\n>>> REINDEX, especially if you believe that the SELECTs won't be blocked.\n>>\n>> Agreed.\n> \n> There are many factors to take into account, one is more expensive\n> than the other in terms of resources and has downsides, downsides\n> compensated by the reduction in the lock requirements. There are\n> cases where REINDEX is a must-have, as CONCURRENTLY does not support\n> catalog indexes, and these tend to be easily noticed when corruption\n> spreads around.\n\nIndeed!\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Fri, 8 Apr 2022 16:23:48 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." 
}, { "msg_contents": "On Fri, Apr 08, 2022 at 04:23:48PM +0200, Frédéric Yhuel wrote:\n> Thank you Michael.\n\nAnd done as of 8ac700a.\n--\nMichael", "msg_date": "Mon, 11 Apr 2022 09:57:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." }, { "msg_contents": "\n\nOn 4/11/22 02:57, Michael Paquier wrote:\n> On Fri, Apr 08, 2022 at 04:23:48PM +0200, Frédéric Yhuel wrote:\n>> Thank you Michael.\n> \n> And done as of 8ac700a.\n> --\n\nThank you Micheal!\n\nFor reference purposes, we can see in the code of get_relation_info(), \nin plancat.c, that indeed every index of the table are opened, and \ntherefore locked, regardless of the query.\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Mon, 11 Apr 2022 09:06:07 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: REINDEX blocks virtually any queries but some prepared queries." } ]
[ { "msg_contents": "The commitfest ends with the feature freeze in less than 48 hours.\n\nI'm going to start moving patches that are Waiting On Author and\nhaven't received comment in more than a few days out of the\ncommitfset. If the patch has received a review or good feedback then\nI'll mark it Returned With Feedback.\n\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:57:17 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Last day of commitfest" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> The commitfest ends with the feature freeze in less than 48 hours.\n\nJust to clarify --- I think what has been agreed to is that we'll\nclose the CF as of the announced time (noon UTC Friday), but Robert\nand I will push in our two wide-ranging patches after that.\nYou might as well leave those two CF entries in this fest,\neven if you move everything else to the next one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 12:02:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Last day of commitfest" }, { "msg_contents": "I won't touch the Ready for Committer stuff until after the end of the\ncommitfest anyways. I did put those two in that state already.\n\nRight now I'm trying to get a bit ahead of the game by going through\nthe \"Waiting on Author\" patches. The documented process[*] is that\nthey get Returned with Feedback if they received at least one review\nand haven't had any discussion from the author in 5 days. I haven't\nbeen doing that but I'm saying I'm going to today.\n\nHowever I didn't get through the list today. I'll do it tomorrow.\n\nThe first batch of patches that will be Returned with Feedback are\nlisted below. 
If you're an author of these and you're still looking\nfor feedback then I would suggest sending an email describing the\nfeedback you're looking for and moving it forward to the next CF\nyourself before I mark them Returned with Feedback tomorrow.\n\n* Single item cache for Subtrans SLRU (Simon Riggs)\n Received feedback from Julien Rouhaud and Andrey Borodin\n\n* JIT counters in pg_stat_statements (Magnus Hagander)\n Feedback from Dmitry Dolgov and Julien Rouhaud\n\n* Consider incremental sort for fractional paths in\ngenerate_orderappend_paths (Tomas Vondra)\n Feedback from Arne Roland\n\n* Partial aggregates push down\n Feedback from Tomas Vondra, needs more feedback\n\n* Issue a log message when the backtrace logged is cut off\n Discussion died 2022-03-22\n Was this committed? Will it be?\n\n* Use -fvisibility=hidden for shared libraries (Andres Freund)\n Feedback from Tom Lane, Justin Pryzby\n\n* pg_dump new feature: exporting functions only. Bad or good idea ?\n(Lætitia Avrot)\n Feedback from all quarters\n\n\nThere are also these which don't seem to have received feedback. 
I\nguess we're just moving them forward but if the authors are happy with\nthe feedback they have they could mark them Returned with Feedback\nthemselves.\n\n* Add connection active, idle time to pg_stat_activity (Rafia Sabih,\nSergey Dudoladov)\n No significant feedback\n* WIN32 pg_import_system_collations (Juanjo Sntamaria Flecha)\n No significant feedback since v4 January 25\n* Teach WaitEventSets to grow automatically\n No significant feedback\n* postgres_fdw - use TABLESAMPLE when analyzing foreign tables\n No significant feedback\n* pg_stat_statements and \"IN\" conditions (Dmitry Dolgov)\n Feedback earlier but none since latest revision 2022-03-26\n* Parallelize correlated subqueries that execute within each worker\n(James Coleman)\n It doesn't look like the concerns the author raised were ever\naddressed by a reviewer\n\n[*]https://wiki.postgresql.org/wiki/CommitFest_Checklist\n\n\n", "msg_date": "Wed, 6 Apr 2022 21:32:31 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Last day of commitfest" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 06, 2022 at 09:32:31PM -0400, Greg Stark wrote:\n> I won't touch the Ready for Committer stuff until after the end of the\n> commitfest anyways. I did put those two in that state already.\n>\n> Right now I'm trying to get a bit ahead of the game by going through\n> the \"Waiting on Author\" patches. The documented process[*] is that\n> they get Returned with Feedback if they received at least one review\n> and haven't had any discussion from the author in 5 days. I haven't\n> been doing that but I'm saying I'm going to today.\n\nFWIW I think that this 5 days threshold before closing a patch with RwF is way\ntoo short. As far as I know we usually use something like 2/3 weeks.\n\n> However I didn't get through the list today. I'll do it tomorrow.\n> \n> The first batch of patches that will be Returned with Feedback are\n> listed below. 
If you're an author of these and you're still looking\n> for feedback then I would suggest sending an email describing the\n> feedback you're looking for and moving it forward to the next CF\n> yourself before I mark them Returned with Feedback tomorrow.\n> \n> * Single item cache for Subtrans SLRU (Simon Riggs)\n> Received feedback from Julien Rouhaud and Andrey Borodin\n\nI think that this patch is actually Ready For Committer given Andrey's review\nand Simon's answer, I only realize now that neither Andrey or Simon changed the\nstatus. The only remaining nitpicking thing is the variable cache name and\nover 80 chars comment, which is the kind of thing that a committer can tweak at\ncommit time anyway I think. I'm not sure if anyone is going to pick it up now\nbut it seems uncontroversial and useful enough to be moved to the next\ncommitfest.\n\n> * JIT counters in pg_stat_statements (Magnus Hagander)\n> Feedback from Dmitry Dolgov and Julien Rouhaud\n\nNote that the code looks good and no one disagreed with the proposed fields.\n\nThe only remaining problem is a copy/pasto in the docs so nothing critical. I\npersonally think that it would be very good to have so maybe Magnus will push\nit today (which would probably instantly break the other pg_stat_statements\npatches that are now Ready for Committer), and if not I think it should go to\nthe next commitfest instead.\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:58:52 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Last day of commitfest" }, { "msg_contents": "On Wed, 6 Apr 2022 at 21:59, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n>\n> FWIW I think that this 5 days threshold before closing a patch with RwF is way\n> too short. As far as I know we usually use something like 2/3 weeks.\n\nYeah, I haven't been enforcing a timeout like that during the\ncommitfest. 
But now that we're at the end...\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 7 Apr 2022 17:53:43 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Last day of commitfest" }, { "msg_contents": "On Thu, Apr 7, 2022 at 3:59 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n>\n> > * JIT counters in pg_stat_statements (Magnus Hagander)\n> >  Feedback from Dmitry Dolgov and Julien Rouhaud\n>\n> Note that the code looks good and no one disagreed with the proposed\n> fields.\n>\n> The only remaining problem is a copy/pasto in the docs so nothing\n> critical. I\n> personally think that it would be very good to have so maybe Magnus will\n> push\n> it today (which would probably instantly break the other pg_stat_statements\n> patches that are now Ready for Committer), and if not I think it should go\n> to\n> the next commitfest instead.\n>\n\nDang, I missed that one while looking at the other jit patch.\n\nIt did already conflict with the patch that Michael applied, but I'll try\nto clean that up quickly and apply it with this fix.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 8 Apr 2022 13:33:27 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Last day of commitfest" } ]
[ { "msg_contents": "Hi Andrew,\n\nIt appears to me that 2ef6f11b0c77ec323c688ddfd98ffabddb72c11d broke src/test/recovery.\n\nIt looks like the following fixes it. Care to review and push? Or perhaps just revert that commit?\n\ndiff --git a/src/test/regress/expected/jsonb_sqljson.out b/src/test/regress/expected/jsonb_sqljson.out\nindex 230cfd3bfd..ee16f2770a 100644\n--- a/src/test/regress/expected/jsonb_sqljson.out\n+++ b/src/test/regress/expected/jsonb_sqljson.out\n@@ -2088,7 +2088,7 @@ ERROR: only string constants supported in JSON_TABLE path specification\n LINE 1: SELECT * FROM JSON_TABLE(jsonb '{\"a\": 123}', '$' || '.' || '...\n ^\n -- Test parallel JSON_VALUE()\n-CREATE UNLOGGED TABLE test_parallel_jsonb_value AS\n+CREATE TABLE test_parallel_jsonb_value AS\n SELECT i::text::jsonb AS js\n FROM generate_series(1, 500000) i;\n -- Should be non-parallel due to subtransactions\ndiff --git a/src/test/regress/sql/jsonb_sqljson.sql b/src/test/regress/sql/jsonb_sqljson.sql\nindex 866c708a4d..2fc41a6df5 100644\n--- a/src/test/regress/sql/jsonb_sqljson.sql\n+++ b/src/test/regress/sql/jsonb_sqljson.sql\n@@ -948,7 +948,7 @@ SELECT JSON_QUERY(jsonb '{\"a\": 123}', 'error' || ' ' || 'error');\n SELECT * FROM JSON_TABLE(jsonb '{\"a\": 123}', '$' || '.' || 'a' COLUMNS (foo int));\n \n -- Test parallel JSON_VALUE()\n-CREATE UNLOGGED TABLE test_parallel_jsonb_value AS\n+CREATE TABLE test_parallel_jsonb_value AS\n SELECT i::text::jsonb AS js\n FROM generate_series(1, 500000) i;\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 10:42:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "buildfarm failures, src/test/recovery" }, { "msg_contents": "\nOn 4/6/22 13:42, Mark Dilger wrote:\n> Hi Andrew,\n>\n> It appears to me that 2ef6f11b0c77ec323c688ddfd98ffabddb72c11d broke src/test/recovery.\n>\n> It looks like the following fixes it. 
Care to review and push? Or perhaps just revert that commit?\n\n\nFixed here\nhttps://git.postgresql.org/pg/commitdiff/14d3f24fa8a21f8a7e66f1fc60253a1e11410bf3\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 16:58:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: buildfarm failures, src/test/recovery" } ]
[ { "msg_contents": "It's not difficult to get psql to show you the current value\nof a single GUC --- \"SHOW\" does that fine, and it has tab\ncompletion support for the GUC name. However, I very often\nfind myself resorting to the much more tedious\n\nselect * from pg_settings where name like '%foo%';\n\nwhen I want to see some related parameters, or when I'm a bit\nfuzzy on the exact name of the parameter. Not only is this\na lot of typing, but unless I'm willing to type even more to\navoid using \"*\", I'll get a wall of mostly unreadable text,\nbecause pg_settings is far too wide and cluttered with\nlow-grade information.\n\nIn the discussion about adding privileges for GUCs [1], there\nwas a proposal to add a new psql backslash command to show GUCs,\nwhich could reduce this problem to something like\n\n\\dcp *foo*\n\n(The version proposed there was not actually useful for this\npurpose because it was too narrowly focused on GUCs with\nprivileges, but that's easily fixed.)\n\nSo does anyone else like this idea?\n\nIn detail, I'd imagine this command showing the name, setting, unit,\nand vartype fields of pg_setting by default, and if you add \"+\"\nthen it should add the context field, as well as applicable\nprivileges when server version >= 15. However, there's plenty\nof room for bikeshedding that list of columns, not to mention\nthe precise name of the command. 
(I'm not that thrilled with\n\"\\dcp\" myself, as it looks like it might be a sub-form of \"\\dc\".)\nSo I thought I'd solicit comments before working on a patch\nnot after.\n\nI view this as being at least in part mop-up for commit a0ffa885e,\nespecially since a form of this was discussed in that thread.\nSo I don't think it'd be unreasonable to push into v15, even\nthough it's surely a new feature.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com\n\n\n", "msg_date": "Wed, 06 Apr 2022 13:48:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "How about a psql backslash command to show GUCs?" }, { "msg_contents": "hi\n\nst 6. 4. 2022 v 19:49 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> It's not difficult to get psql to show you the current value\n> of a single GUC --- \"SHOW\" does that fine, and it has tab\n> completion support for the GUC name. However, I very often\n> find myself resorting to the much more tedious\n>\n> select * from pg_settings where name like '%foo%';\n>\n> when I want to see some related parameters, or when I'm a bit\n> fuzzy on the exact name of the parameter. Not only is this\n> a lot of typing, but unless I'm willing to type even more to\n> avoid using \"*\", I'll get a wall of mostly unreadable text,\n> because pg_settings is far too wide and cluttered with\n> low-grade information.\n>\n> In the discussion about adding privileges for GUCs [1], there\n> was a proposal to add a new psql backslash command to show GUCs,\n> which could reduce this problem to something like\n>\n> \\dcp *foo*\n>\n\nIt can be a good idea. I am not sure about \\dcp. 
maybe \\show can be better?\n\nRegards\n\nPavel\n\n\n>\n> (The version proposed there was not actually useful for this\n> purpose because it was too narrowly focused on GUCs with\n> privileges, but that's easily fixed.)\n>\n> So does anyone else like this idea?\n>\n> In detail, I'd imagine this command showing the name, setting, unit,\n> and vartype fields of pg_setting by default, and if you add \"+\"\n> then it should add the context field, as well as applicable\n> privileges when server version >= 15. However, there's plenty\n> of room for bikeshedding that list of columns, not to mention\n> the precise name of the command. (I'm not that thrilled with\n> \"\\dcp\" myself, as it looks like it might be a sub-form of \"\\dc\".)\n> So I thought I'd solicit comments before working on a patch\n> not after.\n>\n> I view this as being at least in part mop-up for commit a0ffa885e,\n> especially since a form of this was discussed in that thread.\n> So I don't think it'd be unreasonable to push into v15, even\n> though it's surely a new feature.\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com\n>\n>\n>\n", "msg_date": "Wed, 6 Apr 2022 19:52:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "On 2022-Apr-06, Tom Lane wrote:\n\n> However, I very often find myself resorting to the much more tedious\n> \n> select * from pg_settings where name like '%foo%';\n> \n> when I want to see some related parameters, or when I'm a bit\n> fuzzy on the exact name of the parameter.\n\nBeen there many times, so +1 for the general idea.\n\n> In the discussion about adding privileges for GUCs [1], there\n> was a proposal to add a new psql backslash command to show GUCs,\n> which could reduce this problem to something like\n> \n> \\dcp *foo*\n\n+1. As for command name, I like \\show as proposed by Pavel.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 6 Apr 2022 19:58:02 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\n\n> On Apr 6, 2022, at 10:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> So does anyone else like this idea?\n\nPrivileges on targets other than parameters have a \\d command to show the privileges, as listed in doc/src/sgml/ddl.sgml. There isn't an obvious reason for omitting parameters from the list so covered.\n\nOne of the ideas that got punted in the recent commit was to make it possible to revoke SET on a USERSET guc. For example, it might be nice to REVOKE SET ON PARAMETER work_mem FROM PUBLIC. That can't be done now, but for some select parameters, we might implement that in the future by promoting them to SUSET with a default GRANT SET...TO PUBLIC. When connecting to databases of different postgres versions (albeit only those version 15 and above), it'd be nice to quickly see what context (USERSET vs. 
SUSET) is assigned to the parameter, and whether the GRANT has been revoked.\n\nSo yes, +1 from me.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 11:01:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/6/22 13:58, Alvaro Herrera wrote:\n> On 2022-Apr-06, Tom Lane wrote:\n> \n>> However, I very often find myself resorting to the much more tedious\n>> \n>> select * from pg_settings where name like '%foo%';\n>> \n>> when I want to see some related parameters, or when I'm a bit\n>> fuzzy on the exact name of the parameter.\n> \n> Been there many times, so +1 for the general idea.\n> \n>> In the discussion about adding privileges for GUCs [1], there\n>> was a proposal to add a new psql backslash command to show GUCs,\n>> which could reduce this problem to something like\n>> \n>> \\dcp *foo*\n> \n> +1. As for command name, I like \\show as proposed by Pavel.\n\n+1\n\nI'd love something for the same reasons.\n\nNo as sure about \\show though. That seems like it could be confused with \nshowing other stuff. Maybe consistent with \\sf[+] and \\sv[+] we could \nadd \\sc[+]?\n\nJoe\n\n\n", "msg_date": "Wed, 6 Apr 2022 14:26:36 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> No as sure about \\show though. That seems like it could be confused with \n> showing other stuff. Maybe consistent with \\sf[+] and \\sv[+] we could \n> add \\sc[+]?\n\nHmm ... my first reaction to that was \"no, it should be \\sp for\n'parameter'\". 
But with the neighboring \\sf for 'function', it'd\nbe easy to think that maybe 'p' means 'procedure'.\n\nI do agree that \\show might be a bad choice, the reason being that\nthe adjacent \\set command is for psql variables not GUCs; if we\nhad a \\show I'd sort of expect it to be a variant spelling of\n\"\\echo :variable\".\n\n\"\\sc\" isn't awful perhaps.\n\nAh, naming ... the hardest problem in computer science.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 14:40:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Wed, 6 Apr 2022 at 13:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> when I want to see some related parameters, or when I'm a bit\n> fuzzy on the exact name of the parameter. Not only is this\n> a lot of typing, but unless I'm willing to type even more to\n> avoid using \"*\", I'll get a wall of mostly unreadable text,\n> because pg_settings is far too wide and cluttered with\n> low-grade information.\n\nI may be suffering from some form of the Mandela Effect but I have a\ndistinct memory that once upon a time SHOW actually took patterns like\n\\d does. I keep trying it, forgetting that it doesn't actually work.\nI'm guessing my brain is generalizing and assuming SHOW fits into the\nsame pattern as \\d commands and just keeps doing it even after I've\nlearned repeatedly that it doesn't work.\n\nPersonally I would like to just make the world match the way my brain\nthinks it is and make this just work:\n\nSHOW enable_*\n\nI don't see any value in allowing * or ? in GUC names so I don't even\nsee a downside to this. It would be way easier to discover than\nanother \\ command\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 6 Apr 2022 14:45:31 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "On 4/6/22 2:40 PM, Tom Lane wrote:\r\n> Joe Conway <mail@joeconway.com> writes:\r\n>> No as sure about \\show though. That seems like it could be confused with\r\n>> showing other stuff. Maybe consistent with \\sf[+] and \\sv[+] we could\r\n>> add \\sc[+]?\r\n> \r\n> Hmm ... my first reaction to that was \"no, it should be \\sp for\r\n> 'parameter'\". But with the neighboring \\sf for 'function', it'd\r\n> be easy to think that maybe 'p' means 'procedure'.\r\n> \r\n> I do agree that \\show might be a bad choice, the reason being that\r\n> the adjacent \\set command is for psql variables not GUCs; if we\r\n> had a \\show I'd sort of expect it to be a variant spelling of\r\n> \"\\echo :variable\".\r\n> \r\n> \"\\sc\" isn't awful perhaps.\r\n> \r\n> Ah, naming ... the hardest problem in computer science.\r\n\r\n(but the easiest thing to have an opinion on ;)\r\n\r\n+1 on the feature proposal.\r\n\r\nI am a bit torn between \"\\dcp\" (or \\dsetting / \\dconfig? we don't \r\nnecessarily need for it to be super short) and \"\\sc\". Certainly with \r\npattern matching the interface for the \"\\d\" commands would fit that \r\npattern. \"\\sc\" would make sense for a thorough introspection of what is \r\nin the GUC. That said, we get that with SHOW today.\r\n\r\nSo I'm leaning towards something in the \"\\d\" family.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 6 Apr 2022 17:34:36 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\n\n> On Apr 6, 2022, at 2:34 PM, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> \n> \"\\sc\" would make sense\n\nI originally wrote the command as \\dcp (describe configuration parameter) because \\dp (describe parameter) wasn't available. The thing we're showing is a \"parameter\", not a \"config\". 
If we're going to use a single letter, I'd prefer /p/.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 6 Apr 2022 15:36:08 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I am a bit torn between \"\\dcp\" (or \\dsetting / \\dconfig? we don't \n> necessarily need for it to be super short) and \"\\sc\". Certainly with \n> pattern matching the interface for the \"\\d\" commands would fit that \n> pattern. \"\\sc\" would make sense for a thorough introspection of what is \n> in the GUC. That said, we get that with SHOW today.\n\n> So I'm leaning towards something in the \"\\d\" family.\n\nI agree that \\d-something makes the most sense from a functionality\nstandpoint. But I don't want to make the name too long, even if we\ndo have tab completion to help.\n\n\\dconf maybe?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 21:16:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Wed, Apr 6, 2022 at 6:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > I am a bit torn between \"\\dcp\" (or \\dsetting / \\dconfig? we don't\n> > necessarily need for it to be super short) and \"\\sc\". Certainly with\n> > pattern matching the interface for the \"\\d\" commands would fit that\n> > pattern. \"\\sc\" would make sense for a thorough introspection of what is\n> > in the GUC. That said, we get that with SHOW today.\n>\n> > So I'm leaning towards something in the \"\\d\" family.\n>\n> I agree that \\d-something makes the most sense from a functionality\n> standpoint. 
But I don't want to make the name too long, even if we\n> do have tab completion to help.\n>\n> \\dconf maybe?\n>\n>\nI don't have a strong preference, but just tossing it out there; maybe\nembrace the novelty of GUC?\n\n\\dguc\n\nDavid J.\n", "msg_date": "Wed, 6 Apr 2022 18:18:33 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> +1 for \\dconf\n\nHere's a draft patch using \\dconf. No tests or docs yet.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 06 Apr 2022 23:02:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "I also find myself querying pg_settings all too often. More typing\nthan I'd like.\n\nOn Thu, 7 Apr 2022 at 06:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I do agree that \\show might be a bad choice, the reason being that\n> the adjacent \\set command is for psql variables not GUCs; if we\n> had a \\show I'd sort of expect it to be a variant spelling of\n> \"\\echo :variable\".\n\nI also think \\show is not a great choice. I'd rather see us follow the\n\\d pattern for showing information about objects in the database.\n\n> \"\\sc\" isn't awful perhaps.\n\nI think \\dG is pretty good. G for GUC.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Apr 2022 15:25:06 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "On 06.04.2022 20:48, Tom Lane wrote:\n> However, I very often\n> find myself resorting to the much more tedious\n>\n> select * from pg_settings where name like '%foo%';\n\n> In the discussion about adding privileges for GUCs [1], there\n> was a proposal to add a new psql backslash command to show GUCs,\n> which could reduce this problem to something like\n>\n> \\dcp *foo*\n\n+1, great idea.\n\nRight now I use the psql :show variable in my .psqlrc for this purpose:\n\n=# \\echo :show\nSELECT name, current_setting(name) AS value, context FROM pg_settings\\g \n(format=wrapped columns=100) | grep\n\n=# :show autovacuum\n  autovacuum                             | \non                                    | sighup\n  autovacuum_analyze_scale_factor        | \n0.1                                   | sighup\n  autovacuum_analyze_threshold           | \n50                                    | sighup\n  autovacuum_freeze_max_age              | \n200000000                             | postmaster\n  autovacuum_max_workers                 | \n3                                     | postmaster\n  autovacuum_multixact_freeze_max_age    | \n400000000                             | postmaster\n  autovacuum_naptime                     | \n1min                                  | sighup\n  autovacuum_vacuum_cost_delay           | \n2ms                                   | sighup\n  autovacuum_vacuum_cost_limit           | \n-1                                    | sighup\n  autovacuum_vacuum_scale_factor         | \n0.2                                   | sighup\n  autovacuum_vacuum_threshold            | \n50                                    | sighup\n  autovacuum_work_mem                    | \n-1                                    | sighup\n  log_autovacuum_min_duration            | \n-1                                    | sighup\n\nAs for the name, I can't think of a better candidate. 
Any of the \npreviously suggested list of \\dconf, \\dguc, \\dG, \\dcp is fine.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 11:34:00 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/6/22 23:02, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> +1 for \\dconf\n> \n> Here's a draft patch using \\dconf. No tests or docs yet.\n\nWFM -- using some form of \\d<something> makes more sense than \n\\s<something>, and I can't think of anything better that \\dconf.\n\nI will say that I care about context far more often than unit or type \nthough, so from my point of view I would switch them around with respect \nto which is only shown with verbose.\n\nJoe\n\n\n", "msg_date": "Thu, 7 Apr 2022 08:36:28 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\nOn 4/6/22 23:25, David Rowley wrote:\n> I also find myself querying pg_settings all too often. More typing\n> than I'd like.\n>\n> On Thu, 7 Apr 2022 at 06:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I do agree that \\show might be a bad choice, the reason being that\n>> the adjacent \\set command is for psql variables not GUCs; if we\n>> had a \\show I'd sort of expect it to be a variant spelling of\n>> \"\\echo :variable\".\n> I also think \\show is not a great choice. I'd rather see us follow the\n> \\d pattern for showing information about objects in the database.\n>\n>> \"\\sc\" isn't awful perhaps.\n> I think \\dG is pretty good. G for GUC.\n>\n\n\n-1 on anything that is based on \"GUC\", an ancient and now largely\nirrelevant acronym. 
How many developers, let alone users, know what it\nstands for?\n\n\n\\dconf seems fine to me\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:21:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/7/22 8:36 AM, Joe Conway wrote:\r\n> On 4/6/22 23:02, Tom Lane wrote:\r\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>>> +1 for \\dconf\r\n>>\r\n>> Here's a draft patch using \\dconf.  No tests or docs yet.\r\n> \r\n> WFM -- using some form of \\d<something> makes more sense than \r\n> \\s<something>, and I can't think of anything better that \\dconf.\r\n> \r\n> I will say that I care about context far more often than unit or type \r\n> though, so from my point of view I would switch them around with respect \r\n> to which is only shown with verbose.\r\n\r\nI disagree somewhat -- I agree the context should be in the regular \r\nview, but unit and type are also important. If I had to choose to drop \r\none, I'd choose type as it could be inferred, but I would say better to \r\nkeep them all.\r\n\r\nThe downside is that by including context, the standard list appears to \r\npush past my 99px width terminal in non-enhanced view, but that may be OK.\r\n\r\n\r\nA couple of minor things:\r\n\r\n\t+\tappendPQExpBufferStr(&buf, \"ORDER BY 1;\");\r\n\r\nI don't know how much we do positional ordering in our queries, but it \r\nmay be better to explicitly order by \"s.name\". 
I doubt this column name \r\nis likely to change, and if for some reason someone shuffles the output \r\norder of \\dconf, it makes it less likely to break someone's view.\r\n\r\nI did not test this via an extension, but we do allow for mixed case in \r\ncustom GUCs:\r\n\r\npostgres=# SHOW jkatz.test;\r\n JKATZ.test\r\n------------\r\n abc\r\n\r\nI don't know if we want to throw a \"LOWER(s.name)\" on at least the \r\nordering, given we allow for \"SHOW\" itself to load these case-insensitively.\r\n\r\n\t+\tfprintf(output, _(\" \\\\dconf[+] [PATTERN] list configuration \r\nparameters\\n\"));\r\n\r\nMaybe to appeal to all crowds, we say \"list configuration parameters \r\n(GUCs)\"?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Thu, 7 Apr 2022 10:39:47 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/7/22 8:36 AM, Joe Conway wrote:\n>> I will say that I care about context far more often than unit or type \n>> though, so from my point of view I would switch them around with respect \n>> to which is only shown with verbose.\n\n> I disagree somewhat -- I agree the context should be in the regular \n> view, but unit and type are also important. If I had to choose to drop \n> one, I'd choose type as it could be inferred, but I would say better to \n> keep them all.\n\nGiven the new ability to grant privileges on GUCs, context alone is not\nsufficient to know when something can be set. 
So the context and the\nprivileges seem like they should go together, and that's why I have them\nboth under \"+\".\n\nI can see the argument for relegating type to the \"+\" format, in hopes of\nkeeping the default display narrow enough for ordinary terminal windows.\n\n> A couple of minor things:\n> \t+\tappendPQExpBufferStr(&buf, \"ORDER BY 1;\");\n> I don't know how much we do positional ordering in our queries, but it \n> may be better to explicitly order by \"s.name\".\n\n\"ORDER BY n\" seems to be the de facto standard in describe.c. Perhaps\nthere's an argument for changing it throughout that file, but I don't\nthink this one function should be out of step with the rest.\n\n> I don't know if we want to throw a \"LOWER(s.name)\" on at least the \n> ordering, given we allow for \"SHOW\" itself to load these case-insensitively.\n\nYeah, I went back and forth on that myself --- I was looking at the\nexample of DateStyle, and noticing that although you see it in mixed\ncase in the command's output, tab completion isn't happy unless you\nenter it in lower case (ie, date<TAB> works, Date<TAB> not so much).\nForcibly lowercasing the command output would fix that inconsistency,\nbut on the other hand it introduces an inconsistency with what the\npg_settings view shows. Not sure what's the least bad. We might be\nable to fix the tab completion behavior, if we don't mind complicating\ntab-complete.c even more; so probably that's the thing to look at\nbefore changing the output.\n\n> Maybe to appeal to all crowds, we say \"list configuration parameters \n> (GUCs)\"?\n\nI'm in the camp that says that GUC is not an acronym we wish to expose\nto end users.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 10:55:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "On Thu, Apr 7, 2022 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>\n>\n> Maybe to appeal to all crowds, we say \"list configuration parameters\n> > (GUCs)\"?\n>\n> I'm in the camp that says that GUC is not an acronym we wish to expose\n> to end users.\n>\n>\nI am too. In any case, either go all-in with GUC (i.e., \\dG or \\dguc) or\npretend it doesn't exist - an in-between position is unappealing.\n\nDavid J.", "msg_date": "Thu, 7 Apr 2022 08:10:14 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "st 6. 4. 2022 v 19:49 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n> It's not difficult to get psql to show you the current value\n> of a single GUC --- \"SHOW\" does that fine, and it has tab\n> completion support for the GUC name. However, I very often\n> find myself resorting to the much more tedious\n>\n> select * from pg_settings where name like '%foo%';\n>\n> when I want to see some related parameters, or when I'm a bit\n> fuzzy on the exact name of the parameter. 
Not only is this\n> a lot of typing, but unless I'm willing to type even more to\n> avoid using \"*\", I'll get a wall of mostly unreadable text,\n> because pg_settings is far too wide and cluttered with\n> low-grade information.\n>\n> In the discussion about adding privileges for GUCs [1], there\n> was a proposal to add a new psql backslash command to show GUCs,\n> which could reduce this problem to something like\n>\n> \\dcp *foo*\n>\n> (The version proposed there was not actually useful for this\n> purpose because it was too narrowly focused on GUCs with\n> privileges, but that's easily fixed.)\n>\n> So does anyone else like this idea?\n\nI like this idea. Also I'm interested in contributing this. Feel free\nto ping me if welcomed, I can try to prepare at least the initial\npatch. Currently it seems the discussion is related mostly to the\ncommand name, which can be changed at any time.\n\n> In detail, I'd imagine this command showing the name, setting, unit,\n> and vartype fields of pg_setting by default, and if you add \"+\"\n> then it should add the context field, as well as applicable\n> privileges when server version >= 15. However, there's plenty\n> of room for bikeshedding that list of columns, not to mention\n> the precise name of the command. 
(I'm not that thrilled with\n> \"\\dcp\" myself, as it looks like it might be a sub-form of \"\\dc\".)\n> So I thought I'd solicit comments before working on a patch\n> not after.\n>\n> I view this as being at least in part mop-up for commit a0ffa885e,\n> especially since a form of this was discussed in that thread.\n> So I don't think it'd be unreasonable to push into v15, even\n> though it's surely a new feature.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com\n>\n>\n\n\n", "msg_date": "Thu, 7 Apr 2022 17:15:24 +0200", "msg_from": "Josef Šimánek <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Wed, Apr 06, 2022 at 11:02:54PM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > +1 for \\dconf\n> \n> Here's a draft patch using \\dconf. No tests or docs yet.\n\nThe patch as written is a thin layer around pg_settings.\n\nSHOW and current_setting() translate to human units, which is particularly\nuseful for some settings, like those with units of 8k pages. 
\n\nIs it better to use that \"cooked\" version for display in the backslash command\ninstead of the raw view from pg_settings ?\n\nOtherwise, I see myself first using tab completion or, failing that,\nSELECT * FROM pg_settings WHERE name~'something', followed by SHOW, to\navoid messing up counting digits, multiplication or unit conversions.\n\n> +\tprintfPQExpBuffer(&buf,\n> +\t\t\t\t\t \"SELECT s.name AS \\\"%s\\\", s.setting AS \\\"%s\\\", \"\n> +\t\t\t\t\t \"s.unit AS \\\"%s\\\", s.vartype AS \\\"%s\\\"\",\n> +\t\t\t\t\t gettext_noop(\"Parameter\"),\n> +\t\t\t\t\t gettext_noop(\"Setting\"),\n> +\t\t\t\t\t gettext_noop(\"Unit\"),\n> +\t\t\t\t\t gettext_noop(\"Type\"));\n> +\n\n> +\tappendPQExpBufferStr(&buf, \"\\nFROM pg_catalog.pg_settings s\\n\");\n\n\n", "msg_date": "Thu, 7 Apr 2022 10:24:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/7/22 11:10 AM, David G. Johnston wrote:\r\n> On Thu, Apr 7, 2022 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us \r\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\r\n> \r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org\r\n> <mailto:jkatz@postgresql.org>> writes:\r\n> \r\n> > Maybe to appeal to all crowds, we say \"list configuration parameters\r\n> > (GUCs)\"?\r\n> \r\n> I'm in the camp that says that GUC is not an acronym we wish to expose\r\n> to end users.\r\n> \r\n> \r\n> I am too.\r\n\r\nWFM.\r\n\r\nJonathan", "msg_date": "Thu, 7 Apr 2022 11:39:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\n\n> On Apr 7, 2022, at 6:21 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \\dconf seems fine to me\n\nWe have too many synonyms for configuration parameters. \"config\", \"guc\", \"parameter\", and \"setting\" are already in use. 
I thought we agreed on the other thread that \"setting\" means the value, and \"parameter\" is the thing being set. It's true that \"config\" refers to parameters in the name of pg_catalog.set_config, which is a pretty strong precedent, but sadly \"config\" also refers to configuration files, the build configuration (as in the \"pg_config\" tool), text search configuration, etc.\n\nWhile grep'ing through doc/src/sgml, I see no instances of \"conf\" ever referring to configuration parameters. It only ever refers to configuration files. I'd prefer not adding it to the list of synonyms.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:19:33 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> SHOW and current_setting() translate to human units, which is particularly\n> useful for some settings, like those with units of 8k pages. \n> Is it better to use that \"cooked\" version for display in the backslash command\n> instead of the raw view from pg_settings ?\n\nOh, that's a good idea --- lets us drop the units column entirely.\n\nThe attached revision does that and moves the \"type\" column to\nsecondary status, as discussed upthread. I also added docs and\nsimple regression tests, and fixed two problems that were preventing\ncompletion of custom (qualified) GUC names (we need to use the\nVERBATIM option for those queries). 
There remains the issue that\ntab completion for GUC names ought to be case-insensitive, but\nthat's a pre-existing bug in tab-complete.c's other GUC name\ncompletions too; I'll tackle it later.\n\nAs for the name, \\dconf has a slight plurality in votes so far,\nso I'm sticking with that.\n\nI think this is ready to go unless someone has a significantly\nbetter idea.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 07 Apr 2022 12:22:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> We have too many synonyms for configuration parameters. \"config\", \"guc\", \"parameter\", and \"setting\" are already in use. I thought we agreed on the other thread that \"setting\" means the value, and \"parameter\" is the thing being set.\n\nRight, so the suggestion of \\dsetting seems a tad off-kilter.\n\n> It's true that \"config\" refers to parameters in the name of pg_catalog.set_config, which is a pretty strong precedent, but sadly \"config\" also refers to configuration files, the build configuration (as in the \"pg_config\" tool), text search configuration, etc.\n\nI'd also thought briefly about \\dpar or \\dparam, but I'm not sure that\nthat's much of an improvement. \\dconf is at least in line with the\ndocs' terminology of \"configuration parameter\". (Note that bare\n\"parameter\" has other meanings too, eg function parameter.) I wouldn't\nfight too hard if people want to lengthen it to \\dconfig for consistency\nwith set_config().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 12:29:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
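The "cooked" unit display adopted above — SHOW and current_setting() scaling raw 8kB-page values into human-readable units — works roughly like the following Python sketch. This is illustrative only, not the server's actual conversion code: the raw value is scaled up to the largest unit that divides it evenly.

```python
def humanize_8kb_pages(pages: int) -> str:
    # Scale a raw value counted in 8kB pages up to the largest unit
    # that divides it evenly, e.g. 16384 pages -> 131072 kB -> "128MB".
    kb = pages * 8
    for factor, unit in ((1024 * 1024, "GB"), (1024, "MB"), (1, "kB")):
        if kb % factor == 0 and kb >= factor:
            return f"{kb // factor}{unit}"
    return f"{kb}kB"

print(humanize_8kb_pages(16384))  # "128MB" (the shared_buffers default)
print(humanize_8kb_pages(512))    # "4MB"
```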
}, { "msg_contents": "On 4/7/22 12:22 PM, Tom Lane wrote:\r\n> Justin Pryzby <pryzby@telsasoft.com> writes:\r\n>> SHOW and current_setting() translate to human units, which is particularly\r\n>> useful for some settings, like those with units of 8k pages.\r\n>> Is it better to use that \"cooked\" version for display in the backslash command\r\n>> instead of the raw view from pg_settings ?\r\n> \r\n> Oh, that's a good idea --- lets us drop the units column entirely.\r\n\r\n+1\r\n\r\n> The attached revision does that and moves the \"type\" column to\r\n> secondary status, as discussed upthread. I also added docs and\r\n> simple regression tests, and fixed two problems that were preventing\r\n> completion of custom (qualified) GUC names (we need to use the\r\n> VERBATIM option for those queries). There remains the issue that\r\n> tab completion for GUC names ought to be case-insensitive, but\r\n> that's a pre-existing bug in tab-complete.c's other GUC name\r\n> completions too; I'll tackle it later.\r\n> \r\n> As for the name, \\dconf has a slight plurality in votes so far,\r\n> so I'm sticking with that.\r\n> \r\n> I think this is ready to go unless someone has a significantly\r\n> better idea.\r\n\r\nI ran the equivalent SQL locally and it LGTM. Docs read well to me. Code \r\nlooks as good as it can to me.\r\n\r\n+1\r\n\r\nJonathan", "msg_date": "Thu, 7 Apr 2022 12:29:35 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
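The [PATTERN] argument of the command under review follows psql's usual \d pattern rules. As a rough Python illustration — the real translation is done by psql's pattern machinery, which also handles quoting and schema qualification — a shell-style pattern such as *wal* becomes an anchored regex applied to each name:

```python
import re

def pattern_to_regex(pattern: str) -> str:
    # "*" matches any run of characters, "?" any single character,
    # everything else is taken literally (regex metacharacters escaped).
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append(".*")
        elif ch == "?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^(" + "".join(parts) + ")$"

rx = pattern_to_regex("*wal*")
names = ["max_wal_size", "work_mem", "wal_buffers"]  # hypothetical GUC names
matched = [n for n in names if re.match(rx, n)]
print(matched)  # ['max_wal_size', 'wal_buffers']
```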
}, { "msg_contents": "\n\n> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wouldn't\n> fight too hard if people want to lengthen it to \\dconfig for consistency\n> with set_config().\n\nI'd prefer \\dconfig, but if the majority on this list view that as pedantically forcing them to type more, I'm not going to kick up a fuss about \\dconf.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:34:47 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wouldn't\n>> fight too hard if people want to lengthen it to \\dconfig for consistency\n>> with set_config().\n\n> I'd prefer \\dconfig, but if the majority on this list view that as pedantically forcing them to type more, I'm not going to kick up a fuss about \\dconf.\n\nMaybe I'm atypical, but I'm probably going to use tab completion\neither way, so it's not really more keystrokes. The consistency\npoint is a good one that I'd not considered before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 12:37:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\n\n> On Apr 7, 2022, at 9:37 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Maybe I'm atypical, but I'm probably going to use tab completion\n> either way, so it's not really more keystrokes.\n\nSame here, because after tab-completing \\dcon\\t\\t into \\dconfig, I'm likely to also tab-complete to get the list of parameters. 
I frequently can't recall the exact spelling of them.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:42:36 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/7/22 12:42 PM, Mark Dilger wrote:\r\n> \r\n> \r\n>> On Apr 7, 2022, at 9:37 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>\r\n>> Maybe I'm atypical, but I'm probably going to use tab completion\r\n>> either way, so it's not really more keystrokes.\r\n> \r\n> Same here, because after tab-completing \\dcon\\t\\t into \\dconfig, I'm likely to also tab-complete to get the list of parameters. I frequently can't recall the exact spelling of them.\r\n\r\nThis seems like the only \"\\d\" command that would require tab completion, \r\ngiven all the others are far less descriptive (\\dt, \\dv, etc.) And at \r\nleast from my user perspective, if I ever need anything other than \\dt, \r\nI typically have to go to \\? to look it up.\r\n\r\nI'm generally in favor of consistency, though in case skewing towards \r\nwhat we expose to the user. If \"\\dconfig\" gives a bit more across the \r\nboard, I'm OK with that. \"\\dparam\" could be a bit confusing to end users \r\n(\"parameter for what?\") so I'm -1 on that.\r\n\r\nJonathan", "msg_date": "Thu, 7 Apr 2022 12:49:34 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
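The tab-completion point made above — \dco already uniquely identifies \dconfig, while \con still collides with \connect and \conninfo — reduces to plain prefix matching. A toy Python sketch over a hypothetical subset of the backslash commands:

```python
# Hypothetical subset of psql's backslash commands.
commands = ["\\connect", "\\conninfo", "\\dconfig", "\\dt", "\\dv"]

def complete(prefix: str):
    return sorted(c for c in commands if c.startswith(prefix))

print(complete("\\con"))  # ['\\connect', '\\conninfo'] -- still ambiguous
print(complete("\\dco"))  # ['\\dconfig']               -- already unique
```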
}, { "msg_contents": "On 4/7/22 12:37, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I wouldn't\n>>> fight too hard if people want to lengthen it to \\dconfig for consistency\n>>> with set_config().\n> \n>> I'd prefer \\dconfig, but if the majority on this list view that as pedantically forcing them to type more, I'm not going to kick up a fuss about \\dconf.\n> \n> Maybe I'm atypical, but I'm probably going to use tab completion\n> either way, so it's not really more keystrokes. The consistency\n> point is a good one that I'd not considered before.\n\nYeah I had thought about \\dconfig too -- +1 to that, although I am fine \nwith \\dconf too.\n\nJoe\n\n\n", "msg_date": "Thu, 7 Apr 2022 12:58:43 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Thu, Apr 7, 2022 at 9:58 AM Joe Conway <mail@joeconway.com> wrote:\n\n> On 4/7/22 12:37, Tom Lane wrote:\n> > Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> >>> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> I wouldn't\n> >>> fight too hard if people want to lengthen it to \\dconfig for\n> consistency\n> >>> with set_config().\n> >\n> >> I'd prefer \\dconfig, but if the majority on this list view that as\n> pedantically forcing them to type more, I'm not going to kick up a fuss\n> about \\dconf.\n> >\n> > Maybe I'm atypical, but I'm probably going to use tab completion\n> > either way, so it's not really more keystrokes. The consistency\n> > point is a good one that I'd not considered before.\n>\n> Yeah I had thought about \\dconfig too -- +1 to that, although I am fine\n> with \\dconf too.\n>\n>\n\\dconfig[+] gets my vote. I was going to say \"conf\" just isn't common\njargon to say or write; but the one place it is - file extensions - is\nrelevant and common. 
But still, I would go with the non-jargon form.\n\nDavid J.", "msg_date": "Thu, 7 Apr 2022 10:04:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "čt 7. 4. 2022 v 19:04 odesílatel David G. 
Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Thu, Apr 7, 2022 at 9:58 AM Joe Conway <mail@joeconway.com> wrote:\n>\n>> On 4/7/22 12:37, Tom Lane wrote:\n>> > Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> >>> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >>> I wouldn't\n>> >>> fight too hard if people want to lengthen it to \\dconfig for\n>> consistency\n>> >>> with set_config().\n>> >\n>> >> I'd prefer \\dconfig, but if the majority on this list view that as\n>> pedantically forcing them to type more, I'm not going to kick up a fuss\n>> about \\dconf.\n>> >\n>> > Maybe I'm atypical, but I'm probably going to use tab completion\n>> > either way, so it's not really more keystrokes. The consistency\n>> > point is a good one that I'd not considered before.\n>>\n>> Yeah I had thought about \\dconfig too -- +1 to that, although I am fine\n>> with \\dconf too.\n>>\n>>\n> \\dconfig[+] gets my vote. I was going to say \"conf\" just isn't common\n> jargon to say or write; but the one place it is - file extensions - is\n> relevant and common. But still, I would go with the non-jargon form.\n>\n\ndconfig is better, because google can be confused - dconf is known project\nhttps://en.wikipedia.org/wiki/Dconf\n\nThe length is not too important when we have tab complete\n\nRegards\n\nPavel\n\n\n> David J.\n>\n\nčt 7. 4. 2022 v 19:04 odesílatel David G. 
Johnston <david.g.johnston@gmail.com> napsal:On Thu, Apr 7, 2022 at 9:58 AM Joe Conway <mail@joeconway.com> wrote:On 4/7/22 12:37, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Apr 7, 2022, at 9:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I wouldn't\n>>> fight too hard if people want to lengthen it to \\dconfig for consistency\n>>> with set_config().\n> \n>> I'd prefer \\dconfig, but if the majority on this list view that as pedantically forcing them to type more, I'm not going to kick up a fuss about \\dconf.\n> \n> Maybe I'm atypical, but I'm probably going to use tab completion\n> either way, so it's not really more keystrokes.  The consistency\n> point is a good one that I'd not considered before.\n\nYeah I had thought about \\dconfig too -- +1 to that, although I am fine \nwith \\dconf too.\\dconfig[+] gets my vote.  I was going to say \"conf\" just isn't common jargon to say or write; but the one place it is - file extensions - is relevant and common.  But still, I would go with the non-jargon form.dconfig is better, because google can be confused - dconf is known project https://en.wikipedia.org/wiki/DconfThe length is not too important when we have tab completeRegardsPavel David J.", "msg_date": "Thu, 7 Apr 2022 19:07:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> čt 7. 4. 2022 v 19:04 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>> \\dconfig[+] gets my vote. I was going to say \"conf\" just isn't common\n>> jargon to say or write; but the one place it is - file extensions - is\n>> relevant and common. 
But still, I would go with the non-jargon form.\n\n> dconfig is better, because google can be confused - dconf is known project\n> https://en.wikipedia.org/wiki/Dconf\n\nLooks like the consensus has shifted to \\dconfig. I'll do it like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 16:35:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Re: Tom Lane\n> Looks like the consensus has shifted to \\dconfig. I'll do it like that.\n\nA bit late to the party, but two more ideas:\n\n\nThe name has evolved from \\dcp over various longer \\d-things to the\nmore verbose \\dconfig. How about we evolve it even more and just call\nit \\config? That would be much easier to remember - in fact after I\nseeing the patch the other day, I wanted to try it today and I was\nconfused when \\config didn't work, and had to read the git log to see\nhow it's actually called.\n\nIt also doesn't conflict with tab completion too much, \\conf<tab>\nwould work.\n\n\nThe other bit is hiding non-default values. \"\\dconfig\" by itself is\nvery long and not very interesting. I have this in my .psqlrc that I\nuse very often on servers I'm visiting:\n\n\\set config 'SELECT name, current_setting(name), CASE source WHEN $$configuration file$$ THEN regexp_replace(sourcefile, $$^/.*/$$, $$$$)||$$:$$||sourceline ELSE source END FROM pg_settings WHERE source <> $$default$$;'\n\nI would think that if \\dconfig showed the non-default settings only,\nit would be much more useful; the full list would still be available\nwith \"\\dconfig *\". This is in line with \\dt only showing tables on the\nsearch_path, and \"\\dt *.*\" showing all.\n\nChristoph\n\n\n", "msg_date": "Sat, 9 Apr 2022 11:21:51 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
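Christoph's \set config snippet boils down to a source-based filter over pg_settings. A minimal Python sketch of the same idea, with hypothetical (name, setting, source) rows standing in for the view's output:

```python
# Hypothetical (name, setting, source) rows standing in for pg_settings.
rows = [
    ("max_connections", "100", "configuration file"),
    ("work_mem", "4MB", "default"),
    ("shared_buffers", "128MB", "configuration file"),
    ("TimeZone", "UTC", "environment variable"),
]

# Keep everything whose source is not "default" -- the proposed bare
# \dconfig behavior; "\dconfig *" would still list all rows.
non_default = [(n, v) for n, v, src in rows if src != "default"]
print(non_default)
```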
}, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> The name has evolved from \\dcp over various longer \\d-things to the\n> more verbose \\dconfig. How about we evolve it even more and just call\n> it \\config?\n\nI think people felt that it should be part of the \\d family.\nAlso, because we have \\connect and \\conninfo, you'd need to\ntype at least five characters before you could tab-complete,\nwhereas \\dconfig is unique at four (you just need \\dco).\n\n> I would think that if \\dconfig showed the non-default settings only,\n> it would be much more useful; the full list would still be available\n> with \"\\dconfig *\". This is in line with \\dt only showing tables on the\n> search_path, and \"\\dt *.*\" showing all.\n\nHm, I could get on board with that -- any other opinions?\n(Perhaps there's an argument for omitting \"override\"\nsettings as well?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 10:31:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Sat, Apr 09, 2022 at 10:31:17AM -0400, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> \n> > I would think that if \\dconfig showed the non-default settings only,\n> > it would be much more useful; the full list would still be available\n> > with \"\\dconfig *\". This is in line with \\dt only showing tables on the\n> > search_path, and \"\\dt *.*\" showing all.\n> \n> Hm, I could get on board with that -- any other opinions?\n\n+1 for it, that's often what I'm interested in when looking at the GUCs in\ngeneral.\n\n> (Perhaps there's an argument for omitting \"override\"\n> settings as well?)\n\n-0.1. 
Most are usually not useful, but I can see at least data_checksums and\nwal_buffers that are still interesting.\n\n\n", "msg_date": "Sat, 9 Apr 2022 23:58:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/9/22 11:58 AM, Julien Rouhaud wrote:\r\n> On Sat, Apr 09, 2022 at 10:31:17AM -0400, Tom Lane wrote:\r\n>> Christoph Berg <myon@debian.org> writes:\r\n>>\r\n>>> I would think that if \\dconfig showed the non-default settings only,\r\n>>> it would be much more useful; the full list would still be available\r\n>>> with \"\\dconfig *\". This is in line with \\dt only showing tables on the\r\n>>> search_path, and \"\\dt *.*\" showing all.\r\n>>\r\n>> Hm, I could get on board with that -- any other opinions?\r\n> \r\n> +1 for it, that's often what I'm interested in when looking at the GUCs in\r\n> general.\r\n\r\n-1, at least for the moment. Sometimes a user doesn't know what they're \r\nlooking for coupled with being unaware of what the default value is. If \r\na setting is set to a default value and that value is the problematic \r\nsetting, a user should be able to see that even in a full list.\r\n\r\n(The \\dt searching only tables \"search_path\" vs. the database has also \r\nbitten me too. I did ultimately learn about \"\\dt *.*\", but this makes \r\nthe user have to unpack more layers to do simple things).\r\n\r\nJonathan", "msg_date": "Sat, 9 Apr 2022 12:14:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> -1, at least for the moment. Sometimes a user doesn't know what they're \n> looking for coupled with being unaware of what the default value is. 
If \n> a setting is set to a default value and that value is the problematic \n> setting, a user should be able to see that even in a full list.\n\nSure, but then you do \"\\dconfig *\". With there being several hundred\nGUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\nis a common use-case at all, let alone so common as to deserve being\nthe default behavior.\n\nOne thing we could perhaps do to reduce confusion is to change the\ntable heading when doing this, say from \"List of configuration parameters\"\nto \"List of non-default configuration parameters\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 12:27:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Sat, Apr 9, 2022 at 9:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > -1, at least for the moment. Sometimes a user doesn't know what they're\n> > looking for coupled with being unaware of what the default value is. If\n> > a setting is set to a default value and that value is the problematic\n> > setting, a user should be able to see that even in a full list.\n>\n> Sure, but then you do \"\\dconfig *\". 
With there being several hundred\n> GUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\n> is a common use-case at all, let alone so common as to deserve being\n> the default behavior.\n>\n>\nI'm for having a default that doesn't mean \"show everything\".\n\nI'm also wondering whether we can invent GUC namespaces for the different\ncontexts, so I can use a pattern like: context.internal.*\n\nA similar ability for category would be nice but we'd have to invent labels\nto make it practical.\n\n\\dconfig [pattern [mode]]\n\nmode: all, overridden\n\nSo mode is overridden if pattern is absent but all if pattern is present,\nwith the ability to specify overridden.\n\npattern: [[{context.{context label}}|{category.{category\nlabel}}.]...]{parameter name pattern}\nparameter name pattern: [{two part name prefix}.]{base parameter pattern}\n\n\nOne thing we could perhaps do to reduce confusion is to change the\n> table heading when doing this, say from \"List of configuration parameters\"\n> to \"List of non-default configuration parameters\".\n>\n>\nI'd be inclined to echo a note after the output table that says that not\nall configuration parameters are displayed - possibly even providing a\ncount [all - overridden]. The header is likely to be ignored even if it\nstill ends up on screen after scrolling.\n\nDavid J.\n\nOn Sat, Apr 9, 2022 at 9:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> -1, at least for the moment. Sometimes a user doesn't know what they're \n> looking for coupled with being unaware of what the default value is. If \n> a setting is set to a default value and that value is the problematic \n> setting, a user should be able to see that even in a full list.\n\nSure, but then you do \"\\dconfig *\".  
With there being several hundred\nGUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\nis a common use-case at all, let alone so common as to deserve being\nthe default behavior.\nI'm for having a default that doesn't mean \"show everything\".I'm also wondering whether we can invent GUC namespaces for the different contexts, so I can use a pattern like: context.internal.*A similar ability for category would be nice but we'd have to invent labels to make it practical.\\dconfig [pattern [mode]]mode: all, overriddenSo mode is overridden if pattern is absent but all if pattern is present, with the ability to specify overridden.pattern: [[{context.{context label}}|{category.{category label}}.]...]{parameter name pattern}parameter name pattern: [{two part name prefix}.]{base parameter pattern}\nOne thing we could perhaps do to reduce confusion is to change the\ntable heading when doing this, say from \"List of configuration parameters\"\nto \"List of non-default configuration parameters\".I'd be inclined to echo a note after the output table that says that not all configuration parameters are displayed - possibly even providing a count [all - overridden].  The header is likely to be ignored even if it still ends up on screen after scrolling.David J.", "msg_date": "Sat, 9 Apr 2022 09:49:57 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Sat, Apr 9, 2022 at 10:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > The name has evolved from \\dcp over various longer \\d-things to the\n> > more verbose \\dconfig. 
How about we evolve it even more and just call\n> > it \\config?\n>\n> I think people felt that it should be part of the \\d family.\n> Also, because we have \\connect and \\conninfo, you'd need to\n> type at least five characters before you could tab-complete,\n> whereas \\dconfig is unique at four (you just need \\dco).\n\nRegarding this point, I kind of think that \\dconfig is a break with\nestablished precedent, but in a good way. Previous additions have\ngenerally tried to pick some vaguely mnemonic sequence of letters that\nsomehow corresponds to what's being listed, but you have to be pretty\nhard-core to remember what \\db and \\dAc and \\drds do. \\dconfig is\nprobably easier to remember.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 9 Apr 2022 12:50:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/9/22 16:31, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n>\n>> I would think that if \\dconfig showed the non-default settings only,\n>> it would be much more useful; the full list would still be available\n>> with \"\\dconfig *\". This is in line with \\dt only showing tables on the\n>> search_path, and \"\\dt *.*\" showing all.\n> \n> Hm, I could get on board with that -- any other opinions?\n\n+1\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 9 Apr 2022 18:50:52 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/9/22 12:27 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> -1, at least for the moment. Sometimes a user doesn't know what they're\r\n>> looking for coupled with being unaware of what the default value is. 
If\r\n>> a setting is set to a default value and that value is the problematic\r\n>> setting, a user should be able to see that even in a full list.\r\n> \r\n> Sure, but then you do \"\\dconfig *\". With there being several hundred\r\n> GUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\r\n> is a common use-case at all, let alone so common as to deserve being\r\n> the default behavior.\r\n> \r\n> One thing we could perhaps do to reduce confusion is to change the\r\n> table heading when doing this, say from \"List of configuration parameters\"\r\n> to \"List of non-default configuration parameters\".\r\n\r\nReasonable points. I don't have any objections to this proposal.\r\n\r\nJonathan", "msg_date": "Sat, 9 Apr 2022 14:50:59 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/9/22 12:27 PM, Tom Lane wrote:\n>> Sure, but then you do \"\\dconfig *\". With there being several hundred\n>> GUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\n>> is a common use-case at all, let alone so common as to deserve being\n>> the default behavior.\n>> \n>> One thing we could perhaps do to reduce confusion is to change the\n>> table heading when doing this, say from \"List of configuration parameters\"\n>> to \"List of non-default configuration parameters\".\n\n> Reasonable points. I don't have any objections to this proposal.\n\nHearing no further comments, done like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 15:12:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/11/22 3:12 PM, Tom Lane wrote:\r\n> \"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\r\n>> On 4/9/22 12:27 PM, Tom Lane wrote:\r\n>>> Sure, but then you do \"\\dconfig *\". With there being several hundred\r\n>>> GUCs (and no doubt more coming), I'm not sure that \"show me every GUC\"\r\n>>> is a common use-case at all, let alone so common as to deserve being\r\n>>> the default behavior.\r\n>>>\r\n>>> One thing we could perhaps do to reduce confusion is to change the\r\n>>> table heading when doing this, say from \"List of configuration parameters\"\r\n>>> to \"List of non-default configuration parameters\".\r\n> \r\n>> Reasonable points. I don't have any objections to this proposal.\r\n> \r\n> Hearing no further comments, done like that.\r\n\r\nThanks!\r\n\r\nI have a usability comment based on my testing.\r\n\r\nI ran \"\\dconfig\" and one of the settings that came up was \r\n\"max_connections\" which was set to 100, or the default.\r\n\r\nI had noticed that \"max_connections\" was uncommented in my \r\npostgresql.conf file, which I believe was from the autogenerated \r\nprovided by initdb.\r\n\r\nSimilarly, min_wal_size/max_wal_size were in the list and set to their \r\ndefault values. These were also uncommented in my postgresql.conf from \r\nthe autogeneration.\r\n\r\nMy question is if we're only going to list out the settings that are \r\ncustomized, are we going to:\r\n\r\n1. Hide a setting if it matches a default value, even if a user set it \r\nto be the default value? OR\r\n2. Comment out all of the settings in a generated postgresql.conf file?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Mon, 11 Apr 2022 15:44:47 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> My question is if we're only going to list out the settings that are \n> customized, are we going to:\n\n> 1. 
Hide a setting if it matches a default value, even if a user set it \n> to be the default value? OR\n> 2. Comment out all of the settings in a generated postgresql.conf file?\n\nAs committed, it prints anything that's shown as \"source != 'default'\"\nin pg_settings, which means anything for which the value wasn't\ntaken from the wired-in default. I suppose an alternative definition\ncould be \"setting != boot_val\". Not really sure if that's better.\n\nThis idea does somewhat address my unhappiness upthread about printing\nvalues with source = 'internal', but I see that it gets confused by\nsome GUCs with custom show hooks, like unix_socket_permissions.\nMaybe it needs to be \"source != 'default' AND setting != boot_val\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:11:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Plus maybe making initdb not set values to their default if the auto probing ends up at that values.\n\nChristoph", "msg_date": "Mon, 11 Apr 2022 22:17:46 +0200", "msg_from": "Christoph Berg <cb@df7cb.de>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Christoph Berg <cb@df7cb.de> writes:\n> Plus maybe making initdb not set values to their default if the auto probing ends up at that values.\n\nSeems a bit fragile: we'd have to make sure that initdb knew what the\nboot_val is. IIRC, some of those are not necessarily immutable constants,\nso there'd be room for error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:22:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
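}, { "msg_contents": "The initdb side of this is easy to inspect from SQL. A rough query for spotting configuration-file entries that merely restate the wired-in default (a sketch against pg_settings; IS NOT DISTINCT FROM is used because boot_val can be NULL for some parameters):

```sql
-- Settings read from postgresql.conf (including initdb's autogenerated
-- lines, e.g. max_connections) whose value equals the built-in default:
SELECT name, setting, boot_val
FROM pg_catalog.pg_settings
WHERE source = 'configuration file'
  AND setting IS NOT DISTINCT FROM boot_val
ORDER BY name;
```

The exact rows returned depend on what initdb probed for the installation.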
}, { "msg_contents": "On 4/11/22 4:11 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> My question is if we're only going to list out the settings that are\r\n>> customized, are we going to:\r\n> \r\n>> 1. Hide a setting if it matches a default value, even if a user set it\r\n>> to be the default value? OR\r\n>> 2. Comment out all of the settings in a generated postgresql.conf file?\r\n> \r\n> As committed, it prints anything that's shown as \"source != 'default'\"\r\n> in pg_settings, which means anything for which the value wasn't\r\n> taken from the wired-in default. I suppose an alternative definition\r\n> could be \"setting != boot_val\". Not really sure if that's better.\r\n> \r\n> This idea does somewhat address my unhappiness upthread about printing\r\n> values with source = 'internal', but I see that it gets confused by\r\n> some GUCs with custom show hooks, like unix_socket_permissions.\r\n> Maybe it needs to be \"source != 'default' AND setting != boot_val\"?\r\n\r\nRunning through a few GUCs, that seems reasonable. Happy to test the \r\npatch out prior to commit to see if it renders better.\r\n\r\nJonathan", "msg_date": "Mon, 11 Apr 2022 20:35:52 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/11/22 4:11 PM, Tom Lane wrote:\n>> This idea does somewhat address my unhappiness upthread about printing\n>> values with source = 'internal', but I see that it gets confused by\n>> some GUCs with custom show hooks, like unix_socket_permissions.\n>> Maybe it needs to be \"source != 'default' AND setting != boot_val\"?\n\n> Running through a few GUCs, that seems reasonable. Happy to test the \n> patch out prior to commit to see if it renders better.\n\nIt'd just look like this, I think. 
I see from looking at guc.c that\nboot_val can be NULL, so we'd better use IS DISTINCT FROM.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 12 Apr 2022 11:19:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/12/22 11:19 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 4/11/22 4:11 PM, Tom Lane wrote:\r\n>>> This idea does somewhat address my unhappiness upthread about printing\r\n>>> values with source = 'internal', but I see that it gets confused by\r\n>>> some GUCs with custom show hooks, like unix_socket_permissions.\r\n>>> Maybe it needs to be \"source != 'default' AND setting != boot_val\"?\r\n> \r\n>> Running through a few GUCs, that seems reasonable. Happy to test the\r\n>> patch out prior to commit to see if it renders better.\r\n> \r\n> It'd just look like this, I think. I see from looking at guc.c that\r\n> boot_val can be NULL, so we'd better use IS DISTINCT FROM.\r\n\r\n(IS DISTINCT FROM is pretty handy :)\r\n\r\nI tested it and I like this a lot better, at least it's much more \r\nconsolidated. They all seem to be generated (directories, timezones, \r\ncollations/encodings).\r\n\r\nThe one exception to this seems to be \"max_stack_depth\", which is \r\nrendering on my \"\\dconfig\" though I didn't change it, and it's showing \r\nits default value of 2MB. \"boot_val\" says 100, \"reset_val\" says 2048, \r\nand it's commented out in my postgresql.conf. Do we want to align that?\r\n\r\nThat said, the patch itself LGTM.\r\n\r\nJonathan", "msg_date": "Tue, 12 Apr 2022 12:36:55 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/12/22 11:19 AM, Tom Lane wrote:\n>> It'd just look like this, I think.
I see from looking at guc.c that\n>> boot_val can be NULL, so we'd better use IS DISTINCT FROM.\n\n> I tested it and I like this a lot better, at least it's much more \n> consolidated. They all seem to be generated (directories, timezones, \n> collations/encodings).\n\nYeah, most of what shows up in a minimally-configured installation is\npostmaster-computed settings like config_file, rather than things\nthat were actually set by the DBA. Personally I'd rather hide the\nones that have source = 'override', but that didn't seem to be the\nconsensus.\n\n> The one exception to this seems to be \"max_stack_depth\", which is \n> rendering on my \"\\dconfig\" though I didn't change it, an it's showing \n> it's default value of 2MB. \"boot_val\" says 100, \"reset_val\" says 2048, \n> and it's commented out in my postgresql.conf. Do we want to align that?\n\nI don't think there's any principled thing we could do about that in\npsql. The boot_val is a conservatively small 100kB, but we crank\nthat up automatically based on getrlimit(RLIMIT_STACK), so on any\nreasonable platform it's going to show as not being default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Apr 2022 13:00:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 4/12/22 1:00 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 4/12/22 11:19 AM, Tom Lane wrote:\r\n>>> It'd just look like this, I think. I see from looking at guc.c that\r\n>>> boot_val can be NULL, so we'd better use IS DISTINCT FROM.\r\n> \r\n>> I tested it and I like this a lot better, at least it's much more\r\n>> consolidated. 
They all seem to be generated (directories, timezones,\r\n>> collations/encodings).\r\n> \r\n> Yeah, most of what shows up in a minimally-configured installation is\r\n> postmaster-computed settings like config_file, rather than things\r\n> that were actually set by the DBA. Personally I'd rather hide the\r\n> ones that have source = 'override', but that didn't seem to be the\r\n> consensus.\r\n\r\nThe list seems more reasonable now, though now that I'm fully in the \r\n\"less is more\" camp based on the \"non-defaults\" description, I think \r\nanything we can do to further prune is good.\r\n\r\n>> The one exception to this seems to be \"max_stack_depth\", which is\r\n>> rendering on my \"\\dconfig\" though I didn't change it, an it's showing\r\n>> it's default value of 2MB. \"boot_val\" says 100, \"reset_val\" says 2048,\r\n>> and it's commented out in my postgresql.conf. Do we want to align that?\r\n> \r\n> I don't think there's any principled thing we could do about that in\r\n> psql. The boot_val is a conservatively small 100kB, but we crank\r\n> that up automatically based on getrlimit(RLIMIT_STACK), so on any\r\n> reasonable platform it's going to show as not being default.\r\n\r\nGot it.\r\n\r\nWe may be at a point where it's \"good enough\" and let more people chime \r\nin during beta.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 12 Apr 2022 15:40:49 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/12/22 1:00 PM, Tom Lane wrote:\n>> Yeah, most of what shows up in a minimally-configured installation is\n>> postmaster-computed settings like config_file, rather than things\n>> that were actually set by the DBA. 
Personally I'd rather hide the\n>> ones that have source = 'override', but that didn't seem to be the\n>> consensus.\n\n> The list seems more reasonable now, though now that I'm fully in the \n> \"less is more\" camp based on the \"non-defaults\" description, I think \n> anything we can do to further prune is good.\n\nHearing no further comments, I pushed this version. There didn't seem\nto be a need to adjust the docs.\n\n> We may be at a point where it's \"good enough\" and let more people chime \n> in during beta.\n\nRight, lots of time yet for bikeshedding in beta.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Apr 2022 15:05:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Tue, Apr 12, 2022 at 11:19:40AM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 4/11/22 4:11 PM, Tom Lane wrote:\n> >> This idea does somewhat address my unhappiness upthread about printing\n> >> values with source = 'internal', but I see that it gets confused by\n> >> some GUCs with custom show hooks, like unix_socket_permissions.\n> >> Maybe it needs to be \"source != 'default' AND setting != boot_val\"?\n> \n> > Running through a few GUCs, that seems reasonable. Happy to test the \n> > patch out prior to commit to see if it renders better.\n> \n> It'd just look like this, I think. I see from looking at guc.c that\n> boot_val can be NULL, so we'd better use IS DISTINCT FROM.\n\nI noticed this is showing \"pre-computed\" gucs, like:\n\n shared_memory_size | 149MB\n shared_memory_size_in_huge_pages | 75\n\nI'm not opposed to that, but I wonder if that's what's intended / best.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Jun 2022 08:55:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I noticed this is showing \"pre-computed\" gucs, like:\n> shared_memory_size | 149MB\n> shared_memory_size_in_huge_pages | 75\n> I'm not opposed to that, but I wonder if that's what's intended / best.\n\nI had suggested upthread that we might want to hide items with\nsource = 'override', but that idea didn't seem to be getting traction.\nA different idea is to hide items with context = 'internal'.\nLooking at the items selected by the current rule in a default\ninstallation:\n\npostgres=# SELECT s.name, source, context FROM pg_catalog.pg_settings s\nWHERE s.source <> 'default' AND\n s.setting IS DISTINCT FROM s.boot_val\nORDER BY 1;\n name | source | context \n----------------------------------+----------------------+------------\n TimeZone | configuration file | user\n application_name | client | user\n client_encoding | client | user\n config_file | override | postmaster\n data_directory | override | postmaster\n default_text_search_config | configuration file | user\n hba_file | override | postmaster\n ident_file | override | postmaster\n lc_messages | configuration file | superuser\n log_timezone | configuration file | sighup\n max_stack_depth | environment variable | superuser\n shared_memory_size | override | internal\n shared_memory_size_in_huge_pages | override | internal\n wal_buffers | override | postmaster\n(14 rows)\n\nSo hiding internal-context items would hit exactly the two you mention,\nbut hiding override-source items would hit several more.\n\n(I'm kind of wondering why wal_buffers is showing as \"override\";\nthat seems like a quirk.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jun 2022 12:02:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "On Mon, Jun 6, 2022 at 12:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thoughts?\n\nThis all seems pretty subjective. As someone who sometimes supports\ncustomers, I usually kind of want the customer to give me all the\nsettings, just in case something that didn't seem important to them\n(or to whoever coded up the \\dconfig command) turns out to be\nrelevant. It's easier to ask once and then look for the information\nyou need than to go back and forth asking for more data, and I don't\nwant to have to try to infer things based on what version they are\nrunning or how a certain set of binaries was built.\n\nBut if I already know that the configuration on a given system is\nbasically sane, I probably only care about the parameters with\nnon-default values, and a computed parameter can't be set, so I guess\nit has a default value by definition. So if the charter for this\ncommand is to show only non-default GUCs, then it seems reasonable to\nleave these out.\n\nI think part of the problem here, though, is that one can imagine a\nvariety of charters that might be useful. A user could want to see the\nparameters that have values in their session that differ from the\nsystem defaults, or parameters that have values which differ from the\ncompiled-in defaults, or parameters that can be changed without a\nrestart, or all the pre-computed parameters, or all the parameters\nthat contain \"vacuum\" in the name, or all the query-planner-related\nparameters, or all the parameters on which any privileges have been\ngranted. And it's just a judgement call which of those things we ought\nto try to accommodate in the psql syntax and which ones ought to be\ndone by writing an ad-hoc query against pg_settings.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jun 2022 13:02:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "\n\n> On Jun 6, 2022, at 9:02 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Thoughts?\n\nI think it depends on your mental model of what \\dconfig is showing you. Is it showing you the list of configs that you can SET, or just the list of all configs?\n\n\n\n", "msg_date": "Mon, 6 Jun 2022 10:04:47 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think part of the problem here, though, is that one can imagine a\n> variety of charters that might be useful. A user could want to see the\n> parameters that have values in their session that differ from the\n> system defaults, or parameters that have values which differ from the\n> compiled-in defaults, or parameters that can be changed without a\n> restart, or all the pre-computed parameters, or all the parameters\n> that contain \"vacuum\" in the name, or all the query-planner-related\n> parameters, or all the parameters on which any privileges have been\n> granted. And it's just a judgement call which of those things we ought\n> to try to accommodate in the psql syntax and which ones ought to be\n> done by writing an ad-hoc query against pg_settings.\n\nSure. Nonetheless, having decided to introduce this command, we have\nto make that judgement call.\n\npsql-ref.sgml currently explains that\n\n If <replaceable class=\"parameter\">pattern</replaceable> is specified,\n only parameters whose names match the pattern are listed. Without\n a <replaceable class=\"parameter\">pattern</replaceable>, only\n parameters that are set to non-default values are listed.\n (Use <literal>\\dconfig *</literal> to see all parameters.)\n\nso we have the \"all of them\" and \"ones whose names match a pattern\"\ncases covered. 
And the definition of the default behavior as\n\"only ones that are set to non-default values\" seems reasonable enough,\nbut the question is what counts as a \"non-default value\", or for that\nmatter what counts as \"setting\".\n\nI think a reasonable case can be made for excluding \"internal\" GUCs\non the grounds that (a) they cannot be set, and therefore (b) whatever\nvalue they have might as well be considered the default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jun 2022 17:01:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On Mon, Jun 6, 2022 at 5:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think a reasonable case can be made for excluding \"internal\" GUCs\n> on the grounds that (a) they cannot be set, and therefore (b) whatever\n> value they have might as well be considered the default.\n\nI agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jun 2022 10:26:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 6/7/22 10:26 AM, Robert Haas wrote:\r\n> On Mon, Jun 6, 2022 at 5:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> I think a reasonable case can be made for excluding \"internal\" GUCs\r\n>> on the grounds that (a) they cannot be set, and therefore (b) whatever\r\n>> value they have might as well be considered the default.\r\n> \r\n> I agree.\r\n\r\nI think some of these could be interesting if they deviate from the \r\ndefault (e.g. \"in_hot_standby\") as it will give the user context on the \r\ncurrent state of the system.\r\n\r\nHowever, something like that is still fairly easy to determine (e.g. \r\n`pg_catalog.pg_is_in_recovery()`). 
And looking through the settings \r\nmarked \"internal\" showing the non-defaults may not provide much \r\nadditional context to a user.\r\n\r\n+1 for excluding them.\r\n\r\nJonathan", "msg_date": "Tue, 7 Jun 2022 10:52:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Re: Jonathan S. Katz\n> On 6/7/22 10:26 AM, Robert Haas wrote:\n> I think some of these could be interesting if they deviate from the default\n> (e.g. \"in_hot_standby\") as it will give the user context on the current\n> state of the system.\n> \n> However, something like that is still fairly easy to determine (e.g.\n> `pg_catalog.pg_is_in_recovery()`). And looking through the settings marked\n> \"internal\" showing the non-defaults may not provide much additional context\n> to a user.\n\nin_hot_standby sounds very useful to have in that list.\n\nChristoph\n\n\n", "msg_date": "Tue, 7 Jun 2022 17:30:57 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> in_hot_standby sounds very useful to have in that list.\n\nI thought about this some more and concluded that we're blaming the\nmessenger. There's nothing wrong with \\dconfig's filtering rule;\nrather, it's the fault of the backend for mislabeling a lot of\nrun-time-computed values as source=PGC_S_OVERRIDE when they really\nare dynamic defaults. Now in fairness, I think a lot of that code\npredates the invention of PGC_S_DYNAMIC_DEFAULT, so there was not\na better way when it was written ... but there is now.\n\nThe attached draft patch makes the following changes:\n\n* All run-time calculations of PGC_INTERNAL variables are relabeled\nas PGC_S_DYNAMIC_DEFAULT. 
There is not any higher source value\nthat we'd allow to set such a variable, so this is sufficient.\n(I didn't do anything about in_hot_standby, which is set through\na hack rather than via set_config_option; not sure whether we want\nto do anything there, or what it should be if we do.) The net\neffect of this compared to upthread examples is to hide\nshared_memory_size and shared_memory_size_in_huge_pages from \\dconfig.\n\n* The auto-tune value of wal_buffers is relabeled as PGC_S_DYNAMIC_DEFAULT\nif possible (but due to pre-existing hacks, we might have to force it).\nThis will hide that one too as long as you didn't manually set it.\n\n* The rlimit-derived value of max_stack_depth is likewise relabeled\nas PGC_S_DYNAMIC_DEFAULT, resolving the complaint Jonathan had upthread.\nBut now that we have a way to hide this, I'm having second thoughts\nabout whether we should. If you are on a platform that's forcing an\nunreasonably small stack size, it'd be good if \\dconfig told you so.\nCould it be sane to label that value as PGC_S_DYNAMIC_DEFAULT only when\nit's the limit value (2MB) and PGC_S_ENV_VAR when it's smaller?\n\nIn any case, I expect that we'd apply this patch only to HEAD, which\nmeans that when using psql's \\dconfig against a pre-v15 server,\nyou'd still see these settings that we're trying to hide.\nThat doesn't bother me too much, but maybe some would find it\nconfusing.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Jun 2022 13:02:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" 
}, { "msg_contents": "I wrote:\n> The attached draft patch makes the following changes:\n\nHere's a v2 that polishes the loose ends:\n\n> (I didn't do anything about in_hot_standby, which is set through\n> a hack rather than via set_config_option; not sure whether we want\n> to do anything there, or what it should be if we do.)\n\nI concluded that directly assigning to in_hot_standby was a fairly\nhorrid idea and we should just change it with SetConfigOption.\nWith this coding, as long as in_hot_standby is TRUE it will show\nas having a non-default setting in \\dconfig. I had to remove the\nassertion I'd added about PGC_INTERNAL variables only receiving\n\"default\" values, but this just shows that was too inflexible anyway.\n\n> * The rlimit-derived value of max_stack_depth is likewise relabeled\n> as PGC_S_DYNAMIC_DEFAULT, resolving the complaint Jonathan had upthread.\n> But now that we have a way to hide this, I'm having second thoughts\n> about whether we should. If you are on a platform that's forcing an\n> unreasonably small stack size, it'd be good if \\dconfig told you so.\n> Could it be sane to label that value as PGC_S_DYNAMIC_DEFAULT only when\n> it's the limit value (2MB) and PGC_S_ENV_VAR when it's smaller?\n\nI concluded that was just fine and did it.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Jun 2022 19:58:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 6/7/22 1:02 PM, Tom Lane wrote:\r\n\r\n> In any case, I expect that we'd apply this patch only to HEAD, which\r\n> means that when using psql's \\dconfig against a pre-v15 server,\r\n> you'd still see these settings that we're trying to hide.\r\n> That doesn't bother me too much, but maybe some would find it\r\n> confusing.\r\n\r\nWell, \"\\dconfig\" is a v15 feature, and though it's in the client, the \r\nbest compatibility for it will be with v15. 
I think it's OK to have the \r\nbehavior different in v15 vs. older versions.\r\n\r\nJonathan", "msg_date": "Tue, 7 Jun 2022 20:09:18 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 6/7/22 7:58 PM, Tom Lane wrote:\r\n> I wrote:\r\n>> The attached draft patch makes the following changes:\r\n> \r\n> Here's a v2 that polishes the loose ends:\r\n\r\nThanks! I reviewed and did some basic testing locally. I did not see any \r\nof the generated defaults.\r\n\r\n>> (I didn't do anything about in_hot_standby, which is set through\r\n>> a hack rather than via set_config_option; not sure whether we want\r\n>> to do anything there, or what it should be if we do.)\r\n\r\nThe comment diff showed that it went from \"hack\" to \"hack\" :)\r\n\r\n> I concluded that directly assigning to in_hot_standby was a fairly\r\n> horrid idea and we should just change it with SetConfigOption.\r\n> With this coding, as long as in_hot_standby is TRUE it will show\r\n> as having a non-default setting in \\dconfig. I had to remove the\r\n> assertion I'd added about PGC_INTERNAL variables only receiving\r\n> \"default\" values, but this just shows that was too inflexible anyway.\r\n\r\nI tested this and the server correctly rendered \"in_hot_standby\" in \r\n\\dconfig. I also tested setting \"hot_standby to \"on\" while the server \r\nwas not in recovery, and \\dconfig correctly did not render \"in_hot_standby\".\r\n\r\n>> * The rlimit-derived value of max_stack_depth is likewise relabeled\r\n>> as PGC_S_DYNAMIC_DEFAULT, resolving the complaint Jonathan had upthread.\r\n>> But now that we have a way to hide this, I'm having second thoughts\r\n>> about whether we should. 
If you are on a platform that's forcing an\r\n>> unreasonably small stack size, it'd be good if \\dconfig told you so.\r\n>> Could it be sane to label that value as PGC_S_DYNAMIC_DEFAULT only when\r\n>> it's the limit value (2MB) and PGC_S_ENV_VAR when it's smaller?\r\n> \r\n> I concluded that was just fine and did it.\r\n\r\nReading the docs, I think this is OK to do. We already say that \"2MB\" is \r\na very conservative setting. And even if the value can be computed to be \r\nlarger, we don't allow the server to set it higher than \"2MB\".\r\n\r\nI don't know how frequently issues around \"max_stack_depth\" being too \r\nsmall are reported -- I'd be curious to know that -- but I don't have \r\nany strong arguments against allowing the behavior you describe based on \r\nour current docs.\r\n\r\nJonathan", "msg_date": "Tue, 7 Jun 2022 20:35:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I don't know how frequently issues around \"max_stack_depth\" being too \n> small are reported -- I'd be curious to know that -- but I don't have \n> any strong arguments against allowing the behavior you describe based on \n> our current docs.\n\nI can't recall any recent gripes on our own lists, but the issue was\ntop-of-mind for me after discovering that NetBSD defaults \"ulimit -s\"\nto 2MB on at least some platforms. That would leave us setting\nmax_stack_depth to something less than that, probably about 1.5MB.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jun 2022 22:57:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 6/7/22 10:57 PM, Tom Lane wrote:\r\n> \"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\r\n>> I don't know how frequently issues around \"max_stack_depth\" being too\r\n>> small are reported -- I'd be curious to know that -- but I don't have\r\n>> any strong arguments against allowing the behavior you describe based on\r\n>> our current docs.\r\n> \r\n> I can't recall any recent gripes on our own lists, but the issue was\r\n> top-of-mind for me after discovering that NetBSD defaults \"ulimit -s\"\r\n> to 2MB on at least some platforms. That would leave us setting\r\n> max_stack_depth to something less than that, probably about 1.5MB.\r\n\r\nInteresting. OK, I'd say let's keep the behavior that's in the patch.\r\n\r\nJonathan", "msg_date": "Wed, 8 Jun 2022 12:55:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Interesting. OK, I'd say let's keep the behavior that's in the patch.\n\nPushed then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jun 2022 13:26:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How about a psql backslash command to show GUCs?" }, { "msg_contents": "On 6/8/22 1:26 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Interesting. OK, I'd say let's keep the behavior that's in the patch.\r\n> \r\n> Pushed then.\r\n\r\nExcellent -- thank you!\r\n\r\nJonathan", "msg_date": "Wed, 8 Jun 2022 14:37:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: How about a psql backslash command to show GUCs?" } ]
[ { "msg_contents": "Commit a0ffa885e included some code that makes \"pg_dumpall -g\"\ndump GRANT commands for any GUCs that have had nondefault\nprivileges granted on them. I pushed that without complaint,\nbut it feels a little weird to me that we are worrying about\npreserving grants for GUCs when we don't worry about preserving\ntheir actual values.\n\nHistorically, we've been afraid to have pg_upgrade copy the\nold installation's postgresql.conf into the new one, because\nof the likelihood that the new version accepts a different\nset of GUCs, which could possibly cause the new server to\nfail to start; not to mention that there might be entries\nsuch as data_directory that we had better not copy. I think\nthat reasoning is still sound, but it wasn't revisited when\nwe added ALTER SYSTEM.\n\nWhat I want to propose today is that \"pg_dumpall -g\" should\ndump ALTER SYSTEM commands to replicate the contents of\nthe source system's postgresql.auto.conf (which it could\nread out using the pg_file_settings view if it's running\nas superuser, or less reliably from pg_settings if it isn't).\nAs far as I can see offhand, this'd be a great deal safer\nthan messing directly with postgresql.conf:\n\n* We reject ALTER SYSTEM for the most dangerous settings\nlike data_directory, so they won't show up in the source file.\n(Perhaps pg_dumpall should blacklist settings related to\nfilesystem layout, too.)\n\n* The recipient server will validate the arguments of\nALTER SYSTEM and reject anything that it doesn't like,\nso the risk of injecting bad values due to cross-version\ndifferences seems low.\n\n* We're already buying into the risk of cross-version GUC\nincompatibility by dumping settings from pg_db_role_setting,\nand that hasn't caused a lot of problems as far as I've heard.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 14:26:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Should pg_dumpall dump 
ALTER SYSTEM settings?" }, { "msg_contents": "On Wed, Apr 6, 2022 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thoughts?\n\nI'm a little bit skeptical about this proposal, mostly because it\nseems like it has the end result that values that are configured in\npostgresql.conf and postgresql.auto.conf end up being handled\ndifferently: one file has to be copied by hand, while the other file's\ncontents are propagated forward to the new version by pg_dump. I don't\nthink that's what people are going to be expecting...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 21:39:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should pg_dumpall dump ALTER SYSTEM settings?" }, { "msg_contents": "On Wed, 2022-04-06 at 21:39 -0400, Robert Haas wrote:\n> On Wed, Apr 6, 2022 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thoughts?\n> \n> I'm a little bit skeptical about this proposal, mostly because it\n> seems like it has the end result that values that are configured in\n> postgresql.conf and postgresql.auto.conf end up being handled\n> differently: one file has to be copied by hand, while the other file's\n> contents are propagated forward to the new version by pg_dump. I don't\n> think that's what people are going to be expecting...\n\n\"postgresql.auto.conf\" is an implementation detail, and I would expect\nmost users to distinguish between \"parameters set in postgresql.conf\"\nand \"parameters set via the SQL statement ALTER SYSTEM\".\nIf that is the way you look at things, then it seems natural for the\nlatter to be included in a dump, but not the former.\n\nAs another case in point, the Ubuntu/Debian packages split up the data\ndirectory so that the config files are under /etc, while the rest of\nthe data directory is under /var/lib. \"postgresql.auto.conf\" is *not*\nin /etc, but in /var/lib there. 
So a user of these distributions would\nnaturally think that the config files in /etc need to be handled manually,\nbut \"postgresql.auto.conf\" need not.\n\nI am +1 on Tom's idea.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 07 Apr 2022 12:38:43 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Should pg_dumpall dump ALTER SYSTEM settings?" }, { "msg_contents": "At Thu, 07 Apr 2022 12:38:43 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> On Wed, 2022-04-06 at 21:39 -0400, Robert Haas wrote:\n> > On Wed, Apr 6, 2022 at 2:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Thoughts?\n> > \n> > I'm a little bit skeptical about this proposal, mostly because it\n> > seems like it has the end result that values that are configured in\n> > postgresql.conf and postgresql.auto.conf end up being handled\n> > differently: one file has to be copied by hand, while the other file's\n> > contents are propagated forward to the new version by pg_dump. I don't\n> > think that's what people are going to be expecting...\n> \n> \"postgresql.auto.conf\" is an implementation detail, and I would expect\n> most users to distinguish between \"parameters set in postgresql.conf\"\n> and \"parameters set via the SQL statement ALTER SYSTEM\".\n> If that is the way you look at things, then it seems natural for the\n> latter to be included in a dump, but not the former.\n> \n> As another case in point, the Ubuntu/Debian packages split up the data\n> directory so that the config files are under /etc, while the rest of\n> the data directory is under /var/lib. \"postgresql.auto.conf\" is *not*\n> in /etc, but in /var/lib there. So a user of these distributions would\n> naturally think that the config files in /etc need to be handled manually,\n> but \"postgresql.auto.conf\" need not.\n> \n> I am +1 on Tom's idea.\n\nI'm -0.2 if it is the default/implicit behavior. 
postgresql.conf and\nALTER SYSTEM SET work on the same set of settings. If we include\n.auto's settings in a dump, they override the intentional changes\nin postgresql.conf, which I find a bit surprising.\n\nI'm +-0 if it is an optional behavior.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Apr 2022 09:47:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should pg_dumpall dump ALTER SYSTEM settings?" } ]
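To make the proposal in this thread concrete: emitting the ALTER SYSTEM commands is mostly a matter of quoting each value and skipping settings tied to filesystem layout. The Python sketch below is hypothetical — pg_dumpall is C code that would read pg_file_settings via SQL, and the blacklist shown is an invented, partial one — but it illustrates the mechanics being discussed.

```python
# Hypothetical sketch of the proposed pg_dumpall behavior: turn settings
# found in postgresql.auto.conf into ALTER SYSTEM commands. The setting
# names and the blacklist below are illustrative assumptions.

# Settings tied to filesystem layout that a dump would want to skip,
# per the suggestion above.
LAYOUT_SETTINGS = {"data_directory", "config_file", "hba_file", "ident_file"}


def quote_literal(value: str) -> str:
    """Single-quote a GUC value, doubling embedded quotes (SQL literal rules)."""
    return "'" + value.replace("'", "''") + "'"


def dump_alter_system(settings):
    """Yield ALTER SYSTEM SET commands for (name, value) pairs,
    skipping filesystem-layout settings."""
    for name, value in settings:
        if name.lower() in LAYOUT_SETTINGS:
            continue
        yield f"ALTER SYSTEM SET {name} = {quote_literal(value)};"


if __name__ == "__main__":
    demo = [
        ("shared_buffers", "512MB"),
        ("data_directory", "/var/lib/pgsql/data"),  # skipped by blacklist
        ("search_path", 'public, "$user"'),
    ]
    for cmd in dump_alter_system(demo):
        print(cmd)
```

The receiving server would still validate each command, which is the second safety property the proposal relies on: anything the new version rejects simply fails that one statement.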
[ { "msg_contents": "I just noticed that if I build without asserts on my Mac laptop\n(using Apple's latest clang, 13.1.6) I get\n\nnbtdedup.c:68:8: warning: variable 'pagesaving' set but not used [-Wunused-but-set-variable]\n Size pagesaving = 0;\n ^\n1 warning generated.\n\nApparently, late-model clang can figure out that the variable\nis incremented but not otherwise used. This is enough to\nshut it up, but I wonder if you have another preference:\n\n- Size pagesaving = 0;\n+ Size pagesaving PG_USED_FOR_ASSERTS_ONLY = 0;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Apr 2022 15:59:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "New compiler warning from btree dedup code" }, { "msg_contents": "That approach seems fine. Thanks.--\nPeter Geoghegan", "msg_date": "Wed, 6 Apr 2022 13:14:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: New compiler warning from btree dedup code" } ]
[ { "msg_contents": "We have some error messages like this:\n\nSELECT xml_is_well_formed('<abc/>');\nERROR: unsupported XML feature\nDETAIL: This functionality requires the server to be built with libxml \nsupport.\nHINT: You need to rebuild PostgreSQL using --with-libxml.\n\nThis patch removes these kinds of hints.\n\nI think these hints are usually not useful since users will use packaged \ndistributions and won't be interested in rebuilding their installation \nfrom source. Also, we have only used these kinds of hints for some \nfeatures and in some places, not consistently throughout. And of course \nthere are build systems that don't use configure. The information \n\"needs to be built with XXX\" or \"was not built with XXX\" should be \nenough for those interested in actually changing their build \nconfiguration to figure out what to do.", "msg_date": "Thu, 7 Apr 2022 09:19:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove error message hints mentioning configure options" }, { "msg_contents": "Jo.\n\nOn 2022-04-07 09:19:14 +0200, Peter Eisentraut wrote:\n> I think these hints are usually not useful since users will use packaged\n> distributions and won't be interested in rebuilding their installation from\n> source. Also, we have only used these kinds of hints for some features and\n> in some places, not consistently throughout. And of course there are build\n> systems that don't use configure. 
The information \"needs to be built with\n> XXX\" or \"was not built with XXX\" should be enough for those interested in\n> actually changing their build configuration to figure out what to do.\n\n+1\n\n> diff --git a/src/test/regress/expected/compression_1.out b/src/test/regress/expected/compression_1.out\n> index 1ce2962d55..c0a47646eb 100644\n> --- a/src/test/regress/expected/compression_1.out\n> +++ b/src/test/regress/expected/compression_1.out\n\nThe xml stuff is at least old, but compression_1 isn't. I think we've tried to\navoid long alternative output files where everything fails due to being\nunsupported for a while now? Robert, Dilip? See uses of :skip_test in various\ntests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 00:34:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove error message hints mentioning configure options" }, { "msg_contents": "> On 7 Apr 2022, at 09:19, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> We have some error messages like this:\n> \n> SELECT xml_is_well_formed('<abc/>');\n> ERROR: unsupported XML feature\n> DETAIL: This functionality requires the server to be built with libxml support.\n> HINT: You need to rebuild PostgreSQL using --with-libxml.\n> \n> This patch removes these kinds of hints.\n> \n> I think these hints are usually not useful since users will use packaged distributions and won't be interested in rebuilding their installation from source.\n\nAgreed, +1 on this patch. 
Grepping the code I was also unable to find any\nother user-facing instances.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 11:25:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Remove error message hints mentioning configure options" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-07 09:19:14 +0200, Peter Eisentraut wrote:\n>> I think these hints are usually not useful since users will use packaged\n>> distributions and won't be interested in rebuilding their installation from\n>> source.\n\n> +1\n\n+1, those hints are from another era.\n\n> The xml stuff is at least old, but compression_1 isn't. I think we've tried to\n> avoid long alternative output files where everything fails due to being\n> unsupported for a while now? Robert, Dilip? See uses of :skip_test in various\n> tests.\n\nAgreed, we have a better technology now for stub test results. Come\nto think of it, xml.sql really ought to be redone along the same lines.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 09:49:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove error message hints mentioning configure options" } ]
[ { "msg_contents": "Prefetch data referenced by the WAL, take II.\n\nIntroduce a new GUC recovery_prefetch. When enabled, look ahead in the\nWAL and try to initiate asynchronous reading of referenced data blocks\nthat are not yet cached in our buffer pool. For now, this is done with\nposix_fadvise(), which has several caveats. Since not all OSes have\nthat system call, \"try\" is provided so that it can be enabled where\navailable. Better mechanisms for asynchronous I/O are possible in later\nwork.\n\nSet to \"try\" for now for test coverage. Default setting to be finalized\nbefore release.\n\nThe GUC wal_decode_buffer_size limits the distance we can look ahead in\nbytes of decoded data.\n\nThe existing GUC maintenance_io_concurrency is used to limit the number\nof concurrent I/Os allowed, based on pessimistic heuristics used to\ninfer that I/Os have begun and completed. We'll also not look more than\nmaintenance_io_concurrency * 4 block references ahead.\n\nReviewed-by: Julien Rouhaud <rjuju123@gmail.com>\nReviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nReviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (earlier version)\nReviewed-by: Andres Freund <andres@anarazel.de> (earlier version)\nReviewed-by: Justin Pryzby <pryzby@telsasoft.com> (earlier version)\nTested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (earlier version)\nTested-by: Jakub Wartak <Jakub.Wartak@tomtom.com> (earlier version)\nTested-by: Dmitry Dolgov <9erthalion6@gmail.com> (earlier version)\nTested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com> (earlier version)\nDiscussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/5dc0418fab281d017a61a5756240467af982bdfd\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 64 ++\ndoc/src/sgml/monitoring.sgml | 86 +-\ndoc/src/sgml/wal.sgml | 12 +\nsrc/backend/access/transam/Makefile | 1 
+\nsrc/backend/access/transam/xlog.c | 2 +\nsrc/backend/access/transam/xlogprefetcher.c | 1082 +++++++++++++++++++++++++\nsrc/backend/access/transam/xlogreader.c | 27 +-\nsrc/backend/access/transam/xlogrecovery.c | 179 ++--\nsrc/backend/access/transam/xlogutils.c | 27 +-\nsrc/backend/catalog/system_views.sql | 14 +\nsrc/backend/storage/buffer/bufmgr.c | 4 +\nsrc/backend/storage/freespace/freespace.c | 3 +-\nsrc/backend/storage/ipc/ipci.c | 3 +\nsrc/backend/storage/smgr/md.c | 6 +-\nsrc/backend/utils/adt/pgstatfuncs.c | 5 +-\nsrc/backend/utils/misc/guc.c | 55 +-\nsrc/backend/utils/misc/postgresql.conf.sample | 6 +\nsrc/include/access/xlog.h | 1 +\nsrc/include/access/xlogprefetcher.h | 53 ++\nsrc/include/access/xlogreader.h | 8 +\nsrc/include/access/xlogutils.h | 3 +-\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 7 +\nsrc/include/utils/guc.h | 4 +\nsrc/include/utils/guc_tables.h | 1 +\nsrc/test/regress/expected/rules.out | 11 +\nsrc/tools/pgindent/typedefs.list | 6 +\n27 files changed, 1595 insertions(+), 77 deletions(-)", "msg_date": "Thu, 07 Apr 2022 07:44:20 +0000", "msg_from": "Thomas Munro <tmunro@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Prefetch data referenced by the WAL, take II." }, { "msg_contents": "Hi,\n\nOn 2022-04-07 07:44:20 +0000, Thomas Munro wrote:\n> Prefetch data referenced by the WAL, take II.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-04-07%2008%3A17%3A27\nthinks that xlogpretcher.h doesn't include enough...\n\nI had added those checks to CI, but apparently somehow screwed that up.\n\nOr maybe it's the scripts that are screwed up? Because I don't see any error\nchecking in headerscheck/cpluspluscheck. 
And indeed, locally they show the\nerrors, but exit with 0.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 01:36:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." }, { "msg_contents": "\nOn 4/7/22 04:36, Andres Freund wrote:\n> Hi,\n>\n> On 2022-04-07 07:44:20 +0000, Thomas Munro wrote:\n>> Prefetch data referenced by the WAL, take II.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-04-07%2008%3A17%3A27\n> thinks that xlogpretcher.h doesn't include enough...\n>\n> I had added those checks to CI, but apparently somehow screwed that up.\n>\n> Or maybe it's the scripts that are screwed up? Because I don't see any error\n> checking in headerscheck/cpluspluscheck. And indeed, locally they show the\n> errors, but exit with 0.\n\n\nYeah, you can't rely on the exit status, if they produce output that\nshould be regarded as a failure (that's what crake does).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 08:18:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/7/22 04:36, Andres Freund wrote:\n>> Or maybe it's the scripts that are screwed up? Because I don't see any error\n>> checking in headerscheck/cpluspluscheck. And indeed, locally they show the\n>> errors, but exit with 0.\n\n> Yeah, you can't rely on the exit status, if they produce output that\n> should be regarded as a failure (that's what crake does).\n\nYeah, those scripts were only intended to be run by hand. 
If someone\nfeels like upgrading them to also produce a useful exit status,\nI'm for it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 09:53:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." }, { "msg_contents": "On 2022-Apr-07, Thomas Munro wrote:\n\n> Prefetch data referenced by the WAL, take II.\n\nI propose a small wording change in guc.c,\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 9fbbfb1be5..9803741708 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -2840,7 +2840,7 @@ static struct config_int ConfigureNamesInt[] =\n \t{\n \t\t{\"wal_decode_buffer_size\", PGC_POSTMASTER, WAL_RECOVERY,\n \t\t\tgettext_noop(\"Maximum buffer size for reading ahead in the WAL during recovery.\"),\n-\t\t\tgettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks.\"),\n+\t\t\tgettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch data blocks referenced therein.\"),\n \t\t\tGUC_UNIT_BYTE\n \t\t},\n \t\t&wal_decode_buffer_size,\n\n\"referenced blocks\" seems otherwise a bit unclear to me. Other wording\nsuggestions welcome. I first thought of \"...to prefetch referenced data\nblocks\", which is probably OK too.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n", "msg_date": "Sun, 4 Sep 2022 09:54:50 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." 
}, { "msg_contents": "On Sun, Sep 4, 2022 at 7:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Apr-07, Thomas Munro wrote:\n> I propose a small wording change in guc.c,\n>\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index 9fbbfb1be5..9803741708 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -2840,7 +2840,7 @@ static struct config_int ConfigureNamesInt[] =\n> {\n> {\"wal_decode_buffer_size\", PGC_POSTMASTER, WAL_RECOVERY,\n> gettext_noop(\"Maximum buffer size for reading ahead in the WAL during recovery.\"),\n> - gettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks.\"),\n> + gettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch data blocks referenced therein.\"),\n> GUC_UNIT_BYTE\n> },\n> &wal_decode_buffer_size,\n>\n> \"referenced blocks\" seems otherwise a bit unclear to me. Other wording\n> suggestions welcome. I first thought of \"...to prefetch referenced data\n> blocks\", which is probably OK too.\n\nI'd go for your second suggestion. 'therein' doesn't convey much (we\nalready said 'in the WAL', and therein is just a slightly formal and\nsomehow more Germanic way of saying 'in it' which is kinda\nduplicative; maybe 'by it' is what we want but that's still kinda\nimplied already), whereas adding 'data' makes it slightly clearer that\nit's blocks from relations and not, say, the WAL itself.\n\nAlso, hmm, it's not really the 'Maximum buffer size', it's the 'Buffer size'.\n\n\n", "msg_date": "Tue, 13 Sep 2022 09:50:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." }, { "msg_contents": "On 2022-Sep-13, Thomas Munro wrote:\n\n> I'd go for your second suggestion. 
'therein' doesn't convey much (we\n> already said 'in the WAL', and therein is just a slightly formal and\n> somehow more Germanic way of saying 'in it' which is kinda\n> duplicative; maybe 'by it' is what we want but that's still kinda\n> implied already), whereas adding 'data' makes it slightly clearer that\n> it's blocks from relations and not, say, the WAL itself.\n> \n> Also, hmm, it's not really the 'Maximum buffer size', it's the 'Buffer size'.\n\nSounds reasonable. Pushed these changes. Thank you!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n", "msg_date": "Tue, 13 Sep 2022 12:07:40 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prefetch data referenced by the WAL, take II." } ]
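For readers unfamiliar with the mechanism the commit message describes: recovery prefetch issues posix_fadvise(POSIX_FADV_WILLNEED) hints for data blocks referenced in the WAL. Below is a minimal sketch of such a hint — in Python for brevity, not the server's actual C code in xlogprefetcher.c; the block-size constant matches PostgreSQL's default but is hard-coded here as an assumption.

```python
# Illustrative sketch of the posix_fadvise(WILLNEED) hint that recovery
# prefetch relies on, per the commit message above. Not PostgreSQL code.
import os
import tempfile

BLCKSZ = 8192  # PostgreSQL's default block size


def prefetch_block(fd: int, blockno: int) -> None:
    """Advise the kernel that one block will be read soon.

    posix_fadvise is advisory and not available on every OS, which is
    why the server exposes the feature with a "try" setting.
    """
    if hasattr(os, "posix_fadvise"):
        os.posix_fadvise(fd, blockno * BLCKSZ, BLCKSZ, os.POSIX_FADV_WILLNEED)


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        f.write(b"\0" * (BLCKSZ * 4))
        f.flush()
        prefetch_block(f.fileno(), 2)  # hint: block 2 will be read soon
        f.seek(2 * BLCKSZ)
        assert len(f.read(BLCKSZ)) == BLCKSZ
```

Because the call is purely advisory (and may do nothing), the server can only infer that I/Os have begun and completed via heuristics, as the commit message notes.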
[ { "msg_contents": "https://github.com/postgres/postgres/commit/97f73a978fc1aca59c6ad765548ce0096d95a923?diff=split\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=11a30590cb376a24df172198139d758e\n\n\nselect version();\n\n version\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15devel (Ubuntu\n15~~devel~20220329.1030-1~680.git8cd7627.pgdg20.04+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0,\n64-bit\n(1 row)\n\n(END)\n----------------\n\ncreate or replace aggregate array_agg_mult(anycompatiblearray) (\nsfunc = array_cat,\nstype = anycompatiblearray,\ninitcond = '{}'\n);\nSELECT array_agg_mult(i)\nFROM (VALUES (ARRAY[row(1,2),row(3,4)]), (ARRAY[row(5,6),row(7,8)])) as t(i);\n\n\nERROR: 42725: function array_agg_mult(record[]) is not unique\nLINE 1: SELECT array_agg_mult(i)\n ^\nHINT: Could not choose a best candidate function. You might need to add\nexplicit type casts.\nLOCATION: ParseFuncOrColumn, parse_func.c:570\nTime: 1.292 ms\n\n\nI installed postgresql 15, one week ago.\nHow to solve the above mentioned problem. 
Or since it was solved last week,\nI need to update postgresql?\n\n", "msg_date": "Thu, 7 Apr 2022 16:00:21 +0530", "msg_from": "alias <postgres.rocks@gmail.com>", "msg_from_op": true, "msg_subject": "aggregate array broken in postgresql 15." }, { "msg_contents": "Hi\n\nčt 7. 4. 
2022 v 12:30 odesílatel alias <postgres.rocks@gmail.com> napsal:\n\n>\n> https://github.com/postgres/postgres/commit/97f73a978fc1aca59c6ad765548ce0096d95a923?diff=split\n>\n> https://dbfiddle.uk/?rdbms=postgres_14&fiddle=11a30590cb376a24df172198139d758e\n>\n>\n> select version();\n>\n> version\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 15devel (Ubuntu\n> 15~~devel~20220329.1030-1~680.git8cd7627.pgdg20.04+1) on\n> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0,\n> 64-bit\n> (1 row)\n>\n> (END)\n> ----------------\n>\n> create or replace aggregate array_agg_mult(anycompatiblearray) (\n> sfunc = array_cat,\n> stype = anycompatiblearray,\n> initcond = '{}'\n> );\n> SELECT array_agg_mult(i)\n> FROM (VALUES (ARRAY[row(1,2),row(3,4)]), (ARRAY[row(5,6),row(7,8)])) as t(i);\n>\n>\n> ERROR: 42725: function array_agg_mult(record[]) is not unique\n> LINE 1: SELECT array_agg_mult(i)\n> ^\n> HINT: Could not choose a best candidate function. You might need to add\n> explicit type casts.\n> LOCATION: ParseFuncOrColumn, parse_func.c:570\n> Time: 1.292 ms\n>\n>\nI tested your code on my clean pg 15 without any problem\n\nMaybe you have an array_agg_mult with anyarray argument. If yes, then you\nhave to drop this aggregate function.\n\nRegards\n\nPavel\n\n\n\n\n> I installed postgresql 15, one week ago.\n> How to solve the above mentioned problem. Or since it was solved last\n> week, I need to update postgresql?\n>\n>\n
", "msg_date": "Thu, 7 Apr 2022 12:38:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: aggregate array broken in postgresql 15." }, { "msg_contents": "my mistake. I don't know that aggregate can also overload. problem\nsolved....\n\nOn Thu, Apr 7, 2022 at 4:09 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> čt 7. 4. 
2022 v 12:30 odesílatel alias <postgres.rocks@gmail.com> napsal:\n\n>>\n>> https://github.com/postgres/postgres/commit/97f73a978fc1aca59c6ad765548ce0096d95a923?diff=split\n>>\n>> https://dbfiddle.uk/?rdbms=postgres_14&fiddle=11a30590cb376a24df172198139d758e\n>>\n>>\n>> select version();\n>>\n>> version\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> PostgreSQL 15devel (Ubuntu\n>> 15~~devel~20220329.1030-1~680.git8cd7627.pgdg20.04+1) on\n>> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0,\n>> 64-bit\n>> (1 row)\n>>\n>> (END)\n>> ----------------\n>>\n>> create or replace aggregate array_agg_mult(anycompatiblearray) (\n>> sfunc = array_cat,\n>> stype = anycompatiblearray,\n>> initcond = '{}'\n>> );\n>> SELECT array_agg_mult(i)\n>> FROM (VALUES (ARRAY[row(1,2),row(3,4)]), (ARRAY[row(5,6),row(7,8)])) as t(i);\n>>\n>>\n>> ERROR: 42725: function array_agg_mult(record[]) is not unique\n>> LINE 1: SELECT array_agg_mult(i)\n>> ^\n>> HINT: Could not choose a best candidate function. You might need to add\n>> explicit type casts.\n>> LOCATION: ParseFuncOrColumn, parse_func.c:570\n>> Time: 1.292 ms\n>>\n>>\n> I tested your code on my clean pg 15 without any problem\n>\n> Maybe you have an array_agg_mult with anyarray argument. If yes, then you\n> have to drop this aggregate function.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>> I installed postgresql 15, one week ago.\n>> How to solve the above mentioned problem. Or since it was solved last\n>> week, I need to update postgresql?\n>>\n>>\n
", "msg_date": "Thu, 7 Apr 2022 16:21:57 +0530", "msg_from": "alias <postgres.rocks@gmail.com>", "msg_from_op": true, "msg_subject": "Re: aggregate array broken in postgresql 15." } ]
[ { "msg_contents": "Hi,\n\nAttaching a patch to fix a typo in xlogrecovery.c -\ns/GetLogReplayRecPtr/GetXLogReplayRecPtr\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 7 Apr 2022 17:18:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Fix a typo in xlogrecovery.c" }, { "msg_contents": "> On 7 Apr 2022, at 13:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Attaching a patch to fix a typo in xlogrecovery.c -\n> s/GetLogReplayRecPtr/GetXLogReplayRecPtr\n\nApplied, thanks.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 7 Apr 2022 14:04:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix a typo in xlogrecovery.c" } ]
[ { "msg_contents": "Hi,\n\nI think the WAL related functions [1] are misplaced under the \"Backup\nControl Functions\" category in the docs [2]. IMO, they aren't true\nbackup control functions anymore and must be under a separate category\nlike \"WAL Utility Functions\" or some other.\n\nThoughts?\n\n[1] pg_current_wal_flush_lsn, pg_current_wal_insert_lsn,\npg_current_wal_lsn, pg_walfile_name, pg_walfile_name_offset,\npg_switch_wal, pg_wal_lsn_diff\n[2] https://www.postgresql.org/docs/devel/functions-admin.html\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Apr 2022 19:16:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Are pg_current_wal_XXX, pg_walfile_XXX, pg_switch_wal and\n pg_wal_lsn_diff misplaced in docs?" } ]
[ { "msg_contents": "Hi,\n\nI see there's no single function one can rely on to get various\ntimeline IDs [1] the postgres server deals with. We have the following\nfunctions that emit controlfile's timelines\npg_control_checkpoint() - returns\nControlFile->checkPointCopy.ThisTimeLineID and\nControlFile->checkPointCopy.PrevTimeLineID\npg_control_recovery() - returns ControlFile->minRecoveryPointTLI\n\nMost of the times XLogCtl->{InsertTimeLineID, PrevTimeLineID} and\nControlFile->checkPointCopy.{ThisTimeLineID, PrevTimeLineID} may be\nthe same (?).\n\nNo functions emit XLogRecoveryCtl->{lastReplayedTLI, replayEndTLI} and\nWalRcv->{receiveStartTLI, receivedTLI}.\n\nWe may think of letting pg_current_wal_XXX, pg_last_wal_replay_lsn and\npg_last_wal_receive_lsn to return XLogCtl->{InsertTimeLineID,\nPrevTimeLineID}, XLogRecoveryCtl->{lastReplayedTLI, replayEndTLI} and\nWalRcv->{receiveStartTLI, receivedTLI} respectively, but the names of\nthose functions need a change which I don't think is a great idea\ngiven the fact that many client apps, control planes would have used\nthem.\n\nWe have two options:\n1) Having a new set of functions, something like pg_current_wal_tli,\npg_last_wal_replay_tli and pg_last_wal_receive_tli.\n2) A single function, something like pg_get_server_tlis or\npg_get_wal_timelines or some other.\n\nI prefer option (1).\n\nThoughts?\n\n[1]\nXLogCtl->InsertTimeLineID\nXLogCtl->PrevTimeLineID\n\nXLogRecoveryCtl->lastReplayedTLI\nXLogRecoveryCtl->replayEndTLI\n\nWalRcv->receiveStartTLI\nWalRcv->receivedTLI\n\nControlFile->checkPointCopy.ThisTimeLineID\nControlFile->checkPointCopy.PrevTimeLineID\nControlFile->minRecoveryPointTLI\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 7 Apr 2022 19:45:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "New function(s) to get various timelines that the postgres server\n deals with." } ]
[ { "msg_contents": "Running installcheck-world on an unrelated patch, I noticed a failure\nhere in test/isolation/expected/stats_1.out (this is line 3102):\n\nstep s1_slru_check_stats:\n SELECT current.blks_zeroed > before.value\n FROM test_slru_stats before\n INNER JOIN pg_stat_slru current\n ON before.slru = current.name\n WHERE before.stat = 'blks_zeroed';\n\n?column?\n--------\nt\n(1 row)\n\nThis is built from bab588c. On my amd64/linux box the result is f.\n\nThe same mismatch is present if I build from 6392f2a (i.e., just before\na2f433f pgstat: add alternate output for stats.spec), along with\na bunch of others. So a2f433f seems to have silenced all the rest\nof those, but not this one.\n\nIf I build from ad40166, installcheck-world passes. That's as far\nas I have pursued it.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 07 Apr 2022 12:00:49 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": true, "msg_subject": "test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "chap@anastigmatix.net writes:\n> Running installcheck-world on an unrelated patch, I noticed a failure\n> here in test/isolation/expected/stats_1.out (this is line 3102):\n\nSo what non-default build options are you using?\n\nThe only isolationcheck failure remaining in the buildfarm is\nprion's, which I can reproduce here by building with\n-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE as it does.\nLooking at the nature of the diffs, this is not too surprising;\nthe expected output appears to rely on a cache flush not happening\nquickly in s2.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 12:49:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 12:49:07 -0400, Tom Lane wrote:\n> chap@anastigmatix.net writes:\n> > Running installcheck-world on an unrelated patch, I noticed a failure\n> > here in 
test/isolation/expected/stats_1.out (this is line 3102):\n> \n> So what non-default build options are you using?\n> \n> The only isolationcheck failure remaining in the buildfarm is\n> prion's, which I can reproduce here by building with\n> -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE as it does.\n> Looking at the nature of the diffs, this is not too surprising;\n> the expected output appears to rely on a cache flush not happening\n> quickly in s2.\n\nYea :(. I tested debug_discard_caches, but not -DRELCACHE_FORCE_RELEASE\n-DCATCACHE_FORCE_RELEASE.\n\nNot quite sure what to do about it - it's intentionally trying to test the\ncase of no invalidations being processed, as that's an annoying edge case with\nfunctions. Perhaps wrapping the function call of the \"already dropped\"\nfunction in another function that catches the error would do the trick? It'd\nbe more easily silently broken, but still be better than not having the test.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:57:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "On 2022-04-07 12:49, Tom Lane wrote:\n> So what non-default build options are you using?\n\nThe command that I've just been reusing from my bash_history without\nthinking about it for some years is:\n\nconfigure --enable-cassert --enable-tap-tests \\\n --with-libxml --enable-debug \\\n CFLAGS='-ggdb -Og -g3 -fno-omit-frame-pointer'\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 07 Apr 2022 13:16:53 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": true, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 13:16:53 -0400, chap@anastigmatix.net wrote:\n> The command that I've just been reusing from my bash_history without\n> thinking about it for some years is:\n> \n> configure --enable-cassert --enable-tap-tests \\\n> 
--with-libxml --enable-debug \\\n> CFLAGS='-ggdb -Og -g3 -fno-omit-frame-pointer'\n\nHm, that's similar to what I use without seeing the problem.\n\nIIUC you ran installcheck - did you set any non-default config options in the\npostgres instance that runs against? Is anything else running on that\ninstance? Do you use any -j setting when running installcheck-world?\n\nIs the failure reproducible, or a one-off?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 10:29:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 10:29:10 -0700, Andres Freund wrote:\n> On 2022-04-07 13:16:53 -0400, chap@anastigmatix.net wrote:\n> > The command that I've just been reusing from my bash_history without\n> > thinking about it for some years is:\n> > \n> > configure --enable-cassert --enable-tap-tests \\\n> > --with-libxml --enable-debug \\\n> > CFLAGS='-ggdb -Og -g3 -fno-omit-frame-pointer'\n> \n> Hm, that's similar to what I use without seeing the problem.\n> \n> IIUC you ran installcheck - did you set any non-default config options in the\n> postgres instance that runs against? Is anything else running on that\n> instance? Do you use any -j setting when running installcheck-world?\n> \n> Is the failure reproducible, or a one-off?\n\nI've now reproduced this, albeit not reliably yet. Looking.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 11:02:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 11:02:41 -0700, Andres Freund wrote:\n> I've now reproduced this, albeit not reliably yet. Looking.\n\nCaused by me misremembering when deduplication happens - somehow recalled that\ndeduplication didn't happen when payloads are used. 
So the statement that was supposed\nto guarantee needing more than one page:\n SELECT pg_notify('stats_test_use', repeat('0', current_setting('block_size')::int / 2)) FROM generate_series(1, 3);\n\ndidn't actually guarantee that. It just failed to fail by chance.\n\nWhen regression tests and isolation test run in sequence against the same\nfreshly started cluster, the offsets when starting out are just right to not\nneed another page in the first test.\n\nI'll change it to use distinct payloads..\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 11:54:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 11:54:08 -0700, Andres Freund wrote:\n> I'll change it to use distinct payloads..\n\nAnd done. Chap, could you confirm this fixes the issue for you?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 12:04:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "On 2022-04-07 15:04, Andres Freund wrote:\n> And done. Chap, could you confirm this fixes the issue for you?\n\nLooks good from here. One installcheck-world with no failures; \npreviously,\nit failed for me every time.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 07 Apr 2022 17:14:54 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": true, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 09:57:09 -0700, Andres Freund wrote:\n> Yea :(. I tested debug_discard_caches, but not -DRELCACHE_FORCE_RELEASE\n> -DCATCACHE_FORCE_RELEASE.\n> \n> Not quite sure what to do about it - it's intentionally trying to test the\n> case of no invalidations being processed, as that's an annoying edge case with\n> functions. 
Perhaps wrapping the function call of the \"already dropped\"\n> function in another function that catches the error would do the trick? It'd\n> be more easily silently broken, but still be better than not having the test.\n\nAnybody got a better idea?\n\n- Andres\n\n\n", "msg_date": "Thu, 7 Apr 2022 15:02:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-07 09:57:09 -0700, Andres Freund wrote:\n>> Yea :(. I tested debug_discard_caches, but not -DRELCACHE_FORCE_RELEASE\n>> -DCATCACHE_FORCE_RELEASE.\n>> \n>> Not quite sure what to do about it - it's intentionally trying to test the\n>> case of no invalidations being processed, as that's an annoying edge case with\n>> functions. Perhaps wrapping the function call of the \"already dropped\"\n>> function in another function that catches the error would do the trick? It'd\n>> be more easily silently broken, but still be better than not having the test.\n\n> Anybody got a better idea?\n\nMaybe if the wrapper function checks for exactly the two expected\nbehaviors, it'd be robust enough?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Apr 2022 18:31:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" }, { "msg_contents": "Hi,\n\nOn 2022-04-07 18:31:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-04-07 09:57:09 -0700, Andres Freund wrote:\n> >> Yea :(. I tested debug_discard_caches, but not -DRELCACHE_FORCE_RELEASE\n> >> -DCATCACHE_FORCE_RELEASE.\n> >>\n> >> Not quite sure what to do about it - it's intentionally trying to test the\n> >> case of no invalidations being processed, as that's an annoying edge case with\n> >> functions. 
Perhaps wrapping the function call of the \"already dropped\"\n> >> function in another function that catches the error would do the trick? It'd\n> >> be more easily silently broken, but still be better than not having the test.\n>\n> > Anybody got a better idea?\n>\n> Maybe if the wrapper function checks for exactly the two expected\n> behaviors, it'd be robust enough?\n\nSeems to work. If I break the code it's trying to test, it still fails... Of\ncourse only when none of debug_discard_caches, RELCACHE_FORCE_RELEASE,\nCATCACHE_FORCE_RELEASE are used, but that seems unavoidable / harmless. Let's\nsee what the buildfarm thinks.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Apr 2022 18:22:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: test/isolation/expected/stats_1.out broken for me" } ]
[ { "msg_contents": "Hi hackers,\n\nI am splitting this off of a previous thread aimed at reducing archiving\noverhead [0], as I believe this fix might deserve back-patching.\n\nPresently, WAL recycling uses durable_rename_excl(), which notes that a\ncrash at an unfortunate moment can result in two links to the same file.\nMy testing [1] demonstrated that it was possible to end up with two links\nto the same file in pg_wal after a crash just before unlink() during WAL\nrecycling. Specifically, the test produced links to the same file for the\ncurrent WAL file and the next one because the half-recycled WAL file was\nre-recycled upon restarting. This seems likely to lead to WAL corruption.\n\nThe attached patch prevents this problem by using durable_rename() instead\nof durable_rename_excl() for WAL recycling. This removes the protection\nagainst accidentally overwriting an existing WAL file, but there shouldn't\nbe one.\n\nThis patch also sets the stage for reducing archiving overhead (as\ndiscussed in the other thread [0]). The proposed change to reduce\narchiving overhead will make it more likely that the server will attempt to\nre-archive segments after a crash. 
This might lead to archive corruption\nif the server concurrently writes to the same file via the aforementioned\nbug.\n\n[0] https://www.postgresql.org/message-id/20220222011948.GA3850532%40nathanxps13\n[1] https://www.postgresql.org/message-id/20220222173711.GA3852671%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Apr 2022 11:29:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Thu, Apr 7, 2022 at 2:30 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Presently, WAL recycling uses durable_rename_excl(), which notes that a\n> crash at an unfortunate moment can result in two links to the same file.\n> My testing [1] demonstrated that it was possible to end up with two links\n> to the same file in pg_wal after a crash just before unlink() during WAL\n> recycling. Specifically, the test produced links to the same file for the\n> current WAL file and the next one because the half-recycled WAL file was\n> re-recycled upon restarting. This seems likely to lead to WAL corruption.\n\nWow, that's bad.\n\n> The attached patch prevents this problem by using durable_rename() instead\n> of durable_rename_excl() for WAL recycling. This removes the protection\n> against accidentally overwriting an existing WAL file, but there shouldn't\n> be one.\n\nI see that durable_rename_excl() has the following comment: \"Similar\nto durable_rename(), except that this routine tries (but does not\nguarantee) not to overwrite the target file.\" If those are the desired\nsemantics, we could achieve them more simply and more safely by just\ntrying to stat() the target file and then, if it's not found, call\ndurable_rename(). 
I think that would be a heck of a lot safer than\nwhat this function is doing right now.\n\nI'd actually be in favor of nuking durable_rename_excl() from orbit\nand putting the file-exists tests in the callers. Otherwise, someone\nmight assume that it actually has the semantics that its name\nsuggests, which could be pretty disastrous. If we don't want to do\nthat, then I'd changing to do the stat-then-durable-rename thing\ninternally, so we don't leave hard links lying around in *any* code\npath. Perhaps that's the right answer for the back-branches in any\ncase, since there could be third-party code calling this function.\n\nYour proposed fix is OK if we don't want to do any of that stuff, but\npersonally I'm much more inclined to blame durable_rename_excl() for\nbeing horrible than I am to blame the calling code for using it\nimprovidently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 10:38:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 08, 2022 at 10:38:03AM -0400, Robert Haas wrote:\n> I see that durable_rename_excl() has the following comment: \"Similar\n> to durable_rename(), except that this routine tries (but does not\n> guarantee) not to overwrite the target file.\" If those are the desired\n> semantics, we could achieve them more simply and more safely by just\n> trying to stat() the target file and then, if it's not found, call\n> durable_rename(). I think that would be a heck of a lot safer than\n> what this function is doing right now.\n\nIIUC it actually does guarantee that you won't overwrite the target file\nwhen HAVE_WORKING_LINK is defined. If not, it provides no guarantees at\nall. 
Using stat() before rename() would therefore weaken this check for\nsystems with working link(), but it'd probably strengthen it for systems\nwithout a working link().\n\n> I'd actually be in favor of nuking durable_rename_excl() from orbit\n> and putting the file-exists tests in the callers. Otherwise, someone\n> might assume that it actually has the semantics that its name\n> suggests, which could be pretty disastrous. If we don't want to do\n> that, then I'd changing to do the stat-then-durable-rename thing\n> internally, so we don't leave hard links lying around in *any* code\n> path. Perhaps that's the right answer for the back-branches in any\n> case, since there could be third-party code calling this function.\n\nI think there might be another problem. The man page for rename() seems to\nindicate that overwriting an existing file also introduces a window where\nthe old and new path are hard links to the same file. This isn't a problem\nfor the WAL files because we should never be overwriting an existing one,\nbut I wonder if it's a problem for other code paths. My guess is that many\ncode paths that overwrite an existing file are first writing changes to a\ntemporary file before atomically replacing the original. Those paths are\nlikely okay, too, as you can usually just discard any existing temporary\nfiles.\n\n> Your proposed fix is OK if we don't want to do any of that stuff, but\n> personally I'm much more inclined to blame durable_rename_excl() for\n> being horrible than I am to blame the calling code for using it\n> improvidently.\n\nI do agree that it's worth examining this stuff a bit closer. 
I've\nfrequently found myself trying to reason about all the different states\nthat callers of these functions can produce, so any changes that help\nsimplify matters are a win in my book.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 09:53:12 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 08, 2022 at 09:53:12AM -0700, Nathan Bossart wrote:\n> On Fri, Apr 08, 2022 at 10:38:03AM -0400, Robert Haas wrote:\n>> I'd actually be in favor of nuking durable_rename_excl() from orbit\n>> and putting the file-exists tests in the callers. Otherwise, someone\n>> might assume that it actually has the semantics that its name\n>> suggests, which could be pretty disastrous. If we don't want to do\n>> that, then I'd changing to do the stat-then-durable-rename thing\n>> internally, so we don't leave hard links lying around in *any* code\n>> path. Perhaps that's the right answer for the back-branches in any\n>> case, since there could be third-party code calling this function.\n> \n> I think there might be another problem. The man page for rename() seems to\n> indicate that overwriting an existing file also introduces a window where\n> the old and new path are hard links to the same file. This isn't a problem\n> for the WAL files because we should never be overwriting an existing one,\n> but I wonder if it's a problem for other code paths. My guess is that many\n> code paths that overwrite an existing file are first writing changes to a\n> temporary file before atomically replacing the original. Those paths are\n> likely okay, too, as you can usually just discard any existing temporary\n> files.\n\nHa, so there are only a few callers of durable_rename_excl() in the\nPostgreSQL tree. One is basic_archive.c, which is already doing a stat()\ncheck. 
IIRC I only used durable_rename_excl() here to handle the case\nwhere multiple servers are writing archives to the same location. If that\nhappened, the archiver process would begin failing. If a crash left two\nhard links to the same file around, we will silently succeed the next time\naround thanks to the compare_files() check. Besides the WAL installation\ncode, the only other callers are in timeline.c, and both note that the use\nof durable_rename_excl() is for \"paranoidly trying to avoid overwriting an\nexisting file (there shouldn't be one).\"\n\nSo AFAICT basic_archive.c is the only caller with a strong reason for using\ndurable_rename_excl(), and even that might not be worth keeping it around.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 10:05:19 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 08, 2022 at 10:38:03AM -0400, Robert Haas wrote:\n> I'd actually be in favor of nuking durable_rename_excl() from orbit\n> and putting the file-exists tests in the callers. Otherwise, someone\n> might assume that it actually has the semantics that its name\n> suggests, which could be pretty disastrous. If we don't want to do\n> that, then I'd changing to do the stat-then-durable-rename thing\n> internally, so we don't leave hard links lying around in *any* code\n> path. Perhaps that's the right answer for the back-branches in any\n> case, since there could be third-party code calling this function.\n\nI've attached a patch that simply removes durable_rename_excl() and\nreplaces existing calls with durable_rename(). 
I noticed that Andres\nexpressed similar misgivings about durable_rename_excl() last year [0] [1].\nI can create a stat-then-durable-rename version of this for back-patching\nif that is still the route we want to go.\n\n[0] https://postgr.es/me/20210318014812.ds2iz4jz5h7la6un%40alap3.anarazel.de\n[1] https://postgr.es/m/20210318023004.gz2aejhze2kkkqr2%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Apr 2022 12:43:45 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 8, 2022 at 12:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Fri, Apr 08, 2022 at 10:38:03AM -0400, Robert Haas wrote:\n> > I see that durable_rename_excl() has the following comment: \"Similar\n> > to durable_rename(), except that this routine tries (but does not\n> > guarantee) not to overwrite the target file.\" If those are the desired\n> > semantics, we could achieve them more simply and more safely by just\n> > trying to stat() the target file and then, if it's not found, call\n> > durable_rename(). I think that would be a heck of a lot safer than\n> > what this function is doing right now.\n>\n> IIUC it actually does guarantee that you won't overwrite the target file\n> when HAVE_WORKING_LINK is defined. If not, it provides no guarantees at\n> all. Using stat() before rename() would therefore weaken this check for\n> systems with working link(), but it'd probably strengthen it for systems\n> without a working link().\n\nSure, but a guarantee that happens on only some systems isn't worth\nmuch. And, if it comes at the price of potentially having multiple\nhard links to the same file in obscure situations, that seems like it\ncould easily cause more problems than this whole scheme can ever hope\nto solve.\n\n> I think there might be another problem. 
The man page for rename() seems to\n> indicate that overwriting an existing file also introduces a window where\n> the old and new path are hard links to the same file. This isn't a problem\n> for the WAL files because we should never be overwriting an existing one,\n> but I wonder if it's a problem for other code paths. My guess is that many\n> code paths that overwrite an existing file are first writing changes to a\n> temporary file before atomically replacing the original. Those paths are\n> likely okay, too, as you can usually just discard any existing temporary\n> files.\n\nI wonder if this is really true. I thought rename() was supposed to be atomic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 21:00:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 08, 2022 at 09:00:36PM -0400, Robert Haas wrote:\n> > I think there might be another problem. The man page for rename() seems to\n> > indicate that overwriting an existing file also introduces a window where\n> > the old and new path are hard links to the same file. This isn't a problem\n> > for the WAL files because we should never be overwriting an existing one,\n> > but I wonder if it's a problem for other code paths. My guess is that many\n> > code paths that overwrite an existing file are first writing changes to a\n> > temporary file before atomically replacing the original. Those paths are\n> > likely okay, too, as you can usually just discard any existing temporary\n> > files.\n> \n> I wonder if this is really true. 
I thought rename() was supposed to be atomic.\n\nLooks like it's atomic in that it's not like cp + rm, but not atomic the other\nway you want.\n\n| If newpath already exists, it will be atomically replaced, so that there is no point at which another process attempting to access newpath will find it missing. However, there will probably be a window in which\n| both oldpath and newpath refer to the file being renamed.\n\n\n", "msg_date": "Fri, 8 Apr 2022 20:10:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "At Thu, 7 Apr 2022 11:29:54 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> The attached patch prevents this problem by using durable_rename() instead\n> of durable_rename_excl() for WAL recycling. This removes the protection\n> against accidentally overwriting an existing WAL file, but there shouldn't\n> be one.\n\nFrom another direction, if the new segment was the currently active\none, we just mustn't install it. Otherwise we don't care.\n\nSo, the only thing we need to care is segment switch. Without it, the\nsegment that InstallXLogFileSegment found by the stat loop is known to\nbe safe to overwrite even if exists.\n\nWhen segment switch finds an existing file, it's no problem since the\nsegment switch doesn't create a new segment. Otherwise segment switch\nalways calls InstallXLogFileSegment. The section from searching for\nan empty segment slot until calling durable_rename_excl() is protected\nby ControlFileLock. 
Thus if a process is in the section, no other\nprocess can switch to a newly-created segment.\n\nIf this diagnosis is correct, the comment is proved to be paranoid.\n\n\n>\t * Perform the rename using link if available, paranoidly trying to avoid\n>\t * overwriting an existing file (there shouldn't be one).\n\nAs the result, I think Nathan's fix is correct that we can safely use\ndurable_rename() instead.\n\nAnd I propose to use renameat2 on Linux so that we can detect the\ncontradicting case by the regression tests even though only on Linux.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 11 Apr 2022 18:12:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, Apr 11, 2022 at 5:12 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> So, the only thing we need to care is segment switch. Without it, the\n> segment that InstallXLogFileSegment found by the stat loop is known to\n> be safe to overwrite even if exists.\n>\n> When segment switch finds an existing file, it's no problem since the\n> segment switch doesn't create a new segment. Otherwise segment switch\n> always calls InstallXLogFileSegment. The section from searching for\n> an empty segmetn slot until calling durable_rename_excl() is protected\n> by ControlFileLock. Thus if a process is in the section, no other\n> process can switch to a newly-created segment.\n>\n> If this diagnosis is correct, the comment is proved to be paranoid.\n\nIt's sometimes difficult to understand what problems really old code\ncomments are worrying about. For example, could they have been\nworrying about bugs in the code? Could they have been worrying about\nmanual interference with the pg_wal directory? 
It's hard to know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 12:15:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 11, 2022 at 5:12 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> If this diagnosis is correct, the comment is proved to be paranoid.\n\n> It's sometimes difficult to understand what problems really old code\n> comments are worrying about. For example, could they have been\n> worrying about bugs in the code? Could they have been worrying about\n> manual interference with the pg_wal directory? It's hard to know.\n\n\"git blame\" can be helpful here, if you trace back to when the comment\nwas written and then try to find the associated mailing-list discussion.\n(That leap can be difficult for commits pre-dating our current\nconvention of including links in the commit message, but it's usually\nnot *that* hard to locate contemporaneous discussion.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 12:28:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, Apr 11, 2022 at 12:28:47PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Apr 11, 2022 at 5:12 AM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>>> If this diagnosis is correct, the comment is proved to be paranoid.\n> \n>> It's sometimes difficult to understand what problems really old code\n>> comments are worrying about. For example, could they have been\n>> worrying about bugs in the code? Could they have been worrying about\n>> manual interference with the pg_wal directory? 
It's hard to know.\n> \n> \"git blame\" can be helpful here, if you trace back to when the comment\n> was written and then try to find the associated mailing-list discussion.\n> (That leap can be difficult for commits pre-dating our current\n> convention of including links in the commit message, but it's usually\n> not *that* hard to locate contemporaneous discussion.)\n\nI traced this back a while ago. I believe the link() was first added in\nNovember 2000 as part of f0e37a8. This even predates WAL recycling, which\nwas added in July 2001 as part of 7d4d5c0.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 09:52:57 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "At Mon, 11 Apr 2022 09:52:57 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Mon, Apr 11, 2022 at 12:28:47PM -0400, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> On Mon, Apr 11, 2022 at 5:12 AM Kyotaro Horiguchi\n> >> <horikyota.ntt@gmail.com> wrote:\n> >>> If this diagnosis is correct, the comment is proved to be paranoid.\n> > \n> >> It's sometimes difficult to understand what problems really old code\n> >> comments are worrying about. For example, could they have been\n> >> worrying about bugs in the code? Could they have been worrying about\n> >> manual interference with the pg_wal directory? It's hard to know.\n> > \n> > \"git blame\" can be helpful here, if you trace back to when the comment\n> > was written and then try to find the associated mailing-list discussion.\n> > (That leap can be difficult for commits pre-dating our current\n> > convention of including links in the commit message, but it's usually\n> > not *that* hard to locate contemporaneous discussion.)\n> \n> I traced this back a while ago. 
I believe the link() was first added in\n> November 2000 as part of f0e37a8. This even predates WAL recycling, which\n> was added in July 2001 as part of 7d4d5c0.\n\nf0e37a8 lacks discussion.. It introduced the CHECKPOINT command from\nsomewhere out of the ML.. This patch changed XLogFileInit to\nsupport using existent files so that XLogWrite can use the new segment\nprovided by checkpoint and still allow XLogWrite to create a new\nsegment by itself.\n\nJust before the commit, calls to XLogFileInit were protected (or\nserialized) by logwr_lck. At the commit, calls to the same function\nwere still serialized by ControlFileLockId.\n\nI *guess* that Vadim faced/noticed a race condition when he added\ncheckpoint. Thus he introduced the link+remove protocol, but finally it\nbecame useless by moving the call to XLogFileInit within the\nControlFileLockId section. But, of course, all of this story is merely a\nguess.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 Apr 2022 15:46:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Tue, Apr 12, 2022 at 03:46:31PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 11 Apr 2022 09:52:57 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> I traced this back a while ago. I believe the link() was first added in\n>> November 2000 as part of f0e37a8. This even predates WAL recycling, which\n>> was added in July 2001 as part of 7d4d5c0.\n> \n> f0e37a8 lacks discussion.. 
This patch changed XLogFileInit to\n> supportusing existent files so that XLogWrite can use the new segment\n> provided by checkpoint and still allow XLogWrite to create a new\n> segment by itself.\n\nYeah, I've been unable to find any discussion besides a brief reference to\nadding checkpointing [0].\n\n[0] https://postgr.es/m/8F4C99C66D04D4118F580090272A7A23018D85%40sectorbase1.sectorbase.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Apr 2022 09:27:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Fri, Apr 08, 2022 at 09:00:36PM -0400, Robert Haas wrote:\n> On Fri, Apr 8, 2022 at 12:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I think there might be another problem. The man page for rename() seems to\n>> indicate that overwriting an existing file also introduces a window where\n>> the old and new path are hard links to the same file. This isn't a problem\n>> for the WAL files because we should never be overwriting an existing one,\n>> but I wonder if it's a problem for other code paths. My guess is that many\n>> code paths that overwrite an existing file are first writing changes to a\n>> temporary file before atomically replacing the original. Those paths are\n>> likely okay, too, as you can usually just discard any existing temporary\n>> files.\n> \n> I wonder if this is really true. I thought rename() was supposed to be atomic.\n\nNot always. For example, some old versions of MacOS have a non-atomic\nimplementation of rename(), like prairiedog with 10.4. Even 10.5 does\nnot handle atomicity as far as I call. In short, it looks like a bad\nidea to me to rely on this idea at all. 
Some FSes have their own way\nof handling things, as well, but I am not much into this world.\n\nSaying that, it would be nice to see durable_rename_excl() gone as it\nhas created quite a bit of pain for us in the past years.\n--\nMichael", "msg_date": "Mon, 18 Apr 2022 16:48:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, Apr 18, 2022 at 04:48:35PM +0900, Michael Paquier wrote:\n> Saying that, it would be nice to see durable_rename_excl() gone as it\n> has created quite a bit of pain for us in the past years.\n\nYeah, I think this is the right thing to do. Patch upthread [0].\n\nFor back-branches, I suspect we'll want to remove all uses of\ndurable_rename_excl() but leave the function around for any extensions that\nare using it. Of course, we'd also need a big comment imploring folks not\nto add any more callers. Another option would be to change the behavior of\ndurable_rename_excl() to something that we think is safer (e.g., stat then\nrename), but that might just introduce a different set of problems.\n\n[0] https://postgr.es/m/20220408194345.GA1541826%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 11:23:36 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Apr 08, 2022 at 09:00:36PM -0400, Robert Haas wrote:\n>> I wonder if this is really true. I thought rename() was supposed to be atomic.\n\n> Not always. For example, some old versions of MacOS have a non-atomic\n> implementation of rename(), like prairiedog with 10.4. Even 10.5 does\n> not handle atomicity as far as I call.\n\nI think that's not talking about the same thing. 
POSIX requires rename(2)\nto replace an existing target link atomically:\n\n If the link named by the new argument exists, it shall be removed and\n old renamed to new. In this case, a link named new shall remain\n visible to other threads throughout the renaming operation and refer\n either to the file referred to by new or old before the operation\n began.\n\n(It's that requirement that ancient macOS fails to meet.)\n\nHowever, I do not see any text that addresses the question of whether\nthe old link disappears atomically with the appearance of the new link,\nand it seems like that'd be pretty impractical to ensure in cases like\nmoving a link from one directory to another. (What would it even mean\nto say that, considering that a thread can't read the two directories\nat the same instant?) From a crash-safety standpoint, it'd surely be\nbetter to make the new link before removing the old, so I imagine\nthat's what most file systems do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Apr 2022 15:07:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "The readdir interface allows processes to be in the middle of reading\na directory and unless a kernel was happy to either materialize the\nentire directory list when the readdir starts, or lock the entire\ndirectory against modification for the entire time the a process has a\nreaddir fd open it's always going to be possible for the a process to\nhave previously read the old directory entry and later see the new\ndirectory entry. Kernels don't do any MVCC or cmin type of games so\nthey're not going to be able to prevent it.\n\nWhat's worse of course is that it may only happen in very large\ndirectories. Most directories fit on a single block and readdir may\nbuffer up all the entries a block at a time for efficiency. 
So it may\nonly be visible on very large directories that span multiple blocks.\n\n\n", "msg_date": "Mon, 18 Apr 2022 16:53:53 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "Here is an attempt at creating something that can be back-patched. 0001\nsimply replaces calls to durable_rename_excl() with durable_rename() and is\nintended to be back-patched. 0002 removes the definition of\ndurable_rename_excl() and is _not_ intended for back-patching. I imagine\n0002 will need to be held back for v16devel.\n\nI think back-patching 0001 will encounter a couple of small obstacles. For\nexample, the call in basic_archive won't exist on most of the\nback-branches, and durable_rename_excl() was named durable_link_or_rename()\nbefore v13. I don't mind producing a patch for each back-branch if needed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Apr 2022 13:09:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Tue, Apr 26, 2022 at 01:09:35PM -0700, Nathan Bossart wrote:\n> Here is an attempt at creating something that can be back-patched. 0001\n> simply replaces calls to durable_rename_excl() with durable_rename() and is\n> intended to be back-patched. 0002 removes the definition of\n> durable_rename_excl() and is _not_ intended for back-patching. I imagine\n> 0002 will need to be held back for v16devel.\n\nI would not mind applying 0002 on HEAD now to avoid more uses of this\nAPI, and I can get behind 0001 after thinking more about it.\n\n> I think back-patching 0001 will encounter a couple of small obstacles. For\n> example, the call in basic_archive won't exist on most of the\n> back-branches, and durable_rename_excl() was named durable_link_or_rename()\n> before v13. 
I don't mind producing a patch for each back-branch if needed.\n\nI am not sure that we have any need to backpatch this change based on the\nunlikeliness of the problem, TBH. One thing that is itching me a bit,\nlike Robert upthread, is that we don't check anymore that the newfile\ndoes not exist in the code paths because we never expect one. It is\npossible to use stat() for that. But access() within a simple\nassertion would be simpler? Say something like:\nAssert(access(path, F_OK) != 0 && errno == ENOENT);\n\nThe case for basic_archive is limited as the comment of the patch\nstates, but that would be helpful for the two calls in timeline.c and\nthe one in xlog.c in the long-term. And this has no need to be part\nof fd.c, this can be added before the durable_rename() calls. What do\nyou think?\n--\nMichael", "msg_date": "Wed, 27 Apr 2022 16:09:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Wed, Apr 27, 2022 at 04:09:20PM +0900, Michael Paquier wrote:\n> I am not sure that we have any need to backpatch this change based on the\n> unlikeliness of the problem, TBH. One thing that is itching me a bit,\n> like Robert upthread, is that we don't check anymore that the newfile\n> does not exist in the code paths because we never expect one. It is\n> possible to use stat() for that. But access() within a simple\n> assertion would be simpler? Say something like:\n> Assert(access(path, F_OK) != 0 && errno == ENOENT);\n> \n> The case for basic_archive is limited as the comment of the patch\n> states, but that would be helpful for the two calls in timeline.c and\n> the one in xlog.c in the long-term. And this has no need to be part\n> of fd.c, this can be added before the durable_rename() calls. What do\n> you think?\n\nHere is a new patch set with these assertions added. I think at least the\nxlog.c change ought to be back-patched. 
The problem may be unlikely, but\nAFAICT the possible consequences include WAL corruption.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Apr 2022 11:42:04 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Wed, Apr 27, 2022 at 11:42:04AM -0700, Nathan Bossart wrote:\n> Here is a new patch set with these assertions added. I think at least the\n> xlog.c change ought to be back-patched. The problem may be unlikely, but\n> AFAICT the possible consequences include WAL corruption.\n\nOkay, so I have applied this stuff this morning to see what the\nbuildfarm had to say, and we have finished with a set of failures in\nvarious buildfarm members:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-04-28%2002%3A13%3A27\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2022-04-28%2002%3A14%3A08\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2022-04-28%2002%3A59%3A26\n\nAll of them did not like the part where we assume that a TLI history\nfile written by a WAL receiver should not exist beforehand, but as\n025_stuck_on_old_timeline.pl is showing, a standby may attempt to\nretrieve a TLI history file after getting it from the archives.\n\nI was analyzing the whole thing, and it looks like a race condition.\nPer the the buildfarm logs, we have less than 5ms between the moment\nthe startup process retrieves the history file of TLI 2 from the\narchives and the moment the WAL receiver decides to check if this TLI\nfile exists. If it does not exist, it would then retrieve it from the\nprimary via streaming. 
So I guess that the sequence of events is\nthat:\n- In WalRcvFetchTimeLineHistoryFiles(), the WAL receiver checks the\nexistence of the history file for TLI 2, does not find it.\n- The startup process retrieves the file from the archives.\n- The WAL receiver goes through the internal loop of\nWalRcvFetchTimeLineHistoryFiles(), retrieves the history file from the\nprimary's stream.\n\nSwitching from durable_rename_excl() to durable_rename() would mean\nthat we'd overwrite the TLI file received from the primary stream over\nwhat's been retrieved from the archives. That does not strike me as\nan issue in itself and that should be safe, so the comment is\nmisleading, and we can live without the assertion in\nwriteTimeLineHistoryFile() called by the WAL receiver. Now, I think\nthat we'd better keep some belts in writeTimeLineHistory() called by\nthe startup process at the end-of-recovery as I should never ever have\na TLI file generated when selecting a new timeline. Perhaps this\nshould be a elog(ERROR) at least, with a check on the file existence\nbefore calling durable_rename()?\n\nAnyway, my time is constrained next week due to the upcoming Japanese\nGolden Week and the buildfarm has to be stable, so I have reverted the\nchange for now.\n--\nMichael", "msg_date": "Thu, 28 Apr 2022 14:56:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Tue, Apr 12, 2022 at 09:27:42AM -0700, Nathan Bossart wrote:\n> On Tue, Apr 12, 2022 at 03:46:31PM +0900, Kyotaro Horiguchi wrote:\n>> At Mon, 11 Apr 2022 09:52:57 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>>> I traced this back a while ago. I believe the link() was first added in\n>>> November 2000 as part of f0e37a8. This even predates WAL recycling, which\n>>> was added in July 2001 as part of 7d4d5c0.\n>> \n>> f0e37a8 lacks discussion.. 
It introduced the CHECKPOINT command from\n>> somewhere out of the ML.. This patch changed XLogFileInit to\n>> support using existent files so that XLogWrite can use the new segment\n>> provided by checkpoint and still allow XLogWrite to create a new\n>> segment by itself.\n\nYes, I think that you are right here. I also suspect that the\ncheckpoint command was facing a concurrency issue while working on\nthe feature and that Vadim saw that this part of the implementation\nwould be safer in the long run if we use link() followed by unlink().\n\n> Yeah, I've been unable to find any discussion besides a brief reference to\n> adding checkpointing [0].\n> \n> [0] https://postgr.es/m/8F4C99C66D04D4118F580090272A7A23018D85%40sectorbase1.sectorbase.com\n\nWhile looking at the history of this area, I have also noticed this\nargument, telling also that this is a safety measure if this code were\nto run in parallel, but that's without counting on the control file\nlock hold while doing this operation anyway:\nhttps://www.postgresql.org/message-id/24974.982597735@sss.pgh.pa.us\n\nAs mentioned already upthread, f0e37a8 is the origin of the\nlink()/unlink() business in the WAL segment initialization logic, and\nalso note 1f159e5 that has added a rename() as extra code path for\nsystems where link() was not working.\n\nAt the end, switching directly from durable_rename_excl() to\ndurable_rename() should be fine for the WAL segment initialization,\nbut we could do things a bit more carefully by adding a check on the\nfile existence before calling durable_rename() and issue an elog(LOG)\nif a file is found, giving a means for the WAL recycling to give up\npeacefully as it does now. 
Per my analysis, the TLI history file\ncreated at the end of recovery ought to issue an elog(ERROR).\n\nNow, I am surprised by the third code path of durable_rename_excl(),\nas of the WAL receiver doing writeTimeLineHistoryFile(), to not cause\nany issues, as link() should exit with EEXIST when the startup process\ngrabs the same history file concurrently. It seems to me that in this\nlast case using durable_rename() could be an improvement and prevent\nextra WAL receiver restarts as a TLI history fetched from the primary\nvia streaming or from some archives should be the same, but we could\nbe more careful, like the WAL init logic, by skipping the\ndurable_rename() and issuing an elog(LOG). That would not be perfect,\nstill a bit better than the current state of HEAD.\n\nAs we are getting closer to the beta release, it looks safer to let\nthis change aside a bit longer and wait for v16 to be opened for\nbusiness on HEAD.\n--\nMichael", "msg_date": "Sun, 1 May 2022 22:08:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Sun, May 01, 2022 at 10:08:53PM +0900, Michael Paquier wrote:\n> Now, I am surprised by the third code path of durable_rename_excl(),\n> as of the WAL receiver doing writeTimeLineHistoryFile(), to not cause\n> any issues, as link() should exit with EEXIST when the startup process\n> grabs the same history file concurrently. It seems to me that in this\n> last case using durable_rename() could be an improvement and prevent\n> extra WAL receiver restarts as a TLI history fetched from the primary\n> via streaming or from some archives should be the same, but we could\n> be more careful, like the WAL init logic, by skipping the\n> durable_rename() and issuing an elog(LOG). 
That would not be perfect,\n> still a bit better than the current state of HEAD.\n\nSkimming through at the buildfarm logs, it happens that the tests are\nable to see this race from time to time. Here is one such example on\nrorqual:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2022-04-20%2004%3A47%3A58&stg=recovery-check\n\nAnd here are the relevant logs:\n2022-04-20 05:04:19.028 UTC [3109048][startup][:0] LOG: restored log\nfile \"00000002.history\" from archive\n2022-04-20 05:04:19.029 UTC [3109111][walreceiver][:0] LOG: fetching\ntimeline history file for timeline 2 from primary server\n2022-04-20 05:04:19.048 UTC [3109111][walreceiver][:0] FATAL: could\nnot link file \"pg_wal/xlogtemp.3109111\" to \"pg_wal/00000002.history\":\nFile exists\n[...]\n2022-04-20 05:04:19.234 UTC [3109250][walreceiver][:0] LOG: started\nstreaming WAL from primary at 0/3000000 on timeline 2\n\nThe WAL receiver upgrades the ERROR to a FATAL, and restarts\nstreaming shortly after. Using durable_rename() would not be an issue\nhere.\n--\nMichael", "msg_date": "Mon, 2 May 2022 19:48:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Sun, May 01, 2022 at 10:08:53PM +0900, Michael Paquier wrote:\n> At the end, switching directly from durable_rename_excl() to\n> durable_rename() should be fine for the WAL segment initialization,\n> but we could do things a bit more carefully by adding a check on the\n> file existence before calling durable_rename() and issue a elog(LOG)\n> if a file is found, giving a mean for the WAL recycling to give up\n> peacefully as it does now. Per my analysis, the TLI history file\n> created at the end of recovery ought to issue an elog(ERROR).\n\nMy only concern with this approach is that it inevitably introduces a race\ncondition. 
In most cases, the file existence check will prevent\noverwrites, but it might not always. Furthermore, we believe that such\noverwrites either 1) should not happen (e.g., WAL recycling) or 2) won't\ncause problems if they happen (e.g., when the WAL receiver writes the TLI\nhistory file). Also, these races will be difficult to test, so we won't\nknow what breaks when they occur.\n\nMy instinct is to just let the overwrites happen. That way, we are more\nlikely to catch breakage in tests, and we'll have one less race condition\nto worry about. I don't mind asserting that the file doesn't exist when we\ndon't expect it to, as that might help catch potential problems in\ndevelopment without affecting behavior in production. If we do want to\nadd file existence checks, I think we'd better add a comment about the\npotential for race conditions.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 2 May 2022 10:36:57 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, May 02, 2022 at 07:48:18PM +0900, Michael Paquier wrote:\n> Skimming through at the buildfarm logs, it happens that the tests are\n> able to see this race from time to time. 
Here is one such example on\n> rorqual:\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2022-04-20%2004%3A47%3A58&stg=recovery-check\n> \n> And here are the relevant logs:\n> 2022-04-20 05:04:19.028 UTC [3109048][startup][:0] LOG: restored log\n> file \"00000002.history\" from archive\n> 2022-04-20 05:04:19.029 UTC [3109111][walreceiver][:0] LOG: fetching\n> timeline history file for timeline 2 from primary server\n> 2022-04-20 05:04:19.048 UTC [3109111][walreceiver][:0] FATAL: could\n> not link file \"pg_wal/xlogtemp.3109111\" to \"pg_wal/00000002.history\":\n> File exists\n> [...]\n> 2022-04-20 05:04:19.234 UTC [3109250][walreceiver][:0] LOG: started\n> streaming WAL from primary at 0/3000000 on timeline 2\n> \n> The WAL receiver upgrades the ERROR to a FATAL, and restarts\n> streaming shortly after. Using durable_rename() would not be an issue\n> here.\n\nThanks for investigating this one. I think I agree that we should simply\nswitch to durable_rename() (without a file existence check beforehand).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 2 May 2022 10:39:07 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, May 02, 2022 at 10:39:07AM -0700, Nathan Bossart wrote:\n> On Mon, May 02, 2022 at 07:48:18PM +0900, Michael Paquier wrote:\n>> The WAL receiver upgrades the ERROR to a FATAL, and restarts\n>> streaming shortly after. Using durable_rename() would not be an issue\n>> here.\n> \n> Thanks for investigating this one. I think I agree that we should simply\n> switch to durable_rename() (without a file existence check beforehand).\n\nHere is a new patch set. For now, I've only removed the file existence\ncheck in writeTimeLineHistoryFile(). 
I don't know if I'm totally convinced\nthat there isn't a problem here (e.g., due to concurrent .ready file\ncreation), but since some platforms have been using rename() for some time,\nI don't know how worried we should be. I thought about adding some kind of\nlocking between the WAL receiver and startup processes, but that seems\nexcessive. Alternatively, we could just fix xlog.c as proposed earlier\n[0]. AFAICT that is the only caller that can experience problems due to\nthe multiple-hard-link issue. All other callers are simply renaming a\ntemporary file into place, and the temporary file can be discarded if left\nbehind after a crash.\n\n[0] https://postgr.es/m/20220407182954.GA1231544%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 2 May 2022 16:06:13 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Mon, May 02, 2022 at 04:06:13PM -0700, Nathan Bossart wrote:\n> Here is a new patch set. For now, I've only removed the file existence\n> check in writeTimeLineHistoryFile(). I don't know if I'm totally convinced\n> that there isn't a problem here (e.g., due to concurrent .ready file\n> creation), but since some platforms have been using rename() for some time,\n> I don't know how worried we should be.\n\nThat's only about Windows these days, meaning that there is much less\ncoverage in this code path.\n\n> I thought about adding some kind of\n> locking between the WAL receiver and startup processes, but that seems\n> excessive.\n\nAgreed.\n\n> Alternatively, we could just fix xlog.c as proposed earlier\n> [0]. AFAICT that is the only caller that can experience problems due to\n> the multiple-hard-link issue. 
All other callers are simply renaming a\n> temporary file into place, and the temporary file can be discarded if left\n> behind after a crash.\n\nI'd agree with removing all the callers at the end. pgrename() is\nquite robust on Windows, but I'd keep the two checks in\nwriteTimeLineHistory(), as the logic around findNewestTimeLine() would\nconsider a past TLI history file as in-use even if we have a crash\njust after the file got created in the same path by the same standby,\nand the WAL segment init part. Your patch does that.\n--\nMichael", "msg_date": "Thu, 5 May 2022 20:10:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Thu, May 05, 2022 at 08:10:02PM +0900, Michael Paquier wrote:\n> I'd agree with removing all the callers at the end. pgrename() is\n> quite robust on Windows, but I'd keep the two checks in\n> writeTimeLineHistory(), as the logic around findNewestTimeLine() would\n> consider a past TLI history file as in-use even if we have a crash\n> just after the file got created in the same path by the same standby,\n> and the WAL segment init part. Your patch does that.\n\nAs v16 is now open for business, I have revisited this change and\napplied 0001 to change all the callers (aka removal of the assertion\nfor the WAL receiver when it overwrites a TLI history file). The\ncommit log includes details about the reasoning of all the areas\nchanged, for clarity, as of the WAL recycling part, the TLI history\nfile part and basic_archive. 
\n--\nMichael", "msg_date": "Tue, 5 Jul 2022 10:19:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Tue, Jul 05, 2022 at 10:19:49AM +0900, Michael Paquier wrote:\n> On Thu, May 05, 2022 at 08:10:02PM +0900, Michael Paquier wrote:\n>> I'd agree with removing all the callers at the end. pgrename() is\n>> quite robust on Windows, but I'd keep the two checks in\n>> writeTimeLineHistory(), as the logic around findNewestTimeLine() would\n>> consider a past TLI history file as in-use even if we have a crash\n>> just after the file got created in the same path by the same standby,\n>> and the WAL segment init part. Your patch does that.\n> \n> As v16 is now open for business, I have revisited this change and\n> applied 0001 to change all the callers (aka removal of the assertion\n> for the WAL receiver when it overwrites a TLI history file). The\n> commit log includes details about the reasoning of all the areas\n> changed, for clarity, as of the WAL recycling part, the TLI history\n> file part and basic_archive. \n\nThanks! I wonder if we should add a comment in writeTimeLineHistoryFile()\nabout possible concurrent use by a WAL receiver and the startup process and\nwhy that is okay.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Jul 2022 09:58:38 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" }, { "msg_contents": "On Tue, Jul 05, 2022 at 09:58:38AM -0700, Nathan Bossart wrote:\n> Thanks! I wonder if we should add a comment in writeTimeLineHistoryFile()\n> about possible concurrent use by a WAL receiver and the startup process and\n> why that is okay.\n\nAgreed. 
Adding an extra note at the top of the routine would help in\nthe future.\n--\nMichael", "msg_date": "Wed, 6 Jul 2022 08:57:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: avoid multiple hard links to same WAL file after a crash" } ]
[ { "msg_contents": "Dear Sir or Madam,\n\nMy name is Yedil Serzhan, a Computer Science MSc student at the University\nof Freiburg. I have had several experiences with PostgreSQL for full-stack\ndevelopment. Among all the databases I have used, PostgreSQL gives me the\nmost comfortable user experience. Recently I'm trying to contribute to open\nsource projects, and I was very happy to find PostgreSQL in GSOC this time.\n\nIn the list of ideas from your PostgreSQL organization, I think there are\nseveral ideas I like, such as \"Improve pgarchives\", “Database Load Stress\nBenchmark\", and \"New and improved website for pgjdbc\" and so on.\n\nI used these days to get a preliminary understanding of the pgarchives\ncodebase, deployed the pgarchives project, and wrote a preliminary project\nproposal based on my understanding. Please take a look, the link to Google\ndocs of the proposal is here\n<https://docs.google.com/document/d/1XjIFAnGmzc6obqhmJUNBd0UwCnfbuDdtNcHnTHL4eDQ/edit?usp=sharing>,\nand I've opened up the comments feature for those of you who want to leave\ncomments.\n\nI'd appreciate it if you could give me some guidance. Thank you in advance!\nLooking forward to your reply!\n\nBest regards,\nYedil", "msg_date": "Fri, 8 Apr 2022 02:40:44 +0600", "msg_from": "Yedil Serzhan <edilserjan@gmail.com>", "msg_from_op": true, "msg_subject": "GSOC proposal for Improve pgarchives by Yedil" }, { "msg_contents": "\nForwarded to www, where I think this is more relevant.\n\n---------------------------------------------------------------------------\n\nOn Fri, Apr 8, 2022 at 02:40:44AM +0600, Yedil Serzhan wrote:\n> Dear Sir or Madam,\n> \n> My name is Yedil Serzhan, a Computer Science MSc student at the University of\n> Freiburg. I have had several experiences with PostgreSQL for full-stack\n> development. Among all the databases I have used, PostgreSQL gives me the most\n> comfortable user experience. 
Recently I'm trying to contribute to open source\n> projects, and I was very happy to find PostgreSQL in GSOC this time.\n> \n> In the list of ideas from your PostgreSQL organization, I think there are\n> several ideas I like, such as \"Improve pgarchives\", “Database Load Stress\n> Benchmark\", and \"New and improved website for pgjdbc\"  and so on.\n> \n> I used these days to get a preliminary understanding of the pgarchives\n> codebase, deployed the pgarchives project, and wrote a preliminary project\n> proposal based on my understanding. Please take a look, the link to Google docs\n> of the proposal is here, and I've opened up the comments feature for those of\n> you who want to leave comments. \n> \n> I'd appreciate it if you could give me some guidance. Thank you in advance!\n> Looking forward to your reply!\n> \n> Best regards,\n> Yedil\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Mon, 18 Apr 2022 16:10:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: GSOC proposal for Improve pgarchives by Yedil" } ]
[ { "msg_contents": "Add minimal tests for recovery conflict handling.\n\nPreviously none of our tests triggered recovery conflicts. The test is\nprimarily motivated by needing tests for recovery conflict stats for shared\nmemory based pgstats. But it's also a decent start for recovery conflict\nhandling in general.\n\nThe only type of recovery conflict not tested yet are recovery deadlock\nconflicts.\n\nBy configuring log_recovery_conflict_waits the test adds some very minimal\ntesting for that path as well.\n\nAuthor: Melanie Plageman <melanieplageman@gmail.com>\nAuthor: Andres Freund <andres@anarazel.de>\nDiscussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/9f8a050f68dcb38fb0a1ea87e0e5d04df32b56f4\n\nModified Files\n--------------\nsrc/test/recovery/t/031_recovery_conflict.pl | 296 +++++++++++++++++++++++++++\n1 file changed, 296 insertions(+)", "msg_date": "Thu, 07 Apr 2022 21:54:57 +0000", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pgsql: Add minimal tests for recovery conflict handling." 
}, { "msg_contents": "[ starting a new thread cuz the shared-stats one is way too long ]\n\nAndres Freund <andres@anarazel.de> writes:\n> Add minimal tests for recovery conflict handling.\n\nIt's been kind of hidden by other buildfarm noise, but\n031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n\nThree of those failures look like\n\n[11:08:46.806](105.129s) ok 1 - buffer pin conflict: cursor with conflicting pin established\nWaiting for replication conn standby's replay_lsn to pass 0/33EF190 on primary\n[12:01:49.614](3182.807s) # poll_query_until timed out executing this query:\n# SELECT '0/33EF190' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for catchup at t/031_recovery_conflict.pl line 123.\n\nIn each of these examples we can see in the standby's log that it\ndetected the expected buffer pin conflict:\n\n2022-04-27 11:08:46.353 UTC [1961604][client backend][2/2:0] LOG: statement: BEGIN;\n2022-04-27 11:08:46.416 UTC [1961604][client backend][2/2:0] LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n2022-04-27 11:08:46.730 UTC [1961604][client backend][2/2:0] LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;\n2022-04-27 11:08:47.825 UTC [1961298][startup][1/0:0] LOG: recovery still waiting after 13.367 ms: recovery conflict on buffer pin\n2022-04-27 11:08:47.825 UTC [1961298][startup][1/0:0] CONTEXT: WAL redo at 0/33E6E80 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n\nbut then nothing happens until the script times out and kills the test.\nI think this is showing us a real bug, ie we sometimes fail to cancel\nthe conflicting query.\n\nThe other example [3] looks different:\n\n[01:02:43.582](2.357s) ok 1 - buffer pin conflict: 
cursor with conflicting pin established\nWaiting for replication conn standby's replay_lsn to pass 0/342C000 on primary\ndone\n[01:02:43.747](0.165s) ok 2 - buffer pin conflict: logfile contains terminated connection due to recovery conflict\n[01:02:43.804](0.057s) not ok 3 - buffer pin conflict: stats show conflict on standby\n[01:02:43.805](0.000s) \n[01:02:43.805](0.000s) # Failed test 'buffer pin conflict: stats show conflict on standby'\n# at t/031_recovery_conflict.pl line 295.\n[01:02:43.805](0.000s) # got: '0'\n# expected: '1'\n\nNot sure what to make of that --- could there be a race condition in the\nreporting of the conflict?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-04-27%2007%3A16%3A51\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2022-04-21%2021%3A20%3A15\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2022-04-13%2022%3A45%3A30\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-04-11%2005%3A40%3A41\n\n\n", "msg_date": "Wed, 27 Apr 2022 12:45:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Unstable tests for recovery conflict handling" }, { "msg_contents": "\n\n> On Apr 27, 2022, at 9:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> [ starting a new thread cuz the shared-stats one is way too long ]\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> Add minimal tests for recovery conflict handling.\n> \n> It's been kind of hidden by other buildfarm noise, but\n> 031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n> \n> Three of those failures look like\n\nInteresting,\n\nI have been getting failures on REL_14_STABLE:\n\nt/012_subtransactions.pl ............. 
11/12 \n# Failed test 'Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby'\n# at t/012_subtransactions.pl line 211.\n# got: '3'\n# expected: '0'\nt/012_subtransactions.pl ............. 12/12 # Looks like you failed 1 test of 12.\nt/012_subtransactions.pl ............. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/12 subtests \n\nAnd the logs, tmp_check/log/regress_log_012_subtransactions, showing:\n\n### Enabling streaming replication for node \"primary\"\n### Starting node \"primary\"\n# Running: pg_ctl -D /Users/mark.dilger/recovery_test/postgresql/src/test/recovery/tmp_check/t_012_subtransactions_primary_data/pgdata -l /Users/mark.dilger/recovery_test/postgresql/src/test/recovery/tmp_check/log/012_subtransactions_primary.log -o --cluster-name=primary start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"primary\" is 46270\npsql:<stdin>:1: ERROR: prepared transaction with identifier \"xact_012_1\" does not exist\nnot ok 11 - Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby\n\n# Failed test 'Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby'\n# at t/012_subtransactions.pl line 211.\n# got: '3'\n# expected: '0'\n\n\nThis is quite consistent for me, but only when I configure with --enable-coverage and --enable-dtrace. 
(I haven't yet tried one of those without the other.)\n\nI wasn't going to report this yet, having not yet completely narrowed this down, but I wonder if anybody else is seeing this?\n\nI'll try again on master....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 Apr 2022 10:11:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "\n\n> On Apr 27, 2022, at 10:11 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I'll try again on master....\n\nStill with coverage and dtrace enabled, I get the same thing, except that master formats the logs a bit differently:\n\n# Postmaster PID for node \"primary\" is 19797\npsql:<stdin>:1: ERROR: prepared transaction with identifier \"xact_012_1\" does not exist\n[10:26:16.314](1.215s) not ok 11 - Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby\n[10:26:16.314](0.000s)\n[10:26:16.314](0.000s) # Failed test 'Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby'\n[10:26:16.314](0.000s) # at t/012_subtransactions.pl line 208.\n[10:26:16.314](0.000s) # got: '3'\n# expected: '0'\n\n\nWith coverage but not dtrace enabled, I still get the error, though the log leading up to the error now has a bunch of coverage noise lines like:\n\nprofiling: /Users/mark.dilger/recovery_test/postgresql/src/backend/utils/sort/tuplesort.gcda: cannot merge previous GCDA file: corrupt arc tag\n\nThe error itself looks the same except the timing numbers differ a little.\n\n\nWith neither enabled, all tests pass.\n\n\nI'm inclined to think that either the recovery code or the test have a race condition, and that enabling coverage causes the race to come out differently. 
I'll keep poking....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 Apr 2022 10:50:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "I wrote:\n> It's been kind of hidden by other buildfarm noise, but\n> 031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n> ...\n> I think this is showing us a real bug, ie we sometimes fail to cancel\n> the conflicting query.\n\nAfter digging around in the code, I think this is almost certainly\nsome manifestation of the previously-complained-of problem [1] that\nRecoveryConflictInterrupt is not safe to call in a signal handler,\nleading the conflicting backend to sometimes decide that it's not\nthe problem. That squares with the observation that skink is more\nprone to show this than other animals: you'd have to get the SIGUSR1\nwhile the target backend isn't idle, so a very slow machine ought to\nshow it more. 
We don't seem to have that issue on the open items\nlist, but I'll go add it.\n\nNot sure if the 'buffer pin conflict: stats show conflict on standby'\nfailure could trace to a similar cause.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO%3DnEYKSg%40mail.gmail.com\n\n\n", "msg_date": "Wed, 27 Apr 2022 14:08:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-04-27 10:11:53 -0700, Mark Dilger wrote:\n> \n> \n> > On Apr 27, 2022, at 9:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > [ starting a new thread cuz the shared-stats one is way too long ]\n> > \n> > Andres Freund <andres@anarazel.de> writes:\n> >> Add minimal tests for recovery conflict handling.\n> > \n> > It's been kind of hidden by other buildfarm noise, but\n> > 031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n> > \n> > Three of those failures look like\n> \n> Interesting,\n> \n> I have been getting failures on REL_14_STABLE:\n> \n> t/012_subtransactions.pl ............. 11/12 \n> # Failed test 'Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby'\n> # at t/012_subtransactions.pl line 211.\n> # got: '3'\n> # expected: '0'\n> t/012_subtransactions.pl ............. 12/12 # Looks like you failed 1 test of 12.\n> t/012_subtransactions.pl ............. 
Dubious, test returned 1 (wstat 256, 0x100)\n> Failed 1/12 subtests \n> \n> And the logs, tmp_check/log/regress_log_012_subtransactions, showing:\n\nI'm a bit confused - what's the relation of that failure to this thread / the\ntests / this commit?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Apr 2022 18:26:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-04-27 14:08:45 -0400, Tom Lane wrote:\n> I wrote:\n> > It's been kind of hidden by other buildfarm noise, but\n> > 031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n> > ...\n> > I think this is showing us a real bug, ie we sometimes fail to cancel\n> > the conflicting query.\n> \n> After digging around in the code, I think this is almost certainly\n> some manifestation of the previously-complained-of problem [1] that\n> RecoveryConflictInterrupt is not safe to call in a signal handler,\n> leading the conflicting backend to sometimes decide that it's not\n> the problem.\n\nI think at least some may actually be because of\nhttps://postgr.es/m/20220409045515.35ypjzddp25v72ou%40alap3.anarazel.de\nrather than RecoveryConflictInterrupt itself.\n\nI'll go an finish up the comment bits that I still need to clean up in my\nbugix and commit that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Apr 2022 18:31:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "\n\n> On Apr 27, 2022, at 6:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> I'm a bit confused - what's the relation of that failure to this thread / the\n> tests / this commit?\n\nNone, upon further reflection. It turned out to be unrelated. 
Sorry for the noise.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 Apr 2022 08:42:38 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "On Thu, Apr 28, 2022 at 5:50 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> psql:<stdin>:1: ERROR: prepared transaction with identifier \"xact_012_1\" does not exist\n> [10:26:16.314](1.215s) not ok 11 - Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby\n> [10:26:16.314](0.000s)\n> [10:26:16.314](0.000s) # Failed test 'Rollback of PGPROC_MAX_CACHED_SUBXIDS+ prepared transaction on promoted standby'\n> [10:26:16.314](0.000s) # at t/012_subtransactions.pl line 208.\n> [10:26:16.314](0.000s) # got: '3'\n> # expected: '0'\n\nFWIW I see that on my FBSD/clang system when I build with\n-fsanitize=undefined -fno-sanitize-recover=all. It's something to do\nwith our stack depth detection and tricks being used by -fsanitize,\nbecause there's a stack depth exceeded message in the log.\n\n\n", "msg_date": "Wed, 11 May 2022 17:28:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "I wrote:\n>> It's been kind of hidden by other buildfarm noise, but\n>> 031_recovery_conflict.pl is not as stable as it should be [1][2][3][4].\n\n> After digging around in the code, I think this is almost certainly\n> some manifestation of the previously-complained-of problem [1] that\n> RecoveryConflictInterrupt is not safe to call in a signal handler,\n> leading the conflicting backend to sometimes decide that it's not\n> the problem.\n\nI happened to notice that while skink continues to fail off-and-on\nin 031_recovery_conflict.pl, the symptoms have changed! 
What\nwe're getting now typically looks like [1]:\n\n[10:45:11.475](0.023s) ok 14 - startup deadlock: lock acquisition is waiting\nWaiting for replication conn standby's replay_lsn to pass 0/33FB8B0 on primary\ndone\ntimed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl line 367.\n\nwhere absolutely nothing happens in the standby log, until we time out:\n\n2022-07-24 10:45:11.452 UTC [1468367][client backend][2/4:0] LOG: statement: SELECT * FROM test_recovery_conflict_table2;\n2022-07-24 10:45:11.472 UTC [1468547][client backend][3/2:0] LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n2022-07-24 10:48:15.860 UTC [1468362][walreceiver][:0] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly\n\nSo this is not a case of RecoveryConflictInterrupt doing the wrong thing:\nthe startup process hasn't detected the buffer conflict in the first\nplace. Don't know what to make of that, but I vaguely suspect a test\ntiming problem. gull has shown this once as well, although at a different\nstep in the script [2].\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-07-24%2007%3A00%3A29\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gull&dt=2022-07-23%2009%3A34%3A54\n\n\n", "msg_date": "Tue, 26 Jul 2022 13:57:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-07-26 13:57:53 -0400, Tom Lane wrote:\n> I happened to notice that while skink continues to fail off-and-on\n> in 031_recovery_conflict.pl, the symptoms have changed! 
What\n> we're getting now typically looks like [1]:\n> \n> [10:45:11.475](0.023s) ok 14 - startup deadlock: lock acquisition is waiting\n> Waiting for replication conn standby's replay_lsn to pass 0/33FB8B0 on primary\n> done\n> timed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl line 367.\n> \n> where absolutely nothing happens in the standby log, until we time out:\n> \n> 2022-07-24 10:45:11.452 UTC [1468367][client backend][2/4:0] LOG: statement: SELECT * FROM test_recovery_conflict_table2;\n> 2022-07-24 10:45:11.472 UTC [1468547][client backend][3/2:0] LOG: statement: SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n> 2022-07-24 10:48:15.860 UTC [1468362][walreceiver][:0] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly\n> \n> So this is not a case of RecoveryConflictInterrupt doing the wrong thing:\n> the startup process hasn't detected the buffer conflict in the first\n> place.\n\nI wonder if this, at least partially, could be be due to the elog thing\nI was complaining about nearby. I.e. we decide to FATAL as part of a\nrecovery conflict interrupt, and then during that ERROR out as part of\nanother recovery conflict interrupt (because nothing holds interrupts as\npart of FATAL).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Jul 2022 11:16:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-26 13:57:53 -0400, Tom Lane wrote:\n>> So this is not a case of RecoveryConflictInterrupt doing the wrong thing:\n>> the startup process hasn't detected the buffer conflict in the first\n>> place.\n\n> I wonder if this, at least partially, could be be due to the elog thing\n> I was complaining about nearby. I.e. 
we decide to FATAL as part of a\n> recovery conflict interrupt, and then during that ERROR out as part of\n> another recovery conflict interrupt (because nothing holds interrupts as\n> part of FATAL).\n\nThere are all sorts of things one could imagine going wrong in the\nbackend receiving the recovery conflict interrupt, but AFAICS in these\nfailures, the startup process hasn't sent a recovery conflict interrupt.\nIt certainly hasn't logged anything suggesting it noticed a conflict.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jul 2022 14:30:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-07-26 14:30:30 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-07-26 13:57:53 -0400, Tom Lane wrote:\n> >> So this is not a case of RecoveryConflictInterrupt doing the wrong thing:\n> >> the startup process hasn't detected the buffer conflict in the first\n> >> place.\n> \n> > I wonder if this, at least partially, could be be due to the elog thing\n> > I was complaining about nearby. I.e. we decide to FATAL as part of a\n> > recovery conflict interrupt, and then during that ERROR out as part of\n> > another recovery conflict interrupt (because nothing holds interrupts as\n> > part of FATAL).\n> \n> There are all sorts of things one could imagine going wrong in the\n> backend receiving the recovery conflict interrupt, but AFAICS in these\n> failures, the startup process hasn't sent a recovery conflict interrupt.\n> It certainly hasn't logged anything suggesting it noticed a conflict.\n\nI don't think we reliably emit a log message before the recovery\nconflict is resolved.\n\nI've wondered a couple times now about making tap test timeouts somehow\ntrigger a core dump of all processes. 
Certainly would make it easier to\ndebug some of these kinds of issues.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Jul 2022 13:03:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-07-26 13:03:54 -0700, Andres Freund wrote:\n> On 2022-07-26 14:30:30 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-07-26 13:57:53 -0400, Tom Lane wrote:\n> > >> So this is not a case of RecoveryConflictInterrupt doing the wrong thing:\n> > >> the startup process hasn't detected the buffer conflict in the first\n> > >> place.\n> >\n> > > I wonder if this, at least partially, could be be due to the elog thing\n> > > I was complaining about nearby. I.e. we decide to FATAL as part of a\n> > > recovery conflict interrupt, and then during that ERROR out as part of\n> > > another recovery conflict interrupt (because nothing holds interrupts as\n> > > part of FATAL).\n> >\n> > There are all sorts of things one could imagine going wrong in the\n> > backend receiving the recovery conflict interrupt, but AFAICS in these\n> > failures, the startup process hasn't sent a recovery conflict interrupt.\n> > It certainly hasn't logged anything suggesting it noticed a conflict.\n>\n> I don't think we reliably emit a log message before the recovery\n> conflict is resolved.\n\nI played around trying to reproduce this kind of issue.\n\nOne way to quickly run into trouble on a slow system is that\nResolveRecoveryConflictWithVirtualXIDs() can end up sending signals more\nfrequently than the target can process them. The next signal can arrive by the\ntime SIGUSR1 processing finished, which, at least on linux, causes the queued\nsignal to immediately be processed, without \"normal\" postgres code gaining\ncontrol.\n\nThe reason nothing might get logged in some cases is that\ne.g. 
ResolveRecoveryConflictWithLock() tells\nResolveRecoveryConflictWithVirtualXIDs() to *not* report the waiting:\n\t\t/*\n\t\t * Prevent ResolveRecoveryConflictWithVirtualXIDs() from reporting\n\t\t * \"waiting\" in PS display by disabling its argument report_waiting\n\t\t * because the caller, WaitOnLock(), has already reported that.\n\t\t */\n\nso ResolveRecoveryConflictWithLock() can end up looping indefinitely without\nlogging anything.\n\n\n\nAnother question I have about ResolveRecoveryConflictWithLock() is whether\nit's ok that we don't check deadlocks around the\nResolveRecoveryConflictWithVirtualXIDs() call? It might be ok, because we'd\nonly block if there's a recovery conflict, in which killing the process ought\nto succeed?\n\n\nI think there's also might be a problem with the wait loop in ProcSleep() wrt\nrecovery conflicts: We rely on interrupts to be processed to throw recovery\nconflict errors, but ProcSleep() is called in a bunch of places with\ninterrupts held. An Assert(INTERRUPTS_CAN_BE_PROCESSED()) after releasing the\npartition lock triggers a bunch. 
It's possible that these aren't problematic\ncases for recovery conflicts, because they're all around extension locks:\n\n#2 0x0000562032f1968d in ExceptionalCondition (conditionName=0x56203310623a \"INTERRUPTS_CAN_BE_PROCESSED()\", errorType=0x562033105f6c \"FailedAssertion\",\n fileName=0x562033105f30 \"/home/andres/src/postgresql/src/backend/storage/lmgr/proc.c\", lineNumber=1208)\n at /home/andres/src/postgresql/src/backend/utils/error/assert.c:69\n#3 0x0000562032d50f41 in ProcSleep (locallock=0x562034cafaf0, lockMethodTable=0x562033281740 <default_lockmethod>)\n at /home/andres/src/postgresql/src/backend/storage/lmgr/proc.c:1208\n#4 0x0000562032d3e2ce in WaitOnLock (locallock=0x562034cafaf0, owner=0x562034d12c58) at /home/andres/src/postgresql/src/backend/storage/lmgr/lock.c:1859\n#5 0x0000562032d3cd0a in LockAcquireExtended (locktag=0x7ffc7b4d0810, lockmode=7, sessionLock=false, dontWait=false, reportMemoryError=true, locallockp=0x0)\n at /home/andres/src/postgresql/src/backend/storage/lmgr/lock.c:1101\n#6 0x0000562032d3c1c4 in LockAcquire (locktag=0x7ffc7b4d0810, lockmode=7, sessionLock=false, dontWait=false)\n at /home/andres/src/postgresql/src/backend/storage/lmgr/lock.c:752\n#7 0x0000562032d3a696 in LockRelationForExtension (relation=0x7f54646b1dd8, lockmode=7) at /home/andres/src/postgresql/src/backend/storage/lmgr/lmgr.c:439\n#8 0x0000562032894276 in _bt_getbuf (rel=0x7f54646b1dd8, blkno=4294967295, access=2) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtpage.c:975\n#9 0x000056203288f1cb in _bt_split (rel=0x7f54646b1dd8, itup_key=0x562034ea7428, buf=770, cbuf=0, newitemoff=408, newitemsz=16, newitem=0x562034ea3fc8,\n orignewitem=0x0, nposting=0x0, postingoff=0) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtinsert.c:1715\n#10 0x000056203288e4bb in _bt_insertonpg (rel=0x7f54646b1dd8, itup_key=0x562034ea7428, buf=770, cbuf=0, stack=0x562034ea1fb8, itup=0x562034ea3fc8, itemsz=16,\n newitemoff=408, postingoff=0, 
split_only_page=false) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtinsert.c:1212\n#11 0x000056203288caf9 in _bt_doinsert (rel=0x7f54646b1dd8, itup=0x562034ea3fc8, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, heapRel=0x7f546823dde0)\n at /home/andres/src/postgresql/src/backend/access/nbtree/nbtinsert.c:258\n#12 0x000056203289851f in btinsert (rel=0x7f54646b1dd8, values=0x7ffc7b4d0c50, isnull=0x7ffc7b4d0c30, ht_ctid=0x562034dd083c, heapRel=0x7f546823dde0,\n checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x562034ea71c0) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtree.c:200\n#13 0x000056203288710b in index_insert (indexRelation=0x7f54646b1dd8, values=0x7ffc7b4d0c50, isnull=0x7ffc7b4d0c30, heap_t_ctid=0x562034dd083c,\n heapRelation=0x7f546823dde0, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x562034ea71c0)\n at /home/andres/src/postgresql/src/backend/access/index/indexam.c:193\n#14 0x000056203292e9da in CatalogIndexInsert (indstate=0x562034dd02b0, heapTuple=0x562034dd0838)\n\n(gdb) p num_held_lwlocks\n$14 = 1\n(gdb) p held_lwlocks[0]\n$15 = {lock = 0x7f1a0d18d2e4, mode = LW_EXCLUSIVE}\n(gdb) p held_lwlocks[0].lock->tranche\n$16 = 56\n(gdb) p BuiltinTrancheNames[held_lwlocks[0].lock->tranche - NUM_INDIVIDUAL_LWLOCKS]\n$17 = 0x558ce5710ede \"BufferContent\"\n\n\nIndependent of recovery conflicts, isn't it dangerous that we acquire the\nrelation extension lock with a buffer content lock held? I *guess* it might be\nok because BufferAlloc(P_NEW) only acquires buffer content locks in a\nconditional way.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Aug 2022 10:57:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:57 PM Andres Freund <andres@anarazel.de> wrote:\n> The reason nothing might get logged in some cases is that\n> e.g. 
ResolveRecoveryConflictWithLock() tells\n> ResolveRecoveryConflictWithVirtualXIDs() to *not* report the waiting:\n> /*\n> * Prevent ResolveRecoveryConflictWithVirtualXIDs() from reporting\n> * \"waiting\" in PS display by disabling its argument report_waiting\n> * because the caller, WaitOnLock(), has already reported that.\n> */\n>\n> so ResolveRecoveryConflictWithLock() can end up looping indefinitely without\n> logging anything.\n\nI understand why we need to avoid adding \"waiting\" to the PS status\nwhen we've already done that, but it doesn't seem like that should\nimply skipping ereport() of log messages.\n\nI think we could redesign the way the ps display works to make things\na whole lot simpler. Let's have a function set_ps_display() and\nanother function set_ps_display_suffix(). What gets reported to the OS\nis the concatenation of the two. Calling set_ps_display() implicitly\nresets the suffix to empty.\n\nAFAICS, that'd let us get rid of this tricky logic, and some other\ntricky logic as well. Here, we'd just say set_ps_display_suffix(\"\nwaiting\") and not worry about whether the caller might have already\ndone something similar.\n\n> Another question I have about ResolveRecoveryConflictWithLock() is whether\n> it's ok that we don't check deadlocks around the\n> ResolveRecoveryConflictWithVirtualXIDs() call? It might be ok, because we'd\n> only block if there's a recovery conflict, in which killing the process ought\n> to succeed?\n\nThe startup process is supposed to always \"win\" in any deadlock\nsituation, so I'm not sure what you think is a problem here. We get\nthe conflicting lockers. We kill them. 
If they don't die, that's a\nbug, but killing ourselves doesn't really help anything; if we die,\nthe whole system goes down, which seems undesirable.\n\n> I think there's also might be a problem with the wait loop in ProcSleep() wrt\n> recovery conflicts: We rely on interrupts to be processed to throw recovery\n> conflict errors, but ProcSleep() is called in a bunch of places with\n> interrupts held. An Assert(INTERRUPTS_CAN_BE_PROCESSED()) after releasing the\n> partition lock triggers a bunch. It's possible that these aren't problematic\n> cases for recovery conflicts, because they're all around extension locks:\n> [...]\n> Independent of recovery conflicts, isn't it dangerous that we acquire the\n> relation extension lock with a buffer content lock held? I *guess* it might be\n> ok because BufferAlloc(P_NEW) only acquires buffer content locks in a\n> conditional way.\n\nThese things both seem a bit sketchy but it's not 100% clear to me\nthat anything is actually broken. Now it's also not clear to me that\nnothing is broken ...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Aug 2022 16:33:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unstable tests for recovery conflict handling" }, { "msg_contents": "Hi,\n\nOn 2022-08-03 16:33:46 -0400, Robert Haas wrote:\n> On Wed, Aug 3, 2022 at 1:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > The reason nothing might get logged in some cases is that\n> > e.g. 
ResolveRecoveryConflictWithLock() tells\n> > ResolveRecoveryConflictWithVirtualXIDs() to *not* report the waiting:\n> > /*\n> > * Prevent ResolveRecoveryConflictWithVirtualXIDs() from reporting\n> > * \"waiting\" in PS display by disabling its argument report_waiting\n> > * because the caller, WaitOnLock(), has already reported that.\n> > */\n> >\n> > so ResolveRecoveryConflictWithLock() can end up looping indefinitely without\n> > logging anything.\n> \n> I understand why we need to avoid adding \"waiting\" to the PS status\n> when we've already done that, but it doesn't seem like that should\n> imply skipping ereport() of log messages.\n> \n> I think we could redesign the way the ps display works to make things\n> a whole lot simpler. Let's have a function set_ps_display() and\n> another function set_ps_display_suffix(). What gets reported to the OS\n> is the concatenation of the two. Calling set_ps_display() implicitly\n> resets the suffix to empty.\n> \n> AFAICS, that'd let us get rid of this tricky logic, and some other\n> tricky logic as well. Here, we'd just say set_ps_display_suffix(\"\n> waiting\") and not worry about whether the caller might have already\n> done something similar.\n\nThat sounds like it'd be an improvement. Of course we still need to fix that\nwe can signal at a rate not allowing the other side to handle the conflict,\nbut at least that'd be easier to identify...\n\n\n> > Another question I have about ResolveRecoveryConflictWithLock() is whether\n> > it's ok that we don't check deadlocks around the\n> > ResolveRecoveryConflictWithVirtualXIDs() call? It might be ok, because we'd\n> > only block if there's a recovery conflict, in which killing the process ought\n> > to succeed?\n> \n> The startup process is supposed to always \"win\" in any deadlock\n> situation, so I'm not sure what you think is a problem here. We get\n> the conflicting lockers. We kill them. 
If they don't die, that's a\n> bug, but killing ourselves doesn't really help anything; if we die,\n> the whole system goes down, which seems undesirable.\n\nThe way deadlock timeout for the startup process works is that we wait for it\nto pass and then send PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK to the\nbackends. So it's not that the startup process would die.\n\nThe question is basically whether there are cases were\nPROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK would resolve a conflict but\nPROCSIG_RECOVERY_CONFLICT_LOCK wouldn't. It seems plausible that there isn't,\nbut it's also not obvious enough that I'd fully trust it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Aug 2022 15:07:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Unstable tests for recovery conflict handling" } ]
[ { "msg_contents": "Hello,\n\nI am Joseph Ho, a senior at Dr Norman Bethune Collegiate Institute\ninterested in going into computer science. I am interested in working to\ncreate and improve the website for pgjdbc during GSoC 2022.\n\nI am wondering how the draft proposal should be made. Will I need to submit\na web design of the new and improved website or will I need to submit\nsomething else? Also, am I able to use a web framework of my choice or is\nthere one that you prefer that we use?\n\nLooking forward to hearing from you and working with you!\n\nRegards,\nJoseph.", "msg_date": "Thu, 7 Apr 2022 20:49:17 -0400", "msg_from": "Joseph Ho <josephho678@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: New and improved website for pgjdbc (JDBC)" }, { "msg_contents": "Joseph\n\n\n\nOn Thu, 7 Apr 2022 at 17:49, Joseph Ho <josephho678@gmail.com> wrote:\n\n> Hello,\n>\n> I am Joseph Ho, a senior at Dr Norman Bethune Collegiate Institute\n> interested in going into computer science. I am interested in working to\n> create and improve the website for pgjdbc during GSoC 2022.\n>\n> I am wondering how the draft proposal should be made. Will I need to\n> submit a web design of the new and improved website or will I need to\n> submit something else? 
Also, am I able to use a web framework of my choice\n> or is there one that you prefer that we use?\n>\n\nYou should register on the GSoC site Contributor Registration | Google\nSummer of Code <https://summerofcode.withgoogle.com/register/contributor>\n\nThe draft proposal can just be a document at this point which outlines your\nideas\n\nCurrently the site is built using Jekyll, and I see no good reason to\nchange. What I am looking for is to update the version of Jekyll to latest\nand make the site cleaner with a new design.\n\nYou should be aware that proposals have to be in by the 19th of April\n\n\nRegards,\n\nDave", "msg_date": "Fri, 8 Apr 2022 06:51:03 -0700", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: GSoC: New and improved website for pgjdbc (JDBC)" } ]
[ { "msg_contents": "Add contrib/pg_walinspect.\n\nProvides similar functionality to pg_waldump, but from a SQL interface\nrather than a separate utility.\n\nAuthor: Bharath Rupireddy\nReviewed-by: Greg Stark, Kyotaro Horiguchi, Andres Freund, Ashutosh Sharma, Nitin Jadhav, RKN Sai Krishna\nDiscussion: https://postgr.es/m/CALj2ACUGUYXsEQdKhEdsBzhGEyF3xggvLdD8C0VT72TNEfOiog%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/2258e76f90bf0254504644df0515cddc0c0a87f9\n\nModified Files\n--------------\ncontrib/Makefile | 1 +\ncontrib/pg_walinspect/.gitignore | 4 +\ncontrib/pg_walinspect/Makefile | 23 +\ncontrib/pg_walinspect/expected/pg_walinspect.out | 165 ++++++\ncontrib/pg_walinspect/pg_walinspect--1.0.sql | 118 +++++\ncontrib/pg_walinspect/pg_walinspect.c | 629 +++++++++++++++++++++++\ncontrib/pg_walinspect/pg_walinspect.control | 5 +\ncontrib/pg_walinspect/sql/pg_walinspect.sql | 120 +++++\ndoc/src/sgml/contrib.sgml | 1 +\ndoc/src/sgml/filelist.sgml | 1 +\ndoc/src/sgml/func.sgml | 2 +-\ndoc/src/sgml/pgwalinspect.sgml | 275 ++++++++++\nsrc/backend/access/rmgrdesc/xlogdesc.c | 130 +++++\nsrc/backend/access/transam/Makefile | 1 +\nsrc/backend/access/transam/xlogreader.c | 9 -\nsrc/backend/access/transam/xlogstats.c | 93 ++++\nsrc/backend/access/transam/xlogutils.c | 33 ++\nsrc/bin/pg_waldump/.gitignore | 1 +\nsrc/bin/pg_waldump/Makefile | 8 +-\nsrc/bin/pg_waldump/pg_waldump.c | 206 +-------\nsrc/include/access/xlog.h | 2 +-\nsrc/include/access/xlog_internal.h | 6 +-\nsrc/include/access/xlogreader.h | 2 -\nsrc/include/access/xlogstats.h | 40 ++\nsrc/include/access/xlogutils.h | 4 +\n25 files changed, 1675 insertions(+), 204 deletions(-)", "msg_date": "Fri, 08 Apr 2022 07:27:44 +0000", "msg_from": "Jeff Davis <jdavis@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Add contrib/pg_walinspect." 
}, { "msg_contents": "Hi Jeff,\n\nOn Fri, Apr 08, 2022 at 07:27:44AM +0000, Jeff Davis wrote:\n> Add contrib/pg_walinspect.\n> \n> Provides similar functionality to pg_waldump, but from a SQL interface\n> rather than a separate utility.\n\nThe tests of pg_walinspect look unstable:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=topminnow&dt=2022-04-25%2001%3A48%3A47\n\n SELECT COUNT(*) >= 0 AS ok FROM\n pg_get_wal_records_info_till_end_of_wal(:'wal_lsn1');\n- ok\n-----\n- t\n-(1 row)\n-\n+ERROR: could not read WAL at 0/1903E40\n\nThis points out at ReadNextXLogRecord(), though I would not blame this\ntest suite as you create a physical replication slot beforehand.\nCould this be an issue related to the addition of the circular WAL\ndecoding buffer, aka 3f1ce97?\n\nI am adding an open item about that.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 26 Apr 2022 14:13:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Could this be an issue related to the addition of the circular WAL\n> decoding buffer, aka 3f1ce97?\n\nI already whined about this at [1].\n\nI've been wondering if the issue could be traced to topminnow's unusual\nhardware properties, specifically that it has MAXALIGN 8 even though\nit's only a 32-bit machine per sizeof(void *). I think the only\nother active buildfarm animal like that is my gaur ... but I've\nfailed to reproduce it on gaur. Best guess at the moment is that\nit's a timing issue that topminnow manages to reproduce often.\n\nAnyway, as I said in the other thread, I can reproduce it on\ntopminnow's host. 
Let me know if you have ideas about how to\nhome in on the cause.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/111657.1650910309%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 26 Apr 2022 01:25:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Tue, Apr 26, 2022 at 01:25:14AM -0400, Tom Lane wrote:\n> I've been wondering if the issue could be traced to topminnow's unusual\n> hardware properties, specifically that it has MAXALIGN 8 even though\n> it's only a 32-bit machine per sizeof(void *). I think the only\n> other active buildfarm animal like that is my gaur ... but I've\n> failed to reproduce it on gaur. Best guess at the moment is that\n> it's a timing issue that topminnow manages to reproduce often.\n\nI have managed to miss your message. Let's continue the discussion\nthere, then.\n\n> Anyway, as I said in the other thread, I can reproduce it on\n> topminnow's host. Let me know if you have ideas about how to\n> home in on the cause.\n\nNice. I have not been able to do so, but based on the lack of\nreports from the buildfarm, that's not surprising.\n--\nMichael", "msg_date": "Tue, 26 Apr 2022 14:36:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Tue, Apr 26, 2022 at 5:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 26, 2022 at 01:25:14AM -0400, Tom Lane wrote:\n> > I've been wondering if the issue could be traced to topminnow's unusual\n> > hardware properties, specifically that it has MAXALIGN 8 even though\n> > it's only a 32-bit machine per sizeof(void *). I think the only\n> > other active buildfarm animal like that is my gaur ... but I've\n> > failed to reproduce it on gaur. 
Best guess at the moment is that\n> > it's a timing issue that topminnow manages to reproduce often.\n>\n> I have managed to miss your message. Let's continue the discussion\n> there, then.\n\nI think it's a bug in pg_walinspect, so I'll move the discussion back\nhere. Here's one rather simple way to fix it, that has survived\nrunning the test a thousand times (using a recipe that failed for me\nquite soon, after 20-100 attempts or so; I never figured out how to\nget the 50% failure rate reported by Tom). Explanation in commit\nmessage. You can see that the comments near the first hunk already\ncontemplated this possibility, but just didn't try to handle it.\n\nAnother idea that I slept on, but rejected, is that the new WOULDBLOCK\nreturn value introduced to support WAL prefetching could be used here\n(it's a way of reporting a lack of data, different from errors).\nUnfortunately it's not exposed to the XLogReadRecord() interface, as I\nonly intended it for use by XLogReadAhead(). I don't really think\nit's a good idea to redesign that API at this juncture.\n\nMaybe there is some other way I haven't considered -- is there a way\nto get the LSN past the latest whole flushed record from shmem?", "msg_date": "Wed, 27 Apr 2022 12:06:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think it's a bug in pg_walinspect, so I'll move the discussion back\n> here. 
Here's one rather simple way to fix it, that has survived\n> running the test a thousand times (using a recipe that failed for me\n> quite soon, after 20-100 attempts or so; I never figured out how to\n> get the 50% failure rate reported by Tom).\n\nNot sure what we're doing differently, but plain \"make check\" in\ncontrib/pg_walinspect fails pretty consistently for me on gcc23.\nI tried it again just now and got five failures in five attempts.\n\nI then installed your patch and got the same failure, three times\nout of three, so I don't think we're there yet.\n\nAgain, since I do have this problem in captivity, I'm happy\nto spend some time poking at it. But I could use a little\nguidance where to poke, because I've not looked at any of\nthe WAL prefetch stuff.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Apr 2022 20:25:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I think it's a bug in pg_walinspect, so I'll move the discussion back\n> > here. Here's one rather simple way to fix it, that has survived\n> > running the test a thousand times (using a recipe that failed for me\n> > quite soon, after 20-100 attempts or so; I never figured out how to\n> > get the 50% failure rate reported by Tom).\n>\n> Not sure what we're doing differently, but plain \"make check\" in\n> contrib/pg_walinspect fails pretty consistently for me on gcc23.\n> I tried it again just now and got five failures in five attempts.\n\nI tried on the /home filesystem (a slow NFS mount) and then inside a\ndirectory on /tmp to get ext4 (I saw that Noah had somehow got onto a\nlocal filesystem, based on the presence of \"ext4\" in the pathname and I\nwas trying everything I could think of). 
I used what I thought might\nbe some relevant starter configure options copied from the animal:\n\n./configure --prefix=$HOME/install --enable-cassert --enable-debug\n--enable-tap-tests CC=\"ccache gcc -mips32r2\" CFLAGS=\"-O2\n-funwind-tables\" LDFLAGS=\"-rdynamic\"\n\nFor me, make check always succeeds in contrib/pg_walinspect. For me,\nmake installcheck fails if I do it enough times in a loop, somewhere\naround the 20th loop or so, which I imagine has to do with WAL page\nboundaries moving around.\n\nfor i in `seq 1 1000` ; do\n make -s installcheck || exit 1\ndone\n\n> I then installed your patch and got the same failure, three times\n> out of three, so I don't think we're there yet.\n\nHrmph... Are you sure you rebuilt the contrib module? Assuming so,\nmaybe it's failing in a different way for you and me. For me, it\nalways fails after this break is reached in xlogutil.c:\n\n /* If asked, let's not wait for future WAL. */\n if (!wait_for_wal)\n break;\n\nIf you add a log message there, do you see that? For me, the patch\nfixes it, because it teaches pg_walinspect that messageless errors are\na way of detecting end-of-data (due to the code above, introduced by\nthe pg_walinspect commit).\n\n\n", "msg_date": "Wed, 27 Apr 2022 13:10:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hrmph... Are you sure you rebuilt the contrib module? Assuming so,\n> maybe it's failing in a different way for you and me. For me, it\n> always fails after this break is reached in xlogutil.c:\n\n> /* If asked, let's not wait for future WAL. */\n> if (!wait_for_wal)\n> break;\n\nHmm. For me, that statement is not reached at all in successful\n(make installcheck) runs. In a failing run, it's reached with\nwait_for_wal = false, after which we get the \"could not read WAL\"\nfailure. 
Usually that happens twice, as per attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 26 Apr 2022 21:47:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Hrmph... Are you sure you rebuilt the contrib module? Assuming so,\n> > maybe it's failing in a different way for you and me. For me, it\n> > always fails after this break is reached in xlogutil.c:\n>\n> > /* If asked, let's not wait for future WAL. */\n> > if (!wait_for_wal)\n> > break;\n>\n> Hmm. For me, that statement is not reached at all in successful\n> (make installcheck) runs. In a failing run, it's reached with\n> wait_for_wal = false, after which we get the \"could not read WAL\"\n> failure. Usually that happens twice, as per attached.\n\nOk, that's the same for me. Next question: why does the patch I\nposted not help? For me, the error \"could not read WAL at %X/%X\",\nseen on the BF log, is raised by ReadNextXLogRecord() in\npg_walinspect.c. The patch removes that ereport() entirely (and\nhandles NULL in a couple of places).\n\n\n", "msg_date": "Wed, 27 Apr 2022 13:54:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." 
}, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Apr 27, 2022 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Not sure what we're doing differently, but plain \"make check\" in\n>> contrib/pg_walinspect fails pretty consistently for me on gcc23.\n>> I tried it again just now and got five failures in five attempts.\n\n> I tried on the /home filesystem (a slow NFS mount) and then inside a\n> directory on /tmp to get ext4 (I saw that Noah had somehow got onto a\n> local filesystem, based on the present of \"ext4\" in the pathname and I\n> was trying everything I could think of). I used what I thought might\n> be some relevant starter configure options copied from the animal:\n\n> ./configure --prefix=$HOME/install --enable-cassert --enable-debug\n> --enable-tap-tests CC=\"ccache gcc -mips32r2\" CFLAGS=\"-O2\n> -funwind-tables\" LDFLAGS=\"-rdynamic\"\n\nHmph. I'm also running it on the /home filesystem, and I used\nthese settings:\n\nexport CC=\"ccache gcc -mips32r2\"\nexport CFLAGS=\"-O2 -funwind-tables\"\nexport LDFLAGS=\"-rdynamic\"\n\n./configure --enable-debug --enable-cassert --with-systemd --enable-nls --with-icu --enable-tap-tests --with-system-tzdata=/usr/share/zoneinfo\n\nplus uninteresting stuff like --prefix. Now maybe some of these\nother options affect this, but I'd be pretty surprised if so.\nSo I'm at a loss why it behaves differently for you.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Apr 2022 21:55:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 1:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 27, 2022 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Hrmph... Are you sure you rebuilt the contrib module? Assuming so,\n> > > maybe it's failing in a different way for you and me. 
For me, it\n> > > always fails after this break is reached in xlogutil.c:\n> >\n> > > /* If asked, let's not wait for future WAL. */\n> > > if (!wait_for_wal)\n> > > break;\n> >\n> > Hmm. For me, that statement is not reached at all in successful\n> > (make installcheck) runs. In a failing run, it's reached with\n> > wait_for_wal = false, after which we get the \"could not read WAL\"\n> > failure. Usually that happens twice, as per attached.\n>\n> Ok, that's the same for me. Next question: why does the patch I\n> posted not help? For me, the error \"could not read WAL at %X/%X\",\n> seen on the BF log, is raised by ReadNextXLogRecord() in\n> pg_walinspect.c. The patch removes that ereport() entirely (and\n> handles NULL in a couple of places).\n\nBTW If you had your local change from debug.patch (upthread), that'd\ndefeat the patch. I mean this:\n\n+ if(!*errormsg)\n+ *errormsg = \"decode_queue_head is null\";\n\n\n", "msg_date": "Wed, 27 Apr 2022 14:10:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Ok, that's the same for me. Next question: why does the patch I\n> posted not help?\n\nI improved the instrumentation a bit, and it looks like what is\nhappening is that loc > read_upto, causing that code to \"break\"\nindependently of wait_for_wal. success.log is from \"make installcheck\"\nimmediately after initdb; fail.log is from \"make check\".\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 26 Apr 2022 22:14:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> BTW If you had your local change from debug.patch (upthread), that'd\n> defeat the patch. I mean this:\n\n> + if(!*errormsg)\n> + *errormsg = \"decode_queue_head is null\";\n\nOh! 
Okay, I'll retry without that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Apr 2022 22:15:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> BTW If you had your local change from debug.patch (upthread), that'd\n>> defeat the patch. I mean this:\n\n>> + if(!*errormsg)\n>> + *errormsg = \"decode_queue_head is null\";\n\n> Oh! Okay, I'll retry without that.\n\nI've now done several runs with your patch and not seen the test failure.\nHowever, I think we ought to rethink this API a bit rather than just\napply the patch as-is. Even if it were documented, relying on\nerrormsg = NULL to mean something doesn't seem like a great plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Apr 2022 23:15:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 8:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> BTW If you had your local change from debug.patch (upthread), that'd\n> >> defeat the patch. I mean this:\n>\n> >> + if(!*errormsg)\n> >> + *errormsg = \"decode_queue_head is null\";\n>\n> > Oh! Okay, I'll retry without that.\n>\n> I've now done several runs with your patch and not seen the test failure.\n> However, I think we ought to rethink this API a bit rather than just\n> apply the patch as-is. 
Even if it were documented, relying on\n> errormsg = NULL to mean something doesn't seem like a great plan.\n\nSorry for being late in the game, occupied with other stuff.\n\nHow about using private_data of XLogReaderState for\nread_local_xlog_page_no_wait, something like this?\n\ntypedef struct ReadLocalXLogPageNoWaitPrivate\n{\n bool end_of_wal;\n} ReadLocalXLogPageNoWaitPrivate;\n\nIn read_local_xlog_page_no_wait:\n\n /* If asked, let's not wait for future WAL. */\n if (!wait_for_wal)\n {\n private_data->end_of_wal = true;\n break;\n }\n\n /*\n * Opaque data for callbacks to use. Not used by XLogReader.\n */\n void *private_data;\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 27 Apr 2022 08:57:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 8:57 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 27, 2022 at 8:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I wrote:\n> > > Thomas Munro <thomas.munro@gmail.com> writes:\n> > >> BTW If you had your local change from debug.patch (upthread), that'd\n> > >> defeat the patch. I mean this:\n> >\n> > >> + if(!*errormsg)\n> > >> + *errormsg = \"decode_queue_head is null\";\n> >\n> > > Oh! Okay, I'll retry without that.\n> >\n> > I've now done several runs with your patch and not seen the test failure.\n> > However, I think we ought to rethink this API a bit rather than just\n> > apply the patch as-is. 
Even if it were documented, relying on\n> > errormsg = NULL to mean something doesn't seem like a great plan.\n>\n> Sorry for being late in the game, occupied with other stuff.\n>\n> How about using private_data of XLogReaderState for\n> read_local_xlog_page_no_wait, something like this?\n>\n> typedef struct ReadLocalXLogPageNoWaitPrivate\n> {\n> bool end_of_wal;\n> } ReadLocalXLogPageNoWaitPrivate;\n>\n> In read_local_xlog_page_no_wait:\n>\n> /* If asked, let's not wait for future WAL. */\n> if (!wait_for_wal)\n> {\n> private_data->end_of_wal = true;\n> break;\n> }\n>\n> /*\n> * Opaque data for callbacks to use. Not used by XLogReader.\n> */\n> void *private_data;\n\nI found an easy way to reproduce this consistently (I think on any server):\n\nI basically generated huge WAL record (I used a fun extension that I\nwrote - https://github.com/BRupireddy/pg_synthesize_wal, but one can\nuse pg_logical_emit_message as well)\nsession 1:\nselect * from pg_synthesize_wal_record(1*1024*1024); --> generate 1 MB\nof WAL record first and make a note of the output lsn (lsn1)\n\nsession 2:\nselect * from pg_get_wal_records_info_till_end_of_wal(lsn1);\n\\watch 1\n\nsession 1:\nselect * from pg_synthesize_wal_record(1000*1024*1024); --> generate\n~1 GB of WAL record and we see ERROR: could not read WAL at XXXXX in\nsession 2.\n\nDelay the checkpoint (set checkpoint_timeout to 1hr) just not recycle\nthe wal files while we run pg_walinspect functions, no other changes\nrequired from the default initdb settings on the server.\n\nAnd, Thomas's patch fixes the issue.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 27 Apr 2022 13:47:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." 
}, { "msg_contents": "On Wed, Apr 27, 2022 at 1:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > I've now done several runs with your patch and not seen the test failure.\n> > > However, I think we ought to rethink this API a bit rather than just\n> > > apply the patch as-is. Even if it were documented, relying on\n> > > errormsg = NULL to mean something doesn't seem like a great plan.\n> >\n> > Sorry for being late in the game, occupied with other stuff.\n> >\n> > How about using private_data of XLogReaderState for\n> > read_local_xlog_page_no_wait, something like this?\n> >\n> > typedef struct ReadLocalXLogPageNoWaitPrivate\n> > {\n> > bool end_of_wal;\n> > } ReadLocalXLogPageNoWaitPrivate;\n> >\n> > In read_local_xlog_page_no_wait:\n> >\n> > /* If asked, let's not wait for future WAL. */\n> > if (!wait_for_wal)\n> > {\n> > private_data->end_of_wal = true;\n> > break;\n> > }\n> >\n> > /*\n> > * Opaque data for callbacks to use. Not used by XLogReader.\n> > */\n> > void *private_data;\n>\n> I found an easy way to reproduce this consistently (I think on any server):\n>\n> I basically generated huge WAL record (I used a fun extension that I\n> wrote - https://github.com/BRupireddy/pg_synthesize_wal, but one can\n> use pg_logical_emit_message as well)\n> session 1:\n> select * from pg_synthesize_wal_record(1*1024*1024); --> generate 1 MB\n> of WAL record first and make a note of the output lsn (lsn1)\n>\n> session 2:\n> select * from pg_get_wal_records_info_till_end_of_wal(lsn1);\n> \\watch 1\n>\n> session 1:\n> select * from pg_synthesize_wal_record(1000*1024*1024); --> generate\n> ~1 GB of WAL record and we see ERROR: could not read WAL at XXXXX in\n> session 2.\n>\n> Delay the checkpoint (set checkpoint_timeout to 1hr) just not recycle\n> the wal files while we run pg_walinspect functions, no other changes\n> required from the default initdb settings on the server.\n>\n> And, Thomas's patch fixes the issue.\n\nHere's v2 patch 
(up on Thomas's v1 at [1]) using private_data to set\nthe end of the WAL flag. Please have a look at it.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLtswFk9ZO3WMOqnDkGs6dK5kCdQK9gxJm0N8gip5cpiA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 27 Apr 2022 15:52:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, 2022-04-27 at 13:47 +0530, Bharath Rupireddy wrote:\n> I found an easy way to reproduce this consistently (I think on any\n> server):\n> \n> I basically generated huge WAL record (I used a fun extension that I\n> wrote - https://github.com/BRupireddy/pg_synthesize_wal, but one can\n> use pg_logical_emit_message as well)\n\nThank you Bharath for creating the extension and the simple test case.\n\nThomas's patch solves the issue for me as well.\n\nTom, the debug patch you posted[0] seems to be setting the error\nmessage if it's not already set. Thomas's patch uses the lack of a\nmessage as a signal that we've reached the end of WAL. That explains\nwhy you are still seeing the problem.\n\nObviously, that's a sign that Thomas's patch is not the cleanest\nsolution. But other approaches would be more invasive. I guess the\nquestion is whether that's a good enough solution for now, and\nhopefully we could improve the API later; or whether we need to come up\nwith something better.\n\nWhen reviewing, I considered the inability to read old WAL and the\ninability to read flushed-in-the-middle-of-a-record WAL as similar\nkinds of errors that the user would need to deal with. 
But they are\ndifferent: the former can be avoided by creating a slot; the latter\ncan't be easily avoided, only retried.\n\nDepending on the intended use cases, forcing the user to retry might be\nreasonable, in which case we could consider this a test problem rather\nthan a real problem, and we might be able to do something simpler to\njust stabilize the test.\n\nRegards,\n\tJeff Davis\n\n[0] https://postgr.es/m/295868.1651024073@sss.pgh.pa.us\n\n\n\n\n", "msg_date": "Wed, 27 Apr 2022 10:30:21 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Wed, Apr 27, 2022 at 10:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Here's v2 patch (up on Thomas's v1 at [1]) using private_data to set\n> the end of the WAL flag. Please have a look at it.\n\nI don't have a strong view on whether it's better to use a NULL error\nfor this private communication between pg_walinspect.c and the\nread_page callback it installs, or install a custom private_data to\nsignal this condition, or to give up on all this stuff completely and\njust wait (see below for thoughts). I'd feel better about both\nno-wait options if the read_page callback in question were actually in\nthe contrib module, and not in the core code. 
On the other hand, I'm\nnot too hung up about it because I'm really hoping to see the\nget-rid-of-all-these-callbacks-and-make-client-code-do-the-waiting\nscheme proposed by Horiguchi-san and Heikki developed further, to rip\nmuch of this stuff out in a future release.\n\nIf you go with the private_data approach, a couple of small comments:\n\n if (record == NULL)\n {\n+ ReadLocalXLogPageNoWaitPrivate *private_data;\n+\n+ /* return NULL, if end of WAL is reached */\n+ private_data = (ReadLocalXLogPageNoWaitPrivate *)\n+ xlogreader->private_data;\n+\n+ if (private_data->end_of_wal)\n+ return NULL;\n+\n if (errormsg)\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not read WAL at %X/%X: %s\",\n LSN_FORMAT_ARGS(first_record), errormsg)));\n- else\n- ereport(ERROR,\n- (errcode_for_file_access(),\n- errmsg(\"could not read WAL at %X/%X\",\n- LSN_FORMAT_ARGS(first_record))));\n }\n\nI think you should leave that second ereport() call in, in this\nvariant of the patch, no? I don't know if anything else raises errors\nwith no message, but if we're still going to treat them as silent\nend-of-data conditions then you might as well go with the v1 patch.\n\nAnother option might be to abandon this whole no-wait concept and\nrevert 2258e76f's changes to xlogutils.c. pg_walinspect already does\npreliminary checks that LSNs are in range, so you can't enter a value\nthat will wait indefinitely, and it's interruptible (it's a 1ms\nsleep/check loop, not my favourite programming pattern but that's\npre-existing code). If you're unlucky enough to hit the case where\nthe LSN is judged to be valid but is in the middle of a record that\nhasn't been totally flushed yet, it'll just be a bit slower to return\nas we wait for the inevitable later flush(es) to happen. The rest of\nyour record will *surely* be flushed pretty soon (or the flushing\nbackend panics the whole system and time ends). 
I don't imagine this\nis performance critical work, so maybe that'd be acceptable?\n\n\n", "msg_date": "Thu, 28 Apr 2022 12:11:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Thu, 2022-04-28 at 12:11 +1200, Thomas Munro wrote:\n> \n> Another option might be to abandon this whole no-wait concept and\n> revert 2258e76f's changes to xlogutils.c. pg_walinspect already does\n> preliminary checks that LSNs are in range, so you can't enter a value\n> that will wait indefinitely, and it's interruptible (it's a 1ms\n> sleep/check loop, not my favourite programming pattern but that's\n> pre-existing code). If you're unlucky enough to hit the case where\n> the LSN is judged to be valid but is in the middle of a record that\n> hasn't been totally flushed yet, it'll just be a bit slower to return\n> as we wait for the inevitable later flush(es) to happen. The rest of\n> your record will *surely* be flushed pretty soon (or the flushing\n> backend panics the whole system and time ends). I don't imagine this\n> is performance critical work, so maybe that'd be acceptable?\n\nI'm inclined toward this option. I was a bit iffy on those xlogutils.c\nchanges to begin with, and they are causing a problem now, so I'd like\nto avoid layering on more workarounds.\n\nThe time when we need to wait is very narrow, only in this case where\nit's earlier than the flush pointer, and the flush pointer is in the\nmiddle of a record that's not fully flushed. And as you say, we won't\nbe waiting very long in that case, because once we start to write a WAL\nrecord it better finish soon.\n\nBharath, thoughts? When you originally introduced the nowait behavior,\nI believe that was to solve the case where someone specifies an LSN\nrange well in the future, but we can still catch that and throw an\nerror if we see that it's beyond the flush pointer. 
Do you see a\nproblem with just doing that and getting rid of the nowait changes?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 27 Apr 2022 20:11:37 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Thu, Apr 28, 2022 at 8:41 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2022-04-28 at 12:11 +1200, Thomas Munro wrote:\n> >\n> > Another option might be to abandon this whole no-wait concept and\n> > revert 2258e76f's changes to xlogutils.c. pg_walinspect already does\n> > preliminary checks that LSNs are in range, so you can't enter a value\n> > that will wait indefinitely, and it's interruptible (it's a 1ms\n> > sleep/check loop, not my favourite programming pattern but that's\n> > pre-existing code). If you're unlucky enough to hit the case where\n> > the LSN is judged to be valid but is in the middle of a record that\n> > hasn't been totally flushed yet, it'll just be a bit slower to return\n> > as we wait for the inevitable later flush(es) to happen. The rest of\n> > your record will *surely* be flushed pretty soon (or the flushing\n> > backend panics the whole system and time ends). I don't imagine this\n> > is performance critical work, so maybe that'd be acceptable?\n>\n> I'm inclined toward this option. I was a bit iffy on those xlogutils.c\n> changes to begin with, and they are causing a problem now, so I'd like\n> to avoid layering on more workarounds.\n>\n> The time when we need to wait is very narrow, only in this case where\n> it's earlier than the flush pointer, and the flush pointer is in the\n> middle of a record that's not fully flushed. And as you say, we won't\n> be waiting very long in that case, because once we start to write a WAL\n> record it better finish soon.\n>\n> Bharath, thoughts? 
When you originally introduced the nowait behavior,\n> I believe that was to solve the case where someone specifies an LSN\n> range well in the future, but we can still catch that and throw an\n> error if we see that it's beyond the flush pointer. Do you see a\n> problem with just doing that and getting rid of the nowait changes?\n\nIt's not just the flush ptr, without no wait mode, the functions would\nwait if start/input lsn is, say current flush lsn - 1 or 2 or more\n(before the previous record) bytes. If the functions were to wait, by\nhow much time should they wait? a timeout? forever? This is the\ncomplexity we wanted to avoid. I would still vote for the private_data\napproach and if the end of WAL is reached while the flush lsn falls in the\nmiddle of a record, let's just exit the loop and report the results\nback to the client.\n\nI addressed Thomas's review comment and attached v3 patch. Please have a look.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 29 Apr 2022 10:46:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." }, { "msg_contents": "On Fri, 2022-04-29 at 10:46 +0530, Bharath Rupireddy wrote:\n> It's not just the flush ptr, without no wait mode, the functions\n> would\n> wait if start/input lsn is, say current flush lsn - 1 or 2 or more\n> (before the previous record) bytes. If the functions were to wait, by\n> how much time should they wait? a timeout? forever?\n\nI see, you're talking about the case of XLogFindNextRecord(), not\nXLogReadRecord().\n\nXLogFindNextRecord() is the only way to align the user-provided start\nLSN on a valid record, but that calls XLogReadRecord(), which may wait\nindefinitely. If there were a different entry point that just did the\nalignment and skipped past continuation records, we could prevent it\nfrom trying to read the next record if we are already at the flush\npointer. 
But without some tweak to that API, nowait is still needed.\n\nCommitted your v3 patch with minor modifications.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 30 Apr 2022 09:24:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add contrib/pg_walinspect." } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\npgstat_reset_entry does not check whether the lock was really acquired.\nIf shared_stat_reset_contents can then be called without the lock,\nisn't that an issue?\n\nregards,\n\nRanier Vilela", "msg_date": "Fri, 8 Apr 2022 08:49:48 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: shared-memory based stats collector" }, { "msg_contents": "Hi, \n\nOn April 8, 2022 4:49:48 AM PDT, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>Hi,\n>\n>Per Coverity.\n>\n>pgstat_reset_entry does not check whether the lock was really acquired.\n>If shared_stat_reset_contents can then be called without the lock,\n>isn't that an issue?\n\nI don't think so - the nowait parameter is set to false, so the lock acquisition is blocking.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 08 Apr 2022 08:59:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: shared-memory based stats collector" } ]
[ { "msg_contents": "It has reached 2022-03-08 Anywhere on Earth[*] so I believe that means\nPostgres 15 Feature Freeze is in effect (modulo a couple patches that\nwere held until the end of the commitfest to make merging easier).\n\nI've marked the commitfest closed and will be moving any patches that\ndidn't receive feedback over to the next commitfest. I think this is\nmost of the remaining patches though there may be a few Waiting for\nAuthor patches that can be Returned with Feedback or even Rejected.\nI'll do the Ready for Committer patches last to allow for the\nstragglers held back.\n\nIt's always frustrating seeing patches get ignored but on the plus\nside nearly 100 patches are marked Committed and a lot of patches did\nget feedback.\n\nThanks to all the reviewers and committers who put a lot of work in,\nespecially in the last two weeks. I especially want to thank Andres\nwho showed me how to use the cfbot to check on patch statuses and did\na lot of work doing that until I was up to speed.\n\n[*] https://www.timeanddate.com/time/zones/aoe\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 8 Apr 2022 08:48:57 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Commitfest Closed" }, { "msg_contents": "On 2022-Apr-08, Greg Stark wrote:\n\n> Thanks to all the reviewers and committers who put a lot of work in,\n> especially in the last two weeks. 
I especially want to thank Andres\n> who showed me how to use the cfbot to check on patch statuses and did\n> a lot of work doing that until I was up to speed.\n\nThanks for herding through the CF!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 8 Apr 2022 14:58:16 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest Closed" }, { "msg_contents": "On Fri, Apr 8, 2022 at 5:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Thanks for herding through the CF!\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Apr 2022 08:09:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Commitfest Closed" }, { "msg_contents": "On Fri, Apr 08, 2022 at 08:09:16AM -0700, Peter Geoghegan wrote:\n> On Fri, Apr 8, 2022 at 5:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Thanks for herding through the CF!\n> \n> +1\n\n+1!\n\n\n", "msg_date": "Fri, 8 Apr 2022 23:16:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest Closed" }, { "msg_contents": "I moved to next CF almost all the Needs Review and Waiting on Author patches.\n\nThe remaining ones are either:\n\n1) Bug fixes, Documentation, or testing patches that we may want to\nmake Open Issues\n\n2) Patches that look like we may want to mark Rejected or Returned\nwith Feedback and start a new discussion\n\n3) Patches whose email history confused me, such as where multiple\npatches are under discussion\n\nI also haven't gone through the Ready for Committer patches yet. 
I'll\ndo that at the end of the day.\n\nIncidentally I marked a lot of the Waiting on Author patches as Needs\nReview before moving to the next CF because generally I think they\nwere only Waiting on Author because of the cfbot failures and they\nwere waiting on design feedback.\n\nAlso, as another aside, I find a lot of the patches that haven't been\nreviewed were patches that were posted without any specific concerns\nor questions. That tends to imply the author thinks the patch is ready\nand just waiting on a comprehensive review which is a daunting task.\n\nI would suggest if you're an author posting a WIP and there's some\nspecific uncertainties that you have about the patch that asking about\nthem would encourage reviewers to dive in and help you make progress.\n\n\n", "msg_date": "Fri, 8 Apr 2022 11:34:53 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Commitfest Closed" }, { "msg_contents": "On Fri, Apr 08, 2022 at 08:48:57AM -0400, Greg Stark wrote:\n> Thanks to all the reviewers and committers who put a lot of work in,\n> especially in the last two weeks. I especially want to thank Andres\n> who showed me how to use the cfbot to check on patch statuses and did\n> a lot of work doing that until I was up to speed.\n\nThanks for going through the CF, Greg!\n--\nMichael", "msg_date": "Mon, 11 Apr 2022 10:02:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest Closed" } ]
[ { "msg_contents": "Hi,\n\nHere's a new patch, well-timed for the next feature freeze. [0]\n\nLast year I submitted a patchset that updated the way nbtree keys are\ncompared [1]; by moving from index_getattr to an iterator-based\napproach. That patch did perform quite well for multi-column indexes,\nbut significantly reduced performance for keys that benefit from\nForm_pg_attribute->attcacheoff.\n\nHere's generation 2 of this effort. Instead of proceeding to trying to\nshoehorn all types of btree keys into one common code path, this\npatchset acknowledges that there exist different shapes of keys that\neach have a different best way of accessing each subsequent key\nattribute. This patch achieves this by specializing the functions to\ndifferent key shapes.\n\nThe approach is comparable to JIT, but significantly different in that\nit optimizes a set of statically defined shapes, and not any arbitrary\nshape through a runtime compilation step. JIT could be implemented,\nbut not easily with the APIs available to IndexAMs: the amhandler for\nindexes does not receive any information about what the shape of the index\nis going to be.\n\n0001: Moves code that can benefit from key attribute accessor\nspecialization out of their current files and into specializable\nfiles.\nThe functions selected for specialization are either not much more\nthan wrappers for specializable functions, or functions that have\n(hot) loops around specializable code; where specializable means\naccessing multiple IndexTuple attributes directly.\n0002: Updates the specializable code to use the specialized attribute\niteration macros\n0003: Optimizes access to the key column when there's only one key column.\n0004: Optimizes access to the key columns when we cannot use\nattcacheoff for the key columns\n0005: Creates a helper function to populate all attcacheoff fields\nwith their correct values; populating them with -2 whenever a cache\noffset is impossible to determine, as opposed to the default 
-1\n(unknown); allowing functions to determine the cachability of the n-th\nattribute in O(1).\n0006: Creates a specialization macro that replaces rd_indam->aminsert\nwith its optimized variant, for improved index tuple insertion\nperformance.\n\nThese patches still have some rough edges (specifically: some\nfunctions that are being generated are unused, and intermediate\npatches don't compile), but I wanted to get this out to get some\nfeedback on this approach.\n\nI expect the performance to be at least on par with current btree\ncode, and I'll try to publish a more polished patchset with\nperformance results sometime in the near future. I'll also try to\nre-attach dynamic page-level prefix truncation, but that depends on\nthe amount of time I have and the amount of feedback on this patchset.\n\n-Matthias\n\n[0] The one for PG16, that is.\n[1] https://www.postgresql.org/message-id/CAEze2Whwvr8aYcBf0BeBuPy8mJGtwxGvQYA9OGR5eLFh6Q_ZvA@mail.gmail.com", "msg_date": "Fri, 8 Apr 2022 18:54:55 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Improving btree performance through specializing by key shape, take 2" }, { "msg_contents": "On Fri, Apr 8, 2022 at 9:55 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Here's generation 2 of this effort. Instead of proceeding to trying to\n> shoehorn all types of btree keys into one common code path, this\n> patchset acknowledges that there exist different shapes of keys that\n> each have a different best way of accessing each subsequent key\n> attribute. 
This patch achieves this by specializing the functions to\n> different key shapes.\n\nCool.\n\n> These patches still have some rough edges (specifically: some\n> functions that are being generated are unused, and intermediate\n> patches don't compile), but I wanted to get this out to get some\n> feedback on this approach.\n\nI attempted to apply your patch series to get some general sense of\nhow it affects performance, by using my own test cases from previous\nnbtree project work. I gave up on that pretty quickly, though, since\nthe code wouldn't compile. That in itself might have been okay (some\n\"rough edges\" are generally okay). The real problem was that it wasn't\nclear what I was expected to do about it! You mentioned that some of\nthe patches just didn't compile, but which ones? How do I quickly get\nsome idea of the benefits on offer here, however imperfect or\npreliminary?\n\nCan you post a version of this that compiles? As a general rule you\nshould try to post patches that can be \"test driven\" easily. An\nopening paragraph that says \"here is why you should care about my\npatch\" is often something to strive for, too. I suspect that you\nactually could have done that here, but you didn't, for whatever\nreason.\n\n> I expect the performance to be at least on par with current btree\n> code, and I'll try to publish a more polished patchset with\n> performance results sometime in the near future. I'll also try to\n> re-attach dynamic page-level prefix truncation, but that depends on\n> the amount of time I have and the amount of feedback on this patchset.\n\nCan you give a few motivating examples? You know, queries that are\nsped up by the patch series, with an explanation of where the benefit\ncomes from. 
You had some on the original thread, but that included\ndynamic prefix truncation stuff as well.\n\nIdeally you would also describe where the advertised improvements come\nfrom for each test case -- which patch, which enhancement (perhaps\nonly in rough terms for now).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 10 Apr 2022 14:44:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sun, Apr 10, 2022 at 2:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Can you post a version of this that compiles?\n\nI forgot to add: the patch also bitrot due to recent commit dbafe127.\nI didn't get stuck at this point (this is minor bitrot), but no reason\nnot to rebase.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 10 Apr 2022 15:03:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sun, 10 Apr 2022 at 23:45, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 8, 2022 at 9:55 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Here's generation 2 of this effort. Instead of proceeding to trying to\n> > shoehorn all types of btree keys into one common code path, this\n> > patchset acknowledges that there exist different shapes of keys that\n> > each have a different best way of accessing each subsequent key\n> > attribute. 
This patch achieves this by specializing the functions to\n> > different key shapes.\n>\n> Cool.\n>\n> > These patches still have some rough edges (specifically: some\n> > functions that are being generated are unused, and intermediate\n> > patches don't compile), but I wanted to get this out to get some\n> > feedback on this approach.\n>\n> I attempted to apply your patch series to get some general sense of\n> how it affects performance, by using my own test cases from previous\n> nbtree project work. I gave up on that pretty quickly, though, since\n> the code wouldn't compile. That in itself might have been okay (some\n> \"rough edges\" are generally okay). The real problem was that it wasn't\n> clear what I was expected to do about it! You mentioned that some of\n> the patches just didn't compile, but which ones? How do I quickly get\n> some idea of the benefits on offer here, however imperfect or\n> preliminary?\n>\n> Can you post a version of this that compiles? As a general rule you\n> should try to post patches that can be \"test driven\" easily. An\n> opening paragraph that says \"here is why you should care about my\n> patch\" is often something to strive for, too. I suspect that you\n> actually could have done that here, but you didn't, for whatever\n> reason.\n\nYes, my bad. I pulled one patch that included unrelated changes from\nthe set; but I missed that it also contained some changes that\nshould've been in an earlier commit, thus breaking the set.\n\nI'll send an updated patchset soon (I'm planning on moving around when\nwhat is changed/added); but before that the attached incremental patch\nshould help. FYI, the patchset has been tested on commit 05023a23, and\ncompiles (with unused function warnings) after applying the attached\npatch.\n\n> > I expect the performance to be at least on par with current btree\n> > code, and I'll try to publish a more polished patchset with\n> > performance results sometime in the near future. 
I'll also try to\n> > re-attach dynamic page-level prefix truncation, but that depends on\n> > the amount of time I have and the amount of feedback on this patchset.\n>\n> Can you give a few motivating examples? You know, queries that are\n> sped up by the patch series, with an explanation of where the benefit\n> comes from. You had some on the original thread, but that included\n> dynamic prefix truncation stuff as well.\n\nQueries that I expect to be faster are situations where the index does\nfront-to-back attribute accesses in a tight loop and repeated index\nlookups; such as in index builds, data loads, JOINs, or IN () and =\nANY () operations; and then specifically for indexes with only a\nsingle key attribute, or indexes where we can determine based on the\nindex attributes' types that nocache_index_getattr will be called at\nleast once for a full _bt_compare call (i.e. att->attcacheoff cannot\nbe set for at least one key attribute).\nQueries that I expect to be slower to a limited degree are hot loops\non btree indexes that do not have a specifically optimized path, as\nthere is some extra overhead calling the specialized functions. Other\ncode might also see a minimal performance impact due to an increased\nbinary size resulting in more cache thrashing.\n\n> Ideally you would also describe where the adversized improvements come\n> from for each test case -- which patch, which enhancement (perhaps\n> only in rough terms for now).\n\nIn the previous iteration, I discerned that there are different\n\"shapes\" of indexes, some of which currently have significant overhead\nin the existing btree infrastructure. Especially indexes with multiple\nkey attributes can see significant overhead while their attributes are\nbeing extracted, which (for a significant part) can be attributed to\nthe O(n) behaviour of nocache_index_getattr. 
This O(n) overhead is\ncurrently avoided only by indexes with only a single key attribute and\nby indexes in which all key attributes have a fixed size (att->attlen\n> 0).\n\nThe types of btree keys I discerned were:\nCREATE INDEX ON tst ...\n... (single_key_attribute)\n... (varlen, other, attributes, ...)\n... (fixed_size, also_fixed, ...)\n... (sometimes_null, other, attributes, ...)\n\nFor single-key-attribute btrees, the performance benefits in the patch\nare achieved by reducing branching in the attribute extraction: There\nare no other key attributes to worry about, so much of the code\ndealing with looping over attributes can inline values, and thus\nreduce the amount of code generated in the hot path.\n\nFor btrees with multiple key attributes, benefits are achieved if some\nattributes are of variable length (e.g. text):\nOn master, if your index looks like CREATE INDEX ON tst (varlen,\nfixed, fixed), for the latter attributes the code will always hit the\nslow path of nocache_index_getattr. This introduces a significant\noverhead; as that function wants to re-establish that the requested\nattribute's offset is indeed not cached and not cacheable, and\ncalculates the requested attributes' offset in the tuple from\neffectively zero. That is usually quite wasteful, as (in btree code,\nusually) we'd already calculated the previous attribute's offset just\na moment ago; which should be reusable.\nIn this patch, the code will use an attribute iterator (as described\nand demonstrated in the linked thread) to remove this O(n)\nper-attribute overhead and change the worst-case complexity of\niterating over the attributes of such an index tuple from O(n^2) to\nO(n).\n\nNull attributes in the key are not yet handled in any special manner\nin this patch. 
That is mainly because it is impossible to statically\ndetermine which attribute is going to be null based on the index\ndefinition alone, and thus doesn't benefit as much from statically\ngenerated optimized paths.\n\n-Mathias", "msg_date": "Mon, 11 Apr 2022 01:07:46 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sun, Apr 10, 2022 at 4:08 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I'll send an updated patchset soon (I'm planning on moving around when\n> what is changed/added); but before that the attached incremental patch\n> should help. FYI, the patchset has been tested on commit 05023a23, and\n> compiles (with unused function warnings) after applying the attached\n> patch.\n\nI can get it to work now, with your supplemental patch.\n\n> Queries that I expect to be faster are situations where the index does\n> front-to-back attribute accesses in a tight loop and repeated index\n> lookups; such as in index builds, data loads, JOINs, or IN () and =\n> ANY () operations; and then specifically for indexes with only a\n> single key attribute, or indexes where we can determine based on the\n> index attributes' types that nocache_index_getattr will be called at\n> least once for a full _bt_compare call (i.e. att->attcacheoff cannot\n> be set for at least one key attribute).\n\nI did some quick testing of the patch series -- pretty much just\nreusing my old test suite from the Postgres 12 and 13 nbtree work.\nThis showed that there is a consistent improvement in some cases. It\nalso failed to demonstrate any performance regressions. That's\ndefinitely a good start.\n\nI saw about a 4% reduction in runtime for the same UK land registry\ntest that you yourself have run in the past for the same patch series\n[1]. 
I suspect that there just aren't that many ways to get that kind\nof speed up with this test case, except perhaps by further compressing\nthe on-disk representation used by nbtree. My guess is that the patch\nreduces the runtime for this particular test case to a level that's\nsignificantly closer to the limit for this particular piece of\nsilicon. Which is not to be sniffed at.\n\nAdmittedly these test cases were chosen purely because they were\nconvenient. They were originally designed to test space utilization,\nwhich isn't affected either way here. I like writing reproducible test\ncases for indexing stuff, and think that it could work well here too\n(even though you're not optimizing space utilization at all). A real\ntest suite that targets a deliberately chosen cross section of \"index\nshapes\" might work very well.\n\n> In the previous iteration, I discerned that there are different\n> \"shapes\" of indexes, some of which currently have significant overhead\n> in the existing btree infrastructure. Especially indexes with multiple\n> key attributes can see significant overhead while their attributes are\n> being extracted, which (for a significant part) can be attributed to\n> the O(n) behaviour of nocache_index_getattr. This O(n) overhead is\n> currently avoided only by indexes with only a single key attribute and\n> by indexes in which all key attributes have a fixed size (att->attlen\n> > 0).\n\nGood summary.\n\n> The types of btree keys I discerned were:\n> CREATE INDEX ON tst ...\n> ... (single_key_attribute)\n> ... (varlen, other, attributes, ...)\n> ... (fixed_size, also_fixed, ...)\n> ... 
(sometimes_null, other, attributes, ...)\n>\n> For single-key-attribute btrees, the performance benefits in the patch\n> are achieved by reducing branching in the attribute extraction: There\n> are no other key attributes to worry about, so much of the code\n> dealing with looping over attributes can inline values, and thus\n> reduce the amount of code generated in the hot path.\n\nI agree that it might well be useful to bucket indexes into several\ndifferent \"index shape archetypes\" like this. Roughly the same\napproach worked well for me in the past. This scheme might turn out to\nbe reductive, but even then it could still be very useful (all models\nare wrong, some are useful, now as ever).\n\n> For btrees with multiple key attributes, benefits are achieved if some\n> attributes are of variable length (e.g. text):\n> On master, if your index looks like CREATE INDEX ON tst (varlen,\n> fixed, fixed), for the latter attributes the code will always hit the\n> slow path of nocache_index_getattr. This introduces a significant\n> overhead; as that function wants to re-establish that the requested\n> attribute's offset is indeed not cached and not cacheable, and\n> calculates the requested attributes' offset in the tuple from\n> effectively zero.\n\nRight. So this particular index shape seems like something that we\ntreat in a rather naive way currently.\n\nCan you demonstrate that with a custom test case? 
(The result I cited\nbefore was from a '(varlen,varlen,varlen)' index, which is important,\nbut less relevant.)\n\n[1] https://www.postgresql.org/message-id/flat/CAEze2Whwvr8aYcBf0BeBuPy8mJGtwxGvQYA9OGR5eLFh6Q_ZvA%40mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 10 Apr 2022 18:10:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, 11 Apr 2022 at 03:11, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Apr 10, 2022 at 4:08 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I'll send an updated patchset soon (I'm planning on moving around when\n> > what is changed/added); but before that the attached incremental patch\n> > should help. FYI, the patchset has been tested on commit 05023a23, and\n> > compiles (with unused function warnings) after applying the attached\n> > patch.\n>\n> I can get it to work now, with your supplemental patch.\n\nGreat. Attached is the updated patchset, with these main changes:\n\n- Rebased on top of 5bb2b6ab\n- All patches should compile when built on top of each preceding patch.\n- Reordered the patches to a more logical order, and cleaned up the\ncontent of each patch\n- Updated code so that GCC doesn't warn about unused code.\n- Added a patch for dynamic prefix truncation at page level; ref thread at [1].\n- Fixed issues in the specialization macros that resulted in issues\nwith DPT above.\n\nStill to-do:\n\n- Validate performance and share the numbers for the same test indexes\nin [1]. I'm planning on doing that next Monday.\n- Decide whether / how to keep the NBTS_ENABLED flag. 
The current\n#define in nbtree.h is a bad example of a compile-time configuration,\nthat should be changed (even if we only want to be able to disable\nspecialization at compile-time, it should be moved).\n\nMaybe:\n\n- More tests: PG already extensively tests the btree code while it is\nrunning the test suite - btree is the main index AM - but more tests\nmight be needed to test the validity of the specialized code.\n\n> > Queries that I expect to be faster are situations where the index does\n> > front-to-back attribute accesses in a tight loop and repeated index\n> > lookups; such as in index builds, data loads, JOINs, or IN () and =\n> > ANY () operations; and then specifically for indexes with only a\n> > single key attribute, or indexes where we can determine based on the\n> > index attributes' types that nocache_index_getattr will be called at\n> > least once for a full _bt_compare call (i.e. att->attcacheoff cannot\n> > be set for at least one key attribute).\n>\n> I did some quick testing of the patch series -- pretty much just\n> reusing my old test suite from the Postgres 12 and 13 nbtree work.\n> This showed that there is a consistent improvement in some cases. It\n> also failed to demonstrate any performance regressions. That's\n> definitely a good start.\n>\n> I saw about a 4% reduction in runtime for the same UK land registry\n> test that you yourself have run in the past for the same patch series\n> [1].\n\nThat's good to know. The updated patches (as attached) have dynamic\nprefix truncation from the patch series in [1] added too, which should\nimprove the performance by a few more percentage points in that\nspecific test case.\n\n> I suspect that there just aren't that many ways to get that kind\n> of speed up with this test case, except perhaps by further compressing\n> the on-disk representation used by nbtree. 
My guess is that the patch\n> reduces the runtime for this particular test case to a level that's\n> significantly closer to the limit for this particular piece of\n> silicon. Which is not to be sniffed at.\n>\n> Admittedly these test cases were chosen purely because they were\n> convenient. They were originally designed to test space utilization,\n> which isn't affected either way here. I like writing reproducible test\n> cases for indexing stuff, and think that it could work well here too\n> (even though you're not optimizing space utilization at all). A real\n> test suite that targets a deliberately chosen cross section of \"index\n> shapes\" might work very well.\n\nI'm not sure what you're refering to. Is the set of indexes I used in\n[1] an example of what you mean by \"test suite of index shapes\"?\n\n> > In the previous iteration, I discerned that there are different\n> > \"shapes\" of indexes, some of which currently have significant overhead\n> > in the existing btree infrastructure. Especially indexes with multiple\n> > key attributes can see significant overhead while their attributes are\n> > being extracted, which (for a significant part) can be attributed to\n> > the O(n) behaviour of nocache_index_getattr. This O(n) overhead is\n> > currently avoided only by indexes with only a single key attribute and\n> > by indexes in which all key attributes have a fixed size (att->attlen\n> > > 0).\n>\n> Good summary.\n>\n> > The types of btree keys I discerned were:\n> > CREATE INDEX ON tst ...\n> > ... (single_key_attribute)\n> > ... (varlen, other, attributes, ...)\n> > ... (fixed_size, also_fixed, ...)\n> > ... 
(sometimes_null, other, attributes, ...)\n> >\n> > For single-key-attribute btrees, the performance benefits in the patch\n> > are achieved by reducing branching in the attribute extraction: There\n> > are no other key attributes to worry about, so much of the code\n> > dealing with looping over attributes can inline values, and thus\n> > reduce the amount of code generated in the hot path.\n>\n> I agree that it might well be useful to bucket indexes into several\n> different \"index shape archetypes\" like this. Roughly the same\n> approach worked well for me in the past. This scheme might turn out to\n> be reductive, but even then it could still be very useful (all models\n> are wrong, some are useful, now as ever).\n>\n> > For btrees with multiple key attributes, benefits are achieved if some\n> > attributes are of variable length (e.g. text):\n> > On master, if your index looks like CREATE INDEX ON tst (varlen,\n> > fixed, fixed), for the latter attributes the code will always hit the\n> > slow path of nocache_index_getattr. This introduces a significant\n> > overhead; as that function wants to re-establish that the requested\n> > attribute's offset is indeed not cached and not cacheable, and\n> > calculates the requested attributes' offset in the tuple from\n> > effectively zero.\n>\n> Right. So this particular index shape seems like something that we\n> treat in a rather naive way currently.\n\nBut really every index shape is treated naively, except the cacheable\nindex shapes. The main reason we haven't cared about it much is that\nyou don't often see btrees with many key attributes, and when it's\nslow that is explained away 'because it is a big index and a wide\nindex key' but still saves orders of magnitude over a table scan, so\npeople generally don't complain about it. A notable exception was\n80b9e9c4, where a customer complained about index scans being faster\nthan index only scans.\n\n> Can you demonstrate that with a custom test case? 
(The result I cited\n> before was from a '(varlen,varlen,varlen)' index, which is important,\n> but less relevant.)\n\nAnything that has a variable length in any attribute other than the\nlast; so that includes (varlen, int) and also (int, int, varlen, int,\nint, int, int).\nThe catalogs currently seem to include only one such index:\npg_proc_proname_args_nsp_index is an index on pg_proc (name (const),\noidvector (varlen), oid (const)).\n\n- Matthias\n\n[1] https://www.postgresql.org/message-id/flat/CAEze2WhyBT2bKZRdj_U0KS2Sbewa1XoO_BzgpzLC09sa5LUROg%40mail.gmail.com#fe3369c4e202a7ed468e47bf5420f530", "msg_date": "Sat, 16 Apr 2022 01:05:27 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sat, 16 Apr 2022 at 01:05, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Still to-do:\n>\n> - Validate performance and share the numbers for the same test indexes\n> in [1]. I'm planning on doing that next Monday.\n\nWhile working on benchmarking the v2 patchset, I noticed no\nimprovement on reindex, which I attributed to forgetting to also\nspecialize comparetup_index_btree in tuplesort.c. After adding the\nspecialization there as well (attached in v3), reindex performance\nimproved significantly too.\n\nPerformance results attached in pgbench_log.[master,patched], which\nincludes the summarized output. 
Notes on those results:\n\n- single-column reindex seems to have the same performance between\npatch and master, within 1% error margins.\n- multi-column indexes with useful ->attcacheoff also see no obvious\nperformance degradation\n- multi-column indexes with no useful ->attcacheoff see significant\ninsert performance improvement:\n -8% runtime on 3 text attributes with default (C) collation (\"ccl\");\n -9% runtime on 1 'en_US'-collated attribute + 2 text attributes\n(\"ccl_collated\");\n -13% runtime on 1 null + 3 text attributes (\"accl\");\n -74% runtime (!) on 31 'en_US'-collated 0-length text attributes + 1\nuuid attribute (\"worstcase\" - I could not think of a worse index shape\nthan this one).\n- reindex performance gains are much more visible: up to 84% (!) less\ntime spent for \"worstcase\", and 18-31% for the other multi-column\nindexes mentioned above.\n\nOther notes:\n- The dataset I used is the same as I used in [1]: the pp-complete.csv\nas was available on 2021-06-20, containing 26070307 rows.\n- The performance was measured on 7 runs of the attached bench script,\nusing pgbench to measure statement times etc.\n- Database was initialized with C locale, all tables are unlogged and\nsource table was VACUUM (FREEZE, ANALYZE)-ed before starting.\n- (!) On HEAD @ 5bb2b6ab, INSERT is faster than REINDEX for the\n\"worstcase\" index. I've not yet discovered why (only lightly looked\ninto it, no sort debugging), and considering that the issue does not\nappear in similar quantities in the patched version, I'm not planning\non putting a lot of time into that.\n- Per-transaction results for the run on master were accidentally\ndeleted, I don't consider them important enough to re-run the\nbenchmark.\n\n> - Decide whether / how to keep the NBTS_ENABLED flag. 
The current\n> #define in nbtree.h is a bad example of a compile-time configuration,\n> that should be changed (even if we only want to be able to disable\n> specialization at compile-time, it should be moved).\n>\n> Maybe:\n>\n> - More tests: PG already extensively tests the btree code while it is\n> running the test suite - btree is the main index AM - but more tests\n> might be needed to test the validity of the specialized code.\n\nNo work on those points yet.\n\nI'll add this to CF 2022-07 for tracking.\n\nKind regards,\n\nMatthias van de Meent.\n\n[1] https://www.postgresql.org/message-id/flat/CAEze2WhyBT2bKZRdj_U0KS2Sbewa1XoO_BzgpzLC09sa5LUROg%40mail.gmail.com#fe3369c4e202a7ed468e47bf5420f530", "msg_date": "Sun, 5 Jun 2022 21:12:36 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sun, 5 Jun 2022 at 21:12, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> While working on benchmarking the v2 patchset, I noticed no\n> improvement on reindex, which I attributed to forgetting to also\n> specialize comparetup_index_btree in tuplesorth.c. After adding the\n> specialization there as well (attached in v3), reindex performance\n> improved significantly too.\n\nPFA version 4 of this patchset. Changes:\n- Silence the compiler warnings,\n- Extract itup_attiter code into its own header, so that we don't get\ncompiler warnings and pass the cfbot builds,\n- Re-order patches to be in a more logical order,\n- Updates to the dynamic prefix compression so that we don't always do\na _bt_compare on the pages' highkey. 
memcmp(parentpage_rightsep,\nhighkey) == 0 is often true, and allows us to skip the indirect\nfunction calls in _bt_compare most of the time.\n\nBased on local measurements, this patchset improves performance for\nall key shapes, with 2% to 600+% increased throughput (2-86% faster\noperations), depending on key shape. As can be seen in the attached\npgbench output, the performance results are based on beta1 (f00a4f02,\ndated 2022-06-04) and thus not 100% current, but considering that no\nsignificant changes have been made in the btree AM code since, I\nbelieve these measurements are still quite valid.\n\nI also didn't re-run the numbers for the main branch; but I compared\nagainst the results of master in the last mail. This is because I run\nthe performance tests locally, and a 7-iteration pgbench run for\nmaster requires 9 hours of downtime with this dataset, during which I\ncan't use the system so as to not interfere with the performance\ntests. As such, I considered rerunning the benchmark for master to be\nnot worth the time/effort/cost with the little changes that were\ncommitted.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 4 Jul 2022 16:18:35 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, 4 Jul 2022 at 16:18, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Sun, 5 Jun 2022 at 21:12, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > While working on benchmarking the v2 patchset, I noticed no\n> > improvement on reindex, which I attributed to forgetting to also\n> > specialize comparetup_index_btree in tuplesorth.c. After adding the\n> > specialization there as well (attached in v3), reindex performance\n> > improved significantly too.\n>\n> PFA version 4 of this patchset. 
Changes:\n\nVersion 5 now, which is identical to v4 except for bitrot fixes to\ndeal with f58d7073.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Wed, 27 Jul 2022 09:35:24 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Wed, 27 Jul 2022 at 09:35, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Mon, 4 Jul 2022 at 16:18, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Sun, 5 Jun 2022 at 21:12, Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > While working on benchmarking the v2 patchset, I noticed no\n> > > improvement on reindex, which I attributed to forgetting to also\n> > > specialize comparetup_index_btree in tuplesorth.c. After adding the\n> > > specialization there as well (attached in v3), reindex performance\n> > > improved significantly too.\n> >\n> > PFA version 4 of this patchset. Changes:\n>\n> Version 5 now, which is identical to v4 except for bitrot fixes to\n> deal with f58d7073.\n\n... and now v6 to deal with d0b193c0 and co.\n\nI probably should've waited a bit longer this morning and checked\nmaster before sending, but that's not how it went. 
Sorry for the\nnoise.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Wed, 27 Jul 2022 13:34:52 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Wed, 27 Jul 2022 at 13:34, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 27 Jul 2022 at 09:35, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Mon, 4 Jul 2022 at 16:18, Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > >\n> > > On Sun, 5 Jun 2022 at 21:12, Matthias van de Meent\n> > > <boekewurm+postgres@gmail.com> wrote:\n> > > > While working on benchmarking the v2 patchset, I noticed no\n> > > > improvement on reindex, which I attributed to forgetting to also\n> > > > specialize comparetup_index_btree in tuplesorth.c. After adding the\n> > > > specialization there as well (attached in v3), reindex performance\n> > > > improved significantly too.\n> > >\n> > > PFA version 4 of this patchset. Changes:\n> >\n> > Version 5 now, which is identical to v4 except for bitrot fixes to\n> > deal with f58d7073.\n>\n> ... and now v6 to deal with d0b193c0 and co.\n>\n> I probably should've waited a bit longer this morning and checked\n> master before sending, but that's not how it went. Sorry for the\n> noise.\n\nHere's the dynamic prefix truncation patch on it's own (this was 0008).\n\nI'll test the performance of this tomorrow, but at least it compiles\nand passes check-world against HEAD @ 6e10631d. 
If performance doesn't\ndisappoint (isn't measurably worse in known workloads), this will be\nthe only patch in the patchset - specialization would then be dropped.\nElse, tomorrow I'll post the remainder of the patchset that\nspecializes the nbtree functions on key shape.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 31 Oct 2022 19:14:08 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "Hi Matthias,\r\n\r\nI'm going to look at this patch series if you're still interested. What was the status of your final performance testing for the 0008 patch alone vs the specialization series? Last I saw on the thread you were going to see if the specialization was required or not.\r\n\r\nBest,\r\n\r\nDavid", "msg_date": "Thu, 12 Jan 2023 15:10:42 +0000", "msg_from": "David Christensen <david@pgguru.net>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Thu, 12 Jan 2023 at 16:11, David Christensen <david@pgguru.net> wrote:\n>\n> Hi Matthias,\n>\n> I'm going to look at this patch series if you're still interested. What was the status of your final performance testing for the 0008 patch alone vs the specialization series? Last I saw on the thread you were going to see if the specialization was required or not.\n\nThank you for your interest, and sorry for the delayed response. I've\nbeen working on rebasing and polishing the patches, and hit some\nissues benchmarking the set. Attached in Perf_results.xlsx are the\nresults of my benchmarks, and a new rebased patchset.\n\nTLDR for benchmarks: There may be a small regression 0.5-1% in the\npatchset for reindex and insert-based workloads in certain corner\ncases, but I can't rule out that it's just a quirk of my testing\nsetup. 
Master was taken at eb5ad4ff, and all patches are based on that\nas well.\n\n0001 (former 0008) sees 2% performance loss on average on\nnon-optimized index insertions - this performance loss is fixed with\nthe rest of the patchset.\n\nThe patchset was reordered again: 0001 contains the dynamic prefix\ntruncation changes; 0002 and 0003 refactor and update btree code to\nspecialize on key shape, and 0004 and 0006 define the specializations.\n\n0005 is a separate addition to attcacheoff infrastructure that is\nuseful on its own; it flags an attribute with 'this attribute cannot\nhave a cached offset, look at this other attribute instead'.\n\nA significant change from previous versions is that the specialized\nfunction identifiers are published as macros, so `_bt_compare` is\npublished as a macro that (based on context) calls the specialized\nversion. This reduced a lot of cognitive overhead and churn in the\ncode.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Fri, 20 Jan 2023 20:37:58 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Fri, 20 Jan 2023 at 20:37, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Thu, 12 Jan 2023 at 16:11, David Christensen <david@pgguru.net> wrote:\n> >\n> > Hi Matthias,\n> >\n> > I'm going to look at this patch series if you're still interested. What was the status of your final performance testing for the 0008 patch alone vs the specialization series? Last I saw on the thread you were going to see if the specialization was required or not.\n>\n> Thank you for your interest, and sorry for the delayed response. I've\n> been working on rebasing and polishing the patches, and hit some\n> issues benchmarking the set. 
Attached in Perf_results.xlsx are the\n> results of my benchmarks, and a new rebased patchset.\n\nAttached v9 which rebases the patchset on b90f0b57 to deal with\ncompilation errors after d952373a. It also cleans up 0001 which\npreviously added an unrelated file, but is otherwise unchanged.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 23 Jan 2023 14:54:01 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, 23 Jan 2023 at 14:54, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Fri, 20 Jan 2023 at 20:37, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > On Thu, 12 Jan 2023 at 16:11, David Christensen <david@pgguru.net> wrote:\n> > > Last I saw on the thread you were going to see if the specialization was required or not.\n> >\n> > Thank you for your interest, and sorry for the delayed response. I've\n> > been working on rebasing and polishing the patches, and hit some\n> > issues benchmarking the set. Attached in Perf_results.xlsx are the\n> > results of my benchmarks, and a new rebased patchset.\n\nAttached v10 which fixes one compile warning, and fixes\nheaderscheck/cpluspluscheck by adding nbtree_spec.h and\nnbtree_specfuncs.h to ignored header files. It also fixes some cases\nof later modifications of earlier patches' code where the change\nshould be incorporated in the earlier patch instead.\n\nI think this is ready for review.\n\nThe top-level design of the patchset:\n\n0001 modifies btree descent code to use dynamic prefix compression,\ni.e. 
skip comparing columns in binary search when the same column on\ntuples on both the left and the right of this tuple have been compared\nas 'equal'.\n\nIt also includes an optimized path when the downlink's tuple's right\nneighbor's data is bytewise equal to the highkey of the page we\ndescended onto - in those cases we don't need to run _bt_compare on\nthe index tuple as we know that the result will be the same as that of\nthe downlink tuple, i.e. it compare as \"less than\".\n\nNOTE that this patch when applied as stand-alone adds overhead for all\nindexes, with the benefits of the patch limited to non-unique indexes\nor indexes where the uniqueness is guaranteed only at later\nattributes. Later patches in the patchset return performance to a\nsimilar level as before 0001 for the impacted indexes.\n\n0002 creates a scaffold for specializing nbtree functions, and moves\nthe functions I selected for specialization into separate files. Those\nseparate files (postfixed with _spec) are included in the original\nfiles through inclusion of the nbtree specialization header file with\na macro variable. The code itself has not materially changed yet at\nthis point.\n\n0003 updates the functions selected in 0002 to utilize the\nspecializable attribute iterator macros instead of manual attribute\niteration.\n\nThen, 0004 adds specialization for single-attribute indexes,\n0005 adds a helper function for populating attcacheoff (which is\nseparately useful too, but essential in this patchset), and\n0006 adds specialization for multi-column indexes of which the offsets\nof the last key column cannot be known.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Wed, 8 Feb 2023 19:46:12 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "Hm. 
The cfbot has a fairly trivial issue with this with a unused variable:\n\n18:36:17.405] In file included from ../../src/include/access/nbtree.h:1184,\n[18:36:17.405] from verify_nbtree.c:27:\n[18:36:17.405] verify_nbtree.c: In function ‘palloc_btree_page’:\n[18:36:17.405] ../../src/include/access/nbtree_spec.h:51:23: error:\nunused variable ‘__nbts_ctx’ [-Werror=unused-variable]\n[18:36:17.405] 51 | #define NBTS_CTX_NAME __nbts_ctx\n[18:36:17.405] | ^~~~~~~~~~\n[18:36:17.405] ../../src/include/access/nbtree_spec.h:54:43: note: in\nexpansion of macro ‘NBTS_CTX_NAME’\n[18:36:17.405] 54 | #define NBTS_MAKE_CTX(rel) const NBTS_CTX\nNBTS_CTX_NAME = _nbt_spec_context(rel)\n[18:36:17.405] | ^~~~~~~~~~~~~\n[18:36:17.405] ../../src/include/access/nbtree_spec.h:264:28: note: in\nexpansion of macro ‘NBTS_MAKE_CTX’\n[18:36:17.405] 264 | #define nbts_prep_ctx(rel) NBTS_MAKE_CTX(rel)\n[18:36:17.405] | ^~~~~~~~~~~~~\n[18:36:17.405] verify_nbtree.c:2974:2: note: in expansion of macro\n‘nbts_prep_ctx’\n[18:36:17.405] 2974 | nbts_prep_ctx(NULL);\n[18:36:17.405] | ^~~~~~~~~~~~~\n\n\n", "msg_date": "Tue, 4 Apr 2023 11:42:54 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, 4 Apr 2023 at 17:43, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n>\n> Hm. The cfbot has a fairly trivial issue with this with a unused variable:\n\nAttached a rebase on top of HEAD @ 8cca660b to make the patch apply\nagain. I think this is ready for review once again. 
As the patchset\nhas seen no significant changes since v8 of the patchset this January\n[0], I've added a description of the changes below.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n= Description of the patchset so far:\n\nThis patchset implements two features that *taken together* improve\nthe performance of our btree implementation:\n\n== Dynamic prefix truncation (0001)\nThe code now tracks how many prefix attributes of the scan key are\nalready considered equal based on earlier binsrch results, and ignores\nthose prefix columns in further binsrch operations (sorted list; if\nboth the high and low value of your range have the same prefix, the\nmiddle value will have that prefix, too). This reduces the number of\ncalls into opclass-supplied (dynamic) compare functions, and thus\nincreases performance for multi-key-attribute indexes where shared\nprefixes are common (e.g. index on (customer, order_id)).\n\n== Index key shape code specialization (0002-0006)\nIndex tuple attribute accesses for multi-column indexes are often done\nthrough index_getattr, which gets very expensive for indexes which\ncannot use attcacheoff. However, single-key and attcacheoff-able\nindexes do benefit greatly from the attcacheoff optimization, so we\ncan't just stop applying the optimization. This is why the second part\nof the patchset (0002 and up) adds infrastructure to generate\nspecialized code paths that access key attributes in the most\nefficient way it knows of: Single key attributes do not go through\nloops/conditionals for which attribute it is (except certain\nexceptions, 0004), attcacheoff-able indexes get the same treatment as\nthey do now, and indexes where attcacheoff cannot be used for all key\nattributes will get a special attribute iterator that incrementally\ncalculates the offset of each attribute in the current index tuple\n(0005+0006).\n\nAlthough patch 0002 is large, most of the modified lines are functions\nbeing moved into different files. 
Once 0002 is understood, the\nfollowing patches should be fairly easy to understand as well.\n\n= Why both features in one patchset?\n\nThe overhead of keeping track of the prefix in 0001 can add up to\nseveral % of performance lost for the common index shape where dynamic\nprefix truncation cannot be applied (measured 5-9%); single-column\nunique indexes are the most sensitive to this. By adding the\nsingle-key-column code specialization in 0004, we reduce other types\nof overhead for the indexes most affected, which thus compensates for\nthe additional overhead in 0001, resulting in a net-neutral result.\n\n= Benchmarks\n\nI haven't re-run the benchmarks for this since v8 at [0] as I haven't\nmodified the patch significantly after that patch - only compiler\ncomplaint fixes and changes required for rebasing. The results from\nthat benchmark: improvements vary between 'not significantly different\nfrom HEAD' to '250+% improved throughput for select INSERT workloads,\nand 360+% improved for select REINDEX workloads'. Graphs from that\nbenchmark are also attached now; as LibreOffice Calc wasn't good at\nexporting the sheet with working graphs.\n\n[0] https://www.postgresql.org/message-id/CAEze2WixWviBYTWXiFLbD3AuLT4oqGk_MykS_ssB=TudeZ=ajQ@mail.gmail.com", "msg_date": "Thu, 22 Jun 2023 22:50:35 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Fri, Jun 23, 2023 at 2:21 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n\n> == Dynamic prefix truncation (0001)\n> The code now tracks how many prefix attributes of the scan key are\n> already considered equal based on earlier binsrch results, and ignores\n> those prefix colums in further binsrch operations (sorted list; if\n> both the high and low value of your range have the same prefix, the\n> middle value will have that prefix, too). 
This reduces the number of\n> calls into opclass-supplied (dynamic) compare functions, and thus\n> increase performance for multi-key-attribute indexes where shared\n> prefixes are common (e.g. index on (customer, order_id)).\n\nI think the idea looks good to me.\n\nI was looking into the 0001 patches, and I have one confusion in the\nbelow hunk in the _bt_moveright function, basically, if the parent\npage's right key is exactly matching the HIGH key of the child key\nthen I suppose while doing the \"_bt_compare\" with the HIGH_KEY we can\nuse the optimization right, i.e. column number from where we need to\nstart the comparison should be used what is passed by the caller. But\nin the below hunk, you are always passing that as 'cmpcol' which is 1.\nI think this should be '*comparecol' because '*comparecol' will either\nhold the value passed by the parent if high key data exactly match\nwith the parent's right tuple or it will hold 1 in case it doesn't\nmatch. Am I missing something?\n\n\n@@ -247,13 +256,16 @@ _bt_moveright(Relation rel,\n{\n....\n+ if (P_IGNORE(opaque) ||\n+ _bt_compare(rel, key, page, P_HIKEY, &cmpcol) >= cmpval)\n+ {\n+ *comparecol = 1;\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Jun 2023 14:56:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Fri, 23 Jun 2023 at 11:26, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jun 23, 2023 at 2:21 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n>\n> > == Dynamic prefix truncation (0001)\n> > The code now tracks how many prefix attributes of the scan key are\n> > already considered equal based on earlier binsrch results, and ignores\n> > those prefix colums in further binsrch operations (sorted list; if\n> > both the high and low value of your range have the same 
prefix, the\n> > middle value will have that prefix, too). This reduces the number of\n> > calls into opclass-supplied (dynamic) compare functions, and thus\n> > increase performance for multi-key-attribute indexes where shared\n> > prefixes are common (e.g. index on (customer, order_id)).\n>\n> I think the idea looks good to me.\n>\n> I was looking into the 0001 patches,\n\nThanks for reviewing.\n\n> and I have one confusion in the\n> below hunk in the _bt_moveright function, basically, if the parent\n> page's right key is exactly matching the HIGH key of the child key\n> then I suppose while doing the \"_bt_compare\" with the HIGH_KEY we can\n> use the optimization right, i.e. column number from where we need to\n> start the comparison should be used what is passed by the caller. But\n> in the below hunk, you are always passing that as 'cmpcol' which is 1.\n> I think this should be '*comparecol' because '*comparecol' will either\n> hold the value passed by the parent if high key data exactly match\n> with the parent's right tuple or it will hold 1 in case it doesn't\n> match. Am I missing something?\n\nWe can't carry _bt_compare prefix results across pages, because the\nkey range of a page may shift while we're not holding a lock on that\npage. That's also why the code resets the prefix to 1 every time it\naccesses a new page ^1: it cannot guarantee correct results otherwise.\nSee also [0] and [1] for why that is important.\n\n^1: When following downlinks, the code above your quoted code tries to\nreuse the _bt_compare result of the parent page in the common case of\na child page's high key that is bytewise equal to the right separator\ntuple of the parent page's downlink to this page. However, if it\ndetects that this child high key has changed (i.e. 
not 100% bytewise\nequal), we can't reuse that result, and we'll have to re-establish all\nprefix info on that page from scratch.\nIn any case, this only establishes the prefix for the right half of\nthe page's keyspace, the prefix of the left half of the data still\nneeds to be established separately.\n\nI hope this explains the reasons for why we can't reuse comparecol as\n_bt_compare argument.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n[0] https://www.postgresql.org/message-id/CAH2-Wzn_NAyK4pR0HRWO0StwHmxjP5qyu+X8vppt030XpqrO6w@mail.gmail.com\n\n\n", "msg_date": "Fri, 23 Jun 2023 16:46:02 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Fri, Jun 23, 2023 at 8:16 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Fri, 23 Jun 2023 at 11:26, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n> > and I have one confusion in the\n> > below hunk in the _bt_moveright function, basically, if the parent\n> > page's right key is exactly matching the HIGH key of the child key\n> > then I suppose while doing the \"_bt_compare\" with the HIGH_KEY we can\n> > use the optimization right, i.e. column number from where we need to\n> > start the comparison should be used what is passed by the caller. But\n> > in the below hunk, you are always passing that as 'cmpcol' which is 1.\n> > I think this should be '*comparecol' because '*comparecol' will either\n> > hold the value passed by the parent if high key data exactly match\n> > with the parent's right tuple or it will hold 1 in case it doesn't\n> > match. Am I missing something?\n>\n> We can't carry _bt_compare prefix results across pages, because the\n> key range of a page may shift while we're not holding a lock on that\n> page. 
That's also why the code resets the prefix to 1 every time it\n> accesses a new page ^1: it cannot guarantee correct results otherwise.\n> See also [0] and [1] for why that is important.\n\nYeah that makes sense\n\n> ^1: When following downlinks, the code above your quoted code tries to\n> reuse the _bt_compare result of the parent page in the common case of\n> a child page's high key that is bytewise equal to the right separator\n> tuple of the parent page's downlink to this page. However, if it\n> detects that this child high key has changed (i.e. not 100% bytewise\n> equal), we can't reuse that result, and we'll have to re-establish all\n> prefix info on that page from scratch.\n> In any case, this only establishes the prefix for the right half of\n> the page's keyspace, the prefix of the left half of the data still\n> needs to be established separetely.\n>\n> I hope this explains the reasons for why we can't reuse comparecol as\n> _bt_compare argument.\n\nYeah got it, thanks for explaining this. Now I see you have explained\nthis in comments as well above the memcmp() statement.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Jun 2023 09:42:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, Jun 27, 2023 at 9:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jun 23, 2023 at 8:16 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Fri, 23 Jun 2023 at 11:26, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > > and I have one confusion in the\n> > > below hunk in the _bt_moveright function, basically, if the parent\n> > > page's right key is exactly matching the HIGH key of the child key\n> > > then I suppose while doing the \"_bt_compare\" with the HIGH_KEY we can\n> > > use the optimization right, i.e. 
column number from where we need to\n> > > start the comparison should be used what is passed by the caller. But\n> > > in the below hunk, you are always passing that as 'cmpcol' which is 1.\n> > > I think this should be '*comparecol' because '*comparecol' will either\n> > > hold the value passed by the parent if high key data exactly match\n> > > with the parent's right tuple or it will hold 1 in case it doesn't\n> > > match. Am I missing something?\n> >\n> > We can't carry _bt_compare prefix results across pages, because the\n> > key range of a page may shift while we're not holding a lock on that\n> > page. That's also why the code resets the prefix to 1 every time it\n> > accesses a new page ^1: it cannot guarantee correct results otherwise.\n> > See also [0] and [1] for why that is important.\n>\n> Yeah that makes sense\n>\n> > ^1: When following downlinks, the code above your quoted code tries to\n> > reuse the _bt_compare result of the parent page in the common case of\n> > a child page's high key that is bytewise equal to the right separator\n> > tuple of the parent page's downlink to this page. However, if it\n> > detects that this child high key has changed (i.e. not 100% bytewise\n> > equal), we can't reuse that result, and we'll have to re-establish all\n> > prefix info on that page from scratch.\n> > In any case, this only establishes the prefix for the right half of\n> > the page's keyspace, the prefix of the left half of the data still\n> > needs to be established separetely.\n> >\n> > I hope this explains the reasons for why we can't reuse comparecol as\n> > _bt_compare argument.\n>\n> Yeah got it, thanks for explaining this. 
Now I see you have explained\n> this in comments as well above the memcmp() statement.\n\nAt high level 0001 looks fine to me, just some suggestions\n\n1.\n+Notes about dynamic prefix truncation\n+-------------------------------------\n\nI feel instead of calling it \"dynamic prefix truncation\" should we can\ncall it \"dynamic prefix skipping\", I mean we are not\nreally truncating anything right, we are just skipping those\nattributes in comparison?\n\n2.\nI think we should add some comments in the _bt_binsrch() function\nwhere we are having main logic around maintaining highcmpcol and\nlowcmpcol.\nI think the notes section explains that very clearly but adding some\ncomments here would be good and then reference to that section in the\nREADME.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Jun 2023 10:27:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, 27 Jun 2023 at 06:57, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> At high level 0001 looks fine to me, just some suggestions\n\nThanks for the review.\n\n> 1.\n> +Notes about dynamic prefix truncation\n> +-------------------------------------\n>\n> I feel instead of calling it \"dynamic prefix truncation\" should we can\n> call it \"dynamic prefix skipping\", I mean we are not\n> really truncating anything right, we are just skipping those\n> attributes in comparison?\n\nThe reason I am using \"prefix truncation\" is that that is a fairly\nwell-known term in literature (together with \"prefix compression\"),\nand it was introduced on this list with that name by Peter in 2018\n[0]. 
I've seen no good reason to change terminology; especially\nconsidering that normal \"static\" prefix truncation/compression is also\nsomewhere on my to-do list.\n\n> 2.\n> I think we should add some comments in the _bt_binsrch() function\n> where we are having main logic around maintaining highcmpcol and\n> lowcmpcol.\n> I think the notes section explains that very clearly but adding some\n> comments here would be good and then reference to that section in the\n> README.\n\nUpdated in the attached version 12 of the patchset (which is also\nrebased on HEAD @ 9c13b681). No changes apart from rebase fixes and\nthese added comments.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/CAH2-Wzn_NAyK4pR0HRWO0StwHmxjP5qyu+X8vppt030XpqrO6w@mail.gmail.com", "msg_date": "Wed, 30 Aug 2023 21:50:28 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Wed, 30 Aug 2023 at 21:50, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> Updated in the attached version 12 of the patchset (which is also\n> rebased on HEAD @ 9c13b681). 
No changes apart from rebase fixes and\n> these added comments.\n\nRebased again to v13 to account for API changes in 9f060253 \"Remove\nsome more \"snapshot too old\" vestiges.\"\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 18 Sep 2023 17:56:00 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, 18 Sept 2023 at 17:56, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 30 Aug 2023 at 21:50, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > Updated in the attached version 12 of the patchset (which is also\n> > rebased on HEAD @ 9c13b681). No changes apart from rebase fixes and\n> > these added comments.\n>\n> Rebased again to v13 to account for API changes in 9f060253 \"Remove\n> some more \"snapshot too old\" vestiges.\"\n\n... and now attached.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 18 Sep 2023 17:57:28 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, Sep 18, 2023 at 8:57 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Rebased again to v13 to account for API changes in 9f060253 \"Remove\n> > some more \"snapshot too old\" vestiges.\"\n>\n> ... and now attached.\n\nI see that this revised version is approximately as invasive as\nearlier versions were - it has specializations for almost everything.\nDo you really need to specialize basically all of nbtdedup.c, for\nexample?\n\nIn order for this patch series to have any hope of getting committed,\nthere needs to be significant work on limiting the amount of code\nchurn, and resulting object code size. 
There are various problems that\ncome from whole-sale specializing all of this code. There are\ndistributed costs -- costs that won't necessarily be in evidence from\nmicrobenchmarks.\n\nIt might be worth familiarizing yourself with bloaty, a tool for\nprofiling the size of binaries:\n\nhttps://github.com/google/bloaty\n\nIs it actually sensible to tie dynamic prefix compression to\neverything else here? Apparently there is a regression for certain\ncases caused by that patch (the first one), which necessitates making\nup the difference in later patches. But...isn't it also conceivable\nthat some completely different optimization could do that for us\ninstead? Why is there a regression from v13-0001-*? Can we just fix\nthe regression directly? And if not, why not?\n\nI also have significant doubts about your scheme for avoiding\ninvalidating the bounds of the page based on its high key matching the\nparent's separator. The subtle dynamic prefix compression race\ncondition that I was worried about was one caused by page deletion.\nBut page deletion doesn't change the high key at all (it does that for\nthe deleted page, but that's hardly relevant). So how could checking\nthe high key possibly help?\n\nPage deletion will make the pivot tuple in the parent page whose\ndownlink originally pointed to the concurrently deleted page change,\nso that it points to the deleted page's original right sibling page\n(the sibling being the page that you need to worry about). This means\nthat the lower bound for the not-deleted right sibling page has\nchanged from under us. And that we lack any simple way of detecting\nthat it might have happened.\n\nThe race that I'm worried about is extremely narrow, because it\ninvolves a page deletion and a concurrent insert into the key space\nthat was originally covered by the deleted page. 
It's extremely\nunlikely to happen in the real world, but it's still a bug.\n\nIt's possible that it'd make sense to do a memcmp() of the high key\nusing a copy of a separator from the parent page. That at least seems\nlike it could be made safe. But I don't see what it has to do with\ndynamic prefix compression. In any case there is a simpler way to\navoid the high key check for internal pages: do the _bt_binsrch first,\nand only consider _bt_moveright when the answer that _bt_binsrch gave\nsuggests that we might actually need to call _bt_moveright.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 18 Sep 2023 18:29:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, Sep 18, 2023 at 6:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I also have significant doubts about your scheme for avoiding\n> invalidating the bounds of the page based on its high key matching the\n> parent's separator. The subtle dynamic prefix compression race\n> condition that I was worried about was one caused by page deletion.\n> But page deletion doesn't change the high key at all (it does that for\n> the deleted page, but that's hardly relevant). So how could checking\n> the high key possibly help?\n\nTo be clear, page deletion does what I described here (it does an\nin-place update of the downlink to the deleted page, so the same pivot\ntuple now points to its right sibling, which is our page of concern),\nin addition to fully removing the original pivot tuple whose downlink\noriginally pointed to our page of concern. 
This is why page deletion\nmakes the key space \"move to the right\", very much like a page split\nwould.\n\nIMV it would be better if it made the key space \"move to the left\"\ninstead, which would make page deletion close to the exact opposite of\na page split -- that's what the Lanin & Shasha paper does (sort of).\nIf you have this symmetry, then things like dynamic prefix compression\nare a lot simpler.\n\nISTM that the only way that a scheme like yours could work, assuming\nthat making page deletion closer to Lanin & Shasha is not going to\nhappen, is something even more invasive than that: it might work if\nyou had a page low key (not just a high key) on every page. You'd have\nto compare the lower bound separator key from the parent (which might\nitself be the page-level low key for the parent) to the page low key.\nThat's not a serious suggestion; I'm just pointing out that you need\nto be able to compare like with like for a canary condition like this\none, and AFAICT there is no lightweight practical way of doing that\nthat is 100% robust.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 18 Sep 2023 18:56:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, 19 Sept 2023 at 03:56, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Sep 18, 2023 at 6:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I also have significant doubts about your scheme for avoiding\n> > invalidating the bounds of the page based on its high key matching the\n> > parent's separator. The subtle dynamic prefix compression race\n> > condition that I was worried about was one caused by page deletion.\n> > But page deletion doesn't change the high key at all (it does that for\n> > the deleted page, but that's hardly relevant). 
So how could checking\n> > the high key possibly help?\n>\n> To be clear, page deletion does what I described here (it does an\n> in-place update of the downlink to the deleted page, so the same pivot\n> tuple now points to its right sibling, which is our page of concern),\n> in addition to fully removing the original pivot tuple whose downlink\n> originally pointed to our page of concern. This is why page deletion\n> makes the key space \"move to the right\", very much like a page split\n> would.\n\nI am still aware of this issue, and I think we've discussed it in\ndetail earlier. I think it does not really impact this patchset. Sure,\nI can't use dynamic prefix compression to its full potential, but I\nstill do get serious performance benefits:\n\nFULL KEY _bt_compare calls:\n'Optimal' full-tree DPT: average O(3)\nPaged DPT (this patch): average O(2 * height)\n ... without HK opt: average O(3 * height)\nCurrent: O(log2(n))\n\nSingle-attribute compares:\n'Optimal' full-tree DPT: O(log(N))\nPaged DPT (this patch): O(log(N))\nCurrent: 0 (or, O(log(N) * natts))\n\nSo, in effect, this patch moves most compare operations to the level\nof only one or two full key compare operations per page (on average).\n\nI use \"on average\": on a sorted array with values ranging from\npotentially minus infinity to positive infinity, it takes on average 3\ncompares before a binary search can determine the bounds of the\nkeyspace it has still to search. 
If one side's bounds is already\nknown, it takes on average 2 compare operations before these bounds\nare known.\n\n> IMV it would be better if it made the key space \"move to the left\"\n> instead, which would make page deletion close to the exact opposite of\n> a page split -- that's what the Lanin & Shasha paper does (sort of).\n> If you have this symmetry, then things like dynamic prefix compression\n> are a lot simpler.\n>\n> ISTM that the only way that a scheme like yours could work, assuming\n> that making page deletion closer to Lanin & Shasha is not going to\n> happen, is something even more invasive than that: it might work if\n> you had a page low key (not just a high key) on every page.\n\nNote that the \"dynamic prefix compression\" is currently only active on\nthe page level.\n\nTrue, the patch does carry over _bt_compare's prefix result for the\nhigh key on the child page, but we do that only if the highkey is\nactually an exact copy of the right separator on the parent page. 
This\ncarry-over opportunity is extremely likely to happen, because the high\nkey generated in _bt_split is then later inserted on the parent page.\nThe only case where it could differ is in concurrent page deletions.\nThat is thus a case of betting a few cycles to commonly save many\ncycles (memcmp vs _bt_compare full key compare.\n\nAgain, we do not actually skip a prefix on the compare call of the\nP_HIGHKEY tuple, nor for the compares of the midpoints unless we've\nfound a tuple on the page that compares as smaller than the search\nkey.\n\n> You'd have\n> to compare the lower bound separator key from the parent (which might\n> itself be the page-level low key for the parent) to the page low key.\n> That's not a serious suggestion; I'm just pointing out that you need\n> to be able to compare like with like for a canary condition like this\n> one, and AFAICT there is no lightweight practical way of doing that\n> that is 100% robust.\n\nTrue, if we had consistent LOWKEYs on pages, that'd make this job much\neasier: the prefix could indeed be carried over in full. 
But that's\nnot currently the case for the nbtree code, and this is the next best\nthing, as it also has the benefit of working with all currently\nsupported physical formats of btree indexes.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 19 Sep 2023 15:28:26 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, Sep 19, 2023 at 6:28 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > To be clear, page deletion does what I described here (it does an\n> > in-place update of the downlink to the deleted page, so the same pivot\n> > tuple now points to its right sibling, which is our page of concern),\n> > in addition to fully removing the original pivot tuple whose downlink\n> > originally pointed to our page of concern. This is why page deletion\n> > makes the key space \"move to the right\", very much like a page split\n> > would.\n>\n> I am still aware of this issue, and I think we've discussed it in\n> detail earlier. I think it does not really impact this patchset. Sure,\n> I can't use dynamic prefix compression to its full potential, but I\n> still do get serious performance benefits:\n\nThen why have you linked whatever the first patch does with the high\nkey to dynamic prefix compression in the first place? Your commit\nmessage makes it sound like it's a way to get around the race\ncondition that affects dynamic prefix compression, but as far as I can\ntell it has nothing whatsoever to do with that race condition.\n\nQuestions:\n\n1. Why shouldn't the high key thing be treated as an unrelated piece of work?\n\nI guess it's possible that it really should be structured that way,\nbut even then it's your responsibility to make it clear why that is.\nAs things stand, this presentation is very confusing.\n\n2. 
Separately, why should dynamic prefix compression be tied to the\nspecialization work? I also see no principled reason why it should be\ntied to the other two things.\n\nI didn't mind this sort of structure so much back when this work was\nvery clearly exploratory -- I've certainly structured work in this\narea that way myself, in the past. But if you want this patch set to\never go beyond being an exploratory patch set, something has to\nchange. I don't have time to do a comprehensive (or even a fairly\ncursory) analysis of which parts of the patch are helping, and which\nare marginal or even add no value.\n\n> > You'd have\n> > to compare the lower bound separator key from the parent (which might\n> > itself be the page-level low key for the parent) to the page low key.\n> > That's not a serious suggestion; I'm just pointing out that you need\n> > to be able to compare like with like for a canary condition like this\n> > one, and AFAICT there is no lightweight practical way of doing that\n> > that is 100% robust.\n>\n> True, if we had consistent LOWKEYs on pages, that'd make this job much\n> easier: the prefix could indeed be carried over in full. But that's\n> not currently the case for the nbtree code, and this is the next best\n> thing, as it also has the benefit of working with all currently\n> supported physical formats of btree indexes.\n\nI went over the low key thing again because I had to struggle to\nunderstand what your high key optimization had to do with dynamic\nprefix compression. I'm still struggling. I think that your commit\nmessage very much led me astray. 
Quoting it here:\n\n\"\"\"\nAlthough this limits the overall applicability of the\nperformance improvement, it still allows for a nice performance\nimprovement in most cases where initial columns have many\nduplicate values and a compare function that is not cheap.\n\nAs an exception to the above rule, most of the time a pages'\nhighkey is equal to the right seperator on the parent page due to\nhow btree splits are done. By storing this right seperator from\nthe parent page and then validating that the highkey of the child\npage contains the exact same data, we can restore the right prefix\nbound without having to call the relatively expensive _bt_compare.\n\"\"\"\n\nYou're directly tying the high key optimization to the dynamic prefix\ncompression optimization. But why?\n\nI have long understood that you gave up on the idea of keeping the\nbounds across levels of the tree (which does make sense to me), but\nyesterday the issue became totally muddled by this high key business.\nThat's why I rehashed the earlier discussion, which I had previously\nunderstood to be settled.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 19 Sep 2023 13:48:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Tue, 19 Sept 2023 at 22:49, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Sep 19, 2023 at 6:28 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > To be clear, page deletion does what I described here (it does an\n> > > in-place update of the downlink to the deleted page, so the same pivot\n> > > tuple now points to its right sibling, which is our page of concern),\n> > > in addition to fully removing the original pivot tuple whose downlink\n> > > originally pointed to our page of concern. 
This is why page deletion\n> > > makes the key space \"move to the right\", very much like a page split\n> > > would.\n> >\n> > I am still aware of this issue, and I think we've discussed it in\n> > detail earlier. I think it does not really impact this patchset. Sure,\n> > I can't use dynamic prefix compression to its full potential, but I\n> > still do get serious performance benefits:\n>\n> Then why have you linked whatever the first patch does with the high\n> key to dynamic prefix compression in the first place? Your commit\n> message makes it sound like it's a way to get around the race\n> condition that affects dynamic prefix compression, but as far as I can\n> tell it has nothing whatsoever to do with that race condition.\n\nWe wouldn't have to store the downlink's right separator and compare\nit to the highkey if we didn't deviate from L&Y's algorithm for DELETE\noperations (which causes the race condition): just the right sibling's\nblock number would be enough.\n\n(Yes, the right sibling's block number isn't available for the\nrightmost downlink of a page. In those cases, we'd have to reuse the\nparent page's high key with that of the downlink page, but I suppose\nthat'll be relatively rare).\n\n> Questions:\n>\n> 1. Why shouldn't the high key thing be treated as an unrelated piece of work?\n\nBecause it was only significant and relatively visible after getting\nrid of the other full key compare operations, and it touches\nessentially the same areas. Splitting them out in more patches would\nbe a hassle.\n\n> I guess it's possible that it really should be structured that way,\n> but even then it's your responsibility to make it clear why that is.\n\nSure. But I think I've made that clear upthread too.\n\n> As things stand, this presentation is very confusing.\n\nI'll take a look at improving the presentation.\n\n> 2. Separately, why should dynamic prefix compression be tied to the\n> specialization work? 
I also see no principled reason why it should be\n> tied to the other two things.\n\nMy performance results show that insert performance degrades by 2-3%\nfor single-column indexes if only dynamic the prefix truncation patch\nis applied [0]. The specialization patches fix that regression on my\nmachine (5950x) due to having optimized code for the use case. I can't\nsay for certain that other machines will see the same results, but I\nthink results will at least be similar.\n\n> I didn't mind this sort of structure so much back when this work was\n> very clearly exploratory -- I've certainly structured work in this\n> area that way myself, in the past. But if you want this patch set to\n> ever go beyond being an exploratory patch set, something has to\n> change.\n\nI think it's fairly complete, and mostly waiting for review.\n\n> I don't have time to do a comprehensive (or even a fairly\n> cursory) analysis of which parts of the patch are helping, and which\n> are marginal or even add no value.\n\nIt is a shame that you don't have the time to review this patch.\n\n> > > You'd have\n> > > to compare the lower bound separator key from the parent (which might\n> > > itself be the page-level low key for the parent) to the page low key.\n> > > That's not a serious suggestion; I'm just pointing out that you need\n> > > to be able to compare like with like for a canary condition like this\n> > > one, and AFAICT there is no lightweight practical way of doing that\n> > > that is 100% robust.\n> >\n> > True, if we had consistent LOWKEYs on pages, that'd make this job much\n> > easier: the prefix could indeed be carried over in full. 
But that's\n> > not currently the case for the nbtree code, and this is the next best\n> > thing, as it also has the benefit of working with all currently\n> > supported physical formats of btree indexes.\n>\n> I went over the low key thing again because I had to struggle to\n> understand what your high key optimization had to do with dynamic\n> prefix compression. I'm still struggling. I think that your commit\n> message very much led me astray. Quoting it here:\n>\n> \"\"\"\n> Although this limits [...] relatively expensive _bt_compare.\n> \"\"\"\n>\n> You're directly tying the high key optimization to the dynamic prefix\n> compression optimization. But why?\n\nThe value of skipping the _bt_compare call on the highkey is\nrelatively much higher in the prefix-skip case than it is on master,\nas on master it's only one of the log(n) _bt_compare calls on the\npage, while in the patch it's one of (on average) 3 full key\n_bt_compare calls. This makes it much easier to prove the performance\ngain, which made me integrate it into that patch instead of keeping it\nseparate.\n\n> I have long understood that you gave up on the idea of keeping the\n> bounds across levels of the tree (which does make sense to me), but\n> yesterday the issue became totally muddled by this high key business.\n> That's why I rehashed the earlier discussion, which I had previously\n> understood to be settled.\n\nUnderstood. 
I'll see if I can improve the wording to something that is\nmore clear about what the optimization entails.\n\nI'm planning to have these documentation changes to be included in the\nnext revision of the patchset, which will probably also reduce the\nnumber of specialized functions (and with it the size of the binary).\nIt will take some extra time, because I would need to re-run the\nperformance suite, but the code changes should be very limited when\ncompared to the current patch (apart from moving code between .c and\n_spec.c).\n\n---\n\nThe meat of the changes are probably in 0001 (dynamic prefix skip),\n0003 (change attribute iteration code to use specializable macros),\nand 0006 (index attribute iteration for variable key offsets). 0002 is\nmostly mechanical code movement, 0004 is a relatively easy\nimplementation of the iteration functionality for single-key-column\nindexes, and 0005 adds an instrument for improving the efficiency of\nattcacheoff by implementing negatively cached values (\"cannot be\ncached\", instead of just \"isn't cached\") which are then used in 0006.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/CAEze2Wh_3%2B_Q%2BBefaLrpdXXR01vKr3R2R%3Dh5gFxR%2BU4%2B0Z%3D40w%40mail.gmail.com\n\n\n", "msg_date": "Mon, 25 Sep 2023 18:13:19 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, Sep 25, 2023 at 9:13 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I think it's fairly complete, and mostly waiting for review.\n>\n> > I don't have time to do a comprehensive (or even a fairly\n> > cursory) analysis of which parts of the patch are helping, and which\n> > are marginal or even add no value.\n>\n> It is a shame that you don't have the time to review this patch.\n\nI didn't say that. 
Just that I don't have the time (or more like the\ninclination) to do most or all of the analysis that might allow us to\narrive at a commitable patch. Most of the work with something like\nthis is the analysis of the trade-offs, not writing code. There are\nall kinds of trade-offs that you could make with something like this,\nand the prospect of doing that myself is kind of daunting. Ideally\nyou'd have made a significant start on that at this point.\n\n> > I have long understood that you gave up on the idea of keeping the\n> > bounds across levels of the tree (which does make sense to me), but\n> > yesterday the issue became totally muddled by this high key business.\n> > That's why I rehashed the earlier discussion, which I had previously\n> > understood to be settled.\n>\n> Understood. I'll see if I can improve the wording to something that is\n> more clear about what the optimization entails.\n\nCool.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:35:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Thu, 26 Oct 2023 at 00:36, Peter Geoghegan <pg@bowt.ie> wrote:\n> Most of the work with something like\n> this is the analysis of the trade-offs, not writing code. There are\n> all kinds of trade-offs that you could make with something like this,\n> and the prospect of doing that myself is kind of daunting. Ideally\n> you'd have made a significant start on that at this point.\n\nI believe I'd already made most trade-offs clear earlier in the\nthreads, along with rationales for the changes in behaviour. But here\ngoes again:\n\n_bt_compare currently uses index_getattr() on each attribute of the\nkey. index_getattr() is O(n) for the n-th attribute if the index tuple\nhas any null or non-attcacheoff attributes in front of the current\none. 
Thus, _bt_compare costs O(n^2) work with n=the number of\nattributes, which can cost several % of performance, but is very very\nbad in extreme cases, and does O(n) calls to opclass-supplied compare\noperations.\n\nTo solve most of the O(n) compare operations, we can optimize\n_bt_compare to only compare \"interesting\" attributes, i.e. we can\napply \"dynamic prefix truncation\". This is implemented by patch 0001.\nThis is further enhanced with 0002, where we skip the compare\noperations if the HIKEY is the same as the right separator of the\ndownlink we followed (due to our page split code, this case is\nextremely likely).\n\nHowever, the above only changes the attribute indexing code in\n_bt_compare to O(n) for at most about 76% of the index tuples on the\npage (1 - (2 / log2(max_tuples_per_page))), while the other on average\n20+% of the compare operations still have to deal with the O(n^2)\ntotal complexity of index_getattr.\nTo fix this O(n^2) issue (the issue this thread was originally created\nfor) the approach I implemented originally is to not use index_getattr\nbut an \"attribute iterator\" that incrementally extracts the next\nattribute, while keeping track of the current offset into the tuple,\nso each next attribute would be O(1). That is implemented in the last\npatches of the patchset.\n\nThis attribute iterator approach has an issue: It doesn't perform very\nwell for indexes that make full use of attcacheoff. The bookkeeping\nfor attribute iteration proved to be much more expensive than just\nreading attcacheoff from memory. This is why the latter patches\n(patchset 14 0003+) adapt the btree code to generate different paths\nfor different \"shapes\" of key index attributes, to allow the current\nattcacheoff code to keep its performance, but to get higher\nperformance for indexes where the attcacheoff optimization can not be\napplied. 
In passing, it also specializes the code for single-attribute\nindexes, so that they don't have to manage branching code, increasing\ntheir performance, too.\n\nTLDR:\nThe specialization in 0003+ is applied because index_getattr is good\nwhen attcacheoff applies, but very bad when it doesn't. Attribute\niteration is worse than index_getattr when attcacheoff applies, but is\nsignificantly better when attcacheoff does not work. By specializing\nwe get the best of both worlds.\n\nThe 0001 and 0002 optimizations were added later to further remove\nunneeded calls to the btree attribute compare functions, thus further\nreducing the total time spent in _bt_compare.\n\nAnyway.\n\nPFA v14 of the patchset. v13's 0001 is now split in two, containing\nprefix truncation in 0001, and 0002 containing the downlink's right\nseparator/HIKEY optimization.\n\nPerformance numbers (data attached):\n0001 has significant gains in multi-column indexes with shared\nprefixes, where the prefix columns are expensive to compare, but\notherwise doesn't have much impact.\n0002 further improves performance across the board, but again mostly\nfor indexes with expensive compare operations.\n0007 sees performance improvements almost across the board, with only\nthe 'ul' and 'tnt' indexes getting some worse results than master (but\nstill better average results),\n\nAll patches applied, per-index average performance improvements on 15\nruns range from 3% to 290% across the board for INSERT benchmarks, and\n-2.83 to 370% for REINDEX.\n\nConfigured with autoconf: config.log:\n> It was created by PostgreSQL configure 17devel, which was\n> generated by GNU Autoconf 2.69. 
Invocation command line was\n>\n> $ ./configure --enable-tap-tests --enable-depend --with-lz4 --with-zstd COPT=-ggdb -O3 --prefix=/home/matthias/projects/postgresql/pg_install --no-create --no-recursion\n\nBenchmark was done on 1m random rows of the pp-complete dataset, as\nfound on UK Gov's S3 bucket [0]: using a parallel and threaded\ndownloader is preferred because the throughput is measured in kBps per\nclient.\n\nI'll do a few runs on the full dataset of 29M rows soon too, but\nmaster's performance is so bad for the 'worstcase' index that I can't\nfinish its runs fast enough; benchmarking it takes hours per\niteration.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] http://prod1.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv", "msg_date": "Mon, 30 Oct 2023 17:19:53 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Mon, 30 Oct 2023 at 21:50, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 26 Oct 2023 at 00:36, Peter Geoghegan <pg@bowt.ie> wrote:\n> > Most of the work with something like\n> > this is the analysis of the trade-offs, not writing code. There are\n> > all kinds of trade-offs that you could make with something like this,\n> > and the prospect of doing that myself is kind of daunting. Ideally\n> > you'd have made a significant start on that at this point.\n>\n> I believe I'd already made most trade-offs clear earlier in the\n> threads, along with rationales for the changes in behaviour. But here\n> goes again:\n>\n> _bt_compare currently uses index_getattr() on each attribute of the\n> key. index_getattr() is O(n) for the n-th attribute if the index tuple\n> has any null or non-attcacheoff attributes in front of the current\n> one. 
Thus, _bt_compare costs O(n^2) work with n=the number of\n> attributes, which can cost several % of performance, but is very very\n> bad in extreme cases, and does O(n) calls to opclass-supplied compare\n> operations.\n>\n> To solve most of the O(n) compare operations, we can optimize\n> _bt_compare to only compare \"interesting\" attributes, i.e. we can\n> apply \"dynamic prefix truncation\". This is implemented by patch 0001.\n> This is further enhanced with 0002, where we skip the compare\n> operations if the HIKEY is the same as the right separator of the\n> downlink we followed (due to our page split code, this case is\n> extremely likely).\n>\n> However, the above only changes the attribute indexing code in\n> _bt_compare to O(n) for at most about 76% of the index tuples on the\n> page (1 - (2 / log2(max_tuples_per_page))), while the other on average\n> 20+% of the compare operations still have to deal with the O(n^2)\n> total complexity of index_getattr.\n> To fix this O(n^2) issue (the issue this thread was originally created\n> for) the approach I implemented originally is to not use index_getattr\n> but an \"attribute iterator\" that incrementally extracts the next\n> attribute, while keeping track of the current offset into the tuple,\n> so each next attribute would be O(1). That is implemented in the last\n> patches of the patchset.\n>\n> This attribute iterator approach has an issue: It doesn't perform very\n> well for indexes that make full use of attcacheoff. The bookkeeping\n> for attribute iteration proved to be much more expensive than just\n> reading attcacheoff from memory. This is why the latter patches\n> (patchset 14 0003+) adapt the btree code to generate different paths\n> for different \"shapes\" of key index attributes, to allow the current\n> attcacheoff code to keep its performance, but to get higher\n> performance for indexes where the attcacheoff optimization can not be\n> applied. 
In passing, it also specializes the code for single-attribute\n> indexes, so that they don't have to manage branching code, increasing\n> their performance, too.\n>\n> TLDR:\n> The specialization in 0003+ is applied because index_getattr is good\n> when attcacheoff applies, but very bad when it doesn't. Attribute\n> iteration is worse than index_getattr when attcacheoff applies, but is\n> significantly better when attcacheoff does not work. By specializing\n> we get the best of both worlds.\n>\n> The 0001 and 0002 optimizations were added later to further remove\n> unneeded calls to the btree attribute compare functions, thus further\n> reducing the total time spent in _bt_compare.\n>\n> Anyway.\n>\n> PFA v14 of the patchset. v13's 0001 is now split in two, containing\n> prefix truncation in 0001, and 0002 containing the downlink's right\n> separator/HIKEY optimization.\n>\n> Performance numbers (data attached):\n> 0001 has significant gains in multi-column indexes with shared\n> prefixes, where the prefix columns are expensive to compare, but\n> otherwise doesn't have much impact.\n> 0002 further improves performance across the board, but again mostly\n> for indexes with expensive compare operations.\n> 0007 sees performance improvements almost across the board, with only\n> the 'ul' and 'tnt' indexes getting some worse results than master (but\n> still better average results),\n>\n> All patches applied, per-index average performance improvements on 15\n> runs range from 3% to 290% across the board for INSERT benchmarks, and\n> -2.83 to 370% for REINDEX.\n>\n> Configured with autoconf: config.log:\n> > It was created by PostgreSQL configure 17devel, which was\n> > generated by GNU Autoconf 2.69. 
Invocation command line was\n> >\n> > $ ./configure --enable-tap-tests --enable-depend --with-lz4 --with-zstd COPT=-ggdb -O3 --prefix=/home/matthias/projects/postgresql/pg_install --no-create --no-recursion\n>\n> Benchmark was done on 1m random rows of the pp-complete dataset, as\n> found on UK Gov's S3 bucket [0]: using a parallel and threaded\n> downloader is preferred because the throughput is measured in kBps per\n> client.\n>\n> I'll do a few runs on the full dataset of 29M rows soon too, but\n> master's performance is so bad for the 'worstcase' index that I can't\n> finish its runs fast enough; benchmarking it takes hours per\n> iteration.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n55627ba2d334ce98e1f5916354c46472d414bda6 ===\n=== applying patch ./v14-0001-btree-Implement-dynamic-prefix-compression.patch\n...\nHunk #7 succeeded at 3169 with fuzz 2 (offset 75 lines).\nHunk #8 succeeded at 3180 (offset 75 lines).\nHunk #9 FAILED at 3157.\nHunk #10 FAILED at 3180.\nHunk #11 FAILED at 3218.\n3 out of 11 hunks FAILED -- saving rejects to file\ncontrib/amcheck/verify_nbtree.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3672.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 09:08:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Sat, 27 Jan 2024 at 04:38, vignesh C <vignesh21@gmail.com> wrote:\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 55627ba2d334ce98e1f5916354c46472d414bda6 ===\n> === applying patch ./v14-0001-btree-Implement-dynamic-prefix-compression.patch\n> ...\n> Hunk #7 succeeded at 3169 with fuzz 2 (offset 75 lines).\n> Hunk #8 succeeded at 3180 (offset 75 lines).\n> Hunk #9 FAILED at 3157.\n> Hunk 
#10 FAILED at 3180.\n> Hunk #11 FAILED at 3218.\n> 3 out of 11 hunks FAILED -- saving rejects to file\n> contrib/amcheck/verify_nbtree.c.rej\n>\n> Please post an updated version for the same.\n\nI've attached a rebased version, on top of HEAD @ 7b745d85. The\nchanges for prefix compression [0] and the rightsep+hikey optimization\n[1] have been split off to separate patches/threads. I've also split\nprevious patch number 0003 into multiple parts, to clearly delineate\ncode movement vs modifications.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://commitfest.postgresql.org/46/4635/\n[1] https://commitfest.postgresql.org/46/4638/", "msg_date": "Thu, 1 Feb 2024 15:49:13 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" }, { "msg_contents": "On Thu, 1 Feb 2024 at 15:49, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I've attached a rebased version, on top of HEAD @ 7b745d85. The\n> changes for prefix compression [0] and the rightsep+hikey optimization\n> [1] have been split off to separate patches/threads. I've also split\n> previous patch number 0003 into multiple parts, to clearly delineate\n> code movement vs modifications.\n\nRebased to v16 to fix bitrot from 8af25652.\n\nAgain, short explanation\n========\n\nCurrent nbtree code is optimized for short index keys, i.e. with few\nattributes, and index keys with cacheable offsets\n(pg_attribute.attcacheoff). This is optimized for many cases, but for\nsome index shapes (e.g. ON (textfield, intfield)) the current code\ndoesn't work nice and adds significant overhead during descent: We\nrepeatedly re-calculate attribute offsets for key attributes with\nuncacheable offsets.\n\nTo fix this O(n^2) problem in _bt_compare(), the first idea was to use\nattribute offset iteration (as introduced 0003). 
Measurements showed\nthat the changes do great work for uncacheable attribute offsets, but\nreduce the performance of indexes with cached offsets significantly.\nThis meant that using this approach for all indexes was uneconomical:\nwe couldn't reuse the same code across all index key shapes.\n\nTo get around this performance issue, this patchset introduces\nspecialization. Various key functions inside the btree AM which have\nhot paths that depend on key shape are \"specialized\" to one of 3\n\"index key shapes\" identified in the patchset: \"cached\", \"uncached\",\nand \"single attribute\".\n\nSpecialization is implemented with a helper header that #includes the\nto-be-specialized code once for each index key shape. The\nto-be-specialized code can then utilize several macros for iterating\nover index key attributes in the most efficient way available for that\nindex key shape.\n\nFor specific implementation details, see the comments in the header of\ninclude/access/nbtree_spec.h.\n\nPatches\n========\n\n0001 moves code around without modification in preparation for specialization\n0002 updates non-specialized code to hook into specialization\ninfrastructure, without yet changing functionality.\n0003 updates to-be-specialized code to use specialized index attribute\niterators, and introduces the first specialization type \"cached\". This\nis no different from our current iteration type; it just has a new\nname and now fits in the scaffolding of specialization.\n0004 introduces a helper function to be used in 0006, which calculates\nand stores attcacheoff for cacheable attributes, while also (new for\nattcacheoff) storing negative cache values (i.e. uncacheable) for\nthose attributes that can't have attcacheoff, e.g. 
those located after an\nattribute of size < 0.\n0005 introduces the \"single attribute\" specialization, which realizes\nthat the first attribute has a known offset in the index tuple data\nsection, so it doesn't have to access or keep track of as much\ninformation, which saves cycles in the common case of single-attribute\nindexes.\n0006 introduces the final, \"uncached\", specialization. It\nprogressively calculates offsets of attributes as needed, rather than\nthe start-from-scratch approach in index_getattr().\n\nTradeoffs\n========\n\nThis patch introduces a new CPP templating mechanism which is more\ncomplex than most other templates currently in use. This is certainly\nadditional complexity, but the file structure used allows for a mostly\nsimilar C programming experience, with the only caveat that the file\nis not built into an object file directly, but included into another\nfile (and thus shouldn't #include its own headers).\n\nBy templating functions, this patch also increases the size of the\nPostgreSQL binary. Bloaty measured a 48kiB increase in size of the\n.text section of the binary (47MiB) built with debugging options at\n[^0], while a non-debug build of the same kind [^1] (9.89MiB) has an\nincrease in size of 34.8kiB. 
Given the performance improvements\nachievable using this patch, I believe this patch is worth the\nincrease in size.\n\nPerformance\n========\n\nI haven't retested the results separately yet, but I assume the\nperformance results of [2] hold mostly true in comparing 0002 vs 0007.\nI will do a performance (re-)evaluation of only this patch if so\nrequested (or once I have time), but please do review the code, too.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n[^0] ./configure --enable-depend --enable-tap-tests --enable-cassert\n--with-lz4 --enable-debug --with-zstd COPT='-O3 -g3'\n--prefix=~/projects/postgresql/pg_install\n[^1] ./configure --enable-depend --enable-tap-tests --with-lz4\n--with-zstd COPT='-O3' --prefix=~/projects/postgresql/pg_install\n\n[2] https://www.postgresql.org/message-id/CAEze2WiqOONRQTUT1p_ZV19nyMA69UU2s0e2dp%2BjSBM%3Dj8snuw%40mail.gmail.com", "msg_date": "Mon, 4 Mar 2024 21:39:37 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improving btree performance through specializing by key shape,\n take 2" } ]
[ { "msg_contents": "On Fri, Apr 08, 2022 at 12:55:04PM +0000, webmaster@postgresql.org wrote:\n> The following patches that you follow, directly or indirectly,\n> have received updates in the PostgreSQL commitfest app:\n> \n> \n> Function to log backtrace of postgres processes\n> https://commitfest.postgresql.org/38/2863/\n> \n> * New status: Needs review (stark)\n> * Closed in commitfest 2022-03 with status: Moved to next CF (stark)\n> \n> \n> \n> Add --schema and --exclude-schema options to vacuumdb\n> https://commitfest.postgresql.org/39/3587/\n> \n> * New status: Needs review (stark)\n> * Closed in commitfest 2022-07 with status: Moved to next CF (stark)\n\nThe 2nd patch was already in July's CF, and it looks like you accidentally\nmoved it to July+1. AFAIK this requires a DBA to fix.\n\n\n", "msg_date": "Fri, 8 Apr 2022 21:15:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL commitfest: 2022-09-01" } ]
[ { "msg_contents": "I mentioned most/all of these ideas for cfbot at some point. I'm writing them\nnow so other people know about them and they're in one place.\n\n - Keep the original patch series and commit messages, rather than squishing\nthem into a single commit with cfbot's own commit messages. Maybe append an\nempty commit with cfbot's message, and include a parsable \"base-branch: NNN\"\ncommit hash. This supports some CI ideas like making HTML available for\npatches touching doc/ (which needs to be relative to some other commit to show\nonly the changes rather than every *.html). And code coverage report for\nchanged files, which has the same requirement.\n\nThat *also* allows directly reviewing the original patch series with a branch\nmaintained by cfbot, preserving the patch set, with commit messages. See also:\nhttps://www.postgresql.org/message-id/flat/CAKU4AWoU-P1zPS5hmiXpto6WGLOqk27VgCrxSKE2mgX%3DfypV6Q%40mail.gmail.com\n\nAlternate idea: submit the patch to cirrus as a PR which makes the \"base\nbranch\" available to cirrus as an environment variable (note that PRs also\nchange a couple of other cirrus behaviors).\n\n - I think cfbot already keeps track of historic CI build results (pass/fail\nand link to cirrus). But I don't think cfbot exposes this yet. I know cirrus\ncan show history for a branch, but I can never find it, and their history is\nlimited. This would be like the buildfarm pages, ordered by time: one showing\nall results, one showing all failures, and one showing all results for a given\npatch. You could also consider retrieving the cirrus logs themselves, to allow\nsetting our own retention interval (we could ask cirrus if they'd want to allow\nsetting more aggressive expiration for logs/artifacts).\n\n - HTML: sort by CF ID rather than alpha sort. 
Right now, commitfest entries\nbeginning with a capital letter sort first, which at least one person seems to\nhave discovered.\n\n - HTML: add a \"queued for CI\" page showing the next patches to be submitted to\ncirrus. These pages might allow queueing a re-run, too.\n\n - HTML: show \"target version\" and \"committer\" (maybe committer name would be\nshown in the existing list of names, but with another style applied). This helps\nto distinguish between patches which someone optimistically said was RFC and a\npatch which a committer intends to commit, which ought to be pretty visible so\nit's not lost in the mailing list and a surprise to anyone.\n\n - index on CF ID without CFNUM - reasons that you mentioned that I can't\nremember (like maybe cirrus history and rebuilds at end of each CF).\n\n\n", "msg_date": "Fri, 8 Apr 2022 21:18:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "cfbot requests" } ]
[ { "msg_contents": "These remaining CF entries look like they're bugs that are maybe Open\nIssues for release?\n\n* fix spinlock contention in LogwrtResult\n\n* Avoid erroring out when unable to remove or parse logical rewrite\nfiles to save checkpoint work\n\n* Add checkpoint and redo LSN to LogCheckpointEnd log message\n\n* standby recovery fails when re-replaying due to missing directory\nwhich was removed in previous replay.\n\n* Logical replication failure \"ERROR: could not map filenode\n\"base/13237/442428\" to relation OID\" with catalog modifying txns\n\n* fix psql pattern handling\n\n* Possible fails in pg_stat_statements test\n\n* pg_receivewal fail to streams when the partial file to write is not\nfully initialized present in the wal receiver directory\n\n* Fix pg_rewind race condition just after promotion\n\n\nWas the plan to commit this after feature freeze?\n\n* pg_stat_statements: Track statement entry timestamp\n\n\n\nA couple minor documentation, testing, and diagnostics patches that\nmay be committable even after feature freeze?\n\n* Improve role attributes docs\n\n* Reloption tests iprovement. Test resetting illegal option that was\nactually set for some reason\n\n* Make message at end-of-recovery less scary\n\n* jit_warn_above_fraction parameter\n\n\nAnd then there are the more devilish cases. I think some of these\npatches are Rejected or Returned with Feedback but I'm not certain.\nSome of them have split into multiple discussions or are partly\ncommitted but still related work remains. 
Any opinions on what to do\nwith these?\n\n* Simplify some RI checks to reduce SPI overhead\n\n* Map WAL segment files on PMEM as WAL buffers\n\n* Support custom authentication methods using hooks\n\n* Implement INSERT SET syntax\n\n* Logical insert/update/delete WAL records for custom table AMs\n\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 8 Apr 2022 23:25:39 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Commitfest wrapup" }, { "msg_contents": "On 2022-Apr-08, Greg Stark wrote:\n\n> These remaining CF entries look like they're bugs that are maybe Open\n> Issues for release?\n> \n> * fix spinlock contention in LogwrtResult\n\nI don't have a good enough grip on barriers needed for this, so I'd\nrather move it to pg16 to have time for further study.\n\n> * fix psql pattern handling\n\nSounds like a bugfix, but how old and how backpatchable a fix is?\nThread is quite long.\n\n> * Avoid erroring out when unable to remove or parse logical rewrite\n> files to save checkpoint work\n> * standby recovery fails when re-replaying due to missing directory\n> which was removed in previous replay.\n> * Logical replication failure \"ERROR: could not map filenode\n> \"base/13237/442428\" to relation OID\" with catalog modifying txns\n> * Fix pg_rewind race condition just after promotion\n> * pg_receivewal fail to streams when the partial file to write is not\n> fully initialized present in the wal receiver directory\n\nSound like bugfixes to be backpatched.\n\n> A couple minor documentation, testing, and diagnostics patches that\n> may be committable even after feature freeze?\n> \n> * Improve role attributes docs\n\nLet's get it pushed.\n\nThere's also a logical replication \"row filter\" doc patch that I don't\nsee in your list, we should get that in. Maybe it's not in CF. 
I had a\nquibble with paras length in that one; requires stylistic edit.\n\nhttps://postgr.es/m/CAHut+PvyxMedYY-jHaT9YSfEPHv0jU2-CZ8F_nPvhuP0b955og@mail.gmail.com\n\n> * Reloption tests iprovement. Test resetting illegal option that was\n> actually set for some reason\n\nI think we should push this one.\n\n> * jit_warn_above_fraction parameter\n\nUnsure; seems a good patch, but is there enough consensus?\n\n> * Simplify some RI checks to reduce SPI overhead\n\nMove to next; a lot more work is required.\n\n> * Map WAL segment files on PMEM as WAL buffers\n> * Support custom authentication methods using hooks\n> * Implement INSERT SET syntax\n> * Logical insert/update/delete WAL records for custom table AMs\n\nNew features.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n", "msg_date": "Sat, 9 Apr 2022 12:44:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest wrapup" }, { "msg_contents": "On Sat, 9 Apr 2022 at 06:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > * Simplify some RI checks to reduce SPI overhead\n>\n> Move to next; a lot more work is required.\n\nIf it's going to be part of a much larger patch set I wonder if it\nshouldn't just be marked Rejected and start a new thread and new CF\nentry for the whole suite.\n\n> > * Map WAL segment files on PMEM as WAL buffers\n> > * Support custom authentication methods using hooks\n> > * Implement INSERT SET syntax\n> > * Logical insert/update/delete WAL records for custom table AMs\n>\n> New features.\n\nYeah, this bunch definitely consists of new features, just not sure if\nthey should be moved forward or Rejected or RWF. 
Some of them had some\nnegative feedback or the development has taken some turns that make me\nthink starting new patches specifically for the parts that remain may\nmake more sense.\n\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 9 Apr 2022 09:25:45 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Commitfest wrapup" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Apr-08, Greg Stark wrote:\n>> * fix psql pattern handling\n\n> Sounds like a bugfix, but how old and how backpatchable a fix is?\n> Thread is quite long.\n\nThere's been more contention than one could wish about both what\nthe behavior should be and how to refactor the code to achieve it.\nNonetheless, I think there is general consensus that the current\nbehavior isn't very desirable, so if we can reach agreement I think\nthis should be treated as a bug fix.\n\n>> * Avoid erroring out when unable to remove or parse logical rewrite\n>> files to save checkpoint work\n\nIt seems like people are not convinced that this cure is better than\nthe disease.\n\n>> * standby recovery fails when re-replaying due to missing directory\n>> which was removed in previous replay.\n>> * Logical replication failure \"ERROR: could not map filenode\n>> \"base/13237/442428\" to relation OID\" with catalog modifying txns\n>> * Fix pg_rewind race condition just after promotion\n>> * pg_receivewal fail to streams when the partial file to write is not\n>> fully initialized present in the wal receiver directory\n\n> Sound like bugfixes to be backpatched.\n\nYeah. I'm not sure why these have received so little love.\nThe only one of this ilk that I've personally looked at was\n\n>> * Make message at end-of-recovery less scary\n\nwhich for me adds far too much complication for the claimed benefit.\nMaybe somebody else with a lot more familiarity with the current WAL\ncode will care to push that, but I'm not touching it. 
Post-FF\ndoesn't seem like a good time for it anyway from a risk/reward\nstandpoint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 10:50:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest wrapup" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Sat, 9 Apr 2022 at 06:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> * Simplify some RI checks to reduce SPI overhead\n\n>> Move to next; a lot more work is required.\n\n> If it's going to be part of a much larger patch set I wonder if it\n> shouldn't just be marked Rejected and start a new thread and new CF\n> entry for the whole suite.\n\nIMV, \"rejected\" means \"we don't want this patch nor any plausible\nrework of it\". In this case, the feedback is more like \"why aren't\nwe changing all of ri_triggers this way\", so I'd call it RWF.\n\n>>> * Map WAL segment files on PMEM as WAL buffers\n>>> * Support custom authentication methods using hooks\n>>> * Implement INSERT SET syntax\n>>> * Logical insert/update/delete WAL records for custom table AMs\n\n>> New features.\n\n> Yeah, this bunch definitely consists of new features, just not sure if\n> they should be moved forward or Rejected or RWF.\n\nProbably just move them forward. The only one of these four that's\nbeen really sitting around for a long time is INSERT SET, and I think\nthere we've just not quite made up our minds if we want it or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 11:02:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest wrapup" }, { "msg_contents": "On Sat, 9 Apr 2022 at 10:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Sound like bugfixes to be backpatched.\n>\n> Yeah. 
I'm not sure why these have received so little love.\n\nSince bug fixes are important enough that they'll definitely get done\n(and can happen after feature freeze) there's a bit of a perverse\nincentive to focus on other things...\n\n\"Everybody was sure that Somebody would do it. Anybody could have done\nit, but Nobody did it\"\n\nI think every project struggles with bugs that sit in bug tracking\nsystems indefinitely. The Open Issues is the biggest hammer we have.\nMaybe we should be tracking \"Open Issues\" from earlier in the process\n-- things that we think we shouldn't do a release without addressing.\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 9 Apr 2022 13:14:44 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Commitfest wrapup" }, { "msg_contents": "On 4/9/22 1:14 PM, Greg Stark wrote:\r\n> On Sat, 9 Apr 2022 at 10:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>\r\n>>> Sound like bugfixes to be backpatched.\r\n>>\r\n>> Yeah. I'm not sure why these have received so little love.\r\n> \r\n> Since bug fixes are important enough that they'll definitely get done\r\n> (and can happen after feature freeze) there's a bit of a perverse\r\n> incentive to focus on other things...\r\n> \r\n> \"Everybody was sure that Somebody would do it. Anybody could have done\r\n> it, but Nobody did it\"\r\n> \r\n> I think every project struggles with bugs that sit in bug tracking\r\n> systems indefinitely. The Open Issues is the biggest hammer we have.\r\n> Maybe we should be tracking \"Open Issues\" from earlier in the process\r\n> -- things that we think we shouldn't do a release without addressing.\r\n\r\nThe RMT does both delineate and track open issues as a result of new \r\nfeatures committed and a subset of issues in existing releases. 
\r\nTraditionally the RMT is more hands off on the latter unless it \r\ndetermines that not fixing the issue would have stability or other \r\nconsequences if it's not included in the release.\r\n\r\nJonathan", "msg_date": "Sat, 9 Apr 2022 18:50:21 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest wrapup" } ]
[ { "msg_contents": "Hi,\n\non CI [1] the new t/031_recovery_conflict.pl is failing occasionally. Which is\ninteresting, because I ran it there dozens if not hundreds of times before\ncommit, with - I think - only cosmetic changes.\n\nI've reproduced it in a private branch, with more logging. And the results are\nsure interesting.\n\nhttps://cirrus-ci.com/task/6448492666159104\nhttps://api.cirrus-ci.com/v1/artifact/task/6448492666159104/log/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log\n\nThe primary is waiting for 0/343A000 to be applied, which requires a recovery\nconflict to be detected and resolved. On the standby there's the following\nsequence (some omitted):\n\nprerequisite for recovery conflict:\n2022-04-09 04:05:31.292 UTC [35071][client backend] [031_recovery_conflict.pl][2/2:0] LOG: statement: BEGIN;\n2022-04-09 04:05:31.292 UTC [35071][client backend] [031_recovery_conflict.pl][2/2:0] LOG: statement: DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n2022-04-09 04:05:31.293 UTC [35071][client backend] [031_recovery_conflict.pl][2/2:0] LOG: statement: FETCH FORWARD FROM test_recovery_conflict_cursor;\n\ndetecting the conflict:\n2022-04-09 04:05:31.382 UTC [35038][startup] LOG: recovery still waiting after 28.821 ms: recovery conflict on buffer pin\n2022-04-09 04:05:31.382 UTC [35038][startup] CONTEXT: WAL redo at 0/3432800 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n\nand then nothing until the timeout:\n2022-04-09 04:09:19.317 UTC [35035][postmaster] LOG: received immediate shutdown request\n2022-04-09 04:09:19.317 UTC [35035][postmaster] DEBUG: sending signal 3 to process 35071\n2022-04-09 04:09:19.320 UTC [35035][postmaster] DEBUG: reaping dead processes\n2022-04-09 04:09:19.320 UTC [35035][postmaster] DEBUG: reaping dead processes\n2022-04-09 04:09:19.320 UTC [35035][postmaster] DEBUG: server process (PID 35071) exited with exit code 
2\n\nAfaics that has to mean something is broken around sending, receiving or\nprocessing of recovery conflict interrupts.\n\n\nAll the failures so far were on freebsd, from what I can see. There were other\nfailures in other tests, but I think for reverted or fixed things.\n\n\nExcept for not previously triggering while the shmstats patch was in\ndevelopment, it's hard to tell whether it's a regression or just a\nlongstanding bug - we never had tests for recovery conflicts...\n\n\nI don't really see how recovery prefetching could play a role here, clearly\nwe've been trying to replay the record. So we're elsewhere...\n\nGreetings,\n\nAndres Freund\n\nhttps://cirrus-ci.com/github/postgres/postgres/master\n\n\n", "msg_date": "Fri, 8 Apr 2022 21:55:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-08 21:55:15 -0700, Andres Freund wrote:\n> on CI [1] the new t/031_recovery_conflict.pl is failing occasionally. Which is\n> interesting, because I ran it there dozens if not hundreds of times before\n> commit, with - I think - only cosmetic changes.\n\nScratch that part - I found an instance of the freebsd failure earlier, just\ndidn't notice because that run failed for other reasons as well. So this might\njust have uncovered an older bug around recovery conflict handling,\npotentially platform dependent.\n\nI guess I'll try to reproduce it on freebsd...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Apr 2022 22:05:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-08 22:05:01 -0700, Andres Freund wrote:\n> On 2022-04-08 21:55:15 -0700, Andres Freund wrote:\n> > on CI [1] the new t/031_recovery_conflict.pl is failing occasionally. 
Which is\n> > interesting, because I ran it there dozens if not hundreds of times before\n> > commit, with - I think - only cosmetic changes.\n> \n> Scratch that part - I found an instance of the freebsd failure earlier, just\n> didn't notice because that run failed for other reasons as well. So this might\n> just have uncovered an older bug around recovery conflict handling,\n> potentially platform dependent.\n> \n> I guess I'll try to reproduce it on freebsd...\n\nThat failed.\n\n\nAdding a bunch of debug statements, I think I might have found at least some\nproblems.\n\nThe code in LockBufferForCleanup() that actually interacts with other backends\nis:\n\n\t\t\t/* Publish the bufid that Startup process waits on */\n\t\t\tSetStartupBufferPinWaitBufId(buffer - 1);\n\t\t\t/* Set alarm and then wait to be signaled by UnpinBuffer() */\n\t\t\tResolveRecoveryConflictWithBufferPin();\n\t\t\t/* Reset the published bufid */\n\t\t\tSetStartupBufferPinWaitBufId(-1);\n\nwhere ResolveRecoveryConflictWithBufferPin() sends procsignals to all other\nbackends and then waits with ProcWaitForSignal():\n\nvoid\nProcWaitForSignal(uint32 wait_event_info)\n{\n\t(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,\n\t\t\t\t\t wait_event_info);\n\tResetLatch(MyLatch);\n\tCHECK_FOR_INTERRUPTS();\n}\n\none problem is that we pretty much immediately get a SIGALRM whenever we're in\nthat WaitLatch(). Which does a SetLatch(), interrupting the WaitLatch(). The\nstartup process then proceeds to SetStartupBufferPinWaitBufId(-1).\n\nIn the unlucky cases, the backend holding the pin only processes the interrupt\n(in RecoveryConflictInterrupt()) after the\nSetStartupBufferPinWaitBufId(-1). The backend then does sees that\nHoldingBufferPinThatDelaysRecovery() returns false, and happily continues.\n\n\nBut that's not the whole story, I think. 
It's a problem leading to conflicts\nbeing handled more slowly, but we eventually should not have more timeouts.\n\nHere's debugging output from a failing run, where I added a few debugging statements:\nhttps://api.cirrus-ci.com/v1/artifact/task/6179111512047616/log/src/test/recovery/tmp_check/log/000_0recovery_conflict_standby.log\nhttps://github.com/anarazel/postgres/commit/212268e753093861aa22a51657c6598c65eeb81b\n\nCuriously, there's only\n20644: received interrupt 11\n\nWhich is PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK, not\nPROCSIG_RECOVERY_CONFLICT_BUFFERPIN.\n\nI guess we've never gotten past the standby timeout.\n\n\n20644: received interrupt 11\n2022-04-09 21:18:23.824 UTC [20630][startup] DEBUG: one cycle of LockBufferForCleanup() iterating in HS\n2022-04-09 21:18:23.824 UTC [20630][startup] CONTEXT: WAL redo at 0/3432800 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n2022-04-09 21:18:23.824 UTC [20630][startup] DEBUG: setting timeout() in 0 10000\n2022-04-09 21:18:23.824 UTC [20630][startup] CONTEXT: WAL redo at 0/3432800 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n2022-04-09 21:18:23.835 UTC [20630][startup] DEBUG: setting latch()\n2022-04-09 21:18:23.835 UTC [20630][startup] CONTEXT: WAL redo at 0/3432800 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n2022-04-09 21:18:23.835 UTC [20630][startup] DEBUG: setting timeout() in 0 3481\n2022-04-09 21:18:23.835 UTC [20630][startup] CONTEXT: WAL redo at 0/3432800 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 1; blkref #0: rel 1663/16385/16386, blk 0\n20644: received interrupt 11\n2022-04-09 21:23:47.975 UTC [20631][walreceiver] FATAL: could not receive data from WAL stream: server closed the connection unexpectedly\n\nSo we sent a conflict interrupt, and then waited. 
And nothing happened.\n\n\nWhat are we expecting to wake the startup process up, once it does\nSendRecoveryConflictWithBufferPin()?\n\nIt's likely not the problem here, because we never seem to have even reached\nthat path, but afaics once we've called disable_all_timeouts() at the bottom\nof ResolveRecoveryConflictWithBufferPin() and then re-entered\nResolveRecoveryConflictWithBufferPin(), and go into the \"We're already behind,\nso clear a path as quickly as possible.\" path, there's no guarantee for any\ntimeout to be pending anymore?\n\nIf there's either no backend that we're still conflicting with (an entirely\npossible race condition), or if there's e.g. a snapshot or database conflict,\nthere's afaics nobody setting the startup processes' latch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Apr 2022 15:00:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-09 15:00:54 -0700, Andres Freund wrote:\n> What are we expecting to wake the startup process up, once it does\n> SendRecoveryConflictWithBufferPin()?\n>\n> It's likely not the problem here, because we never seem to have even reached\n> that path, but afaics once we've called disable_all_timeouts() at the bottom\n> of ResolveRecoveryConflictWithBufferPin() and then re-entered\n> ResolveRecoveryConflictWithBufferPin(), and go into the \"We're already behind,\n> so clear a path as quickly as possible.\" path, there's no guarantee for any\n> timeout to be pending anymore?\n>\n> If there's either no backend that we're still conflicting with (an entirely\n> possible race condition), or if there's e.g. a snapshot or database conflict,\n> there's afaics nobody setting the startup processes' latch.\n\nIt's not that (although I still suspect it's a problem). 
It's a self-deadlock,\nbecause StandbyTimeoutHandler(), which ResolveRecoveryConflictWithBufferPin()\n*explicitly enables*, calls SendRecoveryConflictWithBufferPin(). Which does\nCancelDBBackends(). Which ResolveRecoveryConflictWithBufferPin() also calls,\nif the deadlock timeout is reached.\n\nTo make it easier to hit, I put a pg_usleep(10000) in CancelDBBackends(), and boom:\n\n(gdb) bt\n#0 __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x7fd4cb001138, expected=expected@entry=0, clockid=clockid@entry=0,\n abstime=abstime@entry=0x0, private=<optimized out>, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:87\n#1 0x00007fd4ce5a215b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7fd4cb001138, expected=expected@entry=0,\n clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>) at ../sysdeps/nptl/futex-internal.c:123\n#2 0x00007fd4ce59e44f in do_futex_wait (sem=sem@entry=0x7fd4cb001138, abstime=0x0, clockid=0) at sem_waitcommon.c:112\n#3 0x00007fd4ce59e4e8 in __new_sem_wait_slow64 (sem=0x7fd4cb001138, abstime=0x0, clockid=0) at sem_waitcommon.c:184\n#4 0x00007fd4cf20d823 in PGSemaphoreLock (sema=0x7fd4cb001138) at pg_sema.c:327\n#5 0x00007fd4cf2dd675 in LWLockAcquire (lock=0x7fd4cb001600, mode=LW_EXCLUSIVE) at /home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1324\n#6 0x00007fd4cf2c36e7 in CancelDBBackends (databaseid=0, sigmode=PROCSIG_RECOVERY_CONFLICT_BUFFERPIN, conflictPending=false)\n at /home/andres/src/postgresql/src/backend/storage/ipc/procarray.c:3638\n#7 0x00007fd4cf2cc579 in SendRecoveryConflictWithBufferPin (reason=PROCSIG_RECOVERY_CONFLICT_BUFFERPIN)\n at /home/andres/src/postgresql/src/backend/storage/ipc/standby.c:846\n#8 0x00007fd4cf2cc69d in StandbyTimeoutHandler () at /home/andres/src/postgresql/src/backend/storage/ipc/standby.c:911\n#9 0x00007fd4cf4e68d7 in handle_sig_alarm (postgres_signal_arg=14) at 
/home/andres/src/postgresql/src/backend/utils/misc/timeout.c:421\n#10 <signal handler called>\n#11 0x00007fd4cddfffc4 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7ffc6e5561c0) at ../sysdeps/unix/sysv/linux/select.c:71\n#12 0x00007fd4cf52ea2a in pg_usleep (microsec=10000) at /home/andres/src/postgresql/src/port/pgsleep.c:56\n#13 0x00007fd4cf2c36f1 in CancelDBBackends (databaseid=0, sigmode=PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK, conflictPending=false)\n at /home/andres/src/postgresql/src/backend/storage/ipc/procarray.c:3640\n#14 0x00007fd4cf2cc579 in SendRecoveryConflictWithBufferPin (reason=PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK)\n at /home/andres/src/postgresql/src/backend/storage/ipc/standby.c:846\n#15 0x00007fd4cf2cc50f in ResolveRecoveryConflictWithBufferPin () at /home/andres/src/postgresql/src/backend/storage/ipc/standby.c:820\n#16 0x00007fd4cf2a996f in LockBufferForCleanup (buffer=43) at /home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:4336\n#17 0x00007fd4ceec911c in XLogReadBufferForRedoExtended (record=0x7fd4d106a618, block_id=0 '\\000', mode=RBM_NORMAL, get_cleanup_lock=true, buf=0x7ffc6e556394)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogutils.c:427\n#18 0x00007fd4cee1aa41 in heap_xlog_prune (record=0x7fd4d106a618) at /home/andres/src/postgresql/src/backend/access/heap/heapam.c:8634\n\nit's reproducible on linux.\n\n\nI'm lacking words I dare to put in an email to describe how bad an idea it is\nto call CancelDBBackends() from within a timeout function, particularly when\nthe function enabling the timeout also calls that function itself. 
Before\ndisabling timers.\n\nI ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Apr 2022 16:10:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-09 16:10:02 -0700, Andres Freund wrote:\n> It's not that (although I still suspect it's a problem). It's a self-deadlock,\n> because StandbyTimeoutHandler(), which ResolveRecoveryConflictWithBufferPin()\n> *explicitly enables*, calls SendRecoveryConflictWithBufferPin(). Which does\n> CancelDBBackends(). Which ResolveRecoveryConflictWithBufferPin() also calls,\n> if the deadlock timeout is reached.\n> \n> To make it easier to hit, I put a pg_usleep(10000) in CancelDBBackends(), and boom:\n> \n> [... backtrace ... ]\n>\n> it's reproducible on linux.\n> \n> \n> I'm lacking words I dare to put in an email to describe how bad an idea it is\n> to call CancelDBBackends() from within a timeout function, particularly when\n> the function enabling the timeout also calls that function itself. Before\n> disabling timers.\n\nIt's been broken in different ways all the way back to 9.0, from what I can\nsee, but I didn't check every single version.\n\nAfaics the fix is to nuke the idea of doing anything substantial in the signal\nhandler from orbit, and instead just set a flag in the handler. 
Then check\nwhether the timeout happened after ProcWaitForSignal() and call\nSendRecoveryConflictWithBufferPin().\n\n\nAlso worth noting that the disable_all_timeouts() calls appear to break\nSTARTUP_PROGRESS_TIMEOUT.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Apr 2022 16:31:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's been broken in different ways all the way back to 9.0, from what I can\n> see, but I didn't check every single version.\n\n> Afaics the fix is to nuke the idea of doing anything substantial in the signal\n> handler from orbit, and instead just set a flag in the handler.\n\n+1. This is probably more feasible given the latch infrastructure\nthan it was when that code was first written.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 19:34:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-09 19:34:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It's been broken in different ways all the way back to 9.0, from what I can\n> > see, but I didn't check every single version.\n> \n> > Afaics the fix is to nuke the idea of doing anything substantial in the signal\n> > handler from orbit, and instead just set a flag in the handler.\n> \n> +1. This is probably more feasible given the latch infrastructure\n> than it was when that code was first written.\n\nWhat do you think about just reordering the disable_all_timeouts() to be\nbefore the got_standby_deadlock_timeout check in the back branches? I think\nthat should close at least the most obvious hole. 
And fix it properly in\nHEAD?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Apr 2022 11:49:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-09 19:34:26 -0400, Tom Lane wrote:\n>> +1. This is probably more feasible given the latch infrastructure\n>> than it was when that code was first written.\n\n> What do you think about just reordering the disable_all_timeouts() to be\n> before the got_standby_deadlock_timeout check in the back branches? I think\n> that should close at least the most obvious hole. And fix it properly in\n> HEAD?\n\nI don't have much faith in that, and I don't see why we can't fix it\nproperly. Don't we just need to have the signal handler set MyLatch,\nand then do the unsafe stuff back in the \"if (got_standby_deadlock_timeout)\"\nstanza in mainline?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Apr 2022 15:05:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-12 15:05:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-04-09 19:34:26 -0400, Tom Lane wrote:\n> >> +1. This is probably more feasible given the latch infrastructure\n> >> than it was when that code was first written.\n> \n> > What do you think about just reordering the disable_all_timeouts() to be\n> > before the got_standby_deadlock_timeout check in the back branches? I think\n> > that should close at least the most obvious hole. And fix it properly in\n> > HEAD?\n> \n> I don't have much faith in that, and I don't see why we can't fix it\n> properly.\n\nIt's not too hard, agreed.\n\nIt's somewhat hard to test that it works though. 
The new recovery conflict\ntests test all recovery conflicts except for deadlock ones, because they're\nnot easy to trigger... But I think I nearly got it working reliably.\n\nIt's probably worth backpatching the tests, after stripping them of the stats\nchecks?\n\nThree questions:\n\n- For HEAD we have to replace the disable_all_timeouts() calls, it breaks the\n replay progress reporting. Is there a reason to keep them in the\n backbranches? Hard to see how an extension or something could rely on it,\n but ...?\n\n- I named the variable set by the signal handler got_standby_delay_timeout,\n because just got_standby_timeout looked weird besides the _deadlock_. Which\n makes me think that we should rename STANDBY_TIMEOUT to\n STANDBY_DELAY_TIMEOUT too?\n\n- There's the following comment in ResolveRecoveryConflictWithBufferPin():\n\n \"We assume that only UnpinBuffer() and the timeout requests established\n above can wake us up here.\"\n\n That bogus afaict? There's plenty other things that can cause MyProc->latch\n to be set. Is it worth doing something about this at the same time? Right\n now we seem to call ResolveRecoveryConflictWithBufferPin() in rapid\n succession initially.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Apr 2022 17:26:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nAttached are patches for this issue.\n\nIt adds a test case for deadlock conflicts to make sure that case isn't\nbroken. I also tested the recovery conflict tests in the back branches, and\nthey work there with a reasonably small set of changes.\n\nQuestions:\n- I'm planning to backpatch the test as 031_recovery_conflict.pl, even though\n preceding numbers are unused. 
It seems way more problematic to use a\n different number in the backbranches than have gaps?\n\n- The test uses pump_until() and wait_for_log(), which don't exist in the\n backbranches. For now I've just inlined the implementation, but I guess we\n could also backpatch their introduction?\n\n- There's a few incompatibilities in the test with older branches:\n - older branches don't have allow_in_place_tablespaces - I think just\n skipping tablespace conflicts is fine, they're comparatively\n simple.\n\n Eventually it might make sense to backpatch allow_in_place_tablespaces,\n our test coverage in the area is quite poor.\n\n - the stats tests can't easily made reliably in the backbranches - which is\n ok, as the conflict itself is verified via the log\n\n - some branches don't have log_recovery_conflict_waits, since it's not\n critical to the test, it's ok to just not include it there\n\n I played with the idea of handling the differences using version comparisons\n in the code, and have the test be identically across branches. Since it's\n something we don't do so far, I'm leaning against it, but ...\n\n\n> - For HEAD we have to replace the disable_all_timeouts() calls, it breaks the\n> replay progress reporting. Is there a reason to keep them in the\n> backbranches? Hard to see how an extension or something could rely on it,\n> but ...?\n\nI've left it as is for now, will start a separate thread.\n\n\n> - There's the following comment in ResolveRecoveryConflictWithBufferPin():\n>\n> \"We assume that only UnpinBuffer() and the timeout requests established\n> above can wake us up here.\"\n>\n> That bogus afaict? There's plenty other things that can cause MyProc->latch\n> to be set. Is it worth doing something about this at the same time? Right\n> now we seem to call ResolveRecoveryConflictWithBufferPin() in rapid\n> succession initially.\n\nThe comment is more recent than I had realized. 
I raised this separately in\nhttps://postgr.es/m/20220429191815.xewxjlpmq7mxhsr2%40alap3.anarazel.de\n\n\npgindent uses some crazy formatting nearby:\n SendRecoveryConflictWithBufferPin(\n PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n\nI'm tempted to clean that up in passing by having just one\nSendRecoveryConflictWithBufferPin() call instead of two, storing the type of\nconflict in a local variable? Doesn't look entirely pretty either, but ...\n\n\nI'm very doubtful of this claim above ResolveRecoveryConflictWithBufferPin(),\nbtw. But that'd be a non-backpatchable cleanup, I think:\n * The ProcWaitForSignal() sleep normally done in LockBufferForCleanup()\n * (when not InHotStandby) is performed here, for code clarity.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 29 Apr 2022 13:08:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Questions:\n> - I'm planning to backpatch the test as 031_recovery_conflict.pl, even though\n> preceding numbers are unused. It seems way more problematic to use a\n> different number in the backbranches than have gaps?\n\n+1\n\n> - The test uses pump_until() and wait_for_log(), which don't exist in the\n> backbranches. 
For now I've just inlined the implementation, but I guess we\n> could also backpatch their introduction?\n\nI'd backpatch --- seems unlikely this will be the last need for 'em.\n\n> pgindent uses some crazy formatting nearby:\n> SendRecoveryConflictWithBufferPin(\n> PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n\nI do not believe that that line break is pgindent's fault.\nIf you just fold it into one line it should stay that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Apr 2022 19:26:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-04-29 19:26:59 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > - The test uses pump_until() and wait_for_log(), which don't exist in the\n> > backbranches. For now I've just inlined the implementation, but I guess we\n> > could also backpatch their introduction?\n> \n> I'd backpatch --- seems unlikely this will be the last need for 'em.\n\nDone.\n\nI ended up committing the extension of the test first, before the fix. I think\nthat's the cause of the failure on longfin on serinus. Let's hope the\nsituation improves with the now also committed (and backpatched) fix.\n\n\n> > pgindent uses some crazy formatting nearby:\n> > SendRecoveryConflictWithBufferPin(\n> > PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n> \n> I do not believe that that line break is pgindent's fault.\n\nOh - I'm fairly certain I've seen pgindent do that in the past. But you're\nright, it's not. Perhaps it was an older version of pgindent?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 2 May 2022 18:48:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I ended up committing the extension of the test first, before the fix. 
I think\n> that's the cause of the failure on longfin on serinus. Let's hope the\n> situation improves with the now also committed (and backpatched) fix.\n\nlongfin's definitely not very happy: four out of six tries have ended with\n\n\npsql:<stdin>:8: ERROR: canceling statement due to conflict with recovery\nLINE 1: SELECT * FROM test_recovery_conflict_table2;\n ^\nDETAIL: User was holding shared buffer pin for too long.\ntimed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl line 358.\n\n\nI can poke into that tomorrow, but are you sure that that isn't an\nexpectable result?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 May 2022 23:44:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-02 23:44:32 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I ended up committing the extension of the test first, before the fix. I think\n> > that's the cause of the failure on longfin on serinus. Let's hope the\n> > situation improves with the now also committed (and backpatched) fix.\n> \n> longfin's definitely not very happy: four out of six tries have ended with\n\nToo bad :(\n\n\n> psql:<stdin>:8: ERROR: canceling statement due to conflict with recovery\n> LINE 1: SELECT * FROM test_recovery_conflict_table2;\n> ^\n> DETAIL: User was holding shared buffer pin for too long.\n> timed out waiting for match: (?^:User transaction caused buffer deadlock with recovery.) at t/031_recovery_conflict.pl line 358.\n> \n> \n> I can poke into that tomorrow, but are you sure that that isn't an\n> expectable result?\n\nIt's not expected. 
But I think I might see what the problem is:\n\n$psql_standby{stdin} .= qq[\n BEGIN;\n -- hold pin\n DECLARE $cursor1 CURSOR FOR SELECT a FROM $table1;\n FETCH FORWARD FROM $cursor1;\n -- wait for lock held by prepared transaction\n SELECT * FROM $table2;\n ];\nok( pump_until(\n\t\t$psql_standby{run}, $psql_timeout,\n\t\t\\$psql_standby{stdout}, qr/^1$/m,),\n\t\"$sect: cursor holding conflicting pin, also waiting for lock, established\"\n);\n\nWe wait for the FETCH (and thus the buffer pin to be acquired). But that\ndoesn't guarantee that the lock has been acquired. We can't check that with\npump_until() afaics, because there'll not be any output. But a query_until()\nchecking pg_locks should do the trick?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 2 May 2022 21:05:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-02 23:44:32 -0400, Tom Lane wrote:\n>> I can poke into that tomorrow, but are you sure that that isn't an\n>> expectable result?\n\n> It's not expected. But I think I might see what the problem is:\n> We wait for the FETCH (and thus the buffer pin to be acquired). But that\n> doesn't guarantee that the lock has been acquired. We can't check that with\n> pump_until() afaics, because there'll not be any output. But a query_until()\n> checking pg_locks should do the trick?\n\nIrritatingly, it doesn't reproduce (at least not easily) in a manual\nbuild on the same box. 
So it's almost surely a timing issue, and\nyour theory here seems plausible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 May 2022 01:16:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "On 2022-May-02, Andres Freund wrote:\n\n> > > pgindent uses some crazy formatting nearby:\n> > > SendRecoveryConflictWithBufferPin(\n> > > PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK);\n> > \n> > I do not believe that that line break is pgindent's fault.\n> \n> Oh - I'm fairly certain I've seen pgindent do that in the past. But you're\n> right, it's not. Perhaps it was an older version of pgindent?\n\nNo, it's never done that. We used to fold lines that way manually,\nbecause pgindent used to push the argument to the left, but that changed\nat commit 382ceffdf7f6.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La grandeza es una experiencia transitoria. Nunca es consistente.\nDepende en gran parte de la imaginación humana creadora de mitos\"\n(Irulan)\n\n\n", "msg_date": "Tue, 3 May 2022 09:46:10 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-03 01:16:46 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-02 23:44:32 -0400, Tom Lane wrote:\n> >> I can poke into that tomorrow, but are you sure that that isn't an\n> >> expectable result?\n> \n> > It's not expected. But I think I might see what the problem is:\n> > We wait for the FETCH (and thus the buffer pin to be acquired). But that\n> > doesn't guarantee that the lock has been acquired. We can't check that with\n> > pump_until() afaics, because there'll not be any output. 
But a query_until()\n> > checking pg_locks should do the trick?\n> \n> Irritatingly, it doesn't reproduce (at least not easily) in a manual\n> build on the same box.\n\nOdd, given how readily it seem to reproduce on the bf. I assume you built with\n> Uses -fsanitize=alignment -DWRITE_READ_PARSE_PLAN_TREES -DSTRESS_SORT_INT_MIN -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\n\n> So it's almost surely a timing issue, and your theory here seems plausible.\n\nUnfortunately I don't think my theory holds, because I actually had added a\ndefense against this into the test that I forgot about momentarily...\n\n# just to make sure we're waiting for lock already\nok( $node_standby->poll_query_until(\n\t\t'postgres', qq[\nSELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n], 'waiting'),\n\t\"$sect: lock acquisition is waiting\");\n\nand on longfin that step completes sucessfully.\n\n\nI think what happens is that we get a buffer pin conflict, because these days\nwe can actually process buffer pin conflicts while waiting for a lock. The\neasiest way to get around that is to increase the replay timeout for that\ntest, I think?\n\nI think we need a restart, not a reload, because reloads aren't guaranteed to\nbe processed at any certain point in time :/.\n\n\nTesting a fix in a variety of timing circumstances now...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 3 May 2022 11:20:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-03 01:16:46 -0400, Tom Lane wrote:\n>> Irritatingly, it doesn't reproduce (at least not easily) in a manual\n>> build on the same box.\n\n> Odd, given how readily it seem to reproduce on the bf. 
I assume you built with\n>> Uses -fsanitize=alignment -DWRITE_READ_PARSE_PLAN_TREES -DSTRESS_SORT_INT_MIN -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\nYeah, I copied all that stuff ...\n\n>> So it's almost surely a timing issue, and your theory here seems plausible.\n\n> Unfortunately I don't think my theory holds, because I actually had added a\n> defense against this into the test that I forgot about momentarily...\n\nOh, hm. I can try harder to repro it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 May 2022 14:23:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-03 14:23:23 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> So it's almost surely a timing issue, and your theory here seems plausible.\n> \n> > Unfortunately I don't think my theory holds, because I actually had added a\n> > defense against this into the test that I forgot about momentarily...\n> \n> Oh, hm. I can try harder to repro it.\n\nI've now reproduced it a couple times here running under rr, so it's probably\nnot worth putting too much effort into that.\n\nAttached is a fix for the test that I think should avoid the problem. Couldn't\nrepro it with it applied, under both rr and valgrind.\n\n\nMy current problem is that I'm running into some IPC::Run issues (bug?). I get\n\"ack Broken pipe:\" iff I add \"SELECT pg_sleep(1);\" after\n\"-- wait for lock held by prepared transaction\"\n\nIt doesn't happen without that debugging thing, but it makes me worried that\nit's something that'll come up in random situations.\n\nIt looks to me like it's a bug in IPC::Run - with a few changes I get the\nfailure to happen inside pump_nb(), which seems like it shouldn't error out\njust because the child process exited...\n\n\nI *think* it might not happen without the sleep. 
But I'm not at all confident.\n\nIn general I'm kinda worried on how much effectively unmaintained perl stuff\nwe're depending :(\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 3 May 2022 12:13:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Attached is a fix for the test that I think should avoid the problem. Couldn't\n> repro it with it applied, under both rr and valgrind.\n\nMay I ask where we're at on this? Next week's back-branch release is\ngetting uncomfortably close, and I'm still seeing various buildfarm\nanimals erratically failing on 031_recovery_conflict.pl. Should we\njust remove that test from the back branches for now?\n\nAlso, it appears that the back-patch of pump_until failed to remove\nsome pre-existing copies, eg check-world in v14 now reports\n\nSubroutine pump_until redefined at t/013_crash_restart.pl line 248.\nSubroutine pump_until redefined at t/022_crash_temp_files.pl line 272.\n\nI didn't check whether these are exact duplicates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 May 2022 22:07:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-05 22:07:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Attached is a fix for the test that I think should avoid the problem. Couldn't\n> > repro it with it applied, under both rr and valgrind.\n> \n> May I ask where we're at on this? Next week's back-branch release is\n> getting uncomfortably close, and I'm still seeing various buildfarm\n> animals erratically failing on 031_recovery_conflict.pl.\n\nYea, sorry. 
I had crappy internet connectivity / device access for the last\nfew days, making it hard to make progress.\n\nLooks like the problems are gone on HEAD at least.\n\n\n> Should we just remove that test from the back branches for now?\n\nThat might be the best course, marking the test as TODO perhaps?\n\nUnfortunately a pg_ctl reload isn't processed reliably by the time the next\ntest steps execute (saw that once when running in a loop), and a restart\ncauses other problems (throws stats away).\n\n\n> Also, it appears that the back-patch of pump_until failed to remove\n> some pre-existing copies, eg check-world in v14 now reports\n\n> Subroutine pump_until redefined at t/013_crash_restart.pl line 248.\n> Subroutine pump_until redefined at t/022_crash_temp_files.pl line 272.\n> \n> I didn't check whether these are exact duplicates.\n\nThey're not quite identical copies, which is why I left them in-place. But the\nwarnings clearly make that a bad idea. I somehow mis-extrapolated from\nCluster.pm, where it's not a problem (accessed via object). I'll remove them.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 5 May 2022 20:09:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-05 22:07:40 -0400, Tom Lane wrote:\n>> May I ask where we're at on this? 
Next week's back-branch release is\n>> getting uncomfortably close, and I'm still seeing various buildfarm\n>> animals erratically failing on 031_recovery_conflict.pl.\n\n> Looks like the problems are gone on HEAD at least.\n\nIt does look that way, although the number of successes is not large yet.\n\n>> Should we just remove that test from the back branches for now?\n\n> That might be the best course, marking the test as TODO perhaps?\n\nI poked closer and saw that you reverted 5136967f1 et al because\n(I suppose) adjust_conf is not there in the back branches. While\nI'd certainly support back-patching that functionality, I think\nwe need to have a discussion about how to do it. I wonder whether\nwe shouldn't drop src/test/perl/PostgreSQL/... into the back branches\nin toto and make the old test APIs into a wrapper around the new ones\ninstead of vice versa. But that's definitely not a task to undertake\nthree days before a release deadline.\n\nSo I reluctantly vote for removing 031_recovery_conflict.pl in the\nback branches for now, with the expectation that we'll fix the\ninfrastructure and put it back after the current release round\nis done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 May 2022 23:36:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-05 23:36:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-05 22:07:40 -0400, Tom Lane wrote:\n> >> May I ask where we're at on this? 
Next week's back-branch release is\n> >> getting uncomfortably close, and I'm still seeing various buildfarm\n> >> animals erratically failing on 031_recovery_conflict.pl.\n> \n> > Looks like the problems are gone on HEAD at least.\n> \n> It does look that way, although the number of successes is not large yet.\n> \n> >> Should we just remove that test from the back branches for now?\n> \n> > That might be the best course, marking the test as TODO perhaps?\n> \n> I poked closer and saw that you reverted 5136967f1 et al because\n> (I suppose) adjust_conf is not there in the back branches.\n\nYea. That one was a stupid mistake, working outside my usual environment. It's\neasy enough to work around adjust_conf not existing (just appending works),\nbut then there's subsequent test failures...\n\n\n> So I reluctantly vote for removing 031_recovery_conflict.pl in the\n> back branches for now, with the expectation that we'll fix the\n> infrastructure and put it back after the current release round\n> is done.\n\nWhat about instead marking the flapping test TODO? That'd still give us most\nof the coverage...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 5 May 2022 20:53:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-05 23:36:22 -0400, Tom Lane wrote:\n>> So I reluctantly vote for removing 031_recovery_conflict.pl in the\n>> back branches for now, with the expectation that we'll fix the\n>> infrastructure and put it back after the current release round\n>> is done.\n\n> What about instead marking the flapping test TODO? That'd still give us most\n> of the coverage...\n\nAre you sure there's just one test that's failing? I haven't checked\nthe buildfarm history close enough to be sure of that. 
But if it's\ntrue, disabling just that one would be fine (again, as a stopgap\nmeasure).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 May 2022 23:57:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-05 23:57:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-05 23:36:22 -0400, Tom Lane wrote:\n> >> So I reluctantly vote for removing 031_recovery_conflict.pl in the\n> >> back branches for now, with the expectation that we'll fix the\n> >> infrastructure and put it back after the current release round\n> >> is done.\n> \n> > What about instead marking the flapping test TODO? That'd still give us most\n> > of the coverage...\n> \n> Are you sure there's just one test that's failing? I haven't checked\n> the buildfarm history close enough to be sure of that. But if it's\n> true, disabling just that one would be fine (again, as a stopgap\n> measure).\n\nI looked through all the failures I found and it's two kinds of failures, both\nrelated to the deadlock test. So I'm thinking of skipping just that test as in\nthe attached.\n\nWorking on committing / backpatching that, unless somebody suggests changes\nquickly...\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 6 May 2022 08:58:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I looked through all the failures I found and it's two kinds of failures, both\n> related to the deadlock test. 
So I'm thinking of skipping just that test as in\n> the attached.\n\n> Working on committing / backpatching that, unless somebody suggests changes\n> quickly...\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 May 2022 12:12:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "On 2022-05-06 12:12:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I looked through all the failures I found and it's two kinds of failures, both\n> > related to the deadlock test. So I'm thinking of skipping just that test as in\n> > the attached.\n> \n> > Working on committing / backpatching that, unless somebody suggests changes\n> > quickly...\n> \n> WFM.\n\nDone. Perhaps you could trigger a run on longfin, that seems to have been the\nmost reliably failing animal?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 May 2022 10:28:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Done. Perhaps you could trigger a run on longfin, that seems to have been the\n> most reliably failing animal?\n\nNo need, its cron job launched already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 May 2022 14:01:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-05 23:57:28 -0400, Tom Lane wrote:\n>> Are you sure there's just one test that's failing? I haven't checked\n>> the buildfarm history close enough to be sure of that. 
But if it's\n>> true, disabling just that one would be fine (again, as a stopgap\n>> measure).\n\n> I looked through all the failures I found and it's two kinds of failures, both\n> related to the deadlock test. So I'm thinking of skipping just that test as in\n> the attached.\n\nPer lapwing's latest results [1], this wasn't enough. I'm again thinking\nwe should pull the whole test from the back branches.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-05-07%2016%3A40%3A04\n\n\n", "msg_date": "Sun, 08 May 2022 11:28:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-08 11:28:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-05 23:57:28 -0400, Tom Lane wrote:\n> >> Are you sure there's just one test that's failing? I haven't checked\n> >> the buildfarm history close enough to be sure of that. But if it's\n> >> true, disabling just that one would be fine (again, as a stopgap\n> >> measure).\n> \n> > I looked through all the failures I found and it's two kinds of failures, both\n> > related to the deadlock test. So I'm thinking of skipping just that test as in\n> > the attached.\n> \n> Per lapwing's latest results [1], this wasn't enough. I'm again thinking\n> we should pull the whole test from the back branches.\n\nThat failure is different from the earlier failures though. I don't think it's\na timing issue in the test like the deadlock check one. I rather suspect it's\nindicative of further problems in this area. Potentially the known problem\nwith RecoveryConflictInterrupt() running in the signal handler? 
I think Thomas\nhas a patch for that...\n\nOne failure in ~20 runs, on one animal doesn't seem worth disabling the test\nfor.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 8 May 2022 10:38:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-08 11:28:34 -0400, Tom Lane wrote:\n>> Per lapwing's latest results [1], this wasn't enough. I'm again thinking\n>> we should pull the whole test from the back branches.\n\n> That failure is different from the earlier failures though. I don't think it's\n> a timing issue in the test like the deadlock check one. I rather suspect it's\n> indicative of further problems in this area.\n\nYeah, that was my guess too.\n\n> Potentially the known problem\n> with RecoveryConflictInterrupt() running in the signal handler? I think Thomas\n> has a patch for that...\n\nMaybe; or given that it's on v10, it could be telling us about some\nyet-other problem we perhaps solved since then without realizing\nit needed to be back-patched.\n\n> One failure in ~20 runs, on one animal doesn't seem worth disabling the test\n> for.\n\nNo one is going to thank us for shipping a known-unstable test case.\nIt does nothing to fix the problem; all it will lead to is possible\nfailures during package builds. I have no idea whether any packagers\nuse \"make check-world\" rather than just \"make check\" while building.\nBut if they do, even fairly low-probability failures can be problematic.\n(I still carry the scars I acquired while working at Red Hat and being\nresponsible for packaging mysql: at least back then, their test suite\nwas full of cases that mostly worked fine, except when getting stressed\nin Red Hat's build farm. 
Dealing with a test suite that fails 50% of\nthe time under load, while trying to push out an urgent security fix,\nis NOT a pleasant situation.)\n\nI'm happy to have this test in the stable branches once we have committed\nfixes that address all known problems. Until then, it will just be\na nuisance for anyone who is not a developer working on those problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 May 2022 13:59:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-05-08 13:59:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-08 11:28:34 -0400, Tom Lane wrote:\n> >> Per lapwing's latest results [1], this wasn't enough. I'm again thinking\n> >> we should pull the whole test from the back branches.\n> \n> > That failure is different from the earlier failures though. I don't think it's\n> > a timing issue in the test like the deadlock check one. I rather suspect it's\n> > indicative of further problems in this area.\n> \n> Yeah, that was my guess too.\n> \n> > Potentially the known problem\n> > with RecoveryConflictInterrupt() running in the signal handler? I think Thomas\n> > has a patch for that...\n> \n> Maybe; or given that it's on v10, it could be telling us about some\n> yet-other problem we perhaps solved since then without realizing\n> it needed to be back-patched.\n> \n> > One failure in ~20 runs, on one animal doesn't seem worth disabling the test\n> > for.\n> \n> No one is going to thank us for shipping a known-unstable test case.\n\nIDK, hiding failures indicating bugs isn't really better, at least if it\ndoesn't look like a bug in the test. 
But you seem to have a stronger opinion\non this than me, so I'll skip the entire test for now :/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 8 May 2022 15:11:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "On 2022-05-08 15:11:39 -0700, Andres Freund wrote:\n> But you seem to have a stronger opinion on this than me, so I'll skip the\n> entire test for now :/\n\nAnd done.\n\n\n", "msg_date": "Sun, 8 May 2022 18:14:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-08 15:11:39 -0700, Andres Freund wrote:\n>> But you seem to have a stronger opinion on this than me, so I'll skip the\n>> entire test for now :/\n\n> And done.\n\nThanks, I appreciate that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 May 2022 21:25:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "On 2022-May-08, Andres Freund wrote:\n\n> On 2022-05-08 13:59:09 -0400, Tom Lane wrote:\n\n> > No one is going to thank us for shipping a known-unstable test case.\n> \n> IDK, hiding failures indicating bugs isn't really better, at least if it\n> doesn't look like a bug in the test. But you seem to have a stronger opinion\n> on this than me, so I'll skip the entire test for now :/\n\nHey, I just noticed that these tests are still disabled. 
The next\nminors are coming soon; should we wait until *those* are done and then\nre-enable; or re-enable them now to see how they fare and then\nre-disable before the next minors if there's still problems we don't\nfind fixes for?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n", "msg_date": "Tue, 26 Jul 2022 18:33:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hey, I just noticed that these tests are still disabled. The next\n> minors are coming soon; should we wait until *those* are done and then\n> re-enable; or re-enable them now to see how they fare and then\n> re-disable before the next minors if there's still problems we don't\n> find fixes for?\n\nMaybe I missed it, but I don't think anything's been done to fix the\ntest's problems in the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jul 2022 12:47:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hi,\n\nOn 2022-07-26 12:47:38 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Hey, I just noticed that these tests are still disabled. The next\n> > minors are coming soon; should we wait until *those* are done and then\n> > re-enable; or re-enable them now to see how they fare and then\n> > re-disable before the next minors if there's still problems we don't\n> > find fixes for?\n> \n> Maybe I missed it, but I don't think anything's been done to fix the\n> test's problems in the back branches.\n\nYea, I don't think either. 
What's worse, there's several actual problems\nin the recovery conflict code in all branches - which causes occasional\nfailures of the test in HEAD as well.\n\nThere's one big set of fixes in\nhttps://www.postgresql.org/message-id/CA%2BhUKGKrLKx7Ky1T_FHk-Y729K0oie-gOXKCbxCXyjbPDJAOOw%40mail.gmail.com\n\nand I suspect it'll be hard to have test be fully reliable without\naddressing\nhttps://postgr.es/m/20220715172938.t7uivghdx5vj36cn%40awork3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Jul 2022 10:40:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: failures in t/031_recovery_conflict.pl on CI" }, { "msg_contents": "Hello.\n\nEven after applying the patch, we are still facing an \"ack Broken pipe\"\nproblem. \nIt occurs on the arm64 platform, presumably under high load. \nHere is a log snippet from buildfarm:\n...\n[19:08:12.150](0.394s) ok 13 - startup deadlock: cursor holding conflicting\npin, also waiting for lock, established\n[19:08:12.208](0.057s) ok 14 - startup deadlock: lock acquisition is waiting\nWaiting for replication conn standby's replay_lsn to pass 0/33F5FE0 on\nprimary\ndone\npsql:<stdin>:8: ERROR: canceling statement due to conflict with recovery\nLINE 1: SELECT * FROM test_recovery_conflict_table2;\n ^\nDETAIL: User transaction caused buffer deadlock with recovery.\n[19:08:12.319](0.112s) ok 15 - startup deadlock: logfile contains terminated\nconnection due to recovery conflict\nack Broken pipe: write( 13, '\\\\q\n' ) at /usr/share/perl5/IPC/Run/IO.pm line 550.\n### Stopping node \"primary\" using mode immediate\n# Running: pg_ctl -D\n/home/bf/build/buildfarm-flaviventris/REL_15_STABLE/pgsql.build/src/test/rec\novery/tmp_check/t_031_recovery_conflict_primary_data/pgdata -m immediate\nstop\nwaiting for server to shut down... 
done\nserver stopped\n# No postmaster PID for node \"primary\"\n### Stopping node \"standby\" using mode immediate\n# Running: pg_ctl -D\n/home/bf/build/buildfarm-flaviventris/REL_15_STABLE/pgsql.build/src/test/rec\novery/tmp_check/t_031_recovery_conflict_standby_data/pgdata -m immediate\nstop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"standby\"\n[19:08:12.450](0.131s) # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[19:08:12.450](0.000s) # Looks like your test exited with 32 just after 15.\n...\n\nBelow is a test report:\n... \n[20:46:35] t/030_stats_cleanup_replica.pl ....... ok 9956 ms ( 0.01 usr\n0.00 sys + 3.49 cusr 2.49 csys = 5.99 CPU)\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 32 just after 15.\n[20:46:43] t/031_recovery_conflict.pl ........... \nDubious, test returned 32 (wstat 8192, 0x2000)\nAll 15 subtests passed \n[20:46:56] t/032_relfilenode_reuse.pl ........... ok 13625 ms ( 0.01 usr\n0.00 sys + 3.53 cusr 2.39 csys = 5.93 CPU)\n[20:47:03] t/110_wal_sender_check_crc.pl ........ 
ok 6421 ms ( 0.00 usr\n0.00 sys + 3.20 cusr 1.87 csys = 5.07 CPU)\n[20:47:03]\n\nTest Summary Report\n-------------------\nt/031_recovery_conflict.pl (Wstat: 8192 Tests: 15 Failed: 0)\n Non-zero exit status: 32\n Parse errors: No plan found in TAP output\n\n\n\n\n-----Original Message-----\nFrom: Andres Freund <andres@anarazel.de> \nSent: Tuesday, May 3, 2022 11:13 PM\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: Robert Haas <robertmhaas@gmail.com>; pgsql-hackers@postgresql.org;\nThomas Munro <thomas.munro@gmail.com>\nSubject: Re: failures in t/031_recovery_conflict.pl on CI\n\nHi,\n\nOn 2022-05-03 14:23:23 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> So it's almost surely a timing issue, and your theory here seems\nplausible.\n> \n> > Unfortunately I don't think my theory holds, because I actually had \n> > added a defense against this into the test that I forgot about\nmomentarily...\n> \n> Oh, hm. I can try harder to repro it.\n\nI've now reproduced it a couple times here running under rr, so it's\nprobably not worth putting too much effort into that.\n\nAttached is a fix for the test that I think should avoid the problem.\nCouldn't repro it with it applied, under both rr and valgrind.\n\n\nMy current problem is that I'm running into some IPC::Run issues (bug?). I\nget \"ack Broken pipe:\" iff I add \"SELECT pg_sleep(1);\" after\n\"-- wait for lock held by prepared transaction\"\n\nIt doesn't happen without that debugging thing, but it makes me worried that\nit's something that'll come up in random situations.\n\nIt looks to me like it's a bug in IPC::Run - with a few changes I get the\nfailure to happen inside pump_nb(), which seems like it shouldn't error out\njust because the child process exited...\n\n\nI *think* it might not happen without the sleep. 
But I'm not at all\nconfident.\n\nIn general I'm kinda worried on how much effectively unmaintained perl stuff\nwe're depending :(\n\nGreetings,\n\nAndres Freund\n\n\n\n", "msg_date": "Thu, 24 Nov 2022 09:24:01 +0400", "msg_from": "=?utf-8?q?=D0=A4=D0=B0=D0=BA=D0=B5=D0=B5=D0=B2?=\n =?utf-8?q?_=D0=90=D0=BB=D0=B5=D0=BA=D1=81=D0=B5=D0=B9?=\n <a.fakeev@postgrespro.ru>", "msg_from_op": false, "msg_subject": "RE: failures in t/031_recovery_conflict.pl on CI" } ]
[ { "msg_contents": "Hi,\n\nUnlike most \"procsignal\" handler routines, RecoveryConflictInterrupt()\ndoesn't just set a sig_atomic_t flag and poke the latch. Is the extra\nstuff it does safe? For example, is this call stack OK (to pick one\nthat jumps out, but not the only one)?\n\nprocsignal_sigusr1_handler\n-> RecoveryConflictInterrupt\n -> HoldingBufferPinThatDelaysRecovery\n -> GetPrivateRefCount\n -> GetPrivateRefCountEntry\n -> hash_search(...hash table that might be in the middle of an update...)\n\n(I noticed this incidentally while trying to follow along with the\nnearby thread on 031_recovery_conflict.pl, but the question of why we\nreally need this of interest to me for a back-burner project I have to\ntry to remove all use of signals except for latches, and then remove\nthe signal emulation for Windows. It may turn out to be a pipe dream,\nbut this stuff is one of the subproblems.)\n\n\n", "msg_date": "Sun, 10 Apr 2022 07:57:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Unlike most \"procsignal\" handler routines, RecoveryConflictInterrupt()\n> doesn't just set a sig_atomic_t flag and poke the latch. Is the extra\n> stuff it does safe? For example, is this call stack OK (to pick one\n> that jumps out, but not the only one)?\n\n> procsignal_sigusr1_handler\n> -> RecoveryConflictInterrupt\n> -> HoldingBufferPinThatDelaysRecovery\n> -> GetPrivateRefCount\n> -> GetPrivateRefCountEntry\n> -> hash_search(...hash table that might be in the middle of an update...)\n\nUgh. That one was safe before somebody decided we needed a hash table\nfor buffer refcounts, but it's surely not safe now. 
Which, of course,\ndemonstrates the folly of allowing signal handlers to call much of\nanything; but especially doing so without clearly marking the called\nfunctions as needing to be signal safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 17:00:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-04-09 17:00:41 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Unlike most \"procsignal\" handler routines, RecoveryConflictInterrupt()\n> > doesn't just set a sig_atomic_t flag and poke the latch. Is the extra\n> > stuff it does safe? For example, is this call stack OK (to pick one\n> > that jumps out, but not the only one)?\n> \n> > procsignal_sigusr1_handler\n> > -> RecoveryConflictInterrupt\n> > -> HoldingBufferPinThatDelaysRecovery\n> > -> GetPrivateRefCount\n> > -> GetPrivateRefCountEntry\n> > -> hash_search(...hash table that might be in the middle of an update...)\n> \n> Ugh. That one was safe before somebody decided we needed a hash table\n> for buffer refcounts, but it's surely not safe now.\n\nMea culpa. This is 4b4b680c3d6d - from 2014.\n\n\n> Which, of course, demonstrates the folly of allowing signal handlers to call\n> much of anything; but especially doing so without clearly marking the called\n> functions as needing to be signal safe.\n\nYea. Particularly when just going through bufmgr and updating places that look\nat pin counts, it's not immediately obvious that\nHoldingBufferPinThatDelaysRecovery() runs in a signal handler. 
Partially\nbecause RecoveryConflictInterrupt() - which is mentioned in the comment above\nHoldingBufferPinThatDelaysRecovery() - sounds a lot like it's called from\nProcessInterrupts(), which doesn't run in a signal handler...\n\nRecoveryConflictInterrupt() calls a lot of functions, some of which quite\nplausibly could be changed to not be signal safe, even if they currently are.\n\n\nIs there really a reason for RecoveryConflictInterrupt() to run in a signal\nhandler? Given that we only react to conflicts in ProcessInterrupts(), it's\nnot immediately obvious that we need to do anything in\nRecoveryConflictInterrupt() but set some flags. There's probably some minor\nefficiency gains, but that seems unconvincing.\n\n\nThe comments really need a rewrite - it sounds like\nRecoveryConflictInterrupt() will error out itself:\n\n /*\n * If we can abort just the current subtransaction then we are\n * OK to throw an ERROR to resolve the conflict. Otherwise\n * drop through to the FATAL case.\n *\n * XXX other times that we can throw just an ERROR *may* be\n * PROCSIG_RECOVERY_CONFLICT_LOCK if no locks are held in\n * parent transactions\n *\n * PROCSIG_RECOVERY_CONFLICT_SNAPSHOT if no snapshots are held\n * by parent transactions and the transaction is not\n * transaction-snapshot mode\n *\n * PROCSIG_RECOVERY_CONFLICT_TABLESPACE if no temp files or\n * cursors open in parent transactions\n */\n\nit's technically not *wrong* because it's setting up state that then leads to\nERROR / FATAL being thrown, but ...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Apr 2022 14:39:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "Hi,\n\nOn 2022-04-09 14:39:16 -0700, Andres Freund wrote:\n> On 2022-04-09 17:00:41 -0400, Tom Lane wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Unlike most \"procsignal\" handler routines, RecoveryConflictInterrupt()\n> > > doesn't just set a sig_atomic_t flag and poke the latch. Is the extra\n> > > stuff it does safe? For example, is this call stack OK (to pick one\n> > > that jumps out, but not the only one)?\n> > \n> > > procsignal_sigusr1_handler\n> > > -> RecoveryConflictInterrupt\n> > > -> HoldingBufferPinThatDelaysRecovery\n> > > -> GetPrivateRefCount\n> > > -> GetPrivateRefCountEntry\n> > > -> hash_search(...hash table that might be in the middle of an update...)\n> > \n> > Ugh. That one was safe before somebody decided we needed a hash table\n> > for buffer refcounts, but it's surely not safe now.\n> \n> Mea culpa. This is 4b4b680c3d6d - from 2014.\n\nWhoa. There's way worse: StandbyTimeoutHandler() calls\nSendRecoveryConflictWithBufferPin(), which calls CancelDBBackends(), which\nacquires lwlocks etc.\n\nWhich very plausibly is the cause for the issue I'm investigating in\nhttps://www.postgresql.org/message-id/20220409220054.fqn5arvbeesmxdg5%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Apr 2022 16:00:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sun, Apr 10, 2022 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-09 14:39:16 -0700, Andres Freund wrote:\n> > On 2022-04-09 17:00:41 -0400, Tom Lane wrote:\n> > > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > > Unlike most \"procsignal\" handler routines, RecoveryConflictInterrupt()\n> > > > doesn't just set a sig_atomic_t flag and poke the latch. Is the extra\n> > > > stuff it does safe? 
For example, is this call stack OK (to pick one\n> > > > that jumps out, but not the only one)?\n> > >\n> > > > procsignal_sigusr1_handler\n> > > > -> RecoveryConflictInterrupt\n> > > > -> HoldingBufferPinThatDelaysRecovery\n> > > > -> GetPrivateRefCount\n> > > > -> GetPrivateRefCountEntry\n> > > > -> hash_search(...hash table that might be in the middle of an update...)\n> > >\n> > > Ugh. That one was safe before somebody decided we needed a hash table\n> > > for buffer refcounts, but it's surely not safe now.\n> >\n> > Mea culpa. This is 4b4b680c3d6d - from 2014.\n>\n> Whoa. There's way worse: StandbyTimeoutHandler() calls\n> SendRecoveryConflictWithBufferPin(), which calls CancelDBBackends(), which\n> acquires lwlocks etc.\n>\n> Which very plausibly is the cause for the issue I'm investigating in\n> https://www.postgresql.org/message-id/20220409220054.fqn5arvbeesmxdg5%40alap3.anarazel.de\n\nHuh. I wouldn't have started a separate thread for this if I'd\nrealised I was getting close to the cause of the CI failure... I\nthought this was an incidental observation. Anyway, I made a first\nattempt at fixing this SIGUSR1 problem (I think Andres is looking at\nthe SIGALRM problem in the other thread).\n\nInstead of bothering to create N different XXXPending variables for\nthe different conflict \"reasons\", I used an array. Other than that,\nit's much like existing examples.\n\nThe existing use of the global variable RecoveryConflictReason seems a\nlittle woolly. Doesn't it get clobbered every time a signal arrives,\neven if we determine that there is no conflict? Not sure why that's\nOK, but anyway, this patch always sets it together with\nRecoveryConflictPending = true.", "msg_date": "Tue, 12 Apr 2022 10:33:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-04-12 10:33:28 +1200, Thomas Munro wrote:\n> Huh. 
I wouldn't have started a separate thread for this if I'd\n> realised I was getting close to the cause of the CI failure...\n\n:)\n\n\n> Instead of bothering to create N different XXXPending variables for\n> the different conflict \"reasons\", I used an array. Other than that,\n> it's much like existing examples.\n\nIt kind of bothers me that each pending conflict has its own external function\ncall. It doesn't actually cost anything, because it's quite unlikely that\nthere's more than one pending conflict. Besides aesthetically displeasing,\nit also leads to an unnecessarily large amount of code needed, because the\ncalls to RecoveryConflictInterrupt() can't be merged...\n\nBut that's perhaps best fixed separately.\n\n\nWhat might actually make more sense is to just have a bitmask or something?\n\n\n> The existing use of the global variable RecoveryConflictReason seems a\n> little woolly. Doesn't it get clobbered every time a signal arrives,\n> even if we determine that there is no conflict? Not sure why that's\n> OK, but anyway, this patch always sets it together with\n> RecoveryConflictPending = true.\n\nYea. It's probably ok, kind of, because there shouldn't be multiple\noutstanding conflicts with very few exceptions (deadlock and buffer pin). And\nit doesn't matter that much which of those gets handled. And we'll retry\nagain. But brrr.\n\n\n> +/*\n> + * Check one recovery conflict reason. This is called when the corresponding\n> + * RecoveryConflictInterruptPending flag is set. 
If we decide that a conflict\n> + * exists, then RecoveryConflictReason and RecoveryConflictPending will be set,\n> + * to be handled later in the same invocation of ProcessInterrupts().\n> + */\n> +static void\n> +ProcessRecoveryConflictInterrupt(ProcSignalReason reason)\n> +{\n> \t/*\n> \t * Don't joggle the elbow of proc_exit\n> \t */\n> \tif (!proc_exit_inprogress)\n> \t{\n> -\t\tRecoveryConflictReason = reason;\n> \t\tswitch (reason)\n> \t\t{\n> \t\t\tcase PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK:\n> @@ -3084,9 +3094,9 @@ RecoveryConflictInterrupt(ProcSignalReason reason)\n> \t\t\t\t\tif (IsAbortedTransactionBlockState())\n> \t\t\t\t\t\treturn;\n> \n> +\t\t\t\t\tRecoveryConflictReason = reason;\n> \t\t\t\t\tRecoveryConflictPending = true;\n> \t\t\t\t\tQueryCancelPending = true;\n> -\t\t\t\t\tInterruptPending = true;\n> \t\t\t\t\tbreak;\n> \t\t\t\t}\n> \n> @@ -3094,9 +3104,9 @@ RecoveryConflictInterrupt(ProcSignalReason reason)\n> \t\t\t\t/* FALLTHROUGH */\n> \n> \t\t\tcase PROCSIG_RECOVERY_CONFLICT_DATABASE:\n> +\t\t\t\tRecoveryConflictReason = reason;\n> \t\t\t\tRecoveryConflictPending = true;\n> \t\t\t\tProcDiePending = true;\n> -\t\t\t\tInterruptPending = true;\n> \t\t\t\tbreak;\n> \n> \t\t\tdefault:\n> @@ -3115,15 +3125,6 @@ RecoveryConflictInterrupt(ProcSignalReason reason)\n> \t\tif (reason == PROCSIG_RECOVERY_CONFLICT_DATABASE)\n> \t\t\tRecoveryConflictRetryable = false;\n> \t}\n\nIt's pretty weird that we have all this stuff that we then just check a short\nwhile later in ProcessInterrupts() whether they've been set.\n\nSeems like it'd make more sense to throw the error in\nProcessRecoveryConflictInterrupt(), now that it's not in a a signal handler\nanymore?\n\n\n> /*\n> @@ -3147,6 +3148,22 @@ ProcessInterrupts(void)\n> \t\treturn;\n> \tInterruptPending = false;\n> \n> +\t/*\n> +\t * Have we been asked to check for a recovery conflict? 
Processing these\n> +\t * interrupts may result in RecoveryConflictPending and related variables\n> +\t * being set, to be handled further down.\n> +\t */\n> +\tfor (int i = PROCSIG_RECOVERY_CONFLICT_FIRST;\n> +\t\t i <= PROCSIG_RECOVERY_CONFLICT_LAST;\n> +\t\t ++i)\n> +\t{\n> +\t\tif (RecoveryConflictInterruptPending[i])\n> +\t\t{\n> +\t\t\tRecoveryConflictInterruptPending[i] = false;\n> +\t\t\tProcessRecoveryConflictInterrupt(i);\n> +\t\t}\n> +\t}\n\nHm. This seems like it shouldn't be in ProcessInterrupts(). How about checking\ncalling a wrapper doing all this if RecoveryConflictPending?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 11 Apr 2022 15:50:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Tue, Apr 12, 2022 at 10:50 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-12 10:33:28 +1200, Thomas Munro wrote:\n> > Instead of bothering to create N different XXXPending variables for\n> > the different conflict \"reasons\", I used an array. Other than that,\n> > it's much like existing examples.\n>\n> It kind of bothers me that each pending conflict has its own external function\n> call. It doesn't actually cost anything, because it's quite unlikely that\n> there's more than one pending conflict. Besides aesthetically displeasing,\n> it also leads to an unnecessarily large amount of code needed, because the\n> calls to RecoveryConflictInterrupt() can't be merged...\n\nOk, in this version there's two levels of flag:\nRecoveryConflictPending, so we do nothing if that's not set, and then\nthe loop over RecoveryConflictPendingReasons is moved into\nProcessRecoveryConflictInterrupts(). Better?\n\n> What might actually make more sense is to just have a bitmask or something?\n\nYeah, in fact I'm exploring something like that in later bigger\nredesign work[1] that gets rid of signal handlers. 
Here I'm looking\nfor something simple and potentially back-patchable and I don't want\nto have to think about async signal safety of bit-level manipulations.\n\n> It's pretty weird that we have all this stuff that we then just check a short\n> while later in ProcessInterrupts() whether they've been set.\n>\n> Seems like it'd make more sense to throw the error in\n> ProcessRecoveryConflictInterrupt(), now that it's not in a a signal handler\n> anymore?\n\nYeah. The thing that was putting me off doing that (and caused me to\nget kinda stuck in the valley of indecision for a while here, sorry\nabout that) aside from trying to keep the diff small, was the need to\nreplicate this self-loathing code in a second place:\n\n if (QueryCancelPending && QueryCancelHoldoffCount != 0)\n {\n /*\n * Re-arm InterruptPending so that we process the cancel request as\n * soon as we're done reading the message. (XXX this is seriously\n * ugly: it complicates INTERRUPTS_CAN_BE_PROCESSED(), and it means we\n * can't use that macro directly as the initial test in this function,\n * meaning that this code also creates opportunities for other bugs to\n * appear.)\n */\n\nBut I have now tried doing that anyway, and I hope the simplification\nin other ways makes it worth it. Thoughts, objections?\n\n> > /*\n> > @@ -3147,6 +3148,22 @@ ProcessInterrupts(void)\n> > return;\n> > InterruptPending = false;\n> >\n> > + /*\n> > + * Have we been asked to check for a recovery conflict? Processing these\n> > + * interrupts may result in RecoveryConflictPending and related variables\n> > + * being set, to be handled further down.\n> > + */\n> > + for (int i = PROCSIG_RECOVERY_CONFLICT_FIRST;\n> > + i <= PROCSIG_RECOVERY_CONFLICT_LAST;\n> > + ++i)\n> > + {\n> > + if (RecoveryConflictInterruptPending[i])\n> > + {\n> > + RecoveryConflictInterruptPending[i] = false;\n> > + ProcessRecoveryConflictInterrupt(i);\n> > + }\n> > + }\n>\n> Hm. This seems like it shouldn't be in ProcessInterrupts(). 
How about checking\n> calling a wrapper doing all this if RecoveryConflictPending?\n\nI moved the loop into ProcessRecoveryConflictInterrupt() and added an\n\"s\" to the latter's name. It already had the right indentation level\nto contain a loop, once I realised that the test of\nproc_exit_inprogress must be redundant.\n\nBetter?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2B3MkS21yK4jL4cgZywdnnGKiBg0jatoV6kzaniBmcqbQ%40mail.gmail.com", "msg_date": "Tue, 10 May 2022 16:39:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nIt'd be cool to commit and backpatch this - I'd like to re-enable the conflict\ntests in the backbranches, and I don't think we want to do so with this issue\nin place.\n\n\nOn 2022-05-10 16:39:11 +1200, Thomas Munro wrote:\n> On Tue, Apr 12, 2022 at 10:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-04-12 10:33:28 +1200, Thomas Munro wrote:\n> > > Instead of bothering to create N different XXXPending variables for\n> > > the different conflict \"reasons\", I used an array. Other than that,\n> > > it's much like existing examples.\n> >\n> > It kind of bothers me that each pending conflict has its own external function\n> > call. It doesn't actually cost anything, because it's quite unlikely that\n> > there's more than one pending conflict. Besides aesthetically displeasing,\n> > it also leads to an unnecessarily large amount of code needed, because the\n> > calls to RecoveryConflictInterrupt() can't be merged...\n> \n> Ok, in this version there's two levels of flag:\n> RecoveryConflictPending, so we do nothing if that's not set, and then\n> the loop over RecoveryConflictPendingReasons is moved into\n> ProcessRecoveryConflictInterrupts(). Better?\n\nI think so.\n\nI don't particularly like the Handle/ProcessRecoveryConflictInterrupt() split,\nnaming-wise. 
I don't think Handle vs Process indicates something meaningful?\nMaybe s/Interrupt/Signal/ for the signal handler one could help?\n\nIt *might* look a tad cleaner to have the loop in a separate function from the\nexisting code. I.e. a +ProcessRecoveryConflictInterrupts() that calls\nProcessRecoveryConflictInterrupts().\n\n\n> > What might actually make more sense is to just have a bitmask or something?\n> \n> Yeah, in fact I'm exploring something like that in later bigger\n> redesign work[1] that gets rid of signal handlers. Here I'm looking\n> for something simple and potentially back-patchable and I don't want\n> to have to think about async signal safety of bit-level manipulations.\n\nMakes sense.\n\n\n> /*\n> @@ -3146,6 +3192,9 @@ ProcessInterrupts(void)\n> \t\treturn;\n> \tInterruptPending = false;\n> \n> +\tif (RecoveryConflictPending)\n> +\t\tProcessRecoveryConflictInterrupts();\n> +\n> \tif (ProcDiePending)\n> \t{\n> \t\tProcDiePending = false;\n\nShould the ProcessRecoveryConflictInterrupts() call really be before the\nProcDiePending check?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 22 May 2022 17:10:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sun, May 22, 2022 at 05:10:01PM -0700, Andres Freund wrote:\n> On 2022-05-10 16:39:11 +1200, Thomas Munro wrote:\n>> Ok, in this version there's two levels of flag:\n>> RecoveryConflictPending, so we do nothing if that's not set, and then\n>> the loop over RecoveryConflictPendingReasons is moved into\n>> ProcessRecoveryConflictInterrupts(). Better?\n> \n> I think so.\n> \n> I don't particularly like the Handle/ProcessRecoveryConflictInterrupt() split,\n> naming-wise. 
I don't think Handle vs Process indicates something meaningful?\n> Maybe s/Interrupt/Signal/ for the signal handler one could help?\n\nHandle is more consistent with the other types of interruptions in the\nSIGUSR1 handler, so the name proposed in the patch is not that\nconfusing to me. And so does Process, in spirit with\nProcessProcSignalBarrier() and ProcessLogMemoryContextInterrupt().\nWhile on it, is postgres.c the best home for\nHandleRecoveryConflictInterrupt()? That's a very generic file, for \nstarters. Not related to the actual bug, just asking.\n\n> It *might* look a tad cleaner to have the loop in a separate function from the\n> existing code. I.e. a +ProcessRecoveryConflictInterrupts() that calls\n> ProcessRecoveryConflictInterrupts().\n\nAgreed that it would be a bit cleaner to keep the internals in a\ndifferent routine.\n\n>> Yeah, in fact I'm exploring something like that in later bigger\n>> redesign work[1] that gets rid of signal handlers. Here I'm looking\n>> for something simple and potentially back-patchable and I don't want\n>> to have to think about async signal safety of bit-level manipulations.\n>\n> Makes sense.\n\n+1.\n\nAlso note that bufmgr.c mentions RecoveryConflictInterrupt() in the\ntop comment of HoldingBufferPinThatDelaysRecovery().\n\nShould the processing of PROCSIG_RECOVERY_CONFLICT_DATABASE mention\nthat FATAL is used because we are never going to retry the conflict as\nthe database has been dropped? Getting rid of\nRecoveryConflictRetryable makes the code easier to go through.\n--\nMichael", "msg_date": "Wed, 15 Jun 2022 14:51:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
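A minimal sketch of the Handle/Process split being discussed here: the handler-side function only sets a per-reason `sig_atomic_t` flag, and the processing-side function loops over the reasons from a safe point, the shape Thomas's patch takes. The names and enum values below are simplified stand-ins for illustration, not the actual patch or the real PROCSIG_* symbols:

```c
#include <assert.h>
#include <signal.h>

/* Hypothetical stand-ins for the PROCSIG_RECOVERY_CONFLICT_* reasons. */
enum
{
	CONFLICT_FIRST = 0,
	CONFLICT_BUFFERPIN = 0,
	CONFLICT_SNAPSHOT,
	CONFLICT_DATABASE,			/* non-retryable: the database was dropped */
	CONFLICT_LAST = CONFLICT_DATABASE,
	NUM_CONFLICT_REASONS
};

static volatile sig_atomic_t conflict_interrupt_pending[NUM_CONFLICT_REASONS];
static volatile sig_atomic_t conflict_pending;

/* "Handle": runs in the signal handler, so it may only set flags. */
static void
HandleConflictInterrupt(int reason)
{
	conflict_interrupt_pending[reason] = 1;
	conflict_pending = 1;
}

/*
 * "Process": runs later, from the interrupt-processing path, where it is
 * safe to do real work (take locks, inspect state, raise errors).
 * Returns the number of reasons it acted on.
 */
static int
ProcessConflictInterrupts(void)
{
	int			handled = 0;

	if (!conflict_pending)
		return 0;
	conflict_pending = 0;

	for (int i = CONFLICT_FIRST; i <= CONFLICT_LAST; i++)
	{
		if (conflict_interrupt_pending[i])
		{
			conflict_interrupt_pending[i] = 0;
			/* Real code would decide between ERROR (retryable) and
			 * FATAL (e.g. CONFLICT_DATABASE, never retryable) here. */
			handled++;
		}
	}
	return handled;
}
```

A signal arriving between the clear of `conflict_pending` and the loop simply re-sets both flags, so nothing is lost; the next pass picks it up.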
}, { "msg_contents": "On Wed, Jun 15, 2022 at 1:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Handle is more consistent with the other types of interruptions in the\n> SIGUSR1 handler, so the name proposed in the patch is not that\n> confusing to me. And so does Process, in spirit with\n> ProcessProcSignalBarrier() and ProcessLogMemoryContextInterrupt().\n> While on it, is postgres.c the best home for\n> HandleRecoveryConflictInterrupt()? That's a very generic file, for\n> starters. Not related to the actual bug, just asking.\n\nYeah, there's existing precedent for this kind of split in, for\nexample, HandleCatchupInterrupt() and ProcessCatchupInterrupt(). I\nthink the idea is that \"process\" is supposed to sound like the more\ninvolved part of the operation, whereas \"handle\" is supposed to sound\nlike the initial response to the signal.\n\nI'm not sure it's the clearest possible naming, but naming things is\nhard, and this patch is apparently not inventing a new way to do it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jun 2022 13:00:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Here's a new version, but there's something wrong that I haven't\nfigured out yet (see CI link below).\n\nHere's one thing I got a bit confused about along the way, but it\nseems the comment was just wrong:\n\n+ /*\n+ * If we can abort just the current subtransaction then we are OK\n+ * to throw an ERROR to resolve the conflict. Otherwise drop\n+ * through to the FATAL case.\n...\n+ if (!IsSubTransaction())\n...\n+ ereport(ERROR,\n\nSurely this was meant to say, \"If we're not in a subtransaction then\n...\", right? Changed.\n\nI thought of a couple more simplifications for the big switch\nstatement in ProcessRecoveryConflictInterrupt(). 
The special case for\nDoingCommandRead can be changed to fall through, instead of needing a\nseparate ereport(FATAL).\n\nI also folded the two ereport(FATAL) calls in the CONFLICT_DATABASE\ncase into one, since they differ only in errcode().\n\n+ (errcode(reason == PROCSIG_RECOVERY_CONFLICT_DATABASE ?\n+ ERRCODE_DATABASE_DROPPED :\n+ ERRCODE_T_R_SERIALIZATION_FAILURE),\n\nNow we're down to just one ereport(FATAL), one ereport(ERROR), and a\ncouple of ways to give up without erroring. I think this makes the\nlogic a lot easier to follow?\n\nI'm confused about proc->recoveryConflictPending: the startup process\nsets it, and sometimes the interrupt receiver sets it too, and it\ncauses errdetail() to be clobbered on abort (for any reason), even\nthough we bothered to set it carefully for the recovery conflict\nereport calls. Or something like that. I haven't changed anything\nabout that in this patch, though.\n\nProblem: I saw 031_recovery_conflict.pl time out while waiting for a\nbuffer pin conflict, but so far once only, on CI:\n\nhttps://cirrus-ci.com/task/5956804860444672\n\ntimed out waiting for match: (?^:User was holding shared buffer pin\nfor too long) at t/031_recovery_conflict.pl line 367.\n\nHrmph. Still trying to reproduce that, which may be a bug in this\npatch, a bug in the test or a pre-existing problem. Note that\nrecovery didn't say something like:\n\n2022-06-21 17:05:40.931 NZST [57674] LOG: recovery still waiting\nafter 11.197 ms: recovery conflict on buffer pin\n\n(That's what I'd expect to see in\nhttps://api.cirrus-ci.com/v1/artifact/task/5956804860444672/log/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log\nif the startup process had decided to send the signal).\n\n... 
so it seems like the problem in that run is upstream of the interrupt stuff.\n\nOther things changed in response to feedback (quoting from several\nrecent messages):\n\nOn Thu, Jun 16, 2022 at 5:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jun 15, 2022 at 1:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > Handle is more consistent with the other types of interruptions in the\n> > SIGUSR1 handler, so the name proposed in the patch in not that\n> > confusing to me. And so does Process, in spirit with\n> > ProcessProcSignalBarrier() and ProcessLogMemoryContextInterrupt().\n> > While on it, is postgres.c the best home for\n> > HandleRecoveryConflictInterrupt()? That's a very generic file, for\n> > starters. Not related to the actual bug, just asking.\n>\n> Yeah, there's existing precedent for this kind of split in, for\n> example, HandleCatchupInterrupt() and ProcessCatchupInterrupt(). I\n> think the idea is that \"process\" is supposed to sound like the more\n> involved part of the operation, whereas \"handle\" is supposed to sound\n> like the initial response to the signal.\n\nThanks both for looking. Yeah, I was trying to keep with the existing\nconvention here (though admittedly we're not 100% consistent on this,\nsomething to tidy up separately perhaps).\n\nOn Wed, Jun 15, 2022 at 5:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, May 22, 2022 at 05:10:01PM -0700, Andres Freund wrote:\n> > It *might* look a tad cleaner to have the loop in a separate function from the\n> > existing code. I.e. a +ProcessRecoveryConflictInterrupts() that calls\n> > ProcessRecoveryConflictInterrupts().\n>\n> Agreed that it would be a bit cleaner to keep the internals in a\n> different routine.\n\nAlright, I split it into two functions: one with an 's' in the name to\ndo the looping, and one without 's' to process an individual interrupt\nreason. 
Makes the patch harder to read because the indentation level\nchanges...\n\n> Also note that bufmgr.c mentions RecoveryConflictInterrupt() in the\n> top comment of HoldingBufferPinThatDelaysRecovery().\n\nFixed.\n\n> Should the processing of PROCSIG_RECOVERY_CONFLICT_DATABASE mention\n> that FATAL is used because we are never going to retry the conflict as\n> the database has been dropped?\n\nOK, note added.\n\n> Getting rid of\n> RecoveryConflictRetryable makes the code easier to go through.\n\nYeah, all the communication through global variables was really\nconfusing, and also subtly wrong (that global reason gets clobbered\nwith incorrect values), and that retryable variable was hard to\nfollow.\n\nOn Mon, May 23, 2022 at 12:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-05-10 16:39:11 +1200, Thomas Munro wrote:\n> > @@ -3146,6 +3192,9 @@ ProcessInterrupts(void)\n> > return;\n> > InterruptPending = false;\n> >\n> > + if (RecoveryConflictPending)\n> > + ProcessRecoveryConflictInterrupts();\n> > +\n> > if (ProcDiePending)\n> > {\n> > ProcDiePending = false;\n>\n> Should the ProcessRecoveryConflictInterrupts() call really be before the\n> ProcDiePending check?\n\nI don't think it's important which of (say) a statement timeout and a\nrecovery conflict that arrive around the same time takes priority, but\non reflection it was an ugly place to put it, and it seems tidier to\nmove it down the function a bit further, where other various special\ninterrupts are handled after the \"main\" and original die/cancel ones.", "msg_date": "Tue, 21 Jun 2022 17:22:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Tue, Jun 21, 2022 at 05:22:05PM +1200, Thomas Munro wrote:\n> Here's one thing I got a bit confused about along the way, but it\n> seems the comment was just wrong:\n> \n> + /*\n> + * If we can abort just the current\n> subtransaction then we are OK\n> + * to throw an ERROR to resolve the conflict.\n> Otherwise drop\n> + * through to the FATAL case.\n> ...\n> + if (!IsSubTransaction())\n> ...\n> + ereport(ERROR,\n> \n> Surely this was meant to say, \"If we're not in a subtransaction then\n> ...\", right? Changed.\n\nIndeed, the code does something else than what the comment says, aka\ngenerating an ERROR if the process is not in a subtransaction,\nignoring already aborted transactions (aborted subtrans go to FATAL)\nand throwing a FATAL in the other cases. So your change looks right.\n\n> I thought of a couple more simplifications for the big switch\n> statement in ProcessRecoveryConflictInterrupt(). The special case for\n> DoingCommandRead can be changed to fall through, instead of needing a\n> separate ereport(FATAL).\n\nThe extra business with QueryCancelHoldoffCount and DoingCommandRead\nis the only addition for the snapshot, lock and tablespace conflict\nhandling part. I don't see why a reason why that could be wrong on a\nclose lookup. Anyway, why don't you check QueryCancelPending on top\nof QueryCancelHoldoffCount?\n\n> Now we're down to just one ereport(FATAL), one ereport(ERROR), and a\n> couple of ways to give up without erroring. I think this makes the\n> logic a lot easier to follow?\n\nAgreed that it looks like a gain in clarity.\n--\nMichael", "msg_date": "Tue, 21 Jun 2022 16:43:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
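The behavior of the big switch, as summarized in this exchange (FATAL unconditionally for a dropped database, FATAL when only a subtransaction could be aborted, ignore an already-aborted transaction, plain ERROR otherwise), can be condensed into a small decision helper. This is a hypothetical simplification for illustration, not the real ProcessRecoveryConflictInterrupt():

```c
#include <assert.h>
#include <stdbool.h>

typedef enum
{
	CONFLICT_SNAPSHOT,
	CONFLICT_LOCK,
	CONFLICT_DATABASE
} ConflictReason;

typedef enum
{
	RESOLVE_IGNORE,				/* nothing left to cancel */
	RESOLVE_ERROR,				/* cancel the query, session survives */
	RESOLVE_FATAL				/* terminate the backend */
} ConflictAction;

static ConflictAction
decide_conflict_action(ConflictReason reason,
					   bool in_subtransaction,
					   bool transaction_aborted)
{
	/* Database dropped: never retryable, always terminate the backend. */
	if (reason == CONFLICT_DATABASE)
		return RESOLVE_FATAL;

	/* Aborting just a subtransaction wouldn't release what recovery is
	 * waiting for, so it can't get away with a plain ERROR. */
	if (in_subtransaction)
		return RESOLVE_FATAL;

	/* An already-aborted top-level transaction has nothing to cancel. */
	if (transaction_aborted)
		return RESOLVE_IGNORE;

	return RESOLVE_ERROR;
}
```

Laying it out this way makes it easier to see why the reworked patch needs only one ereport(FATAL) and one ereport(ERROR) site.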
}, { "msg_contents": "On Tue, Jun 21, 2022 at 7:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> The extra business with QueryCancelHoldoffCount and DoingCommandRead\n> is the only addition for the snapshot, lock and tablespace conflict\n> handling part. I don't see why a reason why that could be wrong on a\n> close lookup. Anyway, why don't you check QueryCancelPending on top\n> of QueryCancelHoldoffCount?\n\nThe idea of this patch is to make ProcessRecoveryConflictInterrupt()\nthrow its own ERROR, instead of setting QueryCancelPending (as an\nearlier version of the patch did). It still has to respect\nQueryCancelHoldoffCount, though, to avoid emitting an ERROR at bad\ntimes for the fe/be protocol.\n\n\n", "msg_date": "Tue, 21 Jun 2022 23:02:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Tue, Jun 21, 2022 at 11:02:57PM +1200, Thomas Munro wrote:\n> On Tue, Jun 21, 2022 at 7:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> The extra business with QueryCancelHoldoffCount and DoingCommandRead\n>> is the only addition for the snapshot, lock and tablespace conflict\n>> handling part. I don't see why a reason why that could be wrong on a\n>> close lookup. Anyway, why don't you check QueryCancelPending on top\n>> of QueryCancelHoldoffCount?\n> \n> The idea of this patch is to make ProcessRecoveryConflictInterrupt()\n> throw its own ERROR, instead of setting QueryCancelPending (as an\n> earlier version of the patch did). It still has to respect\n> QueryCancelHoldoffCount, though, to avoid emitting an ERROR at bad\n> times for the fe/be protocol.\n\nYeah, I was reading through v3 and my brain questioned the\ninconsistency, but I can see that v2 already did that and I have also\nlooked at it. 
Anyway, my concern here is that the code becomes more\ndependent on the ordering of ProcessRecoveryConflictInterrupt() and\nthe code path checking for QueryCancelPending in ProcessInterrupts().\nWith the patch, we should always have QueryCancelPending set to false,\nas long as there are no QueryCancelHoldoffCount. Perhaps an extra\nassertion for QueryCancelPending could be added at the beginning of \nProcessRecoveryConflictInterrupts(), in combination of the one already\npresent for InterruptHoldoffCount. I agree that's a minor point,\nthough.\n--\nMichael", "msg_date": "Wed, 22 Jun 2022 10:04:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Wed, Jun 22, 2022 at 1:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n> With the patch, we should always have QueryCancelPending set to false,\n> as long as there are no QueryCancelHoldoffCount. Perhaps an extra\n> assertion for QueryCancelPending could be added at the beginning of\n> ProcessRecoveryConflictInterrupts(), in combination of the one already\n> present for InterruptHoldoffCount. I agree that's a minor point,\n> though.\n\nBut QueryCancelPending can be set to true at any time by\nStatementCancelHandler(), if we receive SIGINT.\n\n\n", "msg_date": "Wed, 22 Jun 2022 14:09:08 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-06-21 17:22:05 +1200, Thomas Munro wrote:\n> Problem: I saw 031_recovery_conflict.pl time out while waiting for a\n> buffer pin conflict, but so far once only, on CI:\n> \n> https://cirrus-ci.com/task/5956804860444672\n> \n> timed out waiting for match: (?^:User was holding shared buffer pin\n> for too long) at t/031_recovery_conflict.pl line 367.\n> \n> Hrmph. 
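The QueryCancelHoldoffCount interaction under discussion, an ERROR must not be thrown in the middle of reading a protocol message, so the pending cancel is held off and the interrupt re-armed for the next safe point, and the fact that a SIGINT can set the flag at any instant, look roughly like this in miniature (hypothetical names, not the actual backend code):

```c
#include <assert.h>
#include <signal.h>
#include <stdbool.h>

static volatile sig_atomic_t interrupt_pending;
static volatile sig_atomic_t query_cancel_pending;
static int	cancel_holdoff_count;	/* >0 while mid-protocol-message */

/* Can run at any moment, like StatementCancelHandler() on SIGINT. */
static void
handle_cancel_signal(int signo)
{
	(void) signo;
	query_cancel_pending = 1;
	interrupt_pending = 1;
}

/* Returns true if a cancel error would be thrown now. */
static bool
process_interrupts(void)
{
	if (!interrupt_pending)
		return false;
	interrupt_pending = 0;

	if (query_cancel_pending)
	{
		if (cancel_holdoff_count != 0)
		{
			/* Not safe yet: leave the flag set and re-arm so we come
			 * back here as soon as the message has been read. */
			interrupt_pending = 1;
			return false;
		}
		query_cancel_pending = 0;
		return true;			/* real code would ereport(ERROR) here */
	}
	return false;
}
```

Because the handler can fire between any two statements of `process_interrupts()`, an assertion that `query_cancel_pending` is clear at entry would be racy, which is the objection raised in the reply.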
Still trying to reproduce that, which may be a bug in this\n> patch, a bug in the test or a pre-existing problem. Note that\n> recovery didn't say something like:\n> \n> 2022-06-21 17:05:40.931 NZST [57674] LOG: recovery still waiting\n> after 11.197 ms: recovery conflict on buffer pin\n> \n> (That's what I'd expect to see in\n> https://api.cirrus-ci.com/v1/artifact/task/5956804860444672/log/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log\n> if the startup process had decided to send the signal).\n> \n> ... so it seems like the problem in that run is upstream of the interrupt stuff.\n\nOdd. The only theory I have so far is that the manual vacuum on the primary\nsomehow decided to skip the page, and thus didn't trigger a conflict. Because\nclearly replay progressed past the records of the VACUUM. Perhaps we should\nuse VACUUM VERBOSE? In contrast to pg_regress tests that should be\nunproblematic?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Jun 2022 19:33:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-06-21 19:33:01 -0700, Andres Freund wrote:\n> On 2022-06-21 17:22:05 +1200, Thomas Munro wrote:\n> > Problem: I saw 031_recovery_conflict.pl time out while waiting for a\n> > buffer pin conflict, but so far once only, on CI:\n> >\n> > https://cirrus-ci.com/task/5956804860444672\n> >\n> > timed out waiting for match: (?^:User was holding shared buffer pin\n> > for too long) at t/031_recovery_conflict.pl line 367.\n> >\n> > Hrmph. Still trying to reproduce that, which may be a bug in this\n> > patch, a bug in the test or a pre-existing problem. 
Note that\n> > recovery didn't say something like:\n> >\n> > 2022-06-21 17:05:40.931 NZST [57674] LOG: recovery still waiting\n> > after 11.197 ms: recovery conflict on buffer pin\n> >\n> > (That's what I'd expect to see in\n> > https://api.cirrus-ci.com/v1/artifact/task/5956804860444672/log/src/test/recovery/tmp_check/log/031_recovery_conflict_standby.log\n> > if the startup process had decided to send the signal).\n> >\n> > ... so it seems like the problem in that run is upstream of the interrupt stuff.\n>\n> Odd. The only theory I have so far is that the manual vacuum on the primary\n> somehow decided to skip the page, and thus didn't trigger a conflict. Because\n> clearly replay progressed past the records of the VACUUM. Perhaps we should\n> use VACUUM VERBOSE? In contrast to pg_regress tests that should be\n> unproblematic?\n\nI saw a couple failures of 031 on CI for the meson patch - which obviously\ndoesn't change anything around this. However it adds a lot more distributions,\nand the added ones run in docker containers on a shared host, presumably\nadding a lot of noise. I saw this more frequently when I accidentally had the\ntest runs at the number of CPUs in the host, rather than the allotted CPUs in\nthe container.\n\nThat made me look more into these issues. I played with adding a pg_usleep()\nto RecoveryConflictInterrupt() to simulate slow signal delivery.\n\nFound a couple things:\n\n- the pg_usleep(5000) in ResolveRecoveryConflictWithVirtualXIDs() can\n completely swamp the target(s) on a busy system. This surely is exacerbated\n by the usleep I added to RecoveryConflictInterrupt() but a 5ms signalling pace\n does seem like a bad idea.\n\n- we process the same recovery conflict (not a single signal, but a single\n \"real conflict\") multiple times in the target of a conflict, presumably\n while handling the error. 
That includes handling the same interrupt once as\n an ERROR and once as FATAL.\n\n E.g.\n\n2022-07-01 12:19:14.428 PDT [2000572] LOG: recovery still waiting after 10.032 ms: recovery conflict on buffer pin\n2022-07-01 12:19:14.428 PDT [2000572] CONTEXT: WAL redo at 0/344E098 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 100; blkref #0: rel 1663/16385/1>\n2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl ERROR: canceling statement due to conflict with recovery at character 15\n2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl STATEMENT: SELECT * FROM test_recovery_conflict_table2;\n2022-07-01 12:19:54.778 PDT [2000572] LOG: recovery finished waiting after 40359.937 ms: recovery conflict on buffer pin\n2022-07-01 12:19:54.778 PDT [2000572] CONTEXT: WAL redo at 0/344E098 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 100; blkref #0: rel 1663/16385/1>\n2022-07-01 12:19:54.788 PDT [2000578] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n2022-07-01 12:19:54.788 PDT [2000578] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n2022-07-01 12:19:54.788 PDT [2000578] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2022-07-01 12:19:54.837 PDT [2001389] 031_recovery_conflict.pl LOG: statement: SELECT 1;\n\n note that the startup process considers the conflict resolved by the time\n the backend handles the interrupt.\n\n I also see cases where a FATAL is repeated:\n\n2022-07-01 12:43:18.190 PDT [2054721] LOG: recovery still waiting after 15.410 ms: recovery conflict on snapshot\n2022-07-01 12:43:18.190 PDT [2054721] DETAIL: Conflicting process: 2054753.\n2022-07-01 12:43:18.190 PDT [2054721] CONTEXT: WAL redo at 0/344AB90 for Heap2/PRUNE: latestRemovedXid 732 
nredirected 18 ndead 0; blkref #0: rel 1663/16385/>\n2054753: reporting recovery conflict 9\n2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.\n2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n...\n2054753: reporting recovery conflict 9\n2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.\n2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n\n the FATAL one seems like it might at least partially be due to\n RecoveryConflictPending not being reset in at least some of the FATAL\n recovery conflict paths.\n\n It seems pretty obvious that the proc_exit_inprogress check in\n RecoveryConflictInterrupt() is misplaced, and needs to be where the errors\n are thrown. 
But that won't help, because it turns out, we don't yet set that\n necessarily.\n\n Look at this stack from an assertion in ProcessInterrupts() ensuring that\n the same FATAL isn't raised twice:\n\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49\n#1 0x00007fd47897b546 in __GI_abort () at abort.c:79\n#2 0x00005594c150b27a in ExceptionalCondition (conditionName=0x5594c16fe746 \"!in_fatal\", errorType=0x5594c16fd8f6 \"FailedAssertion\",\n fileName=0x5594c16fdac0 \"/home/andres/src/postgresql/src/backend/tcop/postgres.c\", lineNumber=3259)\n at /home/andres/src/postgresql/src/backend/utils/error/assert.c:69\n#3 0x00005594c134f6d2 in ProcessInterrupts () at /home/andres/src/postgresql/src/backend/tcop/postgres.c:3259\n#4 0x00005594c150c671 in errfinish (filename=0x5594c16b8f2e \"pqcomm.c\", lineno=1393, funcname=0x5594c16b95e0 <__func__.8> \"internal_flush\")\n at /home/andres/src/postgresql/src/backend/utils/error/elog.c:683\n#5 0x00005594c115e059 in internal_flush () at /home/andres/src/postgresql/src/backend/libpq/pqcomm.c:1393\n#6 0x00005594c115df49 in socket_flush () at /home/andres/src/postgresql/src/backend/libpq/pqcomm.c:1340\n#7 0x00005594c15121af in send_message_to_frontend (edata=0x5594c18a5740 <errordata>) at /home/andres/src/postgresql/src/backend/utils/error/elog.c:3283\n#8 0x00005594c150f00e in EmitErrorReport () at /home/andres/src/postgresql/src/backend/utils/error/elog.c:1541\n#9 0x00005594c150c42e in errfinish (filename=0x5594c16fdaed \"postgres.c\", lineno=3266, funcname=0x5594c16ff5b0 <__func__.9> \"ProcessInterrupts\")\n at /home/andres/src/postgresql/src/backend/utils/error/elog.c:592\n#10 0x00005594c134f770 in ProcessInterrupts () at /home/andres/src/postgresql/src/backend/tcop/postgres.c:3266\n#11 0x00005594c134b995 in ProcessClientReadInterrupt (blocked=true) at /home/andres/src/postgresql/src/backend/tcop/postgres.c:497\n#12 0x00005594c1153417 in secure_read (port=0x5594c2e7d620, ptr=0x5594c189ba60 <PqRecvBuffer>, 
len=8192)\n\n reporting a FATAL error in process of reporting a FATAL error. Yeah.\n\n I assume this could lead to sending out the same message quite a few times.\n\n\n\nThis is quite the mess.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 13:14:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-07-01 13:14:23 -0700, Andres Freund wrote:\n> I saw a couple failures of 031 on CI for the meson patch - which obviously\n> doesn't change anything around this. However it adds a lot more distributions,\n> and the added ones run in docker containers on a shared host, presumably\n> adding a lot of noise. I saw this more frequently when I accidentally had the\n> test runs at the number of CPUs in the host, rather than the allotted CPUs in\n> the container.\n>\n> That made me look more into these issues. I played with adding a pg_usleep()\n> to RecoveryConflictInterrupt() to simulate slow signal delivery.\n>\n> Found a couple things:\n>\n> - the pg_usleep(5000) in ResolveRecoveryConflictWithVirtualXIDs() can\n> completely swamp the target(s) on a busy system. This surely is exascerbated\n> by the usleep I added RecoveryConflictInterrupt() but a 5ms signalling pace\n> does seem like a bad idea.\n\nThis one is also implicated in\nhttps://postgr.es/m/20220701154105.jjfutmngoedgiad3%40alvherre.pgsql\nand related issues.\n\nBesides being very short, it also seems like a bad idea to wait when we might\nnot need to? 
Seems we should only wait if we subsequently couldn't get the\nlock?\n\nMisleadingly WaitExceedsMaxStandbyDelay() also contains a usleep, which at\nleast I wouldn't expect given its name.\n\n\nA minimal fix would be to increase the wait time, similar how it is done with\nstandbyWait_us?\n\nMedium term it seems we ought to set the startup process's latch when handling\na conflict, and use a latch wait. But avoiding races probably isn't quite\ntrivial.\n\n\n> - we process the same recovery conflict (not a single signal, but a single\n> \"real conflict\") multiple times in the target of a conflict, presumably\n> while handling the error. That includes handling the same interrupt once as\n> an ERROR and once as FATAL.\n>\n> E.g.\n>\n> 2022-07-01 12:19:14.428 PDT [2000572] LOG: recovery still waiting after 10.032 ms: recovery conflict on buffer pin\n> 2022-07-01 12:19:14.428 PDT [2000572] CONTEXT: WAL redo at 0/344E098 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 100; blkref #0: rel 1663/16385/1>\n> 2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl ERROR: canceling statement due to conflict with recovery at character 15\n> 2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n> 2022-07-01 12:19:54.597 PDT [2000578] 031_recovery_conflict.pl STATEMENT: SELECT * FROM test_recovery_conflict_table2;\n> 2022-07-01 12:19:54.778 PDT [2000572] LOG: recovery finished waiting after 40359.937 ms: recovery conflict on buffer pin\n> 2022-07-01 12:19:54.778 PDT [2000572] CONTEXT: WAL redo at 0/344E098 for Heap2/PRUNE: latestRemovedXid 0 nredirected 0 ndead 100; blkref #0: rel 1663/16385/1>\n> 2022-07-01 12:19:54.788 PDT [2000578] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n> 2022-07-01 12:19:54.788 PDT [2000578] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n> 2022-07-01 12:19:54.788 PDT [2000578] 
031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> 2022-07-01 12:19:54.837 PDT [2001389] 031_recovery_conflict.pl LOG: statement: SELECT 1;\n>\n> note that the startup process considers the conflict resolved by the time\n> the backend handles the interrupt.\n\nI guess the reason we first get an ERROR and then a FATAL is that the second\niteration hits the if (RecoveryConflictPending && DoingCommandRead) bit,\nbecause we end up there after handling the first error? And that's a FATAL.\n\nI suspect that Thomas' fix will address at least part of this, as the check\nwhether we're still waiting for a lock will be made just before the error is\nthrown.\n\n\n> I also see cases where a FATAL is repeated:\n>\n> 2022-07-01 12:43:18.190 PDT [2054721] LOG: recovery still waiting after 15.410 ms: recovery conflict on snapshot\n> 2022-07-01 12:43:18.190 PDT [2054721] DETAIL: Conflicting process: 2054753.\n> 2022-07-01 12:43:18.190 PDT [2054721] CONTEXT: WAL redo at 0/344AB90 for Heap2/PRUNE: latestRemovedXid 732 nredirected 18 ndead 0; blkref #0: rel 1663/16385/>\n> 2054753: reporting recovery conflict 9\n> 2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n> 2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.\n> 2022-07-01 12:43:18.482 PDT [2054753] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> ...\n> 2054753: reporting recovery conflict 9\n> 2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n> 2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl DETAIL: User query might have needed to see row versions that must be removed.\n> 2022-07-01 12:43:19.068 PDT [2054753] 031_recovery_conflict.pl HINT: In 
a moment you should be able to reconnect to the database and repeat your command.\n>\n> the FATAL one seems like it might at least partially be due to\n> RecoveryConflictPending not being reset in at least some of the FATAL\n> recovery conflict paths.\n>\n> It seems pretty obvious that the proc_exit_inprogress check in\n> RecoveryConflictInterrupt() is misplaced, and needs to be where the errors\n> are thrown. But that won't help, because it turns out, we don't yet set that\n> necessarily.\n>\n> Look at this stack from an assertion in ProcessInterrupts() ensuring that\n> the same FATAL isn't raised twice:\n>\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49\n> #1 0x00007fd47897b546 in __GI_abort () at abort.c:79\n> #2 0x00005594c150b27a in ExceptionalCondition (conditionName=0x5594c16fe746 \"!in_fatal\", errorType=0x5594c16fd8f6 \"FailedAssertion\",\n> fileName=0x5594c16fdac0 \"/home/andres/src/postgresql/src/backend/tcop/postgres.c\", lineNumber=3259)\n> at /home/andres/src/postgresql/src/backend/utils/error/assert.c:69\n> #3 0x00005594c134f6d2 in ProcessInterrupts () at /home/andres/src/postgresql/src/backend/tcop/postgres.c:3259\n> #4 0x00005594c150c671 in errfinish (filename=0x5594c16b8f2e \"pqcomm.c\", lineno=1393, funcname=0x5594c16b95e0 <__func__.8> \"internal_flush\")\n> at /home/andres/src/postgresql/src/backend/utils/error/elog.c:683\n> #5 0x00005594c115e059 in internal_flush () at /home/andres/src/postgresql/src/backend/libpq/pqcomm.c:1393\n> #6 0x00005594c115df49 in socket_flush () at /home/andres/src/postgresql/src/backend/libpq/pqcomm.c:1340\n> #7 0x00005594c15121af in send_message_to_frontend (edata=0x5594c18a5740 <errordata>) at /home/andres/src/postgresql/src/backend/utils/error/elog.c:3283\n> #8 0x00005594c150f00e in EmitErrorReport () at /home/andres/src/postgresql/src/backend/utils/error/elog.c:1541\n> #9 0x00005594c150c42e in errfinish (filename=0x5594c16fdaed \"postgres.c\", lineno=3266, funcname=0x5594c16ff5b0 
<__func__.9> \"ProcessInterrupts\")\n> at /home/andres/src/postgresql/src/backend/utils/error/elog.c:592\n> #10 0x00005594c134f770 in ProcessInterrupts () at /home/andres/src/postgresql/src/backend/tcop/postgres.c:3266\n> #11 0x00005594c134b995 in ProcessClientReadInterrupt (blocked=true) at /home/andres/src/postgresql/src/backend/tcop/postgres.c:497\n> #12 0x00005594c1153417 in secure_read (port=0x5594c2e7d620, ptr=0x5594c189ba60 <PqRecvBuffer>, len=8192)\n>\n> reporting a FATAL error in process of reporting a FATAL error. Yeha.\n>\n> I assume this could lead to sending out the same message quite a few\n> times.\n\nThis seems like it needs to be fixed in elog.c. ISTM that at the very least we\nought to HOLD_INTERRUPTS() before the EmitErrorReport() for FATAL.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Jul 2022 16:18:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sat, Jul 2, 2022 at 11:18 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-01 13:14:23 -0700, Andres Freund wrote:\n> > - the pg_usleep(5000) in ResolveRecoveryConflictWithVirtualXIDs() can\n> > completely swamp the target(s) on a busy system. This surely is exascerbated\n> > by the usleep I added RecoveryConflictInterrupt() but a 5ms signalling pace\n> > does seem like a bad idea.\n>\n> This one is also implicated in\n> https://postgr.es/m/20220701154105.jjfutmngoedgiad3%40alvherre.pgsql\n> and related issues.\n>\n> Besides being very short, it also seems like a bad idea to wait when we might\n> not need to? 
Seems we should only wait if we subsequently couldn't get the\n> lock?\n>\n> Misleadingly WaitExceedsMaxStandbyDelay() also contains a usleep, which at\n> least I wouldn't expect given its name.\n>\n>\n> A minimal fix would be to increase the wait time, similar how it is done with\n> standbyWait_us?\n>\n> Medium term it seems we ought to set the startup process's latch when handling\n> a conflict, and use a latch wait. But avoiding races probably isn't quite\n> trivial.\n\nYeah, I had the same thought; it's easy to criticise the current\ncollateral damage maximising design, but a whole project to come up\nwith a good race-free precise design. We should do that, though.\n\n> I guess the reason we first get an ERROR and then a FATAL is that the second\n> iteration hits the if (RecoveryConflictPending && DoingCommandRead) bit,\n> because we end up there after handling the first error? And that's a FATAL.\n>\n> I suspect that Thomas' fix will address at least part of this, as the check\n> whether we're still waiting for a lock will be made just before the error is\n> thrown.\n\nThat seems right.\n\n> > reporting a FATAL error in process of reporting a FATAL error. Yeha.\n> >\n> > I assume this could lead to sending out the same message quite a few\n> > times.\n>\n> This seems like it needs to be fixed in elog.c. ISTM that at the very least we\n> ought to HOLD_INTERRUPTS() before the EmitErrorReport() for FATAL.\n\nThat seems to make sense.\n\nAbout my patch... even though it solves a couple of problems now\nidentified, I found an architectural problem that I don't have a\nsolution for yet, which stopped me in my tracks a few weeks back. I\nneed to find a way forward that is back-patchable.\n\nRecap: The basic concept here is to kick all \"real work\" out of\nsignal handlers, because that work is unsafe in that context. 
So\ninstead of deciding whether we need to cancel the current query at the\nnext CFI by setting QueryCancelPending, we defer the whole decision to\nthe next CFI. Sometimes the decision is that we don't need to do\nanything, and the CFI returns and execution continues normally.\n\nThe problem is that there are a couple of parts of our tree that don't\nuse a standard CFI, but are interrupted by looking for\nQueryCancelPending directly. syncrep.c is one, but I don't believe\nyou could be blocked there while recovery is in progress, and\nregcomp.c is another. (There was a third case relating to that\nposix_fallocate() problem report you mentioned above, but 4518c798\nremoved that). The regular expression machinery is capable of\nconsuming a lot of CPU, and does CANCEL_REQUESTED(nfa->v->re)\nfrequently to avoid getting stuck. With the patch as it stands, that\nwould never be true.\n\n\n", "msg_date": "Wed, 27 Jul 2022 11:15:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... The regular expression machinery is capable of\n> consuming a lot of CPU, and does CANCEL_REQUESTED(nfa->v->re)\n> frequently to avoid getting stuck. With the patch as it stands, that\n> would never be true.\n\nSurely that can't be too hard to fix. We might have to refactor\nthe code around QueryCancelPending a little bit so that callers\ncan ask \"do we need a query cancel now?\" without actually triggering\na longjmp ... but why would that be problematic?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jul 2022 19:22:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Tue, Jul 26, 2022 at 07:22:52PM -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > ... The regular expression machinery is capable of\n> > consuming a lot of CPU, and does CANCEL_REQUESTED(nfa->v->re)\n> > frequently to avoid getting stuck. With the patch as it stands, that\n> > would never be true.\n> \n> Surely that can't be too hard to fix. We might have to refactor\n> the code around QueryCancelPending a little bit so that callers\n> can ask \"do we need a query cancel now?\" without actually triggering\n> a longjmp ... but why would that be problematic?\n\nIt could work. The problems are like those of making code safe to run in a\nsignal handler. You can use e.g. snprintf in rcancelrequested(), but you\nstill can't use palloc() or ereport(). I see at least these strategies:\n\n1. Accept that recovery conflict checks run after a regex call completes.\n2. Have rcancelrequested() return true unconditionally if we need a conflict\n check. If there's no actual conflict, restart the regex.\n3. Have rcancelrequested() run the conflict check, including elog-using\n PostgreSQL code. On longjmp(), accept the leak of regex mallocs.\n4. Have rcancelrequested() run the conflict check, including elog-using\n PostgreSQL code. On longjmp(), escalate to FATAL.\n5. Write the conflict check code to dutifully avoid longjmp().\n6. Convert src/backend/regex to use palloc, so longjmp() is fine.\n\nI would tend to pick (3). (6) could come later and remove the drawback of\n(3). Does one of those unblock the patch, or not?\n\n===\n\nI found this thread because $SUBJECT is causing more buildfarm failures\nlately. 
Here are just the ones with symptom \"timed out waiting for match:\n(?^:User was holding a relation lock for too long)\":\n\n sysname │ snapshot │ branch │ bfurl \n───────────┼─────────────────────┼───────────────┼────────────────────────────────────────────────────────────────────────────────────────────────\n wrasse │ 2022-09-16 09:19:06 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-09-16%2009%3A19%3A06\n francolin │ 2022-09-24 02:02:23 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2022-09-24%2002%3A02%3A23\n wrasse │ 2022-10-19 08:49:16 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-10-19%2008%3A49%3A16\n wrasse │ 2022-11-16 16:59:23 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-11-16%2016%3A59%3A23\n wrasse │ 2022-11-17 09:58:48 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-11-17%2009%3A58%3A48\n wrasse │ 2022-11-21 22:17:20 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-11-21%2022%3A17%3A20\n wrasse │ 2022-11-22 21:52:26 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-11-22%2021%3A52%3A26\n wrasse │ 2022-11-25 09:16:44 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-11-25%2009%3A16%3A44\n wrasse │ 2022-12-04 23:33:26 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-04%2023%3A33%3A26\n wrasse │ 2022-12-07 11:48:54 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-07%2011%3A48%3A54\n wrasse │ 2022-12-07 20:58:49 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-07%2020%3A58%3A49\n wrasse │ 2022-12-09 12:19:40 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-09%2012%3A19%3A40\n wrasse │ 2022-12-09 15:29:45 │ HEAD │ 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-09%2015%3A29%3A45\n wrasse │ 2022-12-15 09:29:52 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-15%2009%3A29%3A52\n wrasse │ 2022-12-23 07:37:06 │ REL_15_STABLE │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-23%2007%3A37%3A06\n wrasse │ 2022-12-23 10:32:05 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-23%2010%3A32%3A05\n wrasse │ 2022-12-23 17:47:17 │ HEAD │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-12-23%2017%3A47%3A17\n(17 rows)\n\nI can reproduce that symptom reliably, on GNU/Linux, with the attached patch\nadding sleeps. The key log bit:\n\n2022-09-16 11:50:37.338 CEST [15022:4] 031_recovery_conflict.pl LOG: statement: BEGIN;\n2022-09-16 11:50:37.339 CEST [15022:5] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;\n2022-09-16 11:50:37.341 CEST [15022:6] 031_recovery_conflict.pl LOG: statement: SELECT 1;\n2022-09-16 11:50:38.076 CEST [14880:17] LOG: recovery still waiting after 11.482 ms: recovery conflict on lock\n2022-09-16 11:50:38.076 CEST [14880:18] DETAIL: Conflicting process: 15022.\n2022-09-16 11:50:38.076 CEST [14880:19] CONTEXT: WAL redo at 0/34243F0 for Standby/LOCK: xid 733 db 16385 rel 16386 \n2022-09-16 11:50:38.196 CEST [15022:7] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n2022-09-16 11:50:38.196 CEST [15022:8] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n2022-09-16 11:50:38.196 CEST [15022:9] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2022-09-16 11:50:38.197 CEST [15022:10] 031_recovery_conflict.pl LOG: disconnection: session time: 0:00:01.041 user=nm database=test_db host=[local]\n2022-09-16 11:50:38.198 CEST [14880:20] LOG: recovery 
finished waiting after 132.886 ms: recovery conflict on lock\n\nThe second DETAIL should be \"User was holding a relation lock for too long.\"\nThe backend in question is idle in transaction. RecoveryConflictInterrupt()\nfor PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK won't see IsWaitingForLock(),\nso it will find no conflict. However, RecoveryConflictReason remains\nclobbered, hence the wrong DETAIL message. Incidentally, the affected test\ncontains comment \"# DROP TABLE containing block which standby has in a pinned\nbuffer\". The standby holds no pin at that moment; the LOCK TABLE pins system\ncatalog pages, but it drops every pin it acquires.", "msg_date": "Thu, 29 Dec 2022 00:40:52 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Thu, Dec 29, 2022 at 9:40 PM Noah Misch <noah@leadboat.com> wrote:\n> On Tue, Jul 26, 2022 at 07:22:52PM -0400, Tom Lane wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > ... The regular expression machinery is capable of\n> > > consuming a lot of CPU, and does CANCEL_REQUESTED(nfa->v->re)\n> > > frequently to avoid getting stuck. With the patch as it stands, that\n> > > would never be true.\n> >\n> > Surely that can't be too hard to fix. We might have to refactor\n> > the code around QueryCancelPending a little bit so that callers\n> > can ask \"do we need a query cancel now?\" without actually triggering\n> > a longjmp ... but why would that be problematic?\n>\n> It could work. The problems are like those of making code safe to run in a\n> signal handler. You can use e.g. snprintf in rcancelrequested(), but you\n> still can't use palloc() or ereport(). I see at least these strategies:\n>\n> 1. Accept that recovery conflict checks run after a regex call completes.\n> 2. Have rcancelrequested() return true unconditionally if we need a conflict\n> check. 
If there's no actual conflict, restart the regex.\n> 3. Have rcancelrequested() run the conflict check, including elog-using\n> PostgreSQL code. On longjmp(), accept the leak of regex mallocs.\n> 4. Have rcancelrequested() run the conflict check, including elog-using\n> PostgreSQL code. On longjmp(), escalate to FATAL.\n> 5. Write the conflict check code to dutifully avoid longjmp().\n> 6. Convert src/backend/regex to use palloc, so longjmp() is fine.\n\nThanks! I appreciate the help getting unstuck here. I'd thought\nabout some of these but not all. I also considered a couple more:\n\n7. Do a CFI() in a try/catch if INTERRUPTS_PENDING_CONDITION() is\ntrue, and copy the error somewhere to be re-thrown later after the\nregexp code exits with REG_CANCEL.\n8. Do a CFI() in a try/catch if INTERRUPTS_PENDING_CONDITION() is\ntrue, and call a new regexp function that will free everything before\nre-throwing.\n\nAfter Tom's response I spent some time trying to figure out how to\nmake a SOFT_CHECK_FOR_INTERRUPTS(), which would return a value to\nindicate that it would like to throw. I think it would need to re-arm\nvarious flags and introduce a programming rule for all interrupt\nprocessing routines that if they fired once under a soft check they\nmust fire again later under a non-soft check. That all seems a bit\ncomplicated, and a general mechanism like that seemed like overkill\nfor a single user, which led me to idea #7.\n\nIdea #8 is a realisation that twisting oneself into a pretzel to avoid\nhaving to change the regexp code or its REG_CANCEL control flow may be\na bit silly. If the only thing it really needs to do is free some\nmemory, maybe the regexp module should provide a function that frees\neverything that is safe to call from our rcancelrequested callback, so\nwe can do so before we longjmp back to Kansas. Then the REG_CANCEL\ncode paths would be effectively unreachable in PostgreSQL. 
I don't\nknow if it's better or worse than your idea #6, \"use palloc instead,\nit already has garbage collection, duh\", but it's a different take on\nthe same realisation that this is just about free().\n\nI guess idea #6 must be pretty easy to try: just point that MALLOC()\nmacro to palloc(), and do a plain old CFI() in rcancelrequested().\nWhy do you suggest #3 as an interim measure? Do we fear that palloc()\nmight hurt regexp performance?\n\n> I can reproduce that symptom reliably, on GNU/Linux, with the attached patch\n> adding sleeps. The key log bit:\n>\n> 2022-09-16 11:50:37.338 CEST [15022:4] 031_recovery_conflict.pl LOG: statement: BEGIN;\n> 2022-09-16 11:50:37.339 CEST [15022:5] 031_recovery_conflict.pl LOG: statement: LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;\n> 2022-09-16 11:50:37.341 CEST [15022:6] 031_recovery_conflict.pl LOG: statement: SELECT 1;\n> 2022-09-16 11:50:38.076 CEST [14880:17] LOG: recovery still waiting after 11.482 ms: recovery conflict on lock\n> 2022-09-16 11:50:38.076 CEST [14880:18] DETAIL: Conflicting process: 15022.\n> 2022-09-16 11:50:38.076 CEST [14880:19] CONTEXT: WAL redo at 0/34243F0 for Standby/LOCK: xid 733 db 16385 rel 16386\n> 2022-09-16 11:50:38.196 CEST [15022:7] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n> 2022-09-16 11:50:38.196 CEST [15022:8] 031_recovery_conflict.pl DETAIL: User transaction caused buffer deadlock with recovery.\n> 2022-09-16 11:50:38.196 CEST [15022:9] 031_recovery_conflict.pl HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> 2022-09-16 11:50:38.197 CEST [15022:10] 031_recovery_conflict.pl LOG: disconnection: session time: 0:00:01.041 user=nm database=test_db host=[local]\n> 2022-09-16 11:50:38.198 CEST [14880:20] LOG: recovery finished waiting after 132.886 ms: recovery conflict on lock\n>\n> The second DETAIL should be \"User was holding a relation lock for too long.\"\n> The backend in 
question is idle in transaction. RecoveryConflictInterrupt()\n> for PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK won't see IsWaitingForLock(),\n> so it will find no conflict. However, RecoveryConflictReason remains\n> clobbered, hence the wrong DETAIL message.\n\nAha. I'd speculated that RecoveryConflictReason must be capable of\nreporting bogus errors like that up-thread.\n\n> Incidentally, the affected test\n> contains comment \"# DROP TABLE containing block which standby has in a pinned\n> buffer\". The standby holds no pin at that moment; the LOCK TABLE pins system\n> catalog pages, but it drops every pin it acquires.\n\nOh, I guess the comment is just wrong? There are earlier sections\nconcerned with buffer pins, but the section \"RECOVERY CONFLICT 3\" is\nabout locks.\n\n\n", "msg_date": "Sat, 31 Dec 2022 10:06:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sat, Dec 31, 2022 at 10:06:53AM +1300, Thomas Munro wrote:\n> On Thu, Dec 29, 2022 at 9:40 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Tue, Jul 26, 2022 at 07:22:52PM -0400, Tom Lane wrote:\n> > > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > > ... The regular expression machinery is capable of\n> > > > consuming a lot of CPU, and does CANCEL_REQUESTED(nfa->v->re)\n> > > > frequently to avoid getting stuck. With the patch as it stands, that\n> > > > would never be true.\n> > >\n> > > Surely that can't be too hard to fix. We might have to refactor\n> > > the code around QueryCancelPending a little bit so that callers\n> > > can ask \"do we need a query cancel now?\" without actually triggering\n> > > a longjmp ... but why would that be problematic?\n> >\n> > It could work. The problems are like those of making code safe to run in a\n> > signal handler. You can use e.g. 
snprintf in rcancelrequested(), but you\n> > still can't use palloc() or ereport(). I see at least these strategies:\n> >\n> > 1. Accept that recovery conflict checks run after a regex call completes.\n> > 2. Have rcancelrequested() return true unconditionally if we need a conflict\n> > check. If there's no actual conflict, restart the regex.\n> > 3. Have rcancelrequested() run the conflict check, including elog-using\n> > PostgreSQL code. On longjmp(), accept the leak of regex mallocs.\n> > 4. Have rcancelrequested() run the conflict check, including elog-using\n> > PostgreSQL code. On longjmp(), escalate to FATAL.\n> > 5. Write the conflict check code to dutifully avoid longjmp().\n> > 6. Convert src/backend/regex to use palloc, so longjmp() is fine.\n> \n> Thanks! I appreciate the help getting unstuck here. I'd thought\n> about some of these but not all. I also considered a couple more:\n> \n> 7. Do a CFI() in a try/catch if INTERRUPTS_PENDING_CONDITION() is\n> true, and copy the error somewhere to be re-thrown later after the\n> regexp code exits with REG_CANCEL.\n> 8. Do a CFI() in a try/catch if INTERRUPTS_PENDING_CONDITION() is\n> true, and call a new regexp function that will free everything before\n> re-throwing.\n> \n> After Tom's response I spent some time trying to figure out how to\n> make a SOFT_CHECK_FOR_INTERRUPTS(), which would return a value to\n> indicate that it would like to throw. I think it would need to re-arm\n> various flags and introduce a programming rule for all interrupt\n> processing routines that if they fired once under a soft check they\n> must fire again later under a non-soft check. That all seems a bit\n> complicated, and a general mechanism like that seemed like overkill\n> for a single user, which led me to idea #7.\n> \n> Idea #8 is a realisation that twisting oneself into a pretzel to avoid\n> having to change the regexp code or its REG_CANCEL control flow may be\n> a bit silly. 
If the only thing it really needs to do is free some\n> memory, maybe the regexp module should provide a function that frees\n> everything that is safe to call from our rcancelrequested callback, so\n> we can do so before we longjmp back to Kansas. Then the REG_CANCEL\n> code paths would be effectively unreachable in PostgreSQL. I don't\n> know if it's better or worse than your idea #6, \"use palloc instead,\n> it already has garbage collection, duh\", but it's a different take on\n> the same realisation that this is just about free().\n\nPG_TRY() isn't free, so it's nice that (6) doesn't add one. If (6) fails in\nsome not-yet-revealed way, (8) could get more relevant.\n\n> I guess idea #6 must be pretty easy to try: just point that MALLOC()\n> macro to palloc(), and do a plain old CFI() in rcancelrequested().\n> Why do you suggest #3 as an interim measure?\n\nNo strong reason. I think I suggested it because it's a strict subset of (6),\nbut I didn't think through in detail. (I've never modified src/backend/regex\nand have barely read its code, for whatever that's worth.)\n\n> Do we fear that palloc() might hurt regexp performance?\n\nNah. I don't recall any place in PostgreSQL where performance is an argument\nfor raw malloc() calls.\n\n> > Incidentally, the affected test\n> > contains comment \"# DROP TABLE containing block which standby has in a pinned\n> > buffer\". The standby holds no pin at that moment; the LOCK TABLE pins system\n> > catalog pages, but it drops every pin it acquires.\n> \n> Oh, I guess the comment is just wrong? There are earlier sections\n> concerned with buffer pins, but the section \"RECOVERY CONFLICT 3\" is\n> about locks.\n\nYes.\n\n\n", "msg_date": "Fri, 30 Dec 2022 21:36:02 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Sat, Dec 31, 2022 at 6:36 PM Noah Misch <noah@leadboat.com> wrote:\n> On Sat, Dec 31, 2022 at 10:06:53AM +1300, Thomas Munro wrote:\n> > Idea #8 is a realisation that twisting oneself into a pretzel to avoid\n> > having to change the regexp code or its REG_CANCEL control flow may be\n> > a bit silly. If the only thing it really needs to do is free some\n> > memory, maybe the regexp module should provide a function that frees\n> > everything that is safe to call from our rcancelrequested callback, so\n> > we can do so before we longjmp back to Kansas. Then the REG_CANCEL\n> > code paths would be effectively unreachable in PostgreSQL. I don't\n> > know if it's better or worse than your idea #6, \"use palloc instead,\n> > it already has garbage collection, duh\", but it's a different take on\n> > the same realisation that this is just about free().\n>\n> PG_TRY() isn't free, so it's nice that (6) doesn't add one. If (6) fails in\n> some not-yet-revealed way, (8) could get more relevant.\n>\n> > I guess idea #6 must be pretty easy to try: just point that MALLOC()\n> > macro to palloc(), and do a plain old CFI() in rcancelrequested().\n\nIt's not quite so easy: in RE_compile_and_cache we construct objects\nwith arbitrary cache-managed lifetime, which suggests we need a cache\nmemory context, but we could also fail mid construction, which\nsuggests we'd need a dedicated per-regex object memory context that is\nmade permanent with the MemoryContextSetParent() trick (as we see\nelsewhere for cached things that are constructed by code that might\nthrow), or something like the try/catch thing from idea #8.\nThinking...\n\n\n", "msg_date": "Mon, 2 Jan 2023 08:38:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Mon, Jan 2, 2023 at 8:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Dec 31, 2022 at 6:36 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Sat, Dec 31, 2022 at 10:06:53AM +1300, Thomas Munro wrote:\n> > > Idea #8 is a realisation that twisting oneself into a pretzel to avoid\n> > > having to change the regexp code or its REG_CANCEL control flow may be\n> > > a bit silly. If the only thing it really needs to do is free some\n> > > memory, maybe the regexp module should provide a function that frees\n> > > everything that is safe to call from our rcancelrequested callback, so\n> > > we can do so before we longjmp back to Kansas. Then the REG_CANCEL\n> > > code paths would be effectively unreachable in PostgreSQL. I don't\n> > > know if it's better or worse than your idea #6, \"use palloc instead,\n> > > it already has garbage collection, duh\", but it's a different take on\n> > > the same realisation that this is just about free().\n> >\n> > PG_TRY() isn't free, so it's nice that (6) doesn't add one. If (6) fails in\n> > some not-yet-revealed way, (8) could get more relevant.\n> >\n> > > I guess idea #6 must be pretty easy to try: just point that MALLOC()\n> > > macro to palloc(), and do a plain old CFI() in rcancelrequested().\n>\n> It's not quite so easy: in RE_compile_and_cache we construct objects\n> with arbitrary cache-managed lifetime, which suggests we need a cache\n> memory context, but we could also fail mid construction, which\n> suggests we'd need a dedicated per-regex object memory context that is\n> made permanent with the MemoryContextSetParent() trick (as we see\n> elsewhere for cached things that are constructed by code that might\n> throw), ...\n\nHere's an experiment-grade attempt at idea #6 done that way, for\ndiscussion. 
You can see how much memory is wasted by each regex_t,\nwhich I guess is probably on the order of a couple of hundred KB if\nyou use all 32 regex cache slots using ALLOCSET_SMALL_SIZES as I did\nhere:\n\npostgres=# select 'x' ~ 'hello world .*';\n-[ RECORD 1 ]\n?column? | f\n\npostgres=# select * from pg_backend_memory_contexts where name =\n'RegexpMemoryContext';\n-[ RECORD 1 ]-+-------------------------\nname | RegexpMemoryContext\nident | hello world .*\nparent | RegexpCacheMemoryContext\nlevel | 2\ntotal_bytes | 13376\ntotal_nblocks | 5\nfree_bytes | 5144\nfree_chunks | 8\nused_bytes | 8232\n\nThere's some more memory allocated in regc_pg_locale.c with raw\nmalloc() that could probably benefit from a pallocisation just to be\nable to measure it, but I didn't touch that here.\n\nThe recovery conflict patch itself is unchanged, except that I removed\nthe claim in the commit message that this would be back-patched. It's\npretty clear that this would need to spend a decent amount of time on\nmaster only.", "msg_date": "Wed, 4 Jan 2023 16:46:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2022-12-29 00:40:52 -0800, Noah Misch wrote:\n> Incidentally, the affected test contains comment \"# DROP TABLE containing\n> block which standby has in a pinned buffer\". The standby holds no pin at\n> that moment; the LOCK TABLE pins system catalog pages, but it drops every\n> pin it acquires.\n\nI guess that comment survived from an earlier version of that test (or another\ntest where it was copied from).\n\nI'm inclined to just delete it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 14:16:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "Hi,\n\nOn 2023-01-04 16:46:05 +1300, Thomas Munro wrote:\n> postgres=# select 'x' ~ 'hello world .*';\n> -[ RECORD 1 ]\n> ?column? | f\n> \n> postgres=# select * from pg_backend_memory_contexts where name =\n> 'RegexpMemoryContext';\n> -[ RECORD 1 ]-+-------------------------\n> name | RegexpMemoryContext\n> ident | hello world .*\n> parent | RegexpCacheMemoryContext\n> level | 2\n> total_bytes | 13376\n> total_nblocks | 5\n\nHm, if a trivial re uses 13kB, using ALLOCSET_SMALL_SIZES might actually\nincrease memory usage by increasing the number of blocks.\n\n\n> free_bytes | 5144\n> free_chunks | 8\n> used_bytes | 8232\n\nHm. So we actually have a bunch of temporary allocations in here. I assume\nthat's all the stuff from the \"non-compact\" representation that\nsrc/backend/regex/README talks about?\n\nI doesn't immedialy look trivial to use a separate memory context for the\n\"final\" representation and scratch memory though.\n\n\n> There's some more memory allocated in regc_pg_locale.c with raw\n> malloc() that could probably benefit from a pallocisation just to be\n> able to measure it, but I didn't touch that here.\n\nIt might also effectively reduce the overhead of using palloc, by filling the\ncontext up further.\n\n\n\n> diff --git a/src/backend/regex/regcomp.c b/src/backend/regex/regcomp.c\n> index bb8c240598..c0f8e77b49 100644\n> --- a/src/backend/regex/regcomp.c\n> +++ b/src/backend/regex/regcomp.c\n> @@ -2471,17 +2471,17 @@ rfree(regex_t *re)\n> /*\n> * rcancelrequested - check for external request to cancel regex operation\n> *\n> - * Return nonzero to fail the operation with error code REG_CANCEL,\n> - * zero to keep going\n> - *\n> - * The current implementation is Postgres-specific. 
If we ever get around\n> - * to splitting the regex code out as a standalone library, there will need\n> - * to be some API to let applications define a callback function for this.\n> + * The current implementation always returns 0, if CHECK_FOR_INTERRUPTS()\n> + * doesn't exit non-locally via ereport(). Memory allocated while compiling is\n> + * expected to be cleaned up by virtue of being allocated using palloc in a\n> + * suitable memory context.\n> */\n> static int\n> rcancelrequested(void)\n> {\n> -\treturn InterruptPending && (QueryCancelPending || ProcDiePending);\n> +\tCHECK_FOR_INTERRUPTS();\n> +\n> +\treturn 0;\n> }\n\nHm. Seems confusing for this to continue being called rcancelrequested() and\nto be called via if(CANCEL_REQUESTED()), if we're not even documenting that\nit's intended to be usable that way?\n\nSeems at the minimum we ought to keep more of the old comment, to explain the\nsomewhat odd API?\n\n\n> +\t/* Set up the cache memory on first go through. */\n> +\tif (unlikely(RegexpCacheMemoryContext == NULL))\n> +\t\tRegexpCacheMemoryContext =\n> +\t\t\tAllocSetContextCreate(TopMemoryContext,\n> +\t\t\t\t\t\t\t\t \"RegexpCacheMemoryContext\",\n> +\t\t\t\t\t\t\t\t ALLOCSET_SMALL_SIZES);\n\nI think it might be nicer to create this below CacheMemoryContext? Just so the\n\"memory context tree\" stays nicely ordered.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 14:47:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm. Seems confusing for this to continue being called rcancelrequested() and\n> to be called via if(CANCEL_REQUESTED()), if we're not even documenting that\n> it's intended to be usable that way?\n\nYeah. 
I'm not very happy with this line of development at all,\nbecause I think we are painting ourselves into a corner by not allowing\ncode to detect whether a cancel is pending without having it happen\nimmediately. (That is, I do not believe that backend/regex/ is the\nonly code that will ever wish for that.) But if that is the direction\nwe're going to go in, we should probably revise these APIs to make them\nless odd. I'm not sure why we'd keep the REG_CANCEL error code at all.\n\n> I think it might be nicer to create this below CacheMemoryContext?\n\nMeh ... CacheMemoryContext might not exist yet, especially for the\nuse-cases in the login logic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Jan 2023 17:55:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2023-01-04 17:55:43 -0500, Tom Lane wrote:\n> I'm not very happy with this line of development at all, because I think we\n> are painting ourselves into a corner by not allowing code to detect whether\n> a cancel is pending without having it happen immediately. (That is, I do\n> not believe that backend/regex/ is the only code that will ever wish for\n> that.)\n\nI first wrote that this is hard to make work without introducing overhead\n(like a PG_TRY in rcancelrequested()), for a bunch of reasons discussed\nupthread (see [1]).\n\nBut now I wonder if we didn't recently introduce most of the framework to make\nthis less hard / expensive.\n\nWhat about using a version of errsave() that can save FATALs too? We could\nhave something roughly like the ProcessInterrupts() in the proposed patch that\nis used from within rcancelrequested(). 
But instead of actually throwing the\nerror, we'd just remember the to-be-thrown-later error, that the next\n\"real\" CFI would throw.\n\nThat still leaves us with some increased likelihood of erroring out within the\nregex machinery, e.g. if there's an out-of-memory error within elog.c\nprocessing. But I'd not be too worried about leaking memory in that corner\ncase. Which also could be closed using the approach in Thomas' patch, except\nthat it normally would still return in rcancelrequested().\n\nInsane?\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CA%2BhUKG%2BqtNxDQAzC20AnUxuigKYb%3D7shtmsuSyMekjni%3Dik6BA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 15:33:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Thu, Jan 5, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> What about using a version of errsave() that can save FATALs too? We could\n> have something roughly like the ProcessInterrupts() in the proposed patch that\n> is used from within rcancelrequested(). But instead of actually throwing the\n> error, we'd just remember the to-be-thrown-later error, that the next\n> \"real\" CFI would throw.\n\nRight, I contemplated variations on that theme. I'd be willing to\ncode something like that to kick the tyres, but it seems like it would\nmake back-patching more painful? We're trying to fix bugs here...\nDeciding to proceed with #6 (palloc) wouldn't mean we can't eventually\nalso implement two phase/soft CFI() when we have a potential user, so\nI don't really get the painted-into-a-corner argument. 
However, it's\nall moot if the #6 isn't good enough on its own merits independent of\nother hypothetical future users (eg if the per regex_t MemoryContext\noverheads are considered too high and can't be tuned acceptably).\n\n\n", "msg_date": "Thu, 5 Jan 2023 13:21:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Hi,\n\nOn 2023-01-05 13:21:54 +1300, Thomas Munro wrote:\n> Right, I contemplated variations on that theme. I'd be willing to\n> code something like that to kick the tyres, but it seems like it would\n> make back-patching more painful? We're trying to fix bugs here...\n\nI think we need to accept that this mess can't be fixed in the back\nbranches. I'd rather get a decent fix sometime in PG16 than a crufty fix in PG\n17 that we then backpatch a while later.\n\n\n> Deciding to proceed with #6 (palloc) wouldn't mean we can't eventually\n> also implement two phase/soft CFI() when we have a potential user, so\n> I don't really get the painted-into-a-corner argument.\n\nI think that's a fair point.\n\n\n> However, it's all moot if the #6 isn't good enough on its own merits\n> independent of other hypothetical future users (eg if the per regex_t\n> MemoryContext overheads are considered too high and can't be tuned\n> acceptably).\n\nI'm not too worried about that, particularly because it looks like it'd not be\ntoo hard to lower the overhead further. Arguably allocating memory outside of\nmcxt.c is actually a bad thing independent of error handing, because it's\neffectively \"invisible\" to our memory-usage-monitoring facilities.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 16:31:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Thu, Jan 5, 2023 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm. Seems confusing for this to continue being called rcancelrequested() and\n> > to be called via if(CANCEL_REQUESTED()), if we're not even documenting that\n> > it's intended to be usable that way?\n>\n> Yeah. I'm not very happy with this line of development at all,\n> because I think we are painting ourselves into a corner by not allowing\n> code to detect whether a cancel is pending without having it happen\n> immediately. (That is, I do not believe that backend/regex/ is the\n> only code that will ever wish for that.) But if that is the direction\n> we're going to go in, we should probably revise these APIs to make them\n> less odd. I'm not sure why we'd keep the REG_CANCEL error code at all.\n\nAh, OK. I had the impression from the way the code is laid out with a\nwall between \"PostgreSQL\" bits and \"vendored library\" bits that we\nmight have some reason to want to keep that callback interface the\nsame (ie someone else is using this code and we want to stay in\nsync?), but your reactions are a clue that maybe I imagined a\nrequirement that doesn't exist.\n\n\n", "msg_date": "Thu, 5 Jan 2023 13:43:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jan 5, 2023 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... But if that is the direction\n>> we're going to go in, we should probably revise these APIs to make them\n>> less odd. I'm not sure why we'd keep the REG_CANCEL error code at all.\n\n> Ah, OK. 
I had the impression from the way the code is laid out with a\n> wall between \"PostgreSQL\" bits and \"vendored library\" bits that we\n> might have some reason to want to keep that callback interface the\n> same (ie someone else is using this code and we want to stay in\n> sync?), but your reactions are a clue that maybe I imagined a\n> requirement that doesn't exist.\n\nThe rcancelrequested API is something that I devised out of whole cloth\nawhile ago. It's not in Tcl's copy of the code, which AFAIK is the\nonly other project using this regex engine. I do still have vague\nhopes of someday seeing the engine as a standalone project, which is\nwhy I'd prefer to keep a bright line between the engine and Postgres.\nBut there's no very strong reason to think that any hypothetical future\nexternal users who need a cancel API would want this API as opposed to\none that requires exit() or longjmp() to get out of the engine. So if\nwe're changing the way we use it, I think it's perfectly reasonable to\nredesign that API to make it simpler and less of an impedance mismatch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Jan 2023 20:14:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Thu, Jan 5, 2023 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The rcancelrequested API is something that I devised out of whole cloth\n> awhile ago. It's not in Tcl's copy of the code, which AFAIK is the\n> only other project using this regex engine. I do still have vague\n> hopes of someday seeing the engine as a standalone project, which is\n> why I'd prefer to keep a bright line between the engine and Postgres.\n> But there's no very strong reason to think that any hypothetical future\n> external users who need a cancel API would want this API as opposed to\n> one that requires exit() or longjmp() to get out of the engine. 
So if\n> we're changing the way we use it, I think it's perfectly reasonable to\n> redesign that API to make it simpler and less of an impedance mismatch.\n\nThanks for that background. Alright then, here's a new iteration\nexploring this direction. It gets rid of CANCEL_REQUESTED() ->\nREG_CANCEL and the associated error and callback function, and instead\nhas just \"INTERRUPT(re);\" at those cancellation points, which is a\nmacro that defaults to nothing (for Tcl's benefit). Our regcustom.h\ndefines it as CHECK_FOR_INTERRUPTS(). I dunno if it's worth passing\nthe \"re\" argument... I was imagining that someone who wants to free\nmemory explicitly and then longjmp would probably need it? (It might\neven be possible to expand to something that sets an error and\nreturns, not investigated.) Better name or design very welcome.\n\nAnother decision is to use the no-OOM version of palloc. (Not\nexplored: could we use throwing palloc with attribute returns_nonnull\nto teach GCC and Clang to prune the failure handling from generated\nregex code?) (As for STACK_TOO_DEEP(): why follow a function pointer,\nwhen it could be macro-only too? But that's getting off track.)\n\nI split the patch in two: memory and interrupts. I also found a place\nin contrib/pg_trgm that did no-longer-needed try/finally.", "msg_date": "Sat, 14 Jan 2023 15:23:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sat, Jan 14, 2023 at 3:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jan 5, 2023 at 2:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The rcancelrequested API is something that I devised out of whole cloth\n> > awhile ago. It's not in Tcl's copy of the code, which AFAIK is the\n> > only other project using this regex engine. 
I do still have vague\n> > hopes of someday seeing the engine as a standalone project, which is\n> > why I'd prefer to keep a bright line between the engine and Postgres.\n> > But there's no very strong reason to think that any hypothetical future\n> > external users who need a cancel API would want this API as opposed to\n> > one that requires exit() or longjmp() to get out of the engine. So if\n> > we're changing the way we use it, I think it's perfectly reasonable to\n> > redesign that API to make it simpler and less of an impedance mismatch.\n>\n> Thanks for that background. Alright then, here's a new iteration\n> exploring this direction. It gets rid of CANCEL_REQUESTED() ->\n> REG_CANCEL and the associated error and callback function, and instead\n> has just \"INTERRUPT(re);\" at those cancellation points, which is a\n> macro that defaults to nothing (for Tcl's benefit). Our regcustom.h\n> defines it as CHECK_FOR_INTERRUPTS(). I dunno if it's worth passing\n> the \"re\" argument... I was imagining that someone who wants to free\n> memory explicitly and then longjmp would probably need it? (It might\n> even be possible to expand to something that sets an error and\n> returns, not investigated.) Better name or design very welcome.\n\nI think this experiment worked out pretty well. I think it's a nice\nside-effect that you can see what memory the regexp subsystem is\nusing, and that's likely to lead to more improvements. (Why is it\nlimited to caching 32 entries? Why is it a linear search, not a hash\ntable? Why is LRU implemented with memmove() and not a list? Could\nwe have a GUC regex_cache_memory, so someone who uses a lot of regexes\ncan opt into a large cache?) On the other hand it also uses a bit\nmore RAM, like other code using the reparenting trick, which is a\ntopic for future research.\n\nI vote for proceeding with this approach. 
I wish we didn't have to\ntackle either a regexp interface/management change (done here) or a\nCFI() redesign (not done, but probably also a good idea for other\nreasons) before getting this signal stuff straightened out, but here\nwe are. This approach seems good to me. Anyone have a different\ntake?\n\n\n", "msg_date": "Mon, 3 Apr 2023 15:44:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think this experiment worked out pretty well. I think it's a nice\n> side-effect that you can see what memory the regexp subsystem is\n> using, and that's likely to lead to more improvements. (Why is it\n> limited to caching 32 entries? Why is it a linear search, not a hash\n> table? Why is LRU implemented with memmove() and not a list? Could\n> we have a GUC regex_cache_memory, so someone who uses a lot of regexes\n> can opt into a large cache?) On the other hand it also uses a bit\n> more RAM, like other code using the reparenting trick, which is a\n> topic for future research.\n\n> I vote for proceeding with this approach. I wish we didn't have to\n> tackle either a regexp interface/management change (done here) or a\n> CFI() redesign (not done, but probably also a good idea for other\n> reasons) before getting this signal stuff straightened out, but here\n> we are. This approach seems good to me. Anyone have a different\n> take?\n\nSorry for not looking at this sooner. I am okay with the regex\nchanges proposed in v5-0001 through 0003, but I think you need to\ntake another mopup pass there. 
Some specific complaints:\n* header comment for pg_regprefix has been falsified (s/malloc/palloc/)\n* in spell.c, regex_affix_deletion_callback could be got rid of\n* check other callers of pg_regerror for now-useless CHECK_FOR_INTERRUPTS\n\nIn general there's a lot of comments referring to regexes being malloc'd.\nI'm disinclined to change the ones inside the engine, because as far as\nit knows it is still using malloc, but maybe we should work harder on\nour own comments. In particular, it'd likely be useful to have something\nsomewhere pointing out that pg_regfree is only needed when you can't\nget rid of the regex by context cleanup. Maybe write a short section\nabout memory management in backend/regex/README?\n\nI've not really looked at 0004.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Apr 2023 09:25:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Tue, Apr 4, 2023 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Sorry for not looking at this sooner. I am okay with the regex\n> changes proposed in v5-0001 through 0003, but I think you need to\n> take another mopup pass there. Some specific complaints:\n> * header comment for pg_regprefix has been falsified (s/malloc/palloc/)\n\nThanks. Fixed.\n\n> * in spell.c, regex_affix_deletion_callback could be got rid of\n\nDone in a separate patch. I wondered if regex_t should be included\ndirectly as a member of that union inside AFFIX, but decided it should\nkeep using a pointer (just without the extra wrapper struct). 
A\ndirect member would make the AFFIX slightly larger, and it would\nrequire us to assume that regex_t is movable which it probably\nactually is in practice I guess but that isn't written down anywhere\nand it seemed strange to rely on it.\n\n> * check other callers of pg_regerror for now-useless CHECK_FOR_INTERRUPTS\n\nI found three of these to remove (jsonpath_gram.y, varlena.c, test_regex.c).\n\n> In general there's a lot of comments referring to regexes being malloc'd.\n\nThere is also some remaining direct use of malloc() in\nregc_pg_locale.c because \"we mustn't lose control on out-of-memory\".\nAt that time (2012) there was no MCXT_NO_OOM (2015), so we could\npresumably bring that cache into an observable MemoryContext now too.\nI haven't written a patch for that, though, because it's not in the\nway of my recovery conflict mission.\n\n> I'm disinclined to change the ones inside the engine, because as far as\n> it knows it is still using malloc, but maybe we should work harder on\n> our own comments. In particular, it'd likely be useful to have something\n> somewhere pointing out that pg_regfree is only needed when you can't\n> get rid of the regex by context cleanup. Maybe write a short section\n> about memory management in backend/regex/README?\n\nI'll try to write something for the README tomorrow. Here's a new\nversion of the code changes.\n\n> I've not really looked at 0004.\n\nI'm hoping to get just the regex changes in ASAP, and then take a\nlittle bit longer on the recovery conflict patch itself (v6-0005) on\nthe basis that it's bugfix work and not subject to the feature freeze.", "msg_date": "Sat, 8 Apr 2023 01:32:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" 
}, { "msg_contents": "On Sat, Apr 08, 2023 at 01:32:22AM +1200, Thomas Munro wrote:\n> I'm hoping to get just the regex changes in ASAP, and then take a\n> little bit longer on the recovery conflict patch itself (v6-0005) on\n> the basis that it's bugfix work and not subject to the feature freeze.\n\nAgreed. It would be good to check with the RMT, but as long as that's\nnot at the middle/end of the beta cycle I guess that's OK for this\none, even if it is only for HEAD.\n--\nMichael", "msg_date": "Sat, 8 Apr 2023 09:05:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Apr 08, 2023 at 01:32:22AM +1200, Thomas Munro wrote:\n>> I'm hoping to get just the regex changes in ASAP, and then take a\n>> little bit longer on the recovery conflict patch itself (v6-0005) on\n>> the basis that it's bugfix work and not subject to the feature freeze.\n\n> Agreed. It would be good to check with the RMT, but as long as that's\n> not at the middle/end of the beta cycle I guess that's OK for this\n> one, even if it is only for HEAD.\n\nRight. regex changes pass an eyeball check here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 20:14:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "Here is a rebase over 26669757, which introduced\nPROCSIG_RECOVERY_CONFLICT_LOGICALSLOT.\n\nI got a bit confused about why this new conflict reason didn't follow\nthe usual ERROR->FATAL promotion rules and pinged Andres who provided:\n \"Logical decoding slots are only acquired while performing logical\ndecoding. During logical decoding no user controlled code is run.\nDuring [sub]transaction abort, the slot is released. 
Therefore user\ncontrolled code cannot intercept an error before the replication slot\nis released.\" That's included in a comment in the attached to explain\nthe special treatment.", "msg_date": "Sat, 5 Aug 2023 13:39:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" }, { "msg_contents": "On Sat, Aug 5, 2023 at 1:39 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here is a rebase over 26669757, which introduced\n> PROCSIG_RECOVERY_CONFLICT_LOGICALSLOT.\n\nOops, please disregard v7 (somehow lost a precious line of code). V8 is better.", "msg_date": "Sat, 5 Aug 2023 14:53:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is RecoveryConflictInterrupt() entirely safe in a signal handler?" } ]
[ { "msg_contents": "", "msg_date": "Sun, 10 Apr 2022 04:46:32 +0530", "msg_from": "\"S.R Keshav\" <srkeshav7@gmail.com>", "msg_from_op": true, "msg_subject": "GSOC: New and improved website for pgjdbc (JDBC) (2022)" }, { "msg_contents": "Hi, all\n\n\nWe use 32bit postgres PG11 in our product for X86_64 platform, Now we want to add new process which is 64bit\nto access the 32bit progres using 64bit libpq.\n\n\nI tried to access the PostgresDB 32bit, with 64bit libpq and it works very nice.\nSo my question is, can we use 64bit lippq to access 32bit PostgresDB in X86_64 platform? \n\n\nThank you,\nBoChen", "msg_date": "Sun, 10 Apr 2022 10:32:29 +0800 (GMT+08:00)", "msg_from": "=?UTF-8?B?6ZmI?= <bchen90@163.com>", "msg_from_op": false, "msg_subject": "Can 64bit libpq to access 32bit postgresDB in X86_64 platform" }, { "msg_contents": "=?UTF-8?B?6ZmI?= <bchen90@163.com> writes:\n> I tried to access the PostgresDB 32bit, with 64bit libpq and it works very nice.\n> So my question is, can we use 64bit lippq to access 32bit PostgresDB in X86_64 platform? \n\nSeems like you know your answer already.\n\nIn any case, the PG wire protocol is platform-independent, so yes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Apr 2022 23:47:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can 64bit libpq to access 32bit postgresDB in X86_64 platform" }, { "msg_contents": "Hi, this is keshav. I have updated my proposal for the project New and\nimproved website for pgjdbc. This update was done on 14/4/22. 
This will be\nmy finalized proposal for this project.\n\nThank you.\nSR\n\n\n\nOn Sun, Apr 10, 2022 at 4:46 AM S.R Keshav <srkeshav7@gmail.com> wrote:\n\n>\n>", "msg_date": "Thu, 14 Apr 2022 18:45:18 +0530", "msg_from": "\"S.R Keshav\" <srkeshav7@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSOC: New and improved website for pgjdbc (JDBC) (2022)" } ]
[ { "msg_contents": "Hello everyone,\n\nMy name is Samuel Bassaly, and I would like to submit my proposal for this\nyear's GSoC.\nKindly find my proposal under the following link:\nhttps://docs.google.com/document/d/1cEGLLJaRmb5hkpt7GayIJA4oCllYtchOU2p0FQpLZ3U/edit?usp=sharing\n\nYour feedback is highly appreciated.\n\nThank you for your time.\n\nBest Regards,\nSamuel Bassaly\n\nHello everyone, My name is Samuel Bassaly, and I would like to submit my proposal for this year's GSoC.Kindly find my proposal under the following link:https://docs.google.com/document/d/1cEGLLJaRmb5hkpt7GayIJA4oCllYtchOU2p0FQpLZ3U/edit?usp=sharingYour feedback is highly appreciated.Thank you for your time.Best Regards,Samuel Bassaly", "msg_date": "Sun, 10 Apr 2022 20:52:55 +0200", "msg_from": "Samuel Bassaly <shkshk90@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: pgBackRest port to Windows" }, { "msg_contents": "Greetings,\n\n* Samuel Bassaly (shkshk90@gmail.com) wrote:\n> My name is Samuel Bassaly, and I would like to submit my proposal for this\n> year's GSoC.\n\n> Your feedback is highly appreciated.\n\nGreat, thanks! Will respond off-list.\n\nStephen", "msg_date": "Mon, 11 Apr 2022 14:43:13 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC: pgBackRest port to Windows" } ]
[ { "msg_contents": "In docs and comments. Mostly for v15.\nMaybe Fabien will want to comment on the pgbench one.", "msg_date": "Sun, 10 Apr 2022 21:03:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "typos" }, { "msg_contents": "( Added Joe and Robert for 0011 )\n\nOn Mon, 11 Apr 2022 at 14:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> In docs and comments. Mostly for v15.\n\n0001:\n\nShould this not use <productname>PostgreSQL</productname>? (new to master)\n\n0002:\n\nThe patch looks good. (new to v12)\n\n0003:\n\nThe patch looks good. (new to master)\n\n0004:\n\nThe patch looks good. (new to master)\n\n0005:\n\nI'm not entirely certain this is an improvement. Your commit message\nI'd say is not true going by git grep \"compression algorithm\". There\nare 3 matches in the docs and take [1], for example. I'd say in that\none it's better to use \"algorithm\". In that case, \"method\" could be\ntalking about client or server.\n\nThat makes me wonder if the change you're proposing is an improvement or not.\n\n0006:\n\nThe patch looks good. (new to master)\n\n0007:\n\n- When the <option>--max-tries</option> option is used, the\ntransaction with\n- serialization or deadlock error cannot be retried if the total time of\n+ When the <option>--max-tries</option> option is used, a\ntransaction with\n+ serialization or deadlock error will not be retried if the\ntotal time of\n\nShouldn't this be slightly clearer and say \"a transaction which fails\ndue to a serialization anomaly or a deadlock\"?\n\n- database server / the syntax error in the meta command / thread\n+ database server / syntax error in the meta command / thread\n\nShould we not separate these items out with commas?\n\n- the client is aborted. Otherwise, if an SQL fails with serialization or\n+ the client is aborted. Otherwise, if an SQL command fails with\nserialization or\n deadlock errors, the client is not aborted. 
In such cases, the current\n\nI'd say \"if an SQL command fails due to a serialization anomaly or due\nto deadlocking\".\n\n(new to master)\n\n0008:\n\nThe patch looks good. (new to master)\n\n0009:\n\nThe patch looks good. (new to master)\n\n0010:\n\nI don't understand this change.\n\n0011:\n\nI can't quite parse the original. I might not have enough context\nhere. Robert, Joe? (new to master)\n\n0012:\n\nThis seems to contain some documentation fixes too. The patch claims\nit's just for comments.\n\n- * pages that are outwith that range.\n+ * pages that are outside that range.\n\nI personally don't see the issue here, but I'm Scottish. I think the\nbest translation is just \"outside of\" rather than replacing with just\n\"outside\".\n\nAll the other changes look fine.\n\n0013:\n\nI think we should fix all these, regardless of how old the mistake is.\n\n0014:\n\n- * shouldn't PANIC just because we can't guarantee the the backup has been\n+ * shouldn't PANIC just because we can't guarantee the backup has been\n\n\"that the\"\n\n0015:\n\nPatch looks fine.\n\n0016:\n0017:\n\nI'm not really sure if we should fix these or not. From having a\nquick look at some of them it seems we'd be adding churn to some\npretty old code. Happy to hear what others think.\n\n0018:\n\nThe patch looks good.\n\n0019:\n\n-1. 
pgindent will fix these.\n\nI will start pushing the less controversial of these, after a bit of squashing.\n\nDavid\n\n[1] https://www.postgresql.org/docs/devel/app-pgbasebackup.html\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:39:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, 11 Apr 2022 at 16:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> I will start pushing the less controversial of these, after a bit of squashing.\n\nI just committed 3 separate commits for the following:\n\nCommitted: 0001 + 0003 + 0004 + 0006 + 0007 (modified) + 0008 + 0009 +\n0012 (doc parts)\nCommitted: 0012 (remainder) + 0013 + 0014 + 0018\nCommitted: 0015\n\nI skipped:\n0002 (skipped as we should backpatch)\n0005 (unsure if the proposed patch is better)\n0010 (I can't follow this change)\n0011 (Could do with input from Robert and Joe)\n\nand also skipped:\n0016 (unsure if we should change these of pgindent is not touching it)\n0017 (unsure if we should change these of pgindent is not touching it)\n0019 (pgindent will get these when the time comes)\n\nI'll wait for feedback on the ones I didn't use.\n\nAre you able to rebase the remainder? Probably with the exception of 0019.\n\nThanks for finding all these!\n\nDavid\n\n\n", "msg_date": "Mon, 11 Apr 2022 20:55:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, Apr 11, 2022 at 04:39:30PM +1200, David Rowley wrote:\n> I'm not entirely certain this is an improvement. Your commit message\n> I'd say is not true going by git grep \"compression algorithm\". There\n> are 3 matches in the docs and take [1], for example. I'd say in that\n> one it's better to use \"algorithm\". In that case, \"method\" could be\n> talking about client or server.\n\nI am not wedded to this change; but, for context, I wrote this patch before\nbasebackup supported multiple compression ... 
\"things\". I didn't touch\nbasebackup here, since Robert defended that choice of words in another thread\n(starting at 20220320194050.GX28503@telsasoft.com).\n\nThis change is for pg_column_compression(), and the only other use of\n\"compression algorithm\" in the docs is in pgcrypto (which is in contrib). That\nthe docs consistently use \"method\" suggests continuing to use that rather than\nsomething else. It could be described in some central place (like if we\nsupport common syntax between interfaces which expose compression).\n\n> 0010:\n> I don't understand this change.\n\nThe commit message mentions 959f6d6a1, which makes newbindir optional. But the\ndocumentation wasn't updated, and seems to indicate that it's still required.\nhttps://www.postgresql.org/docs/devel/pgupgrade.html\n\n> 0011:\n> I can't quite parse the original. I might not have enough context\n> here. Robert, Joe? (new to master)\n\nSee the link in the commit message where someone else reported the same\nproblem.\n\n> 0019:\n> -1. pgindent will fix these.\n\nBut two of those are from 2016.\n\nThanks for amending and pushing those. There's some more less obvious ones\nattached.\n\nAmit or Masahiko may want to comment on 0012 (doc review: Add ALTER\nSUBSCRIPTION ... SKIP).", "msg_date": "Mon, 11 Apr 2022 05:10:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, Apr 11, 2022 at 7:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 04:39:30PM +1200, David Rowley wrote:\n> > I'm not entirely certain this is an improvement. Your commit message\n> > I'd say is not true going by git grep \"compression algorithm\". There\n> > are 3 matches in the docs and take [1], for example. I'd say in that\n> > one it's better to use \"algorithm\". 
In that case, \"method\" could be\n> > talking about client or server.\n>\n> I am not wedded to this change; but, for context, I wrote this patch before\n> basebackup supported multiple compression ... \"things\". I didn't touch\n> basebackup here, since Robert defended that choice of words in another thread\n> (starting at 20220320194050.GX28503@telsasoft.com).\n>\n> This change is for pg_column_compression(), and the only other use of\n> \"compression algorithm\" in the docs is in pgcrypto (which is in contrib). That\n> the docs consistently use \"method\" suggests continuing to use that rather than\n> something else. It could be described in some central place (like if we\n> support common syntax between interfaces which expose compression).\n>\n> > 0010:\n> > I don't understand this change.\n>\n> The commit message mentions 959f6d6a1, which makes newbindir optional. But the\n> documentation wasn't updated, and seems to indicate that it's still required.\n> https://www.postgresql.org/docs/devel/pgupgrade.html\n>\n> > 0011:\n> > I can't quite parse the original. I might not have enough context\n> > here. Robert, Joe? (new to master)\n>\n> See the link in the commit message where someone else reported the same\n> problem.\n>\n> > 0019:\n> > -1. pgindent will fix these.\n>\n> But two of those are from 2016.\n>\n> Thanks for amending and pushing those. There's some more less obvious ones\n> attached.\n>\n> Amit or Masahiko may want to comment on 0012 (doc review: Add ALTER\n> SUBSCRIPTION ... SKIP).\n\nThank you for the patch! I've looked at 0012 patch. Regarding the\nfollowing part:\n\n <function>pg_replication_origin_advance()</function></link> function\n- transaction. Before using this function, the subscription needs\nto be disabled\n+ XXX? transaction. Before using this function, the subscription\nneeds to be disabled\n temporarily either by <command>ALTER SUBSCRIPTION ...\nDISABLE</command> or, the\n\nwe can remove \"transaction\", it seems a typo. 
The rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 11 Apr 2022 19:25:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, Apr 11, 2022 at 3:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 7:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Amit or Masahiko may want to comment on 0012 (doc review: Add ALTER\n> > SUBSCRIPTION ... SKIP).\n>\n> Thank you for the patch! I've looked at 0012 patch. Regarding the\n> following part:\n>\n> <function>pg_replication_origin_advance()</function></link> function\n> - transaction. Before using this function, the subscription needs\n> to be disabled\n> + XXX? transaction. Before using this function, the subscription\n> needs to be disabled\n> temporarily either by <command>ALTER SUBSCRIPTION ...\n> DISABLE</command> or, the\n>\n> we can remove \"transaction\", it seems a typo.\n>\n\nRight.\n\n> The rest looks good to me.\n>\n\n+1. 
I'll take care of pushing this one tomorrow unless we have more\ncomments on this part.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:15:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, Apr 11, 2022 at 4:56 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> 0011 (Could do with input from Robert and Joe)\n\nSeems like a reasonable change to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 13:59:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, Apr 11, 2022 at 4:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 3:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 11, 2022 at 7:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > Amit or Masahiko may want to comment on 0012 (doc review: Add ALTER\n> > > SUBSCRIPTION ... SKIP).\n> >\n>\n> +1. I'll take care of pushing this one tomorrow unless we have more\n> comments on this part.\n>\n\nI have pushed this one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Apr 2022 16:56:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On 2022-Apr-11, David Rowley wrote:\n\n> and also skipped:\n> 0016 (unsure if we should change these of pgindent is not touching it)\n> 0017 (unsure if we should change these of pgindent is not touching it)\n\nI verified that pgindent will indeed not touch these changes by running\nbefore and after. (I accepted one comment placement from that run that\ntouched a neighboring line.)\n\nI think pgindent is right not to modify vertical space very much, since\nin many cases it amounts to a subjective decision. 
The patch seemed a\n(small) improvement, and it seems hard to make too much of a fuss about\nsuch things. Pushed them as a single commit.\n\nI hadn't noticed that Justin had posted a refreshed patch series, so I\ndon't know if the new ones match what I pushed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n\n\n", "msg_date": "Wed, 13 Apr 2022 19:29:34 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Wed, Apr 13, 2022 at 07:29:34PM +0200, Alvaro Herrera wrote:\n> On 2022-Apr-11, David Rowley wrote:\n> \n> > and also skipped:\n> > 0016 (unsure if we should change these of pgindent is not touching it)\n> > 0017 (unsure if we should change these of pgindent is not touching it)\n> \n> I verified that pgindent will indeed not touch these changes by running\n> before and after. (I accepted one comment placement from that run that\n> touched a neighboring line.)\n> \n> I think pgindent is right not to modify vertical space very much, since\n> in many cases it amounts to a subjective decision. The patch seemed a\n> (small) improvement, and it seems hard to make too much of a fuss about\n> such things. Pushed them as a single commit.\n> \n> I hadn't noticed that Justin had posted a refreshed patch series, so I\n> don't know if the new ones match what I pushed.\n\nThere were no changes - I had resent the patches that removed blank lines so it\nwas apparent that they were \"outstanding\" / under discussion.\n\nThere's (only) a few remaining.", "msg_date": "Wed, 13 Apr 2022 12:40:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos" }, { "msg_contents": "On Mon, 11 Apr 2022 at 22:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Thanks for amending and pushing those. 
There's some more less obvious ones\n> attached.\n\nHere are my notes from yesterday that I made when reviewing and\npushing many of the 2nd batch of patches.\n\n0001: Pushed and back patched to v12\n\n0002: Didn't push. Compression method/algorithm.\n\n0003: Pushed and backpatched to v13\n\n0004: Pushed (reviewed by Robert)\n\n0005: Alvaro Pushed\n0006: Alvaro Pushed\n\n0007: Not pushed. No space after comment and closing */ pgindent\nfixed one of these but not the other 2. I've not looked into why\npgindent does 1 and not the other 2.\n\n0008: Pushed\n\nI've left out the following change as it does not seem to be bringing\nany sort of consistency to the docs overall. It only brings\nconsistency to a single source file in the docs.\n\n- You need <productname>zstd</productname>, if you want to support\n+ You need <productname>ZSTD</productname>, if you want to support\n\nSee: git grep -i \">zstd<\"\n\n0009:\n\nThis contains a few fixes that look correct. Not sure if the following\nhas any use as a change:\n\n- See the description of the respective commands and programs for the\n- respective details. Note that you can mix locale providers on different\n+ See the description of the respective commands and programs for\n+ details. Note that you can mix locale providers at different\n\n0010: Pushed\n\n0011: Not pushed. Not sure if this is worth the change.\n\n0012: Amit Pushed\n\n0013: Not pushed. Adds a missing comma.\n\nDavid\n\n\n", "msg_date": "Thu, 14 Apr 2022 08:56:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "(For the future, just to make discussions easier, it would be good if\nyou could have git format-patch -v N to give a unique version number\nto these patches)\n\nOn Thu, 14 Apr 2022 at 05:40, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> There's (only) a few remaining.\n\nI've pushed 0001 and 0002 of the 3rd batch of patches. 
I left 0003 as\nI just didn't feel it was a meaningful enough improvement.\n\n From docs/, if I do:\n\n$ git grep \", which is the default\" | wc -l\n9\n\n$ git grep \", the default\" | wc -l\n64\n\nYou're proposing to make the score 10, 63. I'm not sure if that's a\ngood direction to go in.\n\nDavid\n\n\n", "msg_date": "Thu, 14 Apr 2022 09:39:42 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Thu, Apr 14, 2022 at 09:39:42AM +1200, David Rowley wrote:\n> On Thu, 14 Apr 2022 at 05:40, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > There's (only) a few remaining.\n> \n> I've pushed 0001 and 0002 of the 3rd batch of patches. I left 0003 as\n\nThanks\n\n> I just didn't feel it was a meaningful enough improvement.\n> \n> From docs/, if I do:\n> \n> $ git grep \", which is the default\" | wc -l\n> 9\n> \n> $ git grep \", the default\" | wc -l\n> 64\n> \n> You're proposing to make the score 10, 63. I'm not sure if that's a\n> good direction to go in.\n\nWell, I'm proposing to change the only instance of this:\n\n$ git grep -F \", the default)\"\ndoc/src/sgml/ref/create_publication.sgml: partition's row filter (if the parameter is false, the default) or the root\n\nMaybe what's needed is more like this.\n\n If the publication contains a partitioned table, and the publication parameter\n <literal>publish_via_partition_root</literal> is false (the default), then the\n row filter is taken from the partition; otherwise, the row filter is taken\n from the root partitioned table.\n\nI'll plan to keep this around and may come back to it later.\n\nOn Thu, Apr 14, 2022 at 08:56:22AM +1200, David Rowley wrote:\n> I've left out the following change as it does not seem to be bringing\n> any sort of consistency to the docs overall. 
It only brings\n> consistency to a single source file in the docs.\n> \n> - You need <productname>zstd</productname>, if you want to support\n> + You need <productname>ZSTD</productname>, if you want to support\n> \n> See: git grep -i \">zstd<\"\n\nIt may not be worth changing just this one line, but the reason I included it\nhere is for consistency:\n\n$ git grep 'zstd.*product' doc\ndoc/src/sgml/config.sgml: <literal>zstd</literal> (if <productname>PostgreSQL</productname>\n$ git grep 'ZSTD.*product' doc\ndoc/src/sgml/install-windows.sgml: <term><productname>ZSTD</productname></term>\ndoc/src/sgml/install-windows.sgml: Required for supporting <productname>ZSTD</productname> compression\ndoc/src/sgml/installation.sgml: You need <productname>ZSTD</productname>, if you want to support\ndoc/src/sgml/installation.sgml: Build with <productname>ZSTD</productname> compression support.\n\nIf we were to change it, maybe they should all say \"Zstandard (zstd)\". ZSTD\nlooks like an acronym, which I think it is not, and Zstandard indicates how to\npronounce it.\n\n\n", "msg_date": "Wed, 13 Apr 2022 19:33:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos" }, { "msg_contents": "CCing Amit K, because I propose a few relatively minor changes to\nlogical rep docs.\n\nOn 2022-Apr-13, Justin Pryzby wrote:\n\n> $ git grep -F \", the default)\"\n> doc/src/sgml/ref/create_publication.sgml: partition's row filter (if the parameter is false, the default) or the root\n> \n> Maybe what's needed is more like this.\n> \n> If the publication contains a partitioned table, and the publication parameter\n> <literal>publish_via_partition_root</literal> is false (the default), then the\n> row filter is taken from the partition; otherwise, the row filter is taken\n> from the root partitioned table.\n\nYeah, more invasive rewording seems called for. 
I propose this:\n\n For publications containing partitioned tables, the row filter for each\n partition is taken from the published partitioned table if the\n publication parameter <literal>publish_via_partition_root</literal> is true,\n or from the partition itself if it is false (the default).\n\nI think we should also mention that this parameter affects row filters,\nin the <varlistentry> for the WITH clause. Currently it has\n\n <term><literal>publish_via_partition_root</literal> (<type>boolean</type>)</term>\n <listitem>\n <para>\n This parameter determines whether changes in a partitioned table (or\n on its partitions) contained in the publication will be published\n using the identity and schema of the partitioned table rather than\n that of the individual partitions that are actually changed; the\n latter is the default. Enabling this allows the changes to be\n replicated into a non-partitioned table or a partitioned table\n consisting of a different set of partitions.\n </para>\n\nI propose to add \n\n <term><literal>publish_via_partition_root</literal> (<type>boolean</type>)</term>\n <listitem>\n <para>\n This parameter determines whether changes in a partitioned table (or\n on its partitions) contained in the publication will be published\n using the identity and schema of the partitioned table rather than\n that of the individual partitions that are actually changed; the\n latter is the default. Enabling this allows the changes to be\n replicated into a non-partitioned table or a partitioned table\n consisting of a different set of partitions.\n </para>\n\n <para>\n This parameter also affects how row filters are chosen for partitions;\n see below for details.\n </para>\n\nMore generally, I think we need to connect the WHERE keyword with \"row\nfilters\" more explicitly. 
Right now, the parameter reference says\n\n If the optional <literal>WHERE</literal> clause is specified, rows for\n which the <replaceable class=\"parameter\">expression</replaceable>\n evaluates to false or null will not be published. Note that parentheses\n are required around the expression. It has no effect on\n <literal>TRUNCATE</literal> commands.\n\nI propose to make it \"If the optional WHERE clause is specified, it\ndefines a <firstterm>row filter</firstterm> expression. Rows for which\nthe row filter expression evaluates to false ...\"\n\n\n> $ git grep 'zstd.*product' doc\n> doc/src/sgml/config.sgml: <literal>zstd</literal> (if <productname>PostgreSQL</productname>\n> $ git grep 'ZSTD.*product' doc\n> doc/src/sgml/install-windows.sgml: <term><productname>ZSTD</productname></term>\n> doc/src/sgml/install-windows.sgml: Required for supporting <productname>ZSTD</productname> compression\n> doc/src/sgml/installation.sgml: You need <productname>ZSTD</productname>, if you want to support\n> doc/src/sgml/installation.sgml: Build with <productname>ZSTD</productname> compression support.\n\nI don't see any official sources calling it all-uppercase ZSTD. In a\nquick non-scientific survey, most seem to use Zstd or zstd. The\nnon-abbreviated official name is Zstandard, but it's hard to find any\nplaces using that spelling, and I don't think our docs are a place to\neducate people on what the official name or pronunciation is.\n\nI propose we standardize on <productname>Zstd</productname> everywhere.\nUsers can look it up if they're really interested.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 19 Apr 2022 13:05:23 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Tue, Apr 19, 2022 at 4:35 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Yeah, more invasive rewording seems called for. 
I propose this:\n>\n> For publications containing partitioned tables, the row filter for each\n> partition is taken from the published partitioned table if the\n> publication parameter <literal>publish_via_partition_root</literal> is true,\n> or from the partition itself if it is false (the default).\n>\n> I think we should also mention that this parameter affects row filters,\n> in the <varlistentry> for the WITH clause. Currently it has\n>\n> <term><literal>publish_via_partition_root</literal> (<type>boolean</type>)</term>\n> <listitem>\n> <para>\n> This parameter determines whether changes in a partitioned table (or\n> on its partitions) contained in the publication will be published\n> using the identity and schema of the partitioned table rather than\n> that of the individual partitions that are actually changed; the\n> latter is the default. Enabling this allows the changes to be\n> replicated into a non-partitioned table or a partitioned table\n> consisting of a different set of partitions.\n> </para>\n>\n> I propose to add\n>\n> <term><literal>publish_via_partition_root</literal> (<type>boolean</type>)</term>\n> <listitem>\n> <para>\n> This parameter determines whether changes in a partitioned table (or\n> on its partitions) contained in the publication will be published\n> using the identity and schema of the partitioned table rather than\n> that of the individual partitions that are actually changed; the\n> latter is the default. 
Enabling this allows the changes to be\n> replicated into a non-partitioned table or a partitioned table\n> consisting of a different set of partitions.\n> </para>\n>\n> <para>\n> This parameter also affects how row filters are chosen for partitions;\n> see below for details.\n> </para>\n>\n\nYour proposed changes look good to me but I think all these places\nneed to mention 'column list' as well because the behavior is the same\nfor it.\n\n> More generally, I think we need to connect the WHERE keyword with \"row\n> filters\" more explicitly. Right now, the parameter reference says\n>\n> If the optional <literal>WHERE</literal> clause is specified, rows for\n> which the <replaceable class=\"parameter\">expression</replaceable>\n> evaluates to false or null will not be published. Note that parentheses\n> are required around the expression. It has no effect on\n> <literal>TRUNCATE</literal> commands.\n>\n> I propose to make it \"If the optional WHERE clause is specified, it\n> defines a <firstterm>row filter</firstterm> expression. Rows for which\n> the row filter expression evaluates to false ...\"\n>\n\nLooks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Apr 2022 09:19:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On 2022-Apr-20, Amit Kapila wrote:\n\n> Your proposed changes look good to me but I think all these places\n> need to mention 'column list' as well because the behavior is the same\n> for it.\n\nHmm, you're right. Added that, and changed the wording somewhat because\nsome things read awkwardly. 
Here's the result in patch form.\n\nColumn lists seem not to be mentioned in logical-replication.sgml, either.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"", "msg_date": "Wed, 20 Apr 2022 14:01:37 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On 2022-Apr-19, Alvaro Herrera wrote:\n\n> I propose we standardize on <productname>Zstd</productname> everywhere.\n> Users can look it up if they're really interested.\n\nSo the attached.\n\nThere are other uses of <literal>zstd</literal>, but those are referring to the\nexecutable program.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)", "msg_date": "Wed, 20 Apr 2022 23:32:08 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Wed, Apr 20, 2022 at 11:32:08PM +0200, Alvaro Herrera wrote:\n> On 2022-Apr-19, Alvaro Herrera wrote:\n> \n> > I propose we standardize on <productname>Zstd</productname> everywhere.\n> > Users can look it up if they're really interested.\n> \n> So the attached.\n> \n> There are other uses of <literal>zstd</literal>, but those are referring to the\n> executable program.\n\nThis one shouldn't be changed, or not like this?\n\n> @@ -560,7 +560,7 @@ $ENV{PROVE_TESTS}='t/020*.pl t/010*.pl'\n> <varlistentry>\n> <term><varname>ZSTD</varname></term>\n> <listitem><para>\n> - Path to a <application>zstd</application> command. The default is\n> + Path to a <application>Zstd</application> command. The default is\n> <literal>zstd</literal>, which will search for a command by that\n> name in the configured <envar>PATH</envar>.\n> </para></listitem>\n\nMaybe it should say s/a/the/, like:\n\n- Path to a <application>zstd</application> command. 
The default is\n+ Path to the <application>zstd</application> command. The default is\n\n\n", "msg_date": "Wed, 20 Apr 2022 16:38:42 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos" }, { "msg_contents": "On Wed, Apr 20, 2022 at 5:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-20, Amit Kapila wrote:\n>\n> > Your proposed changes look good to me but I think all these places\n> > need to mention 'column list' as well because the behavior is the same\n> > for it.\n>\n> Hmm, you're right. Added that, and changed the wording somewhat because\n> some things read awkwardly. Here's the result in patch form.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Apr 2022 07:55:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Wed, Apr 20, 2022 at 11:32:08PM +0200, Alvaro Herrera wrote:\n> So the attached.\n> \n> --- a/doc/src/sgml/install-windows.sgml\n> +++ b/doc/src/sgml/install-windows.sgml\n> @@ -307,9 +307,9 @@ $ENV{MSBFLAGS}=\"/m\";\n> </varlistentry>\n> \n> <varlistentry>\n> - <term><productname>ZSTD</productname></term>\n> + <term><productname>Zstd</productname></term>\n> <listitem><para>\n> - Required for supporting <productname>ZSTD</productname> compression\n> + Required for supporting <productname>Zstd</productname> compression\n\nLooking at the zstd project itself for reference or just wiki-sensei,\nI don't think that this is correct:\nhttps://github.com/facebook/zstd\nhttps://en.wikipedia.org/wiki/Zstd\n\nTheir README uses \"zstd\" in lower-case, while \"Zstd\" (first letter\nupper-case) is used at the beginning of a sentence.\n--\nMichael", "msg_date": "Thu, 21 Apr 2022 13:36:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On 21.04.22 06:36, Michael Paquier wrote:\n> On Wed, Apr 20, 
2022 at 11:32:08PM +0200, Alvaro Herrera wrote:\n>> So the attached.\n>>\n>> --- a/doc/src/sgml/install-windows.sgml\n>> +++ b/doc/src/sgml/install-windows.sgml\n>> @@ -307,9 +307,9 @@ $ENV{MSBFLAGS}=\"/m\";\n>> </varlistentry>\n>> \n>> <varlistentry>\n>> - <term><productname>ZSTD</productname></term>\n>> + <term><productname>Zstd</productname></term>\n>> <listitem><para>\n>> - Required for supporting <productname>ZSTD</productname> compression\n>> + Required for supporting <productname>Zstd</productname> compression\n> \n> Looking at the zstd project itself for reference or just wiki-sensei,\n> I don't think that this is correct:\n> https://github.com/facebook/zstd\n> https://en.wikipedia.org/wiki/Zstd\n> \n> Their README uses \"zstd\" in lower-case, while \"Zstd\" (first letter\n> upper-case) is used at the beginning of a sentence.\n\nIt is referred to as \"Zstandard\" at both of those places. Maybe we \nshould use that. That is also easier to pronounce.\n\n\n", "msg_date": "Thu, 21 Apr 2022 15:59:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On 2022-Apr-21, Peter Eisentraut wrote:\n\n> It is referred to as \"Zstandard\" at both of those places. Maybe we should\n> use that. That is also easier to pronounce.\n\nYeah, I looked at other places (such as Yann Collet's blog) and I agree\nthat Zstandard seems to be the accepted spelling of the product. Pushed\nthat way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... 
Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru\n\n\n", "msg_date": "Thu, 21 Apr 2022 19:16:54 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "I found a bunch more typos; a couple from codespell, and several which are the\nresult of looking for previously-reported typos, like:\n\ntime git log origin --grep '[tT]ypo' --word-diff -U1 |grep -Eo '\\[-[[:lower:]]+-\\]' |sed 's/^\\[-//; s/-\\]$//' |sort -u |grep -Fxvwf /usr/share/dict/words >badwords.txt\ntime grep -rhoFwf badwords.txt doc |sort -u >not-badwords.txt\ntime grep -Fxvwf not-badwords.txt ./badwords.txt >./badwords.txt.new\ntime grep -rhoIFwf ./badwords.txt.new src --incl='*.[chly]' --incl='*.p[lm]' |sort |uniq -c |sort -nr |less", "msg_date": "Tue, 10 May 2022 21:03:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos" }, { "msg_contents": "On Tue, May 10, 2022 at 09:03:34PM -0500, Justin Pryzby wrote:\n> I found a bunch more typos; a couple from codespell, and several which are the\n> result of looking for previously-reported typos, like:\n\nThanks, applied 0002.\n\nRegarding 0001, I don't really know which one of {AND,OR}ed or\n{AND,OR}-ed is better. Note that the code prefers the former, but\nyour patch changes the docs to use the latter.\n--\nMichael", "msg_date": "Wed, 11 May 2022 15:41:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: typos" }, { "msg_contents": "On Thu, Apr 14, 2022 at 08:56:22AM +1200, David Rowley wrote:\n> 0007: Not pushed. No space after comment and closing */ pgindent\n> fixed one of these but not the other 2. I've not looked into why\n> pgindent does 1 and not the other 2.\n\n> -/* get operation priority by its code*/\n> +/* get operation priority by its code */\n\npgindent never touches comments that start in column zero. 
(That's why many\ncolumn-0 comments are wrapped to widths other than the standard 78.)\n\n\n", "msg_date": "Tue, 5 Jul 2022 00:51:39 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: typos" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that parse_jsontable.c was added with a wrong copyright year,\ntrivial patch attached.\n\nWhile at it I also see that a lot of Spanish translation files seems to be\nstuck in 2019, but those were fixed in the pgtranslation repo last June so they\nwill get synced eventually.", "msg_date": "Mon, 11 Apr 2022 14:08:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Outdated copyright year in parse_jsontable.c" }, { "msg_contents": "On Mon, Apr 11, 2022 at 02:08:38PM +0800, Julien Rouhaud wrote:\n> I just noticed that parse_jsontable.c was added with a wrong copyright year,\n> trivial patch attached.\n\nThanks, I'll go fix that in a bit. I am spotting four more, as of\nsrc/backend/replication/basebackup_*.c that point to 2020.\n--\nMichael", "msg_date": "Mon, 11 Apr 2022 15:58:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Outdated copyright year in parse_jsontable.c" }, { "msg_contents": "On Mon, Apr 11, 2022 at 03:58:01PM +0900, Michael Paquier wrote:\n> On Mon, Apr 11, 2022 at 02:08:38PM +0800, Julien Rouhaud wrote:\n> > I just noticed that parse_jsontable.c was added with a wrong copyright year,\n> > trivial patch attached.\n> \n> Thanks, I'll go fix that in a bit. 
I am spotting four more, as of\n> src/backend/replication/basebackup_*.c that point to 2020.\n\nAh right, I now realize that my command found them too but it was hidden with\nthe .po files and I missed them.\n\nThanks!\n\n\n", "msg_date": "Mon, 11 Apr 2022 17:16:59 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Outdated copyright year in parse_jsontable.c" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Apr 11, 2022 at 03:58:01PM +0900, Michael Paquier wrote:\n>> On Mon, Apr 11, 2022 at 02:08:38PM +0800, Julien Rouhaud wrote:\n>>> I just noticed that parse_jsontable.c was added with a wrong copyright year,\n>>> trivial patch attached.\n\n>> Thanks, I'll go fix that in a bit. I am spotting four more, as of\n>> src/backend/replication/basebackup_*.c that point to 2020.\n\n> Ah right, I now realize that my command found them too but it was hidden with\n> the .po files and I missed them.\n\nFTR, we already have a tool for this: src/tools/copyright.pl.\nI ran it to check, and it found the same five files.\n\n(I'm slightly annoyed at it for having touched the mod dates on\nevery file when it only needed to change five ... will go fix that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 09:57:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Outdated copyright year in parse_jsontable.c" } ]
[ { "msg_contents": "To Whom It May Concern,\n\nHi, I am Qiongwen Liu, and I am studying at the University of California,\nBerkeley as an economics and computer science major. I am interested in the\nDevelop Performance Farm Benchmarks and Website of PostgreSQL in\nthis program, attached below is my application. Please check it out. Thank\nyou.\n\nSincerely,\nQiongwen", "msg_date": "Sun, 10 Apr 2022 23:43:29 -0700", "msg_from": "Qiongwen Liu <qiongwen7551@berkeley.edu>", "msg_from_op": true, "msg_subject": "PostgreSQL Program Application" }, { "msg_contents": "Greetings,\n\n* Qiongwen Liu (qiongwen7551@berkeley.edu) wrote:\n> Hi, I am Qiongwen Liu, and I am studying at the University of California,\n> Berkeley as an economics and computer science major. I am interested in the\n> Develop Performance Farm Benchmarks and Website of PostgreSQL in\n> this program, attached below is my application. Please check it out. Thank\n> you.\n\nGreat, thanks! Will respond off-list.\n\nStephen", "msg_date": "Mon, 11 Apr 2022 14:43:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Program Application" } ]
[ { "msg_contents": "Hi all,\n(Added Robert and Georgios in CC:)\n\nSince babbbb5 and the introduction of LZ4, I have reworked the way\ncompression is controlled for pg_receivewal, with two options:\n- --compress-method, settable to \"gzip\", \"none\" or \"lz4\".\n- --compress, to pass down a compression level, where the allowed\nrange is 1-9. If passing down 0, we'd get an error rather than\nimplying no compression, contrary to what we did in ~14.\n\nI initially thought that this was fine as-is, but then Robert and\nothers have worked on client/server compression for pg_basebackup,\nintroducing a much better design with centralized APIs where one can\nuse METHOD:DETAIL for as compression value, where DETAIL is a\ncomma-separated list of keyword=value (keyword = \"level\" or\n\"workers\"), with centralized checks and an extensible design. \n\nThis is something I think we had better fix before beta1, because now\nwe have binaries that use an inconsistent set of options. So,\nattached is a patch set aimed at rework this option set from the\nground, taking advantage of the recent work done by Robert and others\nfor pg_basebackup:\n- 0001 is a simple rename of backup_compression.{c,h} to\ncompression.{c,h}, removing anything related to base backups from\nthat. One extra reason behind this renaming is that I would like to\nuse this infra for pg_dump, but that's material for 16~.\n- 0002 removes WalCompressionMethod, replacing it by\npg_compress_algorithm as these are the same enums. Robert complained\nabout the confusion that WalCompressionMethod could lead to as this\ncould be used for the compression of data, and not only WAL. I have\nrenamed some variables to be more consistent, while on it.\n- 0003 is the actual option rework for pg_receivewal. 
This takes\nadvantage of 0001, leading to the result of removing --compress-method\nand replacing it with --compress, taking care of the backward\ncompatibility problems for an integer value, aka 0 implies no\ncompression and val > 0 implies gzip. One bonus reason to switch to\nthat is that this would make the addition of zstd for pg_receivewal\neasier in the future.\n\nI am going to add an open item for this stuff. Comments or thoughts?\n\nThanks,\n--\nMichael", "msg_date": "Mon, 11 Apr 2022 15:52:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Fixes for compression options of pg_receivewal and refactoring of\n backup_compression.{c,h}" }, { "msg_contents": "\nOn Monday, April 11th, 2022 at 8:52 AM, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> This is something I think we had better fix before beta1, because now\n> we have binaries that use an inconsistent set of options. So,\n> attached is a patch set aimed at rework this option set from the\n> ground, taking advantage of the recent work done by Robert and others\n> for pg_basebackup:\n\nAgreed. It is rather inconsistent now.\n\n> - 0001 is a simple rename of backup_compression.{c,h} to\n> compression.{c,h}, removing anything related to base backups from\n> that. One extra reason behind this renaming is that I would like to\n> use this infra for pg_dump, but that's material for 16~.\n\nI agree with the design. If you permit me a couple of nitpicks regarding naming.\n\n+typedef enum pg_compress_algorithm\n+{\n+ PG_COMPRESSION_NONE,\n+ PG_COMPRESSION_GZIP,\n+ PG_COMPRESSION_LZ4,\n+ PG_COMPRESSION_ZSTD\n+} pg_compress_algorithm;\n\nElsewhere in the codebase, (e.g. create_table.sgml, alter_table.sgml,\nbrin_tuple.c, detoast.c, toast_compression.c, tupdesc.c, gist.c to mention a\nfew) variations of of the nomenclature \"compression method\" are used, like\n'VARDATA_COMPRESSED_GET_COMPRESS_METHOD' or 'InvalidCompressionMethod' etc. 
I\nfeel that it would be nicer if we followed one naming rule for this and I\nrecommend to substitute algorithm for method throughout.\n\nOn a similar note, it would help readability to be able to distinguish at a\nglance the type from the variable. Maybe uppercase or camelcase the type?\n\nLast, even though it is not needed now, it will be helpful to have a\nPG_COMPRESSION_INVALID in some scenarios. Though we can add it when we come to\nit.\n\n> - 0002 removes WalCompressionMethod, replacing it by\n> pg_compress_algorithm as these are the same enums. Robert complained\n> about the confusion that WalCompressionMethod could lead to as this\n> could be used for the compression of data, and not only WAL. I have\n> renamed some variables to be more consistent, while on it.\n\nIt looks good. If you choose to discard the comment regarding the use of\n'method' over 'algorithm' from above, can you please use the full word in the\nvariable, e.g. 'wal_compress_algorithm' instead of 'wal_compress_algo'. I can\nnot really explain it, the later reads a bit rude. Then again that may be just\nme.\n\n> - 0003 is the actual option rework for pg_receivewal. This takes\n> advantage of 0001, leading to the result of removing --compress-method\n> and replacing it with --compress, taking care of the backward\n> compatibility problems for an integer value, aka 0 implies no\n> compression and val > 0 implies gzip. One bonus reason to switch to\n> that is that this would make the addition of zstd for pg_receivewal\n> easier in the future.\n\nLooks good.\n\n> I am going to add an open item for this stuff. 
Comments or thoughts?\n\nI agree that it is better to not release pg_receivewal with the distinct set of\noptions.\n\nCheers,\n//Georgios\n\n> Thanks,\n> --\n> Michael\n\n\n", "msg_date": "Mon, 11 Apr 2022 12:46:02 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring of\n backup_compression.{c,h}" }, { "msg_contents": "On Mon, Apr 11, 2022 at 2:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Since babbbb5 and the introduction of LZ4, I have reworked the way\n> compression is controlled for pg_receivewal, with two options:\n> - --compress-method, settable to \"gzip\", \"none\" or \"lz4\".\n> - --compress, to pass down a compression level, where the allowed\n> range is 1-9. If passing down 0, we'd get an error rather than\n> implying no compression, contrary to what we did in ~14.\n>\n> I initially thought that this was fine as-is, but then Robert and\n> others have worked on client/server compression for pg_basebackup,\n> introducing a much better design with centralized APIs where one can\n> use METHOD:DETAIL for as compression value, where DETAIL is a\n> comma-separated list of keyword=value (keyword = \"level\" or\n> \"workers\"), with centralized checks and an extensible design.\n>\n> This is something I think we had better fix before beta1, because now\n> we have binaries that use an inconsistent set of options. So,\n> attached is a patch set aimed at rework this option set from the\n> ground, taking advantage of the recent work done by Robert and others\n> for pg_basebackup:\n\n+1 for this in general, but I think that naming like\n\"compression_algo\" stinks. 
If you think \"compression_algorithm\" is too\nlong, I think you should use \"algorithm\" or \"compression\" or\n\"compression_method\" or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 11:15:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring of\n backup_compression.{c,h}" }, { "msg_contents": "On Mon, Apr 11, 2022 at 11:15:46AM -0400, Robert Haas wrote:\n> +1 for this in general, but I think that naming like\n> \"compression_algo\" stinks. If you think \"compression_algorithm\" is too\n> long, I think you should use \"algorithm\" or \"compression\" or\n> \"compression_method\" or something.\n\nYes, I found \"compression_algorithm\" to be too long initially. For\nwalmethods.c and pg_receivewal.c, it may be better to just stick to\n\"algorithm\" then, at least that's consistent with pg_basebackup.c.\n--\nMichael", "msg_date": "Tue, 12 Apr 2022 07:50:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring\n of backup_compression.{c,h}" }, { "msg_contents": "On Mon, Apr 11, 2022 at 12:46:02PM +0000, gkokolatos@pm.me wrote:\n> On Monday, April 11th, 2022 at 8:52 AM, Michael Paquier <michael@paquier.xyz>\n> wrote:\n>> - 0001 is a simple rename of backup_compression.{c,h} to\n>> compression.{c,h}, removing anything related to base backups from\n>> that. One extra reason behind this renaming is that I would like to\n>> use this infra for pg_dump, but that's material for 16~.\n> \n> I agree with the design. If you permit me a couple of nitpicks regarding naming.\n> \n> +typedef enum pg_compress_algorithm\n> +{\n> + PG_COMPRESSION_NONE,\n> + PG_COMPRESSION_GZIP,\n> + PG_COMPRESSION_LZ4,\n> + PG_COMPRESSION_ZSTD\n> +} pg_compress_algorithm;\n> \n> Elsewhere in the codebase, (e.g. 
create_table.sgml, alter_table.sgml,\n> brin_tuple.c, detoast.c, toast_compression.c, tupdesc.c, gist.c to mention a\n> few) variations of of the nomenclature \"compression method\" are used, like\n> 'VARDATA_COMPRESSED_GET_COMPRESS_METHOD' or 'InvalidCompressionMethod' etc. I\n> feel that it would be nicer if we followed one naming rule for this and I\n> recommend to substitute algorithm for method throughout.\n\nTechnically and as far as I know, both are correct and hold more or\nless the same meaning. pg_basebackup's code exposes algorithm in a\nmore extended way, so I have just stuck to it for the internal\nvariables and such. Perhaps we could rename the whole, but I see no\nstrong reason to do that.\n\n> Last, even though it is not needed now, it will be helpful to have a\n> PG_COMPRESSION_INVALID in some scenarios. Though we can add it when we come to\n> it.\n\nPerhaps. There is no need for it yet, though. pg_dump would not need\nthat, as well.\n\n>> - 0002 removes WalCompressionMethod, replacing it by\n>> pg_compress_algorithm as these are the same enums. Robert complained\n>> about the confusion that WalCompressionMethod could lead to as this\n>> could be used for the compression of data, and not only WAL. I have\n>> renamed some variables to be more consistent, while on it.\n> \n> It looks good. If you choose to discard the comment regarding the use of\n> 'method' over 'algorithm' from above, can you please use the full word in the\n> variable, e.g. 'wal_compress_algorithm' instead of 'wal_compress_algo'. I can\n> not really explain it, the later reads a bit rude. Then again that may be just\n> me.\n\nThanks. I have been able to do an extra pass on 0001 and 0002, fixing\nthose naming inconsistencies with \"algo\" vs \"algorithm\" that you and\nRobert have reported, and applied them. For 0003, I'll look at it\nlater. 
Attached is a rebase with improvements about the variable\nnames.\n--\nMichael", "msg_date": "Tue, 12 Apr 2022 18:22:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring\n of backup_compression.{c,h}" }, { "msg_contents": "On Tue, Apr 12, 2022 at 06:22:48PM +0900, Michael Paquier wrote:\n> On Mon, Apr 11, 2022 at 12:46:02PM +0000, gkokolatos@pm.me wrote:\n>> It looks good. If you choose to discard the comment regarding the use of\n>> 'method' over 'algorithm' from above, can you please use the full word in the\n>> variable, e.g. 'wal_compress_algorithm' instead of 'wal_compress_algo'. I can\n>> not really explain it, the later reads a bit rude. Then again that may be just\n>> me.\n> \n> Thanks. I have been able to do an extra pass on 0001 and 0002, fixing\n> those naming inconsistencies with \"algo\" vs \"algorithm\" that you and\n> Robert have reported, and applied them. For 0003, I'll look at it\n> later. Attached is a rebase with improvements about the variable\n> names.\n\nThis has been done with the proper renames. With that in place, I see\nno reason now to not be able to set the compression level as it is\npossible to pass it down with the options available. This requires\nonly a couple of lines, as of the attached. LZ4 has a dummy structure\ncalled LZ4F_INIT_PREFERENCES to initialize LZ4F_preferences_t, that\nholds the compression level before passing it down to\nLZ4F_compressBegin(), but that's available only in v1.8.3. 
Using it\nrisks lowering down the minimal version of LZ4 we are able to use now,\nbut replacing that with a set of memset()s is also a way to set up\nthings as per its documentation.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 13 Apr 2022 14:25:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring\n of backup_compression.{c,h}" }, { "msg_contents": "\n\n------- Original Message -------\nOn Wednesday, April 13th, 2022 at 7:25 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n\n>\n>\n> On Tue, Apr 12, 2022 at 06:22:48PM +0900, Michael Paquier wrote:\n>\n> > On Mon, Apr 11, 2022 at 12:46:02PM +0000, gkokolatos@pm.me wrote:\n> >\n> > > It looks good. If you choose to discard the comment regarding the use of\n> > > 'method' over 'algorithm' from above, can you please use the full word in the\n> > > variable, e.g. 'wal_compress_algorithm' instead of 'wal_compress_algo'. I can\n> > > not really explain it, the later reads a bit rude. Then again that may be just\n> > > me.\n> >\n> > Thanks. I have been able to do an extra pass on 0001 and 0002, fixing\n> > those naming inconsistencies with \"algo\" vs \"algorithm\" that you and\n> > Robert have reported, and applied them. For 0003, I'll look at it\n> > later. Attached is a rebase with improvements about the variable\n> > names.\n>\n> This has been done with the proper renames. With that in place, I see\n> no reason now to not be able to set the compression level as it is\n> possible to pass it down with the options available. This requires\n> only a couple of lines, as of the attached. LZ4 has a dummy structure\n> called LZ4F_INIT_PREFERENCES to initialize LZ4F_preferences_t, that\n> holds the compression level before passing it down to\n> LZ4F_compressBegin(), but that's available only in v1.8.3. 
Using it\n> risks lowering down the minimal version of LZ4 we are able to use now,\n> but replacing that with a set of memset()s is also a way to set up\n> things as per its documentation.\n>\n> Thoughts?\n\nIt's really not hard to add compression level. However we had briefly\ndiscussed it in the original thread [1] and decided against. That is why\nI did not write that code. If the community thinks differently now, let\nme know if you would like me to offer a patch for it.\n\nCheers,\n//Georgios\n\n\n[1] https://www.postgresql.org/message-id/flat/CABUevEwuq7XXyd4fA0W3jY9MsJu9B2WRbHumAA%2B3WzHrGAQjsg%40mail.gmail.com#b6456fa2adc1cdb049a57bf3587666b9\n\n\n", "msg_date": "Wed, 13 Apr 2022 14:58:28 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring of\n backup_compression.{c,h}" }, { "msg_contents": "On Wed, Apr 13, 2022 at 02:58:28PM +0000, gkokolatos@pm.me wrote:\n> It's really not hard to add compression level. However we had briefly\n> discussed it in the original thread [1] and decided against. That is why\n> I did not write that code. If the community thinks differently now, let\n> me know if you would like me to offer a patch for it.\n\nThe issue back then was how to design the option set to handle all\nthat (right? My memories may be short on that), and pg_basebackup\ntakes care of that with its option design.\n\nThis is roughly what has been done here, except that this was for the\ncontentSize:\nhttps://www.postgresql.org/message-id/rYyZ3Fj9qayyY9-egNsV_kkLbL_BSWcOEdi3Mb6M9eQRTkcA2jrqFEHglLUEYnzWR_wttCqn7VI94MZ2p7mwNje51lHTvWYnJ1jHdOceen4=@pm.me\n\nDo you think that the extra test coverage is going to be too much of a\nburden? 
I was thinking about just adding a level to the main lz4\ncommand, with an extra negative test in the SKIP block with a level\nout of range\n--\nMichael", "msg_date": "Thu, 14 Apr 2022 06:18:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring\n of backup_compression.{c,h}" }, { "msg_contents": "On Thu, Apr 14, 2022 at 06:18:29AM +0900, Michael Paquier wrote:\n> The issue back then was how to design the option set to handle all\n> that (right? My memories may be short on that), and pg_basebackup\n> takes care of that with its option design.\n\nI have looked at that again this morning, and the change is\nstraight-forward so I saw no reason to not do it. And, applied.\n--\nMichael", "msg_date": "Mon, 18 Apr 2022 11:42:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fixes for compression options of pg_receivewal and refactoring\n of backup_compression.{c,h}" } ]
[ { "msg_contents": "Hi,\n\nI have tried to read about Oracle's spatial partitioning feature (\nhttps://www.oracle.com/technetwork/database/enterprise-edition/spatial-twp-partitioningbp-10gr2-05-134277.pdf)\nand wondered if something like this is possible with PostgreSQL (with\nPostGIS):\n\nThe first part, getting the rows into the \"right\" partition isn't\nespecially interesting: Reduce every geometry to a point, and use the x and\ny coordinates separately in a range partition. This is possible with\nPostgreSQL as it is a normal range partition on double.\n\nThe second part is more interesting. Whenever the spatial index is\n(implicitly or directly) used in a query, the partition pruning step\n(during execution) checks the spatial index's root bounding box to\ndetermine if the partition can be skipped.\n\nIs this possible to achieve in PostgreSQL? There is already a function in\nPostGIS to get the spatial index root bounding box\n(_postgis_index_extent(tbl regclass, col text)), but I think the real issue\nis that the actual SQL query might not even call the index directly (SELECT\n* FROM a WHERE ST_Intersects(mygeom, a.geom) - the ST_Intersects function\nuses the index internally).\n\nBest Regards,\nMats Taraldsvik\n\nHi,I have tried to read about Oracle's spatial partitioning feature (https://www.oracle.com/technetwork/database/enterprise-edition/spatial-twp-partitioningbp-10gr2-05-134277.pdf) and wondered if something like this is possible with PostgreSQL (with PostGIS):The first part, getting the rows into the \"right\" partition isn't especially interesting: Reduce every geometry to a point, and use the x and y coordinates separately in a range partition. This is possible with PostgreSQL as it is a normal range partition on double. The second part is more interesting. 
Whenever the spatial index is (implicitly or directly) used in a query, the partition pruning step (during execution) checks the spatial index's root bounding box to determine if the partition can be skipped.Is this possible to achieve in PostgreSQL? There is already a function in PostGIS to get the spatial index root bounding box (_postgis_index_extent(tbl regclass, col text)), but I think the real issue is that the actual SQL query might not even call the index directly (SELECT * FROM a WHERE ST_Intersects(mygeom, a.geom) - the ST_Intersects function uses the index internally).Best Regards,Mats Taraldsvik", "msg_date": "Mon, 11 Apr 2022 14:07:34 +0200", "msg_from": "Mats Taraldsvik <mats.taraldsvik@gmail.com>", "msg_from_op": true, "msg_subject": "Declarative partitioning and partition pruning/check" }, { "msg_contents": "I'm re-trying this email here, as there were no answers in the psql-general\nlist. Hope that's ok. (please cc me when answering as I'm not subscribed\n(yet) )\n\nHi,\n\nI have tried to read about Oracle's spatial partitioning feature (\nhttps://www.oracle.com/technetwork/database/enterprise-edition/spatial-twp-partitioningbp-10gr2-05-134277.pdf)\nand wondered if something like this is possible with PostgreSQL (with\nPostGIS):\n\nThe first part, getting the rows into the \"right\" partition isn't\nespecially interesting: Reduce every geometry to a point, and use the x and\ny coordinates separately in a range partition. This is possible with\nPostgreSQL as it is a normal range partition on double.\n\nThe second part is more interesting. Whenever the spatial index is\n(implicitly or directly) used in a query, the partition pruning step\n(during execution) checks the spatial index's root bounding box to\ndetermine if the partition can be skipped.\n\nIs this possible to achieve in PostgreSQL? 
There is already a function in\nPostGIS to get the spatial index root bounding box\n(_postgis_index_extent(tbl regclass, col text)), but I think the real issue\nis that the actual SQL query might not even call the index directly (SELECT\n* FROM a WHERE ST_Intersects(mygeom, a.geom) - the ST_Intersects function\nuses the index internally).\n\nBest Regards,\nMats Taraldsvik\n\nI'm re-trying this email here, as there were no answers in the psql-general list. Hope that's ok. (please cc me when answering as I'm not subscribed (yet) )Hi,I have tried to read about Oracle's spatial partitioning feature (https://www.oracle.com/technetwork/database/enterprise-edition/spatial-twp-partitioningbp-10gr2-05-134277.pdf) and wondered if something like this is possible with PostgreSQL (with PostGIS):The first part, getting the rows into the \"right\" partition isn't especially interesting: Reduce every geometry to a point, and use the x and y coordinates separately in a range partition. This is possible with PostgreSQL as it is a normal range partition on double. The second part is more interesting. Whenever the spatial index is (implicitly or directly) used in a query, the partition pruning step (during execution) checks the spatial index's root bounding box to determine if the partition can be skipped.Is this possible to achieve in PostgreSQL? 
There is already a function in PostGIS to get the spatial index root bounding box (_postgis_index_extent(tbl regclass, col text)), but I think the real issue is that the actual SQL query might not even call the index directly (SELECT * FROM a WHERE ST_Intersects(mygeom, a.geom) - the ST_Intersects function uses the index internally).Best Regards,Mats Taraldsvik", "msg_date": "Tue, 19 Apr 2022 14:39:12 +0200", "msg_from": "Mats Taraldsvik <mats.taraldsvik@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: Declarative partitioning and partition pruning/check" }, { "msg_contents": "On Tue, Apr 19, 2022 at 02:39:12PM +0200, Mats Taraldsvik wrote:\n> I'm re-trying this email here, as there were no answers in the psql-general\n> list. Hope that's ok. (please cc me when answering as I'm not subscribed\n> (yet) )\n\n-hackers is for development and bug reports, so this isn't the right place.\nIf you had mailed on -performance, I would have responded there.\n\n> The first part, getting the rows into the \"right\" partition isn't\n> especially interesting: Reduce every geometry to a point, and use the x and\n> y coordinates separately in a range partition. This is possible with\n> PostgreSQL as it is a normal range partition on double.\n\nI agree that it's conceptually simple. Have you tried it ?\n\nts=# CREATE TABLE t(a geometry) PARTITION BY RANGE(st_x(a));\nts=# CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (1)to(2);\n...\n\n> The second part is more interesting. Whenever the spatial index is\n> (implicitly or directly) used in a query, the partition pruning step\n> (during execution) checks the spatial index's root bounding box to\n> determine if the partition can be skipped.\n> \n> Is this possible to achieve in PostgreSQL? 
There is already a function in\n> PostGIS to get the spatial index root bounding box\n> (_postgis_index_extent(tbl regclass, col text)), but I think the real issue\n> is that the actual SQL query might not even call the index directly (SELECT\n> * FROM a WHERE ST_Intersects(mygeom, a.geom) - the ST_Intersects function\n> uses the index internally).\n\nFor partition pruning to work, a query would have to include a WHERE clause\nwhich is sufficient to prune the partitions. If the table is partitioned by\nRANGE(st_x(col)), then the query would need to say \"st_x(col) <= 11\" (or\nsimilar). If st_x() is compared to a constant, then partition pruning can\nhappen at planning time; if not, it might (since v11) happen at execution time.\n\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITION-PRUNING\n\nI doubt your queries would have the necesarily condition for this to do what\nyou want. It would be easy to 1) try; and then 2) post a question with the\nnecessary SQL to set up the test, and show what you've tried.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 Apr 2022 08:39:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: Declarative partitioning and partition pruning/check\n (+postgis)" } ]
[ { "msg_contents": "Hello\n\nI am Kunal Garg, interested in participating in GSoC 2022 as a part of\nPostgreSQL.\n\nI am looking forward to contribute in the project titled \"*GUI\nrepresentation of monitoring System Activity with the system_stats\nExtension in pgAdmin 4\"*, and I am in constant touch with the mentor,\nKhusboo Vashi, to understand the project better, under her guidance I have\ndeveloped my proposal for the project. The proposal is attached in this\nmail.\n\nKindly review it and let me know of potential changes that would make the\nproposal more relevant and better.\n\nI am looking forward to working on this project and am waiting in hope of a\npositive response.\n\nYours sincerely\nKunal Garg", "msg_date": "Mon, 11 Apr 2022 18:43:26 +0530", "msg_from": "Kunal Garg <gargkunal02@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: GUI representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4" }, { "msg_contents": "Hello Kunal,\n\nThank you so much for your proposal! I reviewed it and it looks good. \nOne minor thing to mention is that it seems to me like you are \ndistributing the majority of the workload at the beginning, perhaps you \nare going to need more than a couple weeks to go through the first \ndeliverables. Please correct me if I am wrong, I am not too much of an \nexpert here :)\n\nWishing you a nice weekend!\nIlaria\n\n\nOn 11.04.22 15:13, Kunal Garg wrote:\n> Hello\n>\n> I am Kunal Garg, interested in participating in GSoC 2022 as a part of \n> PostgreSQL.\n>\n> I am looking forward to contribute in the project titled \"*GUI \n> representation of monitoring System Activity with the system_stats \n> Extension in pgAdmin 4\"*, and I am in constant touch with the mentor, \n> Khusboo Vashi, to understand the project better, under her guidance I \n> have developed my proposal for the project. 
The proposal is attached \n> in this mail.\n>\n> Kindly review it and let me know of potential changes that would make \n> the proposal more relevant and better.\n>\n> I am looking forward to working on this project and am waiting in hope \n> of a positive response.\n>\n> Yours sincerely\n> Kunal Garg\n\n\n\n\n\nHello Kunal,\nThank you so much for your proposal! I reviewed it and it looks\n good. One minor thing to mention is that it seems to me like you\n are distributing the majority of the workload at the beginning,\n perhaps you are going to need more than a couple weeks to go\n through the first deliverables. Please correct me if I am wrong, I\n am not too much of an expert here :)\nWishing you a nice weekend!\n Ilaria\n\n\nOn 11.04.22 15:13, Kunal Garg wrote:\n\n\n\nHello\n \n I am Kunal Garg, interested in participating in GSoC 2022 as a\n part of PostgreSQL.\n  \n I am looking forward to contribute in the project titled \"GUI\n representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4\", and I am in\n constant touch with the mentor, Khusboo Vashi, to understand\n the project better, under her guidance I have developed my\n proposal for the project. The proposal is attached in this\n mail. \n\n Kindly review it and let me know of potential changes that\n would make the proposal more relevant and better.\n\n I am looking forward to working on this project and am waiting\n in hope of a positive response.\n\n Yours sincerely\n Kunal Garg", "msg_date": "Fri, 15 Apr 2022 22:35:10 +0200", "msg_from": "Ilaria Battiston <ilaria.battiston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GSoC: GUI representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4" }, { "msg_contents": "Hey Ilaria,\n\nThankyou so much for the review, your suggestion is quite useful. 
I think\nwhen first function of the system_stat gets connected with pgadmin4, then\nthe process will be pretty straightforward, the longer duration of testing\nwill iterate over enhancing the GUI. But to be on the safer side, we will\nadd 2 weeks extra in the development phase.\n\nThank you for reviewing the proposal.\n\nRegards\nKunal garg\n\nOn Sat, 16 Apr, 2022, 2:02 am Ilaria Battiston, <ilaria.battiston@gmail.com>\nwrote:\n\n> Hello Kunal,\n>\n> Thank you so much for your proposal! I reviewed it and it looks good. One\n> minor thing to mention is that it seems to me like you are distributing the\n> majority of the workload at the beginning, perhaps you are going to need\n> more than a couple weeks to go through the first deliverables. Please\n> correct me if I am wrong, I am not too much of an expert here :)\n>\n> Wishing you a nice weekend!\n> Ilaria\n>\n>\n> On 11.04.22 15:13, Kunal Garg wrote:\n>\n> Hello\n>\n> I am Kunal Garg, interested in participating in GSoC 2022 as a part of\n> PostgreSQL.\n>\n> I am looking forward to contribute in the project titled \"*GUI\n> representation of monitoring System Activity with the system_stats\n> Extension in pgAdmin 4\"*, and I am in constant touch with the mentor,\n> Khusboo Vashi, to understand the project better, under her guidance I have\n> developed my proposal for the project. The proposal is attached in this\n> mail.\n>\n> Kindly review it and let me know of potential changes that would make the\n> proposal more relevant and better.\n>\n> I am looking forward to working on this project and am waiting in hope of\n> a positive response.\n>\n> Yours sincerely\n> Kunal Garg\n>\n>\n\nHey Ilaria,Thankyou so much for the review, your suggestion is quite useful. I think when first function of the system_stat gets connected with pgadmin4, then the process will be pretty straightforward, the longer duration of testing will iterate over enhancing the GUI. 
But to be on the safer side, we will add 2 weeks extra in the development phase.Thank you for reviewing the proposal.Regards Kunal garg On Sat, 16 Apr, 2022, 2:02 am Ilaria Battiston, <ilaria.battiston@gmail.com> wrote:\n\nHello Kunal,\nThank you so much for your proposal! I reviewed it and it looks\n good. One minor thing to mention is that it seems to me like you\n are distributing the majority of the workload at the beginning,\n perhaps you are going to need more than a couple weeks to go\n through the first deliverables. Please correct me if I am wrong, I\n am not too much of an expert here :)\nWishing you a nice weekend!\n Ilaria\n\n\nOn 11.04.22 15:13, Kunal Garg wrote:\n\n\nHello\n \n I am Kunal Garg, interested in participating in GSoC 2022 as a\n part of PostgreSQL.\n  \n I am looking forward to contribute in the project titled \"GUI\n representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4\", and I am in\n constant touch with the mentor, Khusboo Vashi, to understand\n the project better, under her guidance I have developed my\n proposal for the project. The proposal is attached in this\n mail. \n\n Kindly review it and let me know of potential changes that\n would make the proposal more relevant and better.\n\n I am looking forward to working on this project and am waiting\n in hope of a positive response.\n\n Yours sincerely\n Kunal Garg", "msg_date": "Sat, 16 Apr 2022 19:36:12 +0530", "msg_from": "Kunal Garg <gargkunal02@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GSoC: GUI representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4" } ]
[ { "msg_contents": "Currently, XLogRecGetBlockTag has 41 callers, of which only four\nbother to check the function's result. The remainder take it on\nfaith that they got valid data back, and many of them will\nmisbehave in seriously nasty ways if they didn't. (This point\nwas drawn to my attention by a Coverity complaint.)\n\nI think we should make this a little less fragile. Since we\nalready have XLogRecGetBlockTagExtended, I propose that callers\nthat need to handle the case of no-such-block must use that,\nwhile XLogRecGetBlockTag throws an error. The attached patch\nfixes that up, and also cleans up some random inconsistency\nabout use of XLogRecHasBlockRef().\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 11 Apr 2022 14:20:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Fixing code that ignores failure of XLogRecGetBlockTag" }, { "msg_contents": "On Mon, Apr 11, 2022 at 2:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Currently, XLogRecGetBlockTag has 41 callers, of which only four\n> bother to check the function's result. The remainder take it on\n> faith that they got valid data back, and many of them will\n> misbehave in seriously nasty ways if they didn't. (This point\n> was drawn to my attention by a Coverity complaint.)\n>\n> I think we should make this a little less fragile. Since we\n> already have XLogRecGetBlockTagExtended, I propose that callers\n> that need to handle the case of no-such-block must use that,\n> while XLogRecGetBlockTag throws an error. 
The attached patch\n> fixes that up, and also cleans up some random inconsistency\n> about use of XLogRecHasBlockRef().\n\nLooks reasonable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:57:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fixing code that ignores failure of XLogRecGetBlockTag" }, { "msg_contents": "On Tue, Apr 12, 2022 at 8:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Apr 11, 2022 at 2:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Currently, XLogRecGetBlockTag has 41 callers, of which only four\n> > bother to check the function's result. The remainder take it on\n> > faith that they got valid data back, and many of them will\n> > misbehave in seriously nasty ways if they didn't. (This point\n> > was drawn to my attention by a Coverity complaint.)\n> >\n> > I think we should make this a little less fragile. Since we\n> > already have XLogRecGetBlockTagExtended, I propose that callers\n> > that need to handle the case of no-such-block must use that,\n> > while XLogRecGetBlockTag throws an error. The attached patch\n> > fixes that up, and also cleans up some random inconsistency\n> > about use of XLogRecHasBlockRef().\n>\n> Looks reasonable.\n\n+1\n\n\n", "msg_date": "Tue, 12 Apr 2022 09:16:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fixing code that ignores failure of XLogRecGetBlockTag" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Apr 12, 2022 at 8:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Mon, Apr 11, 2022 at 2:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think we should make this a little less fragile. Since we\n>>> already have XLogRecGetBlockTagExtended, I propose that callers\n>>> that need to handle the case of no-such-block must use that,\n>>> while XLogRecGetBlockTag throws an error. 
The attached patch\n>>> fixes that up, and also cleans up some random inconsistency\n>>> about use of XLogRecHasBlockRef().\n\n>> Looks reasonable.\n\n> +1\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 17:44:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Fixing code that ignores failure of XLogRecGetBlockTag" } ]
[ { "msg_contents": "Hi Hackers,\n\nI just noticed that the since the random() rewrite¹, the documentation's\nclaim² that it \"uses a simple linear congruential algorithm\" is no\nlonger accurate (xoroshiro128** is an xorshift variant, which is a\nlinear-feedback shift register algorithm).\n\nI don't have a suggestion for the exact wording, since I don't know\nwhether xoroshiro128** qualifies as \"simple\", or to what level of\nspecificity we want to document the algorithm.\n\n- ilmari\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3804539e48e794781c6145c7f988f5d507418fa8\n[2] https://www.postgresql.org/docs/devel/functions-math.html#FUNCTIONS-MATH-RANDOM-TABLE\n\n\n", "msg_date": "Mon, 11 Apr 2022 19:38:45 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "random() function documentation" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> I just noticed that the since the random() rewrite¹, the documentation's\n> claim² that it \"uses a simple linear congruential algorithm\" is no\n> longer accurate (xoroshiro128** is an xorshift variant, which is a\n> linear-feedback shift register algorithm).\n\n> I don't have a suggestion for the exact wording, since I don't know\n> whether xoroshiro128** qualifies as \"simple\", or to what level of\n> specificity we want to document the algorithm.\n\nHow about we just say \"uses a linear-feedback shift register algorithm\"?\n\"Simple\" is in the eye of the beholder anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 14:45:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> I just noticed that the since the random() rewrite¹, the documentation's\n>> claim² 
that it \"uses a simple linear congruential algorithm\" is no\n>> longer accurate (xoroshiro128** is an xorshift variant, which is a\n>> linear-feedback shift register algorithm).\n>\n>> I don't have a suggestion for the exact wording, since I don't know\n>> whether xoroshiro128** qualifies as \"simple\", or to what level of\n>> specificity we want to document the algorithm.\n>\n> How about we just say \"uses a linear-feedback shift register algorithm\"?\n\nThat works for me. Nice and simple, and not overly specific. Should we\nperhaps also add a warning that the same seed is not guaranteed to\nproduce the same sequence across different (major?) versions?\n\n> \"Simple\" is in the eye of the beholder anyway.\n\nIndeed.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Mon, 11 Apr 2022 20:00:26 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> How about we just say \"uses a linear-feedback shift register algorithm\"?\n\n> That works for me. Nice and simple, and not overly specific. Should we\n> perhaps also add a warning that the same seed is not guaranteed to\n> produce the same sequence across different (major?) versions?\n\nI wouldn't bother, on the grounds that then we'd need such disclaimers\nin a whole lot of places. 
Others might see it differently though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Apr 2022 15:19:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "On Mon, 11 Apr 2022 at 20:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> >> How about we just say \"uses a linear-feedback shift register algorithm\"?\n\nI think it'd be sufficient to just say that it's a deterministic\npseudorandom number generator. I don't see much value in documenting\nthe internal algorithm used.\n\n> > Should we\n> > perhaps also add a warning that the same seed is not guaranteed to\n> > produce the same sequence across different (major?) versions?\n>\n> I wouldn't bother, on the grounds that then we'd need such disclaimers\n> in a whole lot of places. Others might see it differently though.\n\nAgreed, though I think when the release notes are written, they ought\nto warn that the sequence will change with this release.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 12 Apr 2022 01:18:58 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" }, { "msg_contents": ">>>> How about we just say \"uses a linear-feedback shift register algorithm\"?\n>\n> I think it'd be sufficient to just say that it's a deterministic\n> pseudorandom number generator. I don't see much value in documenting\n> the internal algorithm used.\n\nHmmm… I'm not so sure. ISTM that people interested in using the random \nuser-facing variants (only random?) could like a pointer on the algorithm \nto check for the expected quality of the produced pseudo-random stream?\n\nSee attached.\n\n>>> Should we perhaps also add a warning that the same seed is not \n>>> guaranteed to produce the same sequence across different (major?) 
\n>>> versions?\n>>\n>> I wouldn't bother, on the grounds that then we'd need such disclaimers\n>> in a whole lot of places. Others might see it differently though.\n>\n> Agreed,\n\nAgreed.\n\n> though I think when the release notes are written, they ought\n> to warn that the sequence will change with this release.\n\nYes.\n\n-- \nFabien.", "msg_date": "Tue, 12 Apr 2022 10:19:07 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> How about we just say \"uses a linear-feedback shift register algorithm\"?\n\n>> I think it'd be sufficient to just say that it's a deterministic\n>> pseudorandom number generator. I don't see much value in documenting\n>> the internal algorithm used.\n\n> Hmmm… I'm not so sure. ISTM that people interested in using the random \n> user-facing variants (only random?) could like a pointer on the algorithm \n> to check for the expected quality of the produced pseudo-random stream?\n\n> See attached.\n\nI don't want to get that specific. We were not specific before and\nthere has been no call for such detail in the docs. (Unlike\nclosed-source software, anybody who really wants algorithmic details\ncan find all they want to know in the source code.) It would just\namount to another thing to forget to update next time someone changes\nthe algorithm ... 
which is a consideration that leads me to favor\nDean's phrasing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Apr 2022 11:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n\n> On Mon, 11 Apr 2022 at 20:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> >> How about we just say \"uses a linear-feedback shift register algorithm\"?\n>\n> I think it'd be sufficient to just say that it's a deterministic\n> pseudorandom number generator. I don't see much value in documenting\n> the internal algorithm used.\n>\n>> > Should we\n>> > perhaps also add a warning that the same seed is not guaranteed to\n>> > produce the same sequence across different (major?) versions?\n>>\n>> I wouldn't bother, on the grounds that then we'd need such disclaimers\n>> in a whole lot of places. Others might see it differently though.\n>\n> Agreed, though I think when the release notes are written, they ought\n> to warn that the sequence will change with this release.\n\nWFM on both points.\n\n> Regards,\n> Dean\n\n- ilmari\n\n\n", "msg_date": "Tue, 12 Apr 2022 16:12:06 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: random() function documentation" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> I think it'd be sufficient to just say that it's a deterministic\n>> pseudorandom number generator. I don't see much value in documenting\n>> the internal algorithm used.\n\n> WFM on both points.\n\nSold then, I'll make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Apr 2022 11:22:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: random() function documentation" } ]
[ { "msg_contents": "Hi,\n\nI was going through pg_stat_recovery_prefetch documentation and saw an\nissue with formatting. Attached a small patch to fix the issue. This is the\nfirst time I am sending an email to hackers. Please educate me if I\nmiss something.\n\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html#PG-STAT-RECOVERY-PREFETCH-VIEW\n\nThanks,\nSirisha", "msg_date": "Mon, 11 Apr 2022 13:11:40 -0700", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Documentation issue with pg_stat_recovery_prefetch" }, { "msg_contents": "On Tue, Apr 12, 2022 at 8:11 AM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n> I was going through pg_stat_recovery_prefetch documentation and saw an issue with formatting. Attached a small patch to fix the issue. This is the first time I am sending an email to hackers. Please educate me if I miss something.\n\nThanks Sirisha!\n\nOuch, that's embarrassing. My best guess is that I might have screwed\nthat up a long time ago while rebasing an early development version\nover commit 92f94686, which changed the link style and moved\nparagraphs around, and then never noticed that it was wrong.\nResearching that made me notice another problem: the table was using\nthe 3 column layout from a couple of years ago, because I had also\nmissed the style change in commit a0427506. Oops. Fixed.\n\n\n", "msg_date": "Tue, 12 Apr 2022 21:22:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documentation issue with pg_stat_recovery_prefetch" }, { "msg_contents": "Hi, \r\n\r\nThank you for developing the new feature.\r\nThe pg_stat_recovery_prefetch view documentation doesn't seem to have a description of the stats_reset column. 
The attached small patch adds a description of the stats_reset column.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Tuesday, April 12, 2022 6:23 PM\r\nTo: sirisha chamarthi <sirichamarthi22@gmail.com>\r\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Documentation issue with pg_stat_recovery_prefetch\r\n\r\nOn Tue, Apr 12, 2022 at 8:11 AM sirisha chamarthi <sirichamarthi22@gmail.com> wrote:\r\n> I was going through pg_stat_recovery_prefetch documentation and saw an issue with formatting. Attached a small patch to fix the issue. This is the first time I am sending an email to hackers. Please educate me if I miss something.\r\n\r\nThanks Sirisha!\r\n\r\nOuch, that's embarrassing. My best guess is that I might have screwed that up a long time ago while rebasing an early development version over commit 92f94686, which changed the link style and moved paragraphs around, and then never noticed that it was wrong.\r\nResearching that made me notice another problem: the table was using the 3 column layout from a couple of years ago, because I had also missed the style change in commit a0427506. Oops. Fixed.", "msg_date": "Wed, 20 Apr 2022 12:38:54 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Documentation issue with pg_stat_recovery_prefetch" } ]
[ { "msg_contents": "The cost_subqueryscan function does not judge whether it is parallel.\r\nregress\r\n-- Incremental sort vs. set operations with varno 0\r\nset enable_hashagg to off;\r\nexplain (costs off) select * from t union select * from t order by 1,3;\r\n QUERY PLAN \r\n----------------------------------------------------------\r\n Incremental Sort\r\n Sort Key: t.a, t.c\r\n Presorted Key: t.a\r\n -> Unique\r\n -> Sort\r\n Sort Key: t.a, t.b, t.c\r\n -> Append\r\n -> Gather\r\n Workers Planned: 2\r\n -> Parallel Seq Scan on t\r\n -> Gather\r\n Workers Planned: 2\r\n -> Parallel Seq Scan on t t_1\r\nto\r\n Incremental Sort\r\n Sort Key: t.a, t.c\r\n Presorted Key: t.a\r\n -> Unique\r\n -> Sort\r\n Sort Key: t.a, t.b, t.c\r\n -> Gather\r\n Workers Planned: 2\r\n -> Parallel Append\r\n -> Parallel Seq Scan on t\r\n -> Parallel Seq Scan on t t_1\r\nObviously the latter is less expensive\r\n\r\n\r\n\r\nbucoo@sohu.com", "msg_date": "Tue, 12 Apr 2022 14:57:16 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Tue, Apr 12, 2022 at 2:57 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> The cost_subqueryscan function does not judge whether it is parallel.\n\nI don't see any reason why it would need to do that. A subquery scan\nisn't parallel aware.\n\n> regress\n> -- Incremental sort vs. 
set operations with varno 0\n> set enable_hashagg to off;\n> explain (costs off) select * from t union select * from t order by 1,3;\n> QUERY PLAN\n> ----------------------------------------------------------\n> Incremental Sort\n> Sort Key: t.a, t.c\n> Presorted Key: t.a\n> -> Unique\n> -> Sort\n> Sort Key: t.a, t.b, t.c\n> -> Append\n> -> Gather\n> Workers Planned: 2\n> -> Parallel Seq Scan on t\n> -> Gather\n> Workers Planned: 2\n> -> Parallel Seq Scan on t t_1\n> to\n> Incremental Sort\n> Sort Key: t.a, t.c\n> Presorted Key: t.a\n> -> Unique\n> -> Sort\n> Sort Key: t.a, t.b, t.c\n> -> Gather\n> Workers Planned: 2\n> -> Parallel Append\n> -> Parallel Seq Scan on t\n> -> Parallel Seq Scan on t t_1\n> Obviously the latter is less expensive\n\nGenerally it should be. But there's no subquery scan visible here.\n\nThere may well be something wrong here, but I don't think that you've\ndiagnosed the problem correctly, or explained it clearly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 12:49:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 15, 2022 at 12:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 12, 2022 at 2:57 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> > The cost_subqueryscan function does not judge whether it is parallel.\n>\n> I don't see any reason why it would need to do that. A subquery scan\n> isn't parallel aware.\n>\n> > regress\n> > -- Incremental sort vs. 
set operations with varno 0\n> > set enable_hashagg to off;\n> > explain (costs off) select * from t union select * from t order by 1,3;\n> > QUERY PLAN\n> > ----------------------------------------------------------\n> > Incremental Sort\n> > Sort Key: t.a, t.c\n> > Presorted Key: t.a\n> > -> Unique\n> > -> Sort\n> > Sort Key: t.a, t.b, t.c\n> > -> Append\n> > -> Gather\n> > Workers Planned: 2\n> > -> Parallel Seq Scan on t\n> > -> Gather\n> > Workers Planned: 2\n> > -> Parallel Seq Scan on t t_1\n> > to\n> > Incremental Sort\n> > Sort Key: t.a, t.c\n> > Presorted Key: t.a\n> > -> Unique\n> > -> Sort\n> > Sort Key: t.a, t.b, t.c\n> > -> Gather\n> > Workers Planned: 2\n> > -> Parallel Append\n> > -> Parallel Seq Scan on t\n> > -> Parallel Seq Scan on t t_1\n> > Obviously the latter is less expensive\n>\n> Generally it should be. But there's no subquery scan visible here.\n>\n\nThe paths of subtrees in set operations would be type of subqueryscan.\nThe SubqueryScan nodes are removed later in set_plan_references() in\nthis case as they are considered as being trivial.\n\n\n>\n> There may well be something wrong here, but I don't think that you've\n> diagnosed the problem correctly, or explained it clearly.\n>\n\nSome debugging work shows that the second path is generated but then\nfails when competing with the first path. So if there is something\nwrong, I think cost calculation is the suspicious point.\n\nNot related to this topic but I noticed another problem from the plan.\nNote the first Sort node which is to unique-ify the result of the UNION.\nWhy cannot we re-arrange the sort keys from (a, b, c) to (a, c, b) so\nthat we can avoid the second Sort node?\n\nThanks\nRichard\n\nOn Fri, Apr 15, 2022 at 12:50 AM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Apr 12, 2022 at 2:57 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> The cost_subqueryscan function does not judge whether it is parallel.\n\nI don't see any reason why it would need to do that. 
A subquery scan\nisn't parallel aware.\n\n> regress\n> -- Incremental sort vs. set operations with varno 0\n> set enable_hashagg to off;\n> explain (costs off) select * from t union select * from t order by 1,3;\n>                         QUERY PLAN\n> ----------------------------------------------------------\n>  Incremental Sort\n>    Sort Key: t.a, t.c\n>    Presorted Key: t.a\n>    ->  Unique\n>          ->  Sort\n>                Sort Key: t.a, t.b, t.c\n>                ->  Append\n>                      ->  Gather\n>                            Workers Planned: 2\n>                            ->  Parallel Seq Scan on t\n>                      ->  Gather\n>                            Workers Planned: 2\n>                            ->  Parallel Seq Scan on t t_1\n> to\n>  Incremental Sort\n>    Sort Key: t.a, t.c\n>    Presorted Key: t.a\n>    ->  Unique\n>          ->  Sort\n>                Sort Key: t.a, t.b, t.c\n>                ->  Gather\n>                      Workers Planned: 2\n>                      ->  Parallel Append\n>                            ->  Parallel Seq Scan on t\n>                            ->  Parallel Seq Scan on t t_1\n> Obviously the latter is less expensive\n\nGenerally it should be. But there's no subquery scan visible here.The paths of subtrees in set operations would be type of subqueryscan.The SubqueryScan nodes are removed later in set_plan_references() inthis case as they are considered as being trivial. \n\nThere may well be something wrong here, but I don't think that you've\ndiagnosed the problem correctly, or explained it clearly.Some debugging work shows that the second path is generated but thenfails when competing with the first path. 
So if there is somethingwrong, I think cost calculation is the suspicious point.Not related to this topic but I noticed another problem from the plan.Note the first Sort node which is to unique-ify the result of the UNION.Why cannot we re-arrange the sort keys from (a, b, c) to (a, c, b) sothat we can avoid the second Sort node?ThanksRichard", "msg_date": "Fri, 15 Apr 2022 17:16:44 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "> Generally it should be. But there's no subquery scan visible here.\r\nI wrote a patch for distinct/union and aggregate support last year(I want restart it again).\r\nhttps://www.postgresql.org/message-id/2021091517250848215321%40sohu.com \r\nIf not apply this patch, some parallel paths will naver be selected.\r\n\r\n> Some debugging work shows that the second path is generated but then\r\n> fails when competing with the first path. So if there is something\r\n> wrong, I think cost calculation is the suspicious point.\r\nMaybe, I will check it again.\r\n\r\n> Not related to this topic but I noticed another problem from the plan.\r\n> Note the first Sort node which is to unique-ify the result of the UNION.\r\n> Why cannot we re-arrange the sort keys from (a, b, c) to (a, c, b) so\r\n> that we can avoid the second Sort node?\r\nThis is a regress test, just for test Incremental Sort plan.\r\n\r\n\n\n> Generally it should be. But there's no subquery scan visible here.I wrote a patch for distinct/union and aggregate support last year(I want restart it again).https://www.postgresql.org/message-id/2021091517250848215321%40sohu.com If not apply this patch, some parallel paths will naver be selected.> Some debugging work shows that the second path is generated but then> fails when competing with the first path. 
So if there is something> wrong, I think cost calculation is the suspicious point.Maybe, I will check it again.> Not related to this topic but I noticed another problem from the plan.> Note the first Sort node which is to unique-ify the result of the UNION.> Why cannot we re-arrange the sort keys from (a, b, c) to (a, c, b) so> that we can avoid the second Sort node?This is a regress test, just for test Incremental Sort plan.", "msg_date": "Fri, 15 Apr 2022 18:06:25 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 15, 2022 at 05:16:44PM +0800, Richard Guo wrote:\n> Not related to this topic but I noticed another problem from the plan.\n> Note the first Sort node which is to unique-ify the result of the UNION.\n> Why cannot we re-arrange the sort keys from (a, b, c) to (a, c, b) so\n> that we can avoid the second Sort node?\n\nI don't know, but it's possible there's a solution related to commit db0d67db2\n\"Optimize order of GROUP BY keys\" - DISTINCT is the same as GROUP BY k1, ...,\nkN. I guess UNION [DISTINCT] should learn to use GROUP BY rather than\nDISTINCT?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Apr 2022 05:10:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 15, 2022 at 6:06 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> > Generally it should be. But there's no subquery scan visible here.\n> I wrote a patch for distinct/union and aggregate support last year(I want restart it again).\n> https://www.postgresql.org/message-id/2021091517250848215321%40sohu.com\n> If not apply this patch, some parallel paths will naver be selected.\n\nSure, but that doesn't make the patch correct. 
The patch proposes\nthat, when parallelism is in use, a subquery scan will produce fewer rows\nthan when parallelism is not in use, and that's 100% false. Compare\nthis with the case of a parallel sequential scan. If a table contains\n1000 rows, and we scan it with a regular Seq Scan, the Seq Scan will\nreturn 1000 rows. But if we scan it with a Parallel Seq Scan using\nsay 4 workers, the number of rows returned in each worker will be\nsubstantially less than 1000, because 1000 is now the *total* number\nof rows to be returned across *all* processes, and what we need is the\nnumber of rows returned in *each* process.\n\nThe same thing isn't true for a subquery scan. Consider:\n\nGather\n-> Subquery Scan\n -> Parallel Seq Scan\n\nOne thing is for sure: the number of rows that will be produced by the\nsubquery scan in each backend is exactly equal to the number of rows\nthat the subquery scan receives from its subpath. Parallel Seq Scan\ncan't just return a row count estimate based on the number of rows in\nthe table, because those rows are going to be divided among the\nworkers. But the Subquery Scan doesn't do anything like that. If it\nreceives let's say 250 rows as input in each worker, it's going to\nproduce 250 output rows in each worker. Your patch says it's going to\nproduce fewer than that, and that's wrong, regardless of whether it\ngives you the plan you want in this particular case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 09:44:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "> Sure, but that doesn't make the patch correct. The patch proposes\r\n> that, when parallelism is in use, a subquery scan will produce fewer rows\r\n> than when parallelism is not in use, and that's 100% false. Compare\r\n> this with the case of a parallel sequential scan. 
If a table contains\r\n> 1000 rows, and we scan it with a regular Seq Scan, the Seq Scan will\r\n> return 1000 rows. But if we scan it with a Parallel Seq Scan using\r\n> say 4 workers, the number of rows returned in each worker will be\r\n> substantially less than 1000, because 1000 is now the *total* number\r\n> of rows to be returned across *all* processes, and what we need is the\r\n> number of rows returned in *each* process.\r\n\r\nfor now function cost_subqueryscan always uses *total* rows even for parallel\r\npaths, like this:\r\n\r\nGather (rows=30000)\r\n Workers Planned: 2\r\n -> Subquery Scan (rows=30000) -- *total* rows, should be equal subpath\r\n -> Parallel Seq Scan (rows=10000)\r\n\r\nMaybe the code:\r\n\r\n/* Mark the path with the correct row estimate */\r\nif (param_info)\r\n    path->path.rows = param_info->ppi_rows;\r\nelse\r\n    path->path.rows = baserel->rows;\r\n\r\nshould be changed to:\r\n\r\n/* Mark the path with the correct row estimate */\r\nif (path->path.parallel_workers > 0)\r\n    path->path.rows = path->subpath->rows;\r\nelse if (param_info)\r\n    path->path.rows = param_info->ppi_rows;\r\nelse\r\n    path->path.rows = baserel->rows;\r\n\r\n\r\n\r\nbucoo@sohu.com\r\n\n\n> Sure, but that doesn't make the patch correct. The patch proposes> that, when parallelism in use, a subquery scan will produce fewer rows> than when parallelism is not in use, and that's 100% false. Compare> this with the case of a parallel sequential scan. If a table contains> 1000 rows, and we scan it with a regular Seq Scan, the Seq Scan will> return 1000 rows.  But if we scan it with a Parallel Seq Scan using> say 4 workers, the number of rows returned in each worker will be> substantially less than 1000, because 1000 is now the *total* number> of rows to be returned across *all* processes, and what we need is the> number of rows returned in *each* process. for now function cost_subqueryscan always uses *total* rows even for parallel paths, 
like this: Gather (rows=30000)  Workers Planned: 2  ->  Subquery Scan  (rows=30000) -- *total* rows, should be equal subpath        ->  Parallel Seq Scan  (rows=10000) Maybe the code: /* Mark the path with the correct row estimate */ if (param_info) path->path.rows = param_info->ppi_rows; else path->path.rows = baserel->rows; should be changed to: /* Mark the path with the correct row estimate */ if (path->path.parallel_workers > 0) path->path.rows = path->subpath->rows; else if (param_info) path->path.rows = param_info->ppi_rows; else path->path.rows = baserel->rows;\nbucoo@sohu.com", "msg_date": "Wed, 20 Apr 2022 22:00:46 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Wed, Apr 20, 2022 at 10:01 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> for now fuction cost_subqueryscan always using *total* rows even parallel\n> path. like this:\n>\n> Gather (rows=30000)\n> Workers Planned: 2\n> -> Subquery Scan (rows=30000) -- *total* rows, should be equal subpath\n> -> Parallel Seq Scan (rows=10000)\n\nOK, that's bad.\n\n> Maybe the codes:\n>\n> /* Mark the path with the correct row estimate */\n> if (param_info)\n> path->path.rows = param_info->ppi_rows;\n> else\n> path->path.rows = baserel->rows;\n>\n> should change to:\n>\n> /* Mark the path with the correct row estimate */\n> if (path->path.parallel_workers > 0)\n> path->path.rows = path->subpath->rows;\n> else if (param_info)\n> path->path.rows = param_info->ppi_rows;\n> else\n> path->path.rows = baserel->rows;\n\nSuppose parallelism is not in use and that param_info is NULL. Then,\nis path->subpath->rows guaranteed to be equal to baserel->rows? If\nyes, then we don't need a three-part if statement as you propose\nhere and can just change the \"else\" clause to say path->path.rows =\npath->subpath->rows. 
If no, then your change gives the wrong answer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Apr 2022 10:13:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "> > for now fuction cost_subqueryscan always using *total* rows even parallel\r\n> > path. like this:\r\n> >\r\n> > Gather (rows=30000)\r\n> > Workers Planned: 2\r\n> > -> Subquery Scan (rows=30000) -- *total* rows, should be equal subpath\r\n> > -> Parallel Seq Scan (rows=10000)\r\n> \r\n> OK, that's bad.\r\n> \r\n> > Maybe the codes:\r\n> >\r\n> > /* Mark the path with the correct row estimate */\r\n> > if (param_info)\r\n> > path->path.rows = param_info->ppi_rows;\r\n> > else\r\n> > path->path.rows = baserel->rows;\r\n> >\r\n> > should change to:\r\n> >\r\n> > /* Mark the path with the correct row estimate */\r\n> > if (path->path.parallel_workers > 0)\r\n> > path->path.rows = path->subpath->rows;\r\n> > else if (param_info)\r\n> > path->path.rows = param_info->ppi_rows;\r\n> > else\r\n> > path->path.rows = baserel->rows;\r\n> \r\n> Suppose parallelism is not in use and that param_info is NULL. Then,\r\n> is path->subpath->rows guaranteed to be equal to baserel->rows? If\r\n> yes, then we don't need to a three-part if statement as you propose\r\n> here and can just change the \"else\" clause to say path->path.rows =\r\n> path->subpath->rows. If no, then your change gives the wrong answer.\r\nI checked some regression tests; sometimes a subquery scan has a filter,\r\nso path->subpath->rows is guaranteed *not* to be equal to baserel->rows.\r\nIf the first patch is wrong, I don't know how to fix this;\r\nit looks like I need someone's help.\r\n\r\n\n\n> > for now fuction cost_subqueryscan always using *total* rows even parallel> > path. 
like this:> >> > Gather (rows=30000)> >   Workers Planned: 2> >   ->  Subquery Scan  (rows=30000) -- *total* rows, should be equal subpath> >         ->  Parallel Seq Scan  (rows=10000)>  > OK, that's bad.>  > > Maybe the codes:> >> > /* Mark the path with the correct row estimate */> > if (param_info)> > path->path.rows = param_info->ppi_rows;> > else> > path->path.rows = baserel->rows;> >> > should change to:> >> > /* Mark the path with the correct row estimate */> > if (path->path.parallel_workers > 0)> > path->path.rows = path->subpath->rows;> > else if (param_info)> > path->path.rows = param_info->ppi_rows;> > else> > path->path.rows = baserel->rows;>  > Suppose parallelism is not in use and that param_info is NULL. Then,> is path->subpath->rows guaranteed to be equal to baserel->rows? If> yes, then we don't need to a three-part if statement as you propose> here and can just change the \"else\" clause to say path->path.rows => path->subpath->rows. If no, then your change gives the wrong answer. I checked some regression tests; sometimes a subquery scan has a filter, so path->subpath->rows is guaranteed *not* to be equal to baserel->rows. If the first patch is wrong, I don't know how to fix this; it looks like I need someone's help.", "msg_date": "Thu, 21 Apr 2022 14:38:22 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Thu, Apr 21, 2022 at 2:38 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> > Suppose parallelism is not in use and that param_info is NULL. Then,\n> > is path->subpath->rows guaranteed to be equal to baserel->rows? If\n> > yes, then we don't need to a three-part if statement as you propose\n> > here and can just change the \"else\" clause to say path->path.rows =\n> > path->subpath->rows. 
If no, then your change gives the wrong answer.\n>\n> I checked some regress test, Sometimes subquery scan have filter,\n> so path->subpath->row guaranteed *not* to be equal to baserel->rows.\n> If the first patch is false, I don't known how to fix this,\n> looks like need someone's help.\n\nPlease fix your mailer so that it doesn't send me a bounce message\nevery time I reply to one of your messages on list.\n\nI don't know how to fix this right now either, then; maybe I or\nsomeone else will have a good idea later.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Apr 2022 08:00:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "> > > Suppose parallelism is not in use and that param_info is NULL. Then,\n> > > is path->subpath->rows guaranteed to be equal to baserel->rows? If\n> > > yes, then we don't need to a three-part if statement as you propose\n> > > here and can just change the \"else\" clause to say path->path.rows =\n> > > path->subpath->rows. 
If no, then your change gives the wrong answer.\n> >\n> > I checked some regress test, Sometimes subquery scan have filter,\n> > so path->subpath->row guaranteed *not* to be equal to baserel->rows.\n> > If the first patch is false, I don't known how to fix this,\n> > looks like need someone's help.\n> \n> Please fix your mailer so that it doesn't send me a bounce message\n> every time I reply to one of your messages on list.\n\nThis message send using Outlook.\n\n> I don't know how to fix this right now either, then; maybe I or\n> someone else will have a good idea later.\n\nI don't known too.\n\n\n\n\n", "msg_date": "Fri, 22 Apr 2022 11:35:43 +0800", "msg_from": "\"bucoo\" <bucoo@sohu.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Wed, Apr 20, 2022 at 11:38 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n\n> > > for now fuction cost_subqueryscan always using *total* rows even\n> parallel\n> > > path. like this:\n> > >\n> > > Gather (rows=30000)\n> > > Workers Planned: 2\n> > > -> Subquery Scan (rows=30000) -- *total* rows, should be equal\n> subpath\n> > > -> Parallel Seq Scan (rows=10000)\n> >\n> > OK, that's bad.\n>\n\nI don't understand how that plan shape is possible. Gather requires a\nparallel aware subpath, so said subpath can be executed multiple times in\nparallel, and subquery isn't. 
If there is parallelism happening within a\nsubquery the results are consolidated using Append or Gather first - and\nthe output rows of that path entry (all subpaths of Subquery have the same\n->row value per set_subquery_size_estimates), become the input tuples for\nSubquery, to which it then applies its selectivity multiplier and stores\nthe final result in baserel->rows; which the costing code then examines\nwhen costing the RTE_SUBQUERY path entry.\n\nDavid J.\n\nOn Wed, Apr 20, 2022 at 11:38 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> > for now fuction cost_subqueryscan always using *total* rows even parallel> > path. like this:> >> > Gather (rows=30000)> >   Workers Planned: 2> >   ->  Subquery Scan  (rows=30000) -- *total* rows, should be equal subpath> >         ->  Parallel Seq Scan  (rows=10000)>  > OK, that's bad.I don't understand how that plan shape is possible.  Gather requires a parallel aware subpath, so said subpath can be executed multiple times in parallel, and subquery isn't.  If there is parallelism happening within a subquery the results are consolidated using Append or Gather first - and the output rows of that path entry (all subpaths of Subquery have the same ->row value per set_subquery_size_estimates), become the input tuples for Subquery, to which it then applies its selectivity multiplier and stores the final result in baserel->rows; which the costing code then examines when costing the RTE_SUBQUERY path entry.David J.", "msg_date": "Fri, 22 Apr 2022 08:55:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 22, 2022 at 11:55 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> On Wed, Apr 20, 2022 at 11:38 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n>>\n>> > > for now fuction cost_subqueryscan always using *total* rows even parallel\n>> > > path. 
like this:\n>> > >\n>> > > Gather (rows=30000)\n>> > > Workers Planned: 2\n>> > > -> Subquery Scan (rows=30000) -- *total* rows, should be equal subpath\n>> > > -> Parallel Seq Scan (rows=10000)\n>> >\n>> > OK, that's bad.\n>\n> I don't understand how that plan shape is possible. Gather requires a parallel aware subpath, so said subpath can be executed multiple times in parallel, and subquery isn't. If there is parallelism happening within a subquery the results are consolidated using Append or Gather first - and the output rows of that path entry (all subpaths of Subquery have the same ->row value per set_subquery_size_estimates), become the input tuples for Subquery, to which it then applies its selectivity multiplier and stores the final result in baserel->rows; which the costing code then examines when costing the RTE_SUBQUERY path entry.\n\nGather doesn't require a parallel aware subpath, just a parallel-safe\nsubpath. In a case like this, the parallel seq scan will divide the\nrows from the underlying relation across the three processes executing\nit. Each process will pass the rows it receives through its own copy\nof the subquery scan. Then, the Gather node will collect all the rows\nfrom all the workers to produce the final result.\n\nIt's an extremely important feature of parallel query that the\nparallel-aware node doesn't have to be immediately beneath the Gather.\nYou need to have a parallel-aware node in there someplace, but it\ncould be separated from the gather by any number of levels e.g.\n\nGather\n-> Nested Loop\n -> Nested Loop\n -> Nested Loop\n -> Parallel Seq Scan\n -> Index Scan\n -> Index Scan\n -> Index Scan\n\nYou can stick as many parameterized index scans in there as you like\nand you still only need one parallel-aware node at the bottom. Once\nthe parallel seq scan divides up the rows across workers, each worker\ncan perform the index lookups for the rows that it receives without\nany coordination with other workers. 
It neither knows nor cares that\nthis is happening in the midst of an operation that is parallel\noverall; all the nested loops and index scans work just as they would\nin a non-parallel plan. The only things that needed to know about\nparallelism are the operation that is dividing up the rows among\nworkers (here, the parallel seq scan) and the operation that is\ngathering up all the results produced by individual workers (here, the\ngather node).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Apr 2022 12:53:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Thu, Apr 28, 2022 at 9:53 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Apr 22, 2022 at 11:55 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > On Wed, Apr 20, 2022 at 11:38 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> >>\n> >> > > for now fuction cost_subqueryscan always using *total* rows even\n> parallel\n> >> > > path. like this:\n> >> > >\n> >> > > Gather (rows=30000)\n> >> > > Workers Planned: 2\n> >> > > -> Subquery Scan (rows=30000) -- *total* rows, should be equal\n> subpath\n> >> > > -> Parallel Seq Scan (rows=10000)\n> >> >\n> >> > OK, that's bad.\n> >\n>\n> Gather doesn't require a parallel aware subpath, just a parallel-safe\n> subpath. In a case like this, the parallel seq scan will divide the\n> rows from the underlying relation across the three processes executing\n> it. Each process will pass the rows it receives through its own copy\n> of the subquery scan. Then, the Gather node will collect all the rows\n> from all the workers to produce the final result.\n>\n>\nThank you. 
I think I got off on a tangent there and do understand the\ngeneral design here better now.\n\nI feel like the fact that the 2.4 divisor (for 2 planned workers) isn't\nshown in the explain plan anywhere is an oversight.\n\nTo move the original complaint forward a bit I am posting the three plan\nchanges that using path->subpath->rows provokes in the regression tests.\n\n======================================================================\n --\n -- Check for incorrect optimization when IN subquery contains a SRF\n --\n select * from int4_tbl o where (f1, f1) in\n (select f1, generate_series(1,50) / 10 g from int4_tbl i group by f1);\n\nThe material difference between the existing plan and this one is the\nestimation of 250 rows\nhere compared to 1 row.\nSo (rel.rows != path->subpath->rows) at the top of cost_subqueryscan\n+ -> Subquery Scan on \"ANY_subquery\" (cost=1.06..9.28\nrows=250 width=8)\n+ Output: \"ANY_subquery\".f1, \"ANY_subquery\".g\n+ Filter: (\"ANY_subquery\".f1 = \"ANY_subquery\".g)\n+ -> Result (cost=1.06..6.15 rows=250 width=8)\n======================================================================\nThe second plan change is basically this same thing, going from rows=4 to\nrows=1\ncauses the plan to include a materialize node. 
The shape for purposes of\nthe security barrier\nremains correct.\n======================================================================\nselect * from t union select * from t order by 1,3;\nGather here costs 2,600 vs the Append being 2,950 in the existing plan\nshape.\n+ -> Gather (cost=0.00..2600.00 rows=120000 width=12)\n+ Workers Planned: 2\n+ -> Parallel Append (cost=0.00..2600.00 rows=50000\nwidth=12)\n+ -> Parallel Seq Scan on t (cost=0.00..575.00\nrows=25000 width=12)\n+ -> Parallel Seq Scan on t t_1\n (cost=0.00..575.00 rows=25000 width=12)\n=======================================================================\n\nI've attached the two raw regression output diffs.\n\nUsing path->subpath->rows ignores the impact of the node's own filters, but\nthe base pre-filter number is/should be the correct one; though it is\ndifficult to say that with certainty when most of these nodes are discarded\nand one cannot debug in the middle but only observe the end results.\nDisabling that optimization is presently beyond my skill though I may take\nit up anyway as its likely still orders easier to do, and then hope some of\nthese plans produce using data to check with, than actually diving into a C\ndebugger for the first time.\n\nReverse engineering the 350 difference may be another approach - namely is\nit strictly due to the different plan shape or is it due to the number of\nrows. The fact that the row difference is 35,000 and the cost is 1%\n(cpu_tuple_cost = 0.01) of that number seems like a red herring after\nthinking it through...to many scans plus the differing shapes.\n\nDavid J.", "msg_date": "Thu, 28 Apr 2022 18:30:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 29, 2022 at 12:53 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Gather doesn't require a parallel aware subpath, just a parallel-safe\n> subpath. 
In a case like this, the parallel seq scan will divide the\n> rows from the underlying relation across the three processes executing\n> it. Each process will pass the rows it receives through its own copy\n> of the subquery scan. Then, the Gather node will collect all the rows\n> from all the workers to produce the final result.\n>\n> It's an extremely important feature of parallel query that the\n> parallel-aware node doesn't have to be immediately beneath the Gather.\n> You need to have a parallel-aware node in there someplace, but it\n> could be separated from the gather by any number of levels e.g.\n>\n> Gather\n> -> Nested Loop\n> -> Nested Loop\n> -> Nested Loop\n> -> Parallel Seq Scan\n> -> Index Scan\n> -> Index Scan\n> -> Index Scan\n>\n\nThanks for the explanation. That's really helpful to understand the\nparallel query mechanism.\n\nSo for the nodes between Gather and parallel-aware node, how should we\ncalculate their estimated rows?\n\nCurrently subquery scan is using rel->rows (if no parameterization),\nwhich I believe is not correct. That's not the size the subquery scan\nnode in each worker needs to handle, as the rows have been divided\nacross workers by the parallel-aware node.\n\nUsing subpath->rows is not correct either, as subquery scan node may\nhave quals.\n\nIt seems to me the right way is to divide the rel->rows among all the\nworkers.\n\nThanks\nRichard\n\nOn Fri, Apr 29, 2022 at 12:53 AM Robert Haas <robertmhaas@gmail.com> wrote:\nGather doesn't require a parallel aware subpath, just a parallel-safe\nsubpath. In a case like this, the parallel seq scan will divide the\nrows from the underlying relation across the three processes executing\nit. Each process will pass the rows it receives through its own copy\nof the subquery scan. 
Then, the Gather node will collect all the rows\nfrom all the workers to produce the final result.\n\nIt's an extremely important feature of parallel query that the\nparallel-aware node doesn't have to be immediately beneath the Gather.\nYou need to have a parallel-aware node in there someplace, but it\ncould be separated from the gather by any number of levels e.g.\n\nGather\n-> Nested Loop\n  -> Nested Loop\n    -> Nested Loop\n       -> Parallel Seq Scan\n       -> Index Scan\n     -> Index Scan\n   -> Index ScanThanks for the explanation. That's really helpful to understand theparallel query mechanism.So for the nodes between Gather and parallel-aware node, how should wecalculate their estimated rows?Currently subquery scan is using rel->rows (if no parameterization),which I believe is not correct. That's not the size the subquery scannode in each worker needs to handle, as the rows have been dividedacross workers by the parallel-aware node.Using subpath->rows is not correct either, as subquery scan node mayhave quals.It seems to me the right way is to divide the rel->rows among all theworkers.ThanksRichard", "msg_date": "Fri, 29 Apr 2022 16:35:59 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Currently subquery scan is using rel->rows (if no parameterization),\n> which I believe is not correct. That's not the size the subquery scan\n> node in each worker needs to handle, as the rows have been divided\n> across workers by the parallel-aware node.\n\nReally? Maybe I misunderstand the case under consideration, but\nwhat I think will be happening is that each worker will re-execute\nthe pushed-down subquery in full. Otherwise it can't compute the\ncorrect answer. 
What gets divided across the set of workers is\nthe total *number of executions* of the subquery, which should be\nindependent of the number of workers, so that the cost is (more\nor less) the same as the non-parallel case.\n\nAt least that's true for a standard correlated subplan, which is\nnormally run again for each row processed by the parent node.\nFor hashed subplans and initplans, what would have been \"execute\nonce\" semantics becomes \"execute once per worker\", creating a\nstrict cost disadvantage for parallelization. I don't know\nwhether the current costing model accounts for that. But if it\ndoes that wrong, arbitrarily altering the number of rows won't\nmake it better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Apr 2022 10:02:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 29, 2022 at 7:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Currently subquery scan is using rel->rows (if no parameterization),\n> > which I believe is not correct. That's not the size the subquery scan\n> > node in each worker needs to handle, as the rows have been divided\n> > across workers by the parallel-aware node.\n>\n> Really? Maybe I misunderstand the case under consideration, but\n> what I think will be happening is that each worker will re-execute\n> the pushed-down subquery in full. Otherwise it can't compute the\n> correct answer. What gets divided across the set of workers is\n> the total *number of executions* of the subquery, which should be\n> independent of the number of workers, so that the cost is (more\n> or less) the same as the non-parallel case.\n>\n>\nI've been operating under the belief that a subquery node (or, rather, all\nnodes) in a path get their row count estimates from self-selectivity * rows\nprovided by the next node down in the path. 
In a partial path, eventually\nthe parallel aware node at the bottom of the path divides its row count\nestimate by the parallel divisor (2.4 for two workers). That value then\nreturns back up through the path until it hits a gather. Every node in\nbetween, which are almost exclusively parallel-safe (not parallel aware),\ncan just use the row count being percolated up the path (i.e., they get the\ndivided row count in their partial pathing and full row counts in the\nnormal pathing). The gather node, realizing that the row count it is\ndealing with has been divided by the parallel divisor, undoes the division\nin order to provide the correct row count to the non-parallel executing\nnodes above it.\n\nSo a subquery is only ever executed once in a path - but the number of\nreturned rows depends on the number of planned workers done under the\nassumption that a query executed among 2 workers costs 1/2.4 the amount of\nthe same query done with just the leader. It is an independent factor\ncompared to everything else going on and so the multiplication can simply\nbe done first to the original row count.\n\n<starts looking for confirmation in the code>\n\nallpaths.c@2974-2975 (generate_gather_paths)\n(L: 2993 as well)\n(L: 3167 - generate_useful_gather_paths)\n\nrows =\ncheapest_partial_path->rows * cheapest_partial_path->parallel_workers;\n\nShouldn't that be:\n\nrows =\ncheapest_partial_path->rows * (get_parallel_divisor(cheapest_partial_path))\n\n?\n\nDavid J.\n\nOn Fri, Apr 29, 2022 at 7:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> Currently subquery scan is using rel->rows (if no parameterization),\n> which I believe is not correct. That's not the size the subquery scan\n> node in each worker needs to handle, as the rows have been divided\n> across workers by the parallel-aware node.\n\nReally?  
Maybe I misunderstand the case under consideration, but\nwhat I think will be happening is that each worker will re-execute\nthe pushed-down subquery in full.  Otherwise it can't compute the\ncorrect answer.  What gets divided across the set of workers is\nthe total *number of executions* of the subquery, which should be\nindependent of the number of workers, so that the cost is (more\nor less) the same as the non-parallel case.I've been operating under the belief that a subquery node (or, rather, all nodes) in a path get their row count estimates from self-selectivity * rows provided by the next node down in the path.  In a partial path, eventually the parallel aware node at the bottom of the path divides its row count estimate by the parallel divisor (2.4 for two workers).  That value then returns back up through the path until it hits a gather.  Every node in between, which are almost exclusively parallel-safe (not parallel aware), can just use the row count being percolated up the path (i.e., they get the divided row count in their partial pathing and full row counts in the normal pathing).  The gather node, realizing that the row count it is dealing with has been divided by the parallel divisor, undoes the division in order to provide the correct row count to the non-parallel executing nodes above it.So a subquery is only ever executed once in a path - but the number of returned rows depends on the number of planned workers done under the assumption that a query executed among 2 workers costs 1/2.4 the amount of the same query done with just the leader.  
It is an independent factor compared to everything else going on and so the multiplication can simply be done first to the original row count.<starts looking for confirmation in the code>allpaths.c@2974-2975 (generate_gather_paths)(L: 2993 as well)(L: 3167 - generate_useful_gather_paths)rows =\t\tcheapest_partial_path->rows * cheapest_partial_path->parallel_workers;Shouldn't that be:rows =cheapest_partial_path->rows * (get_parallel_divisor(cheapest_partial_path))?David J.", "msg_date": "Fri, 29 Apr 2022 07:39:39 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "I wrote:\n> Really? Maybe I misunderstand the case under consideration, but\n> what I think will be happening is that each worker will re-execute\n> the pushed-down subquery in full.\n\nOh ... nevermind that: what I was thinking about was the SubLink/SubPlan\ncase, which is unrelated to SubqueryScan. -ENOCAFFEINE, sorry about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Apr 2022 10:54:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "I wrote:\n> -ENOCAFFEINE, sorry about that.\n\nAs penance for that blunder, I spent a little time looking into this\nissue (responding to Robert's upthread complaint that it wasn't\nexplained clearly). 
See the test case in parallel_subquery.sql,\nattached, which is adapted from the test in incremental_sort.sql.\nIt produces these plans:\n\nexplain select * from t union select * from t;\n\n Unique (cost=29344.85..30544.85 rows=120000 width=12)\n -> Sort (cost=29344.85..29644.85 rows=120000 width=12)\n Sort Key: t.a, t.b, t.c\n -> Append (cost=0.00..2950.00 rows=120000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers Planned: 2\n -> Parallel Seq Scan on t (cost=0.00..575.00 rows=25000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers Planned: 2\n -> Parallel Seq Scan on t t_1 (cost=0.00..575.00 rows=25000 width=12)\n\nexplain select * from t union all select * from t;\n\n Gather (cost=0.00..1400.00 rows=120000 width=12)\n Workers Planned: 2\n -> Parallel Append (cost=0.00..1400.00 rows=50000 width=12)\n -> Parallel Seq Scan on t (cost=0.00..575.00 rows=25000 width=12)\n -> Parallel Seq Scan on t t_1 (cost=0.00..575.00 rows=25000 width=12)\n\nI take no position on whether the second plan's costs are correct;\nbut if they are, then the Gather-atop-Append structure is clearly\ncheaper than the Append-atop-Gathers structure, so the planner made\nthe wrong choice in the first case.\n\nI then modified setrefs.c to not remove SubqueryScan nodes\n(just make trivial_subqueryscan() return constant false)\nand got this output from the UNION case:\n\n Unique (cost=29344.85..30544.85 rows=120000 width=12)\n -> Sort (cost=29344.85..29644.85 rows=120000 width=12)\n Sort Key: \"*SELECT* 1\".a, \"*SELECT* 1\".b, \"*SELECT* 1\".c\n -> Append (cost=0.00..2950.00 rows=120000 width=12)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..1175.00 rows=60000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers Planned: 2\n -> Parallel Seq Scan on t (cost=0.00..575.00 rows=25000 width=12)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..1175.00 rows=60000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers 
Planned: 2\n -> Parallel Seq Scan on t t_1 (cost=0.00..575.00 rows=25000 width=12)\n\nThe UNION ALL plan doesn't change, because no SubqueryScans are ever\ncreated in that case (we flattened the UNION ALL to an appendrel\nearly on). Removing the test case's settings to allow a parallel\nplan, the non-parallelized plan for the UNION case looks like\n\n Unique (cost=30044.85..31244.85 rows=120000 width=12)\n -> Sort (cost=30044.85..30344.85 rows=120000 width=12)\n Sort Key: \"*SELECT* 1\".a, \"*SELECT* 1\".b, \"*SELECT* 1\".c\n -> Append (cost=0.00..3650.00 rows=120000 width=12)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..1525.00 rows=60000 width=12)\n -> Seq Scan on t (cost=0.00..925.00 rows=60000 width=12)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..1525.00 rows=60000 width=12)\n -> Seq Scan on t t_1 (cost=0.00..925.00 rows=60000 width=12)\n\nSo: the row counts for SubqueryScan are correct, thus the upthread\nproposal to change them is not. The cost estimates, however, seem\npretty bogus. What we're seeing here is that we're charging 600\ncost units to pass the data through SubqueryScan, and that's\nconsistent across the parallel and non-parallel cases, so it's\ncorrect on its own terms. But actually it's totally wrong because\nwe're going to elide that node later.\n\nSo I think the actual problem here is that we leave the decision\nto elide no-op SubqueryScan nodes till setrefs.c. We should detect\nthat earlier, and when it applies, label the SubqueryScanPath with\nthe exact cost of its child.\n\n(I think the current implementation might've been all right when\nit was devised, on the grounds that we had no real planning\nflexibility for UNION operations anyway. 
But now we do, and the\nbogus cost charge is causing visible problems.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 29 Apr 2022 11:53:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "I wrote:\n> So I think the actual problem here is that we leave the decision\n> to elide no-op SubqueryScan nodes till setrefs.c. We should detect\n> that earlier, and when it applies, label the SubqueryScanPath with\n> the exact cost of its child.\n\nHmm ... actually, while doing that seems like it'd be a good idea,\nit doesn't have much bearing on the case at hand. I approximated\nthe results of such a change on this test case by just removing\nthe \"cpu_tuple_cost\" component from cost_subqueryscan:\n\n@@ -1420,7 +1420,7 @@ cost_subqueryscan(SubqueryScanPath *path, PlannerInfo *root,\n get_restriction_qual_cost(root, baserel, param_info, &qpqual_cost);\n \n startup_cost = qpqual_cost.startup;\n- cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;\n+ cpu_per_tuple = qpqual_cost.per_tuple;\n run_cost = cpu_per_tuple * baserel->tuples;\n \n /* tlist eval costs are paid per output row, not per tuple scanned */\n\nand what I got was\n\n Unique (cost=28144.85..29344.85 rows=120000 width=12)\n -> Sort (cost=28144.85..28444.85 rows=120000 width=12)\n Sort Key: \"*SELECT* 1\".a, \"*SELECT* 1\".b, \"*SELECT* 1\".c\n -> Append (cost=0.00..1750.00 rows=120000 width=12)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..575.00 rows=60000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers Planned: 2\n -> Parallel Seq Scan on t (cost=0.00..575.00 rows=25000 width=12)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..575.00 rows=60000 width=12)\n -> Gather (cost=0.00..575.00 rows=60000 width=12)\n Workers Planned: 2\n -> Parallel Seq Scan on t t_1 (cost=0.00..575.00 rows=25000 width=12)\n\nThe subqueryscans are being costed the way we want now, but it's 
still\ngoing for the append-atop-gathers plan. So I dug a bit deeper, and\nfound that generate_union_paths does also create the gather-atop-append\nplan, but *it's costed at 1750 units*, exactly the same as the other\npath. So add_path is just making an arbitrary choice between two paths\nof identical cost, and it happens to pick this one.\n\n(If I don't make the above tweak to cost_subqueryscan, the same thing\nhappens, although the two competing paths now each have cost 2950.)\n\nSo: why do we come out with a cost of 1750 for the very same plan\nthat in the UNION ALL case is costed at 1400? AFAICS it's because\nthe UNION ALL case thinks that the two inputs of the parallel append\nproduce 25000 rows apiece so the parallel append produces 50000 rows,\nand it costs the append's overhead on that basis. But in the UNION\ncase, the parallel append sees two inputs that are claimed to return\n60000 rows, so the append produces 120000 rows, meaning more append\noverhead. We can't readily get EXPLAIN to print this tree since\nit's rejected by add_path, but what I'm seeing (with the above\ncosting hack) is\n\n {GATHERPATH \n :rows 120000 \n :startup_cost 0.00 \n :total_cost 1750.00 \n :subpath \n {APPENDPATH \n :parallel_aware true \n :parallel_safe true \n :parallel_workers 2 \n :rows 120000 \n :startup_cost 0.00 \n :total_cost 1750.00 \n :subpaths (\n {SUBQUERYSCANPATH \n :rows 60000 \n :startup_cost 0.00 \n :total_cost 575.00 \n :subpath \n {PATH \n :parallel_aware true \n :parallel_safe true \n :parallel_workers 2 \n :rows 25000 \n :startup_cost 0.00 \n :total_cost 575.00 \n }\n }\n {SUBQUERYSCANPATH \n :rows 60000 \n :startup_cost 0.00 \n :total_cost 575.00 \n :subpath \n {PATH \n :parallel_aware true \n :parallel_safe true \n :parallel_workers 2 \n :rows 25000 \n :startup_cost 0.00 \n :total_cost 575.00 \n }\n }\n )\n }\n }\n\nIn short, these SubqueryScans are being labeled as producing 60000 rows\nwhen their input only produces 25000 rows, which is surely 
insane.\n\nSo: even though the SubqueryScan itself isn't parallel-aware, the number\nof rows it processes has to be de-rated according to the number of workers\ninvolved. Perhaps something like the original patch in this thread is\nindeed correct. But I can't help feeling that this definition of\npath_rows is mighty unintuitive and error-prone, and that if\ncost_subqueryscan is wrong then it's likely got lots of company.\nMaybe we need to take two steps back and rethink the whole approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Apr 2022 14:09:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 29, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> In short, these SubqueryScans are being labeled as producing 60000 rows\n> when their input only produces 25000 rows, which is surely insane.\n>\n> So: even though the SubqueryScan itself isn't parallel-aware, the number\n> of rows it processes has to be de-rated according to the number of workers\n> involved.\n\n\nRight, so why does baserel.rows show 60,000 here when path->subpath->rows\nonly shows 25,000? 
Because if you substitute path->subpath->rows for\nbaserel.rows in cost_subquery you get (with your cost change above):\n\n Incremental Sort (cost=27875.50..45577.57 rows=120000 width=12) (actual\ntime=165.285..235.749 rows=60000 loops=1)\n Sort Key: \"*SELECT* 1\".a, \"*SELECT* 1\".c\n Presorted Key: \"*SELECT* 1\".a\n Full-sort Groups: 10 Sort Method: quicksort Average Memory: 28kB Peak\nMemory: 28kB\n Pre-sorted Groups: 10 Sort Method: quicksort Average Memory: 521kB\n Peak Memory: 521kB\n -> Unique (cost=27794.85..28994.85 rows=120000 width=12) (actual\ntime=157.882..220.501 rows=60000 loops=1)\n -> Sort (cost=27794.85..28094.85 rows=120000 width=12) (actual\ntime=157.881..187.232 rows=120000 loops=1)\n Sort Key: \"*SELECT* 1\".a, \"*SELECT* 1\".b, \"*SELECT* 1\".c\n Sort Method: external merge Disk: 2600kB\n -> Gather (cost=0.00..1400.00 rows=120000 width=12)\n(actual time=0.197..22.705 rows=120000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Append (cost=0.00..1400.00 rows=50000\nwidth=12) (actual time=0.015..13.101 rows=40000 loops=3)\n -> Subquery Scan on \"*SELECT* 1\"\n (cost=0.00..575.00 rows=25000 width=12) (actual time=0.014..6.864\nrows=30000 loops=2)\n -> Parallel Seq Scan on t\n (cost=0.00..575.00 rows=25000 width=12) (actual time=0.014..3.708\nrows=30000 loops=2)\n -> Subquery Scan on \"*SELECT* 2\"\n (cost=0.00..575.00 rows=25000 width=12) (actual time=0.010..6.918\nrows=30000 loops=2)\n -> Parallel Seq Scan on t t_1\n (cost=0.00..575.00 rows=25000 width=12) (actual time=0.010..3.769\nrows=30000 loops=2)\n Planning Time: 0.137 ms\n Execution Time: 239.958 ms\n(19 rows)\n\nWhich shows your 1400 cost goal from union all, and the expected row\ncounts, for gather-atop-append.\n\nThe fact that (baserel.rows > path->subpath->rows) here seems like a\nstraight bug: there are no filters involved in this case but in the\npresence of filters baserel->rows should be strictly (<=\npath->subpath->rows), right?\n\nDavid J.", "msg_date": "Fri, 29 Apr 2022 12:06:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The fact that (baserel.rows > path->subpath->rows) here seems like a\n> straight bug: there are no filters involved in this case but in the\n> presence of filters baserel->rows should be strictly (<=\n> path->subpath->rows), right?\n\nNo, because the subpath's rowcount has been derated to represent the\nnumber of rows any one worker is expected to process. Since the\nSubqueryScan is below the Gather, it has to represent that number\nof rows too. Like I said, this design is pretty confusing; though\nI do not have a better alternative to suggest.\n\nAnyway, after chewing on this for awhile, it strikes me that the\nsanest way to proceed is for cost_subqueryscan to compute its rows\nestimate as the subpath's rows estimate times the selectivity of\nthe subqueryscan's quals (if any). That'd be the natural way to\nproceed for most sorts of non-bottom-level paths to begin with.\nI think there are a few reasons why cost_subqueryscan currently\ndoes it the way it does:\n\n* By analogy to other sorts of relation-scan nodes. 
But those don't\nhave any subpath that they could consult instead. This analogy is\nreally a bit faulty, since SubqueryScan isn't a relation scan node\nin any ordinary meaning of that term.\n\n* To ensure that all paths for the same relation have the same rowcount\nestimate (modulo different parameterization or parallelism). But we'll\nhave that anyway because the only possible path type for an unflattened\nRTE_SUBQUERY rel is a SubqueryScan, so there's no risk of different\ncost_xxx functions arriving at slightly different estimates.\n\n* To avoid redundant computation of the quals' selectivity.\nThis is slightly annoying; but in practice the quals will always\nbe RestrictInfos which will cache the per-qual selectivities,\nso it's not going to cost us very much to perform that calculation\nover again.\n\nSo perhaps we should do it more like the attached, which produces\nthis plan for the UNION case:\n\n Unique (cost=28994.85..30194.85 rows=120000 width=12)\n -> Sort (cost=28994.85..29294.85 rows=120000 width=12)\n Sort Key: t.a, t.b, t.c\n -> Gather (cost=0.00..2600.00 rows=120000 width=12)\n Workers Planned: 2\n -> Parallel Append (cost=0.00..2600.00 rows=50000 width=12)\n -> Parallel Seq Scan on t (cost=0.00..575.00 rows=25000 width=12)\n -> Parallel Seq Scan on t t_1 (cost=0.00..575.00 rows=25000 width=12)\n\nIt'd still be a good idea to fix the cost estimation to not charge\nextra for SubqueryScans that will be elided later, because this\nParallel Append's cost should be 1400 not 2600. 
But as I showed\nupthread, that won't affect the plan choice for this particular test\ncase.\n\n\t\t\tregards, tom lane\n\n#text/x-diff; name=\"change-cost_subqueryscan-rowcount-estimate-2.patch\" [change-cost_subqueryscan-rowcount-estimate-2.patch] /home/postgres/zzz\n\n\n", "msg_date": "Fri, 29 Apr 2022 15:31:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "I wrote:\n> So perhaps we should do it more like the attached, which produces\n> this plan for the UNION case:\n\nsigh ... actually attached this time.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 29 Apr 2022 15:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 29, 2022 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > The fact that (baserel.rows > path->subpath->rows) here seems like a\n> > straight bug: there are no filters involved in this case but in the\n> > presence of filters baserel->rows should be strictly (<=\n> > path->subpath->rows), right?\n>\n> No, because the subpath's rowcount has been derated to represent the\n> number of rows any one worker is expected to process. Since the\n> SubqueryScan is below the Gather, it has to represent that number\n> of rows too. 
Like I said, this design is pretty confusing; though\n> I do not have a better alternative to suggest.\n>\n> Anyway, after chewing on this for awhile, it strikes me that the\n> sanest way to proceed is for cost_subqueryscan to compute its rows\n> estimate as the subpath's rows estimate times the selectivity of\n> the subqueryscan's quals (if any).\n\n\nThis is what Robert was getting at, and I followed-up on.\n\nThe question I ended up at is why doesn't baserel->rows already produce the\nvalue you now propose to calculate directly within cost_subquery\n\nset_baserel_size_estimates (multiplies rel->tuples - which is derated - by\nselectivity, sets rel->rows)\nset_subquery_size_estimates\n rel->subroot = subquery_planner(...)\n // my expectation is that\nsub_final_rel->cheapest_total_path->rows is the derated number of rows;\n // the fact you can reach the derated amount later by using\npath->subpath->rows seems to affirm this.\n sets rel->tuples from sub_final_rel->cheapest_total_path->rows)\nset_subquery_pathlist (executes the sizing call stack above, then proceeds\nto create_subqueryscan_path which in turn calls cost_subquery)\nset_rel_size\n...\n\n\n> * By analogy to other sorts of relation-scan nodes. But those don't\n> have any subpath that they could consult instead. This analogy is\n> really a bit faulty, since SubqueryScan isn't a relation scan node\n> in any ordinary meaning of that term.\n>\n\nI did observe this copy-and-paste dynamic; I take it this is why we cannot\nor would not just change all of the baserel->rows usages to\npath->subpath->rows.\n\n>\n> So perhaps we should do it more like the attached, which produces\n> this plan for the UNION case:\n>\n>\nThe fact this changes row counts in a costing function is bothersome - it\nwould be nice to go the other direction and remove the if block. More\ngenerally, path->path.rows should be read-only by the time we get to\ncosting. But I'm not out to start a revolution here either. 
But it feels\nlike we are just papering over a bug in how baserel->rows is computed; per\nmy analysis above.\n\nIn short, the solution seems like it should, and in fact does here, fix the\nobserved problem. I'm fine with that.\n\nDavid J.", "msg_date": "Fri, 29 Apr 2022 13:29:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Fri, Apr 29, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > So perhaps we should do it more like the attached, which produces\n> > this plan for the UNION case:\n>\n> sigh ... actually attached this time.\n\nI am not sure whether this is actually correct, but it seems a lot\nmore believable than the previous proposals. The problem might be more\ngeneral, though. I think when I developed this parallel query stuff I\nmodeled a lot of it on what you did for parameterized paths. 
Both\nparameterized paths and parallelism can create situations where\nexecuting a path to completion produces fewer rows than you would\notherwise get. In the case of parameterized paths, this happens\nbecause we enforce the parameterization we've chosen on top of the\nuser-supplied quals. In the case of parallelism, it happens because\nthe rows are split up across the different workers. I think I intended\nthat the \"rows\" field of RelOptInfo should be the row count for the\nrelation in total, and that the \"rows\" field of the Path should be the\nnumber of rows we expect to get for one execution of the path. But it\nseems like this problem is good evidence that I didn't find all the\nplaces that need to be adjusted for parallelism, and I wouldn't be\nvery surprised if there are a bunch of others that I overlooked.\n\nIt's not actually very nice that we end up having to call\nclauselist_selectivity() here. We've already called\nset_baserel_size_estimates() to figure out how many rows we expect to\nhave been filtered out by the quals, and it sucks to have to do it\nagain. Brainstorming wildly and maybe stupidly, I wonder if the whole\nmodel is wrong here. Maybe a path shouldn't have a row count; instead,\nmaybe it should have a multiplier that it applies to the relation's\nrow count. 
Then, if X is parameterized in the same way as its subpath\nY, we can just copy the multiplier up, but now it will be applied to\nthe new rel's \"rows\" value, which will have already been adjusted\nappropriately by set_baserel_size_estimates().\n\nAnd having thrown out that wild and crazy idea, I will now run away\nquickly and hide someplace.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 May 2022 16:54:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I am not sure whether this is actually correct, but it seems a lot\n> more believable than the previous proposals. The problem might be more\n> general, though. I think when I developed this parallel query stuff I\n> modeled a lot of it on what you did for parameterized paths. Both\n> parameterized paths and parallelism can create situations where\n> executing a path to completion produces fewer rows than you would\n> otherwise get. In the case of parameterized paths, this happens\n> because we enforce the parameterization we've chosen on top of the\n> user-supplied quals. In the case of parallelism, it happens because\n> the rows are split up across the different workers. I think I intended\n> that the \"rows\" field of RelOptInfo should be the row count for the\n> relation in total, and that the \"rows\" field of the Path should be the\n> number of rows we expect to get for one execution of the path. But it\n> seems like this problem is good evidence that I didn't find all the\n> places that need to be adjusted for parallelism, and I wouldn't be\n> very surprised if there are a bunch of others that I overlooked.\n\nI did look at the rest of costsize.c for similar instances, and didn't\nfind any. In any case, I think we have two options:\n\n1. 
Apply this fix, and in future fix any other places that we identify\nlater.\n\n2. Invent some entirely new scheme that we hope is less mistake-prone.\n\nOption #2 is unlikely to lead to any near-term fix, and we certainly\nwouldn't dare back-patch it.\n\n> It's not actually very nice that we end up having to call\n> clauselist_selectivity() here. We've already called\n> set_baserel_size_estimates() to figure out how many rows we expect to\n> have been filtered out by the quals, and it sucks to have to do it\n> again. Brainstorming wildly and maybe stupidly, I wonder if the whole\n> model is wrong here. Maybe a path shouldn't have a row count; instead,\n> maybe it should have a multiplier that it applies to the relation's\n> row count. Then, if X is parameterized in the same way as its subpath\n> Y, we can just copy the multiplier up, but now it will be applied to\n> the new rel's \"rows\" value, which will have already been adjusted\n> appropriately by set_baserel_size_estimates().\n\nI've wondered about that too, but it seems to depend on the assumption\nthat clauses are estimated independently by clauselist_selectivity, which\nhas not been true for a long time (and is getting less true not more so).\nSo we could possibly apply something like this for parallelism, but not\nfor parameterized paths, and that makes it less appealing ... IMO anyway.\n\nI have thought it might be good to explicitly mark partial paths with the\nestimated number of workers, which would be effectively the same thing\nas what you're talking about. But I wonder if we'd not still be better off\nkeeping the path rowcount as being number-of-rows-in-each-worker, and\njust scale it up by the multiplier for EXPLAIN output. (And then also\nprint the true total number of rows in EXPLAIN ANALYZE.) 
If we do the\ninverse of that, then we risk bugs from failing to correct the rowcount\nduring cost-estimation calculations.\n\nI kinda feel that the bottom line here is that cost estimation is\nhard, and we're not going to find a magic bullet that removes bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 May 2022 17:24:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Mon, May 2, 2022 at 5:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I did look at the rest of costsize.c for similar instances, and didn't\n> find any. In any case, I think we have two options:\n>\n> 1. Apply this fix, and in future fix any other places that we identify\n> later.\n>\n> 2. Invent some entirely new scheme that we hope is less mistake-prone.\n>\n> Option #2 is unlikely to lead to any near-term fix, and we certainly\n> wouldn't dare back-patch it.\n\nSure, although I think it's questionable whether we should back-patch\nanyway, since there's no guarantee that every plan change anybody gets\nwill be a desirable one.\n\n> I've wondered about that too, but it seems to depend on the assumption\n> that clauses are estimated independently by clauselist_selectivity, which\n> has not been true for a long time (and is getting less true not more so).\n> So we could possibly apply something like this for parallelism, but not\n> for parameterized paths, and that makes it less appealing ... IMO anyway.\n\nI agree. We'd have to correct for that somehow, and that might be awkward.\n\n> I have thought it might be good to explicitly mark partial paths with the\n> estimated number of workers, which would be effectively the same thing\n> as what you're talking about. But I wonder if we'd not still be better off\n> keeping the path rowcount as being number-of-rows-in-each-worker, and\n> just scale it up by the multiplier for EXPLAIN output. 
(And then also\n> print the true total number of rows in EXPLAIN ANALYZE.) If we do the\n> inverse of that, then we risk bugs from failing to correct the rowcount\n> during cost-estimation calculations.\n\nThat I don't like at all. I'm still of the opinion that it's a huge\nmistake for EXPLAIN to print int(rowcount/loops) instead of just\nrowcount. The division is never what I want and in my experience is\nalso not what other people want and often causes confusion. Both the\ndivision and the rounding lose information about precisely what row\ncount was estimated, which makes it harder to figure out where in the\nplan things went wrong. I am not at all keen on adding more ways for\nwhat we print out to be different from the information actually stored\nin the plan tree. I don't know for sure what we ought to be storing in\nthe plan tree, but I think whatever we store should also be what we\nprint. I think the fact that we've chosen to store something in the\nplan tree is strong evidence that that exact value, and not some\nquantity derived therefrom, is what's interesting.\n\n> I kinda feel that the bottom line here is that cost estimation is\n> hard, and we're not going to find a magic bullet that removes bugs.\n\nWell that much is certainly true.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 May 2022 09:07:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That I don't like at all. I'm still of the opinion that it's a huge\n> mistake for EXPLAIN to print int(rowcount/loops) instead of just\n> rowcount. The division is never what I want and in my experience is\n> also not what other people want and often causes confusion. 
Both the\n> division and the rounding lose information about precisely what row\n> count was estimated, which makes it harder to figure out where in the\n> plan things went wrong.\n\nI'm inclined to look at it a bit differently: it was a mistake to\nuse the same \"loops\" notion for parallelism as for repeated node\nexecution. But I think we are saying the same thing in one respect,\nnamely it'd be better if what EXPLAIN shows for parallelism were totals\nacross all workers rather than per-worker numbers. (I'm unconvinced\nabout whether repeated node execution ought to work like that.)\n\n> I am not at all keen on adding more ways for\n> what we print out to be different from the information actually stored\n> in the plan tree.\n\nI think the cost estimation functions want to work with per-worker\nrowcounts. We could scale that up to totals when we create the\nfinished plan tree, perhaps.\n\n> I don't know for sure what we ought to be storing in\n> the plan tree, but I think whatever we store should also be what we\n> print. I think the fact that we've chosen to store something in the\n> plan tree is strong evidence that that exact value, and not some\n> quantity derived therefrom, is what's interesting.\n\nThe only reason we store any of this in the plan tree is for\nEXPLAIN to print it out. On the other hand, I don't want the\nplanner expending any large number of cycles modifying the numbers\nit works with before putting them in the plan tree, because most\nof the time we're not doing EXPLAIN so it'd be wasted effort.\n\nIn any case, fundamental redesign of what EXPLAIN prints is a job\nfor v16 or later. 
Are you okay with the proposed patch as a v15 fix?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 May 2022 14:13:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "On Tue, May 3, 2022 at 2:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case, fundamental redesign of what EXPLAIN prints is a job\n> for v16 or later. Are you okay with the proposed patch as a v15 fix?\n\nYes. I can't really vouch for it, but I don't object to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 May 2022 14:37:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 3, 2022 at 2:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In any case, fundamental redesign of what EXPLAIN prints is a job\n>> for v16 or later. Are you okay with the proposed patch as a v15 fix?\n\n> Yes. I can't really vouch for it, but I don't object to it.\n\nI re-read the patch and noticed that I'd missed one additional change\nneeded:\n\n-\trun_cost = cpu_per_tuple * baserel->tuples;\n+\trun_cost = cpu_per_tuple * path->subpath->rows;\n\nThat is, we should estimate the number of qual evaluations and\ncpu_tuple_cost charges based on the subpath row count not the\nnon-parallel-aware relation size. So this reduces the cost of\nthe SubqueryScan itself, as well as its row count, in partial paths.\nI don't see any further change in regression test results though.\n\nPushed with that change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 May 2022 14:48:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix cost subqueryscan wrong parallel cost" } ]
[ { "msg_contents": "Hi hackers.\n\nI am unable to connect to my Postgres server (version 13 running) in Azure\nPostgres from the PSQL client built on the latest master. However, I am\nable to connect to the Postgres 15 server running locally on the machine. I\ninstalled an earlier version of the PSQL client (v 12) and was able to\nconnect to both the Azure PG instance as well as the local instance. Can\nthis be a bug in the master? I tried looking at the server logs in Azure\nbut couldn't get anything meaningful from those. Any tips on how I can\ndebug psql client further?\n\nMy local server is running with trust authentication and the remote server\nis running with md5 in the pg_hba.conf. I am not sure if this changes the\npsql behavior somehow.\n\nroot@userspgdev:/usr/local/pgsql# ./psql -U postgres -h\ninst.postgres.database.azure.com -d postgres\nbash: ./psql: No such file or directory\nroot@userspgdev:/usr/local/pgsql# psql -U postgres -h\ninst.postgres.database.azure.com -d postgres\nPassword for user postgres:\npsql (12.9 (Ubuntu 12.9-0ubuntu0.20.04.1), server 13.6)\nWARNING: psql major version 12, server major version 13.\n Some psql features might not work.\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits:\n256, compression: off)\nType \"help\" for help.\n\npostgres=> \\q\n bin/psql -U postgres -h\ninst.postgres.database.azure.com -d postgres\npsql: error: connection to server at \"inst.postgres.database.azure.com\"\n(20.116.167.xx), port 5432 failed: FATAL: no pg_hba.conf entry for host\n\"20.125.61.xx\", user \"postgres\", database \"postgres\", SSL off\n\nAlso, wondering why no error is emitted by the psql client when the\nconnection attempt fails?\n\nThanks,\nSirisha", "msg_date": "Tue, 12 Apr 2022 00:47:54 -0700", "msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Unable to connect to Postgres13 server from psql client built on\n master" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 12, 2022 at 12:47:54AM -0700, sirisha chamarthi wrote:\n>\n> I am unable to connect to my Postgres server (version 13 running) in Azure\n> Postgres from the PSQL client built on the latest master. However, I am\n> able to connect to the Postgres 15 server running locally on the machine. 
I\n> installed an earlier version of the PSQL client (v 12) and was able to\n> connect to both the Azure PG instance as well as the local instance. Can\n> this be a bug in the master? I tried looking at the server logs in Azure\n> but couldn't get anything meaningful from those. Any tips on how I can\n> debug psql client further?\n>\n> root@userspgdev:/usr/local/pgsql# psql -U postgres -h\n> inst.postgres.database.azure.com -d postgres\n> Password for user postgres:\n> psql (12.9 (Ubuntu 12.9-0ubuntu0.20.04.1), server 13.6)\n> WARNING: psql major version 12, server major version 13.\n> Some psql features might not work.\n> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits:\n> 256, compression: off)\n> Type \"help\" for help.\n>\n> postgres=> \\q\n>\n> bin/psql -U postgres -h inst.postgres.database.azure.com -d postgres\n> psql: error: connection to server at \"inst.postgres.database.azure.com\"\n> (20.116.167.xx), port 5432 failed: FATAL: no pg_hba.conf entry for host\n> \"20.125.61.xx\", user \"postgres\", database \"postgres\", SSL off\n\nIt's hard to be sure without the pg_hba.conf file, but the most likely\nexplanation is that your remote server only accepts connections with SSL and you\nhaven't built your local binaries with SSL support.\n\n\n", "msg_date": "Tue, 12 Apr 2022 16:15:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unable to connect to Postgres13 server from psql client built on\n master" } ]
[ { "msg_contents": "Hi,\n\nI want to improve PostgreSQL regression test coverage.\nSo, I wrote a proposal for it.\nPlease let me know if there are any improvements I should make before I submit.\n\nregards,\nDongWook Lee.", "msg_date": "Tue, 12 Apr 2022 17:48:40 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: Improve PostgreSQL Regression Test Coverage" } ]
[ { "msg_contents": "Hi, I'm Keshav, and I have updated my proposal. Kindly accept my changes.", "msg_date": "Tue, 12 Apr 2022 18:26:07 +0530", "msg_from": "\"S.R Keshav\" <srkeshav7@gmail.com>", "msg_from_op": true, "msg_subject": "GSOC: New and improved website for pgjdbc (JDBC) (2022)" }, { "msg_contents": "Hello Keshav,\n\nI quickly went through your proposal and it seems like it could be \nextended a bit. Do you have in mind a potential layout for the \ndeliverables? Can you split the timeline week by week or at least in 2 \nweek blocks? Can you state any major issues with the current website and \nhow you plan to improve them?\n\nRegards,\nIlaria\n\n\nOn 12.04.22 14:56, S.R Keshav wrote:\n> Hi, I'm keshav,  and I have updated my proposal. kindly accept my \n> changes.\n>\n", "msg_date": "Fri, 15 Apr 2022 22:38:31 +0200", "msg_from": "Ilaria Battiston <ilaria.battiston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GSOC: New and improved website for pgjdbc (JDBC) (2022)" } ]
[ { "msg_contents": "Recent work on VACUUM and relfrozenxid advancement required that I\nupdate the maintenance.sgml VACUUM documentation (\"Routine\nVacuuming\"). It was tricky to keep things current, due in part to\ncertain structural problems. Many of these problems are artifacts of\nhow the document evolved over time.\n\n\"Routine Vacuuming\" ought to work as a high level description of how\nVACUUM keeps the system going over time. The intended audience is\nprimarily DBAs, so low level implementation details should either be\ngiven much less prominence, or not even mentioned. We should keep it\npractical -- without going too far in the direction of assuming that\nwe know the limits of what information might be useful.\n\nMy high level concerns are:\n\n* Instead of discussing FrozenTransactionId (and then explaining how\nthat particular magic value is not really used anymore anyway), why\nnot describe freezing in terms of the high level rules?\n\nSomething along the lines of the following seems more useful: \"A tuple\nwhose xmin is frozen (and xmax is unset) is considered visible to\nevery possible MVCC snapshot. In other words, the transaction that\ninserted the tuple is treated as if it ran and committed at some point\nthat is now *infinitely* far in the past.\"\n\nIt might also be useful to describe freezing all of a live tuple's\nXIDs as roughly the opposite process as completely physically removing\na dead tuple. It follows that we don't necessarily need to freeze\nanything to advance relfrozenxid (especially not on Postgres 15).\n\n* The general description of how the XID space works similarly places\nway too much emphasis on low level details that are of very little\nrelevance.\n\nThese details would even seem totally out of place if I was the\nintended audience. The problem isn't really that the information is\ntoo technical. The problem is that we emphasize mechanistic stuff\nwhile never quite explaining the point of it all.\n\nCurrently, \"25.1.5. 
Preventing Transaction ID Wraparound Failures\"\nsays this, right up-front:\n\n\"But since transaction IDs have limited size (32 bits) a cluster that\nruns for a long time (more than 4 billion transactions) would suffer\ntransaction ID wraparound\"\n\nThis is way too mechanistic. We totally muddle things by even\nmentioning 4 billion XIDs in the first place. It seems like a\nconfusing artefact of a time before freezing was invented, back when\nyou really could have XIDs that were more than 2 billion XIDs apart.\n\nThis statement has another problem: it's flat-out untrue. The\nxidStopLimit stuff will reliably kick in at about 2 billion XIDs.\n\n* The description of wraparound sounds terrifying, implying that data\ncorruption can result.\n\nThe alarming language isn't proportionate to the true danger\n(something I complained about in a dedicated thread last year [1]).\n\n* XID space isn't really a precious resource -- it isn't even a\nresource at all IMV.\n\nISTM that we should be discussing wraparound as an issue about the\nmaximum *difference* between any two unfrozen XIDs in a\ncluster/installation.\n\nTalking about an abstract-sounding XID space seems to me to be quite\ncounterproductive. The logical XID space is practically infinite,\nafter all. We should move away from the idea that physical XID space\nis a precious resource. Sure, users are often concerned that the\nxidStopLimit mechanism might kick-in, effectively resulting in an\noutage. That makes perfect sense. But it doesn't follow that XIDs are\nprecious, and implying that they are intrinsically valuable just\nconfuses matters.\n\nFirst of all, physical XID space is usually abundantly available. A\n\"distance\" of ~2 billion XIDs is a vast distance in just about any\napplication (barring those with pathological problems, such as a\nleaked replication slot). 
Second of all, since the amount of physical\nfreezing required to be able to advance relfrozenxid by any given\namount (amount of XIDs) varies enormously, and is not even predictable\nfor a given table (because individual tables don't get their own\nphysical XID space), the age of datfrozenxid predicts very little\nabout how close we are to having the dreaded xidStopLimit mechanism\nkick in. We do need some XID-wise slack, but that's just a way of\nabsorbing shocks -- it's ballast, usually only really needed for one\nor two very large tables.\n\nThird of all, and most importantly, the whole idea that we can just\nput off freezing indefinitely and actually reduce the pain (rather\nthan having a substantial increase in problems) seems to have just\nabout no basis in reality, at least once you get into the tens of\nmillions range (though usually well before that).\n\nWhy should you be better off if all of your freezing occurs in one big\nballoon payment? Sometimes getting into debt for a while is useful,\nbut why should it make sense to keep delaying freezing? And if it\ndoesn't make sense, then why does it still make sense to treat XID\nspace as a precious resource?\n\n* We don't cleanly separate discussion of anti-wraparound autovacuums,\nand aggressive vacuums, and the general danger of wraparound (by which\nI actually mean the danger of having the xidStopLimit stop limit kick\nin).\n\nI think that we should move towards a world in which we explicitly\ntreat the autovacuum anti-wraparound criteria as not all that\ndifferent to any of the standard criteria (so we probably still have\nthe behavior with autovacuums not being cancellable, but it would be a\ndynamic thing that didn't depend on the original reason why\nautovacuum.c launched an autovacuum worker). But even now we aren't\nclear enough about the fact that anti-wraparound autovacuums really\naren't all that special. 
Which makes them seem scarier than they\nshould be.\n\n[1] https://postgr.es/m/CAH2-Wzk_FxfJvs4TnUtj=DCsokbiK0CxfjZ9jjrfSx8sTWkeUg@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 Apr 2022 14:53:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Tue, Apr 12, 2022 at 2:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> Recent work on VACUUM and relfrozenxid advancement required that I\n> update the maintenance.sgml VACUUM documentation (\"Routine\n> Vacuuming\"). It was tricky to keep things current, due in part to\n> certain structural problems. Many of these problems are artifacts of\n> how the document evolved over time.\n>\n> \"Routine Vacuuming\" ought to work as a high level description of how\n> VACUUM keeps the system going over time. The intended audience is\n> primarily DBAs, so low level implementation details should either be\n> given much less prominence, or not even mentioned. We should keep it\n> practical -- without going too far in the direction of assuming that\n> we know the limits of what information might be useful.\n>\n\n+1\n\nI've attached some off-the-cuff thoughts on reworking the first three\nparagraphs and the note.\n\nIt's hopefully useful for providing perspective if nothing else.\n\n\n> My high level concerns are:\n>\n> * Instead of discussing FrozenTransactionId (and then explaining how\n> that particular magic value is not really used anymore anyway), why\n> not describe freezing in terms of the high level rules?\n>\n\nAgreed and considered\n\n>\n> Something along the lines of the following seems more useful: \"A tuple\n> whose xmin is frozen (and xmax is unset) is considered visible to\n> every possible MVCC snapshot. 
In other words, the transaction that\n> inserted the tuple is treated as if it ran and committed at some point\n> that is now *infinitely* far in the past.\"\n>\n\nI'm assuming and caring only about visible rows when I'm reading this\nsection. Maybe we need to make that explicit - only xmin matters (and the\ninvisible frozen flag)?\n\n\n> It might also be useful to describe freezing all of a live tuple's\n> XIDs as roughly the opposite process as completely physically removing\n> a dead tuple. It follows that we don't necessarily need to freeze\n> anything to advance relfrozenxid (especially not on Postgres 15).\n>\n\nI failed to pick up on how this and the \"mod-2^32\" math interact, and I'm not\nsure I care when reading this. It made more sense to consider \"shortest\npath\" along the \"circle\".\n\n\n> Currently, \"25.1.5. Preventing Transaction ID Wraparound Failures\"\n> says this, right up-front:\n>\n> \"But since transaction IDs have limited size (32 bits) a cluster that\n> runs for a long time (more than 4 billion transactions) would suffer\n> transaction ID wraparound\"\n>\n\nI both agree and disagree - where I settled (as of now) is reflected in the\npatch.\n\n\n> * The description of wraparound sounds terrifying, implying that data\n> corruption can result.\n>\n\nAgreed, though I just skimmed a bit after the material the patch covers.\n\n>\n> * XID space isn't really a precious resource -- it isn't even a\n> resource at all IMV.\n>\n\nAgreed\n\n>\n> * We don't cleanly separate discussion of anti-wraparound autovacuums,\n> and aggressive vacuums, and the general danger of wraparound (by which\n> I actually mean the danger of having the xidStopLimit stop limit kick\n> in).\n>\n\nDidn't really get this far.\n\nI am wondering, for the more technical details, is there an existing place\nto send xrefs, do you plan to create one, or is it likely unnecessary?\nDavid J.", "msg_date": "Tue, 12 Apr 2022 16:24:17 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Tue, Apr 12, 2022 at 4:24 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I've attached some off-the-cuff thoughts on reworking the first three paragraphs and the note.\n>\n> It's hopefully useful for providing perspective if nothing else.\n\nMore perspective is definitely helpful.\n\n> I'm assuming and caring only about visible rows when I'm reading this section. Maybe we need to make that explicit - only xmin matters (and the invisible frozen flag)?\n\nThe statement \"only xmin matters\" is true in spirit. If xmax needed to\nbe frozen then we'd actually remove the whole tuple instead (unless it\nwas a MultiXact). Alternatively, if it looked like xmax needed to be\nfrozen, but the XID turned out to have been from an aborted xact, then\nwe'd clear the XID from xmax instead.\n\nFreezing tends to lag the removal of dead tuples, but that's just an\noptimization. If you set vacuum_freeze_min_age to 0 then freezing and\ndead tuple removal happen in tandem (actually, we can remove tuples\ninserted by an XID after VACUUM's OldestXmin/removal cutoff when the\ninserting xact aborts, but VACUUM makes no real promises about XIDs >=\nOldestXmin anyway).\n\n>> It might also be useful to describe freezing all of a live tuple's\n>> XIDs as roughly the opposite process as completely physically removing\n>> a dead tuple. It follows that we don't necessarily need to freeze\n>> anything to advance relfrozenxid (especially not on Postgres 15).\n>\n>\n> I failed to pickup on how this and \"mod-2^32\" math interplay, and I'm not sure I care when reading this. It made more sense to consider \"shortest path\" along the \"circle\".\n\nIt would probably be possible to teach the system to deal with\ncoexisting XIDs that are close to a full ~4 billion XIDs apart. 
We'd\nhave to give up on the mod-2^32 comparison stuff in functions like\nTransactionIdPrecedes(), and carefully keep track of things per-table,\nand carry that context around a lot more. I certainly don't think that\nthat's a good idea (if 2 billion XIDs wasn't enough, why should 4\nbillion XIDs be?), but it does seem feasible.\n\nMy point is this: there is nothing particularly intuitive or natural\nabout the current ~2 billion XID limit, even if you already know that\nXIDs are generally represented on disk as 32-bit unsigned integers.\nAnd so the fact is that we are already asking users to take it on\nfaith that there are truly good reasons why the system cannot tolerate\nany scenario in which two unfrozen XIDs are more than about ~2 billion\nXIDs apart. Why not just admit that, and then deem the XID comparison\nrules out of scope for this particular chapter of the docs?\n\n*Maybe* it's still useful to discuss why things work that way in code\nlike TransactionIdPrecedes(), but that's a totally different\ndiscussion -- it doesn't seem particularly relevant to the design of\nVACUUM, no matter the audience. Most DBAs will just accept that the\n\"XID distance\" limit/invariant is about ~2 billion XIDs for esoteric\nimplementation reasons, with some vague idea of why it must be so\n(e.g., \"32-bit integers don't have enough space\"). They will be no\nworse off for it.\n\n(Bear in mind that mod-2^32 comparison stuff was only added when\nfreezing/wraparound was first implemented back in 2001, by commit\nbc7d37a525.)\n\n>> Currently, \"25.1.5. 
Preventing Transaction ID Wraparound Failures\"\n>> says this, right up-front:\n>>\n>> \"But since transaction IDs have limited size (32 bits) a cluster that\n>> runs for a long time (more than 4 billion transactions) would suffer\n>> transaction ID wraparound\"\n>\n>\n> I both agree and disagree - where I settled (as of now) is reflected in the patch.\n\nI just don't think that you need to make it any more complicated than\nthis: physical XID values are only meaningful when compared to other\nXIDs from the same cluster. The system needs to make sure that no two\nXIDs can ever be more than about 2 billion XIDs apart, and here's how\nyou as a DBA can help the system to make sure of that.\n\nDiscussion of the past becoming the future just isn't helpful, because\nthat simply cannot ever happen on any version of Postgres from the\nlast decade. Freezing is more or less an overhead of storing data in\nPostgres long term (even medium term) -- that is the simple reality.\nWe should say so.\n\n> I am wondering, for the more technical details, is there an existing place to send xrefs, do you plan to create one, or is it likely unnecessary?\n\nI might end up doing that, but just want to get a general sense of how\nother hackers feel about it for now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 Apr 2022 17:22:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Tue, Apr 12, 2022 at 5:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> I just don't think that you need to make it any more complicated than\n> this: physical XID values are only meaningful when compared to other\n> XIDs from the same cluster. The system needs to make sure that no two\n> XIDs can ever be more than about 2 billion XIDs apart, and here's how\n> you as a DBA can help the system to make sure of that.\n>\n>\nI decided to run with that perspective and came up with the following rough\ndraft. 
A decent amount of existing material I would either just remove or\nplace elsewhere as \"see for details\".\n\nThe following represents the complete section.\n\nDavid J.\n\n <para>\n This vacuum responsibility is necessary due to the fact that a\ntransaction ID (xid)\n has a lifetime of 2 billion transactions. The rows created by a given\ntransaction\n (recorded in xmin) must be frozen prior to the expiration of the xid.\n (The expired xid values can then be resurrected, see ... for details).\n This is done by flagging the rows as frozen and thus visible for the\nremainder\n of the row's life.\n </para>\n\n <para>\n While vacuum will not touch a row's xmin while updating its frozen\nstatus, two reserved xid\n values may be seen. <literal>BootstrapTransactionId</literal> (1) may\nbe seen on system catalog\n tables to indicate records inserted during initdb.\n<literal>FrozenTransactionId</literal> (2)\n may be seen on any table and also indicates that the row is frozen.\nThis was the mechanism\n used in versions prior to 9.4, when it was decided to keep the xmin\nunchanged for forensic use.\n </para>\n\n <para>\n <command>VACUUM</command> uses the <link\nlinkend=\"storage-vm\">visibility map</link>\n to determine which pages of a table must be scanned. Normally, it\n will skip pages that don't have any dead row versions even if those\npages\n might still have row versions with old XID values. 
Therefore, normal\n <command>VACUUM</command>s won't always freeze every old row version in\nthe table.\n When that happens, <command>VACUUM</command> will eventually need to\nperform an\n <firstterm>aggressive vacuum</firstterm>, which will freeze all\neligible unfrozen\n XID and MXID values, including those from all-visible but not\nall-frozen pages.\n In practice most tables require periodic aggressive vacuuming.\n </para>\n\n <para>\n Thus, an aging transaction will potentially pass a number of milestone\nages,\n controlled by various configuration settings or hard-coded into the\nserver,\n as it awaits its fate either being memorialized cryogenically or in\ndeath.\n While the following speaks of an individual transaction's age, in\npractice\n each table has a relfrozenxid attribute which is used by the system as a\nreference\n age, as it is the oldest potentially living transaction on the table (see\nxref for details).\n </para>\n\n <para>\n The first milestone is controlled by vacuum_freeze_min_age (50 million)\nand marks the age\n at which the row becomes eligible to become frozen.\n </para>\n <para>\n Next up is vacuum_freeze_table_age (120 million). Before this age the\nrow can be frozen,\n but a non-aggressive vacuum may not encounter the row due to the\nvisibility\n map optimizations described above. Vacuums performed while relfrozenxid\n is older than this age will be done aggressively.\n </para>\n <para>\n For tables where routine complete vacuuming doesn't happen, the\nauto-vacuum\n daemon acts as a safety net. 
When the age of the row exceeds\n autovacuum_freeze_max_age (200 million) the autovacuum daemon, even if\ndisabled for the table,\n will perform an anti-wraparound vacuum on the table (see below).\n </para>\n <para>\n Finally, as a measure of last resort, the system will begin emitting\nwarnings\n (1.940 billion) and then (1.997 billion) shutdown.\n It may be restarted in single user mode for manual aggressive vacuuming.\n </para>\n\n <para>\n An anti-wraparound vacuum is much more expensive than an aggressive\nvacuum and\n so the gap between the vacuum_freeze_table_age and\nautovacuum_freeze_max_age\n should be somewhat large (vacuum age must be at most 95% of the\nautovacuum age\n to be meaningful).\n </para>\n\n <para>\n Transaction history and commit status storage requirements are directly\nrelated to\n <varname>autovacuum_freeze_max_age</varname> due to retention policies\nbased upon\n that age. See xref ... for additional details.\n </para>\n\n <para>\n The reason for vacuum_freeze_min_age is to manage the trade-off between\nminimizing\n rows marked dead that are already frozen versus minimizing the number\nof rows\n being frozen aggressively.\n </para>\n\n", "msg_date": "Tue, 12 Apr 2022 19:20:59 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Tue, Apr 12, 2022 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My high level concerns are:\n>\n> * Instead of discussing FrozenTransactionId (and then explaining how\n> that particular magic value is not really used anymore anyway), why\n> not describe freezing in terms of the high level rules?\n>\n> Something along the lines of the following seems more useful: \"A tuple\n> whose xmin is frozen (and xmax is unset) is considered visible to\n> every possible MVCC snapshot. In other words, the transaction that\n> inserted the tuple is treated as if it ran and committed at some point\n> that is now *infinitely* far in the past.\"\n\nI agree with this idea.\n\n> * The description of wraparound sounds terrifying, implying that data\n> corruption can result.\n>\n> The alarming language isn't proportionate to the true danger\n> (something I complained about in a dedicated thread last year [1]).\n\nI mostly agree with this, but not entirely. The section needs some\nrephrasing, but xidStopLimit doesn't apply in single-user mode, and\nrelfrozenxid and datfrozenxid values can and do get corrupted. So it's\nnot a purely academic concern.\n\n> * XID space isn't really a precious resource -- it isn't even a\n> resource at all IMV.\n\nI disagree with this. Usable XID space is definitely a resource, and\nif you're in the situation where you care deeply about this section of\nthe documentation, it's probably one in short supply. 
Being careful\nnot to expend too many XIDs while fixing the problems that have caused\nyou to be short of safe XIDs is *definitely* a real thing.\n\n> * We don't cleanly separate discussion of anti-wraparound autovacuums,\n> and aggressive vacuums, and the general danger of wraparound (by which\n> I actually mean the danger of having the xidStopLimit stop limit kick\n> in).\n\nI think it is wrong to conflate wraparound with xidStopLimit.\nxidStopLimit is the final defense against an actual wraparound, and\nlike I say, an actual wraparound is quite possible if you put the\nsystem in single user mode and then do something like this:\n\nbackend> VACUUM FULL;\n\nBig ouch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Apr 2022 11:40:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Wed, Apr 13, 2022 at 8:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Something along the lines of the following seems more useful: \"A tuple\n> > whose xmin is frozen (and xmax is unset) is considered visible to\n> > every possible MVCC snapshot. In other words, the transaction that\n> > inserted the tuple is treated as if it ran and committed at some point\n> > that is now *infinitely* far in the past.\"\n>\n> I agree with this idea.\n\nCool. Maybe I should write a doc patch just for this part, then.\n\nWhat do you think of the idea of relating freezing to removing tuples\nby VACUUM at this point? This would be a basis for explaining how\nfreezing and tuple removal are constrained by the same cutoff. 
A very\nold snapshot can hold up cleanup, but it can also hold up freezing to\nthe same degree (it's just not as obvious because we are less eager\nabout freezing by default).\n\n> > The alarming language isn't proportionate to the true danger\n> > (something I complained about in a dedicated thread last year [1]).\n>\n> I mostly agree with this, but not entirely. The section needs some\n> rephrasing, but xidStopLimit doesn't apply in single-user mode, and\n> relfrozenxid and datfrozenxid values can and do get corrupted. So it's\n> not a purely academic concern.\n\nI accept that the distinction you want to make is valid. More on that below.\n\n> > * XID space isn't really a precious resource -- it isn't even a\n> > resource at all IMV.\n>\n> I disagree with this. Usable XID space is definitely a resource, and\n> if you're in the situation where you care deeply about this section of\n> the documentation, it's probably one in short supply. Being careful\n> not to expend too many XIDs while fixing the problems that have caused\n> you to be short of safe XIDs is *definitely* a real thing.\n\nI may have gone too far with this metaphor. My point was mostly that\nXID space has a highly unpredictable cost (paid in freezing).\n\nPerhaps we can agree on some (or even all) of the following specific points:\n\n* We shouldn't mention \"4 billion XIDs\" at all.\n\n* We should say that the issue is an issue of distances between\nunfrozen XIDs. The maximum distance that can ever be allowed to emerge\nbetween any two unfrozen XIDs in a cluster is about 2 billion XIDs.\n\n* We don't need to say anything about how XIDs are compared, normal vs\npermanent XIDs, etc.\n\n* The system takes drastic intervention to prevent this implementation\nrestriction from becoming a problem, starting with anti-wraparound\nautovacuums. Then there's the failsafe. 
Finally, there's the\nxidStopLimit mechanism, our last line of defense.\n\n> I think it is wrong to conflate wraparound with xidStopLimit.\n> xidStopLimit is the final defense against an actual wraparound, and\n> like I say, an actual wraparound is quite possible if you put the\n> system in single user mode and then do something like this:\n\nI forgot to emphasize one aspect of the problem that seems quite\nimportant: the document itself seems to conflate the xidStopLimit\nmechanism with true wraparound. At least I thought so. Last year's\nthread on this subject ('What is \"wraparound failure\", really?') was\nmostly about that confusion. I personally found that very confusing,\nand I doubt that I'm the only one.\n\nThere is no good reason to use single user mode anymore (a related\nproblem with the docs is that we still haven't made that point). And\nthe pg_upgrade bug that led to invalid relfrozenxid values was\nflagrantly just a bug (adding a WARNING for this recently, in commit\ne83ebfe6). So while I accept that the distinction you're making here\nis valid, maybe we can fix the single user mode doc bug too, removing\nthe need to discuss \"true wraparound\" as a general phenomenon. You\nshouldn't ever see it in practice anymore. If you do then either\nyou've done something that \"invalidated the warranty\", or you've run\ninto a legitimate bug.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 Apr 2022 09:34:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Wed, Apr 13, 2022 at 12:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> What do you think of the idea of relating freezing to removing tuples\n> by VACUUM at this point? This would be a basis for explaining how\n> freezing and tuple removal are constrained by the same cutoff. 
A very\n> old snapshot can hold up cleanup, but it can also hold up freezing to\n> the same degree (it's just not as obvious because we are less eager\n> about freezing by default).\n\nI think something like that could be useful, if we can find a way to\nword it sufficiently clearly.\n\n> Perhaps we can agree on some (or even all) of the following specific points:\n>\n> * We shouldn't mention \"4 billion XIDs\" at all.\n>\n> * We should say that the issue is an issue of distances between\n> unfrozen XIDs. The maximum distance that can ever be allowed to emerge\n> between any two unfrozen XIDs in a cluster is about 2 billion XIDs.\n>\n> * We don't need to say anything about how XIDs are compared, normal vs\n> permanent XIDs, etc.\n>\n> * The system takes drastic intervention to prevent this implementation\n> restriction from becoming a problem, starting with anti-wraparound\n> autovacuums. Then there's the failsafe. Finally, there's the\n> xidStopLimit mechanism, our last line of defense.\n\nThose all sound pretty reasonable. There's a little bit of doubt in my\nmind about the third one; I think it could possibly be useful to\nexplain that the XID space is circular and 0-2 are special, but maybe\nnot.\n\n> > I think it is wrong to conflate wraparound with xidStopLimit.\n> > xidStopLimit is the final defense against an actual wraparound, and\n> > like I say, an actual wraparound is quite possible if you put the\n> > system in single user mode and then do something like this:\n>\n> I forget to emphasize one aspect of the problem that seems quite\n> important: the document itself seems to conflate the xidStopLimit\n> mechanism with true wraparound. At least I thought so. Last year's\n> thread on this subject ('What is \"wraparound failure\", really?') was\n> mostly about that confusion. 
I personally found that very confusing,\n> and I doubt that I'm the only one.\n\nOK.\n\n> There is no good reason to use single user mode anymore (a related\n> problem with the docs is that we still haven't made that point). And\n\nAgreed.\n\n> the pg_upgrade bug that led to invalid relfrozenxid values was\n> flagrantly just a bug (adding a WARNING for this recently, in commit\n> e83ebfe6). So while I accept that the distinction you're making here\n> is valid, maybe we can fix the single user mode doc bug too, removing\n> the need to discuss \"true wraparound\" as a general phenomenon. You\n> shouldn't ever see it in practice anymore. If you do then either\n> you've done something that \"invalidated the warranty\", or you've run\n> into a legitimate bug.\n\nI think it is probably important to discuss this, but along the lines\nof: it is possible to bypass all of these safeguards and cause a true\nwraparound by running in single-user mode. Don't do that. There's no\nwraparound situation that can't be addressed just fine in multi-user\nmode, and here's how to do that. In previous releases, we used to\nsometimes recommend single user mode, but that's no longer necessary\nand not a good idea, so steer clear.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Apr 2022 16:24:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Wed, Apr 13, 2022 at 1:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Apr 13, 2022 at 12:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > What do you think of the idea of relating freezing to removing tuples\n> > by VACUUM at this point? This would be a basis for explaining how\n> > freezing and tuple removal are constrained by the same cutoff.\n\n> I think something like that could be useful, if we can find a way to\n> word it sufficiently clearly.\n\nWhat if the current \"25.1.5. 
Preventing Transaction ID\nWraparound Failures\" section was split into two parts? The first part\nwould cover freezing, the second part would cover\nrelfrozenxid/relminmxid advancement.\n\nFreezing can sensibly be discussed before introducing relfrozenxid.\nFreezing is a maintenance task that makes tuples self-contained\nthings, suitable for long term storage. Freezing makes tuples not rely\non transient transaction metadata (mainly clog), and so is an overhead\nof storing data in Postgres long term.\n\nThat's how I think of it, at least. That definition seems natural to me.\n\n> Those all sound pretty reasonable.\n\nGreat.\n\nI have two more things that I see as problems. Would be good to get\nyour thoughts here, too. They are:\n\n1. We shouldn't really be discussing VACUUM FULL here at all, except\nto say that it's out of scope, and probably a bad idea.\n\nYou once wrote about the problem of how VACUUM FULL is perceived by\nusers (VACUUM FULL doesn't mean \"VACUUM, but better\"), expressing an\nopinion of VACUUM FULL that I agree with fully. The docs definitely\ncontributed to that problem.\n\n2. We don't go far enough in emphasizing the central role of autovacuum.\n\nTechnically the entire section assumes that its primary audience are\nthose users that have opted to not use autovacuum. This seems entirely\nbackwards to me.\n\nWe should make it clear that technically autovacuum isn't all that\ndifferent from running your own VACUUM commands, because that's an\nimportant part of understanding autovacuum. But that's all. ISTM that\nanybody that *entirely* opts out of using autovacuum is just doing it\nwrong (besides, it's kind of impossible to do it anyway, what with\nanti-wraparound autovacuum being impossible to disable).\n\nThere is definitely a role for using tools like cron to schedule\noff-hours VACUUM operations, and that's still worth pointing out\nprominently. 
But that should be a totally supplementary thing, used\nwhen the DBA understands that running VACUUM off-hours is less\ndisruptive.\n\n> There's a little bit of doubt in my\n> mind about the third one; I think it could possibly be useful to\n> explain that the XID space is circular and 0-2 are special, but maybe\n> not.\n\nI understand the concern. I'm not saying that this kind of information\ndoesn't have any business being in the docs. Just that it has no\nbusiness being in this particular chapter of the docs. In fact, it\ndoesn't even belong in \"III. Server Administration\". If it belongs\nanywhere, it should be in some chapter from \"VII. Internals\".\n\nDiscussing it here just seems inappropriate (and would be even if it\nwasn't how we introduce discussion of wraparound). It's really only\ntangentially related to VACUUM anyway. It seems like it should be\ncovered when discussing the heapam on-disk representation.\n\n> I think it is probably important to discuss this, but along the lines\n> of: it is possible to bypass all of these safeguards and cause a true\n> wraparound by running in single-user mode. Don't do that. There's no\n> wraparound situation that can't be addressed just fine in multi-user\n> mode, and here's how to do that. In previous releases, we used to\n> sometimes recommend single user mode, but that's no longer necessary\n> and not a good idea, so steer clear.\n\nYeah, that should probably happen somewhere.\n\nOn the other hand...why do we even need to tolerate wraparound in\nsingle-user mode? 
I do see some value in reserving extra XIDs that can\nbe used in single-user mode (apparently single-user mode can be used\nin scenarios where you have buggy event triggers, things like that).\nBut that in itself does not justify allowing single-user mode to\nexceed xidWrapLimit.\n\nWhy shouldn't single-user mode also refuse to allocate new XIDs when\nwe reach xidWrapLimit (as opposed to when we reach xidStopLimit)?\n\nMaybe there is a good reason to believe that allowing single-user mode\nto corrupt the database is the lesser evil, but if there is then I'd\nlike to know the reason.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 Apr 2022 14:18:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Wed, Apr 13, 2022 at 2:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, Apr 13, 2022 at 1:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Apr 13, 2022 at 12:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > What do you think of the idea of relating freezing to removing tuples\n> > > by VACUUM at this point? This would be a basis for explaining how\n> > > freezing and tuple removal are constrained by the same cutoff.\n>\n> > I think something like that could be useful, if we can find a way to\n> > word it sufficiently clearly.\n>\n> What if the current \"25.1.5. Preventing Transaction ID\n> Wraparound Failures\" section was split into two parts? The first part\n> would cover freezing, the second part would cover\n> relfrozenxid/relminmxid advancement.\n>\n> Freezing can sensibly be discussed before introducing relfrozenxid.\n> Freezing is a maintenance task that makes tuples self-contained\n> things, suitable for long term storage. Freezing makes tuples not rely\n> on transient transaction metadata (mainly clog), and so is an overhead\n> of storing data in Postgres long term.\n>\n> That's how I think of it, at least. 
That definition seems natural to me.\n>\n\nI was trying to do that with my second try, and partially failed. I'm on\nboard with the idea though.\n\n\n> > Those all sound pretty reasonable.\n>\n> Great.\n>\n> I have two more things that I see as problems. Would be good to get\n> your thoughts here, too. They are:\n>\n> 1. We shouldn't really be discussing VACUUM FULL here at all, except\n> to say that it's out of scope, and probably a bad idea.\n>\n> You once wrote about the problem of how VACUUM FULL is perceived by\n> users (VACUUM FULL doesn't mean \"VACUUM, but better\"), expressing an\n> opinion of VACUUM FULL that I agree with fully. The docs definitely\n> contributed to that problem.\n>\n\nI agree. I would remove VACUUM FULL from the \"Vacuuming Basics\" and\n\"Recovering Disk Space\" section aside from adding a warning box saying it\nexists, it is not part of routine maintenance (hence out-of-scope for the\n\"Routine Vacuuming\" Chapter), and to look elsewhere for details.\n\n>\n> 2. We don't go far enough in emphasizing the central role of autovacuum.\n>\n> Technically the entire section assumes that its primary audience are\n> those users that have opted to not use autovacuum. This seems entirely\n> backwards to me.\n>\n\nI would be on board with having the language of the entire section written\nwith the assumption that autovacuum is enabled, with a single statement\nupfront that this is the case. 
Most of the content remains as-is but we\nremove a non-trivial number of sentences and fragments of the form \"The\nautovacuum daemon, if enabled, will...\" and \"For those not using\nautovacuum,...\"\n\nIf the basic content is deemed worthy of preservation, relocating all of\nthose kinds of hints and whatnot to a single \"supplementing or disabling\nauto-vacuum\" section.\n\n\n> > There's a little bit of doubt in my\n> > mind about the third one; I think it could possibly be useful to\n> > explain that the XID space is circular and 0-2 are special, but maybe\n> > not.\n>\n> I understand the concern. I'm not saying that this kind of information\n> doesn't have any business being in the docs. Just that it has no\n> business being in this particular chapter of the docs. In fact, it\n> doesn't even belong in \"III. Server Administration\". If it belongs\n> anywhere, it should be in some chapter from \"VII. Internals\".\n>\n\nI think we do want to relocate some of this material elsewhere, and\nInternals seems probable, and we'd want to have a brief sentence or two\nhere before pointing the reader to more information. I'm sure we'll come\nto some conclusion on the level of detail that lead-in should include.\nLess is more to start with. Unless the rest of the revised chapter is\ngoing to lean heavily into it.\n\n\n>\n> Why shouldn't single-user mode also refuse to allocate new XIDs when\n> we reach xidWrapLimit (as opposed to when we reach xidStopLimit)?\n>\n>\nI lack the familiarity with the details here to comment on this last major\npoint.\n\nI do think the \"Server Administration\" section is missing a chapter though\n- \"Error Handling and Recovery\".\n\nWith that chapter in place I would mention the warning threshold in the\nroutine maintenance chapter as something that might be seen if routine\nmaintenance is misconfigured or encounters problems. 
Then direct the user\nto \"Error Handling and Recovery\" for discussion about the warning and\nwhatever else may happen if it is ignored; and how to go about fixing the\nproblem(s) that caused the warning.\n\nDavid J.", "msg_date": "Wed, 13 Apr 2022 15:03:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" }, { "msg_contents": "On Thu, Apr 14, 2022 at 5:03 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n\n> I would be on board with having the language of the entire section written with the assumption that autovacuum is enabled, with a single statement upfront that this is the case. Most of the content remains as-is but we remove a non-trivial number of sentences and fragments of the form \"The\nautovacuum daemon, if enabled, will...\" and \"For those not using\nautovacuum,...\"\n\n+1\n\nThe second one goes on to say \"a typical approach...\", which seems to\nimply there are plenty of installations that hum along happily with\nautovacuum disabled. If there are, I haven't heard of any. (Aside: In\nmy experience, it's far more common for someone to disable autovacuum\non 118 tables via reloptions for who-knows-what-reason and without\nregard for the consequences). 
Also:\n\n- \"Some database administrators will want to supplement or replace the\ndaemon's activities with manually-managed VACUUM commands\". I'm not\nsure why we go as far as to state that *replacing* is an option to\nconsider.\n\n- \" you will need to use VACUUM FULL, or alternatively CLUSTER or one\nof the table-rewriting variants of ALTER TABLE.\"\n\nIf you follow the link to the ALTER TABLE documentation, there is no\neasy-to-find info on what these table-rewriting variants might be.\nWithin that page there are two instances of the text \"one of the forms\nof ALTER TABLE that ...\", but again no easy-to-find advice on what\nthose might be. Furthermore, I don't recall there being an ALTER TABLE\nthat rewrites the table with no other effects (*). So if you find\nyourself *really* needing to VACUUM FULL or CLUSTER, which primary\neffect of ALTER TABLE should they consider, in order to get the side\neffect of rewriting the table? Why are we mentioning ALTER TABLE here\nat all?\n\n> If the basic content is deemed worthy of preservation, relocating all of those kinds of hints and whatnot to a single \"supplementing or disabling auto-vacuum\" section.\n\nI think any mention of disabling should be in a context where it is\nnot assumed to be normal, i.e. exceptional situations. Putting it in a\nsection heading makes it too normal. Here's how I think about this: do\nwe have a section heading anywhere on disabling fsync? I know it's not\nthe same, but that's how I think about it.\n\n(*) Should there be? As alluded to upthread, VACUUM FULL is a terrible\nname for what it does as of 9.0. I continue to encounter admins who\nhave a weekly script that runs database-wide VACUUM FULL, followed by\nREINDEX. Or, upon asking someone to run a manual vacuum on a table,\nwill echo what they think they heard: \"okay, so run a full vacuum\". 
I\nwould prefer these misunderstandings to get a big fat syntax error if\nthey are carried out.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 13:36:01 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Improving the \"Routine Vacuuming\" docs" } ]
[ { "msg_contents": "Hi,\n\nAdd --{no-,}bypassrls flags to createuser.\nThe following is an example of execution.\n--\n$ createuser a --bypassrls\n$ psql -c \"\\du a\"\n List of roles\n Role name | Attributes | Member of\n-----------+------------+-----------\n a | Bypass RLS | {}\n\n-- \n\nDo you think?\n\nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 13 Apr 2022 14:51:35 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Wed, 13 Apr 2022 14:51:35 +0900, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote in \n> Hi,\n> \n> Add --{no-,}bypassrls flags to createuser.\n> The following is an example of execution.\n> --\n> $ createuser a --bypassrls\n> $ psql -c \"\\du a\"\n> List of roles\n> Role name | Attributes | Member of\n> -----------+------------+-----------\n> a | Bypass RLS | {}\n> \n> -- \n> \n> Do you think?\n\nIt is sensible to rig createuser command with full capability of\nCREATE ROLE is reasonable.\n\nOnly --replication is added by commit 9b8aff8c19 (2010) since\n8ae0d476a9 (2005). 
BYPASSRLS and NOBYPASSRLS were introduced by\n491c029dbc (2014) but it seems to have forgotten to add the\ncorresponding createuser options.\n\nBy a quick search, found a few other CREATE ROLE optinos that are not\nsupported by createuser.\n\nVALID UNTIL\nROLE (IN ROLE is -g/--role)\nADMIN\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 Apr 2022 15:46:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, Apr 13, 2022 at 03:46:25PM +0900, Kyotaro Horiguchi wrote:\n> It is sensible to rig createuser command with full capability of\n> CREATE ROLE is reasonable.\n> \n> Only --replication is added by commit 9b8aff8c19 (2010) since\n> 8ae0d476a9 (2005). BYPASSRLS and NOBYPASSRLS were introduced by\n> 491c029dbc (2014) but it seems to have forgotten to add the\n> corresponding createuser options.\n> \n> By a quick search, found a few other CREATE ROLE optinos that are not\n> supported by createuser.\n\nMy question is: is BYPASSRLS common enough to justify having a switch\nto createuser? As the development cycle of 15 has just finished and\nthat we are in feature freeze, you may want to hold on new patches for\na bit. The next commit fest is planned for July.\n--\nMichael", "msg_date": "Wed, 13 Apr 2022 16:10:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Wed, 13 Apr 2022 16:10:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Apr 13, 2022 at 03:46:25PM +0900, Kyotaro Horiguchi wrote:\n> > It is sensible to rig createuser command with full capability of\n> > CREATE ROLE is reasonable.\n> > \n> > Only --replication is added by commit 9b8aff8c19 (2010) since\n> > 8ae0d476a9 (2005). 
BYPASSRLS and NOBYPASSRLS were introduced by\n> > 491c029dbc (2014) but it seems to have forgotten to add the\n> > corresponding createuser options.\n> > \n> > By a quick search, found a few other CREATE ROLE optinos that are not\n> > supported by createuser.\n> \n> My question is: is BYPASSRLS common enough to justify having a switch\n> to createuser? As the development cycle of 15 has just finished and\n> that we are in feature freeze, you may want to hold on new patches for\n> a bit. The next commit fest is planned for July.\n\nI don't think there's a definitive criteria (other than feasibility)\nfor whether each CREATE ROLE option should have the correspondent\noption in the createuser command. I don't see a clear reason why\ncreateuser command should not have the option.\n\nAs far as schedules are concerned, I don't think this has anything to\ndo with 15.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 Apr 2022 17:35:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, Apr 13, 2022 at 4:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I don't think there's a definitive criteria (other than feasibility)\n> for whether each CREATE ROLE option should have the correspondent\n> option in the createuser command. 
I don't see a clear reason why\n> createuser command should not have the option.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Apr 2022 09:18:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-04-13 17:35, Kyotaro Horiguchi wrote:\n> At Wed, 13 Apr 2022 16:10:01 +0900, Michael Paquier\n> <michael@paquier.xyz> wrote in\n>> On Wed, Apr 13, 2022 at 03:46:25PM +0900, Kyotaro Horiguchi wrote:\n>> > It is sensible to rig createuser command with full capability of\n>> > CREATE ROLE is reasonable.\n>> >\n>> > Only --replication is added by commit 9b8aff8c19 (2010) since\n>> > 8ae0d476a9 (2005). BYPASSRLS and NOBYPASSRLS were introduced by\n>> > 491c029dbc (2014) but it seems to have forgotten to add the\n>> > corresponding createuser options.\n>> >\n>> > By a quick search, found a few other CREATE ROLE optinos that are not\n>> > supported by createuser.\n>> \n>> My question is: is BYPASSRLS common enough to justify having a switch\n>> to createuser? As the development cycle of 15 has just finished and\n>> that we are in feature freeze, you may want to hold on new patches for\n>> a bit. The next commit fest is planned for July.\n> \n> I don't think there's a definitive criteria (other than feasibility)\n> for whether each CREATE ROLE option should have the correspondent\n> option in the createuser command. I don't see a clear reason why\n> createuser command should not have the option.\n\nThank you for the review!\nI created a new patch containing 'VALID UNTIL', 'ADMIN', and 'ROLE'.\n\nTo add the ROLE clause, the originally existing --role option \n(corresponding to the IN ROLE clause) is changed to the --in-role \noption. 
Would this not be good from a backward compatibility standpoint?\n\n\n> As far as schedules are concerned, I don't think this has anything to\n> do with 15.\n\nI have registered this patch for the July commit fest.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 14 Apr 2022 16:42:39 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "> On 14 Apr 2022, at 09:42, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n\n> To add the ROLE clause, the originally existing --role option (corresponding to the IN ROLE clause) is changed to the --in-role option. Would this not be good from a backward compatibility standpoint?\n\n-\tprintf(_(\" -g, --role=ROLE new role will be a member of this role\\n\"));\n+\tprintf(_(\" -g, --in-role=ROLE new role will be a member of this role\\n\"));\n+\tprintf(_(\" -G, --role=ROLE this role will be a member of new role\\n\"));\n\nWon't this make existing scripts to behave differently after an upgrade? That\nseems like something we should avoid at all costs.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 14 Apr 2022 11:57:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-04-14 18:57, Daniel Gustafsson wrote:\n>> On 14 Apr 2022, at 09:42, Shinya Kato <Shinya11.Kato@oss.nttdata.com> \n>> wrote:\n> \n>> To add the ROLE clause, the originally existing --role option \n>> (corresponding to the IN ROLE clause) is changed to the --in-role \n>> option. 
Would this not be good from a backward compatibility \n>> standpoint?\n> \n> -\tprintf(_(\" -g, --role=ROLE new role will be a member of\n> this role\\n\"));\n> +\tprintf(_(\" -g, --in-role=ROLE new role will be a member of\n> this role\\n\"));\n> +\tprintf(_(\" -G, --role=ROLE this role will be a member of\n> new role\\n\"));\n> \n> Won't this make existing scripts to behave differently after an \n> upgrade? That\n> seems like something we should avoid at all costs.\n\nI understand. For backward compatibility, I left the ROLE clause option \nas it is and changed the IN ROLE clause option to --membership option.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 15 Apr 2022 14:55:48 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Fri, 15 Apr 2022 14:55:48 +0900, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote in \n> I understand. For backward compatibility, I left the ROLE clause\n> option as it is and changed the IN ROLE clause option to --membership\n> option.\n\nThanks!\n\n-\tprintf(_(\" -g, --role=ROLE new role will be a member of this role\\n\"));\n+\tprintf(_(\" -g, --role=ROLE new role will be a member of this role\\n\"));\n\nThis looks like an unexpected change. We should preserve it, but *I*\nthink that we can add a synonym of the old --role for\nunderstandability/memorability. 
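A minimal sketch of what such a synonym could look like with getopt_long() — two long-option entries dispatching to the same handler. The option names here are taken from the discussion and are illustrative only, not settled behavior of any patch:

```c
#include <getopt.h>
#include <string.h>
#include <assert.h>

/*
 * Illustrative sketch only: one way a CLI tool such as createuser could
 * accept a new long option as a synonym of an existing one, so that old
 * scripts keep working.  "--role" and "--in-role" are hypothetical names
 * from the thread, not committed behavior.
 */
static const char *
parse_member_of(int argc, char **argv)
{
    static const struct option long_options[] = {
        {"role", required_argument, NULL, 'g'},
        {"in-role", required_argument, NULL, 'g'},  /* synonym: same handler */
        {NULL, 0, NULL, 0}
    };
    const char *member_of = NULL;
    int         c;

    while ((c = getopt_long(argc, argv, "g:", long_options, NULL)) != -1)
    {
        if (c == 'g')
            member_of = optarg; /* both spellings end up here */
    }
    return member_of;
}
```

Because both entries map to the same short-option value, the rest of the program cannot tell which spelling was used, which is exactly the backward-compatibility property wanted here.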
(By the way \"-g\" looks like coming\nfrom \"group\", which looks somewhat strange..)\n\n> printf(_(\" -b, --belongs-to=ROLE new role will be a member of this role\\n\"));\n\n+\tprintf(_(\" -m, --membership=ROLE this role will be a member of new role\\n\"));\n\nmembership sounds somewhat obscure, it seems *to me* members is clearer\n\n> printf(_(\" -m, --member=ROLE new role will be a member of this role\\n\"));\n\nI'd like to hear others' opinions.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Apr 2022 15:33:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Fri, Apr 15, 2022 at 2:33 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > printf(_(\" -b, --belongs-to=ROLE new role will be a member of this role\\n\"));\n>\n> + printf(_(\" -m, --membership=ROLE this role will be a member of new role\\n\"));\n>\n> membership sounds somewhat obscure, it seems *to me* members is clearer\n>\n> > printf(_(\" -m, --member=ROLE new role will be a member of this role\\n\"));\n>\n> I'd like to hear others' opinions.\n\nI think that we need to preserve consistency with the SQL syntax as\nmuch as possible -- and neither MEMBER nor MEMBERSHIP nor BELONGS_TO\nappear in that syntax. 
A lot of the terminology in this area seems\npoorly chosen and confusing to me, but having two ways to refer to\nsomething probably won't be an improvement even if the second name is\nbetter-chosen than the first one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 09:59:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "Thanks!\n\nAt Mon, 18 Apr 2022 09:59:48 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Apr 15, 2022 at 2:33 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > printf(_(\" -b, --belongs-to=ROLE new role will be a member of this role\\n\"));\n> >\n> > + printf(_(\" -m, --membership=ROLE this role will be a member of new role\\n\"));\n> >\n> > membership sounds somewhat obscure, it seems *to me* members is clearer\n> >\n> > > printf(_(\" -m, --member=ROLE new role will be a member of this role\\n\"));\n> >\n> > I'd like to hear others' opinions.\n> \n> I think that we need to preserve consistency with the SQL syntax as\n> much as possible -- and neither MEMBER nor MEMBERSHIP nor BELONGS_TO\n> appear in that syntax. A lot of the terminology in this area seems\n> poorly chosen and confusing to me, but having two ways to refer to\n> something probably won't be an improvement even if the second name is\n> better-chosen than the first one.\n\nHmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\nto go? 
Or we can give up adding -m for the reason of being hard to\nname it..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Apr 2022 10:50:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n> to go? Or we can give up adding -m for the reason of being hard to\n> name it..\n\nHmm, yeah, I hadn't quite realized what the problem was when I wrote\nthat. I honestly don't know what to do about that. Renaming the\nexisting option is not great, but having the syntax diverge between\nSQL and CLI is not great either. Giving up is also not great. Not sure\nwhat is best.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Apr 2022 12:13:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Tue, 19 Apr 2022 12:13:51 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n> > to go? Or we can give up adding -m for the reason of being hard to\n> > name it..\n> \n> Hmm, yeah, I hadn't quite realized what the problem was when I wrote\n> that. I honestly don't know what to do about that. Renaming the\n> existing option is not great, but having the syntax diverge between\n> SQL and CLI is not great either. Giving up is also not great. Not sure\n> what is best.\n\nExactly.. 
So I'm stuckX(\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 Apr 2022 17:04:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Tue, Apr 19, 2022 at 12:13:51PM -0400, Robert Haas wrote:\n> On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n>> to go? Or we can give up adding -m for the reason of being hard to\n>> name it..\n> \n> Hmm, yeah, I hadn't quite realized what the problem was when I wrote\n> that. I honestly don't know what to do about that. Renaming the\n> existing option is not great, but having the syntax diverge between\n> SQL and CLI is not great either. Giving up is also not great. Not sure\n> what is best.\n\nChanging one existing option to mean something entirely different\nshould be avoided, as this could lead to silent breakages. As the\norigin of the problem is that the option --role means \"IN ROLE\" in the\nSQL grammar, we could keep around --role for compatibility while\nmarking it deprecated, and add two new options whose names would be\nmore consistent with each other. One choice could be --role-name and\n--in-role-name, where --in-role-name maps to the older --role, just to\ngive an idea.\n--\nMichael", "msg_date": "Thu, 21 Apr 2022 13:29:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, Apr 21, 2022 at 12:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 19, 2022 at 12:13:51PM -0400, Robert Haas wrote:\n> > On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n> >> to go? 
Or we can give up adding -m for the reason of being hard to\n> >> name it..\n> >\n> > Hmm, yeah, I hadn't quite realized what the problem was when I wrote\n> > that. I honestly don't know what to do about that. Renaming the\n> > existing option is not great, but having the syntax diverge between\n> > SQL and CLI is not great either. Giving up is also not great. Not sure\n> > what is best.\n>\n> Changing one existing option to mean something entirely different\n> should be avoided, as this could lead to silent breakages. As the\n> origin of the problem is that the option --role means \"IN ROLE\" in the\n> SQL grammar, we could keep around --role for compatibility while\n> marking it deprecated, and add two new options whose names would be\n> more consistent with each other. One choice could be --role-name and\n> --in-role-name, where --in-role-name maps to the older --role, just to\n> give an idea.\n\nI don't think that having both --role and --role-name, doing different\nthings, is going to be clear at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Apr 2022 15:51:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, Apr 21, 2022 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Apr 21, 2022 at 12:30 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > On Tue, Apr 19, 2022 at 12:13:51PM -0400, Robert Haas wrote:\n> > > On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > >> Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n> > >> to go? Or we can give up adding -m for the reason of being hard to\n> > >> name it..\n> > >\n> > > Hmm, yeah, I hadn't quite realized what the problem was when I wrote\n> > > that. I honestly don't know what to do about that. 
Renaming the\n> > > existing option is not great, but having the syntax diverge between\n> > > SQL and CLI is not great either. Giving up is also not great. Not sure\n> > > what is best.\n> >\n> > Changing one existing option to mean something entirely different\n> > should be avoided, as this could lead to silent breakages. As the\n> > origin of the problem is that the option --role means \"IN ROLE\" in the\n> > SQL grammar, we could keep around --role for compatibility while\n> > marking it deprecated, and add two new options whose names would be\n> > more consistent with each other. One choice could be --role-name and\n> > --in-role-name, where --in-role-name maps to the older --role, just to\n> > give an idea.\n>\n> I don't think that having both --role and --role-name, doing different\n> things, is going to be clear at all.\n>\n>\n-g/--role or maybe/additionally (--in-role)?\n-m/--role-to or maybe/additionally (--to-role)?\n\nI'm ok with -m/--member as well (like with --role only one role can be\nspecified per switch instance so member, not membership, the later meaning,\nat least for me, the collective).\n\nThat -m doesn't match --role-to is no worse than -g not matching --role, a\nshort option seems worthwhile, and the -m (membership) mnemonic should be\nsimple to pick-up.\n\nI don't see the addition of \"-name\" to the option name being beneficial.\n\nYes, the standard doesn't use the \"TO\" prefix for \"ROLE\" - but taking that\nliberty for consistency here is very appealing and there isn't another SQL\nclause that it would be confused with.\n\nDavid J.\n\nOn Thu, Apr 21, 2022 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Apr 21, 2022 at 12:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 19, 2022 at 12:13:51PM -0400, Robert Haas wrote:\n> > On Mon, Apr 18, 2022 at 9:50 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> Hmm.. So, \"-r/--role\" and \"-m/--member(ship)\" is the (least worse) way\n> >> to go?  
Or we can give up adding -m for the reason of being hard to\n> >> name it..\n> >\n> > Hmm, yeah, I hadn't quite realized what the problem was when I wrote\n> > that. I honestly don't know what to do about that. Renaming the\n> > existing option is not great, but having the syntax diverge between\n> > SQL and CLI is not great either. Giving up is also not great. Not sure\n> > what is best.\n>\n> Changing one existing option to mean something entirely different\n> should be avoided, as this could lead to silent breakages.  As the\n> origin of the problem is that the option --role means \"IN ROLE\" in the\n> SQL grammar, we could keep around --role for compatibility while\n> marking it deprecated, and add two new options whose names would be\n> more consistent with each other.  One choice could be --role-name and\n> --in-role-name, where --in-role-name maps to the older --role, just to\n> give an idea.\n\nI don't think that having both --role and --role-name, doing different\nthings, is going to be clear at all.-g/--role   or maybe/additionally (--in-role)?-m/--role-to or maybe/additionally (--to-role)?I'm ok with -m/--member as well (like with --role only one role can be specified per switch instance so member, not membership, the later meaning, at least for me, the collective).That -m doesn't match --role-to is no worse than -g not matching --role, a short option seems worthwhile, and the -m (membership) mnemonic should be simple to pick-up.I don't see the addition of \"-name\" to the option name being beneficial.Yes, the standard doesn't use the \"TO\" prefix for \"ROLE\" - but taking that liberty for consistency here is very appealing and there isn't another SQL clause that it would be confused with.David J.", "msg_date": "Thu, 21 Apr 2022 13:21:57 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, Apr 21, 2022 at 01:21:57PM -0700, David G. Johnston wrote:\n> I'm ok with -m/--member as well (like with --role only one role can be\n> specified per switch instance so member, not membership, the later meaning,\n> at least for me, the collective).\n> \n> That -m doesn't match --role-to is no worse than -g not matching --role, a\n> short option seems worthwhile, and the -m (membership) mnemonic should be\n> simple to pick-up.\n> \n> I don't see the addition of \"-name\" to the option name being beneficial.\n> \n> Yes, the standard doesn't use the \"TO\" prefix for \"ROLE\" - but taking that\n> liberty for consistency here is very appealing and there isn't another SQL\n> clause that it would be confused with.\n\n+1 for \"member\". It might not be perfect, but IMO it's the clearest\noption.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Apr 2022 13:19:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "Thank you for the reviews!\n\nOn 2022-04-26 05:19, Nathan Bossart wrote:\n\n> -\tprintf(_(\" -g, --role=ROLE new role will be a member of \n> this role\\n\"));\n> +\tprintf(_(\" -g, --role=ROLE new role will be a member of this \n> role\\n\"));\n> This looks lik an unexpected change.\n\nI fixed it.\n\n\n>> I'm ok with -m/--member as well (like with --role only one role can be\n>> specified per switch instance so member, not membership, the later \n>> meaning,\n>> at least for me, the collective).\n>> \n>> That -m doesn't match --role-to is no worse than -g not matching \n>> --role, a\n>> short option seems worthwhile, and the -m (membership) mnemonic should \n>> be\n>> simple to pick-up.\n>> \n>> I don't see the addition of \"-name\" to the 
option name being \n>> beneficial.\n>> \n>> Yes, the standard doesn't use the \"TO\" prefix for \"ROLE\" - but taking \n>> that\n>> liberty for consistency here is very appealing and there isn't another \n>> SQL\n>> clause that it would be confused with.\n> \n> +1 for \"member\". It might not be perfect, but IMO it's the clearest\n> option.\n\nThanks! I changed the option \"--membership\" to \"--member\".\n\n\nFor now, I also think \"-m / --member\" is the best choice, although it is \nambiguous:(\nI'd like to hear others' opinions.\n\nregards\n\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 28 Apr 2022 15:06:30 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, Apr 28, 2022 at 03:06:30PM +0900, Shinya Kato wrote:\n> On 2022-04-26 05:19, Nathan Bossart wrote:\n>> +1 for \"member\". It might not be perfect, but IMO it's the clearest\n>> option.\n> \n> Thanks! I changed the option \"--membership\" to \"--member\".\n\nThanks for the new patch! Would you mind adding some tests for the new\noptions?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 2 May 2022 10:07:41 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "Dear Shinya,\n\nToo bad there's no --comment parameter to do COMMENT ON ROLE name IS \n'Comment';\n\nAs you already make such changes in createuser, I would like to ask for \nan additional --comment parameter\nthat will allow sysadmins to set a comment with additional information \nabout the new DB user.\npsql is scary for some. :-)\n\nOverall a very useful patch. 
I needed bypassrls several times recently.\n\nShinya Kato wrote on 4/28/2022 8:06 AM:\n> Thank you for the reviews!\n>\n> On 2022-04-26 05:19, Nathan Bossart wrote:\n>\n>> -    printf(_(\"  -g, --role=ROLE           new role will be a member \n>> of this role\\n\"));\n>> +    printf(_(\"  -g, --role=ROLE        new role will be a member of \n>> this role\\n\"));\n>> This looks lik an unexpected change.\n>\n> I fixed it.\n>\n>\n>>> I'm ok with -m/--member as well (like with --role only one role can be\n>>> specified per switch instance so member, not membership, the later \n>>> meaning,\n>>> at least for me, the collective).\n>>>\n>>> That -m doesn't match --role-to is no worse than -g not matching \n>>> --role, a\n>>> short option seems worthwhile, and the -m (membership) mnemonic \n>>> should be\n>>> simple to pick-up.\n>>>\n>>> I don't see the addition of \"-name\" to the option name being \n>>> beneficial.\n>>>\n>>> Yes, the standard doesn't use the \"TO\" prefix for \"ROLE\" - but \n>>> taking that\n>>> liberty for consistency here is very appealing and there isn't \n>>> another SQL\n>>> clause that it would be confused with.\n>>\n>> +1 for \"member\".  It might not be perfect, but IMO it's the clearest\n>> option.\n>\n> Thanks! I changed the option \"--membership\" to \"--member\".\n>\n>\n> For now, I also think \"-m / --member\" is the best choice, although it \n> is ambiguous:(\n> I'd like to hear others' opinions.\n>\n> regards\n>\n>\n> -- \n> Shinya Kato\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66", "msg_date": "Fri, 6 May 2022 00:08:22 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": false, "msg_subject": "Re: Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "Thanks for reviews and comments!\n\nOn 2022-05-06 07:08, Przemysław Sztoch wrote:\n\n> Thanks for the new patch! 
Would you mind adding some tests for the new\n> options?\n\nI created a new patch to test the new options!\nHowever, not all option tests exist, so it may be necessary to consider \nwhether to actually add this test.\n\n\n> Too bad there's no --comment parameter to do COMMENT ON ROLE name IS\n> 'Comment';\n> \n> As you already make such changes in createuser, I would like to ask\n> for an additional --comment parameter\n> that will allow sysadmins to set a comment with additional information\n> about the new DB user.\n> psql is scary for some. :-)\n\nSince the createuser command is a wrapper for the CREATE ROLE command, I \ndo not think it is appropriate to add options that the CREATE ROLE \ncommand does not have.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 19 May 2022 10:35:23 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, May 18, 2022 at 6:35 PM Shinya Kato <Shinya11.Kato@oss.nttdata.com>\nwrote:\n\n> > Too bad there's no --comment parameter to do COMMENT ON ROLE name IS\n> > 'Comment';\n> >\n> > As you already make such changes in createuser, I would like to ask\n> > for an additional --comment parameter\n> > that will allow sysadmins to set a comment with additional information\n> > about the new DB user.\n> > psql is scary for some. 
:-)\n>\n> Since the createuser command is a wrapper for the CREATE ROLE command, I\n> do not think it is appropriate to add options that the CREATE ROLE\n> command does not have.\n>\n>\nI think that this feature is at least worth considering - but absent an\nexisting command that does this I would agree that doing so constitutes a\nseparate feature.\n\nAs an aside, I'd rather overcome this particular objection by having the\nCREATE object command all accept an optional \"COMMENT IS\" clause.\n\nDavid J.", "msg_date": "Wed, 18 May 2022 18:46:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, May 19, 2022 at 10:35:23AM +0900, Shinya Kato wrote:\n> I created a new patch to test the new options!\n\nThanks for the new patch! I attached a new version with a few small\nchanges. 
What do you think?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 20 May 2022 14:45:19 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "David G. Johnston wrote on 5/19/2022 3:46 AM:\n> I think that this feature is at least worth considering - but absent \n> an existing command that does this I would agree that doing so \n> constitutes a separate feature.\n>\n> As an aside, I'd rather overcome this particular objection by having \n> the CREATE object command all accept an optional \"COMMENT IS\" clause.\n>\n> David J.\nThe createuser command is typically used by IT personnel unfamiliar with \nSQL and unfamiliar with psql.\nThey often use this command because software installation procedures \nrequire it.\nLack of comment then causes more mess in the configuration of larger \nservers.\nI still think it's worth adding the --comment argument to execute the \nnext SQL statement by createuser.\nThis will simplify the setup scripts and installation instructions for \nthe final software.\n\nI believe that it is not worth dividing it into a separate program.\n\nThe same --comment argument is needed for the createdb command.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66", "msg_date": "Sun, 22 May 2022 09:55:37 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Sun, 22 May 2022 09:55:37 +0200, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote in \n> David G. 
Johnston wrote on 5/19/2022 3:46 AM:\n> > As an aside, I'd rather overcome this particular objection by having\n> > the CREATE object command all accept an optional \"COMMENT IS\" clause.\n> >\n> I believe that it is not worth dividing it into a separate program.\n> \n> The same --comment argument is needed for the createdb command.\n\nDavid didn't say that it should be another \"program\", but said it\nshould be another \"patch/development\", because how we implement the\n--comment feature is apparently controversial.\n\nIt doesn't seem to be explicitly mentioned that \"createuser is merely a\nshell-substitute for the SQL CREATE ROLE\", but I feel the same with\nShinya that it is. We could directly invoke \"COMMENT ON\" from\ncreateuser command, but I think it is not the way to go in that light.\n\nWe can either add COMMENT clause only to \"CREATE ROLE\" , or \"COMMENT\nIS\" clause to all (or most of) \"CREATE object\" commands, or something\nothers. (Perhaps \"COMMENT IS\" requires \"ALTER object\" handle comments,\nand I'm not sure how we think about the difference of it from \"comment\non\" command.) We might return to \"comment on\" in the end..\n\nAnyway, after fixing that issue we will modify the createrole command\nso that it uses the new SQL feature. I find no hard obstacles in\nreaching there in the 16 cycle.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 23 May 2022 10:32:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-05-21 06:45, Nathan Bossart wrote:\n> On Thu, May 19, 2022 at 10:35:23AM +0900, Shinya Kato wrote:\n>> I created a new patch to test the new options!\n> \n> Thanks for the new patch! I attached a new version with a few small\n> changes. 
What do you think?\n\nThanks for updating the patch!\nIt looks good to me.\n\n\nOn 2022-05-23 10:32, Kyotaro Horiguchi wrote:\n> Anyway, after fixing that issue we will modify the createrole command\n> so that it uses the new SQL feature. I find no hard obstacles in\n> reaching there in the 16 cycle.\n\n+1.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 23 May 2022 15:18:48 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Fri, May 20, 2022 at 02:45:19PM -0700, Nathan Bossart wrote:\n> Thanks for the new patch! I attached a new version with a few small\n> changes. What do you think?\n\nSo you have settled down to --member to emulate the clause ROLE.\nWell, this choice is fine by me at the end.\n\n> +$node->issues_sql_like(\n> +\t[ 'createuser', 'regress_role2', '-a', 'regress_user1' ],\n> +\tqr/statement: CREATE ROLE regress_role2 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN ADMIN regress_user1;/,\n> +\t'add a role as a member with admin option of the newly created role');\n> +$node->issues_sql_like(\n> +\t[ 'createuser', 'regress_role3', '-m', 'regress_user1' ],\n> +\tqr/statement: CREATE ROLE regress_role3 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN ROLE regress_user1;/,\n> +\t'add a role as a member of the newly created role');\n\nMay I ask for the addition of tests when one specifies multiple\nswitches for --admin and --member? This would check the code path\nwhere you build a list of role names. 
You could check fancier string\npatterns, while on it, to look after the use of fmtId(), say with\nrole names that include whitespaces or such.\n--\nMichael", "msg_date": "Mon, 23 May 2022 16:29:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-05-23 16:29, Michael Paquier wrote:\n>> +$node->issues_sql_like(\n>> +\t[ 'createuser', 'regress_role2', '-a', 'regress_user1' ],\n>> +\tqr/statement: CREATE ROLE regress_role2 NOSUPERUSER NOCREATEDB \n>> NOCREATEROLE INHERIT LOGIN ADMIN regress_user1;/,\n>> +\t'add a role as a member with admin option of the newly created \n>> role');\n>> +$node->issues_sql_like(\n>> +\t[ 'createuser', 'regress_role3', '-m', 'regress_user1' ],\n>> +\tqr/statement: CREATE ROLE regress_role3 NOSUPERUSER NOCREATEDB \n>> NOCREATEROLE INHERIT LOGIN ROLE regress_user1;/,\n>> +\t'add a role as a member of the newly created role');\n> \n> May I ask for the addition of tests when one specifies multiple\n> switches for --admin and --member? This would check the code path\n> where you build a list of role names. 
You could check fancier string\n> patterns, while on it, to look after the use of fmtId(), say with\n> role names that include whitespaces or such.\n\nThanks!\nI changed to the test that describes multiple \"-m\".\nIt seems to be working without any problems, how about it?\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 23 May 2022 23:55:43 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Mon, May 23, 2022 at 11:55:43PM +0900, Shinya Kato wrote:\n> On 2022-05-23 16:29, Michael Paquier wrote:\n>> May I ask for the addition of tests when one specifies multiple\n>> switches for --admin and --member? This would check the code path\n>> where you build a list of role names. You could check fancier string\n>> patterns, while on it, to look after the use of fmtId(), say with\n>> role names that include whitespaces or such.\n> \n> Thanks!\n> I changed to the test that describes multiple \"-m\".\n> It seems to be working without any problems, how about it?\n\nMichael also requested a test for multiple -a switches and for fancier\nstring patterns. Once that is taken care of, I think this can be marked as\nready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 May 2022 09:37:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Mon, May 23, 2022 at 09:37:35AM -0700, Nathan Bossart wrote:\n> Michael also requested a test for multiple -a switches and for fancier\n> string patterns. Once that is taken care of, I think this can be marked as\n> ready-for-committer.\n\nLooking at v7, this means to extend the tests to process lists for\n--admin with more name patterns. 
And while on it, we could do the\nsame for the existing command for --role, but this one is on me, being\noverly-pedantic while looking at the patch :)\n--\nMichael", "msg_date": "Tue, 24 May 2022 11:09:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-05-24 11:09, Michael Paquier wrote:\n> On Mon, May 23, 2022 at 09:37:35AM -0700, Nathan Bossart wrote:\n>> Michael also requested a test for multiple -a switches and for fancier\n>> string patterns. Once that is taken care of, I think this can be \n>> marked as\n>> ready-for-committer.\n> \n> Looking at v7, this means to extend the tests to process lists for\n> --admin with more name patterns. And while on it, we could do the\n> same for the existing command for --role, but this one is on me, being\n> overly-pedantic while looking at the patch :)\n\nThanks! I fixed it.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 24 May 2022 20:07:31 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Tue, May 24, 2022 at 08:07:31PM +0900, Shinya Kato wrote:\n> On 2022-05-24 11:09, Michael Paquier wrote:\n>> On Mon, May 23, 2022 at 09:37:35AM -0700, Nathan Bossart wrote:\n>> > Michael also requested a test for multiple -a switches and for fancier\n>> > string patterns. Once that is taken care of, I think this can be\n>> > marked as\n>> > ready-for-committer.\n>> \n>> Looking at v7, this means to extend the tests to process lists for\n>> --admin with more name patterns. And while on it, we could do the\n>> same for the existing command for --role, but this one is on me, being\n>> overly-pedantic while looking at the patch :)\n> \n> Thanks! 
I fixed it.\n\nWe're still missing some \"fancier\" string patterns in the tests, but we\nmight just be nitpicking at this point.\n\nI noticed that the cfbot tests for this are failing for Windows. I've\nlooked at the relevant logs a bit, and I'm not sure what is going on. The\nexpected log messages are indeed missing, but I haven't found any clues for\nwhy those test cases are skipped.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 May 2022 10:09:10 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Tue, 24 May 2022 10:09:10 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> We're still missing some \"fancier\" string patterns in the tests, but we\n> might just be nitpicking at this point.\n\nSuch \"fancier\" strings should be properly handled by FmtId() and\nappendStringLiteralConn. If this is a privilege escalating command,\nwe should have ones but this is not.\n\n> I noticed that the cfbot tests for this are failing for Windows. I've\n> looked at the relevant logs a bit, and I'm not sure what is going on. The\n> expected log messages are indeed missing, but I haven't found any clues for\n> why those test cases are skipped.\n\ncreateuser command complains like this.\n\n> # Running: createuser regress_user4 -a regress_user1 -a regress_user2\n> createuser: error: too many command-line arguments (first is \"-a\")\n> hint: Try \"createuser --help\" for more information.\n\nIt seems like '-a' is not recognised as an option parameter.\n\n(Fortunately, the ActiveState installer looks like having been fixed,\n but something's still wrong..)\n\nI reproduced the same failure at my hand and identified the\ncause. 
Windows' version of getopt_long seems to dislike that\nnon-optional parameters precede options.\n\n> createuser <user name to create> <options>\n\nThe test succeeded if I moved the <user name to create> to the end of the\ncommand line.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 25 May 2022 11:07:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, May 25, 2022 at 11:07:52AM +0900, Kyotaro Horiguchi wrote:\n> I reproduced the same failure at my hand and identified the\n> cause. Windows' version of getopt_long seems to dislike that\n> non-optional parameters precede options.\n\nTweaking the list of arguments in some commands kicked by the TAP\ntests to satisfy our implementation of getopt_long() has been the\norigin of a couple of portability fixes, like ffd3980.\n--\nMichael", "msg_date": "Wed, 25 May 2022 12:47:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On 2022-05-25 12:47, Michael Paquier wrote:\n> On Wed, May 25, 2022 at 11:07:52AM +0900, Kyotaro Horiguchi wrote:\n>> I reproduced the same failure at my hand and identified the\n>> cause. Windows' version of getopt_long seems to dislike that\n>> non-optional parameters precede options.\n> \n> Tweaking the list of arguments in some commands kicked by the TAP\n> tests to satisfy our implementation of getopt_long() has been the\n> origin of a couple of portability fixes, like ffd3980.\n\nThanks! 
I fixed it.\n\n\nOn 2022-05-25 11:07, Kyotaro Horiguchi wrote:\n> At Tue, 24 May 2022 10:09:10 -0700, Nathan Bossart\n> <nathandbossart@gmail.com> wrote in\n>> We're still missing some \"fancier\" string patterns in the tests, but \n>> we\n>> might just be nitpicking at this point.\n> \n> Such \"fancier\" strings should be properly handled by FmtId() and\n> appendStringLiteralConn. If this is a privilege escalating command,\n> we should have ones but this is not.\n\nSorry, I didn't quite understand the \"fancier\" pattern. Is a string like \nthis patch correct?\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 26 May 2022 14:16:37 +0900", "msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "At Thu, 26 May 2022 14:16:37 +0900, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote in \n> On 2022-05-25 12:47, Michael Paquier wrote:\n> > On Wed, May 25, 2022 at 11:07:52AM +0900, Kyotaro Horiguchi wrote:\n> >> I reproduced the same failure at my hand and identified the\n> >> cause. Windows' version of getopt_long seems to dislike that\n> >> non-optional parameters precedes options.\n> > Tweaking the list of arguments in some commands kicked by the TAP\n> > tests to satisfy our implementation of getopt_long() has been the\n> > origin of a couple of portability fixes, like ffd3980.\n> \n> Thanks! I fixed it.\n> \n> \n> On 2022-05-25 11:07, Kyotaro Horiguchi wrote:\n> > At Tue, 24 May 2022 10:09:10 -0700, Nathan Bossart\n> > <nathandbossart@gmail.com> wrote in\n> >> We're still missing some \"fancier\" string patterns in the tests, but\n> >> we\n> >> might just be nitpicking at this point.\n> > Such \"fancier\" strings should be properly handled by FmtId() and\n> > appendStringLiteralConn. 
If this is a privilege escalating command,\n> > we should have ones but this is not.\n> \n> Sorry, I didn't quite understand the \"fancier\" pattern. Is a string\n> like this patch correct?\n\nFWIW, the \"fancy\" here causes me to think about something likely to\ncause syntax breakage of the query to be sent.\n\ncreateuser -a 'user\"1' -a 'user\"2' 'user\"3'\ncreateuser -v \"2023-1-1'; DROP TABLE public.x; select '\" hoge\n\nBUT, these should be prevented by the functions enumerated above. So,\nI don't think we need them.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 26 May 2022 16:47:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, May 26, 2022 at 02:16:37PM +0900, Shinya Kato wrote:\n> On 2022-05-25 11:07, Kyotaro Horiguchi wrote:\n>> At Tue, 24 May 2022 10:09:10 -0700, Nathan Bossart\n>> <nathandbossart@gmail.com> wrote in\n>> > We're still missing some \"fancier\" string patterns in the tests, but\n>> > we\n>> > might just be nitpicking at this point.\n>> \n>> Such \"fancier\" strings should be properly handled by FmtId() and\n>> appendStringLiteralConn. If this is a privilege escalating command,\n>> we should have ones but this is not.\n> \n> Sorry, I didn't quite understand the \"fancier\" pattern. Is a string like\n> this patch correct?\n\nYes, thanks. 
I'm marking this as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 May 2022 09:20:37 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Thu, May 26, 2022 at 04:47:46PM +0900, Kyotaro Horiguchi wrote:\n> FWIW, the \"fancy\" here causes me to think about something likely to\n> cause syntax breakage of the query to be sent.\n> \n> createuser -a 'user\"1' -a 'user\"2' 'user\"3'\n> createuser -v \"2023-1-1'; DROP TABLE public.x; select '\" hoge\n\nThat would be mostly using spaces here, to make sure that quoting is\ncorrectly applied.\n\n> BUT, these should be prevented by the functions enumerated above. So,\n> I don't think we need them.\n\nMostly. For example, the test for --valid-until can use a timestamp\nwith spaces to validate the use of appendStringLiteralConn(). A\nsecond thing is that --member was checked, but not --admin, so I have\nrenamed regress_user2 to \"regress user2\" for that to apply a maximum\nof coverage, and applied the patch.\n\nOne thing that I found annoying is that this made the list of options\nof createuser much harder to follow. That's not something caused by\nthis patch as many options have accumulated across the years and there\nis a kind pattern where the connection options were listed first, but\nI have cleaned up that while on it. 
A second area where this could be\ndone is createdb, as it could be easily expanded if the backend query\ngains support for more stuff, but that can happen when it makes more\nsense.\n--\nMichael", "msg_date": "Wed, 13 Jul 2022 12:25:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "Hi,\nThanks to the developers and reviewers.\nThe attached small patch fixes the message in \"createuser --help\" command. The patch has changed to specify a time stamp for the --valid-for option. I don't think the SGML description needs to be modified.\n\nRegards,\nNoriyoshi Shinoda\n-----Original Message-----\nFrom: Michael Paquier <michael@paquier.xyz> \nSent: Wednesday, July 13, 2022 12:25 PM\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nCc: Shinya11.Kato@oss.nttdata.com; nathandbossart@gmail.com; przemyslaw@sztoch.pl; david.g.johnston@gmail.com; robertmhaas@gmail.com; daniel@yesql.se; pgsql-hackers@postgresql.org\nSubject: Re: Add --{no-,}bypassrls flags to createuser\n\nOn Thu, May 26, 2022 at 04:47:46PM +0900, Kyotaro Horiguchi wrote:\n> FWIW, the \"fancy\" here causes me to think about something likely to \n> cause syntax breakage of the query to be sent.\n> \n> createuser -a 'user\"1' -a 'user\"2' 'user\"3'\n> createuser -v \"2023-1-1'; DROP TABLE public.x; select '\" hoge\n\nThat would be mostly using spaces here, to make sure that quoting is correctly applied.\n\n> BUT, these should be prevented by the functions enumerated above. So, \n> I don't think we need them.\n\nMostly. For example, the test for --valid-until can use a timestamp with spaces to validate the use of appendStringLiteralConn(). 
A second thing is that --member was checked, but not --admin, so I have renamed regress_user2 to \"regress user2\" for that to apply a maximum of coverage, and applied the patch.\n\nOne thing that I found annoying is that this made the list of options of createuser much harder to follow. That's not something caused by this patch as many options have accumulated across the years and there is a kind pattern where the connection options were listed first, but I have cleaned up that while on it. A second area where this could be done is createdb, as it could be easily expanded if the backend query gains support for more stuff, but that can happen when it makes more sense.\n--\nMichael", "msg_date": "Wed, 13 Jul 2022 08:14:28 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, Jul 13, 2022 at 08:14:28AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> The attached small patch fixes the message in \"createuser --help\" command. The patch has changed to specify a time stamp for the --valid-for option. I don't think the SGML description needs to be modified.\n\nGood catch. Apart from a nitpick about the indentation, your patch looks\nreasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Jul 2022 11:38:03 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" }, { "msg_contents": "On Wed, Jul 13, 2022 at 11:38:03AM -0700, Nathan Bossart wrote:\n> On Wed, Jul 13, 2022 at 08:14:28AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>> The attached small patch fixes the message in \"createuser --help\"\n>> command. The patch has changed to specify a time stamp for the\n>> --valid-for option. 
I don't think the SGML description needs to be\n>> modified.\n\nThanks, Shinoda-san. Fixed.\n\n> Good catch. Apart from a nitpick about the indentation, your patch looks\n> reasonable to me.\n\nFWIW, one can check that with a simple `git diff --check` or similar\nto see what was going wrong here. This simple trick allows me to find\nquickly formatting issues in any patch posted.\n--\nMichael", "msg_date": "Thu, 14 Jul 2022 08:39:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add --{no-,}bypassrls flags to createuser" } ]
[ { "msg_contents": "Hello,\n\nI've been working yesterday on the translation of v15 .po french files.\n\nOnce again, I've found that there was no .po file for oid2name, pgbench,\nand vacuumlo. I suppose that's because they are contrib applications. But\npgbench is no longer a contrib application. It's now located in the\n\"src/bin/pgbench\" directory. So I'm wondering if it should be translatable?\nIf yes, I could work on that.\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.\n", "msg_date": "Wed, 13 Apr 2022 11:18:07 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "pgbench translation" }, { "msg_contents": "On 2022-Apr-13, Guillaume Lelarge wrote:\n\n> Once again, I've found that there was no .po file for oid2name, pgbench,\n> and vacuumlo. I suppose that's because they are contrib applications. But\n> pgbench is no longer a contrib application. It's now located in the\n> \"src/bin/pgbench\" directory. So I'm wondering if it should be translatable?\n> If yes, I could work on that.\n\nI think pgbench should be translatable, but last I looked at the sources,\nthere's *a lot* of work to put it in good shape for translatability.\nThere are many messages constructed from pieces, some terminology is\ninconsistent, and other problems. 
If you want to work on fixing those\nproblems, +1 from me -- but let's get the problems fixed ahead of\nadding it to the translation catalogs.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 13 Apr 2022 11:29:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgbench translation" }, { "msg_contents": "Le mer. 13 avr. 2022 à 11:29, Alvaro Herrera <alvherre@alvh.no-ip.org> a\nécrit :\n\n> On 2022-Apr-13, Guillaume Lelarge wrote:\n>\n> > Once again, I've found that there was no .po file for oid2name, pgbench,\n> > and vacuumlo. I suppose that's because they are contrib applications. But\n> > pgbench is no longer a contrib application. It's now located in the\n> > \"src/bin/pgbench\" directory. So I'm wondering if it should be\n> translatable?\n> > If yes, I could work on that.\n>\n> I think pgbench should be translatable, but last I looked at the sources,\n> there's *a lot* of work to put it in good shape for translatability.\n> There are many messages constructed from pieces, some terminology is\n> inconsistent, and other problems. If you want to work on fixing those\n> problems, +1 from me -- but let's get the problems fixed ahead of\n> adding it to the translation catalogs.\n>\n>\nYeah, I didn't look closely at pgbench's source code. Just enough to make\nsure it wasn't written to be translated.\n\nOn the messages needing some fixes, I don't know if I can do it. The\nterminology is quite foreign to me. Translating the man page has already\nbeen a nightmare :)\n\nAnyway, I'll get a look and see what I can do.\n\nThanks for the answer.\n\n\n-- \nGuillaume.\n\nLe mer. 13 avr. 2022 à 11:29, Alvaro Herrera <alvherre@alvh.no-ip.org> a écrit :On 2022-Apr-13, Guillaume Lelarge wrote:\n\n> Once again, I've found that there was no .po file for oid2name, pgbench,\n> and vacuumlo. I suppose that's because they are contrib applications. 
But\n> pgbench is no longer a contrib application. It's now located in the\n> \"src/bin/pgbench\" directory. So I'm wondering if it should be translatable?\n> If yes, I could work on that.\n\nI think pgbench should be translatable, but last I looked at the sources,\nthere's *a lot* of work to put it in good shape for translatability.\nThere are many messages constructed from pieces, some terminology is\ninconsistent, and other problems.  If you want to work on fixing those\nproblems, +1 from me -- but let's get the problems fixed ahead of\nadding it to the translation catalogs.\nYeah, I didn't look closely at pgbench's source code. Just enough to make sure it wasn't written to be translated.On the messages needing some fixes, I don't know if I can do it. The terminology is quite foreign to me. Translating the man page has already been a nightmare :)Anyway, I'll get a look and see what I can do.Thanks for the answer.-- Guillaume.", "msg_date": "Wed, 13 Apr 2022 12:01:08 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: pgbench translation" } ]
[ { "msg_contents": "When reviewing the patch in \"Frontend error logging style\" [0] I noticed that\nsome messages could do with a little bit of touching up. The original review\nwas posted and responded to in that thread, but to keep goalposts in place it\nwas put off until that patch had landed. To avoid this getting buried in that\nthread I decided to start a new one with the findings from there. To make\nreviewing easier I split the patch around the sorts of changes proposed.\n\n0001: Makes sure that database and file names are printed quoted. This patch\nhas hunks in contrib and backend as well.\n\n0002: Capitalizes pg_log_error_detail and conversely starts pg_log_error with a\nlowercase letter without punctuation.\n\n0003: Extend a few messages with more information to be more consistent with\nother messages (and more helpful).\n\n0004: Add pg_log_error() calls on all calls to close in pg_basebackup. Nearly\nall had already, and while errors here are likely to be rare, when they do\nhappen something is really wrong and every bit of information can help\ndebugging.\n\n0005: Put keywords as parameters in a few pg_dump error messages, to make their\ntranslations reuseable.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/1363732.1636496441@sss.pgh.pa.us", "msg_date": "Wed, 13 Apr 2022 13:51:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Error logging messages" }, { "msg_contents": "On Wed, Apr 13, 2022 at 01:51:16PM +0200, Daniel Gustafsson wrote:\n> 0001: Makes sure that database and file names are printed quoted. 
This patch\n> has hunks in contrib and backend as well.\n> \n> 0002: Capitalizes pg_log_error_detail and conversely starts pg_log_error with a\n> lowercase letter without punctuation.\n>\n> 0003: Extend a few messages with more information to be more consistent with\n> other messages (and more helpful).\n>\n> 0005: Put keywords as parameters in a few pg_dump error messages, to make their\n> translations reuseable.\n\nThese look fine.\n\n> 0004: Add pg_log_error() calls on all calls to close in pg_basebackup. Nearly\n> all had already, and while errors here are likely to be rare, when they do\n> happen something is really wrong and every bit of information can help\n> debugging.\n\n+ if (stream->walmethod->close(f, CLOSE_UNLINK) != 0)\n+ pg_log_error(\"could not delete write-ahead log file \\\"%s\\\": %s\",\n+ fn, stream->walmethod->getlasterror());\nWith only the file names provided, it is possible to know that this is\na WAL file. Could we switch to a simpler \"could not delete file\n\\\"%s\\\"\" instead? Same comment for the history file and the fsync\nfailure a couple of lines above.\n--\nMichael", "msg_date": "Thu, 14 Apr 2022 16:10:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error logging messages" }, { "msg_contents": "> On 14 Apr 2022, at 09:10, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Apr 13, 2022 at 01:51:16PM +0200, Daniel Gustafsson wrote:\n>> 0001: Makes sure that database and file names are printed quoted. 
This patch\n>> has hunks in contrib and backend as well.\n>> \n>> 0002: Capitalizes pg_log_error_detail and conversely starts pg_log_error with a\n>> lowercase letter without punctuation.\n>> \n>> 0003: Extend a few messages with more information to be more consistent with\n>> other messages (and more helpful).\n>> \n>> 0005: Put keywords as parameters in a few pg_dump error messages, to make their\n>> translations reuseable.\n> \n> These look fine.\n\nThanks\n\n>> 0004: Add pg_log_error() calls on all calls to close in pg_basebackup. Nearly\n>> all had already, and while errors here are likely to be rare, when they do\n>> happen something is really wrong and every bit of information can help\n>> debugging.\n> \n> + if (stream->walmethod->close(f, CLOSE_UNLINK) != 0)\n> + pg_log_error(\"could not delete write-ahead log file \\\"%s\\\": %s\",\n> + fn, stream->walmethod->getlasterror());\n> With only the file names provided, it is possible to know that this is\n> a WAL file. Could we switch to a simpler \"could not delete file\n> \\\"%s\\\"\" instead? Same comment for the history file and the fsync\n> failure a couple of lines above.\n\nI don't have strong opinions, simplifying makes it easier on translators (due\nto reuse) and keeping the verbose message may make it easier for users\nexperiencing problems.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 14 Apr 2022 10:21:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Error logging messages" }, { "msg_contents": "On 13.04.22 13:51, Daniel Gustafsson wrote:\n> 0002: Capitalizes pg_log_error_detail and conversely starts pg_log_error with a\n> lowercase letter without punctuation.\n\nI'm having some doubts about some of these changes, especially for \ninteractive features in psql, where the messages are often use a bit of \na different style. 
I don't think this kind of thing is an improvement, \nfor example:\n\n- pg_log_error(\"You are currently not connected to a database.\");\n+ pg_log_error(\"you are currently not connected to a database\");\n\nIf we want to be strict about it, we could change the message to\n\n\"not currently connected to a database\"\n\nBut I do not think just chopping of the punctuation and lower-casing the \nfirst letter of what is after all still a sentence would be an improvement.\n\n\n", "msg_date": "Thu, 14 Apr 2022 16:32:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Error logging messages" }, { "msg_contents": "> On 14 Apr 2022, at 16:32, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 13.04.22 13:51, Daniel Gustafsson wrote:\n>> 0002: Capitalizes pg_log_error_detail and conversely starts pg_log_error with a\n>> lowercase letter without punctuation.\n> \n> I'm having some doubts about some of these changes, especially for interactive features in psql, where the messages are often use a bit of a different style. I don't think this kind of thing is an improvement, for example:\n> \n> - pg_log_error(\"You are currently not connected to a database.\");\n> + pg_log_error(\"you are currently not connected to a database\");\n\nThat may also be a bit of Stockholm Syndrome since I prefer the latter error\nmessage, but I see your point.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 14 Apr 2022 16:46:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Error logging messages" } ]
[ { "msg_contents": "Minor doc patch to replace with latest RFC number\n\nIntended for PG15\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Wed, 13 Apr 2022 14:38:11 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "JSON docs: RFC7159 is now superceded" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> Minor doc patch to replace with latest RFC number\n\nHmm, I'm a bit disinclined to claim compliance with a new RFC\nsight unseen. What were the changes?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Apr 2022 09:53:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JSON docs: RFC7159 is now superceded" }, { "msg_contents": "On Wed, 13 Apr 2022 at 14:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > Minor doc patch to replace with latest RFC number\n>\n> Hmm, I'm a bit disinclined to claim compliance with a new RFC\n> sight unseen. What were the changes?\n\nI checked... so I should have mentioned this before\n\nhttps://datatracker.ietf.org/doc/html/rfc8259#appendix-A\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 13 Apr 2022 15:02:41 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: JSON docs: RFC7159 is now superceded" }, { "msg_contents": "\nOn 2022-04-13 We 09:38, Simon Riggs wrote:\n> Minor doc patch to replace with latest RFC number\n>\n> Intended for PG15\n\n\n\nIdea is fine, but\n\n\n-  data, as specified in <ulink\nurl=\"https://tools.ietf.org/html/rfc7159\">RFC\n-  7159</ulink>. 
Such data can also be stored as <type>text</type>, but\n+  data, as specified in <ulink\nurl=\"https://tools.ietf.org/html/rfc8259\">RFC\n+  8259</ulink>, which supercedes the earlier <acronym>RFC</acronym> 7159.\n+  Such data can also be stored as <type>text</type>, but\n\n\nDo we need to mention the obsoleting of RFC7159? Anyone who cares enough\ncan see that by looking at the RFC - it mentions what it obsoletes.\n\nI haven't checked that anything that changed in RFC8259 affects us. I\ndoubt it would but I guess we should double check.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Apr 2022 10:02:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSON docs: RFC7159 is now superceded" } ]
[ { "msg_contents": "Hi all,\n\nI tried to build Postgres from source using Visual Studio 19. It worked all\ngood.\nThen I wanted to build it with some dependencies, started with the ones\nlisted here [1]. But I'm having some issues with lz4.\n\nFirst I downloaded the latest release of lz4 from this link [2].\nModified the src\\tools\\msvc\\config.pl file as follows:\n\n> use strict;\n> use warnings;\n>\n\n\nour $config;\n> $config->{\"tap_tests\"} = 1;\n> $config->{\"asserts\"} = 1;\n>\n\n\n$config->{\"lz4\"} = \"c:/lz4/\";\n> $config->{\"openssl\"} = \"c:/openssl/1.1/\";\n> $config->{\"perl\"} = \"c:/strawberry/$ENV{DEFAULT_PERL_VERSION}/perl/\";\n> $config->{\"python\"} = \"c:/python/\";\n>\n\n\n1;\n\nbased on /src/tools/ci/windows_build_config.pl\n\nThen ran the following commands:\n\n> vcvarsall x64\n> perl src/tools/msvc/mkvcbuild.pl\n> msbuild -m -verbosity:minimal /p:IncludePath=\"C:\\lz4\" pgsql.sln\n\n\nmsbuild command fails with the following error:\n\"LINK : fatal error LNK1181: cannot open input file\n'c:\\lz4\\\\lib\\liblz4.lib' [C:\\postgres\\libpgtypes.vcxproj]\"\n\nWhat I realized is that c:\\lz4\\lib\\liblz4.lib actually does not exist.\nThe latest versions of lz4, downloaded from [2], do not contain \\liblz4.lib\nanymore, as opposed to what's written here [3]. Also there isn't a lib\nfolder too.\n\nAfter those changes on lz4 side, AFAIU there seems like this line adds\nlibrary from wrong path in Solution.pm file [4].\n\n> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n\n\nEven if I spent some time on this problem and tried to fix some places, I'm\nnot able to run a successful build yet.\nThis is also the case for zstd too. Enabling zstd gives the same error.\n\nHas anyone had this issue before? 
Is this something that anyone is aware of\nand somehow made it work?\nI would appreciate any comment/suggestion etc.\n\nThanks,\nMelih\n\n\n[1]\nhttps://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.5.8.8\n[2] https://github.com/lz4/lz4/releases/tag/v1.9.3\n[3] https://github.com/lz4/lz4/tree/dev/lib/dll/example\n[4]\nhttps://github.com/postgres/postgres/blob/c1932e542863f0f646f005b3492452acc57c7e66/src/tools/msvc/Solution.pm#L1092", "msg_date": "Wed, 13 Apr 2022 17:21:41 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "Hi,\n\nMichael, Dilip, I think you worked most in this area? Based on\n9ca40dcd4d0cad43d95a9a253fafaa9a9ba7de24\n\nRobert, added you too, because zstd seems to have the same issue (based on the\ntail of the quoted email below).\n\nOn 2022-04-13 17:21:41 +0300, Melih Mutlu wrote:\n> I tried to build Postgres from source using Visual Studio 19. It worked all\n> good.\n> Then I wanted to build it with some dependencies, started with the ones\n> listed here [1]. 
But I'm having some issues with lz4.\n> \n> First I downloaded the latest release of lz4 from this link [2].\n> Modified the src\\tools\\msvc\\config.pl file as follows:\n> \n> > use strict;\n> > use warnings;\n> >\n> \n> \n> our $config;\n> > $config->{\"tap_tests\"} = 1;\n> > $config->{\"asserts\"} = 1;\n> >\n> \n> \n> $config->{\"lz4\"} = \"c:/lz4/\";\n> > $config->{\"openssl\"} = \"c:/openssl/1.1/\";\n> > $config->{\"perl\"} = \"c:/strawberry/$ENV{DEFAULT_PERL_VERSION}/perl/\";\n> > $config->{\"python\"} = \"c:/python/\";\n> >\n> \n> \n> 1;\n> \n> based on /src/tools/ci/windows_build_config.pl\n> \n> Then ran the following commands:\n> \n> > vcvarsall x64\n> > perl src/tools/msvc/mkvcbuild.pl\n> > msbuild -m -verbosity:minimal /p:IncludePath=\"C:\\lz4\" pgsql.sln\n\nI don't think the /p:IncludePath should be needed, the build scripts should\nadd that.\n\n\n> msbuild command fails with the following error:\n> \"LINK : fatal error LNK1181: cannot open input file\n> 'c:\\lz4\\\\lib\\liblz4.lib' [C:\\postgres\\libpgtypes.vcxproj]\"\n> \n> What I realized is that c:\\lz4\\lib\\liblz4.lib actually does not exist.\n> The latest versions of lz4, downloaded from [2], do not contain \\liblz4.lib\n> anymore, as opposed to what's written here [3]. Also there isn't a lib\n> folder too.\n> \n> After those changes on lz4 side, AFAIU there seems like this line adds\n> library from wrong path in Solution.pm file [4].\n> \n> > $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n> > $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n> \n> \n> Even if I spent some time on this problem and tried to fix some places, I'm\n> not able to run a successful build yet.\n> This is also the case for zstd too. Enabling zstd gives the same error.\n> \n> Has anyone had this issue before? 
Is this something that anyone is aware of\n> and somehow made it work?\n> I would appreciate any comment/suggestion etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 22 Apr 2022 12:54:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "On Wed, Apr 13, 2022 at 05:21:41PM +0300, Melih Mutlu wrote:\n> I tried to build Postgres from source using Visual Studio 19. It worked all\n> good.\n> Then I wanted to build it with some dependencies, started with the ones\n> listed here [1]. But I'm having some issues with lz4.\n> \n> First I downloaded the latest release of lz4 from this link [2].\n> Modified the src\\tools\\msvc\\config.pl file as follows:\n\nYeah, that's actually quite an issue because there is no official\nrelease for liblz4.lib, so one has to compile the code by himself to\nbe able to get his/her hands on liblz4.lib. zstd is similarly\nconsistent with its release contents.\n--\nMichael", "msg_date": "Tue, 26 Apr 2022 11:19:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "On Wed, Apr 13, 2022 at 10:22 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> What I realized is that c:\\lz4\\lib\\liblz4.lib actually does not exist.\n> The latest versions of lz4, downloaded from [2], do not contain \\liblz4.lib anymore, as opposed to what's written here [3]. Also there isn't a lib folder too.\n>\n> After those changes on lz4 side, AFAIU there seems like this line adds library from wrong path in Solution.pm file [4].\n>>\n>> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>\n> Even if I spent some time on this problem and tried to fix some places, I'm not able to run a successful build yet.\n> This is also the case for zstd too. 
Enabling zstd gives the same error.\n>\n> Has anyone had this issue before? Is this something that anyone is aware of and somehow made it work?\n> I would appreciate any comment/suggestion etc.\n\nIn Solution.pm we have this:\n\n if ($self->{options}->{lz4})\n {\n $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n }\n if ($self->{options}->{zstd})\n {\n $proj->AddIncludeDir($self->{options}->{zstd} . '\\include');\n $proj->AddLibrary($self->{options}->{zstd} . '\\lib\\libzstd.lib');\n }\n\nI think what you're saying is that the relative pathnames here may not\nbe correct, depending on which version of lz4/zstd you're using. The\nsolution is probably to use perl's -e to test which files actually\nexists e.g.\n\n if ($self->{options}->{lz4})\n {\n $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n if (-e $proj->AddLibrary($self->{options}->{lz4} .\n'\\someplace\\somelz4.lib')\n {\n $proj->AddLibrary($self->{options}->{lz4} .\n'\\someplace\\somelz4.lib');\n }\n else\n {\n $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n }\n $proj->AddLibrary($self->{options}->{lz4} . 
'\\lib\\liblz4.lib');\n }\n\nThe trick, at least as it seems to me, is figuring out exactly what\nthe right set of conditions is, based on what kinds of different\nbuilds exist out there and where they put stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Apr 2022 16:26:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "\nOn 2022-04-26 Tu 16:26, Robert Haas wrote:\n> On Wed, Apr 13, 2022 at 10:22 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> What I realized is that c:\\lz4\\lib\\liblz4.lib actually does not exist.\n>> The latest versions of lz4, downloaded from [2], do not contain \\liblz4.lib anymore, as opposed to what's written here [3]. Also there isn't a lib folder too.\n>>\n>> After those changes on lz4 side, AFAIU there seems like this line adds library from wrong path in Solution.pm file [4].\n>>> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n>>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>> Even if I spent some time on this problem and tried to fix some places, I'm not able to run a successful build yet.\n>> This is also the case for zstd too. Enabling zstd gives the same error.\n>>\n>> Has anyone had this issue before? Is this something that anyone is aware of and somehow made it work?\n>> I would appreciate any comment/suggestion etc.\n> In Solution.pm we have this:\n>\n> if ($self->{options}->{lz4})\n> {\n> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n> }\n> if ($self->{options}->{zstd})\n> {\n> $proj->AddIncludeDir($self->{options}->{zstd} . '\\include');\n> $proj->AddLibrary($self->{options}->{zstd} . 
'\\lib\\libzstd.lib');\n> }\n>\n> I think what you're saying is that the relative pathnames here may not\n> be correct, depending on which version of lz4/zstd you're using. The\n> solution is probably to use perl's -e to test which files actually\n> exists e.g.\n>\n> if ($self->{options}->{lz4})\n> {\n> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n> if (-e $proj->AddLibrary($self->{options}->{lz4} .\n> '\\someplace\\somelz4.lib')\n> {\n> $proj->AddLibrary($self->{options}->{lz4} .\n> '\\someplace\\somelz4.lib');\n> }\n> else\n> {\n> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n> }\n> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n> }\n>\n> The trick, at least as it seems to me, is figuring out exactly what\n> the right set of conditions is, based on what kinds of different\n> builds exist out there and where they put stuff.\n\n\nI agree that we should use perl's -e to test that the files actually\nexists. But I don't think we should try to adjust to everything the zstd\nand lz4 people put in their release files. They are just horribly\ninconsistent.\n\nWhat I did was to install the packages using vcpkg[1] which is a\nstandard framework created by Microsoft for installing package\nlibraries. It does install the .lib files in a sane place\n(installdir/lib), but it doesn't use the lib suffix. Also it names the\nlib file for zlib differently.\n\nI got around those things by renaming the lib files, but that's a bit\nugly. 
So I came up with this (untested) patch.\n\n\ncheers\n\n\nandrew\n\n\n[1] https://github.com/microsoft/vcpkg\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 29 Apr 2022 08:50:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "On 2022-04-29 Fr 08:50, Andrew Dunstan wrote:\n> On 2022-04-26 Tu 16:26, Robert Haas wrote:\n>> On Wed, Apr 13, 2022 at 10:22 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>>> What I realized is that c:\\lz4\\lib\\liblz4.lib actually does not exist.\n>>> The latest versions of lz4, downloaded from [2], do not contain \\liblz4.lib anymore, as opposed to what's written here [3]. Also there isn't a lib folder too.\n>>>\n>>> After those changes on lz4 side, AFAIU there seems like this line adds library from wrong path in Solution.pm file [4].\n>>>> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n>>>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>>> Even if I spent some time on this problem and tried to fix some places, I'm not able to run a successful build yet.\n>>> This is also the case for zstd too. Enabling zstd gives the same error.\n>>>\n>>> Has anyone had this issue before? Is this something that anyone is aware of and somehow made it work?\n>>> I would appreciate any comment/suggestion etc.\n>> In Solution.pm we have this:\n>>\n>> if ($self->{options}->{lz4})\n>> {\n>> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>> }\n>> if ($self->{options}->{zstd})\n>> {\n>> $proj->AddIncludeDir($self->{options}->{zstd} . '\\include');\n>> $proj->AddLibrary($self->{options}->{zstd} . '\\lib\\libzstd.lib');\n>> }\n>>\n>> I think what you're saying is that the relative pathnames here may not\n>> be correct, depending on which version of lz4/zstd you're using. 
The\n>> solution is probably to use perl's -e to test which files actually\n>> exists e.g.\n>>\n>> if ($self->{options}->{lz4})\n>> {\n>> $proj->AddIncludeDir($self->{options}->{lz4} . '\\include');\n>> if (-e $proj->AddLibrary($self->{options}->{lz4} .\n>> '\\someplace\\somelz4.lib')\n>> {\n>> $proj->AddLibrary($self->{options}->{lz4} .\n>> '\\someplace\\somelz4.lib');\n>> }\n>> else\n>> {\n>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>> }\n>> $proj->AddLibrary($self->{options}->{lz4} . '\\lib\\liblz4.lib');\n>> }\n>>\n>> The trick, at least as it seems to me, is figuring out exactly what\n>> the right set of conditions is, based on what kinds of different\n>> builds exist out there and where they put stuff.\n>\n> I agree that we should use perl's -e to test that the files actually\n> exists. But I don't think we should try to adjust to everything the zstd\n> and lz4 people put in their release files. They are just horribly\n> inconsistent.\n>\n> What I did was to install the packages using vcpkg[1] which is a\n> standard framework created by Microsoft for installing package\n> libraries. It does install the .lib files in a sane place\n> (installdir/lib), but it doesn't use the lib suffix. Also it names the\n> lib file for zlib differently.\n>\n> I got around those things by renaming the lib files, but that's a bit\n> ugly. So I came up with this (untested) patch.\n>\n>\n\ner this patch\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 29 Apr 2022 08:59:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "\nOn 2022-04-29 Fr 08:50, Andrew Dunstan wrote:\n> What I did was to install the packages using vcpkg[1] which is a\n> standard framework created by Microsoft for installing package\n> libraries. 
It does install the .lib files in a sane place\n> (installdir/lib), but it doesn't use the lib suffix. Also it names the\n> lib file for zlib differently.\n>\n\nOf course I meant \"doesn't use the 'lib' prefix\". So the files are named\njust zstd.lib and lz4.lib.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 30 Apr 2022 08:46:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" }, { "msg_contents": "Hi,\n\nOn 2022-04-29 08:50:56 -0400, Andrew Dunstan wrote:\n> I agree that we should use perl's -e to test that the files actually\n> exists. But I don't think we should try to adjust to everything the zstd\n> and lz4 people put in their release files. They are just horribly\n> inconsistent.\n\nRight now it's the source of packages we document for windows... It doesn't\nseem that crazy to accept a few different paths with a glob or such?\n\n\n> What I did was to install the packages using vcpkg[1] which is a\n> standard framework created by Microsoft for installing package\n> libraries. It does install the .lib files in a sane place\n> (installdir/lib), but it doesn't use the lib suffix. Also it names the\n> lib file for zlib differently.\n\nThat doesn't seem much better :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Apr 2022 13:33:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Building Postgres with lz4 on Visual Studio" } ]
[ { "msg_contents": "To whom it may concern:\n\nHi, I am Jichen Xu. I am a first year master computer science student at\nWaseda University.\nHere is a link to my proposal:\nhttps://docs.google.com/document/d/1vdgPY5wvhjhrX9aSUw5mTDOnXwEQNcr4cxfzuSHMWlg\nLooking forward to working with you in next few months.\n\nBest regards,\nJichen", "msg_date": "Wed, 13 Apr 2022 23:40:27 +0900", "msg_from": "Solo Kyo <kyokitisin@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC 2022: Proposal of pgmoneta on-disk encryption" }, { "msg_contents": "Hi Jichen,\n\nOn 4/13/22 10:40, Solo Kyo wrote:\n> To whom it may concern:\n>\n> Hi, I am Jichen Xu. I am a first year master computer science student at\n> Waseda University.\n> Here is a link to my proposal:\n> https://docs.google.com/document/d/1vdgPY5wvhjhrX9aSUw5mTDOnXwEQNcr4cxfzuSHMWlg\n> Looking forward to working with you in next few months.\n\n\nThanks for your proposal to Google Summer of Code 2022 !\n\n\nWe'll follow up off-list to get this finalized.\n\n\nBest regards,\n\n  Jesper\n\n\n\n", "msg_date": "Wed, 13 Apr 2022 11:20:40 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: GSoC 2022: Proposal of pgmoneta on-disk encryption" } ]
[ { "msg_contents": "Can someone help me understand\n\nselect '0101-01-01'::timestamptz;\n timestamptz\n------------------------\n 0101-01-01 00:00:00+00\n(1 row)\n\ntest=# set timezone to 'America/Toronto';\nSET\ntest=# select '0101-01-01'::timestamptz;\n timestamptz\n------------------------------\n 0101-01-01 00:00:00-05:17:32\n(1 row)\n\nselect 'now()'::timestamptz;\n timestamptz\n-------------------------------\n 2022-04-13 12:31:57.271967-04\n(1 row)\n\nSpecifically why the -05:17:32\n\n\nDave Cramer", "msg_date": "Wed, 13 Apr 2022 12:33:01 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "timezones BCE" }, { "msg_contents": "On 2022-04-13 12:33, Dave Cramer wrote:\n> test=# set timezone to 'America/Toronto';\n> SET\n> test=# select '0101-01-01'::timestamptz;\n> timestamptz\n> ------------------------------\n> 0101-01-01 00:00:00-05:17:32\n> \n> Specifically why the -05:17:32\n\nTimezones were regularized into their (typically hour-wide) chunks\nduring a period around the late nineteenth century IIRC.\n\nIf you decompile the zoneinfo database to look at America/Toronto,\nyou will probably find an entry for dates earlier than when the\nregularized zones were established there, and that entry will have\nan offset reflecting Toronto's actual longitude.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 13 Apr 2022 12:48:10 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: timezones BCE" }, { "msg_contents": "chap@anastigmatix.net 
writes:\n> On 2022-04-13 12:33, Dave Cramer wrote:\n>> Specifically why the -05:17:32\n\n> Timezones were regularized into their (typically hour-wide) chunks\n> during a period around the late nineteenth century IIRC.\n\n> If you decompile the zoneinfo database to look at America/Toronto,\n> you will probably find an entry for dates earlier than when the\n> regularized zones were established there, and that entry will have\n> an offset reflecting Toronto's actual longitude.\n\nYeah, you'll see these weird offsets in just about every zone for dates\nearlier than the late 1800s. I've got my doubts about how useful it is\nto do that, but that's the policy the tzdb guys have.\n\nAt one point I was considering whether we could project the oldest\nrecorded \"standard time\" offset backwards instead of believing the LMT\noffsets. This would confuse many fewer people, and it's no less\nlogically defensible than applying the Gregorian calendar to years\ncenturies before Pope Gregory was born. But I fear that horse may\nhave left the barn already --- changing this behavior would have\nits own downsides, and I do not think any other tzdb consumers do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Apr 2022 14:10:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: timezones BCE" }, { "msg_contents": "On Wed, 13 Apr 2022 at 14:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> chap@anastigmatix.net writes:\n> > On 2022-04-13 12:33, Dave Cramer wrote:\n> >> Specifically why the -05:17:32\n>\n> > Timezones were regularized into their (typically hour-wide) chunks\n> > during a period around the late nineteenth century IIRC.\n>\n> > If you decompile the zoneinfo database to look at America/Toronto,\n> > you will probably find an entry for dates earlier than when the\n> > regularized zones were established there, and that entry will have\n> > an offset reflecting Toronto's actual longitude.\n>\n> Yeah, you'll see these weird offsets 
in just about every zone for dates\n> earlier than the late 1800s. I've got my doubts about how useful it is\n> to do that, but that's the policy the tzdb guys have.\n>\n> At one point I was considering whether we could project the oldest\n> recorded \"standard time\" offset backwards instead of believing the LMT\n> offsets. This would confuse many fewer people, and it's no less\n> logically defensible than applying the Gregorian calendar to years\n> centuries before Pope Gregory was born. But I fear that horse may\n> have left the barn already --- changing this behavior would have\n> its own downsides, and I do not think any other tzdb consumers do it.\n>\n\nOh please don't do something bespoke. I'm trying to make this work with the\nJDBC driver.\nSo it has to be at least compatible with other libraries.\n\nDave", "msg_date": "Wed, 13 Apr 2022 14:13:41 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: timezones BCE" }, {
"msg_contents": "On 2022-04-13 14:13, Dave Cramer wrote:\n> \n> Oh please don't do something bespoke. I'm trying to make this work with \n> the\n> JDBC driver.\n> So it has to be at least compatible with other libraries.\n\nLooks like Java agrees with the offset, prior to Toronto's 1895 adoption\nof the hour-wide zone:\n\njshell> java.time.ZoneId.of(\"America/Toronto\").\n ...> getRules().\n ...> nextTransition(java.time.Instant.parse(\"0101-01-01T00:00:00Z\"))\n$1 ==> Transition[Gap at 1895-01-01T00:00-05:17:32 to -05:00]\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 13 Apr 2022 17:15:23 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: timezones BCE" } ]
[ { "msg_contents": "To whom it may concern,\n\nMy name is Yedil Serzhan, a student pursuing a master's degree in Computer\nScience at the University of Freiburg, Germany. I'm interested in the\nproject \"Develop Performance Farm Benchmarks and Website\". Following my\ndiscussion with Ilaria, I have listened to and incorporated her comments\nand revised my proposal letter. Please let me know if you have any other\nfeedback. I will surely value them!\n\nThe proposal is attached to this email. I would be thrilled if I could make\na contribution to the community. Thank you in advance!\n\nSincerely\nYedil", "msg_date": "Wed, 13 Apr 2022 23:42:38 +0600", "msg_from": "Yedil Serzhan <edilserjan@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC: <Develop Performance Farm Benchmarks and Website>" } ]